Saverio commented:

A very interesting paper. Can you please answer some doubts that I have? Fig 1 describes x-vector network and it says i…

francisco.s.teixeira replied 1 week ago:

Thank you for taking the time to read our work! Here are our answers to your questions:

"Fig 1 describes x-vector network and it says it is adapted from reference 9 paper. How this architecture is different from the one? If they are same, ''adapted'' part can be omitted."

- You are correct; we plan to leave just the reference when we submit the final version of the paper.

"We see that different features are performing differently for different application. Some cases KB features are better, in some cases i-vectors are performing better. IS there any generalized features which can be used in all cases?"

- As we state in the results section, x-vectors perform as well as, or better than, KB features and i-vectors when we test them on same-language datasets. However, when we test the x-vector approach on a dataset whose language differs from the one used to train the x-vector network, KB-based features achieve better results. This is most likely due to the language mismatch, and, as we state in the conclusions, a line of future work would be to train the x-vector network with corpora containing several languages. In any case, for both this task and OSA detection, x-vectors perform better than i-vectors, which shows that they are more suitable than i-vectors under mismatched conditions. Overall, we do not want to claim that x-vectors are a final, general solution that can always be used, but rather that they are a competitive alternative to KB and i-vector features, and a promising option for disease detection tasks in mismatched conditions, which is an important advantage when considering real-world data.

"Also, different classifiers are used for different purposes. It feels like trial and error. You check all the possible options and choose the best depending on the results. It fails to generalize. It would be better to have consistency in at least classifier if your study focuses on features."

- Here we would have to disagree. The underlying classifier is always an SVM; changing kernels is a matter of finding the space in which the data are best linearly separable. Additionally, as stated in the Experimental Setup, we find the SVM's parameters through grid search. As such, we try the same parameters and kernels for each feature set, giving all feature sets equivalent conditions.

Please let us know if you have more comments or questions!
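A minimal sketch of the procedure described in the last answer, using scikit-learn: one shared SVM parameter grid (kernels included) is searched for every feature set, so all sets are evaluated under equivalent conditions. The feature sets and grid values below are illustrative stand-ins, not the paper's exact configuration.

```python
# Sketch: the same SVM grid search applied to each feature set.
# Synthetic data stands in for the KB, i-vector and x-vector features.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# One shared grid: every feature set sees the same kernels and parameters.
param_grid = {
    "kernel": ["linear", "rbf", "poly"],
    "C": [0.1, 1, 10],
    "gamma": ["scale", "auto"],
}

# Illustrative stand-ins for the three feature sets.
feature_sets = {
    "kb": make_classification(n_samples=200, n_features=20, random_state=0),
    "ivector": make_classification(n_samples=200, n_features=40, random_state=1),
    "xvector": make_classification(n_samples=200, n_features=64, random_state=2),
}

best = {}
for name, (X, y) in feature_sets.items():
    # Cross-validated search over the identical grid for this feature set.
    search = GridSearchCV(SVC(), param_grid, cv=5)
    search.fit(X, y)
    best[name] = (search.best_params_, search.best_score_)
```

Because the grid is defined once and reused, any difference in the selected kernel or score reflects the features, not the tuning budget.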
