Speech analysis is one of the most prominent areas of speech processing due to its widespread applications in speech technologies. Much of my Ph.D. work centered on the development of advanced sinusoidal models that analyze speech signals as sums of time-varying amplitude- and phase-modulated sinusoids. Careful parameter tuning enables high-quality speech modifications, while high-accuracy decomposition of the speech signal can improve existing algorithms in applications such as emotion recognition from speech and the modeling of musical instrument sounds.
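As a minimal illustration of the sinusoidal-model idea (not my published models, just the core representation), the sketch below synthesizes a signal as a sum of sinusoids whose amplitudes and frequencies vary per sample; each component's phase is the running integral of its instantaneous frequency. The harmonic pitch-glide example and all parameter values are invented for the demo.

```python
import numpy as np

def synthesize_sinusoids(amps, freqs, fs):
    """Sum of K time-varying amplitude- and phase-modulated sinusoids.

    amps, freqs: arrays of shape (K, N) holding, for each sinusoid,
    the instantaneous amplitude and frequency (Hz) at each of N samples.
    The instantaneous phase is the integral of the instantaneous
    frequency, approximated here by a cumulative sum.
    """
    amps = np.asarray(amps, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    phases = 2.0 * np.pi * np.cumsum(freqs, axis=1) / fs
    return np.sum(amps * np.cos(phases), axis=0)

# Toy example: two harmonics of a pitch glide from 120 Hz to 150 Hz
fs, dur = 16000, 0.5
n = int(fs * dur)
f0 = np.linspace(120.0, 150.0, n)            # time-varying fundamental
freqs = np.vstack([f0, 2.0 * f0])            # harmonics 1 and 2
amps = np.vstack([np.linspace(1.0, 0.5, n),  # slowly decaying amplitudes
                  np.linspace(0.5, 0.25, n)])
x = synthesize_sinusoids(amps, freqs, fs)
```

Analysis then amounts to estimating `amps` and `freqs` from a real recording, which is where the accuracy of the model matters.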
I am also interested in processing pathological speech data from the viewpoint of source-filter separation, since non-invasive glottal analysis can reveal vocal-fold pathologies inexpensively. Finally, deep learning methods have come to dominate many speech processing applications. A particularly challenging application is using deep learning for emotion recognition solely from speech data, and/or extracting features (DNN-based or knowledge-based) to construct a low-dimensional (3D) "acoustic emotional space" that can be useful in engineering, the cognitive sciences, psychology, and other fields.
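A standard baseline for source-filter separation (not my specific method, just the textbook approach) is linear-prediction inverse filtering: an all-pole filter models the vocal tract, and filtering the speech with its inverse leaves a residual that approximates the glottal source. The sketch below implements LPC via the Levinson-Durbin recursion and demonstrates it on a synthetic all-pole signal; the AR(2) "vocal tract" and its coefficients are made up for the demo.

```python
import numpy as np
from scipy.signal import lfilter

def lpc(x, order):
    """LPC coefficients via the autocorrelation method (Levinson-Durbin).
    Returns a = [1, a1, ..., ap]; filtering x with A(z) yields the
    prediction residual (the estimated source signal)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err  # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1]              # Levinson update
        err *= 1.0 - k * k                          # prediction error power
    return a

# Toy "vocal tract": a known all-pole filter excited by white noise.
rng = np.random.default_rng(0)
a_true = np.array([1.0, -1.2, 0.8])
x = lfilter([1.0], a_true, rng.standard_normal(8192))
a_est = lpc(x, order=2)                 # estimated vocal-tract model
residual = lfilter(a_est, [1.0], x)     # inverse-filtered "source"
```

For real speech, the residual of a well-fitted LPC model over a voiced frame carries the glottal excitation (up to lip-radiation effects), which is what makes non-invasive glottal analysis possible.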
You can find my resume here.