Call for Papers

The emphasis is on (but not limited to) the topics below, arranged according to the workshop themes.

Detecting stress, emotion or mental states of people from speech

  • Multi-modal approaches: using other modes such as video and sensor data in addition to speech
  • Relevance of language models for mental state detection
  • Cross-corpus detection on non-acted speech databases in multiple languages and realistic environments

Effects of audio on stress, emotion and mental states of people

  • Audio-visual perception of music
  • Analysis of brain signal responses to audio and visual stimuli
  • Evaluation and applications: augmented reality, art installations, music animations, computer games, etc.

Other topics that are of interest in the context of stress, emotion and mental states

  • Explainable AI approaches in music and speech
  • Sounds at inaudible frequencies
  • Novel protocols for assessing mental states or inducing stress or emotion
  • Applications related to the above topics

Submission Process

Submitted papers will be reviewed by the Scientific Committee; each paper will receive at least two reviews.
Submissions must be original and must not be under simultaneous review at another journal or conference. Papers should follow the INTERSPEECH 2024 format: at most 4 pages of content, with an optional additional page containing references only.