Submission deadline: 15 February 2014
Publication issue: January–March 2015
This special issue will address computational models, techniques, methods, and systems for interactive sonification and their evaluation.
Sonification and auditory display research takes place in a community that builds on a range of disciplines, including physics, acoustics, psychoacoustics, signal processing, statistics, computer science, and musicology. Application examples range from auditory displays in assistive technology for visually impaired people to data exploration and industrial process monitoring. Auditory displays are systems that transform data into sound and present this information to the user through an interface that allows interaction with the sound synthesis process. This transformation of data into sound is called sonification, which can be defined as the data-dependent generation of sound in a way that reflects objective properties of the input data.
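As a minimal illustration of such a data-to-sound transformation, the Python sketch below maps each value of a data series onto the pitch of a short tone and writes the result to a WAV file. It is a hypothetical parameter-mapping example: the function name, frequency range, and envelope are illustrative choices, not taken from any particular system.

    # Minimal parameter-mapping sonification: each data value sets the pitch
    # of a short sine tone; the tone sequence is written to a WAV file.
    import math
    import struct
    import wave

    def sonify(data, filename="sonification.wav", rate=44100, dur=0.15):
        lo, hi = min(data), max(data)
        span = (hi - lo) or 1.0          # avoid division by zero for constant data
        samples = []
        for value in data:
            # Linearly map the data range onto 220-880 Hz (two octaves).
            freq = 220.0 + 660.0 * (value - lo) / span
            n = int(rate * dur)
            for i in range(n):
                env = 0.5 * (1.0 - i / n)  # decaying amplitude envelope, avoids clicks
                samples.append(env * math.sin(2.0 * math.pi * freq * i / rate))
        with wave.open(filename, "w") as f:
            f.setnchannels(1)
            f.setsampwidth(2)            # 16-bit samples
            f.setframerate(rate)
            f.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                                   for s in samples))

    sonify([3, 1, 4, 1, 5, 9, 2, 6])     # higher values sound as higher pitches

Even in this toy form, the sketch exposes the core design decision of any sonification: which acoustic parameter (here, pitch) carries which property of the data.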
The aim of auditory displays and sonification is to exploit, among other capabilities, the ability of our powerful auditory sense to interpret sounds on multiple levels of understanding, to perceive multiple auditory objects within an auditory scene, to direct attention to particular objects, and to learn and improve the discrimination of auditory stimuli.
Auditory displays typically evolve over time, since sound is inherently a temporal phenomenon. Interaction thus becomes an integral part of the process: users select, manipulate, excite, or control the display, and this has implications for the interface between humans and computers. In recent years it has become clear that research needs to address interaction with auditory displays more explicitly.
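To make the role of interaction concrete, the toy control loop below shows how user actions can steer the data-to-sound mapping while data continues to stream through the display. This is a hypothetical sketch, not any particular system; all names are illustrative, the "user action" is scripted, and the mapped frequencies are printed rather than synthesized.

    # Toy interactive-sonification control loop: user actions update the
    # mapping while data frames keep arriving.
    class PitchMapper:
        """Maps values in [0, 1] onto a user-adjustable frequency range."""
        def __init__(self, low=220.0, high=880.0):
            self.low, self.high = low, high

        def transpose(self, semitones):
            # User action: shift the whole display up or down in pitch.
            factor = 2.0 ** (semitones / 12.0)
            self.low *= factor
            self.high *= factor

        def to_freq(self, x):
            return self.low + (self.high - self.low) * x

    mapper = PitchMapper()
    stream = [0.1, 0.5, 0.9, 0.5, 0.1]   # incoming (normalized) data frames
    actions = {2: +12}                   # at frame 2 the user transposes up an octave

    for frame, x in enumerate(stream):
        if frame in actions:
            mapper.transpose(actions[frame])
        print(f"frame {frame}: value {x:.1f} -> {mapper.to_freq(x):6.1f} Hz")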
Call for Papers
For this special issue, we invite submissions of research papers that deal with (but are not limited to) the following areas:
- Interfaces between humans and auditory displays
- New platforms for interactive sonification
- Reproducible research in interactive sonification
- Mapping strategies and models for creating coherency between action and reaction (e.g. acoustic feedback, but also combined with haptic or visual feedback)
- Perceptual aspects of the display (how to relate actions and sound, e.g. cross-modal effects, importance of synchronisation)
- Applications of interactive sonification
- Evaluation of performance, usability, and multi-modal interactive systems including auditory feedback
Important Dates
- Manuscript submission: 15 February 2014
- Acceptance/revision notification: 15 June 2014
- Revised manuscript due: 20 July 2014
- Final acceptance notification: 5 October 2014
- Final manuscript due: 20 October 2014
- Tentative publication: January 2015
Guest Editors
- Norberto Degara, Fraunhofer IIS, Audio Department, Erlangen, Germany; email@example.com
- Andy Hunt, University of York, Electronics Department, York, UK; firstname.lastname@example.org
- Thomas Hermann, Bielefeld University, Ambient Intelligence Group, Bielefeld, Germany; email@example.com
Authors are encouraged to send a brief email to Norberto Degara (firstname.lastname@example.org) as soon as possible, indicating their intention to participate and including their contact information and the topic they intend to address in their submission.

Submit your paper at https://mc.manuscriptcentral.com/cs-ieee. When uploading your paper, please select the appropriate special issue title under the category "Manuscript Type." (Contact email@example.com with any questions regarding the submission system.) All submissions will undergo blind peer review by at least two expert reviewers to ensure a high standard of quality, and must contain original, previously unpublished research or engineering work. Papers must stay within the following limits: 6,500 words maximum, 12 total combined figures and tables (each figure and table counts as 200 words toward the total word count), and 18 references.
For more information about the special issue focus, contact the guest editors.