Some of my colleagues are editing a special journal issue on Interactive Sonification – please submit your papers by the 17th of December 2010.
This is the call:
This special issue will address computational models, techniques, methods, and systems for Interactive Sonification and their evaluation.
Sonification & Auditory Displays are increasingly becoming an established technology for exploring data, monitoring complex processes, or assisting exploration and navigation of data spaces. Sonification addresses the auditory sense by transforming data into sound, allowing the human user to get valuable information from data by using their natural listening skills. The main advantages of sound displays over visual displays are that sound can:
• Represent frequency responses in an instant (as timbral characteristics)
• Represent changes over time, naturally
• Allow microstructure to be perceived
• Rapidly portray large amounts of data
• Alert the listener to events outside the current visual focus
• Holistically bring together many channels of information
Auditory displays typically evolve over time since sound is an inherently temporal phenomenon. Interaction thus becomes an integral part of the process in order to select, manipulate, excite or control the display, and this has implications for the interface between humans and computers. In recent years it has become clear that there is an important need for research to address the interaction with auditory displays more explicitly. Interactive Sonification is the specialized research topic concerned with the use of sound to portray data, but where there is a human being at the heart of an interactive control loop. Specifically, it deals with the following areas (but is not limited to them), in which we invite submissions of research papers:
• interfaces between humans and auditory displays
• mapping strategies and models for creating coherency between action and reaction (e.g. acoustic feedback, but also combined with haptic or visual feedback)
• perceptual aspects of the display (how to relate actions and sound, e.g. cross-modal effects, importance of …)
• applications of Interactive Sonifcation
• evaluation of performance, usability and multi-modal interactive systems including auditory feedback
IMPORTANT DATES:
Deadline for paper submission: 17th December 2010
Notification of acceptance: 4th February 2011
Camera-ready version of accepted papers: 8th April 2011
Publication date: June/July 2011
GUEST EDITORS:
Roberto Bresin, KTH School of Computer Science and Communication, Stockholm, Sweden; email@example.com
Thomas Hermann, Bielefeld University, Ambient Intelligence Group, Bielefeld, Germany; firstname.lastname@example.org
Andy Hunt, University of York, Electronics Dept., York, UK; email@example.com
INSTRUCTIONS FOR AUTHORS: Submissions should be 6 to 12 pages long and must be written in English. Formatting instructions and templates are available at: http://www.jmui.org
Authors should register and upload their submission at the following website: http://www.editorialmanager.com/jmui/ During the submission process, please select “SI (Special Issue) – Interactive Sonification” as the article type.
Authors are encouraged to send a brief email to Roberto Bresin (firstname.lastname@example.org) as soon as possible indicating their intention to participate, including their contact information and the topic they intend to address in their submission.
The Journal on Multimodal User Interfaces is a publication of OpenInterface (www.openinterface.org)
Editor-in-Chief: Jean-Claude Martin
Founding Editor: Benoit Macq
More information at http://www.springer.com/12193 or http://www.jmui.org
Online content: http://www.springerlink.com