Issue No. 4 - July/August 2002 (Vol. 22)
Published by the IEEE Computer Society
Perry R. Cook , Princeton University
This marks the first issue of IEEE Computer Graphics and Applications devoted entirely to sound. It might seem that sound and graphics are totally distinct fields with their own science, techniques, language, conferences, and publication venues, but the relationship between sound and graphics is actually natural. In fact, the relations and dependencies are as natural as the complementary senses of vision and audition.
These two senses coevolved in humans and animals to let them take advantage of different aspects of the stimuli coming from the outside world. For example, we can hear all around us, but we must turn around to see behind us, giving rise to the saying "the ears guide the eyes." Conversely, attending to speech in a noisy environment is often aided by naturally evolved lip-reading skills. So it's natural to devote some attention to the auditory channel in a journal on applications in graphics, and this issue does just that.
The Field of Sound
There are many venues where professionals discuss and publish sound research. Among them are the Journal of the Acoustical Society of America, IEEE Transactions on Speech and Audio Processing, IEEE MultiMedia, ACM Multimedia, and Computer Music Journal. The number of audio conferences and workshops is also increasing. Newer conferences such as the International Conference on Auditory Display have joined a long-established list of conferences and workshops, including the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, meetings of the Audio Engineering Society, and the International Computer Music Conference. There is no single "big" conference for sound and sound-related fields comparable to ACM SIGGRAPH, but the growing interest in sound among the graphics community has led SIGGRAPH and other graphics venues (such as CG&A) to offer increasing coverage of sound, especially as it relates to graphics and user interface applications.
In This Issue
The four articles in this special issue cover different aspects of sound, including analysis and synthesis, simulation of sonic spaces and validation of those simulation models, and entire systems for simulating virtual sonic environments.
My tutorial on basic principles of sound briefly covers sound as a physical phenomenon, digital sound in computer systems, perception of sound, sound synthesis methods, and the simulation of sounds in spaces.
The Tsingos et al. article describes the calibration and validation of simulated room acoustics models, using a carefully constructed acoustical test system. This audio system, called the Bell Labs Box, is analogous to the Cornell Box used in the early 1980s to investigate lighting and reflectance models for computer graphics.
The article by Dubnov et al. applies statistical analysis to the wavelet decomposition of sound and then uses those statistics and selected wavelet projections to regenerate entirely new sounds similar in character and texture to the analyzed sounds.
Finally, the article by Lokki et al. looks at the construction of entire 3D virtual sonic and graphical worlds for entertainment, architectural modeling, and other applications. In these systems, virtual orchestras can be conducted and auditioned from any location in virtual performance spaces.
I thank the authors and reviewers for making this issue possible. I also thank the CG&A staff for all their help in production, and I especially thank Maureen Stone for her part in planting the seeds that eventually grew into this special issue.
Perry Cook is an associate professor in the computer science department, with a joint appointment in the music department, at Princeton University. His main research areas are physics-based models for sound synthesis, human perception of sound, audio at the human-computer interface, and devices and systems for interactive sound control and artistic performance. He has a BA in music (studying voice and electronic music) from the University of Missouri-Kansas City Conservatory of Music and a BSEE from the University of Missouri. He also has a PhD in electrical engineering from Stanford University, where he was technical director of the Center for Computer Research in Music and Acoustics.