Creating Interactive Virtual Auditory Environments
July/August 2002 (vol. 22, no. 4)
pp. 49-57
Tapio Lokki, Helsinki University of Technology
Lauri Savioja, Helsinki University of Technology
Riitta Väänänen, Helsinki University of Technology
Jyri Huopaniemi, Nokia Research Center's Speech and Audio Systems Laboratory
Tapio Takala, Helsinki University of Technology

Sound rendering is the auditory analogue of graphics rendering for creating virtual auditory environments. In graphics, images are created by calculating the distribution of light within a modeled environment, and illumination methods such as ray tracing and radiosity are based on the physics of light propagation and reflection. Similarly, sound rendering is based on the physical laws of sound propagation and reflection. In this article we clarify real-time sound rendering techniques by comparing them to the rendering of visual images. We also describe how sound rendering can be performed from the locations of the sound sources and the listener, the radiation characteristics of the sound sources, the geometry of the 3D model, and material absorption data, in other words, largely the same data used for graphics rendering. In this context the term auralization, making a sound field audible, corresponds to visualization. Applications of sound rendering range from film effects, computer games, and other multimedia content to enhancing the sense of presence in virtual reality.
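To make the comparison concrete, the sketch below computes a sparse room impulse response with a first-order image-source method (see Allen and Berkley, reference 7 below), the acoustic counterpart of mirroring a point light across a reflecting plane. It is a minimal illustration under simplifying assumptions, not the system described in the article: it assumes a rectangular (shoebox) room and a single frequency-independent absorption coefficient, and all names and parameter values are illustrative.

    # Minimal first-order image-source sketch (after Allen and Berkley);
    # illustrative only, not the authors' implementation.
    import numpy as np

    SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 degrees C

    def first_order_image_sources(src, room):
        """Mirror the source across each of the six walls of a shoebox room.

        src  -- (x, y, z) source position in metres, inside the room
        room -- (Lx, Ly, Lz) room dimensions in metres
        Returns the six first-order image-source positions.
        """
        images = []
        for axis in range(3):
            near = list(src)
            near[axis] = -src[axis]                   # mirror across the wall at 0
            far = list(src)
            far[axis] = 2.0 * room[axis] - src[axis]  # mirror across the far wall
            images.append(tuple(near))
            images.append(tuple(far))
        return images

    def impulse_response(src, listener, room, absorption=0.3,
                         fs=44100, length=0.05):
        """Sparse impulse response: direct sound plus first-order reflections.

        Each propagation path contributes one tap, delayed by distance / c and
        scaled by 1 / distance (spherical spreading). Reflected paths are
        further scaled by the reflection coefficient sqrt(1 - absorption).
        """
        h = np.zeros(int(fs * length))
        reflection = np.sqrt(1.0 - absorption)
        paths = [(np.asarray(src, dtype=float), 1.0)]
        paths += [(np.asarray(img, dtype=float), reflection)
                  for img in first_order_image_sources(src, room)]
        for position, gain in paths:
            distance = np.linalg.norm(position - np.asarray(listener, dtype=float))
            tap = int(round(distance / SPEED_OF_SOUND * fs))
            if tap < len(h):
                h[tap] += gain / max(distance, 1e-6)
        return h

    if __name__ == "__main__":
        h = impulse_response(src=(2.0, 3.0, 1.5), listener=(5.0, 4.0, 1.2),
                             room=(8.0, 6.0, 3.0))
        print("nonzero taps:", np.count_nonzero(h))  # expect 7: direct + 6 walls

Auralization then amounts to convolving a dry (anechoic) source signal with such an impulse response (for example, with numpy.convolve); applying the mirroring recursively to the image sources yields higher-order reflections, and real-time systems add air absorption, diffraction, and binaural or multichannel reproduction on top of this skeleton.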

References:
1. T. Takala and J. Hahn, "Sound Rendering," Computer Graphics (Proc. SIGGRAPH 92), vol. 26, no. 2, July 1992.
2. L. Savioja et al., "Creating Interactive Virtual Acoustic Environments," J. Audio Eng. Soc., vol. 47, no. 9, Sept. 1999, pp. 675-705.
3. M. Kleiner, B.-I. Dalenbäck, and P. Svensson, "Auralization: An Overview," J. Audio Eng. Soc., vol. 41, no. 11, Nov. 1993, pp. 861-875.
4. J.-M. Jot, "Real-Time Spatial Processing of Sounds for Music, Multimedia and Interactive Human-Computer Interfaces," Multimedia Systems, vol. 7, no. 1, Jan./Feb. 1999, pp. 55-69.
5. A. Appel, "Some Techniques for Shading Machine Renderings of Solids," AFIPS 1968 Spring Joint Computer Conf., Thompson Books, Washington, D.C., vol. 32, 1968, pp. 37-45.
6. A. Krokstad, S. Strom, and S. Sorsdal, "Calculating the Acoustical Room Response by the Use of a Ray Tracing Technique," J. Sound and Vibration, vol. 8, no. 1, 1968, pp. 118-125.
7. J.B. Allen and D.A. Berkley, "Image Method for Efficiently Simulating Small-Room Acoustics," J. Acoustical Soc. of America, vol. 65, no. 4, Apr. 1979, pp. 943-950.
8. V. Pulkki, "Virtual Sound Source Positioning Using Vector Base Amplitude Panning," J. Audio Eng. Soc., vol. 45, no. 6, June 1997, pp. 456-466.
9. M. Gerzon, "Periphony: With-Height Sound Reproduction," J. Audio Eng. Soc., vol. 21, no. 1-2, Jan./Feb. 1973, pp. 2-10.
10. J. Blauert, Spatial Hearing: The Psychophysics of Human Sound Localization, 2nd ed., MIT Press, Cambridge, Mass., 1997.
11. N. Tsingos et al., "Modeling Acoustics in Virtual Environments Using the Uniform Theory of Diffraction," ACM Computer Graphics, Ann. Conf. Series (Proc. SIGGRAPH 01), Aug. 2001, pp. 545-552.
12. U.P. Svensson, R.I. Fred, and J. Vanderkooy, "Analytic Secondary Source Model of Edge Diffraction Impulse Responses," J. Acoustical Soc. of America, vol. 106, no. 5, Nov. 1999, pp. 2331-2344.
13. L. Savioja and V. Välimäki, "Interpolated 3-D Digital Waveguide Mesh with Frequency Warping," Proc. 2001 IEEE Int'l Conf. Acoustics, Speech, and Signal Processing (ICASSP 01), IEEE Press, Piscataway, N.J., vol. 5, 2001, pp. 3345-3348.

First sidebar references:
1. T. Lokki et al., "Real-Time Audiovisual Rendering and Contemporary Audiovisual Art," Organised Sound, vol. 3, no. 3, 1998, pp. 219-233.
2. T. Takala et al., "Marienkirche - A Visual and Aural Demonstration Film," Electronic Art and Animation Catalog (Proc. SIGGRAPH 98), ACM Press, New York, 1998, p. 149.

Second sidebar references:
1. ISO/IEC 14496-1:1999, Coding of Audiovisual Objects, Int'l Organization for Standardization, Geneva, Dec. 1999.
2. E.D. Scheirer, R. Väänänen, and J. Huopaniemi, "AudioBIFS: Describing Audio Scenes with the MPEG-4 Multimedia Standard," IEEE Trans. Multimedia, vol. 1, no. 3, June 1999, pp. 237-250.

Citation:
Tapio Lokki, Lauri Savioja, Riitta Väänänen, Jyri Huopaniemi, Tapio Takala, "Creating Interactive Virtual Auditory Environments," IEEE Computer Graphics and Applications, vol. 22, no. 4, pp. 49-57, July-Aug. 2002, doi:10.1109/MCG.2002.1016698