Issue No. 04 - July/August (2002 vol. 22)
Tapio Lokki , Helsinki University of Technology
Lauri Savioja , Helsinki University of Technology
Riitta Väänänen , Helsinki University of Technology
Jyri Huopaniemi , Nokia Research Center's Speech and Audio Systems Laboratory
Tapio Takala , Helsinki University of Technology
<p>Sound rendering is the auditory analogue of graphics rendering, used to create virtual auditory environments. In graphics, images are created by computing the distribution of light within a modeled environment; illumination methods such as ray tracing and radiosity are based on the physics of light propagation and reflection. Similarly, sound rendering is based on the physical laws of sound propagation and reflection. In this article we aim to clarify real-time sound rendering techniques by comparing them with the rendering of visual images. We also describe how sound rendering can be performed from knowledge of the sound source and listener locations, the radiation characteristics of the sound sources, the geometry of the 3D model, and material absorption data; in other words, the same data used for graphics rendering. In this context the term auralization, making audible, corresponds to visualization. Applications of sound rendering range from film effects, computer games, and other multimedia content to enhancing the sense of presence in virtual reality.</p>
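The physical model sketched in the abstract, delay and attenuation derived from source and listener positions, can be illustrated with a minimal direct-path renderer. This is a hedged sketch, not the authors' implementation: the function names, the 44.1 kHz sampling rate, and the simple inverse-distance gain law are assumptions made for illustration only.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at 20 degrees C

def direct_path(source, listener, fs=44100):
    """Propagation delay (in samples) and inverse-distance gain
    for the direct sound path between two (x, y, z) points in metres.

    Illustrative only: a full sound renderer would also model
    reflections from the room geometry and material absorption.
    """
    r = math.dist(source, listener)
    delay = round(r / SPEED_OF_SOUND * fs)  # travel time -> sample delay
    gain = 1.0 / max(r, 1.0)                # 1/r attenuation, clamped near source
    return delay, gain

def render_direct(signal, source, listener, fs=44100):
    """Render the direct sound of `signal` (a list of samples) as
    heard at `listener`: delay it, then scale it by the path gain."""
    delay, gain = direct_path(source, listener, fs)
    return [0.0] * delay + [gain * s for s in signal]
```

For example, a unit impulse emitted 34.3 m from the listener arrives 0.1 s later (4410 samples at 44.1 kHz) with amplitude 1/34.3, the same kind of geometric computation a ray-tracing image renderer performs for light paths.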
T. Lokki, L. Savioja, R. Väänänen, J. Huopaniemi and T. Takala, "Creating Interactive Virtual Auditory Environments," in IEEE Computer Graphics and Applications, vol. 22, no. 4, pp. 49-57, July/Aug. 2002.