Issue No. 4 (vol. 22), July/August 2002
ISSN: 0272-1716
pp: 49-57
Tapio Lokki , Helsinki University of Technology
Lauri Savioja , Helsinki University of Technology
Riitta Väänänen , Helsinki University of Technology
Jyri Huopaniemi , Nokia Research Center's Speech and Audio Systems Laboratory
Tapio Takala , Helsinki University of Technology
Sound rendering is the auditory analogue of graphics rendering, used to create virtual auditory environments. In graphics, images are created by computing the distribution of light within a modeled environment; illumination methods such as ray tracing and radiosity are based on the physics of light propagation and reflection. Similarly, sound rendering is based on the physical laws of sound propagation and reflection. In this article we clarify real-time sound rendering techniques by comparing them to the rendering of visual images. We also describe how sound rendering can be performed from the locations of the sound source(s) and the listener, the radiation characteristics of the sound sources, the geometry of the 3D model, and material absorption data; that is, from largely the same data used for graphics rendering. In this context the term auralization, making audible, corresponds to visualization. Applications of sound rendering range from film effects, computer games, and other multimedia content to enhancing the sense of presence in virtual reality.
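To illustrate the kind of geometric computation the abstract describes (source and listener positions, room geometry, and material absorption combined into an audible reflection), the following is a minimal Python sketch of a first-order specular reflection computed with the image-source idea. All function names, parameter values, and the frequency-independent absorption model are illustrative assumptions, not the authors' implementation.

    # Minimal sketch: delay and gain of one wall reflection via an image source.
    # Assumptions: point source, planar wall, frequency-independent absorption.
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s, assumed air at room temperature

    def mirror_point(source, plane_point, plane_normal):
        """Mirror the source position across a planar wall (image source)."""
        n = plane_normal / np.linalg.norm(plane_normal)
        d = np.dot(source - plane_point, n)
        return source - 2.0 * d * n

    def reflection_path(source, listener, plane_point, plane_normal, absorption=0.1):
        """Return (delay_s, gain) of the single reflection via the given wall.

        The path length is the distance from the image source to the listener;
        gain combines 1/r spreading with a wall absorption coefficient.
        """
        image = mirror_point(source, plane_point, plane_normal)
        r = np.linalg.norm(listener - image)
        delay = r / SPEED_OF_SOUND
        gain = (1.0 - absorption) / max(r, 1e-6)
        return delay, gain

    if __name__ == "__main__":
        src = np.array([1.0, 2.0, 1.5])      # sound source position (m), hypothetical
        lst = np.array([4.0, 3.0, 1.5])      # listener position (m), hypothetical
        wall_pt = np.array([0.0, 0.0, 0.0])  # a point on the floor plane
        wall_n = np.array([0.0, 0.0, 1.0])   # floor normal
        print(reflection_path(src, lst, wall_pt, wall_n))

In a real-time auralization system, many such reflection paths would be rendered as delayed, attenuated, and filtered copies of the source signal; this sketch shows only the geometric step for a single wall.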
Tapio Lokki, Lauri Savioja, Riitta Väänänen, Jyri Huopaniemi, Tapio Takala, "Creating Interactive Virtual Auditory Environments", IEEE Computer Graphics and Applications, vol. 22, no. 4, pp. 49-57, July/August 2002, doi:10.1109/MCG.2002.1016698