Sound rendering

Abstract
We present a general methodology to produce synchronized soundtracks for animations. A sound world is modeled by associating a characteristic sound with each object in a scene. These sounds can be generated from a behavioral or physically-based simulation. Collision sounds can be computed from the vibrational response of elastic bodies to the collision impulse. Alternatively, stereotypic recorded sound effects can be associated with each interaction of objects. Sounds may also be generated procedurally. The sound world is described with a sound event file and is rendered in two passes. First, the propagation paths from 3D objects to each microphone are analyzed and used to calculate sound transformations according to the acoustic environment. These effects are convolutions, encoded into two essential parameters: the delay and attenuation of each sound. The time dependency of these two parameters is represented with key frames, making it completely independent of the original 3D animation script. In the second pass, the sounds associated with objects are instantiated, modulated by interpolated key parameters, and summed into the final soundtrack. The advantage of a modular architecture is that the same methods can be used for all types of animation: keyframed, physically-based, and behavioral. We also discuss the differences between sound and light, and the remarkable similarities in their rendering processes.
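The second rendering pass described above can be illustrated with a short sketch: each object's characteristic sound is modulated by its keyframed delay and attenuation (interpolated per sample) and the results are summed into one soundtrack. The names (`SoundEvent`, the keyframe layout, `render_soundtrack`) are assumptions for illustration, not the paper's actual data structures.

```python
# Hypothetical sketch of the second rendering pass: interpolate keyframed
# delay/attenuation parameters, modulate each instantiated sound, and sum
# everything into the final soundtrack. Structures are assumed, not from
# the original paper.

from dataclasses import dataclass
import numpy as np

SAMPLE_RATE = 44_100  # samples per second


@dataclass
class SoundEvent:
    """One sound instance: a source signal plus keyframed path parameters."""
    signal: np.ndarray             # the object's characteristic sound (mono)
    start_time: float              # when the event begins, in seconds
    key_times: np.ndarray          # keyframe times, in seconds
    key_delays: np.ndarray         # propagation delay at each keyframe, seconds
    key_attenuations: np.ndarray   # amplitude factor at each keyframe


def render_soundtrack(events: list[SoundEvent], duration: float) -> np.ndarray:
    """Instantiate each sound, modulate it by the interpolated key
    parameters, and sum the results into the final soundtrack."""
    out = np.zeros(int(duration * SAMPLE_RATE))
    for ev in events:
        n = len(ev.signal)
        # Times at which each source sample is emitted by the object.
        emit_times = ev.start_time + np.arange(n) / SAMPLE_RATE
        # Linearly interpolate the keyframed delay and attenuation.
        delays = np.interp(emit_times, ev.key_times, ev.key_delays)
        gains = np.interp(emit_times, ev.key_times, ev.key_attenuations)
        # Each sample reaches the microphone after its propagation delay.
        arrival_idx = np.round((emit_times + delays) * SAMPLE_RATE).astype(int)
        valid = (arrival_idx >= 0) & (arrival_idx < len(out))
        np.add.at(out, arrival_idx[valid], gains[valid] * ev.signal[valid])
    return out
```

Because the delay and attenuation are stored as keyframes rather than derived from the animation script at render time, the same soundtrack pass works unchanged whether the motion came from keyframing, physical simulation, or behavioral animation.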