The invention is a new sound propagation and perception model for virtual worlds (e.g. computer games) with which:
- sound propagation through the surrounding environment; and
- sound perception by virtual agents (e.g. players and non-player characters in the game)
can be simulated close to real-world behavior.
Rather than using traditional signal-processing methods, which capture the finest details of a sound wave but require tremendous computing power, the model discretizes each sound signal in the frequency and time domains and then extracts a sparse set of distinct values that forms a packet sufficient to represent the sound. As a result, the method significantly improves simulation efficiency without sacrificing fidelity.
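The discretize-then-sparsify idea above can be illustrated with a short sketch. This is not the patented method, only a minimal analogue: the signal is cut into time frames, each frame is transformed into frequency bins, and only the strongest few bins per frame are kept as the "packet". The function name, frame length, and top-k parameter are all illustrative assumptions.

```python
import numpy as np

def sound_packet(signal, frame_len=256, top_k=8):
    """Illustrative sketch (not the patented method): discretize a signal
    in time (frames) and frequency (FFT bins), then keep only the top_k
    strongest bins per frame as a sparse 'packet'."""
    packet = []
    n_frames = len(signal) // frame_len
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        spectrum = np.abs(np.fft.rfft(frame))
        # indices of the top_k largest-magnitude frequency bins
        idx = np.argsort(spectrum)[-top_k:]
        packet.append([(int(j), float(spectrum[j])) for j in sorted(idx)])
    return packet

# usage: a 1 kHz tone sampled at 8 kHz collapses to a few bins per frame
t = np.arange(2048) / 8000.0
pkt = sound_packet(np.sin(2 * np.pi * 1000 * t))
```

The packet stores only `top_k` (bin, magnitude) pairs per frame instead of every sample, which is the source of the efficiency gain the brief describes.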
Unlike traditional simulations that use uniform grids, the model divides the space into quads of differing sizes such that each quad covers a region with uniform acoustic properties. Using these adaptive quad-tree grids with pre-computed propagation values significantly reduces computation time and, in turn, enables functions that were previously infeasible.
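The adaptive subdivision can be sketched as follows. This is a generic quad-tree construction under assumed data (a 2-D array of per-cell acoustic property labels), not the invention's pre-computation scheme: a square region becomes a single leaf when all of its cells share one property, and is otherwise split into four smaller quads.

```python
import numpy as np

def build_quadtree(props, x, y, size, min_size=1):
    """Illustrative sketch: recursively split a square region of the
    property map until every quad is acoustically homogeneous."""
    region = props[y:y + size, x:x + size]
    if size <= min_size or np.all(region == region.flat[0]):
        # homogeneous region: one leaf node replaces size*size cells
        return {"x": x, "y": y, "size": size, "prop": int(region.flat[0])}
    half = size // 2
    return {"x": x, "y": y, "size": size, "children": [
        build_quadtree(props, x, y, half, min_size),
        build_quadtree(props, x + half, y, half, min_size),
        build_quadtree(props, x, y + half, half, min_size),
        build_quadtree(props, x + half, y + half, half, min_size),
    ]}

# usage: 4x4 map where one quadrant is absorptive (1), the rest reflective (0)
props = np.zeros((4, 4), dtype=int)
props[:2, :2] = 1
tree = build_quadtree(props, 0, 0, 4)
```

Homogeneous regions of any size collapse into single nodes, so propagation values need to be stored and looked up per quad rather than per cell.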
The model can place multiple virtual hearing agents at different locations in the virtual world, each with human-like sound perception ('virtual ears') that lets them acoustically understand their surroundings. Integrated with attention and behavior models, the agents perceive and classify sounds from their locally received sound packets and then react to them in human-like ways.
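A perceive-classify-react loop of this kind might look like the sketch below. The sound categories, reaction table, and frequency threshold are hypothetical placeholders, not the invention's attention or behavior models; the point is only that a sparse packet is enough input for classification.

```python
# Illustrative sketch: hypothetical sound classes mapped to reactions.
REACTIONS = {"low_rumble": "investigate", "high_pitch": "flee"}

def perceive(packet, bin_hz=31.25, threshold=1.0):
    """Classify a received sparse packet (list of frames, each a list of
    (frequency_bin, magnitude) pairs) by its single loudest bin, then
    return the agent's reaction."""
    best_bin, best_mag = None, threshold
    for frame in packet:
        for bin_idx, mag in frame:
            if mag > best_mag:
                best_bin, best_mag = bin_idx, mag
    if best_bin is None:
        return "ignore"  # nothing above the hearing threshold
    label = "high_pitch" if best_bin * bin_hz > 500 else "low_rumble"
    return REACTIONS[label]

# usage: a packet whose strongest component sits near 1 kHz (bin 32)
pkt = [[(4, 0.5), (32, 9.0)], [(32, 8.5)]]
reaction = perceive(pkt)
```

Because each agent classifies only the packet it locally receives, two agents at different locations can react differently to the same source, which matches the brief's per-agent perception.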
US patent rights pending
Huang, P., SCA '13 Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Pages 135-144
Docket # Z6477