AUDIO MIDDLEWARE
WWISE
INTERACTIVE AUDIO
This video shows how I use Wwise to implement adaptive audio systems, including player locomotion, user interface interactions, 3D audio and environmental spatialization, an interactive and dynamic music system, and SFX for visual elements. These adaptive systems allow real-time audio adjustments and triggers based on player actions and the current game state.
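As a rough illustration of what these hooks can look like in Unity (a minimal sketch, not the project's actual code), the snippet below uses the Wwise Unity integration; the event, switch, RTPC, and state names are hypothetical.

```csharp
using UnityEngine;

// Minimal sketch of Wwise hooks for adaptive audio in Unity.
// Event, RTPC, switch, and state names are placeholders, not the project's actual names.
public class AdaptiveAudioHooks : MonoBehaviour
{
    private CharacterController controller;

    private void Awake()
    {
        controller = GetComponent<CharacterController>();
    }

    private void Update()
    {
        // Drive a game parameter (RTPC) from locomotion speed so Wwise can
        // adapt layers, filters, and footstep behaviour in real time.
        AkSoundEngine.SetRTPCValue("Player_Speed", controller.velocity.magnitude, gameObject);
    }

    // Called by the animation or locomotion system on each footfall.
    public void OnFootstep(string surface)
    {
        AkSoundEngine.SetSwitch("Surface", surface, gameObject);
        AkSoundEngine.PostEvent("Play_Footstep", gameObject);
    }

    // Called by gameplay code when the global game state changes.
    public void OnEnterCombat()
    {
        AkSoundEngine.SetState("GameState", "Combat");
    }
}
```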
FMOD
Interactive Processing
Overview of an interactive music system that uses game state data, level design, player position, and object placement within the virtual environment to dynamically alter and process the audio output. The system employs techniques such as adaptive electroacoustic music composition, spatialization, and real-time sound processing to create an immersive and responsive audio experience for the player.
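A minimal sketch of how such a driver can be wired up in Unity with the FMOD Studio integration is shown below; the event path and parameter names ("Proximity", "GameState") are hypothetical, and the actual system is more elaborate.

```csharp
using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Sketch of an interactive music driver (hypothetical event and parameter names).
public class InteractiveMusicDriver : MonoBehaviour
{
    [SerializeField] private string musicEventPath = "event:/Music/Interactive"; // placeholder path
    [SerializeField] private Transform player;
    [SerializeField] private Transform areaCenter;   // a point of interest in the level
    [SerializeField] private float maxDistance = 50f;

    private EventInstance music;

    private void Start()
    {
        music = RuntimeManager.CreateInstance(musicEventPath);
        music.start();
    }

    private void Update()
    {
        // Map the player's distance to a point of interest onto a 0..1 parameter
        // that the FMOD project can use to blend music layers and effects.
        float distance = Vector3.Distance(player.position, areaCenter.position);
        float proximity = Mathf.Clamp01(1f - distance / maxDistance);
        music.setParameterByName("Proximity", proximity);
    }

    // Called by gameplay code when the game state changes (hypothetical encoding).
    public void SetGameStateParameter(float stateValue)
    {
        music.setParameterByName("GameState", stateValue);
    }

    private void OnDestroy()
    {
        music.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        music.release();
    }
}
```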
Positional Audio System
Using FMOD's event-based system, a playlist of pre-designed sounds in a scatterer instrument is triggered in a dynamic, randomized manner, modulated by the player's position in the virtual environment. This produces variations in both amplitude and granular texture through real-time pitch shifting and granulation.
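The sketch below illustrates the general pattern, assuming the FMOD Unity integration: an event built around a scatterer instrument is started at an emitter, and a distance-derived parameter is exposed so the FMOD project can modulate volume, pitch, and grain behaviour. Event path and parameter names are placeholders.

```csharp
using UnityEngine;
using FMODUnity;
using FMOD.Studio;

// Sketch of a position-driven scatterer event emitter (hypothetical names and ranges).
public class PositionalScattererEmitter : MonoBehaviour
{
    [SerializeField] private string scattererEventPath = "event:/Ambience/Scatterer"; // placeholder path
    [SerializeField] private Transform player;
    [SerializeField] private float maxDistance = 30f;

    private EventInstance instance;

    private void Start()
    {
        instance = RuntimeManager.CreateInstance(scattererEventPath);
        // Position the event at this emitter so the scatterer spawns sounds around it.
        instance.set3DAttributes(RuntimeUtils.To3DAttributes(transform));
        instance.start();
    }

    private void Update()
    {
        instance.set3DAttributes(RuntimeUtils.To3DAttributes(transform));

        // Normalize player distance into 0..1 and expose it as an event parameter;
        // inside the FMOD project this can drive amplitude, pitch shifting, and the
        // spawn/grain settings that give the sound its granular character.
        float t = Mathf.Clamp01(Vector3.Distance(player.position, transform.position) / maxDistance);
        instance.setParameterByName("PlayerDistance", t);
    }

    private void OnDestroy()
    {
        instance.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        instance.release();
    }
}
```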
IN-ENGINE IMPLEMENTATION
The audio implementation for this project was done directly in the Unity game engine, using a combination of its native audio tools and custom technology developed specifically for the project. It includes dynamic mixing and FX, musical transitions, single triggers and one-shots, pitch and volume variation, and randomization systems, among others; a simplified sketch follows below.
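As an example of these native-Unity building blocks (hypothetical names, not the shipped code), the snippet below combines randomized one-shot playback with pitch and volume variation and snapshot-based dynamic mixing.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Sketch of native-Unity helpers: randomized one-shots with pitch/volume variation
// and a snapshot transition for dynamic mixing (hypothetical names).
public class OneShotPlayer : MonoBehaviour
{
    [SerializeField] private AudioSource source;
    [SerializeField] private AudioClip[] variations;            // randomized clip pool
    [SerializeField] private Vector2 pitchRange = new Vector2(0.95f, 1.05f);
    [SerializeField] private Vector2 volumeRange = new Vector2(0.8f, 1.0f);
    [SerializeField] private AudioMixerSnapshot combatSnapshot; // e.g. duck music, boost SFX

    // Play a random variation with slight pitch and volume offsets.
    public void PlayOneShot()
    {
        AudioClip clip = variations[Random.Range(0, variations.Length)];
        source.pitch = Random.Range(pitchRange.x, pitchRange.y);
        source.PlayOneShot(clip, Random.Range(volumeRange.x, volumeRange.y));
    }

    // Crossfade the mix toward a snapshot, e.g. when combat starts.
    public void EnterCombatMix(float fadeSeconds = 1.5f)
    {
        combatSnapshot.TransitionTo(fadeSeconds);
    }
}
```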