May 3, 2013

Audio tools


From the very beginning, Nurbits was conceived as a rhythm puzzle game, so music is a critical part of the game. As we mentioned before, Unity3D, our game engine of choice, has limited support for precise musical timing. As a result, we spent some time evaluating all the possible ways we could implement the music system in Nurbits. Here we'll tell you about some of the options we looked at and their advantages and disadvantages. This is going to be relatively technical.

Using prerecorded loops

For a while we debated basing our music system around prerecorded loops of audio. The major advantage of this approach is that the music would sound really good. It works well for most games, where the music is only a background element and loops can be faded in and out as the player navigates the environment or as events occur. However, Nurbits is supposed to be a music game, and the major disadvantage is that the player would have very little creative control over the music beyond arranging loops. We would be limited in the amount of interactivity between the player and the music: we would only be able to make the loops play, stop, or switch to another loop. We also debated a hybrid approach in which a lower quality dynamic system eventually switches over to the higher quality loops when the player finishes a puzzle and unlocks a loop. We still really wanted a fully dynamic music system that involved more player creativity, so we continued looking for a better solution...
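To illustrate just how little control that leaves, a crossfade between two looping AudioSources is about as dynamic as a loop-based system gets. Here's a minimal sketch (the class and field names are our own, purely for illustration):

    using System.Collections;
    using UnityEngine;

    // Illustrative sketch: fading one looping AudioSource out while
    // fading another in is roughly the whole vocabulary of a loop system.
    public class LoopCrossfader : MonoBehaviour
    {
        public AudioSource current; // the loop playing now
        public AudioSource next;    // the loop to bring in

        // Run with StartCoroutine(Crossfade(2f)) when the player
        // moves to a new area or triggers an event.
        public IEnumerator Crossfade(float duration)
        {
            next.volume = 0f;
            next.Play();
            for (float t = 0f; t < duration; t += Time.deltaTime)
            {
                float k = t / duration;
                current.volume = 1f - k;
                next.volume = k;
                yield return null;
            }
            current.Stop();
            // Swap roles so the next crossfade works the same way.
            AudioSource tmp = current;
            current = next;
            next = tmp;
        }
    }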

Mod tracker files

Unity now supports mod tracker files, which are really nice because they can produce high quality audio with very small file sizes. They do so by storing short samples of waveform data and arranging and sequencing those samples to create longer pieces of music. We got particularly excited about XRNS2XMOD, a free utility that converts files from Renoise (modern DAW-style tracker software) to the .xm and .mod formats that Unity can read. Unfortunately, Unity treats tracker files the same way it treats normal PCM audio files like .wav and .mp3. If Unity exposed access to programmatically mix the different channels and switch between patterns in a mod file, that would be really powerful; as it stands, you can only play, stop, and loop them. We briefly contemplated writing our own mod files in real time, but that turned out to be more trouble than we wanted to get into: mod files are a fairly complex binary format, and we weren't sure what complications might arise in getting dynamically written mod files into Unity's asset pipeline.
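To show what we mean, once a tracker file is imported it is just another AudioClip, and this sketch (names are illustrative) is the full extent of the control available:

    using UnityEngine;

    // Illustrative sketch: an imported .xm/.mod file behaves like any
    // other AudioClip, so play, stop, and loop is all we get.
    public class TrackerPlayer : MonoBehaviour
    {
        public AudioClip trackerSong; // assign the imported .xm/.mod asset

        void Start()
        {
            AudioSource source = gameObject.AddComponent<AudioSource>();
            source.clip = trackerSong;
            source.loop = true; // loop the whole song...
            source.Play();      // ...or Play()/Stop() it. No access to
                                // individual channels or patterns.
        }
    }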

Pure Data

Pure Data is a visual programming language that can be used to do some really powerful audio processing, including building soft synths from very basic, low-level modular building blocks. We found a few open source libraries that integrate libpd, an embeddable version of the core PD libraries, with Unity: Kalimba integrates libpd for iOS and Android builds, and libpd4unity integrates it for Windows. There are also a ton of PD patches freely available online that do really cool and powerful things. However, using these open source libraries would limit the platforms we could target unless we did significant development to make them support other platforms, and there is no way to support web builds. We really want to avoid platform-specific native code plug-in development if possible. Also, none of us has ever used PD before, so there is a learning curve there as well.
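For the curious, driving a PD patch from Unity through libpd would look roughly like the sketch below. The method names follow libpd's C# binding as we understand it, so treat the details as assumptions rather than verified code:

    using UnityEngine;
    using LibPDBinding; // namespace of libpd's C# binding (assumption)

    // Rough sketch of controlling a PD patch from gameplay code.
    public class PdSynthDriver : MonoBehaviour
    {
        void Start()
        {
            // Load a patch that contains a [receive cutoff] object.
            LibPD.OpenPatch(Application.streamingAssetsPath + "/synth.pd");
        }

        // Forward a gameplay value to the patch in real time.
        public void SetCutoff(float hz)
        {
            LibPD.SendFloat("cutoff", hz);
        }
    }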

Fabric

In Unity 3.5, Unity introduced the MonoBehaviour.OnAudioFilterRead callback. It essentially lets you read from and write to the audio buffer directly, allowing you to write custom effect filters or create procedural audio. This gives developers the power to do essentially anything with audio, albeit at the very lowest level. Someone had to be writing some powerful higher-level tools on top of that, right? After digging around we found Fabric. Fabric is a Unity editor extension that adds a very powerful dynamic audio mixing system, which we assume pipes its audio through the OnAudioFilterRead callback. We also got very excited about the Fabric modular synth add-on: the ability to have all of the real-time synthesis control we saw in PD, but within Unity, was extremely appealing to us. Unfortunately, when we contacted the developer, the synth extension was still in development. The timing just didn't match up for us to use it on this project.
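To give a sense of how low-level OnAudioFilterRead is, here's a toy example of ours (nothing to do with Fabric) that writes a raw sine tone straight into the buffer:

    using UnityEngine;

    // Minimal procedural audio: generate a sine tone inside
    // OnAudioFilterRead, which Unity calls on the audio thread.
    // Attach to a GameObject with an AudioSource (or the AudioListener).
    public class SineGenerator : MonoBehaviour
    {
        public float frequency = 440f;
        private float phase;
        private float sampleRate;

        void Start()
        {
            sampleRate = AudioSettings.outputSampleRate;
        }

        void OnAudioFilterRead(float[] data, int channels)
        {
            float increment = frequency * 2f * Mathf.PI / sampleRate;
            for (int i = 0; i < data.Length; i += channels)
            {
                phase += increment;
                if (phase > 2f * Mathf.PI) phase -= 2f * Mathf.PI;
                float sample = 0.25f * Mathf.Sin(phase);
                for (int c = 0; c < channels; c++)
                    data[i + c] = sample; // same signal on every channel
            }
        }
    }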

UnitySynth

We came across UnitySynth on the Unity forums, a port of the open source C#Synth project to Unity. At first we played around with it and nearly wrote it off, because the main function it seems to fulfill at first glance is playing .midi files. MIDI files are cool because their file sizes are small, but they aren't known for amazing audio quality. UnitySynth uses sound fonts, which, much like the mod tracker files we looked at before, use small samples to play notes, and we found that there are sound fonts available with higher quality samples than the ones included with UnitySynth. After some hacking, we found that we could get UnitySynth's FM synthesis to give us the real-time control of its parameters that we liked when looking at PD and the Fabric modular synth extension. After more hacking, we figured out that we could give UnitySynth essentially fake MIDI files to play by creating and arranging our own MIDI events, as sketched below. We also managed to get UnitySynth to play each MIDI channel on a separate AudioSource, so that we could apply Unity's built-in DSP effects to each one individually. With enough modification, we eventually got UnitySynth to do pretty much everything we wanted.
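As a rough sketch of what that looks like, here's how notes can be triggered directly instead of loading a .mid file. The signatures follow the UnitySynth demo code from memory, so double-check them against the actual source:

    using UnityEngine;

    // Hedged sketch: drive UnitySynth's StreamSynthesizer with direct
    // note events and pull its output into Unity's audio pipeline.
    public class SynthNotePlayer : MonoBehaviour
    {
        private StreamSynthesizer synth;
        private float[] sampleBuffer;
        public float gain = 1f;

        void Awake()
        {
            // args: sample rate, channels, buffer size, max polyphony
            synth = new StreamSynthesizer(44100, 2, 1024, 40);
            sampleBuffer = new float[synth.BufferSize];
            synth.LoadBank("GM Bank/gm"); // sound font shipped with UnitySynth
        }

        // Trigger a note directly -- no MIDI file involved.
        public void PlayNote(int channel, int note, int velocity, int instrument)
        {
            synth.NoteOn(channel, note, velocity, instrument);
        }

        public void StopNote(int channel, int note)
        {
            synth.NoteOff(channel, note);
        }

        // Attach this to a GameObject with an AudioSource so Unity
        // requests samples from the synthesizer on the audio thread.
        void OnAudioFilterRead(float[] data, int channels)
        {
            synth.GetNext(sampleBuffer);
            for (int i = 0; i < data.Length; i++)
                data[i] = sampleBuffer[i] * gain;
        }
    }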


2 comments:

  1. Hi,

    I was wondering how you managed to get UnitySynth to play each channel on a different audio source? I have been trying to accomplish the same thing without much success. Did you modify the voices to write to separate buffers and then send each buffer to a different audio source? Any insight would be greatly appreciated.

  2. Hey Steve, we are doing exactly what you said. We modified StreamSynthesizer.FillWorkingBuffer() to take two parameters: the channel index and a float[] buffer for that channel. Then, at the beginning of the while loop that iterates through the voice nodes, we check whether each node belongs to the specified channel and, if not, continue on to the next one.
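    In rough outline, the change looks like the following. The internal names (activeVoices, VoiceParams, and the mixing call) are approximations from memory, so treat this as a sketch of the idea rather than a drop-in patch:

        // Inside C#Synth's StreamSynthesizer: mix only one channel's
        // voices into the buffer that channel's AudioSource will consume.
        private void FillWorkingBuffer(int channel, float[] buffer)
        {
            LinkedListNode<Voice> node = activeVoices.First; // name approximated
            while (node != null)
            {
                // Skip voices that don't belong to the requested channel.
                if (node.Value.VoiceParams.channel != channel)
                {
                    node = node.Next;
                    continue;
                }
                node.Value.Process(buffer, 0, buffer.Length); // call approximated
                node = node.Next;
            }
        }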
