UDK Audio Implementation II – SoundNodes and SoundCues pt. 1

Hello there!

Since there is a lot to write about SoundNodes and SoundCues, and I don’t think huge posts are fancy, I decided to break this post in two; otherwise it would take another week to get done.

I actually started writing about SoundActors for this second post, but that wouldn’t make much sense before we know how to handle sound files in UDK: I’m referring to SoundNodes and SoundCues. These are fairly easy to understand.

So, first things first: let’s import our sound files:


The Grouping option allows you to organize your files. I opted to divide them into Ambient, VoxPlayer and SFX for the mini-game in question. This basically gives you a sub-folder inside your Package, at least as a first level of organization.

If you don’t tick ‘Auto Create Cue’ you will be importing only the file itself, which UDK calls a SoundNodeWave. Can you use this by itself in the game? You can, with the AmbientSoundSimple actor. The properties you can set with it are attenuation and modulation, but I’ll go into detail about that in the next post.

Ticking ‘Auto Create Sound Cue’ will do what it says, which can save you some time if there’s a point in creating one cue for each sound you are importing. The Nodes mentioned in the dialog are just a few of those you can add at any time inside the SoundCue Editor.

To be specific, the Cue Volume numbers are multipliers. You can also change this later.

In the Content Browser, this is how SoundNodeWaves and SoundCues look:


So, in which cases may it not make much sense to auto-create SoundCues? Let’s say you have 3 sounds for a door opening and during the game you want to trigger them randomly. That is, you have a bunch of similar doors, but instead of triggering a different sound for each one, you trigger the same SoundCue for all of them, and UDK will randomly pick just one of the 3 SoundNodeWaves for each instance. Creating a SoundCue for each one is simply unnecessary and will pollute your Content Browser. If you are importing a single sound instead, auto-creating is most certainly appropriate, although you can always do it later.

What is a SoundCue anyway? It acts as a container for one or multiple SoundNodeWaves. Inside the SoundCue (in the Sound Cue Editor) you can change a bunch of settings that we will see below.


These are the ‘boxes’ I wrote about previously in this blog series. Here is the doors example. The first 3 nodes in the chain are the SoundNodeWaves, one for each sound imported. The speaker symbol is there by default, and the only thing it does is connect its output to the last SoundNode or SoundNodeWave. It connects to one only. That doesn’t mean you cannot have more than one sound running at the same time and route things in parallel: for that, we use a Mixer node right after it. Right after it? SoundCues are read from right to left. This seemed weird to me at first, particularly looking at the Random node, but it makes sense if we think of the nodes as simple filters you send your SoundNodeWaves through.

You can preview the whole SoundCue or a single SoundNode, and change parameters during playback.

Create a new SoundCue by right-clicking in the Content Browser window, where the Package you’re working on is shown.

To add a SoundNodeWave to a SoundCue, first select it in the Content Browser. Then right-click on the background of the SoundCue Editor, and a reference to that SoundNodeWave appears in the menu. The option below it, ‘Random: [SoundNodeWave name]’, simply adds a Random SoundNode already connected to the specified audio file, which at first glance doesn’t make much sense. We’ll see why below.


If nothing is selected, you can see the properties of the SoundCue itself:

soundcue properties

The first is the SoundClass. If your SoundCue is set to the Ambient SoundClass, it will be affected by AmbientZones (this is a quick note until I write about this in more depth). You can type the name of an existing SoundClass directly into the field, but you can’t create a new one there.

Debug enables debugging options for the cue.

Volume multiplier doesn’t go higher than 1. You can type higher values, but there won’t be any difference. This value refers only to the SoundCue you are about to create; the SoundNodeWave will be kept at 0.75.

Pitch multiplier goes way higher than 1.

Face FX… this is a whole new level. In case you are building sounds for a facial animation, you have the option to link them to this SoundCue.

Max concurrent play count is the maximum number of simultaneous plays of the SoundCue. Say you have 3 triggers set to play a given SoundCue on a defined action (let’s say on touch), but this value set to 1: you can touch the triggers as many times as you want simultaneously, yet the SoundCue will only play its audio content once at a time. By the way, it won’t ‘remember and store’ the number of times you triggered that sound while it was playing and play them later.
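As a rough illustration of that behaviour, here is a plain Python sketch of the concept; this is not UDK’s API, and all names are made up:

```python
class CuePlayLimiter:
    """Toy model of Max Concurrent Play Count (illustrative, not UDK code)."""

    def __init__(self, max_concurrent=1):
        self.max_concurrent = max_concurrent
        self.active = 0  # instances currently playing

    def trigger(self):
        """Try to start a play; surplus triggers are dropped, never queued."""
        if self.active >= self.max_concurrent:
            return False  # ignored: not stored to play later
        self.active += 1
        return True

    def finished(self):
        """Call when one instance stops playing."""
        self.active = max(0, self.active - 1)
```

With `max_concurrent=1`, a second `trigger()` while the first instance is still playing returns `False`; only after `finished()` can a new play start.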

Let’s take a look on some of the SOUND NODES available in there:

RANDOM: this doesn’t randomize only SoundNodeWaves; you can even randomize Modulators. UDK picks the inputs according to the weight parameter you give each one in the Random SoundNode. You can also feed a Random node into another Random node.
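The weight-based pick can be sketched like this (a hypothetical Python stand-in for the Random node’s behaviour, not actual UDK code):

```python
import random

def pick_by_weight(inputs, weights, rng=random):
    """Pick one input with probability proportional to its weight
    (illustrative model of the Random node's weight parameter)."""
    total = sum(weights)
    threshold = rng.uniform(0, total)
    cumulative = 0.0
    for item, weight in zip(inputs, weights):
        cumulative += weight
        if threshold <= cumulative:
            return item
    return inputs[-1]  # guard against floating-point edge cases

# A weight of 0 means an input is never chosen:
door = pick_by_weight(["door_a", "door_b", "door_c"], [1.0, 0.0, 0.0])
# → "door_a"
```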

CONCATENATOR NODE: it’s easiest to think of it as a mixer that outputs each connected node sequentially, at the volume we define for each one.
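Conceptually (a minimal Python sketch using plain sample lists; this is not how UDK processes audio internally):

```python
def concatenate(inputs):
    """Play each input in sequence, scaled by its per-input volume.
    `inputs` is a list of (samples, volume) pairs; samples are floats."""
    output = []
    for samples, volume in inputs:
        output.extend(s * volume for s in samples)
    return output

# Two inputs, the first at half volume, played back to back:
sequence = concatenate([([1.0, 1.0], 0.5), ([0.8], 1.0)])
# → [0.5, 0.5, 0.8]
```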

MIXER: it’s plausible, and many times desirable, to have more than one sound running at the same time, each with specific behaviours; maybe even to build a sequence out of them in just one SoundCue, or simply to have a continuous sound with others playing randomly on top. Simply place a Mixer node after the speaker and add (right-click) as many inputs as you need.

ATTENUATION: with this node you can enable the attenuation itself and spatialization.

     Spatializing the SoundCue means that we will hear exactly where the sound comes from; it’s what is called ‘placing the sound in a 3D world’. There is no option here to scale the amount of spatialization (as there is, for example, in FMOD), so with fast movement close to the sound source during gameplay, the localization of the object can respond too abruptly.

     Before I proceed, I should definitely point to the following video. Despite being an FMOD tutorial, it explains extremely well:

  • 3D and 2D sound;
  • max and min distances;
  • linear, logarithm and inverse distance curves.

     dB Attenuation at Max is where we define how much of the sound source we should hear at the maximum distance defined in Radius Max.

     Distance Type lets you restrict attenuation to a specific axis. By selecting one of the plane options, the other axes won’t have any kind of attenuation, hence ‘Infinite(…)Plane’.

> Normal: attenuates in all directions;

> InfiniteXYPlane: only attenuates the sound over distance along the Z-axis.

> And so on.

     Radius Min: the distance from which the sound source starts to attenuate. Inside Radius Min the sound always plays at the level defined for the SoundCue.

     Radius Max: the distance at which the maximum attenuation defined above is reached. Between the Min and Max Radii, set in world coordinates, the attenuation happens according to the chosen parameters.

     The LowPass Filter settings work similarly to the Min and Max Radii for attenuation.

     LPF Radius Min: from which distance from the sound source the LPF should be applied.

     LPF Radius Max: at which distance from the sound source the LPF has its maximum amount.
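Putting the radii together, the volume scale over distance can be sketched as follows. This is an illustrative Python model; the curve formulas are stand-ins for the linear/logarithmic options, not UDK’s exact math:

```python
import math

def attenuation_scale(distance, radius_min, radius_max, curve="linear"):
    """Volume scale in [0, 1] as a function of distance from the source."""
    if distance <= radius_min:
        return 1.0  # inside Radius Min: full SoundCue level
    if distance >= radius_max:
        return 0.0  # at or beyond Radius Max: fully attenuated
    t = (distance - radius_min) / (radius_max - radius_min)  # 0..1
    if curve == "linear":
        return 1.0 - t
    if curve == "log":
        # drops quickly near the source, flattens toward Radius Max
        return 1.0 - math.log10(1.0 + 9.0 * t)
    raise ValueError("unknown curve: %s" % curve)

print(attenuation_scale(550, radius_min=100, radius_max=1000))  # → 0.5
```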


The ATTENUATION AND GAIN NODE offers the possibility of having a sound increase in level the farther away it is, up to a peak distance at which the gain stops growing.

     The Min Volume refers to the volume between the sound source and Radius Min; it can even be set above 1. This value stays stable until the distance reaches Radius Min, from which point the sound starts increasing in volume according to the model defined in Gain Distance Algorithm. The volume stops growing at Radius Peak.
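The growth between Radius Min and Radius Peak can be sketched like this (again a hypothetical Python model, with linear growth as a stand-in for the Gain Distance Algorithm):

```python
def gain_scale(distance, radius_min, radius_peak, min_volume=0.2):
    """Volume grows with distance: min_volume inside Radius Min,
    rising to full level at Radius Peak (illustrative only)."""
    if distance <= radius_min:
        return min_volume
    if distance >= radius_peak:
        return 1.0
    t = (distance - radius_min) / (radius_peak - radius_min)  # 0..1
    return min_volume + (1.0 - min_volume) * t  # linear stand-in
```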

The options for the LPF are exactly the same.


I will deal with the distance parameters in more detail in a future post, as they involve some simple physics and can be combined with other features in UDK, which is beyond the scope of this post. Meanwhile, here are graphical examples of the curve types in attenuation mode.

I’ll be writing about the remaining SoundNodes in part 2.


Happy Audio Implementation!

UPDATE! (August-2014) I am no longer into game sound design. I am sorry for not continuing with this series. 😦

