Introduction
Hello, my name is Jacob Weber. I am striving to become a re-recording/Foley engineer for audio post-production, and to work on music for film and video games. I've had a fun time working in Unreal Engine 4 and Audiokinetic's Wwise to implement audio for video games. Being able to design our own sounds and actually implement them in a game has been phenomenal and an overall learning experience. As a sound designer, it's up to you to make the overall aesthetic of your sounds match what is being shown to the audience.
One thing I learned from this class is that at the end of development, companies conduct what is called a postmortem: a review of which methods worked and which did not. Key points are scheduling, planning, implementing features, and what can be improved for the next project; a postmortem is not intended to focus on the actual design features of the game. In the following sections, I would like to go over the details of my sound design work for the UE4 Platformer Game project and for Limbo.
UE4 Platformer
What does implementation mean when it comes to sound design? It is the construction of a real-time game audio mix: how audio is triggered and sequenced, and how interactive behaviors are assigned to SFX, dialog, and music. As far as the aesthetic of my sounds goes, I tried to make them match a robot in a city that has been destroyed. The sound of any game should match what you're trying to portray to the audience. Aesthetic is used to inform design direction, create a common theme, and describe creative limitations, and it is often expressed in single words (dark, childish, ominous, etc.).
Sound design serves three main purposes: feedback, entertainment, and immersion. Feedback needs to be heard to be believed; it engages a second sense, provides off-screen cues and instructions, and adds a sense of realism to the unbelievable. Entertainment is subjective and often based on target-audience expectations and hyperrealism. Immersion mentally connects players to the game experience, drowns out external distractions, and supports the suspension of disbelief.
One technique I implemented was hyperrealism, which means that the implemented sounds are more exaggerated than they would be in real life. For example, I used some EQ and compression with a touch of reverb, especially on the footsteps and explosions, to give the robot a more convincing presence in the game. One key thing: you don't want to add so much reverb that a sound becomes out of place and overly exaggerated.
I believe I achieved my goals for implementation, event timing, mix balance, and project completion. When it came to implementation, I found it easy to maneuver around Unreal Engine 4 and get the sounds working where they were supposed to. For ambiences and certain sound effects, you can use the falloff distance setting to transition smoothly into other ambiences. Another function you can use in UE4 is spatialization (positional sound), which is mono only and allows sounds to be "3D". You can also implement sounds in Unreal Engine 4 in a couple of other ways, such as through the Blueprint editor, where you "program" functions together so that the game can operate, or through the Matinee window, where you can add audio files to actors in the game.
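To make the falloff idea concrete, here is a rough sketch of a linear attenuation curve like the one a sound's attenuation settings apply; the function name, parameters, and the linear shape are my own illustration, not actual UE4 API:

```cpp
#include <cassert>
#include <cmath>

// Illustrative linear falloff curve: full volume while the listener is
// inside InnerRadius, silent once it is FalloffDistance past that, and a
// linear ramp in between. Hypothetical names, not engine code.
float FalloffGain(float Distance, float InnerRadius, float FalloffDistance) {
    if (Distance <= InnerRadius) return 1.0f;                    // full volume
    if (Distance >= InnerRadius + FalloffDistance) return 0.0f;  // silent
    return 1.0f - (Distance - InnerRadius) / FalloffDistance;    // linear ramp
}
```

Because each ambience source has its own curve, two overlapping zones crossfade on their own as the player walks from one into the other.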
We were given more time to work in and understand UE4 than we were given for Limbo. Once the lab work was finished and I was able to mix, I spent the extra time learning more about the program and figuring out ways to better implement sound design in it.
As far as the mix balance went, I mixed as I implemented sounds, and then went back to focus on the mix as a whole once everything was in. Doing this let me work faster, ensured everything was at a good level from the beginning, and made it easier to mix everything as a whole since the levels were already pre-set. Of course, while implementing sounds, I wanted certain ones to have more impact in the game, such as the explosions and the robot breaking the door off the truck.
I believe most things went right in my project overall: the mix, the implementation, the sound design, and the project completion. Events (interactive sequencing and mixing) refer to the combination of audio and behaviors in the game. Granted, I could have been more focused, but it was a fun experience overall. What went wrong with my project? My sounds could have better fit the aesthetic of the game as a whole. I will continue to improve my sound design abilities to create a better experience for the audience.
Limbo
Implementing sound in Audiokinetic's Wwise was, by far, more complicated, but also a lot more enjoyable. With Limbo, I wanted the sound to fit the aesthetic I perceived from the game. I had only one other person in my group, and we chose Amnesia as the game whose sound design techniques we would draw from. Amnesia is a survival-horror game in which you are living through a nightmare. I researched its sound design concepts so that when I recorded sounds for the game, they matched what I perceived for Limbo through the lens of Amnesia. For example, I recorded old wood that made an eerie, creaky sound, and lowered its pitch for the rotating platform in Limbo.
Unlike UE4, which is a game engine, Wwise is audio middleware. Middleware is a code library specific to a task (in this case, audio implementation) that gives you more options and enhances the engine's functionality. Wwise provides high-level "plug-ins" such as convolution reverb, impulse designers, and rumble control. It allows an audio designer to implement complicated behaviors without coding, and to audition high-level interactive behavior right in the editor, unlike UE4.
Since each student had to record their own set of sounds from the list we were given, we were each able to apply our sound design to fit the game we researched. I was tasked with recording the ground and wood footsteps, as well as the sounds for the cart, the rope, the rotating platform, and the boy climbing objects.
As far as implementation goes, I had a better time learning and implementing sounds in Wwise than I did in UE4. With each action in the game having its own event, I found implementation much smoother and had more customizable control over the sound effects. For instance, I can "stop" an ambience, have it fade out, and transition into a new ambience by placing a "play" object after the "stop" object and fading the new ambience in. For example, for the "Eyes Open" sequence, I chose the specific "Eyes Open" event, "stopped" the menu ambience in a global setting with a fade-out of 1.25 seconds, and added a "play" object for the "Eyes Open" ambience with a fade-in of 0.75 seconds.
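The fade math behind that stop/play transition can be sketched as simple linear ramps. Wwise actually offers several curve shapes per event action; the linear version and these helper names are just my own illustration:

```cpp
#include <algorithm>
#include <cassert>

// Sketch of linear fade curves for a "stop then play" ambience transition,
// e.g. a 1.25 s fade-out on the menu ambience followed by a 0.75 s fade-in
// on the new one. Hypothetical helpers; Wwise configures this per action.
// Seconds is time elapsed since the action fired (assumed >= 0).
float FadeOutGain(float Seconds, float Duration) {   // gain ramps 1 -> 0
    return std::max(0.0f, 1.0f - Seconds / Duration);
}
float FadeInGain(float Seconds, float Duration) {    // gain ramps 0 -> 1
    return std::min(1.0f, Seconds / Duration);
}
```

Overlapping the tail of the fade-out with the start of the fade-in is what makes the handoff between ambiences sound seamless.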
Honestly, for event timing, the only real issue I had was learning the fade times needed for smooth transitions between events. After practicing with different settings and events, I was able to create transitions for the ambiences that neither clashed nor stood out. Thankfully, the "stop" and "play" objects help tremendously with the timing of events that occur in the game.
For the mixing process, I recorded most of my assets at -3 dBFS. From there we could customize the mix to better suit the game, and that headroom helped tremendously. The one thing that stood out is that I forgot to lower the rope-climbing assets; during gameplay they were too hot. Overall, the mix of the game turned out really well. Using Actor-Mixers, I was able to create objects for multiple groups of sounds. For example, I put the footsteps in a random container so that different footstep recordings would play at random, creating variance. Limbo contains non-linear audio and non-positional sound. Non-linear audio means the sounds are unpredictable: the creators gave the player complete control of what happens, so sounds play based on what the player chooses to do. Non-positional sound (2D/camera sound) stays in a constant perspective with no movement; it can be mono, stereo, or multichannel, and in Wwise it is known as "2D".
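A footstep random container of that kind can be sketched as a picker that avoids repeating the last variation, similar in spirit to Wwise's "avoid repeating last played" option. This standalone class is my own illustration of the idea, not Wwise's API:

```cpp
#include <cassert>
#include <cstdlib>

// Illustrative footstep random container: returns one variation index at a
// time and rerolls any immediate repeat, so the same recording never plays
// twice in a row. Hypothetical class, not the Wwise SDK.
class RandomContainer {
public:
    explicit RandomContainer(int NumVariations) : Count(NumVariations) {}
    int Next() {
        int Pick;
        do {
            Pick = std::rand() % Count;
        } while (Count > 1 && Pick == Last);   // avoid repeating last played
        Last = Pick;
        return Pick;
    }
private:
    int Count;
    int Last = -1;   // -1 means nothing has played yet
};
```

Even four or five variations picked this way are enough to keep repeated footsteps from sounding mechanical.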
A couple of things went wrong in our project: the rope sound was too hot, the cart played a sound when you came within a certain distance of it, and the ambiences didn't totally match the aesthetic of Amnesia. Working in a group let us get more work done efficiently and gave us more time for in-game mixing. My only qualm with collaborating was that we didn't share the same perception of Amnesia, which affected how each of us worked on the overall sound design for it.
Here is a 30-second clip of our project.
Enjoy.

