The most important advice I can give is to do this because you love it, and it brings you joy.
If it’s part of you and you can never get enough,
you are on the right track.
— Frank Bry

Welcome to my personal projects and school archives page! Below you will find my personal sound design interpretations of a number of different media clips.

 

**Visual content in these videos was used solely for demonstrative, educational, or personal purposes. Many of these demo projects were produced in an educational environment, and these sound re-designs use visual film and media that were originally produced for other purposes. This new content is not used for profit in any way, and the new audio content may not be redistributed without permission.**


Personal Projects (2016-Present)

A Sound Effect & Spectravelers Sound Design Competition Submission (2021)

The awesome people over at A Sound Effect teamed up with Spectravelers to create a sound design competition: use one of their libraries to score a short featuring a crazy EMP-style digital explosion. The library they gave us was so fun, and it was great to use a lot of my doppler plugins on those sounds to simulate movement. Unfortunately, I discovered the competition about an hour before the submission deadline, which added the extra challenge of a 1-hour time constraint. In the future, I’m thinking of making a “3 or 4-hour” version of this as a side-by-side comparison to see if I can refine anything.


Westworld Spitfire Audio Scoring Composition Competition Submission (2020)

In a brilliant move, Spitfire Audio teamed up with HBO’s Westworld to create a scoring composition competition for a dynamic action scene from the series. This was my first time ever composing music to picture, and it was really fun! Music drives the emotional flow of a scene more than anything else in sound. I wanted to make the music change and flow as the car and the action do in this chase sequence as we switch views and perspectives. Check this out, as well as the awesome winning entry!


Previous Demo Reel

2016


Vancouver Film School Projects (2015-2016)

Anomaly 2 Sound Re-design

This was my final audio project at VFS, and it utilizes all of the audio skills I learned over the past year. The audio is a complete re-design, with the majority of the sound edits built from original recordings. The music was provided by 5 Alarm.

VO:

I recorded the voice-over and the majority of the sounds for the final trailer. I used McDSP's Futzbox plugin to get the radio effect for the introduction and scavenger voices; I made a wet/dry mix with a low-passed version of the recorded audio as the dry signal. For the commander’s helmet reverb, I took an impulse recording of a ripped piece of air duct from the factory and used that as the dry signal, with a different futz as the wet one. Overall, the process was very challenging. The human voice contains many nuances even before you add processing from reverbs and radio futzes. I had a lot of EQing to do in order to balance the quality of the voice with the effect I wanted in the end.
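The wet/dry blending idea can be sketched in a few lines of numpy. This is a minimal toy illustration, not the actual Futzbox chain: the one-pole filter, the distortion stand-in, and the mix values here are all my own assumptions for demonstration.

```python
import numpy as np

def one_pole_lowpass(x, alpha=0.1):
    """Very simple one-pole low-pass filter (a crude stand-in for a real EQ)."""
    y = np.zeros_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)   # smooth toward the input sample
        y[i] = acc
    return y

def wet_dry_mix(dry, wet, mix=0.5):
    """Blend a processed (wet) signal with an unprocessed (dry) one."""
    return (1.0 - mix) * dry + mix * wet

# Toy example: the "dry" path is a low-passed copy of the recording,
# blended with a hypothetical "futzed" (hard-clipped) wet path.
voice = np.sin(2 * np.pi * 440 * np.arange(1000) / 48000)
dry = one_pole_lowpass(voice)
wet = np.clip(voice * 4.0, -1.0, 1.0)   # crude radio-style distortion
radio_voice = wet_dry_mix(dry, wet, mix=0.6)
```

At `mix=0.0` you hear only the dry path, at `mix=1.0` only the wet one; everything in between is a crossfade between the two.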

SFX:

Many of the mechanical sounds for the B-E-A-R tank were recorded in a real metal factory, with a few library sounds to accent the current design. The evil dinosaur bots were designed from the squeaks of various doors opening and closing at the film school combined with the bowing of an old metal oil drum. The intro was especially fun to work on; I took a lot of recorded announcer dialogue from previous VFS school projects (with permission, of course) and futzed them in combination with sonar pings, radio tuning, distortion, and static. It made for a great opening soundscape to establish the context of the rest of the action later on. Many of the snow sliding sounds were taken from a skiing trip at Cypress Mountain, just north of Vancouver. The laser and gun designs were all done with Komplete synths: Massive and Reaktor 5.

The most challenging thing for me personally was perspective. With POV (point of view) changes, I had to take into account how much "air" or distance was between the camera and the point source of the sound. I took this into account when recording a lot of the mechanical sounds and ended up making three batches of recordings: close, medium, and far distances. This gave me more leeway when I added my edits.

I hope you enjoy the final redesign of this Anomaly 2 animated intro!


Kodos - Audio Supervisor and Lead Sound Designer

I was the lead sound designer, sound implementer, and sound supervisor for the game Kodos in a game audio collaboration at Vancouver Film School's Game Design department. Using Wwise and Perforce integration, I led a wonderful and collaborative team of four sound designers to record, organize, and implement over 1,000 voice-over assets with varying degrees of playback controlled by RTPCs. The Kodo characters required over 600 individual VO assets to be randomized in order to evoke an emotional feeling from the player. I delegated tasks on a bi-weekly basis, effectively communicating among the teachers, the four game designers, and other audio colleagues for QA testing. The collaboration was not for credit on my part and was entirely voluntary on top of a busy school schedule, and for good reason: I really enjoyed the way we were able to work well in a larger team setting.
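The randomized-VO idea is easy to sketch outside of Wwise. The following is a toy Python analogue of a random container that avoids repeating its last few picks; the asset names and the `history` size are made up for illustration, not taken from the actual Kodos project.

```python
import random

def make_vo_picker(assets, history=2):
    """Pick a random VO asset while avoiding the last few picks,
    similar in spirit to a random container with repeat-avoidance."""
    recent = []
    def pick():
        choices = [a for a in assets if a not in recent]
        choice = random.choice(choices)
        recent.append(choice)
        if len(recent) > history:
            recent.pop(0)          # forget the oldest pick
        return choice
    return pick

# Hypothetical asset names, purely for demonstration.
kodo_grunts = [f"kodo_grunt_{i:02d}.wav" for i in range(6)]
next_grunt = make_vo_picker(kodo_grunts)
```

Calling `next_grunt()` repeatedly yields varied lines with no back-to-back repeats, which is the basic trick for keeping character barks from sounding mechanical.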

I believe the best QA feedback the team ever received came from a little girl, no more than six years old, the daughter of one of the game designers working on another game. She giggled and smiled as she played our beta, reacting jubilantly to all of the various sounds in our world. Her father later told us that we had managed to conquer a feat that he could not: holding the attention of his daughter for fifteen straight minutes. Celebrations were most certainly in order!


Film School Shorts

Part of the film school’s curriculum had us go through the entire sound pipeline for a film, from pre-production to production sound, all the way to post-production and mastering. We navigated an intense but brief process and collaborated with the school’s film department to orchestrate full 7-10 minute shorts over the course of four months. A student in the sound department would work production on a film, then work as an editor, and then as either a dialogue or SFX mixer for the same film two months later in the pipeline, handling all of the turnovers, AAFs, stems, EDLs, and handoffs as they happened. However, while the average class was between 12-15 people, there were only 6 of us, meaning we had the added challenge of recording, editing, and mixing two short films instead and working as both dialogue and SFX mixers. It was an amazing experience, and I truly wish the entire curriculum had been longer!

On Dr. Voodoo, I worked production sound for both the pilot and part II and then edited both SFX and dialogue on both versions later on. Production sound was often a dance of hurry-up-and-wait on those long days on set. As teams of three, we swapped roles on production sound as the boom operator, the on-set mixer, and the sound assistant. It’s amazing how much time and work go into recording only a short amount of footage that later gets chopped down even finer as the production pipeline continues! Needless to say, I gained an astronomical amount of respect for anyone who works in film production from that point onward.

On the Brenda the Exterminator series, I worked three audio post jobs: I mixed the dialogue and music, recorded and edited the walla/looping, ADR, and foley, and served as the music supervisor and music editor.

The dialogue mixer is probably the most important position at the console on a sound stage. We have a saying in the sound industry that “dialogue is king,” and that is definitely the truth. A sound mix for a film is separated into three parts that all have different tracks and routing: dialogue, music, and effects. On larger films, there’s usually a dialogue mixer who mixes dialogue and music and a sound effects mixer who mixes sound effects, often separated into subcategories such as backgrounds, sound design, and foley. The dialogue mixer levels and attenuates everything relative to the dialogue, meaning the dialogue will always sound the loudest and come to the forefront of the mix. The dialogue mixer also has to make the dialogue sound smooth using software plugin tools like equalization (EQ) and compression.

The mixer is also responsible for making the dialogue sound “in the space” where the scene is taking place, including any ADR (automated dialogue replacement) tracks, which contain dialogue re-recorded in a studio to replace undesirable lines from production. For example: on Brenda, there’s a scene where she starts talking in an enclosed area using ADR lines, so I added some reverberation at that moment to create the sense that she’s actually speaking in that space. I also had the added fun of deciding what kind of filter I wanted to use for all of the phone calls. In the biz, we call that a “futz.”

Re-recording mixing was unlike anything I have ever done in the profession. It felt like an extremely unique, complex, and organic process, one that was constantly changing and expanding due to the nature of replaying certain parts or scenes over and over again while using automation to write and re-write volume, pan, or any other parameter for a plugin that the mixer would design. We each mixed four short films and a final project in a span of four months as part of our curriculum; I truly wanted to work another year just to fully understand the nuances of it.


Linear Game Audio Test Video Package

In order to emulate real video game audio applications, we took a time-constrained class assignment in which we were given 24 hours, a package of audio files, and a video from EA’s Need for Speed (Shift 2). I had a lot of fun with this one. We were able to edit and process the files any which way, but we were not allowed to use any audio files outside the ones we were given. I did my sound design the very old-fashioned way: by taking existing audio and manipulating it with time compression/expansion and pitch shifting. The only plugins I used were Altiverb (Outdoor Stadium), Futzbox, Doppler, EQ-7, Pitch Shift, and Timeshift. I pitched many sounds down and up and high- or low-passed them with an EQ to find new frequencies. I slowed specific sounds down and sped them up to decrease or increase the energy of a sound in a specific situation. Speeding up a very long sound increases its energy and impact. I made great use of the doppler plugin for many of the car-bys and animal roars and breaths to accent many of the transitions in the piece.
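The "old-fashioned" approach of coupling speed and pitch can be shown with naive resampling, where playing a sound faster also raises its pitch. This is a minimal numpy sketch of that idea, not the Pitch Shift or Timeshift plugins themselves; the sample rate and test tone are arbitrary.

```python
import numpy as np

def resample(x, ratio):
    """Naive resampling: ratio > 1 speeds the sound up (and pitches it up),
    ratio < 1 slows it down (and pitches it down), like varispeed tape."""
    n_out = int(len(x) / ratio)
    old_idx = np.arange(len(x))
    new_idx = np.linspace(0, len(x) - 1, n_out)
    return np.interp(new_idx, old_idx, x)   # linear interpolation

sr = 48000
t = np.arange(sr) / sr
roar = np.sin(2 * np.pi * 110 * t)   # one second of a 110 Hz stand-in "roar"

sped_up = resample(roar, 2.0)   # half as long, one octave higher
slowed = resample(roar, 0.5)    # twice as long, one octave lower
```

Dedicated time-compression/expansion tools decouple the two (changing duration without changing pitch), but this coupled version is the classic trick for darkening or energizing a sound.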


Wrath of the Lich King Opening Cinematic

One of my favorite classes in school was term 4’s music editing class. I really loved learning why music is placed and blocked where it is, usually at the director and music supervisor’s discretion. We got to take a bunch of different pieces of music and edit them into different examples of cinema. I liked this one so much that I took it one step further. This is my full take on the opening cinematic for World of Warcraft’s second expansion: Wrath of the Lich King. My favorite part of this was editing and mixing the voice-over, a realm that was very new to me. I got my partner to record his deep voice for the part of Terenas Menethil II, Arthas’ father, whom you hear speaking in the opening. I dropped his voice by one semitone and added 2:1 compression, followed by reverb using the Altiverb plug-in. I used two separate 7-band EQs, one for color and one for ducking out frequencies that were consistently boomy or shrill to the ear. The human voice has a ton of nuance that we take for granted because our ears naturally compress and EQ what we hear before the brain processes it. A microphone does not have that luxury, and as a result we get a very “raw” quality to the voice that can sound extremely cacophonous at times. That’s why we have our guardian angels: the dialogue mixers.
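The two numbers in that chain, one semitone down and a 2:1 ratio, have simple math behind them. Here is a small Python sketch of both; the -18 dB threshold is an assumed value for illustration, not the actual setting used on the voice-over.

```python
import math

def semitone_ratio(semitones):
    """Playback-rate ratio for a pitch shift of the given number of
    semitones: each semitone is a factor of 2**(1/12)."""
    return 2.0 ** (semitones / 12.0)

def compress_db(level_db, threshold_db=-18.0, ratio=2.0):
    """Static 2:1 compressor curve: above the threshold, every 2 dB of
    input becomes only 1 dB of output."""
    if level_db <= threshold_db:
        return level_db            # below threshold: untouched
    return threshold_db + (level_db - threshold_db) / ratio

# One semitone down is roughly a 5.6% drop in playback rate.
down_one = semitone_ratio(-1)
```

So a peak hitting -10 dB through this curve comes out at -14 dB, gently evening out the loudest moments of the performance.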


Foley

Our earliest project of the year was for a Foley recording class: an animation from the film school covered using only Foley sounds. If you don’t know what Foley is, it’s basically emulating real-world sounds cut to picture in real time. For example, I did all of the balloon rubbing and bumping in this short by taking two balloons and handling/rubbing them while recording for the duration of the piece. It was quite fun…and I got to pop them too!

If you’re thinking “Whoa, some sounds are definitely missing,” that’s intentional. The point of the project was to do ONLY the Foley sounds and nothing else. A completed project would have BGs, music, SFX/SPFX, and dialogue (including efforts); Foley is just one of the many layers of sound added to a film, and this is a good example of it. Check it out!


Pyramind School Projects (2013-2015)

Pacific Craft

One of my earlier assignments of the year, this is a mash-up of the movie Pacific Rim with the intro cinematic to Blizzard’s Starcraft II: Wings of Liberty.

I wanted to challenge myself to bridge both forms of cinema by way of sound design. Many of the sounds in this are based on the same set of machine and mechanical recordings, but manipulated and processed in many different ways. I chose the song Supremacy by Muse as a symbol that we are in a constant battle with power, whether it comes from our friends, our enemies, or even ourselves.


Prototype 2 Live Action Trailer

This is my take on the live-action trailer for a game called Prototype 2. I decided to choose the beautiful song Reckoner by Radiohead to pay homage to the story.


Furious Bots Gameplay Walkthrough

In this gameplay video of Unity’s Angry Bots, which I aptly named “Furious Bots,” I explore the idea of adaptive audio - audio that supports dramatic action by adapting intuitively and discreetly in order to remain contextually appropriate. In this level, I used Fabric in addition to Unity itself to make the music adapt to different environments and enemies.

This was also featured as an article on Game Audio Institute. In the article, you will find more details about my process for layering adaptive sounds.


Personal Projects (2012-2015)

Hyper Light Drifter

This is the trailer for Hyper Light Drifter, an absolutely beautiful adventure game that bridges the gap between 8-bit and 16-bit media. I will say with absolute resolve that Disasterpeace, who scored the game, is an incredible artist and musician. I decided to take the intro video and put my own spin on it. Both the music and the sound design are my creation:


Resistance 3 Trailer

I really wanted to create a mix entirely devoid of music, where the sound design rather than any non-diegetic score would ultimately shape the piece. I also chose to go for a very subdued mix compared to my other, more robust ones. It was an absolute blast to record and work with a lot of train noises on live train tracks. Fun fact: the creaking noises one hears actually aren’t from a train, but from a recording of a squeaky wooden door opening and closing multiple times at different speeds.


Blizzard Splash Logo

By far one of my favorite projects to work on: here’s the Blizzard splash logo for World of Warcraft. If you’re a Warcraft fan, you’ll hear some familiar sounds in there I’m sure!


Motion Graphic Projects

These are a series of ongoing projects doing sound design for various motion graphics. I wanted to challenge my ability to make unique sounds that captivate the audience in order to tell a sonic story.

Countdown


Papa Logo Sound Design

I decided to take this motion graphic for Papa and create an original sound design for it, as the original had only music. I really loved making and layering pretty much all of the sounds for this one. I added my own spin to the audio by giving a retro element to the transformation of the box that eventually becomes their logo.


Indico Sound Design

I reworked the sound for a short motion graphic titled “Indico”. I did this with the intent of clearly placing it in an underwater environment, with serene ambience sometimes interrupted by what I like to call “the great unknown.” I wanted to tell a sonic story by juxtaposing the calm with the unnerving.


Monsters in Cinema

This is an ongoing project of making sounds for various monsters we see in cinema. The idea is to keep adding and refining sounds and monster vocalizations as time goes on; this allows for further experimenting with sound design practices: layering, processing/manipulating audio, and various effect and vocoder plugins.

Monster Mash

 

Monster Mash 2.0


Zombie Game Menu UI Demo

This is part of a series demonstrating my ability with UI sounds. UI sounds are challenging because the sound designer has to find a sound or set of sounds that won’t become ear-splittingly repetitive, but will still immerse the player in the setting while enhancing the rest of the game's sound design. UI sounds have to walk a fine line between adding that immersive element and distracting the player from the environment and story.

I plan on eventually adding an entire section with various built-in menus and on-screen UI sounds where one can just hover and click on the various options to hear the sounds. For now, I will post some linear examples like this basic zombie game.


My First Mixing Project - Bridges (2014)

This was my very first finished semi-professional mix, made at Pyramind Studios. I received the audio stems and MIDI for this song in lackluster Garageband fidelity (actually from Garageband) and decided to rework the entire song into something grand.

I learned a great deal about mixing from this transformation. Parallel compression on drums adds a nice extra bite (thank you, prefader sends). Separating a single bass into low and high frequencies spreads out the sound. Routing different guitar tracks with different frequencies and timbres to output buses was essential to the workflow. Careful use of effects and processing made a night-and-day difference in quality. Learning to fine-tune your ear, to take breaks from the constant sound of your work, and to evaluate your mix objectively was paramount to a successful end product.
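Parallel (a.k.a. "New York") compression is conceptually simple: blend a heavily squashed copy back under the untouched signal. Here is a minimal numpy sketch of the idea; the hard-clip "compressor," threshold, and blend amount are placeholder assumptions, far cruder than a real bus compressor.

```python
import numpy as np

def hard_compress(x, threshold=0.2):
    """Crude stand-in for heavy compression: clamp peaks at the threshold."""
    return np.clip(x, -threshold, threshold)

def parallel_compress(dry, amount=0.5, threshold=0.2):
    """Blend the untouched signal with a heavily compressed copy,
    like a prefader send feeding a compressor bus mixed back in."""
    squashed = hard_compress(dry, threshold)
    return dry + amount * squashed

# Toy drum hits: quiet ghost notes next to loud transients.
drums = np.array([0.05, 0.9, -0.7, 0.1, -0.05, 0.8])
fattened = parallel_compress(drums)
```

Because the squashed copy contributes roughly the same level everywhere, the quiet details get lifted proportionally more than the peaks, which is why the technique fattens drums without flattening their transients.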

 

The original composition was low-fidelity audio and simple MIDI, given to me as stems from Garageband. I decided to put it through the wringer. Props to my friend Jon for the original composition!


Other Music Audio Projects

NIN Closer - now with more synths! (2013)

This was one of the class assignments from earlier in the school year at Pyramind, made entirely in Reason. It started my love of processing synths and layering sounds.

 

This was one of the first projects I created at Pyramind. I reworked an arrangement of Nine Inch Nails' "Closer" entirely in Reason, using combinators and numerous rack extensions!

 

MK Ultra Dream (2011)

This was one of my final sound design projects as an undergrad at Northwestern University. I was playing with convolution reverb on the track “MK Ultra” by Muse. I took the first guitar lick, time-stretched it, and then used it as an impulse to convolve with the entire song. When a short impulse is convolved with something much longer, you can get an ethereal, dreamlike effect, which is what I was going for. The convolution also emphasized the main pitch of the song and made the entire piece sound like a giant church organ.
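The impulse-times-song trick is just a direct convolution. The following numpy sketch shows the mechanics with synthetic stand-ins (a decaying sine for the "lick" and a looped copy for the "song"); it is an illustration of the technique, not the actual session.

```python
import numpy as np

sr = 8000
t = np.arange(sr // 4) / sr             # a quarter-second "guitar lick" stand-in
lick = np.sin(2 * np.pi * 220 * t) * np.exp(-t * 8)   # decaying 220 Hz tone

song = np.tile(lick, 8)                  # stand-in for the full-length song

# Convolve the short impulse with the much longer signal; every sample of
# the song excites a copy of the lick, smearing the whole piece through the
# lick's spectrum -- the dreamy, organ-like wash.
wash = np.convolve(song, lick)
wash /= np.max(np.abs(wash))             # normalize to avoid clipping
```

Because convolution multiplies the two spectra, whatever pitch dominates the impulse gets reinforced everywhere in the result, which is exactly why the song's main pitch rang out like an organ.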

Hardcore convolution reverb in effect.