Edge of Nowhere
Creating the Score For Insomniac's First VR Title

Composer, Michael Bross

Edge of Nowhere is a VR-driven third-person action-adventure that was released for Oculus Rift in June 2016. It was developed by Insomniac Games.

The game is story-driven, with both tragedy and horror elements, and is inspired by H.P. Lovecraft’s “At the Mountains of Madness.” The story is set in Antarctica in the early 20th century.

I spent approximately 6 months working on the project and created 2 ½ hours of final music, with much of that music written in the last 3 months. My work finished in May 2016. The score ended up being a blend of orchestral elements and designed atmospherics. Most of the live orchestral elements were recorded with a 52-musician group at Ocean Way Studios in Nashville, TN, with a smaller group session at Fantasy Studios in Berkeley, California.

Here is my process to create the score...

Project kickoff - What direction do we want to go?

First steps included meetings with Jamie McMenamy (audio lead) and Brian Allgeier (creative director), who walked me through the game’s concept, gameplay and story. I was also given storyboards, concept art, design documents and captures of gameplay. I’m based in the San Francisco Bay Area, so I flew to both of Insomniac’s studios, in Burbank, California, and Durham, North Carolina, for additional meetings and to play through the game in its current state.


From here, Jamie and I tossed around a lot of ideas about the direction of the music.

We knew that it had to capture:

  • A sense of mystery
  • The beauty of awe-evoking landscapes
  • The horror and tragedy that the main character experiences.

We also knew emotionally what the score needed to be, but we weren’t sure what stylistic direction we wanted to take.

Some initial directional possibilities we discussed:

  • A “period” score because the game was set in the early 20th century.
  • A soundtrack using early synthesizers, mixed with horror elements.
  • Something distorted and heavy.
  • A contemporary design with orchestral and electronic elements.

My Process

Some composers decide exactly what they want to create and I’ve been down that road, too, on some projects, but I also enjoy not knowing initially what I’m going to create and being open to exploration. There’s an uneasiness that comes with that. An anxiety. Of feeling around in the dark. But there’s also the joy of discovery. Of letting something unfold. Of affecting a process and letting the process affect me. Because not only do I want to create something fresh and unique and amazing, in this process I want to grow.

Also, I know a lot of my composer friends like to establish themes and motifs in the first steps of their process, but I don’t usually focus exclusively on themes or motifs in the early phase. For me, what’s important is capturing a vibe. Establishing a “sound”. An atmosphere. And from that, themes begin to organically appear in the process.

I begin my development process by creating “source”. What do I mean by that?

Creating Source

It means I’m *not* writing music yet. Instead, I begin by creating raw material that will help me create music.

Here are some approaches I took for this project:

  • Recording 2 - 3 hours worth of guitar feedback.
  • Recording piano, cymbals and other sampled material that I would then manipulate.
  • Playing and recording violin myself (I don’t play violin).
  • A recording session of a 6-piece string section consisting of 4 violin players and 2 cellists at Fantasy Studios in Berkeley, CA.

I have some examples for you.
Here is a piano chord, basically a raw recording with a touch of reverb...


Now here's that same piano chord, but processed. I pitched it down and did a "time stretch" on it along with adding long-tail reverb and delay...
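To make the kind of transformation described here concrete, below is a minimal sketch of that processing chain using only NumPy. It is not the actual tooling used on the score (that would have been studio plugins); the pitch-down is a naive resample, which lowers pitch and stretches time together, and the long tail is approximated with a simple feedback delay. All parameter values are illustrative.

```python
import numpy as np

def pitch_down(audio: np.ndarray, semitones: float) -> np.ndarray:
    """Naive pitch-down by resampling: lowering pitch also stretches time,
    loosely mirroring a combined pitch-shift + time-stretch."""
    ratio = 2 ** (semitones / 12.0)              # < 1 for downward shifts
    old_idx = np.arange(len(audio))
    new_idx = np.arange(0, len(audio) - 1, ratio)
    return np.interp(new_idx, old_idx, audio)

def feedback_delay(audio: np.ndarray, sr: int, delay_s: float = 0.35,
                   feedback: float = 0.5, repeats: int = 6) -> np.ndarray:
    """Approximate a long tail by summing progressively quieter delayed copies."""
    d = int(sr * delay_s)
    out = np.zeros(len(audio) + d * repeats)
    for i in range(repeats + 1):
        out[i * d : i * d + len(audio)] += audio * (feedback ** i)
    return out

sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
# Stand-in for the raw piano recording: a synthesized C-major chord.
chord = sum(np.sin(2 * np.pi * f * t) for f in (261.6, 329.6, 392.0))
processed = feedback_delay(pitch_down(chord, -12), sr)   # down an octave, long tail
print(len(chord), len(processed))
```

The processed result is roughly twice as long as the source before the tail is even added, which is why this kind of treatment turns a single chord into an atmospheric bed.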

The session at Fantasy was interesting. I recorded a ton of different building blocks that I wanted to use to help me build cues, but I didn't really record any finished cues at this session. For example, I recorded many different effects such as risers, creepy tremolos, scrapes, chord sequences and many different articulations. I was a bit nervous because I had no idea how it was going to unfold, but it really turned out great! This session turned out to be an essential resource.

Here is a brief video of the session...

Fantasy Session

With this project as with other projects I’ve worked on, I find the process of creating source to be invaluable. This gave me access to a deep pool of building blocks unique to this project as I moved through the development timeline.
 

Music Sketches and Concepts

After a few weeks of creating source material (and while continuing to create more), I also began to create sketches.

Here, I was trying out and developing ideas. Some of this was purely for the purpose of concept and some of it we dropped into the game to get a feel for how it worked against gameplay.

We also began talking about how the music system would work in the game. The project was using Wwise, and the music system was conceived by the audio lead based on game designer input.

I began creating compositions with that system in mind. We then refined the music system over a period of a month or two.

The sketch and concept phase was still an exploration phase. I was trying out different approaches, which also meant I wrote a lot of music that I ended up throwing away. I probably wrote over 30 minutes of music that didn’t end up being used. This sounds like a lot, but to me it is an essential part of giving the project a unique voice.

My feeling is...
If you're not throwing material away, you aren't doing it right.

Finding a voice for the game is an iterative process of discovery.

Theme Development

As mentioned earlier, when it came to themes, I did not focus on creating them in the beginning and instead focused on creating the musical "sound" of the game.

As I was going through the "source" and "sketch" process (as described above), I continued to absorb the story, the characters' motives and any other information that might influence the tone and direction of the soundtrack.

After being immersed in all that, it took less effort to develop the key motifs for the game.

Here are a few examples along with background info for certain themes:

MAIN THEME—With the main theme, I wanted to capture the 4 emotional elements that the overall soundtrack needed: mystery, beauty, tragedy and horror. I began with a simple chord structure (Gm - D, Fm - C). And then I came up with a very simple, but sing-able, melody on top of that.
Also, the “B” section of the theme is what I consider the “sub-theme,” and we used it as a vehicle to further the story as the main character finds journals left by other characters in the game.

GREAT ONE THEME—There are 4 key characters in the game. One of them is called “The Great One.” I don’t want to give details that would spoil the game for those who haven’t played it, but all I’ll say is that The Great One is a darker force in the game that becomes a greater focus as the player journeys further into the story.

Here is the core motif and after that an example of how it was incorporated into a full cue:

GIANT (BOSS) CREATURE—I have a friend, Sergey Phomin, who is a talented sound recordist based in Minsk, Belarus. A couple of years ago he recorded some alphorns at the Museum Complex of Ancient Crafts and Technologies in Belarus. He played them for me before I began work on Edge of Nowhere, and they were so awesome that I was looking for an opportunity to use them. That opportunity came with the Giant Creature boss battle.

Game Development Phase

With some of my themes in hand along with all my source material, I was ready for production and for working within the overall game development pipeline.

Since I had already worked with the Insomniac team, we had a high level of rapport along with a process we had refined on past games which saved us a lot of time. In general, our process worked along these lines:
 

1. The Insomniac team would send me a video capture of a level.

2. I would do a spotting session where I would identify the music’s flow through the level. The audio lead would give feedback on that. Then, we’d create a spotting sheet so I and the Insomniac team were aligned on what music was to be written and where it would play in the game.

3. I’d compose / produce the music, usually delivering any cues I created at the end of each week. The audio lead would then review and give feedback where needed, and I’d iterate based on that feedback. Once we were both happy with the cue...

4. I’d then implement that music into Wwise and they would do in-game integration.

5. Sometimes music would need to be revised once we heard it in game and so I would make my revisions and re-implement in Wwise.

Live Orchestra

Insomniac gave me a "live" budget, which allowed me to record a 52-piece group in Nashville. We did a day-long session, and I got about 25 - 30 minutes of music out of it. I recorded all cues in separate passes, with strings recorded in one pass and brass in another. That, along with the fact that we recorded early enough in the development process, allowed me to re-edit this material at the track / stem level and create whole new cues. So, I was able to get a lot of mileage out of those recordings.
 

Music System

The in-game music system accommodated a score that reacted to player actions, game state and story.

The basis of this system was the Combat area system, which scaled across 4 intensity levels:

A. Safe. When player was "safe" from danger.
B. Enemy Near. When an enemy was near, but not close enough for player to be detected.
C. Enemy Alerted. AI-driven enemies "detect" the player within their vicinity.
D. Conflict / Combat. Player and enemies in direct conflict.

The music system also allowed for special, contextually driven cues. These cues were normally used to support the story or a special area in the game that required a different treatment. And of course, there were opportunities to score in-game cutscenes, and the music system accounted for those, too.
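The four intensity levels above amount to a small state machine driven by the threat picture around the player. Here is a plain-Python sketch of that selection logic; in practice this kind of state would drive a Wwise state group, and the distance thresholds here are assumed values, not Insomniac's actual tuning.

```python
from dataclasses import dataclass

# Assumed thresholds (game units); real values would be tuned in-engine.
NEAR_RADIUS = 30.0

@dataclass
class Enemy:
    distance: float      # distance from the player
    has_detected: bool   # AI has spotted the player
    in_combat: bool      # actively fighting the player

def combat_intensity(enemies: list[Enemy]) -> str:
    """Map the current threat picture to one of the four music intensity
    states, checked from most to least intense."""
    if any(e.in_combat for e in enemies):
        return "Conflict"
    if any(e.has_detected for e in enemies):
        return "EnemyAlerted"
    if any(e.distance <= NEAR_RADIUS for e in enemies):
        return "EnemyNear"
    return "Safe"

print(combat_intensity([]))                           # -> Safe
print(combat_intensity([Enemy(25.0, False, False)]))  # -> EnemyNear
print(combat_intensity([Enemy(8.0, True, False)]))    # -> EnemyAlerted
print(combat_intensity([Enemy(2.0, True, True)]))     # -> Conflict
```

Checking the most intense condition first means a single combatant overrides any number of merely nearby enemies, which matches how the score should escalate.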

Virtual Reality

While working on a VR title has a lot in common with working on non-VR titles, this new medium presents new challenges. Sometimes these aren’t foreseen until they are upon us.

One issue that came up was that the cutscene designers crafted cutscenes that catered to VR, but we didn’t yet have tools that could capture an encompassing 360-degree view. Cues needed to be synced to visual action, but if that action was happening outside the field of view of the capture, it was difficult to score. The team decided to send me captures of the same scene from different angles, so I had to wrangle multiple videos of the same scene just to score it.

Edge of Nowhere was one of the early major titles for the Oculus Rift launch. With VR still in its infancy, working in the medium was new, and we were still figuring out how best to create for it.

If this were a post mortem, I would be talking about 2 things: what went right and what could be done better next time. Many things went very well on the project and I’m happy with the results. But there were some things I would want to improve upon, and those mostly revolve around mixing for VR.

Here’s what I would do differently for a VR game next time:

A. 3D mixing. Spatialized mixes.

The game music was mixed in stereo. Stereo is beautiful and I love it, but being able to use not just the horizontal mix space but also the vertical mix space is more immersive and creates a more open mix. This leaves more room for sound design and dialog.

B. Move away from “headlocked” mixes.
This is an extension of the first idea (spatialized mixes).
This game, like most games, has music locked to the head. So, when a player’s head turns, the music turns with it.
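The difference between headlocked and world-locked music comes down to whether the panner sees the source angle in head space or world space. Below is an illustrative constant-power stereo pan that counter-rotates the source by the head yaw; this is a conceptual sketch, not how any particular VR audio SDK implements spatialization, and the angle convention (positive degrees to the player's right) is an assumption.

```python
import math

def world_locked_pan(source_azimuth_deg: float, head_yaw_deg: float):
    """Constant-power stereo pan for a world-locked source: the panner is fed
    the source azimuth *relative to the head*, so turning the head shifts the
    image instead of carrying it along (as a headlocked mix would)."""
    relative = math.radians(source_azimuth_deg - head_yaw_deg)
    pan = max(-1.0, min(1.0, math.sin(relative)))  # -1 = hard left, +1 = hard right
    angle = (pan + 1.0) * math.pi / 4.0            # map pan to 0..pi/2
    return math.cos(angle), math.sin(angle)        # (left gain, right gain)

# Source straight ahead; the player turns their head 90 degrees to the left,
# so the source should now sit on the player's right.
left, right = world_locked_pan(0.0, -90.0)
print(round(left, 3), round(right, 3))
```

A headlocked mix is the degenerate case where `head_yaw_deg` is always treated as zero: the image never moves, no matter where the player looks.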

C. Move away from “frontal” mixing? Worth exploring!

These days we tend to mix with more emphasis on the front. Think about it: mixers put most of their work into the left and right speakers. That has made sense so far because that’s where our screens are. But what about games where the player can move in any direction? Do we need to do this? I don’t know. I’ve heard this done in the game Farlands and I thought it worked great. I think it’s worth considering.

Final Thoughts

With its story-driven world, Edge of Nowhere was a joy to work on. Being involved in this project was incredibly rewarding, and it produced one of the most creative soundtracks I've written. Thanks to the Insomniac team for having me aboard once again.