Develop Brighton and 30 min task experiment

So last week I had a blast of a time, it really has been great. I got to attend the Develop conference in Brighton, UK for the third time in a row, which is always an amazing few days, and I got to meet so many different people from all areas of the games industry.

Develop Conference

This year I attended the audio track at Develop, which had an amazing line-up of speakers and topics; I must admit I don't think I've seen a better line-up anywhere else. So here's a rundown of the day, with a rough overview of each of the talks.

Until Dawn – Linear Learnings For Improved Interactive Nuance – Barney Pratt

Barney Pratt, Audio Director at Supermassive, showcased the various audio methods and techniques used to build horror and tension, as well as a movie-style audio palette, within Until Dawn, such as using motifs within the music to signal danger and express the relationships between characters.

Barney also showed how they used foley to reflect the characters' feelings, from sharper, quicker sounds to enhance the feeling of agitation, to more mellow sounds expressing a character's calmness.

One of the most interesting features shown was within the dialogue system. Traditional panning wasn't working: in cut scenes, the jumps between shots were too harsh, with dialogue leaping from left to right. Centre panning, as in traditional film, wasn't the answer either, as Barney said it felt lifeless within the realms of game audio. So the system they created was a combination of the two, with a difference: the panning was halved, so if a sound would normally sit 100% to the right it would only be panned 50%, and the results were great to hear.
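To make the idea concrete, here's a minimal C++ sketch of the halved panning as I understood it: the pan position coming from the shot is simply scaled by 0.5 before a standard equal-power law is applied. The function name and the -1 to 1 pan range are my own assumptions, not Supermassive's actual code.

```cpp
#include <cmath>

struct StereoGains { float left; float right; };

// Hypothetical helper: pan comes in as -1 (hard left) .. +1 (hard right).
StereoGains HalvedConstantPowerPan(float pan)
{
    const float kPi    = 3.14159265f;
    const float halved = pan * 0.5f;                    // a "100% right" cue only goes 50%
    const float angle  = (halved + 1.0f) * 0.25f * kPi; // -1..+1 maps to 0..pi/2
    return { std::cos(angle), std::sin(angle) };        // equal-power left/right gains
}
```

With this, a fully right-panned line still keeps an audible amount in the left channel, which is roughly the compromise between shot-accurate panning and lifeless centre panning that Barney described.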

VR Round Table – Barney Pratt, Todd Baker, Matt Simmonds

The whole talk gave a great insight into some of the current methods and techniques, as well as the current problems. However, it looked more towards the techniques used to create the assets rather than the technical side of HRTFs and the like.

One of the major things mentioned was to use quad stems for ambience rather than stereo ones, as stereo stems don't allow the player to be placed inside the audio field, only in front of or behind it, much like the speakers on a TV. Along with this, the guys also mentioned that quads can use quite different content to the north/south and east/west of the player.
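For anyone wondering what that looks like in practice, here's a rough sketch of a quad ambience bed set up as four emitters parented to the listener, with different content on the front/back pair versus the left/right pair. This is my own illustration rather than anything shown in the talk, and all the names and assets are invented placeholders.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Invented placeholder: an ambience emitter locked to the listener's head.
struct AmbienceEmitter {
    Vec3        offset;     // position relative to the listener
    const char* loopAsset;  // looping ambience file for this corner
};

// Four corners of the quad bed; the front/back pair can carry different
// material to the left/right pair, which a single stereo stem cannot.
std::array<AmbienceEmitter, 4> MakeQuadAmbienceBed()
{
    return {{
        { {  0.0f, 0.0f,  1.0f }, "amb_front.wav" },
        { {  0.0f, 0.0f, -1.0f }, "amb_back.wav"  },
        { { -1.0f, 0.0f,  0.0f }, "amb_left.wav"  },
        { {  1.0f, 0.0f,  0.0f }, "amb_right.wav" },
    }};
}
```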

Another issue was the placement of music within a VR game. Unlike a traditional game, in which most music is played directly to the player, this doesn't work in VR and often takes the player out of the experience. Instead, it was said that music needs to come from real-world sources, giving the player some context for where the music is coming from rather than it simply playing at random. However, this is mostly dependent on the style and content of the game; in Todd's game a lot of the content was musical in nature and style, so real-world sources weren't needed in that case.

All in all, the end message was to do whatever is needed to create the desired experience; there are no set rules, only things to be aware of when creating.

Creating New Sonics for Quantum Break – Richard Lapington

This talk was fascinating. I'd already heard a lot about the game with regards to the music system and how in sync it was with the gameplay. However, it turned out to be much more than plain syncing: Richard talked through and showed various systems where not only was the music in sync, but the visuals were in sync with the SFX too.

This is such a nice thing to see: all areas of the team essentially in sync when creating the game, with no area left in the dark, as happens with too many games these days.

They had built custom plugins to pass data between Wwise and the game engine, so that data from the audio could manipulate, or be manipulated by, the visual effects happening in game. This whole project would have been a dream for any audio person: audio playing such a vital role within a game.
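As a rough illustration of the kind of bridge they described, here's a small C++ sketch of a per-frame update where an audio bus level drives a visual parameter, and a gameplay value would go the other way as a game parameter/RTPC. Everything here (BusMeter, VisualEffect, the update function) is an invented placeholder for the idea, not Remedy's actual plugin code.

```cpp
#include <algorithm>
#include <cmath>

// Invented placeholder: a loudness value for an audio bus, e.g. fed
// each frame by a middleware metering callback.
struct BusMeter {
    float rms      = 0.0f;  // latest RMS level, 0..1
    float smoothed = 0.0f;  // one-pole smoothed copy, friendlier for visuals

    float Update(float dt) {
        const float k = 1.0f - std::exp(-dt * 10.0f);  // ~100 ms time constant
        smoothed += (rms - smoothed) * k;
        return smoothed;
    }
};

// Invented placeholder for a particle/shader parameter in the engine.
struct VisualEffect {
    float intensity = 0.0f;
    void SetIntensity(float v) { intensity = std::clamp(v, 0.0f, 1.0f); }
};

// Per-frame bridge: the audio level drives a visual parameter, and a
// gameplay value (say, a "time distortion" amount) would be sent back
// to the audio engine as an RTPC. The actual middleware call is omitted
// because I don't know what their plugins expose.
void UpdateAudioVisualBridge(BusMeter& meter, VisualEffect& fx,
                             float timeDistortion, float dt)
{
    fx.SetIntensity(meter.Update(dt));  // audio -> visuals
    // game -> audio would go here, e.g. setting an RTPC to timeDistortion.
    (void)timeDistortion;               // placeholder, no real call made
}
```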

However, the whole process didn't come without its troubles. The team went through a lot to discover what "time" sounded like, producing many different versions of it, which resulted in a confused idea of what time was. Richard then mentioned that the team eventually decided to run 30-minute break-out sessions, in which they had 30 minutes to create a sound of time; they went through many iterations before finding "the sound". From there they used it as a solid template, were able to create more assets, and had a clear vision of what time sounded like in their game.

These 30-minute break-out sessions gave me a great idea for furthering my own skills in game audio, which I'll explain towards the end of this blog piece.

The Freelance Dance – Kenny Baker, Chris Sweetman, Rebecca Parnell, Todd Baker

This is the talk I felt I needed to hear, as I've only recently gone freelance myself, and it was really nice to hear what people had gone through to get where they are, and how they operate on a daily basis as well as a project-to-project basis.

One thing that came up that put a smile on everyone's faces is that all the guests on the panel mentioned that, slowly but surely, audio is being held in much higher regard and is being thought about earlier and earlier in the development cycle rather than becoming an afterthought.

Audio is now becoming a focus: developers want good quality audio, and with the birth of VR everyone is pointing out how important audio is to the user experience. They're now starting to look at non-VR projects and wanting the best audio for those too.

One thing that was noticeable, however, is that all of the participants on the panel came from some sort of industry background rather than going straight into freelancing; it would be nice to know whether anyone out there has made it that way.

Assassin's Creed Syndicate: Sonic Navigation & Identity In Victorian London – Lydia Andrew

I'm a great fan of this series, and have been since the very first release, so needless to say I was eagerly awaiting this talk and it didn't disappoint.

Lydia talked in detail about how they looked through all the boroughs of London, trying to create a sonic quality to match the visuals. For each borough they created and recorded everything from the different classes of people, focusing on their accents and what they talked about, to the details of the various carriages that patrolled the area, making sure that a player could stand in an area of the game, close their eyes, and pinpoint exactly where they were in London.

Another great area she talked about was how, back in Victorian England, song was a huge part of the culture. They set about looking through history books, and even got an expert on the case, to identify popular songs that NPCs could actually sing in game. Not only this, they decided on a single song to use as almost a motif running throughout the entire soundtrack.

Dialogue Masterclass – Getting The Best From Voice Actors For Games – Mark Estdale

Mark's talk went through the various methods he uses, and has developed over the years, to get a truly honest reaction out of a performer. He mentioned how he doesn't give the actor a script beforehand and instead tries to put the actor in the frame of mind of the character they are performing.

Giving the actor visuals of their character and the place in which they are performing, along with any other audio cues, helps immerse the actor in the game and makes them feel like part of the world, so that when the actor has to react, the performance is truthful rather than a rehearsed playback.

Another small detail: rather than putting the actor in front of a standalone microphone, say a U87, which means they have to stand in a fixed position, Mark instead uses a headset mic attached to the actor. This means the actor is free to move around the room and express themselves rather than having to be aware of the pickup pattern of the microphone. After working with Mark for a few years, I can honestly say the results are amazing.

Stay On Target – The Sound Of Star Wars: Battlefront – David Jegutidse, Martin Woehrer

Now, I believe this was the talk everyone in the room was waiting for; it's not every day you get to peer behind the curtain of one of the world's most beloved franchises, Star Wars. The guys talked about how much background research they had to do, even taking a trip to Skywalker Ranch, where they had a chance to discuss how various audio elements were constructed. One thing that was great to hear is that they wanted to keep the sound design as organic as possible, using real-world sounds to create something else, much like what was done in the original movies.

They went into some great detail about how they used actual elements from the film soundtracks, which they then had to clean up and modify to meet the audio standards today's audiences expect. They also used various methods to deconstruct elements for which there were only one or very few instances in the actual films.

They then went on to showcase some of their work, pinpointing the various elements used in each of the sounds. One thing that was great to see and hear was that they had actual in-game footage with the other audio elements taken out, so that we could hear the soloed elements.

30 min experiment

As mentioned in the Quantum Break talk, they had 30-minute break-out sessions to create a particular sound. I'm going to do a similar thing. One area I am not the best at is synthesis; I tend to take a more organic approach to my own sound design, and while I can create various sounds using synths, I still find it difficult. So from now on I'm going to set aside 30 minutes a day to really delve into various synths, research to better my own understanding, and attempt to create a weekly patch (that hopefully, if all goes well, I'll release for people to download). Each week I'll choose a style from basic keywords or from the category lists featured in the given synth I'll be using.

I'll then write up a weekly blog post explaining my thought process and giving a guide to what I did within the patch.
