Do composers/producers need to be able to mix surround?

My 2 cents (PERSONAL OPINION BELOW):
Forget about baked formats (e.g. 5.1, etc.): they do not represent the current state of the technology. The whole technological trend of our era is moving towards personalised experiences that adapt to the consumer's needs, not vice-versa (consumers having to adapt to what they are given).

For this reason, skip working directly in 5.1 (I would personally even skip Stereo) and work directly in Atmos. Even if you are mixing for Stereo, build your project in Atmos and simply use the Renderer in 2.0 mode (NOT Binaural: never use Binaural if you are working with monitors. You CAN use Binaural to check on your headphones IF YOU WANT, but stick to 2.0 to work).

Later, give the producer whatever they ask you (stems, tracks, Atmos projects, whatever), they'll know what to do.

The reason for doing this is that you'll be working with objects, even if in 2D, even if you are not going to deliver a 3D mix, even if you don't have an immersive setup: you will be working in something that can later easily be pushed to other dimensions.

From Atmos you can easily go "down" to 5.1 or Stereo. From Stereo you can hardly go "up". 99% of people who complain about Atmos complain about "that album that was recorded and mixed for stereo and later released in Atmos".
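The "down" direction is well defined because channel-based downmixes follow standard coefficient sets. As a rough illustration only (this is not the Dolby Renderer's actual algorithm; real tools apply their own coefficients, limiting, and LFE handling), a conventional ITU-R BS.775-style Lo/Ro fold-down from 5.1 to stereo looks like this:

```python
import math

# -3 dB gains for centre and surrounds, per the common ITU-R BS.775
# Lo/Ro convention. Real renderers may use different values.
C_GAIN = 1.0 / math.sqrt(2)
S_GAIN = 1.0 / math.sqrt(2)

def downmix_51_to_stereo(frame):
    """Fold one 5.1 sample frame (L, R, C, LFE, Ls, Rs) down to (Lo, Ro).

    The LFE channel is conventionally discarded in an Lo/Ro downmix.
    """
    L, R, C, LFE, Ls, Rs = frame
    lo = L + C_GAIN * C + S_GAIN * Ls
    ro = R + C_GAIN * C + S_GAIN * Rs
    return lo, ro
```

The point is that the mapping down is a simple, lossless-to-define sum; there is no comparable formula that recovers six discrete channels (let alone objects) from two.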

The above is a highly controversial OPINION, and it is completely fine if you disagree :)
 
99% of people who complain about Atmos, complain about "that album that was recorded for and mixed for stereo and later was released in Atmos"
I've always asked myself whether the Atmos haters have ever heard a good Atmos release on appropriate listening equipment. Some of them, for example, used in-ears to test the experience … LOL
 
Context is important here (I speak as someone who's often mixing soundtracks in surround and music in Atmos).

If you're working on your own artist releases, or making music to please yourself and you're not answerable to anyone else, then absolutely compose and produce in Atmos and have fun exploring the possibilities.

If you're a composer writing music for media, though, the chances are it's going to pass to another audio person before it goes out into the world, and at that point an object-based deliverable is a massive PITA in most scenarios.

Everyone will want stems, and everyone's on a deadline. If you have to get an episode of TV out the door each week, or export 50 cues for a feature film, it's a non-starter to sit and manually print stems through the renderer in real time for everything that needs separation, then bounce out a channel-based downmix of it.

The other factor in this scenario is that it's not really your sonic playground. It's nice to have some rough context while writing and to offer options, but ultimately it's the music editor, dubbing mixer, game sound designer etc., who have responsibility for blending everything together and getting it out the door, and your carefully written object metadata is unlikely to survive the process.

All that being said, properly executed (and monitored) Atmos is a beautiful thing, it just needs the right time and the right place.
 
I was working with a producer/mixer friend last week, and he was talking about commercial releases in surround earning higher royalties from Apple. But to make a release eligible he'd have to invest $50-100k in an Atmos rig, and even then the artists would only get about 10% more in Spotify-style royalties. It doesn't make financial sense at this point, though Atmos does seem to be winning the latest Betamax/VHS format battle.

I work in theatre, where mixing to stems slows the process down too much, but it's not too big a deal to set DP up to bounce all stems in one go if the sessions are set up properly first.

Makes total sense to deliver stems that are fully balanced for the mix but kept separate, just in case.
 