How many times have you been at a live event and the audio didn’t sound right? More than once, I’m sure. And I’ll bet the problem had to do with balance. Balance is all about relationships. Hang with me, guys, because I’m going to ask you to commit!
Mix relationships are the key to good mixes. It’s not about the type of audio console or which plug-ins you have, though those can help. Before a good studio or live engineer ever thinks about grabbing a plug-in or slamming down some reverb, they get their relationships right.
There are three areas of mix relationships:
1. Technical relationships.
This is the stuff you get from the stage and the stuff you mix. It’s really about four properties of sound and how each mix channel uses them for differentiation and for constructing those relationships. These properties are:
- Volume
- Stereo placement (panning)
- Frequency content
- Effects
Channels have to be balanced so they each sit in the proper place in the mix, and that’s done by altering or adding (in the case of effects) these properties.
2. Vocal relationships.
Whenever there’s more than one singer, the vocalists need to be balanced in some way. They might need to be blended if they sing together, or it might be a duet where each can stand out. And let’s not forget all the work on backing vocalists, blending them together while supporting the lead. Vocal relationships are so important.
3. Instrument relationships.
Let’s think about the main areas of a song: the lead instrument, solo instrument, rhythm instrument, percussion, and the song hook. The hook is that repeated note or musical phrase that often defines the song. In the first four areas, a small band might have the same instrument occupying more than one position. But the rhythm guitar can’t be so loud that it washes out the keyboardist who is playing the song hook.
The instruments have to be in the right relationship with each other. Maybe the bass is out front in this song, but in the next, it’s tucked in the back.
How to Create Balance
Now let’s talk about how to create that balance. Some people can hear balance just by thinking about it. That’s cool. Other people need a little more help, so let’s consider the 3D model. Some digital mixers (and soon an IEM system) use a stage display on their screen to aid with this – the closer a channel’s icon is to the front of the screen, the louder it is, and its location on the stage sets the panning.
The 3D model works like this: pick up two objects, each representing a sound source. Let’s say you have a pen and a pencil. The pen is the lead vocal and the pencil is the acoustic guitar. Then follow these steps:
1. Place the pen and pencil next to each other, at arm’s length away from you. That’s both sounds at the same volume, no panning.
2. Move the pen (vocal) closer to your face. That’s the vocal volume increasing.
3. Move the pencil (guitar) a foot off axis, to the right. Now the guitar is panned slightly right.
That’s the basic idea of audio in a 3-dimensional mix field. Wherever the object is placed, that’s where it would sit in the mix. But wait – we’ve covered forward/backward and left/right, but what about up and down?
Use up and down to think about frequency dominance. In this model, the bass guitar sits down low and an instrument like the acoustic guitar sits higher up (higher frequencies). But how exactly do we get these different up/down, left/right, and front/back separations?
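If it helps to see the model spelled out, here’s a minimal sketch of the 3D mix field as a data structure. This is purely illustrative – the `Channel` class and its field names are my own invention, not any console’s API:

```python
from dataclasses import dataclass

@dataclass
class Channel:
    """One sound source placed in the 3D mix field (illustrative only)."""
    name: str
    depth: float   # front/back: 0.0 = far back (quiet), 1.0 = up front (loud)
    pan: float     # left/right: -1.0 = hard left, 0.0 = center, +1.0 = hard right
    height: float  # up/down: rough fundamental frequency in Hz

# The vocal sits closest to the listener, centered; the guitar is slightly
# right and a step back; the bass is centered and low on the height axis.
lead_vocal = Channel("lead vocal", depth=0.9, pan=0.0, height=250.0)
acoustic   = Channel("acoustic guitar", depth=0.7, pan=0.2, height=440.0)
bass       = Channel("bass guitar", depth=0.6, pan=0.0, height=80.0)

for ch in (lead_vocal, acoustic, bass):
    print(f"{ch.name}: depth={ch.depth}, pan={ch.pan:+.1f}, height={ch.height} Hz")
```

Every mixing move in the rest of this article is just nudging a channel along one of those three axes.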
A good relationship has to allow for individuality and that’s gained through separation.
How to Create Channel Separation for Balance
Remember those four technical relationships? Here’s where they come into play.
1. Volume separation.
After I set my gain, I move to volume balancing. I’ve seen this accomplished a few ways. A friend of mine runs all his faders hot and then pulls them back to where they should be. I go the other route. I start them low and then bring them up, always building: drums, then bass, then guitars, until my vocals are on top. Either way, what’s important is that the first step in mixing is getting a general volume balance before doing anything else.
I will note that if there’s a channel that needs a healthy amount of compression, I’ll add that at this time. So at this point, the volume balance is good, but I will have to revisit it, as modifying EQ and adding effects can throw these volumes out of whack a little.
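That “healthy amount of compression” comes down to simple math: above a threshold, the output level rises more slowly than the input. Here’s a toy hard-knee gain calculation – the function and numbers are mine, not any console’s actual algorithm:

```python
def compress_db(level_db: float, threshold_db: float, ratio: float) -> float:
    """Output level in dB for a simple hard-knee compressor.

    Below the threshold the signal passes untouched; above it, every
    `ratio` dB of input yields only 1 dB of output above the threshold.
    """
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A vocal peak at -6 dB, threshold at -18 dB, 4:1 ratio:
# 12 dB over the threshold becomes 3 dB over, so the output is -15 dB.
print(compress_db(-6.0, -18.0, 4.0))  # -15.0
```

That’s why compressing a channel changes its place in the volume balance: the loud moments get pulled down, so the fader usually has to come up to compensate.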
2. Spatial separation.
I noted in a recent article, Improve Recordings with these Panning Tips, how panning can help a recording or live stream. See that article if you’re running a mono house mix but sending a stereo mix for recording/live stream. With a house stereo mix, this can be tricky depending on the layout of the room. The wider the room, the more stereo separation you can get. And while your FOH location might be dead center, if you pan an instrument too far to one side, the people on the other side of the room might not hear it at all.
In general, the kick drum, bass, and lead vocals would be centered. The kick and bass are omnidirectional sounds anyway. Apply slight left and right panning to the toms (if there are two toms, pan one slightly left and one slightly right). Instruments can be panned a bit farther out, toward the side of the stage where they’re located. The goal in panning isn’t to create an amazing stereo mix – save those for listening to music on headphones. What you want is enough panning to provide spatial separation and thus added clarity.
Is panning necessary for a great mix? No. Does it help? It can when done well…or well done, like a steak…but I don’t like my steaks well done…I digress.
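Under the hood, a pan pot is just a pair of gains. A common approach is the constant-power (sin/cos) pan law, which keeps perceived loudness steady as a source moves across the stereo field. Here’s a small sketch – the function name is my own:

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """Return (left_gain, right_gain) for pan in [-1.0, 1.0].

    Uses the constant-power law: left**2 + right**2 == 1 at every
    position, so the total power stays the same as the source moves.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = constant_power_pan(0.0)  # centered: both sides ~0.707 (about -3 dB)
hard_left = constant_power_pan(-1.0)   # all signal to the left channel
```

The point of the -3 dB center is exactly the balance issue above: a naive “half volume per side” law would make sources dip in level as they pass through the center.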
3. Frequency separation.
Now it’s time for the EQ work. And now is when you’ll get a few cliché statements. Clean up the audio first. Cut narrow, boost wide. Cut before boosting. Never let them see you sweat. The suit makes the man. Wait…perhaps a little too cliché.
In channel separation, be it vocals or instruments, you do want to clean up the channels first but here’s another huge tip – don’t try to make a channel sound like something it isn’t – unless that’s done for effect. For example, the acoustic guitar covers a wide frequency range but it’s not there to be a low end instrument. By excessively boosting the lows, you don’t get more bass, you get a muddy mess.
As I heard Andrew Stone once say, you want the instrument to own its frequencies. Every vocal and instrument has a specific fundamental frequency range. These are the core frequencies the instrument or vocal produces, and you don’t want two instruments competing for the same frequency space. Go back to the 3D model idea and imagine a kick drum and a bass at the same point in space. Which one stands out? Neither.
Frequency separation takes a few passes when mixing. You can’t start at one end of the console and work your way to the other. There are a few methodologies that can reduce the amount of bouncing around (trying to get everything to sit in the right frequency space). The first is to start with the low end and work up to the vocals, just like with volume balancing. This process helps create a clean low end, but when you get up to the vocals, you might have to go back to other channels to carve out the frequency space the vocals need.
The second method is to start with the vocals first. Get those sounding good and then work your way down. This means everything you do keeps those vocals out front. But in either case, you’ll still need to bounce around a little, that’s just how mixing goes…and I love it! Of course, during all this mixing, you might warm up a vocal or decrease an overly bright guitar. You know what’s needed. If not, listen to reference recordings to get an idea of your mix goals.
Let me just go back to something I said a little bit ago, “don’t try to make a channel sound like something it isn’t – unless that’s done for effect.” I’ve had times where I cut the lows and highs from a lead vocal and it created a cool vocal effect that fit the song – no effects processing, just by using EQ adjustments. But if you’ve got a singer with very few highs in their voice, don’t try boosting the highs to make up for that. Clear a path for their natural voice and let it stand out for what it is.
Do note that large EQ changes will change the volume of the channel, so don’t be surprised if you need to modify the volume.
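To put rough numbers on that: decibels are logarithmic, so a boost that looks modest on the EQ screen is a large amplitude change. A quick back-of-the-envelope conversion:

```python
def db_to_amplitude(db: float) -> float:
    """Convert a gain change in dB to a linear amplitude factor."""
    return 10 ** (db / 20.0)

# A +6 dB boost roughly doubles the amplitude in that frequency range,
# while a -6 dB cut roughly halves it -- enough to shift the channel's
# overall level and force a fader touch-up.
print(round(db_to_amplitude(6.0), 2))   # 2.0
print(round(db_to_amplitude(-6.0), 2))  # 0.5
```

So a few generous boosts across a channel’s EQ can quietly undo the volume balance you set in step one.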
4. Effects separation.
Imagine that 3D mix model with all your vocalists and instruments at different points in space. Another way to move them from front to back and up and down is with effects. For example, reverb on backing vocals can push them farther back in the mix, which can be what you want. But if the reverb is on the lead vocal, it could push the vocal farther back in the mix and thus require a volume bump.
Talking about effects separation can be tricky because of the types of effects and how they’re used. For example, if I add a bit of distortion to an acoustic guitar (yes, I have, and trust me, I had good reason), that’s going to add to the low-mids. So I might have to alter the channel EQ to compensate.
In digital consoles, not only do you have access to a lot of effects, you can also access a lot of extra effects parameters. For example, you can have reverb but also control the frequency range it affects.
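The idea of an effect pushing a source forward or back can be sketched as a simple wet/dry blend – the more “wet” reverb a listener hears, the farther away the source reads. This is a toy illustration, not any console’s actual signal flow:

```python
def wet_dry_mix(dry: float, wet: float, mix: float) -> float:
    """Blend a dry sample with its effected (wet) version.

    mix = 0.0 -> all dry (source sounds up front)
    mix = 1.0 -> all wet (source sits far back in the effect)
    """
    return dry * (1.0 - mix) + wet * mix

dry_sample = 0.8  # the untouched vocal sample
wet_sample = 0.3  # the same sample after the reverb

backing_vocal = wet_dry_mix(dry_sample, wet_sample, mix=0.4)  # pushed back
lead_vocal    = wet_dry_mix(dry_sample, wet_sample, mix=0.1)  # mostly dry, up front
```

Those extra parameters on a digital console – reverb EQ, pre-delay, and the like – are just finer-grained knobs on the same front-to-back placement.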
In summary, the majority of mix problems I hear are the result of bad relationships. By understanding the three areas of mix relationships and the four ways to put channels into the right relationship, you’ll be able to create a rock-solid music mix.
The Next Steps
One of the key steps in creating a balanced mix is getting vocals to stand out – and sound good. So if you’d love to learn more about that, check out…
Thoughts? Questions? Comments?