Honestly, I just don’t get it. I’m at a church or event and the lead vocal is buried, or it’s over-the-top loud, or the lead instrument is sitting so far back in the mix that the musician could walk off the stage and no one would notice. What’s more, these techs think they’ve got the mix right – I can tell because they’ve stopped mixing and look happy!
Let me be very clear: I love working with techs who want help. I can see their effort when they mix. They’re working hard to figure out how to create a great mix – or at least a better mix than what they’ve been doing. These aren’t the people I’m talking about. But it does lead me into today’s post: what to listen for in your mix and how to make it better.
Where to Start
I’m going to make an assumption, as much as I hate to, but here goes: you already know how the song should sound. If you’re not sure what I’m talking about, check out this article:
So, it’s practice time and the band is into their first song, but it’s not sounding right. Let’s get into what to listen for.
The other day, I caught these words from a studio engineer on how he starts his mix process:
“I’ll usually be throwing things up very quickly, almost like doing a quick monitor mix, and balance everything very quickly, just to see how all the elements are supposed to sound together, and I have a basic feeling of the entire track.” – Andy Wallace
I start with my channel volumes low and then bring them up to support my vocal. If you’re in a small room with an acoustic drum kit, then that kit volume is going to define how loud the vocals need to be to rise above everything else.
Imagine a song in layers with the top layer being the lead vocal. Everything else needs to fall under that. You’ll likely have the lead instrument just under that. Then we get into the supporting instruments and backing vocals.
If the volume balance is off, EQ isn’t going to help one bit.
It would be so easy to list off a bunch of instruments with key frequency areas for cuts and boosts, but that’s where too many of us, myself included, get off track.
Each instrument has a role in a song and that role drives what we do with the instrument. Take the acoustic guitar, an instrument that can produce a wide range of frequencies. Do you take all the frequencies or do you only emphasize a portion?
For a single worship leader and a guitar, you take everything. However, as the band adds instruments, you need to rethink this approach. Even with a few instruments, you can let the acoustic guitar ring out. But what about a full band with keyboards and drums and bass, etc.? Time to re-examine the role.
In the case of a full band where the acoustic guitar is playing rhythm and is the instrument of focus, it’s best to cut back on the lows and emphasize the mids and above so it stands out. But such a solution isn’t enough.
Studio engineer Dave Pensado takes this idea further: consider how the rhythm instrument needs to sound. He asks himself, do I want the sound of the rhythm, or do I want the instrument itself to stand out, so the sound of the pick on the strings is more prevalent?
For every instrument, you have to decide how present it needs to be in the mix, and that comes from knowing its role in the song.
When you can identify the role of an instrument, you can work on EQ and whatever effects and processors would help. And the channels must work with each other – you can’t boost a frequency range on one instrument if it then covers up another instrument.
And this is where I notice mixes have problems. I’ve listened to church mixes where, when the tech let me take over, most of the time I only had to rebalance volumes and make some slight EQ changes.
How Does It Happen?
How do these bad mixes come about?
- A tone-deaf audio tech. (Don’t ask.)
- The tech doesn’t know how it’s supposed to sound.
- Mixing in isolation.
- Times change.
I’ve provided a link above to take care of number two, so let’s look at number three.
What happens is the tech solos the channel in the house, or via headphones, and mixes it without considering the other channels. The idea is to create perfect-sounding channels. But mixing doesn’t work that way – the channels have to be blended together.
Now to that final point: times change. I run audio for three services every weekend. Same songs and same band. But I still have to tweak the console from service to service and song to song, sometimes for EQ or effects, but usually for volume. (Not to mention the normal mix dynamics that go on during a song.)
At the end of practice, you don’t have the perfect mix for the service. You have something close that gives you a great place to start.
So the next time you’re mixing, listen for those three areas:
- Volume Balance
- Instrument Roles
- Channel Relationships
To really dig into mixing music, check out this article: