
Every setting…for a reason.
[Guest post by: Jose David Irizarry. Jose does a great job of capturing the work required for vocal mixing. I call this the “shorter” guide because if you’ve read my vocal mixing guide, you’ll know why. That being said, Jose brings in some great stuff not covered in the guide, such as mixing vocals like musical chords…read on to see what I mean.]
In general, bass and drums are the cornerstone of a musical theme in a band. Then guitars, keyboards and other instruments complement the harmonic setting of the musical arrangement. Finally, on top of all this, vocals are the crown jewel of the song. I’d like to direct focus to vocal mixing.
The Source: A Human Voice
The human voice is one of the most—if not the most—common sources of sound. It has a wide frequency range (75 Hz – 8 kHz, including harmonics), comparable to that of the piano. The human voice also has a wide dynamic range covering an astonishing 80 dB (40 – 120 dB SPL).
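To put that 80 dB span in perspective, a quick bit of arithmetic helps (a sketch in Python; the divide-by-20 convention applies because dB SPL measures pressure amplitude, and the function name is mine):

```python
def db_range_to_pressure_ratio(db_span):
    """Convert a dynamic-range span in dB SPL to a linear pressure ratio."""
    return 10 ** (db_span / 20)

# An 80 dB span (40 to 120 dB SPL) means the loudest sound carries
# 10,000 times the pressure amplitude of the quietest.
print(db_range_to_pressure_ratio(120 - 40))  # 10000.0
```

A 10,000:1 swing is what the compressor settings later in this article are there to tame.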
Basic (fundamental) voice ranges:
- Bass: 75 – 300 Hz
- Baritone: 100 – 400 Hz
- Tenor: 135 – 500 Hz
- Alto: 180 – 700 Hz
- Soprano: 250 – 1100 Hz
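Notice how much these ranges overlap, which is part of why mixed vocal groups end up competing for the same frequencies. A minimal sketch (the range values are copied from the list above; the helper function is my own illustration):

```python
# Fundamental voice ranges in Hz, as listed above.
VOICE_RANGES = {
    "Bass": (75, 300),
    "Baritone": (100, 400),
    "Tenor": (135, 500),
    "Alto": (180, 700),
    "Soprano": (250, 1100),
}

def voices_covering(freq_hz):
    """Return the voice types whose fundamental range contains freq_hz."""
    return [name for name, (lo, hi) in VOICE_RANGES.items()
            if lo <= freq_hz <= hi]

# At 280 Hz, every voice type in the table overlaps:
print(voices_covering(280))  # ['Bass', 'Baritone', 'Tenor', 'Alto', 'Soprano']
```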
You can expect a lot of variability and stability issues. A sung melody can cover much of this dynamic range within a single verse, which shows how much its intensity can vary.
Another attribute of the voice is its timbre or tonal color. This distinguishes one instrument from another, a guitar from a banjo, or a violin from a flute. Now, in many cases it’s quite difficult to distinguish one electric guitar from another, but the timbre of human voices distinguishes one singer from another singer.
Capturing the Vocal Source
To capture and reinforce vocals for live performances, the most foolproof technique is close mic’ing. You need to have, at the least, an idea of how the vocals sound (refer to my previous paragraph), as well as the microphone handling technique of the singer(s), in order to make a good decision on what mic(s) to choose from (assuming you have this luxury). Know your mics—know their characteristics, the types, the polar patterns, the strengths and weaknesses.
Some voices may take advantage of the proximity effect that certain mics provide to accentuate the low end of a weak voice, but sometimes a mic with a more open polar pattern is necessary to capture an excited, jumping singer. Using a single mic for two singers at the same time generally isn’t a good idea unless you’re using the right mic in the right setting.
And in general for live applications, the tighter the polar pattern, the better and cleaner the vocal pick-up will be. It’s advisable to use mics with a directional pattern as opposed to an omni-directional pattern.
Another tactic that can help is to train vocalists on the proper use/handling of mics as well as the various types. They should understand basic mic properties and polar patterns—this knowledge can help them do a better job.
In addition, it’s usually a good idea to use high-pass filters (roll-off at 70 – 100 Hz) on vocals. An HPF also helps eliminate background/stage rumble and mic-handling noise.
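The effect of such a roll-off can be sketched with a one-pole, RC-style high-pass filter (a simplified model, not any particular console’s filter; the cutoff and sample-rate values are illustrative):

```python
import math

def highpass(samples, fc=80.0, fs=48000.0):
    """One-pole high-pass: passes content above fc, bleeds off rumble below it."""
    rc = 1.0 / (2 * math.pi * fc)   # RC time constant for the chosen cutoff
    dt = 1.0 / fs                   # sample period
    a = rc / (rc + dt)              # smoothing coefficient
    out, y_prev, x_prev = [], 0.0, 0.0
    for x in samples:
        y = a * (y_prev + x - x_prev)
        out.append(y)
        y_prev, x_prev = y, x
    return out

# A constant offset (think sub-bass stage rumble) is driven toward zero,
# while high-frequency vocal content passes through nearly untouched.
dc = [1.0] * 4800  # 100 ms of DC at 48 kHz
print(abs(highpass(dc)[-1]) < 1e-3)  # True
```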
When mic’ing an ensemble or vocal group, it’s a good idea to use the same mic for all vocals. Remember that different mics have different polar patterns and frequency responses, and this can complicate the EQ of your stage monitors when trying to eliminate problematic frequencies.
Processing the Voice
The second stage after capturing the voice is a good preamp. If your console/mixer lacks this, acquire and insert a quality external preamp. There is a huge variety at different price points to choose from.
The preamp will define the quality of the signal being passed to the rest of the audio path. EQ for the voice can be tricky. My approach is to eliminate what is not needed, like a sculptor removing the unnecessary material from the stone to uncover a masterpiece. Keep the EQ as flat as possible, only eliminating those frequencies that cause trouble, particularly mid-lows that “opaque” (cloud) the voice.
Between 250 Hz – 1.5 kHz, a notch filter can help in reducing nasal resonances which, most times, are annoying. This frequency is different for each person, and is created by a combination of the natural resonance of the nasal cavities and the skull. In certain cases, a boost at 3 kHz can add clarity to the voice, making it more intelligible and/or helping it stand out (cut through) in the mix. A boost at 5 kHz can add brilliance, while a boost in the 8 kHz range adds “air” or high end to the source material.
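These boosts and cuts are typically peaking (bell) filters. Below is a sketch using the standard Robert Bristow-Johnson “Audio EQ Cookbook” peaking-EQ coefficients, a common formulation (not something the article prescribes); the Q, sample rate, and example frequencies are illustrative. A handy property of this filter is that the gain at the center frequency equals exactly the requested dB value:

```python
import cmath, math

def peaking_eq(f0, gain_db, q=1.4, fs=48000.0):
    """RBJ cookbook peaking-EQ biquad coefficients, normalized by a0."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude(b, a, f, fs=48000.0):
    """|H| of the biquad evaluated at frequency f."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# A 3 dB "clarity" boost at 3 kHz: the response at center is 10^(3/20).
b, a = peaking_eq(3000, 3.0)
print(round(magnitude(b, a, 3000), 3))  # 1.413
```

A negative `gain_db` (e.g. a narrow, high-Q cut around a nasal resonance) works the same way.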
Another frequently used treatment is compression. Unfortunately, a lot of folks don’t have the slightest idea of how to apply compression to a voice. To me, it’s both science and art, and took me years to really understand it. I continue to experiment and search for new approaches and techniques.
As with any other piece of gear, fully understand your compressor (read the manual, it was printed for a reason). Compressors have personality. A particular unit may work beautifully for one thing, such as drums, but may be terrible for voice. After identifying the right compressor for your needs, use it on all vocals (especially on the worship leader or lead singer).
Here are suggested values for common compression parameters:
- Ratio: Very strong and aggressive voice (4:1 to 6:1); all others (2:1 to 3:1)
- Attack: 20 ~ 60 ms
- Release: 300 ms ~ 1 sec
- Threshold: begin at the max and start reducing it until you get 3 – 5 dB of gain reduction during the louder passages.
Use the output compensation (makeup gain) button [if your mixer has this functionality]! It’s there for a reason (too). Use it to compensate for the gain reduction of the compression stage. Without this compensation, the gain reduction and pumping effect will be very noticeable to the ear.
Remember that what the compressor does is to reduce the dynamic range of a variable signal, confining it into a smaller range. The ratio and threshold define the upper limit of this range, and the output compensation defines the lower end of this range. Without it you’re only half-done. Properly used compression should not be noticed.
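The interaction of ratio, threshold, and output compensation described above can be sketched as a static gain curve (a simplified model of downward compression; real units add attack/release smoothing on top of this, and the parameter values are just examples):

```python
def compress_level(level_db, threshold_db=-20.0, ratio=3.0, makeup_db=0.0):
    """Static compressor curve: above threshold, output rises 1/ratio as fast."""
    if level_db > threshold_db:
        out = threshold_db + (level_db - threshold_db) / ratio
    else:
        out = level_db  # below threshold the signal passes unchanged
    return out + makeup_db

# A passage 15 dB over a -20 dB threshold at 3:1 is squeezed to 5 dB over,
# i.e. 10 dB of gain reduction:
print(compress_level(-5.0))                  # -15.0
# Makeup gain restores the lost level while the curve's shape stays put:
print(compress_level(-5.0, makeup_db=10.0))  # -5.0
```

Note how quiet passages (below threshold) are lifted by the full makeup gain, which is exactly how the dynamic range ends up confined into a smaller window.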
Mixing Vocals
A lone singer in a band is usually rather easy to mix—make sure the lyrics can be heard intelligibly without overpowering the band. Toss in background vocals (BGV) in addition to the lead and the mix can easily get out of control.
The EQ approach for BGV is different than for lead vocals. You can be more aggressive with BGV EQ to keep feedback or bleed under control (given the increased number of open mics) without hurting their overall presence.
Pay special attention to the lead vocal mic. The lead vocal or worship leader has to be clearly distinguished without overpowering the others. The lyrics, as well as any spoken words during the performance, need to be heard clearly by the audience. The BGV and/or choir level must fit in the whole mix.
A little background in music theory: chords. When there’s a lead singer and a BGV group, it’s likely that the BGV group is doing a two-voiced harmony. By adding the lead’s first voice (or melody), you now have a three-voiced chord for every note where the BGV sings. It’s like a piano where each finger playing a key represents a human voice. Fully defined chords are composed of at least three notes. Each note contributes to the quality of the chord (major, minor, seventh, ninth, augmented, suspended, etc.).
The melody is always the first voice (typically the sopranos in a choir setting). If one of these notes can’t be heard, then the intended quality of the chord is missed. Translating this into the mix, make sure that all voices are present. In live mixing, I’ve noticed that the BGV singing the first voice requires a little boost over the other vocals.
- Unison: one note (same frequency)
- Duo: two-voiced chord
- Chords (fully qualified): three- and four-voiced (and up) chords
- Modern voicing: makes some interesting alterations to the second and third voices (e.g., the third voice can be singing notes one octave up or down during the entire song or during some verses).
Constructing the Mix: Balance
A) Identify each voice’s voicing, especially the first voice or soprano. The harmony of each chord has to be heard (quality).
B) Pair equal voices (people singing exactly the same thing) and try to make them sound like one (level-wise).
C) Balance each group relative to other groups – consult the musical director or group leader to get feedback about the balance between the different voicings. Sometimes a background vocalist may sing the melody to support the lead singer.
D) The group with the first voice or sopranos should be a bit over the other groups (perceived level). If there is a leader or lead, then the leader has to be slightly on top. Compressors (properly used) are a huge help in placing the lead voice in the mix, and they also free you from having to “ride the faders” all the time. Another use for the compressors is that they help to maintain the harmony (i.e., the relative level between the different voicings). After the initial compressor setup, use the compressor output level and the faders to fine-tune the balance.
E) The perfect balance is reached when you can’t distinguish individual voices or singers (besides the leads). The entire choir or group sounds like one huge instrument or an organ.
F) Once your voices are mixed, use the console’s sub-groups to set the balance between the music and the vocals. In general, the vocals are set around the same level as the music. What makes the vocals stand out in the mix should be how well you planned and managed the frequency distribution of the band and the vocals. When “EQing” the band’s instruments remember to leave room (frequency wise) for the singers.
G) You should hear a “sound” that is proportional to the size of the choir or vocal group (i.e., if you have a 6-piece choir, it should sound like a 6-piece choir, not like a quartet, a trio or a duet).
Constructing the Mix: FX (Effects)
I have to confess that I’m a big fan of effects, but I can’t use them all the time, for various reasons. A mix must be artistic, and part of this is refraining from certain things we like and being sensitive to each song—and even to each performance of the same song.
Some songs may require long reverb, while other songs require nothing at all. Reverb and delay can be used subtly to provide depth to the vocals. In some particular (and extreme) cases, long delays can be used to create the illusion of a second BGV group repeating a small passage. Be creative and feel free to experiment. Not all songs require the same reverb or delay. As with any piece of gear, know your FX processor(s) and parameters.
Psycho-acoustic Phenomena
Perceived balance of vocals and the band—when mixing the same group performing a well-known song (to you) on a regular basis, you might unconsciously tend to set the vocals at too low a level (relative to the rest of the band), making it hard for the audience to hear them. Now, you may “hear” or perceive the lyrics clearly because you already know them and/or have heard them many times. However, this is not the case for the audience. (I’ve seen this psycho-acoustic phenomenon happen with professional shows and tours many times.)
Your ears are your most valuable possession. They are more important than your legs or arms (food for thought). So take good care of your ears by visiting a hearing specialist at least twice a year, and don’t ever introduce anything solid into your ear canal, not even to clean it (even a cotton-tipped swab can damage your ear drums).
Every time you hear a sound, your brain tries to decipher what it is and what it means. This means the structures of your brain dedicated to processing sound, communication, and speech are constantly working, even when you’re just sitting around the house doing nothing, or even sleeping. When working with audio in particular, this consumes energy, and eventually you get tired. Behold, you’ve reached the threshold for hearing fatigue. It happens even at lower sound pressure levels. A good way to prevent this fatigue is to take regular breaks when exposed to continuous music, say, every 30 minutes or so.
Flu, colds, allergies and congestion affect your perceived sound field and also the frequency response of your ears. Avoid mixing if you have one of these conditions.
And finally, don’t fly-by-wire: Use your EARS, not your eyes. Be prepared, plan ahead, and take notes. The goal is professionalism and excellence on your part.
Jose David Irizarry has been heavily involved in live sound for almost a decade, mixing Christian music and popular events, and he’s also currently the technical director at his church in Canovanas, Puerto Rico. He can be contacted at josedirizarry@gmail.com
One statement in point F reads: “When ‘EQing’ the band’s instruments remember to leave room (frequency wise) for the singers.” This I agree with.
With singers that use soundtracks, I consistently set the mid control on the tracks to 1K and dip it by 6 dB. It makes room for the voices without the tracks masking, and competing with, the vocals. For everyone who has read the previous sentence, try it sometime; it’s readily audible.
If you don’t use tracks, try putting all your music into a subgroup and making that EQ change to the group. (If you can. Typically, those of us mixing with analog mixers don’t have access to an extra sweepable EQ that can be put on a subgroup.) See if you like what it does for the vocals.
Quaid,
This is an area I wrestle with from our current crop of up and coming audio techs in my church. “Make sonic room in the spectrum for vocals” is very valid in concept but much more difficult in practice. It is the “Phil Spector” method and can quickly make or break a good live mix depending on the skill and ears of the audio tech.
Let’s explore this a bit to understand my reservations. If we have a mixed group of male and female singers, the sonic space they occupy is typically 100 Hz – 1.2 kHz for fundamental notes, plus overtones out to about 8 kHz. That is essentially all of the audible frequencies except one octave at the upper and lower boundaries. A very big spectrum indeed, and this is where 95% of all music is heard. To make sonic space, my first-year techs will often pull down the guitars and keys broadly below 1 kHz by as much as 6 – 8 dB. Now they have made sonic space for the fundamental vocal tones, but there is a problem. The side effect of doing this is that when soloed, the guitars and keys have lost their natural tonal qualities, with the left hand on the keys essentially going away (fundamental chords) and the guitars sounding more like mandolins with no warmth left. Many song intros and instrumental passages require full, natural-sounding instruments, but now these sound thin and weak, and their sonic impact is lost due to our overuse of EQ.
The solution is to use subtlety and choose EQ to “create sonic space” if and only if we can preserve the natural tonal quality of the instruments when soloed on headphones. No easy trick. More than 3 dB is often too much, and we simply need to use fader control so the vocals can always be heard clearly. We must trust our ears and wield our digital EQ weapons graciously. As an audio tech, my unwritten oath is to first “do no harm to the performance,” and I think most behind the mixing desk agree with this in principle.
Remember that in a fully acoustic performance without mics or electronics, the voices, guitars, piano, cello, violins and many other instruments share the same sonic space, and if the performance is of high quality there is no audible clash of overlapping frequencies from 100 Hz – 8 kHz. The music simply sounds very natural and beautiful, with a great natural mix. If we measured this with a real-time analyzer, we would find most of the sonic energy is between 200 Hz – 3 kHz, and that is what our ears are accustomed to hearing. I actually experienced this while singing the Messiah last night with orchestra. The room acoustics were excellent, and the only mic was for the director to speak to the audience between movements.
Stepping off soap box now… Just one Sr. sound tech’s opinion. Others will surely disagree. :)
Dear Sr. Sound Tech,
Thanks for your very interesting comments. I also agree with your approach to the “sonic space,” or better yet, the frequency-band space; it’s exactly the concept I was mentioning in the article. However, in your comparison with the choir and orchestra in a natural acoustic setting, you are forgetting one fundamental detail: the sound system. It is nearly impossible to reproduce the perfect mixture of sounds found in a natural environment in a man-made sound system. The complex interactions between the different instruments’ sound waves in the air behave differently when we try to capture them with mics, mix them in a console, and then attempt to reproduce them through a PA. That’s why we need to create “sonic space” in the mix: to try to overcome the limitations of the PA (compared to the pure natural sound). This technique is used when we are close mic’ing. For an orchestra and choir ensemble, one or two ambient mics will do the job, and in such a case there is no need to worry about “sonic spaces.” The point is that we cannot compare the example of a symphonic orchestra in a natural and pure environment with an environment where many mics and inputs are being mixed together. Moreover, like any other mixing technique, it should be used with caution, and the final judgment is based on what we all hear. Thanks!
Jose,
I appreciate what you are saying, and we may be confusing several different issues here. If the system is introducing a bunch of midrange energy into the mix, my recommendation is to deal with this during system setup and room tuning rather than during a live mix, if it can be avoided. My first goal in a live mix is to reproduce and reinforce the voices and instruments as truly and accurately as possible while leaving room to fix problem areas and enhance the overall sound. I still recommend using care when choosing EQ “to create sonic space” for voices. We choose EQ if and only if we can maintain the natural quality of the instruments and avoid making the guitars sound like mandolins and the piano sound like a harpsichord. I say this because, due to heavy-handed EQ, I have been hearing a great deal of soloed mandos and harpsichords lately where magnificent guitars and grand pianos once roamed free. Sometimes simply pulling the fader down 3 dB on the guitar reduces midrange energy and is a more effective approach to creating sonic space for the vocals while preserving the natural timbre of the guitar. JMHO
As always, there is more than one way to skin a cat and each sound tech must choose the method that suits their situation the best.
Bravo! The most clear and concise how-to on mixing vocals yet. These are very close to the goals and methods I was trained to use many years ago. Trust quality mics, trust your ears, minimal eq to fix problems only, subtle use of effects, and make certain you can hear the lyrics and all harmony parts as one.
These days the technology at our fingertips is so much better, but it seems that skills behind the console are becoming a lost art. I often suffer through a service where I cannot hear the lyrics because of heavy-handed EQ or delay effects all over them, or where drums overpower everything and I cannot hear the vocal harmonies at all. So much work to do, so little time.
Thanks! Just trying to help other churches and ministries. Blessings!
Don’t forget to add depth with the vocals.
Hi, and thanks for your comment on my article “Mixing Vocals”. I really value your feedback. Could you please provide some additional details on what you mean by adding depth to the vocals? Thanks in advance, and blessings!