Another bedtime read - i quite like this one



Milesy
27-02-2005, 10:44 PM
==============================================

Why Your Mixes Suck

Lionel Dumond
Media and Mastering Editor



The time has finally arrived.

Your latest and greatest work is almost done. You started with what you feel is a damn fine song. You carefully planned the arrangement. You've captured some killer tracks. And then, you sweated every detail of the mix. You tweaked, pulled, pummeled, and then re-tweaked, re-pulled, and re-pummeled those tracks until it all sounded something like what you thought you were hearing in your head when you started. And finally, you now hold in your hands The Final Mix. No more hedging. You're ready to commit forever. This is the sound you're going to leave to posterity.

Your magnum opus is now ready to be mastered.

Right?

Well, maybe. Or, maybe not.

One of the most important things you should expect from any good mastering facility is a well-trained set of ears listening to and evaluating every minute detail of your music. That facility should then give you a brutally honest, totally objective opinion of the quality of your music and mixdown, presented in as constructive a manner as possible -- along with suggestions for improvement and possible solutions for problems in your mix.

But hey... you've done that already, haven't you?

Again, the key word here is objectivity. The more you've been involved in a particular project, the less of it you have. If you've spent hours and hours mixing a project, your sense of objectivity has been unalterably compromised. If you wrote the song, performed on the record, recorded the tracks, and then mixed it yourself, your objectivity is all but out the window.

Not only is this an unavoidable fact of life in the studio, but in a lot of other places, too. There is a very good reason doctors should not attempt to treat themselves, psychologists should not analyze themselves, writers should not edit themselves, and attorneys should not defend themselves in court. That reason is lack of objectivity! Heck, I wouldn't even suggest that an experienced, professional mastering engineer master his or her own music, on exactly these same grounds.

(For a great article which explores this subject in more depth, see Rip Rowan's editorial in the October 1998 issue of ProRec.)

A big part of my job as a mastering engineer involves listening and critically evaluating mixes that clients submit for mastering. In just about all cases, this material is what the client considers a finished mix -- the fruits of his or her very best efforts. The stuff I hear coming through here ranges from "genius" (pretty rare) to "garbage" (again, pretty rare, and I'm not talking Butch Vig here, folks). Most of it falls somewhere in the middle -- mostly good attempts, but with problems. Fortunately, most of the problems are usually fixable -- if not in mastering, then by going back to the drawing board and reworking the arrangement, the recording, or the mix a bit (sometimes, quite a bit).

But what kind of problems? Ahhh... finally we are coming to the point of this whole article! First, let me point out that every mix is different, and thus presents its own set of challenges. However, over the years, I have noticed a few distinct patterns, especially from mixes done in home and project studios. Many of the same maladies seem to crop up again and again. In this article, I've compiled a "rogue's gallery" of the Top Ten most common problems an objective set of ears is likely to catch in a given mix. Most mixes suffer from only one or two of these, if any. Don't you wish.

Ambiance Problems

Great-sounding music doesn't get pumped directly into our brains. People play music, and those performers and their instruments exist in some kind of physical space. The sounds they make travel through the air and bounce off the boundaries of that space, as well as off other nearby physical objects, and this all becomes, to the listener, part of the overall sound. Also, performers playing together each occupy different positions -- a listener can perceive some of their sounds as originating from "close by," and other sounds as originating from "further away."

Modern studio techniques, like close-miking and the use of dead rooms, can rob your music of the subtle ambient cues that make the music come alive. Judicious use of reverb and delay are great tools that can bring a sense of "air", spaciousness, and realism to a mix. Clever use of ambiance can also add a "front-to-back" dimension to a recording that complements the "left-right" stereo soundfield. (Cool... instant 3-D!)

Simply put, the proper use of ambient effects in a mix is like the spices in your favorite dish -- just the right kind and amount adds zest and flavor, but too much of the wrong kind makes it inedible. Vocals swimming in reverb to the point of unintelligibility. Drums booming off the clouds. Guitars drowning in a swamp of echo. In short, ambiance problems. I hear them all the time.

A lot of these problems could be avoided by a) picking the right kind of ambient effect for the job, b) learning what their parameters mean, and how to adjust them, c) using different ambient programs for different instruments, and d) simple good taste and common sense!

Most reverb effects can be placed into one of three categories -- plate, room, or hall. Plate reverbs, with their lower density algorithms, shorter decays, and shimmery finishes, are generally best for vocals. Room-type reverbs (which usually come with parameters that allow for room simulations of different sizes and construction materials) are good for drums and most percussion. Hall reverbs are for those special situations where you really need an instrument to go "boom." Carefully chosen hall reverbs can sometimes be applied to an overall mix with good results (don't go too heavy here!).

Learn how to manipulate the ambient effects you're using. Pre-delay is the amount of time it takes for the reverb effect to "kick in." Early reflections are the very short echoes that occur before the "thick" part of the reverb takes over. Longer pre-delays, and/or longer or less dense early reflections, can allow sung syllables or the attack of an instrument to poke through before it's washed away. Decay time is the amount of time it takes for a reverb to fade into "silence," usually defined as -60 dB (which is why it is sometimes referred to as RT60 time). Shorter decay times can help control ambient "buildup," where reverberant effects can start to pile up on top of each other (this isn't related to the "waxy buildup" you see in Pledge commercials, but it's almost as ugly!) And don't forget... reverb effects have level controls, too. Watch those send and return levels!
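
If you like seeing the math behind all this, here's a rough Python sketch of how decay time relates to feedback gain in a simple Schroeder-style comb-filter loop. This is only an illustration of the RT60 idea -- real reverb units are far more sophisticated -- and the delay and decay numbers are made up for the example.

    # RT60 means "decay to -60 dB." In a feedback-delay (comb) loop, each
    # pass multiplies the signal by a gain g, so we need
    #   g ** (rt60 / loop_delay) == 10 ** (-60 / 20)
    def feedback_gain(loop_delay_s, rt60_s):
        """Per-pass gain so the loop dies away 60 dB in rt60_s seconds."""
        return 10.0 ** (-3.0 * loop_delay_s / rt60_s)

    print(feedback_gain(0.030, 1.8))    # 30 ms loop, 1.8 s decay: ~0.891

    # Pre-delay, by contrast, is just dead time before the wet signal
    # starts -- e.g. 40 ms at 44.1 kHz is about 1764 samples of silence.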

Applying different types of reverb to different parts of your mix (instead of relying on a single ambient effect for everything) can really liven things up and make your mix more interesting to listen to. Different manufacturers' reverb and delay boxes utilize their own unique algorithms, and thus sound different from each other -- Roland, Sony, Lexicon, and TC Electronic all make great sounding reverb units, but they don't sound anywhere near the same. Get your hands on more than one if you can, and put those extra sends on your board to good use!


EQ Problems

I wrote a pretty good article (if I can say so myself!) in the April 1998 issue of ProRec that explained the role of equalization in carving out a good mix. Allowing each instrument to claim its own sonic space is crucial to creating a mix that is well-balanced across the frequency spectrum, allowing each competing instrument to make its statement without crowding the others out.
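
To make that "sonic space" idea concrete, here's a small Python sketch of the kind of move I mean -- a gentle cut on one track to make room for another. It uses the standard "cookbook" peaking-EQ biquad; the frequency, depth, and Q are invented for the example, not a recipe.

    # Carve ~3 dB out of the guitars around 2.5 kHz so the vocal's
    # presence range can poke through.
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(f0, gain_db, q, fs):
        """RBJ cookbook peaking biquad (cut if gain_db < 0)."""
        A = 10.0 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    fs = 44100
    b, a = peaking_eq(f0=2500, gain_db=-3.0, q=1.0, fs=fs)
    guitars = np.random.randn(fs)        # stand-in for a real guitar track
    carved = lfilter(b, a, guitars)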

Some of you need to read that article again! The price of ignoring the wisdom contained therein can be a great big unintelligible mud-puddle of a mix. There's music there, but that pad part... uh, it could be a cello, could be an oboe -- I just can't tell! And your wall o' guitars is embroiled in the sonic equivalent of the Jerry Springer Show -- all fighting so hard with each other that you can't hear a damn thing they're saying.

Or sometimes, an otherwise pretty decent recording can sound completely butt-less (no low end) or sound like there are blankets over the speakers (no sizzle or air). The kick and/or bass (if they can even be heard) sound wimpy and lifeless, while the cymbals sound like garbage-can lids -- and the vocalist sounds like he or she literally "phoned it in" (I mean, like the vocal was sung over the telephone!)

A complete treatise on EQ goes way beyond the scope of this article; and besides, I've covered it before. Suffice it to say that poor EQ decisions are usually born of one of two things: a) a lack of experience, or b) monitors that are shy on bass (very common with small/cheap nearfields), over-emphasize the midrange (a la Yamaha NS-10s), contain poorly tuned subwoofers, or otherwise lie to you about the sound you're getting.

A lying monitor system isn't necessarily a mix-killer, as long as you know what it's lying about -- in other words, you are so familiar with its sound that you know exactly where it falls short, and can thus compensate properly for those shortcomings. Yamaha NS-10s may not be the flattest speakers, but I have heard some killer mixes come off of them, because those engineers are intimately familiar with the sound of the NS-10. Of course, it's easier to have monitors that are accurate to begin with, so what you hear closely approximates what's really contained in your tracks.

As for a lack of experience... that can only be overcome as you read more, learn more, and watch and listen to others who know what they're doing. Keep experimenting, too -- trial-and-error is the original teacher, and believe it or not, it's how a lot of the most knowledgeable pros in this business have built their chops.

Inconsistent Levels

This can range from the occasional errant note that just pops out at the inopportune time, to a whole track that wanders in and out of the mix like the bass player went out on a beer run during the chorus. Any highly dynamic instrument, such as horns, brass, bass, and percussion, can suffer from the problem of inconsistent levels. The voice is especially prone to wide fluctuations in dynamics -- and nothing can kill a track faster than a singer who fades in and out for no apparent reason.

There are all kinds of things that can cause these problems. Horns and brass can be awfully tough to record, especially if the player moves a lot, which is why you might consider a clamp-on mic as opposed to a stand-mounted one in some situations. With vocalists, it's often a case of how well they can "work" the mic -- some know enough to back off a little during the louder parts, and to move in some during quieter passages; while many have no clue how to control their dynamics at all. And with some musicians, it's merely a case of just plain sloppy playing!

In these cases, compression can be your best friend. A compressor can tame peaks in a track by attenuating levels that exceed a given loudness in a precisely controllable way. Compressors can also raise the level of quieter parts via the use of makeup gain. These two processes working together can do wonders to level out an inconsistent track.
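
For the curious, here's a bare-bones Python sketch of what a compressor's gain computer is doing under the hood. A real unit adds attack and release smoothing (this one doesn't), and the threshold, ratio, and makeup figures here are just example numbers.

    import numpy as np

    def compress(x, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
        """Attenuate whatever exceeds the threshold; add makeup gain to all."""
        level_db = 20 * np.log10(np.abs(x) + 1e-12)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
        return x * 10.0 ** (gain_db / 20.0)

    # e.g. leveled = compress(vocal)   # 4:1 above -18 dBFS, +6 dB makeup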

A compressor, however, is not always the best answer. If the track only needs to be tamed here and there, sometimes spot treatment, rather than overall compression, is the way to go. If you are working in a DAW, the easiest approach may be to select the offending portion and either raise or lower the level as needed. Another time-honored method is gain-riding -- the practice of moving the faders at the proper times, and by the appropriate amount, during the mixdown (some guys like to gain-ride during record as well, a practice I don't recommend unless you know exactly what you're doing.)

Back in the days when I started out, it was considered a necessary skill to be able to "play" the console during a mixdown, just like one would play an instrument. You needed to have the mix in your head well enough so that you could do the moves as they needed to be done, all by hand. A complicated mix might sometimes take two, or even three engineers, depending on how much manipulation had to be done during the mix and how many hands were needed. DAWs and dynamic console automation have made this a thing of the past in most studios. Okay, enough reminiscing for now...
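
Here's that DAW-style spot treatment, sketched in Python -- dip just the offending region by a few dB instead of squashing the whole track. The times and amounts are invented, and a real edit would also crossfade the region boundaries to avoid clicks.

    import numpy as np

    def spot_gain(track, start_s, end_s, gain_db, fs=44100):
        """Lower (or raise) only the selected region; track is float samples."""
        out = track.copy()
        i0, i1 = int(start_s * fs), int(end_s * fs)
        out[i0:i1] *= 10.0 ** (gain_db / 20.0)
        return out

    # e.g. pull a popped-out note down 4 dB between 12.50 s and 12.85 s:
    # fixed = spot_gain(vocal, 12.50, 12.85, -4.0)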

Panning Woes

Proper soundstaging is a major consideration in creating a successful, well-balanced mix. As we've already discussed, ambient effects can be used to control the "front-to-back" dimension of the sound stage. Panning controls the "left-right" relationship of timbres, which is the other half of the soundstage equation.

Proper soundstaging is something a lot of people tend to overlook. I recall observing a test at a trade show as a young (and still pretty green) recording engineer. People who came by this one particular booth were asked to listen to a mix over a pair of normal speakers and in headphones, and then briefly describe what they thought was wrong with the mix and how it could possibly be improved. The listeners cited everything from spectral balance to subtle distortion. But, the fascinating thing was this -- of the twenty people who took the "test" (and, sad to say, I was one), all of us failed to notice the single most glaring faux pas -- the whole mix was panned dead center mono! That experience had a tremendous effect on me. From then on, I was a lot more aware of proper soundstaging, I can assure you!

Like proper EQ, levels, and the use of ambiance, panning is a great way to make room for various instruments and to bring variety and interest to a mix. It seems, however, that panning issues tend to confuse some folks. I am often asked, "where should I put this or that instrument in the mix?" "Where does the bass go?" "Should the hi-hat always be on the left or on the right?" "Is it best to pan the crash opposite the ride?" The simple answer is "I don't know unless I listen to your mix," and even then, the way I would do it isn't the only good way.

The key is to always have a good reason for putting an instrument where you are putting it. Don't just throw things around without a plan. And don't be afraid to try something a little quirky just to see if it works. For example, if you listen to the early Van Halen records (the ones produced by Ted Templeman), you'll almost immediately notice that the dry part of Eddie's rhythm guitar is always panned hard left. Weird, yeah... but it sounds right on those songs!

One sure way to screw up a mix, especially relative to panning, is to mix through headphones. My advice is, don't. Headphones provide a grossly exaggerated picture of the left-right spatial relationships in your mix. In an ideal stereo setup, the speakers are sixty degrees apart relative to the listening position. Headphones are 180 degrees apart, and they also don't account for the fact that the sound emanating from each of two normal speakers reaches both of your ears, not just one. Sounds panned dead center seem to originate from your pituitary gland, not from in between a set of speakers where they belong!

Also, be sure to observe the "equilateral triangle" rule in regards to monitor placement, making sure not to place nearfields too far apart. A set of monitors placed too wide will create a "hole in the middle" effect, where center-panned material will seem to emanate from the two sides, and not from the middle at all.

Frequency Cancellation and Phasing

As you're panning around, it's important to check your mix in mono from time to time. When you switch to mono, do some instruments tend to get buried in the mix? If so, this is a pretty sure sign you've got phase problems. Sometimes, phase problems are apparent even when listening in stereo, depending on how severe the problem is, what kind of mics were used and how they were placed, relative track levels, and how you've got things panned.

When a single sound source is picked up by more than one microphone, the sound can reach the mics at different times. When the tracks are mixed, the crest of the sound wave picked up by Mic A can be partially canceled out by the trough of the sound wave picked up by Mic B at exactly the same time. The amount of cancellation varies across the frequency spectrum, and thus, this phenomenon is often called frequency cancellation. This is exactly how a phaser effect works, by the way; though with a phaser you can usually control the frequencies which are canceled or have the effect automatically "sweep" across a part of the spectrum in a precisely defined manner.
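
You can compute this for yourself. The Python sketch below mixes a tone with a delayed copy of itself; with a delay of dt seconds, the nulls land at f = (2k + 1) / (2 * dt). The 0.5 ms figure is just an example -- roughly the path difference you'd get from mics spaced 17 cm or so apart.

    import numpy as np

    fs = 48000
    dt = 0.5e-3                        # 0.5 ms delay between the two "mics"
    n = int(round(dt * fs))            # delay in samples (24 here)

    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 1000 * t)    # 1 kHz sits right on the first null

    delayed = np.concatenate([np.zeros(n), tone[:-n]])
    mixed = tone + delayed
    print(np.max(np.abs(mixed[n:])))       # ~0: the 1 kHz content cancels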

Phase anomalies can easily creep in whenever you use a multiple mic setup, such as in miking a drum kit; or even through mic bleed, as when you are recording a band live and each sound source is picked up by mics intended for other sound sources. Sometimes, there's little you can do about that, except to alter your mic positions or to reverse the phase on some of the mics using an adapter or specially wired cable. Many mixing consoles and most DAWs provide a method of reversing the phase of a previously recorded track as well.
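
In a DAW you can even automate the check. Here's a quick Python sketch of one rough-and-ready heuristic -- sum a pair of mics to mono both ways, and keep whichever polarity gives the stronger sum (the classic move on, say, snare top and bottom mics). Your ears still get the final vote.

    import numpy as np

    def best_polarity(mic_a, mic_b):
        """Flip mic_b if the reversed-polarity mono sum carries more energy."""
        rms = lambda x: np.sqrt(np.mean(x ** 2))
        if rms(mic_a - mic_b) > rms(mic_a + mic_b):
            return mic_a, -mic_b
        return mic_a, mic_b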

Overuse of Effects

I remember the very first digital delay I ever bought -- the venerable Effectron I from DeltaLab. It could provide up to 1024 milliseconds (a little over one second) of delay. It could do feedback regeneration and had an LFO, too. That was about it. And as I recall, the A/D converters were 12-bit and noisy as hell. But heck, I can't complain, because I got such a smokin' deal on the thing -- only $800!

I used that blue beast on every mix. I learned to do flanging, phasing, chorusing, slapback, and every other delay-based effect with that one box, because that was all I had at the time.

Things sure have changed, haven't they? With the proliferation of computers, DAWs, plug-ins, and super-cheap (and good-sounding) digital effects boxes by the hundreds, even a typical 15-year-old Bob Clearmountain wannabe has an arsenal of effects that would have put the modern pro studio of 20 years ago to shame. At that age, it would have taken me a whole summer of washing dishes to buy one decent effects unit, the equivalent of which can be had today free for the taking off the Internet as shareware!

As is often the case, this bounty of blessings has become a curse!

For those folks who send me mixes full of every trick and toy in your rack, I have a message: I hereby give you permission to use your good taste and musical judgment in choosing which effects to use. It is not necessary to use every effect at your disposal in every mix, okay? There... don't you feel better now? I'll bet you do... and the people who have to listen to your music will, too!

I can only give you the same advice I've already given hundreds of times -- don't slap an effect on anything unless you have a good reason to do it. And, prior to your final mixdown, check it all one more time. Go through every track, note the effects you're using, eliminate them one by one, and then check your sound. Does that cool trick really enhance, or does it detract from, the overall mix? Is it really necessary to have it in there? If not... get rid of it!

Sibilance and Plosives

Plosives are the percussive vocal sounds -- such as "p" and "b" -- that create a sudden blast of air from the mouth and can result in an annoying popping sound in an otherwise good track. Sibilance occurs when recording vocal sounds that contain a lot of high-frequency energy -- such as "s" and "ch" -- which, if not controlled, can also wreak havoc on a track.

Though I often hear mixes that contain both, I have found that sibilance is a lot more common. Plosives are sometimes a little easier to handle at the mastering stage if they've somehow slipped past the mixing engineer. Sibilance, on the other hand, is very difficult to tame in an overall mix without adversely affecting other tracks.

One way to help alleviate the plosive problem before it happens is through the use of a pop filter, those round plastic rings with a layer or two of hose material stretched across them. Pop filters usually don't do a whole lot to tame sibilant material -- though they can help a little, if only by enforcing a given distance between the singer and the mic.

Experienced singers know how to work the mic well enough to back off or move a bit off-axis to the mic during sibilant passages. An especially windy or lispy singer can often be controlled if you have them sing at an angle relative to the mic, so that part of the sound travels across, rather than directly into, the mic element.

Another good trick that really works is to use sound substitution -- replacing sibilant sounds with similar sounds that are a little less harsh. I recently had a vocalist replace the word "reach" in a background vocal, singing "reesh" instead. That helped the track quite a bit!

The best way to tame an already recorded sibilant is to use multiband compression, which works just like a regular compressor except only certain frequencies in the sound are attenuated. It differs from a regular EQ as well, because the multiband compressor only acts on sounds that exceed a given threshold, while an EQ acts on all sound within a specified frequency range, regardless of level.
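
Here's the multiband idea in miniature -- a crude Python de-esser sketch. It splits off everything above a crossover point, compresses only that band when it gets loud, and leaves the rest of the spectrum alone. The crossover, threshold, and ratio are invented numbers, and a real unit would use phase-matched crossovers and smoothed gain.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def deess(vocal, fs, split_hz=5000, threshold_db=-24.0, ratio=6.0):
        sos_hi = butter(4, split_hz, btype='highpass', fs=fs, output='sos')
        sos_lo = butter(4, split_hz, btype='lowpass', fs=fs, output='sos')
        hi = sosfilt(sos_hi, vocal)          # the sibilant band
        lo = sosfilt(sos_lo, vocal)          # everything else, untouched

        level_db = 20 * np.log10(np.abs(hi) + 1e-12)
        over_db = np.maximum(level_db - threshold_db, 0.0)
        hi = hi * 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
        return lo + hi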

If you're using a DAW, and there aren't too many plosives and sibilants to deal with, it's sometimes quicker and easier to simply select the offending occurrences and lower the volume of the selected portion only.


Distortion

Most engineers know what digital distortion sounds like, what causes it, and how to avoid it. This is the really harsh, ugly, easy-to-spot distortion that occurs when you overload an A/D converter such that the number of bits that would be required to recreate that sound exceeds the number of bits available. This often causes the too-loud portion of the resultant waveform to "wrap around" back to zero, which causes some pretty nasty crackling.
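
Here's a Python sketch of that wrap-around for a 16-bit converter. The wrap is simulated explicitly with two's-complement arithmetic (so the demo is well-defined), and you can watch a peak that's 40% over full scale flip to large negative values, while plain clipping at least fails gracefully.

    import numpy as np

    def to_int16_wrapping(x):
        """Simulate a converter that wraps out-of-range values."""
        ints = np.round(x * 32767).astype(np.int64)
        return ((ints + 32768) % 65536 - 32768).astype(np.int16)

    x = 1.4 * np.sin(np.linspace(0, 2 * np.pi, 64))    # 40% over full scale
    wrapped = to_int16_wrapping(x)
    clipped = to_int16_wrapping(np.clip(x, -1.0, 1.0))
    print(wrapped[6:11])   # climbs toward full scale, then wraps hard negative
    print(clipped[6:11])   # same samples, pinned smoothly at +32767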

However, there are many other kinds of distortion that can slink their way into a mix, and most are not as immediately apparent. Mics, mic preamps, the inputs and outputs of your mixer, your mixer's sub-busses, analog tape -- in fact, almost everything in the signal path -- can introduce insidious artifacts whenever their physical or electrical limitations are exceeded.

To avoid this, it's important to know what those limitations are and how to operate within them. For example, always remember that mic elements are physical entities that only have so much travel and give. The loudest sound a mic can handle is expressed by its maximum SPL rating. If you exceed that level, you're asking for trouble.

Ditto for electronic gozintas and comzoutas. This points up the need for proper gain staging -- setting the various levels along the signal path so that you don't overload any portion of that path, causing distortion. Of course, too low a level at any point can lead to excessive hiss, noticeable hum, audible RF interference, or other noise. You need to learn how your equipment works, and more importantly, how it all works together.
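
If it helps, here's the gain-staging arithmetic in toy Python form -- tracking the peak level in dBFS through a chain of made-up gain stages, so you can see exactly where the headroom runs out.

    import numpy as np

    def peak_dbfs(x):
        return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

    signal = 0.05 * np.random.randn(44100)        # stand-in source material
    for stage, gain_db in [("preamp", 30.0), ("channel fader", -6.0),
                           ("mix bus", 3.0)]:
        signal = signal * 10.0 ** (gain_db / 20.0)
        p = peak_dbfs(signal)
        flag = "  <-- no headroom left!" if p > 0 else ""
        print(f"{stage:13s} peak {p:6.1f} dBFS{flag}")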

Of course, if you're attempting to create a special effect with it, certain types of distortion can be used to your advantage, too. Just make sure that you know what you're going after here!

Poor Arrangements

As has often been said, there's no accounting for taste -- though, given some of the mixes I've heard in the past, I wish I could!

The wrong choice of timbres that come in at the wrong times. Meter that doesn't swing. Dead, clock-like tempos. Syncopation that doesn't bounce. Grooves that don't groove. Poor kick-bass relationship. Key modulations that go nowhere. Songs where every track plays from beginning to end. Orchestration that's just too over-the-top -- too loud, too overbearing... just too much of everything!

I've often heard it said that music is a language -- which, incidentally, is something I don't believe at all. (If you can compose a piece of music that can tell me what you had for breakfast this morning, then I'll reconsider.) That being said, great music is nevertheless a complex thing, and very difficult to describe. A great song contains elements of both the beautiful and the strange. Music is all about tension and release; ebb and flow; dissonance and consonance. Like a luxurious tapestry, there are various textures woven throughout. Like a masterful painting, there are elements of both light and shade. Like a well-executed play, it builds in a cohesive way; there is drama and relief; conflict and catharsis, climax and denouement. Great music contains depth, soul, warmth, love, animation. It lives and breathes.

This can all be tough to grasp unless you are particularly gifted, or have been trained in orchestration, arranging, or classical compositional technique, but you can learn how it's done. Listen to the best classical pieces -- preferably live, if you can. Watch and listen to how themes are introduced, restated, and developed throughout different sections of the orchestra during the piece. Listen to how a movement builds, then breaks down, then builds again. Listen for how key modulations are used, and how different instruments create different textures and effects in the music.

You've got to learn to do the same thing in your compositions and mixes. Given the relatively short duration and somewhat limited structure of commercial pop, this can be a challenge; but don't forget that you have a lot more tools and tricks at your disposal than Mozart and Beethoven ever dreamed of!

There have been hundreds -- maybe thousands -- of books written on this very subject. There's certainly not enough room or time to cover it here in any way which could begin to do it justice, and besides, this is all so subjective that laying down a set of "rules" would be a waste of time, anyway. But that doesn't make it any less important. Remember folks... arranging is everything.

Cheesy MIDI Programming

There was no way I could end this article without mentioning my one big Pet Peeve of All Peeves. If I hear another rinky-dink piano track, lifeless robotic drum groove, or soulless sax solo, I'm reaching right through this screen and I'm coming after your ass. Yeah, buddy, that's right. Don't look away. I'm talkin' to you right now....

MIDI is a tremendous tool. It ranks right up there in music history with the invention of the fortepiano and the glorious day that a young guitar player named Les Paul figured out how to make a tape recorder that could play more than one track at a time.

But MIDI is just that -- a tool -- and nothing more. It's a protocol that defines a way of pushing 1s and 0s around. As such, there is nothing inherently musical about it at all. A stilted melody is a stilted melody, whether it came from Band in a Box or a real band. MIDI can make the music play, but only a musician can make music.

It's important to understand that MIDI has its limits. We've already pushed and hacked MIDI way out past its original purpose, and developed uses for it that its developers couldn't have imagined -- and that, as Martha Stewart would say, is a good thing. Yet, even given the inherent constraints of MIDI, it is indeed possible to create incredibly realistic, lifelike, and beautiful music with it. Granted, it takes work -- lots of it. MIDI programming is only about 20% composing, and at least 80% tweaking and retweaking; adding expression, dynamics, subtle timing and pitch variations, and all the other warm and fuzzy and human things that make up a great performance.
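
To give you a taste of that tweaking in code form, here's a tiny Python sketch that nudges note timings and velocities off the grid. Random jitter is the crudest possible version of the expression I'm talking about -- a musician's variations are anything but random -- and the (start_beat, pitch, velocity) tuple format is made up for the example; with real MIDI files you'd reach for a library like mido.

    import random

    def humanize(notes, timing_jitter=0.02, vel_jitter=8):
        """Randomly nudge each note's start time (in beats) and velocity."""
        out = []
        for start, pitch, vel in notes:
            start += random.uniform(-timing_jitter, timing_jitter)
            vel = max(1, min(127, vel + random.randint(-vel_jitter, vel_jitter)))
            out.append((start, pitch, vel))
        return out

    robotic = [(b * 0.25, 42, 100) for b in range(16)]  # hi-hat, dead on the grid
    print(humanize(robotic)[:3])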

If you want to improve your MIDI programming fast, I heartily recommend the book "The MIDI Files" by Rob Young, published by Prentice Hall. I've read it multiple times, and to this day it's never far from my computer.


Well, folks, that just about wraps it up for this edition. If you manage to master most of the material presented here, you're well on your way to creating mixes that will knock some socks off. But don't you dare stop here! Keep listening to mixes you like, study them, pick them apart, and try to emulate them in your own work. Keep reading everything you can get your hands on. Ask people who are doing work you admire how they are getting those results. Don't ever stop learning, and don't give up. You can do it.

And speaking of great mixes, the clock on the wall is telling me it's almost martini time...
