When depicted visually like this, it becomes quite clear what the problem is. The three elements are all fighting for domination of a single small area of the frequency spectrum. As a result, none of them is clearly audible, with the added disaster of a large amplitude peak being created (wasting headroom). Although, as I have said, a degree of overlap is entirely normal, ultimately there is not room for all three to occupy the same spot like this.
What can we do about it? Well, the first obvious step is to separate the two synths. They need to be next to each other, not on top of each other. Since the yellow one is already somewhat lower than the grey one, it makes sense to attempt to pull the yellow one left a bit, and push the grey one right a bit. To achieve this we may allow more low frequencies through on the yellow one (by lowering a high-pass filter cutoff, or reducing any low-cut EQ we have, for example), whilst sculpting away some frequencies from the yellow synth's upper end (with our subtractive EQ, as on the previous page). For the grey synth, we do the reverse: roll off more of its low end, whilst allowing more upper frequencies to come through (if applicable). In addition, we can reduce the strength of the pad in the frequencies occupied by both synths, with an EQ notch or two. The pad will still be at full strength around them, so we won't notice a significant change in its timbre - at any rate, it is a background element, so we can afford to twist it around a bit in order to fit the mix rather more than we could if it were a lead element.
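To make the idea concrete, here is a minimal sketch of that separation as DSP, in Python with numpy and scipy. Every cutoff and notch frequency below is an illustrative assumption (as are the noise stand-ins for the three parts); in practice you would make these moves by ear with the EQs in your mixer or DAW.

    import numpy as np
    from scipy import signal

    SR = 44100  # sample rate in Hz

    def highpass(audio, cutoff_hz, order=2, sr=SR):
        # Roll off energy below cutoff_hz (a simple Butterworth high-pass).
        sos = signal.butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
        return signal.sosfilt(sos, audio)

    def lowpass(audio, cutoff_hz, order=2, sr=SR):
        # Roll off energy above cutoff_hz (a simple Butterworth low-pass).
        sos = signal.butter(order, cutoff_hz, btype="lowpass", fs=sr, output="sos")
        return signal.sosfilt(sos, audio)

    def notch(audio, centre_hz, q=4.0, sr=SR):
        # Cut a narrow band around centre_hz, leaving the rest untouched.
        b, a = signal.iirnotch(centre_hz, q, fs=sr)
        return signal.lfilter(b, a, audio)

    # White-noise stand-ins for the two synths and the pad (demo only).
    yellow_synth, grey_synth, pad = (np.random.randn(SR) for _ in range(3))

    # Pull the yellow synth left: lower its high-pass, sculpt away its top end.
    yellow = lowpass(highpass(yellow_synth, 100), 2000)
    # Push the grey synth right: roll off more of its low end.
    grey = highpass(grey_synth, 1500)
    # Duck the pad only in the region the two synths occupy, with two notches.
    pad = notch(notch(pad, 1200), 2500)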
Having made these changes, let's look at our visualisation of splodges.
As you can see, everything now fits perfectly. The bad news is that this scenario is entirely fictional. Unless you are an incredibly talented and experienced engineer with a wide range of awesome EQ tools at your disposal, your chances of using EQ shaping to take a track where three sounds are seriously clashing in frequencies and magically make it all lovely are in practice very slim. But whilst this example was deliberately exaggerated, and EQ alone won't fix a royally fucked-up track, EQ certainly can help to improve matters when used in this way.
The problem encountered above, of the two clashing synth parts, is something that we often run across in my live drum'n'bass band keiretsu. With 10 musicians on stage, we must be very aware of ourselves, to avoid our tracks becoming so cluttered that important parts are masked by other, clashing, parts. Obviously we do not have the luxury of being able to tweak a graphic EQ over each and every sound we produce on stage! Therefore we instead use a variety of 'musical' means of fixing these EQ clashes. Some of the most important include:
If these and EQ still leave your mix unsatisfactory, there are a few other technical routes that can help with clashing sounds:
Did that make any sense? I hope so... Anyway... of course, this isn't to say additive EQ is wrong. Especially not in drum'n'bass, where there are no rules! You could use extreme additive EQ as a heavy sound-munging tool, for example. It also comes in handy where, for example, your snare has the snap you want, just not quite enough of it. A nice 2dB boost at the sweet spot is a lot easier than adding a whole new layer.
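For the curious, a small additive boost like that is easy to sketch in code. The filter below is the standard peaking EQ from Robert Bristow-Johnson's Audio EQ Cookbook; the 5 kHz "sweet spot" and the noise-burst stand-in for the snare are pure assumptions for the demo.

    import numpy as np
    from scipy import signal

    SR = 44100

    def peaking_eq(audio, f0, gain_db, q=1.0, fs=SR):
        # RBJ Audio EQ Cookbook peaking filter: boosts the band around f0
        # when gain_db is positive, cuts it when negative.
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return signal.lfilter(b / a[0], a / a[0], audio)

    # Stand-in "snare": a short decaying noise burst (demo only).
    n = SR // 4
    snare = np.random.randn(n) * np.exp(-20 * np.arange(n) / SR)

    # A gentle +2 dB at the (assumed) sweet spot, rather than a whole new layer.
    snappier = peaking_eq(snare, f0=5000, gain_db=2.0)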
Where additive EQ is definitely discouraged is in situations where subtractive EQ provides an equally worthy alternative. For example, if you have two overlapping sounds, and you want sound A to be more dominant than sound B, you could boost those parts of sound A being obscured by sound B. Far better, though, to cut those parts of sound B which are obscuring A. Aside from my bizarre metaphorical explanation above, there is one simple reason why this is better: headroom, again. Yes, any time you make something louder with your EQ, that's eating into your total headroom, which will ultimately only serve to make your finished track all the quieter. If you can achieve the same result (A dominates B) by removing something from B, then you are not eating up any more headroom; rather, you are keeping it available.
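The headroom arithmetic is easy to check numerically. In the toy Python below, two made-up, in-phase tones clash at the same frequency; boosting A by 6 dB and cutting B by 6 dB give A exactly the same dominance over B, but the boost nearly doubles the summed peak while the cut shrinks it.

    import numpy as np

    SR = 44100
    t = np.arange(SR) / SR
    a = 0.4 * np.sin(2 * np.pi * 440 * t)  # sound A
    b = 0.4 * np.sin(2 * np.pi * 440 * t)  # sound B, clashing with A

    def db_to_gain(db):
        return 10 ** (db / 20)

    boosted = a * db_to_gain(+6) + b  # A dominates because A was boosted
    cut = a + b * db_to_gain(-6)      # A dominates because B was cut

    print("peak with A boosted:", np.max(np.abs(boosted)))  # ~1.2: headroom eaten
    print("peak with B cut:    ", np.max(np.abs(cut)))      # ~0.6: headroom kept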
On first read, it's funny, but on closer inspection, it is extremely sound advice. You see, the human ear is a helluva lot better at hearing things which are there than it is at hearing things which are not there. Drumkit toms, being as they are (a) tuned and (b) beaten hard with a mic millimetres from the surface, are notorious for resonating and causing ringing and feedback. Part of the solution is a sharp EQ notch at the resonant frequency - however, when placing a cut on the EQ and then scrolling the frequency, it is sometimes hard to pick out exactly where you need to be. What is a lot easier is adding a huge boost, and then sweeping the frequency. Sooner or later, all hell will break loose, the drumkit will sound utterly atrocious, the mics will be feeding back like there is no tomorrow - and you know you've hit it. Just flip the boost to a cut and you're sorted.
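Here is a hedged sketch of that boost-and-sweep trick in Python, standing in for doing it by ear: sweep a huge, narrow peaking boost across candidate frequencies, keep the one where the output blows up, then flip the gain negative. The synthetic "tom" (a 180 Hz ring buried in noise), the Q and the frequency range are all assumptions for the demo.

    import numpy as np
    from scipy import signal

    SR = 44100

    def peaking_eq(audio, f0, gain_db, q=8.0, fs=SR):
        # RBJ Audio EQ Cookbook peaking filter: boost if gain_db > 0, cut if < 0.
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return signal.lfilter(b / a[0], a / a[0], audio)

    # Stand-in "tom": a decaying 180 Hz ring buried in noise (demo only).
    t = np.arange(SR) / SR
    tom = np.exp(-3 * t) * np.sin(2 * np.pi * 180 * t) + 0.2 * np.random.randn(SR)

    # Sweep a huge narrow boost; the output is loudest where the ringing lives.
    candidates = np.arange(80, 400, 10)
    loudness = [np.sqrt(np.mean(peaking_eq(tom, f, +12) ** 2)) for f in candidates]
    ringing_freq = candidates[int(np.argmax(loudness))]
    print("resonance found at", ringing_freq, "Hz")  # should land on 180

    # Flip the boost to a cut at the frequency we found, and we're sorted.
    fixed = peaking_eq(tom, ringing_freq, -12)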
The same technique can be very helpful when producing. If there is something "annoying" about a sound, it is usually quite hard to work out exactly what annoys you about it, let alone what frequency band this annoyance is emanating from. However, if you add a huge EQ boost, then scroll the frequency, you will often stumble on something very annoying indeed. It's kinda like zooming in on a picture to better spot flaws in the details, I suppose.
This is as much as I shall cover on EQ for now. Next, it is time to cover the basics of compression, and having done so, we will be in a position to start putting the two together, with a brief look at the powers of multiband compression.
<< Part Five: EQ: Practical Applications 2
Part Seven: Compression: 1 (to be continued...) >>