Hacking The Hearing System
When talking about improving the perceived quality of our productions and mixes, it's
easy to focus mainly on the technology we use, or the layout and acoustic treatment of
our listening environment.
It's easy to forget that just as important as our software, equipment and studio is what
happens to the signal after all of that, once the sound has made it into our internal
hearing system (our ears and brains) and how that sound is actually experienced.
So that's what this article is about: some tips and tricks for operating and manipulating your listeners'
built-in equipment!
If real is what you can feel, smell, taste and see, then "real" is simply electrical signals interpreted by
your brain.
Morpheus, The Matrix
This basically falls under the heading of psychoacoustics, and with knowledge of a few
psychoacoustic principles, there are ways that you can essentially hack the hearing system of
your listeners to bring them a more powerful, clear and larger-than-life, exciting experience of
your music. Knowing how the hearing system interprets the sounds we make, we can creatively
hijack that system by artificially recreating certain responses it has to particular audio phenomena.
Here you'll also gain some extra insights into how and why certain types of audio processing and
effects are so useful, particularly EQ, compression and reverb, in crafting the most satisfying
musical experience for you and your listeners.
For example, if you incorporate the natural reflex of the ear to a very loud hit (which is to shut down
slightly to protect itself) into the designed dynamics of the original sound itself, the brain will still
perceive the sound as loud even if it's actually played back relatively quietly. You've fooled the brain
into thinking the ear has closed down slightly in response to loudness. Result: the experience of
loudness, quite distinct from actual physical loudness. Magic!
Because you have to wonder: how do the machines know what Tasty Wheat tasted like? Maybe
they got it wrong. Maybe what I think Tasty Wheat tasted like actually tasted like a crunchy Skrillex
bassline, etc. etc.
That's probably enough Matrix references, so let's free your mind (ha!) and read on:
2. Frequency Masking
There are limits to how well our ears can differentiate between sounds occupying similar
frequencies. Masking occurs when two or more sounds occupy the exact same frequencies: in the
ensuing fight, the louder of the two will generally either partially or completely obscure the other,
which seems to literally disappear from the mix.
Not this kind of masking, although they do appear to be having a lot of fun here.
Obviously this is a pretty undesirable phenomenon, and it's one of the main things to be aware of
throughout the whole writing, recording and mixing process. It's one of the main reasons EQ was
developed: EQ can be used to carve away masking frequencies at the mix stage, but it's preferable to
avoid major masking problems to begin with at the writing and arranging stages, using notes and
instruments that each occupy their own frequency range.
Even if you've taken care like this, sometimes masking will still rear its ugly head at the mix, and it's
difficult to determine why certain elements still sound different soloed than they do in the full mix
context. The issue here is likely to be that although the root notes/dominant frequencies of the
sound have the space they need, the harmonics of the sound (which also contribute to the overall
timbre) appear at different frequencies, and it's these that may still be masked.
Again, this is probably the point where EQ comes into play: there's much more info on strategies
for EQing to enhance character and clarity in The Ultimate Guide To EQ.
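As a rough illustration of that carving idea (this is just a sketch, not from the article: the signals, frequencies and Q value are all made up for demonstration), here's how you might cut a narrow band out of a "pad" so a "lead" sitting at 1 kHz isn't masked:

```python
# Sketch: carving an EQ notch in a busy pad so a lead at ~1 kHz can be heard.
# All signals and settings here are illustrative, not a mixing recipe.
import numpy as np
from scipy import signal

fs = 44100  # sample rate in Hz
t = np.arange(fs) / fs  # one second of time values

# Hypothetical stand-ins: a broadband "pad" (noise) and a 1 kHz "lead" tone.
rng = np.random.default_rng(0)
pad = rng.standard_normal(fs) * 0.2
lead = 0.5 * np.sin(2 * np.pi * 1000 * t)

# Notch the pad at the lead's dominant frequency; a low Q gives a broad cut.
b, a = signal.iirnotch(w0=1000, Q=2, fs=fs)
pad_carved = signal.lfilter(b, a, pad)

def band_energy(x, lo, hi):
    """Total spectral energy of x between lo and hi Hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].sum()

# The pad's energy around 1 kHz drops, leaving room for the lead.
print(band_energy(pad, 900, 1100) > band_energy(pad_carved, 900, 1100))
```

In a DAW you'd do the same move with a parametric EQ band: find the masked element's dominant frequency, then cut a matching band out of whatever is sitting on top of it.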
Loud noises!
The brain is used to interpreting the dynamic signature of such reduced-loudness sounds, with the
initial loud transient followed by an immediate reduction as the ear muscles contract in response, so it
still senses a very loud sustained noise.
This principle is often used in cinematic sound design and is particularly useful for simulating the
physiological impact of massive explosions and high-intensity gunfire, without inducing a theatre full
of actual hearing-damage lawsuits.
Check out the visceral fuel station explosion in modern classic 28 Days Later (that leads to this scene
with flaming zombies!) for an expert example of audio-enhanced violence. Not to mention the
excellently done noises of the zombies themselves.
The ear's reflex to a loud sound can be simulated by playing manually with the fine dynamics of the
sound's envelope. For example, you can make that explosion appear super-loud by artificially shutting
down the sound following the initial transient: the brain will interpret this as the ear
responding naturally to an extremely loud sound, perceiving it as louder and more intense than the
sound actually, physically is. This also works well for booms, impacts and other epic effects that
punctuate the drops in a club or electronic track.
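A minimal sketch of that envelope trick (again, just an illustration with made-up timings: a 10 ms untouched transient, then a dip to half level recovering over 200 ms, loosely mimicking the ear's protective contraction):

```python
# Sketch of the "ear reflex" trick: leave the transient untouched, then duck
# the level immediately after it so the brain reads the sound as louder than
# it physically is. Pure numpy; all timings and depths are illustrative.
import numpy as np

fs = 44100
t = np.arange(int(0.5 * fs)) / fs  # half a second

# A hypothetical "explosion": a noise burst with a natural exponential decay.
rng = np.random.default_rng(1)
boom = rng.standard_normal(len(t)) * np.exp(-t * 6)

# Gain envelope: pass the first 10 ms at full level, dip to 50%,
# then recover linearly back to unity over ~200 ms.
env = np.ones(len(t))
hold = int(0.010 * fs)
dip_len = int(0.200 * fs)
env[hold:hold + dip_len] = 0.5 + 0.5 * np.linspace(0, 1, dip_len)

ducked = boom * env

# The transient itself is unchanged; only the body after it is quieter.
print(np.allclose(ducked[:hold], boom[:hold]))
```

In practice you'd shape this with a transient designer, a compressor with a fast attack, or hand-drawn volume automation, but the principle is the same: keep the hit, carve the moment right after it.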
The phenomenon is also closely related to why compression in general often sounds so exciting, and
can be used in its own way to simulate loudness; more on that below.
I had a broken wire on one of my microphones, which had been set down next to a television set, and
the mic picked up a buzz, a sputter, from the picture tube, just the kind of thing a sound engineer
would normally label a mistake. But sometimes bad sound can be your friend. I recorded that buzz
from the picture tube and combined it with the hum [from an old movie projector], and the blend
became the basis for all the lightsabers.
Ben Burtt, in the excellent The Sounds Of Star Wars interactive book
Does that make Burtt one of the original Glitch producers?
Johnny Cash
Also consider Johnny Cash's track I Walk The Line, which featured his technique of slipping a piece of
paper between the strings of his guitar to create his own snare drum effect. Apparently he did this
because snares weren't used in Country music at the time, but he loved their sound and wanted to
incorporate it somehow. The sound, coupled with the train-track rhythm and the imagery of trains
and travel in the lyrics, brings a whole other dimension to the song. And all with just a small piece of
paper.
One last example: the Roland TR-808 and TB-303 were originally designed to simulate real drums
and a real bass for solo musicians. They were pretty shocking at sounding like real instruments, but
by misusing and highlighting what made them different from the real thing (turning the bass into
screaming acid resonance and tweaking the 808 kick drum to accentuate its now-signature boom),
originating Techno producers like Juan Atkins and Kevin Saunderson perceived the potential in their
sounds for something altogether more exciting than backing up a solo guitarist at a pub gig.
So remember that whether you're creating a film sound effect or mixing a rock band, you don't have
to settle for the raw or typical instrument sounds you started with. Similarly, if you find that upside-down
kitchen pans give you sounds that fit your track better than an expensive tuned drum kit, use
the pans! If you find pitched elephant screams are the perfect addition to your cut-up Dubstep
bassline (it works for Skrillex), by all means use them. The only thing that matters is the perceived end
result: no-one's ears care how you got there! (What's more, they'll be subliminally much more
excited and engaged by sounds coming from an unusual source, even if said sounds are taking the
place of a conventional instrument in the arrangement.)
Amon Tobin knows a thing or two about layering, unusual samples and cinematic sounds
We've already mentioned that our ears can have trouble deciphering where one sound ends and
another similar one begins. And it's another psychoacoustic phenomenon that our ears are
incredibly forgiving (or, put another way, easily deceived) when it comes to layering sounds together,
even across wider ranges of the frequency spectrum: done carefully, they simply won't distinguish
between the separate components and will read the result as one big, textured sound. This is
basically how musical chords work, and it's also a huge principle behind creating an expensive,
lush Hollywood-style sound design, and really complex, "How did they do that?" effects in electronic
music.
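The layering idea above can be sketched in a few lines (an illustration only: the "layers" here are simple decaying sine tones voiced as an A minor chord plus an airy top partial, with amplitudes and decay rates picked arbitrarily):

```python
# Sketch of layering: several simple components, summed and normalized,
# read by the ear as one textured composite sound. All values illustrative.
import numpy as np

fs = 44100
t = np.arange(fs) / fs  # one second

def tone(freq, amp, decay):
    """One layer: a decaying sine partial at the given frequency."""
    return amp * np.sin(2 * np.pi * freq * t) * np.exp(-t * decay)

layers = [
    tone(220.0, 1.0, 2.0),   # A3 fundamental: the low "body"
    tone(261.6, 0.5, 3.0),   # C4: minor third for colour
    tone(329.6, 0.4, 3.0),   # E4: fifth
    tone(440.0, 0.6, 4.0),   # A4: octave reinforcement
    tone(1760.0, 0.2, 8.0),  # airy top layer, fades fastest
]

# Sum the layers, then normalize so the composite peaks at unity.
composite = np.sum(layers, axis=0)
composite /= np.max(np.abs(composite))
```

Real-world layering is the same move with richer material: a sub layer, a body layer, a transient layer and a noise/"air" layer, each contributing a different region of the spectrum to one perceived sound.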
In his book The Practical Art Of Motion Picture Sound, David Yewdall talks about creating layered
sound effects as though they were chords: