How to produce like a grown-up – Part 1
My name is Yoav, I’m a producer/composer and I have three cats.
Some of you know me from Distorted Harmony, some from ARP and some of you don’t. That’s it, enough about me.
For demonstration purposes I did a completely new mix of ARP’s “Gone (Not Here)” – listen to the original mix and master by Forrester Savell.
I have to clarify and stress one thing: this is my workflow, the way I work after years of trial, error and experience. It won’t work for everybody, but with that in mind, it’s important you know what to expect from a producer – and from yourselves.
Let’s start with the drums.
As you can see, I already quantized (time-aligned to the grid) and edited the drums. Both the kick and the snare have already been converted to a well-sequenced MIDI track and are ready to be processed. Why is that important? Why not leave it to the mixing engineer? Because it’s NOT HIS JOB! Better yet, you’d be surprised to find out that most engineers deliberately charge extra for editing, for that exact reason. Another reason not to leave it to the mixing engineer is that you recorded it; you know how it sounds and how it should sound. You decide how much to quantize, where and how the edits should be made – the choice (and control) IS YOURS. Don’t leave such an important decision to the mixing engineer.
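If the word “quantize” is new to you, the idea is simple enough to sketch in a few lines. This is a toy illustration, not what any DAW literally runs – the function name and the `strength` parameter are my own invention – but it shows what snapping sloppy hit times toward a grid means:

```python
# Toy sketch of drum quantization: snapping recorded hit times (in beats)
# toward the nearest grid division. "strength" is a hypothetical knob:
# 1.0 snaps fully to the grid, 0.5 moves hits only halfway, keeping some
# of the human feel. Real DAWs do this (and much more) under the hood.

def quantize(hit_times, grid, strength=1.0):
    quantized = []
    for t in hit_times:
        target = round(t / grid) * grid          # nearest grid line
        quantized.append(t + strength * (target - t))
    return quantized

# A sloppy kick pattern, slightly off the 8th-note (0.5 beat) grid:
hits = [0.02, 0.49, 1.03, 1.46]
print(quantize(hits, grid=0.5))                  # → [0.0, 0.5, 1.0, 1.5]
print(quantize(hits, grid=0.5, strength=0.5))    # 50% quantize: halfway there
```

The point of the `strength` knob is the same judgment call I’m talking about above: how much to quantize is a production decision, and it’s yours.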
* Mistake #1 *
I sent it to Forrester without choosing the snare sample, basically putting the decision (the sound of the snare) in his hands. The reason was, I couldn’t find a sample I was comfortable with and wanted to see what he would come up with. I felt I could trust him and that was enough for me, but it’s not a valid excuse. Next time I WILL choose and print MY sample BUT will always include the MIDI file, just in case.
The bass is sequenced, meaning I played it myself on a MIDI controller. Unfortunately we didn’t have time to record the real thing. In this case, I sent Forrester the bass channel printed with distortion (see mistake #2). A bass will always be (at least) two channels – DI and distortion. I don’t care if the studio you tracked at had the most amazing Fafner – make sure you always have a clean DI channel! You’d be surprised how many engineers will choose to use just the DI when it comes to bass.
* Mistake #2 *
I lost the original DI track (including the MIDI track) due to a technical error and, being lazy, I decided to leave the printed track instead of re-sequencing it. That’s not a valid excuse because, as you can hear, it does sound rather diarrhea-ish.
My arch-enemy while working on this EP.
Let’s take a step back. If you didn’t track a DI – TRASH IT AND RECORD AGAIN! I don’t care if you had a modded Randall SATAN with the sexiest oversized 4×12 Mesa cab – if there’s no DI, TRASH IT and redo the guitars, especially if you’re just starting out and don’t have much recording and studio experience. I’d love to expand and talk about audio sources, the performance, the tools and the equipment – but I’ll save that for my next article.

For now, I’ll talk about my own sources, played by the amazing Alon Tamir. It pains me to say, but we tracked using a very cheap Jackson guitar that went out of tune every 10 bars (on a good run) – yet here’s a great example where the guitar absolutely sucks and still sounds great, thanks to an amazing player who struck hard and with the right (aggressive) picking. If you don’t know this already – yeah, a great guitar tone is mostly the player’s fingers and playing. So, thanks to Alon’s amazing playing, the DI sounded great in every amp sim I used. The problem was, I didn’t like the sims themselves. I tried them all until finally I tried Brainworx’s bx_megadual and loved it! Forrester got the DIs and my reamp with a note – use whichever you like, but this is the vibe and sound I’m looking for.

What I’m trying to say is – don’t leave the sound and vibe of the guitar tone to the mixing engineer. Even if it takes you a month, you’re the one who should find and set it, not him. The guitar tone is one of the most important things in a production; do you really think you should leave it to someone who’s not as connected to the song as you are?
Guitar sound examples. The first is what I gave Forrester to mix.
* Tiny tip: Never listen to just one guitar (in mono), always to the complete stereo image. *
Unfortunately the only microphone I had at the time was a Shure Beta57. Luckily, it sounds great “on” Ran Yerushalmy. As you can see, I gave Forrester just TWO channels (not including backing vocals), but if you’ve listened to the audio clip below, you probably noticed it’s not a single track/channel. Every part of the song (verse/chorus etc.) is made out of at least four different tracks, each produced and processed differently – summed to one group/bus/channel. Why? Because I’m the producer, I decide. I know how the vocals should and will sound. How much distortion will be in each section. How wide or narrow the image will be. My vocal processing took years to develop; there’s no DI/reamping here. It’s your job (and the producer’s) to know exactly how you want it to be – to know beforehand how it will sound and sit in the mix.
The rest of the gang.
Thanks to Jonathan Barak – a mixing engineer, a friend, and the man who mixed Distorted Harmony’s first album – I sum everything and make sure all my processing (distortion, delays, reverbs etc.) is printed and ready to mix. As with the vocals, it’s quite crucial to know how it will sound and sit in the mix; you don’t want an awesome-sounding reverb to overwhelm and mask everything because you printed too much of it. As you can see, I like to divide the gang into Keys, Synths (which usually have sub-sections), Orchestra etc. It’s quite simple: the engineer will EQ the orchestra differently than the piano (for example) – if you printed the two together, you’ve ruined both sources and fucked up the post-processing.
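To make the grouping idea concrete, here’s a toy sketch of what “summing to a bus” means. Stems are just lists of samples here, and the group names and gains are invented for illustration; the point is that each group stays its own file, so the engineer can still EQ the orchestra and the keys independently:

```python
# Toy sketch of summing printed stems into group buses, sample by sample.
# The grouping (Keys vs. Orchestra) mirrors the article's advice: sum
# related sources together, but never print unrelated groups into one file.

def sum_stems(stems, gains=None):
    """Mix several equal-length stems into one bus with optional per-stem gain."""
    if gains is None:
        gains = [1.0] * len(stems)
    bus = [0.0] * len(stems[0])
    for stem, gain in zip(stems, gains):
        for i, sample in enumerate(stem):
            bus[i] += gain * sample
    return bus

# Piano + Rhodes printed together into a Keys bus...
keys_bus = sum_stems([[0.1, 0.2, -0.1], [0.05, -0.05, 0.0]])
# ...while the orchestra is summed to its OWN bus, never into the keys:
orch_bus = sum_stems([[0.2, 0.0, 0.1]], gains=[0.8])
```

Once piano and orchestra are added into the same list of samples, there is no way to pull them apart again – which is exactly why printing them together ruins both sources.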
Keys and Orchestra
Pads and Arps