How Far I’ve Come… Part Two

Hello again! Time for another trip down memory lane!

In the last blog, I tore apart my first ever recording. Being my first ever mixing experience, there were a lot of negatives, and I spent a lot of time pointing them out. In this blog I will focus more on correcting mistakes. Unfortunately, I couldn’t locate the final Pro Tools session for one of the two tracks, but I’ll be using the other to show what I did, and what I would do differently now.

STOCKHOLM SYNDROME

Oh boy…

For the next recording task, I was in a group with Sam Bidgood and Tom McPhee. After working with Graham Coe in my first recording venture (and since Tom, Sam, and I weren’t that good at singing or guitar), I asked him to help with this project. Our main mutual interest was Muse, and we soon began churning out some covers. The two tracks in this blog were made for the two technology modules in first year, one in each semester. The first one was chosen after we both said it at the exact same time; it was fate…

Rather than critiquing each individual instrument in detail and basically repeating everything I said in the last blog, I’m going to dive right in and sum up the overall mix:

  • Drums are still not sitting quite right in the mix. The snare sounds too distant and gets lost too often in loud sections. The cymbals sound too ‘roomy’. The kick and toms are also pretty ‘boomy’ but lack proper low end. A 60Hz tone side-chain gated to the kick would have done wonders! EQ could have been used to cut undesirable frequencies and give the kit more punch, while time-based effects could have made the drums sound larger without just mixing in more of the room. Parallel compression could also have been used to give the kit a ‘breathing’ sound and make it bigger.
  • Vocals are okay in terms of compression – just okay – but they are far too loud, especially with the harmony parts. EQ’ing could also have been used to make them sit on top of the music without just boosting the overall volume. More ‘brilliance’ (16kHz) to give them some sparkle and less ‘mud’ (250Hz) would have cleaned them up. Time-based effects and harmonic distortion could have added some size and character.
  • Guitars sound messy, especially alongside the keys and bass, due to them all being distorted. Again, EQ’ing could have carved out spaces for each instrument to achieve greater clarity and stop them fighting for space in the frequency spectrum. They also sound very ‘roomy’; a room mic was used and should not have been as loud in the mix.
  • The bass seems to get in the way of the kick and the guitars. This could have been avoided with EQ cuts and side-chain compression triggered by the kick (there’s a quick sketch of how that works after this list).
  • The keys in the chorus could have been EQ’ed to make them come through more without the level having to be raised as much.
  • The guitar feedback at the end was a pretty cool addition, but slight phase issues with the mics used meant it wasn’t as epic as it could have been. Aligning the phase of the mics using delay would have made them sound even better!
  • There is much more dynamic movement than in ‘Mountains’, although some of the changes are too severe and quite messy.
  • Overall, an improvement on ‘Mountains’. Fewer things get lost in the mix in louder sections (although some still do!) and clarity is a bit better. Still a way to go though!
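To make that side-chain idea a bit more concrete, here’s a rough sketch of what a side-chain compressor is actually doing under the hood. This isn’t a Pro Tools plug-in – just a toy Python/numpy version, assuming mono tracks stored as float numpy arrays at the same sample rate, with made-up function names:

```python
import numpy as np

def envelope(x, sr, attack_ms=5.0, release_ms=80.0):
    """Follow the level of a signal: rectify it, then smooth it with
    separate attack and release times (a basic envelope follower)."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = att if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def sidechain_duck(bass, kick, sr, threshold=0.2, ratio=4.0):
    """Compress the bass using the KICK's level as the trigger, so the bass
    dips out of the way every time the kick hits (tracks must be equal length)."""
    env = envelope(kick, sr)
    gain = np.ones_like(bass)
    over = env > threshold
    over_db = 20.0 * np.log10(env[over] / threshold)   # how far over the threshold, in dB
    gain[over] = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return bass * gain
```

The same trick with a gate instead of a compressor is the ‘side-chain gated 60Hz’ idea: a constant 60Hz tone that only opens up when the kick hits.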

BUTTERFLIES AND HURRICANES

Safe to say I had definitely discovered compression…

This project featured Graham Coe, Tom McPhee, and Martin Stewart. Before beginning this mix, Graham and I agreed it would be a great idea to try and contact some of the people who worked on the track. The production of the original is pretty staggering, so some pointers from people who actually worked on it would be ideal! Engineers Paul Reeve and John Cornfield emailed back with some very detailed accounts of what was done in the recording process. The questions we asked were very focused on the recording; being n00bs, we didn’t realise the mixing process is probably (arguably) more important than the recording itself (although a good recording can help the mixing process massively). Paul Reeve warned us that recording string parts one at a time would massively affect the quality of the finished product. Did we listen? Of course not, we were n00bs!

Right, same drill:

  • Drums still aren’t right! The snare is over-compressed and sounds squashed. The overheads have a nasty boost at high frequencies and sound harsh when the crashes are hit. The toms are still ‘boomy’. The kick is lost for a lot of the track; EQ scoops and a side-chain gated 60Hz could have brought it through more. It’s all still quite ‘roomy’ sounding; reverbs and delays could have added space instead of just more of the drum booth! Again, parallel compression would have worked wonders in boosting the power of the kit.
  • Vocals are pretty compressed. This isn’t necessarily a bad thing, but coupled with the MASSIVE boost in sibilance (for some reason I liked it, what was I thinking?!) they sound very tinny. EQ has been used creatively here, but not very tastefully; the boosts are too severe and make the vocals sound cheap and nasty. Harmonies sit nicely at points, but are generally too loud. The stab vocals (64 layered harmony tracks) come in too suddenly; reverb and EQ’ing could have smoothed their attack and allowed them to sit more comfortably in the track.
  • Keys could sound cleaner; attack and cut-off times etc. could have been adjusted so they were less abrupt. They sound messy when other things come in; again, EQ’ing could have been used to carve out a space and allow them to sit comfortably in the mix.
  • A total of 40 string tracks (violin, viola, and cello) were recorded one at a time, which caused major phase issues. Instead of sounding like a symphony orchestra as intended, they sound fake, with the multiple takes blurring into one indistinct mass. They are also completely dry, with no reverb or delay, which adds to the phasing problem.
  • Piano is barely noticeable during the verses. It is supposed to drive the track but instead is lost in the background. The solo piano sounds okay, but could have been a lot better with better EQ’ing and some parallel compression. It also seems to clip at a couple of points, which really ruins the atmosphere.
  • Bass guitar is pretty muddy and again gets in the way rather than supporting the lower frequencies. Again (blah de blah), EQ carving and compression could have helped if used effectively! (There’s a quick sketch of what I mean by EQ carving after this list.)
  • Overall a very ambitious project that could have been so much better if effects were used with more taste and less haste! (Sorry…) Dynamic movement was disappointing; some sections are supposed to explode in, but lack of automation etc meant the sforzandos lacked power.
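Since ‘carving with EQ’ comes up in basically every point above, here’s a quick sketch of what I actually mean. It’s an illustrative Python/scipy version of a standard ‘cookbook’ peaking EQ, not anything from the real session, and the track names are hypothetical:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, sr, freq, gain_db, q=1.0):
    """Standard 'cookbook' peaking EQ. A negative gain_db cuts a bell-shaped
    notch around freq; a positive one boosts it."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * freq / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A])
    a = np.array([1.0 + alpha / A, -2.0 * np.cos(w0), 1.0 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# Carving example: cut some 250Hz 'mud' from the vocal and the guitars,
# leaving that region clearer for the bass and the low piano.
# vocal  = peaking_eq(vocal,  44100, 250, -4.0, q=1.2)
# guitar = peaking_eq(guitar, 44100, 250, -3.0, q=1.0)
```

The point isn’t the exact numbers – it’s that cutting a pocket in one instrument beats endlessly boosting another.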

So! I was getting a little better by the end of first year, but as you can tell by my generally negative comments I was still a bit of a n00b. I still had to learn about enhancing a kit sound by layering samples with the acoustic take, and about parallel compression. In general, my lack of knowledge with EQ meant I was boosting harshly to get things to cut through rather than carving out spaces for instruments to sit together.

I had some way to go, although at least I was showing ambition!

David

How Far I’ve Come… Part One

Recently, I’ve been thinking back to first year and realising what a massive n00b I was with recording and mixing.

I’ve dug up some old projects that I mixed (if you can call it that) in first year, from my first ever recording experience to an ambitious 172-track project! The following blogs will tear them to pieces, analysing what I did wrong when mixing these tracks. I could write an essay on what was wrong with each from the moment the first microphone was plugged in, but I wouldn’t want to bore my faithful readers…

So let’s get started!

MOUNTAINS

This was my first ever recording experience, ever! Rebecca Shearing, Cara Nelson, Jo Paterson, Neil Mackenzie, Graham Coe, Sam Thorne, and myself were put in a group and we decided to record ‘Mountains’ by Biffy Clyro. Dave Hook let us bright-eyed first years loose in the studio with only basic knowledge of how to record in Pro Tools, and this happened:

Where to start…

Starting with the drums, which were mic’ed up pretty terribly (I’m pretty sure Greg Smart came in after we were done and said “Time to mic this kit properly!”), it is pretty obvious we hadn’t discovered compression and hadn’t a clue about EQ! The drums lack punch and space; they almost sound like they were recorded with a single room mic (which they might as well have been!). They get lost in the louder parts, and only the hi-hat really cuts through during the verses.

N00b error #1 – The drums were pretty much dry: no compression, no reverb, and very little knowledge of EQ’ing resulted in a weak, empty-sounding kit.

Next, if I remember right, we tracked guitars. To say they sound a bit weak in the mix is an understatement. I have no memory of what mic was used, but the guitars sound distant and lack power, so the mic (or mics) must have been a bit far away from the amp! Again, with only EQ used (and not very effectively), the guitars lack presence and power. They are meant to drive a rock track but instead sound empty and unfulfilling. Mix-wise, they are lower than, and occupy a similar frequency range to, the piano and vocals. This means they struggle to be heard and the track sounds pretty feeble as a result! The solo also lacks presence and is pretty disappointing. (The mixing, not the playing!)

N00b error #2 – Not carving out spaces for the instruments that are meant to be the ‘focus’ of the track can result in the track sounding imbalanced and messy.

Bass guitar was recorded primitively by shoving the kick drum mic in front of the bass amp and hoping for the best. The pre-amp level mustn’t have been touched, because the signal is very low and as a result really struggles to be heard in the mix. No compression meant the bass track was very uneven and messy – see n00b error #1!

Piano was recorded by dragging the electric piano across the studio and jacking it straight into the Behringer pre-amp. The signal was pretty noisy, and since it was captured in mono it lacks any sense of space and sounds dull and thin. We didn’t know about instrument tracks in Pro Tools, so we didn’t know how to use the MIDI keyboard provided! The piano ‘solo’ is way back in the mix, and again is really quite disappointing. (The mixing! Not the playing!)

N00b error #3 – Lack of research into recording methods narrowed our options and resulted in a poorly recorded audio track.

Vocals were recorded last. Again (this seems to be a theme here…), with poorly used EQ, no compression, and no time-based effects, the vocals are dull and weak and get lost in louder sections. The harmony vocal is barely audible for most of the track; you could be forgiven for not noticing it at all!

A positive (I know!! Something good!!!) to note about the vocals is the double-tracked harmony at the end. This was my first ever producer-like idea, and although executed quite poorly (there are some phase problems, and they could have been wider with some reverb added), it is probably the thing I like most/dislike least about the mixing!

So, overall:

  • The track is messy, weak, and dull due to little or no EQ or compression. 
  • The levels are not fitting for the genre. Drums are far too quiet and get lost. The piano is more prominent than the guitar, whereas the guitars should be driving the track. Vocals get lost in louder sections as a result of no compression being used.
  • No automation of levels meant the track has very little dynamic movement (there’s a quick sketch of what simple level automation boils down to after this list).
  • The track has little width to it, with only guitars being panned to give any sense of space at all.
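Level automation, by the way, is nothing mysterious – it’s just a gain envelope drawn over time. Here’s a toy Python sketch of the idea (the track names and times are made up):

```python
import numpy as np

def automate_volume(track, sr, points):
    """Apply volume automation to a mono track. 'points' is a list of
    (time_in_seconds, gain) breakpoints in ascending time order;
    gains are linearly interpolated between them."""
    times = np.array([t for t, _ in points])
    gains = np.array([g for _, g in points])
    t = np.arange(len(track)) / sr
    return track * np.interp(t, times, gains)

# e.g. ride the guitars up by about 3dB (x1.41) going into a chorus at 60 seconds
# guitars = automate_volume(guitars, 44100, [(0, 1.0), (58, 1.0), (60, 1.41)])
```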

But you couldn’t really expect a lot better from what was, for most of us, our first recording experience! Everyone has to start somewhere; little 17-year-old David really didn’t know what he was doing, and it shows!

Next time I will tear apart the other two uni projects I completed in first year, ‘Stockholm Syndrome‘ and ‘Butterflies and Hurricanes‘.

David

Delay-ay-ay

Now that I have your attention with that frankly hilarious joke, let’s learn about delay without any, eh, further ado…

Delay is an audio effect which records an input signal, stores it, and then plays it back after a set period of time. Essentially an echo of the original signal! It can be a one-time ‘slap-back‘ delay, or it can repeat multiple times.
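For anyone who likes to see it written down, here’s a minimal Python sketch of a digital delay, assuming a mono track stored as a numpy array. Set feedback to zero for a single slap-back, or raise it for decaying repeats:

```python
import numpy as np

def delay(x, sr, time_ms=120.0, feedback=0.35, mix=0.4, max_repeats=12):
    """Store the input, play it back time_ms later, and blend it with the dry signal.
    feedback controls how much each repeat feeds the next one."""
    d = int(sr * time_ms / 1000.0)
    out = np.zeros(len(x) + d * max_repeats)
    out[:len(x)] += x * (1.0 - mix)              # dry signal
    echo = x * mix
    for n in range(1, max_repeats + 1):
        out[n * d : n * d + len(x)] += echo      # each repeat lands one delay time later
        echo = echo * feedback                   # and comes back a bit quieter
        if feedback == 0.0:
            break                                # slap-back: one echo and stop
    return out
```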

The first delay effects were achieved using tape loops improvised on reel-to-reel magnetic recording systems. The delay time was set by the length of tape between the machine’s ‘record head’ and ‘playback head’ (its write and read heads), which were a couple of inches apart, and by the tape speed. Longer loops were made by running the tape around mic stands and splicing the ends together into an endless loop, meaning the signal would go round and round, constantly re-written by the record head and picked up a moment later by the playback head.
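To put rough numbers on it: the delay time is simply the head gap divided by the tape speed, so with the heads a couple of inches apart and the tape running at a typical 7.5 inches per second you’d get roughly 2 ÷ 7.5 ≈ 0.27 seconds of delay, or about half that (≈ 133 ms) at 15 ips.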

Still with me? Good!

Delay has been around for a while; artists such as Elvis and Jerry Lee Lewis used it to add space to their vocals and make them sound bigger, and Pink Floyd used it to create a massive wall of sound. (That’s one of my favourite guitar solos by the way, it doesn’t get much better than that!) Time-based effects are a great way to make individual instruments or elements fill out a track and sound huge.

A famous user of delay is guitarist Brian May of Queen. His solos fill stadiums with the one guitar by using (among other effects) delay to create three-part harmonies with himself! This is the trademark Brian May sound:

In my own mixing, I had the perfect opportunity to use delay recently while recording a cover of ‘Oh Darling!‘ by The Beatles. Although the original track doesn’t actually use delay on the main vocals, I felt some slap delay would add to the vintage feel of the track!

Here is the vocal track with delay added:

This mix isn’t finished, but you can (hopefully) tell I’m going for a very vintage-sounding delay. Here’s how I achieved that sound:

Voc Delay

I used the lo-fi plug-in to get it sounding old and worn, pretty dirty with harmonic overtones (like an analogue delay unit might be), before adding delay, reverb, and then EQ to take away the high and low frequencies. This left an unobtrusive, warm-sounding delay, a bit like the classic analogue delays used in the 60s. (I’m starting to believe my own bullspit…)
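For the curious, here’s a rough Python/scipy stand-in for that chain. It’s not my actual plug-in settings, just the idea: saturate the send, delay it, then roll off the extreme highs and lows (I’ve left the reverb stage out for brevity):

```python
import numpy as np
from scipy.signal import butter, lfilter

def vintage_vocal_delay(vocal, sr, time_ms=110.0, feedback=0.3, drive=3.0):
    """Return a 100% wet 'vintage' delay for an aux track: dirty, dark, and unobtrusive."""
    dirty = np.tanh(vocal * drive) / np.tanh(drive)   # gentle harmonic distortion ('lo-fi' stage)
    d = int(sr * time_ms / 1000.0)
    wet = np.zeros(len(vocal) + 6 * d)
    echo = dirty
    for n in range(1, 7):                             # a handful of decaying repeats
        wet[n * d : n * d + len(vocal)] += echo
        echo = echo * feedback
    b, a = butter(2, [200.0 / (sr / 2), 4000.0 / (sr / 2)], btype="band")
    return lfilter(b, a, wet)                         # EQ away the lows and highs
```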

Here is the delay by itself, so you can hear what I’m on about:

This was influenced by The Beatles’ producer George Martin, who used many prominent vocal delays in Sgt Pepper’s Lonely Hearts Club Band. The delay in this album cuts through well, but is unobtrusive and acts to support the vocal sound rather than fight against it.

Here it is mixed in with the guitar track. The mix is far from finished, but I reckon I’m heading in the right direction!

Before this year I never really used delay, sticking with reverb to add space to vocals and guitars. After experimenting with delay in this mix however, I have found a new favourite toy! You can expect a million more things with slap-back delay in the future…

Pro Tip: Don’t add delays or other time-based effects directly to the main vocal track; create an auxiliary track and work with them from there! This allows greater control over effects levels.

ALSO check out this video for another example by Dave ‘The Dude’ Pensado! “Hey guys, I was cruisin around Gearslutz…

David

Stem Mixing/Submix

Hey there! I wanted to talk a bit about Stem Mixing, a method of mixing I use in practically every mix!

Stem Mixing (or sub-mixing) is the process of grouping audio tracks and setting their outputs to auxiliary tracks called ‘stems’. For example, sending all the drum tracks to a ‘drums’ auxiliary, sending all the vocal tracks to a ‘vocals’ auxiliary track, and so on! This gives you greater control over the level of each instrument, letting you quickly and easily change the level of, say, all the guitar tracks at once (there’s a bare-bones sketch of the idea a little further down). Going even further, you can group different instruments together as aspects of the track, like Dave Pensado’s method:

  • All Drums
  • Music (Keyboards and Guitars)
  • All Vocals
  • Bass
  • All Effects
  • Hi-End (Cymbals)

In this video, Dave shows how stems can give you greater control over different aspects of the track. Changing the levels of these aspects can greatly affect the overall feel of the track. Here are some examples:

  • Too much bass can make the kick sound weaker and give the track a more ‘urban‘ sound, whereas too little can leave it sounding empty.
  • Too little guitar can leave the vocals and drums exposed and make the track lose its power – sounding more ‘pop’ than ‘raw’ – whereas too much can overpower the vocals and leave them sounding weak.
  • Too little high-end can leave the track sounding dull, whereas too much can overpower the brightness of other instruments and get too ‘washy‘.

It is important to realise, however, that every genre suits a different balance of levels. Hip-hop and urban music suit more low end, rock suits more guitars, acoustic music suits lots of vocals. Remember though! There is no right and wrong! Everything is subjective, and every mix is different!
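Here’s a bare-bones Python sketch of what the bussing boils down to: group tracks into named stems, trim each stem with a single gain, and sum the stems to the mix bus. The stem names and gains here are just examples, not a recommended balance:

```python
import numpy as np

def mix_stems(stems, gains_db=None):
    """stems: dict mapping a stem name to a list of equal-length mono tracks.
    Each group is summed into one 'stem', trimmed by its gain in dB, then all
    stems are summed to the mix bus."""
    gains_db = gains_db or {}
    mix = None
    for name, tracks in stems.items():
        stem = sum(tracks) * 10.0 ** (gains_db.get(name, 0.0) / 20.0)
        mix = stem if mix is None else mix + stem
    return mix

# e.g. pull every guitar down 2dB in one move instead of track by track
# mix = mix_stems({"drums": drum_tracks, "guitars": guitar_tracks, "vocals": vocal_tracks},
#                 {"guitars": -2.0})
```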

Pro tip: When mixing, use other tracks in the same genre or style as the one you are working on as a reference. It is easy to get lost in your mix and lose perspective!

I find stem mixing makes the mixing process much easier, especially in the latter stages where you’re trying to polish everything off, having already got the individual tracks sounding and interacting how you want them. It also helps in mixes with a large number of tracks, breaking the mix down into more manageable chunks!

Here’s an example of how I have broken a track down:

This mix had 38 audio tracks, which was almost unworkable in terms of mixing everything one by one. After getting each individual track sounding good (EQ, compression, etc.) and interacting how I wanted, I grouped them into categories to simplify the mix into more manageable chunks. From there, I could add plug-ins to the stems and process multiple tracks at once.

Every engineer would have done this differently! I chose to break things down this far and no more because I still wanted control over some details like the cello solo and double bass levels. Always organise a mix so you are comfortable with it, not just because some blog told you to do it some way! You are likely to get better results when you are happy!

Another use of stem mixing is in the world of video games, as seen in the Amon Tobin ‘Infamous’ soundtrack. In this video, the audio is broken down into stem form and then organised into low, medium, and high intensity elements. These elements are brought in and faded out during gameplay depending on where the player is and what they are doing at any specific time. During calmer periods, only the low intensity elements are present. As the player encounters something, the medium intensity elements are faded in. Finally, at high-octane moments of gameplay, the intense elements are brought in to heighten the player’s sense of involvement. This adds excitement to fights and chases, while adding atmosphere and anticipation to less dynamic periods in the game.

This is a really cool way of thinking about stems. They don’t just need to group all the drums, or all the music; they can group any category of tracks together for ease of mixing!
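As a toy example of that adaptive approach (entirely made up – not how the ‘Infamous’ system actually works), imagine three pre-rendered stems and a single 0–1 ‘intensity’ value coming from the game:

```python
import numpy as np

def game_mix(low, med, high, intensity):
    """Blend low/medium/high-intensity stems from an intensity value between 0 and 1.
    The low stem is always there; the others fade in as the action picks up."""
    med_gain = float(np.clip((intensity - 0.3) / 0.3, 0.0, 1.0))
    high_gain = float(np.clip((intensity - 0.7) / 0.3, 0.0, 1.0))
    return low + med_gain * med + high_gain * high

# calm exploration: game_mix(low, med, high, 0.2)  -> low stem only
# full-on chase:    game_mix(low, med, high, 1.0)  -> everything in
```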

David

Parallel Compression

Parallel compression (or the New York compression trick) is a technique used to compress an audio track while avoiding squashing its attack and dynamics. This is a quick guide on how to use it to breathe some life into a drum kit!

Here’s the original un-compressed drum track:

Firstly, send the tracks you want compressed to an auxiliary track via a bus (as shown below).

Then, add compression to the auxiliary track. This means the original un-compressed track plays alongside the compressed copy. The original attack of the louder peaks is retained, but the quieter parts are subtly (or not so subtly, depending on how the compression is applied) raised. Here’s the compression I used below – all buttons in, so pretty extreme!

This technique can give a drum kit a sort of breathing sound, really livening up a dull-sounding kit. Here’s the drum track with the compressed signal blended in with the original take:

If you can’t quite tell the difference, here it is changing between the two:

(I’ve been quite subtle with blending in the compressed track here, but this was done to taste. Experiment with different attack and release times as well as threshold and ratio to achieve the sound you want!)
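If you’d like to see the ‘maths’ of it, here’s a toy Python version of the same trick, assuming the drum bus is a mono float numpy array. The compressor is deliberately crude – the point is just dry signal plus a squashed copy blended together:

```python
import numpy as np

def follow(x, sr, attack_ms=1.0, release_ms=100.0):
    """Basic envelope follower: rectify, then smooth with attack/release times."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, level = np.zeros_like(x), 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = att if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def parallel_compress(drums, sr, threshold=0.05, ratio=10.0, blend=0.3):
    """Squash a copy of the drums hard ('all buttons in'), then tuck it
    underneath the completely untouched original."""
    env = follow(drums, sr)
    gain = np.ones_like(drums)
    over = env > threshold
    over_db = 20.0 * np.log10(env[over] / threshold)
    gain[over] = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    squashed = drums * gain
    return drums + blend * squashed        # dry path keeps all of its original attack
```

Raising blend here is the same move as pushing up the aux fader in the session.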

Pro tip: As well as compressing an auxiliary track, you can try adding other effects such as distortion or saturation. Blend them in with the dry track to add character to instruments like bass guitar or synths!

David

Coming soon…

Thought I’d mention some things I’m going to be doing in the near future!

Recording

Recently I’ve been recording with Dawn Coulshed in the studio. We originally recorded two of her tracks, ‘The Door’ and ‘Paper Butterflies’, a couple of weeks ago, but went back and re-recorded all the parts with some expensive microphones that made a huge difference, as they really suited her voice and guitar (Neumann > AKG). Dawn also added some really nice harmonies to ‘Paper Butterflies’ that lifted the dynamic of the track. I’ve done a quick mix of these but need to polish them off over the next few days before going back into the studio on Thursday. Look out for some uploads soon!

I’ve also been in talks with guitarist Stephen Maxwell about possibly recording a tune we played back in our 3rd year performance exams. Since this tune has a lot of improvisation and sections of indefinite length, we’ve decided it would be best to play it through together rather than record it in separate takes. We would probably play it through until I got a drum take I liked (which could take a while with my solo…) and then overdub bass, guitar, and horns on top. We have yet to sit down and discuss our recording method properly, but will probably do so on Tuesday when we have a rehearsal.

We have also discussed a more stripped-back recording of ‘Oh Darling!’ by The Beatles featuring Emily Kelly. Maxwell and Emily played this track together for Emily’s vocal exam, and it would be a great track to record!

Playing

I am now part of the Tinderbox Orchestra, which rehearses every Sunday in Summerhall. It is a contemporary youth orchestra which plays modern styles of music, writes original material, and collaborates with top local bands and artists. We play classics such as ‘Fire‘ by Jimi Hendrix and ‘Paranoid Android‘ by Radiohead, as well as songs by Edinburgh artists such as Mike Kearney and North Atlantic Oscillation.

Elsewhere, I will be playing with Stephen Maxwell and Ally McLachlan in the Jazz Bar on December 13th for the Napier Songwriter Showcase to raise money towards the singers booking venues for their exams next year. Stay tuned for updates…

 

David

Haas Effect

I’ve been on the Sound on Sound website quite religiously recently, and have discovered a few interesting techniques that I may use in my future mixes. One of these is the Haas Effect, named after Helmut Haas.

Basically, it is a way of changing the listener’s directional perception of a signal without relying on panning alone. The effect is achieved by duplicating a mono signal onto two separate tracks, panning one left and one right, and then delaying one side by a very small amount (3-7 milliseconds). Since one side (whichever isn’t delayed) reaches the listener first, the brain believes the sound is coming from that direction. After about 10 milliseconds of delay the effect wears off and the sense of a definite direction is lost, although the signal now sounds stereo rather than mono!
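Here’s a tiny Python sketch of the trick, assuming a mono numpy array – nothing clever, just the duplicate-pan-delay recipe described above:

```python
import numpy as np

def haas_widen(mono, sr, delay_ms=5.0):
    """Duplicate a mono track into left and right, delaying one side slightly.
    The un-delayed side reaches the listener first, so that's where the sound
    appears to come from; bigger delays just read as stereo width."""
    d = int(sr * delay_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(d)])    # arrives first
    right = np.concatenate([np.zeros(d), mono])   # same signal, a few ms late
    return np.stack([left, right], axis=1)        # (samples, 2) stereo array

# stereo = haas_widen(guitar, 44100, delay_ms=5.0)   # image pulls towards the left
```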

This video explains it very well, and with a Canadian accent. The guy clearly knows what he’s talking aboot…

 

David