
“The Gold Chain” – The Definitive Guide To Achieving Modern Trap (Travis Scott-Like) Vocals

 

In the early days of Hip-Hop, way before ‘Trap’ took the music world by storm and dominated the airwaves, there was virtually nothing to protect us from the inevitable assault on the ears that was produced when rappers began to venture outside their area of expertise and sing their hooks in an attempt to secure a spot on the Billboard charts.

 

Sure, when an artist or vocalist was off by a number of cents or a few semitones, technology could ultimately swoop in and save the day… but what about the people who didn’t exactly have an exceptional voice (or even a melodic bone in their body) yet possessed a passionate desire to express themselves through music?

 


Enter Auto-Tune.

 

Believe it or not, pitch correction was actually a thing before T-Pain used it, but it wasn’t until he revolutionized Antares Audio Technologies’ Auto-Tune effect that people truly realized the magical potential it held. Once the secret was revealed, we witnessed an explosive emergence of artists (amateurs and icons alike) who started utilizing this effect with the hopes of enhancing their sound and gaining recognition.

 

Side Note: Auto-Tune made such an enormous impact on the music world that when one of the industry’s leading plugin developers (iZotope) released their own version of this effect, which was endorsed by and modeled after T-Pain himself, it failed miserably and had to be discontinued within the first year; proving that (at the time) nothing could compare to the results the legendary Auto-Tune generates.

 

As time passed, Auto-Tune was no longer viewed as unique or different, but rather a vital necessity that was utilized in (almost) every popular, chart-worthy song you encountered; not only pertaining to Hip-Hop or R&B, but EDM, Pop, Country, and even Rock as well. With the new generation of artists, it’s become SO customary to apply Auto-Tune when recording that it’s usually set up in the beginning of a session and forgotten about, regardless of how bad or good the person’s voice is.

 

I’ve even met musicians who weren’t capable of satisfactorily/appropriately recording a track without Auto-Tune because they didn’t recognize (or couldn’t identify with) their own voice when it was totally stripped of pitch modulation. Others were even under the impression they HAD to use it, otherwise their listeners would think they were ‘cheap’ or unprofessional.

 

It’s gotten to the point where the most sought-after artists (such as Travis Scott, Chris Brown, Kevin Gates, and Migos) are acknowledged and praised more for the way their vocals are processed than the actual lyrics themselves. Most people assume that these major players’ Vocal Chains (in addition to the basics, of course) ONLY consist of some pitch correction, distortion, reverb, and delay, which might have been accurate about 5 years ago… but not anymore.

 

Today, I’m going to divulge some highly esteemed secrets that are capable of taking the most commonly known, frequently used vocal effect in the urban (and now pop) genre to the next level; manipulating it to our advantage and transforming it into a ‘vocal synthesized’ effect, distinct from your average Vocoder or Talk Box.

 

I’ll demonstrate how to successfully achieve this effect by utilizing various Formant, Synthesis, and Pitch Correction techniques to completely/methodically sculpt vocals into the synthesizer-like effect that’s heard ALL over the radio, loved worldwide, and (presumably) found in your favorite song of today.

 

Keep in mind that everyone has a different and unique voice, so naturally the specific vocals you’re working with will determine which settings should be applied. Having clarified that, the concepts/processes are essentially the same and relatively simple to comprehend.

 

DISCLAIMER: We are NOT going into intricate detail about the Dynamic Processing Chain (although I am slightly touching on it near the end of the lesson) as that subject can easily be found all over the internet and varies greatly from person to person, voice to voice; research what best fits your specific parameters. I will, however, briefly go over some of the extra effects (e.g. Flangers, Doublers, Phasers, Delay, Reverb, Distortion, etc.) as these ‘effect processors’ are optional and are contingent upon the particular song you are working on.

 

VOCAL PREPARATION

 

First off, if you’re working directly with the artist whose voice you’re intending to apply this effect on (or you are the artist), I highly suggest you thoroughly prep yourself for this mix. Meaning, if you’re ‘tracking’ the vocals, be sure to obtain multiple alternative takes.

 

We’ve all met or know of rappers who insist on getting the verse complete in ‘one take’ (like Jay-Z, for example), which in essence is fine… but when attempting to attain this particular effect, the more material you have to work/experiment with, the better. Of course, if you’re mixing and/or mastering an artist’s stems and that’s all you have at your disposal, you’ll have to make do, but try to get as many takes as you possibly can, as it will ultimately make your job a lot less frustrating.

 

It is a proven fact that nobody in the entire universe is capable of recording or emitting the same EXACT noise (or verse) twice, regardless of their musical skill level. This may sound like a ‘bummer’ but, when you compile various takes, it’s actually these minute differences that are ideal (for this distinct process) because, when they are used for ‘doubles’ or sent through synths and processors, they allow for incredibly prodigious results. I have ascertained (for this specialized style/process) that recording a few ‘mumbled’ takes serves as an invaluable asset to acquire during the mixing phase.

 

For example… try whispering a take, or make an alternate/different version of the hook (and/or verse) by taking an extremely ‘monotone’ take and pitching it either up OR down to create a harmonious, yet conflicting sound. Sometimes these takes work best when feeding them through a vocoder or pitch/formant shifting device, as we will be doing today.

 

NOTE: Be sure to have the artist record AT LEAST two sets of doubles and AT LEAST two sets of adlibs, so you have plenty of material to work with. However, if you are not working directly with the artist and only possess the stems or one take, not to worry – half the time in ‘trap’ music, the doubles are created artificially anyway, either by utilizing a plugin such as ‘Doubler’ by Waves or via the old-school method of doing it manually by hand.

 

In my experience, it’s somewhat difficult to procure a desirable (or even usable) sound when attempting to process rap vocals. I recommend you strongly urge the artist to ‘sing’ their verse/hook (regardless of skill) as it makes the processing job significantly less daunting. It’s exceedingly strenuous to ‘decode’ precisely what is being sent through a Vocoder, so even if the artist’s voice isn’t the greatest, try it anyway as it might end up being perfect for this anomalous process.

 

Now, decide which take is going to be the designated (main) take. Additionally, grab 2-3 alternate takes that you feel blend cohesively with the main verse you intend to process.

 

NOTE: Ideally, the ‘back-up’ takes should vary slightly from your main take, either by being less/more harmonious and/or by being performed with a different tone or demeanor.

 

PITCH CORRECTION

 

NOTE: When attempting to accomplish this, make sure to use a pitch correction plugin that offers this particular processing. Tools like Melodyne (and the like) focus primarily on detailed, note-by-note pitch editing, whereas tools like Auto-Tune and Waves Tune are exceptional 3rd party options to use when aiming to produce that classic ‘Auto-Tune effect’. The best part is, they’re fairly simple to understand and conquer.

 

Although Antares is the undisputed worldwide leader in vocal processing technology, that doesn’t mean it’s the ONLY one out there. This technology is also comparatively new – it was first deemed ‘popular’ after Cher’s 1998 hit single ‘Believe’, which is reputed to have used Auto-Tune, and it has since become prevalent (to more obvious or more subtle degrees) throughout popular, electronic-oriented music.

 

The iconic ‘T-Pain effect’ is known in the engineering world as Pitch Quantization (the process of transposing performed musical notes, which may have some imprecision due to expressive performance, to an underlying musical representation that eliminates this imprecision). It is achieved using a method called ‘Phase Vocoding’, which employs the FFT and, at times, even incorporates a little Granular Synthesis as well.
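
To make the ‘quantization’ half of that definition concrete, here’s a rough Python sketch (numpy assumed; quantize_pitch is a hypothetical helper, not something any plugin exposes). It snaps a stream of detected pitches to the nearest note of a chosen scale, which is the decision a hard-tuned Auto-Tune makes on every frame before the phase vocoder actually re-pitches the audio:

import numpy as np

# Hypothetical helper: snap detected pitches (as MIDI note numbers, e.g. from a
# pitch tracker) to the nearest note of a chosen scale. Real plugins wrap this
# decision in phase-vocoder resynthesis so the audio itself gets re-pitched.
A_MINOR = np.array([9.0, 11.0, 0.0, 2.0, 4.0, 5.0, 7.0])   # pitch classes of A natural minor

def quantize_pitch(midi_pitches, scale_pitch_classes=A_MINOR):
    midi_pitches = np.asarray(midi_pitches, dtype=float)
    # Every allowed MIDI note across ~11 octaves
    allowed = np.sort(np.concatenate(
        [scale_pitch_classes + 12 * octave for octave in range(11)]))
    # For each detected pitch, grab the allowed note at the smallest distance
    idx = np.abs(allowed[None, :] - midi_pitches[:, None]).argmin(axis=1)
    return allowed[idx]

# A slightly out-of-tune melody gets snapped hard onto the scale:
print(quantize_pitch([57.3, 59.8, 61.1, 64.4]))   # -> [57. 60. 62. 64.]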

 

When you put Auto-Tune on its most aggressive setting, it corrects the pitch the EXACT moment it receives the signal, producing a ‘robotic’ sound (reminiscent of a Synthesizer) with characteristics similar to Vocoders, Talkboxes, and the famous Roger Troutman (the T-Pain of his time).

 

Years ago, there was an extremely likely chance that the stock ‘correction’ plugin your DAW offered couldn’t duplicate the illustrious Auto-Tune effect, but thanks to a rapid increase in the number of people attempting to make music and the profound advancements of music technology, most DAWs now come equipped with a plugin that produces very similar results. Plus, there’s nothing wrong with utilizing the tools you already have, know, and are comfortable with.

 

Personally (when it comes to stock devices), I use Propellerhead’s Neptune in Reason because its quality is absolutely phenomenal. I also frequent FL Studio’s Pitcher (even though it’s not exactly ideal), as it does a satisfactory job at emulating the intended effect. Additionally, Logic includes a native pitch correction option and, although Ableton itself doesn’t ship a dedicated pitch correction plugin in the box, ‘Max for Live’ offers many acceptable alternatives.

 

AUTO-TUNE

 

#1: Find The Key Of Your Song

 

This step should be relatively easy to complete if you are, in fact, the producer and/or have the MIDI files at your disposal. Most people are not accustomed to Auto-Tune’s settings, so they instinctively leave the ‘scale’ on its default setting (‘Chromatic’) WITHOUT finding the correct key of the song first, which is a fatal mistake in this case and leaves much to be desired.

 

By leaving Chromatic on, you’re only telling Auto-Tune to take the notes that are NOT in tune and fix/alter them so that they ARE in tune; it’s not actually conforming your voice to match the key of the song, which is where setting the correct scale comes into play.

 

AUTO TUNE SCALE

 

The reason Auto-Tune is so highly renowned and considered so innovative is its ability to take notes that are sung (or rapped) in the incorrect key of the song and shift/transfer them to the nearest accommodating note, making your overall song sound supremely better.

 

With that being said… be extremely cautious not to accidentally (or purposely) disregard this step, as it’s realistically the MOST important/vital one!

 

If you can’t decipher what key the current song is in, try mapping out the scales. Start with the most common ones first; odds are, if you’re working with a Hip-Hop or Pop song, it will be in either a Major or Minor scale.

 

You’ll only need to map out the scale once, then move it up through the 12 notes. For example, map out ‘C Major’ and, if it’s not that scale but you believe the song is in a Major scale, highlight ALL the notes and transpose them up one note at a time so your root (bottom) note goes from ‘C Major’ to ‘C# Major’, then ‘D Major’ (and so on). Continue this process until you find the appropriate scale.
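
If you’d rather not click notes into a piano roll twelve separate times, here’s a tiny, hypothetical Python snippet that prints the notes of every Major key so you can check your song against them one root at a time (the exact same trial-and-error process, just written out):

# Print every Major scale, transposing the same shape up one note at a time.
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]    # interval pattern of the Major scale

for root in range(12):
    scale = [NOTES[(root + step) % 12] for step in MAJOR_STEPS]
    print(f"{NOTES[root]} Major: {' '.join(scale)}")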

 

If the scale you’ve mapped out has notes that sound ‘off’ or simply don’t fit the particular song, this plainly means that’s not the correct scale.

 

NOTE: You could try an alternate approach to locate the corresponding scale by using Auto-Tune itself. Simply enable Auto-Tune and start changing/altering its ‘scale’ settings until you feel it’s correcting/quantizing to the correct pitch. Although it takes the same amount of trial and error, if you’re not big on ‘music theory’ and/or composition, this technique may be slightly easier.

 

WARNING: When this step is not done correctly (usually due to it being overlooked, ignored, or disregarded as unimportant), it can be INCREDIBLY counterproductive, because it has the unfortunate potential to turn an ‘off key’ performance into an audible monstrosity capable of making even the most tone-deaf of artists and listeners cringe. So, before you start recording and/or mixing anything, make sure to identify the CORRECT key of the individual song you’re working with!

 

#2: Vocal Setting

 

The exact vocal setting you choose is completely dependent on the individual artist’s voice/voice characteristics, but I’ve discovered that by setting it to ‘Alto/Tenor Voice’ (regardless of what type of voice you’re working with), it tricks Auto-Tune into shifting the vocals UPWARD and ‘fills in’ the missing notes with a synthesized voice.

 

NOTE: I have personally come to the conclusion that this method comparatively works best but, in some instances, an opposing setting may sound better (depending on the artist), so I encourage you to experiment, play around with, and cater to the unique voice you’re working with.

 

#3: Set The Correction Speed to 0

 

By setting the ‘correction’ speed to 0, you’re enabling the correction to kick in the instant the pitch fluctuates/shifts notes. This is the primary function responsible for the ‘robotic’ effect that is heard all across the airwaves.

 

AUTO TUNE EFX

 

NOTE: If you are seeking a more natural, human element, turn the correction speed UP until you’ve reached the desired/targeted sound.

 

#4: Set The Correction Amount or Ratio All The Way Up (if applicable)

 

Depending on the specific plugin you’re using (at the time), the names of the parameters themselves may vary, but the concept is essentially the same. If you’re experiencing difficulty locating this parameter, it’s presumably because the plugin you’re utilizing doesn’t offer one; I wholeheartedly advise you to reference your manual, as that will surely answer any and all questions you may have.

 

WAVES TUNE

 

Regardless of which pitch correction plugin you’re currently utilizing (e.g. Auto-Tune, Waves Tune, or the like), you must bring the pitch correction speed down to 0 and turn any/all correction parameters UP. The key here is to make the artist’s song sound as ‘unnatural’ as possible by pushing the correction to its absolute limit.

 

REMEMBER: Auto-Tune was originally created to serve as a correction AID (not for the purposes we’re intending to use it for), so we must set it up properly in order for it to work the way we want.

 

At times, you may see a ‘humanize’ function (which will make it less robotic, and should only be used with experienced artists who know how to carry a tune), and if you want to keep somewhat of a ‘natural’ feel, I suggest you play around with it. The better the artist is WITHOUT Auto-Tune, the more leeway you’ll have, in regards to creativity and parameter manipulation.

 

FORMANTS

 

This next process I’m introducing is one that most people feel bears the closest resemblance to the effect we’re aiming for. It is accomplished by taking a duplicate track and/or the artist’s doubles and altering the formant; shifting it either up, down, or (in some cases) both.

 

Although there is nothing wrong with transposing the actual pitch itself (up or down a few octaves) to achieve our desired sound, it’s not normally what the expert engineers use. If you truly intend to master this stylistic effect, you’ll have to incorporate some sort of Formant Shifter plugin (which has become standard as of late). Luckily, a substantial amount of respectable Pitch Correction Modules have a Formant Shift feature built in, though I prefer to apply it in a separate process.

 

What exactly is Formant Shifting? Essentially, when you ‘formant shift’ vocals, what you’re doing is altering the TIMBRE of that particular voice/sound, not the pitch (although it may appear so). If you’ve ever used a plugin with ‘gender’ parameters to control/change the tone of vocals from male to female (or vice versa), you’ve used a form of formant shifting. You also may have encountered it in the form of a ‘formant filter’, which is fundamentally just a fancy, complex EQ method used to create vocal sounds (like vowels) from any sound source.
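
For the curious, here’s a rough Python sketch of the underlying idea, assuming numpy and librosa (shift_formants is a hypothetical helper and a heavy simplification of what commercial formant shifters actually do). It separates a smooth spectral envelope (the timbre) from the fine harmonic detail (the pitch), warps only the envelope along the frequency axis, and puts the two back together:

import numpy as np
import librosa

def shift_formants(y, sr, semitones, n_fft=2048, hop=512, lifter=40):
    """Crude formant-only shift: warp the smooth spectral envelope along the
    frequency axis while leaving the fine (harmonic/pitch) structure alone."""
    ratio = 2.0 ** (semitones / 12.0)
    S = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    mag, phase = np.abs(S), np.angle(S)
    log_mag = np.log(mag + 1e-9)

    # Spectral envelope via cepstral smoothing (keep only low 'quefrencies')
    ceps = np.fft.irfft(log_mag, n=n_fft, axis=0)
    keep = np.zeros(n_fft)
    keep[:lifter] = 1.0
    keep[-(lifter - 1):] = 1.0
    envelope = np.fft.rfft(ceps * keep[:, None], n=n_fft, axis=0).real
    fine = log_mag - envelope                    # harmonic / pitch detail

    # Stretch (or squeeze) the envelope along frequency to move the formants
    bins = np.arange(envelope.shape[0], dtype=float)
    warped = np.stack([np.interp(bins / ratio, bins, envelope[:, t])
                       for t in range(envelope.shape[1])], axis=1)

    new_mag = np.exp(warped + fine)
    return librosa.istft(new_mag * np.exp(1j * phase),
                         hop_length=hop, length=len(y))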

 

Formants appear naturally everywhere throughout speech and sound. They are responsible for the character and ‘tonal quality’ of an individual’s voice, as well as their vocal characteristics and uniqueness (a ‘vocal fingerprint’, if you will). In the sound design realm, formant shifting is an immensely valuable tool used in various ways, for various purposes. Recently, it has become increasingly popular due to the fact that it’s being used in top-charting songs and is highly revered. Today, however, what we’re aiming to accomplish is reasonably simple.

 

NOTE: Most DAWs have a formant shifting parameter PAIRED with the pitch shifting algorithm and, since this technology has been implemented for some time and has been greatly improved upon, using the built-in stock option should suit your needs just fine. When attempting to replicate the sounds that are heard in today’s most notorious songs, it’s achieved best when using both pitch AND formant shifting simultaneously.

 

As I previously stated… everyone possesses their own signature voice, so the proper settings and whether you should formant/pitch shift up, down, or both are contingent upon the distinctive voice at hand. So, while the exact parameters will vary, the overall approach and processes are principally the same.

 

For this particular example, I am using ‘Little Alter Boy’ by Soundtoys, not only because it’s my personal favorite (due to its simple nature and the overall quality of output it produces), but also because I believe it’s the plugin most frequently employed by the professionals, which is the reason this sound became so popular in the first place.

 

If you wish to adjust formants using Ableton’s stock features, you must use the ‘Complex Pro’ warp mode. You can select this algorithm below the selected clip’s BPM settings. You may not venture outside the capabilities of the default warp mode often (or even ever), but by doing so we gain the ability to shift formants.

 

ABLETON FORMANT

 

FORMANT & PITCH SHIFTING

 

When it comes to the ‘pitch altering’, there are various ways to approach this effect and several plugins that are exceptional at producing it (all with their own special characteristics).

 

NOTE: ‘Little Alter Boy’ by Soundtoys is an incredible plugin that is most commonly used to achieve this effect due to its pleasing sonic depth and convenient, great-sounding ‘drive’ parameter. The drive parameter allows us to add distortion to the signal WITHOUT any further post-processing required.

 

When it comes to the actual application of this effect, it’s fairly straightforward. The exact settings constantly change, based on the artist’s personal preference and performance. The secret to this technique is not really HOW it’s applied, but more WHEN to apply it. Don’t forget to take the overall vibe of the song into consideration as well.

 

In certain circumstances (contingent on the unique effect/feel you’re yearning for) you’ll either want to shift the formant, pitch, or both (as I am demonstrating today).

 

Let’s begin with the ‘doubles’ as they are typically the intended target of this particular effect.

 

DOUBLES

 

#1: Decide where/when on the track you want to apply this effect.

 

I prefer to apply it to the artist’s doubles first. That way, the effect is apparent throughout the entire song, but doesn’t sound too overwhelming.

 

If the artist provides you with doubles that are basically carbon copies of the verse, you’ll have to do a little editing. Take out the beginning half of every bar and leave the second half as it is, or take out everything EXCEPT the last few words.

 

Either make duplicates of the artist’s doubles or, if you have two different versions (again, the ideal situation), ‘pan’ 1 version about 40-50% left, and the other version about 40-50% right.

 

#2: Open up your pitch shifting and formant plugins on BOTH tracks.

 

Be sure that you are using a plugin (or algorithm) that can alter the pitch without affecting the tempo. Most DAWs/plugins can perform this task; it’s usually just a matter of setting it up properly. In Ableton, it’s as rudimentary as enabling the ‘warp’ function before transposing the pitch.

 

Take one set of doubles and pitch it UP 12 semitones, then take the other set of doubles and pitch it DOWN 12 semitones. I prefer using the “Quantize” Algorithm.
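
If you’d like to prototype this move outside your DAW, here’s a minimal Python sketch of the same idea, assuming librosa and soundfile are installed (the filenames are placeholders): one copy of the doubles pitched up 12 semitones, one pitched down 12, each panned roughly 40-50% to a side.

import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("doubles_take.wav", sr=None, mono=True)   # placeholder filename

up = librosa.effects.pitch_shift(y, sr=sr, n_steps=12)     # one octave up
down = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)  # one octave down

def pan(mono, position):
    """position: -1.0 = hard left, 0.0 = center, +1.0 = hard right (constant power)."""
    angle = (position + 1) * np.pi / 4
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)

stereo = pan(up, -0.45) + pan(down, 0.45)                  # roughly 45% left / 45% right
sf.write("doubles_shifted.wav", stereo / np.max(np.abs(stereo)), sr)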

 

ALTER BOY PITCH DOWN

 

KEEP IN MIND: You can always choose to back off a little if you feel it’s too extreme; I just like to use 12 because I know (regardless of the song) everything will stay in tune.

 

Other numbers that result in great sounds, when pitching it up or down, are 5 semitones (a fourth) or 7 semitones (a fifth). This is undoubtedly an incredible way to create vocal harmonies, but it may cause some unwanted clashing with the overall song, so proceed with caution.

 

An efficient way to prevent any lamentable clashing is to assign a pitch correction plugin AFTER the pitch shifter (if using values other than 12). This will take any notes that aren’t in the song’s scale and rectify/amend them accordingly. Having Auto-Tune applied to two different sets of vocals can cause some adverse effects, but there’s no harm in trying it first, as the results can sometimes turn out amazing. Be sure when applying the Auto-Tune for this specific purpose it is purely applying pitch correction and nothing else.

 

ALTER BOY PITCH UP

 

#3: Adding Formants

 

Formants are what give an artist’s vocals that ‘it’ factor which pitch-shifting alone cannot accomplish. Again, this is determined by personal preference and the overall vibe of the song at hand. There are really no rules when it comes to this particular step, and it’s extremely hard to mess up. As I previously stated, when applying formant shifting we’re not actually altering/changing the pitch of the vocals, but rather the tone.

 

Bringing the formant down on a vocal that you already pitched up can occasionally sound spectacular. For this example, however, I simply turned the formant UP about 3-4 Semitones or 25% on the pitched up vocals, and DOWN about 3-4 Semitones or 25% on the pitched down vocals.

 

#4: Applying A Relatively High Amount Of Distortion Or Saturation

 

By applying a relatively high amount of distortion or saturation, you give the song the ‘dirty’ or ‘crunchy’ sound that saturation is famously known for.

 

For those of you using ‘Little Alter Boy’ by Soundtoys, try turning the ‘drive’ parameter UP a decent enough amount. This will bring a ‘screaming’ effect/factor to the vocals you pitched up, and a ‘dirty’ (yet clean) effect/factor to the vocals you pitched down.

 

This effect became prominent after Kanye West used it during his ‘808s & Heartbreak’ days and has been universally acknowledged and recognized since then.

 

MAIN VOCALS

 

As always, the exact settings vary (depending on the specific vocals you’re working with) but the process itself is applied the same. It’s traditionally best to only alter the main vocals at designated times, as the doubles are usually substantial enough.

 

If you only have 1 set of main vocals, you’ll inevitably have to apply it to the main vocals themselves. If this applies to your specific situation, you have two options:

1. Open up a pitch AND formant shifting plugin on an available ‘aux send’, then route your main vocal to it at 100%. Play around with the settings, choose a desired one, and then automate the ‘send’ level.

 

ALTERBOY SEND

 

NOTE: Every so often you’ll work with an artist who wants (or insists on having) the effect active throughout the entire song, but generally it sounds best when it weaves in/out and ‘builds up’ in stages.

 

2. This option is essentially the same as the first one, except you insert a reverb plugin before the pitch shifter, so the pitch/formant shifting is applied to the reverb signal as opposed to the dry vocal. Adjust the settings so it has a reasonably short ‘tail’ and ‘decay’ time (a ‘room’ reverb conventionally works best). I have also found that turning off the pre-delay usually sounds best with this effect.

 

ALTERBOY REVERB SEND

 

Now… listen to the effect itself. It’s seemingly subtle but, if you only have 1 vocal at your disposal, it may be your best option. Plus, it’s a technique that isn’t used as frequently, so it can be refreshing and pleasing to your listeners.

 

NOTE: These steps can be applied to ‘adlibs’ as well (using more drastic settings). I encourage you to try out new settings/methods – you never know when you’ll stumble upon something phenomenal that you could contribute to your genre, or even invent the next big trend.

 

Another plugin that I know is frequently used for this process is ‘Doubler’ by Waves. Although it solely changes the pitch, not the formant, its back-end processing seriously makes up for it. Simply throw the ‘Darth Vader’ preset onto your main vocals, and you’re all set!

 

NOTE: Having it on for the entire verse can be overkill, so I suggest you strategically ‘automate’ its level, and have it either build up gradually OR just have it present at select times.

 

VOCAL SYNTHESIS

 

After the Auto-Tune and Formant Modulation is complete, comes the third piece of the puzzle: Vocal Synthesis.

 

While Auto-Tune modernizes the sound and the formant settings emit sounds reminiscent of those heard in the ‘chopped and screwed’ era, ‘vocal synthesis’ is used to make voices sound similar to instruments; bringing a very ‘musical’ or melodic vibe to even the most monotone of vocal cords.

 

What exactly is Vocal Synthesis?

 

Vocal Synthesis is a process that is achieved by using a specific type of synthesizer (normally a Vocoder, as it’s the most beloved and has been around the longest) that analyzes an incoming audio signal and then ‘synthesizes’ its own version of it, based on the ‘carrier’.
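
To demystify what’s happening under the hood, here’s a bare-bones channel vocoder sketch in Python (numpy/scipy assumed; channel_vocoder is a hypothetical function and nowhere near the quality of a real plugin). It measures how much energy the vocal (the modulator) has in each frequency band and imposes that band-by-band envelope onto the carrier:

import numpy as np
from scipy.signal import butter, sosfilt

def channel_vocoder(modulator, carrier, sr, n_bands=16, fmin=80.0, fmax=8000.0):
    """Measure the modulator's (vocal's) energy in a bank of band-pass filters
    and impose that envelope, band by band, onto the carrier (a synth)."""
    n = min(len(modulator), len(carrier))
    modulator, carrier = modulator[:n], carrier[:n]
    edges = np.geomspace(fmin, fmax, n_bands + 1)                 # log-spaced band edges
    env_sos = butter(2, 50.0, btype="low", fs=sr, output="sos")   # envelope follower
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        mod_band = sosfilt(band_sos, modulator)
        car_band = sosfilt(band_sos, carrier)
        envelope = sosfilt(env_sos, np.abs(mod_band))    # how loud the voice is in this band
        out += car_band * envelope                       # shape the synth with that loudness
    return out / (np.max(np.abs(out)) + 1e-9)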

 

In the past, this effect stood alone, but recently producers/engineers have been blending the synthesized vocals with the LEAD vocal to create an almost ‘synthetic’ quality. If executed correctly, it can even turn something like a monotone rap verse into a harmonious masterpiece.

 

Since its inception back in the late 1930s, we’ve discovered that we can essentially feed any signal into a similar device to create a wide range of inspiring effects.

 

I believe the majority of artists’ engineers today (in the last year or 2) use ‘VocalSynth’ by iZotope, simply due to the fact that it incorporates the 4 most preferred, iconic ‘vocal synthesis’ tools (Polyvox, Compuvox, Vocoder, and Talkbox). If you don’t currently have this tool at your disposal, any of the aforementioned synthesizers should suffice.

 

NOTE: If you do NOT own VocalSynth, chances are your specific DAW offers its own version of a vocoder.

 

This step is less about replicating existing work or creating a carbon copy of your favorite mainstream song, but more about experimentation, allowing your originality to shine, and making creative decisions that will enhance your song’s overall appeal and help it to stand out/sound professional.

 

KEEP IN MIND: Not EVERY artist you hear on the radio ‘synthesizes’ their voice, but when you’re longing for that ‘Travis Scott’ effect, it is a vital factor.

 

A LITTLE HISTORY: This process was first introduced to the public by AT&T at the 1939–1940 World’s Fair and was originally used solely as a ‘speech synthesis application’. Soon after, it was used for encrypted high-level voice communications during World War II. Once it was incorporated into music (around the 1960s), it paved the way for many spin-offs, including a unit that first enabled an electric guitar to be used as the carrier tone.

 

THE VOCODER

 

I prefer to use ‘VocalSynth’ by iZotope because it contains the 4 most popular vocoder types. If you’re a fan of vocoders and don’t know about this plugin, check it out, as I believe it’s the absolute best on the market (for now).

 

If you do own VocalSynth, there is an extraordinary preset called ‘Rap Beef’ that you can use as is (other than maybe a few minor adjustments). The exciting part about it is, you do NOT have to concern yourself with ‘carriers’ as it includes everything all wrapped into one, without any mandatory routing or setup required.

 

In this example… I simply set up VocalSynth on an Aux Send, opened up the preset ‘Rap Beef’, routed the MAIN vocals to it, and voilà!

 

VOCAL SYNTH

 

For those of you using stock plugins, here’s how to successfully set up and use Ableton’s Vocoder:

 

1. Make a duplicate of the vocal track you wish to ‘synthesize’ then locate Ableton’s vocoder and apply it directly onto the duplicate track.

 

DUPLICATE TRACK

2. Switch the Carrier to ‘external’.

 

By default, the carrier is set to ‘noise’ which is indeed an intriguing feature, but results in a more ‘haunting’ sound than harmonious.

 

VOCODER EXTERNAL SIGNAL

 

3. Decide what carrier source you wish to use.

 

VOCODER ANALOG

 

This decision is just as vital as the Modulator itself (the vocals). Since we’re utilizing stock plugins, go ahead and open up ‘Analog’. If you have other synths, choose your favorite.

 

NOTE: I tend to use ‘Serum’ for most of my projects due to its broad spectrum of sounds, but anything that you think sounds best will work fine too.

 

4. Set the ‘Audio From’ to the desired synth (in this case, Analog).

 

VOCODER AUDIO FROM SIGNAL

 

The Audio From is set to ‘pre-effects’ (by default) which will suffice in this situation because at the moment we have no effects routed through Analog. If you DO wish to apply some effects, and want the processed synth to be the carrier, you would set it to ‘post-effects’ instead.

 

NOTE: When effects like Reverb or Delay are sent into the Vocoder, they don’t always yield the best results and can sometimes get overwhelming for the synth to reproduce.

 

For now, let’s play it safe and send a ‘dry’ signal through the Vocoder.

 

5. Record enable the synth you’re using as your carrier and bring the volume ‘fader’ ALL THE WAY DOWN.

 

VOCODER RECORD ENABLE

 

NOTE: If you skip this step, you will clearly hear both the synth and the Vocoder in action when it’s time to play back the audio, which usually doesn’t sound great. Be sure not to mute the output or change the monitoring settings, as that will prevent the signal from being sent through the vocoder. Simply turn the volume fader down.

 

You might have completed the steps above, but still feel like you accidentally skipped a step because you SEE the signal being sent through the Vocoder but don’t HEAR the effect. This is because you must feed the carrier notes in order for the Vocoder to produce sound.

 

When using a standard Vocoder, it receives the audio signal and reproduces a digital representation WITHOUT pitch information. The pitch that is emitted from the Vocoder is fully dependent on the notes you feed the carrier.

 

6. Record your MIDI information.

 

MIDI FILES

 

Chord progressions usually sound best (and most harmonious). When you play one note at a time it ends up sounding extremely ‘robotic’. Either way, there is no exact formula to apply or settings to use. The only way to achieve the results you want is by experimenting.

 

I recommend taking a look at the ‘instrumentation’ (if available) and grabbing the progression or melody used in the actual song, that way it’s guaranteed to work cohesively and sound good.

 

If you do NOT possess the song’s MIDI files at your immediate disposal, I suggest you start experimenting; play the chords and notes that complement the current key of the song.
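
If you’d like to sketch a carrier programmatically instead, here’s a small, hypothetical Python helper that renders a plain sawtooth chord from MIDI note numbers; you could feed its output into the vocoder sketch from earlier, or bounce it to audio and route it inside your DAW.

import numpy as np

def saw_chord(midi_notes, duration, sr=44100):
    """Render a simple sawtooth chord to use as a vocoder carrier."""
    t = np.arange(int(duration * sr)) / sr
    chord = np.zeros_like(t)
    for note in midi_notes:
        freq = 440.0 * 2 ** ((note - 69) / 12)                # MIDI note -> Hz
        chord += 2 * (t * freq - np.floor(0.5 + t * freq))    # naive sawtooth wave
    return chord / len(midi_notes)

carrier = saw_chord([57, 60, 64], duration=4.0)   # A minor triad, 4 seconds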

 

If you aren’t achieving the desired tone but have a great melody, try recording the MIDI and interchanging the presets and synths until you find the appropriate one for your track.

 

NOTE: The sound that is being emitted by the Vocoder will rely more on the carrier than the actual settings of the Vocoder itself.

 

7. Scrutinize the Vocoder’s settings.

 

I like to enable the ‘enhance’ feature (for a more pronounced sound) and turn up the ‘attack’ (especially when vocoding rap vocals) because it makes the Vocoder sound more like a synth than a robot. Also, manipulating the release can lead to an extremely desirable ‘panning’ effect, so test it out for yourself. Even if it doesn’t fit your current track, it’s a really cool effect you can implement in the future.

 

Finally… adjust the formant knob (if you wish), find a good level for this layer to sit at and there you have it!

 

ABLETON VOCODER

 

ADDITIONAL EFFECTS

 

Although the effects and processes discussed above are (what I consider) ‘genre-defining’, they are certainly not the ONLY ones you need to assimilate in order to attain the desired results/sounds. If you’ve listened to today’s biggest artists (such as Travis Scott, Future, etc.) close enough, you’ll inevitably hear that the processing chain does NOT stop there.

 

When utilizing these additional effects, you have the freedom to go crazy and unleash your innermost super-producer. Experiment with them and you’ll be amazed at the results you uncover, as you have the ability to cater to each track (individually), giving them their own unique feel/vibe and capturing their true essence.

 

1. Reverb

 

Reverb is an effect applied to an audio signal that simulates the reflections naturally produced when sound bounces off walls, objects, or really anything in its way (think of the atmosphere created when you yell in a church), and it’s a MUST to include, even when using the pitch/formant shifting trick I described earlier.
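
As a quick illustration of what a Convolution Reverb is actually doing, here’s a minimal Python sketch (numpy/scipy/soundfile assumed; the filenames are placeholders, and both files are assumed to be mono at the same sample rate): the dry vocal is convolved with the impulse response of a real space, then blended back in like an aux send.

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

vocal, sr = sf.read("vocal_dry.wav")      # placeholder filenames, assumed mono
impulse, _ = sf.read("hall_ir.wav")       # impulse response of a hall/room

wet = fftconvolve(vocal, impulse)[:len(vocal)]   # the reverberated copy, trimmed
wet /= np.max(np.abs(wet)) + 1e-9                # normalize the tail
mix = vocal + 0.25 * wet                         # dry signal + ~25% wet
sf.write("vocal_reverb.wav", mix / np.max(np.abs(mix)), sr)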

 

You could get extremely imaginative when using Reverb by assigning random types to random parts/sections, but that is completely optional (and not advised). It’s customarily best to select a Reverb that blends well with (or matches) the Reverb being emanated from the instrumental itself.

 

NOTE: If you crave an ‘atmospheric’ vibe (commonly heard on the radio), I would use a ‘Hall’ Reverb. If a ‘Hall’ Reverb is not accessible to you, but you still want an atmospheric vibe, use a Reverb with a LONG tail. Convolution Reverbs are also a great option, as you can simulate any space.

 

Like everything else in this genre, Automation is key, and automating your Reverb throughout the song (e.g., mix level and decay time) will provide it with the ‘depth’ and ‘movement’ that’s usually only achieved by/reserved for the professionals.

 

Habitually, I don’t apply Reverb directly onto the track, but instead send it to an ‘aux return/send’. This way, it won’t affect the dry signal, I’ll have FULL control over it, and I’m able to include EQ after it so as to sculpt it further. I traditionally use this method for most effects (including Delay) so each effect can have its own unique send and mini processing chain.

 

2. Delay

 

Delay is an audio effect (and an effects unit) which records an input signal to an ‘audio storage medium’ (in most cases) and then plays it back after a period of time; the delayed signal may either be played back MULTIPLE times, or fed back into the recording again, to create the sound of a repeating, decaying echo. Artists’ voices today are essentially covered with Delay.
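
In code, that repeating, decaying echo boils down to a handful of lines. Here’s a hedged Python sketch of a single-tap feedback delay on a mono numpy array (feedback_delay is a hypothetical helper; a ‘ping-pong’ version would simply alternate the echoes between the left and right channels):

import numpy as np

def feedback_delay(x, sr, delay_ms=375.0, feedback=0.45, mix=0.3):
    """Play the signal back after delay_ms and feed part of the output back in,
    so each echo spawns the next, quieter one."""
    d = int(sr * delay_ms / 1000.0)
    out = x.astype(float).copy()
    for n in range(d, len(x)):
        out[n] += feedback * out[n - d]          # each echo feeds the next one
    return (1 - mix) * x + mix * out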

 

I personally prefer to utilize a ‘ping-pong’ Delay, adding it heavily to the adlibs and doubles, not so much on the main vocals, as applying it this way prevents the song from appearing ‘cluttered’.

 

If you are solely working with an artist’s lead vocals, automating the ‘input level’ during designated parts/sections of the song (like at the end of a bar, or the last few words of a bar) generally yields supreme results, with minimum clutter. I set it up on a ‘return’ track, and do some post-processing to sculpt it and keep it clean.

 

3. Distortion

 

As an effect, distortion is any process that alters the sound in the harmonic (tone, timbre) domain. I have noticed, for the most part, that Distortion is applied subtly to the main vocals, and is cranked up extremely high on the doubles/adlibs; giving it a ‘telephone’ type of effect.
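
If you’re wondering what distortion means at the signal level, here’s a tiny Python sketch of a tanh waveshaper (a hypothetical helper, not any specific plugin): the harder you drive it, the more the peaks get squashed and the more harmonics appear. For the ‘telephone’ flavor mentioned above, band-pass the signal (roughly 300 Hz to 3 kHz) before driving it.

import numpy as np

def drive(x, amount=6.0, mix=1.0):
    """Simple tanh waveshaper: 'amount' controls how hard the signal is pushed
    into soft clipping, which adds the 'crunchy' harmonics."""
    wet = np.tanh(amount * x) / np.tanh(amount)   # normalized so peaks stay near 1
    return (1 - mix) * x + mix * wet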

 

Again, automating the level and/or amount is typically what I hear being used all across the radio. Additionally, I hear distortion being applied heavily to the Delay, so don’t be reluctant to add it AFTER the Delay (if using a send) and crank it up!

 

4. Flanging, Chorus and/or Phasing

 

When utilizing any of these three tools, you’re creating an audio effect produced by mixing two IDENTICAL signals together, delaying one signal by a small/gradually changing period (usually smaller than 20 milliseconds).
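
Here’s a rough Python sketch of that exact recipe as a flanger (a hypothetical helper and a simplification; chorus and phasing differ mainly in how the delayed copy is generated): the dry signal is mixed with a copy of itself whose delay time sweeps slowly between zero and a few milliseconds.

import numpy as np

def flanger(x, sr, max_delay_ms=5.0, rate_hz=0.3, depth=1.0, mix=0.5):
    """Mix the mono signal with a copy of itself whose delay sweeps between
    ~0 and max_delay_ms (well under the 20 ms mentioned above)."""
    max_d = max_delay_ms / 1000.0 * sr
    n = np.arange(len(x))
    # LFO sweeps the delay time (in samples) up and down
    delay = 0.5 * max_d * depth * (1 + np.sin(2 * np.pi * rate_hz * n / sr))
    read = np.clip(n - delay, 0, len(x) - 1)
    i = read.astype(int)
    frac = read - i
    j = np.clip(i + 1, 0, len(x) - 1)
    delayed = (1 - frac) * x[i] + frac * x[j]     # linear interpolation between samples
    return (1 - mix) * x + mix * delayed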

 

KEEP IN MIND: All three of these tools are relatively similar in the way they sound and function, so regardless of which one you have access to, you should be able to successfully achieve this effect. Almost every DAW on the market incorporates at least one of the three, so use what you have.

 

You can apply any of the three effects directly on the vocal track and, if using a unit without much control, I suggest reverting back to automation.

 

You always have the option of applying a subtle amount, just to be ‘steady’ throughout the entire song, but I like to have it fade in and out so it adds some ‘life’ and movement (plus, it’s what I hear done most often).

 

Using ‘Doubler’ by Waves (the same plugin I mentioned previously) can also produce a Chorus-like effect, and it’s frequently used on various popular tracks that I hear. The great thing about Doubler is that not only does it yield amazing results, it can also ‘stereoize’ your mono vocals as well.

 

Simply open it up to its default setting, take the ‘pan’ position of the 2 doubles and bring it closer to the center. The FARTHER away you place these pan positions from the center, the MORE stereo you will make the image; giving the illusion you are working with multiple takes that sound identical.

 

NOTE: This is also a phenomenal way to easily introduce ‘artificial’ doubles to a track when solely working with one lead vocal!

 

I have found that a vocal ‘chain’ is like a fingerprint…. even though they may appear to look (or in this case, sound) the same, they most certainly are NOT. Each one should be unique, catering to and dependent on the particular track at hand.

 

I have also found that major artists/influences, such as Travis Scott, do not typically use all of these effects simultaneously, but rather strategically apply a few chosen ones in designated areas throughout the track. The fact is, his vocal chain is largely responsible for his insane popularity, and is essentially why he’s favored over his competitors (for the moment, that is). If you invest some serious time into expertly comprehending, executing, and experimenting with these techniques, it could be YOU who uncovers the next big craze.

 

It’s the minute details that define you as an artist/creator and can help you to stand out in this oversaturated field. Be creative, think outside the box, and keep in mind that your vocals can be the defining factor of your song, so never settle for mediocre or ‘passable’ – shoot for incredible and unbelievable.

 

I wholeheartedly hope you learned some brand new techniques that you can experiment and play with, or increased your existing knowledge on a particular subject. Remember… there are no rules when it comes to applying these effects, so have fun with them and make them your own!

 

