Man’s obsession with robotics has made quite an impression on the world of music.  From “Mr. Roboto” to “More Bounce to the Ounce”, synthetic vocals have been celebrated by artists like Kraftwerk, Roger Troutman, Peter Frampton, Kanye West and many others.  When did the use of robotic voice in music begin?  How far has it come as the technology has evolved?  Here is the history of that familiar robot voice in music, from its invention in the 1930s to the Billboard hits of today.

1939 World’s Fair: Synthesized Speech Yields Music

At the 1939 World’s Fair in New York, Bell Telephone Laboratories introduced the Voder, the first electronic speech synthesizer.  Operated from a console by a trained keyboardist, it built intelligible speech from two basic sound sources: a relaxation oscillator supplied the buzzing tone of voiced sounds such as vowels, while a noise source supplied the hiss of sibilants.  Shaped by a bank of filters under the operator’s fingers, these sources produced clear speech that almost any attentive listener could recognize.  Fair-goers were treated to the first-ever robotic voice, speaking clearly and audibly to its audience as commanded by Bell Labs.

A few decades later, Bell Labs’ Voder device was modified to do more than mimic the inflections of speech: it was altered to change pitch in a melodic fashion.  This allowed its users to make the Voder not just speak, but sing.

The Modern Vocoder is Born

(image courtesy: stretta)

In 1970, famed synth maker Robert Moog adapted the vocoder principle pioneered at Bell Labs to provide voice control on his music synthesizers.  This meant that a user could speak or sing into a microphone to shape the sounds of the Moog modular synth.  If a user selected a violin-style synth sound, that sound would take on the character and speech pattern of the user’s voice.  Say the phrase “the quick brown fox jumps over the lazy dog”, and the Moog synth would speak this phrase in the tone of the violin-style sound.
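The vocoder’s trick can be sketched in a few lines of code: split the voice (the “modulator”) and the synth (the “carrier”) into frequency bands, measure how loud the voice is in each band, and use those loudness envelopes to scale the matching bands of the synth.  Below is a toy, pure-Python channel vocoder built on that idea — the function names and the crude one-pole filters are my own simplifications, not Moog’s circuit:

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate):
    """Crude one-pole low-pass filter."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    y, out = 0.0, []
    for s in signal:
        y += alpha * (s - y)
        out.append(y)
    return out

def bandpass(signal, lo_hz, hi_hz, sample_rate):
    """Band-pass built as the difference of two low-passes."""
    hi = one_pole_lowpass(signal, hi_hz, sample_rate)
    lo = one_pole_lowpass(signal, lo_hz, sample_rate)
    return [h - l for h, l in zip(hi, lo)]

def envelope(signal, sample_rate, smooth_hz=25.0):
    """Loudness envelope: rectify the signal, then smooth it."""
    return one_pole_lowpass([abs(s) for s in signal], smooth_hz, sample_rate)

def vocode(modulator, carrier, bands, sample_rate):
    """Channel vocoder: the modulator's per-band loudness
    shapes the carrier's matching bands."""
    out = [0.0] * min(len(modulator), len(carrier))
    for lo_hz, hi_hz in bands:
        m_env = envelope(bandpass(modulator, lo_hz, hi_hz, sample_rate),
                         sample_rate)
        c_band = bandpass(carrier, lo_hz, hi_hz, sample_rate)
        for i in range(len(out)):
            out[i] += c_band[i] * m_env[i]
    return out
```

A real vocoder — Moog’s included — uses many sharper analog filter bands plus a separate noise channel for consonants; this sketch only captures the core analysis-and-resynthesis idea.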

As analog synthesizers became cheaper and more accessible to musicians, the vocoder appeared more and more frequently in modern music.  In the late ’70s, vocoders began surfacing in rock and disco, championed by the German synth-pop group Kraftwerk.  Arguably the vocoder’s most recognizable early hit was Styx’s 1983 single “Mr. Roboto”:

The Talkbox: Low Tech Perfection


While much more primitive than its robot-voice counterparts, the Talkbox is possibly the most sonically authentic and musical in character.  Whereas the vocoder and Auto-Tune both rely on complex audio processing (one analog, the other digital), the Talkbox is quite simple.  Plug your guitar or synth into a Talkbox, and a small speaker forces the sound up a tube and into the player’s mouth.  Strike a chord and move your lips, and the output is naturally shaped by your mouth, producing a “talking” sound.

In the history of 20th-century pop music, the Talkbox was arguably the most popular source of the robot voice: Stevie Wonder used it sparingly, Peter Frampton frequently, and Roger Troutman ubiquitously.  Troutman, a funk/R&B artist who achieved fame as the frontman of the band Zapp, championed the Talkbox on a string of hits from “More Bounce to the Ounce” to the memorable chorus of Tupac Shakur’s “California Love”.  A shining example of Troutman’s talent with the Talkbox is the early-’80s hit “I Can Make You Dance”.  Do yourself a favor: watch this one in its entirety below:

Antares Auto-Tune: You Too Can Be a Pop Star


Flip on a Top 40 radio station at any given moment of the day, and you’ll likely hear the magic of Auto-Tune within minutes.  From pop to R&B to hip hop, Auto-Tune is everywhere.  In its original corrective role, Auto-Tune helped birth a series of bubble-gum pop acts from Britney Spears to the Backstreet Boys.  Today, used as a creative effect, it is the muse of performers like Kanye West, T-Pain, Snoop Dogg and others.

When Antares first released Auto-Tune, it was meant to correct basic pitch inconsistencies in an artist’s voice: if a singer strayed off key, Auto-Tune would automatically nudge the pitch back in real time.  But when the correction was pushed to extreme settings, the vocal snapped between pitches unnaturally, producing a robotic effect.  In 1998, Cher popularly employed this effect on her hit song “Believe”.  A decade later, the effect would find a rebirth in hip hop and R&B thanks to the record producer T-Pain and other artists.
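Stripped of the real-time pitch detection, the core of the effect is simple math: quantize the detected pitch to the nearest note of the equal-tempered scale, then decide how hard to pull the voice toward it.  The sketch below illustrates that math under my own assumptions — the function names are hypothetical, and this is not Antares’ actual algorithm:

```python
import math

A4 = 440.0  # reference tuning, in Hz

def snap_to_semitone(freq_hz):
    """Quantize a detected pitch to the nearest equal-tempered semitone."""
    semitones_from_a4 = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones_from_a4 / 12)

def retune(freq_hz, strength=1.0):
    """Pull a pitch toward its snapped target.

    strength=1.0 is the instant, robotic snap heard on "Believe";
    small values give the subtle correction Auto-Tune was designed for.
    """
    target = snap_to_semitone(freq_hz)
    # Interpolate in cents (log-frequency), the scale on which pitch is heard.
    cents_off = 1200 * math.log2(freq_hz / target)
    return target * 2 ** ((1.0 - strength) * cents_off / 1200)
```

A slightly sharp A sung at 447 Hz snaps to 440 Hz at full strength, while strength 0 leaves it untouched.  In the real plug-in the pull is ramped over time — the “retune speed” — which is what separates transparent tuning from the robotic effect.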

The problem with the Auto-Tune effect is its very simplicity.  Anyone with a computer and a microphone can produce melodic vocals without a hint of vocal talent; rappers like Snoop Dogg and Kanye West have sung on some of their most recent hits (“Sensual Seduction” and “Love Lockdown”, respectively).  Technology takes talent out of the equation.  That was never true of the vocoder or the Talkbox, both of which demanded real skill with musical instruments and audio gear.  To see how simple the Auto-Tune effect can be to use, check out this brief (and funny) tutorial:

Because the Auto-Tune effect masks much of the character of a singer’s voice, its users tend to sound alike.  The effect itself is generic in nature, which limits its creative possibilities over time.  While pop music is currently saturated with Auto-Tune, the effect will eventually become passé and fade from pop music accordingly.

Thanks for reading, GearCravers, Diggers, Stumblers and friends from Reddit.  What is your feeling on the robot voice’s use in music?  Who, in your opinion, has used it most tastefully?  Least tastefully?  Leave your thoughts in the comments; we’re curious about your take.