
Digital Technology and Popular Music

Artists can more easily manipulate music in its digital form

29 July 2008
Nick Haworth and “La Rocca” record their debut album in Los Angeles, 2006. (© AP Images)

(The following is excerpted from the U.S. Department of State publication, American Popular Music.)

During the 1980s, new technologies – including digital tape recorders, compact discs, synthesizers, samplers, and sequencers – became central to popular music. These devices were the fruit of a long history of interactions between the electronics and music industries and between individual inventors and musicians.

Analog recording transforms the energy of sound waves into physical imprints or into electronic waveforms that follow the shape of the sound waves themselves. Digital recording, on the other hand, samples the sound waves and breaks them down into a stream of numbers. A device called an “analog-to-digital converter” does the conversion. To play back the music, the stream of numbers is converted back to an analog wave by a “digital-to-analog converter” (DAC). The analog wave produced by the DAC is amplified and fed to speakers to produce the sound.
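The sampling process described above can be sketched in a few lines of Python. This is an illustrative model, not production audio code; the 44,100 Hz sample rate and 16-bit depth are the compact-disc standard, assumed here rather than stated in the excerpt.

```python
import math

SAMPLE_RATE = 44_100   # samples per second (the CD standard)
BIT_DEPTH = 16         # bits per sample

def sample_wave(freq_hz, duration_s):
    """The 'analog-to-digital' step: measure a pure sine tone at
    regular intervals and quantize each measurement to an integer."""
    n = int(SAMPLE_RATE * duration_s)
    max_amp = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for 16-bit audio
    return [
        round(max_amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
        for i in range(n)
    ]

def to_analog(samples):
    """The DAC step: scale the integer stream back to a -1.0..1.0
    waveform, which an amplifier and speakers would turn into sound."""
    max_amp = 2 ** (BIT_DEPTH - 1) - 1
    return [s / max_amp for s in samples]

digital = sample_wave(440.0, 0.01)   # 441 samples of an A-440 tone
analog = to_analog(digital)
```

The key idea is visible in the two functions: the digital side is nothing but a stream of numbers, which is why it can be copied, edited, and transmitted without degradation.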

Synthesizers that allow musicians to create musical sounds began to appear on rock records during the early 1970s, but their history begins earlier. One important predecessor of the synthesizer was the theremin, an instrument that used electronic oscillators to generate tones, played without physical contact by moving the hands near its antennas.

Another important stage in the interaction between scientific invention and musical technology was the Hammond organ, introduced in 1935. The sound of the Hammond B-3 organ was common on jazz, R&B, and rock records. The player could alter the timbre of the organ through control devices called “drawbars,” and a variety of rhythm patterns and percussive effects were added later.

The 1980s saw the introduction of the first completely digital synthesizers capable of playing dozens of “voices” at the same time. The MIDI (Musical Instrument Digital Interface) specification, introduced in 1983, allowed synthesizers built by different manufacturers to be connected with and communicate with one another. Digital samplers were capable of storing both prerecorded and synthesized sounds. Digital sequencers record musical data rather than sound and allow the creation of repeated sound sequences (loops), the manipulation of rhythmic grooves, and the transmission of recorded data from one program or device to another. Drum machines rely on “drum pads” that can be struck and activated by the performer.
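The cross-manufacturer communication MIDI enabled rests on a very compact message format: a Note On event, for example, is three bytes, a status byte (0x90 plus the channel number) followed by the note number and the velocity. A minimal sketch of building such messages:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message: status byte, then two
    7-bit data bytes (note number and key velocity)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Build the matching Note Off message (status byte 0x80)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

msg = note_on(0, 60, 100)   # middle C (note 60) on channel 1
```

Because every manufacturer agreed on this byte layout, a keyboard from one company could drive a sound module from another, which is precisely the interoperability the 1983 specification was designed to provide.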

Digital technology has given musicians the ability to create complex 128-voice textures, to create sophisticated synthesized sounds that exist nowhere in nature, and to sample and manipulate any sound source, creating sound loops that can be controlled with great precision. With compact, highly portable, and increasingly affordable music equipment and software, a recording studio can be set up anywhere. As the individual musician gains more and more control over the production of a complete musical recording, distinctions between the composer, the performer, and the producer sometimes melt down entirely.
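The loop-building described above can be reduced, in a toy model, to slicing and repeating a buffer of sample numbers. The integers below merely stand in for digitized audio, and any real sequencer does far more, but the core operation is this simple:

```python
def make_loop(samples, start, end, repeats):
    """Cut a slice out of a sample buffer and repeat it end to end,
    the basic move behind a sampled drum loop or groove."""
    return samples[start:end] * repeats

recording = list(range(1000))                # stand-in for digitized audio
groove = make_loop(recording, 100, 200, 4)   # a 100-sample slice, looped 4x
```

Because the loop is just data, its start point, length, and repeat count can all be adjusted with the precision the passage describes, something physically impossible with tape splicing.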

At each stage in the development of popular music, new technologies have opened up creative possibilities for musicians, creating a wider range of choices for consumers. We are accustomed to thinking of technology as an agent of change. In some cases, however, the new digital technologies have allowed musicians to excavate the musical past. The techno musician Moby did precisely this on his bestselling 1999 album Play, when he sampled segments of performances by Georgia Sea Islands singer Bessie Jones, among others.

In the 21st century, technology continues to affect how popular music is made, recorded, reproduced, marketed, and enjoyed by listeners. A new standard for digital music making was introduced in 1992 with the Alesis ADAT. The core of the ADAT system was an eight-track digital audio recorder that could expand to 128 tracks by linking additional units. This meant that a consumer could set up a basic home studio at relatively small expense, while professionals could use the same technology to build highly sophisticated digital sound facilities.

The 1990s also saw the introduction of music software programs such as Pro Tools, running on personal computers. This software gave recording engineers and musicians control over every parameter of musical sound, not only pitch and tempo but also the quality of a singer’s voice or an instrumentalist’s timbre. One complaint some musicians voice against Pro Tools and similar software is that it allows the correction of musical errors, including the substitution of individual notes and phrases and the alteration of a musician’s sonic identity. From this perspective, “imperfection” is a necessary part of music as a form of human expression.
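The kind of parameter control described here can be hinted at with naive resampling, which changes playback speed and pitch together. This is emphatically not how Pro Tools or similar tools work internally; professional pitch- and time-correction algorithms adjust the two independently. The sketch only shows that, once sound is a stream of numbers, such parameters become directly editable:

```python
def resample(samples, rate):
    """Naive nearest-neighbour resampling: rate > 1.0 plays back
    faster (and higher-pitched), rate < 1.0 slower and lower."""
    out = []
    pos = 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])
        pos += rate
    return out

buf = [0, 1, 2, 3, 4, 5, 6, 7]      # stand-in for digitized audio
double_speed = resample(buf, 2.0)   # half as many samples
```

Decoupling pitch from tempo, as modern software does, requires far more elaborate techniques (phase vocoders and the like), which is exactly why that capability arrived only with the software generation this paragraph describes.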
