The term "computer music" encompasses a wide range of compositional activities, from the generation of conventionally notated scores using data calculated by the computer, to the direct synthesis of sound in a digital form within the computer itself, ready for conversion into audio signals via digital-to-analog converter, amplifier, and loudspeaker.
There are three basic techniques for producing sounds with a computer: sign-bit extraction, the use of hybrid digital-analog systems, and digital-to-analog conversion. Sign-bit extraction has occasionally been used for compositions of serious musical intent. Some interest persists in building hybrid digital-analog facilities, because certain types of signal processing, such as reverberation and filtering, are time-consuming even on the fastest computers and can instead be handled by analog devices under digital control. Digital-to-analog conversion, however, has become the standard technique for computer sound synthesis because it is the most versatile method of computer sound generation: since the sound wave is constructed directly, sample by sample, there are almost no restrictions on the properties of the sound.
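The idea that the sound wave is "constructed directly" can be illustrated with a short sketch (Python is used here purely for illustration; the sample rate, frequency, and amplitude are arbitrary choices, not values from the text):

```python
import math

SAMPLE_RATE = 44100  # samples per second, a common audio rate (an assumed value)

def sine_samples(freq_hz, duration_s, amplitude=0.5):
    """Construct a sine wave directly, one sample at a time, exactly the
    stream of numbers a digital-to-analog converter would receive."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# One second of a 440 Hz tone: 44,100 numbers ready for conversion to voltage.
tone = sine_samples(440.0, 1.0)
```

Because every sample value is computed explicitly, any waveform that can be described numerically can be produced, which is the source of the technique's versatility.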
To use a computer for music production, the composer or performer first "calls up" from the computer's memory the appropriate precompiled program, written in a programming language such as FORTRAN, ALGOL, PL/I, PASCAL, BASIC, or COBOL. The program includes various "instruments," i.e., digitally stored musical waveforms; the operator selects the instruments to use and then specifies the composition to the computer in detail, note by note, with the correct pitch and timbre.
The computer then translates the instrument definitions into a machine-language program and, if necessary, puts the score into the proper format for processing. After that, the program actually "plays" the score on the instruments, thus creating the sound. The processing of each note of the score consists of two stages: initialization and performance. At the initialization of a note, the values that are to remain fixed throughout the duration of the note are set. During the performance of a note, the computer calculates the actual output samples of the sound.
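The two-stage processing of a note can be sketched as follows. The class and parameter names are hypothetical, chosen only to mirror the initialization/performance split described above; they are not taken from any actual music system:

```python
import math

SAMPLE_RATE = 8000  # a low rate keeps the sketch small (an assumed value)

class SineInstrument:
    """A minimal 'instrument' processed in the two stages the text
    describes: initialization fixes the per-note values, performance
    computes the actual output samples."""

    def initialize(self, pitch_hz, amplitude, duration_s):
        # Stage 1: set the values that remain fixed for the whole note.
        self.phase_step = 2 * math.pi * pitch_hz / SAMPLE_RATE
        self.amplitude = amplitude
        self.num_samples = int(duration_s * SAMPLE_RATE)

    def perform(self):
        # Stage 2: calculate one output value per sample.
        return [self.amplitude * math.sin(self.phase_step * i)
                for i in range(self.num_samples)]

# "Playing" one note of a score: initialize, then perform.
inst = SineInstrument()
inst.initialize(pitch_hz=261.63, amplitude=0.3, duration_s=0.5)  # middle C
note = inst.perform()
```

Separating the stages means the costly per-sample loop touches only precomputed values, which is why the real systems organized note processing this way.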
The advantage of digital-to-analog conversion is that the computer can be called upon to assemble the individual sounds into a composition so that the composer need only be concerned with the conception of the piece and the preparation of that conception for the computer. Other advantages are that almost any general-purpose computer can be used for sound generation, and the devices of a synthesizer can be simulated by a computer program. A disadvantage is that the music cannot be altered in real time.
As early as 1843, it was suggested that computers might be suitable for the production of music. Referring to Charles Babbage's Analytical Engine (a precursor of the modern computer), Ada Byron King, Countess of Lovelace, suggested that the engine could be used for making music if the necessary information could be understood and properly expressed.
It was not until 1957, however, that computer-generated music became a reality, when Max Mathews, an engineer at Bell Labs, began working on computer generation of music and speech sounds. Together with John Pierce and Joan Miller, Mathews wrote several computer music programs, the best known of which is MUSIC V. The program was more than a simple synthesis routine, for it included an "orchestration" component that simulated many of the processes employed in the classical electronic music studio: it specified unit generators for the standard waveforms, adders, modulators, filters, and reverberators, and it was sufficiently generalized that users could freely define their own generators. Thus, MUSIC V became the software prototype for music production installations all over the world.
One of the most notable successors of MUSIC V was designed by Barry Vercoe at the Massachusetts Institute of Technology during the 1970s. His program, Music 11, ran on a PDP-11 computer and was a tightly designed system that incorporated many new features, including graphic score output and input. Music 11 was significant not only for these advances but also for its direct approach to synthesis, made practical by its efficient use of memory space. Thus, Music 11 became accessible to a family of much smaller machines that many studios were able to afford. Another major advance came in 1973, when John Chowning of Stanford University pioneered the use of digital FM (frequency modulation) as a source of musical timbre.
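Chowning's technique can be illustrated in miniature: one sine oscillator modulates the phase of another, and the carrier-to-modulator frequency ratio and the modulation index shape the resulting timbre by generating sidebands. The specific values below are illustrative assumptions, not Chowning's published settings:

```python
import math

SAMPLE_RATE = 44100  # an assumed audio rate

def fm_tone(carrier_hz, modulator_hz, mod_index, duration_s):
    """Simple frequency modulation: a modulating sine oscillator varies
    the phase of a carrier, producing a complex spectrum (sidebands)
    from just two sine generators."""
    n = int(SAMPLE_RATE * duration_s)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        phase = (2 * math.pi * carrier_hz * t
                 + mod_index * math.sin(2 * math.pi * modulator_hz * t))
        out.append(math.sin(phase))
    return out

# A bell-like setting: non-integer carrier:modulator ratio, strong modulation.
bell = fm_tone(carrier_hz=200.0, modulator_hz=280.0, mod_index=5.0, duration_s=0.1)
```

The appeal of the method is its economy: rich, evolving timbres come from two oscillators and three parameters, rather than from dozens of additive partials.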
The most advanced digital sound synthesis is conducted in large institutional installations, most of them at American universities, with several major facilities in Europe. Examples of American installations are Columbia University, University of Illinois, Indiana University, University of Michigan, State University of New York at Buffalo, and Queens College, New York. European facilities include the Instituut voor Sonologie in Utrecht, the Netherlands; LIMB (Laboratorio Permanente per l'Informatica Musicale) at the University of Padua, Italy; and IRCAM (Institut de Recherche et de Coordination Acoustique/Musique), part of the Centre Georges Pompidou in Paris, France.
Computer technology has led to a tremendous expansion of music resources by offering composers a spectrum of sounds ranging from pure tones to random noise. Computers have enabled the rhythmic organization of music to a degree of subtlety and complexity never before attainable. They have allowed composers complete control over their work, if they so choose, even to the point of bypassing the performer as an intermediary between the creators of music and their audience. Perhaps computers' greatest contribution to music is that they have brought about the acceptance of the definition of music as "organized sound."
see also Codes; Film and Video Editing; Graphic Devices; Music; Music Composition.
Joyce H-S Li
"Music, Computer." Computer Sciences. Encyclopedia.com. (September 20, 2017). http://www.encyclopedia.com/computing/news-wires-white-papers-and-books/music-computer
computer music, term used to describe music composed or performed with the aid of a computer. The first substantial piece of music composed on a computer was the Illiac Suite (1956) by the avant-garde composer Lejaren Hiller (1925–94). Computer music can be divided into two distinct production techniques: MIDI (Musical Instrument Digital Interface—see electronic music) and software synthesis. In MIDI production a computer is used to control the outputs of synthesizers and signal-processing devices. Software synthesis, however, involves the use of a computer to mathematically represent and manipulate sounds. This technique was created in the late 1950s by a team headed by Max Mathews at Bell Laboratories in Murray Hill, N.J. The techniques were further advanced by Godfrey Winham and Hubert Howe at Princeton. Today major centers of software synthesis include the Institute for Research and Coordination of Acoustics and Music (IRCAM) in Paris, the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford Univ., the Computer Audio Research Laboratory (CARL) at the Univ. of California at San Diego, and the Media Lab at the Massachusetts Institute of Technology. Software synthesis frequently involves the use of sampling, a technique that represents a sound as a series of discrete measurements of amplitude (loudness). This digital representation of a sound can then be manipulated by various techniques, including filtering, which reduces the loudness of a specific part of the frequency spectrum; time delay, which can be used to simulate various types of echo or reverberation; and frequency shifting, which is used to alter the pitch of a sound.
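Of the manipulations named above, time delay is the simplest to sketch: mixing an attenuated, delayed copy of the sampled signal back into the original simulates a single echo. This is a toy illustration only; real reverberators combine many such delays with feedback:

```python
def add_echo(samples, delay_samples, decay):
    """Time delay as described in the text: add a delayed copy of the
    signal, scaled by a decay factor, to simulate one echo."""
    out = list(samples)
    out.extend([0.0] * delay_samples)      # room for the echo tail
    for i, s in enumerate(samples):
        out[i + delay_samples] += decay * s
    return out

dry = [1.0, 0.0, 0.0, 0.0]                 # a one-sample impulse
wet = add_echo(dry, delay_samples=2, decay=0.5)
# wet -> [1.0, 0.0, 0.5, 0.0, 0.0, 0.0]: the impulse plus its half-strength echo
```

Filtering and frequency shifting operate on the same discrete amplitude measurements; only the arithmetic applied to the samples differs.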
Sounds can also be directly created by the computer, allowing it to act as a synthesizer. Some recent research into sound production by computers utilizes a technique called physical modeling, which attempts to model the physics of natural instruments or sounds. Computers can also be used to compose music by a process known as algorithmic composition. In this technique various details of a composition are determined by the computer according to a specific program written by the composer. Another area of computer music involves the interaction of humans and machines in live performance. Various techniques have been developed to enable a performer to actively control the output of a computer while a performance is under way.
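Algorithmic composition can be illustrated with a toy rule: here the "specific program written by the composer" is a seeded random walk over a scale, and the computer determines the note-by-note details. The scale and MIDI note numbers are illustrative choices, not part of the original text:

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of MIDI note numbers

def random_walk_melody(length, seed=0):
    """Algorithmic composition in miniature: the composer supplies the
    rule (step up, down, or stay within a scale); the computer decides
    each individual note."""
    rng = random.Random(seed)               # seeded, so the piece is reproducible
    idx = rng.randrange(len(C_MAJOR))
    melody = []
    for _ in range(length):
        melody.append(C_MAJOR[idx])
        idx = max(0, min(len(C_MAJOR) - 1, idx + rng.choice([-1, 0, 1])))
    return melody

melody = random_walk_melody(8)
```

The composer's creative act lies in choosing the rule and its constraints; rerunning with a different seed yields a different realization of the same compositional idea.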
See C. Roads, Composers and Computers (1985); F. R. Moore, Elements of Computer Music (1990); R. Dobson, A Dictionary of Electronic and Computer Music Technology (1992).
"computer music." The Columbia Encyclopedia, 6th ed. Encyclopedia.com. (September 20, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/computer-music