Computer-assisted music composition is an exciting field. Some of its most interesting possibilities are depicted in Figure 1, which illustrates the possible flows of musical information. The traditional way of using the computer to compose music is to use software to produce music written in Common Music Notation (CMN), or so-called Western Music Notation, that can be printed and given to human performers who perform it on traditional instruments such as the piano, guitar, or violin.
However, the computer can also be used to produce a music score automatically. In this case, the composer becomes a programmer and builds a computer program rather than a music score. A significant example of this may be found in the early work of American composer and teacher Lejaren Hiller (1924–1994), who wrote programs that composed most, if not all, of such scores as the Illiac Suite for String Quartet (1956) and Computer Cantata (1964).
Another path from the composer to the listener consists of using direct audio synthesis via specific computer programs (e.g., CSOUND) that integrate the power of a general-purpose programming language (such as C) with specialized functions for treating audio signals. The processed signals are played through a Digital to Analog Converter (DAC). Significant examples of this method may be found in works like Turenas, by American inventor and composer John Chowning (1934–), which uses only purely synthesized sounds. This kind of processing on audio data can also be done directly in hardware if real-time interaction is needed. These features can be executed by human performers through the use of computer peripheral hardware, including electronic synthesizers, workstation keyboards, and effect generators. This last option is widely used in popular or light commercial music. These various methods of composing and performing music using the computer can be combined in multiple ways. For example, Répons, by French composer and conductor Pierre Boulez (1925–), combines the programmable real-time capabilities of a digital synthesizer with a sizable ensemble of traditional instruments.
Languages for Music Modeling
Music is one of the most complex languages. Many elementary operations are necessary to create a music score; these include editing, saving, loading, visualizing, executing, coding grabbed sounds, printing, and analyzing data. Specific music models and languages have been defined to perform these functions. These can be classified into three main categories: sound-oriented, notation-oriented, and analysis-oriented models.
Sound-oriented models address music modeling by considering the sounds produced, regardless of the visual representation issues of music notation and multiple visualization modalities. Classical examples of sound-oriented models are MIDI (Musical Instrument Digital Interface) and CSOUND. Of these, MIDI, first developed in 1982, is the most common. MIDI is a protocol designed to allow computers and electronic instruments of different manufacturers to work together. It uses a limited number of music notation symbols (e.g., notes, rests, accidentals, clefs, ties, and dots) and provides a set of common functions such as note events, timing events, pitch bends, pedal information, and synchronization. It is one of the most widely used languages for the interchange of music information because it allows storing music in a compact form.
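MIDI's compactness comes from its simple byte-level encoding: a note event is just three bytes. The sketch below illustrates this layout for the Note On and Note Off channel messages; the helper function names are invented for illustration.

```python
# A MIDI Note On event is three bytes: a status byte (0x90 plus the
# channel number), a note number (0-127, where 60 is middle C), and a
# key velocity (0-127). Note Off uses status 0x80 plus the channel.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI Note On message for the given channel (0-15)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Build a Note Off message; a release velocity of 64 is a common default."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 64])

# Middle C (note 60) struck at moderate velocity on channel 0:
msg = note_on(0, 60, 100)
print(msg.hex())  # -> 903c64
```

Because a whole performance reduces to short event messages like these plus timing information, a MIDI file is far smaller than the equivalent recorded audio.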
Notation-oriented models focus on representing scores on the screen and printing them on paper by using a set of notation symbols. Software examples include Finale, Sibelius, Igor, and Score.
Analysis-oriented models such as EsAC, Humdrum, and MusicData are used to describe music for further analysis at the level of harmony, style, and melody. These models tend to code music with numbers in order to make metric analysis easier, while neglecting many detailed interpretation symbols.
A new category of music languages has been proposed in order to exchange music on the Internet. Examples of this type of language include SMDL, WEDELMUSIC, and MusicXML. Some of these are based on XML (eXtensible Markup Language), the markup language currently in use for much Internet content.
Notational Music Composition
Common music editors allow the composer to write music in different ways by using specific input devices such as tablets, light pens, or special keyboards. Many allow users to edit with a mouse, import files from other music languages, and use optical music recognition techniques (wherein music information is extracted from a scanned printed or handwritten sheet) or audio recognition (detecting the pitch and rhythm from an audio source). Innovative solutions for music composition have benefited from the increasing presence of distributed systems. For example, MOODS is a cooperative system for editing and manipulating music. The most common input method is using a MIDI keyboard to enter the notes. The composer plays the keyboard, producing the melody for a specific voice or track (typically an instrument), which is translated by the device into MIDI codes and sent to the computer, where the codes are converted into music notation. The composer is thus able to see the played notes on the monitor, to store the music, and to print it. Another typical feature is related to the output of music: after the music input phase, the composer can choose the instrument associated with each track and listen to the execution or use it as accompaniment. To do this, the computer sends MIDI commands to drive external (e.g., keyboards, tone and effects generators, controllers) or internal (e.g., software synthesizer) reproducers.
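The conversion from MIDI codes to notation described above starts from the note number the keyboard transmits. A minimal sketch of that mapping, assuming the common convention that middle C (MIDI note 60) is written C4:

```python
# Map a MIDI note number to a pitch name in scientific pitch notation.
# The octave-numbering convention (middle C = C4) varies between
# editors; this sketch assumes the most common one.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_pitch(note_number: int) -> str:
    """Convert a MIDI note number (0-127) to a pitch name with octave."""
    octave = note_number // 12 - 1          # note 60 -> octave 4
    return f"{NOTE_NAMES[note_number % 12]}{octave}"

print(midi_to_pitch(60))  # -> C4 (middle C)
print(midi_to_pitch(69))  # -> A4 (concert pitch, 440 Hz)
```

A real editor also interprets the timing of the events to infer note durations, rests, and bar lines; only the pitch-naming step is shown here.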
In algorithmic composition, the composer builds a program rather than a score, delegating the note-level decisions to the computer. Two main methods have been developed: deterministic and probabilistic. The deterministic approach generates notes without random selection. The variables supplied to such a process may be a set of pitches, a musical phrase, or some constraints that the process must satisfy. The probabilistic approach integrates random choice into the decision-making process, generating musical events according to probability tables that estimate the occurrence of certain events.
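The probabilistic approach can be sketched in a few lines: each event is drawn from a probability table. The pitch set and table values below are invented for illustration.

```python
# A minimal sketch of probabilistic algorithmic composition: each note
# is drawn from a probability table estimating how often each pitch
# should occur. The pitch set and weights here are illustrative only.
import random

random.seed(7)  # fixed seed so the example is reproducible

pitches       = ["C4", "D4", "E4", "G4", "A4"]   # a pentatonic pitch set
probabilities = [0.30, 0.15, 0.25, 0.20, 0.10]   # weights summing to 1.0

def compose(length: int) -> list[str]:
    """Generate a melody by weighted random selection over the table."""
    return random.choices(pitches, weights=probabilities, k=length)

melody = compose(8)
print(melody)  # e.g. an 8-note melody drawn from the table
```

A deterministic process, by contrast, would replace `random.choices` with a fixed rule, such as applying a transformation to a supplied phrase until the given constraints are satisfied.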
Audio Music Composition
Computer-based audio music composition is based on the processing of sound samples. Composers combine synthetic or digitized sounds using specific editing programs (groove machines, audio editors) or hardware instruments (sampler keyboards, sequencers) that provide support for functions such as:
- Multi-track recording or sequencing, to manage several audio tracks simultaneously for playback or recording process;
- Time stretching, which allows for changing the duration of audio data without affecting the pitch;
- Pitch shifting, in which sound samples are modified in order to transpose the pitch of notes;
- Effect applying, to add reverberation, delay, modulation, and dynamics effects.
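One of the effects listed above, delay, can be sketched as a feedback delay line: each output sample is mixed with an attenuated copy of the output from a fixed number of samples earlier. The parameter values below are illustrative, not tied to any particular editor.

```python
# A simple feedback delay (echo) effect: each sample past the delay
# point receives a decayed copy of the output `delay` samples earlier.

def apply_delay(samples, delay=4, feedback=0.5):
    """Return a copy of `samples` with a feedback delay applied."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]  # add decayed earlier output
    return out

# A single impulse now echoes every `delay` samples, halving each time:
dry = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
wet = apply_delay(dry)
print(wet)  # echoes of 0.5 at sample 4 and 0.25 at sample 8
```

Reverberation can be approximated by combining many such delay lines with different times and feedback levels; time stretching and pitch shifting require considerably more elaborate processing.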
Audio editors and electronic devices provide the composer with several sound generation techniques, such as:
- Additive sound synthesis. This common technique merges and filters signals with different waveforms, frequencies, phases, and amplitudes to produce new sound signals.
- Subtractive sound synthesis. In this technique, a complex audio signal is modified through filtering so as to modify the frequency content.
- Physical modeling synthesis (PhM). This begins with mathematical models of the physical acoustics of instrumental sound production. The equations of PhM describe the mechanical and acoustic behavior of an instrument being played.
- Frequency Modulation (FM) Synthesis. This is based on the theory underlying the FM of radio-band frequencies. This form of synthesis was commercialized by Yamaha in 1983 with the DX7 synthesizer. In the basic modulation technique, a carrier oscillator is modulated in frequency by a modulator oscillator. This modulation creates additional frequency components (sidebands) that enrich the spectrum of the resulting signal.
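The basic FM technique in the last item can be sketched directly from its defining formula, y(t) = sin(2π·fc·t + I·sin(2π·fm·t)), where fc is the carrier frequency, fm the modulator frequency, and I the modulation index. The sample rate and parameter values below are illustrative.

```python
# A sketch of basic FM synthesis: a carrier oscillator whose phase is
# modulated by a second (modulator) oscillator. Raising the modulation
# index adds more sidebands and thus a richer spectrum.
import math

SAMPLE_RATE = 8000                   # samples per second (illustrative)
fc, fm, index = 440.0, 110.0, 2.0    # carrier, modulator, modulation index

def fm_sample(n: int) -> float:
    """One sample of y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t))."""
    t = n / SAMPLE_RATE
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

signal = [fm_sample(n) for n in range(SAMPLE_RATE)]  # one second of audio
print(min(signal), max(signal))  # samples stay within [-1, 1]
```

Writing `signal` to a DAC (or a WAV file) would make the result audible; with `index = 0` the code reduces to a plain 440 Hz sine tone, which is a quick way to hear what the modulation adds.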
see also Animation; Film and Video Editing; Music; Music, Computer.
Pierfrancesco Bellini, Ivan Bruno, and Paolo Nesi
Bellini, Pierfrancesco, Fabrizio Fioravanti, and Paolo Nesi. "Managing Music in Orchestras." IEEE Computer, September 1999, pp. 26–34.
Roads, Curtis. The Computer Music Tutorial. Cambridge, MA: MIT Press, 1998.
Selfridge-Field, Eleanor. Beyond MIDI: The Handbook of Musical Codes. Cambridge, MA: MIT Press, 1997.