Inaudible Technology: The Trail of the Lonesome Mike


We maintain sound has a place in the pictures of tomorrow. Producers have to learn what that place is.

Maurice Kann, Film Daily, 1 April 1928

We have unlimited variety in a motion picture. Unity is the quality desired, the thing to strive for, as no motion picture can qualify as a meritorious work of art without it.

John F. Seitz, Cinematographic Annual 1930

For decades the myth was that Hollywood was in a state of equilibrium that the coming of sound disrupted. In John Seitz's thinking, sound had introduced potentially divisive "variety" and now it was necessary to restore "unity." Unquestionably, sound had caused decisive changes in the way films were produced. Throughout the twenties Hollywood had achieved efficiency, predictability, and sustained economic success by developing standard practices which were applied to acting, storytelling, photography, editing, and so on. The film industry's task was threefold: to develop strategies which would maintain the high level of attendance and entice new moviegoers to become regular consumers; to find efficient ways to incorporate the talkies into existing patterns of production with a minimum of economic disruption; and to ascertain and respond to audience selectivity. Only if all of these conditions could be satisfied would the talkies become a viable new film form.

Sound Infrastructure and Mise-en-Scène

Infrastructure refers to those fundamentals of production which are normally hidden from moviegoers, and mise-en-scène is the term for everything the camera lens reveals. Of course, they are two sides of the same coin. In chapter 7 we saw how the studios proceeded tentatively during 1928. Sound was treated as a novelty, not as a transition to a permanent form. Public support for sound cinema encouraged (and financed) investment in equipment, physical plant, and exhibition sites; by the time the 1929–1930 season began, those investments had pushed Hollywood past the point of no return. The studios rapidly abandoned silent production (although silent distribution continued for some time) as they sought to revise standards of production and norms of audio expression in films. Some of the infrastructural changes which took place included switching to a new kind of film stock, trying out new artificial light sources, and establishing sound-related practices.

While at first the newfound sonic capability was flaunted in mise-en-scène, Hollywood rapidly changed course to restrain and modulate acoustic effects. This practice was in response to audiences who made it clear that sound would be acceptable in features only if it did not interfere too much with the traditional storytelling movie. A few critics and consumers mourned the passing of silents, but the American public, commentators, and the majority of trade editors cheered the studios on with the rallying cry of Progress. Kann exhorted,

We draw attention to the indisputable fact that almost every new dialogue picture denotes an advance over its predecessor. Perfection, of course, is still among the unachieved. The technique is new, the medium of sound unknown. It will take time. But we have an abiding faith in the ingenuity and the ability of the creative element in production that the problem will be surmounted. (Film Daily, 29 January 1929, p. 1)

In less than two years, Hollywood decided on uniform practices which allowed it to achieve something like the "classicism" of the silent period. Trade organizations, engineers, technicians, and industry suppliers pressed forward with unabashed confidence in scientific advancement. They worked quickly to streamline practices—offsetting variety by a return to unity, in John Seitz's terms—and to get down to the business of making popular films.

Lights

Many aspects of the industry's transition to sound involved processes and procedures which the public neither detected nor found interesting. Lighting in cinematography is all-important yet unnoticed (as sound would become) except in special-effects circumstances. Indoor lighting for silent movies was a mixture of arc light, produced by creating a DC spark between carbon rods, and Cooper-Hewitt mercury-vapor lights, which were arranged in banks of tubes and whose light resembled that of modern streetlights. During the transition to sound Hollywood changed to incandescent lights. Traditionally this infrastructural change has been linked to the need for noise reduction on the set because the arcs gave off a "fizz" or "sizzle," and the Cooper-Hewitts' transformers hummed. Technicians were fond of saying that "arc lamps cannot be used because their sputtering interferes." In reality, though, these problems could be surmounted by applying a "choke coil" to the arcs and by moving the Cooper-Hewitts' transformer. The carbon ash and the bluish light these units gave off were more serious obstacles than their acoustic problems.1 At the time of The Jazz Singer's release, Film Daily was already running the headline "Carbon Lamp Seen as Doomed." So the change was under way before the conversion to sound began. Indeed, some studios, such as Paramount, Pathé, and Sennett, converted to incandescents before they installed sound.2

As they had done with sound, the studios banded together to maximize the potential of incandescent light and to negotiate in unison with manufacturers. During 1927 and 1928 the American Society of Cinematographers, the Society of Motion Picture Engineers, and the Academy conducted what became known as the "Mazda tests" (after the brand name of the General Electric lamps).3 There were compelling pragmatic and aesthetic reasons for preferring incandescents, but the determining factor was that they consumed at least one-third less electricity than arcs or Cooper-Hewitts. Jack Warner announced that, as a direct result of the tests, he would convert to incandescents. By the time Jolson began shooting The Singing Fool in June 1928, Warner Bros. was using only "inkies."4 They were adopted by most major studios early in 1929. Bordwell has maintained that "the Mazda tests were a turning point in the history of Hollywood technology," not because noiseless incandescent lighting made sound recording possible, but because the tests were another instance of associationism—the studios cooperating to face a mutual problem.5 The tests also substantiated Hollywood's ongoing infatuation with engineering and "scientific" solutions.

Mazda lights had advantages and disadvantages for sound recording. Obviously, they were silent. They could be brought in close, and their illumination could be manipulated to create a wide range of lighting effects (soft, harsh, patterned, etc.). Their intensity could be faded up or down. But they also produced intense heat, which was hard on the actors, and they taxed soundstage air-cooling systems, which in turn produced noise.

Film Stock

At the same time that incandescents were being introduced, the traditional black-and-white orthochromatic film emulsion, sensitive to wavelengths at the blue end of the spectrum, was being phased out in favor of panchromatic emulsions, which were sensitive to a wider range of the visible spectrum. (Thus, the new stock was better suited to the wider spectrum of light emitted by Mazdas.) Like the switch to inkies, this transition occurred while the talkies were being created. Salt contends that all the studios had adopted pan film by 1927. Certainly by 1928, Du Pont's and Kodak's improved formulas had become the industry standards. Eastman Kodak Supersensitive Panchromatic, a high-speed film (for that time) that became available in February 1931, was used primarily for newsreel and night photography. RKO-Pathé, however, switched to this stock exclusively for all its productions because sets could be lit with less electricity.6 The industry's conversion to these picture-taking emulsions was independent of sound.

Obviously, because Vitaphone played the sound track from a disc, the type of film used for shooting or for release prints was irrelevant. For sound-on-film, however, the emulsion was crucial. Single-system recording (recording the picture and the sound on the same film inside the camera) presented problems. The image and the sound tracks had different developing requirements, so the result was a compromise. After early 1929 single-system was used primarily for newsreels, as in Fox-Case Movietone, and for location work. For double-system recording, the picture track and the sound track were processed separately for optimal results, then printed together to make a "married" release print. Eastman and Du Pont devised specialized films for recording sound tracks. These were orthochromatic with the very fine grain required to reduce background hiss.

Eastman Type 1507 negative for sound recording was marketed in 1928 and made specifically for variable-density tracks. In March 1929, Du Pont introduced two sound-recording stocks, Type VA and Type VD, for variable-area and -density. These competed with Eastman Reprotone. Eastman Type 1359 became accepted by the industry as the standard in 1932 for variable-density sound.7 One reason Warner Bros. held out longer than other studios in converting to optical sound was that its wax blanks were much cheaper than the negative stocks for optical sound. The savings, said one executive, amounted to many thousands of dollars annually for the studio.8 As competition increased and the price of raw stock declined, Warners had less incentive to stay with discs.

Camera Design And Sound Abatement

The standard studio camera of the 1920s, manufactured by Bell and Howell, was prized by cinematographers for its rock-steady image. However, the mechanism that made it a great silent camera, a steel registration pin, also made it useless for sound recording because of its loud clattering noise. A competing camera made by the Mitchell Company became the new standard. As early as 1927 the Warner Bros. cinematographer Hal Mohr had patented a way to render a Mitchell camera "noiseless." He used it to shoot Bitter Apples.9 The basic modifications involved replacing the external steel-spring tension belts on the film magazine with leather belts. (These equalizers took up slack as the stock ran through the camera.) The silent-standard four-hundred-foot roll of film was replaced by a thousand-foot magazine, which made it possible to record a full projection reel in one take, lasting up to ten minutes. Inside, fiber composition gears supplanted the metal ones.

Nevertheless, the sensitive and omnidirectional Western Electric capacitor mike still managed to pick up the slightest whir, making it necessary to isolate the source of the racket, the camera, inside a soundproof booth. The famous "icebox" was unwieldy, though not as absolutely static as legend has it. Mounted on wheels, it could be pushed around the soundstage with its cables dragging. The opening (covered with optically ground glass to reduce refraction) was wide enough to facilitate short pans and reframing movements. It was also possible to use the booth outdoors; for Hell's Heroes (1930) William Wyler mounted one on tracks for moving shots on location in Death Valley. Directional microphones operating out of mobile sound trucks made transporting the icebox outdoors unnecessary.

The studio camera booth contained a speaker monitor so that the one or two camera operators inside could hear the playback as well as communicate with the sound engineers by intercom. To compensate for the increased distance from the actors, lenses of longer focal length were used.10 This practice resulted in decreased depth of field (a shallow plane of focus), which is clearly visible in films from 1927 to 1930. Sharply focused foreground figures stand out against blurred backgrounds. Some filmmakers tried to explore shallow depth of field creatively, as Mervyn LeRoy did in Little Caesar (1930). Tony, in a tender scene, is convinced by his mother to confess his crimes. As she walks away from the camera crying, she is allowed to go out of focus, suggesting her son's teary vision of her.

Each studio had its own tinkerers who devised ways to liberate the camera from the booth. Fox had its horse blankets. The Warner Bros. blimp, though constructed of resilient material, weighed thirty-seven pounds.11 The MGM prototype was a lightweight housing containing a fibrous filler that fit snugly over the Mitchell body yet allowed focusing access to the viewfinder. In March 1931, AMPAS charged a committee with the task of achieving uniform camera silence.12

Once the camera quieted down, it could be placed on heavy-duty tripods, dollies, "Rotoambulators" (a combination dolly and crane), or giant cranes (as in Broadway [1929]). The myth that the talkies were stage-bound single-take affairs may apply to some of the earliest examples, like De Forest Phonofilms, but is dispelled simply by looking at the films of 1929–1930. Most contain at least a few very fluid moments. Salt concludes that

if one makes a rough addition of all the cases, one finds that in fact there was remarkably little discontinuity in the use of camera movement across the transition to sound in Hollywood.… The use of the mobile camera in their early sound films by such second and third rank talents as Eddie Sutherland (The Saturday Night Kid [1929]) and Paul Sloane (Hearts In Dixie [1929]) attests to the vigour with which a burgeoning fashion could be pursued in the face of technical obstacles. (Barry Salt, Film Style and Technology: History and Analysis [London: Starword, 1983], p. 229)

Aspect Ratio

Disk recording had no effect on the film image, so the Vitaphone picture filled all the area on the film stock between the sprockets, as in silent prints. The image had an aspect ratio (the proportion of height to width) of three by four (usually expressed as 1:1.33). With Movietone sound-on-film, the strip occupied by the sound track was borrowed from the picture area, resulting in an almost-square aspect ratio (1:1.15). The director Paul Fejos, for one, noted that the "use of sound changes the proportion of the screen slightly," but he felt that the square had more creative potential. "This change is an improvement over the old style and gives greater flexibility."13 But producers, directors, and cinematographers were not totally in charge of the shape of the image. John Aalberg, an RKO-RCA technical consultant, observed in 1930 that theater owners, "for artistic reasons," had insisted on continuing to show optical-track films with three-by-four-proportioned aperture plates in the projector, even though doing so cut off 10 percent of the picture height. To compensate, cameramen had begun to frame their shots with more head room. Consequently, the sound engineer had to raise the microphone higher, away from the actor, in order to keep it out of the frame. Finally, in 1932, the industry adopted the "Academy ratio" with an aspect of 1:1.37 (close to three by four). This was achieved by adding a "hard matte"—black strips at the top and bottom of the frame on the film stock—thus conforming the sound-film image to the proportion that projectionists had been using for years.14
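
To make the arithmetic of these competing ratios concrete, the following minimal sketch (in Python, with millimetre dimensions that are illustrative assumptions rather than figures from this chapter) shows how surrendering frame width to an optical track squares up the 1:1.33 silent image, and roughly how much height a three-by-four projector plate then crops away.

```python
# Illustrative sketch of the aspect-ratio arithmetic discussed above.
# The dimensions are assumptions chosen to approximate the silent full
# aperture, not figures quoted in the chapter.

SILENT_WIDTH_MM = 24.9    # full picture width between the sprocket holes
SILENT_HEIGHT_MM = 18.7   # frame height (24.9 / 18.7 is roughly 1.33)
SOUND_TRACK_MM = 3.0      # width borrowed for the optical sound track

def ratio(width, height):
    """Width divided by height; 1.33 corresponds to the chapter's 1:1.33."""
    return width / height

picture_width = SILENT_WIDTH_MM - SOUND_TRACK_MM
silent_ratio = ratio(SILENT_WIDTH_MM, SILENT_HEIGHT_MM)   # about 1.33
sound_ratio = ratio(picture_width, SILENT_HEIGHT_MM)      # about 1.17, nearly square

# Projecting the squarer sound image through the old three-by-four
# aperture plate passes the full width but only width / 1.33 of the
# height; the remainder is cropped, which is the loss Aalberg noted
# (on the order of 10 percent, depending on the exact apertures used).
height_shown = picture_width / 1.33
height_cropped = 1 - height_shown / SILENT_HEIGHT_MM

print(f"silent {silent_ratio:.2f}, sound {sound_ratio:.2f}, "
      f"cropped {height_cropped:.0%}")
```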

The Grandeur process, developed by Earl Sponable, debuted in September 1929 and was the most successful of several widescreen formats introduced during the period. Grandeur recorded and projected the image on 70-mm film stock in a very wide rectangular aspect ratio of 1:2.13. William Fox financed this venture by forming a partnership with Harley L. Clarke, the Chicago utilities magnate who also owned General Theaters Equipment Company (and soon would replace Fox as the chief executive). Fox and General bought an interest in the Mitchell Company to manufacture and control access to the special cameras. A 70-mm Super-Simplex projector was made by the International Projector Corporation to Fox specifications. Besides the expansion of the image, Grandeur also markedly improved the standard sound delivery system. It used a quarter-inch optical track, three times wider than the 80-mil track on 35-mm film. Film Daily observed that the extra area greatly reduced noise, "a troublesome matter with talking pictures at present."15

The experiment cost William Fox $2 million. Grandeur looked and sounded great (as restored prints of The Big Trail [1930] show). Other studios prepared to join the widescreen fad. Within weeks, MGM debuted its Realife process (for Billy the Kid [1930]), and Warners unveiled its 65-mm Vitascope format (for Kismet [1930]). Both used discs for the sound track in order to take advantage of the full width of the film stock for the picture. RKO used the Spoor-Berggren Natural Vision system, which recorded on 63.5-mm stock and played the sound back on a separate film track. When Spoor shot Lady Fingers at the Gramercy Studios he placed his huge static camera eighty feet from the stage. The action was also filmed by regular cameras fifty feet from the stage. The film recorded the musical revue in one continuous take. Though advertising and press releases suggested that Natural Vision was a stereoscopic process, Belton claims that this was just publicity hype. Mordaunt Hall, however, often mentioned the "illusion of depth and distance" afforded by all these widescreen processes. He complained that in Danger Lights "while the persons occupying the centre of the stage are in focus others in the background seem more out of focus than usual." He also noted that the widescreen image created a jarring effect when edited, a complaint to be heard about CinemaScope a quarter century later.16

Fox's move was widely regarded as a power play designed to force another major change onto an industry still trying to cope with sound. John Belton has speculated that, "through his sound patents, Fox hoped to secure a hold over sound film production, reducing other studios to the status of his licensees. The successful innovation of Grandeur could have resulted in the displacement of 35mm technology as the dominant mode of production, distribution, and exhibition and would have given him a similar control over image technology."17 Ostensibly to agree on industry standards (but also to thwart Fox), the Academy declared a moratorium on widescreen in 1931.

Screens

Sound brought changes to the most visible yet least noticed aspect of the movie experience, the projection screen. Where should the loudspeakers be placed? The General Electric engineer Edward Kellogg recalled, "Our sense of the direction from which sounds come is too keen for us to be fooled by loudspeakers placed alongside or above the screen. Sound must come from directly behind the screen to give a good illusion. This is one of the lessons that was learned early." Though Phonofilm and the earliest Vitaphone experimented with placing speakers down in front of the screen to simulate the absent orchestra, this practice was fleeting. Kellogg attributed the invention of the 1927 sound-transmitting screen to Earl Sponable. The basic problem was resolving the screen's two incompatible functions—reflecting light and transmitting sound. A compromise was reached by perforating the material. The more holes there are in a screen, the more sound comes through undistorted, but the less light is reflected. The SMPE determined that perforating 4-5 percent of the surface area was optimal. In 1928 the higher-reflecting Cinevox screen appeared. The Vocalite Sound Screen ("porous but not perforated") was also "chemically correct for colored pictures." Da-Lite marketed an "eggshell" screen with a slightly yellow color to offset the bluish beam of the arc light in the projector.18 ERPI also touted its "official" Ortho-Krome screens for both color and sound rendition.

Sound Recording

The years 1927–1931 saw a steady increase in the signal-to-noise ratio on the sound track and in theatrical reproduction. (That is, the sounds became louder and the silent passages became quieter.) This increase corresponded to a change in the fundamental conception of movie sound, away from producing a faithful recording of the filmed event and toward constructing a noise- and distraction-free sound track which assigned priorities to the voice and other sounds.

The transition occurred in roughly three stages. At first the main concern was external noise abatement: isolating the camera, constructing airtight studio buildings, filming outside of the city or at night, and warding off loud aircraft. These concerns were raised not only by the properties of the Western Electric mike but by its placement high above the actors to keep it out of camera range. The soundstage itself was part of the noise reduction system. The major studios built specially engineered spaces designed to isolate the interior from the noisy city streets outside. They also contained the mixing booth for the recordist. Usually the recording apparatus was in a separate room or even a separate building, connected to the stage by umbilical cables. ERPI supervised some construction, but the studios also engaged consultants such as Professor Knudsen of UCLA. At least one new firm, the Austin Company, thrived as specialists in "acoustic science" and designed soundstages for MGM and Columbia.19

It was gradually realized that some speech clarity could be sacrificed: a trailing voice or muffled tone could be tolerated if it was consistent with the illusionistic space of the shot or justified by the narrative. Thus, the barely audible conversations of the cowboys in The Virginian and the clanging background noises in Danger Lights are interpreted by the viewer as convincing environmental sounds, not as errors in recording.

In the early 1930s advances in film stock, microphones, and new tactics for placing mikes enabled technicians to isolate the voice from its background and to dub in effects, rather than mixing them live during the take. The sound track came to be seen more as an ensemble constructed in post-production rather than as a record of an acoustic performance. Putting it another way, the early aesthetic of the sound film as a transmitter of virtual events gave way to a view of sound as an edited entity, parallel to (but seldom matching in complexity) the image track.

Synching to Playback

A crucial change in sound recording occurred when studios began to film action which matched a previously recorded sound track. The first playback session, according to lore, was improvised during the shooting of the "Wedding of the Painted Doll" number in The Broadway Melody (1929). When Thalberg ordered retakes, the sound technician Douglas Shearer convinced him to reshoot while the dancers accompanied a playback of the already satisfactory disc. The dancers kept to the beat. Not only did this allow for more camera and performer mobility, it enabled control over the recording of the sound since the music could be registered under optimal conditions in a separate studio. Actually, this was a throwback to the earliest sound films, such as the Gaumont Chronophone. Lip-synching and dancing to prerecorded sound became standard for musical numbers.20

Playback also enabled crews to dispense with offscreen orchestras and to shoot musical numbers outdoors. Thus, in Rio Rita (1929) the singers and choreographed chorus lines perform with orchestral accompaniment on the desert location, just as if they were on a soundstage. Scenes requiring close views of singers, however, were difficult to lip-synch. Eddie Cantor recalled, "My one problem was the novelty of recording songs in advance, then appearing before the camera in costume and make-up, and mouthing the words to a playback."21 In Whoopee! (1930) we see a modification of the playback system. In the "Making Whoopee" number, Cantor seems to have first been recorded going through his routine live. The camera pans to keep him framed in a medium close shot. We can tell he is not lip-synching to a playback because he flubs the word "telephone" and claps his hands without losing synch with the sound track. This recording, then, became the source for the playback record during the dancing and chorus shots, which are in long shot, where a slight loss of lip-synch is not detectable. So the solution was to combine a live close-up master shot, which shows him singing in synch, with long shots showing him dancing (and doing a flip) synched to a playback of the sound from the master shot.

The unlinking of the microphone from the live event opened the door to unlimited intervention in the construction of the music track. It was only a short time until the whole sound track was susceptible to "sweetening"—improving (or creating) an otherwise nonexistent acoustic environment.

Noise Reduction

Some strategies for improving the signal-to-noise ratio were straightforwardly pragmatic: diminish the noise and boost the signal. In response to pilots' fondness for Hollywood and Burbank airstrips, studios asked "aviators from nearby airdromes…to fly high when passing over studios, because noise of their motors penetrate sound proof stages." The studios optimistically painted signs on their soundstage roofs and hoisted red flags during shooting.22 To silence pedestrian traffic, the producers adapted radio's "on air" sign and furnished stages with red lights on the door of the camera booth and around the set to warn would-be noisemakers.23

Normally, to boost the signal, one simply tried to move the mike as close as possible to the speaker, perhaps hiding it in a prop on the set. But a strong voice presented a particular difficulty: the only way to prevent such a voice from overloading a capacitor mike and causing distortion was to move it back. This was how Lawrence Tibbett was recorded for The Rogue Song (1930). "The sound boys finally finished up with the mikes 15 feet back, the orchestra anchored, everybody grabbing hold of something, and [the director Lionel] Barrymore dictating, 'Fire when ready, Larry.'"24 This technique had the side effects of increasing the pickup of ambient noise and heightening scale-matching problems by making the singer sound distant if the camera was not also pulled back.

The weak link in sound reproduction was still at the consumer end, in exhibition. In May 1931, J. I. Crabtree, president of the SMPE, told the group that radio music sound was now superior to theater sound. The aptly named Progress Committee in October reported that some improvement in sound was observed in the better type of theater (that is, the studio-affiliated chains), but that there was no noticeable improvement in sound reproduction in general. One persistent problem was that vacuum tubes introduced a "frying" noise into the theater's sound system if not properly maintained.25 With the lessening of ERPI's influence, the studios had little control over the sound quality in theaters they did not own.

Microphone

The acoustic engineers' challenge was to curb the microphone's hunger for all sounds and to make mikes more portable. The standard Western Electric condenser microphones required frequent ERPI service and were cursed with a propensity to register equally all sounds from any direction. In an early effort to limit their omnidirectionality, Carl Dreher, an RKO engineer, developed what he called the beam microphone—it kept "extraneous noises out of the beam." For this reason, it was also called a concentrator microphone.26 Dreher's cumbersome but effective product used a parabolic reflector dish about three feet across to gather sounds from one source and focus them on the pickup. The studio's widely publicized application of his parabolic mike was in Danger Lights, released in late 1930. Listening closely to the sound track, one can hear the technique foregrounded in the many outdoor scenes set with trains nearby. Their background sounds create atmosphere, and yet the conversations are still intelligible. The dialogue reproduction, however, sounds muddled, and the voices seem farther away than the actors. William LeBaron proclaimed that microphones were now as mobile as cameras and that outdoor locales, abandoned since the advent of sound, would return to favor.27

The parabolic mike was made obsolete (for ordinary use anyway) when RCA released its ribbon microphone in January 1931.28 Later referred to as a velocity microphone (and still later as a cardioid mike), this microphone consisted of a thin metallic ribbon which generated a minute voltage when moved by air vibrations. This current was thermionically amplified and gave very good reproduction, with two added advantages. Clarity was increased because it did not pick up reverberation, and it was directional. The ribbon moved only front-to-back, so it recorded nothing coming in from the side (or from the back when the casing was in place). This directionality produced its distinctive heart-shaped pickup pattern. As long as the mike was aimed away from the camera (which was blimped as well), camera noise was a thing of the past. Dreher began using ribbon mikes on all RKO pictures in the summer of 1931 and praised them for eliminating "move-ins" (resetting the mike for each shot), thus helping to keep the sound on the track at a uniform level.29
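
The directional behavior described here can be approximated with the textbook first-order model of a cardioid pickup, in which sensitivity falls off as (1 + cos θ)/2. This is a standard acoustics formula imported for illustration, not one given in the chapter; the sketch below simply tabulates it.

```python
import math

def cardioid_gain(theta_degrees):
    """Relative sensitivity of an idealized cardioid ("heart-shaped") pattern.
    theta is the angle between the source and the direction the mike faces."""
    return (1 + math.cos(math.radians(theta_degrees))) / 2

# Compare with an omnidirectional condenser, which hears everything equally.
for angle in (0, 45, 90, 135, 180):
    print(f"{angle:3d} deg  cardioid {cardioid_gain(angle):.2f}   omni 1.00")
# Sensitivity runs from 1.00 at the front down to 0.00 directly behind the
# mike, so a blimped camera placed to the rear contributes almost nothing.
```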

Western Electric also introduced a new concept, the electrodynamic transmitter (later called the dynamic coil mike). The characteristic "salami" preamplifier formerly attached to the condenser pickup could now be as far as two hundred feet away, thereby making it easier to hide the mike on set, put it in a mobile unit, or swish it around on a boom. ERPI boasted that since the electrodynamic transmitter was less affected by dust and moisture, it was no longer necessary to store the mike in a desiccator jar when not in use(!). Though D. W. Griffith used this "super-selective type microphone" to record The Struggle (1931), the dynamic coil did not come into widespread use in Hollywood, as Salt points out, until late in the 1930s.30

Sound Editing

Much of the film editing technology and many of the filmmaking divisions of labor still in use were devised to cope with sound. The picture editors were generally the ones who cut the sound tracks, effectively doubling the time it took to edit a film. Not surprisingly, sound was treated as an analogue to the picture, with similar names for similar effects (dissolve, straight-cut, etc.). King Vidor recalled the rigors of editing Hallelujah! (1929) without any new equipment:

I rigged a push-button control from the projection room theater to a flashing lamp in the projection booth. The operator was instructed to make a grease pencil mark on the moving film when the light flashed signaling the onset of [a] line of dialogue. Afterwards the editor and I would go to the cutting room and try to synchronize the two tracks.

When we would return to the theater to view the sequence with sound, we would invariably find that the synchronization was two to six feet off—the result of the time it took for me to press the button and the operator to reach into the mechanism of the projector with the marking pencil. (Vidor, King Vidor on Film Making [New York: David McKay Company, 1972], p. 15)

Paramount, under Roy Pomeroy, was probably the first studio to have a designated team of sound editors. There, Andy Newman and Merrill White were credited with developing new devices to edit the picture and sound tracks separately.31 John Aalberg, at RKO, invented an attachment for the Moviola viewing machine for cutting picture and sound. Analogous to the visual match-action cut, "the instrument enables the cutter to literally 'cut a word in half' with little difficulty."32 Lodge Cunningham, a Christie sound engineer, was among several proud editors who laid claim to making the first sound lap dissolve (in Divorce Made Easy [1929]). Using this technique, voices in one sequence faded out while those in the next faded in, merging perfectly, it was said, with no "dupings of scenes."

Manufacturers quickly supplied the field with suitable equipment to streamline the editing chores. The Neumade Synchronizer went on the market in August 1930. Its sprocketed hubs fixed to a single shaft transported image and sound tracks together in locked synch for easy editing. Eastman and Du Pont's edge-numbered stock also helped editors keep frames aligned.33 Even film splicing had to be modified. A straight-cut sound splice made a loud pop unless each cut was obscured. Editors advised using specially formulated "blooping ink" or, better yet, a piece of black film. Eastman stepped in with its sound film patches for blooping in 1931.34

Let us also mention Pat Bernard at RKO, the "most photographed man in talking pictures." He was the "marker" (later the "clapper boy"). "His duties consist of marking each scene by clapping blocks of wood together, giving the camera and microphone time to record the action, as each scene is taken during production."35 By matching the sound of the clack on the optical track with the moment of contact in the picture, the editor could establish perfect synch.
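
The synching procedure amounts to simple frame arithmetic, as the hypothetical sketch below illustrates (the frame numbers are invented, and 24 frames per second is assumed as the sound-era speed).

```python
# Minimal sketch of slate synching: locate the clap in the picture, locate
# the clack on the sound track, and shift the track by the difference.
FPS = 24  # sound-era camera and projection speed

def sync_offset(clap_frame_in_picture, clack_frame_on_track):
    """Frames the track must be displaced so the clack lands on the clap."""
    return clap_frame_in_picture - clack_frame_on_track

picture_clap = 112  # frame where the blocks of wood are seen to meet
track_clack = 108   # frame where the clack registers on the optical track

offset = sync_offset(picture_clap, track_clack)
print(f"displace track by {offset} frames ({offset / FPS:.2f} seconds)")
```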

Dubbing

Dubbing, a term derived from phonograph work, is associated mainly with foreign-language adaptation, but there was little new or remarkable about it. It referred to the practice of combining sections from previous records to make a new composite. An ERPI engineer wrote that "the need for dubbing was anticipated. In fact, it was considered as a simple application of already developed processes."36 In The Jazz Singer, Warners used the technique to edit Cantor Rosenblatt's performance of the Kol Nidre, which Jolson then lip-synched to a playback (predating the "Wedding of the Painted Doll" incident). The surviving Vitaphone "Re-recording notes" contain an annotation for this disc: "2204—Cantor Rosenblatt. Instead of being used in its entirety, a portion of the record was duped onto 2214 record along with the musical score. The record as such is therefore not being used."37 This notation indicates that disc-to-disc dubbing was in use as early as 1927. Two years later the cinematographer William Stull confirmed that the technique was still in use. He wrote that, while Vitaphone discs could not be cut, "it is possible to play off any given scene from one record to another, which gives the process a certain degree of flexibility in editing, or assembling the finished picture."38 In actual practice, though, it was difficult to "edit" discs with any precision by re-recording, and the drop-off in quality was sharp. Later Warners edited optical sound and then transferred it to make master discs.

Dubbing with optical sound was also unsatisfactory in the early stages because each generation of printing added another layer of background noise. Undaunted, Pathé News experimented with a "sound double exposure" in 1928. The second issue of the newsreel contained scenes of a memorial parade superimposed over a French battlefield. Simultaneously, the sound track played a distant trumpet over a recitation of "In Flanders Fields."39

The ERPI engineer K. F. Morgan defined the various dubbing-related terms current in 1930: scoring (adding music to a picture with dialogue or sound effects); synchronizing (adding new sound effects or dialogue to a sound picture); and re-recording (transferring one or more film or disc records to a new film or disc by the electrical process originally used). He reported extensive use of synchronizing and scoring. Distinctive street sounds, water, revolvers, and so on, were routinely synchronized after the shooting was completed. "Libraries" of sound effects (obviously based on radio precedents) were readily available for re-recording. Special amplifiers had been designed to compensate for the loss of high frequencies and distortion of lows inherent in the re-recording process. Morgan even claimed that defects in the original recording, like "tubbiness," might be improved artificially during re-recording.40 George Lewin of Paramount told an SMPE conference that, in addition to utilizing dubbing for "the faking of dialogue for foreign versions of domestic pictures," routine re-recording could also equalize volume levels from scene to scene. As re-recording became a universal practice, it took control of monitoring the gain away from the local projectionist.41

Music Scoring

J. P. Maxfield, of Bell Labs, divided film music into pre- and post-scoring. Pre-scoring referred to recording music in a studio and playing it back on the set. He argued that it was best limited to incidental music, marching, and so on. Consistent with his scale-matching approach to the sound track, he argued that music used in this way had to match the "acoustic tone" of the scene. There were specific times when pre-scoring should not be employed: "It is difficult to pre-score a song in which the singer appears in a close-up or semi-close-up in the picture, since it has been found that the singer pays more attention to keeping in synchronism with the record than to acting. It is, therefore, preferable under these conditions to make a direct synchronous take"—that is, recording the singing and music together live (as in the Whoopee! example above). Post-scoring was "the addition of music and occasionally dialogue to a scene which has already been photographed." Here, too, he insisted that care be taken to match the acoustic to the visual space of the shot. Post-scoring would soon become the universal method of making music and singing sequences.42

Boom

Boom devices for supporting the microphone seem to have spontaneously come into use as soon as microphones were light enough to lift on the end of a shaft of some sort. Gloria Swanson recalled:

In 1925 when Henri [de la Falaise, her husband] and I arrived in New York from Paris, Lee De Forest asked us to do a talking segment as a stunt for a presentation at the Lambs Club. He got Allan Dwan and Tommy Meighan and Henri and me into a little studio in Manhattan and had us talk to one another while a cameraman photographed us and another technician waved a microphone around on a pole. We all sounded terrible; none of us could believe our own voices. (Gloria Swanson, Swanson on Swanson [New York: Random House, 1980], p. 359)

The MGM writer Samuel Marx attributed the innovation to Lionel Barrymore, who "came to the [MGM] studio [in New York] and began to direct sound tests of the new players. Inhibited by the stationary microphones, he tied one to the end of a fishing pole, then moved with it, holding it above the heads of the players. Barrymore's innovation led to the contrivance of a boom-stick which became standard equipment, and movies began to move again." Bosley Crowther attributed the innovation to Louis B. Mayer's assistant Eddie Mannix, who improvised "an apparatus on the order of an old-fashioned well sweep on which to swing the microphone." Dorothy Arzner at Paramount has also been identified as the inventor.43

While many claimed credit, E. C. Richardson, of the Mole-Richardson Company, sketched in a 1930 paper for the SMPE a plausible scenario:

As the sets became larger it became necessary to use a plurality of microphones and to fade from one circuit to another as the actors moved about. This operation of fading from one microphone to another contributed to errors in recording which while excusable a year ago would be highly criticized today.

To obviate the use of plural microphones several devices were used. For instance, a microphone was sometimes suspended from the ceiling by means of a cord and moved about with a long pole, an operation quite obviously called "fishing." Some studios had their prop departments construct supporting arms or booms which would facilitate the quick placement of microphones. Most of these pieces of equipment were hurriedly made and crudely constructed and none too satisfactory in their operation.

[MGM is using a boom which] consists of a substantial base supporting a vertical column which in turn supports a lever arm having an adjustable portion which can be extended or retracted at will by operating a cable drum by means of a crank from the floor. The under-balanced portion of the boom and the weight of the microphone are counterbalanced by a fixed counter-weight and the boom is operated upon its vertical and transverse axis by an operating lever. (Film Daily, 7 May 1930, p. 6)

This is a description, not surprisingly, of the Mole-Richardson standard boom introduced in 1930. It also had a cable-operated swivel to enable the operator to point the diaphragm directly at the speaker and change positions during shooting. In 1932 the Jenkins and Adair Company introduced a cleverly designed portable boom with a telescoping tube. Its case, when filled with sand or water, became the counterweight.44

The boom and the new directional microphones improved the signal by placing the recording device closer to the source, the speaker's mouth. In one sense, the boom was too successful because it reduced not only unwanted background noise but desired sounds from the environment as well (including "room tone"); the latter thus had to be recorded separately and dubbed back in during post-production.

The silencing of the camera and the introduction of directional mikes combined with this simple tool to produce high-quality vocal recording. But like other film technology, the boom did not instantly replace its predecessors. A still from Smiling Irish Eyes (1929) reminds us that the boom was available to be used wherever needed. Yet Show Girl in Hollywood (1930) discloses in its studio shots that the mikes were still hanging from the soundstage rafters. The studio scenes in Free and Easy (1930) show the mike on a boom in some shots and slung from the ceiling by wires in others. Obviously these mike placement methods coexisted. Sound-recording directors shared knowledge and dipped into their pool of techniques as needed.

One salient feature of the Hollywood legend is the representation of the boom as the device that tamed the microphone's lack of control. Part of the allure of the story is the resourcefulness of technicians who applied tools from pretechnical society (brooms, fishing poles, well sweeps, etc.) to a new technology.

Multiple-Camera Cinematography and Mise-en-Scène

In order to add rhythmic editing and motion to their films, the studios resorted to an expensive and film-wasting technique, multiple-camera cinematography. Using more than one camera to film a scene was nothing new. Events which could not be easily restaged, like fights and stunts, and footage shot for export processing—the "foreign neg"—had been filmed with several cameras for years. But for routine work in the silent days the filmmakers preferred the multiple-take method. They lined up each shot, custom-lit it, photographed it, then moved the camera to reframe from another angle at a different distance. The actors had to repeat their performance each time, trying to replicate their movements.45 This system produced views which could then be edited together to produce the characteristic long-shot/medium-shot/close-up analytic cutting pattern of the classic silent film. Warner Bros. and Vitaphone wanted to maintain some semblance of this time-tested, audience-grabbing style of filmmaking. They were hamstrung, however, by their sound technology. The Vitaphone booths were too bulky to facilitate quick setups and retakes.

The multiple-camera method made it possible to keep the late-twenties pattern of analytic cutting intact. The cinematographer Lee Garmes explained how it worked:

We had six cameras on every scene. One camera would be on a long shot and then there would be two cameras spaced on the right and two or three on the left. Each camera was getting a different size picture. If the scene ran a minute, two or maybe three minutes, they had the film to go with the record. So the needle would go around and do that two minute scene or whatever it was and they had it covered with six cameras. They felt that had to be done because we were in the hands of the sound department. Of course, photography went right out of the window. There wasn't any photography. It was just horrible. (David Prince et al., "Lee Garmes, A.S.C.: An Interview," Wide Angle 1, no. 3 [1976]: p. 74)

This practice was a major departure from the multiple-take system of silent film production.46 Instead of repeating lines read by the director (or simply improvising), actors in the first talkies were expected to memorize a whole day's shooting, much as theater actors do. Many comments about silent film actors' "stupidity" were probably references to their resistance to memorizing large chunks of script. Of course, with multiple cameras rolling, retaking to repair a blown line would ruin a lot of film. The fear of a director's wrath when retakes were needed must have been more intimidating to actors than the famous "mike fright," especially among those without stage experience. To avoid having to do expensive retakes, the dialogue director rehearsed scenes thoroughly before shooting. In one extreme case, Allan Dwan devoted seventy-two hours to filming a complete dress rehearsal of What a Widow!, with Gloria Swanson, "so that analysis and revision may be afforded prior to filming the actual picture."47 Such theater-like rehearsals had been rare in Hollywood, perhaps because the Studio Basic Agreement between actors and producers provided compensation only for time spent before the cameras. Until a new contract was negotiated, actors had to report for rehearsal on their own time—another reason stars resisted the talkies!

To accommodate movement on the set, the director of photography had to devise a lighting scheme which would keep the set uniformly lit for all the action in the sequence. Even E. B. DuPar, the head cameraman for Vitaphone, complained about having to light three to five angles at the same time.48 The heat of the incandescents, magnified by the need to turn off noisy fans and air conditioning during shooting, produced a sweltering environment that roasted camera operators in their booths and made stars' makeup run.

Like the cinematographer trying to anticipate all the lighting problems, the sound technicians had to anticipate the actors' movements. As with multiple-camera lighting, mike placement had ramifications for the viewer's comprehension of screen space. Should the sound "follow" the camera as it changed angle and distance? Mordaunt Hall complained about On Trial (1928) that "there are giant heads, four or five feet high, that speak with the same volume as those in the long shots. Not that one anticipates or wants stentorian tones, but it might be an improvement to have as few close-ups as possible." Concerning My Man (1928), he found it a pity that the director "does not keep his camera on Miss Brice while she is entertaining a gathering, for when Mr. Mayo turns his camera on the throng listening to Miss Brice, the voice still comes from the centre of the screen. Miss Brice's vocal efforts also are just as resonant when her back is turned to the camera."49 Rick Altman has detailed the theoretical debate between those sound technicians who thought that screen voices should be maintained at a fixed level and those who advocated that the voices vary according to viewers' implied distance from the speaking source.50 The former view, which Altman calls the unified body theory, was put forward by technicians working for RKO (Carl Dreher, John L. Cass). The latter position, called scale-matching, was taken by the followers of the Bell Labs scientist Joseph P. Maxfield.

Those who argued for conceiving the sound track as a simulation of human perception (the unified body) noted that we hear continuously, without sounds "jumping around." To achieve this effect of spatial continuity, multiple microphones had to be used. It was necessary to suspend several mikes above the "hot spots" and to follow the motion by pulling the mikes with ropes or fading between inputs in the mixing booth. The rule was to provide one mike for each camera position: if the camera moved, the mike moved, and only one mike at a time should be activated. "The insistence on this requirement on one of the early pictures," Maxfield scoffed, "led some humorist to call this technic 'The Trail of the Lonesome Mike.'"51

Maxfield's alternative approach was designed to achieve scale-matching. The scene was set up with only one microphone near the camera and in line with the subject. The recording level varied according to the distance of the speaker from the camera. If an actor faced away from the camera, or moved toward it, his or her voice diminished on the sound track, as it would be perceived to do by anyone present on the set. One of the jobs of the sound technician, according to Maxfield's view, was to intervene to adjust the recording operation to produce the most intelligible speech, even if it meant cheating or faking environmental "fidelity." (Telephone engineers routinely manipulated sound to prioritize speech.)
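
A rough way to quantify scale-matching is the free-field inverse-distance law, under which sound pressure halves (a drop of about 6 dB) with each doubling of distance. The law is a standard acoustics approximation imported here for illustration, not a formula taken from Maxfield, and the distances in the sketch are invented.

```python
import math

def relative_level_db(distance, reference_distance=1.0):
    """Level change, in decibels, for a source heard at `distance` compared
    with the same source at `reference_distance` (free-field approximation)."""
    return 20 * math.log10(reference_distance / distance)

# A scale-matched recording lets the level fall as the actor moves away from
# the single camera-side mike, rather than riding the gain to keep it constant.
for metres in (1, 2, 4, 8):
    print(f"actor at {metres} m: {relative_level_db(metres):+.1f} dB")
# Prints +0.0, -6.0, -12.0, -18.1: each doubling of distance costs about 6 dB.
```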

Each theory had its convincing arguments. The unified body approach ideally produced maximum dialogue clarity because the sound mixer could use the closest microphone. Scale-matching produced a more convincing illusion of integrated visual and acoustic space but sacrificed intelligibility when the speaker was too far from the mike. James Lastra has pointed out that in both these conceptions of the film, the listener is a static auditor in the best seat in a theater. This "fidelity-based" model was more suited to phonograph recording or broadcasting than to cinematography. Furthermore, these were theoretical positions. In actual filmmaking practice, there was a clash between electric company sound technicians, with their sonic representation ideals, and Hollywood craftsmen, who had a different sort of spectator ideal. The filmmakers wanted to restore the "mobile" spectator of the silent cinema.52

It is difficult to find a pure example of concrete applications of either approach, and the films which might illustrate them do not correspond to the expected RCA (unified body) versus Western Electric (scale-matching) opposition. The studio that clung most rigidly to multiple-miking was not RKO, as we would predict, but Warner Bros. Its ERPI-outfitted stages in Burbank and Brooklyn contained many banks of microphones suspended from the ceiling. And one of the best examples of scale-matching is in an RKO film, The Vagabond Lover (1929). When the camera is close, the voices boom; when it is at a distance, the voices are weak. In everyday work, the image and sound space of the early talkies seldom matched this closely.53 Altman analyzed Applause (1929) and found that a scene would be

shot with a single microphone, while two cameras are churning out images of different scales. Once edited together, the two simultaneous camera takes produce a scene typical of the period. Perhaps it is fitting to remark here that the term editing, entirely appropriate for the images, is less so for the sound, since the sound take used is apparently continuous and uncut. In fact, it would be perfectly correct to say that the contemporary practice of using a single microphone system synchronized to two or three cameras fairly begged early editors to use a continuous sound track as the bench mark to which they edited the various images. (Rick Altman, "Sound Space," in Rick Altman, ed., Sound Theory/Sound Practice (New York: Routledge, 1992), p. 51)

Multiple-camera work changed the space of the silent film. With the traditional multiple-take method, props could be moved and actors repositioned to achieve the desired spatial composition (as in the overhead shots of Garbo in The Kiss [1929]). No such cheating was possible with the multiple-camera method because extreme angles would reveal the other cameras. To compensate, one camera was frequently positioned to give a high-angle view (as in On with the Show [1929]). But the space was not fragmented, as it often was in silents.

The number of cameras used depended on the complexity of the scene and on whether silent and export versions were being shot. Some simple Vitaphone shorts and trailers used only two (long-shot and medium-shot at an angle), while as late as 1932 Paramount was still on occasion using up to ten. A shot in Show Girl in Hollywood (1930) shows the sound engineers scrutinizing the record-cutting machines while a take apparently using six camera booths is in progress. One mobile camera might pan or track to follow the action. The goal was to furnish the film editor with complete coverage of the scene from one wide view—the so-called master shot—and a sufficient number of angles and focal lengths from which to fashion a smoothly cut sequence adhering to the established principles of continuity (observing the 180-degree rule, no jump cuts, etc.). The filmstrips could be combined in any way desired as long as frames were neither lost nor added; otherwise synch would have been lost. Because match-on-action cuts helped mask the edit, and since laying the synched strips of film side by side and cutting anywhere would produce a match-cut automatically, this kind of transition was common.54 Editors also developed tricks such as laying a word over a cut to further "soften" the edit.55
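
The editorial constraint described above (cut between angles anywhere among the synchronized takes, but never drop or add a frame, so the continuous track stays locked to the picture) can be sketched as follows; the camera names, take length, and cut list are invented for illustration.

```python
# Sketch of multiple-camera assembly against a continuous sound track.
# Every camera's take is the same length and frame-for-frame in synch with
# the track, so the editor may cut between angles at any frame as long as
# the running frame count is preserved.

def assemble(takes, cuts):
    """takes: dict mapping camera name -> list of frames, all equal length.
    cuts: (start_frame, camera) pairs in order, the first starting at 0.
    Returns the assembled picture, exactly as long as any single take."""
    length = len(next(iter(takes.values())))
    assert all(len(t) == length for t in takes.values()), "takes out of synch"
    picture = []
    for i, (start, camera) in enumerate(cuts):
        end = cuts[i + 1][0] if i + 1 < len(cuts) else length
        picture.extend(takes[camera][start:end])
    assert len(picture) == length  # no frames lost or added, so synch holds
    return picture

# Hypothetical coverage of a ten-second scene at 24 fps: long shot, medium
# shot, and close-up booths all rolling on the same take.
takes = {"LS": list(range(240)), "MS": list(range(240)), "CU": list(range(240))}
cut_list = [(0, "LS"), (60, "MS"), (150, "CU"), (200, "LS")]
print(len(assemble(takes, cut_list)))  # 240, matching the sound track
```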

The result of this method was a simulacrum of the late-twenties Hollywood silent editing that is so close it often takes frame-by-frame viewing and listening with headphones to detect whether multiple-camera cinematography was used. Doing so reveals that the technique was widespread, but not ubiquitous. It lasted at Warners/First National officially until 1931, when disc recording was abandoned. But at other studios directors and cinematographers resisted the awkward procedure. Paramount practiced it intermittently. In The Love Parade (1929), Lubitsch seems to have used extensive multi-camera shooting in some scenes and a single camera in others. MGM used it much less than other producers; its filmmakers usually limited themselves to filming two angles at a time. MGM films (and those of other studios as well) often combined multiple-camera and multiple-take methods. For example, in Untamed (1929), one scene takes place on a three-room set. It uses a combination of a tracking shot, multiple cameras, and inserts of a shot/reverse-shot conversation made with a single camera. In The Bishop Murder Case (1930), it is easy to identify inserted close-ups because the background sounds change with each cut (an example of scale-matching). Three cameras were used in the interrogation scene—one for the master shot, one oblique from the right, and one from a high overhead angle. However, a scene shot around a dinner table, which would have been tricky to stage without revealing multiple cameras, was made the traditional way with one camera and multiple takes. Because of not only the expense but the complaints from directors and cinematographers, the studios gradually reinstituted the old multiple-take system.56 Lee Garmes, for example, claimed that he was able to convince Jack Warner to let him shoot Disraeli (1929) with one camera (although he may be remembering a later film).57

While the multiple-camera system seemed unwieldy and restrictive to technicians, for a while the end justified the means. The final result on the screen matched the prevailing silent-film convention for a naturalistic representation. The sound track was headed in a parallel direction. In order to make a convincing illusion of natural sounds, highly artificial and labor-intensive means were required. While such manipulation did a disservice to "authenticity," in fact no one was really interested in a truly authentic representation of the one-shot static camera variety. That it was electrical wizardry that produced the illusion of presence (as it did in phonography, the radio, and the telephone) gave the technicians license to reassemble image and sound in a synthesis which was more "real," by audience standards, than what had been recorded.

The film industry at the end of the 1920s was not resting on some static plateau of technological stability, waiting to be disrupted by something like the coming of sound. On the contrary, experimentation with talkies was only one of several technological changes already under way. Roland Barthes, meditating on the technology of reproduction, noted a historical paradox: "The more technology develops the diffusion of information (and notably of images), the more it provides the means of masking the constructed meaning under the appearance of the given meaning."58 This certainly describes the efforts by filmmakers, producers, and equipment manufacturers who, in deciding how to hold in check the potentially disruptive technology of synch-sound dialogue, opted for "masking" it as "natural" or "realistic," rather than exploit its possibilities as a new kind of expressive filmmaking. In its first years, sound was most successful when it "stood out" from the image. Its electrical nature and its startling synchronism with the image were selling points. But after the conversion was complete, the emphasis changed and technology was diverted to making sound "invisible," exactly as editing, lighting, framing, camera movement, and the other arrows in Hollywood's quiver had been sharpened to create an unobtrusive illusionism. The sound track progressively was isolated from the image track to eliminate the inherent randomness of the natural world.

Paradoxically, the copy of the acoustic environment was inauthentic. Lastra has mentioned that around 1931 sound technicians settled on a planar system for sound. Voice, music, and sound effects were recorded on different filmstrips and then mixed together in just the right proportion. Furthermore, they were ordered into a hierarchy of foreground and background sounds.59 Sound technology moved progressively toward limiting randomness by isolating (or fabricating) individual elements and constructing the scene according, not to what it originally sounded like, but to what it should sound like. Much as different types of emulsions, filters, and development processes had evolved to control the chemical process of photography, acoustic technology was channeled toward the structured arrangement of sounds on the track.

Lee de Forest, when asked to rank the most prominent inventions related to the talkies, produced an interesting list: automatic volume controls, which played back the sound track at a constant level; the silent splicing device (probably referring to blooping tape); the baffle board enclosure for microphones, which extended their low-frequency range; talking-picture paint specially formulated to reduce echo on sets and in theaters; and lightweight cameras.60 These diverse and otherwise superfluous inventions are linked by their contribution to one major function: reducing some sounds (noise) while amplifying others.

Audiences and producers agreed that sound was becoming more realistic. "Realism," which has always been trotted out to justify new technology, can mean many different things.61 Sound certainly connoted a sense of "being-there-ness" in the public's imagination. Long before the theorist André Bazin elaborated his notion of "total cinema," filmmakers were linking sound with color and stereoscopic movies. "The demand for color photography," wrote Seitz, "increased to an almost unbelievable extent after the advent of sound and is steadily increasing. No doubt the incongruity of black and white images speaking lines and singing songs like living beings created a demand for a greater illusion of reality. This, color photography helps to supply." Roy Pomeroy was already at work on a stereophonic sound system in 1928.62

Whether owing to the economic constraints of 1930–1932, the inherent conservatism of corporations, or simply a general will to resist change both in Hollywood and in the theaters on Main Street, sound was treated as something to be domesticated rather than spotlighted in its own right. A few voices, mainly influenced by European attitudes, called for radical acoustic experiments, but these theorists (like Eisenstein, who also appealed for a square screen) were not taken seriously in the United States. De Forest's list of recent achievements in the talkies might have been challenged in its details but not in its overall intent: overcoming Hollywood's horror of noise (defined as any sonic element not under the control of engineers). Certainly there were experiments that went against the grain—in short subjects, isolated moments in features, animated cartoons—but these were not mainstream trends.

The path of least resistance for producers would have been to continue treating sound as a highlighted add-on. Instead, they rejected sound cinema's alternative acoustic properties. The result was that, according to Barthes's model, when the studios adopted sound technology they devoted their efforts at first to exhibiting it, then to making it disappear. The major technologies (and the minor ones de Forest mentioned) were called to action in order to facilitate this process. By around 1931, then, an ideal of an acoustically and pictorially unified cinema had been more or less achieved—paradoxically, through the radical means of multi-camera work and planar sound tracks. It may even be that this ideal of unity strengthened the narrative component of classical cinema: a more engaging (one might say enchanting) story would draw the moviegoer's attention away from the mechanical distraction of the early reproduction apparatus. Technology was pushed to give audiences what they desired: to hear stars act out stories with "natural" voices against an appropriate background—"unheard" music, "inaudible" environmental sounds, and "noiseless" silence.