The first article under this heading is devoted to a discussion of the impact of technology upon society and of conditions affecting technological change; the second article focuses upon the impact of technology upon international relations. The relationship of technology to the social sciences is also reviewed in other articles throughout the encyclopedia. Examples of the technology of non-Western societies are found in Crafts; Culture Change. The relation of technology to environment is discussed in Domestication; Ecology; Urban Revolution. Levels of technological-social integration are examined in Agriculture, article on Comparative Technology; Hunting and Gathering; Industrialization; Pastoralism; Peasantry. The economic aspects of technology are discussed in Agriculture, article on Productivity and Technology; Economic Growth; Innovation; Patents; Production; Productivity; Research and Development; Technical Assistance. Various aspects of the technological revolution brought about by the electronic computer are reviewed in Automation; Computation; Cybernetics; Information Storage and Retrieval; and in the biographies of Babbage and Wiener. Also of relevance for an understanding of technology are Creativity; Diffusion; Economic Anthropology; Economy and Society; Engineering; Science; and the biography of Ogburn.
I. The Study of Technology   Robert S. Merrill
II. Technology and International Relations   Warner R. Schilling
One of the most persistent themes in the social sciences, history, and the humanities is the impact of technology and technological change on all aspects of social life. Major changes in human life have been associated with major technological changes, such as the “food-producing revolution,” the “urban revolution,” and the “industrial revolution” and its modern continuations; even the evolution of biologically modern man has been influenced by innovations in tool using.
Given the long history of concern with the social consequences of technology, it is puzzling that technological systems, unlike such similar aspects of culture as political, legal, economic, social, and magico-religious systems, are not the focus of an established specialty in any of the social sciences. The academic institutionalization of the social study of technology does not even approach that recently attained by its sister subject, science. One reason for this discrepancy is that technologies are not thought to be very interesting. They appear to be readily understandable, to present few intellectually challenging or significant problems. On the other hand, controversies about the conditions and consequences of technological change continually recur and seldom seem to be resolved. It has been only in recent years that developments in the social sciences and in technology itself have pointed toward the real possibility of coherent, systematic, and focused study of some of the major socially significant aspects of technology.
Definition. Technology in its broad meaning connotes the practical arts. These arts range from hunting, fishing, gathering, agriculture, animal husbandry, and mining through manufacturing, construction, transportation, provision of food, power, heat, light, etc., to means of communication, medicine, and military technology. Technologies are bodies of skills, knowledge, and procedures for making, using, and doing useful things. They are techniques, means for accomplishing recognized purposes. But, as Weber recognized long ago (1957, p. 161), there are techniques for every conceivable human activity and purpose. The concept of technology centers on processes that are primarily biological and physical rather than on psychological or social processes. Technologies are the cultural traditions developed in human communities for dealing with the physical and biological environment, including the human biological organism. This usage contrasts with others which are rather arbitrarily narrower, such as those which focus only on modern industrial technology, or only on crafts and manufacturing, or on “material culture” (see, e.g., British Association . . . 1954).
Another major distinction is that between the natural sciences and technology; the former emphasize the acquisition of knowledge, while the latter stresses practical purposes. This is a rough distinction with a number of complications, but it provides important guides in the investigation of sociocultural systems (see Polanyi 1958; Merrill 1962). Recently, the impression that modern technology is primarily applied science has led to the use of such phrases as “science,” “science policy,” and “science and society” to refer to both the sciences and the practical arts. When this undifferentiated usage enters into work on science, it tends to obscure important differences that need study (compare Barber 1952 and Kaplan 1964 with “Science and Engineering” 1961).
Problems of study . In broad perspective, the study of the conditions and consequences of technical change merges into the general study of sociocultural change. Available evidence certainly suggests that all the major features of a society influence what technological changes occur, the ways they are used, and the repercussions of their use. The kind of broad-ranging inquiry which results is evident in the one sociological tradition focused on technology—that stemming from William F. Ogburn (see Gilfillan 1935; Allen et al. 1957)—as well as in work on diffusion of innovations, economic growth, automation, economic and business history, and the technological aspects of international relations.
A different perspective may be obtained by viewing the problem the other way round. One can ask what needs to be known about technology if one is to have a basis for tracing interconnections between technology and the rest of society. What are the significant characteristics of technologies? How can they be empirically studied? These questions focus attention on the direct links between technology and society, on the features of technologies which mediate more remote influences in both directions. Adequate analysis and empirical study of technology from this point of view seem essential if technology is to be a well-understood subsystem that can be incorporated into larger systems of analysis.
The first of the two major themes in studies of technology-society relationships concerns the wide variety of social effects linked to technology by its influence on the kinds and amounts of goods and services which can be provided for the support of a wide variety of human activities and purposes. Here the focus is on the role of technology in production. The second theme concerns the ways in which social and other conditions directly influence technology. Here the focus is on technological change.
Technology and production
Relatively explicit arguments dealing with the effects of technology (and physical environment) on the “forms,” “types,” or “developmental stages” of society are especially prominent in recent discussions of cultural evolution and cultural ecology. Throughout this literature, the central link between technology and society is viewed as the effect of technology in limiting or making possible the supply of various amounts and kinds of important goods and services. For example, it is argued that, by limiting the amount of subsistence goods, particularly food, that can be produced, technologies limit population densities and thus affect the social system itself. Or it is argued that advances in technology which make possible greater outputs of subsistence goods per man-hour thereby “free” time from subsistence production and so make possible the support of craft, religious, military, governing, and other specialists. This in turn makes possible larger-scale, more sedentary societies with more complex economic institutions, social stratification, centralized authority, and so on. Concern with these kinds of relationships has led, in anthropology, sociology, and elsewhere, to the use of such concepts as “surplus” or “energy per capita” to characterize the aspect of technologies linked to such social consequences (see Orans 1966).
It is evident that issues concerning technology-output relations have had an essential quantitative as well as a qualitative aspect. This is an extremely significant fact for students of technology; however, most descriptions of technologies do not include quantitative information unless that happens to be an explicit part of the practitioner’s traditions —as, for example, in much modern food preparation and in engineering. As the significance of quantitative features of technologies, whether they are culturally explicit or not, has become clearer, there have been increasing efforts to obtain such data by field workers interested in economic anthropology, cultural ecology, nutrition, housing, consumption, and small-scale industry, and by students of the prehistory and history of technology. However, outside economics and economic history, most of these efforts have not been guided by clearly defined notions of just what data are relevant for what purposes. Careful analysis of direct technology-output-society linkages is a precondition for defining and obtaining data on relevant characteristics of technologies.
Although economists have developed systematic formulations of the quantitative aspects of technology-output relations, these ideas are little known among other social scientists. Almost all work in anthropology, as well as much work in sociology and history, shows little awareness, let alone systematic use, of the idea of production functions, of concepts concerned with productivity, and of technical economic theory relevant to the study of specialization. A variety of recent developments in economics, particularly in the areas of activity and process analysis, programming, and input-output analysis, are making this lack of awareness even more acute (see Cowles Commission . . . 1951; Koopmans 1957; Manne & Markowitz 1963; Dorfman et al. 1958; Chenery & Clark 1959). A set of powerful ideas, computational methods, and strategies of empirical research directly applicable to the study of technology-output relations are becoming available. They make possible kinds of systematic study of this aspect of technology which are of the greatest significance.
An empirical strategy. The general strategy suggested by the developments mentioned is simple but essential. This strategy is to study the role of technological and related factors by comparing actual situations with calculations of what would be possible with particular technologies under various alternative assumptions about other relevant variables. Such work has barely begun, but two examples can be given.
The first comes from one of the most active areas of application of modern economic tools to the study of technological, economic, and related changes: the “new economic history” (see Fogel 1964a; 1965). Fogel (1964b), using programming and other methods, examined the economic impact of railroads on the U.S. economy in the nineteenth century by comparing what actually happened with what the economy would have been if only older modes of transportation had been used. In contrast with prevalent interpretations imputing large economic effects to the development of railroads, Fogel’s results suggest that the effects were relatively slight. Whether or not this result stands up in the face of further work, it clearly shows that sorting out the effects of technological factors in complex sequences of change requires the use of highly sophisticated methods. The second example is Hopper’s study of the use of farming resources in an Indian village (1957; 1961; see Schultz 1964, pp. 44-48, 94-96). Using programming methods, he was able to show that resources were being used efficiently, given the technological and external demand conditions. His results also indicate that additional investments in the usual forms of capital would yield low rates of return. Such conclusions are contrary to inferences made by many students of Indian village institutions, who believe these institutions constitute barriers to efficiency. Moreover, by showing that output and income are actually being limited by resource and technology conditions, a wide variety of issues concerning the roles of these factors and the kinds of changes that would increase income are brought face to face with empirical findings.
Quantitative technology-output relationships. How should technologies be conceived and characterized if we are to study their relations to output possibilities? The classic concept in economics is the idea of a “production function.” From the standpoint of output possibilities, a particular technology is characterized by (1) the kinds of inputs used; (2) the kinds of output (or output mix) produced; and (3) the quantitative relations between amounts of inputs used and maximum quantities of output that can be physically produced. If a technology can produce several outputs in variable proportions, production functions can be used to derive an output-possibility, or efficiency, frontier. The frontier will consist, for any given set of inputs, of those combinations of outputs which cannot be exceeded in the following sense: no output within any efficient combination can be increased without decreasing some other output in that combination. All sets of outputs physically producible with the technology from a given set of inputs will then lie on or within the output-possibility frontier corresponding to that set of inputs. In addition to the qualitative kinds of inputs and outputs involved, it is the quantitative input-output relations made possible by a technology which must be known if their implications for production possibilities are to be studied.
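The frontier idea can be made concrete with a deliberately minimal sketch. All the numbers here are invented for illustration: a single scarce input (say, labour hours) and two outputs produced at fixed input costs per unit, so that efficient output combinations are exactly those exhausting the input.

```python
# Sketch of an output-possibility frontier for a two-output technology.
# Hypothetical coefficients: one unit of output A uses 2 hours of the
# scarce input, one unit of B uses 3 hours; 12 hours are available.
HOURS = 12.0
A_COST, B_COST = 2.0, 3.0  # input required per unit of each output

def feasible(a, b):
    """An output pair is producible if it needs no more input than exists."""
    return A_COST * a + B_COST * b <= HOURS

def efficient(a, b, eps=1e-9):
    """Frontier pairs exhaust the input: neither output can be raised
    without lowering the other."""
    return abs(A_COST * a + B_COST * b - HOURS) < eps

assert feasible(3, 2) and efficient(3, 2)        # on the frontier
assert feasible(2, 2) and not efficient(2, 2)    # interior point
assert not feasible(4, 2)                        # beyond the frontier
```

With several inputs and outputs the same logic holds, but the frontier must be computed rather than read off a single constraint, which is where the programming methods discussed below come in.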
Generally, in economics, production functions have been derived only for firms, industries, or other sizable production entities. Moreover, economists usually use price weights to aggregate physical inputs and outputs and use such economic data to estimate input-output relations. This has meant that, in actual use, the production function concept has been institution-bound because only in price-market systems are the requisite economic data available and meaningful. It has also meant that the work has usually been so remote from physical technology that its relevance for students of technology was not evident. Input-output analysis, though phrased in terms of physical technology, also is usually used with economic (aggregated price-weighted) data. However, there has been some interest in seeing whether input-output coefficients or more general production functions could be determined from engineering or other data closer to physical technology rather than from more general economic statistics. In this way, technology and economics are being brought closer together (see Research Project . . . 1953; Vajda 1958; Manne & Markowitz 1963).
The conceptual innovation central to the new programming methods is deceptively simple: instead of conceiving of production functions as characteristics of establishments, firms, or other large-scale institutional systems, these methods consider input-output relations at the level of the component steps, stages, or processes (“activities”) which make up each particular technology. In other words, the quantitative framework is brought into more direct relation to technologies known and used by practitioners. However, if one thinks of a factory, a peasant household, or a group of hunters and gatherers, let alone larger regions or societies, it becomes clear that the number of inputs, the number of activities or processes, and the number of outputs are usually very large. This is where mathematical advances and digital computers are useful. Methods are continually being extended and improved for making calculations of points on production-possibility frontiers for systems of hundreds of input-output equations. It is the availability of these computational methods that gives empirical importance to the conceptual framework.
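A toy activity-analysis model may make the framework clearer. Every coefficient below is hypothetical: two activities make the same good by different techniques, each described only by its input requirements per unit level. A real study would solve the resulting linear program directly; a coarse grid search is used here only to keep the sketch self-contained.

```python
# Two activities producing one good by different techniques:
# activity 1 uses 1 unit of labour and 3 of clay per unit of output;
# activity 2 uses 2 of labour and 1 of clay. (Invented numbers.)
ACTIVITIES = [(1.0, 3.0), (2.0, 1.0)]   # (labour, clay) per unit level
LABOUR, CLAY = 10.0, 12.0               # input availabilities
EPS = 1e-9

best_total, best_levels = 0.0, (0.0, 0.0)
steps = 200
for i in range(steps + 1):
    x1 = 10.0 * i / steps               # level of activity 1
    for j in range(steps + 1):
        x2 = 10.0 * j / steps           # level of activity 2
        labour = ACTIVITIES[0][0] * x1 + ACTIVITIES[1][0] * x2
        clay = ACTIVITIES[0][1] * x1 + ACTIVITIES[1][1] * x2
        if labour <= LABOUR + EPS and clay <= CLAY + EPS and x1 + x2 > best_total:
            best_total, best_levels = x1 + x2, (x1, x2)

# The best plan mixes both techniques (levels near 2.8 and 3.6, total
# output near 6.4) — a fact a single aggregate production function
# for the "firm" would conceal.
```

Either activity used alone yields less (5 or 4 units); the gain from mixing techniques is exactly the kind of detail the activity-level formulation brings into the open.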
By working closer to physical technology, programming methods make it possible to study more explicitly the relations between the details of technological processes and the economically significant characteristics of production-possibility frontiers based on them. Moreover, such frontiers may be estimated in situations where older economic techniques can be used only with great difficulty or not at all. Finally, by explicitly focusing on component processes, the programming framework brings out into the open two major links between physical technology and output possibilities that have generally been obscured by the older production function framework.
The first link may be discovered by asking whether, given data on input availabilities and on the input-output characteristics of the known technological processes, we can then directly determine output possibilities. Are there any other intervening links? Suppose a potter carries out all the steps of pottery making, from gathering clay and other materials to final firing and finishing. Is her rate of output determined or limited solely by her skills, the time she spends, her equipment, and the physical characteristics and locations of her sources of firewood, clay, temper, etc.? There is at least one other major factor that will influence what she can produce: how she arranges her work, in the sense of where she does various things and how she schedules her activities. This will affect, for example, the amount of moving and carrying she does, the extent to which she can economize time by carrying out activities such as forming pots while others are drying or being fired, how close she can come to carrying out processes on the most efficient “batch” scale, etc. Thus, even within a simple production unit, there are major problems of what Koopmans (1957, pp. 69-70) has called physical maximization or physical planning which will influence outputs obtainable from given inputs and a given technology. These problems were almost completely obscured by the older production function framework.
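The potter's scheduling problem can be reduced to a toy calculation (the durations are invented). Forming a pot occupies her; drying takes longer but needs no attention. The same technology and inputs then yield different outputs under different work routines, which is precisely the physical-planning link at issue.

```python
# Toy version of the potter's scheduling problem (hypothetical times).
FORM_HOURS, DRY_HOURS = 1.0, 2.0
POTS = 4

# Naive routine: form a pot, wait for it to dry, then start the next.
sequential_time = POTS * (FORM_HOURS + DRY_HOURS)   # 12 hours

# Overlapped routine: keep forming while earlier pots dry; only the
# last pot's drying is not hidden behind other work.
overlapped_time = POTS * FORM_HOURS + DRY_HOURS     # 6 hours

assert overlapped_time < sequential_time
```

Halving the time per batch doubles daily output with no change in skills, equipment, or materials, so observed output alone cannot tell us whether technology or scheduling is the binding constraint.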
Where do such work routines fit into our picture of technology? One answer would be that they are additional, essential parts of technological traditions. This would imply that our potter, for example, thought her particular schedule to be just as necessary for the successful making of pots as mixing clay and temper in the right proportions. However, accounts taken from a variety of societies indicate that, though some rigid constraints on production routines may be found, such routines have an appreciable amount of flexibility, in the sense that they are altered or adjusted to varying circumstances. It is highly probable that separable sociocultural factors and processes influence production scheduling. Therefore, we have isolated a major link between technology and output possibilities that needs careful study. Instead of gathering data, say, on the most usual pottery-making routine, the researcher has to ask himself what data are needed to determine the routine that would be most efficient within the culturally defined technological constraints. How do the routines and their variations compare with calculated “efficient” routines for the varying circumstances? Only then will he be able to say to what extent technology and resources are actually limiting output. And then he will also be in a position to assess the production effects of the sociocultural factors which generate the production routines actually followed.
That these issues are not trivial may be seen by reference to an old and important line of argument: the importance of division of labor (specialization) as a way in which outputs can be increased with a given technology and resources. The idea is that if persons with given initial skills and capacities are able to specialize their performance to a greater degree, then their total production will be greater. Similar arguments are applied to the use of different kinds of soils and other resources. Here, again, there is a vast body of incidental evidence that patterns of specialization, even in highly traditional societies, are flexible rather than completely rigid. Therefore, one must explore the magnitude of the consequences of alternative patterns of specialization for production, taking into account such additional activities as transportation. Only then can one really determine how outputs are limited by technology-resource factors or by sociocultural factors related to specialization. Most assertions about technological limits on outputs under given conditions are not only rough guesses; they are guesses made on the basis of relatively little examination of what might be possible if only technological and resource constraints were operative. Thus, programming and related methods open up the possibility of making calculations of the magnitude of the production effects of what we might call alternative production arrangements. [For examples of special problems in dealing with locational interdependencies and with various economies or diseconomies of scale and locational agglomeration, see Central Place; Programming; Spatial Economics.]
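The magnitude of specialization effects can likewise be illustrated numerically, with hypothetical rates: two workers with the same working day but different productivities in two crafts.

```python
# Invented productivity rates (units per hour) for two workers.
RATES = {"A": {"pots": 4.0, "baskets": 2.0},
         "B": {"pots": 1.0, "baskets": 3.0}}
HOURS = 8.0

# Unspecialized: each worker splits the day evenly between the goods.
unspec_pots = sum(RATES[w]["pots"] * HOURS / 2 for w in RATES)        # 20.0
unspec_baskets = sum(RATES[w]["baskets"] * HOURS / 2 for w in RATES)  # 20.0

# Specialized: each works all day at the craft where he is relatively best.
spec_pots = RATES["A"]["pots"] * HOURS        # 32.0
spec_baskets = RATES["B"]["baskets"] * HOURS  # 24.0

# Same skills, same hours: specialization raises the output of BOTH goods.
assert spec_pots > unspec_pots and spec_baskets > unspec_baskets
```

The 60 per cent and 20 per cent gains here come entirely from rearranging who does what; any assertion that "the technology" limits output to the unspecialized levels would simply be wrong.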
A second major implication of the programming framework is that one can compute output possibilities for alternative arrangements of physical production (i.e., persons, facilities, activities, and movements of goods and persons) apart from the sociocultural institutions that “lead to,” “bring about,” or “generate” any particular pattern of conduct and events. We thus have a clear distinction between these two very different kinds of problems. It becomes apparent that many analyses have ignored this distinction, moving directly from technology to institutions. This results in confusion, especially in theoretical frameworks, in continuing controversy, and in failure to study the variety of issues involved.
Technology-resource-output linkages. Output possibilities depend not only on technology but also on resources or inputs. The links here are more complex than is usually realized, and they involve technology in ways that are only beginning to be studied seriously. If we take the simplest case, natural resources, it is an old idea in anthropology that only culturally known natural features can be resources. Despite this recognition, careful attention to a society’s knowledge of resource locations and characteristics and ways of finding new sources of supply is relatively rare. Knowledge of this complex part of technology (which we might call resource technology) and quantitative information about the resources actually known are both critical to the study of resource-technology-output relationships. Similarly, outside economics there is a curious tendency to neglect the systematic study of the quantitative role of another major type of resource: physical capital, in the sense of durable, man-made improvements, equipment, structures, and inventories. This is especially odd because the stock of physical capital, like population, clearly depends on past social and other processes. The importance of capital formation processes as a link between technology and output possibilities can be assessed by determining how output possibilities vary with assumed changes in the amounts of various kinds of capital goods available. A similar strategy can be used to deal with the complex linkages between population density, labor resources, and output possibilities. One can examine not only the effects of alternative population densities but also the effects of alternative assumptions about culturally or biologically defined “subsistence levels” on production possibilities when they are so constrained.
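The suggested strategy of varying assumed capital stocks and recomputing attainable output can be sketched in a deliberately crude one-good model with invented coefficients, where output is held down by whichever of the labour or capital constraints binds.

```python
# Hypothetical one-good model: output is the smaller of the
# labour-supported and capital-supported capacities.
LABOUR_CAPACITY = 50.0     # units producible with the labour on hand
OUTPUT_PER_CAPITAL = 2.0   # units producible per unit of capital goods

def max_output(capital):
    return min(LABOUR_CAPACITY, OUTPUT_PER_CAPITAL * capital)

# Sweep the assumed capital stock and recompute output possibilities.
sweep = {k: max_output(k) for k in (10, 20, 30, 40)}
# → {10: 20.0, 20: 40.0, 30: 50.0, 40: 50.0}
# Capital binds only up to 25 units; beyond that, further capital
# formation leaves output possibilities unchanged and labour becomes
# the operative link.
```

Even this caricature shows why aggregate statements that "capital shortage limits output" need checking against the full pattern of constraints.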
Other recent work in economics, stimulated by a search for sources of economic growth in modern industrial societies and by the economic problems of nonindustrial societies, has uncovered two additional links between resources, technology, and output possibilities. The first is the fact that many of the production effects of a technology depend on the extent to which it has been physically “embodied” in capital goods (see Salter 1960; Green 1966). Similarly, the second development stresses the importance of human capital, in the sense of the technologically relevant knowledge and capacities actually “embodied” in a society’s population [see Capital, Human]. So far, studies of these linkages between technology and resources have been concerned only with their over-all economic significance and thus have used highly aggregated economic data. More detailed work is crucial for students of technology.
These linkages bring out the important fact that describing a society’s technology requires much more than listing the technologies “known,” in some unspecified sense, to some members of the society. They also call attention to critical socio-economic processes influencing technology-output relationships.
Production arrangements. There is another line of argument, intertwined with the one we have been considering, which we may now examine. Instead of being concerned with what outputs can be produced with a given technology, these arguments assume that certain levels of output of certain goods or services are needed or desired. They then argue that certain production activities and arrangements are “required” if these levels of output are to be obtained with a given technology under given conditions. In this way, various societal features have been interpreted as technologically “necessary”: the kind of division of labor; household size and composition; local group size, geographic distribution, and spatial movement; daily, seasonal, annual, and other cycles of productive activity and movement; various economic and political institutions, etc. Many plausible, but largely qualitative, arguments of this sort have been made (for older reviews, see Forde 1934; Mead 1937; for a comparative study of task groups, see Udy 1959; for an important comparative case study, see Hill 1963). The framework previously outlined for the study of alternative production arrangements provides a way of assessing the magnitude of their effects on production. Some of the issues that need examination may be indicated by reviewing a few raised by the vast literature concerning the effects of modern industrial technology on factory and firm organization and on work life.
First, we may note that industrial technology is not all of a piece but varies markedly from industry to industry; the implications of such differences for work organization and work life are just beginning to be studied (see Blauner 1964). Second, as the discussion of physical planning indicated, it is not to be assumed that the particular patterns of factory size, task composition and subdivision, grouping of tasks into jobs, and work group arrangements involved in production are a direct consequence of the requirements of physical technology. There is evidence that the patterns of task organization developed by production and industrial engineers are strongly affected by implicit sociopsychological theories, without much exploration of the efficiency of alternative arrangements (see March & Simon 1958, chapter 2; Walker 1962, part 2).
Third, assumptions that particular institutional arrangements are necessary for effective performance need questioning. For example, a widespread theory argues that functionally specific, universalistic criteria of recruitment, advancement, and releasing of personnel must be used if industrial technology is to operate effectively. This reasoning assumes that knowledge and skills have a critical effect on performance (see Levy 1952). Largely on this basis, Abegglen (1958) interprets paternalistic arrangements in Japanese factories as dysfunctional. However, it is probable that performance is significantly affected by what has recently been called “commitment,” as well as by knowledge and skills. The net effect of Japanese social institutions, which promote a high degree of organizational commitment, may, in the Japanese setting, promote efficiency rather than inefficiency. [See Paternalism.]
Finally, and perhaps most important, consideration of the nature of modern industrial economies indicates that the organizational tasks confronting production units are as much a consequence of the changing technical and socioeconomic milieus in which they operate as of the units’ production technology. This can be seen if one imagines a particular industrial technology being used in a completely “stationary” economy with constant demand, supply, price, technology, and population conditions. (For a vivid picture of some of the implications of stationary “circular flow,” see Schumpeter 1912; 1939.) Production could then be an almost completely routinized, even traditionalized, process. This would obviously have very far-reaching implications for the roles of authority and for the kinds of coordination possible (e.g., informational signaling rather than use of commands). It very well may be that many of the organizational and other effects attributed to industrial technology are more consequences of rates of change than they are of particular technologies per se.
Conclusions. It seems clear that tools are now available for making a major empirical attack on many issues concerning technology-output-society relationships. This would require a large-scale effort. However, it is also likely that technology, by itself, will not turn out to be such a powerfully influential factor as some social scientists have thought. Nevertheless, social phenomena are so complex that being able empirically to “factor out” the influence of a major subset of variables is critically important in improving our ability to understand all the others. This strategy is now available in the study of technology.
We now turn to the study of factors influencing technologies themselves. Technologies are important not only because they affect social life but also because they constitute a major body of cultural phenomena in their own right. These phenomena pose numerous problems whose study may shed light on a wide range of issues in the social sciences.
Viewed in broad perspective, the practical arts align themselves with many other sets of traditions and customs which are pre-eminently cultural, in the sense that they exhibit historically specific origins, development, and distribution. In this respect they differ from those aspects of social organization which frequently exhibit similar forms in historically unrelated societies. Therefore, prehistory, history, and ethnography are especially important in understanding the course of human technology over space and time. The history of technology has begun to establish itself as a discipline with the publication of two major collaborative histories (Singer et al. 1954-1958; Daumas 1962) and the establishment of a professional society and the journal Technology and Culture. However, in scholarly apparatus and in the use of interpretive analyses the history of technology is in its early stages (see [Review Issue] . . . 1960). The “discipline” has several rather independent subdivisions, such as prehistory, ethnography (see Bordaz 1959a; 1959b; Hodges 1964; British Association . . . 1954; Matson 1965), agricultural history, and the history of medicine (see Sigerist 1951-1961; Underwood 1953; Zimmerman & Veith 1961).
The task of understanding technological phenomena and formulating theories of technological change is clearly an enormous and difficult one. This is especially true because our general understanding of historically specific cultural change might best be described as meager and unsatisfactory. Nonetheless, one can find in the rapidly expanding body of recent work a number of clues which point toward major possibilities of systematic study. This brief review will be divided into three sections: recent technological change in modern Western societies; the development of Western technology; and past and present non-industrial technologies.
Technological change in the modern West. The complexity of modern technology makes it seem an odd place to start, but two other factors make it suitable. First, deliberate technological change has been institutionalized in Western societies for some time. Most modern technologies include not only traditions for making and doing things but also traditions for “advancing the state of the art,” for producing new knowledge, processes, and products. Modern technologies are culture-producing as well as culture-using sociocultural systems. Such cultural change seems easier to understand than less institutionalized change. Moreover, when seen in this light, these technologies are similar to such other culture-producing traditions as science, law, art, literature, music, philosophy, history, and journalism. These similarities suggest that what can be learned about each such culture-producing culture may shed light on the others.
Second, events during and immediately after World War II have jolted economists into taking a hard look at technological change in the West (see Universities-National Bureau . . . 1962; Ohio State University . . . 1965). Their work is modifying the common conception that technology grows in an autonomous, cumulative, accelerating fashion, little affected by outside influences. (Such a theory probably has never been held literally by those social scientists who are referred to as supporting it—see, e.g., Ogburn 1922; Leslie White 1949; 1959; Hart 1959—but the idea is nonetheless widespread.) Recent work indicates that in the various private sectors of modern economies, the amount of effort devoted to technological changes, and the magnitude of the changes themselves, are strongly influenced by economic demand and profitability.
This accentuates the importance of distinguishing major steps in the process of technological change which differ in their dependence on physical facilities and other resources and in their relationship to economic costs and rewards. The first step is invention or applied research, by which is meant the processes of getting new ideas and bringing them to the point of technical feasibility demonstrated through small-scale testing. This is different from the later steps: development of workable full-scale plans; innovation, which means putting plans into actual, full-scale practical use; and imitation or diffusion of innovations to additional producers and users. In addition, minor processes of improvement may occur in any of these phases. Finally, the spread of technology, even within one society, let alone between societies, is not just a matter of literal imitation but usually involves significant processes of technological adaptation to the local habitat and to local economic and other conditions (Merrill 1964). In anthropology and sociology, invention and innovation are terms often used for all of the first three steps, while acceptance distinguishes intrasociety spread from diffusion between societies.
There has also been a burgeoning of studies of institutional and social factors which influence the pressures and rewards leading technologists and users of technology to focus attention and resources on changes in certain directions rather than others. These include studies of the social characteristics of business firms, the organizational characteristics of research and development laboratories in industry and government, governmental support and policy, weapons development organization (e.g., Peck & Scherer 1962; Scherer 1964), institutional factors in medicine, and social factors in the diffusion of innovations. In some cases, economic and social analyses appear to conflict, although it is more likely that the interpretations are complementary.
Problem-solving capabilities are as crucial as incentives in determining the directions taken by inventive activity and technological change. The most direct determinants of problem-solving capacities are the technological traditions themselves—the states of the arts. Changes within technology, as well as outside it, obviously have something to do with the very recent large expansion of resources devoted to research and development (R&D); with the rapid increase in organized R&D efforts as compared with those of independent inventors; with the expanding role of professional scientists and postgraduate engineers in R&D; with the increasingly radical nature of the technical advances being achieved, particularly in military technology; and with the remarkably wide differences in R&D efforts and accomplishments between industries and technological fields [see Research AND Development].
So far, it has proved difficult to gain a more precise understanding of the role of technological and related scientific knowledge in technological change. There are indications that technological change is not a simple function of its “cultural base” in the sense of the number of elements available for combination (Ogburn 1922; Hart 1959). Similarly, the idea that recent trends are due to the rise and development of “science-based” industries and technologies (e.g., Maclaurin 1954; Brozen 1965) has been foundering on the difficulty of specifying just what a science base is. Advances in fundamental science do not directly trigger technological changes as frequently as is usually assumed (compare Meier 1951 with Nelson 1962 and Schmookler 1966). General economic-technological histories of particular industries provide helpful information but usually do not make clear the factors involved in technological change (e.g., Bright 1949; Haber 1958; Maclaurin 1949; Passer 1953).
The most revealing information is found in case studies which enable the reader to see situations from the “inside,” as technologists see them—to see the problems involved, the tools available and the ways they are used, and the results achieved. (In varying degrees and ways such views may be found in such works as Cohen 1948; Condit 1960; 1961; Enos 1962; Klein 1962; 1965; Killeffer 1948; Marschak 1962; Marshall & Meckling 1962; Merrill 1965; Nelson 1962; Development of Aircraft . . . 1950; Straub 1949; Wright & Wright 1951.) Several extremely important points emerge from the examination of such cases. First, while it is essential to know the body of “results” which are part of a technology in order to understand the way it changes, one also must know the methods and techniques, the approaches and procedures, the tactics and strategies (Conant 1951) which are used to tackle new problems. To study a sequence of technological changes without knowledge of the technological traditions used in producing them is to be confronted with extremely enigmatic phenomena. Second, technologies and technological problems are incredibly diverse. Any attempt to generalize too quickly and too broadly is likely to obscure rather than clarify the ways technological change comes about. Third, each technology, and even each significant technological problem, is an intricate world of its own. Adequate understanding requires intensive study of a kind still relatively rare in work on technology.
Such intensive study, to be useful, requires a clear focus on determining how technologies work. In addition to historical studies, basic sociological research on technology is required. We know surprisingly little about the occupational and professional groups, organizations, institutions, and institutionalized roles which play a part in the use and development of the practical arts (see Merrill 1961). Furthermore, there is a widespread notion in the social sciences that the cognitive structure of technologies is equivalent to that of the empirical sciences, with the minor modification that if-then statements are converted to rules of practice (e.g., Parsons 1937; 1951; Barber 1952). There is good evidence that this conception is drastically askew, but the only major counterformulation (Polanyi 1958) has not been developed. Nor has much use yet been made of developments in engineering which have led to more explicit conceptualizations of what is involved in engineering design, development, and systems engineering (see Asimow 1962; Alger & Hays 1964; Starr 1963; Goode & Machol 1957; A. D. Hall 1962).
The development of Western technology. One of the most fascinating problems of technological change is the rise and continuing development of Western “industrial” technology, and it has attracted a corresponding amount of attention. Here only a very few themes closely linked to technology itself will be discussed, with emphasis on the question of relations between science and technology. Clearly, one major possibility is that the special development of technology in the West was linked to another unique Western development: the development of those cultural traditions we now group together under the label “science.” Accumulating historical work has pushed back the sources of both the “scientific revolution” and the “industrial revolution” well into the Middle Ages (see, e.g., Gille 1962; Lynn White 1962a; Crombie 1952; Hodgen 1952; Taton 1957-1964, vol. 1; Singer et al. 1954-1958, vol. 2). Moreover, a relatively continuous series of technological changes links the medieval developments with the conventional late eighteenth-century beginning of the “industrial revolution” (see, e.g., Singer et al. 1954-1958, vol. 3; Nef 1932; 1950; 1964). Despite such formal parallelism, the evidence suggests a high degree of independence of changes occurring in the two traditions well into the nineteenth century and beyond. On the other hand, there appear to be an increasing number of less specific influences from the sciences on technology (and vice versa) which are difficult to document and articulate (see “Science and Engineering” 1961). Finally, it is clear that there was great heterogeneity in the patterns of change within each group of traditions. Many of these phenomena are evident in discussions of relations between various craft and learned traditions (e.g., Crombie 1961; A. R. Hall 1952; 1959; Smith 1960).
The historical study of technologies whose traditions are largely unwritten presents extremely difficult problems. Even when evidence from artifacts and from pictorial and other representations is used to supplement documents, and all are interpreted with great sophistication by an author intimately acquainted with the practice and theory of the art he is studying, findings are often extremely uneven and much remains puzzling (e.g., Smith 1960). The source of difficulty appears to be one we have encountered before: the difficulty of interpreting a sequence of technical changes without intimate knowledge of the cultural traditions and contexts from which they emerged (see Lynn White 1962b; A. R. Hall 1962).
Past and present nonindustrial technologies. A second major result of recent work bearing on the history of Western technology is the increasing accumulation of evidence, much of it still hotly debated, that a significant fraction of medieval, early modern, and even some later Western technological changes were, or grew out of, diffusions from Asian societies, particularly China.
Studies of the technologies of Asian (especially Needham 1954-1965; Lynn White 1960), early Near Eastern, classical (Forbes 1955-1964), and New World civilizations seem to have one increasingly important implication: Instead of making matters more intelligible, the more detailed evidence adds many more puzzles than it provides even tentative solutions. This may seem very discouraging, but it may have a positive effect. It may eliminate the tendency to think that technological phenomena provide no really significant intellectual problems worthy of concentrated scholarly attention.
Nonetheless, the presence of important problems, however fascinating, does not stimulate scholarly effort unless there are ways of making some headway toward their solution. If a major cause of the historical and prehistorical puzzles is the scarcity of data on unwritten or incompletely recorded technological traditions, what can be done about it? There is one important kind of evidence that could be brought to bear: data provided by really intimate studies of the great variety of non-industrial technologies still being practiced in various parts of the world. The connections between these present-day technologies and earlier ones are known with varying degrees of precision. In any case, studies of these nonindustrial technologies provide an opportunity to understand the nature and varieties of technological traditions outside the modern Western tradition and the ways such traditions change.
Existing nonindustrial technologies are not so much “unstudied” as they are studied from points of view which do not yield the kinds of data that seem crucial for the interpretation of technological change. Most detailed studies by cultural anthropologists, ethnologists, and students of folk life have been strongly historically oriented and museum-oriented, describing characteristics of technological practices and artifacts which are useful for tracing historical connections among technologies and among the peoples practicing them. As a consequence, there has been a tendency to think of technologies as fixed sequences of standardized acts yielding standardized results. Descriptions of technologies made from a craftsman’s or technologist’s point of view (e.g., Guthe 1925; O’Neale 1932; Conklin 1957; Shepard 1956) and incidental observations in other studies strongly indicate that this conception is very misleading. Desired technical results are not obtained automatically. Materials vary, circumstances differ, and manipulations are hard to control. Accidents, poor results, or failures occur and are always a possibility. Even “primitive” technologies have a variety of procedures for adapting actions to circumstances, detecting difficulties, and making corrections. A more adequate conception of a technology is that it is a flexible repertoire of skills, knowledge, and methods for attaining desired results and avoiding failures under varying circumstances (Merrill 1958; 1959).
Such a “functional” view of technologies themselves (as against their relations to other things) is surprisingly rare in the social sciences, despite the widespread use of functional ideas. Malinowski recognized the possibilities of this approach only after he returned from the field (1935, vol. 1, appendix 2). Ford’s systematic formulation (1937) approximated it but was not followed up. The one major context in which functional problems of technologies have received considerable attention is in studies of magic. Although Malinowski’s ideas about magic and technology were not completely developed (Leach 1957; Nadel 1957), little explicit research on this subject has been done, except for Firth’s evidence (1939) that magic can inhibit technological change. Even this idea has not been pursued, although the thesis that magic is a major traditionalizing force is central to much of Max Weber’s work and is important in Sombart’s analysis of the development of technical rationality. Instead, social anthropologists working in this area have focused largely on the social and symbolic interpretation of witchcraft, sorcery, and magic, though all of these impinge on technology through their role in the interpretation of illness, technical accidents, and abnormal successes.
Despite this neglect, ethnographic accounts contain numerous incidental observations which indicate that deeper study of nonindustrial technologies will shed a great deal of light on processes of technological change. Careful analysis of a few relatively well described pottery technologies has already shown that the flexible procedures used to deal with day-to-day problems may operate as sources of significant technological changes under particular circumstances (Merrill 1959). Almost every society has techniques for producing nonstandardized products, such as houses, storage facilities, trails and roads, vessels, settlement or field layouts, and water-supply and drainage arrangements. These have to be “designed” to fit particular local conditions, special uses, or availability of materials. Such designing requires a set of adaptive procedures which may be closely linked to technological change just as the little-studied bodies of knowledge used in routine engineering design play a significant role in modern technological change (Merrill 1961; 1965). Flexible procedures are especially evident in agriculture, where one also finds surprisingly frequent indications of the deliberate use of “trial and error” even in non-literate societies (see, e.g., Richards 1939; Schlippe 1956; Allan 1965; Conklin 1957).
This evidence suggests that technological traditions are far more complex than usually realized and that they contain numerous features of the greatest significance for understanding the possibilities and processes of technological change. Even “accident,” that unpredictable source of change, is well known to depend on a “prepared mind” (see Usher 1955), and preparation has major cultural components. It also appears that the study of the relatively minor, but more frequent and therefore more observable, technical changes involved in various kinds of routine technological adaptation is likely to clarify our understanding of the relations between cultural traditions and cultural change and to provide an essential basis for interpreting more radical “creative” innovations (see Merrill 1959; compare Barnett 1953).
A number of theoretical developments in ethnography have clarified the distinction between cultural traditions as the conceptions that guide action and the behavior, artifacts, or other results brought about by their use (Goodenough 1957). Using this idea and techniques from descriptive linguistics, a series of procedures is being developed for the precise identification of the conceptual categories, taxonomies, and distinctions that participants in a culture use in structuring their world and their actions. Usually called ethnoscience, this work might better be called ethnotechnology. There have been studies of disease diagnosis, color distinctions, plant classifications, curers, firewood, cultural ecology, etc., which have clear technological implications. So far, little has been done to extend this approach to the study of the numerous “inarticulate” or “tacit” (Polanyi 1958, pp. 100-102) aspects of actually making and using things which performers cannot describe or explain in words even when questioned systematically. Harris (1964) has sketched some ways an observer could detect and formulate interconnected regularities in actual sequences of behavior which could be applied to this problem. He believes that his observer-oriented approach is superior to and incompatible with the ethnosemantic approaches which focus on actors’ frames of reference. However, the basic ideas appear to complement rather than contradict one another. Another approach which appears widely applicable is to search for implicit feedback control systems guiding skilled performance (Merrill 1958; 1959).
Because of its focus on conceptual systems, work on ethnoscience (ethnosemantics) may be usefully related to work in psychology on perception and cognitive theory [see Cognitive Theory; Perception, article on social Perception; see also French 1963]. Cognitive theory, in turn, provides a link to work on creative thinking and creativity significant for the study of technological change. So far, the most relevant psychological work has been on scientific creativity (see, e.g., McKellar 1957; Taylor & Barron 1963), but work on technological creativity is beginning (e.g., MacKinnon 1962).
It thus appears that there are foundations for the more systematic study of technological change and some of the direct links between technology and social life. It remains to be seen whether these potentialities will be realized through the development of technology as a coherent discipline in the social sciences.
Robert S. Merrill
Abegglen, James C. 1958 The Japanese Factory: Aspects of Its Social Organization. Glencoe, Ill.: Free Press.
Alger, John R. M.; and Hays, Carl V. 1964 Creative Synthesis in Design. Englewood Cliffs, N.J.: Prentice-Hall.
Allan, W. 1965 The African Husbandman. New York: Barnes & Noble.
Allen, Francis R. et al. 1957 Technology and Social Change. New York: Appleton.
Asimow, Morris 1962 Introduction to Design. Englewood Cliffs, N.J.: Prentice-Hall.
Barber, Bernard 1952 Science and the Social Order. Glencoe, Ill.: Free Press. → A paperback edition was published in 1962.
Barnett, Homer G. 1953 Innovation: The Basis of Cultural Change. New York: McGraw-Hill.
Blauner, Robert 1964 Alienation and Freedom: The Factory Worker and His Industry. Univ. of Chicago Press.
Bordaz, Jacques 1959a First Tools of Mankind. Part 1. Natural History Magazine 68:36-51.
Bordaz, Jacques 1959b The New Stone Age. Part 2. Natural History Magazine 68:93-103.
Bright, Arthur A. 1949 The Electric-lamp Industry: Technological Change and Economic Development From 1800 to 1947. New York: Macmillan.
British Association for the Advancement of Science 1954 Notes and Queries on Anthropology. 6th ed., rev. London: Routledge. → The first edition was published in 1874.
Brozen, Yale 1965 R & D Differences Among Industries. Pages 83-100 in Ohio State University, Conference on Economics of Research and Development, Columbus, 1962, Economics of Research and Development. Edited by Richard A. Tybout. Columbus: Ohio State Univ. Press.
Chenery, Hollis B.; and Clark, Paul G. 1959 Interindustry Economics. New York: Wiley.
Cohen, I. Bernard (1948) 1952 Science, Servant of Man: A Layman’s Primer for the Age of Science. Boston: Little.
Conant, James B. 1951 Science and Common Sense. New Haven: Yale Univ. Press.
Condit, Carl W. 1960 American Building Art: The Nineteenth Century. New York: Oxford Univ. Press.
Condit, Carl W. 1961 American Building Art: The Twentieth Century. New York: Oxford Univ. Press.
Conklin, Harold C. 1957 Hanunóo Agriculture: A Report on an Integral System of Shifting Cultivation in the Philippines. Rome: Food and Agriculture Organization.
Cowles Commission for Research in Economics 1951 Activity Analysis of Production and Allocation: Proceedings of a Conference. Edited by Tjalling C. Koopmans. New York: Wiley.
Crombie, Alistair C. (1952) 1959 Medieval and Early Modern Science. 2d rev. ed. 2 vols. Garden City, N.Y.: Doubleday.
Crombie, Alistair C. 1961 Quantification in Medieval Physics. Isis 52:143-160.
Daumas, Maurice (editor) 1962 Les origines de la civilisation technique. Paris: Presses Universitaires de France.
Development of Aircraft Engines [by Robert Schlaifer] and Development of Aviation Fuels [by S. D. Heron]: Two Studies of Relations Between Government and Business. 1950 Boston: Harvard Univ., Graduate School of Business Administration, Division of Research.
Dorfman, Robert; Samuelson, Paul A.; and Solow, Robert M. 1958 Linear Programming and Economic Analysis. New York: McGraw-Hill.
Enos, John L. 1962 Petroleum Progress and Profits: A History of Process Innovation. Cambridge, Mass.: M.I.T. Press.
Firth, Raymond W. (1939) 1965 Primitive Polynesian Economy. 2d ed. Hamden, Conn.: Shoe String Press.
Firth, Raymond W. (editor) (1957) 1964 Man and Culture: An Evaluation of the Work of Bronislaw Malinowski. New York: Harper.
Fogel, Robert W. 1964a Discussion. American Economic Review, 54, no. 2:377-389.
Fogel, Robert W. 1964b Railroads and American Economic Growth: Essays in Econometric History. Baltimore: Johns Hopkins Press.
Fogel, Robert W. 1965 The Reunification of Economic History With Economic Theory. American Economic Review, 55, no. 2:92-98.
Forbes, Robert J. 1955-1964 Studies in Ancient Technology. 9 vols. Leiden (Netherlands): Brill. → A second edition of volumes 1-4 was published in 1964-1965.
Ford, C. S. 1937 A Sample Comparative Analysis of Material Culture. Pages 225-246 in George P. Murdock (editor), Studies in the Science of Society: Presented to Albert Galloway Keller. New Haven: Yale Univ. Press.
Forde, C. Daryll (1934) 1952 Habitat, Economy and Society: A Geographical Introduction to Ethnology. London: Methuen.
French, David 1963 The Relationship of Anthropology to Studies in Perception and Cognition. Volume 6, pages 388-428 in Sigmund Koch (editor), Psychology: A Study of a Science. New York: McGraw-Hill.
Gilfillan, S. Colum 1935 The Sociology of Invention: An Essay in the Social Causes of Technic Invention and Some of Its Social Results. Chicago: Follett.
Gille, Bertrand 1962 Le moyen age en Occident (Ve siecle-1350). Volume 1, pages 425-598 in Maurice Daumas (editor), Les origines de la civilisation technique. Paris: Presses Universitaires de France.
Goode, Harry H.; and Machol, Robert E. 1957 System Engineering: An Introduction to the Design of Large-scale Systems. New York: McGraw-Hill.
Goodenough, Ward H. (1957) 1964 Cultural Anthropology and Linguistics. Pages 36-39 in Dell H. Hymes (editor), Language in Culture and Society: A Reader in Linguistics and Anthropology. New York: Harper.
Green, H. A. John 1966 Embodied Progress, Investment, and Growth. American Economic Review 56: 138-151.
Guthe, Carl E. 1925 Pueblo Pottery Making: A Study at the Village of San Ildefonso. New Haven: Yale Univ. Press.
Haber, Ludwig F. 1958 The Chemical Industry During the Nineteenth Century: A Study of the Economic Aspects of Applied Chemistry in Europe and North America. Oxford: Clarendon.
Hall, A. D. 1962 A Methodology for Systems Engineering. Princeton, N.J.: Van Nostrand.
Hall, A. Rupert 1952 Ballistics in the Seventeenth Century: A Study in the Relations of Science and War With Reference Principally to England. Cambridge Univ. Press.
Hall, A. Rupert 1959 The Scholar and the Craftsman in the Scientific Revolution. Pages 3-23 in Institute for the History of Science, University of Wisconsin, 1957, Critical Problems in the History of Science. Edited by Marshall Clagett. Madison: Univ. of Wisconsin Press.
Hall, A. Rupert 1962 The Changing Technical Act. Technology and Culture 3:501-515.
Harris, Marvin 1964 The Nature of Cultural Things. New York: Random House.
Hart, Hornell 1959 Social Theory and Social Change. Pages 196-238 in Llewellyn Gross (editor), Symposium on Sociological Theory. New York: Harper.
Hill, Polly 1963 The Migrant Cocoa-farmers of Southern Ghana: A Study in Rural Capitalism. Cambridge Univ. Press.
Hodgen, Margaret T. 1952 Change and History: A Study of the Dated Distributions of Technological Innovations in England. Viking Fund Publications in Anthropology, No. 18. New York: Wenner-Gren Foundation for Anthropological Research.
Hodges, Henry 1964 Artifacts: An Introduction to Primitive Technology. New York: Praeger.
Hopper, William D. 1957 The Economic Organization of a Village in North-central India. Ph.D. dissertation, Cornell Univ.
Hopper, William D. 1961 Resource Allocation on a Sample of Indian Farms. Unpublished manuscript, Univ. of Chicago.
Kaplan, Norman 1964 Sociology of Science. Pages 852-881 in Robert E. L. Faris (editor), Handbook of Modern Sociology. Chicago: Rand McNally.
Killeffer, David H. 1948 The Genius of Industrial Research. New York: Reinhold.
Klein, Burton H. 1962 The Decision Making Problem in Development. Pages 477-497 in Universities-National Bureau Committee for Economic Research, The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton Univ. Press.
Klein, Burton H. 1965 Policy Issues Involved in the Conduct of Military Development Programs. Pages 309-326 in Ohio State University, Conference on Economics of Research and Development, Columbus, 1962, Economics of Research and Development. Edited by Richard A. Tybout. Columbus: Ohio State Univ. Press.
Koopmans, Tjalling C. 1957 Three Essays on the State of Economic Science. New York: McGraw-Hill.
Leach, Edmund R. (1957) 1964 The Epistemological Background to Malinowski’s Empiricisms. Pages 119-137 in Raymond W. Firth (editor), Man and Culture: An Evaluation of the Work of Bronislaw Malinowski. New York: Harper.
Levy, Marion J., Jr. 1952 The Structure of Society. Princeton Univ. Press.
McKellar, Peter 1957 Imagination and Thinking: A Psychological Analysis. London: Cohen & West.
MacKinnon, Donald W. 1962 Intellect and Motive in Scientific Inventors: Implications for Supply. Pages 361-384 in Universities-National Bureau Committee for Economic Research, The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton Univ. Press.
Maclaurin, W. Rupert 1949 Invention and Innovation in the Radio Industry. New York: Macmillan.
Maclaurin, W. Rupert 1954 Technological Progress in Some American Industries. American Economic Review, 44, no. 2:178-189.
Malinowski, Bronislaw (1935) 1965 Coral Gardens and Their Magic. 2 vols. Bloomington: Indiana Univ. Press.
Manne, Alan S.; and Markowitz, Harry M. (editors) 1963 Studies in Process Analysis: Economy-wide Production Capabilities. New York: Wiley.
March, James G.; and Simon, Herbert A. 1958 Organizations. New York: Wiley.
Marschak, Thomas A. 1962 Strategy and Organization in a System Development Project. Pages 509-548 in Universities-National Bureau Committee for Economic Research, The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton Univ. Press.
Marshall, Andrew W.; and Meckling, William H. 1962 Predictability of the Costs, Time and Success of Development. Pages 461-475 in Universities-National Bureau Committee for Economic Research, The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton Univ. Press.
Matson, Frederick R. (editor) 1965 Ceramics and Man. Viking Fund Publications in Anthropology, No. 41. New York: The Fund.
Mead, Margaret (editor) 1937 Cooperation and Competition Among Primitive Peoples. New York: McGraw-Hill. → A paperback edition was published in 1961 by Beacon.
Meier, Robert L. 1951 Research as a Social Process: Social Status, Specialism, and Technological Advance in Great Britain. British Journal of Sociology 2:91-104.
Merrill, Robert S. 1958 The Cultures of Technologies. Unpublished manuscript.
Merrill, Robert S. 1959 Routine Innovation. Ph.D. dissertation, Univ. of Chicago.
Merrill, Robert S. 1961 Advances in Routine Engineering Design and Their Economic Significance. Unpublished manuscript.
Merrill, Robert S. 1962 Some Society-wide Research and Development Institutions. Pages 409-434 in Universities-National Bureau Committee for Economic Research, The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton Univ. Press.
Merrill, Robert S. 1964 Scientific Communities and Technological Adaptation. Pages 15-20 in The Diffusion of Technical Knowledge as an Instrument of Economic Development. National Institute of Social and Behavioral Science, Symposia Studies Series, No. 13. Washington: The Institute.
Merrill, Robert S. 1965 Engineering and Productivity Change: Suspension Bridge Stiffening Trusses. Pages 101-127 in Ohio State University, Conference on Economics of Research and Development, Columbus, 1962, Economics of Research and Development. Edited by Richard A. Tybout. Columbus: Ohio State Univ. Press.
Nadel, S. F. (1957) 1960 Malinowski on Magic and Religion. Pages 189-208 in Raymond W. Firth (editor), Man and Culture: An Evaluation of the Work of Bronislaw Malinowski. New York: Harper.
Needham, Joseph 1954-1965 Science and Civilisation in China. 4 vols. Cambridge Univ. Press.
Nef, John U. 1932 The Rise of the British Coal Industry. 2 vols. London: Routledge.
Nef, John U. 1950 War and Human Progress: An Essay on the Rise of Industrial Civilization. Cambridge, Mass.: Harvard Univ. Press. → A paperback edition was published in 1963 by Harper as Western Civilization Since the Renaissance.
Nef, John U. 1964 The Conquest of the Material World. Univ. of Chicago Press.
Nelson, Richard R. 1962 The Link Between Science and Invention: The Case of the Transistor. Pages 549-583 in Universities-National Bureau Committee for Economic Research, The Rate and Direction of Inventive Activity: Economic and Social Factors. Princeton Univ. Press.
Ogburn, William F. (1922) 1950 Social Change, With Respect to Culture and Original Nature. New edition with supplementary chapter. New York: Viking.
Ohio State University, Conference on Economics of Research and Development, Columbus, 1962 1965 Economics of Research and Development. Edited by Richard A. Tybout. Columbus: Ohio State Univ. Press.
O’Neale, Lila M. 1932 Yurok-Karok Basket Weavers. University of California Publications in American Archaeology and Ethnology, Vol. 32, No. 1. Berkeley: Univ. of California Press.
Orans, Martin 1966 Surplus. Human Organization 25:24-32.
Parsons, Talcott (1937) 1949 The Structure of Social Action: A Study in Social Theory With Special Reference to a Group of Recent European Writers. Glencoe, Ill.: Free Press.
Parsons, Talcott 1951 The Social System. Glencoe, Ill.: Free Press.
Passer, Harold 1953 The Electrical Manufacturers, 1875-1900: A Study in Competition, Entrepreneurship, Technical Change, and Economic Growth. Cambridge, Mass.: Harvard Univ. Press.
Peck, Merton J.; and Scherer, Frederic M. 1962 The Weapons Acquisition Process: An Economic Analysis. Boston: Harvard Univ., Graduate School of Business Administration, Division of Research.
Polanyi, Michael 1958 Personal Knowledge: Towards a Post-critical Philosophy. Univ. of Chicago Press.
Research Project on the Structure of the American Economy 1953 Studies in the Structure of the American Economy: Theoretical and Empirical Explorations in Input-Output Analysis, by Wassily Leontief et al. New York: Oxford Univ. Press.
[Review Issue of] A History of Technology, by Charles Singer et al. 1960 Technology and Culture 1, no. 4.
Richards, Audrey I. (1939) 1961 Land, Labour and Diet in Northern Rhodesia: An Economic Study of the Bemba Tribe. Oxford Univ. Press.
Salter, W. E. G. 1960 Productivity and Technical Change. Cambridge Univ. Press.
Scherer, Frederic M. 1964 The Weapons Acquisition Process: Economic Incentives. Boston: Harvard Univ., Graduate School of Business, Division of Research.
Schlippe, Pierre De 1956 Shifting Cultivation in Africa: The Zande System of Agriculture. London: Routledge.
Schmookler, Jacob 1966 Invention and Economic Growth. Cambridge, Mass.: Harvard Univ. Press.
Schultz, Theodore W. 1964 Transforming Traditional Agriculture. New Haven: Yale Univ. Press.
Schumpeter, Joseph A. (1912) 1934 The Theory of Economic Development: An Inquiry Into Profits, Capital, Credit, Interest, and the Business Cycle. Harvard Economic Studies, Vol. 46. Cambridge, Mass.: Harvard Univ. Press. → First published as Theorie der wirtschaftlichen Entwicklung.
Schumpeter, Joseph A. 1939 Business Cycles: A Theoretical, Historical, and Statistical Analysis of the Capitalist Process. 2 vols. New York and London: McGraw-Hill. → An abridged version was published in 1964.
Science and Engineering. 1961 Technology and Culture 2, no. 4.
Shepard, Anna O. 1956 Ceramics for the Archaeologist. Carnegie Institution of Washington, Publication No. 609. Washington: The Institution.
Sigerist, Henry E. 1951-1961 A History of Medicine. 2 vols. New York: Oxford Univ. Press.
Singer, Charles J. et al. (editors) 1954-1958 A History of Technology. 5 vols. Oxford: Clarendon.
Smith, Cyril S. 1960 A History of Metallography: The Development of Ideas on the Structure of Metals Before 1890. Univ. of Chicago Press.
Starr, Martin K. 1963 Product Design and Decision Theory. Englewood Cliffs, N.J.: Prentice-Hall.
Straub, Hans (1949) 1952 A History of Civil Engineering: An Outline From Ancient to Modern Times. London: Hill. → First published in German.
Taton, René (editor) (1957-1964) 1964-1965 A History of Science. 3 vols. New York: Basic Books. → First published in French. Vol. 1: Ancient and Medieval Science. Vol. 2: The Beginnings of Modern Science. Vol. 3: Science in the Nineteenth Century.
Taylor, Calvin W.; and Barron, Frank (editors) 1963 Scientific Creativity: Its Recognition and Development. New York: Wiley.
Technology and Culture. → Published since 1959 by the Wayne State University Press for the Society for the History of Technology.
Udy, Stanley H. Jr. 1959 Organization of Work: A Comparative Analysis of Production Among Non-industrial Peoples. New Haven: Human Relations Area Files Press.
Underwood, E. Ashworth (editor) 1953 Science, Medicine, and History: Essays on the Evolution of Scientific Thought and Medical Practice, Written in Honor of Charles Singer. 2 vols. New York: Oxford Univ. Press.
Universities-National Bureau Committee for Economic Research 1962 The Rate and Direction of Inventive Activity: Economic and Social Factors. National Bureau of Economic Research, Special Conference Series, No. 13. Princeton Univ. Press.
Usher, Abbott P. 1955 Technical Change and Capital Formation. Pages 523-550 in Universities-National Bureau Committee for Economic Research, Capital Formation and Economic Growth: A Conference. Princeton Univ. Press.
Vajda, S. 1958 Readings in Linear Programming. New York: Wiley.
Walker, Charles R. 1962 Modern Technology and Civilization: An Introduction to Human Problems in the Machine Age. New York: McGraw-Hill.
Weber, Max (1922) 1957 The Theory of Social and Economic Organization. Edited by Talcott Parsons. Glencoe, Ill.: Free Press. → First published as Part 1 of Wirtschaft und Gesellschaft.
White, Leslie A. 1949 The Science of Culture: A Study of Man and Civilization. New York: Farrar, Straus. → A paperback edition was published in 1958 by Grove.
White, Leslie A. 1959 The Evolution of Culture: The Development of Civilization to the Fall of Rome. New York: McGraw-Hill.
White, Lynn Jr. 1960 Tibet, India and Malaya as Sources of Western Medieval Technology. American Historical Review 65:515-526.
White, Lynn Jr. 1962a Medieval Technology and Social Change. Oxford: Clarendon.
White, Lynn Jr. 1962b The Act of Invention: Causes, Contexts, Continuities and Consequences. Technology and Culture 3:486-500.
Wright, Orville; and Wright, Wilbur 1951 Miracle at Kitty Hawk: The Letters of Wilbur and Orville Wright. New York: Farrar.
Zimmerman, Leo M.; and Veith, Ilza 1961 Great Ideas in the History of Surgery. Baltimore: Williams & Wilkins.
Technology can be generally conceived of as encompassing man’s methods and tools for manipulating material things and physical forces. The relationship between technology and international relations has been continuous and intimate. From the time of man’s most primitive polities, the foreign-policy problems and opportunities of states have been influenced by the nature of their technology for transport, communication, warfare, and economic production. The glory of Athens rested on silver mines, and the might of Sparta on a process for making steel; the Romans ruled through roads, and the Assyrians overran Babylon and Egypt with the chariot. The contemporary effects of hydrogen bombs and intercontinental missiles dramatize a relationship between technology and power and between power and policy that goes back in time through the steam engine and gunpowder to the ox, hoe, and sword and into prehistoric time.
The relationship between technology and foreign policy is neither a new nor a neglected subject among students of world politics. Political geographers have long sought to explore the influence of geographic environment on the foreign policies of states, and in doing this they have had to take account of the manner in which technology has enabled man to adapt to and alter the conditions imposed by his environment. Scholars engaged in the effort to develop quantitative means for measuring and comparing national power have made extensive use of a variety of technological indices such as steel or energy production. Students of nationalism and international organization have been interested in the part played by developments in transportation and communication in the formation of modern states and in the contribution that technology may make toward the establishing of regional or international arrangements among those states. Most recently, stimulated by the events of the last two world wars and the advent of nuclear weapons, scholars have given considerable attention to the interrelationships among weapons technology, military strategy, and foreign policy.
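The composite indices mentioned above can be sketched as a simple weighted aggregation. The following is a toy illustration only: the indicator names, weights, and figures are invented for the example and are not drawn from any actual study of national power.

```python
# Toy sketch of a composite "national power" index: a weighted sum of a
# state's share of hypothetical world totals for several technological
# indicators (steel output, energy production, population).

def composite_index(state, world, weights):
    """Return the weighted sum of the state's share of each world total."""
    return sum(w * state[k] / world[k] for k, w in weights.items())

# All figures below are invented for illustration.
world = {"steel_mt": 700.0, "energy_twh": 6000.0, "population_m": 3000.0}
weights = {"steel_mt": 0.4, "energy_twh": 0.4, "population_m": 0.2}

state_a = {"steel_mt": 140.0, "energy_twh": 1800.0, "population_m": 200.0}
state_b = {"steel_mt": 70.0, "energy_twh": 600.0, "population_m": 600.0}

# State A: 0.4*(0.2) + 0.4*(0.3) + 0.2*(200/3000) ≈ 0.213
print(composite_index(state_a, world, weights))
print(composite_index(state_b, world, weights))
```

Such indices compress incommensurable quantities into a single ranking, which is precisely why their indicator choices and weights were (and remain) contested.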
Research in all these areas has had to contend both with the familiar problem of how to make general observations about the effects of a single variable and with the additional problem that the consequences of technological change have become increasingly difficult to analyze. Taken in their aggregate, the technological developments of the past three centuries have had an extensive and cumulative effect on international relations. But the more complex man’s technology has become, the more it has served to multiply his choice of actions, and in consequence, considered individually, technological innovations have become increasingly less determinative in their effect.
A survey of past influence
The dominant technological development of the past three centuries has been the large-scale and increasing substitution of inanimate for animate energy as the motive force for man’s machines. This substitution had its beginning in the use of gunpowder and wind and water power, but it was only when man discovered how to convert the heat from the burning of fossil fuels into mechanical energy, and how to convert mechanical into electrical energy and back again, that inanimate energy became both plentiful and transportable. It is this energy base that has made possible the whole complex of technological developments that constitutes modern industrial civilization.
None of the key elements in the international political process has been untouched by the industrial revolution. The structure of the state system and of states themselves, the purposes and expectations moving state policy, and the means available to states for achieving their purposes have all been significantly altered.
Consider the changes in the structure of the state system, that is, in the number, location, and relative power of its members. As the industrial revolution transformed the bases of military power and increased its mobility, international relations became global, rather than regional, in scope, and the relations among the members of this global system became continuous, rather than episodic. The hegemony of Europe over the other continents, which began with such rudimentary energy advantages as the sail and cannon, became virtually complete with the advent of the steamship and improved ordnance. The fate of the technically inferior polities was well summed up in the couplet of Hilaire Belloc, “Whatever happens we have got/The Maxim gun and they have not”; and until the industrialization of the United States and Japan, world politics was essentially European politics.
The structure of the European state system itself was no less affected by the new technology. The disparity in power between large and small states was greatly increased (contrast the vulnerability of the Lowlands in 1914 and 1940 with their military exploits against Spain in the sixteenth century and against England in the seventeenth century), and the enhanced opportunities for union, voluntary or involuntary, saw the number of states in Europe reduced from some four hundred at the time of the Treaty of Westphalia to less than one hundred by 1815 and to a mere thirty in 1878. Drastic changes also occurred in the distribution of power among the Great Powers, most notably as a result of the early industrialization of England and the later displacement of France by Germany as the dominant power on the Continent.
These changes in the number, relative power, and location of the states making up the international system have had great consequence both for the stability of the system and for the character of the strategies pursued by individual states within it. Two world wars testify to the instabilities introduced by the rise of German and Japanese power. Similarly, the whole character of American foreign policy changed when the United States moved from a power position where its continued survival depended upon the commitment of European power and interest elsewhere to a position where American military potential exceeded that of the major European powers combined.
The effect of technology on the internal political structure of states has been equally striking. Just as gunpowder brought an end to castles and made it more feasible to establish effective national governments, so the later technology of mass transportation and communication enormously increased the ability of governments to mobilize the time and energy of their citizens. The development of the urban-industrial state has created both new political elites and new political relationships between elites and masses, most notably in the development of mass democratic and mass totalitarian states. Although the foreign-policy consequences of these changes are difficult to disentangle from the effects of other variables, the greater command that central governments can now exercise over people has certainly contributed to the ability of such governments to wage more intensive and more sustained warfare. The state’s need for popular support has also brought public opinion (with its moods and emotions) into the conduct of foreign policy and has enlarged the foreign audience for state diplomacy to include government-to-people communications as well as government-to-government communications. Finally, the dispersion of political and bureaucratic power that has attended the development of the industrial state has greatly increased the complexity of the process through which foreign policy is made, with the result that the opportunities for confusion, contradiction, indecision, and instability in the conduct of policy have been significantly increased.
The impact of technology on the purposes of state policy has been most marked on the intermediate level of the ends-means chain. States have continually pursued such general goals as “plenty,” “glory,” and “power,” but there has been considerable change in the operational definition of these goals. Among the underdeveloped states today, the effort to secure industrial technology has itself become one of the major preoccupations of foreign policy, and among the industrial states scientific and technological achievements are now prized as symbols of power and prestige. Consider also the changing value that states have assigned to particular territories on the globe. The steamship contributed to the imperialism of the late nineteenth century by opening to commerce areas difficult to reach by sail; the political importance of the Middle East in the twentieth century has been largely the result of European dependence on its oil reserves; and the advent of the missile-firing nuclear submarine has endowed even the geography of the Arctic with strategic significance. As for the contribution of technology to the more “ultimate” purposes of state policy, the present conflict between the Soviet Union and the United States owes much to the fact that each has evolved a different conception of the proper arrangement of things and people in an industrial society and is persuaded that its conception of the good life must and should prevail elsewhere.
The relation of science to foreign policy has been for the most part indirect, since society usually experiences new additions to scientific knowledge in the form of the technical applications of that knowledge. This is not the case, however, with respect to man’s general expectations about the course of human events. Here, new knowledge about man and the universe has led directly to a reorientation of such expectations.
The belief of seventeenth-century and eighteenth-century European statesmen in the balance of power as the natural order of state relations reflected in part their appreciation of the picture of measured order and equilibrium that science then presented of the physical world. Similarly, European and American policies at the turn of the nineteenth century were conditioned by a set of expectations about the “natural” struggle of states and the “inevitability” of the victory of the stronger over the weaker that had been stimulated by Charles Darwin’s theories about the evolutionary process.
As in the case of state goals, the general means available to states for securing their purposes have remained the same (persuasion, bargaining, and coercion), but the techniques through which states may employ these means have been greatly altered by recent technology. The development of more rapid and more reliable means of transportation and communication has transformed the conduct of diplomacy. The increased speed of communication between states permits choices to be made on the basis of more recent information about the actions and interests of others. The Anglo-American War of 1812 would probably never have occurred if an Atlantic cable had been available to inform Washington that the British were planning to repeal their orders-in-council. The handicaps that the slow transportation of the period imposed on negotiations are exemplified in the odyssey of President Madison’s peace commissioners; they left Washington in April 1813, hoping to meet the British in St. Petersburg, but did not catch up with them until August 1814, at Ghent. Today, the words of governments can be spread almost instantaneously around the world, and their agents are only hours away from the most distant foreign capitals. Central governments can also now exercise far greater control over the actions of their ambassadors and military commanders abroad. The initiative and independence that formerly could sometimes be displayed by a distant ambassador (as in the case of the contribution of Britain’s Stratford Canning in Istanbul to the coming of the Crimean War) have now been displaced, potentially at least, by the kind of detailed and continuous command and control that President Kennedy exercised over his representatives in the field during the 1962 Cuban missile crisis. [See Diplomacy.]
These changes have significantly increased the pace and coherence of international relations, but they have had no effect on the propensity of those relations to turn to violence. The more rapid communication of words and transport of negotiators provide in themselves no promise that conflicts between states will be either less frequent in occurrence or more easily resolved. There is no guarantee that a conversation over the “hot line” will prove any more effective in preventing war than was the 1914 “Willy-Nicky” correspondence over the telegraph, and as in the case of the 1960 summit conference, the jet plane can bring the major figures of the world quickly together for a dialogue that will only drive them further apart. Similarly, while governments can now exercise greater control over their men and machines in the field, they may not always choose to exercise that control, or the men in the field may not heed it (American policy in the Korean War provided some examples); and there are, in any event, no grounds for expecting, just because policy is more coordinated, that it will for that reason be either more belligerent or more pacific.
In assaying the impact of advanced communication and transportation technologies on the conduct of foreign policy, it is important to note that what technology has given with one hand, by increasing the speed of communication and transportation, it has taken with the other, by decreasing the time available for decision. Not even the telegraph was able to offset the pressures placed upon diplomats in 1914 by the mobilization tables of the general staffs, whose own time pressures were the result of the contribution that the railroad had made to the speed with which armies could be assembled and deployed on enemy frontiers. Indeed, it would be a fair hypothesis that successive increases in the volume and speed of action-forcing agents (messages, visits, events) have so accelerated the pace of international relations that, despite increases in the number of people engaged in the conduct of international relations, policy makers have been not only deciding more but thinking less.
The same double-edged effect can also be seen in the result of advances in technology for military command and control. Today’s technology permits strategic choices to be made on the basis of more complete and more rapidly processed information and to be executed with greater precision than in the past, but contemporary military technology has also increased the complexity of strategic problems and made strategic choices far more irreversible in their consequence. As a result, it is doubtful whether contemporary strategic nuclear forces—with their radar, teletypewriters, electronic locks, and computers—are any more “manageable” as instruments of policy, in a meaningful political sense, than were the armies and navies of World War I, with the telegraph and radio, or the armed forces of Napoleon, with the horse and semaphore.
Since the conduct of international relations is ever oriented toward the prospect of war, the relationship between technology and foreign policy is nowhere more evident than in the consequences of changes in the means available to states for coercion. Developments in the means of warfare have affected all the elements in the international political process previously discussed. Note has already been taken of the changes in the structure of the state system that resulted from the near synonymity of great military power and great industrial power. Similarly, the development of governmental structures capable of controlling every sphere of human activity, and the conduct of diplomacy for its impact on domestic as well as foreign audiences, have reflected the state’s need for mass armies and the military importance of the civilian labor force. And nowhere has the reciprocal relation between ends and means been better demonstrated than by the advent of twentieth-century total war. As improvements in technology increased the number and the destructive scope of weapons of war, thereby increasing the costs in treasure and blood entailed in their production and use, compensation was sought through enlarging the purposes of war, and this, in turn, served to stimulate the belligerents to still greater destructive efforts.
The destruction that attended the last two world wars has also left its mark on some of the general expectations about Western civilization, most notably that concerning its inevitable progress. The development of ever more destructive weapons has been accompanied by the disappearance of the few limitations (such as the discrimination between civilians and soldiers) that were formerly thought desirable or at least expedient during the exercise of violence in the name of state policy. The very value structure of science and technology, by emphasizing a pragmatic rather than an absolutist approach to problems, may have contributed to the dominance of military expediency over previously accepted humanitarian norms. At all events, the increasing destructiveness of weapons, coupled with the expectation that future warfare will be governed by the rule of “anything goes,” has served to call into question one of the fundamental premises of Western culture: the belief that advances in science and technology will result in man’s ultimate benefit.
One of the most striking demonstrations of the effect of changes in military technology on international relations has been that afforded by the development of nuclear weapons. Like the railroad and the steamship before them, nuclear weapons have revolutionized the character of war and the power relationships among states. The new weapons have widened the disparity between large and medium powers, increased the influence of scientific and military elites (and hence their policy perspectives) in state structures, and elevated new goals, such as deterrence and arms control, into the higher ranks of state purposes. The destructive character of nuclear weapons has also led to a dramatic change in expectations about the suitability of general war as an instrument of foreign policy. Thus, their unwillingness to contemplate the certainty of nuclear war compelled the Soviets to revise their theories about the inevitability of war with the United States. Similarly, the dominant expectation in Western capitals has been that, since there are no purposes states could achieve by a nuclear war that would be worth the lives that would be lost in its fighting, nuclear weapons will have the effect of making highly unlikely an all-out war between states which possess them.
Whether such a revolutionary consequence for the conduct of foreign policy can be ascribed to the development of nuclear weapons seems at best problematical. Certainly, nuclear weapons have made war against a well-prepared opponent seem irrational. Nevertheless, the expectation that war between nuclear powers will be prevented by their recognition of the costs involved is open to serious question. To begin with, as many students of military policy have pointed out, deterrence is neither technically simple nor politically automatic. Quite apart from the possibility of irrational acts, there will be many opportunities for statesmen to conclude—accurately or inaccurately—that the capabilities of their opponent make the costs of war bearable or that the intentions of their opponent make the costs of war unavoidable. Even more to the point, the argument that the loss of life which would attend a nuclear war makes such wars unlikely ignores the fact that the objects for which statesmen contend are rarely weighed in human lives. There are few instances in history of statesmen deciding to go to war after having made a deliberate calculation that their objects would be worth the loss of x lives (but not x + n lives). More frequently, the decisions that have led to war have taken the form of statesmen calculating only that their objects were worth the risk of war.
For these reasons, the consequences of nuclear weapons for the conduct of foreign policy may not prove as revolutionary as many believe. The level of destruction that would attend a nuclear war becomes less relevant if the critical choices should be made through reference to relative, rather than absolute, costs (better World War III now than later). The absolute level of destruction is also less relevant if the choice involved is only to risk the costs of war, not to incur them. The diplomacy of nuclear powers since World War II would indicate that, while they have been unwilling to incur the costs of nuclear war, they have been neither willing (nor seen themselves able) to forgo policies which entail the risk of such costs. Yet, as the diplomacy that preceded World War I and World War II amply illustrates, a political process in which states are willing to risk the costs of war can share many of the features, and conceivably the results, of a process where states are willing to incur the costs of war. [See Nuclear war.]
Characteristics and trends
The preceding survey has shown how technological developments of the past three centuries have effected significant changes in every element in the international political process (actors, ends, expectations, means, and system). Attention can now be directed to some of the general characteristics of and trends in the relationship between technology and international relations.
Characteristics. (1) The political changes effected by technology have normally been the result of multiple, rather than single, technological developments. The European colonization of the world was dependent upon the development of the clock, the compass, and gunpowder, as well as improvements in the design of sailing ships. Similarly, the British decision in 1912 that their navy could no longer conduct a close blockade of enemy ports cannot be traced to any single naval innovation. This decision (which led the British to develop procedures for the kind of distant blockade that subsequently strained their relations with neutrals such as the United States during World War I) was the end product of a number of technical developments, most notably steam propulsion, more powerful ordnance, mines, torpedoes, and submarines. The recognition that the effects of technology are best appreciated through reference to some grouping of interrelated individual developments is reflected in the contemporary use of the term “weapons system.” The dominant weapons system responsible for the current Soviet-American balance of terror is actually the product of the interaction of three different major technologies: those relating to missiles, electronics, and nuclear energy.
(2) The major political changes associated with technological developments have been the result of a multiplicity of nontechnical, as well as technical, factors. The disappearance of the limitations that characterized European warfare in the eighteenth century can be only partially explained by the technical changes that produced better roads, increased metal production, and improved the efficiency of firearms and artillery. Reference must also be made to critical changes in foreign policy (the displacement of territorial and commercial objectives by the ideological issues of the American and French revolutions); changes in military doctrine (organizational innovations making feasible the direction of larger armies and the development of more aggressive and more sustained campaign tactics); and even changes in the general cultural ethos (a lessened belief in the sinful nature of man, with the consequent loosening of inhibitions against weapons development, and a shift from an interest in production for artistic value to a concern for low-cost quantity production). The complex of technical and nontechnical variables can also be seen in the reasons for the breakup of the European colonial empires after World War II and the consequent doubling of the number of states on the planet. The explanation is to be found partly in technical developments (the global diffusion of European weapons technology, and the contribution of mass communications technology to the growth of a sense of identity among colonial peoples) and partly in political developments (the diffusion of European ideas about nationalism, and the contribution of new theories about racial equality to the weakening of the European determination to maintain colonial rule).
(3) The political problems and opportunities resulting from technological change have been unequally distributed among states, both temporarily and permanently. The American experience with nuclear weapons provides a recent example. The advantages of a short-lived monopoly have been followed by a revolutionary decline in the military security of the United States. Unlike Germany in the first half of this century, the Soviet Union does not have to conquer the Old World before it can command the resources necessary to strike a mortal blow at the American continent. The destructiveness, range, and cheapness of nuclear weapon systems have stripped the United States of her earlier cushion provided by allies, time, and space and have largely canceled out the industrial superiority that meant defeat for her enemies in the last two world wars. The asymmetrical effects of technological change are also evident in the results of the global diffusion, since the end of World War II, of public health techniques developed in Europe and North America. The application of these techniques in Asia and Latin America reduced death rates in those areas, in a period of a decade, to levels which the Europeans had required centuries to reach. But as a result of their continued high birth rates, the Asians and Latin Americans, unlike the Europeans, must begin their efforts to industrialize under the handicap of an unparalleled expansion in population.
(4) The political consequences of technological change have been largely unanticipated. To begin with, most of the technological developments themselves have come as surprises. A study, sponsored by the United States government in 1937, which endeavored to forecast developments for the next decade, failed to anticipate, among other items, atomic energy, jet propulsion, radar, and antibiotics. Even when the general effects of technological developments have been clear, an analysis of their political consequences has not always been forthcoming. As of this writing, the population explosion noted above, one of the major transformations in the world today, has been discussed for over a decade, but its foreign-policy consequences have yet to be delineated beyond the simple Malthusian prophecies of war, plague, and famine. And finally, when efforts have been made to predict the foreign-policy consequences of new technologies, the score has not been impressive. History is full of confident predictions that this or that development (the hot-air balloon, dynamite) would make war irrational. Similarly, many observers have expected that the advances in transportation and communication technology during the past century would increase international ties and identifications and result in larger states, regional groupings, or even one world. Actually, as a result of the political innovations with which governments met these technical developments (e.g., more effective trade and passport controls, censorship, and more intensive means of political socialization), the world has become, not more “international” since the nineteenth century, but less. History’s largest contiguous empire remains that conquered by the Mongols on horseback, and while steam did help to enlarge the European empires created by sail, the main effect of the last century’s advances in transportation and communication has been, not to produce larger polities, but to increase the cohesion of existing polities.
Trends. (1) Science now precedes technology. Both the neolithic revolution (the domestication of animals and the development of agriculture) and the industrial revolution took place independently of advances in man’s scientific knowledge. Steam engines were built long before their basic laws were formulated. This relationship began to change with the advent of the chemical and electrical industries, and since this century began, scientific discovery has increasingly become a necessary preliminary to new technology. Thus, the development of the atomic bomb was dependent on basic research in nuclear physics; and by the end of this century the further development of technology may be almost completely based upon advances in scientific knowledge.
(2) Scientific knowledge and technological innovation are increasing at an exponential rate, at least in the scientifically literate and technically advanced states, for the more technologically complex a society becomes, the more easily it can generate and absorb new information and techniques. It is estimated that 90 per cent of all the scientists who ever lived are alive today, and, as crudely measured by the volume of scientific publication, scientific knowledge is doubling every ten to fifteen years. The change in the rate of technological innovation is equally impressive. In the first three hundred years after the invention of firearms, the improvement in the original product was so slow that Benjamin Franklin gave serious consideration to arming the Continental Army with bows and arrows. In contrast, only ninety years passed between the first successful steamship and the disappearance of sails from warships, and fifteen years after the first flight of Orville Wright there were 2,600 planes and 300,000 men in the Royal Air Force.
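The doubling-time estimate above is simple exponential arithmetic; the following sketch (illustrative only, not drawn from the article's data) shows the century-scale growth it implies:

```python
# Growth implied by a constant doubling time: after t years, the volume
# has multiplied by 2 ** (t / T), where T is the doubling time in years.
# The figures below are illustrative arithmetic, not data from the article.

def growth_factor(years: float, doubling_time: float) -> float:
    """Multiplicative growth after `years`, given a constant doubling time."""
    return 2 ** (years / doubling_time)

# The article's estimate of a doubling every 10 to 15 years implies that
# a century multiplies the volume of publication roughly 100- to 1,000-fold.
for T in (10, 15):
    print(f"doubling every {T} years -> {growth_factor(100, T):,.0f}x per century")
```

On the faster estimate (a doubling every ten years) a century yields a 1,024-fold increase; on the slower (every fifteen years), roughly a 100-fold increase.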
(3) Both the costs of acquiring new scientific knowledge and the costs of product innovation appear to be increasing. One reason American university research budgets have become so dependent on the government for funds is that no other source is rich enough to meet the rising costs of research. The situation in some fields of nuclear physics has been characterized by one scientist’s observation that it costs a million dollars just to ask a question. Similarly, the production of a fighter plane required 17,000 engineering hours in 1940, but 1.4 million hours were required by 1955. Finally, as a result of the disappearance of high-grade ores, even the production of basic materials, such as iron, copper, and bauxite, now requires increasing amounts of technical equipment and energy. As a result of these developments, the scientific and technological distance between powers has been steadily widening. At present the United States, the Soviet Union, Europe, and the rest of the world each has one-fourth of the world’s supply of scientists and engineers. Even the most technically advanced of the European states are no longer able to compete, on an individual basis, with the United States and the Soviet Union in such technologically intensive fields as nuclear weapons, advanced aircraft, space, and missiles, and the nations which make up the rest of the world are hopelessly outclassed. In 1953, for example, the United States Atomic Energy Commission used six times as much electricity as India produced that year. The point to these developments would seem to be that in the future, not only will the Great Powers alone be able to have great technology but, unless the smaller states pool their efforts, only the Great Powers will have great science.
(4) Scientific research has become increasingly subject to government control and direction. Governments have long sought to foster and exploit technological developments for political, especially military, purposes. (Bessemer began work on his process in order to win a prize that Napoleon III had offered for a cheaper means of producing armor plate, and the governments of several European states took an active part in the construction and location of railroads in order to facilitate the deployment of troops at key frontiers.) But until the advent of the cold war the process of scientific discovery was largely unplanned and random, as far as government choices were concerned. By the end of the seventeenth century, science had developed into an international and essentially autonomous social institution; during the great ideological conflicts of the early nineteenth century, scientists and their ideas were allowed to pass as freely across political frontiers in time of war as they did in time of peace. Although governments made a primitive effort to put scientists to work on military problems during World War I (the key role of Fritz Haber and other German chemists in the development of poison gas was a harbinger of the part physicists were to play in the development of the atom bomb), it was not until World War II that governments brought the resources of their scientists and engineers fully to bear on the problems of war. The results of this effort (radar, the proximity fuse, the V-2, and the atom bomb) were such as to guarantee that its value would not be forgotten with the war’s end.
What has transformed the relationship between science and government has been the previously noted point that the development of technology has become increasingly dependent upon advances in scientific knowledge about the physical world. This trend is especially critical for the United States and the Soviet Union. As these powers throw one weapons system after another into the effort to maintain at least a balance of terror, neither dares fall behind in either the discovery of new physical relationships or the application of scientific knowledge to military hardware and political-military strategy. It is indicative of the new relationship between science and war that figures and graphs comparing the major powers in number of scientists and engineers have become as familiar as those in the 1930s which compared their output of coal, oil, and steel. Nor is it only in the military field that science has become vital to the course of foreign policy. Science has been harnessed to the advancement of foreign-policy goals in such diverse fields as the exploration of space and oceans, birth and disease control, weather modification, and global communications. [See Science, article on Science-Government Relations.]
It is a safe prediction that the foreign-policy problems and opportunities of states will continue to be influenced by technological change. Even after the current exponential rates of discovery and invention begin to level off, the pace of discovery and invention will still be far in excess of what man has experienced throughout most of the twentieth century. Moreover, mankind appears to be entering upon an era of technological development commensurate in cultural importance to that of the industrial revolution. Just as the industrial revolution was based on the substitution of inanimate energy for the deficiencies of the human muscular system, so now automation and computers have begun to substitute for the deficiencies of man’s brain and central nervous system. In fact, this second development may prove even more revolutionary for man’s culture than the first. Man also stands on the threshold of major discoveries in human biology and chemistry. Indeed, the foreign-policy problems posed by nuclear weapons could seem simple compared with those which might result from breakthroughs in the understanding and control of memory, learning, and heredity (the alarm that a state might experience on the discovery of a “gene gap” could easily match the alarm felt by the United States during the “missile gap” scare of the late 1950s). When to these prospects one adds such possibilities as the use of new energy sources and climate control, it would seem evident that the future changes in international relations associated with scientific and technological developments will prove at least as consequential as those of the past.
It is much less certain whether man will be able to improve on his past performance in anticipating and controlling the political consequences of technological change. To date, science and technology have been liberating forces in Western culture. They have served to dispel ignorance and superstition and have given man a sense of control over nature and his destiny. But with the multiplication of knowledge and the increased specialization of disciplines, individuals are becoming ever more ignorant of the workings of the world about them, outside their area of information. Unless this development is balanced by an increased sense of governmental or social control over the course of technology, it could lead to a mounting sense of impotence on the part of technical-urban man. He could begin to display the same kind of fatalism and apathy toward the mysteries of his technical environment that peasant-village man displays toward the mysteries of his natural environment. Should the development of science and technology lead to a perspective of this order, it would mark the final collapse of the eighteenth-century and nineteenth-century ideal of a rational society where man’s material environment, no less than his social and political environment, is susceptible to human understanding and control.
In view of the difficulties that attend the problem of prediction, it might seem rash to expect that future discoveries and inventions and their foreign-policy consequences will be better anticipated than has been the case in the past. Nevertheless, the four trends discussed above do provide some grounds for such an expectation. Previous attempts to forecast the development and consequence of technology have been sporadic, informal, and mainly the work of interested but not always appropriately skilled individuals. In the future, as the governments of the major powers play an increasing role in the material support of research and development programs, the mounting costs, together with the multiplication of the possible avenues of inquiry, insure that these governments will become increasingly involved in determining the content and priorities of such programs. The existence of continuous and self-conscious planning efforts of this order, on the part of skilled and concerned government consultants and officials, should have the effect of significantly reducing the degree of “technical surprise” that will attend the results of national research and development programs.
The same trends also point toward a more determined effort by governments to predict the political consequences of their research and development programs. As the opportunities for further research and development in each of a thousand different fields mushroom with the acceleration of scientific knowledge, whatever programs a government decides to support, it will thereby be deciding not to support many others. In consequence, both the government’s own interests and the interests of the proponents and opponents of particular programs will combine to place governments under increasing pressure to predict and justify in advance the policy consequences of their choices.
The mere fact that governments will be under pressure to make predictions provides, of course, no guarantee of their accuracy. Still, as with the effort to predict the future course of technology, there is some reason to believe that more determined efforts to predict the consequences of that technology will lead to some improvement over past performance. One reason for the succession of “political surprises” experienced over recent centuries is that predictions have too often taken as their point of departure the alleged identification of a single new “key” discovery or invention. What is clearly needed instead are predictive efforts which take as their point of departure the identification of potential new technological systems. This approach is already employed in the analysis of military research and development options, and there seems no reason why it cannot be extended to other technological fields. [See Economics of Defense.]
An equally important requirement for more accurate prediction is the necessity to take account of the manner in which political purposes and institutions may shape the consequences of technological change. Man has never been the passive tool of his technology. Important as the scientific discoveries and technical inventions of the past several centuries have been, the history of those years could hardly be written without reference to man’s political theories and innovations: nationalism, the Protestant ethic, the balance of power, democratic government, bureaucracy, collective security, or socialism. Consider the current relations between the United States and the Soviet Union. Missiles, electronics, and nuclear weapons have produced a revolutionary change in the two countries’ military technology, but the policies which have guided the development and deployment of that technology have been the product of such factors as the “lessons of the 1930s,” on one side, and Lenin’s reading of nineteenth-century history, on the other.
There is, in short, an “endless frontier” to politics as well as to science, and man’s fate will be determined as much by his adventures along the one as along the other. Indeed, the more complex man’s technology becomes, the more permissive are its effects on man’s action and the more the consequences of technology turn on his political choices. In technologically primitive societies, man’s values and social structures are highly conditioned by the nature of his technology. But just as man first used technology to overcome the limitations of his natural environment, so now, in technologically complex societies, man can turn science and technology to the task of overcoming limitations in his technical environment. Increasingly, man’s values determine his technology; he can do what he wants.
The result of this development is that in the future, even more than in the past, the task of understanding, predicting, and controlling the impact of scientific and technological developments on international relations will turn not so much on an analysis of the technological possibilities as on an analysis of men’s theories about the international political process and their conceptions about the roles that their own and other states should and will play in that process.
Warner R. Schilling
[Directly related are the entries Communication, Political; Disarmament; Foreign Policy; Geography, article on Political Geography; International Politics; Military Power Potential; Nuclear War; Strategy. See also International Relations; War; and the biographies of Douhet; Mahan; Richardson.]
Berkner, Lloyd V. 1964 The Scientific Age: The Impact of Science on Society. New Haven: Yale Univ. Press.
Born, Max 1958 Europe and Science. Bulletin of the Atomic Scientists 14:73-79.
Boyko, Hugo (editor) 1964 Science and the Future of Mankind. Bloomington: Indiana Univ. Press.
Brodie, Bernard 1941 Sea Power in the Machine Age. Princeton Univ. Press.
Brodie, Bernard; and Brodie, Fawn 1962 From Crossbow to H-Bomb. New York: Dell.
Brown, Harrison; Bonner, James; and Weir, John 1957 The Next Hundred Years: Man’s Natural and Technological Resources. New York: Viking.
Foster, George McC. 1962 Traditional Cultures and the Impact of Technological Change. New York: Harper.
Fuller, J. F. C. (1945) 1946 Armament and History: A Study of the Influence of Armament on History From the Dawn of Classical Warfare to the Second World War. London: Eyre & Spottiswoode.
Haskins, Caryl P. 1964 The Scientific Revolution and World Politics. New York: Harper.
Johns Hopkins University, Washington Center of Foreign Policy Research 1960 Developments in Military Technology and Their Impact on United States Strategy and Foreign Policy. U.S. Congress, Senate, Committee on Foreign Relations, U.S. Foreign Policy Study No. 8. Washington: Government Printing Office.
Johnson, Ellis A. 1958 The Crisis in Science and Technology and Its Effect on Military Development. Operations Research 6:11-34.
Lasswell, Harold D. 1956 The Political Science of Science: An Inquiry Into the Possible Reconciliation of Mastery and Freedom. American Political Science Review 50:961-979.
Mumford, Lewis (1934) 1964 Technics and Civilization. New York: Harcourt.
Nef, John U. (1950) 1952 War and Human Progress. Cambridge, Mass.: Harvard Univ. Press.
Ogburn, William F. (editor) 1949 Technology and International Relations. Univ. of Chicago Press.
Schilling, Warner R. 1959 Science, Technology, and Foreign Policy. Journal of International Affairs 13: 7-18.
Sprout, Harold; and Sprout, Margaret 1962 Foundations of International Politics. Princeton, N.J.: Van Nostrand. → See especially chapters 7 and 8.
Sprout, Harold; and Sprout, Margaret 1965 The Ecological Perspective on Human Affairs, With Special Reference to International Politics. Princeton Univ. Press.
Stanford Research Institute 1959 United States Foreign Policy: Possible Nonmilitary Scientific Developments and Their Potential Impact on Foreign Policy Problems of the United States. U.S. Congress, Senate, Committee on Foreign Relations, U.S. Foreign Policy Study No. 2. Washington: Government Printing Office.
Vlekke, B. H. M. 1965 The Development of Modern Science and the New Tasks of Diplomacy. Pages 221-236 in Karl Braunias and Peter Meraviglia (editors), Die modernen Wissenschaften und die Aufgaben der Diplomate. Graz (Austria): Verlag Styria.
Wohlstetter, Albert 1964 Technology, Prediction, and Disorder. Bulletin of the Atomic Scientists 20, October: 11-15.
Woodward, Llewellyn 1956 Science and the Relations Between States. Bulletin of the Atomic Scientists 12:119-124.
Wright, Quincy 1955 The Study of International Relations. New York: Appleton.
TECHNOLOGY. Early modern Europeans paid new attention to the machines and technical processes that created most of their material goods. Appreciation of rapidly advancing arts and inventions was not in itself new; the Middle Ages, too, had been an era in which myriad new technologies appeared in Europe. What was becoming noticeably different by the middle of the fifteenth century was that new technologies were becoming a force in the shaping of Europeans' intellectual framework—just as they shaped social frameworks through the expanding manufactories in mining, ordnance, papermaking, printing, and textiles. Both the material and the mental landscapes of early modern Europe were dramatically reconfigured over these centuries, and in a very self-consciously interdependent way.
"Technology" did not really exist as a concept until at least the seventeenth century; what we see in the early modern period is the attempt to create a realm that constantly straddled growing scientific thought and developing industrial practices. Technology continues today to ambiguously refer both to the practices and tools of material construction, and to the knowledge (the -ology) about how these practices and tools operate. In the centuries spanning the invention of the printing press and the first experiments with electricity, technology gave rise to a particular vision of human effort and learning, one whose central image was that of "progress."
Mechanical arts in the ancient and medieval period had often been disregarded by scholars and philosophers and by the makers of literate culture. To a large extent, the name "mechanic," because associated with manual labor, remained tainted throughout the early modern period (and remains so today). However, starting in the Renaissance, Europeans began to reframe their concept of learning around the study of human productivity. This reframing contributed significantly to the restructuring of the existing system of Aristotelian natural philosophy. Knowledge of machines and technical processes offered clues to the natural forces that govern both natural and artificial processes. Galileo Galilei's (1564–1642) formulation of kinematic motion, for example, was completed after long years of studying projectiles in the context of military engineering. Early modern theorists of science and enlightenment articulated the faith that philosophical knowledge can be derived from technical arts, and then reapplied to organize the technical world in a more efficacious way. They did not so much dignify craftsmen as seek to appropriate from craftsmen universal principles by which the arts could be directed. The capture of those principles became a major goal of scientific enquiry and underwrote a new professional, the engineer, whose status and learning were meant to distinguish him from the mere craftsman.
WONDERS OF THE AGE
By 1548, the French physician and astronomer Jean Fernel (1497–1558) could proclaim the inventions that testified to "the triumph of our New Age": the compass, the cannon, and the printing press. Of these, the printing press, nearly one hundred years old, was the newest. The full impact of the compass, cannon, and printing press was not obvious until the end of the fifteenth century and depended on the development of other technologies.
Compass. The introduction of the magnetic compass gave mariners not only a new way of navigating in open sea, but, perhaps even more importantly, a means of recording their journeys in a readable and fairly precise way. The portolan map, fully developed by the fifteenth century, was produced by drawing coast lines and islands according to constant lines of compass bearing. The remarkable advance this offered can only be appreciated visually. In the middle of the fifteenth century, this advantage to navigation was joined by a new ship design that allowed greater maneuverability. The medieval carrack was replaced by the three-masted ship, which offered more sail area, the ability to sail windward, and larger sterns for cargo and crew. By 1488, Portuguese sailors, who were also learning the system of winds, were able to round the Cape of Good Hope. Oceanic voyages quickly opened up new prospects for trade with the East, and, after 1492, a New World.
Cannon. The development of gunpowder artillery changed the balance of power both between Europeans and other peoples, and, intermittently and temporarily, between the emerging nation-states of Europe. Invented sometime in the early fourteenth century as a rather cumbersome, if effective, bombard, gunpowder artillery underwent a great deal of development throughout the fifteenth century. Europeans learned to cast and bore cannons (rather than barrel together hoops of forged metal) to specific calibers; they designed gun carriages for better mobility; they learned to make nitrates for the saltpeter necessary to gunpowder production, and to corn (or ball) the gunpowder for better storage. The main effect the advent of widespread cannon warfare had on noncombatants was to change the faces of their cities. Older town walls (and often a number of townsmen's houses) were demolished for newer, lower, and thicker geometrical circuits. Polygonal, bastioned fortifications, the trace italienne, were built around numerous continental European cities. A secondary effect of military engineering concerns was to focus attention on the problems of projectile motion, impact, and the resistance of materials—all areas of concern in the establishment of a new physics.
In the field, the integration of small arms worked to further alter the conduct of open battle. The shoulder-carried harquebus or musket, already in use by the 1480s, developed into a common weapon of the infantry, even if pikemen continued to be of essential importance into the seventeenth century. A more sudden transformation took place in the cavalry as a result of the spread of the wheellock pistol in the mid-1500s. Employed by mounted German Reiters, and further developed as a cavalry weapon by the French under Henry IV (ruled 1589–1610), the adoption of the pistol led to the dethroning of the armored lance, and "the end of knighthood."
Printing press. The political theorist Jean Bodin (1530–1596) wrote, "The art of printing alone would easily be able to match all the inventions of the ancients." Printing had transformed intellectual life. Before its advent around 1450, a personal library of fifty volumes was considered sumptuous; by Bodin's writing, noblemen routinely collected hundreds; pamphlets and other cheap print were available to most literate people.
The printing press relied on a set of standard-sized raised letters, each cast in a matrix that a steel punch had stamped with the letter's shape, and then set into a form. The system of punches, matrices, and forms was the most significant (and expensive) aspect of the invention, and established printing as the first industry to employ interchangeable parts. The success of the print trade relied on the earlier development of paper technology, which in the previous 150 years had largely replaced parchment (scraped animal skins) and greatly reduced the expense of books. It also depended on sophisticated metallurgy; steel was difficult to produce, and the type metal had to cast cleanly and wear well under repeated impressions.
Other arts. Aside from these "revolutionary" technologies, a host of smaller-scale innovations enriched domestic interiors between 1450 and 1550. Venetian glassmakers pioneered a refined clear glass in the late fifteenth century, and Italian potters began to manufacture brightly painted majolica. The European silk industry expanded greatly. In the sixteenth century, the French potter Bernard Palissy (1510–1589) formulated a pure white glaze in imitation of porcelain. All these products offered domestic alternatives to goods that had previously been imported from the Middle or Far East. Meanwhile, techniques for quicksilvering mirrors and the development of oil paints that could capture dramatic lighting effects offered new adornments.
With printing, the techniques of numerous arts were recorded in printed books. By the end of the sixteenth century, books were available on the employments, tools, and "secrets" of trades as diverse as fishing, pyrotechnics, metallurgy, and architecture. Many were written by practicing artisans and mechanics. Some of these books amounted to little more than lists of recipes, while others eloquently discussed the relationship between art and nature, and insisted on the need for both theory and practice in the proper execution of crafts. These discussions offered an alternative discourse on these subjects to that available through elite education. Later promoters, apologists, and organizers of technological knowledge drew heavily on this vast literature.
ARCHITECTS AND HUMANISTS
Renaissance artists created some of the most impressive engineering feats of their day. Filippo Brunelleschi (1377–1446) awed his contemporaries with the construction of the enormous duomo atop the Florentine cathedral. The dome was constructed without centering or beams by joining eight great ribs above the cathedral. Even Brunelleschi's scaffolding and lifting machine designs were copied by other artists. The most developed mechanical knowledge available was no doubt cultivated by architects. This was particularly obvious in Italian cities, where architects and other artists were highly trained in practical mathematics, and constantly experimented, at least in sketches, with various combinations of machine elements. Leonardo da Vinci's (1452–1519) well-known breadth of interests—stretching from his designs of ingenious devices to sculpture to painting—was not uncommon. Francesco di Giorgio (1439–1502) also developed great expertise in the fields of engineering and hydraulics, along with his more decorative work. Architects directed the sometimes dramatic refashioning of major cities. Rome was largely rebuilt in the sixteenth century and Paris in the seventeenth. Architects also designed dams and waterways, fortifications, and stage machinery.
As works of architecture and engineering gained greater cultural capital as markers of status and power, scholars and patrons themselves often came to seek the knowledge of the architects and to share their literate culture. Leon Battista Alberti (1404–1472) was a humanist who carved out a new role for himself as the technical counselor to powerful men. His treatises detailing mathematical and conventional rules for painting, sculpture, and architecture became classics even in manuscript. Cooperation between elites and architects centered on military engineering and the study of ancient technical texts, works that promised the secrets of recreating the splendid world of the ancients. The duke of Urbino, Federigo Montefeltro (1422–1482), himself tried to aid Francesco di Giorgio in a translation of De architectura by the Roman architect Vitruvius. Alberti had given up making sense of this text, but the first editions came from practicing architects: Fra Giovanni Giocondo da Verona's (c. 1433–1515) Latin text of 1511, and Cesare Cesariano's vernacular edition in 1521. Other texts considered clues to ancient marvels of engineering were also routed to prominent architects and painters by their patrons. Texts of Archimedes, the hydraulics of Hero, and the mechanical collections of Pappus were books examined by scholars of both elite and artisanal status.
By the end of the sixteenth century, mathematicians such as Federico Commandino (1509–1575) and Guidobaldo del Monte (1545–1607) had developed their own elaboration of a classical rational mechanics. This work remained rooted in the world of the mechanic, but began to address a new sort of engineering professional that was just then beginning to emerge.
NATURAL MAGIC AND ALCHEMY
No easy category existed during the late Renaissance in which to place figures who performed technological feats. The Syracusan Archimedes (c. 287–212 B.C.E.), for example, was famous as the maker of a wooden bird that flew all by itself, and as the engineer whose special mirrors burned Roman ships in the harbor—both accomplishments that early modern engineers attempted to recreate well into the eighteenth century. In the language of Renaissance Neoplatonism, the term magus often served best to characterize such figures. The magus was figured as a wise man whose knowledge of occult (hidden) natural properties allowed him to unleash operative forces and create amazing effects. Scholars of magic—among the most learned of the age—developed a doxography that linked magical, philosophical, and religious figures in historical progressions: from the legendary Egyptian magus Hermes Trismegistus, to Moses, to Pythagoras, to Platonic and Aristotelian philosophers, to Ptolemy as a judicial astrologer, and thence to the Hellenistic mathematician and reputed engineer Archimedes.
Meanwhile engineers themselves, military engineering writers such as Konrad Kyeser (1366–1405) and Giovanni da Fontana (1395?–1455?), had cultivated a mixture of technology and magic. "Natural magic" pointed to the operative power inherent in technology and offered a framework outside that of Aristotelian causality. By the turn of the seventeenth century, discussions of technology often adopted the name "magic" as "the practical part of natural philosophy." Influential writers such as Tommaso Campanella (1568–1639) and Giambattista della Porta (1535?–1615) continued to configure technological work as natural magic. Della Porta in particular had himself experimented successfully with lenses and was a key member of the Accademia dei Lincei before Galileo, with his mathematical-philosophical approach to technology, gained center stage among the academicians. In England the connection remained intact through Robert Fludd (1574–1637), whose work explicitly drew together mechanical technologies and divinatory arts within a mystical Christian framework. The work of John Wilkins (1614–1672) is a late echo of the connection between mathematics, technology, and magic. His compendium of the most current work in rational and practical mechanics was entitled Mathematical Magick, but its "magick" was stripped of occult overtones, merely capturing the transformative power of technology.
Another tradition of natural magic ran from Hermes to alchemical thinkers such as the medieval Islamic alchemist Geber and the learned friar Roger Bacon (c. 1220–1292). Alchemy was a repository of knowledge for a variety of distillation and metallurgical techniques. Before a more rationalized nomenclature could be instituted, alchemical lore was often veiled in occult language and bizarre images. Alchemy enjoyed something of a vogue in the sixteenth and seventeenth centuries and occupied some of the finest minds of the age, including the twenty-year concentrated studies of Isaac Newton (1642–1727). Alchemy consisted of distillation and metallurgical techniques, and created seemingly new substances through the combination and heating of reagents. These practices were often conceived within a theory of metals and a religious-spiritual view of nature and human labor. Probably due to the shapes of mineral veins, metals were believed to grow inside the earth; over long periods of time all metal would mature into gold. Alchemy was the art and labor by which nature could be hastened and perfected. While alchemists did indeed believe it was possible to turn base metals into gold, the operations of alchemy also provided both consumable products and an observable, experimental analog to the processes of nature. Metallurgists utilized the literature and techniques of alchemy, and Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim, 1493–1541) developed a chemical medicine and alchemical view of nature that found numerous followers throughout the sixteenth and seventeenth centuries.
BACONIANS AND THE DIRECTION OF PROGRESS
Francis Bacon (1561–1626) spent much of his forced retirement from politics writing on a reform of knowledge that would account for and extend the success of technological traditions while avoiding the drawbacks of their current practices. His Novum Organum (1620; New organon) detailed both criticisms of the current state of knowledge and remedies. Bacon advocated the redirection of philosophy away from erudition and logical terminology, toward experience and the advancement of material wealth. Mechanics, mathematicians, physicians, alchemists, and magicians, Bacon noted, had hands-on knowledge of nature, "but all [have met with] faint success." Bacon had patience neither to wait for the happenstance of a lucky discovery or invention, nor to suffer the "fanciful philosophy" advanced by alchemists and others who presumed too much on a narrow base of technical knowledge. "Knowledge and human power are synonymous," he proclaimed. While he advocated a program of experimentation, he was decidedly more articulate about the descriptive collection of facts from the natural and technological worlds. For example, from a "history of trades" that would chart information from all manner of tradesmen, the philosopher would draw out axioms of principal import. The axioms could then be used to organize and further the trades.
Bacon's program, with the approach of the Puritan Revolution in the 1640s, appeared to some to offer the prospect of a "new Albion," an Edenic England created through technology in a great reform of religion, mind, and social organization. Samuel Hartlib (c. 1600–1662), for example, worked toward such a vision and was in fact central to the circle of men who later founded the Royal Society.
The Royal Society, founded on explicitly Baconian inspiration, at first tried to fulfill the role of collector of histories of trades. While this project was not successful, the society's activity often centered on the experiments staged by its curator of experiments. Information on mines, machines, and other technological news was assiduously collected along with accounts from physicians, mathematicians, and naturalists, and was printed in the Philosophical Transactions. Exhaustive histories of trades were finally realized at the end of the eighteenth century in France: both the overt Baconians Denis Diderot (1713–1784) and Jean Le Rond d'Alembert (1717–1783) and the more staid Académie des Sciences produced encyclopedias of arts and trades in the decades before the French Revolution.
TECHNOLOGIES FOR SCIENCE; SCIENCE FOR TECHNOLOGIES
While Bacon had fully recognized the mutual relationship between the reform of natural philosophy and the progress of the arts, he had paid relatively little attention to the technologies that were themselves transforming the practices of science. While mechanics, architects, and craftsmen had always used mathematical measuring instruments in their work, and these themselves underwent great refinement in the sixteenth century, the new scientific instruments of the seventeenth century—the telescope, microscope, air pump, and to a lesser degree thermometers and barometers—depended on technologies and offered possibilities on a whole new level. The telescope and the microscope extended human vision enormously and produced experiential evidence in debates such as that over the Copernican hypothesis. The air pump, as it was developed by Robert Boyle (1627–1691) and his mechanic-client, Robert Hooke (1635–1703), consisted of a ratchet and piston system that could evacuate a glass receiver one cylinder-volume at a time. This served as a stage of observation for an artificial environment of evacuated air and allowed Boyle to make claims concerning the nature of the tiniest units of matter. This was a sort of instrument that had never been used in natural philosophy before. Such instruments were difficult to get to work dependably, and often relied on the skills of a mechanic like Robert Hooke.
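The stepwise action of the air pump admits a simple quantitative illustration: if each stroke removes one cylinder-volume of air, the pressure remaining in the receiver falls geometrically with the number of strokes. The sketch below is a modern idealization with hypothetical volumes, not a description of Boyle's actual apparatus.

```python
# Illustrative model of an air pump that evacuates a receiver one
# cylinder-volume at a time: each stroke removes the air currently
# in the cylinder, so receiver pressure falls geometrically.
# The volumes and starting pressure are hypothetical.

def pressure_after_strokes(p0, receiver_vol, cylinder_vol, strokes):
    """Pressure left in the receiver after a given number of strokes."""
    ratio = receiver_vol / (receiver_vol + cylinder_vol)
    return p0 * ratio ** strokes

# A 10-litre receiver pumped with a 2-litre cylinder from 1 atmosphere:
for n in (0, 5, 10, 20):
    print(n, round(pressure_after_strokes(1.0, 10.0, 2.0, n), 4))
```

Each stroke leaves the same fraction of the previous pressure behind, which is why a perfect vacuum is approached but never reached.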
Meanwhile, both elite and practical mathematicians developed mathematical skills that were meant to aid the design of ever more complicated technical tasks. Vernacular editions of Euclid had been available since Niccolò Tartaglia's (1499–1557) Italian edition of 1543. Above all, these editions spread and popularized geometrical proportioning techniques. In the early seventeenth century, the Scottish nobleman John Napier (1550–1617) and the Swiss watchmaker Joost Bürgi (1552–1632) developed logarithms, which made trigonometrical computations much easier. Napier in particular drew explicit attention to the ways logarithms would ease tasks in military engineering and surveying. Napier also employed the decimal notation developed by the Dutch engineer and counselor to Maurice of Nassau (1567–1625), Simon Stevin (1548–1620); decimal notation eased work with fractions. Proportional compasses and calculating sectors also eased practical calculations. The foundations of algebraic analysis were meanwhile laid by Pierre de Fermat (1601–1665), and a century later the use of analysis became essential to the cadets of France's technical institutes and made possible a new style of engineering. Projective geometry, always to some extent a tool of architects and engineers, was highly developed and integrated with perspective by Gérard Desargues (1591–1661). Descriptive geometry was institutionalized in technical drawing, again at the French écoles, by Gaspard Monge (1746–1818).
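The labor-saving power of logarithms that Napier advertised rests on a single identity: the logarithm of a product is the sum of the logarithms of its factors, so a laborious multiplication can be replaced by an addition and a table lookup. A minimal modern sketch, using base-10 "common" logarithms rather than Napier's original construction:

```python
import math

# Logarithms turn multiplication into addition: add the logarithms
# of the factors, then take the antilogarithm of the sum.
# (Seventeenth-century computers did the two lookups in printed tables.)

def multiply_via_logs(x, y):
    return 10 ** (math.log10(x) + math.log10(y))

print(multiply_via_logs(356.0, 287.0))  # very close to 356 * 287 = 102172
```

The same trick reduces division to subtraction and root extraction to division, which is why logarithms so dramatically eased trigonometrical work in navigation and surveying.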
PROJECTORS, ARTIFICERS, AND THEIR PATRONS
In his fable of the ideal technological and moral society, the New Atlantis (1627), Francis Bacon had presented a kind of intellectual mirror opposite of mercantilist programs. In his imaginary Bensalem, technological secrets were constantly imported by explorers and developed by technicians; no technologies, however, would be exported to other nations. This speaks both to concerns about industrial espionage and to the difficulties caused by the undeveloped patent laws that afflicted every state in Europe. It also indicates some of the enthusiasm political and cultural leaders had for the wholesale collection of technical knowledge, and their reliance on mechanical workers to feed their interests.
European rulers had long tried to prohibit the export of technologies on which their economies depended. Venice, for example, forced glassmakers to swear they would not take their art outside the city's dominion. The importance of technological transfer through the migration of skilled persons is most forcefully demonstrated by the case of Lucca's silk-throwing machine, the filatoio. Anyone carrying knowledge of this machine outside the confines of the city was threatened with death. Meanwhile, a design of the machine had been publicly available for years in Vittorio Zonca's Novo Teatro di Machine et Edificii (1607). It was not until the eighteenth-century industrial spy John Lombe spent two years studying the machine in Italy that it could be reproduced and operated elsewhere.
Semi-itinerant mechanics often haunted baroque courts. Mechanicians such as Dutch-born Cornelis Drebbel (1572–1633) attracted attention in England (and for a short time in Prague) with perpetual motion machines, inventive skills for such devices as diving bells, and technical know-how for such major works as the draining of fens. As a projector in various German courts, the alchemist and mechanic Johann Joachim Becher (1635–1682) rose to something of a patron himself. He solicited secrets from a range of artificers, and probably used his alchemical skills to advertise his ideas for a new political economy based on trade and technology rather than agriculture. Numerous enthusiasts and scientific gentlemen cultivated relationships with their own artificers to construct machines.
CLOCKS AND WATCHES
The first town clocks were constructed in the Middle Ages, usually as a way of letting workmen know when shifts should change in the textile trades. While watchmakers themselves continually refined methods of gear-cutting throughout the period, scientists dramatically improved the clock in the mid-seventeenth century. Clocks became more accurate and more convenient and promised a solution to the problem of determining longitude at sea—one of the most long-standing obstacles to navigation—as well as offering advantages to positional astronomy. If one could accurately keep track of both the time of the home port and local time, longitude could easily be calculated. In 1656, the Dutch scientist Christiaan Huygens (1629–1695) designed a clock using a pendulum oscillator with a tautochronic, one-second period. The pendulum clock, however, proved inappropriate for the pitching deck of a ship. In the mid-1660s, Huygens turned to oscillators formed of a spiral hair spring—just as Robert Hooke was also investigating the use of a hair spring. This gave rise to a bitter, ultimately unresolved controversy over patents. However, neither man's watch proved accurate enough to serve as a marine chronometer. The government prize for the solution of the longitude problem, £20,000, was finally awarded in 1765 after the Yorkshire watchmaker John Harrison (1693–1776) improved accuracy through advances in workmanship rather than design.
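The longitude calculation itself is simple arithmetic: the earth turns 360 degrees in 24 hours, or 15 degrees per hour, so the difference between home-port time (kept by the chronometer) and local time (found from the sun) converts directly into degrees of longitude. A sketch with hypothetical times:

```python
# The earth rotates 360 degrees in 24 hours: 15 degrees per hour.
# The gap between chronometer (home-port) time and local solar time
# therefore gives longitude relative to the home port.

def longitude_from_times(home_time_h, local_time_h):
    """Degrees of longitude; positive values lie east of the home port."""
    return (local_time_h - home_time_h) * 15.0

# Local noon observed while the chronometer reads 14:00 home time:
print(longitude_from_times(14.0, 12.0))  # -30.0, i.e., 30 degrees west
```

The arithmetic was never the obstacle; the obstacle was a clock that could keep home-port time reliably through months of heat, damp, and a pitching deck, which is what Harrison finally supplied.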
AUTOMATONS AND POPULAR DEMONSTRATIONS
In the sixteenth and early seventeenth centuries, mechanical devices for delight had largely been cultivated in personal collections and gardens. Self-moving statues, ingenious fountains, and hydraulic devices designed by architects like Salomon de Caus (1576–1626) delighted visitors. Mechanical marvels were often placed next to exotic naturalia and antiquities. In the eighteenth century, automatons, such as those designed by Jacques de Vaucanson (1709–1782), were exhibited in shows and fairs.
More serious forms of enlightened infotainment were provided by popularizers of Isaac Newton's work. John Theophilus Desaguliers (1683–1744), for example, offered ten-week courses at a cost of two guineas a head. Demonstrators of "Newtonian" devices showed their wares from town to town. The abbé Jean-Antoine Nollet (1700–1770) made presentations of the new physics, and was a favorite in French salons. These popular mechanical demonstrations and lectures were probably one of the best venues in which to learn about applied mechanics. The automatons and demonstration devices, however, belonged to a larger cultural context in which machinery powered more tasks, and automation of labor was becoming more prevalent.
MILLS: AGE OF WATER AND WOOD
If the nineteenth century was predominantly an age of coal and iron, the preceding centuries were largely characterized by water and wood. The vertical water wheel and the windmill were both imported to the Latin West in the Middle Ages. By 1450, these sources of power were already applied to brewing, hemp production, fulling, ore stamping, tanning, sawmills, blast furnaces, paper production, and mine pumping. Their use and development continued throughout the early modern period. The principle of converting the rotary motion of a wheel into other forms of motion was also applied where the power came from human or animal labor. Concern for milling and water-lifting machines is attested by the printed machine books of Agostino Ramelli (1531–c. 1600), Jacques Besson (1540–1576), and Vittorio Zonca (born c. 1580). These books present intricate connections of wheels, gears, cams, and winches. Concurrent with the pressing need for machines to power manufactories was the need for machines that could pump or raise water. The latter were everywhere employed to supply drinking water, evacuate deep mines, drain swamps, and build canals.
The Netherlands, not surprisingly, led Europe in these technologies, both because of its superabundance of water and because of the need to drain the land and dredge ports. Because prevailing westerlies blow dependably over its lands, the Dutch also perfected windmills. The sails could be turned to face the wind, either because they were mounted on a rotating cap or because the body of the mill could be rotated on wheels. The wipmolen drove bucket chains that lifted water from the soil and dumped it into the canals, and were a key part of land reclamation projects. Dutch experts in water reclamation and water wheel machinery were in high demand throughout the seventeenth century.
The main drawback of these early modern machines was that they were made of wood. By the late sixteenth century, Europe had been largely deforested, and wood became increasingly expensive. Wood also permitted only limited precision in tooling, broke easily, and required constant maintenance.
Textiles were among the first products to be produced on a large scale through division of labor and mechanization. Important textile manufactories were well established in Italy and the Netherlands by the thirteenth century. In the sixteenth and seventeenth centuries, modest mechanized advances in ribbon weaving were introduced. In the 1730s, John Kay's (1704–1764) "flying shuttle" made weaving much faster and allowed broader cloth. This invention was soon followed by methods that mechanized jacquard weaving and repetitive pattern weaving.
Increased speed in weaving put heavier demands on the spinning of yarns. Richard Arkwright (1732–1792) became one of the richest men in late-eighteenth-century England by mechanizing the spinning of newly exploitable cotton imports. Arkwright's "water frame" managed to imitate the touch of spinning and drawing out yarns by hand. Cotton fibers were drawn through three pairs of rollers, each pair rotating faster than the one before it. Arkwright opened a spinning mill in 1769, powering his invention with a single horse, but established a water-powered mill only two years later. He continued to mechanize the industry with carding machines and a drawing frame.
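The drawing action of roller spinning can be put numerically: the overall stretch (the "draft") imparted to the fibers is the product of the speed ratios of the successive roller pairs. The figures below are hypothetical, chosen only to illustrate the principle, not Arkwright's actual gearing.

```python
# Sketch of roller drafting: each successive pair of rollers runs
# faster than the last, stretching the cotton passing between them.
# Total draft is the product of the stage-by-stage speed ratios.

def total_draft(speed_ratios):
    """Overall stretch produced by a chain of roller pairs."""
    draft = 1.0
    for r in speed_ratios:
        draft *= r
    return draft

# Three stages, each running twice as fast as the one before:
print(total_draft([2.0, 2.0, 2.0]))  # 8.0: one inch of roving becomes eight
```

Multiplying modest ratios stage by stage is what let a machine reproduce, continuously and by water power, the attenuation a hand spinner produced by touch.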
MINING, METALLURGY, AND THE STEAM ENGINE
Driven by a demand for more intensive mining, and often by entrepreneurial investment, sixteenth-century mines employed a vast array of machines and techniques, including the first form of the railroad. These were detailed in the elaborately illustrated volume De Re Metallica by the humanist Georgius Agricola (1494–1555). Deep ore deposits required pumps to evacuate water; the ore had to be raised; it was then roasted to make crushing easier. By the sixteenth century, most crushing was done by power-driven stamping mills. Ores were then fired in a blast furnace to extract the metals, and finally refined through a variety of metallurgical techniques, depending on the metals present.
The blast furnace was introduced by the beginning of the sixteenth century, and adopted across Europe. It was larger than its predecessor and required mechanical power to work the large bellows that provided the "blast" of hot air across the smelting metals. The furnace also had to be kept going around the clock. These alterations meant that blast furnaces needed to be built where there were plentiful supplies of water to run the water wheel, timber to make charcoal and fuel the furnace, plentiful labor, and exploitable ores. The blast furnace also made possible a new product: cast iron. While cast iron, particularly English cast iron, had a use in the making of ordnance, most cast iron was formed into wrought iron in a secondary process.
The iron trade was freed from the expense of charcoal fuel and the necessity and drawbacks of water-driven wheels in the mid-eighteenth century by the innovations of Henry Cort (1740–1800) and James Watt (1736–1819). Henry Cort developed a new style of furnace that made possible the use of coal in smelting iron by designing a way in which the sulfurous coke was kept out of direct contact with the metal. Watt improved the Newcomen steam engine used in mine drainage so that it was far more powerful. Thomas Newcomen's (1663–1729) steam engine was itself a variation of a philosophical curiosity invented by the mechanic Denis Papin (1647–1712?). The principle of both was to raise a piston in a cylinder by forcing it up with steam, then allowing condensation to create a vacuum so that atmospheric pressure would push the piston down. Watt added a separate condenser and a steam jacket around the cylinder, thus creating a far more rapid and powerful engine. Watt's steam engine was later adapted for use in many other manufactories, notably in textile and brass production, and made possible many new technologies. By the end of the eighteenth century, an average furnace consumed at least 2,000 tons of coke, processed 3,000 to 4,000 tons of iron ore, and produced 1,000 tons of iron per year.
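The working principle of the atmospheric engine can be put in figures: once the steam in the cylinder is condensed, the net downward force on the piston is the pressure difference between the atmosphere and the partial vacuum, multiplied by the piston's area. The dimensions below are hypothetical, chosen only to suggest the scale of the forces involved, not the specifications of any historical engine.

```python
import math

# Working stroke of an atmospheric (Newcomen-type) engine: condensing
# the steam leaves a partial vacuum, and atmospheric pressure drives
# the piston down. Net force = pressure difference x piston area.

ATMOSPHERE_PA = 101_325.0  # standard atmospheric pressure in pascals

def working_force(diameter_m, cylinder_pressure_pa):
    """Net downward force on the piston, in newtons."""
    area = math.pi * (diameter_m / 2) ** 2
    return (ATMOSPHERE_PA - cylinder_pressure_pa) * area

# A half-metre piston over a cylinder condensed to a tenth of an atmosphere:
print(working_force(0.5, 0.1 * ATMOSPHERE_PA))
```

Even this modest hypothetical cylinder yields a force of roughly eighteen kilonewtons, which is why the imperfect vacuum of early engines was still powerful enough to work mine pumps; Watt's separate condenser improved the cycle by keeping the cylinder hot between strokes rather than by changing this basic principle.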
ENGINEERS, ENTREPRENEURS, AND ENLIGHTENMENT
As a generalization, one might say that the Renaissance gave rise to the great Italian architect-engineers; the baroque hailed the itinerant skilled mechanic from German and Dutch lands; and the Enlightenment saw the development of the highly trained French engineer and fostered the activities of the English entrepreneurial engineer.
By the end of the seventeenth century, Edmond Halley (1656–1742), otherwise beholden to various patronage networks and government service, had set up his own ship-salvaging firm based on his innovative diving bell and diving suit. James Watt was one of the most successful (in part due to his association with Matthew Boulton [1728–1809]) and prominent of a number of engineers and inventors whose businesses flourished in eighteenth-century England. His association with the Birmingham "Lunar Society" is also instructive: the group included Watt, Boulton, the ceramics manufacturer Josiah Wedgwood (1730–1795), the botanist Erasmus Darwin (1731–1802), and the chemists James Keir (1735–1820) and Joseph Priestley (1733–1804), among others. These men saw the power of the connection between science and industry, and its possibilities for the improvement of society. They themselves had become engineers, curators of craftsmen, and scientists in eighteenth-century England's free mix of popular science and artisanal mechanics; however, they advocated a more rigorous scientific education for following generations. Whatever the workers in the mills, mines, and manufactories might have thought, members of the Lunar Society saw the values and products of science and technology as those most likely to lead to the moral, intellectual, and material liberation of humanity. This ideology they shared with many French Revolutionaries. Indeed, their forces were scattered in 1791 when a mob sacked the houses of Priestley and others for their support of the French Revolution.
See also Academies, Learned; Alchemy; Architecture; Artisans; Cartography and Geography; Ceramics, Pottery, and Porcelain; Chronometer; Clocks and Watches; Communication, Scientific; Design; Education; Engineering; Enlightenment; Firearms; Guilds; Industrial Revolution; Industry; Libraries; Magic; Medicine; Monopoly; Nature; Optics; Physics; Printing and Publishing; Scientific Instruments; Scientific Method; Scientific Revolution; Shipbuilding and Navigation; Textile Industry.
Braudel, Fernand. The Structures of Everyday Life: The Limits of the Possible. Translated and revised by Siân Reynolds. New York, 1981.
Bredekamp, Horst. The Lure of Antiquity and the Cult of the Machine: The Kunstkammer and the Evolution of Nature, Art, and Technology. Translated by Allison Brown. Princeton, 1995.
Cipolla, Carlo M. Before the Industrial Revolution: European Society and Economy, 1000–1700. 3rd ed. Translated and revised by Christopher Woodall. London, 1993.
——. Guns, Sails, and Empires: Technological Innovation and the Early Phases of European Expansion, 1400–1700. New York, 1965.
Eamon, William. Science and the Secrets of Nature: Books of Secrets in Medieval and Early Modern Culture. Princeton, 1994.
Goodman, David C. Power and Penury: Government, Technology, and Science in Philip II's Spain. Cambridge, U.K., 1988.
Heller, Henry. Labour, Science, and Technology in France, 1500–1620. Cambridge, U.K., 1996.
Jacob, Margaret C. Scientific Culture and the Making of the Industrial West. New York and Oxford, 1997.
Jardine, Lisa. Ingenious Pursuits: Building the Scientific Revolution. New York and London, 1999.
Long, Pamela O. Openness, Secrecy, Authorship: Technical Arts and the Culture of Knowledge from Antiquity to the Renaissance. Baltimore, 2001.
McCray, Patrick W. Glassmaking in Renaissance Venice: The Fragile Craft. Aldershot, U.K., and Brookfield, Vt., 1999.
McNeil, Ian, ed. An Encyclopaedia of the History of Technology. London and New York, 1996.
Rossi, Paolo. Philosophy, Technology, and the Arts in the Early Modern Era. Translated by Salvator Attanasio. Edited by Benjamin Nelson. New York, 1970.
Schaffer, Simon. "Machine Philosophy: Demonstration Devices in Georgian Mechanics." Osiris 2nd ser., 9 (1995): 157–182.
——. "Natural Philosophy and Public Spectacle in the Eighteenth Century." History of Science 21 (1983): 1–43.
Singer, Charles, E. J. Holmyard, and A. R. Hall, eds. A History of Technology. Vol. 2, From the Renaissance to the Industrial Revolution, c. 1500–c. 1750. Oxford, 1957.
Smith, Pamela. The Business of Alchemy: Science and Culture in the Holy Roman Empire. Princeton, 1994.
Stewart, Larry. "A Meaning for Machines: Modernity, Utility, and the Eighteenth-Century British Public." Journal of Modern History 70, no. 2 (1998): 259–294.
——. The Rise of Public Science: Rhetoric, Technology, and Natural Philosophy in Newtonian Britain, 1660–1750. Cambridge, U.K., 1992.
The relationship between technology and social history raises two kinds of considerations. The initial section of this essay takes a conceptual approach, examining the nature of technology itself. Is technology a separate force, as is often assumed by historians of technology, or does it interact with society in more complex ways, such that social forces may help explain technological developments and vice versa? The second category of considerations involves the actual development of technology as part of European social history, which is taken up in the second section of this essay. In terms of chronology, the conventional division between technology before and after the industrial revolution forms the main organizing principle.
What are the relationships between processes of technological change and the social context? Until very recently, technological change was usually viewed primarily in terms of hardware: impressive, ingenious, and increasingly sophisticated engineering solutions to the problems posed by production tasks. For a long time these solutions tended to be seen as rather autonomous in character, so that they could be understood without much in the way of social context. For example, the five-volume History of Technology edited by Charles Singer and collaborators and published between 1954 and 1958 follows technology from the earliest stages of human evolution to the twentieth century. Its 4,000 pages cover technical developments (in terms of hardware and specific operative practices) in metalworking, textiles, pottery, and other areas in considerable detail but contain only one brief article, by Gordon Childe, on technology in terms of social practice.
The role of social factors in the history of technological change gives rise to a range of explanatory problems at different levels. There is, for example, a quite abstract level at which the general propensity of an economic system for such change is explored; this is the level that David Landes (1998) has explored. Then there are questions about why particular sectors of the economy exhibit a propensity for technical change; here one would have to consider questions of how technological opportunity emerges as well as questions of industrial structure; the development of markets, patterns, and levels of demand; the structure and capacity of producer goods industries; state economic policy; and so on. Finally there are questions about why specific technologies develop and what factors shape their diffusion. All of these levels have been researched, in one way or another, from a social perspective. But it is probably this last which has formed the most important focus of recent research. Social factors have been to the forefront in the analysis of how technologies originate and diffuse. As a recent study covering aircraft, fluorescent lights, steel, atomic energy, and electricity production and distribution claimed:
Technologies do not, we suggest, evolve under the impetus of some necessary inner technological or scientific logic. They are not possessed of an inherent momentum. If they evolve or change, it is because they have been pressed into that shape. ... Technology does not spring, ab initio, from some disinterested fount of innovation. Rather it is born of the social, the economic, and the technical relations that are already in place. A product of the existing structure of opportunities and constraints, it extends, shapes or reproduces that structure in ways that are more or less unpredictable. (Bijker and Law, 1992, pp. 5, 11)
The more traditional and still to some extent dominant view is that technology is something that might have profound social effects but which has developed and spread on the basis of rather autonomous processes of artisan development or, in the modern era, scientific and engineering advance. This kind of determinism has in recent years been supplanted by approaches that seek to set technical or engineering processes against the background of the social environments in which they are generated and put to work. From this perspective, technology immediately begins to look more complicated, and we can begin to see ways in which the social environment shapes technological evolution, as much as it is shaped by it.
Modern perspectives begin by conceptualising technology in ways that move beyond the level of material technique. Of course technology does involve hardware (machines, tools, infrastructure) and technique (in the sense of routines of technical practice), but it also involves at least two other primary dimensions, namely knowledge and organisation, both of which are social phenomena. Technology involves, for example, the production and maintenance of knowledge, both in terms of formal scientific and technical disciplines and also as an equally important array of tacit knowledge. These human skills—sometimes codified, but equally often developed gradually by individuals and taking the form of acquired skills—are an integral part of all processes of production and work. Then there are the crucially important processes of organisation and management through which hardware and technique are set to work. On the one hand, these organisational and managerial issues involve decisions about how production processes are to be subdivided, operated, integrated, and supervised; this element of technological practice has a complex history of its own. The publication in 1974 of Harry Braverman's Labor and Monopoly Capital was a key event in this area. Braverman argued that the history of modern capitalist production is characterised by a consistent attempt to separate conceptual aspects of production (in terms of human skill and control) from the actual process of work; technological change in the modern era thus involves a persistent "degradation of work," and modern management is essentially a method for organising this. There is now a wide literature on the history of work organisation and its links to technological change and society. On the other hand, there are equally important managerial issues involved in integrating the technological aspects of production with the wider processes of commercial calculation, marketing, financial organisation, and so on, which firms must undertake.
Finally, these elements of knowledge, hardware, and organisation at the firm level occur within a much broader and extremely complex social framework of economic, political, and cultural relationships. This social environment both facilitates and constrains the development, use, and spread of technologies in many ways: for example, through cultural attitudes that affect levels and types of education or that place different valuations on technical or economic achievement.
Central to modern conceptual approaches, therefore, is the idea that the histories of technologies should be seen in their economic and social context and that the focus should extend beyond technical artefacts alone. The point here is that the evolution of technologies involves complex social processes of conflict, negotiation, compromise, and adaptation, and technological change cannot be understood in isolation from these social dimensions. In these approaches, society is not seen as adapting to a deterministic process of technological change; rather, it is social values and decisions that shape the path of technological development. It is a short step from this to the idea that differences in technological performance between societies have at least some of their roots in social structure and social forms, although how these differences operate is as yet far from clear. Nonetheless, technological developments have important impacts on the social world, on the environment, the way we work, and our general social interrelations. So understanding the evolution of technology in the long run is in part a process of understanding the history of the wider society in which technology is embedded. Socio-technical interplay has only recently emerged as a systematic theme in historical studies. While study of technical and social interaction has frequently been found in historical work, there has also often been a strain of technological determinism, which has raised considerable problems in understanding technological dynamics and their relation to the social context.
Society and technology in the very long run. The link between human society and technology goes back a long way. The evolution of human societies and even the dominance of Homo sapiens as a species are intimately joined with the evolution of technology. Early hominid fossil records, for example, are usually found in close proximity to remains of stone implements, and the extension of human society over the earth's surface seems to be founded on mastery of a number of apparently simple (but arguably rather complex) technologies: stone weapons, the management of fire, and the construction of shelter, for example. These technologies emerged in the distant past and characterised the paleolithic and neolithic periods, in which humans evolved complex understandings of animal behaviour, pyrotechnology, weapons manufacture, medical practice, materials, and so on. It has been argued that even these distant technologies can be analysed in terms of evolutionary sequences; the archeological record of such tools exhibits considerable variation, which led George Basalla to argue that
The modern technological world in all its complexity is merely the latest manifestation of a continuum that extends back to the dawn of humankind, and to the first shaped artefacts. Stone implements may not offer a crucial test for the evolutionary thesis, but they provide the best illustration of continuity operating over an extended period of time. (Basalla, 1988, pp. 30–31)
From the neolithic period (from ca. 5000 B.C.) this very slow evolution developed into a number of very profound technological revolutions, of which probably five are especially significant, apart from those mentioned above: the domestication of animals, cultivation of food and "industrial" plants (such as plants used for vessels, construction materials, fibres, and so on), the development of pottery, the development of textiles, and the evolution of metallurgy.
The evidence for the emergence and use of these technologies is primarily archeological, but over this period we have the first sustained phase of what can reasonably be called "radical" change. H. S. Harrison remarks that
The centuries following the development of the initial features of Neolithic culture, during which the hunter and gatherer first became a farmer and stock breeder, were the most significant in the history of human progress. Steps were taken then that were essential to the building of civilizations upon which later cultural revolutions depended.... the evidence indicates that the ferment leading to the development of the new culture was in progress before 5000 B.C. Centuries, and not years only, were consumed in the processes which led to the cultivation of cereals and the domestication of hoofed animals. New opportunities and stimuli emerged that led into other fields of discovery and invention. (Harrison, 1958, p. 79)
Harrison points to three further key features of these technological revolutions, which are found persistently in the historical literature and are relevant also in understanding modern large-scale technological change. First, the time periods involved in these shifts are long—the development of radically new technologies is slow, and therefore for long periods new techniques (such as metal implements) co-exist with the old (such as implements of wood and stone). Second, technical advance has an evolutionary character with new developments opening up further opportunities and thus gradually speeding up the overall process of change. Third, there is a close relationship between large-scale technological change and the social context. The emergence of new technological regimes interacted in significant ways with technical divisions of labour, productivity, and patterns of exchange. In particular, historians have emphasized the fact that increasing productivity raises the question of the distribution of the gains from growth; this is central to questions of the emergence of hierarchy, order, and power in human society. In the very long run, shifts in technological regime cannot therefore be separated from the evolution of social forms as such.
Early social conflict and technological change: the case of the water mill. With respect to modern and premodern eras, it has long been recognised by historians that the diffusion of major technologies is often closely linked to social factors such as patterns of ownership, economic organisation, and income distribution. A classic analysis of such factors was developed by Marc Bloch in his study of the diffusion of water-powered mills in England. The grinding of corn in England, as in all medieval societies, was an activity of key economic significance; the technological alternatives were handmills, which operated on a very small scale with human muscle power, and water mills, which operated with considerably greater speed and efficiency. Yet water mills diffused very slowly as a technique for corn grinding in the period after the eleventh century. The reason for this lies not in the technique itself but in the way the technique was integrated with particular patterns of ownership and social control. After the Norman Conquest of England control of rivers and streams became part of an attempt to impose a new social system based on manorial rights through which landowners claimed income and services from other social classes:
Manorial rights were not an institution native to England. The Norman conquerors had imported them from the continent as one of the principal elements in the manorial system which after the almost total dispossession of the Saxon aristocracy they methodically established. (Bloch, 1985, p. 75)
The watermill was in effect monopolised by the seigneurial class and used as a method of revenue extraction. As part of this process, handmills were proscribed, with a wide variety of attempts to eliminate their existence and use, often by force. This attempt to facilitate use of the water technology by direct suppression of the competing technology failed in the long run, and the consequence was a very slow spread of the apparently superior technology. Bloch's key point in analyzing this process was the deep interconnection between social power, embedded interests, and the processes of use and diffusion of a technology. The fates of the competing technologies were therefore shaped by the fact that different social classes championed them for different economic ends, and the diffusion of the technologies depended on the outcome of sustained social struggle.
Comparative technological development across societies. Social factors have also been invoked to address the major historical problem of differences in the rates and direction of technological change across societies. There can be no doubt that many societies are capable of sustained and ingenious invention. Joseph Needham's magisterial Science and Civilisation in China showed beyond any doubt that China pioneered a wide range of technical advances; similar points can be made with respect to the Arab world in such key areas as written texts, mathematics, and so on. Yet, as Joel Mokyr has remarked, "The greatest enigma in the history of technology is the failure of China to sustain its technological superiority." Mokyr surveys a plethora of explanations for this but ultimately supports the view that a constraining social order was the core of the problem:
The difference between China and Europe was that in Europe the power of any social group to sabotage an innovation it deemed detrimental to its interests was far smaller. First, in Europe technological change was essentially a matter of private initiative; the role of the rulers was secondary and passive. Few significant contributions to non-military technology were initiated by the state in Europe before (or during) the Industrial Revolution. There was a market for ideas, and the government entered these markets as just another customer or, more rarely, a supplier. Second, whenever a European government chose to take an actively hostile attitude towards innovation and the nonconformism that bred it, it had to face the consequences.... the possibilities of migration in Europe allowed creative and original thinkers to find a haven if their place of birth was insufficiently tolerant, so that in the long run, reactionary societies lost out in the competition for wealth and power. (Mokyr, 1990, p. 233)
Although serious histories of technology have been written around the centrality of social forces in technological evolution for many years now, it would be a mistake to think that technological determinism is dead. It is common for writers and analysts (with the notable exception of James R. Beniger) to speak as though the revolution in information and communications technologies is autonomous and is reshaping society, but it is hard to doubt that this area too will come to be seen in the kind of context outlined above.
MAJOR DEVELOPMENTS IN TECHNOLOGY
Medieval and Renaissance technology. Most approaches to the development of technology in European culture stress the inventiveness of medieval and Renaissance Europe, combined with relatively slow or limited diffusion and use of new technologies. Historians such as Bloch, Lynn White, and Bertrand Gille have shown the medieval development or adoption of a wide range of technologies, such as new forms of plow and harness in agriculture, the open field system, moveable type, and powered machinery. In a recent overview, Frances and Joseph Gies showed the importance of complex infrastructural developments, such as bridges, cathedrals, and fortifications, on the one hand, and, on the other, of commercial innovations such as milling, textiles, glass, double-entry bookkeeping, and general accounting techniques. But it really cannot be claimed that these technologies came into widespread use. Mokyr makes a similar point with respect to Renaissance technologies. Clearly we should be cautious about using catchall terms such as the "Renaissance" to describe such a wide and differentiated period, but however we label it the period 1500–1750 generated a wide range of new technical developments in agriculture, mining pumps, precision instruments, tools, and other technologies. But the period is at least as interesting in terms of what did not happen, namely the widespread application of these technologies in a context of technical and productivity advancement. This is primarily a matter of the social and institutional context. Europe was only in the early stages of evolving the social framework which would sharply stimulate not only the development of technologies but their widespread application.
Still, the early modern period did see steady technological evolution in major branches of the European economy. It was in this period that western Europe gradually shifted from being a borrower of Asian technologies such as explosive powder, the compass, and printing, to being a technological leader. Gradual changes in mining and metallurgy boosted European technology by 1600. Adaptations in the printing press, with the use of movable type, propelled Europe to a clear advantage in printing even earlier. By 1700 new technologies in many branches of textiles made Europe a world leader in that area.
The decades from the late seventeenth century to the advent of James Watt's steam engine (1765) saw an accelerating pace of technological change spurred not only by Europe's lead in world trade, but also by growing artisanal freedom from guild restrictions in England and Scotland and by some spillover from the scientific revolution. Social and cultural causes, in other words, explain technological change along with world economic position, while the technological changes in turn fed further social shifts. For the first time since the Middle Ages agricultural technology received attention (at the same time that Europeans were introduced to New World crops like the potato). New methods of drainage expanded available land in places like Holland, while the seed drill and even wider use of the scythe instead of the sickle for harvesting led to modest increases in productivity. The other main sector in which there was significant technological advance was domestic manufacturing, where new techniques such as the flying shuttle for weaving (1733), while still relying on manual or foot power, partially automated processes and so increased productivity. These developments soon proved compatible with water or steam power, combining to generate the technological basis for the industrial revolution proper. In the interim, new technologies fed the rapid commercial and manufacturing expansion of rural and urban areas in western Europe and fostered other changes such as the growth of consumerism.
Industrialization and the new technological era. Many of the issues involved in the interaction between society and technology become critical in the modern period, characterized as it is by incessant technological change and continuous productivity growth. What is often referred to as the industrial revolution began in England in the late eighteenth century and is usually and rightly regarded as a technological watershed, yet its interpretation gives rise to major problems of technological determinism.
Influential explanatory accounts treat the deployment of new techniques as the primary agent of economic advance during the industrial revolution. The strongest version of this argument is built around the steam engine:
If we were to try to single out the crucial inventions which made the industrial revolution possible and ensured a continuous process of industrialization and technical change, and hence sustained economic growth, it seems that the choice would fall on the steam engine on one hand, and on the other Cort's puddling process which made a cheap and acceptable British malleable iron. (Deane, 1965, p. 130)
In effect, the rise of the Watt steam engine has long been treated in British historiography as a decisive event in industrialization. The heroic approach began with the first systematic work on the industrial revolution, Lectures on the Industrial Revolution of the Eighteenth Century, by Arnold Toynbee (1852–1883), which focused on the Watt steam engine and the "four great inventions" which revolutionized the cotton textile industry—the spinning jenny (1770), the waterframe (1769), Crompton's spinning mule (1779), and the automatic mule (1825) of Richard Roberts. Toynbee took an essentially determinist view of technology; for example, in seeking to explain the rise of urban industrialization and the decline of the outwork system, he suggested that the emergence of the factory was "the consequence of the mechanical discoveries of the time," and indeed that the steam engine was the basic permissive factor in economic liberalisation. Toynbee had a major impact on subsequent economic history. His technological emphases were repeated in Paul Mantoux's classic Industrial Revolution in the Eighteenth Century and in a wide range of later works up to and including Landes's Unbound Prometheus, which remains the major work on technological development in Western Europe. Mantoux focused the second part of his work, titled "Inventions and Factories," on exactly the same sequence of textile inventions to which Toynbee drew attention, plus Henry Cort's iron process (1783–1784) and the Watt engine. Landes did likewise, adding a discussion of power tools and chemicals. It is only in recent years that a counteremphasis has emerged in which small-scale innovation has been placed in the forefront of analysis.
Donald McCloskey, for example, emphasized that by 1860 only about 30 percent of British employment was in "activities that had been radically transformed in technique since 1780" and that innovations "came more like a gentle (though unprecedented) rain, gathering here and there in puddles. By 1860 the ground was wet, but by no means soaked, even at the wetter spots. Looms run by hand and factories run by water survived in the cotton textile industry in 1860." G. N. von Tunzelmann (1981) argued that "the usual stress on a handful of dramatic breakthroughs is seriously open to question," and that what mattered was the variety and pervasiveness of innovation.
The heroic account also runs into serious problems of chronology: in the words of G. N. von Tunzelmann, "if the Industrial Revolution was to be dated from around 1760, as Toynbee believed, then the Watt engine can hardly have triggered off industrialization, since it was not being marketed commercially until the mid-1770s." Even where there is a clear temporal correlation between expanded output and technical change, as in cotton and in the period 1760–1800, the causal relations are not at all obvious. Others have pointed out that the large factory was uncharacteristic in the eighteenth century; that historians of industrialization have seriously neglected agriculture, "the dominant sphere of the economy at this time, and also the most intensively capitalist of any sector," as Keith Tribe has called it; that hand techniques persisted in sector after sector until well into the nineteenth century; and that it is therefore, according to Raphael Samuel, "not possible to equate the new mode of production with the factory system." All of these considerations suggest a need for a closer look at the social aspects of technological change during the industrial revolution.
Social determinants of innovation in the industrial revolution. Although economic historians have, on the whole, a much more complex understanding of the industrialization process than economists, they have nonetheless followed economists in focusing on aspects of the economic environment (entry conditions, for example, or the structure of factor prices), or the impact of technological change on, for example, productivity growth, rather than on the sources and character of technological change as such. The approach taken by much of the literature on the social dimensions of industrialization has been similar in that it focused on the impacts of technology but not on the dynamics of innovation itself. This was probably because of the long-lasting influence of the first systematic examinations of industrialization, the Parliamentary Select Committee hearings that began in the early nineteenth century, and the substantial literature on industrialization to which they gave rise. Within this literature the emphasis was on working conditions, health effects, mortality, and other impacts on the new working class. This type of approach was followed through in the classic sociological study of industrialization, Neil J. Smelser's Social Change in the Industrial Revolution (1959), and then in modern social history.
The approaches of social and economic historians have said little about the technologies themselves. So although technological change is treated as a major factor in early industrialization, it is rarely itself explained in any systematic way. In some cases this occurs because of an explicit or implicit technological determinism, as noted above, which sees technology as an autonomous explanatory force. It is quite common in the literature to find arguments to the effect that the transition to the factory, the rise of new forms of enterprise, or the development of cost accounting, for example, are responses to technological change. It is rare, on the other hand, to find detailed or systematic treatments of the evolution of specific technologies.
Indeed, with the exception of the literature on steam power, we have no systematic histories of the core technologies of early industrialization. Instead what we have had until very recently is Hamlet without the Prince: an economic historiography written largely around the impact of new technologies, but with little analysis of the processes that produce specific areas of technological development, or that determine why some technologies succeed and some fail.
Where we have had attempts to explain the development of technological change in the industrial revolution, the explanations have emphasized the new social context of commercial calculation. Landes, for example, in Unbound Prometheus, writes of technical change in European industrialization as an effect of a conjunction of Western "rationality" (by which is meant means-end calculation) and a "Faustian spirit of mastery." Samuel Lilley, on the other hand, emphasised the causal effectivity of the control, decision-making capacity, and incentives to innovate that characterize the capitalist entrepreneur:
The capitalist entrepreneur is aware—to a degree that no previous exploiter is aware—of how much he stands to gain from this or that technical change. He probably also has enough technological knowledge to judge the practicability of an invention, perhaps even to invent for himself. And the cold steel of competition reinforces this awareness and eliminates those who do not possess it. Hence derives the extreme sensitivity of response to technological opportunity that eighteenth century entrepreneurs repeatedly exhibited. (Lilley, 1978, pp. 219–220)
It should be emphasized that these aspects of the new technological environment are essentially social: they rest on new powers of ownership and control in production. However, we can go beyond these general factors into accounts of the determinants of specific lines of technical change. Modern analysis suggests that the technological change process is not general but focused, and that this is one of the primary explanatory problems which technological advance presents. Against this background the history of technological change is in fact one of advance in quite specific directions, often concentrated not just on particular sectors of the economy but on particular processes within sectors subject to change. In a word, there appear to be priorities. The theoretical problem here has been most succinctly outlined by Nathan Rosenberg:
In the realm of pure theory, a decision maker bent on maximising profits under competitive conditions will pursue any possibility for reducing costs.... What forces, then, determine the directions in which a firm actually goes in exploring for new techniques? Since it cannot explore all directions, what are the factors which induce it to strike out in a particular direction? Better yet, are there any factors at work which compel it to look in some directions rather than others? (Rosenberg, 1977, pp. 110–111)
If the explanation of technological change should be understood in terms of explaining the direction of technological change, then we should seek to explain why technological advance has specific trajectories. This is in large part a matter of looking at the social and technical problems which the innovator seeks to solve. Rosenberg has proposed three such "problem areas": technological complementarities, in which imbalances between technical processes induce correcting innovations; supply disruptions of various kinds, leading to innovations to provide substitute products and processes; and labour conflict, in which strikes or plant-level struggles generate "a search for labour-saving machines."
The latter issue was particularly important during the industrial revolution; it gave rise to Marx's famous remark that "it would be possible to write a whole history of the inventions made since 1830 for the sole purpose of providing capital with weapons against working class revolt." This claim has in fact been researched in terms of the sources of innovation during industrialization, and a number of confirming instances have been found. Kristine Bruland (1982) described three important technologies deriving from an attempt to "innovate around" labor conflicts, showing that a number of key innovations in textiles (including the first fully automatic machine in history) could be ascribed to the desire of entrepreneurs and engineers to automate their way around persistent conflicts with powerful shop-floor operatives. Conventional interpretations of industrial technology, in other words, do not deal adequately with the pace and extent of adoption of new technologies or the nature of social and cultural, rather than "great inventor," causation. More recent interpretations have revealed the role of social forces in the construction of "heroic inventors," as in Christine MacLeod's study of Watt and the steam engine.
By the mid-nineteenth century the pace and extent of new technologies unquestionably accelerated. Railroads and steamships transformed transportation from the 1820s onward, and the telegraph began to do the same for communication. Metallurgy was revolutionized by the substitution of coal for charcoal and the invention of the Bessemer process (1850s) for making steel. Printing was automated and larger printing presses were introduced. By the 1870s, use of electrical and gasoline motors anchored the set of new technologies sometimes referred to as the second industrial revolution.
The basis for invention increasingly shifted from individual tinkerers, usually of artisanal background, to organized, collective research in large companies, government agencies, and universities. German firms pioneered the formal research and development approach. The United States became a significant innovator where previously it had borrowed; its contributions included the introduction of interchangeable parts, which speeded the manufacture of weaponry and machinery, and the expansion of looms and other equipment later in the nineteenth century. The second industrial revolution also involved the application of new technology outside the factory, to agriculture (harvesters and other implements), crafts (loading equipment, mechanical saws, and the like), and office work (typewriters and cash registers). Even the home became the site of technological change with sewing machines and vacuum cleaners, among other conveniences.
The modern industrial era. The emphasis on social forms as a central explanatory element of technological change does not stop with the industrial revolution. Many researchers have pushed it into the modern technological epoch, a field of study which developed rapidly in the 1990s, especially focusing on analyses of technology which conceptualise technologies not as artefacts but as integrated systems, with supporting managerial or social arrangements. A particularly influential body of work has been that of Thomas P. Hughes, whose history of electrical power generation and distribution emphasizes that the development of this core technology of the "second industrial revolution" must be understood in terms of "systems, built by systems builders." His work encompasses the electrification of the United States, Britain, and Germany between the 1880s and 1930s. As Hughes shows, the evolution of electric power systems was different in each country, despite the common pool of knowledge to draw on. Reasons for these differences are found in the geographical, cultural, managerial, engineering, and entrepreneurial character of the regions involved. The "networks" which he studies refer not only to the technology but also to the institutions and actors involved. Such an approach, treating technologies as complex integrated systems of artefacts and social organization, has been carried out with regard to a wide range of technologies such as radio, jet engines, and railways.
Interest in the process of technological change crested again with the final decades of the twentieth century. New procedures of genetic engineering, computation, and robotics transformed the technological landscape in what some observers termed a third industrial—or postindustrial—technological revolution. Europe now participated in a literally international process of technological innovation, lagging in some areas (in computerization, behind the United States) but advancing rapidly in others, such as robotics. The full effects of this latest round of technological upheaval have yet to emerge, but the complex relationship between technological and social dynamics will surely remain a major topic for European social history in the future.
See also other articles in this section.
Basalla, George. The Evolution of Technology. Cambridge, 1988.
Beniger, James R. The Control Revolution: Technological and Economic Origins of the Information Society. Cambridge, Mass., 1986.
Bijker, Wiebe E., and John Law, eds. Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge, Mass., 1992.
Bijker, Wiebe E., Thomas P. Hughes, and Trevor J. Pinch, eds. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, Mass., 1987.
Bloch, Marc. Land and Work in Medieval Europe: Selected Papers. Translated by J. E. Anderson. Berkeley, Calif., 1967.
Bloch, Marc. "The Watermill and Feudal Authority." In The Social Shaping of Technology: How the Refrigerator Got Its Hum. Edited by Donald MacKenzie and Judy Wajcman. Philadelphia, 1985. Pages 75–78.
Braverman, Harry. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century. New York, 1974.
Bruland, Kristine. "Industrial Conflict as a Source of Technical Innovation: The Development of the Automatic Spinning Mule." In The Social Shaping of Technology: How the Refrigerator Got Its Hum. Edited by Donald MacKenzie and Judy Wajcman. Philadelphia, 1985. Pages 84–92.
Bruland, Kristine. "Industrial Conflict as a Source of Technical Innovation: Three Cases." Economy and Society 11, no. 2 (1982): 91–121.
Childe, V. Gordon. "Early Forms of Society." In A History of Technology. Vol. 1, From Early Times to the Fall of Ancient Empires. Edited by Charles Singer et al. Oxford, 1958. Pages 38–57.
Deane, Phyllis. The First Industrial Revolution. Cambridge, U.K., 1965.
Gille, Bertrand. "The Medieval Age of the West, Fifth Century to 1350." In A History of Technology and Invention: Progress Through the Ages. Vol. 1. The Origins of Civilization. Edited by Maurice Daumas. New York, 1969. Pages 422–572.
Gille, Bertrand. "The Fifteenth and Sixteenth Centuries in the Western World." In A History of Technology and Invention: Progress Through the Ages. Vol. 2. The First Stage of Mechanization, 1450–1725. Edited by Maurice Daumas. New York, 1969. Pages 16–148.
Harrison, H. S. "Discovery, Invention, and Diffusion." In A History of Technology. Vol. 1, From Early Times to the Fall of Ancient Empires. Edited by Charles Singer et al. Oxford, 1958. Pages 58–84.
Hughes, Thomas P. Networks of Power: Electrification in Western Society, 1880–1930. Baltimore, 1993.
Landes, David S. The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. London, 1974.
Landes, David S. The Wealth and Poverty of Nations: Why Some Are So Rich and Some So Poor. New York, 1999.
Lilley, Samuel. "Technological Progress and the Industrial Revolution, 1700–1914." In The Fontana Economic History of Europe. Vol. 3. Edited by Carlo M. Cipolla. London, 1978. Pages 187–254.
MacKenzie, Donald, and Judy Wajcman, eds. The Social Shaping of Technology: How the Refrigerator Got Its Hum. Philadelphia, 1985.
MacLeod, Christine. "James Watt, Heroic Invention, and the Idea of the Industrial Revolution." In Technological Revolutions in Europe: Historical Perspectives. Edited by Maxine Berg and Kristine Bruland. Cheltenham, U.K., and Northampton, Mass., 1998. Pages 96–116.
Mantoux, Paul. The Industrial Revolution in the Eighteenth Century: An Outline of the Beginnings of the Modern Factory System in England. London, 1961.
McCloskey, Donald. "The Industrial Revolution, 1780–1860: A Survey." In The Economic History of Britain Since 1700. Vol. 1, 1700–1860. Edited by Roderick Floud and Donald McCloskey. Cambridge, U.K., 1981. Pages 242–270.
Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. New York, 1990.
Needham, Joseph. Science and Civilisation in China. Cambridge, U.K., 1954.
O'Brien, Patrick K. "Introduction: Modern Conceptions of the Industrial Revolution." In The Industrial Revolution and British Society. Edited by Patrick K. O'Brien and Roland Quinault. Cambridge, U.K., and New York, 1993. Pages 1–31.
Pahl, R. E., ed. On Work: Historical, Comparative, and Theoretical Approaches. Oxford, 1988.
Rosenberg, Nathan. Perspectives on Technology. Cambridge, U.K., 1976.
Rudgeley Richard. Lost Civilisations of the Stone Age. London, 1998.
Samuel, Raphael. "Workshop of the World: Steam Power and Hand Technology in Mid-Victorian Britain" History Workshop Journal 3 (1977): 6–72.
Tann, Jennifer. The Development of the Factory. London, 1970.
Toynbee, Arnold. Lectures on the Industrial Revolution of the Eighteenth Century, Popular Addresses, Notes, and Other Fragments. London and New York, 1908.
Tribe, Keith. Land, Labour, and Economic Discourse. London, 1978.
Tunzelmann, G. N. von. "Technical Progress During the Industrial Revolution." In The Economic History of Britain Since 1700. Vol 1. 1700–1860. Edited by Roderick Floud and Donald McCloskey. Cambridge, U.K., 1981. Pages 143–163.
Tunzelmann, G. N. von. Steam Power and British Industrialization to 1860. Oxford, 1978.
White, Lynn. Medieval Technology and Social Change. Oxford, 1962.
Technology

EARLY MOTION PICTURES
COLOR AND SOUND
THE TELEVISION AGE
THE DIGITAL AGE
Ever since the invention of motion pictures, movie industries around the world have counted on a stream of technological developments to streamline production, increase profits, and entice audiences. Yet the history of film technology, spanning a little over a century, is a finite one, more subtle and incremental than one might assume. Indeed, the basics of film production went largely unchanged for a good part of the twentieth century. Other than several watershed innovations that required systemic overhauls, such as synchronized sound, wide-screen formats, and color processes, most technological innovations were small by comparison, affecting the final product in ways that often went unnoticed by most viewers.
Only recently, in the past few decades, has the industry begun to explore new alternatives to conventional film stock, editing techniques, and the basic motion picture camera. One explanation is the uniqueness of the movies as a manufactured product. Unlike other technology products, such as automobiles, television sets, and appliances, the movies are neither tangible nor utilized in any conventional way by consumers. The product is less material than it is imagistic, something to be recounted and remembered rather than owned and handled. In the case of television, however, consumers do more than watch it. They own, display, and control the machine, which explains, in part, the medium's dramatic technological changes (remote control, cable, TiVo, flat-screen, and VHS/DVD). Movie formats have undergone dramatic changes as well, of course, but on the whole they have been more sporadic and aimed at attracting moviegoers during box-office slumps. Another, more compelling reason for the relative constancy of motion picture technology has been a reluctance on the part of movie industries—and especially the eight major and minor studios of classical Hollywood—to make systemic changes requiring costly, comprehensive overhauls of the industry. Nonetheless, and sometimes against their will, moviemaking industries around the world have adopted new technologies in response to audience interests, economic imperatives, societal shifts, and aesthetic trends.
Beginning in the 1830s and continuing throughout the century, series photography generated early interest in the possibilities of motion pictures. Inventors and entrepreneurs quickly recognized the entertainment value of simulating the movement of photographs, and over the course of the century a variety of peephole toys and, later, coin machines appeared in arcade parlors throughout the United States and Europe. These pre-cinematic mechanisms were crucial in the technological leap from still photography to motion pictures projected on big screens for paying audiences. One of the earliest toys was the Zoetrope, a handheld spinning drum with a series of pictures on the inside, visible to the viewer through thin slits in its side. The Mutoscope, a coin machine found in arcades, enabled viewers to watch a series of photo cards flip by at the turn of a crank.
These early peephole toys and experiments with sequence photography indicate that the premise of the movies—that is, a sequential series of pictures on cards or film passed by the eye fast enough to suggest continuous movement—was well in place before the first motion pictures were made and projected onto a screen. Three critical components, however, were missing: light-sensitive and fast film rolls that could travel through a camera and capture the action sequentially on frames; a camera that would record this action; and a projector that could run the film at such a pace and with enough light to throw the images, in seeming motion, onto a large screen.
In 1882 Étienne-Jules Marey (1830–1904), a French physiologist, invented the "chronophotographic gun" to record animal locomotion. The camera initially captured images on glass plates, but Marey soon switched to an easier, more manipulable format, paper film, thus introducing the film strip to cinematography and setting the stage for further developments. Indeed, only a few years later, in 1887, an Episcopalian minister from New Jersey, Hannibal Goodwin (1822–1900), developed the first celluloid roll film as a base for light-sensitive emulsions. Goodwin's success with celluloid film rolls was particularly significant because it made possible motion picture cameras and projection. George Eastman (1854–1932) soon thereafter adapted Goodwin's roll film, patented it, and made it the industry standard by 1890. Eastman Kodak issued this same basic stock, in rolls of two hundred feet, all the while making technical innovations to improve its quality. Eastman and his laboratories made it the most dependable film stock, and by 1910 studios and filmmakers from around the world were using it.
Thomas Alva Edison (1847–1931), inventor and entrepreneur, was in many ways an unlikely but important figure in the history of movie technology. Long before the first talkies, Edison was arguably the first to envision motion pictures as a marriage of image and sound. Before his company patented motion picture cameras—among other technologies vital to producing and projecting movies—he invented the phonograph, for which he always dreamed of producing visual accompaniment. Toward this end, he sought to invent a camera that would shoot a series of images onto a strip of film that, when projected at a certain speed, would convey a continuous sequence resembling live action. In 1883 he hired the young William Kennedy Laurie Dickson (1860–1935), who would greatly aid him in this quest and would eventually run Edison's West Orange, New Jersey, laboratory. After working on the project for a number of years, Dickson completed the first motion picture camera in 1891.
Borrowing from several earlier mechanisms, including watch escapement engineering and Marey's chronophotographic gun, Dickson came up with an instrument called the Kinetograph. What distinguished this new camera from other devices of the same period were two crucial additions, both of which remained defining attributes of motion picture cameras and projection throughout the twentieth century. First, it made use of a stop-motion device to regulate the intermittent movement of the film strip through the camera at various rates of frames per second (typically, 16 fps during the silent era and 24 fps for talking pictures). This allowed the unexposed film strip to pause for a fraction of a second, during which time the shutter opened just long enough to expose the film to a beam of light. Second, Dickson added sprocket holes along one side of the celluloid film strip, which could then be pulled through the machine by toothed gears. As Dickson carefully notes in his History of the Kinetograph, Kinetoscope, and Kineto-Phonograph, originally published in 1895, these perforations allowed a locking device to hold each frame in place for nine-tenths of its brief cycle while the shutter opened and admitted a beam of light long enough to expose the film.
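The timing relationship Dickson describes can be sketched with simple arithmetic. The following Python fragment is illustrative only: the function name and the 0.9 dwell fraction (taken from Dickson's "nine-tenths" figure) are our own framing, not anything specified in the source.

```python
def frame_timing(fps, dwell_fraction=0.9):
    """Split one frame's time budget in an intermittent-motion camera.

    dwell_fraction is the share of each cycle during which the film
    sits still behind the open shutter (Dickson's 'nine-tenths');
    the remainder is spent pulling the next frame into place.
    """
    cycle = 1.0 / fps                  # seconds available per frame
    dwell = cycle * dwell_fraction     # film at rest, being exposed
    pulldown = cycle - dwell           # film advancing, shutter closed
    return cycle, dwell, pulldown

for fps in (16, 24):                   # silent-era and talkie standards
    cycle, dwell, pulldown = frame_timing(fps)
    print(f"{fps} fps: {dwell * 1000:.1f} ms at rest, "
          f"{pulldown * 1000:.1f} ms of pulldown per frame")
```

Even at the Kinetoscope's 40-plus frames per second the same split applies, with each whole cycle shrinking to under 25 milliseconds—which suggests Dickson's "nine-tenths" is best read as a fraction of each frame's brief cycle rather than of a literal second.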
The Kinetograph shot short films in 50-foot installments (typically less than 30 seconds), which could then be viewed in the Kinetoscope, a battery-powered coin machine—one of the last of its kind before motion picture exhibition became geared toward collective audiences—also designed by Edison's company. Unlike later projectors, this one operated at over 40 frames per second, nearly three times faster than what would become the standard rate. Soon entire parlor halls were filled with Kinetoscopes, drawing in customers who individually watched a number of short movies. Using the Kinetograph, Dickson shot thousands of short films in what was the first motion-picture studio, "the Black Maria," a barnlike structure with a sliding roof that allowed sunlight to enter and illuminate the subjects being shot. Since the camera was large and immobile, the "action" needed to be brought before it. The shorts were thus one-shot, one-scene "movies."
In spite of its unwieldy size and relatively primitive mechanics, the Kinetograph influenced nearly every motion picture camera made since, but especially those that followed in the decade after. Like their predecessor, these cameras were typically made of wood, sat on a box or tripod, had a hand crank for shooting and projecting, and came with sprockets that drove the film through the machine. In Europe several important early filmmakers and inventors adapted the Kinetograph to fit their own needs, which included more versatile, mobile filmmaking as well as projection. The French Lumière brothers, Auguste (1862–1954) and Louis (1864–1948), invented the Cinématographe in 1895, a remarkable machine that was camera, printer, and projector all in one device. The Lumières became famous for shooting their popular actualités, short, single-shot films of locations and scenarios, such as oncoming trains, people kissing, and distant lands. Unlike the Kinetograph, the Cinématographe was light and more easily transportable, able to capture city scenes and "exotic" locales at a time when few were able to travel the world.
With the rapid growth of camera technology came attendant developments in projection. Many early cameras doubled as projectors: an arc-light source could be attached to the back, which opened for projection purposes. Arc lights were a popular and powerful source of illumination for early theater and photographic portraiture, and were later used for motion picture production at a time when less sensitive film stocks required powerful lighting for full exposure. As early as 1888, Louis Aimé Augustin Le Prince (1842–c. 1890), working in England, rivaled Dickson and his Kinetograph by patenting a motion picture camera-projector that used perforated film and intermittent stop-go motion. (Le Prince might have become more than a footnote in the early history of motion pictures had he and his machinery not disappeared without a trace in 1890.)
Several problems with early projection engineering needed solving, however. First, there was the matter of precisely regulating the film roll's intermittent but consistent movement through the machine, such that each frame would sit between the projection lamp and the open shutter for the same duration and advance at the correct pace for proper projection. German film pioneer Oskar Messter (1866–1943) developed the Maltese-cross system—still used today in most film projectors—to ensure regular "stop-and-go" motion (Cook, p. 9). A gear in the shape of a Maltese cross is mounted on the shaft of the sprocket wheel that pulls the film through the projector; a pin on a continuously rotating drive wheel briefly engages the gear's slots, so that the film is momentarily (and repeatedly) paused and then advanced.
The second predicament with early projection was finding a way to keep the film from tearing under the strain of hundreds of feet of film spinning and intermittently tugging at the single strip between the reels (the tension typically becomes destructive when the film is longer than 100 feet, equivalent to over a minute in duration). The solution came in 1896 with the invention of the Latham loop, an extra loop in the film's path through the projector that absorbed the tension and made the showing of longer films possible. Filmmakers themselves may not have exploited this new-found possibility until longer films were introduced around 1899, but exhibitors and studios did so earlier by splicing shorter films together into longer programs. By 1889 Edison's company and others around the world were taking out patents on projectors, and less than a decade later, on 23 April 1896, New York City was home to the first public projection of a motion picture in the United States. Both European and American audiences were quick to embrace the new entertainment, flocking to theaters and then reading about it the next day in their local newspapers.
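The footage and running-time figures above follow from simple arithmetic: standard 4-perforation 35mm film carries 16 frames per foot. A minimal sketch in Python (the function name is ours, for illustration only):

```python
FRAMES_PER_FOOT = 16  # standard 4-perf 35mm film

def running_time_seconds(feet, fps):
    """Running time of a 35mm reel of a given length at a given frame rate."""
    return feet * FRAMES_PER_FOOT / fps

# A 50-foot Kinetograph subject run at about 40 fps lasts roughly
# 20 seconds, while 100 feet at the 16 fps silent-era standard runs
# 100 seconds -- just over the minute mark at which reel tension
# made the Latham loop necessary.
print(running_time_seconds(50, 40))   # -> 20.0
print(running_time_seconds(100, 16))  # -> 100.0
```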
There were many key players behind the initial technological developments of motion pictures. Yet few of these inventors were collaborating or even envisioning a common goal; even fewer foresaw the potential for movies to tell stories, create international celebrities, and entertain large audiences collectively gathered before one large screen. Eventually, however, technological advancements coalesced to match the period's fascination with mechanized movement. Together they soon offered up the possibility of the movies as an entertainment form and a highly profitable industry.
Long before Technicolor revolutionized the look of movies, color appeared in movies through a number of different methods. One of the first narrative movie directors, Georges Méliès (1861–1938), known for his early special effects and camera trickery, used color on occasion to accentuate spectacle, such as bursts of yellow flame and the like. In order to achieve this effect, he had individual frames hand-painted, a laborious and expensive practice. Tinting and toning were more popular, if only because the process was easier and cheaper, though admittedly less dramatic in effect. Tinting involved dyeing the entire emulsion in one color, so that shots of sky or twilight would appear blue and fire scenes red, for instance. Toning, on the other hand, was the chemical coloring of the silver portions of the image, which changed the normally black areas of the frame into colored ones. Early directors such as England's Robert William Paul (1869–1943) and James Williamson (1855–1933) made extensive use of both techniques, which would continue in popularity throughout the nickelodeon era and beyond.
In 1908 Charles Urban (1871–1942), an American businessman and motion picture enthusiast, patented the first functional color film process, called Kinemacolor. Unlike the subtractive processes that later became the standard, this was a two-color additive system: the camera exposed alternating frames through red and green filters, and the print then had to be projected through matching filters so that the two color records blended on screen (Cook, p. 254). Urban and his partners quickly began making films with Kinemacolor in several countries, including England and the United States. It was mainly used on shorter films, which kept budgets down, but by the early teens it was appearing in longer features as well. Because of patent litigation and technical problems with the process, Kinemacolor disappeared several years later. Additive color methods were generally short-lived because they required faster shooting, more illumination and film stock, and tricky projection equipment, which exhibitors resisted. In spite of its brief run, Kinemacolor was very popular in its time and established the foundation for future color processes, including Technicolor.
The next legitimate color process was developed by Technicolor in the 1920s. Herbert T. Kalmus (1881–1963), Daniel F. Comstock, and W. Burton Wescott had started the firm in 1915. Like Urban and others from this period, they began with an additive process, but once that failed, Kalmus sought to invent a subtractive process that would allow the colors to be printed on positive stock, eliminating superimposed projection. In 1922 Technicolor patented the first such color process, but the high cost made it untenable for most studios. A few years later, as talkies were emerging, Technicolor was using a two-strip subtractive process that attracted the studios' attention. Warner Bros., the most adventurous of the five major studios, was one of several companies to try it out on a limited basis. A few years into the Depression, however, the high cost again proved prohibitive for studios. Making it even less attractive were deficiencies inherent in a two-strip process, namely the lack of color range in the product (it had been proven in the nineteenth century that the full color spectrum could be achieved with combinations of only three primary colors: red, green, and blue).
In 1932 Technicolor came back with a three-strip method that included a "three-color beamsplitter and a third strip of film, so that each matrix—red, blue, green—had its own separation negative" (Bordwell, Staiger, and Thompson, p. 353). With the aid of a mirror and prisms, the image was rendered simultaneously onto three different emulsion film strips. One strip, sensitive to green, was placed behind the lens, while the other two—one sensitive to blue and the other to red—were back to back on a separate track and at a 90-degree angle from the first. Because the light was split by the prism and mirror, so that all three strips could register the image, shooting in three-strip Technicolor required a great deal more lighting on the set. Yet the result was a fuller, richer spectrum of colors on film, as is evident in the films that featured it, including Disney's animated Three Little Pigs (1933) and Snow White and the Seven Dwarfs (1937), as well as Gone with the Wind (1939) and The Wizard of Oz (1939).
With each year, Technicolor improved its color process, which became faster and finer-grained, offering richer colors. The process still had its drawbacks, however, chiefly its high cost. Shooting a film in Technicolor could add hundreds of thousands of dollars to a budget, so studios were unwilling to make more than a fraction of their productions in color. In addition to the need for more lighting, the three-strip Mitchell cameras, owned and leased by Technicolor, were expensive, large, and heavy, making on-location shooting difficult. The lack of competition at this time also put Technicolor in demand and kept it pricey. Further increasing the price tag, the company often required that studios rent one of its trained cinematographers. As director Alfred Hitchcock learned during the production of his first color film, Rope (1948), this was not necessarily a bad thing. A notorious perfectionist, Hitchcock was disappointed with the sunset sky's red-orange colors, which he felt smacked of a "cheap postcard." He brought in a Technicolor camera technician to reshoot the last five of Rope's ten-minute takes. As this story suggests, filmmakers—not merely directors and cinematographers, but also costume designers, art directors, set designers, and makeup artists—long accustomed to black-and-white aesthetics, underwent a necessary period of adjustment. Three-strip Technicolor remained the dominant color film method until the 1950s, when single-strip color processes rendered it obsolete and television provided legitimate competition. Only thereafter would the industry's conversion to color become nearly absolute.
Just as the idea of movies in color had its roots in the earliest recorded history of the motion pictures, so too did the notion that movies could and should talk to us. Indeed, as long as motion pictures have been projected, they have rarely been without sound and even synchronized sound, in rhythm with the images on screen. During the silent era, live organists, pianists, and symphonic orchestras accompanied the projection of movies in theaters both big and small. On occasion, live actors would stand behind the screen to speak the lines. In other countries, such as Japan, a narrator (benshi) would sometimes provide commentary on the action. By the mid-1920s, however, advancements in recording and audio technology ushered in the era of "talkies."
At first, synchronized sound systems were often sound-on-disc, meaning that the film's audio (dialogue, foley sounds, and/or score) would be recorded onto a phonograph-style disc. Then, as the film was projected, a disc player would play the audio in synchronization with the images on screen. In the United States, Vitaphone successfully used this process in the mid-1920s. The method was flawed, however, and often unsatisfying for viewers, because the synchronization of sound and image was tenuous and easily disrupted. Across the Atlantic, German engineers concurrently developed a means of recording the soundtrack directly onto the film, so that sound and image were truly wed during projection. This method, called the Tri-Ergon process, converted sound into light patterns, which were first recorded onto the film strip and then reconverted to sound in the projection process. In the early 1920s, Lee De Forest (1873–1961) was promoting a similar sound-on-film method in the United States. What gave De Forest the advantage over his counterparts was his ability to make sound audible to an entire audience with the aid of his patented Audion vacuum tubes, which could amplify sound through a speaker without the usual distortion of the time.
In spite of these early sound-on-film innovations, the first talkies in Hollywood used the Vitaphone sound-on-disc system, built on Western Electric technology. The major studios of the time, including Paramount and Metro-Goldwyn-Mayer (MGM), were unwilling to risk what would have required a costly overhaul of production and exhibition equipment. However, Warner Bros., a small but growing studio anxious to compete with the majors that threatened to squeeze out smaller competition, gambled by purchasing exclusive rights to Vitaphone in 1926. Warner Bros. started by making a program of talkie shorts before producing two features, Don Juan (1926) and The Jazz Singer (1927), both directed by Alan Crosland. Don Juan featured merely a synchronized musical score, so it still resembled a silent film. Like many films of this transitional period, The Jazz Singer was part silent and part talkie; it included several scenes with players speaking, but it otherwise used a prerecorded on-disc music score. Warner's gamble paid off handsomely nonetheless: the films did very well at the box office and encouraged Warner Bros.—and the rest of Hollywood—to continue in the direction of talkies.
By 1929, most of Hollywood had made the conversion to talkies, implementing sound-on-film systems that allowed for the mechanical synchronization of image and sound. Much of Europe followed in the year or two after. Problems abounded during this initial phase of talkies for several reasons. Since the cameras of this era were so loud, they needed to be encased during shooting so that the sensitive microphones on the set would not pick up their audible hum. This made for a rather static kind of cinema, particularly in light of the precedents set by the highly mobile camera work of silent film masters such as F. W. Murnau (1888–1931) and Carl Theodor Dreyer (1889–1968). Arc lights, which had become standard by this time, also were loud enough to be picked up by the microphones. Hollywood switched soon thereafter to tungsten light sources, which, according to film historian Barry Salt, did not overly change the look of the films. In addition, the industry struggled at first with dialogue, which often came off as forced, unrealistic, and clichéd. Lastly, the industry discovered quickly that not all of its best silent stars were able to make the transition to the age of sound.
As several noted film historians have suggested, however, these growing pains were relatively few and short-lived for such an extensive industry-wide conversion. The industry solved most of these problems in time with developments in audio and recording technology. For instance, before long studios were using multiple audio tracks on films, looping in dialogue, music scores, and foley sounds during postproduction. Quieter cameras and more directional microphones also freed up the camera and increased the quality of sound. By the early 1930s, only a few years since the inception of the conversion to talkies, directors such as Fritz Lang (M, 1931), Lewis Milestone (All Quiet on the Western Front, 1930), and Hitchcock (Blackmail, 1929) were using sound and dialogue in complex ways, proving Soviet film theorist-director Sergei Eisenstein's (1898–1948) assertion that synchronized sound could be employed as audio montage and/or counterpoint. With the conversion to sound, purists throughout the world proclaimed that the advent of talkies would be the death knell of cinema as they knew it, a singularly visual art. It was not long before film industries and individual filmmakers silenced these critics.
In the Cold War era of communist witch hunts and blacklisting, Hollywood executives had even more pressing worries: the imminent death of the studio system and the meteoric rise of television, which led to a drastic decline in ticket sales. To combat the drop in profits, the studios quickly sought to lure moviegoers—particularly families—away from the living room by enhancing and exploiting their medium's technological advantages, namely its relatively large image size and its color format. Not coincidentally, the 1950s were the first decade of drive-in movie theaters, stereo sound, wide-screen formats, epics shot in glossy color, and a full gamut of movie ballyhoo such as 3-D film technology.
Beginning in 1952, Hollywood made the conversion to color production. As it had with other sectors of the movie industry, the government deemed Technicolor (and particularly its three-strip technology) a monopoly in 1950. That same year Eastmancolor, a single-strip format based on Germany's Agfacolor, emerged as a legitimate and cheaper means of shooting in color. Unlike the earlier three-strip processes, Eastmancolor (and other processes similar to it) fused the three emulsion layers onto a single roll, soon eclipsing the competition and replacing Technicolor as the industry's most widely used color process. Whereas in the 1940s less than a quarter of Hollywood features were shot in color, by the 1950s more than half were; by the 1970s, the conversion was nearly complete. Barring student productions and the occasional "art" film intentionally shot in black and white, movies made since the 1970s have been shot almost exclusively in color.
To complement the great rise in color production, and to increase its drawing power as spectacle entertainment on a grander scale than television, Hollywood sought to widen the aspect ratio of the motion picture image. Up until the early 1950s, the standard (or Academy) aspect ratio of motion pictures was nearly square, at 1.33:1. Since the television screen adopted this same format, Hollywood had all the more incentive to enlarge its screen image. The first such widescreen optical process, Cinerama, appeared in 1952. It was a multiple-camera, multiple-projector system that showed films on a curved screen, adding depth and spectacle to the experience of movie spectatorship, with a projected image as much as three times the width of a standard 35mm movie image. (The rough equivalent for today's spectators is IMAX, a system that shows movies—many shot in 3-D—on a giant screen not only wider but also taller than typical widescreen formats.) As with most early processes, however, this one proved too expensive and burdensome both for those shooting and for those projecting the picture. Only a small number of motion pictures were shot in the format, among them How the West Was Won (1962).
In 1954 CinemaScope emerged as the most popular widescreen format in Hollywood and other parts of the world. It was one of several optical formats that used anamorphic lenses, which compressed the image horizontally by a factor of two onto a standard 35mm frame; a matching projection lens then stretched it back to its natural dimensions on screen. In time, CinemaScope offered movies in a 2.35:1 format, which greatly widened the image seen by viewers. Not surprisingly, CinemaScope was used for epics, westerns, and other genres best suited to landscape shots, action scenes, and general spectacle. It became extremely popular with audiences, who were drawn to the heightened experience of movie watching, and with the studios, which liked its low price tag and ease of use.
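The arithmetic of anamorphic projection is simple enough to sketch. In the Python fragment below (illustrative only: the function name is ours, and 1.175:1 is the approximate CinemaScope camera-aperture ratio, narrowed from Academy proportions by the soundtrack area), the projector's matching lens multiplies the frame's width back out:

```python
def projected_ratio(camera_frame_ratio, squeeze_factor):
    """Aspect ratio seen on screen once a matching projector lens
    undoes the anamorphic squeeze applied in the camera."""
    return camera_frame_ratio * squeeze_factor

# A 2x anamorphic squeeze turns a roughly 1.175:1 camera frame
# into the familiar 2.35:1 widescreen image.
print(projected_ratio(1.175, 2))
```

The same relation run in reverse explains why anamorphic prints look distorted when projected "flat": without the unsqueezing lens, everything on screen appears half as wide as it should.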
A number of widescreen variations became available during the 1950s and 1960s. Directors such as John Ford (The Searchers, 1956) and Alfred Hitchcock (Vertigo, 1958; North by Northwest, 1959), for instance, famously used Paramount's VistaVision. Some filmmakers preferred VistaVision because it produced an unusually sharp image for a widescreen format, though it also used twice as much negative film stock as conventional shooting. By the 1960s Panavision had gradually replaced CinemaScope as the standard format for widescreen cinematography. Non-anamorphic widescreen processes, such as 70mm, were also used for popular films such as Around the World in 80 Days (1956), Cleopatra (1963), and The Sound of Music (1965).
In addition to changing the way moviegoers watched movies, widescreen cinema altered the way cinematographers approached shooting them. For many directors, there was more incentive to shoot long takes and to reduce the number of cuts. Yet the average shot length in widescreen productions was only minimally longer than in films shot in the Academy ratio. Most filmmakers and cinematographers shooting in widescreen sought to take advantage of the extra width by lining up as many characters as could fit in the frame and by adding more material to the mise-en-scène. Others, such as Jean-Luc Godard and Hitchcock, brought their own distinctive cinematic styles to the new format. In Le mépris (Contempt, 1963), for instance, Godard seems to defy the film's width, establishing off-screen space while using only a fraction of the frame, and panning across, rather than merely fixing upon, landscapes. For Godard the wide screen provided a means for compositional counterpoint. Hitchcock, in a different vein, remained true to his commitment to the principles of montage and thus cut even his widescreen films in ways atypical for the period. His great attention to composition, color, setting, and blocking is also on display in his later films, many of them shot in VistaVision.
Emulating a pattern in movie technology, stereoscopic (popularly known as "3-D") formats were introduced at an early stage in the history of motion pictures. In 1903 the Lumière brothers were the first to publicly screen a stereoscopic picture, L'Arrivée du train (The Train's Arrival). The process was labor-intensive and highly expensive, however, which kept it largely unpopular. The increase in movie lengths, due in large part to the rise of narrative and the star system beginning in the early teens, only exacerbated its high cost. Stereoscopic productions used the anaglyphic system and required twice as much film stock: shooting in 3-D meant a twin-camera method that captured the same action on two different reels, one tinted red and the other blue. Once processed, the two film strips were projected together for an audience wearing special glasses with one red-filtered lens and one blue-filtered lens. Anaglyphic 3-D did not disappear entirely, though, appearing in several European and US productions throughout the 1920s and 1930s.
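The principle behind the anaglyph can be sketched in a few lines. The fragment below is a modern simplification in Python (the function names are ours, and it uses today's red/cyan channel convention rather than the tinted prints described above): each eye's filter blocks the other eye's color record, so each eye recovers only its own viewpoint.

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Combine one pixel from each eye's image into a red/cyan anaglyph:
    the red channel carries the left eye's view; green and blue carry
    the right eye's. A red filter passes only the left record, a cyan
    filter only the right."""
    left_red, _, _ = left_rgb
    _, right_green, right_blue = right_rgb
    return (left_red, right_green, right_blue)

def anaglyph(left_img, right_img):
    """Apply the per-pixel combination across two same-sized images,
    given as nested lists of (r, g, b) tuples."""
    return [[anaglyph_pixel(lp, rp) for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left_img, right_img)]

# A one-row example: a reddish left view and a bluish right view.
left = [[(200, 10, 10), (200, 10, 10)]]
right = [[(10, 10, 200), (10, 10, 200)]]
print(anaglyph(left, right))  # red from the left eye, green/blue from the right
```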
By the early 1950s, Hollywood was desperate enough to overlook the format's imperfections in favor of its shock value. Several innovations also improved the process, further explaining its enormous popularity during this period. A polarized version of the 3-D process increased precision while enhancing the viewing experience. Natural Vision, for instance, first introduced in 1952, fixed the dual cameras at a distance approximating that between the human eyes. This made for a more realistic sense of depth than earlier, less precise 3-D formats. Stereoscopic production and exhibition boomed for two years (1953 through 1954), appearing most often in adventure, science fiction, and horror movies and helping to give 3-D an aura of kitsch. Of the more than fifty titles shot in 3-D, the most famous include Universal's Creature from the Black Lagoon (1954) and Warner Bros.' House of Wax (1953). Hitchcock's Dial M for Murder (1954) and the only musical to use the format, Kiss Me Kate (1953), were both shot in 3-D but were screened "flat" owing to the sudden decline of the stereoscopic fad.
Although the 3-D craze faded less than two years after its boom in the 1950s, stereoscopic filmmaking practices have reemerged time and again, suggesting their allure across generations. They returned in the 1960s, for instance, when a string of pornographic and X-rated 3-D films enjoyed great box office success. More recently, 3-D has made a comeback in the digital age of filmmaking.
A renewed interest in film realism influenced motion picture technology during and after World War II. To gain greater versatility and mobility, filmmakers took to using smaller cameras that could shoot on location without tripods or heavy equipment. Shortly after World War II, director Morris Engel (1918–2005), whose low-budget films shot in New York City would later influence John Cassavetes, helped Charlie Woodruff construct a portable 35mm camera that prefigured the Steadicam. By the middle of the 1950s, cinematographer Richard Leacock (b. 1921) and sound recording specialist D. A. Pennebaker (b. 1925) developed a portable 16mm synchronized-sound camera that rested on the operator's shoulder. These light, highly mobile sync-sound cameras were instrumental in renewing the documentary movement of the 1960s. Filmmakers such as Shirley Clarke, Robert Drew, and Frederick Wiseman helped popularize the 16mm cameras, which were famously used in productions such as Primary (1960) and High School (1968). Thanks to these developments in film technology, and inspired by new waves of filmmaking around the world, including Italian neorealism and cinéma vérité, handheld cinematography became not only feasible but also popular in both documentary and narrative production.
Beginning in the mid-1970s, the Steadicam offered a new means of shooting handheld while maintaining a steady image. The Steadicam is a body-worn mount that stabilizes the camera by isolating it from all but the operator's largest movements. In addition to absorbing the shocks of movement, the mount continually keeps the camera balanced at its center of gravity. The Steadicam enabled filmmakers to shoot in tight spaces and accomplish difficult shots (such as circling moves, extended pans, and passages through crowds), while providing a degree of steadiness previously attainable only with a dolly. Director Martin Scorsese and his cinematographer Michael Chapman used the Steadicam to great effect in a famous sequence in Raging Bull (1980), in which the camera follows Jake LaMotta (Robert De Niro) as he winds through a throng of fans and reporters on his way to the boxing ring. More recently, Hi-8 cameras, camcorders, and digital cameras have extended personal (and occasionally professional) handheld filmmaking practices.
Computer- and digital-based filmmaking technologies have picked up where the Steadicam left off, allowing for even greater portability and image steadiness. In addition, these new technologies are able to heighten special effects, intermix digital or virtual domains with live action, convey scale, and reduce the labor necessary in setting up difficult shots and constructing complex settings. Indeed, the new age of cinema signals the end of perforated film strips, 35mm cameras, and editing methods that have remained largely the same since motion pictures were born. While many of these changes are yet to be standardized and institutionalized, the technology has been around in some form since the early 1980s.
Disney's Tron (1982) was the first movie to include high-resolution digital imagery, but it did so sparingly. Several years later, in 1989, James Cameron took the technology to a new level, intermixing live action and computer graphics in The Abyss. Cameron proved that computer-generated imagery (CGI) could add complex yet realistic special effects while remaining cost-effective (Cook, p. 955). Cameron's success invited further experimentation with digital technologies. Since the early 1990s, many productions have implemented CGI in some form. Robert Zemeckis, in Forrest Gump (1994), blended virtual history (past US presidents, for instance) with live action. Cameron created digital replicas of Miami as background in True Lies (1994). In Star Wars: Episode I, The Phantom Menace (1999), George Lucas's crew incorporated computer-generated imagery into virtually every scene, simulating entire battle sequences with digitally designed extras multiplied to fill the screen. Such effects are especially suited to action-adventure films, of course, but they are increasingly used across genres to reduce costs and save labor time.
Like previous phases of film technology, the digital age of cinema has had to weigh the advantages of spectacle against more practical matters of efficiency, economy, and realism. Digital technology has also resurrected stereoscopic filmmaking. After the success of IMAX 3-D in the 1990s, James Cameron's Ghosts of the Abyss (2003), a documentary on the Titanic, and Steven Spielberg's digitally animated The Polar Express (2004) both played on IMAX's giant screens. Directors Lucas and Cameron have also explored a new 3-D process in which technicians render flat films stereoscopic by digital means. This conversion process would be applicable not only to newly made films but also to reissues of previously released movies. The technology is in place for both the conversion and the projection of digital 3-D, but theaters will first need to make the conversion to digital projection, which will be the next costly—but perhaps inevitable—overhaul.
Cook, David A. A History of Narrative Film. 3rd ed. New York and London: W. W. Norton, 1996.
Dickson, W. K. L., and Antonia Dickson. History of the Kinetograph, Kinetoscope, and Kineto-Phonograph. New York: Museum of Modern Art, 2000.
Salt, Barry. Film Style and Technology: History and Analysis. 2nd ed. London: Starword, 1992.
The development of motion-picture technology during the silent-feature era was largely incremental. Increasing standardization and quality control brought filmmakers' tools up to a professional level undreamed of in the short-film era. Yet this very standardization acted as a brake on the introduction of radical technical innovations. The intermittent surges of mechanical progress that mark the early cinema were hardly in evidence. The major exception to this trend, the development and introduction of the sound film, would eventually bring the silent cinema to an abrupt halt. But this work was carried on far from the silent stages and was of small consequence at the time to practitioners of silent filmmaking. Thus, in a 1926 paper for the American Academy of Political and Social Science, P. M. Abbott, vice-president of the Society of Motion Picture Engineers, divided motion-picture technology into five distinct process areas: manufacture of raw stock, studio machinery, laboratory equipment, material required for film exchange operations, and theater apparatus. Sound films had no place in his picture of cinema technology.1
In Abbott's view, raw-film manufacture was a straightforward situation that could "be dismissed in one paragraph," since it essentially duplicated the process involved for still photography. It is true that the manufacture of stock had long been standardized and that the Eastman Kodak Company had dominated the American market for years. But 1926 marked the climax of a revolution that Abbott allowed to pass unnoticed: the triumph of panchromatic negative stock in the Hollywood studios.
In 1915 Eastman Kodak offered only one negative stock and one positive release-print stock. This negative stock was an orthochromatic variety sensitive only to blue, violet, and ultraviolet light. It bore no name other than "motion-picture negative film," but to differentiate it from stocks added later, it was eventually labeled Motion-Picture Negative Film Par Speed type 1201. Although it was not rated in terms of the current ASA scale, the speed of this film was approximately ASA 24 according to filmmaker and historian Kevin Brownlow. Super-speed negative film was introduced in August 1925, but this was also an orthochromatic stock.2
The fact that orthochromatic negative stock was insensitive to the red end of the spectrum created many photographic problems. With red and yellow registering as black, and blue as white, relative color values could not be properly reproduced. Such a negative was unable to distinguish a white cloud in a blue sky and had a great deal of trouble with blue-eyed actors and actresses. The use of filters and various lighting tricks could mask some of these problems, but the results were never completely satisfactory.3
Panchromatic stock, capable of reproducing proper tonal values across the entire visible spectrum, was first introduced for still photographic plates in 1906. That same year, George Albert Smith in England was able to sensitize motion-picture film to the red end of the spectrum for use in his Kinemacolor process, but his results were far from perfect. The Eastman Kodak Company introduced a panchromatic motion-picture stock in September 1913, also in connection with color work, but not until 1918 was this stock employed to solve the problems of monochrome photography. Sequences of Fox's Queen of the Sea (1918) were photographed on panchromatic negative that year, but the stock still had a shelf life of only two months and needed to be specially ordered in batches of at least 8,000 feet. In fact, laboratories would resist the introduction of panchromatic negative for some time, because its sensitivity to red prevented them from using their traditional red-light illumination during development.4
In 1922, Ned Van Buren successfully photographed The Headless Horseman entirely on panchromatic stock, and the following year it became a regular Eastman product (later labeled type 1203). But laboratory resistance, cinematographers' lack of familiarity with the stock, and its increased cost restricted its use to special occasions. For example, Robert Flaherty turned to panchromatic stock while filming Moana in Samoa in 1923 only after rejecting the results obtained with orthochromatic. Despite the inherent processing difficulties, Flaherty was able to develop and print his footage deep inside a cave, using an underground spring as his water source.5
Gradually the use of panchromatic negative began to increase, especially when much exotic location work was involved. Henry King filmed Romola entirely on panchromatic stock in Italy in 1923–1924, claiming that it was faster than he had expected. This unforeseen benefit arose because panchromatic's sensitivity to yellow and red permitted sufficient exposure under conditions unsuitable for par-speed film.6
In 1924 panchromatic stock was still 1½¢ per foot more expensive than par-speed film, but prices were equalized in 1926, with a resulting shift to panchromatic. Old Ironsides (1926) was the first Famous Players-Lasky feature shot entirely on panchromatic stock, and George Barnes used it to film The Winning of Barbara Worth and The Son of the Sheik that same year.7
For a time, as in Aloma of the South Seas and Beau Geste (both 1926), panchromatic location work was mixed with orthochromatic studio shots, but the results were unsatisfying. "After the visual grandeur of human faces in the desert," wrote John Grierson of Beau Geste, "one returned to the simpering lollipop studio faces of the final garden scenes." By the end of 1926, panchromatic negative was used more widely than par-speed film, a fact hailed by the SMPE as the year's "most prominent progressive step in connection with film and emulsions."8
By contrast, there was relatively little change in the stock provided for release prints. Kodak manufactured Eastman Cine Positive Film type 1301 throughout this
period. DuPont release-print stock was introduced in 1918. In 1920 Eastman and Gevaert were the only firms advertising release-print stock for sale in the pages of the Motion Picture News. By the end of the silent era, Carl Louis Gregory listed Eastman Kodak, DuPont-Pathé, Zeiss-Ikon, Agfa, and Gevaert as the leading manufacturers, but he gave no indication of the relative importance of these brands in the American market.9
Eastman Duplicating Film, a low-contrast film with high resolving power, was in use by 1926 and was labeled type 1503. Prior to the introduction of this stock, any necessary duplicate negative was produced from an ordinary projection print through the use of par-speed negative, a process that resulted in significant graininess and noticeable halation surrounding dark objects in a light field ("Mackie line").10
Cellulose acetate film base ("safety stock") was available throughout this period but was generally restricted to non-theatrical use, as in release prints provided for schools and hospitals or for home movies. Theatrical filmmakers continued to use the flammable cellulose nitrate stock until 1950 because, despite the attendant fire hazard, it was far less prone to shrinkage and curling, which were especially severe problems in this period.11
Initially, motion-picture cameras were manufactured (or at least owned) by the producing companies. Biograph and Vitagraph, for example, had unique cameras dating from the industry's earliest period, and the movements of these cameras were founded on various key patents held by the corporations. But as these firms grew more interested in defending their patents than in improving their camera designs, European models of French, German, and English manufacture came to dominate the market. These machines were sold outright to any available customer, a list that soon included members of the Motion Picture Patents Company, independent firms, and ambitious individual cameramen. Since the cinematographer was held directly responsible for the optical quality of the film, these men had an interest in maintaining their own equipment and avoiding the "junk boxes" supplied by the studio camera department. They soon found that a new generation of American camera manufacturers, notably the Bell & Howell Corporation and the Mitchell Camera Company, were quite responsive to their needs and suggestions, and links were forged directly between these manufacturers and individual cinematographers, with the studios playing a much less prominent role.12
Of course, the existence, side by side, of so many cameras of disparate manufacture led to significant standardization problems. Carl Louis Gregory wrote as late as 1917 that "no two cameras can be used in the same production at the present time without having the frame line adjusted to one another." Because various cameras presented a different relation of frame line to perforations, a sequence that intercut footage taken by different cameras would appear to go "out of frame" at every cut. Some firms tried to standardize all the cameras under their control, but those cinematographers who were suspicious of studio camera departments insisted on using their own equipment. These departments had improved greatly by the end of the silent period, when they were capable of providing the finest equipment for their permanent staff. By 1927 only the top cameramen could afford the investment of over
$10,000 required for some complete outfits, but many free-lancers did continue to earn a good living, especially if they owned some unusual piece of apparatus not in the typical studio collection.13
In a 1915 ad in the Moving Picture World, the Motion Picture Apparatus Company of New York offered for sale "The Better Makes of Motion Picture Cameras," namely the Pathé, Moy, Prestwich, and Prevost. The Pathé studio model had been hailed as "the most popular camera world-wide … until after the First World War." A wooden camera encased in black leather, this rugged but inexpensive piece of equipment was noted for its curious, rear-positioned hand crank and external 400-foot film magazines. The film movement was the original Lumière harmonic cam, produced by Pathé under license after Lumière left the camera-manufacturing field. The camera came with a hooded Newtonian range finder and film footage counter. The Moy (or Moyer) was a British camera, whose film magazines were mounted internally, while the Prestwich was a very early British design capable of doubling as a printing machine. Designed specifically to circumvent American patents, the Prevost was assembled mainly from Pathé parts, but replaced the Pathé movement with a pear-shaped cam lobe of its own. These were typical of the cameras that would have been available from a general dealer.14
Not mentioned are two French cameras very commonly used at the Fort Lee studios, the Éclair and the Debrie, both of which had interior film magazines. The Éclair movement was similar to the Pathé, but the camera boasted a prismatic focusing unit, which eliminated the necessity for ground-glass focusing, common in most cameras of the day. The Debrie "Parvo" model had been introduced in 1908 and was highly regarded throughout the silent era for its precise workmanship and fine materials. Its unique reciprocating gear movement was highly accurate, and focusing was possible through a ruby window during cranking itself. Because it lacked a lens turret and a straight-line film feed, the Debrie never became a significant factor in the Hollywood studios but various models were favored by newsreel cameramen and European studio producers.15
The first camera to offer real competition to the Pathé studio model was the Bell & Howell 2709, introduced in 1911–1912 but not widely used until 1915. Now considered "one of the most important pieces of cine machinery ever designed," this was the first high-precision, all-metal 35-mm motion-picture camera. The 2709 film movement had fixed non-moving registration pins and an intermittent motion mechanism so precise that many are still used for special-effects work today (the production line ran until 1958). Its turret lens allowed "making close-up views without budging the camera from its position," an interesting notion that prefigures one use for the modern zoom. Indeed, just about the only useful gadget missing from the 2709 was a cranking-speed indicator. The introduction of this camera changed cinematography from a tricky and inexact art to a science in which specific effects could be achieved with absolute predictability.16
But improvements were still possible, and in 1920 Charles Rosher demonstrated on The Love Light the prototype of the Mitchell camera that would soon force the Bell & Howell from the studios. The key feature of this camera was its unique focusing device, a rackover system invented by John Leonard that allowed the cameraman to frame and compose directly through the lens without moving either lens or aperture. This rackover was accomplished by turning a handle that shifted the entire camera body to the right, placing the finder immediately behind the taking lens. A similar operation with the Bell & Howell not only was time-consuming but risked misaligning the taking lens in the process.17
George Mitchell, who had acquired Leonard's patent, devised a three-cam movement of great accuracy, one that would prove easier to silence than the Bell & Howell when talking pictures arrived. He also incorporated masks, irises, and mattes directly
into the camera body, a substantial increase in convenience. These special qualities made the Mitchell's dominance in the American studios inevitable, but the fact that its designer was a cameraman, and its factory was in Los Angeles, did not hurt matters. The gossip columns of the American Cinematographer often noted with pride that this or that cameraman had invested in a Mitchell: it was a local product that made good. Bell & Howell eventually introduced a less efficient "shiftover" of its own, but the general convenience of the Mitchell had already won the market.18
Various other cameras were used throughout this period for special applications. According to Carl Louis Gregory (1927), the Universal was "the first moderate priced camera to stand the test of time." Solidly constructed of wood and metal, it was widely used by the government during the First World War and became a favorite of industrial filmmakers and explorers. Far superior was the Akeley, an all-metal camera designed by Carl Akeley of the American Museum of Natural History for use on his field expeditions. In order to better follow moving objects at a distance, Akeley replaced the usual pan and tilt cranks with a single handle. His viewfinder was paired with the taking lens and so mounted that it adjusted comfortably to the eye no matter how the camera was tilted. A focal-plane shutter maximized the light for exposure. Robert Flaherty used an Akeley on Nanook of the North, but "Akeley specialists" soon appeared in the studios as well, to film aerial dogfights and cowboy chases. These men were so specialized in the use of their equipment that they were listed separately on ASC rosters, like stills cameramen.19
In 1925 Bell & Howell introduced the 35-mm Eyemo, a spring-wound, hand-held camera patterned after the 16-mm Filmo, which they had marketed since 1923. Initially presented only as a news camera, it was quickly taken up by cameramen like Dan Clark "to get to difficult places" on Tom Mix films. Soon Bell & Howell began promoting this use in Hollywood with advertisements like that in the January 1927 American Cinematographer, showing Cecil B. DeMille using an Eyemo during the filming of King of Kings.20
Experiments in the hand-held use of a motorized Debrie Parvo and a Moy Aerial Camera (which contained a gyroscopic stabilizer) were conducted at the Paramount Astoria studio in 1925–1926, but these systems were too clumsy for general use. The Debrie Sept and the Zeiss-Ikon Kinamo also failed to find a market in America, essentially because of their limited film capacity. The DeVry, introduced in 1925, did find some acceptance among newsreel cameramen and wealthy amateurs, but the Eyemo remained the only practical hand-held professional motion-picture camera throughout the silent period and continued to be successfully marketed until 1970.21
A major reason for the success of the Bell & Howell and Mitchell cameras was the superiority of their turreted lens system to the threaded lens mount of the Pathé or even the Debrie's bayonet mount. Lenses were often marketed separately from camera bodies and by 1925 were commonly supplied at speeds of f/2.3, f/2.7, or f/3.5. Interior views of the Scopes trial were successfully filmed that year with an f/2.7 lens. By 1926, a lens of f/1.6 was considered "quite common" by the SMPE, and the following year an f/1.5 lens was claimed to be the fastest available. By the end of the silent period there was a great demand for fast lenses of f/1.5 to f/2.5, which allowed savings in lighting costs, better results in bad weather, and a greater range of artistic effects. Of the various specialty lenses, one of the most important was the Struss Pictorial Lens, originally developed by Karl Struss as a still-camera lens capable of producing a Photo Secessionist effect without the bother of negative manipulation.
John Leezer was the first to use it in motion-picture work, on The Marriage of Molly O (1916).22
By the late 1920s, cinematographers were quite fond of the "softly diffused pictures" they could obtain with the aid of the Struss Pictorial Lens and similar devices, but the rest of the industry did not share their feelings. When Joseph Dubray of the ASC praised such techniques before a Hollywood meeting of the Society of Motion Picture Engineers, he was attacked by various members who claimed that "fuzzy pictures are annoying to look at" and that "the ordinary man on the street wants his pictures clear." Indeed, when the negative of Street Angel was sent to the Fox lab in New York in 1928, it was originally returned as unprintable, although Hollywood cinematographers saw fit to honor Ernest Palmer's heavy use of fog filters here with an Academy Award nomination.23
The development of motion-picture camera lenses affected the work of the stills photographer as well. Despite the fact that these images have always been crucial in promoting current films or recalling past releases, little has been written about the significance of the motion-picture still. Stills were needed for advertising, marketing, production reference work, and trick photography. The first photographers employed to shoot scene stills posed the cast specially at the end of each action. Since
this practice was costly in terms of time and money, photographers were told to shoot during the action. But the eight-by-ten view cameras then in use required a twenty-inch lens to duplicate the field of a 35-mm motion-picture camera with a two-inch lens. An f/6 lens of nineteen-inch focal length was the best available for still work, and by using fast film, a focal-plane shutter, and exposures of one-fifth of a second, "fairly good results" could be obtained if the cinematographer was shooting at around f/4.5.24 But when motion-picture lenses began to increase in speed, the stills men could not keep up, and scene stills once again required posing. Glamour photography done in portrait studios for advertising and publicity purposes was modeled on the work of the most fashionable society photographers. After the war, it was not unusual to see Baron de Meyer, Arnold Genthe, or Edward Steichen handling a portrait sitting for a top star, and their style was carried on for many years by George Hurrell, Clarence Sinclair Bull, Ruth Harriet Louise, and many other Hollywood studio photographers.
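The focal-length figures above follow from a simple proportionality: to cover the same field of view, a lens's focal length must scale with the width of the recording format. Taking the 35-mm frame as roughly one inch wide and the still camera's plate as roughly ten inches wide (approximate values, assumed here only for illustration), the arithmetic runs:

```latex
% Field of view is set by the ratio of format width w to focal length f,
% so matching fields requires f_still / f_cine = w_still / w_cine.
f_{\mathrm{still}} = f_{\mathrm{cine}} \cdot \frac{w_{\mathrm{still}}}{w_{\mathrm{cine}}}
\approx 2\,\mathrm{in} \times \frac{10\,\mathrm{in}}{1\,\mathrm{in}} = 20\,\mathrm{in}
```

which is consistent with the twenty-inch still lens cited as the equivalent of a two-inch cine lens.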
Cinematographers generally calculated their exposures by eye, on the basis of their previous experience. A short strip of test film could immediately be developed if necessary; a sign posted in the Universal camera department offered the sage advice, "If in doubt, shoot at 5.6." It was a matter of professional pride to be able to produce a negative of uniform density entirely without recourse to meters, but several were available. The Harvey Calculator was little more than an exposure table operated like a slide rule. The Watkins Actinometer measured overall actinic power with the aid of a sensitive paper that darkened on exposure to light. Most satisfactory was the Cinophot, a visual extinction meter that measured reflected light. Unfortunately, reading an extinction meter is a largely subjective task, and various users could report different results. "Because of this, the old school photographers refer to exposure meters as 'Guessometers,'" noted Carl Louis Gregory in 1927.25
Three main varieties of artificial lighting were employed in the studios during this period: mercury vapor tubes, arc lamps, and incandescent units. A 1915 survey found that fifty American studios employed some form of artificial lighting, forty-three of them using mercury vapor tubes alone or in combination with other units. Known as
Cooper-Hewitts, the mercury tubes were introduced as early as 1905. Each assemblage contained a bank of eight tubes capable of throwing "a mass of light upon the scene similar to that from a fair size window or skylight." Arranged in rows directly overhead, or angled slightly for front lighting, these units produced a flat, undifferentiated flow of light. Since the units were originally introduced to assist (and later supplant) sunlight diffused by muslin scrims, this was exactly what the filmmakers required. An added benefit was that the light given off by the Cooper-Hewitts was very rich at the blue and ultraviolet end of the spectrum, and almost absent at the red end. This dovetailed remarkably with the photosensitive properties of orthochromatic negative. "A scene which has been photographed with mercury vapor light invariably shows better modeling and tone relationships than subjects which have been made with other light sources," reported the head of the Ansco Research Laboratories in 1922.26
White-flame carbon arcs, adapted from theatrical lighting units and capable of concentrating a volume of light in a limited area, were introduced in 1912 and marketed widely by M. J. Wohl and Company and the Kliegl Brothers. Cecil B.
DeMille recalled borrowing such a lamp from a downtown Los Angeles theater to produce a dramatic effect in The Warrens of Virginia (1915), which was his eighth film but purportedly his first to use artificial light. By 1916–1917 Sun-Arcs (similar to naval searchlights) and twin-arc broadsides were being used to illuminate large sets and create more sculptural lighting effects. According to the New York Times, cameraman Harry Fischbeck developed "a new system of lighting, in which a preponderance of spotlights are used. He obtains his effects of highlights and shadows by employing spotlights as an artist uses a brush and colors on the canvas. The basic idea is to make each picture scene look like a painting, with the characters standing out in bold relief." Fischbeck used this system quite conspicuously on Monsieur Beaucaire (1924).27
But compared to Cooper-Hewitts, arcs were dirty (hot ash floating through the air was a real problem), tremendously hot, and so labor-intensive that each unit required its own electrician. In addition, the problem of "Klieg eyes," a painful redness and swelling that could incapacitate the performers, was eventually traced to the powerful ultraviolet rays given off by the arcs. The use of thirty-seven Sun-Arcs to light the cathedral set for The Hunchback of Notre Dame (1923) may have been spectacular, but something else was required for more practical studio shooting.28
By the late teens, incandescent tungsten lamps were being used for close-ups and other special purposes by Lee Garmes and a few others, but existing units were not available in suitably high wattages. Even worse, their actinic quality was so heavily weighted to the red end of the spectrum that they were relatively inefficient when used with orthochromatic negative. Serious development of incandescent lamps for motion-picture work did not begin until 1920, when the actress Maude Adams, who was promoting a color-film system, appealed to General Electric for a new type of lighting unit. This eventually resulted in the 10K and 30K tungsten Mazda lamps, later known more generically as "inkies." As early as 1922, Victor Milner had hoped that incandescent lamps might soon replace arcs, which he found "glary," but little real progress was made until 1927.29
That year, a tremendous surge in the use of incandescent lamps was fueled by the dramatic rise in the use of panchromatic stock. This move knocked the Cooper-Hewitts out of the market and temporarily made arcs the dominant studio lighting units. But in addition to their previously mentioned problems, arcs were too directional and difficult to diffuse to be used as the sole unit in the cameraman's lighting kit. They would also prove unsuitable for early talkies, since their operation produced a crackling on soundtracks that engineers had not yet learned to eliminate. In 1927 the Mole-Richardson Company was formed in Hollywood to market incandescent units with a wide range of motion-picture applications, and existing production conditions caused their sales to balloon immediately (see fig. 5.1).30
Even without the sound-film problem that restricted the use of arcs, Mazda lamps dominated the last days of the silent picture, and films such as The Little Shepherd of Kingdom Come, Two Lovers, Show Boat, Sins of the Fathers, Masks of the Devil (all 1928–1929), and the original silent version of Hell's Angels were shot entirely or mainly with Mazda lights. Not only did the system have innate mechanical advantages, but operational costs were far lower. The number of men required to handle the lighting equipment on an average set dropped from twenty or thirty to as low as eight or twelve, while electric bills were reduced by one-third to one-half.31
It should be remembered that the studio itself also functioned as a kind of lighting unit. Greenhouse construction was characteristic of the Fort Lee studios in the mid teens, with glass roof and walls maximizing the available New Jersey sunlight. Overhead scrims served to diffuse this light, and if it began to fail, Cooper-Hewitts could be called in as a supplement. Glass studios were less frequently seen in California, where better weather conditions allowed large open-air stages, such as the one at Universal City, to operate well into the teens. Enclosed studios entirely dependent on artificial lighting had been known in the East for many years, but with the
introduction of Klieg and Wohl arcs, this curse suddenly became a blessing: cinematographers could ignore the vagaries of sunshine and could paint with their own light. By the late teens, many of the greenhouse studios were feverishly being painted over.32
The proliferation of arc lighting in this period turned the average stage floor into a jungle of electric cables. Studio wiring followed the "wall pocket" system, with 100-amp pockets of two or three lines placed every twenty or thirty feet around the stage perimeter. Lighting for a stage would be centrally controlled from one master switchboard, no matter how many companies were working simultaneously. Each would have to shout or otherwise signal its orders to the main board. In 1920 remote-control systems were introduced in New York at Famous Players-Lasky's Amsterdam Opera House and Hearst's International studio. In this scheme the master switchboard was replaced by a conductor box capable of being operated remotely by each company. Power was carried overhead on runways, clearing the floors of cable. A small push-button unit controlling some six contactors was dropped from the runway near the director and cameraman, who could now act independently of the main board. Derivatives of this system soon appeared in the Famous Players-Lasky Astoria studio, the Fox New York studio, and the Metro Hollywood studio.33
The editing process was the studio worker's final contact with the film, although for most of the silent period little of the apparatus involved here was worthy of the name "technology." A light table, a pair of rewinds, and a splicing block were all the specialized equipment needed to edit Intolerance or Greed. Off-the-shelf supplies included a pair of scissors to cut the film, a razor blade to scrape the emulsion, and a bottle of film cement. Miniaturized viewing machines to aid novices in admiring their work had been available for years, but professional pride kept most film cutters away from such devices. A good film editor could judge pacing and rhythm simply by pulling the film through his or her fingers, but the Moviola, a device that could achieve this effect far more accurately, began to force a change around 1924. This machine, developed by Ivan Serrurier, was essentially a motorized version of the earlier viewers, capable of running at variable speeds in either direction. It also had the benefit of proper illumination. Originally it was used only with open spools of film, but by 1928 the Model D, or "Director's Model" Moviola, boasted 1,000 foot take-up and feed reels. Still resisted during the silent era by many traditional film editors, the Moviola would prove invaluable with the introduction of talking pictures.34
Laboratory handling of motion-picture negatives and positives underwent significant changes during this period, as procedures advanced from the nearly handcrafted methods of earlier days to the fully automated systems that would be required by talkies.
The rack-and-tank system dominated laboratory practice for many years. In the dark room, exposed negative would be wound in spiral fashion onto wooden frames or racks, emulsion side out, each rack being about 4½ feet square and capable of holding about 200 feet of film. A pair of racks would be dipped into a deep, narrow tank containing about 110 gallons of developing solution (generally Eastman 16, or some other member of the metol-hydroquinone group) until the handler judged it to have achieved the proper density. He would do this by periodically withdrawing the rack and examining the film against a ruby light. After a session in a washing tank, the rack of film was transferred to a fixing bath of sodium hyposulphite until all the active silver salts had dissolved out. Eight to twelve racks at a time were then placed in a very large washing bath, often located outdoors, and thoroughly rinsed with running water. Finally, each rack was attached to a wooden frame, or "horse," and the film was unrolled onto huge revolving wooden drying drums (an intermediate step sometimes used in the early days added a glycerine bath before drying to guard against excessive moisture loss).35
The potential for mechanical or chemical failure, or human error, was quite high. "It is," wrote one lab man in 1923, "an impossibility to preserve an exceedingly careful attitude in a number of workmen who are by nature of their work wet and uncomfortable." As the film became soaked in developer, it would expand, thus requiring the technicians to tighten thumb screws on the rack to keep it from slipping off. Conversely, as it dried, it began to shrink, and the screws all needed to be reset again. So much handling caused scratching and tearing problems. There were difficulties with non-uniformity of development, especially since the racks held such
short lengths of film. Frequent exposure to air created dust problems and air-bell marks (spots caused by evaporating droplets of water that kept developer from the film). Rack marks—dark bands on the film where it had been wrapped over the top or bottom of a rack—caused rhythmic flashing onscreen. Laboratory specialists as late as 1925 felt that "much of the film shown in the present day theatre" suffered from such visible defects.36
Kodak had installed Gaumont equipment for automatic processing of positive film as early as 1913, but not until 1920 did the Spoor-Thompson machine (which had the capacity to correct for shrinkage during processing) begin to make an impact on general laboratory practice. The Erbograph, in use soon after, was a horizontal-feed machine that could even perform messy tinting and toning operations. Its manpower savings were also considerable. Not until the end of the silent era, however, would filmmakers trust the development of motion-picture negatives to any automated system. Old-time laboratory men, who were leery of machines damaging irreplaceable camera negatives, also felt that "hand-developing" gave them an opportunity to correct exposure problems that might have occurred during photography. In 1925 Alfred B. Hitchens, technical director of the Duplex Motion Picture Laboratories, argued that it was the cinematographer's responsibility to provide a properly exposed negative and that the greatest contribution a laboratory could make was to guarantee consistency of development. Criticized by those who felt that every negative needed uniform density throughout (so-called "one-light" negatives), he answered that negatives could be timed, with varying densities compensated for by automatic light changes during printing. Nevertheless, such a system was slow to catch on in Hollywood, where most negative was processed. Not until 1927 did Universal install an improved Spoor-Thompson machine for automatic negative developing. Capable of processing 4,000 feet of film per hour, it was first used on The Man Who Laughs and resolved most of the problems associated with the rack-and-tank method. Negative timing, of course, continued to be done by eye.37
Little change occurred during these years in the actual printing of release positives. The most commonly used machines were the Duplex, a step printer, and the Bell & Howell Model D, a continuous printer introduced as part of the Bell & Howell system in 1911. The Duplex featured a complicated intermittent movement in which each frame was stopped and printed individually. It was especially suited to an age when perforations and frame lines were not yet completely uniform. The Bell & Howell was a continuous-contact printer in which both positive and negative traveled in unison around a printing sprocket. Its speed of 60 feet per minute was triple that of most step printers. Still fairly labor-intensive, the machines were fed by women who worked all day in the weak glow of a ruby light. A range of printing densities could be selected and automatically programmed for each reel, with the changes triggered by notches cut in the side of the negative. The Duplex, for example, could provide a range of eighteen different densities and change these densities eighteen times per sitting.38
P. M. Abbott, in his 1926 report to the Academy of Political and Social Science, stated that "relatively little equipment was used in film exchanges," the business being for the most part devoted to distributing, inspecting, repairing, and storing circulating prints. Exchanges shared much of what apparatus they did use with laboratories and theaters, notably the rewinds used for inspecting the film and the reels it was wound onto. It was traditional, however, for exchanges to mount their films on the worst available reels. "I myself stood beside the manager of an exchange supplying dozens of theaters," reported F. H. Richardson, "and we have both watched the winding of a new roll of film just received from the producer on a flimsy, rickety, bent up, decrepit reel, which in the process of a single winding of the film would cause more damage to the same than would cover the cost of a fairly good new reel." Projectionists would remove the film from these "exchange reels" whenever possible, but the damage might well be done already, as Richardson suggests.39
Most exchanges were equipped with the Bell & Howell Standard Film Splicing Machine soon after it was introduced in the mid 1920s, but their employees were often antagonistic to any such mechanical splicing device. The Vidaver Film Inspection Machine was available in 1924, but theaters seemed more interested in this first non-manual examining device than were the exchanges. Properly equipped or not, the performance of exchange workers seems to have left much to be desired. Richardson reports "reels loaded with film taken from the shipping case by exchange employees and literally thrown, or tossed, a distance of fully six feet to a board-top
table." Exchange workers may have had to deal with only a few kinds of technical apparatus, but the way they used, or abused, these items could easily result in the "tangled mass in the shipping case" that not infrequently arrived at local theaters.40
Theater equipment included most of the same film-handling devices used by exchanges, namely splicers, rewinds, and various examining machines. Some projectionists would supply themselves with a foot-candle meter to measure screen-illumination intensity. All booths would have some form of magic lantern to project slides for various purposes, and atmospheric theaters would use the Brenograph, which could project everything from song slides to the Aurora Borealis. Ben Hall supplies the quintessential Brenograph tag line: "Please Do Not Turn On the Clouds Until the Show Starts. Be Sure the Stars Are Turned Off When Leaving."41
But, of course, projecting the film remained the theater's most crucial mechanical task, the final technological link between filmmaker and audience. Because of a burst of new projector designs in the years just prior to 1915, only a few new machines were introduced into the American market during the period under discussion. While the Motiograph, Powers Cameragraph, and Simplex projectors dominated the field, it should be remembered that many theaters chose less elaborate machines, such as the American Standard or the Baird, or continued to make do with surviving models from the early days of cinema. David Hulfish's Motion Picture Work, a 1913 manual that had wide circulation in the late teens, still contained detailed descriptions of such antique apparatus as the Selig Polyscope, Edengraph, and Lubin projectors. The same 1915 issue of the Moving Picture World that carried the camera ads referred to earlier included an announcement from the Amusement Supply Company, which identified the major brands of projectors as Powers, Motiograph, Simplex, and Edison. On another page, the same firm offered the following rebuilt machines for sale: a 1908 model Motiograph for $60, an Edison Exhibition model for $65, a Powers No. 5 for $75, a Powers No. 6 for $115, and a 1911 Motiograph for $125. A rival firm, the Stern Manufacturing Company, offered floor samples of the Simplex or Powers 6A for $185. The Powers was also available from them with motor drive at $230.42
The Motiograph was introduced in 1908, a development of A. C. Roebuck's Optigraph, which had been one of the earliest American projectors to be widely distributed (via the Sears, Roebuck Company). The Motiograph No. 1 was the first projector having all gears enclosed, as well as the first in which the movement could be easily removed for cleaning or repairs. This Geneva-type movement operated an intermittent sprocket wheel known as the "star and cam," and the entire movement assembly slid up and down for framing. While the earliest models seem unusually slight, the Motiograph was constantly under development throughout this period. The addition of cylindrical rear shutters in 1928 gave the Motiograph a decided advantage over its competition when sound-on-film arrived by increasing the amount of available light. Although Don Malkames could write in 1957 that the Motiograph "is still considered one of the finest projectors manufactured today," it was relatively ignored during the 1920s. James Cameron neglects to mention the Motiograph in his encyclopedic Motion Picture Projection (perhaps because the manufacturer failed to buy a display ad in the book), but T. O'Connor Sloane leaves the machine out of his book as well.43
The machines that do merit detailed description in such manuals are invariably the Simplex and the Powers Cameragraph. The most significant of the Cameragraphs in use during this period were No. 5 and No. 6, both on the market by 1909. The Powers Cameragraph No. 5 incorporated a traditional Geneva movement with a four-slot star and a one-pin cam, kept in balance by a heavy flywheel mounted directly on the drive shaft. It was available with two styles of safety shutter and boasted fireproof feed and takeup magazines. The No. 6 introduced a remarkable new movement called the "pin cross." A four-armed cross mounted on the drive shaft carried four pins that engaged a revolving cam ring, thus effecting movement of the sprocket shaft. This system afforded a longer exposure without added strain on the mechanism. Before the Cameragraph No. 7 could be placed on the market, the Nicholas Power Company merged with the International Projector Company, and the trademark disappeared. Nonetheless, the Powers name was extremely respected throughout the silent period. The company constantly upgraded the basic equipment by offering devices such as the Nupower motor (a universal motor capable of running on either AC or DC), the Powers speed indicator, which gave projector speed in feet per minute and minutes per reel, a film-footage recorder, and a remote instrument panel that could provide readings on current and voltage regulation of the arc, as well as projector speed. When Grauman's Chinese Theatre opened in 1927, there were three Powers Cameragraphs in the booth.44
The Simplex projector was introduced by the Precision Machine Company in 1911 and eventually, with the demise of the Nicholas Power Company, came to dominate the market. "It was the first completely enclosed mechanism with center frame bearings. It had means for adjusting the revolving shutter during operation, a new style of sliding gate instead of the former hinged types, a new type of fire shutter and governor, and a precision-focusing and lens-mount attachment," wrote Malkames. The Simplex Model S employed the standard arc lamp, while the Model Â was adapted for Mazda projection. As with the Powers, a motorized variable-speed control was also available. The intermittent used was the traditional Geneva cross. By the end of the silent era, the Simplex was the projector of choice in theaters such as the Roxy. The introduction of the Simplex marked the arrival of the modern motion-picture projector, and although it reached the market several years before the period under discussion, no major improvements were seen in American projection design until the introduction of the rear shutter by Motiograph in 1928 (a feat matched by the Super Simplex of 1930).45
A statistical analysis of his extant writings suggests that technology was the most important of Leonardo’s varied interests. Indeed, it is revealing to compare the volume of his technological writings with that of his purely artistic work. Of his paintings, fewer than ten are unanimously authenticated by art scholars. This evident disinclination to paint, which even his contemporaries remarked upon, contrasts strongly with the incredible toil and patience that Leonardo lavished upon scientific and technical studies, particularly in geometry, mechanics, and engineering.
Documentary evidence indicates that in his appointments Leonardo was always referred to not only as an artist but also as an engineer. At the court of Ludovico il Moro he was “Ingeniarius et pinctor,” while Cesare Borgia called him his most beloved “Architecto et Engegnero Generale” (1502). When Leonardo returned to Florence, he was immediately consulted as a military engineer and sent officially “a livellare Arno in quello di Pisa e levello del letto suo” (1503).1 In 1504 he was in Piombino, in the service of Jacopo IV d’Appiano, working on the improvement of that little city-state’s fortifications.2 For Louis XII he was “notre chier et bien aimé Leonard de Vinci, notre paintre et ingenieur ordinaire” (1507).3 When in Rome, from 1513 to 1516, his duties clearly included technical work, as documented by drafts of letters to his patron Giuliano de’ Medici, found in the Codex Atlanticus. Even his official burial document refers to him as “Lionard de Vincy, noble millanois, premier peinctre et ingenieur et architecte du Roy, mescanichien d’Estat …” (1519).4 The surviving notebooks and drawings demonstrate Leonardo’s lifelong interest in the mechanical arts and engineering.
Leonardo’s scientific and technological activities were well known to his early biographers, even if they did not approve of them. Paolo Giovio’s short account of Leonardo’s life (ca. 1527) contains a significant phrase: “But while he was thus spending his time in the close research of subordinate branches of his art he carried only very few works to completion.”
Another biographer, the so-called “Anonimo Gaddiano” or “Magliabechiano,” writing around 1540, said that Leonardo “was delightfully inventive, and was most skillful in lifting weights, in building waterworks and other imaginative constructions, nor did his mind ever come to rest, but dwelt always with ingenuity on the creation of new inventions.”
Vasari’s biography of Leonardo, in his Lives of the Painters, Sculptors and Architects (1550; 2nd ed., 1568), reflects the widespread sentiments of his contemporaries, who were puzzled by the behavior of a man who, unconcerned with the great artistic gifts bestowed upon him by Providence, dedicated himself to interesting but less noble occupations.
Vasari’s testimony concerning Leonardo’s widespread technological projects (confirmed by Lomazzo) is important in assessing the influence of Leonardo on the development of Western technology:
He would have made great profit in learning had he not been so capricious and fickle, for he began to learn many things and then gave them up …he was the first, though so young, to propose to canalise the Arno from Pisa to Florence. He made designs for mills, fulling machines, and other engines to go by water … Every day he made models and designs for the removal of mountains with ease and to pierce them to pass from one place to another, and by means of levers, cranes and winches to raise and draw heavy weights; he devised a method for cleansing ports, and to raise water from great depths, schemes which his brain never ceased to evolve. Many designs for these motions are scattered about, and I have seen numbers of them.…His interests were so numerous that his inquiries into natural phenomena led him to study the properties of herbs and to observe the movements of the heavens, the moon’s orbit and the progress of the sun.
(It is characteristic of Leonardo’s contemporary critics that Vasari, giving an account of Leonardo’s last days, represents him as telling Francis I“the circumstances of his sickness, showing how greatly he had offended God and man in not having worked in his art as he ought.”)
Lomazzo, in his Trattato della pittura (Milan, 1584), tells of having seen many of Leonardo’s mechanical projects and praises especially thirty sheets representing a variety of mills, owned by Ambrogio Figino, and the automaton in the form of a lion made for Francis I. In his Idea del tempio della pittura (Milan, 1590), Lomazzo mentions “Leonardo’s books, where all mathematical motions and effects are considered” and his “projects for lifting heavy weights with ease, which are spread over all Europe. They are held in great esteem by the experts, because they think that nobody could do more, in this field, than what has been done by Leonardo.” Lomazzo also notes “the art of turning oval shapes with a lathe invented by Leonardo,” which was shown by a pupil of Melzi to Dionigi, brother of the Maggiore, who adopted it with great satisfaction.
Leonardo’s actual technological investigations and work still await an exhaustive and objective study. Many early writers accepted all the ingenious mechanical contrivances found in the manuscripts as original inventions; their claims suffer from lack of historical perspective, particularly as concerns the work of the engineers who preceded Leonardo. The “inventions” of Leonardo have been celebrated uncritically, while the main obstacle to a properly critical study lies in the very nature of the available evidence, scattered and fragmented over many thousands of pages. Only in very recent times has the need for a chronological perspective been felt and the methods for its adoption elaborated.5 It is precisely the earliest—and for this reason the least original—of Leonardo’s projects for which model makers and general authors have shown a predilection. On the other hand, the preference for these juvenile projects is fully justified: they are among the most beautiful and lovingly elaborated designs of the artist-engineer. The drawings and writings of MS B, the Codex Trivulzianus, and the earliest folios of the Codex Atlanticus date from this period (ca. 1478–1490). Similar themes in the almost contemporary manuscripts of Francesco di Giorgio Martini offer ample opportunity to study Leonardo’s early reliance on traditional technological schemes (Francesco di Giorgio himself borrowed heavily from Brunelleschi and especially from Mariano di Jacopo, called Taccola, the “Archimedes of Siena”); the same comparison serves to demonstrate Leonardo’s originality and his search for rational ways of constructing better machines.
While he was still in Florence, Leonardo acquired a diversified range of skills in addition to the various crafts he learned in the workshop of Verrocchio, who was not only a painter but also a sculptor and a goldsmith. Leonardo must therefore have been familiar with bronze casting, and there is also early evidence of his interest in horology. His famous letter to Ludovico il Moro, offering his services, advertises Leonardo’s familiarity with techniques of military importance, which, discounting a juvenile self-confidence, must have been based on some real experience.
Leonardo’s true vocation for the technical arts developed in Milan, Italy’s industrial center. The notes that he made during his first Milanese period (from 1481 or 1482 until 1499) indicate that he was in close contact with artisans and engineers engaged in extremely diversified technical activities—with, for example, military and civil architects, hydraulic engineers, millers, masons and other workers in stone, carpenters, textile workers, dyers, iron founders, bronze casters (of bells, statuary, and guns), locksmiths, clockmakers, and jewelers. At the same time that he was assimilating all available traditional experience, Leonardo was able to draw upon the fertile imagination and innate technological vision that, combined with his unparalleled artistic genius in the graphic rendering of the most complicated mechanical devices, allowed him to make improvements and innovations.
From about 1492 on (as shown in MS A; Codex Forster; Codex Madrid, I; MS H; and a great number of pages of the Codex Atlanticus), Leonardo became increasingly involved in the study of the theoretical background of engineering practice. At about that time he wrote a treatise on “elementi macchinali” that he returned to in later writings, citing it by book and paragraph. This treatise is lost, but many passages in the Atlanticus and Arundel codices may be drafts for it. Codex Madrid, I (1492–1497), takes up these matters in two main sections, one dealing with matters that today would be called statics and kinematics and another dedicated to applied mechanics, especially mechanisms.
Our knowledge of the technical arts of the fourteenth to sixteenth centuries is scarce and fragmentary. Engineers were reluctant to write about their experience; if they did, they chose to treat fantastic possibilities rather than the true practices of their time. The books of Biringuccio, Agricola, and, in part, Zonca are among the very few exceptions, although they deal with specialized technological fields. The notes and drawings of Leonardo should therefore be studied not only to discover his inventions and priorities, as has largely been done in the past, but also—and especially—for the insight they give into the state of the technical arts of his time. Leonardo took note of all the interesting mechanical contrivances he saw or heard about from fellow artists, scholars, artisans, and travelers. Speaking of his own solutions and improvements, he often referred to the customary practices. His manuscripts thus stand apart: in books of machines by authors of the periods both preceding and following Leonardo, projects for complete machines are presented without any discussion of their construction and efficiency.6 The only exception previous to the eighteenth-century authors Sturm, Leupold, and Belidor is represented by the work of Simon Stevin, around 1600.
As far as the evidence just mentioned shows, the mechanical engineering of times past was limited by factors of two sorts: various inadequacies in the actual construction of machines produced excessive friction and wear, and there was insufficient understanding of the possibilities inherent in any mechanical system. Leonardo’s work deserves our attention as that of the first engineer to try systematically to overcome these shortcomings; most important, he was the first to recognize that each machine was a composition of certain universal mechanisms.
In this, as in several other respects, Leonardo anticipated Leupold, to whom, according to Reuleaux, the foundations of the science of mechanisms are generally attributed.7 Indeed, of Reuleaux’s own list of the constructive elements of machines (screws, keys or wedges, rivets, bearings, plummer blocks, pins, shafts, couplings, belts, cord and chain drives, friction wheels, gears, flywheels, levers, cranks, connecting rods, ratchet wheels, brakes, pipes, cylinders, pistons, valves, and springs), only the rivets are missing from Leonardo’s inventories.
In Leonardo’s time and even much later, engineers were convinced that the work done by a given prime mover, be it a waterwheel or the muscles of men or animals, could be indefinitely increased by means of suitable mechanical apparatuses. Such a belief led fatally to the idea of perpetual motion machines, on whose development an immense amount of effort was wasted, from the Middle Ages until the nineteenth century. Since the possibility of constructing a perpetual motion machine could not, until very recent times, be dismissed by scientific arguments, men of science of the first order accepted or rejected the underlying idea by intuition rather than by knowledge.
Leonardo followed the contemporary trend, and his earliest writings contain a fair number of perpetual motion schemes. But he gave up the idea around 1492, when he stated, “It is impossible that dead [still] water may be the cause of its own or of some other body’s motion” (MS A, fol. 43r), a statement that he later extended to all kinds of mechanical movements. By 1494 Leonardo could say that
… in whatever system where the weight attached to the wheel should be the cause of the motion of the wheel, without any doubt the center of gravity of the weight will stop beneath the center of its axle. No instrument devised by human ingenuity, which turns with its wheel, can remedy this effect. Oh! speculators about perpetual motion, how many vain chimeras have you created in the like quest. Go and take your place with the seekers after gold! [Codex Forster, II2, fol. 92v].
Many similar statements can be found in the manuscripts, and it is worth noting that Leonardo’s argument against perpetual motion machines is the same that was later put forth by Huygens and Parent.
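The Codex Forster passage can be restated in modern terms (a modern gloss, not Leonardo’s own formalism): for a weight $W$ fixed to a wheel at radius $r$ from the axle, the turning moment about the axle depends only on the angle $\theta$ between the vertical through the axle and the line from axle to weight:

```latex
% Turning moment of a weight W at radius r, displaced by the angle
% \theta from the vertical line through the axle:
M(\theta) = W \, r \, \sin\theta
```

Since $M(0) = 0$, and any displacement produces a moment that pulls the weight back toward the vertical, the center of gravity settles directly beneath the axle and the wheel cannot sustain its own rotation, which is precisely the effect Leonardo says no human ingenuity can remedy.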
Another belief common among Renaissance engineers was that flywheels (called “rote aumentative”) and similar energy-storing and equalizing devices are endowed with the virtue of increasing the power of a mechanical system. Leonardo knew that such devices could be useful, but he also knew that their incorporation into a machine increased its demand for power instead of reducing it (Codex Atlanticus, fol. 207v-b; Codex Madrid, I, fol. 124r).
A practical consequence of this line of thought was Leonardo’s recognition that machines do not perform work but only modify the manner of its application. The first clear formulation of this was given by Galileo,8 but the same principle permeates all of Leonardo’s pertinent investigations. He knew that mechanical advantage does not go beyond the given power available, from which the losses caused by friction must be deducted, and he formulated the basic concepts of what are known today as work and power. Leonardo’s variables for these include force, time, distance, and weight (Codex Forster, II2, fol. 78v; Codex Madrid, I, fol. 152r). One of the best examples is folio 35r of Codex Madrid, I, where Leonardo compares the performance of two lifting systems; the first is a simple windlass moved by a crank, capable of lifting 5,000 pounds; the second is also moved by a crank, but a worm gear confers upon it a higher mechanical advantage, raising its lifting capacity to 50,000 pounds. Leonardo affirms that operators of both machines, applying twenty-five pounds of force and cranking with the same speed, will have the load of 50,000 pounds raised to the same height at the end of one hour. The first instrument will raise its load in ten journeys, while the second will lift it all at once. The end result, however, will be the same.
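The comparison from Codex Madrid, I, fol. 35r, can be checked with a few lines of modern arithmetic. The sketch below is illustrative only: the ten-foot lifting height and the assumption of an ideal, frictionless machine are mine, not the source’s, but the numbers for force and loads follow Leonardo’s example.

```python
def work_done(force_lb, crank_distance_ft):
    """Work = force x distance moved by the operator's hand (ft-lb)."""
    return force_lb * crank_distance_ft

force = 25.0      # lb applied at the crank, the same for both machines
load = 50_000.0   # lb total to be raised
height = 10.0     # ft lifting height (assumed for illustration)

# Machine 1: plain windlass lifting 5,000 lb per journey, so the full
# load takes ten journeys; mechanical advantage 5,000 / 25 = 200.
# For an ideal machine the hand must travel 200 ft per foot of lift.
crank_distance_1 = (5_000.0 / force) * height * (load / 5_000.0)

# Machine 2: windlass plus worm gear lifting all 50,000 lb at once;
# mechanical advantage 2,000, so the hand travels 2,000 ft per foot.
crank_distance_2 = (load / force) * height

print(work_done(force, crank_distance_1))  # 500000.0
print(work_done(force, crank_distance_2))  # 500000.0
```

Both machines demand exactly the same hand travel and the same total work, `load * height`; the worm gear changes only how the effort is distributed over the hour, which is Leonardo’s point.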
In the same codex Leonardo established rules of general validity: “Among the infinite varieties of instruments which can be made for lifting weights, all will have the same power if the motions [distances] and the acting and patient weights are equal” (Codex Madrid, I, fol. 175r). Accordingly, “It is impossible to increase the power of instruments used for weightlifting, if the quantity of force and motion is given” (ibid., fol. 175v).
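Leonardo’s comparison amounts to the modern identity work = force × distance: the same input effort, applied over the same time, delivers the same total lift however the gearing divides it. The short sketch below checks the arithmetic; the ten-foot lifting height is an illustrative assumption, not a figure from the manuscript.

```python
# Work = force x distance: Leonardo's two lifting systems compared.
# Assumption (hypothetical): each load is raised to the same height h.
h = 10.0  # feet, illustrative lifting height

# Simple windlass: 5,000 lb per journey, ten journeys in one hour.
work_windlass = 5_000 * h * 10   # foot-pounds delivered in one hour

# Worm-gear windlass: the full 50,000 lb in a single journey.
work_worm_gear = 50_000 * h * 1  # foot-pounds delivered in one hour

# Same 25-lb crank force at the same speed -> same work in the same time.
assert work_windlass == work_worm_gear
print(work_windlass)  # 500000.0 foot-pounds either way
```

The mechanical advantage changes only how the work is divided among journeys, not its total, which is exactly Leonardo’s rule of folio 175r.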
That Leonardo had an intuitive grasp of the principle of the conservation of energy is shown in many notes dispersed throughout the manuscripts. He tried to measure the different kinds of energy known to him (muscle power, springs, running and falling water, wind, and so forth) in terms of gravity—that is, using dynamometers counterbalanced by weights, anticipating Borelli and Smeaton. He even tried to investigate the energetic equivalent of gunpowder, weighing the propellant and the missile and measuring the range. The missile was then shot from a crossbow spanned with a given weight, which was then correlated with the quantity of gunpowder used in the first experiment (ibid., fol. 60r).
Leonardo was aware that the main impediment to all mechanical motions was friction. He clearly recognized the importance of frictional resistance not only between solid bodies but also in liquids and gases. In order to evaluate the role of friction in mechanical motions, he devised ingenious experimental equipment, which included friction banks identical to those used by Coulomb 300 years later. From his experiments Leonardo derived several still-valid general principles—that frictional resistance differs according to the nature of the surfaces in contact, that it depends on the degree of smoothness of those surfaces, that it is independent of the area of the surfaces in contact, that it increases in direct proportion to the load, and that it can be reduced by interposing rolling elements or lubricating fluids between the surfaces in contact. He introduced the concept of the coefficient of friction and estimated that for “polished and smooth” surfaces the ratio F/P was 0.25, or one-fourth of the weight. This value is reasonably accurate for hardwood on hardwood, bronze on steel, and for other materials with which Leonardo was acquainted.9
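Leonardo’s rules correspond to what is now written F = μN: sliding friction grows in proportion to the load, is independent of the apparent contact area, and depends only on the nature of the surface pair. A minimal sketch using his estimated coefficient of 0.25 (the loads are hypothetical):

```python
def friction_force(mu: float, load: float) -> float:
    """Sliding friction F = mu * N: proportional to the load,
    independent of the apparent area of contact."""
    return mu * load

# Leonardo's estimate for "polished and smooth" surfaces.
MU_LEONARDO = 0.25

# Doubling the load doubles the frictional resistance...
assert friction_force(MU_LEONARDO, 200) == 2 * friction_force(MU_LEONARDO, 100)

# ...and a 100-lb load resists with one-fourth of its weight.
print(friction_force(MU_LEONARDO, 100))  # 25.0
```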
Leonardo’s main concern, however, was rolling friction. Realizing that lubrication alone could not prevent rapid wear of an axle and its bearing, Leonardo suggested the use of bearing blocks with split, adjustable bushings of antifriction metal (“three parts
of copper and seven of tin melted together”). He was also the first to suggest true ball and roller bearings, developing ring-shaped races to eliminate the loss due to contact friction of the individual balls in a bearing. Leonardo’s thrust bearings with conical pivots turning on cones, rollers, or balls (ibid., fol. 101v) are particularly interesting. He also worked persistently to produce gearings designed to overcome frictional resistance. Even when they are not accompanied by geometrical elaborations, some of his gears are unmistakably cycloidal. Leonardo further introduced various new gear forms, among them trapezoidal, helical, and conical bevel gears; of particular note is his globoidal gear, of which several variants are found in the Codex Atlanticus and Codex Madrid, I, one of them being a worm gear shaped to match the curve of the toothed wheel it drives, thus overcoming the risk inherent in an endless screw that engages only a single gear tooth (ibid., fols. 17v-18v). This device was rediscovered by Henry Hindley around 1740.
Leonardo’s development of complicated gear systems was not motivated by any vain hope of obtaining limitless mechanical advantages. He warned the makers of machines:
The more wheels you will have in your instrument, the more toothing you will need, and the more teeth, the greater will be the friction of the wheels with the spindles of their pinions. And the greater the friction, the more power is lost by the motor and, consequently, force is lacking for the orderly motion of the entire system [Codex Atlanticus, fol. 207v-b].
Leonardo’s contribution to practical kinematics is documented by the devices sketched and described in his notebooks. Since the conversion of rotary to alternating motion (or vice versa) was best performed with the help of the crank and rod combinations, Leonardo sketched hundreds of them to illustrate the kinematics of such composite machines as sawmills, pumps, spinning wheels, grinding machines, and blowers. In addition he drew scores of ingenious combinations of gears, linkages, cams, and ratchets, designed to transmit and modify mechanical movements. He used the pendulum as an energy accumulator in working machines as well as an escapement in clockwork (Codex Madrid, I, fol. 61v).
Although simple cord drives had been known since the Middle Ages, Leonardo’s belt techniques, including tightening devices, must be considered as original. His manuscripts describe both hinged link chains and continuous chain drives with sprocket wheels (ibid., fol. 10r; Codex Atlanticus, fol. 357r-a).
Leonardo’s notes about the most efficient use of prime movers deserve special attention. His particular interest in attaining the maximum efficiency of muscle power is understandable, since muscle power represented the only motor that might be used in a flying machine, a project that aroused his ambition as early as 1487 and one in which he remained interested until the end of his life. Since muscles were also the most common source of power, it was further important to establish the most effective ways to use them in performing work.
Leonardo estimated the force exerted by a man turning a crank as twenty-five pounds. (Philippe de La Hire found it to be twenty-seven pounds, while Guillaume Amontons, in similar experiments, confirmed Leonardo’s figure; in 1782 Claude François Berthelot wrote that men cannot produce a continuous effort of more than twenty pounds, even if some authors admitted twenty-four.)10 Such a return seemed highly unsatisfactory. Leonardo tried to find more suitable mechanical arrangements, the most remarkable of which employ the weight of men or animals instead of muscle power. For activating pile drivers (Codex Leicester, fol. 28v [ca. 1505]) or excavation machines (Codex Atlanticus, fols. 1v-b, 331v-a), Leonardo used the weight of men who, by running up ladders and returning on a descending platform, would raise the ram or monkey. Leonardo used the same system for lifting heavier loads with cranes, the counterweight being “one ox and one man”; lifting capacity was further increased by applying a differential windlass to the arm of the crane (ibid., fol. 363v-b [ca. 1495]).
Until the advent of the steam engine the most popular portable prime mover was the treadmill, known since antiquity. Leonardo found the conventional type, in which men walk inside the drum, in the manner of a squirrel cage, to be inherently less efficient than one employing the weight of the men on the outside of the drum. While he did not invent the external treadmill, he was the first to use it rationally—the next scholar to analyze the efficiency of the treadmill mathematically was Simon Stevin (1586).
Leonardo also had very clear ideas about the advantages and the limitations of waterpower. He rejected popular hydraulic perpetual motion schemes: “Falling water will raise as much more weight than its own as is the weight equivalent to its percussion... But you have to deduct from the power of the instrument what is lost by friction in its bearings” (ibid., fol. 151r-a). Since the weight of the percussion, according to Leonardo, is proportional to height, and therefore to gravitational acceleration (“among natural forces, percussion surpasses all others… because at every stage of the descent it acquires more momentum”), this represents the first, if imperfect, statement of the basic definition of the potential energy Ep = mgh.
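In modern notation, the “weight of the percussion” Leonardo invokes is the potential energy released in the fall, Ep = mgh, less the frictional losses in the wheel’s bearings that he insists must be deducted. A minimal sketch of that accounting (the mass, head, and loss fraction are illustrative assumptions):

```python
G = 9.81  # m/s^2, gravitational acceleration

def usable_water_power(mass_kg: float, head_m: float, friction_loss: float) -> float:
    """Energy available from falling water: E_p = m * g * h,
    minus the fraction lost to friction in the bearings."""
    e_p = mass_kg * G * head_m
    return e_p * (1.0 - friction_loss)

# 100 kg of water falling 5 m, with 20% assumed lost to bearing friction.
print(usable_water_power(100, 5.0, 0.20))  # roughly 3,924 joules
```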
Leonardo describes hydraulic wheels on many pages of his notebooks and drawings, either separately or as part of technological operations. He continually sought improvements for systems currently in use. He evaluated all varieties of prime movers, vertical as well as horizontal, and improved on the traditional Lombard mills by modifying the wheels and their races and introducing an adjustable wheel-raising device (MS H; Codex Atlanticus, fols. 304r-b, v-d [ca. 1494]).
In 1718 L. C. Sturm described a “new kind” of mill constructed in the Mark of Brandenburg, “where a lot of fuss was made about them, although they were not as new as most people in those parts let themselves be persuaded….” They were, in fact, identical to those designed by Leonardo for the country estate of Ludovico il Moro near Vigevano around 1494.
It is noteworthy that in his mature technological projects Leonardo returned to horizontal waterwheels for moving heavy machinery (Codex Atlanticus, fol. 2r-a and b [ca. 1510]), confirming once more the high power output of such prime movers. His papers provide forerunners of the reaction turbine (Codex Forster, I2) and the Pelton wheel (Codex Madrid, I, fol. 22v); drawings of completely encased waterwheels appear on several folios of the Codex Atlanticus.
There is little about wind power in Leonardo’s writings, probably because meteorological conditions limited its practical use in Italy. Although it is erroneous to attribute the invention of the tower mill to Leonardo (as has been done), the pertinent sketches are significant because they show for the first time a brake wheel mounted on the wind shaft (MS L, fol. 34v). (The arrangement reappears, as do many other ideas of Leonardo’s, in Ramelli’s book of 1588 [plate cxxxiii].) Windmills with rotors turning on a vertical shaft provided with shield walls are elaborated on folios 43v, 44r, 74v, 75r, and 55v of Codex Madrid, II; and there can be no doubt that Leonardo became acquainted with them through friends who had seen them in the East.
In contrast with Leonardo’s scant interest in wind power, he paid constant attention to heat and fire as possible sources of energy. His experiments with steam are found on folios 10r and 15r of the Codex Leicester. His approximate estimation of the volume of steam evolved through the evaporation of a given quantity of water suggests a ratio 1:1,500, the correct figure being about 1:1,700. Besson in 1569 still believed that the proportion was 1:10, a ratio raised to 1:255 in the famous experiments of Jean Rey; it was not until 1683 that a better estimate—1:2,000— was made by Samuel Morland. Leonardo’s best-known contribution to the utilization of steam power was his “Architronito” (MS B, fol. 33r), a steam cannon. The idea is not as impractical as generally assumed, since steam cannons were used in the American Civil War and even in World War II (Holman projectors). It was Leonardo, and not Branca (1629), who described the first impulse turbine moved by a jet of steam (Codex Leicester, fol. 28v).
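Leonardo’s 1:1,500 estimate of the water-to-steam expansion ratio can be set against the other figures quoted above by simple arithmetic; the sketch below computes each historical estimate’s relative error against the modern value of roughly 1:1,700.

```python
MODERN_RATIO = 1_700  # approximate volume expansion of water into steam

# Historical estimates of the expansion ratio, as quoted in the text.
estimates = {
    "Besson (1569)": 10,
    "Rey": 255,
    "Leonardo": 1_500,
    "Morland (1683)": 2_000,
}

for name, ratio in estimates.items():
    error = abs(ratio - MODERN_RATIO) / MODERN_RATIO
    print(f"{name}: 1:{ratio}, off by {error:.0%}")
```

Run this way, Leonardo’s figure is off by about 12 percent, far closer than any published estimate before Morland’s.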
One of Leonardo’s most original technological attempts toward a more efficient prime mover is the thermal engine drawn and described on folio 16v of MS F (1508–1509), which anticipates Huygens’ and Papin’s experiments. In 1690 Papin arrived at the idea of the atmospheric steam engine after Huygens’ experiments with a gunpowder engine (1673) failed to give consistent results. Leonardo’s atmospheric thermal motor, conceived “to lift a great weight by means of fire,” like those of Huygens and Papin, consisted of cylinder, piston, and valve, and worked in exactly the same way.11
Leonardo’s studies on the behavior and resistance of materials were the first of their kind. The problem was also attacked by Galileo, but an adequate treatment of the subject had to wait until the eighteenth century. One of Leonardo’s most interesting observations was pointed out by Zammattio and refers to the bending of an elastic beam, or spring. Leonardo recognized clearly that the fibers of the beam are lengthened at the outside of the curvature and shortened at the inside. In the middle, there is an area which is not deformed, which Leonardo called the “linea centrica” (now called the neutral axis). Leonardo suggested that similar conditions obtained in the case of ropes bent around a pulley and in single as well as intertwisted wires (Codex Madrid, I, fols. 84v, 154v). More than two centuries had to pass before this model of the internal stresses was proposed again by Jakob Bernoulli.
Leonardo’s notebooks also contain the first descriptions of machine tools of some significance, including plate rollers for nonferrous metals (MS I, fol. 48v), rollers for bars and strips (Codex Atlanticus, fol. 370v-b; MS G, fol. 70v), rollers for iron staves (Codex Atlanticus, fol. 2r-a and b), semiautomatic wood planers (ibid., fol. 38v-b), and a planer for iron (Codex Madrid, I, fol. 84v). His thread-cutting machines (Codex Atlanticus, fol. 367v-a) reveal great ingenuity, and their principle has been adopted for modern use.
Not only did Leonardo describe the first lathe with continuous motion (ibid., fol. 381r-b) but, according to Lomazzo, he must also be credited with the invention of the elliptic lathe, generally attributed to Besson (1569). Leonardo described external and internal grinders (ibid., fols. 7r-b, 291r-a) as well as disk and belt grinding machines (ibid., fols. 320r-b, 380v-b, 318v-a). He devoted a great deal of attention to the development of grinding and polishing wheels for plane and concave mirrors (ibid., fols. 32r-a, 396v-f, 401r-a; Codex Madrid, I, fols. 163v, 164r; MS G, fol. 82v, for examples). Folio 159r-b of the Codex Atlanticus is concerned with shaping sheet metal by stamping (to make chandeliers of two or, better, four parts). The pressure necessary for this operation was obtained by means of a wedge press.
Leonardo’s interest in water mills has already been mentioned. Of his work in applied hydraulics, the improvement of canal locks and sluices (miter gates and wickets) is an outstanding example. He discussed the theory and the practice of an original type of centrifugal pump (MS F, fols. 13r, 16r) and the best ways of constructing and moving Archimedean screws. Some of these methods were based on coiled pipes (MS E, fols. 13v, 14r) like those seen by Cardano at the waterworks of Augsburg in 1541.
Leonardo left plans for a great number of machines which were generally without parallel until the eighteenth and nineteenth centuries. Some of these are the improved pile drivers (Codex Forster, II, fol. 73v; MS H, fol. 80v; Codex Leicester, fol. 28v) later described by La Hire (1707) and Belidor (1737), a cylindrical bolter activated by the main drive of a grain mill (Codex Madrid, I, fols. 21v, 22r), and a mechanized wedge press (ibid., 46v, 47r), which in the eighteenth century became known as the Dutch press. Particularly original are Leonardo’s well-known canal-building machines, scattered through the Codex Atlanticus—in Parsons’ opinion, “had Leonardo contributed nothing more to engineering than his plans and studies for the Arno canal, they alone would place him in the first rank of engineers for all time.” The knowledge that most of those projects were executed in the Romagna during Leonardo’s service with Cesare Borgia makes little difference.
Leonardo also designed textile machinery. Plans for spinning machines, embodying mechanical principles which did not reappear until the eighteenth century, are found on folio 393v-a of the Codex Atlanticus, as well as in the Codex Madrid, I (fols. 65v, 66r). The group represented in the Codex Atlanticus includes ropemaking machines of advanced design (fol. 2v-a and b), silk doubling and winding machines (fol. 36v-b), gig mills whose principle reappears in the nineteenth century (fols. 38r-a, 161v-b, 297r-a), shearing machines (fols. 397r-a, 397v-a), and even a power loom (fols. 317v-b, 356-a, 356v-a).
Leonardo was also interested in the graphic arts and in 1494 presented details of the contemporary printing press, antedating by more than fifty years the first sensible reproduction of such an instrument. Even earlier (around 1480) he had tried to improve the efficiency of the printing press by making the motion of the carriage a function of the motion of the pressing screw. Leonardo’s most interesting innovation in this field, however, was his invention of a technique of relief etching, permitting the printing of text and illustration in a single operation (Codex Madrid, II, fol. 119r). The technique was reinvented by William Blake in 1789 and perfected by Gillot around 1850.12
Leonardo’s work on flight and flying machines is too well known to be discussed here in detail. The most positive part of it consists of his studies on the flight of birds, found in several notebooks, among them the Codex on Flight…; MSS K, E, G, and L; the Codex Atlanticus; and Codex Madrid, II. Leonardo’s flying machines embody many interesting mechanical features, although their basic conception as ornithopters makes them impractical. Only later did Leonardo turn to gliders, as Lilienthal would 400 years afterward. Although the idea of the parachute and of the so-called helicopter may antedate Leonardo, he was the first to experiment with true airscrews.
Leonardo’s interest in chemical phenomena (for example, combustion) embraced them as such, or in relation to the practical arts. He made some inspired projects for distillation apparatus, based on the “Moor’s head” condensation system that was universally adopted in the sixteenth century. Descriptions of water-cooled delivery pipes may also be found among his papers, as may a good description of an operation for separating gold from silver (Codex Atlanticus, fol. 244v [ca. 1505]). Some of the most original of the many practical chemical operations that are described in Leonardo’s notebooks are concerned with making decorative objects of imitation agate, chalcedony, jasper, or amber. Leonardo began with proteins—a concentrated solution of gelatin or egg-white—and added pigments and vegetable colors. The material was then shaped by casting in ceramic molds or by extrusion; after drying, the objects were polished and then varnished for stability (MS I, fol. 27v; MS F, fols. 42r, 55v, 73v, 95v; MS K, fols. 114-118). By laminating unsized paper impregnated with the same materials and subsequently drying it, he obtained plates “so dense as to resemble bronze”; after varnishing, “they will be like glass and resist humidity” (MS F, fol. 96r). Leonardo’s notes thus contain the basic operations of modern plastic technology.13
Of Leonardo’s projects in military engineering and weaponry, those of his early period are more spectacular than practical. Some of them are, however, useful in obtaining firsthand information on contemporary techniques of cannon founding (Codex Atlanticus, fol. 19r-b) and also provide an interesting footnote on the survival, several centuries after the introduction of gunpowder, of such ancient devices as crossbows, ballistae, and mangonels. Leonardo did make some surprisingly modern suggestions; he was a stout advocate (although not the inventor) of breechloading guns, he designed a water-cooled barrel for a rapid-fire gun, and, on several occasions, he proposed ogive-headed projectiles, with or without directional fins. His designs for wheel locks (ibid., fols. 56v-b, 353r-c, 357r-a) antedate by about fifteen years the earliest known similar devices, which were constructed in Nuremberg. His suggestion of prefabricated cartridges consisting of ball, charge, and primer, which occurs on folio 9r-b of the Codex Atlanticus, is a very early (ca. 1480) proposal of a system introduced in Saxony around 1590.
Several of Leonardo’s military projects are of importance to the history of mechanical engineering because of such details of construction as the racks used for spanning giant crossbows (ibid., fol. 53r-a and b [ca. 1485]) or the perfectly developed universal joint on a light gun mount (ibid., fol. 399v-a [ca. 1494]).
Leonardo worked as military architect at the court of Ludovico il Moro, although his precise tasks are not documented. His activities during his short stay in the service of Cesare Borgia are, however, better known. His projects for the modernization of the fortresses in the Romagna are very modern in concept, while the maps and city plans executed during this period (especially that of Imola in Windsor Collection, fol. 12284) have been called monuments in the history of cartography.
One of the manuscripts recently brought to light, Codex Madrid, II, reveals an activity of Leonardo’s unknown until 1967: his work on the fortifications of Piombino for Jacopo IV d’Appiano (1504). Leonardo’s technological work is characterized not only by an understanding of the natural laws (he is, incidentally, the first to have used this term) that govern the functioning of all mechanical devices but also by the requirement of technical and economical efficiency. He was not an armchair technologist, inventing ingenious but unusable machines; his main goal was practical efficiency and economy. He continually sought the best mechanical solution for a given task; a single page of his notes often contains a number of alternative means. Leonardo abhorred waste, be it of time, power, or money. This is why so many double-acting devices (ratchet mechanisms, blowers, and pumps) are found in his writings. This mentality led Leonardo toward more highly automated machines, including the file-cutting machine of Codex Atlanticus, folio 6r-b; the automatic hammer for gold-foil work of Codex Atlanticus, folios 8r-a, 21r-a, and 21v-a; and the mechanized printing press, rope-making machine, and power loom, already mentioned.14
According to Beck, the concept of the transmission of power to various operating machines was one of the most important in the development of industrial machinery. Beck thought that the earliest industrial application of this type is found in Agricola’s famous De re metallica (1550) in a complex designed for the mercury amalgamation treatment of gold ores.15 Leonardo, however, described several projects of the same kind, while on folios 46v and 47r of Codex Madrid, I, he considered the possibility of running a complete oil factory with a single power source. No fewer than five separate operations were to be performed in this combine—milling by rollers, scraping, pressing in a wedge press, releasing the pressed material, and mixing on the heating pan. This project is further remarkable because the power is transmitted by shafting; the complex thus represents a complete “Dutch oil mill,” of which no other record exists prior to the eighteenth century.
Leonardo’s interest in practical technology was at its strongest during his first Milanese period, from 1481 or 1482 to 1499. After that his concern shifted from practice to theory, although on occasion he resumed practical activities, as in 1502, when he built canals in the Romagna, and around 1509, when he executed works on the Adda River in Lombardy. He worked intensively on the construction of concave mirror systems in Rome, and he left highly original plans for the improvement of the minting techniques in that city (1513-1516). He was engaged in hydraulic works along the Loire during his last years.
2. Codex Madrid, II, passim.
3. Beltrami, op. cit., no. 189.
4. Beltrami, op. cit., no. 246.
5. G. Calvi, I manoscritti di Leonardo da Vinci (Bologna, 1925); A. M. Brizio, Scritti scelti di Leonardo da Vinci (Turin, 1952); Kenneth Clark, A Catalogue of the Drawings of Leonardo da Vinci at Windsor Castle(Cambridge, 1935; 2nd rev. ed., London, 1968-1969); C. Pedretti, Studi Vinciani (Geneva, 1957).
6. Kyeser (1405), anonymous of the Hussite Wars (ca. 1430), Fontana (ca. 1420), Taccola (ca. 1450), Francesco di Giorgio (ca. 1480), Besson (1569), Ramelli (1588), Zonca (1607), Strada (1617-1618), Veranzio (ca. 1615), Branca (1629), Biringuccio (1540), Agricola (1556), Cardano (1550), Lorini (1591), Bockler (1661), Zeising (1607-1614).
8. G. Galilei, Le meccaniche (ca. 1600), trans. with intro. and notes by I. E. Drabkin and Stillman Drake as Galileo on Motion and on Mechanics (Madison, Wis., 1960).
9. G. Canestrini, Leonardo costruttore di macchine e veicoli (Milan, 1939); L. Reti, “Leonardo on Bearings and Gears,” in Scientific American, 224 (1971), 100.
10. E. S. Ferguson, “The Measurement of the ‘Man-Day,’” in Scientific American, 225 (1971), 96. The writings of La Hire and Amontons are in Mémoires de l’Académie …, 1 (1699).
11. L. Reti,“Leonardo da Vinci nella storia della macchina a vapore,”in Rivista di ingegneria (1956-1957).
12. L. Reti,“Leonardo da Vinci and the Graphic Arts,” in Burlington Magazine,113 (1971), 189.
13. L. Reti, “Le arti chimiche di Leonardo da Vinci,” in Chimica e l’industria, 34 (1952), 655, 721.
15. T. Beck, Beiträge zur Geschichte des Maschinenbaues (Berlin, 1900), pp. 152-153.
"The splendors of this age outshine all other recorded ages," Ralph Waldo Emerson (1803–1882) wrote in his journal in 1871, adding a list of recent innovations that he saw as important driving forces of modern history: "In my lifetime, have been wrought five miracles, namely, 1. the Steamboat; 2. the railroad; 3. the Electric telegraph; 4. the application of the Spectroscope to astronomy; 5. the photograph; five miracles which have altered the relations of nations to each other" (Journals 16:242). Though one may argue about the actual role of these inventions in changing the course of modern history, there is no doubt that for the eminent New England philosopher technological progress represented not just a revolution of "improved means to an unimproved end," as his disciple Henry David Thoreau (1817–1862) sarcastically put it (p. 192), but the ambivalent legacy and future of modern society at large. Given the increasing presence of the machine in early-nineteenth-century America, the extent to which technical inventions shaped the minds and attitudes of its people can hardly be overrated. What is more, it was during these important stages of the nation's growing political and cultural self-awareness that the very concept of "technology" as independent from other areas of rational investigation such as philosophy, literature, and the arts had first been introduced. When the Harvard professor Jacob Bigelow (1787–1879) published his influential study Elements of Technology (1829), the term and its underlying differentiation between the so-called useful and the fine arts became known to a wider public. Despite its utilitarian etymology (from the Greek word techne, meaning a systematic way of doing things), technology for Bigelow signified not merely a method or a new tool but a particular mindset—a rational, scientific approach by which men cope with the complexity of nature and by which they try to master the vagaries of human existence.
Technically Bigelow defined "technology" as a wedding of two already established disciplines: the application of science to the useful arts. Yet in view of what had already been achieved in this new field he was convinced that once technology was instituted as a common practice there would be no return to an earlier state of being, that it was a force administering its own laws and following its own logic. "The augmented means of public comfort and of individual luxury, the expense abridged and the labor superseded, have been such," he explains with regard to possible public skepticism about technology's rapid progress, "that we could not return to the state of knowledge which existed even fifty or sixty years ago, without suffering both intellectual and physical degradation" (p. 6). In a similar vein, Emerson's friend the English critic Thomas Carlyle, while castigating the age's mechanical orientation in his influential essay "Signs of the Times," published the same year as Bigelow's Elements, outlines his hopes for the future by explicitly approving of the progress made in learning and the arts:
Doubtless this age also is advancing. . . . Knowledge, education are opening the eyes of the humblest; are increasing the number of thinking minds without limit. This is as it should be; for not in turning back, not in resisting, but only in resolutely struggling forward, does our life consist. . . . Indications we do see . . . that Mechanism is not always to be our hard taskmaster, but one day to be our pliant, all-ministering servant. (Pp. 485–486)
What such diverse writers as Bigelow, Carlyle, and Emerson thus have in common is a feeling that with the staggering number of mechanical inventions, an irrevocable shift—a transition from a pretechnological state to a society continuously producing and being shaped by technology—has occurred.
By and large, early-nineteenth-century Americans welcomed the introduction of new devices and means of transportation, and they generally understood the importance of technology for the pressing task of exploring and settling the vast continent. Contrary to Carlyle and other European critics of mechanization they rarely discussed technology as a companion to industrialization. So much did the idea of an "industrialized" society seem out of place in America that the nation readily embraced mechanical contrivances such as steamboats, the McCormick automatic reaper, and the power loom while at the same time denouncing industrialization for its obvious negative consequences—the establishment of an impoverished, morally weak proletariat and the pollution of the natural environment. What could be observed in England, Germany, or France as a result of large-scale manufacturing simply did not apply to the conditions in the New World. Given the scarcity of its population, the abundance of nature and wilderness, and the great distances that separated individual settlements, instituting improved means of transportation, communication, and production appeared more of a practical necessity than a social evil. "With abundant resources but few people to exploit them," the historian of technology Carroll Pursell reminds us, "Americans who aspired to surpass quickly the splendor and power of the old British Empire soon realized that machines would have to replace hands if the job were to be done" (p. 2).
In a letter to Thomas Jefferson on 28 June 1813, John Adams highlights the importance of technology in shaping modern society. As he points out, the changes occasioned by new inventions during the early decades of the nineteenth century had been dramatic:
The invention in mechanic arts, the discoveries in natural philosophy, navigation, and commerce, and the advancement of civilization and humanity, have occasioned changes in the condition of the world and the human character which would have astonished the most refined nations of antiquity.
Lester J. Cappon, ed., Adams-Jefferson Letters, 2 vols., (Chapel Hill: University of North Carolina Press, 1959), 2:340.
DO MACHINES MAKE HISTORY? TECHNOLOGY AND AMERICA'S MANIFEST DESTINY
While "Yankee ingenuity" soon became synonymous with the pioneering efforts to build the nation, it also spelled out an unflinching belief in the essential power of knowledge. In line with fundamental ideas of the Enlightenment and the premium it placed on the human capacity to better social conditions and to envision a future perfected state of society, the founding fathers actively endorsed the invention of labor-saving machinery and other useful contrivances. Though apprehensive of the negative impact of the machine on communal life, they regarded technological expertise as essential, not only as a means to serve the needs of the individual citizen but also as a way to promote the Republic's higher humanitarian goals. Even Thomas Jefferson (1743–1826), who in his Notes on the State of Virginia (1785) promulgated a pastoral America immune to the social and moral corruption of industrial production, eventually conceded that technology could well be a major ingredient of historical progress. To Robert Fulton, the successful inventor of a new steamboat, he wrote in 1810: "I am not afraid of new inventions or improvements, nor bigoted to the practices of our forefathers. It is that bigotry which keeps the Indians in a state of barbarism in the midst of the arts" (Meier, p. 219). For Jefferson and his fellow Americans the importance of technology was thus actually twofold. First, technological advancement figured, in a very literal sense, as a means to conquer and eventually possess the whole of the continent. Second, it was taken to vindicate synecdochically the historical destiny of America and the accompanying exploitation of natural resources that led to the extinction of its native population.
Two famous literary authors, Edgar Allan Poe (1809–1849) and Nathaniel Hawthorne (1804–1864), took issue with this widespread metaphorical conflation of technology and historical progress. In his political satire "The Man That Was Used Up" (1839), Poe turned the tables on Americans' naive readiness to assume an intrinsic connection between progress and technology. By relating the creation of the republic and the violence associated with its geographical expansion to an authentic historical figure, who literally is made of and, later, "wasted" by modern technology, Poe launches a scathing critique of historical progress as the fulfillment of America's special destiny. In the story technological progress is tied up with this character to such a degree that his very name calls forth commendations on the age's inventiveness and mechanical expertise. Whenever the narrator mentions General John A. B. C. Smith, supposedly a veteran Indian fighter of the late Bugaboo and Kickapoo campaign and alias of former Vice President Richard M. Johnson, the general's friends and acquaintances invariably reiterate a paean to the "wonderful age" of invention (p. 381). Though the general seems to be well recognized among his contemporaries as a living emblem of the marvelous prospects of modern times, the enthusiastic responses to the narrator's query about his actual identity remain strikingly evasive and tautological. With each interlocutor, the fabulous soldier becomes increasingly entangled in a skein of elliptic discourses that are bound to mystify rather than uncover the history of his mysterious personality. In the end General Smith remains but a narrative construct, a hollow (and horrible) signifier of both technological ingenuity and historical myth.
If "The Man That Was Used Up" questioned antebellum Americans' love affair with machinery by exposing its inherent (self-)destructive powers, Hawthorne took a different, yet in no way less critical, approach. In his famous short story "The Celestial Railroad" (1843) he satirizes the driving historical role that many ascribed to the onrush of technology and material inventions by making technology the center of a burlesque rewriting of John Bunyan's The Pilgrim's Progress. Machines clearly abound in this allegorical tale. Not only does the modern Christian alleviate the burden of his pilgrimage to the Celestial City by riding on the newly established railroad, he also encounters such engineering achievements as a daring bridge whose foundations have been secured by "some scientific process," a tunnel lit by a plethora of communicating gas lamps, and a steam-driven ferryboat.
"There is nothing at all like it," he would say; "we are wonderful people, and live in a wonderful age. Parachutes and railroads—man-traps and spring-guns! Our steam-boats are upon every sea, and the Nassau balloon packet is about to run regular trips (fare either way only twenty pounds sterling) between London and Timbuctoo. And who shall calculate the immense influence upon social life—upon arts—upon commerce—upon literature—which will be the immediate result of the great principles of electro magnetics! Nor, is this all, let me assure you! There is really no end to the march of invention. The most wonderful—the most ingenious . . . the most truly useful mechanical contrivances, are daily springing up like mushrooms."
Thomas Ollive Mabbott, ed., The Collected Works of Edgar Allan Poe, 3 vols. (Cambridge, Mass.: Harvard University Press, 1978), 2:381–382.
Significantly, Hawthorne's adoption of technological metaphors in the story blurs with his critical stance on specific cultural practices and religious trends. When the narrator finally arrives at the present-day Vanity Fair, where "almost every street has its church and . . . the reverend clergy are nowhere held in higher respect" (p. 139), he ridicules the traveling lecturers of these burgeoning sects as "a sort of machinery" designed to distribute knowledge without the encumbrance of true learning. On the surface a critique of facile latitudinarianism—a prominent, pseudo-rational strain of thought within the Anglican church—and the contemporary fad of providing instruction through oral rather than literary discourse, the passage also betrays Hawthorne's anxiety about the ongoing mechanization of American society in general. Moreover, the "etherealizing" of literature that appears to be the bottom line of his complaint epitomizes the difficult position of literary authors within an increasingly technological, differentiated sphere of cultural production. Much as Hawthorne tries to defend the superior quality of the literary text (versus the sheer "machinery" of trivial lectures), his rhetorical strategy also lays bare the degree to which he himself has become a part of the new machine environment. If he dismisses the shallow latitudinarian sects as a movement inevitably leading to moral and intellectual destruction, his use of machinery as an emblem of that inevitability attests to the symbolic power of modern technology, a power that held even the most conservative of antebellum writers enthralled.
TECHNOLOGY AND THE ROMANTIC POLITICS OF DISEMBODIMENT
This ambiguity of the literary writer vis-à-vis an increasingly technological environment can be traced throughout the major works of the period. The establishment of the literary profession within the socioeconomic network of nineteenth-century American society required its differentiation from other specialized professions such as engineering or manufacturing, and it rested on a rationalization of the inventive process as exempted from the materialist exigencies of industrial production. The notion of modern authorship, in other words, developed along the lines of strong antimaterialist biases that emphasized the spiritual over the physical implications of writing. Romantics often conceived of their work as a disembodied process that turned on an effort to transcend both the bodily confines of the writer and the material constraints of the text to be produced. That the Romantic poetics of disembodiment were closely tied to contemporary discussions of technology can be seen in Hawthorne's metafictional short story "The Artist of the Beautiful" (1844). The story effectively juxtaposes the materialist foundations of modern technological society and the ethereal, disembodied work of the Romantic writer. Resonating with references to early industrial manufacturing and the emphasis that Jacksonian America placed on punctuality and the utilitarian ideal of the "useful" arts, "The Artist of the Beautiful" aptly reflects the cultural changes concurrent with rapid technological advancement and the burgeoning of the antebellum American economy. Not only does Hawthorne dramatize the conflict between the practical and the beautiful by creating a character who is both watchmaker and artist; he also has his protagonist, Owen Warland, embark on a highly symbolic project.
Searching for a material form that will communicate his aesthetic ideals, the watchmaker builds a synthetic creature, a mechanical butterfly, which combines his artistic ambitions and the difficulties arising from his ambivalent professional status.
Hawthorne's text cogently portrays the human body as the antithesis to everything that is beautiful and aesthetically important. Since Warland has set his heart upon the realization of an abstract concept, biological life matters to him only insofar as it conditions the accomplishment of his task. Whereas the technological—and the body as its physical-material counterpart—operates in direct opposition to the artist's ethereal strivings, the story as a whole might well be taken as an attempt to amalgamate the divergent forces of creativity and materiality. The ironic and ambiguous ending, which has left many readers puzzled as to the true relation of art, nature, and material culture in the story, could thus be read as a plea for the inclusion—rather than exclusion—of technology in the realm of artistic production. In keeping with the organic principle of Romantic writing, Hawthorne provides his watchmaker with the power to animate, to spiritualize, machinery. Warland's ambition is not "to be honored with the paternity of a new kind of cotton machine" but to produce a "new species of life and motion" (pp. 453, 466). It is thus not by imitating nature but by competing with her, by putting forth "the ideal which nature has proposed to herself in all her creatures, but has never taken pains to realize" (p. 466), that the watchmaker becomes an artist. However frail and transient his imaginative child may be, as carrier of an original idea it takes on a quality more real than reality itself. "When the artist rose high enough to achieve the beautiful," as we learn in the concluding paragraph of the story, "the symbol by which he made it perceptible to mortal senses became of little value in his eyes while his spirit possessed itself in the enjoyment of the reality" (p. 475).
The historian Daniel Boorstin has remarked that capitalist "America has been the laboratory and the nemesis of romanticism" (p. 173). Though Boorstin's use of the term "romanticism" was rather figurative, the bifurcation of values expressed in his statement—one ringing with promises of new insights, the other gloomy and apocalyptic—underscores the complex self-representations of American Renaissance writers and their contradictory relations with antebellum society. His critical satires notwithstanding, Poe overall responded positively to the wave of new technology. Despite his emphasis on the exceptional cognitive status of creative work, his definition of authorship was utterly technological. Given the fervor with which Poe embraced, for example, "anastatic printing" (a form of relief etching that reproduced a facsimile impression of the original) as a way of experimenting with and ultimately increasing the representational value of written texts, he impressively foreshadows the constructivist tradition within modern art that is mainly identified with early-twentieth-century avant-garde movements. Rather than figuring as the downfall of the writer's profession, science and technology provided for Poe a "laboratory" of new ideas from which he concocted the symbols and metaphors that are now closely associated with his literary oeuvre.
Nor would Hawthorne or Herman Melville (1819–1891) conceive of the contemporary technological environment as the "nemesis" of literary creativity. Aware of the ubiquitous presence of the machine in antebellum America, these writers examined, sometimes in excruciating detail, the changing conditions under which they labored. However, the numerous representations of literary work in both their shorter fiction and many of their full-fledged romances should rather be read as part of an imaginative search for professional identity. Far from advocating the writer's withdrawal from society, they addressed the processes of modernization in a quite pragmatic manner. To find a place of their own within America's dramatic shift from agrarian virgin land to a Tartarus of industrial labor, Hawthorne and Melville often had recourse to highly symbolic modes of self-representation that helped to deflate the rising tensions between, on the one hand, the materiality of the printed text, and on the other, the original ideas it conveyed. Since the conflict between modern authors and the economic and technological environment often turned on the rival ideologies of idealism and materialism, cybernetic imagery, as we have seen in "The Artist of the Beautiful," offered a perfect screen onto which the writer's struggle for social recognition could be projected.
THE AUTHOR IN PAIN: TECHNOLOGY AND THE CIVIL WAR
Complex images of humans-turned-machine (or vice versa), which abound in antebellum literature, reflect the authors' attempt to avoid the social trapdoors of their idealist self-definitions and thereby to narrow the gap between literary work and other modern professions. Yet, however widespread the urge to compete on the marketplace of specialized labor, American Renaissance writing is also marked by the somber prospects of the author's inevitable alienation from society. In Melville's "Bartleby, the Scrivener," the isolation and estrangement of the literary worker set in after a period of extreme productivity. Since Melville's literary reputation was already flagging when the story first appeared in 1853, the text encapsulates, on one level, its author's doomed struggle for public recognition. On another level, however, it stands as the first in a series of mid-nineteenth-century American texts in which authorship appears to be entirely overwhelmed by technology. There is no escape for Bartleby from the prison house of Wall Street and the mass production of written texts; mired in physical deterioration and increasing muteness, the scrivener sees his initial resistance to the growing mechanization of his office environment turn into a hollow gesture of all-encompassing passivity.
Melville's symbolic depiction of the artist's fragmented, immobilized body in "Bartleby" ties in with concerns traceable in the work of two other contemporary Americans, Rebecca Harding Davis (1831–1910) and Walt Whitman (1819–1892). To bring into conjunction writers as structurally different as Melville, Davis, and Whitman is by no means an easy task. If Davis's social realism already differs considerably in both its form and its setting from Melville's Romantic self-representation, Whitman's democratic, all-embracing pose seems to be even farther from the latter's deeply pessimistic stance. However, in Davis's Life in the Iron Mills (1861) and in Whitman's Drum-Taps, a cluster of poems about the Civil War first published in 1865, the besieged artist is rendered as being as muted and paralyzed when confronted with modern technology as the starving scrivener. What thus began as the self-conscious claim of Romantic artists to a voice of their own is transformed, under the influence of war technology and its disfigured, amputated victims, into painful dramatizations of the writer's speechlessness and despair.
Adams, John, and Thomas Jefferson. The Adams-Jefferson Letters. Vol. 2. Edited by Lester J. Cappon. Chapel Hill: University of North Carolina Press, 1959.
Bigelow, Jacob. Elements of Technology. Boston: Hilliard, Gray, Little and Wilkins, 1829.
Carlyle, Thomas. "Signs of the Times." Edinburgh Review 49 (1829): 439–459.
Davis, Rebecca Harding. Life in the Iron Mills and Other Stories. New York: Feminist Press, 1972.
Hawthorne, Nathaniel. "The Celestial Railroad" and "The Artist of the Beautiful." 1843, 1844. In Mosses from an Old Manse, vol. 10 of The Centenary Edition of the Works of Nathaniel Hawthorne, edited by William Charvat, Roy Harvey Pearce, and Claude Simpson. Columbus: Ohio State University Press, 1974.
Melville, Herman. The Piazza Tales and Other Prose Pieces, 1839–1860. Vol. 9 of The Writings of Herman Melville. Edited by Harrison Hayford et al. Evanston, Ill.: Northwestern University Press, 1987.
Poe, Edgar Allan. "The Man That Was Used Up: A Tale of the Late Bugaboo and Kickapoo Campaign." 1839. In Collected Works of Edgar Allan Poe, vol. 2, Tales and Sketches, 1831–1842, edited by Thomas Ollive Mabbott, pp. 376–392. Cambridge, Mass.: Harvard University Press, 1978.
Thoreau, Henry David. Walden. 1854. Edited by J. Lyndon Shanley. Princeton, N.J.: Princeton University Press, 1971.
Whitman, Walt. Leaves of Grass. 1855. Edited by Harold W. Blodgett and Scully Bradley. New York: New York University Press, 1965.
Benesch, Klaus. Romantic Cyborgs: Authorship and Technology in the American Renaissance. Amherst: University of Massachusetts Press, 2002.
Boorstin, Daniel. The Genius of American Politics. Chicago: University of Chicago Press, 1953.
Bromell, Nicholas K. By the Sweat of the Brow: Literature and Labor in Antebellum America. Chicago and London: University of Chicago Press, 1993.
Kasson, John F. Civilizing the Machine: Technology and Republican Values in America, 1776–1900. New York: Grossman, 1976.
Matthiessen, F. O. American Renaissance: Art and Expression in the Age of Emerson and Whitman. London: Oxford University Press, 1941.
Marx, Leo. The Machine in the Garden: Technology and the Pastoral Ideal in America. London and New York: Oxford University Press, 1964.
Meier, Hugo A. "Thomas Jefferson and a Democratic Technology." In Technology in America: A History of Individuals and Ideas, edited by Carroll W. Pursell Jr., pp. 17–33. Cambridge, Mass.: MIT Press, 1981.
Pease, Donald E. Visionary Compacts: American Renaissance Writing in Cultural Context. Madison: University of Wisconsin Press, 1987.
Pursell, Carroll W., Jr. "Introduction." In Technology in America: A History of Individuals and Ideas, edited by Carroll W. Pursell Jr. Cambridge, Mass.: MIT Press, 1981.
The First World War caused more death and destruction than all the wars that came before it. The reason for the slaughter was twentieth-century firepower. Powerful new weapons such as the machine gun halted military movements and killed men by the thousands. A British officer, quoted in William G. Dooly Jr.'s Great Weapons of World War I, observed the effects of machine-gun fire at the Battle of Mörhange-Sarrebourg in 1914:
Whenever the French infantry advance, their whole front is at once regularly covered with shrapnel and the unfortunate men are knocked over like rabbits. They are brave and advance time after time to the charge through appalling fire, but so far it has been to no avail… The officers are splendid; they advance about 20 yards ahead of their men as calmly as though on parade, but so far I have not seen one of them get more than 50 yards without being knocked over.
Machine guns were not the only weapons to radically reshape the nature of modern warfare. Tanks, flamethrowers, airplanes, and submarines—all products of advanced technology—changed the way armies faced each other in battles on land, on the sea, and in the air.
"Perhaps no invention has more profoundly modified the art of war than the machine gun," observed U.S. secretary of war Newton D. Baker, according to William G. Dooly Jr. in Great Weapons of World War I. Indeed, the machine gun was made for mass murder. Unlike rifles, which could shoot one bullet at a time and were accurate within about a thousand yards, heavy machine guns mounted on wheeled carts could fire up to five hundred rounds per minute. Light machine guns weighing between sixteen and twenty-eight pounds could fire magazines of up to forty-seven rounds. Both heavy and light machine guns had a range and accuracy that far exceeded the rifle's; the French 37mm gun Model 1916, for example, had a range of about a mile and a half.
Military planners did not foresee the importance of the machine gun. War strategists had been trained to rely on large numbers of trained, professional foot soldiers who would
engage in close, hand-to-hand combat to win wars. As the British planned their entrance into World War I, the Ministry of Munitions considered two machine guns per battalion to be "more than sufficient," according to Dooly. The Battle of Loos in 1915 demonstrated how devastating the machine gun was to advancing armies. When a line of British soldiers came within a thousand yards of a defensive line of Germans, the machine gun proved its effectiveness. Dooly quotes a German reserve regiment's observations:
Ten columns of extended line could clearly be distinguished, each one estimated at more than a thousand men, and offering such a target as had never been seen before, or even thought possible. Never had the machine-gunners such straightforward work to do nor done it so effectively. They traversed to and fro along the enemy's ranks unceasingly. The men stood on the fire-steps, some even on the parapets, and fired triumphantly into the mass of men advancing across the open grassland. As the entire field of fire was covered with the enemy's infantry the effect was devastating and they could be seen falling literally in hundreds.
Within two years, after the Germans had used the rapid-fire weapons to mow down thousands of charging men and effectively stymie the Allied effort along the Western Front, Britain's Ministry of Munitions increased the allotment of machine guns to thirty-two per battalion. The Allies and the Central Powers both began to group machine guns along lines of trenches to hold off any advancement. By the end of the war, the machine
gun was recognized as one of the most essential weapons for regiments. Between 1912 and 1919, the U.S. Army increased its provisions from 4 machine guns per regiment to 336.
Unable to penetrate the Allied trenches along the Western Front with artillery or with waves of soldiers armed with machine guns, the Germans introduced an insidious new
weapon on April 22, 1915. A German airplane dropped canisters in no-man's-land. Breaking on impact, the canisters released yellowish green fumes that wafted slowly toward the French and African troops near the Belgian town of Ypres. As the fumes reached the Allied forces, soldiers realized the cloud was poisonous chlorine gas. Quoted in Dooly's Great Weapons of World War I, one French doctor at Ypres expressed his horror: "I had the impression that I was looking through green
glasses. At the same time, I felt the action of the gas upon my respiratory system; it burned in my throat, caused pains in my chest, and made breathing all but impossible. I spat blood and suffered from dizziness. We all thought we were lost." The gas opened a four-mile gap in the Allied line, but the Germans failed to exploit the gap: Fearful German soldiers advanced slowly behind their terrible new weapon, and nightfall hid the damage the gas had done. By morning, the Allied forces had sealed the gap, and the Germans' attack had accomplished nothing but to display the horror of chlorine gas.
Quickly, both sides developed gas masks. At first, soldiers held chemically treated cotton pads over their noses and mouths. Later they wore fabric face masks soaked in chemicals, and finally soldiers on both sides wore respirators with charcoal filters.
Although the world was outraged by the use of poisonous gas (after the war, its use was banned by international agreements), both Allied and Central Powers forces used various gases against each other for the remainder of World War I. Armies used several types of gases. Some, such as the cyanides used by France, were lethal. Others were irritants. Phosgene, used by all countries, caused great suffering to unprotected soldiers, who got watery eyes, sneezing fits, and blisters on exposed skin; phosgene also scarred the soldiers' lungs. Mustard gas was a persistent irritant that could remain on the battlefield for days.
Casualties from gas attacks totaled nearly 800,000 soldiers during the course of the war. Although the percentage of deaths resulting from gas attacks was relatively low—only 2 percent of American gas casualties died—poisonous gas represented one of the most dreaded and horrifying realities of modern warfare. According to Dooly, "It symbolized the death of individual bravery, initiative, and skill."
Even though some forward-thinking engineers proposed prototypes of tanks as early as 1907, it was not until no-man's-land was thoroughly blood-soaked that a joint army-navy committee formed to build the first tank. The Mark I prototype, built in Britain in 1915 and affectionately called "Big Willie," entered the field on September 15, 1916. The twenty-six-foot-long, twenty-eight-ton tank required a crew of eight to maneuver it as it lumbered at three miles per hour across a battlefield. Thirty-two tanks started out across ground that had been mangled by artillery bombardments. The nine tanks that reached their destinations forced the surrender of three hundred Germans and captured the village of Flers.
The first tanks were useful, but they did not prove to be the dominant offensive weapon that their inventor, British lieutenant colonel Ernest D. Swinton, had imagined. Back at the drawing board, tanks went through several more prototypes. The Mark IV won the tank the place it deserved in offensive attacks. On November 20, 1917, four hundred Mark IVs advanced across the torn-up no-man's-land at Cambrai. They smashed through barbed wire, and proceeded to capture six and a half miles of a double line of German trenches within twelve hours, with only four thousand casualties. German General Paul von Hindenburg lamented that the battle at Cambrai taught the Germans the potential benefits of tanks in modern warfare. That the tanks could smash over barricades and undamaged trenches shocked the Germans.
In 1918 better tanks punched holes in German lines at Soissons and Amiens, forcing the Germans into retreat. The Whippet tank, used in these battles, weighed 15.7 tons and was twenty feet long. A crew of three could speed along in it at 8.3 miles per hour. By war's end, tanks had become a promising part of modern warfare.
For years, armies remained locked in bitter conflict along static lines of defense that stretched for hundreds of miles. Powerful weapons like the machine gun and poisonous gas rendered individual heroics almost obsolete. But in the air, pilots of newly designed bombers and fighter planes became World War I's glamorous heroes.
When the war began, aviation was not very advanced. Armies used balloons to observe enemy movements on the ground and to protect whole cities with curtains of steel cable suspended as high as ten thousand feet. Germans began dropping bombs from zeppelins just days after their first attack on Liège, Belgium, in 1914 and continued to plague civilians with bombings until a few months before the armistice. England became the prime target of these zeppelin raids, which peaked in 1916 with 126 raids.
At the beginning of the war, no country foresaw the usefulness of airplanes. France had 120 planes, Britain had 113, and Germany had more than 200, but not all of these planes were military types. Airplanes were only observation tools at the beginning of the war. They were used to spot artillery, take photographs, and drop messages to ground troops. Most planes could scarcely carry the pilot and enough fuel to complete a flight, but planes were still an important tool: A single plane could survey what once took a whole regiment of cavalry to see.
Pilots quickly realized that airplanes could mount attacks as well. Some pilots packed bricks to throw at other
pilots or rifles to shoot enemy planes. Paris was the first victim of a bombing raid; a German plane dropped four bombs on August 30, 1914. Realizing the potential offensive capabilities of the airplane, the opposing sides began focusing their energy on improving airplanes' war-worthiness. Guns were the main weapon needed by fighter pilots, and designers tried to mount them where they posed the least risk of hitting the plane's own propeller; gunners' cockpits were placed at the side of the plane or in front of the propeller.
On October 5, 1914, French pilot Joseph Frantz and his gunner, Louis Quénault, engaged in the first aerial combat of the war, shooting down a German plane and killing the pilot and his passenger. French pilot Adolphe Pégoud became the war's first ace, shooting down six German planes in 1915. In January of 1915 French pilot Roland Garros invented the first deflector to enable machine guns to shoot between propeller blades. After a plane equipped with one of the deflectors was captured by the Germans in 1915, Garros's invention was perfected by Dutch engineer Anthony Fokker, who was working for the German army. According to Thomas R. Funderburk, Fokker's more sophisticated synchronizer began what the British called the "Fokker Scourge," a German strategy to prey on unarmed or single Allied planes flying reconnaissance missions. The British soon ordered reconnaissance missions to include at least three armed planes.
Nations honored pilots who shot down at least five enemy planes; these pilots were referred to as "aces." French pilot Paul-René Fonck shot down 75; his comrade Georges-Marie Guynemer brought down 53. Eddie Rickenbacker became America's hero, shooting down 22 planes and four balloons. But German pilot Manfred von Richthofen, known as the Red Baron, was the most successful ace of all. Between 1916 and 1918, the Red Baron shot down 80 enemy planes, more than any other pilot on either side.
Parachutes did not become standard issue for military pilots until after World War I, so early combat pilots experimented with flying techniques. The fancy combat maneuvers taught in rigorous flight schools and featured in military air shows in the twenty-first century were made up on the spot by the combat pilots of World War I. Pégoud, wanting to learn as much as he could about flying, was an especially inventive pilot. He tested his plane's capabilities by trying things like flying upside down. Of his experimentation Pégoud said, "If I kill myself, so what? One less aviator. But if I succeed, how many valuable lives may be saved for aviation," as quoted in Funderburk's Early Birds of War. Other pilots learned new techniques accidentally. One pilot, convinced his death was imminent when his plane began spiraling downward, decided to hasten the end and pushed the plane into full throttle. Instead of speeding him into the ground, the maneuver brought the plane under control, and the recovery became a valued combat technique he lived to teach others. Pilots soon learned to do spins, half-rolls, and climbing turns, among other things.
A unique witness to the incredible advancements in aviation during World War I was Roland Garros. A skillful pilot
before the war, Garros set an altitude record in 1912, and in 1913 he became the first person to fly across the Mediterranean Sea. He contributed the first crude mechanism for shooting bullets between spinning propeller blades in 1915. Garros spent three years as a German prisoner of war between 1915 and 1918. He escaped and returned to France to find aviation advanced well beyond his dreams. Planes now flew twice as high and twice as fast as Garros had ever seen. Quoted in Funderburk's Early Birds of War, Garros noted, "I am a novice now!… I used to say that the progress which would be achieved in three years would surpass imagination, but I never thought I would be the first victim of that progress." Garros started over, returning to military flight school. On October 5, 1918—exactly four years after the first aerial combat of Frantz and Quénault—Garros was shot down and killed by the Germans.
At the beginning of World War I, huge battleships were deemed the most important naval weapon. The British Dreadnought, a battleship with one-foot-thick steel armor, steam turbines that drove it two knots faster than any other warship (a knot, the unit of speed used for ships, equals one nautical mile per hour, or 1.15 land miles per hour [1.85 kilometers per hour]), and ten 12-inch guns, became the ideal battleship and started an arms race between Britain and Germany. (In fact, the word "dreadnought" soon became the generic name for any huge, heavily armed battleship.) By 1914 Britain had twenty-four dreadnoughts, and Germany had fourteen. At the start of the war, Britain used its dreadnoughts and other, smaller surface ships to block German ports from receiving supplies.
In 1914 submarines were not seen by either side as essential components to a successful navy. But the Germans quickly identified these "underwater boats" (U-boats) as the best defense against the British blockade. Germany entered the war with twenty-four submarines, massive vessels averaging over five hundred tons' displacement and stretching from 150 to 200 feet long. With four or five torpedo tubes and mounted guns ranging in size from two inches to almost six inches, these submarines were capable of mounting devastating offensive attacks. In 1914 Germany began building submarines at a furious pace, doubling its fleet by the end of the year. By the end of the war, Germany had built nearly four hundred submarines, of which more than half were destroyed.
In 1915, German submarines began attacking ships bringing supplies to Britain. The submarines did considerable damage in 1915, sinking 396 Allied and neutral ships, more than twice the number lost to other ships or mines. But the attacks on merchant shipping outraged neutral countries,
especially the United States. In 1915 a German U-boat sank the passenger liner Lusitania, which was supposedly carrying munitions to Britain. The death of more than a thousand of the ship's passengers, including 128 Americans, prompted the United States to threaten to enter the war. Hoping to keep America out of the war, Germany promised to warn ships before attacking so that crews and passengers could evacuate. For a year and a half Germany held back its use of submarines, but by early 1917 military leaders decided that they had little chance of winning the war without using these powerful underwater weapons. In February of 1917 the Germans resumed unrestricted submarine warfare. (For more on submarine warfare see Chapter 7: The War at Sea.)
Germany's return to stealthy U-boat attacks on Allied shipping was an immediate success. The subs sank hundreds of ships bringing food and supplies to Great Britain and France; in 1917 alone, German submarines sank 2,439 Allied ships. It looked as if the Germans might succeed in their goal of starving the British into submission. Allied navies were desperate to stop the submarine attacks. They tried arming commercial ships and increasing submarine patrols, but it was difficult to halt the invisible menace. Finally the Allies began sending large groups of ships across the sea together in convoys, escorted by squadrons of armed boats that surrounded the merchant ships and could fire on any submarine that surfaced to attack them. The convoy system was a great success, allowing the safe passage of 88,000 ships with losses of only 436. It also destroyed more submarines than ever before, sinking 74 subs in 1918. Convoys didn't stop sinkings altogether, but they limited the damage greatly because the Allies no longer sent lone ships across the open seas; even if one ship was sunk, many others now made the passage safely. The convoy system largely removed the danger from U-boats and helped the Allies win the war.
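The figures above imply a strikingly low loss rate under convoy. A quick calculation (illustrative only; the counts come from this article, not from primary sources) makes the point concrete:

```python
# Loss rate implied by the convoy figures cited in the text.
convoyed_ships = 88_000   # ships that made the crossing under convoy
convoy_losses = 436       # ships lost despite convoy protection

loss_rate = convoy_losses / convoyed_ships  # fraction of convoyed ships lost
print(f"Convoy loss rate: {loss_rate:.2%}")  # roughly half of one percent
```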
Modern technology and industrial capacity combined in World War I to create some of the most powerful weapons ever used in warfare. Tanks, flamethrowers, machine guns, submarines, and airplanes were all tested and proved in battle. There were hints of other technological advances as well. For example, World War I saw the first aircraft landing on a ship at sea and the first torpedo attack on a ship from a fighter plane. All of these technologies would be improved and made more powerful during the Second World War (1939–45). Tanks would become faster, more agile, and more resistant to firepower, and would help prevent stalemate in the Second World War. Airplanes would fly faster and further and carry more weapons; artillery would become more accurate and more mobile. Unfortunately, these new, modern weapons allowed military planners to wage a second world war that resulted in even more deaths than the first.
For More Information
Clare, John D., ed. First World War. San Diego: Harcourt Brace, 1995.
Colby, C. B. A Colby Book about Aircraft of World War I: Fighters, Bombers, Observation Planes. New York: Coward, McCann and Geoghegan, 1962.
Dooly, William G., Jr. Great Weapons of World War I. New York: Bonanza Books, 1969.
Funderburk, Thomas R. The Early Birds of War. New York: Grosset and Dunlap, 1968.
Haythornthwaite, Philip J. The World War One Source Book. London: Arms and Armour Press, 1992.
Stokesbury, James L. A Short History of World War I. New York: William Morrow, 1981.
"The Weapons of World War One." [Online] http://www.iol.net.au/~conway/ww1/weapons.html (accessed May 2001).
"Weapons of World War I." Discoveryschool.com. [Online] http://school.discovery.com/homeworkhelp/worldbook/atozpictures/lr001160.html (accessed May 2001).
What Good Was a Hole in the Ground?
Attempting to push farther into France in 1914, the Germans were forced into retreat at the Battle of the Marne. But German forces didn't flee far. Their weapons were better suited to defense; what they needed was a safe place to hide from which they could shoot at the Allies. Trenches were that safe place: ten-foot-deep ditches in the ground could protect soldiers and effectively halt the enemy.
The Allies and the Central Powers both dug a variety of trenches. The main battle trenches were dug in zigzag patterns to protect against attack. Supporting trenches were also dug to create protected pathways for communication with headquarters and routes for supplies. Some trenches were open to the air, while others had wooden covers or were actually dug underground. These holes in the ground became parallel lines of trenches stretching 475 miles between the Belgian coast and Switzerland by the end of 1914. Both sides had machine guns positioned in frontline trenches to prevent advances and along second- and third-line trenches to cover any breakthroughs. Each line of trenches was protected by tangles of barbed wire, which were meant to snag any soldiers who had managed to cross the short stretch of land between the opposing trenches, called no-man's-land.
Trenches proved the effectiveness of the defensive weapons on both sides. Trenches halted troop movement. In his World War One Source Book, Philip J. Haythornthwaite quotes Canadian general Sir Edwin Alderson's advice to his soldiers in 1915: "Do not expose your heads, and do not look around corners, unless for a purpose … the man who does so is stupid… If you put your head over the parapet without orders, they will hit that head."
New weapons needed to be invented to break through the heavily defended enemy trenches and to cross no-man's-land, which artillery bombardments had turned into rough, ruined, nearly impassable ground. Soon armies tried poisonous gas and tanks to open holes in trench lines.
When soldiers succeeded in crossing no-man's-land and entering enemy trenches, new weapons were needed. Flamethrowers were short-range weapons that squirted pressurized streams of a burning mix of gas and oil; both stationary and portable versions were made. Flamethrowers could literally blow a wall of flame into trenches or passageways in fortifications, burning alive all those inside. After the Germans introduced the weapon in 1914, each country made its own version of the flamethrower. The French flamethrower, named the Schilt after its inventor, could shoot eight to ten 30-yard bursts of flame or one 100-yard blast.
Flamethrowers were especially good at clearing trenches and forcing the surrender of soldiers in dugouts. The Germans used ninety-six flamethrowers at Verdun in 1916, and it is estimated that there were 653 subsequent flamethrower attacks during the war. The drawback to these weapons was their enormous fuel requirement. In his World War One Source Book, Haythornthwaite notes that "one mile of front line (requiring 30 Livens projectors [a type of flamethrower]) would consume about 1,000 gallons of petrol per minute… an hour of such operation using more fuel than the entire French army transport service's daily need in 1917–1918." Though effective, the flamethrower would not be fully developed until the Second World War.
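Haythornthwaite's fuel figures can be unpacked with simple arithmetic. The calculation below is only a sketch of what his numbers imply (per-projector consumption and hourly totals are our derived values, not figures stated in the source):

```python
# Fuel consumption implied by Haythornthwaite's figures for one mile of front.
projectors_per_mile = 30            # Livens projectors per mile of front line
gallons_per_minute_per_mile = 1_000 # petrol consumed per minute, per mile

# Derived (illustrative) quantities:
per_projector = gallons_per_minute_per_mile / projectors_per_mile  # gal/min each
hourly_total = gallons_per_minute_per_mile * 60                    # gal/hour per mile

# Each projector burns roughly 33 gallons per minute; an hour of operation
# along one mile of front consumes 60,000 gallons of petrol.
```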
A New Use for an Obsolete Weapon
World War I was notable for the incredible advances in technology that changed the way wars were fought. But a weapon that had not been used for nearly 250 years became one of the most important weapons for trench warfare. Mortars—short-range artillery weapons designed to lob bombs—had been introduced in 1673 to blow up forts but had rarely been used since the eighteenth century.
At the beginning of the war, the Germans massed about 150 mortars to defend their forts near Metz. But when fighting along the Western Front bogged down into trench warfare later in 1914, the mortars were moved to the front-line trenches to throw bombs into the French trenches a few hundred feet away. The mortars could destroy the barbed wire barricades protecting the Allied trenches from troop advances. The Germans, with their mortars of various sizes, "were masters with the trench mortar from beginning to end," according to Dooly.
The Allies did not have similar mortars to use in counterattack, so they searched museums for suitable mortars from past wars and used them as models. Until the French introduced their first 58mm trench mortar in 1915, Allied soldiers on the front lobbed makeshift bombs made of nails and explosive powder. The Allies did not develop mortars as mobile as the Minenwerfer (German mortars) until near the end of the war, when the French introduced their 150mm mortar in 1918.
Europeans arrived in the New World to find advanced civilizations coexisting with less advanced tribal organizations. The archaeological sites of Machu Picchu, Teotihuacán, and Tikal testify to the remarkable construction and architectural skills of pre-Columbian civilizations. Mayan knowledge of astronomy, the pre-Incan irrigation works (1300–1400 BCE) in northern Peru, and the terraced agricultural system of the Incas are additional evidence of advanced indigenous technology. Indigenous peoples had accomplished stunning advances in breeding and disseminating corn, potatoes, and many other plants. They also developed a method for predicting climate by observing astronomical, atmospheric, and oceanic phenomena as well as the behavior of flora and fauna. Other indigenous groups continued to practice more traditional slash-and-burn agriculture and hunting.
Since the European conquest, Latin America has relied heavily on technology from abroad. Despite the steady introduction of new technologies to advanced economic sectors, they have had only a small impact on vast segments of Latin America's population. The importation of new technologies has often failed to produce sustained domestic technological advance. In some cases Latin Americans have made important technological innovations of their own.
MINING, SUGAR, AND HENEQUEN
Indians mined silver, gold, copper, and lead, usually from shallow pits, long before the arrival of the Spaniards. During colonization the Spanish introduced deep-shaft mining techniques that significantly increased the amount of accessible ore. A major technological breakthrough came at the Purísima Grande mine in Pachuca, Mexico, in the mid-1550s, when Bartolomé de Medina developed a new amalgamation process that used large amounts of mercury to extract prodigious quantities of silver from mines in such areas as Zacatecas, Mexico, and Potosí, in the Viceroyalty of Peru. Although the basic technology remained in use for about three centuries, small technological innovations generated significant increases in productivity. For example, improved furnaces for processing mercury were introduced in Peru in 1633 by Lope de Saavedra, and steam engines were used in mining from the early 1800s.
During the nineteenth century, sugar from the Pernambuco region was one of Brazil's leading exports. The Pernambuco sugar industry demonstrated an ability to incorporate both superior species of cane and more efficient equipment. In the first part of the 1800s creole sugarcane was replaced by cayenne cane, which was larger, had more extensive branching, contained greater amounts of sugar, and more effectively withstood drought. After 1879 a disease afflicting the cayenne cane motivated the importation of species from Java and Mauritius. In the mid-nineteenth century, sugar producers began shifting from vertical rollers for grinding cane to horizontal rollers that provided greater pressure, faster grinding, and better distribution of the cane on the rollers. In the 1870s vacuum pans were introduced to speed evaporation. Steam engines gradually replaced animals as the major source of power for sugar mills.
The production and export of Yucatán henequen fiber expanded rapidly in the 1880s. Equipment for stripping the fiber was invented in the Yucatán, but interests from the United Kingdom and the United States took over most of the production. By 1880 replacement parts were being produced in the Yucatán, and by 1910 local shops built defibration machines and henequen presses. Innovators in the Yucatán were able to improve on designs and increase operating efficiency, and production reached a peak with World War I.
While the mining, sugar, and henequen industries all enjoyed a period of success and demonstrated an ability to adopt new production techniques, none of these activities launched sustained technological innovations that could spread to other sectors. Cheap land and labor blunted the incentive to modernize production techniques and held the rate of technology adoption far below optimum.
When considered as components of a productive system, imported technologies were not always unequivocally superior. Spanish agricultural methods, for example, did not surpass the chinampa agricultural technique used by the Aztecs, and probably the Mayans, which featured raised fields, intensive cultivation, and a variety of crops.
During the first half of the nineteenth century, saladeros (beef drying and salting plants) along the Río de la Plata began to operate for an export market. In the 1870s freezing and insulation technologies gave rise to the frigoríficos (refrigerated meat-packing plants), which began to replace the saladeros. Further advances in temperature control led to significant exports of beef and mutton to Great Britain beginning around 1900. Although Argentine entrepreneurs recognized a high profit opportunity, virtually all of the frigoríficos were operated by foreign interests. The refrigeration technology itself posed no problem for national entrepreneurs; the main difficulty lay in mastering the complex "soft technologies" of precisely timing the receipt of beef consignments and loading, unloading, and wholesaling the product, with little room for error. Without these organizational, managerial, and marketing capacities, Argentine entrepreneurs faced high risks.
Between 1944 and 1955 a Mexican monopoly, Syntex Company, gained complete dominance over European competitors in the production of steroid hormones. The firm's success was based on three discoveries. In 1949 barbasco was discovered in Mexico. This plant had much higher yields of steroids and was in greater supply than alternative sources. During the same year, two researchers at the Mayo Clinic in the United States found that cortisone was helpful in treating the symptoms of arthritis. And in 1951 the Upjohn Company discovered an inexpensive technique for altering the molecular structure of the steroid. This meant an increased demand for barbasco-derived progesterone that could now be used as an intermediate material for the production of cortisone. Syntex thrived during the 1960s, but by the mid-1970s steroid hormone production was once again controlled by foreign enterprises and exports had shrunk from 545 pounds in 1969 to 260 pounds in 1976.
Technological breakthroughs outside the region have occasionally hurt Latin American development. A classic example was the drastic decline in Chile's nitrate exports after introduction of the Haber process, a method of synthesizing ammonia from nitrogen and hydrogen, developed in the early 1900s by the German Fritz Haber.
Mexican research led to the successful development and export of glass-manufacturing equipment, as well as technologies for deep drilling in petroleum extraction and a method of producing paper from bagasse.
Brazil's production of ethanol from sugarcane and other vegetation is an impressive technological achievement, although it is controversial in terms of economic justification and environmental impact. In 1975 Brazil launched its huge research and development program to produce and substitute ethanol for gasoline. Brazilian researchers were able to upgrade older fermentation methods and achieve necessary technological changes in processing equipment, automobile engines, and sugar production. As of 1990 ethanol accounted for about 15 percent of Brazil's liquid fuel requirements.
The first nuclear power in Latin America was produced at the Atucha I plant in Argentina in 1974, a year during which Brazil and Mexico began to install nuclear power reactors. During the 1970s Colombia, Chile, Peru, and Venezuela had working nuclear centers, while Bolivia, Ecuador, Jamaica, and Uruguay announced intentions to establish their own centers. Cuba signed an agreement with the Soviet Union to install nuclear stations, but after a long series of difficulties, Cuba finally stopped construction in 1992. Chile and Peru have been in the forefront of experimental research to commercialize the extraction of copper through a process of microbial leaching. In Mexico, iron ores are transported by the most advanced, computer-driven pipeline system in the world. Many Latin American manufacturing firms have improved productivity by a constant stream of minor intraplant innovations.
During the late 1960s and early 1970s many of the larger nations of Latin America, including Chile, Colombia, Mexico, Peru, and Venezuela, established or significantly strengthened their national councils for science and technology. During the same period, Argentina, Brazil, Mexico, and the Andean Pact established measures to regulate the importation of technology; this was meant to eliminate imperfections in the transfer of technology that Latin American countries felt put them at a disadvantage. At the same time, there was an increase in the importation of technology along with an emphasis on fostering internal technological capacity. Argentina, Brazil, and Mexico began exporting capital goods, consulting contracts, civil engineering contracts, and turnkey plant facilities, and made some foreign investments that involved technology transfers.
TRENDS SINCE THE 1990S
After the fiscal crises and neoliberal reforms of the 1980s and 1990s, most Latin American states have reduced their investments in technological research and innovation. They have also privatized many state-owned technological enterprises. Private enterprises—both domestic and foreign—have become major forces both for the introduction of new technologies and for fostering domestic technological innovation.
Some Latin American technological enterprises have flourished in the new economic environment. Brazil has developed a thriving small- and medium-size aircraft industry. Based largely on Brazilian design and technical innovations, the industry has been able to adjust rapidly to changing global technological and market conditions. The most successful of these companies is the Brazilian aircraft manufacturer Embraer. The Brazilian government privatized Embraer in 1994. Since then, the company has gradually become a global player in the aircraft industry, specializing in a line of short-haul regional jets.
Latin American research institutions have played a significant role in biotechnology. They have propagated potato cultivars through tissue-culture techniques in Argentina, developed a new strain of inocula for soybeans in Brazil, and produced single-cell proteins in Mexico. The boom in biotechnology research in the 1990s is the product of collaborative initiatives between researchers in the public and private sectors. One of these organizations, the Brazilian Organization for Nucleotide Sequencing and Analysis (ONSA), has conducted pioneering genomic research on citrus crops and sugar cane.
A number of countries in Latin America have also embraced the computer revolution. Argentina, Brazil, and Mexico have national policies that promote local production of computers and associated peripheral equipment. In the 1990s local and state governments in Latin America began seeking collaborative relationships with global technology companies. For example, the Brazilian state of Rio Grande do Sul convinced Dell Corporation to establish a computer manufacturing plant to produce computers for the Brazilian and Latin American markets. Significantly, even some of Latin America's smaller countries are participating in these global processes. In 1996 Intel established a center for parts assembly, semiconductor design, and software development in Costa Rica. Other companies have since followed Intel to Costa Rica.
The use of the Internet in Latin America has also grown explosively since the mid-1990s. More than 5 million Latin Americans had access to the Internet in 1999, and the number has continued to grow rapidly. In Latin America the Internet has been used for a wide range of purposes, from e-commerce to education, government, and political protest. Activist organizations from the EZLN in Mexico to indigenous groups in the Amazon have used the Internet to build international political alliances. As with other technological systems, the comparatively high cost of computers and Internet access has meant that Internet users have been predominantly from the upper sectors of society. The proliferation of cybercafes in cities and in some rural areas has, however, broadened access.
Latin American cities have played a globally important role in the innovation of mass transportation. Here, technological innovation has been focused not on developing new technologies but rather on using existing technologies in new ways. Rather than building expensive metro systems, many Latin American cities have instead built relatively inexpensive busways, with dedicated lanes for public transportation. The Brazilian city of Curitiba—now a global model for transportation planners—inaugurated its busways in the mid-1970s. Many cities in Brazil and elsewhere in Latin America have followed suit. Most of these busways, such as Bogota's TransMilenio, were built through partnerships between public and private enterprise.
These new mass transportation systems are a form of technological innovation that benefits the mass of ordinary Latin Americans. Similarly, cellular telephones have become ubiquitous across Latin America. The impact of technological innovation and industrialization has not, however, been uniformly positive. For example, the opening of the Trans-Ecuadorean oil pipeline has facilitated petroleum exploitation in the Ecuadorean Amazon. While the expanding petroleum industry has generated considerable profits over the short term, oil spills, forest clearances, and road-building associated with the industry have harmed both the region's landscapes and its indigenous groups. The development of maquiladoras, factories along the U.S.-Mexican border, has generated significant pollution problems for the people and landscapes of northern Mexico. The privatization of utilities has also generated popular backlashes. Utilities—such as water, gas, telecommunications, and electricity—are essentially large technological systems for the delivery of basic services. Many Latin Americans fear that the privatization of these utilities will drive up prices, effectively preventing many people from having access to these essential technologies.
Paradoxically, some of the most significant technological innovations in Latin America have involved abandoning the high technologies of the twentieth century. Cuban farmers have embraced organic agriculture on a large scale, forced to do so by economic and political pressures following the collapse of the Soviet bloc. This is not simply a return to older farming practices; organic farming on a large scale requires significant scientific and technological innovation. Organic agriculture is not limited to Cuba. In order to produce certified organic coffee, some coffee farmers in Mexico and Central America are eliminating the high-tech agricultural inputs introduced in the 1970s and 1980s. This technological shift was driven both by a catastrophic collapse in coffee prices in the 1990s and increasing overseas demand for organic produce.
The years since 1980 have been a period of rapid technological innovation and change in Latin America. Partnerships between governments and private enterprise have helped introduce and produce new technologies, many of which are now reaching sectors of Latin American society that had been largely untouched by technological innovations in earlier periods. In some instances, Latin America has even begun to export technologies across the globe. But the benefits of these technological transformations are not equally shared by all Latin Americans. Conversely, their social and ecological costs are frequently borne by the most disadvantaged sectors of Latin American society. The challenge for the future is to ensure that Latin America's technological growth becomes and remains economically, socially, and ecologically sustainable.
Baklanoff, Eric N., and Jeffery T. Brannon. "Forward and Backward Linkages in a Plantation Economy: Immigrant Entrepreneurship and Industrial Development in Yucatán, Mexico." Journal of Developing Areas 19, no. 1 (1984): 83-94.
Bargalló, Modesto. La minería y la metalurgía en la América Española durante la época colonial. Mexico: Fondo de Cultura Económica, 1955.
Bonilla, Marcelo, and Gilles Cliche, eds. Internet and Society in Latin America and the Caribbean. Ottawa, ON: International Development Research Council, 2004.
Cole, William E. "Technology, Ceremonies, and Institutional Appropriateness: Historical Origins of Mexico's Agrarian Crisis." In Progress toward Development in Latin America: From Prebisch to Technological Autonomy, edited by James L. Dietz and Dilmus D. James. Boulder, CO: Lynne Rienner, 1990.
Collinson, Helen, ed. Green Guerrillas: Environmental Conflicts and Initiatives in Latin America and the Caribbean. London: Latin America Bureau, 1996.
Crespi, M. B. A. "La energía nuclear en América Latina: Necesidades y posibilidades." Interciencia 4 (1979): 22-29.
Dahlman, Carl J., and Francisco C. Sercovich. "Exports of Technology from Semi-Industrial Economies and Local Technological Development." Journal of Development Economics 16 (1984): 63-99.
Eisenberg, Peter L. The Sugar Industry in Pernambuco: Modernization without Change, 1840–1910. Berkeley: University of California Press, 1974. See especially pp. 32-62.
Felix, David. "On the Diffusion of Technology in Latin America." In Technological Progress in Latin America: The Prospects for Overcoming Dependency, edited by James H. Street and Dilmus D. James. Boulder, CO: Westview Press, 1979.
Goldstein, Andrea. "Embraer: From National Champion to Global Player." CEPAL Review 77 (August 2002): 97-115.
Gómez, Ricardo. "The Hall of Mirrors: The Internet in Latin America." Current History (February 2000): 72-77.
Inter-American Development Bank. Economic and Social Progress in Latin America, 1988 Report. Washington, DC: Inter-American Development Bank, 1988. See especially pp. 105-283.
Katz, Jorge M., ed. Technology Generation in Latin American Manufacturing Industries. New York: St. Martin's Press, 1987.
Martínez-Torres, Maria Elena. Organic Coffee: Sustainable Development by Mayan Farmers. Athens, OH: Ohio University Press, 2006.
Nelson, Roy C. "Harnessing Globalization: Rio Grande do Sul's Successful Effort to Attract Dell Computer Corporation." Journal of Developing Societies 19 (2003): 268-307.
Pereira, Armand. Ethanol, Employment, and Development: Lessons from Brazil. Geneva: International Labour Office, 1986.
Roberts, J. Timmons, and Nikki Demetria Thanos, eds. Trouble in Paradise: Globalization and Environmental Crises in Latin America. New York and London: Routledge, 2003.
Rodríguez-Clare, Andres. "Costa Rica's Development Strategy Based on Human Capital and Technology: How It Got There, the Impact of Intel, and Lessons for Other Countries." Journal of Human Development 2 (July 2001): 311-324.
Roper, Christopher, and Jorge Silva, eds. Science and Technology in Latin America. London: Longman, 1983.
Rossett, Peter, and Medea Benjamin, eds. The Greening of the Revolution: Cuba's Experiment with Organic Agriculture. Melbourne: Ocean Press, 1994.
Wright, Lloyd. "Latin American Busways: Moving People Rather than Cars." Natural Resources Forum 25 (May 2001): 121-134.
Dilmus D. James
TECHNOLOGY. DIVERGENCE IN TECHNOLOGY: SOME EXAMPLES
TECHNOLOGY AND EUROPEAN INTEGRATION
At the beginning of the twentieth century, technological development in Europe was extremely diverse. Britain, the first industrial nation, had experienced some decline, while Germany, a latecomer to the industrial revolution, had caught up rapidly and had overtaken Britain in some new, research-based industries. Research institutions enabled German industry to move ahead as new technological innovations were implemented. Although research universities in the United States were modeled on the German university system and U.S. chemical companies looked to Germany for inspiration in research and development, many industrialists in Germany and other European countries were fascinated by U.S. industry. The American system of manufacture, mechanization, automatic machine tools, and an infectious feeling of technological optimism had a great impact in Europe. But World War I, the war of the engineers, made the destructive potential of technology visible to everyone. Although new weapons such as tanks, submarines, and aircraft had to a large extent been developed in Europe, European engineers could also build on inventions made in the United States.
After World War I many European engineers flocked to the United States, visiting steel plants and machine and automobile factories and praising American technical and industrial efficiency, mass production, and management. Although these reports were eagerly absorbed at home, some Europeans expressed reservations about the American system. The old elites found it hard to accept that a new system based on industrial technology and mass culture was to prevail. Already in the late nineteenth century the "shock of modernity" had hit the traditional elites in Europe, and during the 1920s the concept of "Americanism" divided the different strata of European society. Hailed by industrialists, and by many trade unionists, as a means to improve living standards, it was denounced by the old cultural elites, who contrasted European "culture" with American "civilization" and associated the latter with only material values. In European industry the 1920s were a period of rationalization and of attempts to increase industrial productivity. Taylorist time-and-motion studies were adopted, and Fordist mass-production methods became an attractive model. But the United States and Western Europe differed, for example in the automobile industry: incomes in Europe were comparatively low and, together with high operating costs, prevented the emergence of a mass market for automobiles. As a consequence, European car producers adopted American mass production only piecemeal. But conditions in Europe also had advantages, allowing more flexibility and a higher level of innovation. The decades after World War I were characterized by large technological systems that originated in the late nineteenth century, for example in electricity supply. These systems were set up on a local, later regional, and sometimes even national basis.
The larger the system, the more efficiently it could function, making use of different sources of energy, especially hard coal, lignite, hydropower, and, later, oil and gas. The German engineer Oskar Oliven presented a plan to the World Power Conference in Berlin in 1930 to set up a European electricity supply system, but it failed, partly because of German reservations and a striving toward autarky.
Although technological innovations such as radar, jet engines, and rockets had mainly been implemented in Europe in the context of military research and development, there was usually an American element to this technology; in digital computer technology and in the military and civil use of nuclear energy, the center of activity was in the United States. In terms of institutional framework and educational system, it makes sense to speak of national systems of technical innovation, but most of the significant technological inventions were distinctly transnational and to an extent even transoceanic. In the two decades after World War II the Americanization of Western Europe grew rapidly. As Jean-Jacques Servan-Schreiber pointed out in 1967, Europe had to do something to stop the brain drain of scientific and technological talent from Europe to the United States, put an end to "Eurosclerosis," and increase European competitiveness, particularly in high-technology areas. A few years later discussions about the limits of growth set in and were especially strong in Western Europe. In the wake of the oil crisis of 1973–1974, a debate already under way on energy conservation, air pollution, and other environmental issues was intensified. This became stronger after the nuclear accident at the Three Mile Island power plant near Harrisburg, Pennsylvania, in 1979 and the catastrophic accident at Chernobyl in Ukraine in 1986. Particularly in Western Europe, reservations grew against "big technology"—large technological systems that might get out of control—whereas on the individual level the daily use of technology such as the telephone, television, and computer seemed to be completely "natural" and was generally seen in a positive light.
Japan's rise as a leading industrial power strengthened the view of many politicians in Europe that an explicit national technology policy can be effective and that, in order to compete with great powers such as the United States and Japan, it would be necessary to intensify technological cooperation within Europe. Unlike Japan and the United States, however, Europe was and is very diverse in its institutional settings, which may be advantageous in some respects but has often proved to be a drawback. From the 1980s onward Japan embarked on direct investment overseas. In the automobile industry it employed lean, just-in-time, robot-based, flexible mass-production methods, which became a model for producers in Europe and elsewhere. European technology policy had important effects on the structural development of the automobile industry in Europe, contributing, for example, to Belgium's emergence as a major automobile producer. In the 1990s the European Union's automobile industry enjoyed the opportunities of a single European market but also had to meet challenges such as Japanese competition, including transplants (such as Japanese car factories in Britain), and overcapacities. The link with Central and Eastern European countries and with many other countries overseas has for some time pointed toward a global, not only European, market.
Looking at European countries more closely: in Germany the introduction of standards, especially in the armaments industry, gave a push to war-production efforts during World War I. As in some other Western European countries, the rationalization movement in German industry was strong during the 1920s. During the Third Reich the Four-Year Plan was implemented in 1936 to make the German economy independent with respect to strategic raw materials. A strong emphasis on armament and the introduction of new weapons was a feature of the National Socialist regime. After World War II the Allies prohibited research in Germany not only in military technology but also in some areas of civilian technology—the two are sometimes difficult to distinguish—such as aeronautics, rocket propulsion, radar, and nuclear technology. The result of this setback was the relatively poor performance of the German aircraft, electronics, and telecommunications industries in later decades. Like other Western industrial countries, Germany experienced increasing competition from countries in the Far East, especially Japan. Japan soon acquired a lead in fields such as electronics, data processing, communications, and materials science and even challenged Germany in its traditionally strong fields of mechanical engineering and the chemical and pharmaceutical industries. One of the future tasks for German policy will have to be a reform of higher education, one of the weak components of its innovation system.
Although in the early twentieth century France was quite successful in innovations such as automobiles and aeronautics, its position in "science push" research carried out in industrial research and development laboratories was comparatively weak. After World War II, in an attempt to keep up with industrial nations such as the United States and Great Britain, France embarked on a policy of large investments in research and development and the foundation of new institutions in science and technology. As a result French industry built a successful commercial aircraft, the Caravelle, and, in cooperation with Britain, the Concorde, a supersonic airplane that, although unsuccessful commercially, was nevertheless a technological achievement. By the mid-1970s France had become a modern industrial state with significant high-technology capabilities. However, the French system of innovation has several problematic peculiarities. Although, compared to the United States or Japan, France is a small country, in its mission-oriented innovation system "big is beautiful." In France emphasis is on large technological systems, especially in military and space technology, electric power, and rail transport, technologies normally developed for public, not private, markets.
Britain's growth in high technology in the 1960s and 1970s was to a large extent due to increased defense expenditure but also to U.S. and Japanese investments in electronics and other fields. Britain managed to keep a leading position in such areas as chemicals, especially petrochemicals, and pharmaceuticals, food processing, and energy, whereas in engineering, except in areas such as aircraft engines, its position was much weaker. With an emphasis on the service sector rather than on manufacturing, Britain has a distinctly modern industrial structure. There is, however, a problematic emphasis on product innovation at the expense of process innovation. Although British science has shown remarkable strength in several fields, technological innovation, particularly in the civil sector, is comparatively weak. Besides, British firms have severely underinvested in vocational training and in research and development, and the comparatively low status of engineers in contemporary Britain points to a loss of technological culture.
In the early twentieth century, Central and Eastern Europe was behind some Western European countries technologically, but science and technology did play a role there too. In Russia, polytechnical institutes had a good reputation, and scientists and engineers such as Vladimir K. Zworykin in electronics and Igor Sikorsky in aircraft and helicopter development testify to the high standard of their training. After the October Revolution of 1917 many first-rate engineers left the country for the United States and elsewhere. In accordance with Lenin's slogan "communism is Soviet power plus the electrification of the whole country," the Bolshevik regime embarked in 1920 on the electrification of Soviet Russia. During the 1920s European and U.S. engineers and businessmen were instrumental in advancing Soviet industrial development, constructing the huge hydroelectric plant Dneprostroi and the gigantic Magnitogorsk iron- and steelworks, modeled after the U.S. steelworks in Gary, Indiana. Henry Ford transferred tractor and automobile technology to the Soviet Union, and Taylorist management principles were adopted there. During the period of the First Five-Year Plan (1928–1932) the USSR tried to set up automobile, machine-tool, aircraft, and mechanical industries of its own, an effort hampered by the fact that many supposedly counterrevolutionary engineers had to leave the country or were even killed in the purges of the 1930s. Shortly before and during World War II the Soviet government set up research institutes for science and technology that later enabled the Soviet Union, with foreign (mainly German) assistance, to become a leader in space technology and to play a significant role in nuclear-energy research and in other high-technology areas.
Like Russia, other Central and Eastern European countries had long-standing scientific and technological relations with the West. Countries such as Poland and Romania had long felt close to French culture, while Czechoslovakia and Hungary had old industrial and technological contacts with Germany. Although industrial technology had generally spread from west to east, the indigenous technological capabilities of Central and Eastern European countries were significant. From the mid-1930s onward several of these countries experienced a growing dependency on technological cooperation with Germany; Czechoslovakia became an armaments-manufacturing center for the Third Reich. After World War II the technological system of the Soviet Union and some members of the Eastern bloc was characterized by large investments in the military and military technology at the expense of investment in the civil sector. This made for international prestige but, in the context of the Cold War and the arms race, created technological, political, economic, and social imbalances that contributed to the dissolution of the communist system in the late 1980s. Political and economic reforms have been under way since; some have brought the desired results, but there is still a long way to go on the road toward transformation.
From the beginning, European integration aimed not only at economic and political union but also at intensive cooperation in technology. The European Coal and Steel Community (ECSC), founded in 1951–1952, worked toward an integration of the European steel industry and also prompted and coordinated research in metallurgy. The European Atomic Energy Community (Euratom), founded in 1958, undertook to strengthen the scientific and technological base of nuclear research and development within Europe. In the early 1960s France tried to convince its European partners that Europe should not stand aside while the United States established a monopoly in satellites and launchers. After the creation of ESRO (European Space Research Organisation) and ELDO (European Launcher Development Organisation) in the 1960s, the European Space Agency (ESA) was founded in 1975; its most important ventures were the construction of the Ariane launcher and of Spacelab, a laboratory for research onboard NASA's space shuttle. Airbus Industrie, a European aircraft producer, was founded in 1970 with the French firm Aérospatiale and the German Messerschmitt-Bölkow-Blohm (MBB) as founding members; the Spanish Construcciones Aeronáuticas S.A. (CASA) joined the consortium in 1971 and British Aerospace in 1979. Its main aim was to be able to compete with the large U.S. aircraft producers. Although there have been national rivalries among the countries involved, Airbus can be called a success story. In 1999 it won more orders than its main competitor, Boeing.
The European Union's Framework Programs represent another attempt at coordinating technological research and development in order to strengthen the technological base and enhance competitiveness. The first program (1984–1987) was rather general, while the second (1987–1991) was more focused, with Esprit, a program devoted to electronics, especially to information and communications technology. COST (European Cooperation in the Field of Scientific and Technical Research), an intergovernmental European framework for cooperation among nationally funded research activities established in 1971, was directed toward member states of the European Union and beyond. Rather than funding research and development activities itself, it brought together research teams from different countries working on specific topics. France was again the driving force behind the launch of another research program, Eureka, established in 1985 with eighteen European countries participating. Eureka aimed at setting up or strengthening research and development cooperation among European industrial enterprises in order to increase the productivity and competitiveness of industry in Europe. Emphasis was on environmental technology and recycling, biotechnology, robotics, and computer technology, but also on new high-performance materials, transport, communication, energy, and laser technology. So far undertakings such as these have yielded some impressive results, although there have also been complaints about cumbersome bureaucratic procedures and the limited flexibility of participating companies. European technological programs have been more successful when public action is organized around a large project than in promoting networking and decentralized technological integration. In 1969 British, West German, and Italian aircraft companies established the Panavia consortium to produce the multirole combat aircraft (MRCA), named the PA 200 Tornado in 1976.
This fighter-bomber was capable of high performance but was also very costly. Europe has gained experience in other collaborative defense programs as well, such as the Eurofighter combat aircraft and the Airbus Military Company A400M, a European airlifter. In 1996 France, Germany, the United Kingdom, and Italy founded the Organization for Joint Armament Cooperation (OCCAR) to improve the efficiency of collaborative programs. Compared to a nation such as the United States, the European defense industry is less efficient because of the duplication of costly research and development programs and small production runs for national markets, which preclude economies of scale, learning, and scope.
Braun, Hans-Joachim, and Walter Kaiser. Energiewirtschaft, Automatisierung, Information seit 1914. In Propyläen Technikgeschichte, vol. 5, edited by Wolfgang König. Frankfurt and Berlin, 1992.
Caron, François, Paul Erker, and Wolfram Fischer, eds. Innovations in the European Economy between the Wars. Berlin and New York, 1995.
Graham, Loren R. The Ghost of the Executed Engineer: Technology and the Fall of the Soviet Union. Cambridge, Mass., 1993.
Hempstead, Colin A., ed. Encyclopedia of 20th-Century Technology. 2 vols. New York and London, 2004.
Hughes, Thomas P. Networks of Power: Electrification in Western Society 1880–1930. Baltimore, Md., and London, 1983.
Johnson, Peter, ed. Industries in Europe: Competition, Trends, and Policy Issues. Cheltenham, U.K., and Northampton, Mass., 2003.
Kipping, Matthias, and Nick Tiratsoo, eds. Americanization in 20th-Century Europe: Business, Culture, Politics. 2 vols. Lille, France, 2002.
Nelson, Richard R., ed. National Innovation Systems: A Comparative Analysis. Oxford, U.K., and New York, 1993.
Petit, Pascal, and Luc Soete, eds. Technology and the Future of European Employment. Cheltenham, U.K., and Northampton, Mass., 2003.
Servan-Schreiber, Jean-Jacques. The American Challenge. Translated by Ronald Steel. New York, 1969.
Hans-Joachim Braun