In psychology, the term “field theory” is used primarily to designate the point of view of Kurt Lewin and his co-workers. Although the term has its origin in physics, where it is employed to refer to the conceptualization of electromagnetic phenomena in terms of fields of electromagnetic forces, field theory in psychology is not an attempt to explain psychological events in terms of physical processes. Rather, it refers to a “method of analyzing causal relations and of building scientific constructs” (Lewin 1943a), which Lewin felt could be applied as fruitfully in psychology as it had been in physics. [See LEWIN.]
Lewin’s early interests and writings—much influenced by the German philosopher Ernst Cassirer—were concerned with the nature of theory in science. Much of his own research orientation and of his great impact on psychology arose from his way of thinking about theorizing. His approach to theorizing in psychology was characterized by several major themes:
(1) An emphasis on the psychological explanation of behavior focuses on the purposes which underlie behavior and the goals toward or away from which behavior is directed; it stresses that one has to deal with what exists psychologically, what is real for the person being studied.
(2) An emphasis on the total situation stresses that the most fundamental construct is that of the psychological “field,” or life space. All psychological events are conceived to be a function of the life space, which consists of the person and the environment viewed as one constellation of interdependent factors. Lewin made it evident that it is meaningless to explain behavior without reference to both the person and his environment and that psychological events are not determined by the characteristics of the individual—“instincts,” “heredity,” “needs,” “habits”—acting independently of the situation.
(3) An emphasis on systematic rather than historical causation indicates that psychological events have to be explained in terms of the properties of the field which exist at the time when the events occur. Lewin rejected the notion of “action at a distance” and stressed that past events can have a position only in the historical causal chains whose interweaving creates the present situation.
(4) The dynamic approach, like that of the gestalt school, accepts the view that living systems tend to maintain a dynamic equilibrium in relation to their environments. Related to this view were Lewin’s interest in the processes by which equilibrium is restored when it is disturbed and his interest in such motivational processes as the arousal of need tensions, the setting of goals, goal-directed action, and the release of tension.
Lewin was not only a metatheoretician but also a bold and original experimentalist, and the themes underlying his approach to theorizing played a central role in his empirical work in psychology. These themes are reflected in his research preoccupation with the psychological environment (the person’s perception of the situation confronting him), his emphasis on social and group influences upon behavior, his stress on the contemporaneous rather than the historical determinants of behavior, and his focus on dynamics (motivation, conflict, and change). The range and impact of the work of Lewin and his associates are indicated by a list of some of the research areas in psychology which they opened up for experimental investigation: dynamic studies of memory, resumption of interrupted activities, substitute activity, satiation, level of aspiration, studies of different types of group leadership, and group decision. Many of the terms associated with Lewin are now part of the common vocabulary of psychologists—e.g., “life space,” “valence,” “locomotion,” “overlapping situation,” “cognitive structure,” “action research.” In the space available here, it will be impossible to do more than sketch some of Lewin’s central theoretical notions. This is done in five sections which deal with (1) dynamic concepts, (2) structural concepts, (3) socially induced change, (4) level of aspiration, and (5) group dynamics.
After several years of work at the University of Berlin on the more traditional problems of perception and learning, Lewin turned to the study of motivation. In 1926 he published the first of a series of over twenty brilliant articles by himself and his students, “Untersuchungen zur Handlungs- und Affektpsychologie” (“Investigations Into the Psychology of Behavior and Emotion”), which appeared in Psychologische Forschung. Here most of the concepts which later became so famous first appeared. Among them are the concepts “tension,” “valence,” “force,” and “locomotion,” which play a key role in Lewin’s theorizing about motivation.
A system in a state of tension is said to exist within the individual whenever a psychological need or an intention (sometimes referred to as a quasi need) exists. Tension is released when the need or intention is fulfilled. Tension has the following conceptual properties: (a) it is a state of a region in a given system which tries to change itself in such a way that it becomes equal to the state of surrounding regions, and (b) it involves forces at the boundary of the region in tension. A positive valence is conceived as a force field in which the forces are all pointing toward a given region of the field (the valent region which is the center of the force field); all the forces point away from a region of negative valence. The construct force characterizes the direction and strength of the tendency to change at a given point of the life space. Change may occur either by actual locomotion (i.e., change in position) of the person in his psychological environment or by a change in the structure of his perceived environment.
There exists a definite relation between tension systems of the person and certain properties of the psychological environment. In particular, a tension may be related to a positive valence for activity regions in the psychological environment which are perceived as tension reducing, and a negative valence for the region in which the behaving self is at present. However, the existence of a region of positive valence (a goal region) depends not only upon the existence of tension but also upon whether there are perceived possibilities for reducing the tension.
When a goal region which is relevant to a system in tension exists in the psychological environment, one can assert that there is a force acting upon the behaving self to locomote toward the goal. A tension for which there is a cognized goal leads not only to a tendency to actual locomotion toward the goal region but also to thinking about this type of activity. This may be expressed by saying that the force on the person toward the goal exists not only on the level of doing (reality) but also on the level of thinking (irreality).
The Zeigarnik quotient. From the foregoing assumptions about systems in tension it is possible to make a number of derivations. Thus, it follows that the tendency to recall or resume interrupted activities should be greater than the tendency to recall or resume finished ones. Zeigarnik (1927) and many others have conducted experiments in which subjects are given a series of tasks to perform and are then prevented from completing half of them. Later, the subjects are asked to recall the tasks they had performed. The results are presented in the form of a quotient, commonly called the Zeigarnik quotient (ZQ):

ZQ = (interrupted tasks recalled) / (completed tasks recalled)
Zeigarnik predicted a quotient of greater than 1. The obtained quotient was approximately 1.9, clearly supporting Lewin’s assumptions. However, since many completed tasks were also recalled, it was obvious that additional factors were involved. Analyzing the situation of the subject at the moment of recall, Zeigarnik concluded that in addition to the force on the person to think about, and hence to recall, the uncompleted tasks, there was also present a force to recall both uncompleted and completed tasks exerted upon the person by the experimenter’s instructions to “try to recall the tasks you worked on earlier.” The Zeigarnik quotient could be viewed as a function of the relative strengths of the induced force to recall all tasks and of the force to recall the uncompleted tasks. As the strength of the force induced by the experimenter increases in relation to the force toward the task goals, the quotient should decrease toward 1; as it decreases in relative strength, the quotient should increase beyond 1. These additional predictions, which follow from an analysis of the situation at recall, were borne out in experiments by Zeigarnik and others. Thus, if the strength of motivation associated with the interrupted task is relatively high, or the strength of the experimenter’s pressure to recall is low, or if the task is interrupted near its end, the Zeigarnik quotient will be high.
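Zeigarnik’s force analysis can be sketched as a small numerical model. This is an illustrative simplification, not Lewin’s formalism: it merely assumes that the tendency to recall a task is proportional to the sum of the forces acting toward its recall, so that ZQ = (induced force + task force) / induced force.

```python
def zeigarnik_quotient(task_force, induced_force):
    """Toy model of the Zeigarnik quotient (ZQ).

    Assumes an uncompleted task is recalled under the experimenter-induced
    force to recall all tasks plus the force from the still-tense task
    system, while a completed task is recalled under the induced force only.
    """
    if induced_force <= 0:
        raise ValueError("induced force must be positive")
    return (induced_force + task_force) / induced_force

# Strong task motivation relative to the experimenter's pressure: ZQ well above 1.
print(zeigarnik_quotient(task_force=0.9, induced_force=1.0))  # about 1.9
# As the induced force comes to dominate, ZQ falls toward 1.
print(zeigarnik_quotient(task_force=0.9, induced_force=9.0))  # about 1.1
```

On this reading the model reproduces both of the additional predictions: the quotient exceeds 1 whenever a task force is present, and it approaches 1 as the experimenter’s induced force grows relatively stronger.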
A number of more recent experiments have indicated that the situation of recall is frequently even more complex than indicated above. When not finishing a task is interpretable as a personal failure (e.g., in an experiment where the tasks are presented as measures of a socially esteemed ability) and when recall of failure threatens one’s self-esteem, or when the recall of success raises a lowered self-esteem, the Zeigarnik quotient tends to be less than 1.
Substitute value. Kate Lissner initiated the study of the value which one activity has for reducing a tension originally connected with another activity by a technique involving resumption of the interrupted task; some of the other experimenters have employed the technique of recall. The substitute value is measured by the amount of decrease in resumption or recall of the interrupted original activity after a substitute activity has been completed. The results of the experiments on substitute value can be summarized as follows: (1) Substitute value increases with the perceived degree of similarity between the original and the substitute activity and with the degree of difficulty of the substitute activity (Lissner 1933). (2) Substitute value increases with increasing temporal contiguity between the original and the substitute activity and with the attractiveness of the substitute activity (Henle 1942). (3) The substitute value of an activity (e.g., thinking, talking, or doing) depends upon the nature of the goal of the original task. Tasks that are connected with the goal of demonstrating something to another person (e.g., the experimenter) require an observable substitute activity (not merely “thinking” without social communication); “realization tasks,” in which the building of a material object is the goal, require “doing,” not only telling how it can be done; for intellectual problems, talking (or telling how it can be done) can have a very high substitute value (Mahler 1933). (4) “Magic solutions,” “make-believe solutions,” or solutions which observably violate the requirements of the task have little substitute value for tasks at the reality level. However, if the situation is a make-believe or play situation, make-believe substitutions will have substitute value (Sliosberg 1934; Dembo 1931). (5) A substitute activity which is identical with the original activity will have little substitute value if it does not serve the same goal. 
Thus, building a clay house for Tony will have little substitute value for building a clay house for Nicky. If the emphasis is on building a clay house and not upon the “for somebody,” then, of course, substitution will occur (Adler & Kounin 1939). (6) Having someone complete the subject’s interrupted task tends to have little substitute value, particularly when completion of the task is related to self-esteem. However, when pairs of individuals work cooperatively on a task, the completion of the task by one’s partner has considerable substitute value (e.g., Lewis & Franklin 1944).
The research findings with respect to substitute value have implications for a wide range of problems in psychology—from the relative gratification value of individual versus socially shared projective or fantasy systems to the development of specialized roles within a group. Let us briefly illustrate with the very important finding that the actions of another person can be a substitute for one’s own actions if there is a cooperative relationship. The fact of substitutability enables individuals who are working cooperatively on a common task to subdivide the task and to perform specialized activities, since none of the individuals in a cooperative situation has to perform all the activities by himself. In contrast, the individual in a competitive situation is less likely to view the actions of others as substitutable for similarly intended actions of his own. Thus, when a competitive situation exists in a group, specialization of activities is less likely to develop (Deutsch 1949a; 1949b).
Satiation. The concept of tension systems has also been fruitfully employed in experimental studies of satiation. With regard to most needs, one can distinguish a state of deprivation, of satiation, and of oversatiation. These states correspond to a positive, a neutral, and a negative valence of the activity regions which are related to a particular need or tension system. Karsten (1928) has studied the effect of repeating over and over again such activities as reading a poem, writing letters, drawing, and turning a wheel. The main symptoms of oversatiation appear to be (1) the appearance of subunits in the activity that lead to the disintegration of the total activity and a loss of meaning of the activity; (2) increasingly poorer quality of work and greater frequency of errors in performing the task; (3) an increasing tendency to vary the nature of the task, accompanied by a tendency for each variation to be quickly satiated; (4) a tendency to make the satiated activity a peripheral activity by attempting to concentrate on something else while doing the task—this is usually not completely successful, and the mind wanders; (5) increasing dislike of the activity and of similar activities, accompanied by an increased valence for different tasks; (6) emotional outbursts; (7) development of “fatigue” and similar bodily symptoms which are quickly overcome when the individual is shifted to another activity.
Satiation occurs only if the activity has, psychologically, the character of marking time or of getting nowhere. If the activity can be viewed as making progress toward a goal, the usual symptoms of satiation will not appear. Embedding an activity in a different psychological whole, so that its meaning is changed, has practically the same effect on satiation as shifting to a different activity. The rapidity with which satiation occurs depends upon factors such as the nature of the activity—with increasing size of its units of action and with increasing complexity, satiation occurs more slowly—and the state of the person—the more fluid the state of the person, the more quickly he is satiated. The rate of satiation and cosatiation of similar activities (i.e., the spread of satiation effects from one activity to similar activities) decreases with age and with lack of intelligence (Kounin 1941).
Lewin attempted to develop a geometry, which he termed “hodological space,” to represent a person’s conception of the means-end structure of the environment, of what leads to what. Although Lewin’s “hodological space” was never developed adequately from a mathematical viewpoint, it served to highlight the necessity of considering a person’s conception of his environment in analyzing his behavior possibilities and in characterizing the direction of his behavior. For example, an individual who walks around a fence to get to a ball behind it is, psychologically, walking toward the ball as he physically walks away from it.
Cognitive structure. The view that direction in the life space is dependent upon cognitive structure has been applied to provide insights into some of the psychological properties of situations which are cognitively unstructured.
Most new situations are cognitively unstructured, since the individual is unlikely to know “what leads to what.” As he strikes out in any direction, he does not know whether he is going toward or away from his goal. His behavior will be exploratory, trial-and-error, vacillating, and contradictory rather than efficient and economical. If reaching the goal has positive significance and not reaching it has negative significance for the individual concerned, then being in a region that has no clear cognitive structure results in psychological conflict, since the direction of the forces acting upon him is likely to be both toward and away from any given region. There will be evidences of emotionality as well as cautiousness in such situations. In addition, the very nature of an unstructured situation is that it is unstable; perception of the situation shifts rapidly and is readily influenced by minor cues and by suggestions from others.
Lewin has employed his characterization of cognitively unstructured situations to give insight into the psychological circumstances of the adolescent (1939-1947) as well as those of minority group members (1935), of people suffering from physical handicaps (Barker 1946), of the nouveaux riches, and of other persons crossing the margins of social classes. It may be applied to any situation in which the consequences of behavior are seemingly unpredictable or uncontrollable; in which benefits and harms occur in an apparently inconsistent, fortuitous, or arbitrary manner; or in which one is uncertain about the potential reactions of others.
Conflict. Lewin has brilliantly employed his structural concepts, in conjunction with his dynamic concepts, to give insight into the nature of conflict situations. He distinguishes three fundamental types of conflict:
(1) The individual stands midway between two positive valences of approximately equal strength. This type of conflict is unstable. As the individual, due to the play of chance factors, moves from the point of equilibrium toward one goal region rather than the other, the resultant force toward that region increases, and, hence, he will continue to move toward that region. This follows from the assumption that the strength of force toward a goal region decreases with increasing distance from the goal.
(2) The second fundamental type of conflict occurs when the individual finds himself between two approximately equal negative valences. The punishment situation is an example. This type of conflict is very much influenced by the structure of the situation. Let us illustrate three varieties of this type of conflict: (a) the individual is between two negative valences, but there are no restraints keeping him in the situation—e.g., a girl who will have to marry an unpleasant suitor or become an impoverished spinster if she remains in her village (there is nothing to prevent her from leaving her village); (b) the individual is between two negative valences, but he cannot leave the field—e.g., a group member who is faced with the prospect of losing social status or of performing an unpleasant task (he cannot leave the group); (c) the individual is in a region of negative valence and can leave it only by going through another region of negative valence—e.g., a man is cited for contempt of Congress for not testifying whether or not some of his acquaintances were members of the Communist party (to purge himself of contempt he must become an informer).
It is evident that the situation depicted in (a) will lead the person to go out of the field. Only if restraints prevent the individual from leaving the field will such a situation result in more than momentary conflict. Restraints as in (b) introduce a conflict between the driving forces related to the negative valence and the restraining forces related to the barrier. There is a tendency for a barrier to acquire a negative valence which increases with the number of unsuccessful attempts to cross it and which, finally, is sufficiently strong to prevent the individual from approaching it (Fajans 1933). Thus, the conflict between driving and restraining forces is replaced by a conflict between driving forces, as in (c) above. This fact is particularly important for social psychology since, in many situations of life, the barriers are social. When a person turns against the barrier, he is in effect directing himself against the will and power of the person or social group to whom the erection of the barrier is due.
(3) The third fundamental type of conflict situation occurs when the individual is exposed to opposing forces deriving from a positive and a negative valence. One can distinguish at least three different forms of this conflict: (a) A single region has both positive and negative valence. (The Freudian concept of ambivalence is subsumable under this variety of conflict.) For example, a person wishes to join a social group but fears that being a member will be too expensive. (b) A person is encircled by (but not actually in) a negative or a barrier region and is attracted to a goal which is beyond it—e.g., a person who has to go through the unpleasant ordeal of leaving his home or his group in order to pursue some desired activity. In this situation, the region of the person’s present activity tends to acquire negative valence as long as the region which encircles the person hinders locomotion toward desired outer goals. Thus, being a member of a minority group or being in a ghetto or in a prison often takes on negative valence, apart from the inherent characteristics of the situation, because one can get to desired goals from the region only by passing through an encircling region of negative valence. (c) A region of positive valence is encircled by or is accessible only through a region of negative valence. This type of situation differs from (b) in that the region of positive valence rather than the region in which the person is to be found is encircled by the negative valence. The “reward situation,” in which the individual is granted a reward only if he performs an unpleasant task, is an example of this type of conflict.
Lewin ([1939-1947] 1963, p. 259), as well as Miller (1944), has pointed out that the forces corresponding to a negative valence tend to decrease more rapidly as a function of psychological distance than do the forces corresponding to a positive valence. The amount of decrease depends also upon the nature of the region which has a positive or negative valence. It is different in the case of a dangerous animal which can move about than it is in the case of an immovable unpleasant object. The difference in the gradients of decrease for forces deriving from positive and negative valences accounts for the apparent paradox that a strong fear or a strong tendency to withdraw may be taken as evidence of a strong desire for the goal. Only with a very attractive goal will the equilibrium point between approach and avoidance tendencies be close enough to the negatively valent region to produce a strong force away from the goal. On the other hand, strengthening the negative valence of a region may very well have the effect of weakening the forces in conflict, since the equilibrium point may be pushed a considerable distance away from the valent regions. [See CONFLICT, article on PSYCHOLOGICAL ASPECTS.]
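The gradient argument can be made concrete with a linear toy model. The linear force functions, slopes, and strengths below are illustrative assumptions (neither Lewin’s nor Miller’s actual quantities); the only substantive premise taken from the text is that the avoidance gradient is steeper than the approach gradient.

```python
def equilibrium_distance(approach_at_goal, avoid_at_goal,
                         approach_slope=0.5, avoid_slope=2.0):
    """Distance from the goal at which approach and avoidance forces balance.

    Linear-gradient toy model: force = strength_at_goal - slope * distance,
    with the avoidance gradient steeper (avoid_slope > approach_slope).
    """
    # Solve approach_at_goal - approach_slope*d == avoid_at_goal - avoid_slope*d.
    return (avoid_at_goal - approach_at_goal) / (avoid_slope - approach_slope)

def approach_force(strength_at_goal, slope, distance):
    return strength_at_goal - slope * distance

# Strengthening the negative valence pushes the equilibrium point farther
# from the goal region ...
d_weak = equilibrium_distance(approach_at_goal=5.0, avoid_at_goal=8.0)     # 2.0
d_strong = equilibrium_distance(approach_at_goal=5.0, avoid_at_goal=11.0)  # 4.0
assert d_strong > d_weak

# ... and the forces in conflict at the new equilibrium are weaker, since
# both gradients have decayed further by the time they cross.
assert approach_force(5.0, 0.5, d_strong) < approach_force(5.0, 0.5, d_weak)
```

This mirrors the point made above: a stronger negative valence does not intensify the conflict but displaces its equilibrium point outward, where both opposing forces are weaker.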
It should be noted that Lewin’s motivational concepts do not presuppose that motivation is primarily induced by physiological deficits; his conceptualization suggests that tension and valences may be aroused socially (e.g., by an experimenter’s instructions to perform a task); he also indicated that the forces acting upon an individual may be “imposed” or may directly reflect the individual’s own needs.
The concept of power field is relevant in this connection. It is an inducing field; it can induce changes in the life space within its area of influence. The distinction between own and induced forces has been found to be useful in explaining some of the differences in behavior under autocratic and democratic leadership (Lippitt & White 1943). Thus, children in a club led by authoritarian leaders (who determined policy, dictated activities, and were arbitrary and personal in evaluation of activities) tended to develop little of their own motivation with respect to club activities. Although the children worked productively when the leader was present (i.e., when his power field was psychologically effective), the lack of personal motivation toward group goals clearly evidenced itself in (1) change of behavior when the leader left the club, (2) absence of motivation when the leader arrived late, (3) lack of carefulness in the work, (4) lack of initiative in offering spontaneous suggestions in regard to club projects, (5) lack of pride in the products of club effort. [See LEADERSHIP.]
The distinction between own and induced forces has also been employed to explain why workers are usually happier and more productive when they can participate in the decisions which affect their work. Participation in goal setting is more likely to create own forces toward the goal and thus reduce the necessity to exert continuous social influences toward the same end (Coch & French 1948).
Perhaps no other area of research that Lewin and his students have opened to experimental investigation has been the object of so many studies as that of level of aspiration. The level of aspiration may be defined as the degree of difficulty of the goal toward which the person is striving. The concept of level of aspiration is relevant only when there is a perceived range of difficulty in the attainment of possible goals and a variation in valence among the goals along the range of difficulty.
In discussing the level of aspiration, it may be helpful to consider a sequence of events which is typical for many of the experimental studies in this area: (1) A subject plays a game (or performs a task) in which he can obtain a score (e.g., throwing darts at a target); (2) after playing the game and obtaining a given score, he is asked to tell what score he will undertake to make the next time he plays; (3) he then plays the game again and achieves a different score; (4) he reacts to his second performance with feelings of success or failure, with a continuing or new level of aspiration, etc. In the foregoing sequence, point (4) (reaction to achievement) is particularly significant for the dynamics of the level of aspiration.
In outline form, the theory of the level of aspiration is rather simple (Lewin et al. 1944). It states that the resultant valence of any level of difficulty will be equal to the valence of achieving success times the subjective probability of success, minus the valence of failure times the subjective probability of failure. The level of aspiration (i.e., the goal an individual will undertake to achieve) will be the level of difficulty that has the highest resultant positive valence. The subjective experience of success or failure is determined by the relation of the individual’s performance to his level of aspiration (providing, of course, that the performance is seen to be self-accomplished) and not simply by his absolute accomplishments.
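The resultant-valence rule lends itself to a direct computation. The numbers below are hypothetical, chosen only to illustrate the typical pattern in which harder goals carry larger success valences but lower subjective probabilities of success.

```python
def resultant_valence(val_success, val_failure, p_success):
    """Resultant valence of one difficulty level (after Lewin et al. 1944):
    valence of success weighted by the subjective probability of success,
    minus valence of failure weighted by the probability of failure."""
    return val_success * p_success - val_failure * (1.0 - p_success)

def level_of_aspiration(levels):
    """The goal the person undertakes: the level with the highest resultant
    valence.  `levels` maps a difficulty label to a
    (val_success, val_failure, p_success) triple."""
    return max(levels, key=lambda k: resultant_valence(*levels[k]))

levels = {
    "easy":   (1.0, 3.0, 0.95),  # success at a trivial task is worth little
    "medium": (5.0, 2.0, 0.60),
    "hard":   (9.0, 1.0, 0.10),  # failure at a very hard task costs little
}
print(level_of_aspiration(levels))  # "medium" wins with these numbers
```

With these illustrative values the intermediate goal has the highest resultant valence (2.2, against 0.8 for the easy and 0.0 for the hard level), matching the common finding that the level of aspiration is set above past performance but below the hardest conceivable goal.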
Experimental work on the level of aspiration has brought out the variety of influences which affect the positive and negative valences of different levels of difficulty. It has indicated that cultural and group factors establish scales of reference which help to determine the relative attractiveness of different points along a difficulty continuum. Some of these influences are rather stable and permanent in their effects. It has been found, for example, that most people in Western cultures, under the pervasive cultural pressures toward “self-improvement,” when first exposed to a level of aspiration situation give a level of aspiration which is above the previous performance score, and under most conditions they tend to keep their level of aspiration higher than their previous performance. In addition to broad cultural factors, the individual’s level of aspiration in a task is likely to be very much influenced by the standards of the group to which he belongs. The nature of the scales of reference set up by different group standards may vary. Reference scales do not derive solely from membership in a definitely structured social group; they also may reflect the influence of one’s self-image, of other individuals, or of groups that either establish certain standards for performance or that serve as models for evaluating self-performance. Thus, the level of aspiration of a college student with respect to an intellectual task will vary, depending upon whether he is told that a given score was obtained by the average high school student, the average college student, or the average graduate student.
Research has given some insight into the factors determining the values on the scale of subjective probability. A main factor which determines the subjective probability of future success and failure is the past experience of the individual in regard to his ability to reach certain objectives. If the individual has had considerable experience with a given activity, he will know pretty well what level he can expect to reach, and the gradient of values on the subjective probability scale will be steep. However, it is not only the average of past performances which determines an individual’s subjective probability scale but also the trend—whether he is improving, getting worse, or remaining the same. Furthermore, there is experimental evidence to indicate that the last or most recent success or failure has a particularly great influence on the individual’s expectation of his future achievement level. In addition, there is evidence that the subjective probability scales of others, as well as their performance, can influence the subject’s own probability scale. Personality factors—for example, self-confidence—may also influence subjective probability.
The level of aspiration theory has widespread implications for many social phenomena. It gives insight into the reasons for social apathy in the face of pressing political and international problems. People are not likely to attempt to achieve even highly valued objectives when they see no way of attaining them. Similarly, it sheds some light upon why social revolution tends to occur only after there has been a slight improvement in the situation of the oppressed groups—the improvement raises their level of aspiration, making goals which were once viewed as unattainable now perceived as realistic possibilities.
Apart from papers dealing with group decision and social change (Lewin 1935-1946; 1947a; 1947b), Lewin actually wrote very little on the theory of group dynamics. However, from the research investigations of his colleagues and students at the Research Center for Group Dynamics a formidable array of concepts has emerged. [See GROUPS.]
Let us begin our discussion of the concepts of group dynamics by briefly considering the concept of group. Lewin ([1935-1946] 1948, p. 48) wrote:
The essence of a group is not the similarity or dissimilarity of its members, but their interdependence. A group can be characterized as a “dynamic whole”; this means that a change in the state of any subpart changes the state of any other subpart. The degree of interdependence of the subparts of members of the group varies all the way from a loose “mass” to a compact unit.
French (1944) pointed out that in addition to interdependence, membership in a group presupposes identification with the group. Deutsch (1949a) indicated that the interdependence is promotive or cooperative rather than, for example, competitive. Thus, a group may be tentatively defined as being composed of a set of members who mutually perceive themselves to be cooperatively or promotively interdependent in some respect(s) and to some degree.
Cohesiveness. One of the key concepts, which has been the subject of much experimental investigation, is that of cohesiveness. Intuitively, cohesiveness refers to the forces which bind the parts of a group together and which, thus, resist disruptive influences. Hence, the study of conditions affecting group cohesiveness and of the effects upon group functioning of variations in group cohesiveness is at the heart of the study of group life. Festinger, Schachter, and Back (1950) have defined cohesiveness, in terms of the group member, as “the total field of forces which act on members to remain in the group.” The nature and strength of the forces acting upon a member to remain in the group may vary from member to member.
Various measures of individuals’ “cohesiveness” to their groups have been employed in experimental investigations: desire to remain in the group, the ratio of “we” remarks to “I” remarks during group discussion, ratings of friendliness, evaluations of the group, in-group versus out-group sociometric choice, and others. Deutsch (1949a), in a theoretical paper, provides a rationale for the use of a wide variety of measures by developing the hypotheses that members of more cohesive (cooperative) groups, as compared with members of less cohesive (competitive) groups, would, under conditions of success: (a) be more ready to accept the actions of other group members as substitutable for similarly intended actions of their own (and therefore would not have to perform them also); (b) be more ready to accept inductions (i.e., be influenced) by other members; and (c) be more likely to cathect positively the actions of other group members. From these core hypotheses, with the addition of more specific assumptions, it is possible to derive the influence of more or less cohesiveness upon many aspects of group functioning. Thus, from the substitutability hypothesis, it is possible to predict that more specialization of function, more subdivision of activity, and more diversity of membership behavior would occur in the more cohesive groups. The inductibility hypothesis leads to the prediction that members of more cohesive groups would be more attentive to one another, be more understood by one another, be more influenced by one another, and be more likely to change and have more internalization of group norms than members of less cohesive groups. The cathexis hypothesis leads to predictions of greater friendliness, greater ratio of in-group sociometric choices, etc., in the more cohesive groups. Data in a variety of experiments (e.g., Deutsch 1949b; Schachter 1951; Back 1951) support the foregoing predictions. [See COHESION, SOCIAL.]
Communication. In a well-integrated program of research, Festinger and his co-workers have developed a series of fertile hypotheses (Festinger 1950) and have conducted some ingenious experiments on the communication process within groups. In brief, these investigators have been concerned with three sources of pressures to communicate within groups: (a) communications arising from pressures toward uniformity in a group (Back 1951; Festinger & Thibaut 1951; Schachter 1951); (b) communications arising from forces to locomote in a social structure (Kelley 1951; Thibaut 1950); and (c) communications arising from the existence of emotional states (Thibaut 1950; Thibaut & Coules 1952). [See INTERACTION.]
Pressures toward uniformity. Festinger (1950; 1954) has indicated two major sources of pressures toward uniformity in a group: social reality and group locomotion. He indicates that when there is no single objective basis for determining the validity of one’s beliefs, one is dependent upon social reality to establish confidence in one’s beliefs. Thus, to evaluate the validity of his opinions or to evaluate his abilities or even the appropriateness of his emotions (Schachter 1959), he will compare them with those of others. Festinger (1954) posits that to facilitate accuracy of evaluation of their opinions or abilities, people will seek out others with similar opinions or abilities for comparison and will avoid dissimilarity or attempt to reduce it when it exists. Also, lack of agreement among members of a group provides an unstable basis for beliefs which depend upon social consensus for their support, and hence—in line with Heider’s discussion of the tendency toward cognitive balance (1958) or Festinger’s theory of cognitive dissonance (1957)—forces will arise to produce uniformity. [See CONFORMITY.]
Pressures toward uniformity among members of a group may also arise because such uniformity is desirable or necessary in order for the group to locomote toward some goal. Greater uniformity in opinion within a group can be achieved in either of the following ways: (a) by actions (i.e., communications) that are directed at changing the views of others or one’s own views or (b) by actions to make others incomparable in the sense that they are no longer effective as a comparison for one’s opinions —e.g., by rejecting or excluding people with deviating opinions from the group.
Experiments have shown that increasing the attraction to the group, and thus increasing the importance of the group as a comparison object, increases the amount of influence which is attempted and the amount of opinion change which occurs when there is opinion discrepancy within a group (Back 1951). It has also been demonstrated that the more relevant or important the opinion is for the functioning of the group, the more pressure there will be for uniformity in a group (Schachter 1951; Festinger & Thibaut 1951). Also, as may be expected from theoretical considerations, there is evidence that when pressures toward uniformity exist, the concern is mainly with those members of the group who have opinions most divergent from one’s own. Thus, members exert influence mainly on those whose opinions are most divergent from their own.
Pressures toward change. The experiments revealing the tendency to direct communication toward deviants in the group, and through communication to exert social pressure upon them, provide support for Lewin’s theory of group decision and social change. Lewin (1947b) began his analysis of change by pointing out that the status quo in social life is not a static affair but a dynamic process that flows on while still keeping a recognizable form. He borrowed the term “quasi-stationary equilibria” from physics to apply to such ongoing processes, which are kept at their present level by fields of forces preventing a rise or fall. In the neighborhood of the equilibrium level, the field of forces is such that the forces against exceeding the level increase with the amount of raising, and the forces against lowering increase (or remain constant) with the amount of lowering. Thus, if we assume that a group standard is operating to determine the level of worker productivity in a factory, any attempt upon the part of the worker to deviate from the standard by higher productivity will only result in stronger forces being induced upon him by his co-workers in order to push him back into line. That is, as the Festinger experiments have demonstrated, the deviant will be exposed to stronger pressures toward uniformity the more he deviates. However, as Lewin points out, the gradient of forces may change at a distance from the equilibrium level, so that after an individual has gone a certain distance from the equilibrium level, the forces may push him away from rather than pull him toward the group standard.
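The force field around such a quasi-stationary equilibrium can be sketched numerically. In this toy model the equilibrium level, the slope of the gradient, and the breakaway distance are invented for illustration: within the breakaway distance, deviation from the group standard meets a restoring force proportional to the deviation; beyond it the gradient reverses and the force pushes the deviant further away, as Lewin suggests.

```python
def force_on_member(x, equilibrium=50.0, k=0.8, breakaway=15.0):
    """Force acting on a member producing at level x (illustrative units).

    Within `breakaway` units of the equilibrium, deviation is met by a
    restoring force proportional to the deviation (social pressure back
    toward the group standard); beyond that distance the gradient
    reverses and the force pushes the deviant away from the standard.
    """
    deviation = x - equilibrium
    if abs(deviation) <= breakaway:
        return -k * deviation   # pulled back toward the group standard
    return k * deviation        # past the breakaway point: pushed away
```

At the equilibrium itself the force vanishes; small deviations in either direction are opposed, while a large enough deviation escapes the standard altogether.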
Lewin’s analysis of the status quo as a quasi-stationary equilibrium has two major implications. First, it points out that change from the status quo can be produced either by adding forces in the desired direction or by diminishing opposing forces. However, the two methods of producing change have different consequences: if forces are added, the process on the new level will be accompanied by a state of relatively high tension, since the strength of forces in opposition will be greater; if the opposing forces are decreased, the new level will be accompanied by lower tension. Second, it highlights the difficulties of attempting to change group-rooted individual conduct and attitudes through efforts directed at the individual and not his group. If one endeavors to change the prejudices of an individual without changing the prejudice of the group to which he belongs, an individual will either be estranged from his group or be under pressure from his group to revert to his initial attitudes. Isolated individuals may perhaps change their attitudes because of their individual experiences, but the person who is deeply enmeshed in the social life of his community is unlikely to be able to resist the pressures to conform on matters of community importance if he wishes to continue in good standing. [See ATTITUDES, article on ATTITUDE CHANGE.]
Considerations such as the foregoing have led to a series of experiments in various settings— in the school, with neighborhood groups, in industry, in an interracial workshop—on the relative efficacy of changing behavior by efforts directed at individuals or at a group (Lewin 1935-1946). A typical procedure has been to compare the results of a lecture or individual instruction in changing behavior regarding the use of certain foods with the results of a group decision favoring the use. Results have clearly indicated that the group decision method produces more change.
In previous sections of this article, work that was directly initiated or stimulated by Lewin or his immediate associates has been described. The large sweep of this work, the brilliant innovations, and the experimental ingenuity cannot help but impress. Even though Lewin died in 1947, his impact on psychology continues to be felt in the extension and application of his ideas by others. We turn, briefly, to some current work in psychology that extends Lewin’s ideas: a theory of achievement motivation, balance and dissonance theory, and T-group training (laboratory training groups).
A theory of achievement motivation. Atkinson (1957; 1964) has developed a theory of achievement motivation that is an extension and elaboration of ideas advanced in the theory of level of aspiration developed by Lewin, Festinger, and Sibylle Escalona. Atkinson’s theory attempts to account for the determinants of the direction, magnitude, and persistence of achievement-motivated performance. Achievement motivation is the resultant of two opposed tendencies: the tendency to achieve success (Ts) minus the tendency to avoid failure (Tf). In Atkinson’s notation this is represented as Ts − Tf.
The tendency to achieve success is assumed to be a multiplicative function of the motive to achieve success (Ms), which the individual carries about with him from situation to situation, the subjective probability of success (Ps), and the incentive value of success at a particular activity (Is): Ts = Ms × Ps × Is. Similarly, the tendency to avoid failure is assumed to be a multiplicative function of the motive to avoid failure (Maf), the subjective probability of failure (Pf), and the negative incentive value of failure (If): Tf = Maf × Pf × If.
So far, Atkinson’s analysis parallels the level of aspiration theory, if one assumes, quite properly, that the Lewinian concept of valence is equivalent to Atkinson’s motive times incentive: namely, that the valence of success = Ms × Is and that the valence of failure = Maf × If. He, however, introduces the additional assumptions that Is = 1 − Ps = Pf and that If = −Ps. In other words, his theory details more unequivocally the relationship between the perceived level of difficulty and the incentive values of success and failure. It states, in effect, that the value of success increases with the perceived difficulty of the task, while the displeasure at failure increases with the perceived ease of the task. (Notice that in Atkinson’s theory Ps × Is must equal −Pf × If and, hence, that whatever differences in strength there are between Ts and Tf will be due solely to the differences between Ms and Maf.) In addition, he has specified methods of measuring Ms and Maf: Ms is measured by the methods developed by McClelland and his associates (1953) to measure need achievement, and Maf is measured by the Mandler-Sarason test of anxiety (1952).
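A minimal numerical sketch of these relations (the motive values Ms = 8 and Maf = 3 and the probability grid are invented for illustration; the algebra is the theory's): substituting Is = 1 − Ps and taking the magnitude of If to be Ps, the resultant Ts − Tf reduces to (Ms − Maf) × Ps × (1 − Ps), which is largest in absolute value at intermediate difficulty, Ps = 0.5.

```python
def resultant_motivation(ms, maf, ps):
    """Atkinson's resultant achievement motivation, Ts - Tf.

    Ts = Ms * Ps * Is     with incentive Is = 1 - Ps
    Tf = Maf * Pf * |If|  with Pf = 1 - Ps and |If| = Ps
    (Tf is treated here as the magnitude of the avoidance tendency,
    matching the Ts - Tf form in the text.)
    """
    ts = ms * ps * (1.0 - ps)     # tendency to achieve success
    tf = maf * (1.0 - ps) * ps    # tendency to avoid failure
    return ts - tf                # = (Ms - Maf) * Ps * (1 - Ps)

# A success-oriented person (Ms > Maf) is most attracted to tasks of
# intermediate difficulty; a failure-avoidant one (Maf > Ms) is most
# repelled by exactly those tasks.
for ps in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(ps, resultant_motivation(ms=8, maf=3, ps=ps))
```

The symmetric product Ps × (1 − Ps) is why, as noted above, only the difference between Ms and Maf distinguishes the strengths of the two tendencies.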
Atkinson (1964) summarizes a considerable amount of research that is consistent with his theory. The wide applicability of the theory is indicated by the range of content to which the theory has been applied: motivational effect of ability grouping in schools, strength of achievement motive and occupational mobility, fear of failure and unrealistic vocational aspiration, preferences for degrees of risk, and the effects of success and failure. [See ACHIEVEMENT MOTIVATION.]
Cognitive balance. One of the notions implicit in Lewin’s view of the life space as cognitively structured, and generally in the gestalt view that organization tends to be as “good” as possible, is that when a structure of beliefs and attitudes is imbalanced or disharmonious, a tendency will arise to change one’s beliefs and attitudes until they are balanced. Lewin’s view is brought out most clearly in his discussion of cognitively unstructured situations and in his discussion of the cognitive structure of different types of conflict situations. Heider (1946; 1958), one of Lewin’s most brilliant associates, has extended this notion in his theory of cognitive balance. [See THINKING, article on COGNITIVE ORGANIZATION AND PROCESSES.]
Heider points out that cognitive stability requires a congruence among causal expectations with respect to related objects. That is, for a state of complete cognitive harmony to exist, the various implications of a person’s expectations or judgments of any aspect of the cognized environment cannot contradict the implications of his expectations or judgments of any other aspect of the cognized environment. Thus, if a person judges X to be of potential benefit to his welfare, he cannot at the same time judge that Y (which is also judged to be of benefit to his welfare) and X are antagonistic and still have a stable or balanced cognitive structure. (Let X and Y be things, people, the products of people, or the characteristics of people.) When the cognitive structure is in a state of imbalance or is threatened by imbalance, forces will arise to produce a tendency toward locomotion that will change the psychological environment or produce a tendency toward change in the cognition of the environment. Under conditions which do not permit locomotion, the tendency for cognitive change is enhanced.
Heider specifies further that (a) in regard to attitudes directed toward the same entity, a balanced state exists if positive (or negative) attitudes go together; a tendency exists to see a person as being positive or negative in all respects; (b) in regard to attitudes toward an entity combined with belongingness, a balanced state exists if a person is united with the entities he likes and if he likes the entities he is united with; the converse is true for negative attitudes; (c) if two entities are seen as parts of a unit, a balanced state will exist if the parts are seen to have the same dynamic character (positive or negative). If the two entities have different dynamic characters, a balanced state can exist if they are seen to be segregated (i.e., by breaking up the unit).
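Heider's conditions for a person-other-object (p-o-x) triad reduce to a compact sign rule. The +1/−1 encoding and the function below are an illustrative sketch, not Heider's own notation: a triad of attitude or unit relations is balanced exactly when the product of the three signs is positive.

```python
def triad_balanced(p_o, p_x, o_x):
    """Heider's p-o-x triad: each argument is +1 (liking, belonging)
    or -1 (disliking, segregation). The triad is balanced iff the
    product of the three signs is positive: e.g. 'my friend likes what
    I like' (+,+,+) or 'my enemy likes what I dislike' (-,-,+)."""
    return p_o * p_x * o_x > 0

triad_balanced(+1, +1, +1)   # p likes o, p likes x, o likes x: balanced
triad_balanced(+1, +1, -1)   # my friend dislikes what I like: imbalanced
```

The imbalanced cases are precisely those in which, as described above, forces arise either to change one of the attitudes or to break up the unit.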
Heider’s theory has had wide ramifications in social psychology. Cartwright and Harary (1956) extended balance theory to cover a greater range of phenomena and also removed some of its ambiguities by using the mathematical theory of linear groups. Newcomb (1953; 1961) has applied Heider’s theory to the “balancing” of interpersonal relations, particularly in his analysis of communicative acts and the development of friendships. Rosenberg and Abelson (1960) have applied a modification of Heider’s theory to the problems of attitude change.
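The graph-theoretic generalization can also be sketched in code (the function and encoding below are illustrative assumptions, not the notation of the 1956 paper): a signed graph is balanced exactly when its points can be split into two subsets with all positive lines within a subset and all negative lines between the subsets, equivalently, when every cycle contains an even number of negative lines. A sign-propagating two-coloring tests this.

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Structural balance check for a signed graph on vertices 0..n-1.

    `signed_edges` holds (u, v, sign) triples with sign +1 or -1.
    BFS two-coloring: a positive edge keeps the camp, a negative edge
    flips it; a contradiction means some cycle has an odd number of
    negative edges, i.e. the graph is imbalanced.
    """
    adj = {v: [] for v in range(n)}
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    camp = {}
    for start in range(n):
        if start in camp:
            continue
        camp[start] = 1
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                want = camp[u] * sign  # +: same camp, -: opposite camp
                if v not in camp:
                    camp[v] = want
                    queue.append(v)
                elif camp[v] != want:
                    return False       # odd cycle of negative ties
    return True
```

For example, a triangle with exactly one negative tie (two friends sharing an enemy they disagree about) is imbalanced, while a triangle with two negative ties (two allies against a common enemy) is balanced.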
Dissonance theory. Leon Festinger, one of Lewin’s most renowned students, has developed a theory (1957) which is similar to balance theory in its stress on the need to avoid cognitive inconsistency (dissonance) but differs from it in emphasizing postdecisional processes. Lewin (1939-1947), in a discussion of the behavior of a housewife who buys food, had suggested that behavior before and after decisions about purchasing food differed; prior to the decision, the more expensive the food, the less likely it was to get to the family table; after the decision, the higher its cost, the more likely it was to be used. Festinger’s theory generalizes the idea that the postdecision situation may differ from the predecision situation. It makes the original assumption that making a decision per se arouses dissonance and pressure to reduce the dissonance. [See THINKING, article on COGNITIVE ORGANIZATION AND PROCESSES.]
Postdecision dissonance results, according to Festinger, from the fact that the decision in favor of the chosen alternative is counter to the beliefs which favor the unchosen alternative(s). To stabilize or freeze the decision after it has been made, the individual will attempt to reduce dissonance by changing his cognitions so that the relative attractiveness of the chosen, as compared with the unchosen, alternative is increased, or by developing cognitions which permit the alternatives to be substitutable for one another, or by revoking the decision psychologically. In Festinger’s view (1964), the crucial difference from the predecision state of dissonance is that the predecisional conflict is “impartial” and “objective”; it does not lead to any spreading apart of the attractiveness in favor of the to-be-chosen alternative. Festinger writes: “Once the decision is made, however, and dissonance-reduction processes begin, one should be able to observe that the differences in attractiveness between the alternatives change, increasing in favor of the chosen alternative” (1964, pp. 8-9).
A variety of interesting and ingenious experiments have been stimulated by Festinger’s view of the postdecision process. (See Brehm & Cohen 1962, for a detailed review.) These experiments have often involved “nonobvious” predictions which appear to defy common sense. Many of these predictions derive from the notion that if a decision produces insufficient rewards, the person will change his beliefs so as to make the decision seem more rewarding. Festinger (1961, p. 11) writes: “Rats and people come to love the things for which they have suffered.” Presumably they do this to reduce the dissonance induced by the suffering, and their method of dissonance-reduction is to enhance the attractiveness of the choice which led to their suffering in order to justify it.
The theory of dissonance has stimulated extensive research into many aspects of social psychology: attitude and behavioral change, perceptual distortion, selective exposure to information, work productivity, and so forth. Recently, dissonance theory has been the subject of methodological criticism (Chapanis & Chapanis 1964) and theoretical criticism (Deutsch et al. 1962; Rosenberg 1965) which pose some difficult but perhaps not unresolvable problems for the theory.
T-group training. T-group training (laboratory training in groups) utilizes the experiences and interpersonal relations which occur in a temporary, laboratory-created group to help the participants develop insights into group processes, insights into the way that they themselves function in groups, and skills of participation in groups. The National Training Laboratories, which has become one of the key institutions concerned with the application of behavioral science to social practice, was established in 1947 with the cosponsorship of Lewin’s Research Center for Group Dynamics and has been very much influenced by Lewin’s ideas. His articles “Conduct, Knowledge, and Acceptance of New Values” (Lewin & Grabbe 1945) and “Frontiers in Group Dynamics” (1947) present the intellectual base for the development of the conception of laboratory training groups. These articles highlight the importance of the group in the process of individual re-education and change. In recent years, a considerable literature has been developed in connection with the laboratory method of training individuals in groups (see Bradford et al. 1964). While some of this literature is evangelistic in tone, this, too, is consonant with Lewin’s active and continuing concern that social science be used to make the world a better place in which to live.
Although it cannot be said that Lewin’s specific theoretical constructs—his structural and dynamic concepts—are central in current research in psychology, his impact on psychology has been a major one. His impact is reflected in the general orientations which, today, are more and more taken for granted. These are that psychological events have to be explained in psychological terms; that central processes in the life space (e.g., distal perception, cognition, motivation, goal-directed behavior) rather than the peripheral processes of sensory input or of muscular action should be the focus of investigation; that psychological events have to be studied in their interrelations with one another; that the individual has to be studied in his interrelations with the group to which he belongs; that the attempt to bring about change in a process is the most fruitful way to investigate it; that important social psychological phenomena can be studied experimentally; that the scientist should have a social conscience; and that a good theory is valuable for social action as well as for science.
[Directly related are the entries GESTALT THEORY; PERSONALITY; PHENOMENOLOGY; SOCIAL PSYCHOLOGY; THINKING, article on COGNITIVE ORGANIZATION AND PROCESSES; and the biography of LEWIN. For comparison, other relevant approaches to psychological phenomena may be found in PERSONALITY: CONTEMPORARY VIEWPOINTS; PSYCHOANALYSIS. Relevant also are ACHIEVEMENT MOTIVATION; ATTITUDES; COHESION, SOCIAL; CONFLICT; GROUPS; LEADERSHIP; SYSTEMS ANALYSIS, article on PSYCHOLOGICAL SYSTEMS; and the biographies of BRUNSWIK; CASSIRER; HULL; TOLMAN.]
Adler, Dan L.; and Kounin, Jacob S. 1939 Some Factors Operating at the Moment of Resumption of Interrupted Tasks. Journal of Psychology 7: 255–267.
Atkinson, John W. (1957) 1958 Motivational Determinants of Risk-taking Behavior. Pages 322-339 in John W. Atkinson (editor), Motives in Fantasy, Action, and Society: A Method of Assessment and Study. Princeton, N.J.: Van Nostrand. → First published in Volume 64 of Psychological Review.
Atkinson, John W. 1964 An Introduction to Motivation. Princeton, N.J.: Van Nostrand.
Back, Kurt W. 1951 Influence Through Social Communication. Journal of Abnormal and Social Psychology 46: 9–23.
Barker, Roger G. (1946) 1953 Adjustment to Physical Handicap and Illness. 2d ed. New York: Social Science Research Council.
Bradford, Leland P.; Gibb, Jack R.; and Benne, Kenneth D. (editors) 1964 T-group Theory and Laboratory Method: Innovation in Re-education. New York: Wiley.
Brehm, Jack W.; and Cohen, Arthur R. 1962 Explorations in Cognitive Dissonance. New York: Wiley.
Cartwright, Dorwin; and Harary, Frank 1956 Structural Balance: A Generalization of Heider’s Theory. Psychological Review 63: 277–293.
Chapanis, Natalia P.; and Chapanis, Alphonse 1964 Cognitive Dissonance: Five Years Later. Psychological Bulletin 61: 1–22.
Coch, Lester; and French, John R. P. 1948 Overcoming Resistance to Change. Human Relations 1: 512–532.
Dembo, Tamara 1931 Der Ärger als dynamisches Problem. Psychologische Forschung 15: 1–144.
Deutsch, Morton 1949a A Theory of Cooperation and Competition. Human Relations 2: 129–152.
Deutsch, Morton 1949b An Experimental Study of the Effects of Co-operation and Competition Upon Group Process. Human Relations 2: 199–231.
Deutsch, Morton; Krauss, Robert M.; and Rosenau, Norah 1962 Dissonance or Defensiveness? Journal of Personality 30: 16–28.
Fajans, Sara 1933 Die Bedeutung der Entfernung für die Stärke eines Aufforderungscharakters beim Säugling und Kleinkind. Untersuchungen zur Handlungs- und Affektspsychologie, No. 12. Psychologische Forschung 17: 215–267.
Festinger, Leon 1950 Informal Social Communication. Psychological Review 57: 271–282.
Festinger, Leon 1954 A Theory of Social Comparison Processes. Human Relations 7: 117–140.
Festinger, Leon 1957 A Theory of Cognitive Dissonance. Evanston, Ill.: Row, Peterson.
Festinger, Leon 1961 The Psychological Effects of Insufficient Reward. American Psychologist 16: 1–11.
Festinger, Leon 1964 Conflict, Decision, and Dissonance. Stanford Studies in Psychology, No. 3. Stanford Univ. Press.
Festinger, Leon; Schachter, Stanley; and Back, Kurt (1950) 1963 Social Pressures in Informal Groups: A Study of Human Factors in Housing. Stanford Univ. Press.
Festinger, Leon; and Thibaut, John 1951 Interpersonal Communication in Small Groups. Journal of Abnormal and Social Psychology 46: 92–99.
French, John R. P. 1944 Organized and Unorganized Groups Under Fear and Frustration. Pages 229-308 in Authority and Frustration, by Kurt Lewin et al. Univ. of Iowa Press.
Heider, Fritz 1946 Attitudes and Cognitive Organization. Journal of Psychology 21: 107–112.
Heider, Fritz 1958 The Psychology of Interpersonal Relations. New York: Wiley.
Henle, Mary 1942 An Experimental Investigation of Dynamic and Structural Determinants of Substitution. Contributions to Psychological Theory, Vol. 2, No. 3, Serial No. 7. Durham, N.C.: Duke Univ. Press.
Karsten, Anitra 1928 Psychische Sättigung. Psychologische Forschung 10: 142–254.
Kelley, Harold H. 1951 Communication in Experimentally Created Hierarchies. Human Relations 4: 39–56.
Kounin, Jacob S. 1941 Experimental Studies of Rigidity. Parts 1–2. Character and Personality 9: 251–282. → Part 1: The Measurement of Rigidity in Normal and Feeble-minded Persons. Part 2: The Explanatory Power of the Concept of Rigidity as Applied to Feeble-mindedness.
Levy, S. 1953 Experimental Study of Group Norms: The Effects of Group Cohesiveness Upon Social Conformity. Ph.D. dissertation, New York Univ.
Lewin, Kurt (1935) 1948 Psycho-Sociological Problems of a Minority Group. Pages 145-158 in Kurt Lewin, Resolving Social Conflicts: Selected Papers on Group Dynamics. New York: Harper. → First published in Volume 3 of Character and Personality.
Lewin, Kurt (1935-1946) 1948 Resolving Social Conflicts: Selected Papers on Group Dynamics. New York: Harper.
Lewin, Kurt (1939-1947) 1963 Field Theory in Social Science: Selected Theoretical Papers. Edited by Dorwin Cartwright. London: Tavistock.
Lewin, Kurt (1943a) 1963 Defining the “Field at a Given Time.” Pages 43-59 in Kurt Lewin, Field Theory in Social Science: Selected Theoretical Papers. London: Tavistock.
Lewin, Kurt (1943b) 1948 The Special Case of Germany. Pages 43-55 in Kurt Lewin, Resolving Social Conflicts: Selected Papers on Group Dynamics. New York: Harper. → First published in Volume 7 of Public Opinion Quarterly.
Lewin, Kurt (1946) 1948 Action Research and Minority Problems. Pages 201-218 in Kurt Lewin, Resolving Social Conflicts: Selected Papers on Group Dynamics. New York: Harper. → First published in Volume 2 of the Journal of Social Issues.
Lewin, Kurt (1947a) 1963 Frontiers in Group Dynamics. Pages 188-237 in Kurt Lewin, Field Theory in Social Science: Selected Theoretical Papers. Edited by Dorwin Cartwright. London: Tavistock. → First published in Volume 1 of Human Relations.
Lewin, Kurt (1947b) 1958 Group Decision and Social Change. Pages 197-211 in Society for the Psychological Study of Social Issues, Readings in Social Psychology. 3d ed. New York: Holt.
Lewin, Kurt; and Grabbe, Paul (1945) 1948 Conduct, Knowledge, and Acceptance of New Values. Pages 56-68 in Kurt Lewin, Resolving Social Conflicts: Selected Papers on Group Dynamics. New York: Harper. → First published in Volume 1 of the Journal of Social Issues.
Lewin, Kurt et al. 1944 Level of Aspiration. Volume 1, pages 333-378 in J. McV. Hunt (editor), Personality and the Behavior Disorders. New York: Ronald Press.
Lewis, Helen B.; and Franklin, Muriel 1944 An Experimental Study of the Role of the Ego in Work. Part 2: The Significance of Task-orientation in Work. Journal of Experimental Psychology 34: 195–215.
Lippitt, Ronald; and White, Ralph K. 1943 The “Social Climate” of Children’s Groups. Pages 485-508 in Roger G. Barker, Jacob S. Kounin, and Herbert F. Wright (editors), Child Behavior and Development. New York: McGraw-Hill.
Lissner, Kate 1933 Die Entspannung von Bedürfnissen durch Ersatzhandlungen. Untersuchungen zur Handlungs- und Affektspsychologie, No. 18. Psychologische Forschung 18: 218–250.
McClelland, David C. et al. 1953 The Achievement Motive. New York: Appleton.
Mahler, Wera 1933 Ersatzhandlungen verschiedenen Realitätsgrades. Untersuchungen zur Handlungs- und Affektspsychologie, No. 15. Psychologische Forschung 18: 27–89.
Mandler, George; and Sarason, Seymour B. 1952 A Study of Anxiety and Learning. Journal of Abnormal and Social Psychology 47: 166–173.
Miller, Neal E. 1944 Experimental Studies of Conflict. Volume 1, pages 431-465 in J. McV. Hunt (editor), Personality and the Behavior Disorders. New York: Ronald Press.
Newcomb, Theodore M. 1953 An Approach to the Study of Communicative Acts. Psychological Review 60: 393–404.
Newcomb, Theodore M. 1961 The Acquaintance Process. New York: Holt.
Rosenberg, Milton J. 1965 When Dissonance Fails: On Eliminating Evaluation Apprehension From Attitude Measurement. Journal of Personality and Social Psychology 1: 28–42.
Rosenberg, Milton J.; and Abelson, R. P. 1960 An Analysis of Cognitive Balancing. Volume 3, pages 112-163 in Attitude Organization and Change, by Milton J. Rosenberg et al. New Haven: Yale Univ. Press.
Schachter, Stanley 1951 Deviation, Rejection and Communication. Journal of Abnormal and Social Psychology 46: 190–207.
Schachter, Stanley 1959 The Psychology of Affiliation: Experimental Studies of the Sources of Gregariousness. Stanford Studies in Psychology, No. 1. Stanford Univ. Press.
Sliosberg, Sarah 1934 Zur Dynamik des Ersatzes in Spiel- und Ernstsituationen. Psychologische Forschung 19: 122–181.
Thibaut, John 1950 An Experimental Study of the Cohesiveness of Underprivileged Groups. Human Relations 3: 251–278.
Thibaut, John; and Coules, John 1952 The Role of Communication in the Reduction of Interpersonal Hostility. Journal of Abnormal and Social Psychology 47: 770–777.
Zeigarnik, Bluma 1927 Das Behalten erledigter und unerledigter Handlungen. Untersuchungen zur Handlungs- und Affektspsychologie, No. 3. Psychologische Forschung 9: 1–85.
"Field Theory." International Encyclopedia of the Social Sciences. Encyclopedia.com. (April 23, 2017). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/field-theory
In physics, the field concept describes the distribution and propagation of effects such as magnetism and gravity through space. Field theories have helped implement the program of unifying the "forces" of nature.
Forces Propagating in Space
The discovery of a connection between electricity and magnetism is usually attributed to Hans Christian Ørsted (1777–1851), who in the winter of 1819 found that a wire carrying a current deflects a magnet. Subsequent experiments determined the dependence of the effect on the relative distance and orientation between the wire and the magnet. Ørsted had pursued his investigations because of his commitment to Naturphilosophie and his belief that "the same forces manifest themselves in magnetism as in electricity" and that the fundamental forces of nature were polar. Ørsted's discovery motivated numerous further investigations, by François Arago (1786–1853), Jean-Baptiste Biot (1774–1862), and Félix Savart (1791–1841), among others, and particularly by André-Marie Ampère (1775–1836), who formulated the force law describing the interaction between two current-carrying wires. Ampère's guiding assumption was that all electrodynamic phenomena could be understood in terms of the interactions among electric charges and the currents they produce when in motion, a magnet being composed of an aggregate of electric currents.
Michael Faraday (1791–1867), prompted by analysis of Ørsted's and Ampère's investigations and of their theoretical assumptions, carried out a series of perceptive experiments. In 1821 Faraday corroborated that the force on a magnet near a current-carrying wire did not act along the line between the centers of two bodies. Following Sir Isaac Newton's (1642–1727) law of action and reaction, Faraday expected that for every effect of electricity on magnetism there should correspond an effect of magnetism on electricity. Displeased with theories of instantaneous action-at-a-distance, he sought the causes of electric and magnetic effects not only within conductors and magnets, but in the medium around them. He assumed that such effects would take time to propagate through space as "lines of force" that could interact with matter. He came to believe in the reality of these lines of force. In 1831 he found that only a changing current in a wire will induce a current in a nearby second wire. He came to believe that the phenomenon of the induction of a current in a wire near another that carried a time-varying current was due to its "cutting" lines of force. He also discovered that as light passes through glass near a magnet, the polarization of light rotates. Having found such connections among electricity, magnetism, and light, Faraday continued to investigate the properties of the field around ponderable bodies. His conceptualization of lines of force and of fields continued to evolve from the early 1830s through the late 1840s. Constant in this evolution was the belief that the forces between two or more electrically charged bodies were mediated by some influence—the field—that was created by each body separately, propagated in space and acted upon the other charged bodies. It is difficult to summarize Faraday's notions because contemporary language uses some of the same words as he used but with different meanings. 
And since Faraday did not use mathematics to describe his theoretical models, we cannot rely on that technical language to clarify his work, as we can for later researchers. What is clear is that Faraday's notion of a field was entwined with his visualization of it in terms of lines of force.
In the 1840s, William Thomson (1824–1907) began to mathematically analyze Faraday's findings in terms of the deformations of a hypothetical material substance, an "ether." Drawing analogies to hydrodynamics and heat conduction, he applied the Laplace/Poisson equation to electrostatics. He showed how to represent work as spread throughout space, and described the ponderomotive force as the tendency of the field to distribute work. He represented magnetic lines of force by vortices and sought a vortex theory of ether and matter.
James Clerk Maxwell (1831–1879) extensively developed this line of research. Following Faraday, Maxwell showed that the lines of electric current and the magnetic lines were linked in a "mutual embrace." He formulated a theory with differential equations that conveyed this reciprocal embrace, first for steady field lines and, in 1863, for fields varying in time. The latter resulted in transverse waves in the medium, which Maxwell identified as propagating light waves. Like Thomson, Maxwell sought a mechanical account of the ether. He devised a model consisting of cellular vortices and idle wheels that transmit the motion among cells and represent electricity.
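In modern vector notation, which postdates Maxwell's own component formulation, the field equations he arrived at read:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
```

In empty space these combine into wave equations whose propagation speed, \(c = 1/\sqrt{\mu_0 \varepsilon_0}\), matched the measured speed of light, which is what led Maxwell to identify the transverse waves of his theory with light.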
In Maxwell's theory, the field, which stored and conveyed energy, was fundamental and its displacements constituted charges and currents. Maxwell's theory showed a close causal connection between the separately existing electric and magnetic fields. Heinrich Rudolph Hertz (1857–1894) experimentally demonstrated the existence of invisible electromagnetic waves. Meanwhile, theorists such as Hendrik Antoon Lorentz (1853–1928) interpreted the source terms in Maxwell's equations as densities of charged particles, called electrons. Lorentz developed a theory in which ether and electrons were fundamental entities. He showed that even inside ponderable bodies, electric and magnetic effects are not merely states of matter, but of the fields within.
Fields and Subatomic Particles
In 1905, Albert Einstein (1879–1955) disposed of the concept of the mechanical ether. Electromagnetic fields propagated in vacuo with the speed of light in all inertial frames. His special theory of relativity showed that the electric and magnetic fields could be represented by one (tensor) field, such that the effects that appear in a reference system as arising from a magnetic field appear in another system moving relative to the first as a combined electric field and magnetic field, and vice versa. The theory also engendered the conception of space and time as a four-dimensional continuum. To account for gravity as a field effect, Einstein formulated the general theory of relativity in 1915. Using tensor calculus and the non-Euclidean geometry of Bernhard Riemann (1826–1866), Einstein described gravitational fields as distortions of the space-time continuum. Meanwhile, following Maxwell, some physicists attempted to construe material particles as structures of fields, places where a field is concentrated. Einstein was among them, yet in 1905 he had proposed that light is composed of particles, "photons."
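In the four-dimensional notation introduced later by Hermann Minkowski, the single tensor field mentioned above is the antisymmetric field-strength tensor, whose six independent components are the electric and magnetic fields:

```latex
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \qquad
F_{0i} = E_i / c, \qquad F_{ij} = -\epsilon_{ijk} B_k .
```

Under a Lorentz transformation the components mix: for a boost with velocity \(v\) along \(x\), for instance, \(E'_y = \gamma\,(E_y - v B_z)\), so a purely magnetic field in one frame appears as a combined electric and magnetic field in another, as the passage describes.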
In the 1920s, Werner Heisenberg (1901–1976), Erwin Schrödinger (1887–1961), Max Born (1882–1970), and others formulated quantum mechanics. Its instrumental successes suggested the possibility of describing all phenomena in terms of "elementary particles," namely electrons, protons, and photons. The components of atoms were treated as objects with constant characteristics and whose lifetimes could be considered infinite. Protons and electrons were specified by their mass, spin, and by their electromagnetic properties such as charge and magnetic moment. Particles of any one kind were assumed to be indistinguishable, obeying characteristic statistics.
Quantum mechanics originally described nonrelativistic systems with a finite number of degrees of freedom. Attempts to extend the formalism to include interactions of charged particles with the electromagnetic field brought difficulties connected with the quantum representation of fields—that is, systems with an infinite number of degrees of freedom. In 1927, Paul Adrien Maurice Dirac (1902–1984) gave an account of the interaction, describing the electromagnetic field as an assembly of photons. For Dirac, particles were the fundamental substance. In contradistinction, Pascual Jordan (1902–1980) argued that fields were fundamental. Jordan described the electromagnetic field by operators that obeyed Maxwell's equations and satisfied certain commutation relations. Equivalently, he could exhibit the free electromagnetic field as a superposition of harmonic oscillators, whose dynamical variables satisfied quantum commutation rules. These commutation rules implied that in any small volume of space there would be fluctuations of the electric and magnetic fields even for the vacuum state, that is even for the state in which there were no photons present, and that the root mean square value of such fluctuations diverged as the volume element probed became infinitesimally small. Jordan advocated a unitary view of nature in which both matter and radiation were described by wave fields, with particles appearing as excitations of the fields.
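A minimal sketch of Jordan's oscillator decomposition, in modern notation: each mode \(\mathbf{k}\) of the free field behaves as a harmonic oscillator whose ladder operators obey the quantum commutation rule, and the field operator is a superposition over modes,

```latex
[a_{\mathbf{k}},\, a^{\dagger}_{\mathbf{k}'}] = \delta_{\mathbf{k}\mathbf{k}'}, \qquad
\hat{E}(\mathbf{x}) \;\propto\; \sum_{\mathbf{k}} \sqrt{\hbar\,\omega_{\mathbf{k}}}
\left( a_{\mathbf{k}}\, e^{i\mathbf{k}\cdot\mathbf{x}} + a^{\dagger}_{\mathbf{k}}\, e^{-i\mathbf{k}\cdot\mathbf{x}} \right).
```

It follows that \(\langle 0|\hat{E}|0\rangle = 0\) while \(\langle 0|\hat{E}^2|0\rangle \neq 0\): these are the vacuum fluctuations described above, and their mean-square value grows without bound as the volume probed becomes infinitesimally small.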
The creation and annihilation of particles—first encountered in the description of the emission and absorption of photons by charged particles—was a novel feature of quantum field theory (QFT). Dirac's "hole theory," the relativistic quantum theory of electrons and positrons, allowed the possibility of the creation and annihilation of matter. Dirac had recognized that the (one-particle) equation he had devised in 1928 to describe relativistic spin 1/2 particles, besides possessing solutions of positive energy, also admitted negative energy solutions. Unable to avoid transitions to negative energy states, Dirac eventually postulated in 1931 that the vacuum be the state in which all the negative energy states were filled. The vacuum state corresponded to the lowest energy state of the theory, and the theory now dealt with an infinite number of particles. Dirac noted that a "hole," an unoccupied negative energy state in the filled sea, would correspond to "a new kind of particle, unknown to experimental physics, having the same mass and opposite charge to an electron" (p. 62). Physicists then found evidence that positrons exist.
Beta-decay was important in the field theoretic developments of the 1930s. The process wherein a radioactive nucleus emits an electron (a β-ray) had been studied extensively. In 1933, Enrico Fermi (1901–1954) indicated that the simplest model of a theory of β-decay assumes that electrons do not exist in nuclei before β-emission occurs, but acquire existence when emitted, in the same manner as photons emitted from an atom during an electronic transition.
The discovery by James Chadwick (1891–1974) in 1932 of the neutron, a neutral particle of roughly the same mass as the proton, suggested that atomic nuclei are composed of protons and neutrons. The neutron facilitated the application of quantum mechanics to elucidate the structure of the nucleus. Heisenberg was the first to formulate a model of nuclear structure based on the interactions between the nucleons composing the nucleus. Nucleon was the generic name for the proton and the neutron, which aside from their differing electric charges were assumed to be identical in their nuclear interactions. Nuclear forces had to be of very short range, but strong. In 1935, Hideki Yukawa (1907–1981) proposed a field theoretic model of nuclear forces. The exchange of a meson mediated the force between neutrons and protons. In quantum electrodynamics (QED), the electromagnetic force between charged particles was conceptualized as the exchange of "virtual" photons. The massless photons implied that the range of electromagnetic forces is infinite. In Yukawa's theory, the exchanged quanta are massive. The association of interactions with exchanges of quanta is a feature of all quantum field theories.
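The link between the mass of the exchanged quantum and the range of the force can be made explicit. In the static limit, Yukawa's theory gives a potential that falls off exponentially, with a range set by the Compton wavelength of the meson:

```latex
V(r) = -\frac{g^2}{4\pi}\, \frac{e^{-r/R}}{r}, \qquad R = \frac{\hbar}{m c}.
```

As \(m \to 0\) (the photon), \(R \to \infty\) and the infinite-range Coulomb form is recovered; conversely, a nuclear range of about \(10^{-15}\) m implied a meson mass of the order of 200 electron masses.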
QED, Fermi's theory of β-decay, and Yukawa's theory of nuclear forces established the model upon which subsequent developments were based. It postulated impermanent particles to account for interactions, and assumed that relativistic QFT was the proper framework for representing processes at ever-smaller distances. Yet relativistic QFTs are beset by divergence difficulties manifested in perturbative calculations beyond the lowest order. Higher orders yield infinite results. These difficulties stemmed from a description in terms of local fields, a field defined at a sharp point in space-time, and the assumption that the interaction between fields is local (that is, occurs at localized points in space-time). Local interaction terms implied that photons will couple with (virtual) electron-positron pairs of arbitrarily high momenta, and electrons and positrons will couple with (virtual) photons of arbitrarily high momenta, all giving rise to divergences. Proposals to overcome these problems failed. Heisenberg proposed a fundamental unit of length to delineate the domain where the concept of fields and local interactions would apply. His S-matrix theory, developed in the early 1940s, viewed all experiments as scattering experiments. The system is prepared in a definite state, it evolves, and its final configuration is observed afterwards. The S-matrix is the operator that relates initial and final states. It facilitates computation of scattering cross-sections and other observable quantities. The success of nonrelativistic quantum mechanics in the 1920s had been predicated on the demand that only observable quantities enter in the formulation of the theory. Heisenberg reiterated the demand that only experimentally ascertainable quantities enter quantum field theoretical accounts. Since local field operators were not measurable, fundamental theories should find new modes of representation, such as the S-matrix.
During the 1930s, deviations from the predictions of the Dirac equation for the level structure of the hydrogen atom were observed experimentally. These deviations were measured accurately in molecular beam experiments by Willis Eugene Lamb, Jr. (b. 1913), Isidor Isaac Rabi (1898–1988), and their coworkers, and were reported in 1947. Hans Albrecht Bethe (b. 1906) thereafter showed that this deviation from the Dirac equation, the Lamb shift, was quantum electrodynamical in origin, and that it could be computed using an approach proposed by Hendrik Kramers (1894–1952) using the technique that subsequently was called "mass renormalization." Kramers's insight consisted in recognizing that the interaction between a charged particle and the electromagnetic field alters its inertial mass. The experimentally observed mass is to be identified with the sum of the charged particle's mechanical mass (the one that originally appears as a parameter in the Lagrangian or Hamiltonian formulation of the theory) and the inertial mass that arises from its interaction with the electromagnetic field.
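Kramers's insight amounts to a one-line bookkeeping identity: the experimentally observed mass is the sum of the mechanical (bare) mass and the electromagnetic self-energy contribution,

```latex
m_{\text{obs}} = m_{\text{mech}} + \delta m_{\text{em}} .
```

Since only \(m_{\text{obs}}\) is measurable, the divergent \(\delta m_{\text{em}}\) of QED can be absorbed into the unobservable \(m_{\text{mech}}\); this is the mass renormalization used in computing the Lamb shift.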
Julian Schwinger (1918–1994) and Richard P. Feynman (1918–1988) showed that all the divergences in the low orders of perturbation theory could be eliminated by re-expressing the mass and charge parameters that appear in the original Lagrangian, or in equations of motions in which QED is formulated, in terms of the actually observed values of the mass and the charge of an electron—that is, by effecting "a mass and a charge renormalization." Feynman devised a technique for visualizing in diagrams the perturbative content of a QFT, such that for a given physical process the contribution of each diagram could be expressed readily. These diagrams furnished what Feynman called the "machinery" of the particular processes: the mechanism that explains why certain processes take place in particular systems, by the exchange of quanta. The renormalized QED accounted for the Lamb shift, the anomalous magnetic moment of the electron and the muon, the radiative corrections to the scattering of photons by electrons, pair production, and bremsstrahlung.
In 1948, Freeman Dyson (b. 1923) showed that such renormalizations sufficed to absorb all the divergences of the scattering matrix in QED to all orders of perturbation theory. Furthermore, Dyson demonstrated that only for certain kinds of quantum field theories is it possible to absorb all the infinities by a redefinition of a finite number of parameters. He called such theories renormalizable. These results suggested that local QFT was the framework best suited for unifying quantum theory and special relativity. Yet experiments with cosmic rays during the 1940s and 1950s detected new "strange" particles. It became clear that meson theories were woefully inadequate to account for all properties of the new hadrons being discovered. The fast pace of new experimental findings in particle accelerators quelled hopes for a prompt and systematic transition from QED to formulating a dynamics for the strong interaction.
For some theorists, the failure of QFT and the superabundance of experimental results seemed liberating. It led to generic explorations where only general principles such as causality, cluster decomposition (the requirement that widely separated experiments have independent results), conservation of probability (unitarity), and relativistic invariance were invoked, without specific assumptions about interactions. The American physicist Geoffrey Chew rejected QFT and attempted to formulate a theory using only observables, embodied in the S-matrix. Physical consequences were to be extracted without recourse to dynamical field equations, by making use of general properties of the S-matrix and its dependence on the initial and final energies and momenta involved.
By shunning dynamical assumptions and instead using symmetry principles (group theoretical methods) and kinematical principles, physicists were able to clarify the phenomenology of hadrons. Symmetry became a central concept of modern particle physics. A symmetry is realized in a "normal" way when the vacuum state of the theory is invariant under the symmetry that leaves the description of the dynamics invariant. In the early 1960s, it was noted that in systems with infinite degrees of freedom, symmetries could be realized differently. It was possible to have the Lagrangian invariant under some symmetry, yet not have this symmetry reflected in the vacuum. Such symmetries are known as spontaneously broken symmetries (SBS). If the SBS is global, there will be massless spin-zero bosons in the theory. If the broken symmetry is local (a gauge symmetry), these massless bosons disappear, but the gauge bosons associated with the broken symmetries acquire mass. This is the Higgs mechanism.
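A conventional textbook illustration of spontaneous symmetry breaking (not part of the original passage) is a complex scalar field with a "Mexican hat" potential:

```latex
V(\phi) = -\mu^2\, \phi^{\dagger}\phi + \lambda\, (\phi^{\dagger}\phi)^2, \qquad
|\langle \phi \rangle| = \sqrt{\frac{\mu^2}{2\lambda}} \equiv \frac{v}{\sqrt{2}} \neq 0 .
```

The Lagrangian is invariant under the phase rotation \(\phi \to e^{i\alpha}\phi\), but any particular vacuum state is not. If the symmetry is global, the excitation along the circle of minima is the massless Goldstone boson; if the symmetry is gauged, that excitation is absorbed and the gauge boson acquires a mass proportional to \(v\).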
The Standard Model and Beyond
In 1967 and 1968, the American physicist Steven Weinberg and the Pakistani physicist Abdus Salam independently proposed a gauge theory of the weak interactions that unified the electromagnetic and the weak interactions using the Higgs mechanism. Their model incorporated suggestions advanced by the American theoretical physicist Sheldon Glashow in 1961 on how to formulate a gauge theory in which the weak forces were mediated by gauge bosons. Glashow's theory had been set aside because physicists doubted the consistency of gauge theories with massive gauge bosons, and such theories were not renormalizable. SBS offered the possibility of giving masses to the gauge bosons. The renormalizability of such theories was proved by the Dutch physicist Gerardus 't Hooft in 1972 under the guidance of Martinus Veltman. The Glashow-Weinberg-Salam (GWS) theory rose to prominence. Experiments in 1973 corroborated the existence of the weak neutral currents embodied in this "electroweak" theory. The detection of the W± and Z0 bosons in 1983 gave further confirmation. Gauge theory, the mathematical framework for generating dynamics that incorporate symmetries, now plays a central role in the extension of QFT. Symmetry, gauge theories, and spontaneous symmetry breaking are the three pegs upon which modern particle physics rests.
Particles such as protons and neutrons are now understood as composed of "quarks." Quantum chromodynamics (QCD) describes the strong interactions among six kinds of quarks; evidence for the sixth was confirmed in 1995. Quarks carry electrical charge and also a strong "color" charge, in any of three color states. QCD does not involve leptons because they have no strong interactions. It is a gauge theory of the three color charges, whose eight massless gluons are the gauge bosons mediating the strong force. The GWS theory of the weak interactions is a gauge theory involving two colors; each quark thus carries an additional weak color (or weak charge), and four gauge bosons mediate the weak interactions between quarks. Since the 1980s, successful accounts of high-energy phenomena using QCD have proliferated.
This elegant "standard model" does not accord with the known characteristics of weak interactions nor with the phenomenological properties of quarks. Local gauge invariance requires that the gauge bosons be massless, and therefore that the forces they generate be of long range. But actually, the weak force is of minute range and the masses of the W and Z bosons are large. Nor does the model accommodate quark masses. A Higgs SBS mechanism is commonly invoked to overcome such difficulties. Establishing its reality is an outstanding problem.
The work of the American physicist Kenneth Wilson and Weinberg gave support to a more restrictive view: All extant field theoretic representations of phenomena are only partial descriptions, valid in the energy domain specified by the masses of the particles that are included, and delimited by the masses of the particles that are excluded. QFTs can be viewed as low energy approximations to a more fundamental theory that is not necessarily a field theory. Such reconceptualizations have led to a hierarchical structuring of the submicroscopic realm with the dynamics in each domain described by an effective field theory. Some see it as rectifying the reductionist ideology that gripped physics. Others pursue the possibility of a more global and symmetric unification than provided by the standard model. String theory is the only extant candidate for a consistent quantum theory to incorporate general relativity and yield a finite theory. The finiteness of the theory is the result of the fact that its fundamental entities are not point-like, but string-like, and space-time is not limited to four dimensions. Particles are then conceived as the quantum states corresponding to excitations of the basic stringlike entities.
Some theorists herald the possibility of a "final theory" that will consistently fuse quantum mechanics and general relativity and unify the four known interactions. This hope was given some credence in 1984 when superstring theory emerged as a candidate to unify all the particles and forces, including gravitation. A newer version in 1994 imagined that there is a single "big theory" with many different phases, consisting of the previously known string theories, among other things. Yet very many questions remain, including how to make contact with the experimental data explained by the standard model. Nor is it clear that such a theory—if formulated—would constitute a final theory and that no lower level might exist.
See also Physics; Relativity; Quantum.
Buchwald, Jed Z. From Maxwell to Microphysics: Aspects of Electromagnetic Theory in the Last Quarter of the Nineteenth Century. Chicago: University of Chicago Press, 1985.
Cao, T. Yu. Conceptual Developments of Twentieth Century Field Theories. Cambridge, U.K.: Cambridge University Press, 1997.
Davies, Paul, ed. The New Physics. Cambridge, U.K.: Cambridge University Press, 1989.
Dirac, P. A. M. "Quantised Singularities in the Electromagnetic Field." Proceedings of the Royal Society of London Series A 133 (1931): 60–72.
Fitch, V., and J. Rosner. "Elementary Particle Physics in the Second Half of the Twentieth Century." In Twentieth Century Physics, edited by Laurie M. Brown, Abraham Pais, and Sir Brian Pippard. Philadelphia: American Institute of Physics, 1995.
Hoddeson, Lillian, et al., eds. The Rise of the Standard Model: Particle Physics in the 1960s and 1970s. Cambridge, U.K.: Cambridge University Press, 1997.
Marshak, Robert E. Conceptual Foundations of Modern Particle Physics. Singapore: World Scientific, 1993.
Pais, Abraham. Inward Bound: Of Matter and Forces in the Physical World. Oxford: Oxford University Press, 1986.
Pickering, Andrew. Constructing Quarks: A Sociological History of Particle Physics. Chicago: University of Chicago Press, 1984.
Purrington, Robert D. Physics in the Nineteenth Century. New Brunswick, N.J.: Rutgers University Press, 1997.
Schweber, Silvan S. QED and the Men Who Made It: Dyson, Feynman, Schwinger, and Tomonaga. Princeton, N.J.: Princeton University Press, 1994.
Weinberg, Steven. The Quantum Theory of Fields. 3 vols. Cambridge, U.K.: Cambridge University Press, 1995–2000.
Whittaker, E. T. A History of the Theories of Aether and Electricity. London: Longmans, Green, 1910. Reprint, Los Angeles: Tomash, 1989.
Alberto A. Martínez
Silvan S. Schweber
"Field Theories." New Dictionary of the History of Ideas. . Encyclopedia.com. (April 23, 2017). http://www.encyclopedia.com/history/dictionaries-thesauruses-pictures-and-press-releases/field-theories
Most classical and quantum physical phenomena are fundamentally described and explained in terms of fields, such as the electromagnetic and gravitational fields. These physical entities are not localized objects or particles; they generally vary in time, and they are defined at every point in space. They represent the influence, or force, that an object or particle would experience at each point in space at the time indicated. These fields are represented mathematically by functions of space and time. Field theories are the systematic theoretical mathematical-physical descriptions and elaborations of these fields, including their generation, detection, behavior, and relationships with one another and with other physical entities, such as particles. Generally, field theories are expressed in terms of partial differential equations that describe the relation of the fields to the entities that cause, or source, the fields.
There are also energy and momentum conservation equations that further constrain the fields, as well as the closely related equations of motion, which describe how particles or objects move at every point under the influence of the fields. All of these equations are generally derivable from a single special function, called the Lagrangian, which gives the kinetic energy minus the potential energy of the entire system. The behavior of the system described by the field equations and the equations of motion is given by a stationary point (typically an extremum, a maximum or minimum) of the action, the time integral of the Lagrangian.
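The extremal principle described above is the principle of stationary action: for a field \(\phi\), the field equations follow from requiring the action, the space-time integral of the Lagrangian density, to be unchanged to first order under variations of the field,

```latex
S[\phi] = \int \mathcal{L}\left(\phi,\, \partial_\mu \phi\right) d^4x, \qquad
\delta S = 0 \;\Longrightarrow\;
\partial_\mu \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)}
- \frac{\partial \mathcal{L}}{\partial \phi} = 0 .
```

These Euler-Lagrange equations yield, for suitable choices of \(\mathcal{L}\), Maxwell's equations, the Einstein field equations, and the particle equations of motion mentioned above.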
In physics there are four basic interactions, or forces: electromagnetism, the strong nuclear interaction, the weak nuclear interaction, and gravity. All of these are represented by fields, and their description, generation, behavior, and associated phenomena are treated and explained by field theories. As just indicated, these fields are generated by sources. For example, an electric field has charge as its source, and a magnetic field has a magnetized object or a current as its source. A gravitational field is generated by anything that has mass-energy. As also mentioned, these fields can be time-varying and they can propagate. Time-varying, propagating fields are often referred to as waves, or radiation. Thus, we have electromagnetic radiation—light, X-rays, radio waves—which are really time-varying electromagnetic fields. Similarly, a gravity wave or gravitational radiation is a time-varying gravitational field, or the time-varying, propagating changes of curvature through space. Fields also have particles associated with them, both those that function as sources and those that are quanta of their waves (photons for electromagnetic waves and gravitons for gravitational waves). These bosons (integer-spin particles) are the force carriers of their respective fields between other particles, or distributions of particles, usually those that constitute matter (half-integer-spin particles, like quarks, protons, neutrons, and electrons—the fermions).
Finally, at the highest energies, these four fundamental interaction fields probably undergo unification. That is, they become indistinguishable from one another at very high energies. At a relatively low energy, equivalent to a temperature of 10^15 K (kelvin), the electromagnetic and the weak nuclear interactions become indistinguishable—the electroweak interaction.
Electroweak unification has been securely demonstrated and described; this theoretical achievement is due to Steven Weinberg and Abdus Salam. At a much higher energy (equivalent to 10^27 K) the electroweak interaction and the strong interaction unify in the Grand Unified Theory (GUT) interaction, and this in turn probably unifies with gravity at an energy equivalent to 10^32 K, above which is the realm of quantum gravity and quantum cosmology. As of 2002, there was no theory adequately describing these last two levels of unification, nor were there experiments and observations unequivocally requiring them. However, there has been very promising theoretical and experimental progress on both fronts. If and when total unification of all four fundamental interactions is attained, this will complete the unification program that began with James Clerk Maxwell's brilliant unification of the electric and magnetic fields in 1864.
The early history of field theory
The concept of a field first appeared in hydrodynamics, in the treatment of continuous media such as fluids. Many mathematical physicists of the seventeenth and eighteenth centuries treated fluids or continuous bodies by dividing them up into small volumes or elements, but it was Johann Bernoulli who in 1732 first wrote down the equations of motion for these elements, considering them as point particles. Using this approach, Leonhard Euler fashioned hydrodynamics into a field theory by modeling the motion of a fluid in terms of its velocity at each point, using partial differential equations for the velocity components as functions of time and of the spatial coordinates. In doing this, the molecular structure of the fluid was neglected: the fluid was treated as a continuum, with its key parameters defined at every point. This enabled researchers to describe the transmission of effects through fluids. Somewhat later, the more challenging problem of the propagation of displacements in solids, where elastic forces are prominently involved, was tackled; it was adequately solved by George Stokes in 1845.
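Euler's velocity-field description can be written, in modern notation, as partial differential equations for the velocity \(\mathbf{v}(\mathbf{x}, t)\) of an ideal (inviscid) fluid:

```latex
\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\,\mathbf{v}
= -\frac{1}{\rho}\,\nabla p + \mathbf{g}, \qquad
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \mathbf{v}) = 0 .
```

The first equation is Newton's second law applied to a fluid element; the second expresses conservation of mass. Every quantity appearing here is a field, defined at each point of the fluid at each instant.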
Thus, field theory first emerged to describe the behavior of a continuous medium. There was at that time a different set of important physical phenomena (gravity, and electric and magnetic attraction and repulsion), which seemed to involve action at a distance. In these cases, such as in Isaac Newton's theory of gravity, it was assumed that there was no medium transmitting these interactions and that their effects occurred instantaneously. During the nineteenth century, due to the work of Joseph-Louis Lagrange, Pierre-Simon Laplace, and Siméon-Denis Poisson, the action at a distance in these cases began to be considered somewhat like a field, but without the presence of a fluid or a medium. In the case of gravity, for instance, the force of attraction at any location outside of a massive body can be designated in terms of what a point test mass would experience at each of those coordinate positions. This can be expressed in terms of the gravitational potential V at each position, which satisfies the well-known second-order differential equations of Laplace (empty space) or Poisson (nonempty space). Thus, both the gravitational force and the gravitational potential are fields. As such, they are no longer properties of discernable matter, but of empty space itself.
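The two equations named can be written out explicitly; with \(G\) the gravitational constant and \(\rho\) the local mass density,

```latex
\nabla^2 V = 0 \quad \text{(Laplace: empty space)}, \qquad
\nabla^2 V = 4\pi G \rho \quad \text{(Poisson: within matter)},
```

and the force per unit test mass at each position is \(\mathbf{g} = -\nabla V\), so both the gravitational potential and the gravitational force are fields defined throughout space.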
Despite its representation by a potential field, gravity continued to be considered an action at a distance throughout the nineteenth century, because the gravitational potential could not be associated with any discernible medium. The propagation of gravitational effects was not affected by any material changes in the intervening space; it appeared instantaneous; no mechanical model for the action of a medium could be conceived; nor could any energy be located between the gravitationally interacting bodies. It was only in the early twentieth century, with the advent of Albert Einstein's General Relativity (his theory of gravity), that gravitation became recognized as a field theory in the strict physical sense. Electromagnetism, in contrast, began to be considered a genuine field theory in the nineteenth century, precisely because it clearly fulfilled these same criteria.
Classical field theory
It was Michael Faraday who in the mid-nineteenth century first showed that action at a distance provides an inadequate description of electric and magnetic phenomena. His studies convinced him that electrical and magnetic influences propagated through a medium and not at a distance. The basic idea is that of a "continuous action" of forces filling space, not of a continuous mechanical action. Faraday's diagrams of lines of force originating in and returning to conductors and magnets stimulated Baron Kelvin (William Thomson) and Maxwell to formulate electromagnetic behavior in terms of fields. In comparing gravitational forces with electromagnetic forces, however, Faraday was unable to extend to gravity his arguments for propagation through a medium. Thus, Newtonian gravity continued to be considered by most, from a physical point of view, to be an "action at a distance" theory.
Faraday's key insights concerning electromagnetism were confirmed by Kelvin, who in the 1840s was able to show that the same mathematical formulae could be used to describe fluid and heat flow, electrical and magnetic behavior, and elastic behavior. Kelvin thus established the important analogies among all five classes of phenomena, and showed that representing electric and magnetic phenomena by lines of force was consistent with their inverse-square falloff. Both Kelvin and Maxwell were careful not to draw conclusions about the reality of physical media from these detailed mathematical analogies. However, once Maxwell had formulated his highly successful electromagnetic field equations, which provide a detailed, unified quantitative description of electrical and magnetic phenomena, he and other physicists began to interpret these fields as a form of matter, so much so that matter in the usual sense gradually came to be understood in terms of fields, rather than vice versa. This was especially true once it was clear from Maxwell's theory that the propagation of electromagnetic fields is not instantaneous and that electromagnetic energy, which can be transformed into other forms of energy, is contained in the fields themselves. Maxwell also succeeded in associating momentum with the electromagnetic field; the physicist John Henry Poynting later developed the concept of energy flux and showed that it applies in a concrete way to electromagnetic fields and electromagnetic radiation. These developments all supported the conception of electromagnetic fields as a genuine form of matter, and they presaged the discoveries in Special Relativity that mass and energy are equivalent, and later in relativistic quantum theory that all forms of matter are fundamentally interacting fields. Maxwell's theory was the first fully successful and complete field theory, and it remains the best example of a classical field theory.
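Maxwell's field equations can be sketched, in the modern vector notation later introduced by Oliver Heaviside (not Maxwell's original formulation), as:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_{0}}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_{0}\mathbf{J} + \mu_{0}\varepsilon_{0}\,\frac{\partial \mathbf{E}}{\partial t}.
```

In empty space these imply wave equations for the fields with propagation speed $c = 1/\sqrt{\mu_{0}\varepsilon_{0}}$, showing that propagation is not instantaneous; Poynting's energy flux is the vector $\mathbf{S} = (\mathbf{E} \times \mathbf{B})/\mu_{0}$.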
The influence of Special and General Relativity
Along with Maxwell's electromagnetic theory, Special and General Relativity strongly reinforced the usefulness and strength of the field-theory perspective, and even the realistic physical interpretations given to fields. The formulation and confirmation of Special Relativity were especially influential in this regard. Besides the discovery of mass-energy equivalence, mentioned above, perhaps the most influential event was the 1887 experiment by Albert Michelson and Edward Morley, the most compelling interpretation of which held that "the ether" does not exist and that the velocity of light is therefore constant with respect to any inertial frame (any coordinate system moving at a constant velocity). Thus, there is no absolute standard of rest. Moreover, no medium is needed for electromagnetic fields to propagate: the fields themselves are fundamental and are, in a sense, their own media. Furthermore, since nothing propagates instantaneously, there are no perfectly rigid bodies or incompressible fluids, as envisioned in Newtonian mechanics; these are idealizations that, strictly speaking, are never realized. What is most impressive is that Maxwell's electromagnetic field theory turns out to be completely consistent with Special Relativity and can be explicitly formulated as such (in Lorentz-invariant fashion) in a natural and straightforward way. This confirms the insights that fields are a basic form of matter and that they are integral and indivisible.
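Two results mentioned here can be stated compactly. Mass-energy equivalence is $E = mc^{2}$, and the Lorentz-invariant formulation of Maxwell's theory packages the electric and magnetic fields into the field-strength tensor $F_{\mu\nu}$, in terms of which the field equations read (a standard textbook form, not spelled out in the original):

```latex
\partial_{\mu} F^{\mu\nu} = \mu_{0} J^{\nu}, \qquad
\partial_{[\alpha} F_{\beta\gamma]} = 0.
```

Because these equations keep the same form under Lorentz transformations, no special inertial frame (and hence no ether) is singled out.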
Newton's theory of gravity was not generally looked upon as a field theory in the way electromagnetism was, but rather as an "action at a distance" theory. Einstein changed that. In formulating his theory of gravitation, he conceived space and time fundamentally as fields that obey field equations, connecting space and time with the mass-energy distribution that they "contain." These space and time fields are the components of the metric tensor that makes space-time measurements possible. They are like, and in fact replace, the gravitational potential of Newton, but they are not defined in a pre-existing background space-time. They are the space-time. And this space-time is, in general, not flat but curved, depending on the density and pressure of the mass-energy on the space-time manifold, including all nongravitational (e.g., electromagnetic) fields. As a result, light rays (electromagnetic radiation) and freely moving particles follow the geodesics of curved space-time. Gravity is no longer conceived as a force, strictly speaking, but rather as the curvature of space-time. And light is affected by this curvature, unlike in Newtonian gravitational theory. This is also consistent with the fact that light possesses energy, which is equivalent to mass according to Special Relativity. Through observations of the bending and the red-shifting of light rays in gravitational fields, as well as through other observations (including the evidence for the existence of black holes), General Relativity has been impressively confirmed. General Relativity also predicts the existence of gravitational radiation—the propagation, at the speed of light, of variations in the curvature of space-time. Such radiation has been detected indirectly, and a massive effort is under way to detect these gravitational waves directly.
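The field equations of General Relativity described above can be sketched in standard notation ($g_{\mu\nu}$ is the metric tensor, $T_{\mu\nu}$ the mass-energy distribution; the explicit formulas are added here for illustration):

```latex
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}
  = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
```

while freely moving particles and light rays follow geodesics of the curved space-time, $\dfrac{d^{2}x^{\mu}}{d\lambda^{2}} + \Gamma^{\mu}{}_{\alpha\beta}\,\dfrac{dx^{\alpha}}{d\lambda}\,\dfrac{dx^{\beta}}{d\lambda} = 0$. The left-hand side of the field equations encodes the curvature of space-time; the right-hand side, its mass-energy content.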
Quantum mechanics and quantum field theory
One of the great accomplishments of twentieth-century physics was the development and experimental confirmation of quantum theory. It began with the failures of classical physics to account for the stability of atoms, the photoelectric effect, the Planck blackbody spectrum, wave-particle duality, and the intrinsic uncertainties in certain types of measurements. Essentially, it became clear that physical reality, at its most fundamental level, could not be modeled in a continuous way, but only in terms of discrete quanta of energy, angular momentum, spin, and so on. Furthermore, any measurement of a system automatically affects that system in some way, with the Uncertainty Principle always applying. In any quantum measurement, the outcome is never precisely predictable; the theory gives probabilities that any one of a set of possible outcomes will result from a given measurement. All of these issues were more or less satisfactorily incorporated into quantum mechanics by Erwin Schrödinger, Werner Heisenberg, and others. Paul Dirac properly formulated quantum mechanics within the framework of Special Relativity, yielding relativistic quantum mechanics. In both of these formulations, quantum mechanics is not a field theory, but rather a quantum theory of discrete bodies and individual particles in their interactions with one another.
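The discreteness and intrinsic uncertainty described above are captured by two elementary relations (standard results, added here for illustration): the Planck-Einstein relation for quanta of energy and the Heisenberg uncertainty principle,

```latex
E = h\nu, \qquad \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}.
```

The first says that radiation of frequency $\nu$ comes in discrete packets of energy; the second, that position and momentum cannot both be measured with unlimited precision.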
Relativistic quantum mechanics, however, is plagued by a serious problem: it allows for negative-energy states, which would seem to predict an infinite series of decays. It turns out that this problem can be solved only by moving from the consideration of single particles to indefinitely many particles. This leads automatically to treating quantum fields as fundamental, with particles being localized realizations (modes or quanta) of these fields. The result was the development of the extraordinarily successful quantum field theory. The fundamental structure of physical reality has come to be understood in terms of the interaction of these quantum fields, some of which are bosonic, or force-carrying, and some of which are fermionic, or particle-constituting. As mentioned at the beginning, there is strong evidence that at higher and higher energies or temperatures the four fundamental field interactions (electromagnetism, the strong and weak nuclear interactions, and gravity) unify step by step and become indistinguishable. There are still many unknown details and challenges in constructing a completely adequate unified field theory and in explaining some of the features that physical reality manifests, particularly with respect to the quantum connections between gravity and space-time, as well as between gravity and the other three interactions. But quantum field theory as it is understood in the early twenty-first century provides an impressive and reliable, though provisional and incomplete, description of and guide to how reality at its most fundamental levels is constituted and behaves.
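The negative-energy problem mentioned above can be seen directly in the relativistic energy-momentum relation (a standard illustration, not part of the original text):

```latex
E^{2} = p^{2}c^{2} + m^{2}c^{4}
\quad \Longrightarrow \quad
E = \pm\sqrt{p^{2}c^{2} + m^{2}c^{4}},
```

where the negative branch admits no lowest-energy state for a single particle, seemingly allowing endless decays. Reinterpreting the field, rather than the particle, as fundamental (with antiparticles among its quanta) resolves the difficulty.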
Relevance to the religion-science dialogue
The principal relevance of field theory to the religion-science dialogue is that it gives a reliable, well-tested, and nearly comprehensive account of how reality is put together at its most fundamental levels. It also sheds light, through its applications in cosmology, on how the universe evolved from an extremely hot, homogeneous, simple, and undifferentiated quantum-dominated state to its present cool, lumpy, complex, and highly differentiated state. This strongly constrains theology in speaking about how creation occurred and about how God acts in creating and in sustaining what has been created. The relationships, processes, interactions, and regularities described by field theory—the laws of nature and physics—must be acknowledged to play a key role as channels of God's creative ordering power in reality. The concept of dynamic interacting fields, along with the auxiliary concepts and phenomena connected with them, can also provide analogies for constructive theological programs.
See also Field; Grand Unified Theory; Gravitation; Physics, Quantum; Relativity, General Theory of; Relativity, Special Theory of
Auyang, Sunny Y. How Is Quantum Field Theory Possible? Oxford: Oxford University Press, 1995.
Berkson, William. Fields of Force: The Development of a World View from Faraday to Einstein. New York: Wiley, 1974.
Hesse, Mary B. Forces and Fields: The Concept of Action at a Distance in the History of Physics. London: Thomas Nelson and Sons, 1961.
Landau, L. D., and Lifshits, E. M. Classical Theory of Fields. Oxford and New York: Pergamon, 1971.
Lorrain, Paul, and Corson, Dale R. Electromagnetism: Principles and Applications. San Francisco: W. H. Freeman, 1978.
Peskin, Michael E., and Schroeder, Daniel V. An Introduction to Quantum Field Theory. Reading, Mass.: Addison-Wesley, 1995.
Ramond, Pierre. Field Theory: A Modern Primer, 2nd edition. Reading, Mass.: Addison-Wesley, 1990.
Ryder, Lewis H. Quantum Field Theory, 2nd edition. Cambridge, UK: Cambridge University Press, 1996.
Sachs, Mendel. The Field Concept in Contemporary Science. Springfield, Ill.: Thomas, 1973.
Sen, D. K. Field and/or Particle. London and New York: Academic Press, 1968.
Williams, L. Pearce. The Origins of Field Theory. New York: Random House, 1966.
William R. Stoeger
"Field Theories." Encyclopedia of Science and Religion. Encyclopedia.com. (April 23, 2017). http://www.encyclopedia.com/education/encyclopedias-almanacs-transcripts-and-maps/field-theories