Public Policy Analysis


Public policy analysis is a large, sprawling intellectual enterprise involving numerous academic disciplines, private research organizations, and governmental agencies, each sharing a common concern with the formulation, implementation, or consequences of public policy decisions. There are approximately thirty journals published in the English language alone and nearly twenty professional associations that are devoted more or less exclusively to policy analysis. Departments, centers, and institutes dealing in whole or in part with policy analysis can be found at over forty American universities.

As currently practiced, policy analysis involves contributions from the entire gamut of scientific disciplines. Much present-day public policy analysis is undertaken by scholars from the various applied physical and biological sciences (for example, environmental impact studies, technology assessments, seismic risk analyses, and the like). The focus here, however, is on public policy analysis as it is conducted within the social and behavioral sciences, principally economics, political science, and sociology.

The diversity of research work conducted under the rubric of "public policy analysis," even when restricted to the social science component, is perhaps the distinguishing characteristic of the subject; in the space available here we can do little more than indicate the range of topics and approaches with which policy analysts are concerned. Rogers (1989) has developed a typology of public policy research that is useful for this purpose; the following is adapted from his discussion.


PROBLEM DEFINITION OR NEEDS ASSESSMENT

Public policy usually addresses real or sensed problems, and a great deal of public policy analysis is therefore devoted to defining or clarifying problems and assessing needs. What are the health care needs of a particular neighborhood? What are the housing or nutritional needs of the nation's poverty population? What social services do homeless persons require? It is obvious that the development and formulation of public policy will be enhanced when underlying needs have been adequately described and analyzed. There is a large literature on the theory and practice of problem definition and needs assessment; students seeking additional information will find Johnson and colleagues (1987) invaluable.

VALUE EXPLORATION OR CLARIFICATION

Given a demonstrated need, any number of policies might be developed to address it. Which policies, goals, or outcomes are most desirable? If an area is found to have unmet health needs, is it better to open freestanding clinics or to provide subsidized health insurance? Are the housing needs of the poor best addressed through public housing projects or through housing vouchers that can be used in lieu of rent? Should our policies with respect to the homeless attempt to ameliorate the conditions of a homeless existence, or prevent people from becoming homeless in the first place?

Assessing the relative desirability of policy options is only rarely an empirical matter; such decisions are more often ethical or ideological. MacRae (1985) stresses the unavoidable role of values in the process of policy analysis and the ensuing conflicts for the policy analyst. He identifies four principal "end values" widely shared throughout American society and against which policy decisions can be compared: economic benefit, subjective well-being, equity, and social integration. Sadly, policies that maximize equity may not maximize net economic benefit; those that enhance social integration may destroy subjective well-being. Thus, public policy analysis is not an arena for those who wish to pursue "value-neutral" science nor is it one for the morally or ideologically faint of heart.


CONCEPTUAL DEVELOPMENT

Much work in the area of public policy analysis consists of developing conceptual schemes or typologies that help sort out various kinds of policies or analyses of policies (such as the typology we are presently using). Nagel (1984) and Dubnick and Bardes (1983) review numerous conceptual schemes for typifying policies and policy analyses, with useful suggestions for synthesis; the former is an especially good overview of the field as a whole.


POLICY DESCRIPTION

Adequate description of public policy is essential for proper evaluation and understanding, but many public policies prove frustratingly complex, especially as delivered in the field. "Poverty policy" in the United States consists of a vast congeries of federal, state, and local programs, each focused on different aspects of the poverty problem (income, employment, housing, nutrition) or on different segments of the poverty population (women, children, women with children, the disabled, the elderly). The same can obviously be said of housing policy, tax policy, environmental policy, health policy, and on through a very long list. Even a single element of poverty policy such as Temporary Assistance for Needy Families (TANF) has different eligibility requirements, administrative procedures, and payment levels in each of the fifty states. Thus, accurate policy description is by no means a straightforward task. Outstanding examples of policy description, both focused on poverty policy, are Haveman (1977) and Levitan (1985).


POLICY FORMULATION

Social science has a role to play in the formulation of policy as well as its description or evaluation. Most of the issues that policy attempts to address have been the focus of a great deal of basic social science research: poverty, ill health, homelessness, crime, violence, and so on. Although the once-obligatory discussion of "policy implications" of basic research has abated in recent years, few social scientists who work on policy-relevant issues can resist the urge to comment on the possible implications of the results for policy formulation. Much more work of this sort needs to be done, as it is evident that many policies are formulated and enacted in utter disregard for the extant state of knowledge about the topic. Indeed, Peter Rossi has hypothesized that the major reason social programs fail is that they are typically designed by amateurs who are largely innocent of social science theory, concepts, and results. Various job programs, mental health interventions, and crime reduction policies represent obvious cases in point.

METHODOLOGICAL RESEARCH

Unlike much basic disciplinary research in the social and behavioral sciences, whose results are largely inconsequential except to a handful of specialists, the results of policy studies will often influence people's lives and well-being, and the cost of being wrong can run into millions or billions of dollars. Thus issues of internal and external validity, errors of measurement and specification, proper statistical modeling, and the like are more than methodological niceties to the policy analyst; they are worrisome, ever-present, and potentially consequential threats to the accuracy of one's conclusions and to the policy decisions that ensue. A technical error in a journal article can be corrected in a simple retraction; an equivalent error in a policy analysis might result in wrong-headed or counterproductive policies being pursued.

Much of the literature on public policy analysis, and especially on impact evaluation (see below), is therefore mainly methodological in character; indeed, many recent innovations in research procedure have been developed by scholars working on applied, as opposed to basic, problems. There are many texts available on the methodology of public policy analysis. Rossi and colleagues (1998) provide a comprehensive overview; Judd and Kenny (1981) are highly recommended for the more advanced student.


POLICY EXPLANATION

Much public policy analysis undertaken by political scientists focuses on the processes by which policy is made at federal, state, and local levels. Classic examples are Marmor's analysis of the passage of Medicare (1970) and Moynihan's study of the ill-fated Family Assistance Plan proposed early in the Nixon administration but never enacted (1973).

Explanations of how public policy is made are invariably replete with the "dirty linen" of the political process: competing and often warring constituencies, equally legitimate but contradictory objectives and values, vote trading, compromises and deals, political posturing by key actors, intrusions by lobbying, advocacy, and special interest groups, manipulation of public sentiment and understanding—in short, the "blooming, buzzing confusion" of a fractious, pluralistic political system. For those whose understanding of such matters does not extend much beyond the obligatory high school civics lesson in "how a bill becomes a law," the policy explanation literature is a revelation.


POLITICAL INTELLIGENCE OR PUBLIC OPINION

In a democratic society, public opinion is supposed to "count" in the policy formation process. Sometimes it does; often it does not. Policy analysis thus sometimes involves plumbing the depths and sources of support or opposition to various policy initiatives, and in a larger sense, explicating the process by which policy becomes legitimated.

There is no easy answer to the question whether (or under what conditions) public opinion dictates the direction of public policy. It is evident that policy makers are sensitive to public opinion; many presidents, for example, are morbidly fascinated by their standing in the polls (e.g., Sussman 1988). It is equally evident, however, that many policies with strong majority support are never enacted into law. An interesting study of the effects of public opinion on policy formation is Verba and Nie (1975).


EVALUATION RESEARCH

The ultimate analytic question to be asked about any public policy is whether it produced (or will produce) its intended effects (or any effects, whether intended or not). The search for bottom-line effects—impact assessment—is one of two major activities subsumed under the rubric of evaluation research. The other is so-called process evaluation, discussed below under "Implementation Analysis."

There are many formidable barriers to be overcome in deciding whether a policy or program has produced its intended (or any) effects. First, the notion of "intended effects" presupposes clearly defined and articulated program goals, but many policies are enacted without a clear statement of the goals to be achieved. Thus, many texts in evaluation research recommend an assessment of the "evaluability" of the program prior to initiating the evaluation itself. A second barrier is the often-pronounced difference between the program-as-designed and the program-as-delivered. This is the issue of program implementation, discussed below.

The most troublesome methodological issue in evaluation research lies in establishing the ceteris paribus (or "all else equal") condition, or in other words, in estimating what might have happened in the absence of the program to be evaluated. In an era of declining birthrates, any fertility reduction program will appear to be successful; in an era of declining crime rates, any crime reduction program will appear to be successful. How, then, can one differentiate between program effects and things that would have happened anyway owing to exogenous conditions? (Students of logic will see the problem here as the post hoc, ergo propter hoc fallacy.)

Because of this ceteris paribus problem, many evaluations are designed as experiments or quasi-experiments. In the former case, subjects are randomly assigned to various treatment and control conditions, and outcomes are monitored. Randomization in essence "initializes" all the starting conditions to the same values (except for the vagaries of chance). In the recent history of evaluation research, the various negative income tax experiments (see Rossi and Lyall 1976) are the best-known examples of large-scale field experiments of this general sort. Quasi-experiments are any of a number of research designs that do not involve randomization but use other methods to establish the ceteris paribus condition; the definitive statement on quasi-experiments is Cook and Campbell (1979).
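The logic of randomization can be made concrete with a small simulation. In the following Python sketch, every number (the secular trend, the true program effect, the noise) is invented purely for illustration and is not drawn from any actual study. A naive pre/post comparison credits the program with the entire secular trend, while the randomized contrast recovers the true effect.

```python
import random

random.seed(1)

# All figures are hypothetical. Suppose the outcome (say, offenses per
# 1,000 residents) is falling everywhere by about 5 points regardless of
# the program, while the program's true effect is -2.
SECULAR_TREND = -5.0
TRUE_EFFECT = -2.0

def simulate_unit():
    treated = random.random() < 0.5              # coin-flip randomization
    pre = 50.0 + random.gauss(0, 3)              # pre-program level
    post = pre + SECULAR_TREND + random.gauss(0, 3)
    if treated:
        post += TRUE_EFFECT
    return treated, pre, post

data = [simulate_unit() for _ in range(10_000)]
mean = lambda xs: sum(xs) / len(xs)

# Naive pre/post comparison among the treated: the post hoc fallacy.
# It attributes the secular trend to the program.
naive = mean([post - pre for t, pre, post in data if t])

# Randomized contrast: treated vs. control after the program. Because
# assignment was random, "all else" really is equal in expectation.
experimental = (mean([post for t, _, post in data if t])
                - mean([post for t, _, post in data if not t]))

print(f"naive pre/post estimate: {naive:+.2f}")        # ~ -7, badly biased
print(f"randomized estimate:     {experimental:+.2f}") # ~ -2, the true effect
```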

Nowhere is the trade-off between internal and external validity more vexing than in the design of program evaluations. Evaluation designs with high internal validity, such as randomized experiments, are excellent at detecting program effects, but the experimental conditions may not generalize to real-world settings. Thus, one telling critique of the Negative Income Tax (NIT) experiments is that participants knew from the beginning that the program would end in three (or in some cases five) years, so the labor-force response may have been very different from what it would have been had negative income taxation become a permanent element of national income policy. Likewise, as the research setting comes to mimic real-world conditions more closely (that is, as it develops high external validity), the ability to detect real effects often declines.

A final problem in doing evaluation research is that most policies or programs are relatively small interventions intended to address rather large, complex social issues. The poverty rate, to illustrate, is a complex function of the rate of employment, trends in the world economy, prevailing wage rates, the provisions of the social welfare system, and a host of additional macrostructural factors. Any given antipoverty program, in contrast, will be a relatively small-scale intervention focused on one or a few components of the larger problem, often restricted to one or a few segments of the population. Often, the overall effects of the various large-scale, macrostructural factors will completely swamp the program effects—not because the program effects were not present or meritorious but because they are very small relative to exogenous effects.
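This "swamping" problem can also be expressed as arithmetic. The sketch below applies the standard two-sample sample-size approximation; the poverty-rate figures are hypothetical assumptions chosen only to show how rapidly the required sample grows as the program effect shrinks relative to exogenous variation.

```python
from math import ceil

def n_per_arm(effect, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm to detect `effect` against noise
    `sigma` in a two-group comparison (5% two-sided test, 80% power)."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / effect) ** 2)

# Hypothetical: the program moves the outcome by 0.5 points, while
# exogenous, macrostructural factors move it by 5 points (one s.d.).
print(n_per_arm(effect=0.5, sigma=5.0))  # 1568 units per arm
print(n_per_arm(effect=5.0, sigma=5.0))  # 16 when effect matches the noise
```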

The literature on the theory and practice of evaluation research is expansive; students seeking additional information will find themselves well served by Chambers and colleagues (1992), and by Rossi and colleagues (1998).


OUTCOME ANALYSIS

Assuming that a program has been adequately evaluated and an effect documented, one can then analyze that effect (or outcome) to determine whether it was worth the money and effort necessary to produce it. Outcome analysis thus examines the cost effectiveness or cost beneficiality of a given policy, program, or intervention.

Cost-benefit and cost-effectiveness analysis are intrinsically complex, technically demanding subjects. One complication lies in assessing the so-called opportunity costs. A dollar spent in one way is a dollar no longer available to use in some other way. Investing the dollar in any particular intervention thus means that one has lost the "opportunity" to invest that dollar in something that may have been far more beneficial.

A second complication is in the "accounting perspective" one chooses to assess benefits and costs. Consider the Food Stamp program. A recipient receives a benefit (a coupon that can be redeemed for food) at no cost; from the accounting perspective of that recipient, the benefit-cost ratio is thus infinite. The Food Stamp program is administered by the United States Department of Agriculture (USDA). From the USDA perspective, the benefit of the program presumably lies in the contribution it makes to relieving hunger and malnutrition in the population; the cost lies in whatever it takes to administer the program, redeem the coupons once submitted by food outlets, etc. Accounted against the USDA perspective, the benefit-cost ratio will be very different, and it will be different again when accounted against the perspective of society as a whole. The latter accounting, of course, requires asking what it is worth to us as a nation to provide food to those who might otherwise have to go without, clearly a moral question more than an empirical or analytic one.
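A toy calculation makes the accounting-perspective point explicit. Every figure below is invented for illustration and is not an actual Food Stamp statistic.

```python
# Hypothetical per-recipient figures, purely for illustration.
coupon_value = 150.0      # monthly food value delivered, dollars
admin_cost = 20.0         # administrative cost per recipient

# Recipient's ledger: a benefit at zero cost, so the ratio is unbounded.
recipient_ratio = float("inf")

# Agency's ledger: the same food value, counted against the cost of
# funding and administering the transfer.
agency_ratio = coupon_value / (coupon_value + admin_cost)

print(f"recipient benefit-cost ratio: {recipient_ratio}")
print(f"agency benefit-cost ratio:    {agency_ratio:.2f}")
# A societal ledger would require pricing the relief of hunger itself,
# a moral rather than an arithmetic question, as noted in the text.
```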

This last example illustrates another thorny problem in doing cost-benefit analyses, namely, the incommensurability of benefits and costs. The dollar costs of most programs or policies can be reasonably well estimated. (The dollar costs are usually not the only costs. There may also be ethical or political costs that cannot be translated into dollars and cents but that are, nonetheless, real. Let us ignore the nondollar costs, however.) Unfortunately, the benefits of most interventions cannot be readily expressed in dollars; they are expressed, rather, in less tangible (but equally real) terms: lives saved, improvements in the quality of life, reductions of hunger, and the like. If the outcome cannot be converted to a dollar value, then a strict comparison to the dollar costs cannot be made and a true benefit-cost ratio cannot be calculated.

Cost-effectiveness analysis, in contrast, compares the benefits of one program (expressed in any unit) at one cost to the benefits of another program (expressed in the same unit) at a different cost. Thus, a program that spends $10,000 to save one life is more cost effective than another program that spends $20,000 to save one life. Whether either program is cost beneficial, however, cannot be determined unless one is willing to assign a dollar value to a human life.
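The contrast reduces to arithmetic. The sketch below uses the two programs just described; the one-million-dollar value assigned to a life is a deliberately arbitrary assumption, included only to show that no benefit-cost ratio can be computed without some such valuation.

```python
# Cost-effectiveness needs only a common outcome unit (lives saved);
# cost-benefit additionally needs a dollar value for that unit.
cost_per_life = {"program A": 10_000, "program B": 20_000}  # from the text

for name, cost in cost_per_life.items():
    print(f"{name}: ${cost:,} per life saved")
# Program A is more cost effective: the same outcome at half the cost.

VALUE_PER_LIFE = 1_000_000  # hypothetical: an ethical choice, not a datum
for name, cost in cost_per_life.items():
    print(f"{name}: benefit-cost ratio {VALUE_PER_LIFE / cost:.0f}")
# Without some such valuation, no benefit-cost ratio exists at all.
```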

Many texts by economists deal at length with these and related complexities; accessible overviews include Levin (1975) and Yates (1996).

IMPLEMENTATION ANALYSIS

"Much is the slippage between the spoon and the mouth." A program as it is delivered in the field is rarely identical to the program as designed in the policy making process; sometimes, there is only a superficial resemblance. Since slippage between design and implementation might provide one explanation for the failure to achieve significant program effects, implementation analysis is an essential component of all capable policy evaluations.

There are many reasons why programs-as-delivered differ from programs-as-designed: technical impossibility, bureaucratic inertia, unanticipated conditions, exogenous influences. An elegantly designed policy experiment can fail at the point of randomization if program personnel let their own sentiments about "worthy" and "unworthy" clients override the randomizing process. Many educational policy initiatives are subverted because teachers persist in their same old ways despite the program admonition to do things differently. Welfare reform will mean little if caseworkers continue to apply the same standards and procedures as in the past. More generally, the real world impinges in unexpected and often unwanted ways on any policy initiative; failure to anticipate these impingements has caused many a policy experiment to fail.

Loftin and McDowell (1981) provide a classic example of the utility of implementation analysis in their evaluation of the effects of the Detroit mandatory sentencing law. The policy-as-designed required a mandatory two-year "add-on" to the prison sentence of any person convicted of a felony involving a firearm. Contrary to expectation, the rate of firearms crime did not decline after the law was enacted. Implementation analysis provided the reason. Judges, well aware of the overcrowded conditions in the state's prisons, were loath to increase average prison sentences. Yet state law required that two years be added to the sentence. To resolve the dilemma, judges in firearms cases would begin by reducing the base sentence by roughly two years and then apply the mandated two-year add-on, so that the overall sentence remained about the same even as the judges remained in technical compliance with the policy. A more thorough discussion of the implementation problem can be found in Chambers and colleagues (1992, chap. 1).
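The judicial workaround amounts to simple arithmetic, sketched below with a hypothetical five-year intended sentence.

```python
MANDATORY_ADD_ON = 2  # years required by the statute in the example

def sentenced_years(intended_total):
    """Judge trims the base sentence by the add-on, then applies the
    mandated add-on, leaving the total about where it began."""
    reduced_base = intended_total - MANDATORY_ADD_ON
    return reduced_base + MANDATORY_ADD_ON

print(sentenced_years(5))  # 5: technically compliant, substantively unchanged
```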

UTILIZATION

A consistent frustration expressed throughout the literature is that policy analysis seems only rarely to have any impact on actual policy. Utilization is an ongoing problem in the field of evaluation research. A more detailed treatment of the utilization problem can be found in Chambers and colleagues (1992, chapter 1), Shadish and colleagues (1991, chapters 6, 7), and Weiss (1988). For examples of ways in which evaluation can impact practice, see articles by Gueron, Lipsey, and Wholey in New Directions for Evaluation (1997).

Many reasons for nonutilization have been identified. One of the most important is timeliness. Good research takes time, whereas policy decisions are often made quickly, well before the results of the analysis are in. The negative income tax experiments mentioned earlier were stimulated in substantial part by a Nixon administration proposal for a modified negative income tax to replace the then-current welfare system. The shortest of the experiments ran for three years; several ran for five years; none were completed by the time the Nixon proposal was killed mainly on political grounds.

A second factor in the nonutilization of policy studies is that research is seldom unequivocal. Even the best-designed and best-executed policy studies will be accompanied by numerous caveats, conditions, and qualifications that strictly limit the safe policy inferences one may draw from them. Policy makers, of course, prefer simple declarative conclusions; policy research rarely permits such statements.

Finally, even under the most favorable conditions, the scientific results of policy analyses are but one among many inputs into the policy-making process. There are, in addition, normative, economic, political, ethical, pragmatic, and ideological inputs that must be accommodated. In the process of accommodation, the influence of scientific research is often obscured to the point where it can no longer be recognized. It should not be inferred from this that policy analysis is not utilized, only that the research results are but one voice in the cacophony of the policy-making process.

Weiss has written extensively on the utilization problem and ways in which evaluation can be used effectively to change policy. She argues that "in its ideal form, evaluation is conducted for a client who has decisions to make and who looks to the evaluation for answers on which to base his decisions" (1972, p. 6). This is often not the case, however, as evaluation results seldom influence important decisions regarding programs and policies. Weiss's general conclusion regarding utilization is that evaluation results affect public policy by serving as the impetus for public discourse and debate that form social policy, rather than through extensive program reform or termination.


references

Chambers, K., K. R. Wedel, and M. K. Rodwell 1992 Evaluating Social Programs. Boston: Allyn and Bacon.

Cook, Thomas, and Donald Campbell 1979 Quasi-Experimentation. Chicago: Rand McNally.

Dubnick, Melvin, and Barbara Bardes 1983 Thinking about Public Policy: A Problem Solving Approach. New York: John Wiley.

Gueron, Judith M. 1997 "Learning about Welfare Reform: Lessons from State-Based Evaluations." New Directions for Evaluation 76:79–94. (Edited by D. Rog and D. Fournier.)

Haveman, Robert 1977 A Decade of Federal Antipoverty Programs: Achievements, Failures, and Lessons. New York: Academic.

Johnson, D., L. Meiller, L. Miller, and G. Summers 1987 Needs Assessment: Theory and Methods. Ames: Iowa State University Press.

Judd, Charles, and David Kenny 1981 Estimating the Effects of Social Interventions. New York: Cambridge University Press.

Levin, Henry 1975 "Cost-Effectiveness Analysis in Evaluation Research." In M. Guttentag and E. Struening, eds., Handbook of Evaluation Research. Newbury Park, Calif.: Sage.

Levitan, Sar 1985 Programs in Aid of the Poor. Baltimore, Md.: Johns Hopkins University Press.

Lipsey, Mark W. 1997 "What Can You Build with Thousands of Bricks? Musings on the Cumulation of Knowledge in Program Evaluation." New Directions for Evaluation 76:7–24. (Edited by D. Rog and D. Fournier.)

Loftin, Colin, and David McDowell 1981 "One with a Gun Gets You Two: Mandatory Sentencing and Firearms Violence in Detroit." Annals of the American Academy of Political and Social Science 455:150–168.

MacRae, Duncan 1985 Policy Indicators: Links between Social Science and Public Debate. Chapel Hill, N.C.: University of North Carolina Press.

Marmor, Theodore 1970 The Politics of Medicare. New York: Aldine.

Moynihan, Daniel 1973 The Politics of a Guaranteed Annual Income: The Nixon Administration and the Family Assistance Plan. New York: Vintage.

Nagel, Stuart 1984 Contemporary Public Policy Analysis. Tuscaloosa: University of Alabama Press.

Rogers, James 1989 "Social Science Disciplines and Policy Research: The Case of Political Science." Policy Studies Review 9:13–28.

Rossi, Peter, Howard Freeman, and Mark Lipsey 1998 Evaluation: A Systematic Approach, 6th ed. Newbury Park, Calif.: Sage.

Rossi, Peter, and Kathryn Lyall 1976 Reforming Public Welfare. New York: Russell Sage.

Shadish, William R., Jr., T. D. Cook, and L. C. Leviton 1991 Foundations of Program Evaluation. Newbury Park, Calif.: Sage.

Sussman, Barry 1988 What Americans Really Think and Why Our Politicians Pay No Attention. New York: Pantheon.

Verba, Sidney, and Norman Nie 1975 Participation in America: Political Democracy and Social Equality. New York: Harper and Row.

Weiss, Carol H. 1972 Evaluation Research: Methods for Assessing Program Effectiveness. Englewood Cliffs, N.J.: Prentice-Hall.

—— 1988 "Evaluation for Decisions: Is Anybody There? Does Anybody Care?" Evaluation Practice 9:15–28.

Wholey, Joseph S. 1997 "Clarifying Goals, Reporting Results." New Directions for Evaluation 76:95–106. (Edited by D. Rog and D. Fournier.)

Yates, Brian T. 1996 Analyzing Costs, Procedures, Processes, and Outcomes in Human Services. Thousand Oaks, Calif.: Sage.


James D. Wright
