Statistics: Reporting Systems and Methods
In order to better understand, explain, and control crime, one needs accurate counts of its occurrence. Crime statistics represent the counts of criminal behavior and criminals. They are typically uniform data on offenses and offenders and are derived from records of official criminal justice agencies, from other agencies of control, and from unofficial sources such as surveys of victimization or criminal involvement. Particularly in the case of official crime statistics, they may be published annually or periodically in a relatively standard format of data presentation and analysis.
Official crime statistics are generated at different levels of government (municipal, state, and federal) by a variety of criminal justice agencies (police, court, and corrections) and at different stages in the criminal justice process (arrest, prosecution, conviction, imprisonment, and parole). Official statistics are also produced on the violation of laws, codes, and standards of a variety of administrative and regulatory agencies of control, primarily at the federal level. Official crime statistics are based on the records of those agencies that are the official registrars of criminal behavior and criminals.
Unofficial crime statistics are produced independently of the records of official agencies of crime control. The sources of these statistics are the records of private security and investigative agencies and the data collected by social scientists through experiments and observations, as well as through surveys of victimization and of self-reported criminal involvement.
Crime statistics emerged in the early nineteenth century as an adjunct to the administration of justice, the primary purpose being the measurement of the amount of crime, particularly "to know if crime had increased or decreased" in order to inform crime control policy and practice (Sellin and Wolfgang, p. 9). Early researchers pointed out the ultimately more important purpose of measuring the distribution of crime by a variety of social, demographic, and geographic characteristics. Both official and unofficial crime statistics have distinctive problems and sources of error, but a major one they share is the underestimation of the actual amount of crime. However, it is probable that the various measures generate similar distributions of crime, meaning that there is convergence rather than discrepancy in their depictions of the characteristics and correlates of crime. It is also likely that multiple indicators of crime best inform research, theory, policy, and practice.
The major types of official and unofficial crime statistics are discussed here in terms of their history and contemporary sources; their role as measures of crime; methodological and utilization issues and problems; and the general issue of discrepancy or convergence among crime statistics regarding the distribution and correlates of crime.
History of crime statistics
Simultaneously with the emergence of the discipline of statistics in the seventeenth century, the fledgling discipline's luminaries began to call for crime statistics in order to "know the measure of vice and sin in the nation" (Sellin and Wolfgang, p. 7). It was not until the nineteenth century that the measurement of a nation's moral health by means of statistics led to the development of the branch of statistics called "moral statistics." France began systematically collecting national judicial statistics on prosecutions and convictions in 1825. For the first time, comprehensive data on crime were available to the overseers of moral health, as well as to researchers. The French data became the source of the first significant statistical studies of crime, by the Belgian Adolphe Quetelet and the Frenchman André-Michel Guerry, who have been called the founders of the scientific sociological study of crime. Soon afterward, similar analytical and ecological studies of crime were carried out by other Europeans who were influenced directly by, and made frequent references to, the work of Quetelet and Guerry.
In the United States, the earliest crime statistics were state judicial statistics on prosecutions and convictions in court and on prisoners in state institutions. New York began collecting judicial statistics in 1829, and by the turn of the twentieth century twenty-four other states had instituted systems of court data collection. Prison statistics were first gathered in 1834 in Massachusetts, and twenty-three other states had begun the systematic collection of prison data by 1900 (Robinson). The early state data on imprisonment were augmented by the first national enumeration of persons institutionalized in prisons and jails as part of the 1850 census and by subsequent decennial (taken every ten years) population counts thereafter. These early United States Bureau of the Census statistics are relatively complete and informative, including for each prisoner the year and offense of commitment, sex, birthplace, age, race, occupation, and literacy status.
By the end of the nineteenth century, most European countries and a number of states in the United States were systematically collecting judicial and prison statistics, and concomitantly most of the problems relating to these statistics and the measurement of crime in general had been identified. Numerous critics pointed to the fact that judicial and prison statistics were "incomplete" measures of the actual amount and distribution of crime in the community, primarily because of the "dark figure" of undetected, unreported, unacted upon, or unrecorded crime. It has always been clear that not all crimes committed in the community come to the attention of the police, that only a portion of crimes known to the police eventuate in arrest, that not all offenders who have been arrested are prosecuted or convicted, and that only a small fraction of the cases where there is a conviction lead to imprisonment. This underestimation of the volume of crime is not necessarily problematic if, as Quetelet suggested, we "assume that there is a nearly invariable relationship between offenses known and adjudicated and the total unknown sum of offenses committed" (p. 18). In other words, if there is a constant ratio between the actual amount of crime (including the dark figure of unknown offenses) and officially recorded crime, whether recorded by arrest, prosecution, conviction, or imprisonment, then the latter is "representative" of the former and acceptable as a measure of crime. Later research showed this to be a fallacious assumption, but during the nineteenth century and through the first quarter of the twentieth century, scholars and practitioners alike generally operated under this assumption in using and defending judicial statistics as the true measure of crime in a society. 
Arguing that judicial statistics were not representative of the actual number of crimes or criminals, critics proposed that police statistics, particularly of "offenses known to the police," be used in the measurement of crime.
Beginning in 1857, Great Britain was the first nation to systematically collect police data, including offenses known to the police. The significance of this type of data was appreciated by only a few nineteenth-century scholars, among them Georg Mayr, the leading criminal statistician of the time. In 1867, he published the first statistical study using "crimes known to the police" as the primary data source, proposing that crimes known to the police should be the foundation of moral statistical data on crime (Sellin and Wolfgang, p. 14). A few researchers called for utilization of police statistics, but judicial statistics on prosecution and conviction remained the crime statistic of choice in studies of the amount and distribution of crime.
Although the origin, utilization, and defense of judicial statistics were a European enterprise, the emergence of police statistics as a legitimate and eventually favored index of crime can be characterized as an American endeavor. As a result of a growing dissatisfaction with judicial statistics and of the fact—axiomatic in criminology—that "the value of a crime rate for index purposes decreases as the distance from the crime itself in terms of procedure increases" (Sellin, p. 346), the American criminologist August Vollmer in 1920 proposed a national bureau of criminal records that, among other tasks, would compile data on crimes known to the police. In 1927 the International Association of Chiefs of Police made this suggestion an actuality by developing a plan for a national system of police statistics, including offenses known and arrests, collected from local police departments in each state. The Federal Bureau of Investigation became the clearinghouse for these statistics and published in 1931 the first of its now-annual Uniform Crime Reports (UCR). That same year, "offenses known to the police" was accorded even more legitimacy as a valid crime statistic by the Wickersham Commission, which stated that the "best index of the number and nature of offenses committed is police statistics showing offenses known to the police" (U.S. National Commission on Law Observance and Enforcement, p. 25). Ever since that time, "offenses known to the police" has generally been considered the best source of official crime data. However, most of the European countries that had developed national reporting systems of judicial statistics did not include police statistics, particularly crimes known, until the 1950s, and ironically, Great Britain did not acknowledge that crimes known to the police was a valid measure of crime until the mid-1930s, although these data had been collected since the mid-nineteenth century (Sellin and Wolfgang, pp. 18–21).
According to Thorsten Sellin's axiom, "crimes known to the police" has the most value of all official measures of crime because it is closest procedurally to the actual crime committed, probably as close as an official crime statistic will ever be. Even so, as with each and every measure and crime statistic, there are problems regarding even this best of official crime statistics.
Official crime statistics
Contemporary official crime statistics, proliferating with the growth of crime-control bureaucracies and their need to keep records, are more comprehensive and varied than nineteenth-century judicial statistics and early twentieth-century police statistics. The purposes and functions of crime statistics have also changed. Whereas the early judicial statistics were utilized to measure a nation's moral health or the social and spatial distribution of crime, many of the more contemporary official statistics are the byproducts of criminal justice "administrative bookkeeping and accounting." For example, data are collected on such matters as agency manpower, resources, expenditures, and physical facilities, as well as on warrants filed and death-row populations. Consequently, in the United States there are hundreds of national—and thousands of state and local—sources of official statistics, most of which are best characterized as data on the characteristics and procedures of the administration of criminal justice and crime control.
Given the different histories of judicial and police statistics in Europe and the United States, it is not surprising that in the latter there are relatively good police data compiled on a nationwide annual basis and relatively poor judicial data. In fact, the United States is one of only a few developed countries that publishes no national court statistics. Reflecting the unique history of corrections in the United States, where the state prison and local jail are differentiated by jurisdiction, incapacitative functions, type of inmate, and record-keeping practices, there are relatively comprehensive annual national data on the number and characteristics of adults under correctional supervision in state and federal prisons, but no national statistics on jail populations are published. A review of sources of criminal justice statistics concluded, "the absence of regular annual data on jail inmates is, along with the absence of court statistics, the most glaring gap in American criminal justice statistics" (Doleschal, p. 123).
Official crime statistics measure crime and crime control. Clearly, the historically preferred source of official statistics on the extent and nature of crime is police data, particularly crimes known to the police. Other official data gathered at points in the criminal justice system that are procedurally more distant from the crime committed are less valid and less useful measures of crime. However, these data can serve as measures of the number and social characteristics of those who are arrested, prosecuted, convicted, or imprisoned; of the characteristics, administration, and procedures of criminal justice within and between component agencies; and of the socially produced and recognized amount and distribution of crime. Official statistics, except for data on crimes known to the police, are more correctly regarded as measures of crime control because they record a social-control reaction of the criminal justice system to a known offense or offender. For example, a crime known to the police is typically reported by a complainant, and the record of it is evidence of the detection of a crime. If the police clear the offense through arrest, the arrest record is evidence of the sanction of a criminal, a measure of crime control (Black). In other words, a crime known to the police registers acknowledgment of an offense; an arrest, of an offender; a prosecution and conviction, of the offender's guilt. Arrest, prosecution, conviction, and disposition statistics, as well as administrative bookkeeping and accounting data, are best thought of as information on the characteristics, procedures, and processes of crime control. The focus in this entry will be the official statistics of crime, specifically police statistics of offenses known.
A measure of crime: offenses known to the police. From the beginning, the primary objective of the Uniform Crime Reports was made clear in 1929 by the Committee on Uniform Crime Records of the International Association of Chiefs of Police: to show the number and nature of criminal offenses committed. At the time it was argued that among the variety of official data, not only were "offenses known" closest to the crime itself, but a more constant relationship existed between offenses committed and offenses known to the police than between offenses committed and other official data, assumptions shown to be erroneous by victimization surveys many years later. Nevertheless, the UCR have always been the most widely consulted source of statistics on crime in the United States.
The UCR are published annually by the F.B.I. and provide statistics on the amount and distribution of crimes known to the police and arrests, with other, less complete data on clearances by arrest, average value stolen in a variety of property offenses, dispositions of offenders charged, number of law enforcement personnel, and officers assaulted or killed. The statistics are based on data submitted monthly by the fifteen thousand municipal, county, and state law enforcement agencies, which have jurisdiction over approximately 98 percent of the U.S. population.
Data on crimes known and arrests are collected in twenty-nine categories of offenses, using standardized classifications of offenses and reporting procedures. Crimes known and arrests are presented for the eight "index crimes" (murder, rape, robbery, aggravated assault, burglary, larceny, motor-vehicle theft, and arson), and arrests only for the remaining (nonindex) crimes. Arson was added as the eighth index crime in 1979. For each index crime, crimes known to the police are presented by number of crimes reported, rate per hundred thousand population, clearances, nature of offense, geographical distribution (by state, region, size, and degree of urbanization), and number of offenders arrested. Arrests are presented by total estimate for each index and nonindex crime, rate per hundred thousand population, age, sex, race, and decade trend.
The index crimes are intended to represent serious and high-volume offenses. The Total Crime Index is the sum of index crimes, and subtotals are provided on violent and property index crimes. The Total Crime Index, and to a lesser extent the violent- and property-crime indexes, are often used to report national trends in the extent and nature of crime.
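The arithmetic behind these published totals is straightforward. As a minimal sketch with hypothetical counts (none taken from actual UCR tables), the Total Crime Index, its violent and property subtotals, and the conventional rate per hundred thousand population can be computed as:

```python
# Hypothetical annual counts of the eight UCR index crimes for one jurisdiction.
index_counts = {
    "murder": 50, "rape": 300, "robbery": 2_000, "aggravated_assault": 4_000,
    "burglary": 9_000, "larceny": 25_000, "motor_vehicle_theft": 5_000, "arson": 650,
}
violent_offenses = ("murder", "rape", "robbery", "aggravated_assault")

population = 1_000_000  # assumed resident population of the jurisdiction

# The Total Crime Index is a simple sum of the index offenses.
total_crime_index = sum(index_counts.values())
violent_index = sum(index_counts[k] for k in violent_offenses)
property_index = total_crime_index - violent_index

# Rates are conventionally expressed per 100,000 population.
rate_per_100k = total_crime_index / population * 100_000

print(total_crime_index, violent_index, property_index, rate_per_100k)
```

Note that the simple sum weights each offense equally: a larceny counts exactly as much as a murder.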
The statistics presented in the UCR of crimes known to the police are records of "reported" crime, and since reporting and recording procedures and practices are major sources of methodological and utilization problems, they deserve further attention. Crimes known to the police are typically offenses reported to the police by a victim or other person and are recorded as such unless they are "unfounded," or false. For property crimes, one incident is counted as one crime, whereas for violent crimes one victim is counted as one crime. When more than one index crime is committed during a single incident, only the most serious is counted; the exception is arson, which is recorded even when other index crimes are committed during the same incident. For example, stealing three items from a store counts as one larceny, but beating up three people during an altercation counts as three assaults.
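The counting conventions just described can be sketched in code. This is an illustrative simplification using an assumed seriousness ordering, not the F.B.I.'s official scoring procedure:

```python
# Sketch of the UCR counting conventions described above, on hypothetical
# incident records. The seriousness ordering here is assumed for illustration.
SERIOUSNESS = ["murder", "rape", "robbery", "aggravated_assault",
               "burglary", "larceny", "motor_vehicle_theft"]  # most to least serious
VIOLENT = {"murder", "rape", "robbery", "aggravated_assault"}

def count_incident(offenses, victims=1):
    """Return {offense: count} contributed by one incident."""
    counts = {}
    # Arson is recorded even when other index crimes occur in the incident.
    if "arson" in offenses:
        counts["arson"] = 1
    rest = [o for o in offenses if o != "arson"]
    if rest:
        # Hierarchy rule: only the most serious remaining offense is counted.
        most_serious = min(rest, key=SERIOUSNESS.index)
        # One victim = one crime for violent offenses;
        # one incident = one crime for property offenses.
        counts[most_serious] = victims if most_serious in VIOLENT else 1
    return counts

print(count_incident(["larceny"], victims=1))             # one larceny
print(count_incident(["aggravated_assault"], victims=3))  # three assaults
print(count_incident(["burglary", "arson"]))              # both recorded
```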
Larceny and motor-vehicle theft account for the largest proportion of index crimes, for reasons pointed to by critics. Both are the least serious of the index crimes, with larceny of any amount now eligible to be counted, and motor-vehicle theft having one of the highest rates of victim reports to the police because theft must be established to file for insurance claims. On the other hand, many crimes that could be considered more serious because they involve physical injury and bodily harm to a victim are not index crimes. Moreover, completed and attempted crimes are counted equally for more than half of the index crimes. Robbery is counted as a violent crime and accounts for almost one-half of all reported violent index crimes. Most other countries classify robbery as a major larceny, as did the United States before the inception of the UCR. Of course, this difference in classification explains in part the relatively higher rate of violence in the United States. A number of other serious offenses are not counted at all in the reporting program, including a variety of victimless, white-collar, and organizational crimes, as well as political corruption, violations of economic regulations, and the whole array of federal crimes. One might characterize the Total Crime Index as a measure of the extent, nature, and trends of relatively ordinary street crime in the United States.
There are also some problems in the presentation of these data. The Total Crime Index, as a simple sum of index offenses, cannot be sensitive to the differential seriousness of its constituent offense categories or to the relative contributions made by frequency and seriousness of offenses to any index of a community's crime problem (Sellin and Wolfgang). Rudimentary summations of data also mask potentially important variations among offenses and other factors. Comparisons of data from year to year, and even from table to table for the same year, may be hampered because the data are sometimes aggregated differently for different tables. Comparisons are also made difficult by the use of inappropriate bases (or denominators) in the computation of the rates that are presented both for crimes known to the police and for arrests.
The crime rates given in the UCR, as well as in most criminal justice statistical series, are computed as the number of crimes per year per hundred thousand population. This type of "crude" rate can lead to inappropriate inferences from the data. The use of crude rates can conceal variation in subgroups of the population, so it is desirable to standardize rates for subgroups whose experience is known to be different, for example, by sex, race, and age. These subgroup-specific rates also facilitate comparisons between groups: male-female rates, white-black rates, and juvenile-adult rates.
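A hypothetical sketch shows how a crude rate can conceal subgroup variation; all counts and population figures below are invented for illustration:

```python
def rate_per_100k(count, base):
    """Events per 100,000 of whatever population is used as the base."""
    return count / base * 100_000

# Hypothetical arrest counts and subgroup populations for one jurisdiction.
arrests = {"male": 8_000, "female": 2_000}
population = {"male": 490_000, "female": 510_000}

# The crude rate pools everyone into a single base.
crude = rate_per_100k(sum(arrests.values()), sum(population.values()))

# Subgroup-specific rates standardize each count by its own base,
# allowing direct male-female comparison.
by_sex = {g: rate_per_100k(arrests[g], population[g]) for g in arrests}

print(round(crude, 1))                              # a single pooled figure
print({g: round(r, 1) for g, r in by_sex.items()})  # reveals the disparity
```

Here the crude rate of 1,000 per 100,000 hides a male rate roughly four times the female rate.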
At times, inappropriate population bases are used in calculating rates. A crime rate represents the ratio of the number of crimes committed to the number of persons available and able to commit those crimes; this ratio is then standardized by one hundred thousand of whoever is included in the base. For some offenses, the total population is an inappropriate base. For example, a forcible-rape crime rate based on the total population is less appropriate than a rate based on the number of males available and able to commit rape. Similarly, the juvenile crime rate should reflect the number of crimes committed by the population of juveniles.
Crime rates can be interpreted as victimization rates, depending on who (or what) are included in the base. If the total population base can be considered potential criminals, they can also be considered potential victims. For crimes where the victim is a person, the calculation of surrogate victimization rates using crime data is relatively straightforward—the number of available victims becomes the base. Again, in the case of forcible rape, the total population and the male population would be inappropriate bases—here the population of available victims is essentially female. Therefore, the surrogate victimization rate would be calculated as the number of forcible rapes known to the police per hundred thousand female population.
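As a sketch with hypothetical figures, the choice of base makes a substantial numerical difference: the same count of offenses known roughly doubles in rate when the base is restricted to the population actually at risk:

```python
def rate_per_100k(count, base_population):
    """Offenses per 100,000 of whatever population is used as the base."""
    return count / base_population * 100_000

# Hypothetical jurisdiction: 400 forcible rapes known to the police.
offenses_known = 400
total_pop = 1_000_000
female_pop = 510_000  # assumed to be roughly half the total population

crude_rate = rate_per_100k(offenses_known, total_pop)         # total-population base
victim_rate = rate_per_100k(offenses_known, female_pop)       # at-risk (female) base

print(round(crude_rate, 1))   # crude rate
print(round(victim_rate, 1))  # surrogate victimization rate
```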
For property crimes it is more difficult, but not impossible, to calculate surrogate victimization rates. Here the denominator may have to be reconceptualized not as a population base but as a property base. For example, Boggs, and later Cohen and Felson, included "opportunities" for property theft in the bases of their analyses, including, for example, the number of cars available to steal. They reported that the subsequent opportunity-standardized rates were very different from the traditional population-standardized crime (or victimization) rates. Opportunity-standardized rates may sometimes differ even in direction. For example, rather than showing the rate of motor-vehicle-related theft increasing, a corrected rate showed it to be decreasing (Wilkins; Sparks). Ultimately, of course, much more precise victimization rates are available from victimization survey data.
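The reversal of direction can be illustrated with invented figures: when the number of vehicles available to steal grows faster than the population, a population-based theft rate can rise while the opportunity-standardized rate falls:

```python
def rate(count, base, per=100_000):
    """Events per `per` units of whatever base is supplied."""
    return count / base * per

# Hypothetical figures for two years; opportunities (vehicles) double
# while population grows only modestly.
thefts = {1960: 3_000, 1970: 4_500}
population = {1960: 1_000_000, 1970: 1_100_000}
vehicles = {1960: 300_000, 1970: 600_000}

for year in (1960, 1970):
    pop_rate = rate(thefts[year], population[year])  # per 100,000 persons
    opp_rate = rate(thefts[year], vehicles[year])    # per 100,000 vehicles
    print(year, round(pop_rate, 1), round(opp_rate, 1))
```

With these numbers the population-based rate rises (300 to about 409) while the opportunity-standardized rate falls (1,000 to 750), the pattern Wilkins and Sparks describe.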
Finally, the total population base may be used incorrectly if the decennial Bureau of the Census counts of the population are not adjusted for projected population estimates on a yearly basis. For example, if the 1990 census data are used in the base to calculate 1999 crime rates, the rates will be artificially inflated simply as a consequence of using too small a population base. Obviously, 1999 population estimates are more appropriate in the calculation of 1999 crime rates.
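The inflation from a stale base is simple arithmetic; the figures below are hypothetical:

```python
def rate_per_100k(crimes, population):
    return crimes / population * 100_000

crimes_1999 = 5_000            # hypothetical offenses known in 1999
pop_1990_census = 900_000      # stale decennial count
pop_1999_estimate = 1_000_000  # intercensal estimate for 1999

stale_rate = rate_per_100k(crimes_1999, pop_1990_census)      # artificially inflated
current_rate = rate_per_100k(crimes_1999, pop_1999_estimate)  # appropriate base

print(round(stale_rate, 1), round(current_rate, 1))
```

With a population that grew about 11 percent over the decade, the stale base overstates the 1999 rate by the same proportion (roughly 556 versus 500 per 100,000).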
Overall, the data presented in the UCR are "representative." However, the greatest threat to the validity of these statistics is differential reporting to the F.B.I. by local police, within participating departments, and to local authorities by citizen victims or other complainants. There is underreporting both by and to the police.
The reports of participating law enforcement agencies to the F.B.I. can be affected in a variety of ways, leading to variations in the uniformity, accuracy, and completeness of reporting. In spite of efforts to standardize definitions of eligible offenses, police in different states with different statutory and operational definitions of offense categories may classify crimes differently. There may be internal and external pressures on police agencies to demonstrate reductions in community crime or specific targeted offenses, and these pressures may induce police to alter classification, recording, and reporting procedures. Such changes can have a dramatic impact on the amount and rate of crime. A classic example was the reorganization of the Chicago police department. As part of the reorganization, more efficient reporting and recording procedures were introduced, and reported crime increased dramatically from 57,000 offenses in one year to 130,000 in the next (Rae).
To make the problems with the UCR even more complicated, the reported statistics can vary across time and place as policies change, police technology becomes more sophisticated, laws and statutes are modified, commitment to the program wavers, demands for information change, available resources fluctuate, and so on (Hindelang). Unfortunately, even if all the difficulties of validity, reliability, and comparability were eliminated and the statistics became completely and uniformly accurate, there would remain the more serious problem of differential reporting to the police by victims and other citizens. There is evidence of substantial underreporting and nonreporting to the police by victims of crime; in fact, the majority of crimes committed are not reported to the police.
The assumption of the originators of the UCR that there is a constant ratio between crimes known to the police and crimes committed has been shown to be fallacious by studies using unofficial crime statistics. One may never know the actual volume of crimes committed, and therefore the true base remains indeterminate. But more importantly, underreporting or nonreporting to the police varies by offense type, victim and offender characteristics, perceptions of police efficiency, and the like. In short, the dark figure of undetected and unreported crime limits the adequacy of even the historically preferred crimes-known-to-the-police index of the amount and distribution of crime.
NIBRS. During the 1980s, law enforcement agencies sought to improve official reporting methods, particularly the UCR. In 1985 the Bureau of Justice Statistics (BJS) and the F.B.I. released Blueprint for the Future of the Uniform Crime Reporting Program (Reaves, p. 1). This blueprint outlined the next generation of official reporting methods, specifically the National Incident-Based Reporting System (NIBRS). Starting with 1991 data, the UCR program began to move to this more comprehensive reporting system. While the UCR is essentially offender-based, focusing on summary accounts of case and offender characteristics, the NIBRS is incident-based, seeking to link more expansive data elements to the crime, organized into six primary segments: administrative, offense, property, victim, offender, and arrestee (Reaves, p. 2).
The first segment, the administrative, is a linking segment that provides an identifier for each incident. Further, this segment provides the date and time of the original incident as well as any required special clearance for the case. The second segment, the offense, details the nature of the offense(s) reported. Unlike the UCR, which is limited to a relatively small number of F.B.I. index crimes, NIBRS provides details on forty-six offenses. This specificity allows for more accurate reporting of the offense, as well as improved ability to analyze other characteristics of the crime. The offense segment also examines conditions surrounding the event, such as drug or alcohol involvement at the time of the incident, what type of weapon, if any, was used, and whether or not the crime was completed. Segment three deals with the property aspects of the incident, such as the nature of the property loss (e.g., burned, seized, stolen), the type of property involved (e.g., cash, car, jewelry), the value of the property, and whether and when the property was recovered. The fourth segment, victim, lists the characteristics of the individual victimized in the incident. The victim's sex, age, race, ethnicity, and resident status are presented, and in cases where the victim is not an individual, additional codes for business, government, and religious organizations are provided. Each victim is linked to the offender by offender number and by the relationship between the victim and the offender. Segment five focuses on the individual attributes of the offender rather than the victim. The final segment, arrestee, gives information on those arrested for the incident: the date and time of the arrest, how the arrest was accomplished, whether or not the arrestee was armed, and the age, gender, race, ethnicity, and residence status of the arrestee (Reaves).
Data collection for NIBRS follows a process similar to that of the UCR, with local agencies reporting to the state program, which passes the information along to the F.B.I. However, one major change exists for those states desiring to participate in the NIBRS program—to begin regular submission of NIBRS data a state must be certified by the F.B.I. (Roberts, p. 7). The state must meet the following four criteria before becoming certified: First, the state must have an error-reporting rate of no greater than 4 percent for three consecutive months. Second, the data must be statistically reasonable based on trends, volumes, and monthly fluctuations. Third, the state must show the ability to update and respond to changes within the system. And finally, the state NIBRS program must be systematically compatible with the national program.
Calls for service. Another method by which crime may be monitored utilizes emergency calls to the police. Some of the criticisms that have been leveled at arrest records (e.g., that they measure reactions to crimes rather than criminal involvement) and at victimization surveys (e.g., there may be systematic bias in the willingness of victims to report certain crimes to interviewers) may be addressed by this method of crime measurement.
It is suggested that the primary advantage of measuring crime through calls for service (CFS) is that it places the data closer to the actual incident. This removes additional layers in which bias or data loss can occur. For example, in order for a crime to be recorded as an arrest, the police must respond to the call, investigate the crime, and find and arrest a suspect. At any one of these steps the process can be halted and nothing recorded, hiding the occurrence of the crime and contributing to the "dark figure." Similarly, within victimization surveys the respondent may forget events in the past, or the victim may choose not to give accurate information due to the sensitive nature of the crime. Placing the data gathering at the point where the event is actually reported to authorities means that the reports are "virtually unscreened and therefore not susceptible to police biases" (Warner and Pierce, p. 496).
As potentially valuable as these calls for service are, several weaknesses create difficulty in utilizing them as crime measures. The first relates to the coding of a call to the police. Not all calls for police service concern crime or legal issues; many are calls for medical or physical assistance, general emergencies, information requests, or "crank" calls. Clearly, these do not measure crime. At first glance, they appear easy to filter out of the data. But there are ambiguities; for example, how is a medical condition brought on by illegal activity (an individual consumes an illegal substance and has a negative reaction) coded? The call to 911 requests medical assistance but fails to mention the origin of the medical distress. The code for this event appears to be medical but in fact also represents a crime. Another concern surrounding the coding of a call has to do with the accuracy of what the individual is reporting to the operator. Not only can this lead to problems in the general categorization of a call as a crime (or not), but also in the identification of the specific crime being reported. The caller may not understand the nature of the event being witnessed, or may desire a faster police response and so inflate the severity of the offense. Further, even if citizens have a sound understanding of the legal nature of the event, they may be unable to articulate its critical features (e.g., because of high anxiety or limited English proficiency).
A second major issue with CFS as a measure of crime is that many crimes come to the attention of the police by methods other than phone calls to the police department. Officers can observe crimes while patrolling, or information can be presented directly to officers by citizens while on patrol or at station houses (Reiss). This creates a new aspect of the "dark figure": reliance on CFS may undercount crimes because police officers discover criminal activity through other means. Klinger and Bridges found that officers sent out on calls encountered more crime than was reflected in the initial coding of the calls (Klinger and Bridges, p. 719).
A final consideration in using CFS as a crime measure is that calls tend to vary according to the structural characteristics of the neighborhood. Where residents believe police respond slowly to their calls, where residents are more fearful of crime, and where there exists more criminal victimization, there will be systematic variation in the presentation of calls for service (Klinger and Bridges). For example, in high crime rate areas, residents may be sensitized to crime and report all behavior, resulting in many false positives. Thus, differences in CFS across communities may introduce additional sources of error.
Utilization of CFS data is an innovative approach to the measurement of crime. However, until the strengths and weaknesses of these records of initial calls to the police for a variety of services have been scrutinized to the same degree as the more traditional measures, some caution should be exercised with this "new" indicator of crime.
Accessing official reports
One of the most significant changes in using crime data over the past decade involves the means of access to official crime statistics. Traditionally, in order to gather official crime figures one either had to rely on published documents (e.g., the annual UCR) or had to contact the agency (federal, state, or local) that is the repository of the data and request access to the appropriate information. With the expansion of the Internet, many of these same agencies have placed their crime data online.
At the federal level, one example among many is the Bureau of Justice Statistics (www.ojp.usdoj.gov/bjs/). The BJS provides aggregate-level data for the United States, in both absolute figures and rates, for index and non-index crimes. The BJS also provides data by region. Thus information, such as UCR data, can be accessed directly and in a format that allows for easy comparison between regions and over time.
Likewise, various state agencies have started to provide crime and criminal justice information on their web sites. For example, the California Attorney General (www.caag.state.ca.us/) provides information on crimes within the state from 1988 to the present and presents comparative information on other states over the same period of time. The Texas Department of Criminal Justice (www.tdcj.state.tx.us/) not only provides information on crime rates within the state, but also gives statistics on demographic and offense characteristics of prisoners, including those on death row.
Even local agencies, such as city police departments, provide crime statistics. For example, the San Diego Police Department (www.sannet.gov/police) provides a breakdown of total aggregate crime citywide, the respective rates of each crime, and the geographic distribution of crime citywide and for specific areas within the city. The Dallas Police Department (www.ci.dallas.tx.us/dpd/) presents a map of the city divided into "reporting areas" that allows viewers to select an area in which they are specifically interested and to gather the relevant crime data.
The expansion of the Internet and its utilization by law enforcement agencies facilitate access to criminal justice data sources and statistics within minutes rather than days or weeks. The information is typically provided in a spreadsheet format or in simple tables, which makes information on crime much easier to use for law enforcement, researchers, and the public.
Applications of official data
A number of computerized information and data management systems have been created to facilitate both the apprehension of offenders and research on crime. They are typically local or regional efforts, providing law enforcement agencies in particular the capacity to store, manage, and utilize individual-level, comprehensive record information on case characteristics, offenders, victims, tips, crime locations, and so on. The goal, in increasing the quantity and quality of information available to law enforcement agencies, is to enhance the effectiveness and efficiency of the criminal justice system. One example of this type of data system is the Homicide Investigation and Tracking System (HITS).
The HITS program was originally funded under a National Institute of Justice (NIJ) grant that sought to examine homicides and their investigations within Washington State (Keppel and Weis). A computerized information system, which included all homicides in the state from 1980 forward, was created to facilitate the examination of solvability factors in homicide investigations, as well as to provide a comprehensive, ongoing database to be used by investigators to inform and enhance their case investigations. This was accomplished by having law enforcement officers fill out a standardized case form, which contains hundreds of pieces of information on the victim(s), offender(s), locations, time line, motives, cause of death, autopsy results, evidence, and so on. In effect, a digitized version of the most relevant features of a case file was entered into the database.
The HITS program contains information from six major sources and is stored in seven different data files: murder, sexual assault, preliminary information, Department of Corrections, gang-related crimes, Violent Criminal Apprehension Program, and timeline. These data files can then be queried by the investigator for a wide range of information, such as victims' gender, race, and lifestyle; date and cause of death; location of body; and other similar characteristics. This allows investigators to make their search as wide or narrow as the case demands, in order to improve their ability to focus on an offender.
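The query-driven narrowing described above can be sketched in a few lines of Python. The field names and case records below are invented illustrations, not the actual HITS schema or data:

```python
# Hypothetical sketch of query-driven case narrowing, in the spirit of the
# HITS query system. Field names and records are invented examples only.

cases = [
    {"victim_sex": "F", "victim_race": "W", "cause": "strangulation", "county": "King"},
    {"victim_sex": "F", "victim_race": "B", "cause": "gunshot", "county": "Pierce"},
    {"victim_sex": "M", "victim_race": "W", "cause": "strangulation", "county": "King"},
]

def query(records, **criteria):
    """Return records matching every supplied field=value criterion,
    so a search can be made as wide or as narrow as the case demands."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

wide = query(cases, county="King")                    # broad search: 2 matches
narrow = query(cases, county="King", victim_sex="F")  # narrowed search: 1 match
```

Adding criteria monotonically shrinks the result set, which is the investigator's basic tool for focusing on an offender.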
HITS also provides an excellent source of information for researchers. The HITS program provides initial official reports of crimes that are generated in close temporal proximity to the crime. Also, because the HITS program maintains separate databases of information provided by public sources (e.g., licensing, corrections, motor vehicles), researchers can link separate sources of information on the same case, improving the analysis of the crime. The query system also allows researchers to create aggregate data sets, adjusting for a range of variables such as time of year or day, location within state, mobility of offenders, and so on. The HITS system, and others like it, then, not only improve the ability of law enforcement to solve crimes but also the ability of researchers to analyze them.
Unofficial crime statistics
Even though most of the fundamental problems with official crime statistics had been identified before the end of the nineteenth century, including the major problem of the dark figure of unknown crime, it was not until the mid-twentieth century that systematic attempts to unravel some of the mysteries of official statistics were initiated. Turning to data sources outside of the official agencies of criminal justice, unofficial crime statistics were generated in order to explore the dark figure of crime that did not become known to the police, to create measures of crime that were independent of the official registrars of crime and crime control, and to address more general validity and reliability issues in the measurement of crime.
There are two categories of unofficial data sources: social-science and private-agency records. The first of these is much more important and useful. Among the social-science sources, there are two major, significant measures, both utilizing survey methods. The first is self-reports of criminal involvement, which were initially used in the 1940s to "expose" the amount of hidden crime. The second is surveys of victimization, the most recent and probably the most important and influential of the unofficial crime statistics. Victimization surveys were initiated in the mid-1960s to "illuminate" (that is, to specify rather than simply to expose) the dark figure and to depict crime from the victim's perspective. There are also two minor, much less significant sources of social-science data: observation studies of crime or criminal justice, and experiments on deviant behavior. Among the sources of private-agency records are those compiled by firms or industries to monitor property losses, injuries, or claims; by private security organizations; and by national trade associations. The focus here will be on the social-science sources of unofficial crime statistics, particularly victimization and self-report surveys.
Victimization surveys. Recognizing the inadequacies of official measures of crime, particularly the apparently substantial volume of crime and victimization that remains unknown to, and therefore unacted upon by, criminal justice authorities, the President's Commission on Law Enforcement and Administration of Justice initiated relatively small-scale pilot victimization surveys in 1966. One was conducted in Washington, D.C. A sample of police precincts with medium and high crime rates was selected, within which interviews were conducted with residents. Respondents were asked whether in the past year they had been a victim of any of the index crimes and of assorted nonindex crimes. Another surveyed business establishments in the same precincts in Washington, D.C., as well as businesses and residents in a sample of high-crime-rate precincts in Boston and Chicago. The instruments and procedures used in the first pilot survey were modified and used to interview residents in the second study. Owners and managers of businesses were asked whether their organization had been victimized by burglary, robbery, or shoplifting during the past year. The third pilot survey was a national poll of a representative sample of ten thousand households. Again, respondents were interviewed and asked whether they or anyone living with them had been a victim of index and nonindex crimes during the past year. They were also asked their opinions regarding the police and the perceived personal risk of being victimized.
The pilot studies verified empirically what criminologists had known intuitively since the early nineteenth century—that official crime statistics, even of crimes known to the police, underestimate the actual amount of crime. However, these victimization studies showed that the dark figure of hidden crime was substantially larger than expected. In the Washington, D.C., study, the ratio of reported total victim incidents to crimes known to the police was more than twenty to one (Biderman). This dramatic ratio of hidden victimizations to reported crimes was replicated among the individual victims in the Boston and Chicago study (Reiss) and in the national pilot survey, which showed that about half of the victimizations were not reported to the police (Ennis). The survey of business establishments discovered the inadequacy of business records as measures of crime, showed higher rates of victimization than police records indicated, and verified the valid reporting of business victimization by respondents (Reiss). These studies also demonstrated that the discrepancy between the number of victimizations and of crimes reported to the police varies importantly by the type of offense and by the victim's belief that reporting a crime will have consequences. In general, the more serious the crime, the more likely a victim is to report it to the police; minor property crimes are reported least frequently. As a result of the startling findings of these pilot victimization surveys and of the subsequent recommendations of the President's Commission, an annual national victimization survey, the National Crime Victimization Survey (NCVS), was initiated.
In 1972 the United States became one of the few countries to carry out annual national victimization surveys. The NCVS is sponsored by the Bureau of Justice Statistics (within the United States Department of Justice) and is conducted by the Bureau of the Census. Its primary purpose is to "measure the annual change in crime incidents for a limited set of major crimes and to characterize some of the socioeconomic aspects of both the reported events and their victims" (Penick and Owens, p. 220). In short, the survey is designed to measure the amount, distribution, and trends of victimization and, therefore, of crime.
The survey covers a representative national sample of approximately sixty thousand households, and through 1976 it included a probability sample of some fifteen thousand business establishments. Within each household, all occupants fourteen years of age or older are interviewed, and information on twelve- and thirteen-year-old occupants is gathered from an older occupant. Interviews are conducted every six months for three years, after which a different household is interviewed, in a constant process of sample entry and replacement.
The crimes measured in the NCVS are personal crimes (rape, robbery, assault, and theft), household crimes (burglary, larceny, and motor-vehicle theft), and business crimes (burglary and robbery). These crimes were selected intentionally for their similarity to the index crimes of the UCR in order to permit important comparisons between the two data sets. The only two index crimes missing are murder, for which no victim can report victimization, and arson, the ostensible victim of which is often the perpetrator.
The statistics on victimization generated by the NCVS provide an extremely important additional perspective on crime in the United States. Ever since they were first published, the survey's reports have forced a revision in thinking about crime. For example, a report on victimization in eight American cities, using data from the very first surveys, provided striking confirmation of the magnitude of the underreporting and nonreporting problem identified in the pilot projects. Comparing the rates of victimization and crimes known to the police, the victimization data showed fifteen times as much assault, nine times more robbery, seven times the amount of rape, and, surprisingly, five times more motor-vehicle theft than reported in the UCR for the same period (U.S. Department of Justice).
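The comparisons above are simple rate ratios between survey-estimated and police-recorded crime. A minimal sketch, using hypothetical counts rather than the actual eight-city figures:

```python
# Sketch of the survey-to-police rate-ratio comparison.
# All counts below are hypothetical illustrations, not NCVS or UCR figures.

def rate_per_100k(count, population):
    """Convert a raw count into a rate per 100,000 population."""
    return count / population * 100_000

population = 1_000_000
survey_assaults = 15_000   # victimizations estimated from survey responses
police_assaults = 1_000    # assaults recorded as "known to the police"

ratio = (rate_per_100k(survey_assaults, population)
         / rate_per_100k(police_assaults, population))
# With these hypothetical counts, the survey shows 15 times as much assault,
# mirroring the kind of ratio reported for the eight-city comparison.
```

Because both rates share the same population denominator, the ratio reduces to the ratio of raw counts; the rate form simply makes comparisons across cities of different sizes meaningful.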
Some of the discrepancy in the two rates can be accounted for by the practices of the police—not viewing a reported offense as a crime, failing to react, and not counting and recording it. But since the time of the pilot research it has been clear that the major reason for the discrepancy is the reporting practices of victims: the pilot national survey reported that approximately 50 percent of victimizations are not reported to the police (Ennis). An analysis of preliminary data from the first NCVS in 1973 concluded that nonreporting by victims accounted for much more of the difference between victimization and official crime rates than did nonrecording by the police. Almost three-fourths (72 percent) of the crime incidents are not reported to the police, ranging from a nonreporting rate of 32 percent for motor-vehicle theft to a rate of 82 percent for larceny (Skogan).
The primary reasons for citizen hesitancy to report crime to the police are relatively clear—the victim does not believe that reporting will make any difference (35 percent) or that the crime is not serious enough to bring to the attention of authorities (31 percent) (U.S. Bureau of the Census). The less serious crimes, particularly minor property crimes, are less often reported to the police, and the more serious ones are reported more often. Paradoxically, some of the more serious personal crimes, including aggravated assault or rape, are not reported because a personal relationship between victim and perpetrator is being protected or is the source of potential retribution and further harm (Hindelang, Gottfredson, and Garofalo). Another crime, arson, presents the problems of potential overreporting and of distinguishing between victim and perpetrator, since collecting insurance money is often the motive in burning one's own property.
The NCVS does not merely provide another national index of crime, a view of crime from the perspective of the victim, and illumination of the dark figure of hidden crime. It has also contributed to a better understanding of crime in the United States, forcing scholars and criminal justice professionals alike to question many basic assumptions about crime. Perhaps most perplexing are the implications of the victimization trend data. From 1973 to the 1990s, the overall victimization rate remained relatively stable from year to year, whereas the UCR showed a more inconsistent and upward trend. It was not until the observed decline in crime from about 1992 until 2001 that the UCR and NCVS both showed the same trend, due perhaps to refinements in both systems. However, there are a number of possible interpretations for the differences, centering on the relative strengths and weaknesses of official records of crimes known to the police, as compared to unofficial victim reports.
In general, victimization surveys have the same problems and threats to validity and reliability as any other social-science survey, as well as some that are specific to the NCVS. Ironically, there is a "double dark figure" of hidden crime—crime that is not reported to interviewers in victimization surveys designed to uncover crimes not reported to the police! Such incomplete reporting of victimization means that victimization surveys, like official data sources, also underestimate the true amount of crime. Of course, this suggests that the discrepancy between the crime rate estimates of the NCVS and of the UCR is even larger than reports indicate.
A number of factors contribute to this doubly dark figure of unreported victimization. One of the most difficult problems in victimization surveys is to anchor the reported crime within the six-month response frame. A respondent not only has to remember the crime incident, but must also specify when it took place during the past six months. The longer the period of time between the crime and the interview, the more likely memory is to fail a respondent, who may either forget an incident completely or not remember some important details about the victimization. The less serious and more common offenses are less worth remembering because of their more trivial nature and ephemeral consequences. The concern and tolerance levels of victims may also affect their recollection of crime incidents. Moreover, telescoping may take place: the victimization may be moved forward or backward in time, from one period to another, when a victim knows that a crime took place but cannot recall precisely when. Another source of inaccurate and inconsistent responses is deceit. Some respondents may simply lie, or at least shade their answers. There are many reasons for deceit, including embarrassment, social desirability (the wish to make a socially desirable response), interviewer-respondent mistrust, personal aggrandizement, attempts to protect the perpetrator, disinterest, and lack of motivation. Memory decay and telescoping are neither intentional nor manipulative, and are therefore more random in their effects on responses; they are likely to contribute to the underestimation of victimization. Deceit, however, is intentional and manipulative, and it is more likely to characterize the responses of those who have a reason to hide or reveal something. Its effects on victimization estimates are less predictable because deceit may lead to underreporting among some respondents but to overreporting among others.
One can assess the extent of underreporting through devices such as a "reverse record check," by means of which respondents who have reported crimes to the police are included in the survey sample (Turner; Hindelang, Hirschi, and Weis, 1981). Comparing a respondent's crime incidents reported in the victimization interview with those reported to the police provides a measure of underreporting. A problem, though, is that underreporting can be validated more easily than overreporting. One "underreports" crimes that actually took place. For every official crime known to the police of a particular offense category, one can be relatively certain of underreporting if no victimization is reported for that offense category. If more victimizations are reported for an offense category than are known to the police, one cannot know whether the respondent is overreporting. A person may "overreport" crimes that never took place—they cannot be known, verified, or validated.
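The logic of a reverse record check, including its asymmetry, can be sketched as follows. The respondent identifiers, offenses, and records are invented for illustration:

```python
# Sketch of a reverse record check: respondents are sampled *because* a crime
# they reported to the police is on record, and their survey answers are then
# compared against that record. All data here are invented illustrations.

police_record = {
    "resp_01": {"burglary"},
    "resp_02": {"assault"},
    "resp_03": {"larceny"},
}
survey_report = {
    "resp_01": {"burglary"},            # recalled the known offense
    "resp_02": set(),                   # failed to mention the known assault
    "resp_03": {"larceny", "assault"},  # extra mention that cannot be verified
}

def underreport_rate(record, survey):
    """Share of police-known offenses the respondent failed to mention.
    Note the asymmetry: resp_03's extra assault cannot be validated against
    any record, so overreporting remains unmeasured by this design."""
    known = sum(len(offenses) for offenses in record.values())
    missed = sum(len(record[r] - survey.get(r, set())) for r in record)
    return missed / known

rate = underreport_rate(police_record, survey_report)  # 1 of 3 known offenses missed
```

The set difference `record[r] - survey[r]` captures exactly what the method can verify: offenses known to the police but absent from the interview.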
One of the strengths of the NCVS, namely, that the crimes included in the questionnaire are F.B.I. index crimes, is also a problem. In addition to the fact that two of the index crimes (murder and arson) are not included, many other important crimes are not measured in the victimization surveys. Obviously, the whole array of crimes without victims are excluded, as well as the nonindex crimes and crimes not included in the UCR program. The result is that the victimization statistics are somewhat limited in their representativeness and generalizability.
An important limitation of the design of the NCVS is a strength of the UCR—its almost complete coverage (98 percent) of the total United States population and the resultant ability to examine the geographic and ecological distribution of crime from the national level to the levels of regions, states, counties, Standard Metropolitan Statistical Areas, cities, and local communities. Historically, data on victimization have been collected from a sample of the population, which has varied around 100,000 respondents, distributed geographically throughout the United States. There are simply not enough data to generate meaningful and useful statistics for each of the geographic and ecological units represented in the UCR. This would require a comprehensive census of households, the cost of which would be prohibitive.
Another design problem is referred to as bounding, or the time frame used as the reference period in interviews, which is established at the first interview on a six-month cycle for "household location." This is done to fix the empirically determined optimum recall period of six months and to avoid double reporting of the same crime incident by respondents. The bounding of household locations rather than of the occupants of the household has also been a problem. If the occupants move, the new occupants are not bounded, and it has been estimated that about 10 to 15 percent of the sample consists of unbounded households. This factor, coupled with the mobility of the sample, creates a related problem: complete data records covering the three-year span of each panel are available for perhaps only 20 percent of the respondents. This restricts general data analysis possibilities, particularly the feasibility and utility of these data for longitudinal analyses of victimization experiences (Fienberg).
Finally, there are the inevitable counting problems: When there is more than one perpetrator involved in a crime, it is particularly difficult for respondents to report the number of victimizers with accuracy. The typical impersonality of a household burglary makes it impossible for a victim to know the number or characteristics of the burglars. Even as personal a crime as aggravated assault often presents the victim with problems in accurately recalling his perceptions when more than one person attempted or did physical injury to his body. The respondent's reports, then, may be less accurate when the perpetrator could be seen or when there was more than one observable perpetrator. If a respondent reports multiple victimizers in a crime incident, whether a property crime or violent crime, it counts as one victimization—the general counting rule is "one victim, one crime." By itself this is not necessarily problematic, but if one compares victimization rates and official crime rates for property offenses (for which the UCR counting rule is "one incident, one crime"), there may be sufficient noncomparability of units to jeopardize the validity of the comparison. For example, a three-victim larceny would yield three reports of victimization but only one crime known to the police. A three-victim assault would yield three of each and present fewer problems of comparability. The perspectives of the victim and the police are different, as are those of the NCVS and the UCR in counting and recording crime incidents with different statistical outcomes and interpretations.
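The divergent counting rules can be made concrete with a small sketch. The incident records and the simplified UCR rule below are illustrative assumptions, not the bureaus' actual scoring procedures:

```python
# Sketch of the two counting rules, using the text's examples of a
# three-victim larceny and a three-victim assault. Records are hypothetical,
# and the UCR rule is simplified for illustration.

incidents = [
    {"offense": "larceny", "victims": 3},
    {"offense": "assault", "victims": 3},
]

PROPERTY_OFFENSES = {"larceny", "burglary", "motor-vehicle theft"}

def ncvs_count(records):
    """NCVS-style 'one victim, one crime': each victim yields a victimization."""
    return sum(r["victims"] for r in records)

def ucr_count(records):
    """Simplified UCR-style rule: property offenses score 'one incident, one
    crime'; crimes against persons (e.g., assault) are counted per victim."""
    return sum(1 if r["offense"] in PROPERTY_OFFENSES else r["victims"]
               for r in records)

victimizations = ncvs_count(incidents)   # 3 + 3 = 6 victimizations
crimes_known = ucr_count(incidents)      # 1 larceny + 3 assaults = 4 crimes
```

The three-victim larceny produces three victimizations but only one recorded crime, while the three-victim assault produces three of each, which is precisely why cross-dataset comparisons are safer for violent than for property offenses.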
A more serious counting problem involves series victimizations, that is, rapid, repeated, similar victimizations of an individual. For a victim, it can be very difficult to separate one crime from another if they are very similar and happen within a compressed time period. The consequence is that validity suffers: there is a tendency to "blur" the incidents and thus to further underestimate the number of victimizations. The questionnaire separates single and series incidents, the latter defined as three or more similar crimes that the respondent cannot differentiate in time or place of occurrence. Early publications of the NCVS excluded these series victimizations from the published victimization rates, raising the possibility that the rates are underestimations and that even more of the dark figure of hidden crime might be illuminated. If this and other problems with victimization surveys are resolved, the discrepancy between the amount of crime committed and the amount eventually reported to the police may prove even more substantial. There is little evidence that victims (except those of forcible rape) are changing their patterns of reporting crimes to the police, but there is mounting and more rigorous evidence that our ability to measure the amount and distribution of the dark figure of unreported crime is improving.
Self-report surveys. Surveys of self-reported criminal involvement are an important part of the improved capacity to illuminate the dark figure, in this case from the perspective of the criminal (or victimizer). Self-report surveys predate victimization surveys by more than twenty years. Preliminary, groundbreaking research on self-reported hidden crime was conducted in the 1940s, but the method of simply asking someone about the nature and extent of his own criminal involvement did not become a recognized standard procedure until the late 1950s, with the work of James Short and Ivan Nye.
Austin Porterfield first used this variation of the survey research method of "self-disclosure" by a respondent in 1946, to compare the self-reports of college students regarding their delinquent activities while they were in high school with self-reports of delinquents processed through the juvenile court. Not only were more offenses admitted than detected, but also more significantly, it appeared that the college students had been involved in delinquency during their adolescence in ways similar to those of the officially defined delinquents. These findings suggested that the distinction between delinquent and nondelinquent was not dichotomous, but rather more continuous, and that crime was perhaps distributed more evenly in the American social structure than official statistics would suggest. Fred Murphy, Mary Shirley, and Helen Witmer reported in 1946 that the admissions of delinquent activities by boys who participated in a delinquency prevention experiment significantly surpassed the number of offenses that came to the attention of juvenile authorities. James Wallerstein and Clement Wyle conducted a study that remains unique in self-report research because it surveyed a sample of adults in 1947. They discovered that more than 90 percent of their sample of about fifteen hundred upper-income "law-abiding" adults admitted having committed at least one of forty-nine crimes included in the questionnaire.
These early self-report survey findings confirm empirically what criminal statisticians, law enforcement authorities, and even the public had known since the time of Quetelet—that a substantial volume of crime never comes to the attention of the criminal justice system. The hint that some of this invisible crime is committed by persons who are not usually considered candidates for official recognition as criminals was even more revelatory and intriguing, but remained dormant for a decade.
Heeding suggestions that criminology needed a "Kinsey Report" on juvenile delinquency, Short and Nye in 1957 developed an anonymous, self-administered questionnaire that contained a checklist of delinquent acts, which was administered to populations of students and incarcerated delinquents. Their research had a more profound and longer-lasting impact because it was tied to theory-testing and construction (Nye) and, more importantly, because it provocatively verified the hint only alluded to in the earlier self-report studies—that crime is not disproportionately a phenomenon of the poor, as suggested by official crime statistics. The self-report data were apparently discrepant with the official data because they showed that self-reported delinquency was more evenly distributed across the socioeconomic status scale than official delinquency. This one provocative finding called into question the correlates and theories of juvenile delinquency and crime because most were based on official crime statistics and that period's depiction of crime and delinquency as a phenomenon of the poor. The controversy set off by the work of Short and Nye still continues.
Literally hundreds of similar studies have been carried out since Short and Nye's pioneering work, most with similar results: there is an enormous amount of self-reported crime that never comes to the attention of the police; a minority of offenders commits a majority of the offenses, including the more serious crimes; the more frequently one commits crimes, the more likely is the commission of serious crimes; and those most frequently and seriously involved in crime are most likely to have official records. Self-report researchers have tended to assume that self-reports are valid and reliable—certainly more so than official measures. Ever since the mid-1960s, work critical of criminal justice agencies and of official crime statistics generated further support for these assumptions. A few theorists, such as Travis Hirschi, even constructed and tested delinquency theories based on self-report measures and their results.
It has been suggested that "confessional data are at least as weak as the official statistics they were supposed to improve upon" (Nettler, p. 107). This criticism is damning to the extent that self-report statistics are in fact no more valid and reliable than official statistics: if one rejects official statistics, then one should also question the adequacy of self-report statistics. Furthermore, as with official records and victimization surveys, there are a number of problems with the self-report method. Some of these are problems shared by victimization surveys and self-report surveys, and others are unique to the latter. The shared problems are the basic threats to the validity and reliability of responses to survey questions, including memory decay, telescoping, deceit, social-desirability response effects, and imprecise bounding of reference periods. The unique problems fall into four categories: inadequate or unrepresentative samples, unrepresentative domains of behavior measured, the validity and reliability of the method itself, and methods effects.
Whereas the national victimization surveys cannot provide refined geographical and ecological data because of the dispersion of the probability samples across the United States, self-report surveys have other problems of representativeness and generalizability because they do not typically use national samples. Practically all self-report research is conducted with small samples of juveniles attending public schools in a community that, characteristically, is relatively small, often suburban or rural, and modally middle-class and white. This, of course, restricts the ability to generalize beyond these kinds of sample characteristics to adults, juveniles who are not in school, those who refuse to participate, urban inner-city juveniles, and poor and nonwhite youngsters. Such "convenience" samples also create analytic problems because data on those variables that are correlated with delinquency are simply unavailable or underrepresented in the sample. In short, most self-report research has somewhat limited generalizability because of typical sample characteristics. On the other hand, unlike the NCVS or UCR, self-report surveys were not intended originally to produce national or even generalizable estimates of the amount of juvenile crime and delinquency in the United States.
Self-report surveys were intended, however, to produce data on a variety of delinquent behaviors. Compared to the restricted range of index crimes included in the NCVS, the domain of behavior measured in self-report surveys is expansive, with as many as fifty illegal acts in a questionnaire not being uncommon. Such expansiveness, however, creates other problems. Historically, the juvenile court has had jurisdiction over both crimes and offenses that are illegal only for juveniles, usually referred to as status offenses and including truancy, incorrigibility, curfew violation, smoking, and drinking. Self-report surveys have correctly covered crimes and status offenses alike in studying juvenile delinquency, but in some cases there has been an overemphasis on the less serious offenses. To the extent that there is an overrepresentation of less serious and perhaps trivial offenses, self-report measures are inadequate measures of the kind of serious juvenile crime that is likely to come to the attention of authorities. This is important in describing accurately the characteristics of juvenile offenders and their behavior, as well as in comparing self-report and official data. Such comparison is crucial to validation research, where one needs to compare the same categories of behavior, including both content and seriousness, in order to assess the reciprocal validity of self-report and official measures. In criminology as elsewhere, one should not compare apples with oranges!
Unfortunately, there has been a dearth of this kind of careful validation research, as well as of systematic research on reliability. The accuracy and consistency of self-report surveys have been assumed to be quite acceptable, or, if questions have been posed, they have typically come from validity and reliability research on general social-science survey methods. For example, it has been assumed that anonymous surveys are more valid than signed surveys and that interviews are preferred over self-administered questionnaires. Yet no study had directly compared the validity, and only one had compared the reliability, of two or more self-report methods within the same study until the work of Michael Hindelang, Travis Hirschi, and Joseph Weis in 1981. In those isolated studies where validity and reliability were addressed, external validation criteria such as official record data have been used too infrequently.
Of course, critics have remained skeptical about the accuracy of responses from liars, cheaters, and thieves, as well as from straight and honorable persons. The latter are not motivated by deception or guile, but they may respond incorrectly because a questionnaire item has poor face validity, meaning that it does not make sufficiently clear what is being asked and that the respondent is consequently freer to interpret, construe, and attribute whatever is within his or her experience and imagination. For example, a common self-report item, "Have you ever taken anything from anyone under threat of force?" is intended to tap instances of robbery. However, respondents might answer affirmatively if they had ever appropriated a candy bar from their kid sister. This problem of item meaning and interpretation is chronic in survey research, but it remains problematic only if no validation research is undertaken to establish face validity. Unfortunately, this has been the case in the development of self-report instruments.
There has been a basic inattention to the psychometric properties of self-report surveys and attendant methods effects on measurement. From the psychometric research that went into the development of the NCVS, it is clear that the bounding practices in self-report research have been inadequate: the reference periods are typically too long and vary from study to study. Most self-report surveys ask whether a respondent has "ever" committed a crime, a few use the past "three years," some use the past "year," but very few use the past "six months" (or less), which was established as the optimum recall period for the national victimization surveys. This poses threats to the accuracy of responses, since it is established that the longer the reference period, the greater the problems with memory decay, telescoping, and misinterpretation of events.
A related problem arises when the self-report researcher wants to find out how often within a specified period a respondent has committed a crime. A favored means of measuring the frequency of involvement has been the "normative" response category. A respondent is asked, "How often in the past year have you hit a teacher?" and is given a set of response categories that includes "Very often," "Often," "Sometimes," "Rarely," and "Never." One respondent can check "Rarely" and mean five times, whereas another can check the same response and mean one time. They each respond according to personal norms, which are tied to their own behavior, as well as to that of their peers. This creates analytic problems: the answers of different respondents cannot be normed (that is, placed on a common scale), which obviates meaningful comparison and loses a great deal of information. Simply asking each respondent to record the actual frequency of commission for each offense can solve these problems.
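The analytic loss from normative response categories can be illustrated with a small sketch. The respondents, counts, and labels below are hypothetical illustrations, not data from any study:

```python
# Two hypothetical respondents who each check "Rarely," but whose
# personal norms attach very different frequencies to that label.
responses = [
    {"respondent": "A", "actual_count": 5, "label": "Rarely"},
    {"respondent": "B", "actual_count": 1, "label": "Rarely"},
]

# With only the labels recorded, the two answers are indistinguishable:
labels = {r["label"] for r in responses}
assert labels == {"Rarely"}

# Recording actual frequencies instead preserves the fivefold difference:
counts = sorted(r["actual_count"] for r in responses)
print(counts)  # the raw counts remain comparable across respondents
```

This is why asking for the actual frequency of commission, rather than a vague category, recovers information that the normative format discards.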
Finally, unlike the NCVS and the UCR, there is very little about self-report surveys—whether their samples, instruments, or procedures—that is "standardized." This restricts the kinds of comparison across self-report studies that could lead to more improvements in the method and provide a more solid empirical foundation for theory construction and testing, as well as the possibility of nationwide self-report statistics comparable to those of the NCVS and the UCR.
This lack of standardization, inadequacies of samples, and the question of the differential validity and reliability of self-report and official measures of crime have led to two important developments in research on crime statistics. The first is the initiation of surveys of nationally representative samples of juveniles for the purpose of estimating the extent and nature of delinquency and of substance abuse in the United States. The second is the conducting of more rigorous and comprehensive research on the differential validity and reliability of official, as compared to self-report, measures of crime and delinquency. In 1967, the National Institute of Mental Health initiated the first of an interrupted but relatively regular series of National Youth Surveys of a representative sample of teenage youths, who were interviewed about a variety of attitudes and behaviors, including delinquent behavior. This survey was repeated in 1972, and in 1976 the National Institute for Juvenile Justice and Delinquency Prevention became a cosponsor of what has become an annual self-report survey of the delinquent behavior of a national probability panel of youths aged eleven to seventeen. The two major goals are to measure the amount and distribution of self-reported delinquent behavior and official delinquency and to account for any observed changes in juvenile delinquency.
These periodic national self-report surveys allow more rigorous estimation of the nature and extent of delinquent behavior. It is ironic, however, that the validity, reliability, and viability of the self-report method as an alternative or adjunct to official measures was not assessed rigorously until Hindelang, Hirschi, and Weis began a study of measurement issues in delinquency research, focusing on the comparative utility of self-report and official data.
Within an experimental design, a comprehensive self-report instrument was administered to a random sample of sixteen hundred youths from fourteen to eighteen years of age, stratified by sex, race (white or black), socioeconomic status (high or low), and delinquency status (nondelinquent, police contact, or court record). Officially defined delinquents, boys, blacks, and lower-socioeconomic-status subjects were oversampled in order to facilitate data analysis within those groups that are often underrepresented in self-report studies. Subjects were randomly assigned to one of four test conditions that corresponded to four self-report methods of administration: anonymous questionnaire, signed questionnaire, face-to-face interview, and blind interview. A number of validation criteria were utilized, including the official records of those subjects identified in a reverse record check, a subset of questions administered by the randomized response method, a deep-probe interview testing the face validity of a subset of delinquency items, and a follow-up interview with a psychological-stress evaluator to determine the veracity of responses. The subjects were brought to a field office, where they answered the questions within the method condition to which they were randomly assigned. This experimental design, coupled with a variety of external validation criteria and reliability checks, ensures that the findings and conclusions can be drawn with some confidence—undoubtedly with more confidence than in any prior research on validity and reliability in the measurement of delinquency.
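One of those validation tools, the randomized response method, protects respondents by letting chance determine which question they answer, so that no individual answer is incriminating yet the group's prevalence can still be recovered arithmetically. The text does not specify which design the study used; the sketch below assumes Warner's classic version, in which each respondent truthfully answers the sensitive question with probability p and its complement otherwise (all numbers here are illustrative):

```python
import random

def warner_estimate(yes_rate, p):
    """Recover the prevalence pi of a sensitive behavior from the observed
    proportion of 'yes' answers under Warner's randomized response design.
    Observed yes-rate: lam = p*pi + (1 - p)*(1 - pi), solved for pi
    (requires p != 0.5)."""
    return (yes_rate - (1 - p)) / (2 * p - 1)

def simulate_survey(true_pi, p, n, rng):
    """Simulate n truthful respondents under the randomized design."""
    yes = 0
    for _ in range(n):
        has_trait = rng.random() < true_pi        # does the respondent offend?
        asked_sensitive = rng.random() < p        # coin flip picks the question
        yes += has_trait if asked_sensitive else not has_trait
    return yes / n

rng = random.Random(42)
observed = simulate_survey(true_pi=0.20, p=0.75, n=100_000, rng=rng)
estimate = warner_estimate(observed, p=0.75)
print(round(estimate, 2))  # close to the true prevalence of 0.20
```

The design choice is the point: the researcher never learns which question any individual answered, yet the aggregate estimate is unbiased, which is why the method serves as an external validity check on direct self-reports.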
Hindelang, Hirschi, and Weis's study produced a variety of findings on the whole range of previously identified methodological problems and issues. Official crime statistics, it concluded, generate valid indications of the sociodemographic distribution of delinquency. Self-reports, indeed, measure a domain of delinquent behavior that does not overlap significantly with the domain covered by official data, particularly for the more serious crimes. However, self-reports can measure the same type and seriousness of delinquent behaviors as are represented in official records. Within the domain of delinquent behavior that they do measure, self-reports are very reliable and basically valid. Self-report samples have been inadequate in that they do not include enough officially defined delinquents, nonwhites, and lower-class youths to enable confident conclusions to be drawn regarding the correlates of the more serious delinquent acts for which a juvenile is more likely to acquire an official record. Delinquency, whether measured by official or self-report data, is not equally distributed among all segments of society—there are real differences between those youngsters who engage in crime and those who do not. Methods of administration have no significant effects on the prevalence, incidence, validity, or reliability of self-reports. There is apparently less validity in the self-reports of those respondents with the highest rates of delinquency—male, black, officially defined delinquents. Perhaps the most significant result of the research relates to this finding of differential validity for a small subpopulation of respondents. As originally proposed by Hindelang, Hirschi, and Weis in 1979, the empirical evidence shows that there is no discrepancy in the major correlates of self-reported or official delinquency, except for race, which may be attributable to the less valid responses of black subjects, particularly males with official records of delinquency.
The finding that self-reports and official measures do not produce discrepant results regarding the distribution and correlates of delinquency, but rather show convergence, is a critical piece of evidence in the controversy that has existed among criminal statisticians since the dark figure was identified at the beginning of the nineteenth century. Does the distribution of crime look the same when crimes not known to the police are included as it does when only crimes known to the police are counted? Are the different sources of crime statistics producing discrepant or convergent perspectives of crime?
Conclusion: discrepancy or convergence?
Returning to the two primary purposes of crime statistics, to measure the "amount" and "distribution" of crime, it is clear that there has been, and will probably continue to be, discrepancy among the estimates of the amount of crime that are generated by the variety of crime statistics. The dark figure of crime may never be completely illuminated, the reporting practices of victims will probably remain erratic, and the recording of crimes by authorities will continue to be less than uniform.
However, the ultimately more important purpose of crime statistics is the measurement of the distribution of crime by a variety of social, demographic, and geographic characteristics. Fortunately, the major sources of crime data—crimes known to the police, victimization surveys, and self-report surveys—generate similar distributions and correlates of crime, pointing to convergence rather than discrepancy among the measures of the basic characteristics of crime and criminals. The problems associated with each of the data sources remain, but they diminish in significance because these imperfect measures produce similar perspectives of crime. As Gwynn Nettler concluded, "Fortunately, despite the repeatedly discovered fact that more crime is committed than is recorded, when crimes are ranked by the frequency of their occurrence, the ordering is very much the same no matter which measure is used" (p. 97).
Comparisons of data from the UCR and the NCVS program show that they produce similar patterns of crime (Hindelang and Maltz). There is substantial agreement between the two measures in the ordering of the relative frequencies of each of the index crimes. Comparisons of self-reports of delinquency with crimes known to the police show that each provides a complementary rather than a contradictory perspective on juvenile crime (Hindelang, Hirschi, and Weis, 1981; Belson). Self-reports do not generate results on the distribution and correlates of delinquency that are contrary to those generated by police statistics or, for that matter, by victimization surveys. The youngsters who are more likely to appear in official police and court record data—boys, nonwhites, low achievers, youths with friends in trouble, urban residents, and youths with family problems—are also more likely to self-report higher rates of involvement in crime.
This message should be of some comfort to a variety of people interested in crime and delinquency, from researchers and theorists to policy-makers, planners, program implementers, and evaluators. The basic facts of crime are more consistent than many scholars and authorities in the past would lead one to believe. In fact, the major sources of official and unofficial crime statistics are not typically inconsistent in their representations of the general features of crime but rather provide a convergent perspective on crime. The characteristics, distribution, and correlates of crime and, therefore, the implications for theory, policy, and programs are not discrepant by crime measure, but convergent. The data generated by a variety of measures are compatible and confirming sources of information on crime. The study and control of crime can best be informed by these complementary sources of crime statistics.
Joseph G. Weis
Brian C. Wold
See also Criminology and Criminal Justice Research: Methods; Criminology and Criminal Justice Research: Organizations; Statistics: Costs of Crime; Statistics: Historical Trends in Western Society.
Belson, William A. Juvenile Theft: The Causal Factors—A Report of an Investigation of the Tenability of Various Causal Hypotheses about the Development of Stealing by London Boys. New York: Harper & Row, 1975.
Biderman, Albert D. "Surveys of Population Samples for Estimating Crime Incidence." Annals of the American Academy of Political and Social Science 374 (1967): 16–33.
Black, Donald J. "Production of Crime Rates." American Sociological Review 35 (1970): 733–748.
Boggs, Sarah L. "Urban Crime Patterns." American Sociological Review 30 (1965): 899–908.
Cohen, Lawrence E., and Felson, Marcus. "Social Change and Crime Rate Trends: A Routine Activity Approach." American Sociological Review 44 (1979): 588–608.
Doleschal, Eugene. "Sources of Basic Criminal Justice Statistics: A Brief Annotated Guide with Commentaries." Criminal Justice Abstracts 11 (1979): 122–147.
Ennis, Philip H. Criminal Victimization in the United States: A Report of a National Survey. Chicago: University of Chicago, National Opinion Research Center, 1967.
Federal Bureau of Investigation. Crime in the United States. Uniform Crime Reports for the United States. Washington, D.C.: U.S. Department of Justice, F.B.I., annually.
Fienberg, Stephen E. "Victimization and the National Crime Survey: Problems of Design and Analysis." Indicators of Crime and Criminal Justice: Quantitative Studies. Edited by Stephen E. Fienberg and Albert J. Reiss, Jr. Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics, 1980. Pages 33–40.
Guerry, André Michel. Essai sur la statistique morale de la France, précédé d'un Rapport à l'Académie des Sciences, par MM. Lacroix, Silvestre, et Girard. Paris: Crochard, 1833.
Hindelang, Michael J.; Gottfredson, Michael R.; and Garofalo, James. Victims of Personal Crime: An Empirical Foundation for a Theory of Personal Victimization. Cambridge, Mass.: Ballinger, 1978.
Hindelang, Michael J. "The Uniform Crime Reports Revisited." Journal of Criminal Justice 2 (1974): 1–17.
Hindelang, Michael J.; Hirschi, Travis; and Weis, Joseph G. "Correlates of Delinquency: The Illusion of Discrepancy between Self-Report and Official Measures." American Sociological Review 44 (1979): 995–1014.
——. Measuring Delinquency. Beverly Hills, Calif.: Sage, 1981.
Hirschi, Travis. Causes of Delinquency. Berkeley: University of California Press, 1969.
Keppel, Robert, and Weis, Joseph G. "Improving the Investigation of Violent Crime: The Homicide Investigation and Tracking System." Washington, D.C.: U.S. Department of Justice, 1993.
Klinger, David, and Bridges, George. "Measurement Error in Calls-For-Service as an Indicator of Crime." Criminology 35 (1997): 705–726.
Kulik, James A.; Stein, Kenneth B.; and Sarbin, Theodore R. "Disclosure of Delinquent Behavior under Conditions of Anonymity and Nonanonymity." Journal of Consulting and Clinical Psychology 32 (1968): 506–509.
Maltz, Michael D. "Crime Statistics: A Mathematical Perspective." Journal of Criminal Justice 3 (1975): 177–193.
Murphy, Fred J.; Shirley, Mary M.; and Witmer, Helen L. "The Incidence of Hidden Delinquency." American Journal of Orthopsychiatry 16 (1946): 686–696.
Nettler, Gwynn. Explaining Crime, 2d ed. New York: McGraw-Hill, 1978.
Nye, F. Ivan. Family Relationships and Delinquent Behavior. New York: Wiley, 1958.
Penick, Bettye K. Eidson, and Owens, Maurice E. B. III, eds. Surveying Crime. Washington, D.C.: National Academy of Sciences, National Research Council, Panel for the Evaluation of Crime Surveys, 1976.
Porterfield, Austin L. Youth in Trouble: Studies in Delinquency and Despair, with Plans for Prevention. Fort Worth, Tex.: Leo Potishman Foundation, 1946.
President's Commission on Law Enforcement and Administration of Justice. The Challenge of Crime in a Free Society. Washington, D.C.: The Commission, 1967.
Quetelet, Adolphe. Recherches sur le penchant au crime aux différens âges, 2d ed. Brussels: Hayez, 1833.
Rae, Richard F. "Crime Statistics, Science or Mythology." Police Chief 42 (1975): 72–73.
Reaves, Brian A. "Using NIBRS Data to Analyze Violent Crime." Bureau of Justice Statistics Technical Report. Washington, D.C.: U.S. Department of Justice, 1993.
Reiss, Albert J., Jr. Studies in Crime and Law Enforcement in Major Metropolitan Areas. Field Survey III, vol. 1. President's Commission on Law Enforcement and Administration of Justice. Washington, D.C.: The Commission, 1967.
Roberts, David. "Implementing the National Incident-Based Reporting System: A Project Status Report: A Joint Project of the Bureau of Justice Statistics and the Federal Bureau of Investigation [SEARCH, the National Consortium for Justice Information and Statistics]." Washington, D.C.: U.S. Department of Justice, 1997.
Robinson, Louis N. History and Organization of Criminal Statistics in the United States (1911). Reprint. Montclair, N.J.: Patterson Smith, 1969.
Sellin, Thorsten. "The Basis of a Crime Index." Journal of the American Institute of Criminal Law and Criminology 22 (1931): 335–356.
Sellin, Thorsten, and Wolfgang, Marvin E. The Measurement of Delinquency. New York: Wiley, 1964.
Short, James F., Jr., and Nye, F. Ivan. "Reported Behavior as a Criterion of Deviant Behavior." Social Problems 5 (1957–1958): 207–213.
Skogan, Wesley G. "Dimensions of the Dark Figure of Unreported Crime." Crime and Delinquency 23 (1977): 41–50.
Sparks, Richard F. "Criminal Opportunities and Crime Rates." Indicators of Crime and Criminal Justice: Quantitative Studies. Edited by Stephen E. Fienberg and Albert J. Reiss, Jr. Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics, 1980. Pages 18–32.
Turner, Anthony. San Jose Methods Test of Known Crime Victims. Washington, D.C.: U.S. Department of Justice, Law Enforcement Assistance Administration, National Institute of Law Enforcement and Criminal Justice, 1972.
U.S. Bureau of the Census. Criminal Victimization Surveys in the Nation's Five Largest Cities: National Crime Panel Surveys of Chicago, Detroit, Los Angeles, New York, and Philadelphia. Washington, D.C.: U.S. Department of Justice, Law Enforcement Assistance Administration, National Criminal Justice Information and Statistics Service, 1975.
U.S. Department of Justice, Law Enforcement Assistance Administration, National Criminal Justice Information and Statistics Service. Criminal Victimization Surveys in Eight American Cities: A Comparison of 1971/1972 and 1974/1975 Findings. Washington, D.C.: NCJISS, 1976.
U.S. National Commission on Law Observance and Enforcement [Wickersham Commission]. Report on Criminal Statistics. Washington, D.C.: The Commission, 1931.
Vollmer, August. "The Bureau of Criminal Records." Journal of the American Institute of Criminal Law and Criminology 11 (1920): 171–180.
Wallerstein, James S., and Wyle, Clement J. "Our Law-Abiding Law-Breakers." Probation 25 (1947): 107–112.
Warner, Barbara D., and Pierce, Glenn L. "Reexamining Social Disorganization Theory Using Calls to the Police as a Measure of Crime." Criminology 31 (1993): 493–517.
Wilkins, Leslie T. Social Deviance: Social Policy, Action, and Research. Englewood Cliffs, N.J.: Prentice-Hall, 1965.
"Statistics: Reporting Systems and Methods." Encyclopedia of Crime and Justice. . Encyclopedia.com. (May 27, 2017). http://www.encyclopedia.com/law/legal-and-political-magazines/statistics-reporting-systems-and-methods