Citations

A citation is a reference to a published work, or to any other form of communication, given in sufficient detail to uniquely identify the item. When academics, scientists, and other professionals refer to a published work in their own publications, they cite it, giving the author, year, title, and locus of publication (journal, book, working paper, etc.).

In academia, according to the principle of meritocracy, scholars, departments, and institutions are evaluated through objective criteria. One of the main instruments used to measure the quantity and quality of academic output is citation analysis. A citation count is used as a proxy for impact, signaling quality, because it records how often a given published work, author, or journal is cited in the literature. It allows us to build rankings of scholars, departments, institutions, and journals. These rankings are subject to significant variation over time. For instance, in economics, only three journals (the American Economic Review, Econometrica, and the Journal of Political Economy) appear consistently in the list of the profession's top ten journals during the 1970–1995 period (e.g., Laband and Piette 1994; Hodgson and Rothman 1999).
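As a minimal illustration of how raw citation tallies become a ranking, consider the following sketch (the records here are invented for illustration, not data from the studies cited above):

```python
from collections import Counter

# Hypothetical citation records: the journal in which each cited
# article appeared. Real citation analyses draw such records from
# large bibliographic databases.
cited_journals = [
    "American Economic Review", "Econometrica", "American Economic Review",
    "Journal of Political Economy", "Econometrica", "American Economic Review",
]

# A citation count is a tally; a ranking orders items by that tally.
ranking = Counter(cited_journals).most_common()
for rank, (journal, n) in enumerate(ranking, start=1):
    print(f"{rank}. {journal}: {n} citations")
```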

According to David N. Laband and J. P. Sophocleus (1985), citations are the scientific community's version of dollar voting by consumers for consumption goods. Holding prices constant, consumers decide to buy goods from certain producers because of the quality of their merchandise. Whether the purchase decision is influenced by the buyer's friendship or family relationship with the seller does not matter; the relevant point is the volume of sales, on which market shares are based. The same holds true for the consumption of scientific literature: what matters is the volume of citations, not the motivation behind each specific citation.

Any citation count has several dimensions. Among them, the sample of authors and journals, the time period under consideration, and self-citations play an important role in defining the relative importance of an author, article, or journal.
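For instance, whether self-citations are included can change an author's count materially, as in this minimal sketch with invented records:

```python
# Hypothetical records of (citing_author, cited_author) pairs.
records = [
    ("Smith", "Jones"), ("Jones", "Jones"), ("Lee", "Jones"),
    ("Jones", "Jones"), ("Smith", "Smith"),
]

author = "Jones"
total = sum(1 for _, cited in records if cited == author)
external = sum(1 for citing, cited in records
               if cited == author and citing != author)
print(f"{author}: {total} citations, {external} excluding self-citations")
# -> Jones: 4 citations, 2 excluding self-citations
```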

In two papers in the journal Science, David Hamilton (1990, 1991) showed that about half of all science papers were never cited within five years of publication. David Pendlebury (1991) corrected the figures, showing uncitedness rates of 22 percent in the physical sciences, 48 percent in the social sciences, and 93 percent in the humanities. Based on these figures, Newsweek concluded that "nearly half the scientific work in this country is worthless" (p. 44), suggesting that resources invested in science are wasted.

According to Arjo Klamer and Hendrik van Dalen (2002, 2005), the skewed distribution of citations is part and parcel of what they call the "attention game" in science. Because there are too many articles for any scholar to pay attention to, she has to make a selection, and she usually follows others in doing so. There is a snowball effect: one scholar reads an article because others cite it, and by citing it in her own work she may lead still others to the article as well. This is consistent with the reward system of science, and in particular with the Matthew effect in science (Merton 1968), in which a few scientists get most of the credit and recognition for ideas and discoveries made by many other scientists. For instance, Diana Crane (1965) found that highly productive scientists at a major university gained recognition more often than equally productive scientists at less prestigious universities.
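The snowball effect can be made concrete with a toy preferential-attachment simulation, in which the chance that a paper attracts the next citation grows with the citations it already has. The parameters below are assumptions chosen only to exhibit the mechanism, not a calibration of Klamer and van Dalen's model:

```python
import random

random.seed(42)
N_PAPERS, N_CITATIONS = 1000, 5000   # assumed population sizes

counts = [0] * N_PAPERS
for _ in range(N_CITATIONS):
    # A paper's chance of being cited is proportional to its current
    # citations plus one (the +1 lets uncited papers be discovered).
    weights = [c + 1 for c in counts]
    paper = random.choices(range(N_PAPERS), weights=weights)[0]
    counts[paper] += 1

counts.sort(reverse=True)
top_decile_share = sum(counts[: N_PAPERS // 10]) / N_CITATIONS
print(f"top 10% of papers hold {top_decile_share:.0%} of citations")
print(f"papers never cited: {counts.count(0)}")
```

Even this crude mechanism produces the skewed pattern described above: a small top group accumulates a disproportionate share of citations while many papers remain uncited.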

Besides the leadership effect described above, the number of citations may also be influenced by academic networks because of their positive externalities. A scholar with a wide academic network can more easily communicate her work through seminar presentations, workshops, and conferences, and publish it in influential books, monographs, and professional journals. Also important is the relative standing of the scholar's network: a scholar who has access to a field's leading professionals, departments, associations, and their respective publication venues may find it easier to be influential and therefore to be widely cited.

The number of citations increases with the number of publications and with the type of journals in which a scholar publishes. João Ricardo Faria (2003) presented a model in which the scholar is assumed to maximize the success of her career as measured by the number of citations of her work. The scholar may choose to publish in top journals, which have higher rejection rates, making the expected number of publications low; a scholar with these preferences is called a K-strategist. If the scholar instead chooses to publish in lesser-known journals with lower rejection rates, she ends up having many papers accepted for publication; in this case, she is an r-strategist. Faria showed that a scholar following either an r- or a K-strategy may achieve the same final number of citations, and he conjectured that the most successful strategy is one that combines both approaches to quantity and quality, a strategy called the "Samuelson ray," named after Paul A. Samuelson, one of the most productive and influential economists to date.
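The trade-off can be reduced to back-of-the-envelope arithmetic. The acceptance and citation rates below are purely hypothetical, and Faria's model is far richer than this, but they show how the two strategies can tie:

```python
def expected_citations(submissions, acceptance_rate, citations_per_paper):
    """Expected citations = expected publications x citations per paper."""
    return submissions * acceptance_rate * citations_per_paper

# K-strategist: few acceptances at selective journals, each widely cited.
k = expected_citations(submissions=10, acceptance_rate=0.05,
                       citations_per_paper=40)
# r-strategist: many acceptances at less selective journals, each cited little.
r = expected_citations(submissions=10, acceptance_rate=0.50,
                       citations_per_paper=4)

print(k, r)  # 20.0 20.0 -- either strategy yields the same citation total
```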

The number of citations to a scholar's publications may vary over time with the quality of her work, the reputation of the journals in which she has published, and the number of papers she has published. Faria (2005) studied a Stackelberg differential game between scholars and editors. Journal editors are leaders who maximize the quality of the papers they publish, while the scholar is the follower, seeking to maximize the number of papers published, constrained by how her work is cited. Faria showed that the number of citations increases with rules aimed at raising a scholar's productivity (e.g., tenure requirements) and with a journal's reputation.
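A stylized numerical sketch of the leader-follower logic is given below. The payoff functions and parameters are assumptions invented for illustration, not Faria's specification (which is a differential game); the sketch only shows how the editor, moving first, anticipates the scholar's best response:

```python
import numpy as np

grid = np.linspace(0, 1, 101)

def publications(effort, bar):
    # Papers are accepted when the scholar's effort clears the editor's
    # quality bar (an assumed, purely illustrative technology).
    return 10 * max(effort - bar, 0.0)

def scholar_payoff(effort, bar):
    # Follower values publications; effort is costly.
    return publications(effort, bar) - 4 * effort**2

def editor_payoff(bar, effort):
    # Leader values the quality of what gets published.
    return bar * publications(effort, bar)

def best_response(bar):
    # Follower: the effort maximizing the scholar's payoff, given the bar.
    return max(grid, key=lambda e: scholar_payoff(e, bar))

# Leader: choose the bar anticipating the follower's best response.
q_star = max(grid, key=lambda q: editor_payoff(q, best_response(q)))
print(f"quality bar: {q_star:.2f}, effort: {best_response(q_star):.2f}")
```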

BIBLIOGRAPHY

Crane, Diana. 1965. Scientists at Major and Minor Universities: A Study of Productivity and Recognition. American Sociological Review 30: 699–714.

Faria, João R. 2003. What Type of Economist Are You: r-Strategist or K-Strategist? Journal of Economic Studies 30: 144–154.

Faria, João R. 2005. The Game Academics Play: Editors Versus Authors. Bulletin of Economic Research 57: 1–12.

Hamilton, David P. 1990. Publishing By–or For?–The Numbers. Science 250: 1331–1332.

Hamilton, David P. 1991. Research Papers: Who's Uncited Now? Science 251: 25.

Hodgson, Geoffrey M., and Harry Rothman. 1999. The Editors and Authors of Economics Journals: A Case of Institutional Oligopoly? Economic Journal 109 (453): 165–186.

Klamer, Arjo, and Hendrik P. van Dalen. 2002. Attention and the Art of Scientific Publishing. Journal of Economic Methodology 9: 289–315.

Klamer, Arjo, and Hendrik P. van Dalen. 2005. Is Science a Case of Wasteful Competition? Kyklos 58: 395–414.

Laband, David N., and M. J. Piette. 1994. The Relative Impacts of Economics Journals: 1970–1990. Journal of Economic Literature 32: 640–666.

Laband, David N., and J. P. Sophocleus. 1985. Revealed Preference for Economics Journals: Citations as Dollar Votes. Public Choice 46: 317–324.

Merton, Robert K. 1968. The Matthew Effect in Science. Science 159: 56–63.

Pendlebury, David. 1991. Gridlock in the Labs: Does the Country Really Need All Those Scientists? Newsweek 117, no. 2: 44.

Pendlebury, David. 1991. Letters to the Editor: Science, Citation, and Funding. Science 251: 1410–1411.

João Ricardo Faria