Quantitative metrics are poor choices for assessing the research output of an individual scholar. Summing impact factors, counting citations, tallying an h-index, or looking at Eigenfactor™ Scores (described below)—none of these methods are adequate compared with what should be the gold standard: reading the scholar's publications and talking to experts about her work. But many scholars, librarians, historians of science, editors, and other individuals are also interested in larger-scale questions that require assessing hundreds or thousands of scholarly articles by hundreds or thousands of authors. “Given that my library can afford only one more subscription, should I subscribe to journal x or journal y?” “How often do physicists cite Biology journals, and do biologists pay equal attention to the physics literature?” “Has the increase in size of my journal caused a corresponding decline in average quality?” To answer questions such as these, aggregate bibliometric statistics can be very useful.
For decades, citation counts and impact factor scores have been the primary currency for this sort of assessment. While these measures have the virtue of simplicity, they discard much of the useful information present in the full citation network. For example, citation counts and impact factors do not account for where citations come from: by these measures, citations from prestigious journals are worth no more than citations from lower-tier publications, and no attempt is made to adjust for differences in “citation culture” between journals and across fields. We have developed the Eigenfactor Metrics to address these concerns and to provide a more sophisticated way of looking at citation data. The idea behind these metrics is that we can use computational power to extract the wealth of information inherent in the structure of citation networks. The Eigenfactor algorithm (described in detail at http://www.eigenfactor.org/methods.htm) belongs to a class of network statistics known as eigenvector centrality measures. The approach is similar to the one Google uses to return search results. When ranking web pages, Google's PageRank algorithm takes into account not only how many hyperlinks a web page receives, but also where those hyperlinks come from. Our Eigenfactor algorithm does something similar, but instead of ranking websites, we rank journals, and instead of using hyperlinks, we use citations (Bergstrom, 2007).
One can view the Eigenfactor Score as the result of a random walk through the scientific literature. The algorithm corresponds to a simple model of research in which readers follow chains of citations as they move from journal to journal. Imagine that a researcher goes to the library and selects a journal article at random from a journal published in 2006. After reading the article, the researcher selects at random one of its citations. She then proceeds to the journal that was cited, selects a random 2006 article from that journal and, as before, selects a citation to direct her to her next journal volume. The researcher continues in this way ad infinitum. Because of the structure of the citation network, our model researcher will frequently visit large, important journals such as Nature or Proceedings of the National Academy of Sciences of the United States of America, and will seldom visit small journals in the lowest tiers of the publishing hierarchy. The frequency with which our model researcher visits each journal gives us a measure of that journal's importance within the network of academic citations; this frequency, expressed as a percentage, is essentially the Eigenfactor Score of the journal. In practice, we do not need to simulate this random walk to estimate the frequencies with which our model researcher visits each journal. Instead, we can compute the expected visitation frequencies directly from a matrix that records how often each journal cites each other journal.
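For readers who want to see the mechanics, the following Python sketch implements the random-walk calculation just described on an invented three-journal citation matrix. It is only an illustration of the idea: the production Eigenfactor algorithm includes refinements (for example, in the handling of journal self-citations) that are documented at http://www.eigenfactor.org/methods.htm.

```python
import numpy as np

# Hypothetical citation matrix for a three-journal network:
# C[i, j] = number of times articles in journal j cite journal i.
C = np.array([[ 0., 30., 10.],
              [20.,  0., 40.],
              [ 5., 10.,  0.]])

# Column-normalize so that P[i, j] is the probability that a reader of
# journal j follows a random citation to journal i.
P = C / C.sum(axis=0)

# Power iteration: repeatedly take one random-walk step. The vector v
# converges to the long-run visitation frequencies, i.e., the leading
# eigenvector of P.
v = np.full(3, 1.0 / 3.0)   # start the reader at a uniformly random journal
for _ in range(200):
    v = P @ v

# Expressed as percentages, these frequencies play the role of
# Eigenfactor Scores in this toy network.
print(np.round(100 * v, 2))
```

Because the column-normalized matrix is stochastic, repeated multiplication converges to its leading eigenvector, which is where the “eigen” in Eigenfactor comes from.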
We have applied the Eigenfactor algorithm to bibliometric data sets from several sources. At http://www.eigenfactor.org, we display the results of the Eigenfactor algorithm as applied to journal citation data from the Thomson Reuters Journal Citation Reports® (JCR). For each of the >7000 journals listed in the JCR, we compute two principal scores. The Eigenfactor Score is a measure of the journal's total importance to the scientific community; if a journal doubles in size while the quality of its articles remains constant, we would expect its Eigenfactor Score to double. The Article Influence™ Score is a measure of the average influence, per article, of the papers in a journal and, as such, is comparable to the impact factor. Article Influence Scores are normalized so that the mean article in the JCR database has an Article Influence Score of 1.00. Thus, if a journal has an Article Influence Score of 3.0, its articles are on average three times as influential as the average article in the JCR database. In the future, we will also make available Eigenfactor Metrics calculated from citation data provided by other commercial and noncommercial sources.
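To illustrate the normalization just described, here is a short sketch (with invented Eigenfactor Scores and article counts) that converts journal-level scores into per-article scores and rescales them so that the article-weighted mean is 1.00. The official Article Influence calculation may differ in detail from this simplified version.

```python
import numpy as np

eigenfactor = np.array([1.2, 0.3, 0.1])   # hypothetical Eigenfactor Scores
articles = np.array([3000, 500, 400])     # hypothetical article counts

# Influence per article, up to a constant factor.
per_article = eigenfactor / articles

# Normalize so the mean *article* (not the mean journal) scores 1.00:
# weight each journal's per-article score by its article count.
mean_article_score = np.average(per_article, weights=articles)
article_influence = per_article / mean_article_score

print(np.round(article_influence, 2))
```

Note that the averaging is weighted by article counts: a large journal with many mediocre articles pulls the baseline toward itself, so the benchmark is the typical article rather than the typical journal.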
Figure: Article Influence Scores and total articles published for the top 25 journals by Eigenfactor Score in the field of Neurosciences. Several prominent journals, including The Journal of Neuroscience, are labeled. The volume of each circle reflects the Eigenfactor Score of the corresponding journal. A dynamic version of this graph, online as an animated movie at http://www.eigenfactor.org/bubble/neuro/, shows changes in rankings and size over the years 1997–2006, allows users to highlight individual journals, and lets them explore other statistics along the x and y axes.
Journal ranking is one of many uses for citation data. In addition to working with the Eigenfactor metrics, we are using citation data to explore the structure of science and the way that this structure is changing. We have developed ways of mapping the terrain of scholarship; these maps are available at http://www.eigenfactor.org as well. Ultimately, a better understanding of the scholarly landscape may be useful not only for those who study the structure of science, but also for practicing scientists as they navigate through ever-increasing volumes of literature.
Footnotes
- Editor's Note: The misuse of the journal impact factor in hiring and promotion decisions is a growing concern. This article is one in a series of invited commentaries in which authors discuss this problem and consider alternative measures of an individual's impact.
- Correspondence should be addressed to Carl T. Bergstrom, Box 351800, Kincaid 448, Department of Biology, University of Washington, Seattle, WA 98115. cbergst@u.washington.edu