Universality of citation distributions: towards an objective measure of scientific impact

Filippo Radicchi, Complex Networks Lagrange Laboratory (CNLL), ISI Foundation, Torino, Italy
Santo Fortunato, Complex Networks Lagrange Laboratory (CNLL), ISI Foundation, Torino, Italy
Claudio Castellano, SMC, INFM-CNR, and Dipartimento di Fisica, “Sapienza” Università di Roma, Piazzale A. Moro 2, 00185 Roma, Italy (to whom correspondence should be addressed; e-mail: claudio.castellano@roma1.infn.it)
Abstract

We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited $c$ times has large variations between different disciplines, but all distributions collapse onto a universal curve when the relative indicator $c_f = c/c_0$ is considered, where $c_0$ is the average number of citations per article for the discipline. In addition we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of $c_f$ as an unbiased indicator of citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields.

I Introduction

Citation analysis is a bibliometric tool that is becoming increasingly popular for evaluating the performance of different actors in the academic and scientific arena, ranging from individual scholars hirsch05 (); egghe06 (); hirsch07 () to journals, departments, universities evidence07 () and national institutions kinney07 (), up to whole countries king04 (). The outcome of such analysis often plays a crucial role in deciding which grants are awarded, how applicants for a position are ranked, and even the fate of scientific institutions. It is then crucial that citation analysis be carried out in the most precise and unbiased way possible.

Citation analysis has a very long history and many potential problems have been identified brooks86 (); egghe90 (); adler08 (), the most critical being that a citation often does not – nor is it intended to – reflect the scientific merit of the cited work (in terms of quality or relevance). Additional sources of bias are, to mention just a few, self-citations, implicit citations, the increase of the total number of citations with time, and the correlation between the number of authors of an article and the number of citations it receives bornmann08 ().

In this work we consider one of the most relevant factors that may hamper a fair evaluation of scientific performance: field variation. Publications in certain disciplines are typically cited much more or much less than in others. This may happen for several reasons, including uneven number of cited papers per article in different fields or unbalanced cross-discipline citations althouse08 (). A paradigmatic example is provided by mathematics: the highest 2006 impact factor (IF) garfield79 () for journals in this category (Journal of the American Mathematical Society) is 2.55, whereas this figure is ten times larger or even more in other disciplines (for example, New England Journal of Medicine has 2006 IF 51.30, Cell has IF 29.19, Nature and Science have IF 26.68 and 30.03, respectively).

The existence of this bias is well known garfield79 (); egghe90 (); bornmann08 () and it is widely recognized that comparing bare citation numbers is inappropriate. Many methods have been proposed to alleviate this problem schubert86 (); schubert96 (); vinkler96 (); vinkler03 (); iglesias07 (). They are based on the general idea of normalizing citation numbers with respect to some properly chosen reference standard. The choice of a suitable reference standard, which can be a journal, all journals in a discipline, or a more complicated set schubert96 (), is a delicate issue zitt05 (). Many possibilities exist also in the detailed implementation of the standardization procedure. Some methods are based on ranking articles (scientists, research groups) within one field and comparing relative positions across disciplines. In many other cases relative indicators are defined, i.e. ratios between the bare number of citations and some average measure of the citation frequency in the reference standard. A simple example is the Relative Citation Rate of a group of articles schubert86 (), defined as the total number of citations they received divided by the weighted sum of the impact factors of the journals where the articles were published.

The use of relative indicators is widespread, but empirical studies naranan71 (); seglen92 (); redner98 () have shown that distributions of article citations are very skewed, even within single disciplines. One may then wonder whether it is appropriate to normalize by the average citation number, which gives only a very limited characterization of the whole distribution. We address this issue in this article.

The problem of field variation affects the evaluation of performance at many possible levels of detail: publications, individual scientists, research groups, institutions. Here we consider the simplest possible level, the evaluation of the citation performance of single publications. When considering individuals or research groups, additional sources of bias (and of arbitrariness) exist, which we do not tackle here. As reference standard for an article, we consider the set of all papers published in journals that are classified in the same Journal of Citation Reports scientific category as the journal where the publication appears (see details in Sec. VI). We take as normalizing quantity for citations of articles belonging to a given scientific field the average number $c_0$ of citations received by all articles in that discipline published in the same year. We perform an empirical analysis of the distribution of citations for publications in various disciplines and we show that the large variability in the number $c$ of bare citations is fully accounted for when the relative indicator $c_f = c/c_0$ is considered. The distribution of this relative performance index is the same for all fields: no matter whether, for instance, Developmental Biology, Nuclear Physics or Aerospace Engineering is considered, the chance of having a particular value of $c_f$ is the same. Moreover, we show that $c_f$ allows one to properly take into account the differences, within a single discipline, between articles published in different years. This provides a strong validation of the use of $c_f$ as an unbiased relative indicator of scientific impact for comparisons across fields and years.

II Variability of citation statistics in different disciplines

First of all we show explicitly that the distribution of the number of articles published in a given year and cited a certain number of times strongly depends on the discipline considered. In Fig. 1 we plot the normalized distributions $P(c)$ of citations to articles that appeared in 1999 in all journals belonging to several different disciplines, according to the Journal of Citation Reports classification.

Figure 1: Normalized histogram $P(c)$ of the number of articles published in 1999 and having received $c$ citations. We plot $P(c)$ for several scientific disciplines with different average numbers $c_0$ of citations per article.

From this figure it is apparent that the chance for a publication to be cited a given number of times strongly depends on the category the article belongs to. For example, a publication with 100 citations is approximately 50 times more common in “Developmental Biology” than in “Engineering, Aerospace”. This has obvious implications for the evaluation of outstanding scientific achievements: the simple count of citations is patently misleading for assessing whether an article in Developmental Biology is more successful than one in Aerospace Engineering.

III Distribution of the relative indicator $c_f$

A first step toward properly taking into account field variations is to recognize that the differences in the bare citation distributions are essentially not due to specific discipline-dependent factors, but are instead related to the general pattern of citations in the field, as measured by the average number $c_0$ of citations per article. It is natural then to try to factor out the bias induced by the difference in the value of $c_0$ by considering a relative indicator, i.e. measuring the success of a publication by the ratio $c_f = c/c_0$ between the number $c$ of citations received and the average number $c_0$ of citations received by articles published in the same field and year. Fig. 2 shows that this procedure leads to a very good collapse of all curves for different values of $c_0$ onto a single shape. The distribution of the relative indicator $c_f$ thus appears universal across all categories considered and resembles a lognormal distribution.
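To make the definition concrete, the following is a minimal sketch of how the relative indicator could be computed from raw citation counts grouped by discipline; the field names and counts below are purely illustrative placeholders, not data from our analysis.

```python
import numpy as np

# Illustrative (not real) citation counts for articles published in one year,
# grouped by subject category.
citations = {
    "Developmental Biology": np.array([0, 3, 12, 45, 100, 250]),
    "Engineering, Aerospace": np.array([0, 1, 2, 5, 11, 20]),
}

def relative_indicator(counts):
    """Return c_f = c / c_0, where c_0 is the field's average number of citations per article."""
    c0 = counts.mean()
    return counts / c0

for field, c in citations.items():
    print(field, "c_0 =", round(c.mean(), 2), "c_f =", np.round(relative_indicator(c), 2))
```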

Figure 2: Rescaled probability distribution of the relative indicator $c_f = c/c_0$, showing that the universal scaling holds for all scientific disciplines considered (see Table 1). The dashed line is a lognormal fit with $\sigma^2 = 1.3$.

In order to make these observations more quantitative, we have fitted each curve in Fig. 2 with the lognormal form

$$ P(c_f) = \frac{1}{c_f\sqrt{2\pi\sigma^2}}\,\exp\!\left[-\frac{(\ln c_f - \mu)^2}{2\sigma^2}\right] \qquad (1) $$

where the relation $\mu = -\sigma^2/2$, which follows from imposing that the expected value of the variable $c_f$ is 1 (the mean of a lognormal is $e^{\mu + \sigma^2/2}$), reduces the number of fitting parameters to one. All fitted values of $\sigma^2$, reported in Table 1, are compatible within two standard deviations, except for one (Anesthesiology), which is in any case within three standard deviations of all the others. Values of $\chi^2$ per degree of freedom, also reported in Table 1, indicate that the fit is good.

Index   Subject Category   Year   $N$   $c_0$   $c_{max}$   $\sigma^2$   $\chi^2$/dof
1 Agricultural Economics & Policy 1999 266 6.88 42 1.0(1) 0.007
2 Allergy 1999 1530 17.39 271 1.4(2) 0.012
3 Anesthesiology 1999 3472 13.25 282 1.8(2) 0.009
4 Astronomy & Astrophysics 1999 7399 23.77 1028 1.1(1) 0.003
5 Biology 1999 3400 14.6 413 1.3(1) 0.004
6 Computer Science, Cybernetics 1999 704 8.49 100 1.3(1) 0.004
7 Developmental Biology 1999 2982 38.67 520 1.3(3) 0.002
8 Engineering, Aerospace 1999 1070 5.65 95 1.4(1) 0.003
9 Hematology 1990 4423 41.05 1424 1.5(1) 0.002
10 Hematology 1999 6920 30.61 966 1.3(1) 0.004
11 Hematology 2004 8695 15.66 1014 1.3(1) 0.003
12 Mathematics 1999 8440 5.97 191 1.3(4) 0.001
13 Microbiology 1999 9761 21.54 803 1.0(1) 0.005
14 Neuroimaging 1990 444 25.26 518 1.1(1) 0.004
15 Neuroimaging 1999 1073 23.16 463 1.4(1) 0.003
16 Neuroimaging 2004 1395 12.68 132 1.1(1) 0.005
17 Physics, Nuclear 1990 3670 13.75 387 1.4(1) 0.001
18 Physics, Nuclear 1999 3965 10.92 434 1.4(4) 0.001
19 Physics, Nuclear 2004 4164 6.94 218 1.4(1) 0.001
20 Tropical Medicine 1999 1038 12.35 126 1.1(1) 0.017


Table 1: List of all scientific disciplines considered in this article. For each category we report the total number of articles $N$, the average number of citations $c_0$, the maximum number of citations $c_{max}$, the value of the fitting parameter $\sigma^2$ in Eq. (1) and the corresponding $\chi^2$ per degree of freedom. Data refer to articles published in journals listed by the Journal of Citation Reports under a specific subject category.
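As an illustration of the fitting procedure, the sketch below shows how a one-parameter fit of Eq. (1) could be performed with SciPy; the binned values of $c_f$ and $P(c_f)$ used here are synthetic placeholders, not the actual data behind Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_unit_mean(cf, sigma2):
    """Lognormal density of Eq. (1) with mu = -sigma2/2, so that the mean of c_f is 1."""
    mu = -sigma2 / 2.0
    return np.exp(-(np.log(cf) - mu) ** 2 / (2.0 * sigma2)) / (cf * np.sqrt(2.0 * np.pi * sigma2))

# Synthetic placeholder data: bin centers of c_f and the measured P(c_f) of one discipline.
cf_bins = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
p_bins = lognormal_unit_mean(cf_bins, 1.3)  # in practice, the binned histogram of the discipline

(sigma2_fit,), pcov = curve_fit(lognormal_unit_mean, cf_bins, p_bins, p0=[1.0])
print("fitted sigma^2 =", sigma2_fit, "+/-", np.sqrt(pcov[0, 0]))
```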

This allows us to conclude that, by rescaling the distribution of citations for publications in a scientific discipline by their average number $c_0$, a universal curve is found, independent of the specific discipline. Fitting a single curve to all categories, a lognormal distribution with $\sigma^2 = 1.3$ is found, which is reported in Fig. 2.

Interestingly, a similar universality of the distribution of relative performance is found, in a totally different context, when the number of votes received by candidates in proportional elections is considered fortunato07 (). In that case, the scaling curve is also well fitted by a lognormal distribution. For universality in the dynamics of academic research activities see also plerou99 ().

The universal scaling obtained provides a solid grounding for comparisons between articles in different fields. To make this even more visually evident, we have ranked all articles belonging to a pool of different disciplines (spanning broad areas of science) according either to $c$ or to $c_f$. We have then computed the percentage of publications of each discipline that appear in the top $z\%$ of the global rank. If the ranking is fair, the percentage for each discipline should be around $z\%$, with small fluctuations. Fig. 3 clearly shows that, when articles are ranked according to the unnormalized number of citations $c$, there are wide variations among disciplines. Such variations are dramatically reduced instead when the relative indicator $c_f$ is used. This occurs for various choices of the percentage $z$. More quantitatively, assuming that articles of the various disciplines are scattered uniformly along the rank axis, one would expect the average bin height in Fig. 3 to be $z$ with a standard deviation

$$ \sigma = \sqrt{\frac{z(100-z)}{N}\sum_{i=1}^{N}\frac{1}{n_i}} \qquad (2) $$

where $N$ is the number of categories and $n_i$ the number of articles in the $i$-th category. When the ranking is performed according to $c_f$ we find (Table 2) very good agreement with the hypothesis that the ranking is unbiased, while strong evidence of bias is found when $c$ is used. For example, for $z = 5\%$, $\sigma = 4.73$ for the $c$-based ranking, while $\sigma = 0.51$ if $c_f$ is used, as opposed to the value $\sigma = 0.59$ expected for an unbiased ranking.

$z$ (expected mean)   $\sigma$ (theory)   mean ($c$)   $\sigma$ ($c$)   mean ($c_f$)   $\sigma$ ($c_f$)
5 0.59 4.38 4.73 5.14 0.51
10 0.81 8.69 7.92 10.07 0.67
20 1.09 17.68 12.37 20.03 1.15
40 1.33 35.67 17.48 39.86 2.58


Table 2: Average and standard deviation of the bin heights in Fig. 3. Comparison between the values expected theoretically for an unbiased ranking (first two columns; the expected average equals $z$) and those obtained empirically when articles are ranked according to $c$ (third and fourth columns) and according to $c_f$ (last two columns).
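The fairness test of Fig. 3 and Table 2 can be summarized in a few lines of code. The sketch below uses hypothetical disciplines with synthetic scores, and it assumes the reconstruction of Eq. (2) given above (binomial fluctuations averaged over categories); it is meant only to illustrate the procedure, not to reproduce the actual numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical disciplines with synthetic scores (either c or c_f) and different sizes.
fields = {
    "A": rng.lognormal(0.0, 1.0, 3000),
    "B": rng.lognormal(0.0, 1.0, 700),
    "C": rng.lognormal(0.0, 1.0, 1500),
}

def top_z_percentages(fields, z):
    """Percentage of each field's articles that fall in the global top z% of scores."""
    scores = np.concatenate(list(fields.values()))
    labels = np.concatenate([[name] * len(v) for name, v in fields.items()])
    cutoff = np.quantile(scores, 1.0 - z / 100.0)
    top = labels[scores >= cutoff]
    return {name: 100.0 * np.sum(top == name) / len(v) for name, v in fields.items()}

def expected_sigma(fields, z):
    """Spread of bin heights expected for an unbiased ranking, as in Eq. (2)."""
    n = np.array([len(v) for v in fields.values()])
    return np.sqrt(z * (100.0 - z) / len(n) * np.sum(1.0 / n))

z = 5
heights = top_z_percentages(fields, z)
print(heights, "expected mean:", z, "expected sigma:", round(expected_sigma(fields, z), 2))
```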

Figures 2 and 3 allow us to conclude that $c_f$ is an unbiased indicator for comparing the scientific impact of publications in different disciplines.

Figure 3: We rank all articles according to the bare number of citations $c$ and to the relative indicator $c_f$. We then plot the percentage of articles of a particular discipline present in the top $z\%$ of the general ranking, for the rank based on the number of citations (histograms on the left in each panel) and for the rank based on the relative indicator (histograms on the right). Different values of $z$ (different panels) lead to very similar patterns. The averages and standard deviations of the bin heights shown are reported in Table 2. The numbers identify the disciplines as indicated in Table 1.

For the normalization of the relative indicator $c_f$, we have considered the average number of citations per article published in the same year and in the same field. This is a very natural choice, giving the numerical value of $c_f$ a direct interpretation as the relative citation performance of the publication. In the literature this quantity is also called the “item oriented field normalized citation score” lundberg07 (), an analogue for a single publication of the popular CWTS (Centre for Science and Technology Studies, Leiden) field normalized citation score or “crown indicator” moed95 (). In agreement with the findings of Ref. althouse08 (), $c_0$ shows very little correlation with the overall size of the field, as measured by the total number of articles.

The previous analysis compares distributions of citations to articles published in a single year, 1999. It is known that different temporal patterns of citations exist: some articles start to receive citations immediately, while others (“sleeping beauties”) go unnoticed for a long time, after which they are recognized as seminal and begin to attract a large number of citations vanraan04 (); redner05 (). Other differences exist between disciplines, with noticeable fluctuations in the cited half-life indicator across fields. It is then natural to wonder whether the universality of distributions for articles published in the same year extends longitudinally in time, so that the relative indicator $c_f$ allows comparison of articles published in different years. For this reason, in Fig. 4 we compare the plot of $P(c_f)$ vs $c_f$ for publications in the same scientific discipline that appeared in three different years. The value of $c_0$ obviously grows as older publications are considered, but the rescaled distribution remains conspicuously the same.

Figure 4: Rescaled probability distribution of the relative indicator $c_f$ for three disciplines (“Hematology”, “Neuroimaging”, and “Physics, Nuclear”) for articles published in different years (1990, 1999 and 2004). In spite of the natural variation of $c_0$ ($c_0$ grows with the elapsed time), the universal scaling observed across disciplines also holds for articles published in different periods of time. The dashed line is a lognormal fit with $\sigma^2 = 1.3$.

IV A generalized h-index

Since its introduction in 2005, the h-index hirsch05 () has enjoyed a spectacularly quick success ball05 (): it is now a well-established standard tool for the evaluation of the scientific performance of scientists. Its popularity is partly due to its simplicity: the h-index of an author is $h$ if $h$ of his articles have at least $h$ citations each, and the other articles have at most $h$ citations each. Despite its success, like all other performance metrics the h-index has some shortcomings, as already pointed out by Hirsch himself. One of them is the difficulty of comparing authors in different disciplines.
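For reference, a minimal sketch of the standard Hirsch index computation (generic code, not tied to any particular data source):

```python
def h_index(citations):
    """Standard Hirsch index: the largest h such that h of the author's papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```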

Figure 5: Inset: distributions of the number of articles published by an author during 1999 in several disciplines. Main: the same distributions rescaled by the average number of publications per author in 1999 in the different disciplines. The dashed line is a power-law fit.

The identification of the relative indicator $c_f$ as the correct metric for comparing articles in different disciplines naturally suggests its use in a generalized version of the h-index that properly takes into account different citation patterns across disciplines. However, just ranking articles according to $c_f$, instead of on the basis of the bare citation number $c$, is not enough. A crucial ingredient of the h-index is the number of articles published by an author. As Fig. 5 shows, such a quantity also depends on the discipline considered: in some disciplines the average number of articles published by an author in a year is much larger than in others. But also in this case the variability is rescaled away if the number of publications by an author in a year is divided by its average value in the discipline. Interestingly, the universal curve is fitted reasonably well over almost two decades by a power law.

This universality allows one to define a generalized h-index, $h_f$, that also factors out the additional bias due to different publication rates, thus allowing comparisons among scientists working in different fields. To compute the $h_f$ index of an author, his/her articles are ordered according to $c_f$, and each value of $c_f$ is compared with the corresponding reduced rank, i.e. the rank of the article divided by the average number of articles published per author in one year in the discipline. In analogy with the original definition by Hirsch, the generalized index $h_f$ is then given by the largest value of the reduced rank for which the corresponding $c_f$ is still larger than the reduced rank itself. For instance, if an author has published 6 articles with values of $c_f$ equal to 4.1, 2.8, 2.2, 1.6, 0.8 and 0.4, and the average number of articles per author in his discipline is 2.0, his $h_f$-index is equal to 1.5: the third best article has $c_f = 2.2$, larger than its reduced rank $3/2.0 = 1.5$, while the fourth has $c_f = 1.6$, smaller than its reduced rank $4/2.0 = 2.0$. We plan to present the results of the application of this generalized index to practical cases in a forthcoming publication.
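A sketch of this computation, reproducing the worked example above; the argument name used for the discipline's average publication rate is ours, introduced only for illustration.

```python
def generalized_h_index(cf_values, avg_papers_per_year):
    """Generalized index h_f: the largest reduced rank r / a_0 such that the
    r-th best article (ranked by c_f) still has c_f greater than r / a_0.
    'avg_papers_per_year' (a_0) is the discipline's average number of articles
    published per author in one year."""
    ranked = sorted(cf_values, reverse=True)
    hf = 0.0
    for r, cf in enumerate(ranked, start=1):
        reduced_rank = r / avg_papers_per_year
        if cf > reduced_rank:
            hf = reduced_rank
        else:
            break
    return hf

# Worked example from the text: six articles, discipline average of 2.0 papers per author per year.
print(generalized_h_index([4.1, 2.8, 2.2, 1.6, 0.8, 0.4], 2.0))  # -> 1.5
```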

V Conclusions

In this article we have presented strong empirical evidence that the widely scattered distributions of citations for publications in different scientific disciplines collapse onto the same universal curve when the relative indicator $c_f = c/c_0$ is used. We have also seen that the universal curve is remarkably stable over the years. The analysis presented here justifies the use of relative indicators to compare in a fair manner the impact of articles across different disciplines and years. This may have strong and unexpected implications. For instance, Figure 2 leads to the counterintuitive conclusion that an article in Aerospace Engineering with only 20 citations ($c_f \approx 3.5$) is more successful than an article in Developmental Biology with 100 citations ($c_f \approx 2.6$). We stress that this does not imply that the article with larger $c_f$ is necessarily more “important” than the other. In an evaluation of importance, other field-related factors may play a role: an article with an outstanding value of $c_f$ in a very narrow, specialized field may be less “important” (for science in general or for society) than a publication with a smaller $c_f$ in a highly competitive discipline with potential implications in many areas.
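This comparison can be reproduced directly from the $c_0$ values in Table 1 (shown here only for concreteness):

```python
# c_0 values taken from Table 1 (articles published in 1999).
c0 = {"Engineering, Aerospace": 5.65, "Developmental Biology": 38.67}

print("Aerospace, 20 citations:     c_f =", round(20 / c0["Engineering, Aerospace"], 2))   # ~3.54
print("Dev. Biology, 100 citations: c_f =", round(100 / c0["Developmental Biology"], 2))   # ~2.59
```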

Since we consider single publications, the smallest possible entities whose scientific impact can be measured, our results should always be taken into account when tackling other, more complicated tasks, like the evaluation of the performance of individuals or research groups. For example, in situations where the mean number of citations per publication is deemed to be important, one should compute the average of $c_f$ (and not of $c$) to evaluate impact independently of the scientific discipline. As far as the assessment of single authors' performance is concerned, we have defined a generalized h-index hirsch05 () that allows a fair comparison across disciplines, taking into account also the different publication rates.

Our analysis deals with two of the main sources of bias affecting comparisons of publication citations. It would be interesting to tackle, along the same lines, other potential sources of bias, such as the number of authors, which is known to correlate with the number of citations bornmann08 (). It is natural to define a relative indicator in this case too, the number of citations per author. Is this normalization the correct one, leading to a universal distribution for any number of authors?

Finally, from a more theoretical point of view, an interesting goal for future work is to understand the origin of the universality found here and how its precise functional form comes about. An attempt to investigate which mechanisms are relevant for understanding citation distributions is in Ref. vanraan01 (). Further activity in this direction would definitely be interesting.

VI Methods

Our empirical analysis is based on data from Thomson Scientific’s Web of Science (WOS, www.isiknowledge.com) database, where the number of citations is counted as the total number of times an article appears as a reference of a more recently published article. Scientific journals are divided into 172 categories, from “Acoustics” to “Zoology”. Within a single category a list of journals is provided. We consider articles published in each of these journals to be part of the category. Notice that the division into categories is not mutually exclusive: for example, Physical Review D belongs both to the “Astronomy & Astrophysics” and to the “Physics, particles & fields” categories. For consistency, among all records contained in the database we consider only those classified as “article” and “letter”, thus excluding reviews, editorials, comments and other published material likely to have an uncommon citation pattern. A list of the categories considered, with the relevant parameters that characterize them, is reported in Table 1. The category “Multidisciplinary sciences” does not fit perfectly into the universal picture found for the other categories, because its distribution of the number of citations is a convolution of the distributions corresponding to the single disciplines represented in the journals. However, if one restricts the analysis to the three most important multidisciplinary journals (Nature, Science, Proc. Natl. Acad. Sci. USA), also this category fits very well into the global universal picture.

Our calculations neglect uncited articles; we have verified, however, that their inclusion just produces a small shift in $c_0$, which does not affect the results of our analysis. In the plots of the citation distributions, data have been grouped in bins of exponentially growing size, so that they are equally spaced along a logarithmic axis. For each bin, we count the number of articles with citation count within the bin and divide by the number of all potential values of the citation count that fall in the bin (i.e. all integers). The same holds for the distribution of the normalized citation count $c_f$: since the latter is obtained by dividing the citation count by the constant $c_0$, it is a discrete variable just like the original citation count. The resulting ratios obtained for each bin are finally divided by the total number of articles considered, so that the histograms are normalized to 1.
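A sketch of this binning procedure, with synthetic citation counts as placeholder input (the exact bin boundaries used for the figures in this paper may differ):

```python
import numpy as np

def log_binned_distribution(citations, n_bins=20):
    """Histogram of citation counts over exponentially growing bins.
    Each bin count is divided by the number of integer citation values the bin
    contains and by the total number of articles, so the histogram is normalized."""
    citations = np.asarray(citations)
    edges = np.unique(np.round(np.logspace(0, np.log10(citations.max() + 1), n_bins)).astype(int))
    centers, density = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        n_integers = hi - lo                               # integer values lo, lo+1, ..., hi-1
        in_bin = np.sum((citations >= lo) & (citations < hi))
        centers.append(np.sqrt(lo * (hi - 1)))             # geometric center of the bin
        density.append(in_bin / n_integers / len(citations))
    return np.array(centers), np.array(density)

# Synthetic placeholder citation counts (uncited articles excluded, as in the text).
rng = np.random.default_rng(1)
counts = rng.negative_binomial(1, 0.05, size=5000) + 1
x, p = log_binned_distribution(counts)
print(x[:5], p[:5])
```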

References

  • (1) Hirsch JE (2005) An index to quantify an individual’s scientific research output. Proc Nat Acad Sci USA 102:16569-16572.
  • (2) Egghe L (2006) Theory and practise of the g-index. Scientometrics 69:131-152.
  • (3) Hirsch JE (2007) Does the h index have predictive power? Proc. Nat. Acad. Sci. USA 104:19193-19198.
  • (4) Evidence Report (2007) http://bookshop.universitiesuk.ac.uk/downloads/bibliometrics.pdf
  • (5) Kinney AL (2007) National scientific facilities and their science impact on nonbiomedical research. Proc Nat Acad Sci USA 104: 17943-17947.
  • (6) King DA (2004) The scientific impact of nations. Nature 430:311-316.
  • (7) Brooks TA (1986) Evidence of complex citer motivations. J Am Soc Inf Sci 37:34-36.
  • (8) Egghe L, Rousseau R (1990) in Introduction to Informetrics: quantitative methods in library, documentation and information science (Elsevier, Amsterdam).
  • (9) Adler R, Ewing J, Taylor P (2008) Citation Statistics. IMU Report http://www.mathunion.org/Publications/Report/CitationStatistics
  • (10) Bornmann L, Daniel H-D (2008) What do citation counts measure? A review of studies on citing behavior. J Docum 64:45-80.
  • (11) Althouse BM, West JD, Bergstrom T, Bergstrom CT (2008) Differences in Impact Factor Across Fields and Over Time. arXiv:0804.3116.
  • (12) Garfield E (1979) in Citation Indexing. Its Theory and Applications in Science, Technology, and Humanities (Wiley, New York).
  • (13) Schubert A, Braun T (1986) Relative indicators and relational charts for comparative-assessment of publication output and citation impact. Scientometrics 9:281-291.
  • (14) Schubert A, Braun T (1996) Cross-field normalization of scientometric indicators. Scientometrics 36:311-324.
  • (15) Vinkler P (1996) Model for quantitative selection of relative scientometric impact indicators. Scientometrics 36:223-236.
  • (16) Vinkler P (2003) Relations of relative scientometric indicators. Scientometrics 58:687-694.
  • (17) Iglesias JE, Pecharroman C (2007) Scaling the h-index for different scientific ISI fields. Scientometrics 73:303-320.
  • (18) Zitt M, Ramanana-Rahary S, Bassecoulard E (2005) Relativity of citation performance and excellence measures: From cross-field to cross-scale effects of field-normalisation. Scientometrics 63:373-401.
  • (19) Redner S (1998) How popular is your paper? Eur Phys J B 4:131-134.
  • (20) Naranan S (1971) Power law relations in science bibliography: a self-consistent interpretation. J Docum 27:83-97.
  • (21) Seglen PO (1992) The skewness of science. J Am Soc Inf Sci 43:628-638.
  • (22) Fortunato S, Castellano C (2007) Scaling and Universality in Proportional Elections. Phys Rev Lett 99:138701.
  • (23) Plerou V, Nunes Amaral LA, Gopikrishnan P, Meyer M, Stanley HE (1999) Similarities between the growth dynamics of university research and of competitive economic activities. Nature 400:433-437.
  • (24) Lundberg J (2007) Lifting the crown—citation z-score. J Informetrics 1:145-154.
  • (25) Moed HF, Debruin RE, Vanleeuwen TN (1995) New bibliometric tools for the assessment of national research performance - database description, overview of indicators and first applications. Scientometrics 33:381-422.
  • (26) Van Raan AF (2004) Sleeping Beauties in Science. Scientometrics 59:461-466.
  • (27) Redner S (2005) Citation statistics from 110 years of Physical Review. Phys Today 58:49-54.
  • (28) Ball P (2005) Index aims for fair ranking of scientists. Nature 436:900.
  • (29) Van Raan AF (2001) Competition amongst scientists for publication status: toward a model of scientific publication and citation distributions. Scientometrics 51:347-357.