Comparison of a citation-based indicator and peer review for absolute and specific measures of research-group excellence

O. Mryglod · Yu. Holovatch: Institute for Condensed Matter Physics of the National Academy of Sciences of Ukraine, 1 Svientsitskii Str., 79011 Lviv, Ukraine. E-mail: olesya@icmp.lviv.ua
R. Kenna: Applied Mathematics Research Centre, Coventry University, Coventry, CV1 5FB, England
B. Berche: Université de Lorraine, Campus de Nancy, B.P. 70239, 54506 Vandœuvre lès Nancy Cedex, France
Received: date / Accepted: date
Abstract

Many different measures are used to assess academic research excellence and these are subject to ongoing discussion and debate within the scientometric, university-management and policy-making communities internationally. One topic of continued importance is the extent to which citation-based indicators compare with peer-review-based evaluation. Here we analyse the correlations between values of a particular citation-based impact indicator and peer-review scores in several academic disciplines, from natural to social sciences and humanities. We perform the comparison for research groups rather than for individuals. We make comparisons on two levels. At an absolute level, we compare total impact and overall strength of the group as a whole. At a specific level, we compare academic impact and quality, normalised by the size of the group. We find very high correlations at the former level for some disciplines and poor correlations at the latter level for all disciplines. This means that, although the citation-based scores could help to describe research-group strength, in particular for the so-called hard sciences, they should not be used as a proxy for ranking or comparison of research groups. Moreover, the correlation between peer-evaluated and citation-based scores is weaker for soft sciences.

Keywords:
peer review · citations · Research Assessment Exercise (RAE) · Research Excellence Framework (REF)
journal: Scientometrics

Introduction

Although it is not without critics, peer review is mostly considered, amongst the broad academic community, to be the most reliable approach to assess the quality of academic research 2005_Raan (); Derrick (). However, because it is expensive, time-consuming and dependent on circumstances (the so-called Hawthorne effect, see Ref. 2012_Bornmann ()), it is tempting to seek other approaches, and citation-based indicators offer an obvious alternative nature (); 2003_Warner (). Numerous scientometric indicators based on citation counts, publication numbers and other aspects of research output have been proposed over the past half-century (see, e.g., Garfield_1955 (); Garfield_1973 (); Hirsch_2005 (); Egghe_2006 ()). The real challenge is to devise a simple but reliable way to assess individual or collective scientific performance. Sophisticated normalisation procedures and a variety of approaches have been designed to overcome the well-known pitfalls of citation counts Vinkler2001 (); Vinkler2003 (); 2005_Moed (), but the problem remains one of current importance. For over half a century, scientists and research managers have discussed the merits and drawbacks of each approach. For practising academics, the accuracy and reliability of peer review broadly wins out (see, e.g., Refs. Derrick (); Donovan (); Bornmann () and references therein). University managers, policy makers and the media, however, are attracted to the simplicity and economy of citation-based methodologies. Each approach is beset by ambiguities and problems, and it is frequently argued that a combination may be needed to minimise the shortcomings of each. To achieve this, the technical and methodological limitations need to be clear 2005_Raan (). Here we address the question of whether a set of automated, scientometric or bibliometric indicators is a suitable substitute for, or component of, peer review at the level of the research group or department.

The importance of evaluating research quality at institutional level is exemplified by the growing number of reports produced by private companies and governmental bodies documenting the research performance of Higher Education Institutions nationally and worldwide (e.g., 2010_Nature (); 2012_Melbourne_report (); 2012_Evidence ()). The Research Assessment Exercise (RAE) and the Research Excellence Framework are examples of such processes at the national level in the UK, and the Shanghai Academic Ranking is a famous example on an international scale Shanhai (). The Shanghai Ranking, in particular, is widely known but heavily criticised by the scientometric community Florian (); Billaut (); Ioannidis (). Despite the well-known weaknesses of different ranking systems, such assessments are of increasing importance in many developed countries, which seek to organise national evaluations of research. Many aspects of the UK's RAE, in particular, have been imitated in other countries 2010_2_Nature ().

In a recent paper 2012_Scientometrics () we compared a citation-based indicator developed by Thomson Reuters Research Analytics (previously known as Evidence) Evidence_web () with the peer-review-based RAE, which was conducted in the UK in 2008. Thomson Reuters is one of the world's leading providers of scientometric information and performance measures for academic and research institutions, governments, not-for-profit organisations, funding agencies, and others with a stake in research. For example, Thomson Reuters' Web of Knowledge (formerly of the Institute for Scientific Information) is an important platform for information on citations in the sciences, social sciences, arts, and humanities. Using biology research institutions as a test case, we examined the correlations between results from both approaches at an amalgamated, research-group or departmental level. We made the comparison at two levels, which we termed "absolute" and "specific". "Absolute" measures refer to the totality of group strength: the research performance of the group as a whole. "Specific" quantities are normalised per head: the average strength per group member. In this sense, "absolute strength" is the "volume of quality". E.g., the absolute citation count for a department in a given period is the total number of citations to the department's work, irrespective of how many researchers that department contains. The corresponding specific citation count is then the average number of citations per head (see, for example, Vinkler2001 (); Vinkler2003 ()).

Thus, the estimates of research "quality" and research "strength" introduced in 2010_Ralph (); 2011_Ralph () are specific and absolute notions, respectively. We showed that the citation-based specific measure provided by Thomson Reuters Research Analytics is not a good proxy for the peer-review specific measure provided by the RAE, in that these two measures are rather poorly correlated. However, when scaled up to the actual size N of a department (here and below, N denotes the number of researchers in the group), the absolute citation impact is very strongly correlated with the overall strength as measured by peer review. This is important because funding in the UK is determined on the basis of strength rather than quality.

Another important feature of our previous analyses was that they focused on the research quality and strength of groups rather than individuals 2012_Scientometrics (); 2010_Ralph (); 2011_Ralph (). In particular, the notion of two characteristic group sizes or "critical masses" was introduced in Refs. 2010_Ralph (); 2011_Ralph (). According to this concept, research performance is strongly dependent on group size up to a so-called upper critical mass N_c. Groups larger than N_c have either a reduced dependency of quality on quantity or no such dependency. A lower critical mass N_k was also introduced in Refs. 2010_Ralph (); 2011_Ralph () and interpreted as the minimum size a research department should achieve to be stable in the long term. These two critical masses, the values of which are strongly dependent on the research discipline, allow research groups and departments to be categorised as small if N < N_k, medium if N_k ≤ N ≤ N_c, or large if N > N_c. Estimates of the critical masses for the biological sciences analysed in the pilot study of Ref. 2012_Scientometrics () are given in Refs. 2010_Ralph (); 2011_Ralph (). (Fractional group sizes are a feature of the RAE in that Higher Education Institutes can include part-time researchers in their submissions, counted as a proportion of full-time equivalence RAE_web ().) However, since small and medium research groups have the same linear dependency of quality on quantity 2010_Ralph (), it is sensible to combine them in the correlation analysis. The strongest correlations between citation- and peer-review-based measures of institutional strength for the biological sciences were observed for the large groups.
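The categorisation by critical masses described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function name is ours, and the threshold values in the example are placeholders rather than the discipline-specific estimates of Refs. 2010_Ralph (); 2011_Ralph ().

```python
def classify_group(size, lower_critical_mass, upper_critical_mass):
    """Categorise a research group by the two critical masses:
    small below the lower critical mass, large above the upper one,
    and medium in between.  Sizes may be fractional, since part-time
    researchers count as fractions of full-time equivalence."""
    if size < lower_critical_mass:
        return "small"
    if size <= upper_critical_mass:
        return "medium"
    return "large"

# Placeholder thresholds; the real values depend on the discipline.
print(classify_group(6.5, 10, 25))   # -> small
print(classify_group(40.0, 10, 25))  # -> large
```

Since small and medium groups share the same linear quality-on-quantity dependency, the two lower categories are merged in the correlation analysis that follows.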

The implication of our previous analysis, therefore, is that citations, if used in an informed manner, could possibly be used as a proxy for departmental or group strength (and thus feed into funding requirements), provided that the departments are large. For smaller departments, however, peer review remains essential to determine strength. Moreover, citation-based indicators should not be used in isolation to estimate research quality for large, medium or for small groups.

It is natural to ask to what extent these conclusions cover other disciplines. Is there a difference between so-called hard and soft sciences, or between the natural and social sciences and humanities? One might expect to observe differences due to different citation behaviour in different disciplines 2005_Moed (); 2012_Stauffer () and due to technical restrictions such as smaller coverage of some fields by the Web of Knowledge. E.g., in the humanities, dissemination of original research through books is more common than in the natural sciences, and these are usually ignored in citation counting. These are the questions we address in this paper. We present quantitative results from comparisons of peer review and citation-based indicators for several disciplines ranging from hard sciences to humanities. In particular, we consider chemistry; physics; mechanical, aeronautical and manufacturing engineering; geography and environmental studies; sociology and history.

Again, as in Ref. 2012_Scientometrics (), we used data from Thomson Reuters Research Analytics and the UK's 2008 Research Assessment Exercise (RAE 2008). As in the pilot study for biology, here we provide evidence that correlations between specific citation indicators and peer-measured group qualities are very weak for all the disciplines, even in the case of ranked values. However, when scaled up to the actual size of the department, the absolute citation impact is strongly correlated with the overall group strength as measured by peer review. The correlation is very strong (above 95%) for the hard sciences, less strong for geography and engineering, and weakest for the social sciences (below 90%). Although the correlations are statistically strong for all the disciplines examined, national assessment is linked to funding distribution, so even small differences can have large financial consequences; the threshold of reliability should therefore be very high. This means that our previous conclusions 2012_Scientometrics () indeed extend to the hard sciences physics and chemistry, but not beyond the natural sciences. The social sciences and humanities, in particular, require peer-evaluated measurements of both quality and strength.

1 Peer review and the Normalised Citation Impact for research institutes

1.1 The Research Assessment Exercise and the Research Excellence Framework

Quality-related funding forms one element of the UK's dual research-support system. Until now, this has been based on the RAE RAE_web (), and the annual distribution of quality-related funding is over 2 billion euro. In the future it will be based on the Research Excellence Framework REF_web (). The evaluation of the quality of academic research output forms the major component of each of these schemes. Using published criteria, RAE 2008 assessed submissions in each of 67 different subject areas (units of assessment) and awarded a quality profile to each of them. All submissions relate to the assessment period from 1 January 2001 to 31 July 2007 RAE_web (). Submissions included four outputs (publications) per staff member. E.g., in physics 1686 scientists submitted to the RAE, involving 6744 papers. (The actual number may be somewhat less than this because co-authored papers should be attributed proportionally to each contributor.) There was an average of 40 authors per submission, which translates into 160 papers per group. RAE experts seek to quantify the proportion of a department's or research centre's submitted work which falls into each of five quality bands. The highest band is denoted 4* and represents world-leading research. The remaining bands are graded through 3*, 2* and 1* to the lowest quality level, which is called "Unclassified" RAE_statement_panelE (). The RAE quality profile assigned to a given research group is a set of values giving the percentage of the team's research rated in each band. For example, the profile (25, 20, 35, 15, 5) would indicate that 25% of a group's research is of world-leading (4*) quality; 20% is 3* (internationally excellent); 35% is 2* (recognised internationally); 15% is 1* (recognised nationally); and the remaining 5% is unclassified.

Governmental funding post-RAE is determined by a formula which combines the quality scores in a weighted manner. While the formula is subject to regional and temporal variation (the latter often due to the influence of lobby groups), the one introduced by the Higher Education Funding Council for England immediately following RAE 2008 rated 4* and 3* research as being seven and three times the value of 2* work, while lower-quality research was unrewarded HEFCE_web (). In Ref. 2012_Scientometrics (), we denoted the strength of a given research group by S. This is defined as the volume of quality,

S = N s,     (1)

where N is the size of the group and s its quality. The amount of quality-related funding distributed by the Higher Education Funding Council for England to a given university after the RAE is a function of its strength S. While strength determines future funding, it is, of course, not sensible to rank groups or universities according to their S-values because different groups have different sizes. However, many media and managers readily rank according to the quality measures (although this neglects the very strong size effects pointed out in Refs. 2010_Ralph (); 2011_Ralph ()).
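As an illustration of the funding formula, a weighted quality score and the corresponding strength can be computed as follows. This is a sketch under our own naming conventions: the 7:3:1 weighting of 4*, 3* and 2* work follows the post-RAE-2008 formula cited above, but the division by 100 (profiles are percentages) and the zero weights for 1* and unclassified work as stated in the text are spelled out explicitly here.

```python
def weighted_quality(profile, weights=(7, 3, 1, 0, 0)):
    """Combine an RAE quality profile -- percentages of 4*, 3*, 2*, 1*
    and unclassified work -- using funding-style weights: 4* and 3*
    research rated seven and three times the value of 2* work, with
    lower-quality research unrewarded."""
    return sum(w * p for w, p in zip(weights, profile)) / 100.0

def strength(group_size, profile):
    """Absolute strength as the 'volume of quality': S = N * s."""
    return group_size * weighted_quality(profile)

# The example profile from the text: 25% 4*, 20% 3*, 35% 2*, 15% 1*, 5% unclassified.
profile = (25, 20, 35, 15, 5)
print(weighted_quality(profile))   # 2.7
print(strength(40, profile))       # the same quality scaled to a 40-strong department
```

Two departments with identical profiles but different sizes thus receive identical quality scores s but very different strengths S, which is why rankings by quality and allocations by strength behave so differently.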

At the RAE, the overall quality profile is constructed by summing sub-profiles for three separate elements (quality of "outputs", quality of "environment" and quality of "esteem"), of which outputs play the strongest role. In the future, the Research Excellence Framework will replace the RAE for peer-review-based institutional research assessment REF_web (). The main difference is that the overall quality profile will consist of "outputs", "impact" and "environment" instead of "outputs", "esteem" and "environment". Here "impact" refers to non-academic impact (thus not, e.g., citations). This new element is one of the major innovations of the Research Excellence Framework, although the applicability of scientific results has long been considered one aspect of scientific productivity. Nevertheless, the "outputs" sub-profile remains the most important component of research assessment within the Research Excellence Framework, providing 65% of the overall score. (The remaining 35% is distributed between "impact" (20%) and "environment" (15%) REF_web ().) To summarise, peer-review measures of research outputs will continue to dominate the UK's assessment of institutional research quality and strength in the years to come, and will be the main factor upon which billions of euros worth of funding is allocated.

Although they may be influenced by non-academic impact and environment (e.g., visibility), citation counts refer only to outputs. It is therefore sensible to compare citation-based measures with the "outputs" category of the RAE. These are readily available on the official RAE 2008 web page RAE_web (), and we henceforth confine our attention to these measures. To maintain consistency of notation with Ref. 2012_Scientometrics (), we denote by s the peer-review measure of quality coming from the "outputs" category of RAE 2008; the corresponding absolute measure is denoted by S.

1.2 Thomson Reuters Research Analytics citation indicator

As described in Ref. 2012_Scientometrics (), our citation-based measure of choice is that provided by Thomson Reuters Research Analytics. This company offers a service analysing research performance tailored to individual client requirements Evidence_web (). They have developed the so-called Normalised Citation Impact (NCI) as a coefficient of departmental performance in a given discipline.

Thomson Reuters Research Analytics calculate the NCI using data from Web of Knowledge databases Evidence_2010 (); Evidence_2011 (). Similarly to the Relative Citation Rate (RCR) (see, e.g., Schubert1996 ()), the NCI is calculated by comparison with a mean or expected citation rate. It is a specific measure of academic citation impact because it is averaged over the entire research group. A non-trivial advantage of the NCI is that it takes account of different citation patterns between different academic disciplines. To achieve this, the total citation count for each paper is first normalised to an average number of citations per paper for the year of publication and either the field or the journal in which the paper was published. This is called "rebasing" the citation count Evidence_2011 (). To compare sensibly with the UK's peer-review mechanism, only the four papers per individual which were submitted to RAE 2008 were taken into account by Thomson Reuters Research Analytics in determining the average NCI for research groups Evidence_2011 () (citation data up to the end of 2009 were analysed; see Evidence_2011 (), Appendix A).
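The rebasing step can be illustrated schematically. The functions and data below are hypothetical sketches of our own: real baselines are world-average citations per paper for a given field (or journal) and publication year, taken from the Web of Knowledge databases.

```python
def rebased_impact(citations, baseline):
    """'Rebase' one paper's citation count by dividing by the average
    number of citations per paper for its field and publication year."""
    return citations / baseline

def group_nci(papers):
    """Specific (per-paper average) citation impact of a group.
    `papers` is a list of (citation_count, field_year_baseline) pairs,
    one entry per paper submitted to the assessment."""
    return sum(rebased_impact(c, b) for c, b in papers) / len(papers)

# A hypothetical four-paper submission with field/year baselines.
papers = [(30, 10.0), (5, 10.0), (12, 8.0), (0, 8.0)]
print(group_nci(papers))  # 1.25 -- above a world average of 1
```

Because each count is divided by a field- and year-specific baseline before averaging, groups publishing in low-citation fields are not automatically penalised relative to those in high-citation fields.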

Thus, the NCI may be considered a citation-based specific measure of the academic impact of a department in a given field, and we denote it by i. The corresponding absolute measure of impact (the total volume of academic impact of the department or group) is denoted by I. The relationship between the two is

I = N i,     (2)

where N is again the size of the group.

1.3 Comparisons to be made

The objective of the remainder of this paper is to compare the peer-review and citation-based indicators for different disciplines. The specific indicators to compare are the quality s and the citation impact i, as measures of the average strength and impact of the group or department per individual contained within it. We also compare the absolute indicators S and I as measures of the overall strength and total impact of the group as a whole.

2 Weak correlation between specific measures of quality and impact

A 100% linear correlation between the specific measures s and i would indicate that the citation-based indicator (NCI) is a perfect proxy for RAE peer-review quality scores. The actual correlations for different disciplines are depicted in Fig. 1 and are far from perfect.

Figure 1: Correlations between average quality of research groups according to RAE 2008 and average excellence of research groups according to Normalised Citation Impact for: (a) chemistry, (b) physics, (c) mechanical, aeronautical and manufacturing engineering, (d) geography and environmental studies, (e) sociology and (f) history. Different symbols represent large (black) and medium/small (red) groups. For engineering (c), information about group sizes is unavailable.

For the majority of disciplines one observes some positive but weak correlation. This is quantified by relatively small values of the Pearson coefficient, listed in Table 1. The conclusion is clear: the NCI indicators should not be used in place of peer-review measures of research-output quality.

As stated, normalised scores (be they RAE quality measurements or NCI citation-based indicators) are frequently used for ranking research groups. For this reason we also check the correlation between ranks. The ranks are constructed by listing the research groups in ascending order of their corresponding scores; each department is then assigned an ascending numerical rank (the average rank in the case of equal scores). The linear correlation between the ranked variables is expressed by the Spearman coefficient, values of which are also listed in Table 1.
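The rank construction just described (ascending order, average ranks for ties) and the resulting Spearman coefficient, which is simply the Pearson coefficient of the rank variables, can be sketched in plain Python; the function names are ours.

```python
def average_ranks(scores):
    """Assign 1-based ranks in ascending order of score, giving tied
    scores the average of the ranks they would occupy."""
    order = sorted(range(len(scores)), key=lambda k: scores[k])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        mean_rank = (i + j) / 2.0 + 1.0   # average of 1-based positions i..j
        for k in order[i:j + 1]:
            ranks[k] = mean_rank
        i = j + 1
    return ranks

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman coefficient: Pearson correlation of the ranks."""
    return pearson(average_ranks(x), average_ranks(y))
```

For tied scores the averaging convention gives, e.g., average_ranks([1.0, 1.0, 2.0]) == [1.5, 1.5, 3.0], matching the treatment of equal scores described above.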

Description of the data sets                              Pearson coefficient             Spearman coefficient
                                                          all / large / medium-small      of the ranked values
                                                          groups
biology (see Ref. 2012_Scientometrics ())
  (44 groups: 32 large, 7 medium, 5 small)
chemistry
  (29 groups: 12 large, 14 medium, 3 small)
physics
  (41 groups: 28 large, 9 medium, 4 small)
mechanical, aeronautical and manufacturing engineering
  (30 groups)
geography and environmental studies
  (41 groups: 28 large, 9 medium, 4 small)
sociology
  (39 groups: 29 large, 8 medium, 2 small)
history
  (79 groups: 30 large, 24 medium, 25 small)

Table 1: The approximate values of the linear correlation coefficients between the specific quality and impact scores, calculated for several different disciplines. Statistically significant values are highlighted in boldface.

Contrary to some earlier results which claimed high levels of correlation between the ranked RAE scores and citation counts (for example, for music 2008_Oppenheim () and for archaeology 2003_Norris ()), our values of the Spearman coefficient are low, varying from 0.18 to 0.62. This is perhaps unexpected, since the Normalised Citation Impact is a more sophisticated citation-based measure of academic impact than the simple citation counting used in the earlier analyses.

As stated earlier, it was established in Refs. 2010_Ralph (); 2011_Ralph () that the dependency of research quality on the quantity of researchers differs depending on whether or not research groups exceed the upper critical mass. For this reason, we also investigate the two categories (large and medium/small groups) separately. The correlation coefficients between the specific quality and impact scores for large and for medium/small groups are also listed in Table 1. As one can see, the proportions of groups above and below the upper critical mass differ between disciplines: whereas the sociological groups are mainly large, there is a high proportion of small/medium groups in the field of geography and environmental studies. However, this division does not help, and the correlation coefficients between the two specific measures of group research performance remain poor. (Further subdivision of the medium/small category into separate sets of small and medium-sized groups does not ameliorate the situation.) We conclude that the NCI is a poor proxy for peer-review measures of research quality in all subject areas analysed.

3 Strong correlation between absolute measures of strength and impact

A conspicuous feature of the above analysis is that all research groups contribute the same weight to the analysis. For example, the RAE-measured quality scores for the history research groups at the Open University and the University of Glamorgan are almost equal. But, with 20.6 staff, the former is more than 3 times bigger than the latter, which has only 6 researchers. This means that researchers in smaller groups contribute more weight to the analysis, and statistical inaccuracies in their scores are unduly amplified. This problem is remedied by multiplying the average quality of each group by its size, a process which also renders the specific measures absolute: quality becomes strength, and the NCI is likewise scaled up to the volume of the group or department.

Figure 2: Correlation between (strength of research groups according to RAE 2008) and (absolute citation impact) for: (a) chemistry, (b) physics, (c) mechanical, aeronautical and manufacturing engineering, (d) geography and environmental studies, (e) sociology and (f) history. The symbols are as in Fig. 1.

From Fig. 2, there are clear correlations between the absolute measures of strength and impact for all disciplines studied. The corresponding values of the Pearson coefficient are given in Table 2; for the six disciplines studied here they vary from 0.87 to 0.96. The equivalent statistic for the biology research groups studied in Ref. Evidence_2010 () was comparably high. As in biology, the replacement of specific measures of quality and impact by their absolute counterparts has the effect of stretching the corresponding axes by amounts proportional to the sizes of the groups or departments, and this leads to improved correlations.
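The geometric effect of this rescaling can be illustrated with synthetic numbers. The data below are invented purely for illustration: per-head scores that are only weakly related become strongly correlated once each is multiplied by a group size that varies over a wide range.

```python
def pearson(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Invented specific (per-head) scores for six hypothetical groups.
sizes = [5, 10, 20, 40, 60, 80]
quality = [2.1, 2.8, 1.9, 2.5, 3.0, 2.2]   # peer-review quality per head
impact = [1.1, 0.9, 1.2, 1.0, 1.1, 0.9]    # citation impact per head

# Scale up by group size: strength = N * quality, absolute impact = N * impact.
strength = [n * q for n, q in zip(sizes, quality)]
total_impact = [n * i for n, i in zip(sizes, impact)]

print(pearson(quality, impact))         # weak correlation between specific measures
print(pearson(strength, total_impact))  # strong correlation between absolute measures
```

Because group size dominates both products, the absolute measures inherit a strong common trend from N; this is precisely why high correlations between strength and absolute impact do not imply that the specific measures agree.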

Description of the data sets                              Pearson coefficient
                                                          all / large / medium-small
                                                          groups
biology
  (44 groups: 32 large, 7 medium, 5 small)
chemistry
  (29 groups: 12 large, 14 medium, 3 small)
physics
  (41 groups: 28 large, 9 medium, 4 small)
mechanical, aeronautical and manufacturing engineering
  (30 groups)
geography and environmental studies
  (41 groups: 28 large, 9 medium, 4 small)
sociology
  (39 groups: 29 large, 8 medium, 2 small)
history
  (79 groups: 30 large, 24 medium, 25 small)

The correlation coefficients for biology given in Ref. 2012_Scientometrics () were based on the overall quality profiles. Here, to compare properly with the other subject areas, the output-based absolute scores are used instead.

Table 2: The approximate values of the linear correlation coefficients between the absolute measures of strength and impact for several disciplines. Statistically significant values are highlighted in boldface.

As observed previously for biology 2012_Scientometrics (), the correlation between and is usually best for large groups. The only exception is geography: in this case medium and small groups exhibit a better correlation than large ones. One may speculate as to the reasons for this. One possibility is the highly interdisciplinary nature of the research, which includes “a wide range of enquiries into natural, environmental and human phenomena” RAE_web (). Indeed, among the disciplines analysed in this paper, only the geographical unit of assessment was declared as highly interdisciplinary and this marks it out.

While the correlation coefficients are high for all the disciplines, there is a noticeable difference between the hard sciences (chemistry, physics and biology 2012_Scientometrics ()) and the "softer" disciplines (history and sociology). For the former set, the correlation coefficient between absolute measures exceeds 95%; for the latter set it is smaller than 90%. The interdisciplinary area of geography and environmental studies is positioned somewhere between these two categories, as is the engineering discipline studied.

4 Conclusions

Based on the above results, the following three main conclusions may be drawn.

  • Weak correlations between specific measures of research quality and impact have been observed for the disciplines of chemistry; physics; mechanical, aeronautical and manufacturing engineering; geography and environmental studies; sociology and history. This signals that this citation-based measure is a poor proxy for peer-reviewed measures of the quality of research groups. Moreover, since rankings are based on normalized data, this indicates that citation-based indicators will provide quite different rankings to those based on peer review.

  • Strong correlation between absolute measures of research quality and impact, which was previously observed for biology 2012_Scientometrics (), is seen to extend, to various extents, to the disciplines analysed here. Thus, citation-based measures may inform, or serve as a proxy for, peer-review measures of the strengths of research groups.

  • Although the citation-based measures could be a reasonable proxy for, or may inform about, the strengths of research groups for all disciplines studied, the results for the hard sciences are superior to those for the softer disciplines. Specifically, Pearson coefficients exceeding 95% were observed for physics and chemistry, as well as for biology, while the corresponding values for history and sociology are below 90%. The interdisciplinary areas of geography and engineering lie in between.

Since quality-related funding is strength-based, the use of citation-based indicators may offer a much cheaper and less intrusive alternative to the system currently in use in the UK and some other countries, for large research groups in the hard sciences. However, such a proxy would be far less reliable for the social sciences and humanities. Moreover, citation-based indicators should not be used in isolation to compare the average quality of Higher Education Institutions or of separate research groups. Nor should they be used for rankings.

Acknowledgements

This work was supported in part by the 7th FP, IRSES project No. 269139 “Dynamics and cooperative phenomena in complex physical and biological environments” and IRSES project No. 295302 “Statistical physics in diverse realizations”. The authors thank Jonathan Adams from Thomson Reuters Research Analytics for the data and Ihor Mryglod for fruitful discussions.

References

  • (1) van Raan A.F.J., Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods, Scientometrics, 2005, 62, No. 1, 133–143.
  • (2) Derrick G.E., Haynes A., Chapman S., Hall W.D., The Association between Four Citation Metrics and Peer Rankings of Research Influence of Australian Researchers in Six Fields of Public Health, PLoS ONE, 2011, 6, e18521.
  • (3) Bornmann L. The Hawthorne effect in journal peer review, Scientometrics, 2012, 91, 857–862.
  • (4) Editorial, Nature, 2010, 465, 845, and Metrics Special at www.nature.com/metrics (last accessed April 2012).
  • (5) Warner J., Citation Analysis and Research Assessment in the United Kingdom, B. Am. Soc. Inform. Sci. Tech., 2003, 30, Iss. 1, 26–27.
  • (6) Garfield E., Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas, Science, 1955, 122, No. 3159, 108–111.
  • (7) Garfield E., Citation Frequency as a Measure of Research Activity and Performance in Essays of an Information Scientist, Current Contents, 1973, 1, 406–408.
  • (8) Hirsch J.E., An index to quantify an individual’s scientific research output, PNAS, 2005, 102 (46), 16569–16572.
  • (9) Egghe L., Theory and practise of the g-index, Scientometrics, 2006, 69, No. 1, 131–152.
  • (10) Vinkler P., An attempt for defining some basic categories of scientometrics and classifying the indicators of evaluative scientometrics, Scientometrics, 2001, 50, No. 3, 539–544.
  • (11) Vinkler P., Relations of relative scientometric indicators, Scientometrics, 2003, 58, No. 3, 687–694.
  • (12) Moed H.F., Citation analysis in research evaluation. Dordrecht, The Netherlands: Springer, 2005.
  • (13) Donovan C., Future pathways for science policy and research assessment: metrics vs peer review, quality vs impact, Science and Public Policy, 2007, 34, 538–542.
  • (14) Bornmann L., Wallon G., Ledin A., Is the h index related to (standard) bibliometric measures and to the assessments by peers? An investigation of the h index by using molecular life sciences data, Research Evaluation, 2008, 17, 149–156.
  • (15) Butler D., University rankings smarten up, Nature, 2010, 464, 16–17.
  • (16) Williams R., de Rassenfosse G., Jensen P., Marginson S., U21 Ranking of National Higher Education Systems, Report of the project sponsored by Universitas 21, University of Melbourne, 2012.
  • (17) Bibliometric evaluation and international benchmarking of the UK’s physics research, Summary report prepared for the Institute of Physics by Evidence, Thomson Reuters, 2012.
  • (18) The official web-page of Academic Ranking of World Universities (ARWU): http://www.shanghairanking.com. Accessed 19 October, 2012.
  • (19) Florian R.V., Irreproducibility of the results of the Shanghai academic ranking of world universities, Scientometrics, 2007, 72, 25–32.
  • (20) Billaut J.-C., Bouyssou D., Vincke Ph., Should you believe in the Shanghai ranking?, Scientometrics, 2010, 84, 237–263.
  • (21) Ioannidis J.P.A. et al., International ranking systems for universities and institutions: a critical appraisal, BMC Medicine, 2007, 5, 30.
  • (22) Macilwain C., Wild goose chase. Nature, 2010, 463, 291.
  • (23) Mryglod O., Kenna R., Holovatch Yu., Berche B., Absolute and specific measures of research group excellence, Scientometrics (2012) DOI 10.1007/s11192-012-0874-7, (to be published).
  • (24) The official web-page of Evidence Thomson Reuters: http://www.evidence.co.uk. Accessed 18 October 2012.
  • (25) Kenna R., Berche B., Critical mass and the dependency of research quality on group size, Scientometrics, 2010, 86, No. 2, 527–540.
  • (26) Kenna R., Berche B., Critical masses for academic research groups and consequences for higher education research policy and management, Higher Education Management and Policy, 2011, 23, Iss. 3, 1–21.
  • (27) The official web-page of the RAE 2008. http://www.rae.ac.uk/. Accessed 18 October 2012.
  • (28) Stauffer D., A Biased Review of Sociophysics, J. Stat. Phys., 2012 (to be published), DOI:10.1007/s10955-012-0604-9.
  • (29) The official web-page of the REF: http://www.ref.ac.uk/. Accessed 19 October 2012.
  • (30) RAE 2008. The panel criteria and working methods. Panel E. (2006). http://www.rae.ac.uk/pubs/2006/01/docs/eall.pdf. Accessed 19 October 2012.
  • (31) The official web-page of the Higher Education Funding Council for England. Funding for universities and colleges in 2009–10 (2009). Electronic Publication 01/2009 in the ADMIN-HEFCE Archives. Accessed 18 October 2012.
  • (32) The future of the UK university research base. Evidence (a Thomson Reuters business) report, July 2010.
  • (33) Funding research excellence: research group size, critical mass & performance. A University Alliance report, July 2011.
  • (34) Schubert A., Braun T., Cross-field normalization of scientometric indicators, Scientometrics, 1996, 36, No. 3, 311–324.
  • (35) Oppenheim C., Summers M.A.C., Citation counts and the Research Assessment Exercise, part VI. Unit of assessment 67 (music), Information Research, 2008, 13, No. 2.
  • (36) Norris M., Oppenheim Ch., Citation counts and the Research Assessment Exercise. V Archaeology and the 2001 RAE, Journal of Documentation, 2003, 59, No. 6, 709–730.