Peer review
Written evidence submitted by Dr Alastair Gill and Professor Nigel Gilbert (PR 75)
Perspectives from the Qlectives project
Declaration of Interests
The authors, Dr Alastair Gill (Research Fellow) and Professor Nigel Gilbert (Professor of Sociology), are academics based in the Department of Sociology at the University of Surrey. Both work on the ‘QLectives’ project, supported by the European Commission’s Seventh Framework Programme (FP7).
Executive Summary
1.
In this document we discuss peer review in relation to the work on quality in science undertaken as part of the European Union FP7 project ‘QLectives’. After discussing the scientific, social and technological contexts of peer review and perspectives on quality in science, we present findings from the project. From a survey of scientists and those connected with science, we note three main findings: (I) definitions of quality in science are for the most part distinct from other discussions of quality; (II) quality in science mainly relates to replicability, novelty, independence, methods, and clarity of contribution; (III) there is overlap between concepts of quality and the processes in place to establish quality, such as peer review. From the study which applied a ‘wisdom of the crowds’ approach to blogs, we note that (IV) quality is conceptualized principally in general, abstract groupings, which are to some extent idealized, as well as (V) in groupings which are more concrete and specific, such as one relating to peer review (‘reviewing’, ‘published’, and ‘high impact’).
2.
From these findings, as well as more generally from the existing literature, we note that quality is important, indeed fundamental, to science. Analyzing both survey and blog data, we note that peer review is an important part of the process of establishing quality in science, but draw attention to the fact that it is one part of a larger process rather than the only part. We therefore recommend that discussions of the peer review process take place within the larger context of scientific quality. As noted above, quality in science is seen to relate to (i) replicability, (ii) novelty, (iii) independence, (iv) methods, and (v) clarity of contribution.
3.
Finally, although new technological developments and the burgeoning amount of scientific information available put strains on existing ways of managing scientific literature and knowledge, they also open up exciting opportunities to reconsider basic assumptions at the heart of science.
Introduction
4.
In a Nature web debate devoted to peer review, Charles Jennings, a former editor of Nature and former executive director of the Harvard Stem Cell Institute, made the following claims: ‘scientists understand that peer review per se provides only a minimal assurance of quality’ and ‘far more important is where a paper is published, and in fact this is the major function of peer review’. Further, he goes on to say that ‘not all journals are equal, and not all peer review is equal either’, and advocates more systematic study of the area (Jennings, 2006).
5.
In this document we present research undertaken, in part, as a contribution to the European Union FP7 project ‘QLectives’. Although this project does not focus on peer review as such, it is concerned with ‘quality’ in science and scientific content, and the social and technological contexts in which science is taking place. In addition to the study and discussion of peer review itself, we believe that it is important to understand how peer review fits into the larger scientific, social and technological environment.
The scientific, social and technological contexts of peer review
6.
The increasing volume of scientific information and publications is placing pressure on both researchers and publishers: researchers not only have to manage more information, but also have to make their own work more visible to others (e.g., Hey and Trefethen, 2003). In addition, the ‘architecture of participation’ promised by Web 2.0 has lowered the threshold for publishing content, for example through pre-publication papers, scientific blogs and open lab notebooks, with some publishers operating an open publishing policy (e.g., Sanderson and Neylon, 2008; Public Library of Science, http://www.plos.org; Arxiv, http://arxiv.org). These developments reduce the role of the publisher or editor in filtering the quality of information before publication. Web 2.0 technologies are also being adopted by scientists to help make connections, recommend papers, and share and discuss ideas (‘Science 2.0’; Shneiderman, 2008). This combination of ‘computer science know-how with social science sensitivity’ is leading to the use of reference management and social networking tools (e.g., Mendeley.com, Academia.edu). A description of the functionality of a sample of these tools is provided in the appendix.
7.
These technological innovations also demonstrate that peer review is but one process in establishing the ‘quality’ of scientific work; new technologies are simply a recent adaptation and formalization of this wider process. Instead of circulating typewritten manuscripts to their colleagues and academic contacts for feedback prior to the standard peer review process, scientists now post unpublished manuscripts on sites such as Arxiv. Similarly, whilst there are many tools and websites that measure the number of views or downloads of papers hosted online, this is in many ways consistent with, and an extension of, one of the oldest forms of scientific currency: citations. As noted elsewhere, the number of citations a paper receives is generally thought to indicate its priority and importance in a research area (although Bornmann and Daniel, 2008, observe that citations can also reflect non-scientific, social factors). It is therefore important to note that peer review is just one stage in the process of establishing scientific quality. However, this leads to a larger question, which has been pursued as part of the QLectives project: what is understood by quality in science?
Perspectives on quality in science
8.
Quality is considered integral to the definition of science, both in terms of function and content: quality is a task, an expectation, a prerequisite, a frame of reference and even a synonym of science (see, for example, Mazlish, 1982). Thus, the notion of quality is more central to the make-up of science than novelty or resilience; the absence of quality in a paper undermines its scientific merit (BBC, 2010; RAE, 2008). The issues relating to ‘data overload’ from user-generated content and new models of publishing reflect many of the concerns about ‘information quality’ more generally resulting from electronic publishing, which might be described in terms of ‘fitness for use’ (e.g., Neus, 2001).
9.
Fitness for use in scientific terms might be translated into research having impact, and therefore being highly cited. Yet the use of citations as a measure of quality is controversial, and citation behaviour relates to some extent to ‘non-scientific factors’ (see Bornmann and Daniel, 2008, for a review). For example, one study found that the papers judged ‘best’ at a conference by experts in the field did not go on to attract higher citation counts (Bartneck and Hu, 2009). One explanation for this discrepancy might be a lack of correspondence between the idealized perceptions of quality held by the experts and the practical notion of quality used by scientists. Other measures calculated using citations, such as the Impact Factor and the Hirsch index (h-index), are similarly controversial (Garfield, 2006; MacRoberts and MacRoberts, 1989), and need to be placed in an explanatory context or perspective.
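To make the mechanics of such citation-based measures concrete, the following is a minimal illustrative sketch in Python of how the h-index is computed: an author has index h if at least h of their papers have h or more citations each. The sketch is not part of the QLectives project, and the citation counts are invented for illustration.

    def h_index(citations):
        # The h-index is the largest h such that at least h papers
        # have at least h citations each (Hirsch's definition).
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Invented example: six papers with these citation counts.
    # Prints 3: at least three papers have three or more citations,
    # but there are not four papers with four or more.
    print(h_index([25, 8, 5, 3, 3, 1]))

Even this simple calculation shows that the index depends on rank thresholds rather than on total citations, which is one reason why, as argued above, such measures need to be interpreted in context.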
10.
Another aspect of quality in science is that of ‘best practice’, which determines what procedures should be enforced and how science should be undertaken (Khan et al., 2001; cf. ISO, 2009). However, such guidance is by definition ‘top-down’, and does not necessarily reflect the opinions or instincts of individual scientists, or even the scientific community’s understanding of quality.
Quality in Science: Preliminary findings from the QLectives project
11.
One aspect of the QLectives project research focuses on how quality is conceptualized in science, both by scientists themselves and by those external to the process. A variety of methodological techniques are being used in these studies (Gill, Xenitidou, and Gilbert, 2010). In the following, we discuss results from a survey of 249 scientists and non-scientists, which elicited descriptions of ‘quality’ in both scientific and general contexts. We also discuss a study of 1,171 blogs discussing quality in science, which harnessed the ‘wisdom of the crowds’ and used computational semantic techniques to examine the clustering of quality-related topics; an illustrative sketch of this kind of technique follows below.
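As a purely illustrative sketch of the kind of computational semantic technique referred to above, the following Python snippet applies latent semantic analysis (TF-IDF weighting followed by truncated singular value decomposition) to a toy collection of blog-like texts. The texts, the specific technique and the number of factors are assumptions made for illustration only; the project’s actual methodology is described in Gill, Xenitidou and Gilbert (2010).

    # Illustrative only: LSA-style factor extraction over toy 'blog' texts.
    # The QLectives study's actual pipeline may differ; see the project paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    blogs = [  # invented stand-ins for blog posts about quality in science
        "good science needs evidence, tested results and controlled methods",
        "quality work is novel, replicable and clearly reported",
        "peer review decides what gets published in high impact journals",
        "reviewing standards vary, but published papers still signal quality",
    ]

    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(blogs)            # document-by-term matrix

    svd = TruncatedSVD(n_components=2, random_state=0)
    svd.fit(X)                                # extract latent 'factors'

    terms = tfidf.get_feature_names_out()
    for i, factor in enumerate(svd.components_):
        top = factor.argsort()[::-1][:4]      # highest-loading terms per factor
        print(f"factor {i}:", [terms[t] for t in top])

Terms that load heavily on the same factor cluster together; it is from clusters of this general kind that groupings such as the peer-review factor reported below emerge.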
12.
From the survey examining general and science-specific notions of quality, we note the following:
13.
(I) Quality in scientific contexts is defined in distinct, though partly overlapping, terms compared with more general concepts of quality. For example, scientific concepts of quality tend to be more detailed and specific, whereas in general contexts quality was viewed from a consumer-product perspective.
14.
(II) Quality in science specifically relates to issues such as replicability, novelty, independence, methods, and clarity of contribution. In particular, the main analytic categories are: clarity, correctness, depth, novelty, process-oriented (referring to the research process), results- or proof-oriented (empiricism and experimental methods), peer-review based, trust, appearance and value (the last two are also found in general contexts).
15.
(III) As point (II) shows, there is an overlap between how quality is conceptualized and the current filters or processes used to establish quality, such as peer review and publication in (high quality) journals. In the latter case, such definitions lead to a circular process of defining quality by its outcomes, which do not themselves guarantee quality research (for example, even journals which could be described as ‘low quality’ may also be peer-reviewed).
16.
(IV) Our computational semantic analysis of blogs shows that discussions of quality in science can be clustered into 11 factors, with the initial, more abstract ones (discussing, e.g., ‘proof’, ‘process’, ‘novelty’ and ‘function’) holding the most descriptive power. These first few factors thus appear to relate to desirable, and at times idealistic, characteristics of science, as well as to more concrete terms which could act as a checklist for scientists and reviewers (‘evidence’, ‘results’, ‘tested’, ‘controlled’).
17.
(V) Subsequent factors become more specific and focus on particular aspects of quality in science (e.g., correctness, relational aspects and professionalism). In particular, we note one factor relating to peer review, covering concepts such as ‘reviewing’, ‘published’, and ‘high impact’.
Acknowledgement
This work was partially supported by the Future and Emerging Technologies programme FP7-COSI-ICT of the European Commission through project QLectives (grant no.: 231200).
References
Bartneck, C. and Hu, J. (2009). Scientometric analysis of the CHI proceedings. In Conference on Human Factors in Computing Systems (CHI 2009), pages 699–708, Boston. ACM.
BBC (2010). Vince Cable reveals a strategy to cut science funding. BBC News Online, http://www.bbc.co.uk/news/business-11225197 (accessed February 04, 2011).
Bornmann, L. and Daniel, H.-D. (2008). What do citation counts measure? A review of studies on citing behavior. Journal of Documentation, 64, 45-80.
Garfield, E. (2006). The history and meaning of the journal impact factor. JAMA, 295(1), 90-93.
Gill, A.J., Xenitidou, M. and Gilbert, N. (2010). Understanding quality in science: A proposal and exploration. Proceedings of the Quality in Techno-Socio Systems (QTESO) workshop at the Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems. Budapest, Hungary, September 2010.
Hey, A.J.G. and Trefethen, A.E. (2003). The data deluge: An e-science perspective. In Berman, F., Fox, G.C. and Hey, A.J.G. (eds.), Grid Computing: Making the Global Infrastructure a Reality. London: Wiley.
ISO (International Organization for Standardization) (2009). Selection and use of the ISO 9000 family of standards. Geneva: International Organization for Standardization.
Jennings, C.G. (2006). Quality and value: The true purpose of peer review. In Nature Online Peer Review: Debate. http://www.nature.com/nature/peerreview/debate/nature05032.html (accessed 8 March 2011).
Khan, K., ter Riet, G., Glanville, J., Sowden, A. and Kleijnen, J. (2001). Undertaking Systematic Reviews of Research on Effectiveness. York: NHS Centre for Reviews and Dissemination, University of York.
MacRoberts, M.H., and MacRoberts, B.R. (1989). Problems of citation analysis: A critical review. Journal of the American Society for Information Science, 40(5), 342-349.
Mazlish, B. (1982). The Quality of ‘The Quality of Science’: An Evaluation. Science, Technology, & Human Values, 7, 42-52.
Neus, A. (2001). Managing information quality in virtual communities of practice: Lessons learned from a decade of exploding Internet communication. In Pierce, E. and Katz-Haas, R. (eds.), Proceedings of the 6th International Conference on Information Quality at MIT.
RAE (2008). Research Assessment Exercise 2008: the outcome. Bristol: Higher Education Funding Council for England.
Sanderson, K. and Neylon, C. (2008). Data on display. Nature, 455, 273.
Shneiderman, B. (2008). Science 2.0. Science, 319, 1349-1350.
Alastair J. Gill and Nigel Gilbert
Department of Sociology, University of Surrey
10 March 2011
Appendix describing functionality of online scientific tools follows