Cognitive distances between evaluators and evaluees in research evaluation: a comparison between three informetric methods at the journal and subject category aggregation level

Rahman, A. I. M. Jakaria, Guns, Raf, Rousseau, Ronald and Engels, Tim C. E. Cognitive distances between evaluators and evaluees in research evaluation: a comparison between three informetric methods at the journal and subject category aggregation level. Frontiers in Research Metrics and Analytics, 2017, vol. 2, n. 6. [Journal article (Unpaginated)]

English abstract

This article compares six informetric approaches to determining cognitive distances between the publications of panel members (PMs) and those of research groups in discipline-specific research evaluation. We used data collected in the framework of six completed research evaluations from the period 2009–2014 at the University of Antwerp as a test case. We distinguish between two levels of aggregation (Web of Science Subject Categories and journals) and three methods: while the barycenter method (two-dimensional) is based on global maps of science, the similarity-adapted publication vector (SAPV) method and the weighted cosine similarity (WCS) method (both in higher dimensions) use a full similarity matrix. In total, this leads to six different approaches, all of which are based on the publication profiles of research groups and PMs. We use Euclidean distances between barycenters and between SAPVs, as well as WCS values between PMs and research groups, as indicators of cognitive distance. We systematically compare how these six approaches are related. The results show that the level of aggregation has a minor influence on determining cognitive distances, whereas dimensionality (two versus a high number of dimensions) has a greater influence. The SAPV and WCS methods agree in most cases, at both levels of aggregation, on which PM has the closest cognitive distance to the group to be evaluated, whereas the barycenter approaches often differ. Comparing the results of the methods to the main assessor assigned to each research group, we find that the barycenter method usually scores better. However, the barycenter method is less discriminatory and suggests more potential evaluators, whereas SAPV and WCS are more precise.
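To make the three measures concrete, the following minimal Python sketch illustrates how a barycenter distance, a SAPV distance, and a weighted cosine similarity could be computed from publication profiles. The map coordinates, similarity values, and publication counts are made up for illustration, and the SAPV normalization shown is an assumption; this is a sketch of the general technique, not the authors' implementation.

    import numpy as np

    def barycenter(profile, coords):
        # Weighted average of the 2-D map coordinates, weighted by publication counts.
        weights = profile / profile.sum()
        return weights @ coords

    def sapv(profile, sim):
        # Similarity-adapted publication vector: the profile is spread over related
        # categories via the similarity matrix and then normalized
        # (the normalization used here is an assumption for illustration).
        vec = profile @ sim
        return vec / vec.sum()

    def weighted_cosine(a, b, sim):
        # Weighted cosine similarity: cosine of a and b with the similarity matrix
        # acting as the inner-product weight.
        return (a @ sim @ b) / np.sqrt((a @ sim @ a) * (b @ sim @ b))

    # Toy data: 4 subject categories, illustrative numbers only.
    coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # map positions
    sim = np.array([[1.0, 0.6, 0.1, 0.0],
                    [0.6, 1.0, 0.2, 0.1],
                    [0.1, 0.2, 1.0, 0.5],
                    [0.0, 0.1, 0.5, 1.0]])           # category-category similarities
    group = np.array([10.0, 5.0, 0.0, 1.0])          # publication counts of a research group
    panel_member = np.array([2.0, 8.0, 1.0, 0.0])    # publication counts of a panel member

    barycenter_distance = np.linalg.norm(barycenter(group, coords) - barycenter(panel_member, coords))
    sapv_distance = np.linalg.norm(sapv(group, sim) - sapv(panel_member, sim))
    wcs = weighted_cosine(group, panel_member, sim)
    print(barycenter_distance, sapv_distance, wcs)

Smaller barycenter and SAPV distances, or a larger WCS value, would indicate a panel member whose publication profile lies cognitively closer to the research group.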

Item type: Journal article (Unpaginated)
Keywords: cognitive distances, research expertise, research evaluation, barycenters, similarity-adapted publication vectors, weighted cosine similarity
Subjects: B. Information use and sociology of information > BB. Bibliometric methods
Depositing user: A. I. M. Jakaria Rahman
Date deposited: 05 Mar 2018 11:17
Last modified: 05 Mar 2018 11:17
URI: http://hdl.handle.net/10760/32400
