Measuring the match between evaluators and evaluees: Cognitive distances between panel members and research groups at the journal level

Rahman, A. I. M. Jakaria, Guns, Raf, Leydesdorff, Loet and Engels, Tim C. E. (2016). Measuring the match between evaluators and evaluees: Cognitive distances between panel members and research groups at the journal level. [Preprint]

Full text: Measuring the match between evaluators and evaluee.pdf (accepted version, 18 MB)

English abstract

When research groups are evaluated by an expert panel, it is an open question how to determine the match between the panel and the research groups. In this paper, we outline two quantitative approaches that determine the cognitive distance between evaluators and evaluees, based on the journals in which they have published. We use example data from four research evaluations carried out between 2009 and 2014 at the University of Antwerp. While the barycenter approach is based on a journal map, the similarity-adapted publication vector (SAPV) approach is based on the full journal similarity matrix. Both approaches determine an entity’s profile based on the journals in which it has published. Subsequently, we determine the Euclidean distance between the barycenters or SAPV profiles of two entities as an indicator of the cognitive distance between them. Using a bootstrap approach, we determine confidence intervals for these distances. As such, the present article constitutes a refinement of a previous proposal that operates at the level of Web of Science subject categories.
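To make the pipeline described in the abstract concrete, here is a minimal Python sketch of the two distance measures and the bootstrap confidence interval. The journal-map coordinates, similarity matrix, publication lists, and helper names (barycenter, sapv, bootstrap_ci) are all toy assumptions invented for illustration; they are not data or code from the paper, and the normalisation and percentile-CI choices are guesses at one reasonable implementation rather than the authors' exact method.

```python
# Minimal sketch of barycenter and SAPV distances with a bootstrap CI.
# All data below (map coordinates, similarity matrix, publication lists)
# are toy values, NOT taken from the paper.
import numpy as np

rng = np.random.default_rng(42)

def barycenter(counts, coords):
    """Weighted average position of an entity's journals on a 2-D journal map."""
    weights = counts / counts.sum()
    return weights @ coords  # shape (2,)

def sapv(counts, similarity):
    """Similarity-adapted publication vector: spread the publication
    profile over related journals via the journal similarity matrix."""
    vec = counts @ similarity
    return vec / vec.sum()  # normalise so entities of different sizes compare

def euclidean(a, b):
    return np.linalg.norm(a - b)

def bootstrap_ci(pubs_a, pubs_b, distance_fn, n_journals, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for the distance between two entities.
    pubs_a / pubs_b hold one journal index per publication."""
    dists = []
    for _ in range(n_boot):
        sample_a = rng.choice(pubs_a, size=len(pubs_a), replace=True)
        sample_b = rng.choice(pubs_b, size=len(pubs_b), replace=True)
        counts_a = np.bincount(sample_a, minlength=n_journals).astype(float)
        counts_b = np.bincount(sample_b, minlength=n_journals).astype(float)
        dists.append(distance_fn(counts_a, counts_b))
    lo, hi = np.quantile(dists, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# --- toy example: 5 journals, one panel member and one research group ---
n_journals = 5
coords = rng.random((n_journals, 2))      # toy journal-map positions
sim = rng.random((n_journals, n_journals))
sim = (sim + sim.T) / 2                   # symmetric toy similarity matrix
np.fill_diagonal(sim, 1.0)

pubs_panel = np.array([0, 0, 1, 2, 2, 2])     # journal index per publication
pubs_group = np.array([1, 2, 3, 3, 4, 4, 4])

counts_panel = np.bincount(pubs_panel, minlength=n_journals).astype(float)
counts_group = np.bincount(pubs_group, minlength=n_journals).astype(float)

d_bary = euclidean(barycenter(counts_panel, coords), barycenter(counts_group, coords))
d_sapv = euclidean(sapv(counts_panel, sim), sapv(counts_group, sim))
print(f"barycenter distance: {d_bary:.3f}, SAPV distance: {d_sapv:.3f}")

lo, hi = bootstrap_ci(pubs_panel, pubs_group,
                      lambda a, b: euclidean(sapv(a, sim), sapv(b, sim)),
                      n_journals)
print(f"95% bootstrap CI for SAPV distance: ({lo:.3f}, {hi:.3f})")
```

In this sketch, publications (not journals) are resampled with replacement, so an entity's observed journal profile is treated as a sample of its underlying research orientation; entities with few publications then get correspondingly wider intervals.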

Item type: Preprint
Keywords: Research evaluation; Barycenter; Similarity-adapted publication vector; Journal overlay map; Matching research expertise; Similarity matrix
Subjects: B. Information use and sociology of information
B. Information use and sociology of information > BB. Bibliometric methods
Depositing user: A. I. M. Jakaria Rahman
Date deposited: 07 Oct 2016 13:01
Last modified: 07 Oct 2016 13:01
URI: http://hdl.handle.net/10760/30024


