Determining cognitive distance between publication portfolios of evaluators and evaluees in research evaluation: an exploration of informetric methods

Rahman, A. I. M. Jakaria (2018). Determining cognitive distance between publication portfolios of evaluators and evaluees in research evaluation: an exploration of informetric methods. PhD thesis, Faculty of Social Sciences, University of Antwerp. [Thesis]

Text: PhD Thesis_Jakaria Rahman.pdf (download, 10MB)

English abstract

This doctoral thesis develops informetric methods for determining the cognitive distance between the publication portfolios of evaluators and evaluees in research evaluation. In a discipline-specific research evaluation, when an expert panel evaluates research groups, it is an open question how one can determine the extent to which the panel members are in a position to evaluate those groups. This thesis contributes to the literature by proposing six informetric approaches to measure the match between evaluators and evaluees, using their publications as a representation of their expertise. An expert panel is appointed specifically for a research evaluation. Experts are typically selected in one of two ways: (1) straightforward selection, in which the person(s) in charge of the research evaluation has access to a list of acknowledged experts in specific fields and limits the selection process to ensuring the experts' independence with respect to the program under evaluation; and (2) gradual selection, in which preferred profiles of experts are developed with respect to the specialization under scrutiny in the evaluation. Both ways leave some freedom for an "old boys' network" to appoint someone without properly evaluating their qualifications. There are other ways to select experts as well, for example inviting open applications or letting the research groups that will be evaluated propose experts. In research evaluation, an expert panel usually comprises independent specialists, each of whom is recognized in at least one of the fields addressed by the unit under evaluation. The expertise of the panel members should be congruent with that of the research groups to ensure the quality and trustworthiness of the evaluation. All things being equal, panel members who are credible experts in the field are also most likely to provide valuable, relevant recommendations and suggestions that should lead to improved research quality. However, when we started this work in July 2013, there were no methods to determine the cognitive distance between evaluators and evaluees in research evaluation.

In this thesis, we develop and test informetric methods to identify the cognitive distances between the (members of) an expert panel on the one hand and the (whole of the) units of assessment (typically research groups) on the other. More generally, we introduce a number of methods that allow measuring cognitive distances based on publication portfolios. In academia, publications are considered key indicators of expertise that help to identify qualified or similar experts when assigning papers for review and when forming an expert panel. Our main objective is to propose informetric methods to identify panel members whose expertise is closely related to the research domain of the research groups, based on their publication profiles. The main factor we take into account is the cognitive distance between an expert panel and the research groups. We consider the publication portfolio of the researchers involved to reflect the position of a unit in cognitive space and, hence, to determine cognitive distance. Expressed in general terms, we measure the cognitive distance between units based on how often they have published in the same or similar journals. Our investigations lead to the development of new methods of expert panel composition for research evaluation exercises. We explore different ways of quantifying the cognitive distance between panel members' and research groups' publication profiles.
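
The paragraph above treats a unit's publication portfolio as its position in cognitive space. As a purely illustrative, hypothetical sketch (the venue names, counts, and the helper function below are not taken from the thesis), such a portfolio can be represented as a count vector over a fixed list of journals or WoS subject categories, which is the kind of data the methods described next operate on.

```python
from collections import Counter

import numpy as np

def portfolio_vector(venues_of_papers, all_venues):
    """Count vector over a fixed list of venues (journals or WoS SCs)."""
    counts = Counter(venues_of_papers)
    return np.array([counts.get(v, 0) for v in all_venues], dtype=float)

# Hypothetical venue list and two small portfolios.
venues = ["Scientometrics", "Journal of Informetrics",
          "Research Evaluation", "PLOS ONE"]
group = portfolio_vector(
    ["Scientometrics", "Scientometrics", "Journal of Informetrics"], venues)
panel_member = portfolio_vector(
    ["Research Evaluation", "Scientometrics"], venues)

print(group)         # [2. 1. 0. 0.]
print(panel_member)  # [1. 0. 1. 0.]
```
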
We consider all publications of the research groups (from the eight years preceding their evaluation) and of the panel members that are indexed in the Web of Science (WoS). We pursue the investigation at two levels of aggregation: WoS subject categories (SCs) and journals. The aggregated citation relations among SCs or among journals yield a matrix. From this matrix, one can construct a similarity matrix, and from the similarity matrix a global SC or journal map in which similar SCs or journals are located close together. The maps can be visualized using a visualization program. During the visualization process, a multi-dimensional space is reduced to a projection in two dimensions; in this process, similar SCs or journals are positioned closer to each other. We propose three methods, namely the use of barycenters, of similarity-adapted publication vectors (SAPV), and of weighted cosine similarity (WCS). We take into account the similarity between WoS SCs and between journals, either by incorporating a similarity matrix (in the case of SAPV and WCS) or a two-dimensional base map derived from it (in the case of barycenters). We determine the coordinates of barycenters on the two-dimensional base map from the publication profiles of research groups and panel members, and calculate the Euclidean distances between the barycenters. We also determine SAPVs using the similarity matrix and calculate the Euclidean distances between the SAPVs. Finally, we calculate WCS using the similarity matrix. The SAPV and WCS methods use a square N-dimensional similarity matrix, where N equals 224 at the WoS SC level and 10,675 at the journal level. We use the distance or similarity between panel members and research groups as an indicator of cognitive distance. Small differences in Euclidean distances (both between barycenters and between SAPVs) or in cosine similarity values bear little meaning. For this reason, we employ a bootstrapping approach to determine a 95% confidence interval (CI) for each distance or similarity value. If two CIs do not overlap, the difference between the values is statistically significant at the 0.05 level. Although it is possible for two values to have a statistically significant difference while their CIs overlap, such a difference is less likely to have practical meaning. Two levels of aggregation and three methods lead to six informetric approaches to quantify cognitive distance. Our proposed approaches hold advantages over a simple comparison of publication portfolios, as they quantify the cognitive distance between a research group and panel members. We also compare the proposed approaches among themselves. We examine which of the approaches best reflects the prior assignment of a main assessor to each research group, how much influence the level of aggregation (journals versus WoS SCs) has, and how much the dimensionality matters. The results show that, regardless of the method used, the level of aggregation has only a minor influence, whereas the influence of the number of dimensions is substantial; in particular, the number of dimensions plays a major role in identifying the shortest cognitive distance. While the SAPV and WCS methods agree in most cases at both levels of aggregation, the barycenter approaches yield different results. We find that the barycenter approaches score highest at both levels of aggregation in identifying the previously assigned main assessor.
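
As a rough, hedged illustration of the three measures described above, the sketch below uses toy data: the similarity matrix S, the two-dimensional map coordinates, and the publication vectors are all made up, and the formulas follow common formulations of a weighted mean position, a similarity-weighted publication vector, and a similarity-weighted cosine. The thesis's exact implementations may differ in normalization details.

```python
import numpy as np

def barycenter(counts, coords):
    """Weighted mean position of a portfolio on a 2-D base map."""
    weights = counts / counts.sum()
    return weights @ coords            # shape (2,)

def sapv(counts, S):
    """Similarity-adapted publication vector: similarity-weighted profile."""
    profile = counts / counts.sum()
    return S @ profile

def weighted_cosine(a, b, S):
    """Cosine similarity that takes pairwise venue similarity into account."""
    return (a @ S @ b) / np.sqrt((a @ S @ a) * (b @ S @ b))

# Toy data: 4 venues, a symmetric similarity matrix with unit diagonal,
# 2-D map coordinates per venue, and two publication count vectors.
S = np.array([[1.0, 0.6, 0.1, 0.0],
              [0.6, 1.0, 0.2, 0.0],
              [0.1, 0.2, 1.0, 0.3],
              [0.0, 0.0, 0.3, 1.0]])
coords = np.array([[0.0, 0.0],
                   [0.2, 0.1],
                   [1.0, 0.8],
                   [1.2, 1.1]])
group = np.array([5.0, 3.0, 0.0, 0.0])
panel = np.array([2.0, 1.0, 4.0, 1.0])

d_barycenter = np.linalg.norm(barycenter(group, coords) - barycenter(panel, coords))
d_sapv = np.linalg.norm(sapv(group, S) - sapv(panel, S))
wcs = weighted_cosine(group, panel, S)
print(round(d_barycenter, 3), round(d_sapv, 3), round(wcs, 3))
```

In the thesis setting the vectors would have 224 components at the SC level or 10,675 at the journal level, and the two-dimensional coordinates would come from a global SC or journal base map rather than from toy values.
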
When it comes to uniquely identifying the main assessor, all methods score better at the journal level than at the WoS SC level. Our approaches, though of course not the numerical results, are independent of the similarity matrix or map used. All six approaches make it possible to assess the composition of the panel in terms of cognitive distance when one or more panel members are replaced, and to compare the relative contribution of each potential panel member to the fit of the panel as a whole, by observing the changes in the distance between the panel's and the groups' publication portfolios. In addition, our approaches allow the authority responsible for panel composition to see in advance how well the panel fits the research groups that are going to be evaluated. That authority therefore has the opportunity to replace outliers among the panel members so that the panel fits well with the research groups to be evaluated. For example, the authority can find a best-fitting expert panel by replacing a more distant panel member with a potential panel member located closer to the groups.
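
The bootstrap confidence intervals mentioned above are what make such comparisons, including the what-if replacement of a panel member, interpretable. Below is a minimal, hypothetical sketch of the percentile-bootstrap idea applied to a distance between two publication lists; the resampling scheme, the toy distance function, and all data are assumptions for illustration only, not the thesis code. In the thesis the resampled statistic would be a barycenter distance, SAPV distance, or WCS value.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(pubs_a, pubs_b, distance, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap CI for a distance between two publication lists."""
    stats = []
    for _ in range(n_boot):
        a = rng.choice(pubs_a, size=len(pubs_a), replace=True)
        b = rng.choice(pubs_b, size=len(pubs_b), replace=True)
        stats.append(distance(a, b))
    low, high = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return low, high

# Toy distance on venue indices, just to make the sketch run end to end.
def toy_distance(a, b):
    return abs(np.mean(a == 0) - np.mean(b == 0))

group_pubs = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3])  # venue index per paper
panel_pubs = np.array([1, 2, 2, 3, 3, 3, 0])
print(bootstrap_ci(group_pubs, panel_pubs, toy_distance))
```

A percentile interval is used in this sketch because it requires no distributional assumptions about the resampled statistic.
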

Item type: Thesis (UNSPECIFIED)
Keywords: Barycenter, Bootstrapping, Cognitive distances, Confidence intervals, Expert panel, Journal overlay map, Matching research expertise, Overlay maps, Research evaluation, Similarity matrix, Similarity-adapted publication vector, Web of Science subject categories, Weighted cosine similarity.
Subjects: B. Information use and sociology of information > BB. Bibliometric methods
Depositing user: A. I. M. Jakaria Rahman
Date deposited: 26 Jul 2018 07:19
Last modified: 26 Jul 2018 07:20
URI: http://hdl.handle.net/10760/33241
