#### **Abstract**

In this course, participants will learn about recent developments in the world of research performance evaluation. Together with the instructors, attendees will also practice how to bridge the principles of open science with research intelligence methods and tools to provide actionable knowledge about open science performance in research institutions. This course will let participants explore research intelligence, a growing field of interest for professionals in scholarly communication. By learning and using open science evaluation practices, participants will be able to present research intelligence outcomes to policy makers and foster change in their institutions.

This course is specifically designed for data stewards, librarians, and policy makers who want to discover new approaches to advance the open science agenda in a data-driven way. At the end of this course, participants will feel at ease with the major developments in research intelligence reporting for open science by learning about the concepts of open science, applying (novel) evaluation techniques, and practicing with open access research information sources.

#### **Module 1: July 28th 2021**

**Session 1**: 09:00 – 10:30 CEST
**Session 1 repeated**: 17:00 – 18:30 CEST

[Zoom links, sent via email, are not displayed here to avoid [Zoombombing](https://en.wikipedia.org/wiki/Zoombombing)]

In the first module, [Dr. Antonio Schettino](https://antonio-schettino.com/) will guide the audience through a general overview of open science, with a focus on institutional and funding policies on recognition and rewards and societal impact (particularly in Europe and the Netherlands). He will then review and critically examine some of the evaluation criteria typically used to rank institutions as well as individual researchers and their publications, highlighting the inability of such metrics to reflect the transparency, accountability, and reusability of the scholarly output. Afterwards, he will introduce alternative evaluation frameworks that allow a more comprehensive analysis of the content of research rather than relying on quantitative (publication) metrics. The audience will have the opportunity to engage in live discussions and reflect on how research is evaluated in their own institutions. At the end of this session, participants will have contextualized old and new evaluation criteria and will be able to choose appropriate metrics that better map onto the desirable principles of transparency, accountability, and reusability of intellectual products.

#### **Module 2: August 2nd 2021**

**Session 2**: 09:00 – 10:30 CEST
**Session 2 repeated**: 17:00 – 18:30 CEST

[Zoom links, sent via email, are not displayed here to avoid [Zoombombing](https://en.wikipedia.org/wiki/Zoombombing)]

In the second module, [Ms. Tung Tung Chan](https://www.linkedin.com/in/tungtungchan/?originalSubdomain=nl) will provide an introduction to research intelligence applications and their recent developments in the evaluation of scholarly outputs. The goal is to introduce participants to a variety of data sources, present a set of standard and alternative metrics through use cases, and define strategic questions that guide research intelligence efforts. During the live sessions, the course participants will work on strategic questions relevant to their own context, operationalize performance evaluation using standard, alternative, and open science metrics, and reflect on the comparison between outcomes from standard approaches and alternative performance assessments.

#### **Module 3: August 4th 2021**

**Session 3**: 09:00 – 10:30 CEST
**Session 3 repeated**: 17:00 – 18:30 CEST

[Zoom links, sent via email, are not displayed here to avoid [Zoombombing](https://en.wikipedia.org/wiki/Zoombombing)]

The course ends with a guided assignment specifically aimed at retrieving and presenting research intelligence outcomes, thereby contributing to the implementation of responsible research evaluation for advancing open science. The guided assignment will consist of a recorded step-by-step example as well as two live sessions led by [Dr. Armel Lefebvre](https://www.linkedin.com/in/armell/?originalSubdomain=nl) for discussions and presentations with the course participants. Here, participants will present an open science analysis of their own organization, using the techniques and tools presented by Tung Tung in the second part of the course.