Project documentation
---------------------

**Why are we doing this project?**

In the sciences and some fields of the humanities, the prestige of the journals a researcher publishes in is often an important factor when the researcher is evaluated for grants, hiring, tenure, promotion, or other purposes. These evaluations use journal prestige as an indicator of research impact, but not only is this indicator inaccurate [1], it also discourages researchers from publishing in open access journals. We should develop more accurate metrics of research impact that depend less on journal prestige for use in these evaluations. For example, the San Francisco Declaration on Research Assessment (DORA) recommends that "[f]or the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice." [2] This may help evaluations favor research with genuinely greater impact and incentivize publishing in open access journals, which have a well-documented citation advantage [3].

**Questions we sought to answer**

1. What is the purpose of research, and what impact does it aim for?

**What we worked on**

During a full day of OAHack, we developed a structure that delineates how scholarly impact can be measured in both traditional and new ways. Because impact factor is not the only means of measuring scholarly impact, we also developed other dimensions of criteria: Does the research solve real-life issues? Can the research be shared, and can its results be reproduced, replicated, and easily accessed? More importantly, we also considered long-term issues of social justice to which a piece of research could contribute, positively or negatively. On that note, we discussed the possibility that open access data may lead to low-quality research data due to a lack of data screening. We are also concerned that open access may create an elite academic ecosystem that excludes certain users, such as those from non-English-speaking countries or institutions lacking the advanced facilities needed to reproduce research results. For each of these opportunities and challenges in creating a new landscape of scholarly impact, we searched for existing metrics that attempt to address, or have addressed, these issues. Based on different criteria for scholarly impact, we also provide viable indicators of scholarly achievement other than the impact factor of journal publications.

Our hope is not to provide a comprehensive list of criteria for evaluating academic contributions, since that is difficult if not impossible. Instead, we want to take the initiative and raise awareness that scholarly impact should be examined from multiple perspectives. While the criteria and performance indicators in our metric can vary from one setting to another, we have compiled a list of non-traditional performance criteria that can serve as a reference for assessing scholarly impact, covering more aspects than any single metric used in the past.

**Relevant Project Links & Resources:**

Mind map summary of our explorations of values, principles, implementations and measurement methods: https://mm.tt/1072293307?t=KOhGvSrbUp

Post #OAHack report out
-----------------------

**Output summary:**

The values of research that we identified are:

1. Solving problems in the world
2. Preserving and stewarding knowledge
3. Finding truth and certainty
4. Social justice

*(To complete the triad of "good, true and beautiful," we may include a fifth point on developing a theory of aesthetics, which is an aim of the discipline of architecture, for example, and is not captured by the other values.)*

The principles, implementations and measurements that they branch out into are mapped out here: https://mm.tt/1072293307?t=KOhGvSrbUp

**Discussions**

A good metric or rubric...

- Is simple
- Is trustworthy
- Speaks to the values of faculty
- Is likely to be used
- Leads to better decisions when used

**Challenges encountered?**

- We learned through conversations with faculty attendees of OAHack that the tenure and promotion process is by and large based on the opinions and reputations of the researcher involved and their peers. These opinions may or may not be supported by data.

**What's next (post #OAHack)?**

Here is a list of steps that would build on the work done:

1. Complete the tree by finding existing metrics or rubrics, or proposing new ones, and linking them to the "impactful practices".
2. Expand the tree by adding new branches that have previously been overlooked.
3. For each impactful practice, choose one or two good metrics or rubrics to present to researchers.
4. Create an online interface that collects these impactful practices and their corresponding metrics or rubrics into a comprehensive menu, from which researchers can easily choose and be linked to services that will compute the metrics for them (a rough data-model sketch follows the bibliography below).

**Bibliography**

1. The PLoS Medicine Editors (2006) The Impact Factor Game. PLoS Med 3(6): e291. https://doi.org/10.1371/journal.pmed.0030291
2. The San Francisco Declaration on Research Assessment. https://sfdora.org/read/
3. SPARC Europe (2015) The Open Access Citation Advantage Service (OACA). https://sparceurope.org/what-we-do/open-access/sparc-europe-open-access-resources/open-access-citation-advantage-service-oaca/
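As a rough illustration of step 4 above, here is a minimal sketch of how a menu linking impactful practices to candidate metrics might be represented. Every practice, metric, and provider named here is a hypothetical placeholder for illustration only, not an output of the project.

```python
# Hypothetical sketch: the practices, metrics, and providers below are
# illustrative placeholders, not agreed-upon parts of the OAHack project.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Metric:
    name: str
    provider: str  # hypothetical external service that could compute it
    notes: str = ""


@dataclass
class ImpactfulPractice:
    name: str
    value: str  # which of the research values it supports
    metrics: List[Metric] = field(default_factory=list)


# A tiny "menu" mapping impactful practices to candidate metrics or rubrics.
MENU = [
    ImpactfulPractice(
        name="Sharing reusable datasets",
        value="Preserving and stewarding knowledge",
        metrics=[Metric("dataset citation count", "example-data-metrics.org",
                        "formal citations of the deposited dataset")],
    ),
    ImpactfulPractice(
        name="Publishing open access",
        value="Solving problems in the world",
        metrics=[Metric("open-access availability", "example-oa-checker.org",
                        "share of outputs that are freely readable")],
    ),
]


def metrics_for(practice_name: str) -> List[Metric]:
    """Return the candidate metrics listed for a chosen impactful practice."""
    for practice in MENU:
        if practice.name == practice_name:
            return practice.metrics
    return []


if __name__ == "__main__":
    for metric in metrics_for("Sharing reusable datasets"):
        print(f"{metric.name} (via {metric.provider}): {metric.notes}")
```

An interface of the kind proposed could present such a menu as a browsable list and hand each selected metric off to the service that computes it.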