
Little race or gender bias in an experiment of initial review of NIH R01 grant proposals

Category: Uncategorized

Description: The National Institutes of Health uses small groups of scientists to judge the quality of the grant proposals that it receives, and these quality judgments form the basis of its funding decisions. In order for this system to fund the best science, the subject experts must, at a minimum, agree as to what counts as a “quality” proposal. We investigated the degree of agreement by leveraging data from a recent experiment with 412 scientist reviewers, each of whom reviewed 3 proposals, and 48 NIH R01 proposals (half funded and half unfunded), each of which was reviewed by between 21 and 30 reviewers. Across all dimensions of NIH’s official rubric, we find low agreement among reviewers in their judgments of scientific merit. For judgments of Overall Impact, which has the greatest weight in funding decisions, we estimate that three reviewers yield a reliability of .2, and that 12 reviewers would be required to bring this reliability up to .5. Supplemental analyses found that reviewers are even less reliable in the language they use to describe proposals.
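
The step from a reliability of .2 with three reviewers to .5 with 12 is consistent with the Spearman-Brown prophecy formula, a standard way to project the reliability of an averaged rating to a different number of raters. The sketch below is illustrative only and assumes that extrapolation; the function names are ours, and the inputs are the rounded figures quoted above, not the study's underlying estimates.

def spearman_brown(r1, k):
    """Projected reliability of the mean of k independent ratings,
    given single-rater reliability r1 (Spearman-Brown prophecy formula)."""
    return k * r1 / (1 + (k - 1) * r1)

def single_rater_reliability(rk, k):
    """Invert the formula: recover single-rater reliability from the
    reliability rk of a k-rater composite."""
    return rk / (k - (k - 1) * rk)

# Three reviewers with a composite reliability of .2 imply a
# single-reviewer reliability of roughly .077 ...
r1 = single_rater_reliability(0.2, 3)
print(round(r1, 3))                      # 0.077

# ... which projects to roughly .5 for a panel of 12 reviewers,
# matching the figures quoted in the description.
print(round(spearman_brown(r1, 12), 2))  # 0.5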

License: CC-By Attribution 4.0 International


