Below we outline several methods for mathematically aggregating subjective probability judgements from the individuals in a group. Individuals in this project evaluate the replicability of findings (‘claims’) from the social and behavioural sciences. Specifically, we ask them to estimate the “probability that direct replications of this study would find a statistically significant effect in the same direction as the original claim”.
We elicit these judgements with a modified Delphi approach called the IDEA protocol (Hemming et al. 2018). For each research claim assessed, participants follow this process:
1) Read the claim and scan the paper it came from (as individuals)
2) Provide an anonymous estimate of replicability, including the (i) lower bound, (ii) upper bound and (iii) best estimate of the probability that the claim would successfully replicate, together with their justifications for their estimates (as individuals)
3) Receive feedback about how their individual estimates differ from others’ and from the average group response (as groups)
4) Discuss differences of opinion on the claim with the rest of their group, and ‘consider the opposite’ (reasons why a claim may or may not successfully replicate) (as groups)
5) Provide a second anonymous estimate of replicability that incorporates insights gained through discussion (as individuals).
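As a minimal sketch of the simplest aggregation method alluded to above, the second-round best estimates can be combined with an unweighted linear pool (an arithmetic mean). The function name and the triple format `(lower, best, upper)` are illustrative assumptions, not part of the protocol itself:

```python
from statistics import mean

def aggregate_best_estimates(estimates):
    """Unweighted linear pool: average the group's best estimates.

    `estimates` is a list of (lower, best, upper) probability triples,
    one per participant, as elicited in step 5. This simple mean is
    only one of several possible aggregation methods; others might
    weight participants or use the interval widths.
    """
    for lower, best, upper in estimates:
        # Each triple must be an ordered set of probabilities.
        if not (0.0 <= lower <= best <= upper <= 1.0):
            raise ValueError("require 0 <= lower <= best <= upper <= 1")
    return mean(best for _, best, _ in estimates)

# Example: three participants' second-round triples.
group = [(0.20, 0.40, 0.60), (0.30, 0.50, 0.80), (0.10, 0.30, 0.50)]
pooled = aggregate_best_estimates(group)  # mean of 0.40, 0.50, 0.30
```

More sophisticated pooling schemes can reuse this structure by replacing the mean with a weighted or transformed combination.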