Exploratory Reports (ERs) are a format for empirical submissions that address relatively open research questions, without strong a priori predictions or hypotheses. These studies are abductive (i.e., often starting from an observation) and inductive/hypothesis-generating (i.e., going from data to hypothesis). Authors may therefore run as many analyses on a dataset as they like, as long as they report them openly. These analyses should, however, generate predictions, and in some cases these predictions can and should already be tested.

At this stage, we are limiting ERs to two types: machine learning and cross-validation. (We include machine learning as a separate ER type even though it often, but not always, involves cross-validation; conditional random forests and autoencoding are counterexamples.) Cross-validation can be done using traditional inferential statistics, machine learning, or another analysis approach. For research using cross-validation, we expect authors to submit a results-blind submission for the validation part of their manuscript. At least one validation set is required, and a second is highly encouraged. The analyses for the validation sets will be blinded to reduce publication bias. Authors are also asked not to analyze the data in their validation sets prior to submission.

For those unfamiliar with exploratory research, we recommend reading Yarkoni and Westfall, viewing Rick Klein’s primer, and running through the tutorials Klein has made available. These tutorials include analysis scripts for cross-validation. An analysis script for a type of supervised machine learning (conditional random forests) applied in social psychology is available from IJzerman et al. (2018). Typical exploratory reports include multiple tests and variables that go beyond basic hypothesis testing. Central to ERs is the generation of hypotheses for confirmatory research.
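The held-out validation workflow described above can be sketched as a simple data partition: exploratory analyses touch only the exploration portion, while the validation portion stays untouched until the results-blind stage. This is a minimal illustration in Python using only the standard library; the dataset size, split proportion, and function name are hypothetical, not part of the journal's requirements.

```python
import random

def exploration_validation_split(n_rows, val_fraction=0.3, seed=42):
    """Partition row indices into an exploration set and a held-out
    validation set. Per the ER policy, the validation rows should not
    be analyzed before the results-blind submission."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    indices = list(range(n_rows))
    rng.shuffle(indices)
    n_val = int(n_rows * val_fraction)
    validation = sorted(indices[:n_val])
    exploration = sorted(indices[n_val:])
    return exploration, validation

# Hypothetical dataset of 100 rows: explore on 70, hold out 30.
explore_idx, validate_idx = exploration_validation_split(100, val_fraction=0.3)
print(len(explore_idx), len(validate_idx))  # prints: 70 30
```

Fixing the random seed makes the split itself reproducible and auditable, which fits the transparent workflow the journal describes; in practice authors might register the split (e.g., on the OSF) before any analysis.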
We therefore expect the discussion of an ER to include the following:

- A hypothesis generated from the research
- The sample size needed to test the hypothesis generated from the ER
- A section constraining the generality of the authors’ hypothesis or hypotheses (see, e.g., Simons et al., 2018)

One of the ways we plan to reduce the workload for authors, editors, and reviewers is by letting our editors create a project on the Open Science Framework (OSF) after a one-page overview is submitted to the journal. This will allow authors and reviewers to work more efficiently by adopting a transparent “research workflow”. All reviews and editorial letters will be stored and will be open to our readers.

Initial submissions will be triaged by the editorial team for suitability in Stage 1. For this stage, authors are requested to e-mail (rips-irsp@ulb.ac.be) a one-page, bullet-pointed overview prior to submitting their ER for full review. We will invite authors of proposals that pass triage to submit a full manuscript for in-depth peer review (Stage 2).

Authors can collect their own data, but are also encouraged to use existing datasets (e.g., the ManyLabs datasets, see, e.g., 1 or 2; the Human Penguin Project; the European Social Survey; LISS panel data; the Eurobarometer; the International Social Survey Programme; the British Household Panel Study; the World Values Survey; the American National Election Studies). We are open to suggestions for other datasets to be advertised here.

Exploratory Reports Editors:

- Hans IJzerman, Université Grenoble Alpes, France
- Lorne Campbell, Western University, Canada

Exploratory Reports Editorial Review Board:

- Thomas Pollet, Northumbria University, United Kingdom
- Robert McIntosh, University of Edinburgh, Scotland
- Samantha Joel, Western University, Canada
- Rick Klein, Université Grenoble Alpes, France
- Yizhar Lavner, Tel Hai College, Israel
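The sample-size requirement among the discussion points above is typically addressed with a power analysis. A minimal sketch, assuming a two-group comparison and a normal approximation to the t-test; the effect size, alpha, and power values are illustrative, not prescribed by the journal.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-group comparison,
    via the normal approximation: n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2)

# Illustrative: a medium effect (d = 0.5) at alpha = .05 and 80% power.
print(n_per_group(0.5))  # prints: 63
```

The normal approximation slightly underestimates the exact t-based answer (64 per group for d = 0.5); dedicated tools such as G*Power or the pwr package in R give the exact figure and cover more designs.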