All code to reproduce all results reported in the paper:

Rausch, M., & Zehetleitner, M. (2023). Evaluating false positive rates of standard and hierarchical measures of metacognitive accuracy. *Metacognition & Learning*. https://doi.org/10.1007/s11409-023-09353-y

**Abstract**

A key aspect of metacognition is metacognitive accuracy, i.e., the degree to which confidence judgments differentiate between correct and incorrect trials. To quantify metacognitive accuracy, researchers are faced with an increasing number of different methods. The present study investigated false positive rates associated with various measures of metacognitive accuracy by hierarchical resampling from the confidence database to accurately represent the statistical properties of confidence judgements. We found that most measures based on the computation of summary statistics separately for each participant and subsequent group-level analysis performed adequately in terms of false positive rate, including gamma correlations, meta-d’, and the area under type 2 ROC curves. Meta-d’/d’ is associated with a false positive rate even below 5%, but log-transformed meta-d’/d’ performs adequately. The false positive rate of HMeta-d depends on the study design and on prior specification: For group designs, the false positive rate is above 5% when independent priors are placed on both groups, but the false positive rate is adequate when a prior is placed on the difference between groups. For continuous predictor variables, default priors resulted in a false positive rate below 5%, but the false positive rate was not distinguishable from 5% when close-to-flat priors were used. Logistic mixed model regression analysis is associated with dramatically inflated false positive rates when random slopes are omitted from the model specification. In general, we argue that no measure of metacognitive accuracy should be used unless the false positive rate has been demonstrated to be adequate.
**Instructions**

The file "MetacognitionAlphaErrorAnalysisV2.zip" contains the simulated experiments created by hierarchical bootstrap sampling from the confidence database using the R script "MeasuresOfMetaCognition_simulate_based_on_database_v2.R". The confidence database is available at https://osf.io/s46pr/.

- "MeasuresOfMetacognition_alphaerror_by_participant_v2.R" computes gamma, slopes, the area under the type 2 ROC curve, meta-d', meta-da, meta-d'/d', and meta-da/da for each simulated participant and tests for a group difference in each simulated experiment.
- "MeasuresOfMetacognition_alphaerror_by_participant_correlationAnalysis.R" computes the same measures for each simulated participant and tests for a correlation with a continuous predictor in each simulated experiment. Both scripts require the R packages plyr, snow, doSNOW, and Hmisc. The computation of meta-d' and meta-da also requires the R scripts CalculateMetaDprime.R and CalculateMetaDa.R, respectively.
- "MeasuresOfMetacognition_alphaerror_hmetad_v2.R" tests for a group difference using HMeta-d by placing independent priors on both groups. It requires the files Function_metad_group.R and hmeta-d-jagsmodel.txt.
- "MeasuresOfMetacognition_alphaerror_hmetad_regression.R" tests for a group difference using HMeta-d by placing a prior on the difference between groups. It requires the files Function_metad_regression.R and hmeta-d-jagsmodel_2.txt.
- "MeasuresOfMetacognition_alphaerror_hmetad_correlationAnalysis.R" tests for the effect of a continuous predictor on HMeta-d. It also requires the files Function_metad_regression.R and hmeta-d-jagsmodel_2.txt.
- "MeasuresOfMetacognition_alphaerror_hmetadwithweakerpriors_correlationAnalysis.R" and "MeasuresOfMetacognition_alphaerror_hmetadwithstrongerpriors_correlationAnalysis.R" also test for the effect of a continuous predictor on HMeta-d, using weaker or stronger priors, respectively. The former requires the files "Function_metad_regression_weakerPrior.R" and "hmeta-d-jagsmodel_weakerPrior.txt"; the latter requires the files "Function_metad_regression_informedPrior.R" and "hmeta-d-jagsmodel_informedPrior.txt". All scripts computing HMeta-d require the R packages plyr, snow, doSNOW, Hmisc, tidyverse, magrittr, reshape2, rjags, coda, lattice, broom, ggpubr, and ggmcmc.
- "MeasuresOfMetacognition_alphaerror_logreg_v2.R" tests for a group difference using logistic mixed model regression assuming fixed slopes as well as random slopes. "MeasuresOfMetacognition_alphaerror_logreg_correlationAnalysis.R" tests for the effect of continuous predictors. Both scripts require the packages plyr and lme4.
- "MeasuresOfMetacognition_alphaerror_analyse_2.R" computes the false positive rate for each measure of metacognitive accuracy when testing for a group difference, produces the corresponding confidence intervals as well as Bayes factors, and creates the figures. It requires the packages plyr, ggplot2, and BayesFactor. The results of the analysis are saved in the file "MeasuresOfMetacognition_alphaerror_Results_v2.RData".
- "MeasuresOfMetacognition_alphaerror_analyse_correlationAnalysis.R" computes the false positive rate for each measure of metacognitive accuracy when testing for a continuous predictor, produces the corresponding confidence intervals as well as Bayes factors, and creates the figures. It also requires the packages plyr, ggplot2, and BayesFactor. The results of the analysis are saved in the file "MeasuresOfMetacognition_alphaerror_Results_CorrelationAnalysis.RData".
- "MeasuresOfMetacognition_alphaerror_analyse_v3.R" performs two analyses requested during the review process: does the false positive rate depend 1) on sample size, and 2) on accuracy? It requires the packages plyr, ggplot2, R.utils, and BFpack. The results of the additional analyses are stored in the file "MeasuresOfMetacognition_alphaerror_Results_v3.RData".
- "MeasuresOfMetacognition_summarybasedstatistics_distributions.R" tests whether each summary-statistic-based measure of metacognition is normally distributed for each simulation. It requires the packages plyr, ggplot2, moments, and ggridges. The results are stored in the file "Distributions_SummaryBasedStatistics.RData".

**Licence**

All material is licensed under a CC-BY-NC-SA 4.0 license. For a human-readable summary of what you are allowed to do with that piece of work, see https://creativecommons.org/licenses/by-nc-sa/4.0/
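As a convenience for reproducing the analyses, the CRAN packages named across the instructions above can be installed in one step. This is a sketch only: the package names are collected from the text, versions are not pinned, and rjags additionally requires the JAGS system library to be installed separately.

```r
# Install every CRAN package named in the instructions above, skipping
# packages that are already present. Note: rjags also needs the JAGS
# system library (https://mcmc-jags.sourceforge.io/) installed first.
pkgs <- c("plyr", "snow", "doSNOW", "Hmisc",             # summary-statistic scripts
          "tidyverse", "magrittr", "reshape2", "rjags",  # HMeta-d scripts
          "coda", "lattice", "broom", "ggpubr", "ggmcmc",
          "lme4",                                        # logistic mixed models
          "ggplot2", "BayesFactor",                      # false-positive-rate analyses
          "R.utils", "BFpack",                           # additional analyses (v3)
          "moments", "ggridges")                         # distribution checks
install.packages(setdiff(pkgs, rownames(installed.packages())))
```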