## Statistical Plan

**Sample Size Determination and Power**

In short, there are no firm grounds for determining what sample size would produce adequate power for our questions. A total of 30 participants was selected as a minimum for detecting medium-sized effects. However, funding may not permit recruitment of the full 30 participants, and analyses will be conducted with however many participants are enrolled. The results from this study may help inform power analyses for future studies.

**Interim Monitoring and Early Stopping**

There is no rationale for early stopping. Participants are free to end their participation if they judge the assessments to be producing negative effects. Daily diary studies have been used for decades without any indication that they routinely cause even mild harm. There is likewise no rationale for interim monitoring of the measures being used.

*Update 3-29-22: The first ten participants' models were examined in 2020 to determine whether models of the type proposed could converge. The analyst (TLR) had no (and still has no) access to participant responses to the speech exposures. No changes to the analysis plan were planned based on these models, which converged well. However, additional experience with DSEM models (in unrelated data) has led to changes in plan (see analysis plan).*

**Analysis Plan**

As described further below, our primary intended analyses focus on Multilevel Dynamic Structural Equation Modeling (ML-DSEM). N-of-1 DSEM models will also be used as needed and appropriate (e.g., if factor structures vary widely between individuals or there are not enough participants for ML-DSEM).

**Statistical Methods**

P-technique factor analysis will be used to test the assumption that the six EMA items load on either one or two factors for at least 50% of the sample. If this is the case, a two-factor structure will be examined in Multilevel Dynamic Structural Equation Modeling (ML-DSEM). An ML-DSEM model will be fitted in which anxiety and avoidance items load on Anxiety and Avoidance factors, respectively, and those factors in turn are predicted by themselves and each other at t-1 (a sketch of this model appears below). All regressive parameters will be estimated as random slopes. Random intercepts will also be estimated. If the model permits, random error variances will also be estimated. This basic model, provided it converges with an appropriately low PSR value that remains low after twice as many iterations (following Mplus recommendations), will then be compared for fit (using the DIC) to (a) a one-factor model (all items loading on a single anxiety/avoidance factor) and (b) any idiosyncratic models revealed by the p-technique factor analysis that occur in more than two participants. Significant variance for the random slopes would provide evidence for our primary hypothesis.

The best-fitting model will be used in a further ML-DSEM model, in which average SUDS at V1 and V2 are added as between-participants variables. The random slopes from the within level will be used to predict V2 average SUDS above and beyond V1 average SUDS. Hypothesis 2 will be supported if either (a) any one of the random slopes significantly predicts V2 average SUDS or (b) the R-squared for the set of random slopes is significant. If not all of the random slopes can be included as predictors due to model convergence issues, the cross-construct slopes will be prioritized.
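To make the planned lag-1 structure concrete, below is a minimal, illustrative Mplus-style sketch. It assumes the standardized observed sum scores adopted in update 4 below (rather than latent factors) and hypothetical variable names (`id`, `anx`, `avd`); it is not the registered input file, and exact syntax will depend on the final data structure and software version.

```
TITLE: Illustrative sketch of the within-level lag-1 ML-DSEM model.
VARIABLE:
  NAMES = id anx avd;            ! hypothetical: participant id, standardized
  USEVARIABLES = anx avd;        ! anxiety and avoidance sum scores
  CLUSTER = id;
  LAGGED = anx(1) avd(1);        ! creates lag-1 copies anx&1 and avd&1
ANALYSIS:
  TYPE = TWOLEVEL RANDOM;
  ESTIMATOR = BAYES;             ! DSEM requires Bayesian estimation
  BITERATIONS = (5000);          ! minimum iterations; convergence judged by PSR
MODEL:
  %WITHIN%
  saa | anx ON anx&1;            ! random autoregressive slope, anxiety
  sav | anx ON avd&1;            ! random cross-construct slope, avoidance -> anxiety
  sva | avd ON anx&1;            ! random cross-construct slope, anxiety -> avoidance
  svv | avd ON avd&1;            ! random autoregressive slope, avoidance
  ! logv1 | anx;  logv2 | avd;   ! optional random residual variances, if the model permits
  %BETWEEN%
  saa sav sva svv;               ! random-slope variances (primary hypothesis)
  anx avd WITH saa sav sva svv;  ! intercepts are random on this level by default
```

Significant between-level variance in `saa`, `sav`, `sva`, and `svv` corresponds to the random-slope evidence named in the primary hypothesis above.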
*Update 3-29-22: As noted in the pre-registration, we planned to follow best practices in DSEM analyses. Since the original plan above was written, the following changes have been made based on the team's continued experience with the technique, additional guides in the literature, and interactions with developers of the technique. These changes were not made based on initial results, and the text below was written before the analyst had access to exposure outcomes. In other words, as originally planned, DSEM analyses are being conducted blind to exposure results, using what appear to be the best practices given current knowledge.*

*1. Factor structure will be verified both by p-technique results and by comparing a one-factor ML-DSEM model to a two-factor model.*

*2. We will examine the time- and person-invariant ML-DSEM models described by McNeish but, based on experience with other data, do not necessarily expect these models to converge (due to differences in the number of time points).*

*3. Before proceeding with lag-1 models, we will examine the guidance of a DTVEM model to determine whether cross-construct relationships might be stronger or otherwise markedly different (e.g., change in sign) across other time courses. We will attempt to include lags of interest from DTVEM models if feasible (e.g., we can easily include lag 2 or use averaging to examine only two week intervals, but beyond that additional lags may not be feasible; we will not be certain until we examine this in the data).*

*Update 4-11-22: Note that "two week intervals" in the above is a typo, and "two day intervals" is what was meant. (We do not have the data to average over two week intervals!)*

*4. To keep models as simple as possible, we will use observed variables (sum of avoidance items and sum of anxiety items, standardized) instead of factor-analytic models.*

*5. We will test whether RDSEM or DSEM models are more appropriate.*

*6. Note that attempts to include as many random effects as possible will continue.*

*7. Instead of attempting ML-(R)DSEM modeling that includes response to exposure on the between level, we will output parameters from the ML-(R)DSEM model and test prediction of response to exposure using the median parameters in standard multiple regression.*

*Update 4-29-22: We realized that our language in the original pre-registration was imprecise. The final model produces 9 random slopes, of which four (the autoregressive and cross-construct predictions between fear and avoidance) are the ones implied by the primary hypothesis. The pre-registration thus does not make it clear whether we will investigate 9 predictors or 4 (although the text seems to imply 4, which is what we remember meaning to say). Further, of the 9 possible predictors, examining their correlations finds that two conceptually related pairs are correlated highly enough (above .5) that one could worry about multicollinearity.*

*Accordingly, before the analyst (TLR) was provided with exposure data, we decided to handle this situation as follows:*

*1. Combine the fear/avoidance error slopes and the two time-effect slopes to reduce the number of predictors overall.*

*2. Set the regression up with Step 1 containing the cross-construct slopes (prioritized at one point in the pre-registration), Step 2 adding the autoregressive slopes (part of the overall priority in the main hypothesis), and Step 3 adding the remaining predictors (see the sketch following this list).*

*3. Interpret Step 1 and Step 2 primarily (this is pretty clearly what we were thinking at the time, although we did not say it as precisely as we could have), but in a situation in which Step 3 leads to all predictors being wiped out, we plan to concede that the pre-registration should have been clearer.*
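To make the stepwise structure concrete, here is a minimal sketch under stated assumptions: \(Y\) is response to exposure (see the 5-2-22 update below for how it is estimated), the \(\hat\phi\) terms are the per-participant median posterior slopes output from the ML-(R)DSEM model (with \(\hat\phi_{AV}\) denoting avoidance at t-1 predicting anxiety, and so on), and \(C_{\mathrm{err}}\) and \(C_{\mathrm{time}}\) are the combined error-variance and time-effect predictors from point 1 above. All of these symbols are illustrative rather than registered notation.

```latex
\begin{aligned}
\text{Step 1:}\quad Y &= \beta_0 + \beta_1\hat\phi_{AV} + \beta_2\hat\phi_{VA} + \varepsilon \\
\text{Step 2:}\quad Y &= \text{Step 1 predictors} + \beta_3\hat\phi_{AA} + \beta_4\hat\phi_{VV} + \varepsilon \\
\text{Step 3:}\quad Y &= \text{Step 2 predictors} + \beta_5 C_{\mathrm{err}} + \beta_6 C_{\mathrm{time}} + \varepsilon
\end{aligned}
```

Per point 3, interpretation focuses on the increments to \(R^2\) at Steps 1 and 2.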
*Update 5-2-22: To estimate the difference in average SUDS across speeches using as much data as possible, average SUDS at each speech was estimated as a latent variable (with all loadings fixed to one and intercepts constrained across time points for that SUDS rating), and a latent difference score was estimated across speeches; a sketch of this specification appears at the end of this section. We will investigate alternative ways of estimation if initial results suggest prediction. Only SUDS was selected because evidence suggested that neither dynamometer ratings nor PANAS negative affect ratings tracked with SUDS changes.*

**Missing Data**

All methods handle missing data appropriately. Participants who provide 50 or more time points of EMA data will be included in any models focusing on the relationship between anxiety and avoidance or exposure response, regardless of how much else they complete.

*Update 5-2-22: We realized today that the above is a misstatement, in that people with fewer than 100 time points were not invited to continue in the study and thus have no speech data to predict. They are included in the ML-DSEM model, however.*
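As a concrete rendering of the latent difference score specification in the 5-2-22 update above (a minimal sketch; the indices are illustrative): each SUDS rating \(j\) at speech \(k\) loads on a latent average-SUDS variable with the loading fixed to 1 and the intercept \(\tau_j\) held equal across speeches, and the change across speeches is carried by a latent difference score.

```latex
\begin{aligned}
\mathrm{SUDS}_{jk} &= \tau_j + \eta_k + \varepsilon_{jk}, \qquad k \in \{1, 2\}
  && \text{(loadings fixed to 1; } \tau_j \text{ constrained equal across speeches)} \\
\eta_2 &= \eta_1 + \Delta\eta && \text{(latent difference score)}
\end{aligned}
```

Here \(\eta_k\) is the latent average SUDS at speech \(k\), and \(\Delta\eta\) is the change in average SUDS across speeches that serves as the exposure-response criterion.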