## Analytic Plan

To test whether partners demonstrate directional bias, tracking accuracy, and assumed similarity in their perceptions of each other's relational boredom, we will use West and Kenny's (2011) Truth and Bias (T&B) model of judgment. Our data have a nested structure: perceivers' and partners' repeated ratings of relational boredom across the 25-day diary period (relational boredom was assessed on 13 of those days) are at Level 1, nested within dyads at Level 2. First, we will examine the associations between the perceivers' judgments of their partner's relational boredom and the partners' actual reported relational boredom (the Level 1 repeated-measures variables) to test the degree to which judgments of the partner's relational boredom are biased and accurate. The intercept and the effect of partners' actual relational boredom ratings will be averaged across perceivers and days (see also Kenny, Kashy, & Cook, 2006; Overall, Fletcher, & Kenny, 2012). In accordance with the T&B model (West & Kenny, 2011), the perceivers' judgments of their partner's relational boredom (the outcome variable) will be centered on the partners' actual relational boredom ratings by subtracting the grand mean of all the partners' relational boredom ratings (i.e., the mean across days and dyads) from the perceivers' judgments on each day.

Centering in this way means that the intercept represents the difference between the mean of the perceivers' judgments and the mean of the partners' actual relational boredom ratings. The average of this coefficient across perceivers tests whether their judgments differ from the partners' actual ratings across all days, as well as indicating the direction of that bias (i.e., directional bias, H1). A negative average intercept indicates that perceivers generally underestimate partners' relational boredom, whereas a positive average intercept indicates that perceivers generally overestimate it. The effect (slope) of the partners' actual relational boredom ratings on the perceivers' judgments of those ratings reflects tracking accuracy (H2), and the effect (slope) of the perceivers' own relational boredom ratings on their judgments of their partner's relational boredom reflects assumed similarity. A positive slope indicates greater tracking accuracy or assumed similarity, respectively.
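A minimal sketch of this specification is shown below in Python (statsmodels), purely as an illustration. The file name, the column names (`dyad_id`, `judgment`, `partner_actual`, `own_boredom`), and the simple random-intercept structure are assumptions; the actual analysis may be run in other multilevel software and would typically add random truth and bias slopes and a dyadic error structure.

```python
# Illustrative sketch of the Truth and Bias model described above.
# Assumes long-format daily data with hypothetical columns:
#   dyad_id        - dyad identifier
#   judgment       - perceiver's judgment of the partner's relational boredom
#   partner_actual - partner's own reported relational boredom
#   own_boredom    - perceiver's own reported relational boredom
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("daily_boredom.csv")  # hypothetical file name

# Center the outcome (and, following West & Kenny's recommendation, the
# predictors) on the grand mean of the partners' actual boredom ratings
# across days and dyads.
truth_mean = df["partner_actual"].mean()
df["judgment_c"] = df["judgment"] - truth_mean
df["partner_actual_c"] = df["partner_actual"] - truth_mean
df["own_boredom_c"] = df["own_boredom"] - truth_mean

# Intercept         -> directional bias (negative = underestimation)
# partner_actual_c  -> tracking accuracy (truth force)
# own_boredom_c     -> assumed similarity (bias force)
model = smf.mixedlm(
    "judgment_c ~ partner_actual_c + own_boredom_c",
    data=df,
    groups=df["dyad_id"],  # repeated days nested within dyads
)
print(model.fit().summary())
```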
Next, to explore the consequences of bias and accuracy in perceptions of relational boredom, we will conduct analyses using multilevel polynomial regression with response surface analysis (RSA; Edwards, 2002), following the guidelines of Shanock, Baran, Gentry, Pattison, and Heggestad (2010). These analyses allow us to test how the degree of agreement between perceivers' judgments and partners' actual ratings (i.e., accuracy) and the direction of disagreement (i.e., bias) are associated with global relationship quality. As per the guidelines outlined in Shanock et al. (2010), we will center the scores for perceptions of the partner's boredom and the partner's actual reported boredom on the midpoint of the scale (i.e., 4). Next, we will create squared versions of these two variables and a product term (perceptions of the partner's boredom × the partner's actual boredom) and enter all five variables as predictors. The output from the polynomial regression models is not interpreted directly; rather, it is used to examine the significance of four surface test values (a1, a2, a3, and a4). The following indicates how each of these surface test values would be interpreted if it occurred in isolation; a computational sketch of this step appears at the end of this section.

However, these effects rarely occur in isolation, and there is no strict guideline for how to interpret them together. It is therefore up to the researcher to weigh the size of each effect and its plausibility in light of previous research and theory. The line of perfect agreement represents the levels of relationship quality when perceivers' and partners' ratings of boredom are essentially the same. The slope of the line of perfect agreement is represented by a1, which allows us to answer whether matches at high values have different outcomes than matches at low values. The curvature along the line of perfect agreement is represented by a2, which allows us to determine whether matches at extreme values have different outcomes than matches at less extreme values. The line perpendicular to the line of perfect agreement is the line of incongruence, which represents the levels of relationship quality when perceivers' and partners' ratings of relational boredom are not in agreement. The slope of the line of incongruence is represented by a3, which allows us to answer whether one mismatch is better or worse than the other (i.e., whether overestimation is better or worse than underestimation). The curvature along the line of incongruence is represented by a4 and is a proxy for tracking accuracy, as it allows us to answer whether matches between perceivers' perceptions and partners' actual ratings are better than mismatches at predicting outcomes (cf. Barranti et al., 2017). Thus, our primary focus is to examine how directional bias (a3) and accuracy (a4) are associated with relationship quality (H4, H5, and H6).

We may also run additional analyses to confirm that partners' relational boredom did in fact fluctuate over the course of the study, as it has been speculated to do in previous research. It is possible that boredom fluctuates over more extended periods of time (e.g., months or years) rather than at the daily level. Therefore, despite indications in previous research that boredom fluctuates over time, these analyses will remain exploratory given the current study design.
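The sketch below illustrates the polynomial regression step and the computation of the four surface test values, again in Python with hypothetical variable names (`quality` for global relationship quality; `judgment`, `partner_actual`, and `dyad_id` as above). The a1–a4 formulas follow Shanock et al. (2010); significance tests for the surface values (e.g., via the delta method or bootstrapping) are omitted from this sketch.

```python
# Illustrative sketch of the multilevel polynomial regression with
# response surface values, assuming the same hypothetical daily data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("daily_boredom.csv")  # hypothetical file name

MIDPOINT = 4  # scale midpoint used for centering, per Shanock et al. (2010)
df["X"] = df["judgment"] - MIDPOINT        # perceiver's judgment of partner's boredom
df["Y"] = df["partner_actual"] - MIDPOINT  # partner's actual reported boredom
df["X2"] = df["X"] ** 2
df["Y2"] = df["Y"] ** 2
df["XY"] = df["X"] * df["Y"]

# quality = global relationship quality (outcome); all five polynomial
# terms entered as predictors, with days nested within dyads.
model = smf.mixedlm("quality ~ X + Y + X2 + XY + Y2",
                    data=df, groups=df["dyad_id"])
b = model.fit().params

# Surface test values (Shanock et al., 2010; Edwards, 2002)
a1 = b["X"] + b["Y"]              # slope along the line of perfect agreement
a2 = b["X2"] + b["XY"] + b["Y2"]  # curvature along the line of perfect agreement
a3 = b["X"] - b["Y"]              # slope along the line of incongruence
a4 = b["X2"] - b["XY"] + b["Y2"]  # curvature along the line of incongruence
print({"a1": a1, "a2": a2, "a3": a3, "a4": a4})
```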