Recent Work on MTurk: There have been a few recent correlational findings suggesting that repeated participation (on MTurk) changes data quality. The examples I can think of include our BRM paper, the Rand paper (@Gabe - resend this to the group? Here it is), and the attached paper showing that workers can learn how to game the system over time.

Exposure to the IAT: There is also the IAT literature Kate mentioned. @Kate - can you send us some citations for IAT effects declining over time?

Clinical Testing and other cases with factually correct answers: The effect of repeated exposure to items is also well known in the clinical testing literature. I dropped a citation into our BRM paper at the last minute (Basso et al.) that discusses this. It will be a good starting point for this literature. We don't need to dig too deep into this. The basic point is that non-naivety (NN) is a problem for items that measure performance, because of practice effects.

On a related note, in the behavioral economics literature there was a big donnybrook that began in the 1980s about whether economists are more selfish than non-economists. From what I recall of this literature, there were numerous experimental games (including one done by Gilovich) showing that economics students are more selfish, but the field data were more equivocal. All of this is in line with the practice-effects story, especially given that behavioral economics generally forbids deception.

Literature on participant cross talk: Research on cross talk effectively makes the same point we want to make: foreknowledge influences responses. However, incentives are usually needed to motivate people to share information (e.g., Edlund et al., 2010, but also this).

Older literature on NN: There was an epistemological crisis in the late 1960s/1970s that is somewhat similar to the one the field is going through now. It produced a number of lines of research relevant to participant foreknowledge.

A lot of the older NN studies pertained to deception, e.g.:

Silverman, I., Shulman, A. D., & Wiesenthal, D. L. (1970). Effects of deceiving and debriefing psychological subjects on performance in later experiments. Journal of Personality and Social Psychology, 14(3), 203.

Brock, T. C., & Becker, L. A. (1966). "Debriefing" and susceptibility to subsequent experimental manipulations. Journal of Experimental Social Psychology, 2, 3-5.

For a relatively recent review see here. However, not all of them involved deception:

Glinski, R. J., Glinski, B. C., & Slatin, G. T. (1970). Nonnaivety contamination in conformity experiments: Sources, effects, and implications for control. Journal of Personality and Social Psychology, 16(3), 478.

The best review I have found of these studies is here:

Rosnow, R. L., & Aiken, L. S. (1973). Mediation of artifacts in behavioral research. Journal of Experimental Social Psychology, 9(3), 181-201.

The Rosnow paper documents many of the studies that demonstrate the effects of foreknowledge. But foreknowledge does not always matter; for a recent example, see this.

These address theoretical concerns that are related to foreknowledge, but less directly:

Greenwald, A. G. (1976). Within-subjects designs: To use or not to use? Psychological Bulletin, 83(2), 314. Note Greenwald's point that repeated measures (and, by implication, NN) might actually be a good thing for paradigms that try to make inferences about phenomena that are repeated in their natural environment.

Gergen, K. J. (1976). Social psychology as history. In Social psychology in transition (pp. 15-32). Springer US. Gergen basically argues that beliefs transmitted by psychologists can create behavior in the subjects they observe. Relevant mainly to NN in contexts where debriefings are provided, assuming (as Orne and other early researchers did) that participants are basically helpful and compliant.