# Background and Rationale

The insight moment is usually taken to be a sign of truth, and many studies have found that solutions reached via an ‘Aha’ experience are likely to be correct (Danek et al., 2016, 2017; Salvi et al., 2017; Webb et al., 2016a, 2018b). However, ‘Aha’ experiences do not always predict correct solutions, and recent work has shown that false ‘Aha’ moments can be elicited artificially in laboratory experiments (Grimmer et al., in press). The new False Insight AnaDRM Task (FIAT) reliably leads participants to have insight experiences for incorrect anagram solutions by priming a semantic category and presenting anagrams that look similar to a primed associate. This effect relies on the careful combination of two specific manipulations, and so far no studies have investigated whether the FIAT effect changes depending on the instructions given to participants. In the current study, we aim to discover whether the FIAT effect persists despite warning participants of potential deception or explaining the paradigm completely.

The FIAT, as its name suggests, was inspired by the classic Deese–Roediger–McDermott (DRM) paradigm (Roediger & McDermott, 1995), which reliably elicits false memories of words that are semantically related to a list of studied primes. In this task, participants are presented with lists of 10 semantically related words (e.g. “soil”, “seedlings”, “wheelbarrow”) and told to study them for a later recall test. After studying the words, participants are asked to solve some anagrams, one of which is a word that looks similar to an intended solution (e.g. “endanger”, which looks like a semantic associate of the list: “gardener”). Participants tend to solve this so-called primed lure anagram incorrectly, but often still report experiencing an “aha” moment. These primed lure anagrams can therefore reliably elicit an “aha” moment for an incorrect solution, at a far higher rate than the false insights experienced for control anagrams not designed in a misleading configuration (Grimmer et al., in press).

Our prior work investigated the FIAT along with measures of individual differences that might predict people’s susceptibility to making the intended error (https://osf.io/4vde3/). We tested 200 participants on an updated version of the FIAT and gave them several measures of psychosis proneness and thinking style, since these measures had been shown in earlier work to correlate with the DRM effect (Corlett et al., 2009; Dehon et al., 2008; Graham, 2007; Laws & Bhatt, 2005; Watson et al., 2005). Echoing the findings of Nichols and Loftus (2019), this study found that measures of psychosis proneness and thinking style did not predict false insights on the FIAT. These results suggest that the process underlying the FIAT is impervious to individual differences and is instead a more fundamental aspect of cognition. Based on these findings, we reasoned that situational variables ought to be considered to qualify our conclusion that the FIAT works via an automatic process that cannot be ameliorated. We anticipate that participants will be less vulnerable to the FIAT effect when they are given different instructions to those used in our earlier studies. Instructional manipulations are a rational next step upon discovering a new cognitive error.
Many experimental paradigms revealing a bias, cognitive illusion or distortion have subsequently been tested on participants who are warned about the effect or taught exactly how the effect occurs, to determine whether the effect persists despite corrective instruction. The DRM paradigm has been used in a multitude of such experiments, most revealing that false memories can occur even when people try to avoid them (Calvillo & Parong, 2016; Gallo et al., 1997; Guzey & Yılmaz, 2021; McDermott & Roediger, 1998; Peters et al., 2008). Soon after Roediger and McDermott published their impactful paper publicising the DRM effect, Gallo and colleagues (1997) ran follow-up experiments showing that participants who were given explicit warnings about false memories did not differ in false memory error rates from participants who were given no warning. The rigidity of these memory intrusions with and without warnings was repeatedly shown in later studies across a range of additional manipulations of focus, decision criterion, and mood (Jou et al., 2016; McDermott & Roediger, 1998; Peters et al., 2008; Zhang et al., 2019).

Aside from the DRM paradigm, other cognitive distortion effects can persist even when participants are warned about them. For example, the illusory truth effect occurs when people rate previously seen statements as more likely to be true than unseen statements, regardless of the statements’ actual veracity (Newman et al., 2020). Jalbert and colleagues (2020) warned some of their participants that half of the presented statements were not true. They found that warning participants attenuated but did not eliminate the illusory truth effect, and only when the warning was provided both before exposure to the statements and before the truth-judgement test. Nadarevic and Aßfalg (2017) also found that warning participants reduced but did not eliminate the illusory truth effect. Another example is the misinformation paradigm, wherein participants are given misleading information after observing an event and are later asked to recall the event (Loftus et al., 1978). Participants in these experiments falsely recall irrelevant details taken from a post-event discussion when trying to remember the event itself. Monds et al. (2013) found that this effect occurred and was not diminished by warning participants that the post-event discussion could contaminate their memory for the event. Warnings have also been ineffective at reducing the revelation effect (Aßfalg & Nadarevic, 2015) and affective misattribution bias (Payne et al., 2005). We predict that our paradigm, the FIAT, will also show resistance to corrective warnings, and that the effect will be reduced but not eliminated when participants are warned about the deceptive nature of the task.

## Hypotheses

1. We predict that participants who are aware of the FIAT effect will have fewer false insights than participants who are not.
2. We predict that participants who are warned that the task contains a trick will have fewer false insights than those who are not.
3. We predict that participants who are both warned about a trick and given an explanation of the trick will show the fewest false insights.

# Method

## Participants

We simulated and analysed the results from 2000 datasets based on the mean differences we expected to find between each condition. This sensitivity analysis revealed that a between-groups design with three conditions and 63 participants in each condition would be sufficient to detect our predicted differences between the instruction conditions in all 2000 of these simulations (100%). By decreasing the mean differences in the FIAT effect between the three instruction types to derive the smallest effect size of interest (η²G = .01; Lakens et al., 2018), we could still detect a significant interaction between instruction type and anagram type in 1600 of the 2000 simulated datasets (80%). The full sensitivity analysis is available under Files > Power Simulations. We therefore decided to use a sample of 255 native English-speaking participants.
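A minimal sketch of this kind of simulation is shown below, assuming NumPy and SciPy as tooling. The group means and standard deviation are illustrative placeholders rather than the preregistered values (the actual simulations are under Files > Power Simulations), and the sketch relies on the fact that, with a two-level within-subjects factor, the instruction × anagram-type interaction is equivalent to a one-way ANOVA on each participant's primed-lure minus control difference score.

```python
# Minimal sensitivity-simulation sketch (assumed tooling: NumPy + SciPy).
# The group means and SD below are illustrative placeholders, not the values
# used in the preregistered simulations (see Files > Power Simulations).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

N_PER_GROUP = 63   # participants per instruction condition
N_SIMS = 2000      # number of simulated datasets
ALPHA = 0.05

# Hypothetical mean FIAT effects (primed-lure minus control false-insight rate)
# for the control, warning, and warning + explanation conditions.
group_means = [0.20, 0.12, 0.05]
sd = 0.25          # assumed within-group SD of the difference score

significant = 0
for _ in range(N_SIMS):
    # With a two-level within-subjects factor, the instruction x anagram-type
    # interaction reduces to a one-way ANOVA on per-participant difference scores.
    samples = [rng.normal(m, sd, N_PER_GROUP) for m in group_means]
    _, p = stats.f_oneway(*samples)
    significant += p < ALPHA

print(f"Proportion of simulations detecting the interaction: {significant / N_SIMS:.2f}")
```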
## Design and Materials

The experimental task is exactly the same as in Grimmer et al. (2021), using the lengthened version of the FIAT. In this task, participants are presented with a list of ten semantically related words associated with a certain category, e.g. ‘plants’, ‘grass’, ‘gloves’, ‘seedling’. After the list, they are presented with two anagrams to solve, in random order. One anagram is a word either taken from the list or another word from the same semantic category, e.g. ‘seedling’ or ‘botanist’. The other anagram is a so-called primed lure: a word that looks similar to another associate but neither appeared on the list nor shares semantic association with it, e.g. ‘endanger’, which looks similar to ‘gardener’ when scrambled. These primed lures induce significantly more false insights than the non-lure anagrams due to being visually similar to a primed associate (Grimmer et al., in press).

The current experiment will follow a between-groups design with three levels of the independent variable, instruction type. In addition to the original instructions used in Grimmer et al. (2021), we created two new conditions with different instruction videos (see Links below) designed to reduce or eliminate the false insight effect of the primed lure anagrams. The first group will watch a video identical to those in Grimmer et al. (in press).

In the second condition, we will attempt to encourage participants to be careful about solving the anagrams correctly, without revealing the nature of the effect completely. As in Jalbert et al. (2020), we will give the second group a brief warning that half of their solutions may be incorrect. We will alter the original instructions to warn participants against providing incorrect solutions. Unlike the control condition, there will be no specific instruction to work quickly and not to double-check solutions. Instead, we will say: “Beware… half of the anagrams have been designed to trick you into giving the wrong answer. Watch out for these anagrams and do your best to avoid being lured into giving the wrong answer!” We expect this condition to show an attenuated version of the FIAT effect.

In the final condition, we will also give the above warning along with a full explanation of the FIAT paradigm. As in McDermott and Roediger (1998) and Landau and von Glahn (2004), we will warn participants that half of the anagrams are carefully designed to look similar to a word that shares a semantic association with the study list. After showing participants a demonstration of a trial, we will reveal which anagram was the primed lure and what the correct and incorrect answers were, to thoroughly explain that half of the anagrams will be deceptive. As in the warning condition, we will also tell participants to do their best to provide the correct answer for every anagram. We expect that this condition will show no difference between false insight rates for primed lure anagrams and non-lure anagrams.
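To make the trial structure concrete, the sketch below represents one FIAT trial as a simple data record. The field names are hypothetical, and the placeholder items stand in for the remaining study-list words, which are not reproduced here.

```python
# Illustrative sketch of a single FIAT trial's structure (Python dataclass).
# Field names are hypothetical; only the stimuli quoted in the text are used.
from dataclasses import dataclass
from typing import List

@dataclass
class FIATTrial:
    study_list: List[str]   # ten semantically related words priming a category
    list_anagram: str       # solution taken from the list / same category (e.g. 'botanist')
    primed_lure: str        # correct solution of the lure anagram (e.g. 'endanger')
    primed_associate: str   # the unpresented associate the lure resembles (e.g. 'gardener')

example = FIATTrial(
    study_list=["plants", "grass", "gloves", "seedling"] + ["..."] * 6,  # placeholders
    list_anagram="botanist",
    primed_lure="endanger",
    primed_associate="gardener",
)
```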
## Procedure

The experiment will be conducted online through Qualtrics. The FIAT procedure will be identical to that of Grimmer et al. (2021). Participants will be randomly assigned to one of the three instruction conditions. At the beginning of the experiment, participants will be shown one of the three instruction videos described above. After viewing the video, they will be guided through a practice trial of the task and then asked a manipulation check question to confirm their understanding of the instructions. Participants who fail this question will be prompted to try again and will only be able to proceed once they have demonstrated understanding of the instructions.

## Planned Analyses

* 2 (anagram type: primed lure, other) × 3 (instruction condition: control, warning, warning + explanation) mixed ANOVA comparing rates of false insights between each instruction condition (see the analysis sketch below).
* If a significant main effect of instruction type is found, we will run follow-up comparisons between each pair of instruction types.
* If a significant main effect of anagram type is found, we will run follow-up comparisons between the anagram types to replicate and confirm our earlier findings that false insights occur most for primed lure anagrams.
* If a significant interaction is found between instruction type and anagram type, we will run follow-up comparisons on the rates of false insights between each anagram type at each level of instructions.
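A minimal sketch of how these analyses could be run is given below. It assumes long-format data with one row per participant and anagram type, and assumes pandas, pingouin, and SciPy as tooling; the file name and column names are hypothetical, and the preregistration does not commit to any particular analysis software.

```python
# Sketch of the planned 2 x 3 mixed ANOVA and simple-effects follow-ups.
# Assumed tooling: pandas, pingouin, SciPy. The file and column names
# (fiat_data.csv, participant, instruction, anagram_type, false_insight_rate)
# are hypothetical.
import pandas as pd
import pingouin as pg
from scipy import stats

# Long format: one row per participant x anagram type.
df = pd.read_csv("fiat_data.csv")

# 2 (anagram type: primed lure, other) x 3 (instruction condition) mixed ANOVA
# on false-insight rates.
aov = pg.mixed_anova(data=df, dv="false_insight_rate", within="anagram_type",
                     subject="participant", between="instruction")
print(aov)

# Follow-up simple effects (run only if the interaction is significant):
# primed lure vs. other anagrams within each instruction condition.
for condition, sub in df.groupby("instruction"):
    wide = sub.pivot(index="participant", columns="anagram_type",
                     values="false_insight_rate")
    t, p = stats.ttest_rel(wide["primed_lure"], wide["other"])
    print(condition, f"t = {t:.2f}, p = {p:.4f}")
```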
## Links

* [Control instructions video](https://vimeo.com/669241027/33354d704e)
* [Warning instructions video](https://vimeo.com/669241001/327ea8519f)
* [Warning + Explanation instructions video](https://vimeo.com/669240975/041fafe975)

# References

Aßfalg, A., & Nadarevic, L. (2015). A word of warning: Instructions and feedback cannot prevent the revelation effect. Consciousness and Cognition, 34, 75-86. https://doi.org/10.1016/j.concog.2015.03.016

Calvillo, D. P., & Parong, J. A. (2016). The misinformation effect is unrelated to the DRM effect with and without a DRM warning. Memory, 24(3), 324-333. https://doi.org/10.1080/09658211.2015.1005633

Corlett, P., Simons, J., Pigott, J., Gardner, J., Murray, G., Krystal, J., & Fletcher, P. (2009). Illusions and delusions: Relating experimentally-induced false memories to anomalous experiences and ideas. Frontiers in Behavioral Neuroscience, 3(53). https://doi.org/10.3389/neuro.08.053.2009

Danek, A. H., & Wiley, J. (2017). What about false insights? Deconstructing the Aha! experience along its multiple dimensions for correct and incorrect solutions separately. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.02077

Danek, A. H., et al. (2016). Solving classical insight problems without Aha! experience: 9 dot, 8 coin, and matchstick arithmetic problems. Journal of Problem Solving, 9(1), 47-57.

Dehon, H., Bastin, C., & Larøi, F. (2008). The influence of delusional ideation and dissociative experiences on the resistance to false memories in normal healthy subjects. Personality and Individual Differences, 45(1), 62-67.

Gallo, D. A., Roberts, M. J., & Seamon, J. G. (1997). Remembering words not presented in lists: Can we avoid creating false memories? Psychonomic Bulletin & Review, 4(2), 271-276. https://doi.org/10.3758/BF03209405

Graham, L. M. (2007). Need for cognition and false memory in the Deese–Roediger–McDermott paradigm. Personality and Individual Differences, 42(3), 409-418. https://doi.org/10.1016/j.paid.2006.07.012

Grimmer, H. J., Laukkonen, R., Tangen, J., & von Hippel, W. (in press). Eliciting false insights via semantic priming. Psychonomic Bulletin & Review.

Guzey, M., & Yılmaz, B. (2021). False recognitions in the DRM paradigm: The role of stress and warning. Cognitive Processing. https://doi.org/10.1007/s10339-021-01062-1

Jalbert, M., Newman, E., & Schwarz, N. (2020). Only half of what I'll tell you is true: Expecting to encounter falsehoods reduces illusory truth. Journal of Applied Research in Memory and Cognition, 9(4), 602-613. https://doi.org/10.1016/j.jarmac.2020.08.010

Jou, J., Escamilla, E. E., Arredondo, M. L., Pena, L., Zuniga, R., Perez, M., & Garcia, C. (2016). The role of decision criterion in the Deese–Roediger–McDermott (DRM) false recognition memory: False memory falls and rises as a function of restriction on criterion setting. Quarterly Journal of Experimental Psychology, 71(2), 499-521. https://doi.org/10.1080/17470218.2016.1256416

Lakens, D., Scheel, A. M., & Isager, P. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259-269. https://doi.org/10.1177/2515245918770963

Landau, J. D., & Von Glahn, N. (2004). Warnings reduce the magnitude of the imagination inflation effect. The American Journal of Psychology, 117(4), 579-593. https://doi.org/10.2307/4148993

Laws, K. R., & Bhatt, R. (2005). False memories and delusional ideation in normal healthy subjects. Personality and Individual Differences, 39(4), 775-781.

McDermott, K. B., & Roediger, H. L. (1998). Attempting to avoid illusory memories: Robust false recognition of associates persists under conditions of explicit warnings and immediate testing. Journal of Memory and Language, 39(3), 508-520. https://doi.org/10.1006/jmla.1998.2582

Monds, L. A., Paterson, H. M., & Whittle, K. (2013). Can warnings decrease the misinformation effect in post-event debriefing? International Journal of Emergency Services, 2(1), 49-59. https://doi.org/10.1108/IJES-06-2012-0025

Nadarevic, L., & Aßfalg, A. (2017). Unveiling the truth: Warnings reduce the repetition-based truth effect. Psychological Research, 81(4), 814-826. https://doi.org/10.1007/s00426-016-0777-y

Newman, E. J., Jalbert, M. C., Schwarz, N., & Ly, D. P. (2020). Truthiness, the illusory truth effect, and the role of need for cognition. Consciousness and Cognition, 78, 102866. https://doi.org/10.1016/j.concog.2019.102866

Nichols, R. M., & Loftus, E. F. (2019). Who is susceptible in three false memory tasks? Memory, 27(7), 962-984. https://doi.org/10.1080/09658211.2019.1611862

Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. (2005). An inkblot for attitudes: Affect misattribution as implicit measurement. Journal of Personality and Social Psychology, 89(3), 277-293. https://doi.org/10.1037/0022-3514.89.3.277

Peters, M. J. V., Jelicic, M., Gorski, B., Sijstermans, K., Giesbrecht, T., & Merckelbach, H. (2008). The corrective effects of warning on false memories in the DRM paradigm are limited to full attention conditions. Acta Psychologica, 129(2), 308-314. https://doi.org/10.1016/j.actpsy.2008.08.007

Roediger, H. L., & McDermott, K. B. (1995). Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(4), 803-814. https://doi.org/10.1037/0278-7393.21.4.803
Salvi, C., Bricolo, E., Kounios, J., Bowden, E., & Beeman, M. (2016). Insight solutions are correct more often than analytic solutions. Thinking & Reasoning, 22(4), 443-460. https://doi.org/10.1080/13546783.2016.1141798

Watson, J. M., Bunting, M. F., Poole, B. J., & Conway, A. R. (2005). Individual differences in susceptibility to false memory in the Deese–Roediger–McDermott paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(1), 76-85. https://doi.org/10.1037/0278-7393.31.1.76

Webb, M., Little, D., & Cropper, S. (2018). Once more with feeling: Normative data for the aha experience in insight and noninsight problems. Behavior Research Methods, 50(5), 2035-2056. https://doi.org/10.3758/s13428-017-0972-9

Webb, M. E., Cropper, S. J., & Little, D. R. (2019). "Aha!" is stronger when preceded by a "huh?": Presentation of a solution affects ratings of aha experience conditional on accuracy. Thinking & Reasoning, 1-40. https://doi.org/10.1080/13546783.2018.1523807

Zhang, W., Gross, J., & Hayne, H. (2019). Mood impedes monitoring of emotional false memories: Evidence for the associative theories. Memory, 27(2), 198-208. https://doi.org/10.1080/09658211.2018.1498107