**Title:** How do the experiential components of insight affect judgements of truth in detecting fake news?

**Collaborators:** Hilary Grimmer, Jason Tangen, Ruben Laukkonen

**Contact information:** For more information, please contact Hilary Grimmer hilary.grimmer@uqconnect.edu.au or Jason Tangen jtangen@psy.uq.edu.au.

**Background and Rationale:** Truth judgements are a fundamental part of human reasoning. Several recent studies suggest that truth judgements arise not from a conscious process of weighing evidence, but from phenomenology. For example, in a demonstration of the widely documented *illusory truth effect*, Reber and Schwarz (1999) presented true and false statements to participants in forms that were either perceptually clear and easy to read, or only moderately visible. Easy-to-read statements were rated as true significantly above chance, while the unclear statements were rated as true at chance level, regardless of the actual veracity of the statement. Further research on this effect has demonstrated that repetition increases the likelihood of a statement being regarded as true (Kelley & Lindsay, 1993). More recently, this phenomenon has been observed with fake news headlines by Pennycook, Cannon and Rand (in press), where a single instance of pre-exposure increased truth ratings for both fake and real headlines. These findings suggest that truth judgements are heavily influenced by one's experiential state at the moment of judgement.

This notion ties in to the theoretical perspective on the phenomenology of insight moments described by Laukkonen and Tangen (in press). The subjective insight experience has been found to have a strong link to feelings of certainty (truth) as well as several positive affective states (Danek & Wiley, 2017; Topolinski & Reber, 2010). They propose, therefore, that in the event of an 'aha' moment, the feelings associated with insight problem solving are taken to carry valuable information about the quality of an idea or problem solution. That is, the commonly described "eureka" experience acts as a heuristic by which the integrity of a solution is gauged, leading to the strong feelings of certainty associated with it. In line with this hypothesis, our own research has recently found that feelings of insight can be induced to increase truth judgements of unrelated propositions (see https://osf.io/wau7h): eliciting an insight experience by providing a statement that contains an anagram increases the truth ratings for that statement when the anagram is successfully solved. This is intriguing, as no previous studies have investigated the transfer of insight phenomenology to the appraisal of stimuli other than the problem solution itself. In another study, Laukkonen, Ingledew, Schooler and Tangen (unpublished data) used a dynamometer to measure participants' subjective feelings of warmth as they solved insight and analytic problems. They found that this objective measure of insight phenomenology predicted accuracy and confidence in problem solutions. These findings show a clear link between so-called gut feelings and truth judgements.

A growing body of literature exists on the phenomenology of insight moments, or 'aha' moments. Large analyses of self-report studies on the experience of insight have identified several key phenomenological components that underlie the subjective experience of sudden insight.
Namely: the suddenness with which a solution appears, the ease with which it is understood (processing fluency), positive affect, and confidence in the solution's veracity. This final component is the point of interest for the current study. The question of "how do we know what is true?" has been given minimal attention in psychological research. Perhaps this question at first seems ineffectual, as humans tend to look toward the weighing of material evidence to answer it. However, the clear influence of subjective experience on judgement and decision-making has been observed across a range of domains. In the current political climate, people's ability to accurately distinguish fact from fiction is an increasing concern given the phenomenon of fake news, yet very little research on this phenomenon exists in the literature. Investigating the phenomenological components of truth could therefore provide a better understanding of how we can harness our intuitive truth judgements.

**The Present Research:** We propose a study in which feelings of insight are induced by completing a popular insight task, and performance on a fake news detection task is tested. This manipulation aims to induce the subjective experience of "truthiness" by raising feelings of positive affect and cognitive ease through solving an easy, intuitive "System 1" task, and to test whether this leads to a criterion shift in judging news articles to be true.

**Participants:** We determined that a sample size of 100 was sufficient to detect a small to moderate effect size (.3 to .5; see Dougal and Schooler, 2007) in a within-groups design with a power of 0.8. Participants will be 100 undergraduate students from The University of Queensland.

**Design and Materials:** Feelings associated with insight will be induced through the completion of a traditional insight task, the compound remote associates (CRA) test (Bowden & Jung-Beeman, 2003). The opposite effect will be achieved through the completion of a difficult task that requires sequential deliberation: a series of maths problems involving order of operations. The dependent measure will be the detection of fake news. This will be measured by embedding several tasks within an article, so that the participant must solve them intermittently while reading the article. The fake articles will be chosen by searching websites that have previously been confirmed to be fake, and compiling a set of 6 fake articles. These articles will be added to a set of 6 real articles taken from verified news websites. After completing the article and the embedded tasks, participants will be asked to rate how true they feel the article to be on a scale of 1-12. Other questions about the article will also be asked (e.g., "How surprised did this article make you feel?" and "How positive did this article make you feel?"); these were included to obscure truth as the primary DV and as exploratory variables to further examine dimensions of the insight experience, and so are not included in our planned analyses. The full copies of the counterbalanced materials sets can be found in the OSF project files folder titled 'Materials'.

![enter image description here][1]

**Measures:** Truth judgements will be reported on a scale of 1-12, with 1 being "Not at all true" and 12 being "Very true." Judgements will also be coded dichotomously: ratings of 7 and over will be considered a response of "true", while ratings of 6 and under will be considered "false" responses. Accuracy of these responses will be calculated by comparing the coded response to the actual truth of each news article. Other questions about the article will also be asked and answered on a 12-point scale. These are: "How surprising did you find this article?", "How positively/negatively did this article make you feel?", and "During solving these problems, how often did you experience insight moments?". For each participant, an accuracy score will be calculated signifying the proportion of fake articles correctly identified as fake.
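To make this coding concrete, the sketch below (in Python) shows how the dichotomisation and the per-participant accuracy score could be computed. The trial data structure and variable names are illustrative assumptions of ours, not part of the preregistered materials or analysis scripts.

```python
# Illustrative sketch only: dichotomise 1-12 truth ratings and compute the
# per-participant accuracy score (proportion of fake articles judged "false").
# The list-of-dicts data layout is an assumption made for this example.

trials = [
    {"participant": 1, "rating": 9, "ground_truth": "real"},
    {"participant": 1, "rating": 4, "ground_truth": "fake"},
    {"participant": 1, "rating": 8, "ground_truth": "fake"},
]

def dichotomise(rating):
    """Ratings of 7-12 are coded "true"; ratings of 1-6 are coded "false"."""
    return "true" if rating >= 7 else "false"

def fake_detection_accuracy(trials, participant):
    """Proportion of fake articles correctly judged "false" by one participant."""
    fakes = [t for t in trials
             if t["participant"] == participant and t["ground_truth"] == "fake"]
    correct = sum(1 for t in fakes if dichotomise(t["rating"]) == "false")
    return correct / len(fakes) if fakes else float("nan")

print(fake_detection_accuracy(trials, participant=1))  # 0.5 in this toy example
```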
**Procedure:** Participants are given an information sheet along with verbal instructions, and will then be given a set of 6 short fake news articles and 6 short real news articles, presented in a randomised order. Within each article, several short problem-solving tasks will be presented. These will be either CRA tasks, which elicit insights, or multiplication tasks, which do not. The combinations of each article and each task type will also be randomised. Thus, at several points throughout reading an article, participants will have to stop reading and solve these problems, then continue reading. Once they reach the end of each article, participants will answer questions about the article by indicating their answer on a 12-point scale.

**Planned Analyses:**

- ANOVA: Problem Type (CRA vs Analytic) x Ground Truth (True vs False), with truthiness ratings as the DV.
- Signal detection: Ground Truth x Problem Type, with discriminability and response bias as the DVs (a computational sketch follows the hypotheses below).
    - Discriminability: how well people distinguish between true and false articles, collapsing over hit and false alarm rates.
    - Response bias: people's tendency to say "true" or "false" regardless of the actual ground truth of the items.
- Solved vs. unsolved CRAs, with truthiness ratings as the DV.
- High "experienced insight moment" ratings (9-12 on the 12-point scale) vs low "experienced insight moment" ratings (1-4 on the 12-point scale), with truth as the DV.

**Exclusion Criteria:**

1. Any trials where no CRA or multiplication problems are solved will be removed from the analysis comparing the two problem types.
2. Any trials where no insights are reported in the CRA condition will be removed from the analysis comparing the two problem types.
3. Each participant will be asked after completing the experiment whether they had already seen each article in the news or somewhere on the internet before participating. Any items that participants mark "yes" to will be eliminated from further analysis.

**Hypotheses:**

**1.** In the insight problem condition, ratings of truth will be higher overall than those in the analytic condition.

**2a.** When reading an article that contains embedded insight problems, participants will adopt a more liberal criterion in their truth judgements, being more likely to say the article is true.

**2b.** When reading an article that contains embedded analytic problems, participants will adopt a more conservative criterion in their truth judgements, being more likely to say the article is false.

**3.** Within the insight condition, articles for which the problems were correctly solved will have higher truth ratings than articles for which the insight problems were not solved.

**4.** Within the insight condition, trials with high reported insight occurrence will have higher truth ratings.
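As a minimal illustration of the planned signal detection analysis and of the criterion shift predicted by hypotheses 2a and 2b, the sketch below computes discriminability (d') and response bias (criterion c) from the dichotomised responses. It is written in Python with scipy; the log-linear correction, the toy counts, and the variable names are our own assumptions rather than part of the preregistered analysis scripts.

```python
# Illustrative sketch: compute d' (discriminability) and criterion c (response bias)
# from dichotomised truth judgements. A "hit" is calling a real article "true";
# a "false alarm" is calling a fake article "true". The log-linear correction for
# extreme proportions is an assumption, not part of the preregistration.

from scipy.stats import norm

def sdt_measures(hits, n_real, false_alarms, n_fake):
    """Return (d_prime, criterion) for one participant in one problem-type condition."""
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (n_real + 1)
    fa_rate = (false_alarms + 0.5) / (n_fake + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # sensitivity to real vs fake articles
    criterion = -0.5 * (z_hit + z_fa)   # negative values = liberal ("true"-biased)
    return d_prime, criterion

# Toy counts for illustration only.
d_insight, c_insight = sdt_measures(hits=5, n_real=6, false_alarms=4, n_fake=6)
d_analytic, c_analytic = sdt_measures(hits=5, n_real=6, false_alarms=2, n_fake=6)
print(d_insight, c_insight)    # lower (more negative) c indicates a more liberal criterion
print(d_analytic, c_analytic)  # hypotheses 2a/2b predict c_insight < c_analytic
```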
**Ethics:** Ethical clearance was obtained from The University of Queensland Psychology Ethics Review Committee on 06/04/18. The information sheet to be given to participants can be seen in the project files folder. Following the completion of the experiment, participants will be given a debriefing sheet (see project files). Participants will all be given the opportunity to ask questions and to contact the researcher for further information. Participation will be entirely voluntary, and participants are free to withdraw at any time.

**References**

Danek, A. H., & Wiley, J. (2017). What about False Insights? Deconstructing the Aha! Experience along Its Multiple Dimensions for Correct and Incorrect Solutions Separately. Frontiers in Psychology, 7. doi:10.3389/fpsyg.2016.02077

Topolinski, S., & Reber, R. (2010). Gaining Insight Into the Aha Experience. Current Directions in Psychological Science, 19(6), 402-405. doi:10.1177/0963721410388803

Laukkonen, R., Ingledew, D., Schooler, J., & Tangen, J. M. (2018, March 15). The phenomenology of truth: The insight experience as a heuristic in contexts of uncertainty. http://doi.org/10.17605/OSF.IO/9W56M

Laukkonen, R., Schooler, J., & Tangen, J. M. (2018). How do we know when our ideas are true? The Eureka Heuristic and The Insight Fallacy. Retrieved from psyarxiv.com/ez3tn

Pennycook, G., Cannon, T., & Rand, D. G. (in press). Prior Exposure Increases Perceived Accuracy of Fake News. Journal of Experimental Psychology. doi:10.2139

[1]: https://files.osf.io/v1/resources/sxdgu/providers/osfstorage/5aea64439a64d7000ee755bf?mode=render