Improving Detection of Deepfakes
Category: Project
Description: Deepfakes are a type of computer-generated synthetic media in which a person’s image is swapped with another person’s likeness (MIT Sloan, 2020), resulting in a highly realistic fake image or video. Programs for generating deepfakes are becoming more easily accessible, raising concerns that the technology could be used for nefarious purposes (e.g., deepfakes of prominent figures being used to spread misinformation, or deepfakes being used to blackmail or defraud; Smith & Mansted, 2020). These concerns have driven research into the public’s ability to detect deepfakes. An experiment by Kobis et al. (2021)—in which participants were shown a series of videos (half of which were deepfakes) and asked to identify the deepfakes—found that the public’s detection ability is generally low, with participants performing barely above chance. The authors also found that most participants severely overestimated their ability to detect deepfakes. The current study will use an experimental methodology similar to that of Kobis et al. (2021) to further investigate this phenomenon, with a particular focus on the effects of providing participants with strategies for detecting deepfakes (a factor not investigated in the original Kobis et al. study).