Machine learning has enabled the creation of "deepfake videos": highly realistic footage that depicts a person saying or doing something they never did. In recent years, this technology has become more widespread, and various apps now allow an average social-media user to create a deepfake video that can be shared online. There are concerns about how this may distort memory for public events, but to date there is no evidence to support this. Across two experiments, we presented participants (N = 682) with fake news stories in the format of text, text with a photograph, or text with a deepfake video. Though participants rated the deepfake videos as convincing, dangerous, and unethical, the deepfake video format did not consistently increase false memory rates relative to the text-only or text-with-photograph conditions. While further research is needed, the current findings suggest that deepfake videos do not always distort memory for public events any more than simple misleading text.