Artifact Title: Deepfakes

Medium: Image and Text

Creators: Britt Paris and Joan Donovan

Year: 2019

Publisher: Data & Society

Article Title: Deepfakes, Cheap Fakes, and the Future of What We Call “Evidence”

Image Credit: CC BY-NC-SA 4.0 (creativecommons.org/licenses/by-nc-sa/4.0/legalcode); no changes have been made to the original image.

TERA Curator: Elonda Clay

Deepfakes

Deepfakes are manipulations of digitized audiovisual materials that use advanced techniques, such as AI, machine learning, CGI, and face-swapping, to hybridize or generate human bodies and faces. Although more complex to produce, deepfakes can be highly convincing to the average viewer. So-called cheap fakes, by contrast, are less convincing but still compelling; they rely on simpler methods of manipulating audiovisual materials, such as slowing down or speeding up sound and video, photoshopping, cutting and re-staging the subject, lip-syncing, or placing an image in a context that is not its original one. Manipulated media and deceptive mediated productions can blur the lines between creative expression and evidence, and spotting these kinds of manipulated materials will require a new kind of media and information literacy. The 2019 Data & Society report asks, “If video can no longer be trusted as proof that someone has done something, what happens to evidence, to truth?”