The United States’ military research agency, DARPA, is investing millions of dollars in technology to identify doctored images and videos, known as deepfakes. Since some of these deepfakes look convincing to most people, DARPA wants to develop software to spot them before foreign governments and rogue agencies use the technology against the country.
DARPA and deepfakes
The agency has reportedly spent close to US$68 million on the problem so far. And though it has developed an AI to spot deepfakes, the technology is still nascent. The AI scans a video, frame by frame, to determine its authenticity. However, the most advanced deepfakes on the Internet are still too sophisticated for the AI to flag reliably.
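DARPA has not published the details of its detector, but the frame-by-frame idea itself is straightforward to sketch. The Python snippet below, a minimal illustration using OpenCV, runs a per-frame scorer over a video and averages the results. The `score_frame` function is purely hypothetical, standing in for whatever trained model a real system would use.

```python
import cv2  # pip install opencv-python

def score_frame(frame):
    """Placeholder for a trained per-frame manipulation detector.
    A real model might look for blending seams, inconsistent lighting,
    or unnatural facial motion; here we simply return 0.0 (no evidence)."""
    return 0.0

def manipulation_score(video_path):
    """Scan a video frame by frame and average the per-frame scores."""
    capture = cv2.VideoCapture(video_path)
    total, frames = 0.0, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        total += score_frame(frame)
        frames += 1
    capture.release()
    return total / frames if frames else 0.0

# A video would be flagged as a likely deepfake if its average score
# exceeds some tuned threshold, e.g. manipulation_score(path) > 0.5.
```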
Deepfakes pose a serious security threat to the U.S. Enemies could doctor videos of President Trump calling for nuclear war and spread them throughout the Internet to create panic. By the time authorities are able to counter the propaganda, it might be too late. There is an even bigger danger posed by deepfakes: the erosion of public trust.
“I think we as a society right now have significant trust in image or video. If we see it, then we have faith that it happened… And so the ability for an individual or a small group of people to make compelling manipulations really undermines trust,” Matt Turek, manager of the media forensics program at DARPA, told CBC News.
Identifying deepfake images
Currently, there are two popular ways of identifying deepfake images. The first method involves analyzing an image for modifications: researchers typically inspect for altered pixels or metadata, and check for reflections or shadows that defy the laws of physics. The second method verifies an image’s integrity at the moment of capture, by checking the camera’s GPS coordinates, time stamp, time zone, and other metadata.
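As a rough illustration of the first method, the sketch below reads a photo’s EXIF metadata with Pillow and flags two common red flags: an editing program listed in the Software tag, and a save timestamp that differs from the shutter timestamp. The tag IDs are standard EXIF; the list of editing programs is just an example, and real forensic tools go far deeper than this.

```python
from PIL import Image, ExifTags

# Example list of programs whose presence in the Software tag hints at editing.
EDITING_SOFTWARE = ("photoshop", "gimp", "lightroom", "snapseed")

def inspect_metadata(path):
    """Report simple metadata red flags for a single image file."""
    exif = Image.open(path).getexif()
    findings = []
    software = exif.get(0x0131)  # "Software" tag in the main IFD
    if software and any(s in str(software).lower() for s in EDITING_SOFTWARE):
        findings.append(f"Edited with: {software}")
    modified = exif.get(0x0132)  # "DateTime": when the file was last saved
    original = exif.get_ifd(ExifTags.IFD.Exif).get(0x9003)  # "DateTimeOriginal"
    if modified and original and modified != original:
        # A save time that differs from the shutter time suggests the file
        # was re-saved, and possibly altered, after capture.
        findings.append(f"Re-saved after capture: {original} -> {modified}")
    return findings
```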
While research has mostly focused on the first method, it is the second that looks more promising, since it has the potential to verify millions of images in a short period of time. In fact, U.S.-based startup Truepic is using this very idea in its smartphone app.
When a photo is taken, the Truepic app’s proprietary algorithm automatically verifies it at the moment of capture. Truepic then uploads the image and stores it on its servers. If an image resembling a stored photo later goes viral, the viral copy can be compared against the stored original to check whether it has been altered or is genuine.
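Truepic’s matching algorithm is proprietary, but the comparison step can be sketched with a simple perceptual hash: downscale both images, threshold the pixels against the mean, and count how many bits differ. A re-encoded copy of the stored original yields a small distance, while substantive edits push the distance up. This is an illustration of the idea, not Truepic’s implementation.

```python
from PIL import Image  # pip install Pillow

def average_hash(path, hash_size=8):
    """Downscale to hash_size x hash_size grayscale, threshold on the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [p > mean for p in pixels]

def hamming_distance(hash_a, hash_b):
    """Count the bit positions where the two hashes disagree."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

def looks_altered(viral_path, stored_path, threshold=5):
    """Compare a viral image against the original stored at capture time."""
    distance = hamming_distance(average_hash(viral_path),
                                average_hash(stored_path))
    return distance > threshold
```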
“As an added layer of trust and protection, Truepic also stores all photos and metadata using a blockchain — the technology behind Bitcoin that combines cryptography and distributed networking to securely store and track information,” according to Technology Review.
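The core of that approach, a hash chain in which each record commits to the one before it, can be sketched in a few lines. The block layout below is an assumption made for illustration; Truepic’s actual chain design is not public.

```python
import hashlib
import json
import time

def record_capture(chain, image_bytes, metadata):
    """Append a tamper-evident record to a simple hash chain (a sketch,
    not Truepic's actual implementation). Each block commits to the
    previous block's hash, so altering any past record breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {
        "timestamp": time.time(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,  # e.g. GPS coordinates, time zone
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return block

# Usage: chain = []; record_capture(chain, photo_bytes, {"lat": 40.7, "lon": -74.0})
```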
There is a drawback to Truepic’s solution: storing users’ images on the company’s servers raises privacy concerns. What if the photos get hacked? Will companies like Truepic prioritize monetization over privacy at some point and share users’ images with other companies? These questions need to be addressed.
Right now, deepfake creators have the upper hand, since AI-driven image and video manipulation is a new field. However, significant investment from agencies like DARPA and from private businesses should soon produce a sophisticated AI that can identify deepfakes with high accuracy.