DARPA is funding new tech that can identify manipulated videos and ‘deepfakes’

The Menlo Park-based nonprofit research group SRI International has been awarded three contracts by the Pentagon’s Defense Advanced Research Projects Agency (DARPA) to wage war on the newest front in fake news. Specifically, DARPA’s Media Forensics program is developing tools capable of identifying when videos and photos have been meaningfully altered from their original state in order to misrepresent their content.

The most infamous form of this kind of content is the category called “deepfakes” — usually pornographic video that superimposes a celebrity or public figure’s likeness into a compromising scene. Though the software that makes deepfakes possible is inexpensive and easy to use, existing video analysis tools aren’t yet up to the task of identifying what’s real and what’s been cooked up.

As articulated by its mission statement, that’s where the Media Forensics group comes in:

“DARPA’s MediFor program brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform.

If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”
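
To make the quoted goals concrete, here is a hypothetical sketch of how such a platform might fuse the outputs of several independent detectors into a single integrity report. The interface, detector names and worst-case fusion rule below are invented for illustration; they are not MediFor’s actual design.

```python
# Hypothetical sketch only: the interface and the worst-case fusion
# rule are invented for illustration, not MediFor's actual design.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DetectorResult:
    name: str            # which manipulation check produced this score
    tamper_score: float  # 0.0 (looks clean) .. 1.0 (strong evidence of tampering)

def assess_integrity(media_path: str,
                     detectors: List[Callable[[str], DetectorResult]]) -> Dict:
    """Run every detector on one file and reduce the results to a report."""
    results = [detect(media_path) for detect in detectors]
    # Worst-case fusion: one strong detection is enough to flag the file.
    overall = max(result.tamper_score for result in results)
    return {
        "media": media_path,
        "overall_tamper_score": overall,
        "evidence": {result.name: result.tamper_score for result in results},
    }

# Usage (with hypothetical detector functions):
# report = assess_integrity("clip.mp4", [lip_sync_check, splice_check])
```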

While video is a particularly alarming application, detecting manipulation in still images poses its own challenges, and DARPA is researching those as well.
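
For a sense of what still-image forensics involves, the sketch below implements Error Level Analysis (ELA), a long-standing textbook technique that recompresses a JPEG and highlights regions whose compression error differs from the rest of the frame. It is purely illustrative and not one of the methods DARPA is funding.

```python
# Classic Error Level Analysis (ELA): recompress a JPEG and look for
# regions whose error level differs from the surrounding image, which
# often marks content pasted in from another source. Illustrative
# textbook technique only; not one of the funded methods.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return a difference image; bright regions recompress differently."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    # Amplify the (usually faint) differences so they are visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda value: min(255, value * 255 // max_diff))

# error_level_analysis("suspect.jpg").save("suspect_ela.png")
```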

DARPA’s Media Forensics group, also known as MediFor, began soliciting applications in 2015, launched in 2016 and is funded through 2020. For the project, SRI International will work closely with researchers at the University of Amsterdam (see their paper “Spotting Audio-Visual Inconsistencies (SAVI) in Manipulated Video” for more details) and the Biometrics Security & Privacy group of the Idiap Research Institute in Switzerland. The research group is focusing on four techniques for identifying the kinds of audiovisual discrepancies present in a video that has been tampered with: lip sync analysis, speaker inconsistency detection, scene inconsistency detection (room size and acoustics), and detection of frame drops or content insertions.
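
As a rough illustration of the intuition behind the first two techniques: in genuine footage, speech audio correlates with visible mouth motion, so an unusually low audio-visual correlation is one hint of dubbing or a face swap. The toy sketch below assumes per-frame mouth-opening and audio-energy features have already been extracted upstream; the function names and threshold are illustrative, not taken from the SAVI paper.

```python
# Toy illustration of the intuition behind lip sync analysis. Assumes
# per-frame features were extracted upstream (e.g. mouth aperture from
# facial landmarks, RMS energy from the audio track); the names and
# threshold are illustrative, not from the SAVI paper.
import numpy as np

def lipsync_score(mouth_opening: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between mouth motion and audio loudness."""
    m = (mouth_opening - mouth_opening.mean()) / (mouth_opening.std() + 1e-8)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-8)
    return float(np.mean(m * a))

def looks_dubbed(mouth_opening: np.ndarray, audio_energy: np.ndarray,
                 threshold: float = 0.3) -> bool:
    # Low audio-visual correlation is one hint of tampering; a real
    # system would combine many such cues rather than rely on one score.
    return lipsync_score(mouth_opening, audio_energy) < threshold
```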

Research awarded through the program is showing promise. In an initial round of testing last June, researchers identified “speaker inconsistencies and scene inconsistencies,” two markers of tampered video, with 75% accuracy across a set of hundreds of test videos. In May 2018, the group will conduct a similar test at a much larger scale, honing its techniques against a bigger sample of test videos.

While the project does have potential defense applications, the research team believes the aims of the program will become “front-and-center” in the near future as regulators, the media and the public alike reckon with this even more insidious strain of fake news.

“We expect techniques for tampering with and generating whole synthetic videos to improve dramatically in the near term,” a representative of SRI International told TechCrunch.

“These techniques will make it possible for both hobbyists and hackers to generate very realistic-looking videos of people doing and saying things they never did.”

