
Truth and authentication; the frightening world of video fakes

Deepfakes are an emerging video threat, catalyzed by advances in machine learning. Reuters ran its own deepfake experiment, synthesizing a fake video to build expertise against the danger.

By Giles Crosse, Reuters Community | Mar 22, 2019

Deepfake video; the state of play

Every day, the specialist team of social media producers at Reuters encounters video that has been stripped of context, mislabeled, edited, staged or even deeply modified through CGI, at times for political or commercial gain.

Whether this third-party content, most often shared on social networks, delivers truth or harm rests in the hands of the users who post it, and their intent varies.

Advances in AI-based technology mean it is now possible to create highly convincing videos showing real people – whether public figures or not – saying anything the creator desires, in any kind of setting. This means Reuters and other news organizations should be concerned about the potential for misuse of these so-called “deepfake” videos within the news agenda.

For now, at least, there is no slew of deepfakes to deal with; it’s more a looming threat. This is in contrast to the various simpler types of video fakery Reuters encounters and works to verify daily.

How Reuters is red-flagging deceitful content; experimental deepfakes

Overall, levels of video manipulation are escalating, making day-to-day operations challenging for newsrooms tasked with delivering authentic content under tight time pressure. Checking and verification demand time, skill and judgement within an increasingly fast-paced, news-first dynamic.

“From the perspective of the news business, the ability to trust or indeed identify disinformation in third party content is crucial,” says Nick Cohen, Reuters head of video products. “It’s vital for all news agencies to innovate and adapt to this challenging reporting environment.”

With the evolving threat of deepfakes in mind, Nick, along with Hazel Baker, Reuters global head of UGC newsgathering, recently constructed an in-house piece of synthetic video. This was a learning exercise designed to consider best practice when operating in this changing landscape. The results of the faked video were startling.

An interviewee was filmed in one language…

Deep fake video experiment | English

Another interviewee was filmed in another language…

Deep fake video experiment | Evangeline

The sources were combined…

The resulting construct?

Deep fake video experiment | final cut, captioned

As you can see, these types of video fakes pose a serious challenge to our ability to make factual, accurate judgements. They’re a danger, harming the reliable facts essential to democracy, politics and society, and as such we need to be prepared to detect them.

Hazel Baker, global head of UGC newsgathering, Reuters

Identifying deceitful video

From the Reuters newsroom experiment, Hazel and Nick were able to identify some red flags that helped reveal the video had been faked:

  • audio-to-video synchronization issues (illustrated in the sketch after this list)
  • unusual mouth shape, particularly with sibilant sounds
  • a static subject
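
To make the first of those markers concrete, here is a minimal sketch of how poor audio-to-video synchronization might be screened for automatically. It is an illustration, not a Reuters tool or a reliable deepfake detector: it assumes a hypothetical input file, clip.mp4, relies on the OpenCV, MediaPipe and librosa libraries, and uses the loose heuristic that in genuine footage the mouth opens wider as speech gets louder, so a weak correlation between the two is a cue for closer human review.

    import cv2
    import librosa
    import mediapipe as mp
    import numpy as np

    VIDEO_PATH = "clip.mp4"  # hypothetical input file

    # Track how far the mouth is open in each video frame.
    cap = cv2.VideoCapture(VIDEO_PATH)
    fps = cap.get(cv2.CAP_PROP_FPS)
    mouth_openings = []
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                # FaceMesh landmarks 13 and 14 are the inner upper/lower lip.
                mouth_openings.append(abs(lm[14].y - lm[13].y))
            else:
                mouth_openings.append(0.0)
    cap.release()

    # Measure speech loudness on the same per-frame grid. Depending on
    # your codecs you may need to extract the audio track first, e.g.
    # with `ffmpeg -i clip.mp4 clip.wav`.
    audio, sr = librosa.load(VIDEO_PATH, sr=16000)
    hop = int(round(sr / fps))  # one loudness value per video frame
    rms = librosa.feature.rms(y=audio, frame_length=2 * hop, hop_length=hop)[0]

    # Compare the two signals. A low correlation is only a weak signal,
    # a reason to hand the clip to a human verifier, never a verdict.
    n = min(len(mouth_openings), len(rms))
    corr = np.corrcoef(mouth_openings[:n], rms[:n])[0, 1]
    print(f"mouth/loudness correlation: {corr:.2f}")
    if corr < 0.2:  # arbitrary threshold for this sketch
        print("Weak lip-audio agreement: flag for human review.")

Heuristics like this are brittle, and as noted below, the markers they chase may vanish as the technology improves; at best they prioritize clips for the human verification work described here.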

Significantly, they also discovered that those who had knowledge of the experiment were far quicker to identify these specific markers than those coming to it unaware.

Nevertheless, these red flags may rapidly vanish as the technology improves. As such, Reuters is committed to tracking the development of deepfakes and investigating the potential for detection programmes to assist the human verification work which will always underpin analysis of third-party content.

Raising newsroom awareness of deepfake technology, and encouraging staff to trust their senses, is recommended. News organizations should employ a combination of human judgement, subject expertise and a well-established verification framework when seeking to minimize the impact of fakes.