Digital Forensics Meets Social Media: When Breaking News Hits Your Feed, a “Media Verification Assistant” Can Help Separate Fact from Fiction

By Lori Cameron
Published 05/22/2018

When breaking news floods our social feeds—from legitimate news outlets to anyone with a cell phone—it’s hard to figure out what to believe.

That’s why researchers at the University of Southampton and the Information Technologies Institute in Greece, which is part of the Centre for Research and Technology Hellas (CERTH), are working on a new digital forensics platform they call a “media verification assistant” to help us separate fact from fiction.

The old adage “seeing is believing” just isn’t true anymore. Digital media has made it easy for anyone to doctor images and video to misrepresent the truth. Meanwhile, flat-out lies and half-truths litter the digital landscape in ways that are difficult to rein in.

“As soon as a breaking news event starts trending on Twitter, it is accompanied by considerable numbers of false claims and content misuse. This involves the use of multimedia for misinforming the public and misrepresenting people, organizations, and events. Misuse practices range from publishing content that has been digitally tampered using photo-editing software to falsely associating content with an unfolding event,” say Stuart E. Middleton, Symeon Papadopoulos, and Yiannis Kompatsiaris, authors of “Social Computing for Verifying Social Media Content in Breaking News.” (login may be required for full text)

Because news organizations need to verify eyewitness media in a very short time, journalists are turning to social-computing approaches that automatically analyze and verify user-generated content in real time. The goal is twofold: to discover whether multimedia has been tampered with, and to find fast, cost-effective ways to fact-check information sources.

Some current solutions hold promise, but it may take deep learning to ensure the creation of a successful media verification assistant, the authors say.

Using image and video forensics to verify multimedia content

When pictures and video are tampered with, many forensics methods can easily detect changes by examining metadata that editing software often attaches to the files.

However, when multimedia files are uploaded to sites like Facebook and Twitter, most of that metadata is stripped away.
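To make that concrete, here is a minimal sketch in Python using the Pillow library that pulls a few standard EXIF fields editing software commonly rewrites. The file name is a placeholder, and on an image re-shared through a social platform this check would typically come back empty, because the metadata has been stripped.

```python
# Illustrative only: a minimal check for editing-software traces in EXIF
# metadata, using Pillow. Files re-shared through social platforms will
# usually return nothing here, because the metadata has been stripped.
from PIL import Image
from PIL.ExifTags import TAGS

def editing_traces(path: str) -> dict:
    """Return EXIF fields that often reveal post-processing, if present."""
    exif = Image.open(path).getexif()
    readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # "Software" and "DateTime" are standard EXIF tags frequently rewritten
    # by editors such as Photoshop; their absence is itself a signal that
    # the file may have passed through a platform that strips metadata.
    return {k: v for k, v in readable.items()
            if k in ("Software", "DateTime", "Make", "Model")}

if __name__ == "__main__":
    print(editing_traces("suspect.jpg"))  # hypothetical file name
```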

The authors describe REVEAL, an especially ambitious multi-partner project co-sponsored by the European Commission, which seeks to address these problems.

“Recent work in the REVEAL project addresses the poor performance of individual tampering-detection methods by generating tampering probability heat maps based on a number of complementary forensic-analysis algorithms. The inclusion of multiple image-forensics algorithms and side-by-side comparisons gives a powerful means to journalists to understand where possible digital tampering has occurred,” they say.

A digital-forensics platform for image verification (screenshot taken from the Media Verification Assistant).
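The fusion idea is straightforward to sketch. Assuming each forensic detector returns a per-pixel tampering probability map, combining complementary maps can be as simple as averaging them. In the sketch below, the detector outputs are random stand-ins, not REVEAL’s actual algorithms.

```python
# Illustrative sketch: fuse per-pixel tampering probabilities from several
# complementary forensic detectors into one heat map. The detectors below
# are placeholders; REVEAL's actual algorithms are not reproduced here.
import numpy as np

def fuse_heat_maps(maps: list) -> np.ndarray:
    """Average several [0, 1] probability maps of identical shape."""
    stacked = np.stack(maps)        # shape: (n_detectors, height, width)
    return stacked.mean(axis=0)     # simple fusion; taking the max is another option

# Hypothetical outputs of three complementary detectors on a 4x4 image.
ela_map   = np.random.rand(4, 4)   # e.g., error-level analysis
noise_map = np.random.rand(4, 4)   # e.g., noise-inconsistency analysis
cfa_map   = np.random.rand(4, 4)   # e.g., CFA-interpolation analysis

heat = fuse_heat_maps([ela_map, noise_map, cfa_map])
print((heat > 0.7).astype(int))    # flag regions with high fused probability
```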

The authors also discuss TUNGSTENE, a software program described as the only tool of its kind, which analyzes digital images and photographs for anomalies using filters and algorithms.

This was the same software used to determine that images the Russian government published as part of its investigation into the downing of Malaysia Airlines Flight 17 over Ukraine in 2014 had been significantly altered.

Verifying eyewitness media using fact extraction and fact-checker websites

The first order of business in fact-checking is to sift through the mountains of text that pile up on social media after an event and extract “facts” worth verifying. Some half-truths, embellishments, and borderline opinions aren’t worth the time it takes to verify them. Other statements are.

“Fact identification approaches, especially for news-related sources, try to classify sentences into nonfactual, unimportant factual, and ‘check worthy’ factual statements so that they can be filtered prior to fact extraction. Fact extraction is a type of information extraction problem that runs alongside information extraction techniques for concepts such as event, topic, location, and time,” the authors say.
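One way to picture that sentence-classification step is an ordinary supervised text classifier. The sketch below uses scikit-learn with the paper’s three labels; the training sentences are invented for illustration, and a real system would need a properly labeled corpus and richer features.

```python
# Minimal sketch of check-worthiness classification with scikit-learn.
# Training sentences and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "I can't believe this is happening",           # nonfactual (reaction)
    "The sky was grey over the airport today",     # unimportant factual
    "Officials say 30 people were on the plane",   # check-worthy factual
    "This is so scary, praying for everyone",      # nonfactual
    "The flight departed at 7:30 a.m. local time", # check-worthy factual
]
labels = ["nonfactual", "unimportant", "check_worthy",
          "nonfactual", "check_worthy"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)

# Statements predicted "check_worthy" would be passed on to fact extraction.
print(clf.predict(["Police confirmed two hijackers were on board"]))
```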

An interactive real-time visualization mapping extracted facts and eyewitness media in posts about the December 2016 Malta plane hijacking (screenshot taken from the Journalist Decision Support System).

Several algorithms use verb phrases and noun phrases to determine context. Other systems rely on domain-specific databases like PolitiFact or web-scale datasets like DBpedia.
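For illustration, a rough version of that phrase-based extraction can be assembled from spaCy’s noun chunks, part-of-speech tags, and named entities. This is a generic sketch, not the authors’ extractor, and it assumes the en_core_web_sm model has been downloaded.

```python
# Rough sketch of phrase-based fact extraction with spaCy (not the authors'
# system). Requires: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Hijackers diverted a Libyan passenger plane to Malta in December 2016.")

noun_phrases = [chunk.text for chunk in doc.noun_chunks]
verbs = [token.lemma_ for token in doc if token.pos_ == "VERB"]
entities = [(ent.text, ent.label_) for ent in doc.ents]

print(noun_phrases)  # candidate fact arguments
print(verbs)         # candidate fact predicates
print(entities)      # concepts such as event, location, and time
```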

The research and dataset landscape for researchers interested in verification of social media and news-related content.

The authors underscore the collaborative efforts of researchers under the REVEAL project in speeding up the task of verifying user-contributed information and content sourced from social media platforms.

They say REVEAL “is one of the first efforts to bring together those technologies under a single platform that could provide comprehensive verification support to professional users.”

How deep learning could conquer challenges to social media verification

A media verification assistant is on its way, but it faces several challenges.

“One key challenge involved in delivering such an integrated solution is the lack of an appropriate human–computer interaction (HCI) approach that would empower users (e.g., journalists) to make optimal use of the technologies described above. Given the extensive use of algorithms, an effective HCI approach would need to build the trust of users by providing intuitive control and a clear explanation of the results,” the authors say.

Deep learning could assist the ongoing development of a media verification assistant.

“The problem of real-time verification of user-generated content is expected to remain unsolved in the near future, but marked improvements have already been achieved on individual parts of the verification process thanks to social-computing approaches incorporating intelligent information processing. In the future, we anticipate considerable progress on this problem by incorporating the latest advances from deep learning. One example would be employing generative adversarial networks to build highly accurate and robust models for visually distinguishing between tampered-with and untampered-with regions in multimedia content,” the researchers wrote.
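One way to picture that last idea is a discriminator network that scores each pixel of an image as tampered or untampered. The PyTorch sketch below is deliberately tiny; the architecture, data, and training step are placeholders for illustration, not a published model.

```python
# Illustrative PyTorch sketch of a per-pixel tamper discriminator, in the
# spirit of the GAN-based direction the authors describe. Architecture and
# data are placeholders, not a published REVEAL model.
import torch
import torch.nn as nn

class TamperDiscriminator(nn.Module):
    """Maps an RGB image to a per-pixel tampering probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one logit per pixel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))     # probabilities in [0, 1]

model = TamperDiscriminator()
image = torch.rand(1, 3, 64, 64)              # fake batch: one 64x64 RGB image
mask = (torch.rand(1, 1, 64, 64) > 0.9).float()  # fake tamper ground truth

loss = nn.functional.binary_cross_entropy(model(image), mask)
loss.backward()                               # one illustrative training step
print(loss.item())
```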


About Lori Cameron

Lori Cameron is a Senior Writer for the IEEE Computer Society and currently writes regular features for Computer magazine, Computing Edge, and the Computing Now and Magazine Roundup websites. Contact her at l.cameron@computer.org. Follow her on LinkedIn.