AI Can Help Win the War Against Fake News

It may have been the first bit of fake news in the history of the Internet: in 1984, someone posted on Usenet that the Soviet Union was joining the network. It was a harmless April Fools' Day prank, a far cry from today's weaponized disinformation campaigns and unscrupulous fabrications designed to turn a quick profit.

Today, misleading and maliciously false online content is so pervasive that we humans have little hope of digging ourselves out of the mire. Instead, it looks increasingly likely that the machines will have to save us.

One algorithm meant to shine a light in the darkness is AdVerif.ai, which is run by a startup of the same name. The artificially intelligent software is built to detect phony stories, nudity, malware, and a host of other types of problematic content. AdVerif.ai, which launched a beta version in November 2017, currently works with content platforms and advertising networks in the United States and Europe that don’t want to be associated with false or potentially offensive stories.

The company saw an opportunity in building a product for businesses rather than for individual users, according to Or Levi, AdVerif.ai's founder. While individual consumers might not worry about the veracity of each story they click on, advertisers and content platforms have something to lose by hosting or advertising bad content. And by changing their services, they can cut off the revenue streams of people who earn money creating fake news. "It would be a big step in fighting this type of content," Levi says.

AdVerif.ai scans content to spot telltale signs that something is amiss, such as a headline that doesn't match the body or too many capital letters in a headline. It also cross-checks each story against its database of thousands of legitimate and fake stories, which is updated weekly. Clients see a report for each piece the system considers, with scores assessing the likelihood that it is fake news, carries malware, or contains anything else they've asked the system to look out for, like nudity. Eventually, Levi says, he plans to add the ability to spot manipulated images and to offer a browser plug-in.
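AdVerif.ai's actual models aren't public, but the surface-level signals described above lend themselves to simple heuristics. The following Python sketch illustrates the general idea; every function name, threshold, and the toy example are illustrative assumptions, not the company's code.

```python
import re

def caps_ratio(headline: str) -> float:
    """Fraction of alphabetic characters in the headline that are upper-case."""
    letters = [c for c in headline if c.isalpha()]
    return sum(c.isupper() for c in letters) / len(letters) if letters else 0.0

def headline_body_overlap(headline: str, body: str) -> float:
    """Crude relevance check: share of headline words that also appear in the body."""
    stop = {"the", "a", "an", "of", "to", "in", "and", "is", "on"}
    head_words = set(re.findall(r"[a-z']+", headline.lower())) - stop
    body_words = set(re.findall(r"[a-z']+", body.lower()))
    return len(head_words & body_words) / len(head_words) if head_words else 1.0

def score_story(headline: str, body: str, domain: str, blacklist: set[str]) -> dict:
    """Toy report: each triggered signal raises the fake-news likelihood."""
    signals = {
        "shouting_headline": caps_ratio(headline) > 0.5,  # threshold is a guess
        "headline_body_mismatch": headline_body_overlap(headline, body) < 0.3,
        "blacklisted_source": domain in blacklist,
    }
    return {"signals": signals,
            "fake_likelihood": sum(signals.values()) / len(signals)}

# Hypothetical example loosely modeled on a story mentioned later in this article:
report = score_story(
    "NFL PLAYER BURNS FLAG IN LOCKER ROOM!",
    "An unrelated article body about something else entirely.",
    domain="actionnews3.example",
    blacklist={"actionnews3.example"},
)
print(report)  # all three signals fire, so fake_likelihood is 1.0
```

A production system would weight such signals with a trained classifier rather than a simple average, but the report-of-scores output shape matches what the article describes clients receiving.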

In a test of a demo version of AdVerif.ai, the system recognized The Onion as satire (a site that has fooled many readers in the past). Breitbart stories were classified as "unreliable, right, political, bias," while Cosmopolitan was considered "left." It could tell when a Twitter account was using a brand's logo but linking to sites unaffiliated with that brand. AdVerif.ai not only flagged a story on Natural News with the headline "Evidence points to Bitcoin being an NSA-engineered psyop to roll out one-world digital currency" as coming from a blacklisted site, but also identified it as a fake news story circulating on other blacklisted sites without any references from legitimate news organizations.
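That last verdict hints at a corroboration step: a story becomes more suspect when it circulates only on blacklisted sites and is never picked up by established outlets. Here is a hypothetical sketch of such a cross-check, where the domain sets are placeholders and the decision rule is our own assumption rather than AdVerif.ai's:

```python
def corroboration_verdict(story_domains: set[str],
                          blacklist: set[str],
                          whitelist: set[str]) -> str:
    """Classify a story by where it has been observed.

    story_domains: every domain seen carrying this story
    blacklist:     known fake-news sites
    whitelist:     established news organizations
    """
    if story_domains & whitelist:
        return "corroborated by legitimate outlets"
    if story_domains and story_domains <= blacklist:
        return "fake: circulating only on blacklisted sites"
    return "unverified"

# The Bitcoin/NSA story from the article, with placeholder domains:
print(corroboration_verdict(
    story_domains={"naturalnews.example", "otherblacklisted.example"},
    blacklist={"naturalnews.example", "otherblacklisted.example"},
    whitelist={"reuters.example", "apnews.example"},
))  # -> "fake: circulating only on blacklisted sites"
```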

Some dubious stories still get through. On a site called Action News 3, a post headlined “NFL Player Photographed Burning an American Flag in Locker Room!” wasn’t caught, though it’s been proved to be a fabrication. To help the system learn as it goes, its blacklist of fake stories can be updated manually on a story-by-story basis.

AdVerif.ai isn't the only startup that sees an opportunity in providing an AI-powered truth serum for online companies. Cybersecurity firms in particular have been quick to add bot- and fake-news-spotting operations to their repertoires, pointing out how much the methods resemble those used in hacking. Facebook is tweaking its algorithms to de-emphasize fake news in its news feed, and Google has partnered with a fact-checking site, so far with uneven results. The Fake News Challenge, a competition run by volunteers in the AI community, launched at the end of 2016 with the goal of encouraging the development of tools to help combat bad-faith reporting.

Read the source article in MIT Technology Review.