Teenagers in Macedonia earn $1,000 a month for five hours of work thanks to artificial intelligence (AI) bots that work within social networks to propagate fake news. Trolls sway political elections, fraudulent promotions scam consumers, and fake stories provoke violence. The quest to stop the spread of fake news has even led to the creation of a new job title at Facebook: the News Feed Integrity Data Specialist.
Humans Aren’t up to the Job
Screening fake and violent content takes a real toll on human moderators' mental health, as YouTube's chief executive, Susan Wojcicki, learned. She now relies on AI to screen content at scale, because there is only so much a human moderator can take before mental health suffers. At YouTube, AI did the work of 180,000 people working 40 hours a week to remove videos promoting violent extremism.
Using AI to Separate Fact from Fiction
Turning the very machines used to create malicious content loose to hunt for fake content has its limitations. OpenAI's GPT-2 was designed to write news stories and works of fiction, but the lab says the risks of malicious use are so high that it is holding back GPT-2's full release to the public to “allow more time to discuss the ramifications of the technological breakthrough.”
Machines do learn from humans, but the discernment required to separate fact from fiction is still missing from machine learning (ML). Fake news stories are cleverly designed with shreds of truth embedded in them to make them more plausible, which tricks machines as easily as it does humans. Discerning and interpreting articles draws on common sense, societal norms, and political understanding: tools that current natural language processing (NLP) algorithms lack. Recognizing these limitations, some promising AI solutions circumvent the need to teach machines how to read and understand the news. Here's how three of them work.
Using Fake News Behaviors
Money may be the driver of fake news, but people do their own part in spreading it: one study found that humans spread phony stories on Twitter 20 times faster than the truth. Fabula AI tracks these propagation patterns to flag potential fake news with a class of ML algorithms capable of learning patterns on “complex, distributed data sets such as social networks. The underlying core algorithms are a generalization of convolutional neural networks to graphs that have been developed by the team over the past years.” Rather than relying on the content of the news, Fabula AI evaluates a collection of behaviors, including social network connectivity, how the news spreads, and user profiles. The system can detect fake news with 93 percent accuracy, within milliseconds of processing, after observing only the first 2 to 20 hours of a story's spread.
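Fabula AI's exact model is not public, but the quote above describes convolutional neural networks generalized to graphs. As a rough illustration of that idea, here is a single graph-convolution layer over a toy retweet cascade; the users, features, and weights are all hypothetical, invented for this sketch.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution layer: aggregate each node's neighborhood,
    then apply a learned linear map and a ReLU nonlinearity."""
    # Add self-loops so each node keeps its own features.
    a_hat = adj + np.eye(adj.shape[0])
    # Symmetric degree normalization: D^{-1/2} A_hat D^{-1/2}.
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Propagate features along edges, then transform and activate.
    return np.maximum(a_norm @ features @ weights, 0.0)

# Toy retweet cascade: node 0 is the original post; users 1-3 retweeted it.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0]], dtype=float)
# Hypothetical per-user features, e.g. scaled account age and follower count.
features = np.array([[0.1, 0.9],
                     [0.8, 0.2],
                     [0.7, 0.1],
                     [0.9, 0.3]])
rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 4))  # would be learned during training
embeddings = gcn_layer(adj, features, weights)
print(embeddings.shape)  # one embedding per user in the cascade
```

In a full system, several such layers would be stacked and the node embeddings pooled into a single cascade-level vector that a classifier scores as likely-fake or likely-real.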
Fake Word Choices
Certain word strings consistently appear in fake news stories. These subtle differences are reliable enough that researchers at the Massachusetts Institute of Technology (MIT) can teach machines to identify them and flag potential fake news. The approach holds promise for augmenting human fact checkers: the tool guides them to the stories most likely to be fake, so they can work more efficiently.
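The MIT work learns its tell-tale phrases automatically from labeled corpora; as a minimal sketch of the triage idea, here is a scorer built on a fixed, hypothetical phrase list that ranks stories for human review.

```python
# Hypothetical tell-tale phrases; a real system would learn these
# from labeled examples rather than use a hand-written list.
SUSPECT_PHRASES = ["you won't believe", "the truth about", "wake up"]

def suspicion_score(article: str) -> int:
    """Count how many flagged phrases appear, as a crude triage signal."""
    text = article.lower()
    return sum(text.count(p) for p in SUSPECT_PHRASES)

def triage(articles):
    """Return articles ordered most-suspicious-first for fact checkers."""
    return sorted(articles, key=suspicion_score, reverse=True)

stories = [
    "Council approves budget after public hearing.",
    "You won't believe the truth about this miracle cure. Wake up!",
]
ranked = triage(stories)
print(suspicion_score(ranked[0]))  # the sensational story ranks first
```

The point is not the phrase list itself but the workflow: the machine never decides what is fake, it only orders the queue so human judgment is spent where it matters most.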
Going to the Source
Researchers at Qatar Computing Research Institute and MIT's Computer Science and Artificial Intelligence Lab believe that by teaching machines to identify risky sites, they can stop the spread of fake news. “If a website has published fake news before, there’s a good chance they’ll do it again.” When the AI identifies a piece of fake news from one of these risky sites, stories from that source can be blocked before they spread further.
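The logic of "has published fake news before" can be sketched as a per-domain track record that gates new stories. Everything below, from the domain names to the threshold, is an invented illustration, not the researchers' actual model.

```python
# Hypothetical per-domain track record: (fake stories, total stories checked).
DOMAIN_HISTORY = {
    "reliable-news.example": (1, 200),
    "clickbait-mill.example": (45, 60),
}

def fake_rate(domain: str) -> float:
    """Fraction of a domain's previously checked stories that were fake.
    Unknown domains get a neutral prior of 0.5 until evidence accumulates."""
    fake, total = DOMAIN_HISTORY.get(domain, (1, 2))
    return fake / total

def should_review(domain: str, threshold: float = 0.3) -> bool:
    """Flag stories from domains whose historical fake rate is too high."""
    return fake_rate(domain) > threshold

print(should_review("clickbait-mill.example"))  # True
print(should_review("reliable-news.example"))   # False
```

Scoring the source rather than the story sidesteps the hard NLP problem entirely: the system needs no understanding of what an article says, only a memory of where it came from.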
Significant challenges remain in building systems that can filter our news to separate the fake from the real, but humans may still pose the larger risk. As the study of how fake news spreads on Twitter reveals, humans are the weak link. As AI continues to learn from humans, how long will it be before AI picks up our tendency to share outrageous fake stories more readily than the truth?