
“The earth has ears,” a local proverb goes. “And the news has wings.”
Though it was taught to me early on as a lesson about gossip (a lesson that, it turns out, was pretty accurate), the latter half of that proverb has found new meaning in today’s information landscape, particularly with the rise of misinformation and disinformation.
After all, if there’s one thing that can travel as fast as (if not faster than) news of your best friend’s ex’s latest cheating scandal with his new girlfriend whom he was secretly seeing while still committed to someone else, it’s fake news.
In fact, a 2018 study found that compared to news about true events, false news online tends to reach more people at a much faster rate. The problem of fake news is worrisome for already fragile democracies, and in a pandemic, it can be particularly deadly. In the early months of 2020, for example, 6,000 people were hospitalized as a result of COVID-19 misinformation, and at least 800 lost their lives.
While different groups have looked to strict legislation, better platform governance, and public media literacy interventions, still others are turning to a form of technology that has grown into a $327.5 billion market and keeps growing: artificial intelligence (AI).

For those of us who didn’t major in STEM, AI can be understood as computer systems that simulate human intelligence and work extremely fast. Siri and Alexa are some of the more common examples of AI at work every day, but AI is also behind some of our most-used tech, like Google Search results, Gmail’s Smart Compose feature, ride-sharing apps, and even our Netflix recommendations.
Over the past few years, scientists have created AI that can do some pretty cool things, like accurately diagnose skin cancer and help with disaster recovery.
So, in today’s information war, the crucial question is: Can AI fight fake news?
The short answer, unfortunately, is yes and no. To better understand where and how AI currently fits in our information landscape, let’s first cover how it works.
AI, a Primer
AI systems are built on software that is trained with large amounts of data. From there, iterative processing and algorithms pick out patterns and correlations, which the software can then use to make accurate predictions in the blink of an eye.
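To make that loop concrete, here is a toy sketch in Python with made-up numbers (not any real system’s code): the hidden pattern in the data is y = 2x + 1, and the program finds it by repeatedly nudging two parameters to shrink its prediction error.

```python
import numpy as np

# Toy "training data": the hidden pattern is y = 2x + 1.
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ys = 2 * xs + 1

# Iterative processing: repeatedly adjust two parameters (w, b)
# to shrink the prediction error. This is the "teaching" step.
w, b = 0.0, 0.0
for _ in range(2000):
    error = (w * xs + b) - ys
    w -= 0.05 * (error * xs).mean()  # nudge the slope
    b -= 0.05 * error.mean()         # nudge the intercept

# Once trained, a prediction is a single multiply-add: near-instant.
print(round(w * 10 + b, 1))  # ~21.0, the learned pattern applied to x = 10
```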

It’s the difference, for example, between your childhood Furby that “learns” English and the more advanced chatbots you’d come across on e-commerce sites. The former is actually pre-programmed: it speaks more English and less gibberish the more it is played with, but it never truly learns a new language. The latter, meanwhile, is fed a large number of example text chats so it can learn to produce believable text for people to converse with.
The data doesnโt have to be purely words and sentences, though. For example, AI programs are being developed to recognize lesions in MRIs and help doctors more accurately diagnose tumors and cancers.
One such program can be trained on as many as 60,000 tumor samples across 150 different cancer entities, easily far more than a human doctor might see in an entire career. In this way, the resulting program can help prevent misdiagnosis, promote early detection, and save lives.
Other programs analyze the gases given off by urine samples, using gas chromatography-mass spectrometry, to detect cancer. Researchers are working to apply the same technology to detecting other diseases, like COVID-19.

AI programming tends to focus on several cognitive skills, but the most important are learning, reasoning, and self-correction, each sketched in the toy example after this list:
- Learning: AI programs acquire data and work to create rules (algorithms) that define how this data is turned into actionable steps.
- Reasoning: AI programs focus on how to choose the correct algorithm for achieving a certain outcome.
- Self-correction: AI programs are designed to learn from their own mistakes, continually fine-tuning their algorithms based on the data originally fed to them and on each new application.
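Caricatured in code, the three skills might look like this. It is an illustrative sketch on invented data using scikit-learn, not the internals of any real product:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # the hidden rule to be learned

# Learning: each candidate model turns the data into decision rules.
candidates = [GaussianNB(), DecisionTreeClassifier(max_depth=3)]

# Reasoning: choose the algorithm that best achieves the outcome,
# judged here by cross-validated accuracy.
best = max(candidates, key=lambda m: cross_val_score(m, X, y).mean())
best.fit(X, y)

# Self-correction: when fresh data exposes mistakes, fold it back in
# and refit so the rules keep improving.
X_new = rng.random((50, 3))
y_new = (X_new[:, 0] + X_new[:, 1] > 1).astype(int)
best.fit(np.vstack([X, X_new]), np.concatenate([y, y_new]))
```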
AI Fighting Fake News
In the fight against fake news, AI programs are taught to differentiate real from fake information by being fed both fake and authentic news. Data libraries like RealNews, for example, contain thousands of authentic published articles, while FakeNewsNet serves as a repository of fake news content along with its social context and spatiotemporal spread information.
Once that training is complete, an AI program can apply the complex algorithms it has built to judge whether a piece of text is likely to be fake news, at incredible speed.
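In miniature, that pipeline might look like the sketch below. The two article lists are hypothetical stand-ins for corpora like RealNews and FakeNewsNet, and the model is deliberately simple; real detectors are far more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder articles standing in for real labeled corpora.
real_articles = ["The city council approved the budget on Tuesday.",
                 "Researchers published trial results in a journal."]
fake_articles = ["SHOCKING miracle cure THEY don't want you to see!",
                 "Celebrity secretly replaced by body double, sources say."]

texts = real_articles + fake_articles
labels = [0] * len(real_articles) + [1] * len(fake_articles)  # 1 = fake

# Fit once on labeled examples; afterwards, scoring a new article
# takes only a fraction of a second.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict_proba(["Miracle cure shocks doctors"])[:, 1])
```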
Some AI programs, like dEFEND, go the extra mile: On top of detecting fake news articles, the program is also trained to explain why a particular piece of content or user comment is detected as false information.
Kai Shu, an assistant professor at Illinois Institute of Technology and one of the project researchers, explains, “The idea of Defend is to create a transparent fake news detection algorithm for decision-makers, journalists, and stakeholders to understand why a machine learning algorithm makes such a prediction.”
By being able to flag fake news and explain which parts might contain unreliable information, projects like this can help quickly draw clearer lines between what’s real and what’s fake. Part of what makes fake news so compelling, after all, is that it’s designed to seem real by playing to our cognitive biases: we like shortcuts, we like to be right, and we like to believe what our friends say.
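dEFEND pairs its detector with an attention mechanism over article sentences and user comments; the stand-in below illustrates the transparency idea with a much simpler tool, surfacing which words pushed a toy detector (the same kind as above, all data invented) toward a “fake” verdict.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["The council approved the budget on Tuesday.",
         "Researchers published trial results in a journal.",
         "SHOCKING miracle cure THEY don't want you to see!",
         "Celebrity secretly replaced by body double, sources say."]
labels = [0, 0, 1, 1]  # 1 = fake

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def explain(text, top_k=3):
    """Rank the words pushing this text toward the 'fake' label."""
    row = vec.transform([text]).toarray()[0]
    contributions = row * clf.coef_[0]  # word weight times learned coefficient
    words = vec.get_feature_names_out()
    top = np.argsort(contributions)[::-1][:top_k]
    return [(words[i], round(float(contributions[i]), 3)) for i in top]

print(explain("shocking miracle cure, sources say"))
```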

Aside from flagging fake news, other AI projects have focused on the issue of how false information spreads from account to account, friend network to friend network. Here, what’s analyzed isn’t the content per se, but the patterns in which the content is shared on social media.
We know, for example, that fake news stories online tend to be shared much faster than authentic articles, and there are subtle differences in their patterns that AI programs can learn and identify.
Michael Bronstein, professor at the University of Lugano in Switzerland and lead researcher at Project GoodNews, explains that on platforms like Facebook, fake news stories tend to have far more shares than likes as compared to regular posts.
By studying sharing patterns, AI can then be used to detect the coordinated, inauthentic behavior designed to spread fake news articles. If shared with platforms, technology like this can nip the problem in the bud: when fake news is posted and reshared, for example, a platform’s algorithm can automatically de-prioritize it in people’s feeds or mark it as potentially false information, so that it becomes less visible and shareable.
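As a cartoon of that signal, here is a tiny heuristic keyed only to the share-to-like imbalance Bronstein describes. The real project models whole propagation graphs with far richer features, and every number below is invented.

```python
posts = [
    {"id": "a", "likes": 900, "shares": 150},  # typical organic post
    {"id": "b", "likes": 40,  "shares": 310},  # shared far more than liked
]

def looks_suspicious(post, ratio=2.0):
    # Flag posts whose share count dwarfs their like count.
    return post["shares"] > ratio * max(post["likes"], 1)

for post in posts:
    if looks_suspicious(post):
        print(f"flag {post['id']} for review: "
              f"{post['shares']} shares vs {post['likes']} likes")
```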
Those are the two main ways that AI is currently being developed to fight fake news: analyzing the content itself, and the ways in which this content is shared. And if we are able to roll all of this out online, then the problem is solved, right?
Not quite.
AI for Fake News
The problem is, people behind the highly industrialized world of disinformation can use the tech, too.
In Bots
Social media bots are pieces of software meant to create content or interact with people on social media. Not all bots are bad, as in the case of self-care reminder account @tinycarebot, but a lot of bots are designed to amplify fake news messages and pollute platforms with violent or inflammatory content.
Some bots are simpler in nature, rule-based rather than AI-driven. Rule-based bots, as the name suggests, function according to a predefined set of rules: post X at this time; if Y happens, do Z. AI bots tend to be more complex and are designed not only to learn from their own experiences but, crucially, to mimic humans convincingly.
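Here is roughly what the rule-based end of that spectrum looks like in Python; the “actions” are print statements standing in for a real platform’s posting API.

```python
import datetime

RULES = [
    # (condition, action) pairs: post X at this time; if Y happens, do Z.
    (lambda ctx: ctx["now"].hour == 9, "post the morning message"),
    (lambda ctx: "keyword" in ctx["last_post"], "reply with the canned talking point"),
]

def run_bot(ctx):
    # No learning here: the bot just checks each hard-coded trigger.
    for condition, action in RULES:
        if condition(ctx):
            print("bot action:", action)  # stub for a real API call

run_bot({"now": datetime.datetime.now(),
         "last_post": "breaking: keyword spotted"})
```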
The rate at which these AI programs learn (and their owners adjust) is why identifying bots on social media has been described as a cat-and-mouse game.
In Content Generation
Another way AI technology has been used in the service of disinformation is in writing actual fake news content.

With natural language processing advancing so dramatically over the last few years, developers have found that AI can now generate entire articles that are plausible, cheap, and, most of all, quick. Some publications, like Bloomberg and the Associated Press, have adopted the technology in certain cases, particularly in numbers-heavy areas like stocks or sports.
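To see how low the bar has become, here is a minimal example using the open-source Hugging Face transformers library and the small, dated GPT-2 model; today’s systems are far more fluent, but the workflow (prompt in, prose out) is the same.

```python
from transformers import pipeline

# Download a small open-source language model and have it continue
# a prompt. (Running this fetches the model weights on first use.)
generator = pipeline("text-generation", model="gpt2")
result = generator("Shares of the company rose sharply on Tuesday after",
                   max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```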
One website, called NotRealNews.net, even markets AI-generated news stories in an effort to “support” journalists, an idea that’s been met with staunch criticism.
Still, the real worry is when this same technology is used for disinformation. Propagandists and other spreaders of fake news need only input the keywords and ideas they’d like to see, and algorithms can generate whole blog posts and fake news stories elaborating on them. These articles can then be shared en masse via bots, or through the less high-tech work of troll accounts.
In Deepfakes
The most technologically advanced (and most alarming) form of AI-based disinformation, deepfakes take their name from a portmanteau of “deep learning” and “fake.” A deepfake is essentially synthetic media, kind of like an advanced version of photoshopping your selfies: AI is used to swap someone else’s likeness into an existing image or video, or to make a person’s likeness do or say things they never have.
“One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” an eerie likeness of Mark Zuckerberg says in this video. “Spectre showed me that whoever controls the data, controls the future.”
The tech works by being fed hours’ worth of footage of someone’s face and voice. From there, it can learn expressions and patterns, and place these onto something else entirely.
In other words, it can make your likeness do or say anything a programmer wants.
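Under the hood, the classic deepfake setup is a pair of autoencoders sharing one encoder: train it with a separate decoder for each person’s face, then swap the decoders. The PyTorch sketch below shows only the skeleton, with toy layer sizes and a random tensor standing in for a real video frame.

```python
import torch
import torch.nn as nn

# One shared encoder learns facial structure common to both people.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256), nn.ReLU())

def make_decoder():
    # One decoder per identity, each learning to render a specific face.
    return nn.Sequential(nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()

# Training (omitted): minimize reconstruction error of person A's frames
# through encoder -> decoder_a, and person B's through encoder -> decoder_b.

frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a frame of person A
latent = encoder(frame_of_a)           # pose/expression representation
swapped = decoder_b(latent)            # the same pose, rendered as person B
print(swapped.shape)                   # torch.Size([1, 12288]); reshape to an image
```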
Aside from the oft-cited example of Jordan Peele’s PSA with an Obama deepfake, the tech has been used for all sorts of things, like a deepfake of Jon Snow apologizing for Game of Thrones’ terrible final season and, horrifyingly, celebrity face-swapped pornography.
Because we tend to think of video footage as a more or less accurate recording of actual events, the technology can have scary consequences in the hands of propagandists inciting hatred and violence through fake speeches or doctored videos.
Like bots and fake news articles, deepfakes hold the power to make or break someone’s reputation and, more broadly, public trust in institutions.
If our media environment becomes saturated with inauthentic content and false information, many will likely fall prey to fake news or, worse, give up trust in the media landscape altogether.
So, What Now?
The thing about AI as a tool is that it’s just that: a tool. It’s not a silver bullet.
Just as researchers look to develop ways to fight fake news, so do disinformation networks continue to develop their own algorithms for making and spreading it.
This isn’t to say that there’s no hope in the fight against misinformation and disinformation, because there is. But a problem that’s about more than mathematics and algorithms demands solutions that are just as multifaceted.
More than the tech itself, fake news is a problem of how we think about truth. It’s often discussed as a critical thinking issue, but it’s also an issue of public trust in traditional media. It’s also about the inequitable labor conditions that push people into disinformation work, and about the few individuals and organizations that stand to gain from all of this.
To deal with all that, we’ll need technology working alongside sound policy, responsible media, meticulous research, and better-informed readers.
Fact From Fiction is a biweekly column on misinformation and disinformation around the world.