
If you spend any amount of time on the internet, you might've seen or heard about bots. Perhaps you've even interacted with one.
Short for robots, bots are pieces of software designed to create content and interact with people on social media. They are either partially or fully automated, and in recent years, they've risen to public consciousness as part of our growing conversations about misinformation and disinformation.
Unlike the chatbots we encounter more often in customer service roles, bots on social media are often cheaper and less complicated to manage. Where a single chatbot might need a person or team to develop and maintain it, social media bots can be managed in the hundreds or thousands by just one person, who can then use them for their own purposes across various issues, whether that's climate change or Covid-19.
People have sounded the alarm on bots and their harmful effects on political discourse, but just how many bots are out there? The answer depends on who you ask.

Twitter estimates that around 5% of its 300 million user base are bot accounts. But other studies, like a 2017 report by the University of Southern California and Indiana University, place this figure between 9% and 15%. That translates to as many as 48 million accounts, and the researchers say this is a conservative estimate.
Not All Bots Are Made Equal
For starters, it's important to note that not all bots are bad. Some are actually quite handy, like @WhatTheFare, a Twitter bot that helps users look up the Uber fare for specific pick-up and drop-off points, and @EarthquakeBot, which alerts people in real time to earthquakes that measure at least 5.0 on the Richter scale.
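To get a sense of how simple a helpful bot can be, here is a rough Python sketch of an @EarthquakeBot-style alert built on the USGS public earthquake feed. It is my own illustration, not the real bot's code; the feed URL is real, but the magnitude filter and the posting step (omitted here) are assumptions.

```python
# A toy @EarthquakeBot-style check against the public USGS GeoJSON feed.
# A real bot would run this on a schedule and post matches to social
# media instead of printing them.
import requests

FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/4.5_hour.geojson"

def significant_quakes(min_magnitude=5.0):
    """Return properties of recent quakes at or above `min_magnitude`."""
    features = requests.get(FEED, timeout=10).json()["features"]
    return [
        q["properties"]
        for q in features
        if (q["properties"]["mag"] or 0) >= min_magnitude  # mag can be null
    ]

for quake in significant_quakes():
    print(f"M{quake['mag']} earthquake near {quake['place']}")
```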
Meanwhile, @NYPDedits logs Wikipedia edits made by users with IP addresses belonging to the New York Police Department. This was in response to reports of edits to pages about police brutality and its victims, like Eric Garner and Amadou Diallo, coming from 1 Police Plaza. These edits included the erasure of information about police misconduct, as well as insidious rewording of real events. For example, "Garner raised both his arms in the air" was edited by users with NYPD IP addresses to read "Garner flailed his arms about as he spoke."
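As a rough Python sketch of that mechanism (my own illustration, not @NYPDedits' actual code), a watchdog bot can poll Wikipedia's public Recent Changes API, where anonymous edits are attributed to the editor's IP address, and match those IPs against a watched block. The IP range below is a placeholder, not the NYPD's real one.

```python
import ipaddress
import requests

WATCHED_RANGE = ipaddress.ip_network("203.0.113.0/24")  # placeholder range
API = "https://en.wikipedia.org/w/api.php"

def recent_anonymous_edits(limit=50):
    """Fetch recent anonymous edits from English Wikipedia."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|user|timestamp",
        "rcshow": "anon",   # anonymous edits only: 'user' holds the IP
        "rclimit": limit,
        "format": "json",
    }
    return requests.get(API, params=params, timeout=10).json()["query"]["recentchanges"]

for edit in recent_anonymous_edits():
    try:
        ip = ipaddress.ip_address(edit["user"])
    except ValueError:
        continue  # a registered username, not an IP
    if ip in WATCHED_RANGE:
        print(f"{edit['timestamp']}: \"{edit['title']}\" edited from {ip}")
```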
Still others can be quite refreshing to have on your feed, like @MuseumBot, which posts images from the Metropolitan Museum of Art four times a day, and @tinycarebot, which reminds users to take small breaks for self-care every now and then.
However, it's the bots that pretend to be human that we should worry about. Referred to as "bad bots," these accounts are often used for political ends, like spreading propaganda, inflating politicians' follower counts, attacking political rivals, and hijacking their conversations.

The use of bad bots for politics has been reported across the globe, from the Peñabots in Mexico (named after former president Enrique Peña Nieto) to the StrongerIn-Brexit conversations, where 1% of accounts generated one-third of all tweets related to Brexit. In the US, around one in four tweets about the first presidential debate in 2016 came from bots.
Outside of politics, bots have been used as part of coordinated disinformation campaigns surrounding issues like the anti-vax movement. They've also been used to influence stock and financial markets.
Interestingly, there are some areas of the world where bots are not very popular among those looking to control online spaces. For example, chief architects of networked disinformation in the Philippines are wary of using them, relying instead on real-life writers who are more knowledgeable about local vernacular and are more creative.
How Social Media Bots Work
When used in high numbers, bots can generate buzz around a person, product, or issue, and push a specific point of view.
Bots as Message Amplifiers
Bots are often programmed to retweet questionable, or low-credibility, articles within seconds of their being posted. This was common during the 2016 US presidential election, as social bots worked to make certain pieces of content appear more popular. Among sources of low-credibility content, 1 in 3 of the top sharers are bots, a far higher proportion than among sharers of fact-checked content.
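That retweet-within-seconds pattern also suggests a simple detection heuristic. The sketch below is my own toy illustration with an arbitrary threshold, not a method from the studies cited: it measures what share of an account's retweets arrive almost instantly after the original post.

```python
from datetime import datetime

def fast_retweet_share(retweets, threshold_seconds=10.0):
    """Fraction of an account's retweets posted within `threshold_seconds`
    of the original tweet. Each item is an (original_time, retweet_time)
    pair of datetime objects."""
    if not retweets:
        return 0.0
    fast = sum(
        1 for original_time, retweet_time in retweets
        if (retweet_time - original_time).total_seconds() <= threshold_seconds
    )
    return fast / len(retweets)

# An account that retweets mostly within seconds looks automated.
sample = [(datetime(2016, 10, 1, 12, 0, 0), datetime(2016, 10, 1, 12, 0, 4))]
print(fast_retweet_share(sample))  # 1.0 -> every retweet was near-instant
```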
By creating the illusion that a particular story or source is popular, bots and those who wield them encourage actual humans to trust the source and share the post. It's this momentum-building effect that helps make fake news so compelling to real people: the more often we see a message, the more likely we are to think it's true.
The University of Southern California's Dr. Emilio Ferrara, however, argues that this tendency can also be harnessed to spread positive messages and behavior. In his team's study on Twitter bots for good, positive hashtags about health tips and fun activities influenced people to adopt positive behaviors the more they were exposed to them.

Either way, a key function fulfilled by bots is to provide a baseline buzz from which messages can go viral. "Once enough accounts are tweeting about the same thing, that creates buzz," says the Terry College of Business's Carolina Salge. "And organizations really respond to buzz."
Aside from sharing content on their own accounts, bots also tend to target accounts with many followers, either by mentioning them in their own tweets about low-credibility content or by replying to that person's tweets with links to the article. This way, followers of verified or popular accounts might see a bot's tweet, or the accounts themselves might retweet the bot.
Bots as Content Polluters
Aside from promoting content from low-credibility sources, bots also make a lot of noise on their own to create new divides, worsen existing divides, or hijack movements from the opposite side of a divide.
One example, explored by researchers from George Washington University, the University of Maryland, and Johns Hopkins University, is the issue of vaccines. "The vast majority of Americans believe vaccines are safe and effective," George Washington University's David Broniatowski explained back in 2018. "But looking at Twitter gives the impression that there is a lot of debate."
Their study found that bots posted anti-vaccination messages as much as 75% more often than the average Twitter user, making up a huge chunk of online discourse at the time. Though the long-term effects of this campaign, especially amid today's pandemic, have yet to be explored, it's clear that bots often use topics like vaccination as a wedge to erode public trust in key institutions.

Similarly, in the weeks leading up to the Catalan independence referendum in 2017, bots were used to bombard influential Twitter users on both sides of the debate with violent, inflammatory content. The goal, it seemed, was to worsen existing political divides and boost feelings of alarmism and fear both during and after the referendum.
Another common way that bots "pollute" our information ecosystem with inauthentic behavior is hashtag hijacking, in which bots hired by an individual or organization flood their opponents' hashtags with spam. They may also report opponents' legitimate content so that those posts get removed from the platform. Through this, the hashtag's original message gets lost in the noise, and the bot client's opponents have a harder time organizing online.
It's worth noting, however, that this technique isn't just used by bots. Large groups of people can hijack hashtags too, sometimes for good reason, as in the case of K-Pop fans who fought racism by drowning out the #WhiteLivesMatter hashtag.
Fake News, Real World
Because bots are designed to mimic people, it can be hard to tell which account is a bot and which one is not. And just like the people who make them, bots can also be good or bad.
Factor in the sheer number of tweets and posts made every minute, as well as bot makers' ability to adapt to the countermeasures platforms deploy against them, and you can see why University College London's Juan Guzman describes bot detection as a "cat-and-mouse game."
"Every time we identify a characteristic we think is prerogative of human behavior, such as sentiment of topics of interest, we soon discover that newly-developed open-source bots can now capture those aspects," adds Dr. Ferrara.
To help keep people from falling for common bot tactics, studies have pointed to the effectiveness of flagging tweets from suspicious accounts. Meanwhile, organizations like Quartz have deployed bots like @probabot_, developed specifically to identify other bots masquerading as humans.
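Under the hood, @probabot_ reportedly relied on Botometer, a bot-scoring service from Indiana University with a public API. Here is a minimal sketch of checking a single account with the `botometer` Python package; the credentials and the handle are placeholders, and the exact response schema varies across API versions.

```python
import botometer  # pip install botometer

# Placeholder credentials: you need your own Twitter and RapidAPI keys.
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

# Hypothetical handle; the result includes bot-likelihood scores.
result = bom.check_account("@some_suspicious_account")
print(result)
```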
For its part, Twitter encourages users to report suspicious behavior so it can better improve its measures against platform manipulation.
So How Can You Tell?
Though most Americans are aware of bots and the threat they present, fewer than half of those who know about them are confident that they could spot one. But there are some tell-tale signs you can watch out for across different social media platforms.
Look at Their Profile
If the profile was created very recently, has a long username that contains numbers, and sports an empty bio, then it's very likely a bot. No pictures, or pictures that do not show faces, are also red flags.
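To make those signals concrete, here is a toy scoring function. The thresholds (30 days, 12 characters) are my own arbitrary choices for illustration, not validated cutoffs.

```python
from datetime import datetime, timedelta, timezone

def profile_red_flags(created_at, username, bio, has_face_photo):
    """Collect the profile warning signs described above. All thresholds
    are illustrative guesses, not validated cutoffs."""
    flags = []
    if (datetime.now(timezone.utc) - created_at).days < 30:
        flags.append("account created very recently")
    if len(username) > 12 and any(ch.isdigit() for ch in username):
        flags.append("long username containing numbers")
    if not bio:
        flags.append("empty bio")
    if not has_face_photo:
        flags.append("no picture, or no face in the picture")
    return flags

# Example: a week-old account with a numeric handle, no bio, and no face
# photo trips all four flags.
print(profile_red_flags(
    created_at=datetime.now(timezone.utc) - timedelta(days=7),
    username="patriot4847382921",
    bio="",
    has_face_photo=False,
))
```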
Look at Their Network
For less sophisticated bots, one tell-tale sign would be their friend or follower network. Bots tend to follow other bots, while humans tend to follow other humans. Often, bots also have high following counts and very low follower counts.
As bots grow, however, this technique may not be as useful. Older and more sophisticated bots have been found to build entire social networks that closely resemble real humans' networks.
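For the less sophisticated accounts this sign still catches, the imbalance is easy to check. The numbers below are made-up illustrations, and, as noted above, craftier bots will evade this entirely.

```python
def lopsided_network(following_count, followers_count):
    """Flag the classic imbalance: following many accounts while almost
    nobody follows back. Thresholds are illustrative, not validated."""
    if following_count < 500:  # too little data to judge
        return False
    return followers_count < 0.05 * following_count

print(lopsided_network(following_count=4000, followers_count=37))  # True
```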
Look at Their Account Activity
When thinking about whether an account you're interacting with is a bot or not, look at how often they tweet or post. A lot of posts or retweets in a short amount of time is one clue, especially if all their tweets are about the same thing, with the same hashtags, over and over. Moreover, humans tend to tweet less and scroll more towards the end of their online sessions, a pattern that bots don't exhibit.
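A rough activity check might measure posting rate and hashtag repetition together. As with the earlier sketches, the thresholds here (20 posts per hour, 80% hashtag repetition) are arbitrary assumptions for illustration.

```python
from collections import Counter

def activity_red_flags(tweets, max_posts_per_hour=20, repeat_share=0.8):
    """`tweets` is a newest-first list of dicts, each with a 'timestamp'
    (datetime) and a 'hashtags' (list of str). Thresholds are arbitrary."""
    flags = []
    if len(tweets) >= 2:
        hours = (tweets[0]["timestamp"] - tweets[-1]["timestamp"]).total_seconds() / 3600
        if hours > 0 and len(tweets) / hours > max_posts_per_hour:
            flags.append("implausibly high posting rate")
    tag_counts = Counter(tag for tweet in tweets for tag in tweet["hashtags"])
    if tag_counts and tag_counts.most_common(1)[0][1] >= repeat_share * len(tweets):
        flags.append("same hashtag repeated across most posts")
    return flags
```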
Other studies have found that humans tend to write more positive messages than bots, and to change their sentiments about topics over time. Bots, which are deployed to promote certain messages, don't.

A Word of Caution
Bad bots can be, well, very bad, and the tips above can help you discern whether you're talking to one and whether it's time to report it. But researchers also caution against assuming that someone whose political views go against yours is automatically a bot.
For instance, in the aftermath of the 2016 elections, Twitter saw an uptick in people accusing each other of being bots when they were, in fact, real-life people. These false accusations are not only symptomatic of a larger hostility on social media, where it's becoming normal to insult and dehumanize others; crucially, they also help actual bots hide in plain sight.