Will Political Bots Decide the Next Election? Probably.

Can you be manipulated by bots into changing your opinion? Well, if your opinions are strong, probably nothing will change them. No advertising, no counterarguments, and probably no logic will change deeply held beliefs and opinions. But what if you are undecided on a candidate or an issue? Couldn't you be swayed by the opinions of your friends or followers on social media? Many of those who engineer election campaigns are betting that you can be.

The threat is so serious that some are calling for a "no first use" of bots pledge by all candidates. That's a very nice idea, but it doesn't take much for someone not officially connected to a candidate to take a shot at using bots to manipulate public opinion. In an age in which rumor can be mistaken for real news, supporters of particular positions or candidates can wreak havoc through the manipulation of social media. Links to stories, true or not, can be distributed, false accusations can be made, and pictures or videos can be edited to show things that never really happened. Remember the picture of the New York Stock Exchange being flooded during Hurricane Sandy? Quite a few so-called reliable media outlets, looking to sensationalize their coverage, accepted the pictures as fact and were thereby taken in. With the collapse of traditional journalism, it has become difficult to separate real news from the unreal. The use of bots can obscure this divide even further.

So how do these bots go about exerting their social influence? It has been estimated that 10% of Twitter accounts are bots. Political bots can be programmed to retweet anything a politician says, keeping what they say in the spotlight and giving them a higher profile. They can hijack opposition tweets by reading their hashtags and inserting messages that counter the original one or confuse readers enough to neutralize the original message. If many bots are used, as they often are, a particular topic may appear to be 'trending' and newsworthy even when it is not. The going rate for Twitter followers is about a dollar per thousand, and if you want them to like or share your posts, you can pay these marketers a little more. It is little wonder, then, that political candidates may be attracted to this method of campaigning. A fringe candidate could become mainstream simply by generating a false surge of interest.
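The manufactured-trending trick described above comes down to simple arithmetic. Here is a minimal sketch, with entirely made-up numbers and a hypothetical trending threshold (no real Twitter API is involved), showing how a modest bot fleet can push a hashtag past a volume cutoff that organic interest alone would never reach:

```python
# Hypothetical trending rule: a hashtag "trends" if it appears in
# more than 500 posts within the sampling window. The threshold and
# all account numbers below are illustrative assumptions.
TRENDING_THRESHOLD = 500

def total_volume(organic_posts, bot_accounts, posts_per_bot):
    """Total post volume for a hashtag: genuine posts plus bot output."""
    return organic_posts + bot_accounts * posts_per_bot

def is_trending(volume):
    """Apply the naive volume-only trending rule."""
    return volume > TRENDING_THRESHOLD

organic = 120                       # genuine interest in the topic
amplified = total_volume(organic, bot_accounts=200, posts_per_bot=3)

print(is_trending(organic))    # False: organic interest alone falls short
print(is_trending(amplified))  # True: 120 + 200 * 3 = 720 posts
```

The point of the sketch is that a few hundred throwaway accounts, each posting a handful of times, dominate the signal; any trending algorithm that counts raw volume without weighing account authenticity is open to exactly this manipulation.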

This is all well and good, but how exactly do you get bots to work for you? "Dear Friend! Want to achieve online fame? Got a website or business? We now offer Views Likes and Followers of highest quality!" So goes the lead caption on Swenzy, one of the world's biggest "social media marketing" companies. They claim not to use bots, saying instead that "ACTIVE real twitter followers will be delivered that decided to follow your account." Well, not everyone really buys this. On the other hand, given the influence that social media sites have, it is inevitable that they will be exploited for PR purposes. Some say that inflated site statistics are simply part of modern marketing campaigns. After all, even the US State Department admitted to paying $630,000 to buy Facebook fans. Even President Obama has, according to the Daily Mail, 19.5 million fake followers, which would make up more than half of his account. All social sites have tried to stop this inflation and threaten to permanently suspend accounts that use such marketing tactics; however, usually after a short time, the marketers are back at work.

If you don't want to go through these marketing firms, you can even make your own bots. For $700, you can buy a program called Zeus that will allow you to build your own botnets and give you a dashboard from which you can control your bot army. Bots are no longer the easily identifiable spammers that they used to be. They are designed to seem like real people, so real, in fact, that people have fallen in love with them on dating sites, and those people have not all been starry-eyed romantics. Robert Epstein, an expert on artificial intelligence, fell in love with a bot…twice. He admits that his logic system was bypassed by her attractiveness. (See my post on Phishing with Naked Woman and Romantic Lures for a closer look at this phenomenon.) His infatuation even allowed him to rationalize her bad English ("I wish to know that makes you, think, and I shall wait your answer, holding my fingers have crossed"). Epstein later confessed, "I had been fooled partly because I wasn't thinking clearly: I had wanted to believe that a beautiful young woman really cared about me." All this is to show that it is not as easy to identify a bot as you may think. Social media sites are under pressure to remove them but, unless your account suddenly gets thousands of followers overnight, bots are not so easy to detect. In countries such as Pakistan, people are paid extremely low wages to click on ads to make money for advertisers and website owners. These so-called "click farms" obscure the situation even further. They are, after all, real people with real accounts. They can be hired to give you the popularity you are looking for.
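The "thousands of followers overnight" tell mentioned above is one of the few detection heuristics that still works against modern bots. Here is a minimal sketch of that idea, using invented account names and follower counts: it compares two daily snapshots and flags accounts whose follower growth is implausible in both absolute and relative terms. Real platforms use far richer signals; this only illustrates the spike heuristic itself.

```python
def flag_suspicious(yesterday, today, spike_ratio=5.0, min_jump=1000):
    """Flag accounts whose follower count jumped implausibly overnight.

    yesterday / today: dicts mapping account name -> follower count.
    An account is flagged only if it gained at least `min_jump`
    followers AND its new count is more than `spike_ratio` times its
    old one, so large, established accounts with steady growth are
    not falsely accused. Thresholds are illustrative assumptions.
    """
    flagged = []
    for account, before in yesterday.items():
        after = today.get(account, before)
        if after - before >= min_jump and after > before * spike_ratio:
            flagged.append(account)
    return flagged

# Hypothetical snapshots: candidate_a buys followers, candidate_b
# grows organically on an already-large base.
yesterday = {"candidate_a": 800, "candidate_b": 50_000}
today = {"candidate_a": 9_500, "candidate_b": 51_200}

print(flag_suspicious(yesterday, today))  # ['candidate_a']
```

Note why both conditions matter: candidate_b also gained over a thousand followers, but on a base of 50,000 that growth is unremarkable, so the ratio test lets it through while still catching the overnight spike.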

Nowadays, the number of followers, likes, and views can give you an aura of power and make you more attractive to future employers, customers, and, of course, members of the opposite sex. If you are a politician, it is vital, in a democracy, that you be liked. People want to believe that the person they are voting for actually has a chance of winning and having a large social following seems to indicate a politician’s electability.

Governments have been increasingly using bots to advance their political agendas. If someone voices an opinion on social media that they object to, why not employ bots that will give an alternative opinion and give the appearance that the original poster is in the minority? This is, in concept, nothing new. Governments, companies, and organizations have been paying people to comment online to achieve various results. What’s different here is the volume of such comments that can be generated through bots.

But will US political parties actually use them in the next election? In close races in certain districts, the temptation may be difficult to resist. Hackers have been employed in the past to bring down rival websites with denial-of-service attacks. In a similar way, opponents could give a rival politician so many fake followers that the rival's accounts are automatically suspended, or could point to those accounts as an example of unethical padding.

So, yes, political bots will be used in one way or another. They may be farmed out in such a way that their sources will be untraceable and, even if they are traced, any knowledge of them can be easily denied. The more fanatical a viewpoint, the more likely its proponents are to use any method they can to disseminate and popularize it. However, mainstream politicians cannot be complacent. Whatever they say can be obscured or twisted by a bot army. Rumors and false accusations can be spread by bots and, even when they are denied and proven false, may leave a stain on any politician so attacked. Social networks may try to be vigilant, but they will always end up being reactive rather than proactive. The only thing the public can do is to be skeptical of any sensational political stories, take trending political topics with a grain of salt, and doubt political popularity based on social media statistics. Otherwise, we may be in a position best captured in a quote by Clarence Darrow: "When I was a boy I was told that anybody could become President; I'm beginning to believe it."


