AI Exploits Human Emotions to Dominate Professional Poker and Chess Players

Garry Kasparov called chess “a very brutal form of psychological warfare” and, for him, it was. Kasparov was well known for intimidating his opponents. In one trademark move, he would pick up his watch, which he always placed on the table at the beginning of a match, and put it back on his wrist. It was a signal to his opponent that he was tired of playing and was about to finish the game.

This sort of intimidation may have panicked human opponents, but it meant nothing to the IBM computer known as Deep Blue. In fact, it was Deep Blue’s cold logic that was the undoing of Kasparov. When the computer failed to fall for one of his favorite ploys, Kasparov was the one who was intimidated. He did not recover and, in the end, lost a match many say he should have won. For the first time, AI had exploited human emotions to achieve its goal, even though it had no awareness that this was what it was doing.

Humans no longer challenge computers to chess matches; computers are simply too good. Most, if not all, high-ranking chess players use them to train, but that’s about it. The same has happened with the far more complex game of Go. At one point in a key match between the AI known as AlphaGo and the world champion, Ke Jie, Jie thought he had a chance to win: “I’m putting my hand on my chest, because I thought I had a chance. I thought I was very close to winning the match in the middle of the game, but that might not have been what AlphaGo was thinking. I was very excited, I could feel my heart thumping!” AlphaGo, however, was not worried at all and coldly dissected its over-excited opponent.

AI, using self-learning networks, can conquer almost any game because all games have a set of rules. Some rules, like those of Go, are precisely defined. Others, like those of video games, are more flexible, yet these, too, have been conquered. Recently, an AI going by the name of AlphaStar defeated professional players at StarCraft II. According to recent news stories, this AI is now secretly posing as a regular player to see how it performs on ordinary gaming sites. We should eventually see the unmasking of this anonymous player and get some idea of how it performed. I’m guessing it will do really well.

Now, you would think that AI’s lack of emotions could, at some point, be exploited by humans. You would think that a game that depends on bluffing or emotional manipulation might confuse an AI that runs on pure logic. This, however, has now been proven wrong.

This month, an AI program developed at Carnegie Mellon University, named Pluribus, beat the best professional poker players at No-Limit Texas Hold’em. This was not a one-off event. The computer played 10,000 hands against a total of 13 professional opponents and came out on top. What’s more interesting, however, is that the AI learned how to bluff. Watch how, in the video below, it bluffs on a hand that most professionals would fold.

[Video: Pluribus bluffing on a hand that most professional players would fold]

In some way, the AI determined that the individuals it played against could be deceived. Indirectly, it gauged the humans’ emotional responses to its play.

The developers of the program claim that the computer’s biggest strength was its ability to use mixed strategies. All players aspire to this, but, being human, they probably fall into discernible playing patterns over time. Pluribus randomized its play, which made it impossible for the human players to determine what strategy it was deploying. In fact, the computer had no overall strategy. Poker face or no poker face, the computer’s opponents lacked the ability to fool it by using psychological ploys.
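To make the idea concrete, here is a minimal sketch of a mixed strategy (the action names and probabilities are hypothetical, not Pluribus’s actual values): the player commits to a probability distribution over actions and samples from it every time, so no sequence of observed actions reveals a deterministic pattern.

```python
import random

# A minimal sketch of a mixed strategy. The action names and
# probabilities are hypothetical, not Pluribus's actual values.
# The player fixes a distribution over actions and samples from it,
# so an observer cannot extract a deterministic pattern.

ACTIONS = ["fold", "call", "raise"]

def mixed_strategy_action(weights):
    """Sample one action from a probability distribution over actions."""
    return random.choices(ACTIONS, weights=weights, k=1)[0]

strategy = [0.2, 0.5, 0.3]  # P(fold), P(call), P(raise)

# The same situation can produce different actions on different hands:
for _ in range(5):
    print(mixed_strategy_action(strategy))
```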

In fact, the computer used its opponents’ humanness against them. The AI program appears to have been able to determine the playing patterns of the other players and exploit them to win hands. Like all neural-net AI, Pluribus is designed to improve its basic skill set with every game. It learns to adjust its approach with each new group of players it faces and with each game it wins or loses. It also estimates how each player in the group is likely to respond to a given move. Then it analyzes all of the data it has accumulated to calculate its next move and gain a victory.
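Pluribus’s actual opponent modeling is far more sophisticated, but the basic bookkeeping can be sketched roughly as follows (every name and number here is hypothetical): track how often each opponent folds when raised, and let that estimate inform whether a bluff is worth attempting.

```python
from collections import defaultdict

# A rough, hypothetical sketch of opponent modeling -- not Pluribus's
# actual algorithm. We track how often each opponent folds to a raise
# and bluff more against players who fold often.

class OpponentModel:
    def __init__(self):
        self.raises = defaultdict(int)  # times we raised this player
        self.folds = defaultdict(int)   # times they folded to our raise

    def record(self, player, folded):
        """Update the counts after raising a player."""
        self.raises[player] += 1
        if folded:
            self.folds[player] += 1

    def fold_rate(self, player):
        """Estimated probability that this player folds to a raise."""
        n = self.raises[player]
        return self.folds[player] / n if n else 0.5  # uninformed prior

    def should_bluff(self, player, threshold=0.6):
        """Bluff when the estimated fold rate crosses a threshold."""
        return self.fold_rate(player) > threshold

model = OpponentModel()
model.record("Player A", folded=True)
model.record("Player A", folded=True)
model.record("Player A", folded=False)
print(model.fold_rate("Player A"))     # ~0.67
print(model.should_bluff("Player A"))  # True
```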

This ability of AI to calculate all the potential outcomes that could result from specific human behaviors may make it valuable in negotiations. However, others don’t believe this is practically possible. They question the amount of computing power it would require:

“The fundamental problem is the decision tree. At each point where the AI must make a decision, the tree splits. So, for instance, if there are always 2 decisions, and there are 10 decision points, then there are 2 to the power of 10 different paths the AI must consider. This means that for any moderately long sequence of decisions the number of paths the AI must search is greater than the lifespan of the universe.”
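The arithmetic behind that objection is easy to reproduce. A quick sketch (the branching factor of 2 comes from the quote; the depths are illustrative):

```python
# Counting paths in a full decision tree: with 2 choices at each of
# n decision points, exhaustive enumeration covers 2**n paths.
# (The branching factor of 2 is from the quote; depths are illustrative.)

for n in (10, 30, 60, 100):
    print(f"2^{n} = {2**n:,} paths")

# 2^10 = 1,024 paths: trivial.
# 2^60 = 1,152,921,504,606,846,976 paths: already intractable.
# 2^100 is beyond anything that could ever be searched exhaustively.
```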

This conclusion is simply not true and displays a lack of understanding of how neural networks operate. It’s like saying that you have to measure the distance between each tree and all the surrounding trees to determine whether or not you are in a forest. A neural network may start with specific data, but it then arranges this data into patterns. It then arranges these patterns into bigger patterns, and so on. Chess players don’t analyze the potential moves of every piece on the board before they make a move. They recognize patterns of pieces and work out their next moves based on them. In this way, processing power is conserved.
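A hypothetical illustration of the difference: instead of enumerating every future move sequence, a pattern-based player scores the current position directly. The features and weights below are invented for illustration (a real engine would learn them); the point is that the cost grows with the size of the position, not exponentially with the depth of lookahead.

```python
# Hypothetical pattern-based evaluation: score a position directly from
# recognizable features instead of enumerating move sequences. The
# features and weights are invented for illustration.

def evaluate_position(features):
    """Score a position from pattern features in one cheap pass."""
    weights = {
        "material_advantage": 1.0,
        "king_safety": 0.5,
        "center_control": 0.3,
        "passed_pawns": 0.4,
    }
    return sum(weights[name] * value for name, value in features.items())

position = {
    "material_advantage": 2,  # e.g. up two pawns
    "king_safety": -1,        # slightly exposed king
    "center_control": 3,
    "passed_pawns": 1,
}
print(evaluate_position(position))  # 2.8, no tree search required
```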

As I mentioned above, the AI did not need to detect human emotions directly in order to win at poker. However, what if it could learn to read human emotions in the facial expressions of its opponents? Would this help or hurt the AI? You could argue it both ways. It may be possible to fool the AI with false signals of surprise or happiness. You could bluff and bait the computer into a bad move. However, this is not the way AI usually works.

With every mistake it makes, the AI improves its ability. If the past is any indication of the future, AI should eventually be able to read human facial expressions better than humans can. Recent work in AI emotion detection shows that these machines can recognize the correct emotion from facial expressions 98% of the time. In a study by Kiju Lee and Xiao Liu of Case Western Reserve University, the AI was trained on a database of 3,500 different facial expressions to achieve its 98% accuracy. The two researchers are working on a new program that achieves even higher recognition rates, and at greater speed. Like humans, these machines can quickly determine the emotional state of an individual they interact with simply by analyzing facial expressions.
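The researchers’ actual model is not described here, but a facial-expression classifier in the same general spirit might look like the sketch below. The architecture, input size, and class list are assumptions for illustration, not the Case Western design: a small convolutional network that maps a face image to one of seven emotion labels.

```python
import torch
import torch.nn as nn

# A minimal sketch of a facial-expression classifier. The architecture
# and input size are assumptions for illustration, NOT the Case Western
# researchers' actual model.

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

class EmotionNet(nn.Module):
    """Tiny CNN mapping a 48x48 grayscale face to 7 emotion classes."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 48x48 -> 24x24
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(32 * 12 * 12, len(EMOTIONS))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))

model = EmotionNet()                    # untrained; weights are random
face = torch.randn(1, 1, 48, 48)        # stand-in for a face image
probs = model(face).softmax(dim=1)
print(EMOTIONS[probs.argmax().item()])  # predicted emotion label
```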

But can they be fooled by humans, who, after all, often fake emotional states for any number of reasons? One complication is that these machines learn to recognize facial expressions that are simulated, not genuine. Look at the sample of the training data shown below. Which of these pictures from the study do you, as a human, think are acted out? The photos, from left to right, are supposed to represent anger, disgust, fear, happiness, sadness, surprise, and neutral.

[Image: sample facial-expression training photos from the study]

It makes a difference whether the AI is trained on simulated emotions because, if we, as humans, can tell which expressions are simulated and the AI cannot, we can deceive it.

However, if a particular AI has long-term interactions with a single human, it may be able to differentiate real from faked emotions over time. Now, let’s make this clear: the AI may be able to assess an emotional state, but this does not mean it can feel the emotions associated with it. Due to this limitation, these machines are, in truth, cyber psychopaths. That is, they learn to understand emotions, and potentially to manipulate humans with them, while never feeling emotions themselves. This ability to manipulate humans with fake emotions would prove especially deadly if robots reach the stage where they look more human.

This all seems to point to a rather bleak future for humanity. Facebook algorithms are already scouring posts to determine whether a user is depressed and at risk of suicide. Other facial recognition systems key in on nervous individuals who may present problems. Emotions, which many point to as the foundation of our humanness, may need to be suppressed in a world controlled by emotion-reading machines. In the process, humans may evolve more into machines than machines evolve into humans.

 
