Facebook’s new poker-playing AI could wreck the online poker industry—so it’s not being released

Multiplayer poker is the latest game to fall to artificial intelligence—and the techniques used could be vital for trading, product pricing, and routing vehicles.
July 11, 2019
[Photo: playing cards. Jack Hamilton | Unsplash]

Poker requires a skill that has always seemed uniquely human: the ability to be devious. To win, players must analyze how their opponents are playing and then trick them into handing over their chips. Such cunning, of course, comes pretty naturally to people. Now an AI program has, for the first time, shown itself capable of outwitting a whole table of poker pros using similar skills.

A team from Carnegie Mellon University (CMU) and Facebook used a combination of AI techniques to out-bet and out-bluff human players in a game of six-player, no-limit Texas Hold’em. Each of the humans involved had previously won more than a million dollars at the poker table; among them were Darren Elias, who holds the record for most World Poker Tour titles, and Chris “Jesus” Ferguson, who has won six World Series of Poker titles. The field also included one inferior poker bot.

The new AI, Pluribus, played 5,000 hands against the poker players and consistently won more than its opponents. In another test involving 13 players and 10,000 hands, the bot again emerged victorious. Pluribus adopted some surprising strategies, including “donk betting,” or ending one round with a call but then starting the next round with a bet. It also bluffed like a seasoned pro.

The algorithm was so successful the researchers have decided not to release its code for fear it could be used to empty the coffers of online poker companies. “It could be very dangerous for the poker community,” says Noam Brown, a Facebook researcher and former student at CMU who helped develop the algorithm. (Brown was this year named one of MIT Technology Review’s Innovators Under 35).

“You typically figure out where your opponent is weak, but there was no finding weaknesses,” says Jason Les, one of the poker pros in the game. “This AI was so strong you couldn’t find anything to exploit or take advantage of.”

Les says he did fairly well against the AI but was “basically lucky,” and he reports that it caught him bluffing several times. He also says the AI player taught him a few tricks. “Some of the things I saw the AI do opened my mind to how to play ‘multi-way’ [multi-player] pots,” he says.

Games like chess and Go have become a standard way to measure progress in artificial intelligence (even though each game involves only a limited and narrow aspect of what constitutes human cleverness). But the games AI has conquered so far mostly involve just two players, and most are played in such a way that an opponent’s moves are clear to see. The most popular forms of poker, in contrast, involve a table of multiple players, and many hidden cards.

“It is mind-blowing,” says Tuomas Sandholm, a professor at Carnegie Mellon who helped develop Pluribus. “I didn’t think we were anywhere close—it was only about a year ago that I started to believe.”

Sandholm and Brown developed an AI program capable of playing superhuman one-on-one poker in 2017. But it seemed that creating a program capable of beating several players would be so much more complex as to be almost impossible. Unlike the two-player version of the game, it isn’t clear that the multiplayer version has a single optimal strategy (what’s known as the “Nash equilibrium”).
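To make the equilibrium idea concrete, here is a minimal, illustrative sketch (not Pluribus’s actual code, which the researchers have withheld) of regret matching, a self-play building block of the counterfactual-regret family of methods the team’s earlier bots drew on. In the two-player, zero-sum game of rock-paper-scissors, each player shifts probability toward the actions it regrets not having played, and the players’ average strategies converge to the Nash equilibrium:

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff for playing action a against b: +1 win, -1 loss, 0 tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from(regrets):
    """Play in proportion to positive cumulative regret (uniform if none)."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=200_000, seed=0):
    random.seed(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sums = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [strategy_from(r) for r in regrets]
        acts = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for i in range(2):
            got = payoff(acts[i], acts[1 - i])
            for a in range(ACTIONS):
                # Regret: how much better action a would have done than
                # the action actually played this round.
                regrets[i][a] += payoff(a, acts[1 - i]) - got
                strategy_sums[i][a] += strats[i][a]
    # The *average* strategy over all iterations approaches equilibrium.
    return [[s / iterations for s in sums] for sums in strategy_sums]

# Both players' averages drift toward the uniform (1/3, 1/3, 1/3) equilibrium.
print(train())
```

With two players this averaging provably approaches the Nash equilibrium; with six players at the table, no such single optimal strategy is guaranteed to exist, which is what made the multiplayer result so surprising.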

Then, last year the pair developed a technique more efficient at exploring the possible permutations that come with each freshly dealt card. As it isn’t possible to consider every possible hand or every possible strategy each player is using, the new technique whittles down the search to a manageable subset.
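One simple way to whittle the game down to a manageable subset, used across modern poker bots, is action abstraction: rather than searching over every legal bet size, consider only a handful of representative sizes. The sketch below is hypothetical, with numbers chosen only to show the scale of the reduction:

```python
def legal_raises(stack):
    """In no-limit poker, any whole-chip amount up to the stack is a legal raise."""
    return list(range(1, stack + 1))

def abstracted_raises(pot, stack):
    """Search only a few representative sizes: half pot, pot, double pot, all-in."""
    sizes = {pot // 2, pot, 2 * pot, stack}
    return sorted(s for s in sizes if 0 < s <= stack)

pot, stack = 100, 10_000
print(len(legal_raises(stack)))       # 10000 candidate bet sizes...
print(abstracted_raises(pot, stack))  # ...reduced to [50, 100, 200, 10000]
```

Collapsing thousands of branches into four at every decision point is what makes searching several moves ahead tractable at all; the skill lies in picking a subset that still covers the strategically distinct options.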

Remarkably, whereas previous algorithms needed a supercomputer, Pluribus runs on a single server. Details of the research, and the game played against poker experts, appear in the journal Science today.

“With six players, it is so much more complicated that you can’t search to the end of the game,” says Brown, who is now part of the Facebook Artificial Intelligence Research (FAIR) group. “The [search] algorithm is the key.”

The achievement is a major landmark for artificial intelligence. A computer program must use more than just brute computation to prevail at poker. It also needs an ability to negotiate under uncertain circumstances, using the principles of game theory.

The techniques used in poker have many practical uses, from pricing products to routing self-driving cars through busy traffic. Strategy algorithms are also potentially useful in defense contexts. Indeed, Sandholm has a consulting company called Strategy Robot that works on defense projects. But he stresses that the code used to conquer poker would be useless in such contexts, because it is tailored very specifically to the card game.

Vincent Conitzer, a professor at Duke University who specializes in AI and game theory, says it will be important to see if the techniques employed can be applied to other multiplayer games.

“One might have thought that multiplayer poker would require some fundamentally different techniques,” Conitzer says. “The work raises a lot of interesting questions about the nature of games and strategic settings.”

Brown says Facebook has no plans to apply the techniques developed for six-player poker, although they could be used to develop better computer games.
