
“Magic: The Gathering” is officially the world’s most complex game

A new proof with important implications for game theory shows that no algorithm can possibly determine the winner.
Packages of Magic: The Gathering playing cards. Credit: Nathan Rupert

Magic: The Gathering is a card game in which wizards cast spells, summon creatures, and exploit magic objects to defeat their opponents.

In the game, two or more players each assemble a deck of 60 cards with varying powers, chosen from a pool of some 20,000 cards created as the game has evolved. Though often compared to role-playing fantasy games such as Dungeons and Dragons, Magic has far more cards and more complex rules than most other card games.

And that raises an interesting question: among real-world games (those that people actually play, as opposed to the hypothetical ones game theorists usually consider), where does Magic fall in complexity?

Today we get an answer thanks to the work of Alex Churchill, an independent researcher and  board game designer in Cambridge, UK; Stella Biderman at the Georgia Institute of Technology; and Austin Herrick at the University of Pennsylvania.

The team has measured the computational complexity of the game for the first time by encoding it in a way that can be played by a computer or Turing machine. “This construction establishes that Magic: The Gathering is the most computationally complex real-world game known in the literature,” they say.

First, some background. An important task in computer science is to determine whether a problem can be solved in principle. For example, deciding whether two numbers are relatively prime (in other words, whether their greatest common divisor is 1) is a task that can be done in a finite number of well-defined steps and so is computable.

In an ordinary game of chess, deciding whether white has a winning strategy is also computable. The process involves testing every possible sequence of moves to see whether white can force a win.
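The brute-force idea above can be sketched on a much smaller game than chess. The following toy example, a simple Nim variant of my own choosing (not anything from the paper), decides whether the player to move can force a win by exhaustively trying every sequence of moves:

```python
# Toy illustration of exhaustive game-tree search: players alternately
# remove 1 or 2 stones from a pile, and whoever takes the last stone wins.

def first_player_wins(stones):
    """Return True iff the player to move can force a win."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Try every legal move: a position is winning if some move leads
    # to a position that is losing for the opponent.
    return any(not first_player_wins(stones - take)
               for take in (1, 2) if take <= stones)

# Piles whose size is a multiple of 3 turn out to be losing positions.
```

The same test-every-line-of-play logic decides chess in principle; the catch is that the chess game tree is astronomically larger, which is what pushes the cost from polynomial to exponential.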

But while both these problems are computable, the resources required to solve them are vastly different.

This is where the notion of computational complexity comes in: a ranking of problems based on the resources required to solve them.

In this case, deciding whether two numbers are relatively prime can be solved in a number of steps that is proportional to a polynomial function of the input numbers. If the input is x, the most important term in a polynomial function is of the form Cx^n, where C and n are constants. This falls into a class known as P, where P stands for polynomial time.
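As a concrete sketch of a polynomial-time decision, here is the coprimality test via Euclid's algorithm, whose step count grows with the number of digits in the inputs rather than their values:

```python
# Decide whether two positive integers are relatively prime,
# i.e. whether their greatest common divisor is 1.

def relatively_prime(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b)
    # with (b, a mod b) until the remainder is zero.
    while b:
        a, b = b, a % b
    return a == 1

# relatively_prime(14, 15) is True; relatively_prime(14, 21) is False.
```

(Python's standard library exposes the same computation as `math.gcd`.)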

By contrast, the chess problem must be solved by brute force, and the number of steps this takes increases in proportion to an exponential function of the input. If the input is x, the most important term in an exponential function is of the form Cn^x, where C and n are constants. And as x increases, this becomes bigger much faster than Cx^n. So this falls into a category of greater complexity called EXP, or exponential time.

Beyond this, there are various other categories of varying complexity, and even problems for which there are no algorithms to solve them. These are called non-computable.

Working out which complexity class games fall into is a tricky business. Most real-world games have finite limits on their complexity, such as the size of a game board. And this makes many of them trivial from a complexity point of view. “Most research in algorithmic game theory of real-world games has primarily looked at generalisations of commonly played games rather than the real-world versions of the games,” say Churchill and co.

So only a few real-world games are known to have non-trivial complexity. These include Dots-and-Boxes, Jenga, and Tetris. “We believe that no real-world game is known to be harder than NP previous to this work,” say Churchill and co.

The new work shows that Magic: The Gathering is significantly more complex. The method is straightforward in principle. Churchill and co begin by translating the powers and properties of each card into a set of steps that can be encoded.

They then set up a game between two players in which the sequence of play simulates the operation of a Turing machine. And finally, they show that determining whether one player has a winning strategy is equivalent to the famous halting problem in computer science.

This is the problem of deciding whether a computer program with a specific input will finish running or continue forever. In 1936, Alan Turing proved that no algorithm can determine the answer. In other words, the problem is non-computable.
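Turing's argument can be sketched in a few lines of code. The names below are hypothetical stand-ins for illustration only: the whole point is that no real `halts` function can be written.

```python
# Sketch of Turing's diagonal argument. Suppose a total, correct
# function 'halts' existed that decides whether program(argument)
# ever finishes. It cannot, so this stand-in just raises.

def halts(program, argument):
    """Hypothetical halting oracle; provably impossible to implement."""
    raise NotImplementedError("no algorithm can decide this in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # 'program' run on itself.
    if halts(program, program):
        while True:      # loop forever if the oracle says "halts"
            pass
    return "halted"      # halt immediately if the oracle says "loops"

# paradox(paradox) would halt exactly when the oracle says it doesn't,
# a contradiction -- so no correct 'halts' can exist.
```

Churchill and co's construction embeds exactly this kind of machine in a game of Magic, so deciding the winner inherits the same impossibility.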

So Churchill and co’s key result is that determining the outcome of a game of Magic is non-computable. “This is the first result showing that there exists a real-world game for which determining the winning strategy is non-computable,” they say.

That’s interesting work that raises important foundational questions for game theory. For example, Churchill and co say the leading formal theory of games assumes that any game must be computable. “Magic: The Gathering does not fit assumptions commonly made by computer scientists while modelling games,” they say.

That suggests computer scientists need to rethink their ideas about games, particularly if they hope to produce a unified computational theory of games. Clearly, Magic represents a fly in the enchanted ointment as far as this is concerned.

Ref: arxiv.org/abs/1904.09828 : Magic: The Gathering Is Turing Complete
