Million Dollar Computer Chess Problem Will Take ‘Thousands of Years’ to Solve
By Claire Downs | Wed, 06 Sep 2017
Today, we get our gaming rocks off by watching things like simulated hippos shoot penguins with machine guns. But back in the 19th century, people were excited by “unsolvable” chess problems, crafted by chess composers to delight and befuddle the masses. The n-Queens Completion Puzzle is one such problem, and researchers at the University of St. Andrews believe it will take thousands of years for artificial intelligence to solve a souped-up version of it on a 1,000×1,000 square board. So much so that there is a million dollar prize waiting for any computer programmer who can either crack the problem efficiently or prove that it can’t be done.
The problem, first posited in 1848, challenges a player to place eight queens on a conventional 8×8 board without any queen threatening another: no two queens can share the same row, column, or diagonal. Before guffawing to yourself at the perceived ease of this setup, you should know that of the 4,426,165,368 possible ways to drop eight queens onto the board, only 92 are solutions.
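Both of those numbers are easy to check yourself. Here’s a quick Python sketch (mine, not the researchers’) that counts the raw placements and brute-forces the 92 solutions:

```python
from itertools import permutations
from math import comb

# Ways to drop 8 indistinguishable queens onto 64 squares.
print(comb(64, 8))  # 4426165368

# One queen per row, columns given by a permutation (rows and columns
# are then automatically safe); reject any pair sharing a diagonal.
solutions = sum(
    1
    for cols in permutations(range(8))
    if all(abs(cols[i] - cols[j]) != j - i
           for i in range(8) for j in range(i + 1, 8))
)
print(solutions)  # 92
```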
Image: Wikimedia Commons
Traditional chessboard setups of the n-Queens Puzzle can be solved easily by computers using trial and error, systematically trying placements and backing up whenever two queens clash. But as the board is scaled up and more queens are added, this method becomes computationally expensive, taking dramatically longer to run. For example, scale the board up to n = 27 (a 27×27 board with 27 queens) and there are roughly 234 quadrillion solutions to find.
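That trial-and-error approach is just backtracking. A minimal sketch (my own illustration, not the code from the study) looks something like this, and it already hints at why the search balloons as n grows:

```python
def solve_n_queens(n, placed=None):
    """Return one non-attacking placement of n queens, or None.
    `placed` holds one column index per already-filled row."""
    placed = placed or []
    row = len(placed)
    if row == n:                      # every row has a queen: done
        return placed
    for col in range(n):
        # Skip columns and diagonals already under attack.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            result = solve_n_queens(n, placed + [col])
            if result is not None:    # a deeper placement worked out
                return result
    return None                       # dead end: backtrack

print(solve_n_queens(8))  # e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```

For n = 8 this finishes instantly; on far bigger boards, the same idea can thrash through dead ends for a very long time.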
A new study, published in the Journal of Artificial Intelligence Research, notes that when the problem is scaled up to n = 1,000, with 1,000 queens to place, the software tweaks out, churning through a seemingly endless sea of permutations like a frustrated Roomba trying to vacuum a corner. This puzzle actually illuminates a more general weakness of current computers: a solution may be easy to verify once you have it, but computing that solution gets exponentially harder, to the point of practical impossibility. Though the problem is simple to state, the inability of computers to solve it efficiently is indicative of their present limitations.
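That asymmetry is easy to see in code. Checking a proposed arrangement takes a trivial amount of work even for a 1,000×1,000 board; it’s producing the arrangement in the first place that blows up. A rough sketch, assuming a candidate solution is handed over as one column index per row:

```python
def is_valid(cols):
    """Verify a proposed n-queens arrangement (one column index per row).
    A handful of set operations -- cheap even when n = 1,000."""
    n = len(cols)
    return (
        len(set(cols)) == n                                  # no shared column
        and len({r + c for r, c in enumerate(cols)}) == n    # no shared / diagonal
        and len({r - c for r, c in enumerate(cols)}) == n    # no shared \ diagonal
    )

print(is_valid([0, 4, 7, 5, 2, 6, 1, 3]))   # True: a real 8-queens solution
print(is_valid(list(range(8))))              # False: every queen on one diagonal
```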
After publishing their findings, the University of St. Andrews’ Dr. Ian Gent, Dr. Christopher Jefferson, and Dr. Peter Nightingale were contacted by the Clay Mathematics Institute, which has agreed to award a million dollar prize to anyone who can teach AI to quickly and efficiently solve the puzzle at n = 1,000. Alternatively, if someone can mathematically prove that AI will never be able to do it, they can also take home the money.
As pointed out in the paper, if a computer can solve this iteration of the Queens Completion problem, computers may be capable of solving other extremely complex problems. For example, the researchers say, a program that cracks it might also be able to break complicated security encryption.
I spoke over email with Gent, who is based in Scotland, to find out whether this money is real, or if the winnings are simply hypothetical given the complexity of the task.
“It is real and exists,” Gent explained over email. “The prize doesn’t get paid instantly though—you have to publish a paper about the proof in a good journal and also wait two years in case anybody finds a mistake.” Two years is but a blip when, according to Gent, he and many researchers share the belief that it could take “thousands of years” before a human (or a computer) comes up with an efficient way to solve this problem and many others like it.
“It’s not an exact calculation but most people believe that you can’t solve the problem efficiently,” he continued. “If that’s right, then it won’t be hard to find examples where the best computer programs might take thousands of years to solve it. At another level, I have no idea if anybody will win the million dollars.”
Deadly heatwaves could burn us alive by the year 2100, so AI had better get to work if any human is to reap the benefits of a million dollar prize. And adjusted for inflation, a million dollars a thousand years from now may seem like meager wages for solving a nearly impossible chess problem spanning millennia and several generations.
Dr. Gent did offer some encouraging words to programmers vying for the prize: “Even one of the greatest mathematicians of all time, [Carl Friedrich] Gauss, made some mistakes when he was studying the Queen’s problem. They weren’t major mistakes but still, that’s interesting to me.”