The article completely misinterprets the press release[1].
The University of St Andrews isn't offering a prize; they've just shown that the Queens puzzle on an n-by-n board (EDIT: actually, the n Queens completion puzzle) is NP-hard.
(By the way, even the press release, though less wrong than the article, is still pretty bad: apparently NP-complete problems are hard because they use "backtracking". Also it seems to suggest that this is something computer programmers would solve, rather than mathematicians...)
Yes (one of the authors here), the press release came out very badly.
The reference to the Clay prize was to try to demonstrate that lots of people care about solving NP-complete problems in general...
N-Queens has long been known to be solvable in polynomial time (although there is no known efficient way of counting the total number of solutions).
We showed N-Queens completion is NP-hard. This is (in my opinion) interesting to people who care about such things, because it's a long standing problem and has the advantage of being easy to explain. However, it is mainly just a curiosity, and provides a new easy way of demonstrating NP-completeness.
Phys.org is a dumpster fire, and ANY information from there is better linked from another source, of which there are many. I recommend the AAAS sites www.eurekalert.org and www.sciencemag.org, as well as Nature. At least then you'll have DOI links, and a better class of journalism.
Completely off-topic: As a one-time civil engineer, mathematical dullard, and frequent New York visitor, who is losing his mind, I thought this was about completing a subway line. In my defense, the N train does go to Queens (though it's not called the N-Queens), and everybody knows completing a subway line is a really hard problem, especially if you're following the story of the new 2nd Ave line.
The million-dollar prize just appears to be the Clay Institute's prize for solving P vs NP. I am guessing that the queens problem (a special case of maximum independent set in a graph) is NP-Hard.
If you click through to the "release from the university" on phys.org[1], then the paper itself is listed at the bottom[2], though without a hyperlink. Via the abstract [3]:
> "The n-Queens problem is to place n chess queens on an n by n chessboard so that no two queens are on the same row, column or diagonal. The n-Queens Completion problem is a variant, dating to 1850, in which some queens are already placed and the solver is asked to place the rest, if possible. We show that n-Queens Completion is both NP-Complete and #P-Complete."
That's just a good heuristic, though; it does not guarantee finding a solution. Also note that NP is a class of decision problems, and finding a solution for the n-Queens problem is not a decision problem.
Neither of the two decision problems that arise naturally from n-queens (deciding whether a solution exists for a given board size, and verifying whether a candidate is indeed a solution) is NP-hard: solutions exist for all n \not\in {2, 3}, and verification can clearly be done in time quadratic in n. Counting solutions is #P-complete, though.
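To make the verification claim concrete, here is a quick sketch in Python (my own illustration, not from the paper); with hash sets it is even linear in the number of queens:

```python
def is_n_queens_solution(queens, n):
    """Check whether `queens`, a list of (row, col) pairs, is a valid
    n-queens solution: n queens on the board, no two sharing a row,
    column, or diagonal. One pass with sets, so O(n) time."""
    if len(queens) != n:
        return False
    rows, cols, diag1, diag2 = set(), set(), set(), set()
    for r, c in queens:
        if not (0 <= r < n and 0 <= c < n):
            return False
        # r - c is constant along one diagonal direction, r + c along the other.
        if r in rows or c in cols or (r - c) in diag1 or (r + c) in diag2:
            return False
        rows.add(r); cols.add(c); diag1.add(r - c); diag2.add(r + c)
    return True
```

For example, `is_n_queens_solution([(0, 1), (1, 3), (2, 0), (3, 2)], 4)` accepts one of the two 4-queens solutions.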
If you read the cites on that link, you'll see that the value for 26 was only added in 2016. They do not have a closed form formula, or they would have shown it.
I mean, they could just be slow revealing. I doubt it, though.
Fair enough. I was obviously reading "polynomial time" above as implying an efficient one, where efficient was shorthand for "quick." Both leaps, I concede, were misguided.
Do you mean that it's #P and not a priori NP? Counting problems can sometimes be in NP by deciding whether the count is the correct one (with the right encoding, function problems can be thought of as a subset of decision problems).
A not too silly example is counting partitions: the number of ways of writing a positive integer as a sum of non-decreasing positive integers (like 5=3+2 or 5=4+1 or 5=2+2+1). There's a dynamic programming solution with running time within O(n^3), so it is definitely in NP.
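For the partition example, the dynamic program is short enough to spell out (a sketch of the standard DP, which is O(n^2), comfortably within the O(n^3) bound mentioned):

```python
def count_partitions(n):
    """Number of ways to write n as a sum of positive integers,
    ignoring order. Classic DP: admit parts 1, 2, ..., n one at a
    time; dp[t] counts partitions of t using the parts admitted so far."""
    dp = [0] * (n + 1)
    dp[0] = 1  # one way to make 0: the empty sum
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            dp[total] += dp[total - part]
    return dp[n]

count_partitions(5)  # 7: 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
```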
In another comment you mention that finding a solution to n-queens is not NP because it's not a decision problem. I'm confused because I thought the solution would be the polynomial certificate to the problem "an nxn board can be filled with n mutually non-threatened queens."
> Do you mean that it's #P and not a priori NP? Counting problems can sometimes be in NP by deciding whether the count is the correct one (with the right encoding, function problems can be thought of as a subset of decision problems).

It is unlikely that a #P-complete problem is in NP, even under such an encoding.
Essentially, #P is the class of problems where you compute the number of accepting runs for some problem in NP, so yes, #P and NP are closely related.
You can turn a counting problem into a decision problem by a suitable encoding, but that turns it into a different problem (and the complexity depends on the encoding). The more natural fit for a class of decision problems corresponding to #P would be PP, the class of problems where the majority of runs on a probabilistic TM accepts, which contains NP.
> In another comment you mention that finding a solution to n-queens is not NP because it's not a decision problem. I'm confused because I thought the solution would be the polynomial certificate to the problem "an nxn board can be filled with n mutually non-threatened queens."
The point is that deciding whether there is a solution is not NP-hard (it is in NP though, precisely by your argument that a solution is the polynomial certificate that can be verified in polynomial time). Indeed, existence of a solution can be decided in constant time (and is thus in P), since there are solutions whenever n is neither 2 nor 3. Furthermore, for any given n (except 2 and 3), a solution can be constructed explicitly in linear time (for a unary encoding of n). Nevertheless, counting solutions is hard.
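To underline how stark the contrast is, here is the constant-time decision procedure next to an exponential brute-force cross-check (my own sketch; the brute force is only feasible for tiny n):

```python
from itertools import permutations

def n_queens_solvable(n):
    """Decide whether an n x n board admits n mutually non-attacking
    queens. Constant time: solutions exist for every n except 2 and 3."""
    return n >= 1 and n not in (2, 3)

def brute_force_solvable(n):
    """Exponential cross-check: any solution uses one queen per row and
    per column, so try every permutation and test the two diagonals."""
    for cols in permutations(range(n)):
        if len({r - c for r, c in enumerate(cols)}) == n and \
           len({r + c for r, c in enumerate(cols)}) == n:
            return True
    return False
```

The two functions agree on every small board, while one does no work at all and the other inspects up to n! placements.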
Yes, changing the encoding can change the complexity, but I was taking issue with "because it is #P it cannot be NP", which is what I got out of:
> Counting solutions is not a decision problem, so it can't be in NP.
Is this just a statement that it is a category error to compare the set of function problems with the set of decision problems? Sure, that's true, but you can still ask questions like "does this #P problem have a polynomial reduction to an NP problem."
> Nevertheless, counting solutions is hard
Do you have a citation for counting n-queens solutions (not completions like in the featured paper) being #P-hard? (or NP-hard?)
> Sure, that's true, but you can still ask questions like "does this #P problem have a polynomial reduction to an NP problem."
That's an ill-posed question, though. NP is not closed under polynomial Turing reductions (otherwise we would have NP = coNP), so some problem Q being polynomial-time Turing-reducible to some other problem S in NP does not tell you anything about the complexity of Q. Other notions of polynomial reduction, in particular polynomial many-one reductions, don't apply, because they require both problems to be decision problems. So while you can indeed ask such a question, it doesn't make much sense to do so.
Coming back to the problem at hand, consider what a polynomial certificate would look like that assures that no further solutions exist (if “does the n-queens problem have exactly m solutions” were in NP, such a certificate must exist), and how it could be verified in polynomial time (i.e., without checking every other solution candidate).
> Do you have a citation for counting n-queens solutions (not completions like in the featured paper) being #P-hard? (or NP-hard?)
No, you're talking about a different problem: that of counting the total number of n-queens solutions. The original article is talking about the problem of deciding, given a partial placement of queens, whether it can be completed by adding further queens to form an n-queen solution.
When Computer Scientists talk about "fast" they generally mean that an algorithm runs in `n^k` steps, with `n` being the size of the input (1000 or 1000000 in your example) and `k` a constant. Algorithms that obey such a bound are called polynomial-time, or P for short.
A lot of problems require more steps (e.g. `2^n` steps) as far as we know, but if you have a solution to the problem you can easily (with a P algorithm) verify that it is the correct solution. These are called NP problems.
One of the biggest questions in Computer Science is whether all NP problems also have a P algorithm (P=NP), or whether some of them inherently require super-polynomial time (P≠NP).
This doesn't have a direct relation with "real" computer time (with modern computers a `2^n` algorithm is quite usable if your n is small enough), but it does mean that once your n becomes large enough and the best known algorithm for your problem is exponential, you can basically forget about finding an exact answer.
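To put rough numbers on that (an illustration, assuming a machine doing a billion steps per second):

```python
def poly_steps(n):
    return n ** 3   # steps for a hypothetical polynomial algorithm

def expo_steps(n):
    return 2 ** n   # steps for a hypothetical exponential algorithm

# At n = 50: 125,000 steps versus over 10^15 steps.
# The former is instantaneous; the latter is roughly two weeks
# at a billion steps per second.
```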
When we talk about the big-O for a given problem, the default interpretation is for a deterministic computer. All of our best known deterministic algorithms for NP-complete problems are exponential, and solving an NP-complete problem like SAT on a non-deterministic computer takes only polynomial time (which is not especially useful, since we don't know how to build such machines).
A decision problem (i.e., determining whether a string is a member of some set) is in NP if there is a program and polynomials p and q such that for every string x in the set, there is some proof of that fact of size at most q(|x|), which the program can verify is correct in time at most p(|x|).
This implies that there exists an exponential-time deterministic algorithm to solve any NP problem: just check all the possible proofs, reporting "yes" if you find one that works, or "no" once you've generated all the proofs of size at most q(|x|) and found that they all fail.
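That brute-force strategy can be written down directly. A sketch for SAT (my own illustration), where a "proof" is a truth assignment and verification is just evaluating the formula:

```python
from itertools import product

def brute_force_sat(clauses, num_vars):
    """Decide SAT by enumerating all 2^n candidate certificates
    (truth assignments) and verifying each one in polynomial time.
    A clause is a list of ints: +i means variable i, -i its negation."""
    for assignment in product([False, True], repeat=num_vars):
        # Polynomial-time verification of one certificate: every clause
        # must contain at least one satisfied literal.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True   # found a proof; the instance is a "yes"
    return False  # all exponentially many candidate proofs failed
```

Each individual check is fast; it is the 2^n candidates that make the overall search exponential.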
You either read the earlier version before it was corrected, or you're misunderstanding. The problem is not the N-Queens problem, it's the N-Queens /completion/ problem. The problem is this:
Here's an NxN board with k queens already placed. Is there a way to place another N-k queens so they are all mutually non-attacking?
Most instances will be easy, but current algorithms to solve this are such that for each N, the time taken in the worst case is an exponential function of N.
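A minimal backtracking completer, to make the problem statement concrete (my own sketch, assuming at most one pre-placed queen per row, keyed by row number); it is correct, but exponential in the worst case, which is exactly the issue:

```python
def complete_queens(n, placed):
    """Try to extend `placed`, a dict {row: col} of pre-placed queens,
    to a full n-queens solution. Returns a complete {row: col} dict,
    or None if no completion exists. Worst-case exponential backtracking."""
    cols = set(placed.values())
    diag1 = {r - c for r, c in placed.items()}
    diag2 = {r + c for r, c in placed.items()}
    if min(len(cols), len(diag1), len(diag2)) < len(placed):
        return None  # the pre-placed queens already attack each other

    def extend(row, board):
        if row == n:
            return dict(board)
        if row in board:  # this row's queen was pre-placed
            return extend(row + 1, board)
        for c in range(n):
            if c not in cols and (row - c) not in diag1 and (row + c) not in diag2:
                board[row] = c
                cols.add(c); diag1.add(row - c); diag2.add(row + c)
                result = extend(row + 1, board)
                if result:
                    return result
                del board[row]  # backtrack
                cols.discard(c); diag1.discard(row - c); diag2.discard(row + c)
        return None

    return extend(0, dict(placed))
```

On a 4x4 board, a queen pre-placed at (0, 1) can be completed, while one at the corner (0, 0) cannot.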
NP-hard problems that are also in NP are known as NP-complete. NP-complete problems share the characteristic that every problem in NP can be reduced to any of them in polynomial time (which is to say: you can encode arbitrary NP problems as particular instances of any NP-complete problem). Therefore, a polynomial-time algorithm for one NP-complete problem would yield one for every problem in NP.
It's not just "believed" that showing a polynomial-time algorithm for an NP-complete problem shows that P=NP; it's proven (in fact, this is part of the definition of NP-complete...)
[1]: https://phys.org/news/2017-09-simple-chess-puzzle-key-1m.htm...