I first saw this paper yesterday when it was posted on HN but didn't make the front page. I'm uncertain how much energy to put into it (unfortunately my health gives me very little to spare). So far I've only read the introduction, and it doesn't obviously involve any advanced mathematics I'm unfamiliar with, so I could perhaps attempt an implementation.
In addition to what I said above, another factor arguing against investing much time is that this was first published last year and nothing seems to have come of it since. Moreover, the current paper appears to be a revision of the one published last year. Does anyone know the history here? Were bugs found in the first version?
This is a bad sign for the paper. The prior cases of solving big foundational problems I'm aware of all introduced substantial new mathematics, so much so that the actual problem is almost an afterthought. Wiles's proof of Fermat's Last Theorem is the canonical example.
It's not impossible that this is legitimate, but... it seems unlikely that P=NP would have evaded proof for so long if the solution were straightforward. There are also a LOT of plausible-but-flawed proofs -- it's not unsolved for lack of trying.
I still think it's possible that there is a relatively simple solution that has so far eluded humans, who are burdened by many individual, institutional, and societal biases, and who I think are far less intelligent than they generally believe.
The final complexity discussion uses the graph-size constraint gained by the "improvement" but doesn't consider how to handle the extra labeling meaningfully. Basically, the pre- and post-improvement algorithms put the exponential work in different spots, and the sloppiness of the algorithm description (I mean, really, why tell us you're using a stack for BFS and then have "determine the subset of satisfied constraints" as a step?) makes the issue easy to overlook.
I'm also being a little generous with the algorithm itself. As described, some of the trie optimizations seem to make certain combinations of satisfied expressions impossible to notice, but I think it's not a big deal to make this part work. The properties of the trie structure (and of sorting the variables by occurrence, for that matter) don't seem to be used.
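For what it's worth, the stack-vs-queue detail isn't cosmetic: the same traversal loop with a stack visits vertices in depth-first order, not breadth-first. A toy illustration (my own, nothing from the paper):

```python
from collections import deque

def traverse(adj, start, frontier_pop):
    """Generic graph traversal; the frontier discipline decides the order."""
    seen, order = {start}, []
    frontier = deque([start])
    while frontier:
        v = frontier_pop(frontier)
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return order

adj = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
bfs = traverse(adj, 0, deque.popleft)  # queue -> level order: [0, 1, 2, 3, 4]
dfs = traverse(adj, 0, deque.pop)      # stack -> depth-first: [0, 2, 4, 1, 3]
```

Same code, different pop end, different visit order, so "stack for BFS" in an algorithm description is exactly the kind of thing that makes complexity claims hard to audit.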
The proofs and even explanations in this paper aren't very clear to me...
> Doing this enables us to represent each Di as a variable sequence, but with all the negative literals being removed. See Table I for illustration.
What justification does he have for throwing out negated variables? If you do that, the problem likely becomes trivial, but it has nothing to do with the original problem.
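To make the worry concrete, here is a toy brute-force check (my own example, not the paper's construction) showing that dropping negative literals from DNF terms changes the maximum number of simultaneously satisfiable terms:

```python
from itertools import product

# A DNF term is a dict {var_index: required_boolean_value}.
def max_sat_terms(terms, nvars):
    """Exact maximum number of terms satisfiable by a single assignment."""
    best = 0
    for bits in product([False, True], repeat=nvars):
        sat = sum(all(bits[v] == val for v, val in t.items()) for t in terms)
        best = max(best, sat)
    return best

# D1 = x0 AND NOT x1, D2 = x1 AND NOT x0: mutually exclusive terms.
original = [{0: True, 1: False}, {1: True, 0: False}]
# Same terms with the negative literals dropped.
stripped = [{0: True}, {1: True}]

print(max_sat_terms(original, 2))  # 1
print(max_sat_terms(stripped, 2))  # 2: x0 = x1 = True now satisfies both
```

If the bookkeeping in the paper doesn't recover this distinction somehow, the optimum it computes is for a different problem.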
It does create issues with determining which expressions are satisfied, since two different paths can meet at a merged trie vertex higher up, which requires the extra bookkeeping that kills it.
Remember the "faster than light neutrinos" thing? They published their stuff basically saying "Okay, we're really uncomfortable with this result so can someone explain what we've missed here?".
Papers like this always feel like the author is surprised anyone is even doubting them. "What, it's just the hardest problem in your field, worthy of a Millennium prize, and I'm publishing it in some lesser-known journals with little peer review. What of it?"
Come on. Have some modesty. Have some self-doubt! Reach out to Terry Tao and you know he'll happily explain your mistake and then help you write a better paper.
> Here, we notice that if we had one more span, <v3, v4, v5>, for example, it would be connected to <v1, v2, v3>, but not overlapped with <v1, v2, v3>. Being aware of this difference is important since the overlapped spans imply the consecutive ‘’s, just like <v1, v1, v2> and <v1, v2, v3>, which correspond to two consecutive ‘’s: (c2, ) and (c3, ). Therefore, the overlapped spans exhibit some kind of transitivity. That is, if s1 and s2 are two overlapped spans, then s1 ∪ s2 must be a new, but bigger span. Applying this operation to all the spans over a p-path, we will get a 'transitive closure' of overlapped spans.
It's a pity they don't release any source code that we could unit-test against problems with known solutions (while also profiling run-time to stay within polynomial bounds) :)
edit: what am I missing? Their reduction to a DNF seems unsound to me.
I read Proposition 1 [1] to mean: they find an optimal solution for the DNF V∪X, then expect the corresponding CNF C to have at least as many satisfied clauses. However, isn't that insufficient? There could be an even better solution with more satisfied clauses in C that is missed entirely when only looking for solutions to V∪X. But maybe I'm just misunderstanding something.
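Claims like this are at least cheap to fuzz on tiny instances. A brute-force MAX-SAT oracle one could compare any claimed polynomial algorithm against (a hypothetical test harness, not the paper's method):

```python
from itertools import product

def brute_max_sat(clauses, nvars):
    """Exact MAX-SAT by enumeration. Clauses are lists of literals,
    with literal +i / -i meaning x_i true / false (i >= 1)."""
    def sat(bits, clause):
        return any(bits[abs(l) - 1] == (l > 0) for l in clause)
    return max(sum(sat(bits, c) for c in clauses)
               for bits in product([False, True], repeat=nvars))

# (x1 or x2) and (not x1 or x2) and (not x2): at most 2 of 3 satisfiable.
print(brute_max_sat([[1, 2], [-1, 2], [-2]], 2))  # 2
```

Exponential, obviously, but on 10-15 variables it gives ground truth; any polynomial-time candidate that ever returns a smaller optimum is broken.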
(I'm leaving aside the fact that cryptocurrency theft is a crime that most people would be disinclined to commit, as well as the extraordinarily high likelihood that this proof is incorrect.)
You can target anyone's wallet in O(1) time, no proofs needed :^)
https://en.wikipedia.org/wiki/2-satisfiability#Maximum-2-sat...
For some problems, you can pick easy instances (even if large), or there can be ways to go backwards and generate an instance you know the answer to (prime factorization comes to mind).
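The factoring case is easy to sketch: multiply two random primes and you get an instance whose answer you know in advance (toy scale, for illustration only):

```python
import random

def is_prime(n):
    """Trial division; fine at this toy scale."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def random_prime(lo, hi):
    """Rejection sampling until we hit a prime."""
    while True:
        p = random.randrange(lo, hi)
        if is_prime(p):
            return p

# Build a factoring instance with a known certificate.
p, q = random_prime(100, 1000), random_prime(100, 1000)
n = p * q  # hand `n` to the algorithm under test; (p, q) is the answer
```

Checking a claimed solution is trivial (one multiplication), which is exactly what makes this family convenient for testing.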
I believe there are problems where it's difficult even to find a hard instance, yet also difficult to prove that no hard instances exist. SAT might be one, if I remember correctly: solvers actually do really well in practice.
It looks like the paper was previously published in a relatively low-impact AI conference last year. It seems like it should be in FOCS, STOC, or a prestigious math journal to have significant credibility.
(To the ones who did not waste their best years doing theoretical computer science, this was in jest - the paper most definitely contains mistakes)
I am also an amateur working on P=NP. Last week, I think I also proved that P=NP, but with a different method, and was about to seek publication.
My result seems very similar to his, yet very different. I can prove that the class of SAT instances forming the intersection of 2-SAT and XOR-SAT is NP-complete by reduction from 3-SAT. Then I follow the approach in Melvin Krom's 1967 paper on 2-SAT and prove that a certain polynomial-sized logic (corresponding to the intersection) is refutation complete. So you can essentially generate all formulas in that logic, and if you don't find a contradiction, the instance is satisfiable.
I have also done some preliminary testing of my method, and was able to factor small integers with it. However, there was a bug.
So, to sum up, I am not surprised that P=NP with a constructive and efficient algorithm. Take it for what you want. The future is gonna be interesting (crypto DOOMSDAY).
How does this help to support your belief that P=NP has been solved by someone else? Surely it wouldn't surprise you if it turns out they were as wrong as you were before?
PS: Also, reducing 2SAT to 3SAT doesn't help prove that P=NP. The opposite reduction would, if you could do it in polynomial time. But maybe I misunderstood something about what you attempted.
So, I have some evidence, both experimental and theoretical, that there is an efficient polynomial algorithm out there (and possibly many different methods).
Unfortunately, it's not 100% verified, because despite what many smartypants are saying here, it's incredibly difficult to even have a conversation about the possibility of a relatively uncomplicated proof that P=NP. (I think the Millennium prize is part of the problem, but that's another discussion.)
And to clarify, I am reducing 3-SAT to 2XSAT, not 2-SAT. 2XSAT generalizes 2-SAT to arbitrary linear equations rather than literals (we can think of a literal as a linear equation on 1 variable).
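If I follow the definition, a 2XSAT clause is a disjunction of two GF(2) linear equations, with an ordinary literal as the one-variable special case. A sketch of what evaluating such a clause might look like (my reading of the comment, not the commenter's actual formulation):

```python
# An equation is (mask, rhs): the XOR of the variables whose bits are set
# in `mask` must equal rhs. A 2-SAT literal x (or NOT x) is the special
# case of a one-variable equation x = 1 (or x = 0).

def eq_holds(mask, rhs, x):
    """Does the GF(2) equation hold under assignment bitmask x?"""
    return bin(mask & x).count("1") % 2 == rhs

def clause_holds(eq1, eq2, x):
    """A 2XSAT clause: disjunction of two linear equations."""
    return eq_holds(*eq1, x) or eq_holds(*eq2, x)

# (x0 XOR x1 = 0) OR (x2 = 1), under x0 = x1 = 1, x2 = 0:
# the first equation holds, so the clause is satisfied.
print(clause_holds((0b011, 0), (0b100, 1), 0b011))  # True
```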
I will happily send you (or anybody) the draft I have, so that you can critique it.
I played with propagation a lot in the past, but I don't think local propagation (which resolves conditions on a bounded number of variables at a time, until everything stabilizes) can ever work, because it would contradict Razborov's result on monotone circuits. I don't remember the exact argument, but I think it would imply a polynomial monotone circuit capable of resolving the instance. XORSAT is not really amenable to this either: to solve an arbitrary XORSAT instance you have to add together an arbitrary number of linear equations, so the number of variables per equation can balloon.
That's why more recently I focused on understanding what exactly is the role of XORSAT, and by serendipity, I stumbled upon the above-mentioned reduction (which I think is a really exciting result). XORSAT is great for testing different algorithms, because despite the fact we have a nice polynomial algorithm (Gaussian elimination), one can easily create arbitrary (and loopy) instances where propagation is difficult.
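For reference, the polynomial algorithm in question: an XORSAT instance is a linear system over GF(2), so Gaussian elimination decides it. A compact sketch (my own, assuming constraints are given as (bitmask, rhs) pairs):

```python
def solve_xorsat(rows):
    """Each constraint is (mask, rhs): the XOR of the variables whose bits
    are set in `mask` must equal rhs. Returns a satisfying assignment as a
    bitmask (free variables set to 0), or None if inconsistent."""
    rows = [list(r) for r in rows]
    pivots = []  # (pivot_bit, row_index)
    for i, row in enumerate(rows):
        for bit, j in pivots:  # forward-eliminate existing pivots
            if row[0] >> bit & 1:
                row[0] ^= rows[j][0]
                row[1] ^= rows[j][1]
        if row[0] == 0:
            if row[1]:
                return None  # derived 0 = 1: unsatisfiable
            continue  # redundant row
        pivots.append((row[0].bit_length() - 1, i))
    x = 0
    for bit, j in sorted(pivots):  # back-substitute, low pivot bits first
        mask, rhs = rows[j]
        val = rhs ^ (bin(mask & x).count("1") & 1)
        x |= val << bit
    return x

# x0 XOR x1 = 1, x1 = 1  ->  x1 = 1, x0 = 0
print(solve_xorsat([(0b11, 1), (0b10, 1)]))  # 2
```

This is what makes XORSAT a nice test bed: the global algorithm is easy and exact, while purely local message passing struggles on loopy instances.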
The way I see it, to avoid the problems with the bound, you need to preprocess the linear equations "inherent" in the instance by adding extra variables. The exact way to add variables depends on the instance, but it can be done by solving the linear equations. Propagation algorithms that go into the process blindly, without understanding the underlying linear structure, will mysteriously fail.
However, my current approach through logic, basically building up theorems about the instance until it is decided, can be considered a generalization of the propagation approaches. The messages are true statements about the variables, propagated by the deduction rules of the logic. But this is done only after we have discerned the linear structure of the problem, which is addressed globally, as described above.
The reality is: yes, it's an arXiv paper, and I'm not a computational complexity expert, so I cannot gauge it for myself.
Talk about burying the lede.
If you get a password in polynomial time you are golden.
In any case, that someone says they proved P=NP is not such a big red flag as people seem to think.
Still, there's almost certainly a mistake in this paper.
Now map a big, critical, and difficult problem to the 2-Maxsat problem, solve it in polynomial time, get rich, and come back to argue about whether it is really a proof of P = NP.
Is this serious?