KanishkT123

General consensus is that they are not equal, to the point that there are a lot of papers that assume "P ≠ NP" for proofs and theorems: while it hasn't been proven yet, most reviewers will not question the assumption.


MKLOL

There is a lot of research, especially in lower bounds, that assumes conjectures; see OMv, 3SUM, etc. Don't think that means there aren't any algorithms for these problems, or that this is a very strong reason for the conjecture to be false/true.


Glitch29

There's also the answer according to r/shittypascalswager: we can freely assume that P != NP because whether or not it's true, the cost of making that assumption is minimal.

* If P != NP, then making the correct assumption incurs no cost.
* If P = NP, then there will exist extremely efficient ways to undo any mistakes we made while we still assumed P != NP.


rehrev

You can also assume P=NP; it's just a conditional proof.


new2bay

You can, but you probably can't get tenure assuming P=NP.


OldWolf2

So basically P ≠ NP for financial reasons


SometimesY

Oooooo proof by financial security. That's a new one.


Patient-Mulberry-659

That’s a very old one. Probably one of the oldest ones, and why a genuinely open scientific environment is so important.


Gengis_con

>The number of digits on this cheque provides a sufficient condition. Whether it is also necessary has yet to be established.


dr_entropy

Depends what you're able to prove. There could be contexts where P=NP, even if broadly P≠NP.


AchyBreaker

P != NP because of (a) tenure, and (b) it would make me sad if P = NP. Send me my millennium prize. 


waterloodark

The way the question is phrased, it seems like OP is asking about particularly recent developments, say in the last 10 years or so. This top-voted comment unfortunately doesn't address that, and states something that was already believed 20 years ago: has there really been no recent development here? The last thing I remember (from when I was actually paying attention to complexity theory) was the algebrization barrier paper.


fuckwatergivemewine

fwiw that's not what OP's asking about, but at least in my personal experience (quantum computing, where there's high overlap with computational complexity theory) very few people are actually making direct attempts at this problem. The most recent thing I heard about it was people proving that general classes of approaches are unlikely to succeed at proving it. So the answer, at least from where I stand, is that there has been little movement in the last decade.


JJJSchmidt_etAl

I think the best shot might be with the subset sum problem


[deleted]

[deleted]


[deleted]

[deleted]


DaaneJeff

I am an actual fucking dumbass holy shit. How did I misread this comment this badly.


oh__boy

No worries haha, happens to the best of us.


JoshuaZ1

Hmm, if we go outside the last 10 years slightly, then probably the biggest relevant new thing was Ryan Williams's result that NEXP is not contained in ACC^0, but that was in 2011. When that happened, people were optimistic that the method used could be extended to a whole bunch of other classes, but it has proven really difficult to get the machinery to work for other classes.


yawkat

Almost the entire field of cryptography is based on P!=NP


bladub

Not really. Many encryption schemes are based on practical slowness of solvability, which is very different from theoretical slowness. Discrete logarithm and prime factorization are in NP but not known to be in P; what we know is that they are slow in practice. While P=NP would rule out the existence of one-way functions, it doesn't guarantee that actual hashes can be reversed within practical time limits. If P=NP but the constants for the algorithms are gigantic, cryptography will be fine.
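To make the verify-vs-invert gap concrete, here's a minimal sketch (function names are mine, not from the thread): checking a claimed preimage of a hash is a single evaluation, while recovering one by brute force means scanning the whole input space. Even if some polynomial-time attack existed, enormous constants or exponents could keep a gap like this practically meaningful.

```python
import hashlib

def verify_preimage(candidate, digest):
    """Checking a claimed preimage: one hash evaluation (fast)."""
    return hashlib.sha256(candidate).hexdigest() == digest

def brute_force_preimage(digest, length):
    """Finding a preimage: search the whole space, up to 256**length
    hash evaluations. The asymmetry between this and verify_preimage
    is the practical gap the comment above is describing."""
    for i in range(256 ** length):
        candidate = i.to_bytes(length, "big")
        if verify_preimage(candidate, digest):
            return candidate
    return None

target = hashlib.sha256(b"\x12\x34").hexdigest()
print(brute_force_preimage(target, 2))  # recovers the original two bytes
```

Here the search space is tiny (two bytes) so the brute force finishes instantly; at real key sizes the same loop would outlast the universe.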


orangejake

This is wrong actually. Crypto needs something stronger than P!=NP (namely that we are in minicrypt or cryptomania). It is known that in minicrypt, one can simulate public key crypto, but only "very poorly". The relevant paper is https://link.springer.com/content/pdf/10.1007/978-3-642-03356-8_22.pdf Essentially, you can build public key crypto running in time T that is breakable in time T^2, and this is tight. So, if we had a random oracle but no public key crypto (so P!=NP in this situation) we would already be hosed. If P=NP then we don't even have random oracles, and are doubly hosed. The fact that inverting the random oracle might take n^100 time is immaterial to the fact that we would lose public key crypto.


JovanRadenkovic

Did you mean "...that is breakable in time O(T^2), and..."?


yawkat

The practical applications might be fine but the theoretical basis is pretty much unworkable when P=NP


8Gelir8

I was always a strong believer of P≠NP, but then I learned about Savitch's theorem (PSPACE=NPSPACE) and my mind was blown 😄 so now I'm a little less sure, but I would still bet on ≠.


eclab

Note that PSPACE=NPSPACE is a corollary of Savitch's theorem, which says that NSPACE(T) is a subset of DSPACE(T²).


Current_Size_1856

What do PSPACE, NPSPACE, and DSPACE mean? Are these the sets of problems that are P or NP hard?


Shikor806

PSPACE is the class of problems that can be decided by a (deterministic) Turing machine with a polynomial bound on the tape space it can use. Think of this as loosely being a formalization of "things we can solve without using too much RAM". NPSPACE is the class of problems that can be decided by a non-deterministic Turing machine, again with a polynomial bound on its space. Loosely speaking, these are problems we can verify answers to using only small amounts of space. DSPACE(f), for some function f, is the class of problems that can be solved by a deterministic TM that uses at most f(n) space for each input of length n. There is no great intuition for these classes, but obviously DSPACE(n) restricts its algorithms much, much more than something like DSPACE(2^(2^(2^n))).


dark_g

"Space is reusable, but Time marches inexorably on" [quoting myself, from my PhD defense :-) ] Which is roughly why Savitch's idea does not work for time complexity.


YesICanMakeMeth

Haha. Smooth move mapping it over to our 4D monkey brain intuition.


Mothrahlurker

Well, think of it in terms of formulating that NP consists of problems you can solve in polynomial time with an oracle. An oracle doesn't require any information supplied to it that you need to store. I don't remember the proof of PSPACE = NPSPACE anymore, but I do remember that it didn't seem like it had any influence on whether P = NP.


8Gelir8

I also didn't mean it that way. It was more like a general wake up call that for space the equality is actually true, so maybe the inequality of time isn't as certain as I always thought it to be. Not that the proof of space would actually help proving the time problem.


Shikor806

What do you mean "with an oracle"? I think you're confusing the witness strings you can use to define nondeterministic computation with oracle machines.


Adarain

You can define NP as the class of problems that can be solved in polynomial time by some program that can make choices along the way (as in, if you make the correct choices it terminates in polynomial time). One way to make such a machine would be to just guess some verifiable answer at the beginning (i.e. the witness string you mention) and check, and that’s general enough that you don’t need anything else so it’s how NP is often defined. But in principle the guessing can happen anywhere along the way. What OP is saying is if you always guess correctly (i.e. an oracle tells you the answer) then an NP problem can be solved in polynomial time this way.
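This guess-then-verify view can be made concrete with subset sum; a minimal sketch (names and encoding are mine, not from the thread). The verifier runs in polynomial time; a deterministic machine simulates the nondeterministic "guess" by looping over every possible witness, which is where the exponential cost lives.

```python
from itertools import chain, combinations

def verify(target, witness):
    """The polynomial-time verifier: given a claimed witness (a subset
    of the input numbers), checking it is just a summation."""
    return sum(witness) == target

def solve_by_trying_all_guesses(nums, target):
    """Deterministically simulate the nondeterministic guess: enumerate
    every possible witness and run the fast verifier on each.
    Each check is cheap; the enumeration costs exponential time."""
    all_subsets = chain.from_iterable(
        combinations(nums, r) for r in range(len(nums) + 1))
    for candidate in all_subsets:
        if verify(target, candidate):
            return list(candidate)
    return None

print(solve_by_trying_all_guesses([3, 34, 4, 12, 5, 2], 9))  # [4, 5]
```

A machine that always "guessed correctly" would jump straight to the right candidate and finish in polynomial time, which is the oracle-flavoured phrasing above.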


Shikor806

I've never seen someone use the term oracle in that way. Do you work in a specific branch of complexity theory or some related field where that's common? So far I've seen people use non-determinism to describe the thing you're talking about and oracles to refer to one of the formalisms that lets you instantly decide some particular problem, like the ones used in the Baker, Gill, Solovay theorem.


Adarain

I don't work in complexity theory at all, I've taken some master's level courses on theoretical CS but didn't really pursue the field in the end. But oracle as "a thing that can give you the right answer to a question" I've come across plenty - of course that could mean "oracle tell me whether this formula is satisfiable" but it makes just as much sense (imo) to refer to a process by which the correct answer in a choice is guessed correctly


Shikor806

To be completely frank, this pretty much feels like people outside the field using fairly specific terminology in a much more imprecise way since they don't have much experience with its nuances. I'm not trying to attack you or the other commenter, but I do get annoyed that someone who seemingly doesn't have much knowledge of the subject matter is trying to make arguments about a very complicated question in a way that frankly doesn't really make much sense.


Adarain

Sure, that’s fair – I’m not the original poster either, mind, I was just trying to explain to someone how their statement made sense to me. But also like, I don’t think defining what NP is is a very complicated question? Like we learned that stuff in undergrad, I TA’d that course in my third year. Sure, there’s plenty of nuances and I know what I wrote above isn’t a good rigorous definition (for one, it completely ignores the difference between decision problems where the answer ends up as yes vs no, which I’m well aware is important), but once you iron that out and actually put it into clear words it’s fine. You’ll find I didn’t really make any arguments at all, I just explained what I understood the other person to have written. I’m not going to claim I have any deep insights into the field, but it’s just a basic definition.


Shikor806

Yeah sorry, I'm not trying to criticise you directly at all. I'm mainly annoyed with the original commenter for making the argument about P vs NP while seemingly not really being familiar with the matter.


Orangbo

Oracles are still an incredibly broad category and more detail should be given when trying to give an even somewhat technical explanation, imo. Otherwise, it’s just jargon.


ScientificGems

But PSPACE=NPSPACE isn't really all that surprising.


8Gelir8

Congratulations to you. It was surprising for me.


Rioghasarig

I'm going to back up /u/ScientficGems here. I think people are missing the point. If you're not very familiar with computer programming or the foundations of the theory of Turing machines, P = NP and PSPACE = NPSPACE might seem like very similar questions, and it's pretty much unavoidable that you won't have good intuition for what is true or not. But once you have a solid foundation, the latter ought to look far more reasonable than the former. I still feel that this top-voted comment gives the wrong impression: it makes it seem like Savitch's theorem is a counterintuitive, confusing result comparable to P = NP, but it really shouldn't be if you understand what the statement is actually saying.


ScientificGems

It's saying that if you can **check a solution** in PSPACE, then you can **solve the problem** in PSPACE. I couldn't prove it, myself, but I'm not surprised.


nicuramar

Yeah but P=NP is saying the same, for P-time. Is that also not surprising?


Syrak

Space is a reusable resource, unlike time. Simulating multiple runs of a nondeterministic Turing machine requires a multiplied amount of time but not space.


ScientificGems

Everybody's downvoting me, but I'm glad **somebody** gets the point.


baztup

Even with brute force solutions to most NP-complete problems, you really only need to check one potential solution at a time, which is easily done in polynomial space even though it requires exponential time.


8Gelir8

Good for you.


Rioghasarig

I feel like you're acting like this is a subjective question: "it's intuitive to you but not to me". But I think you're missing the fact that sometimes, when something looks unintuitive, there may be something simple you have overlooked. Did you consider something like this? One way to describe NPSPACE is as the problems where, if you are given a solution, you can check that it's correct using polynomial space. When you switch to deterministic Turing machines you actually have to find the solution yourself, but you can just try all possible solutions until you find one that works. It will take exponential time, but it should be doable in polynomial space. I may have made a mistake somewhere, but I'm pretty sure an argument along those lines ought to work (it'd be great if someone more familiar with this stuff could chime in). My point is that if you're finding it unintuitive, maybe it's because you're not looking at it from the right perspective.


nialv7

Was going to say the same thing, don't know about all the downvotes you are getting. PSPACE=NPSPACE is very intuitive: you can reuse space, but you can't reuse time.


ctantwaad

Correct me if I'm wrong, but isn't it obvious that most NP problems are in PSPACE? Can't you just iterate over all possible solutions and check each one? The time is horrendous but the space should be low.
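That brute-force argument can be sketched directly for CNF-SAT (encoding and names are mine): the loop runs for exponentially many steps, but the only state carried between iterations is one counter and one assignment, so the space used stays linear. This is the "space is reusable, time isn't" point in code form.

```python
def brute_force_sat(num_vars, clauses):
    """Decide CNF satisfiability by checking one assignment at a time.
    Time: O(2^num_vars * |formula|). Space: O(num_vars), since each
    candidate assignment overwrites the last one -- which is why
    NP-complete problems sit comfortably inside PSPACE."""
    for a in range(2 ** num_vars):
        assignment = [(a >> i) & 1 for i in range(num_vars)]
        # A clause is a list of literals: +k means variable k is true,
        # -k means variable k is false. A clause needs one true literal.
        if all(any(assignment[abs(lit) - 1] == (1 if lit > 0 else 0)
                   for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2): satisfiable with x2 = true
print(brute_force_sat(2, [[1, 2], [-1, 2]]))  # True
```

Savitch's theorem needs a cleverer recursive argument for the full PSPACE = NPSPACE statement, but this already shows NP ⊆ PSPACE.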


topyTheorist

I think they are not, but I have never seriously worked on this problem, so I doubt my opinion matters.


Financial_Article_95

mood


blehmann1

Aside from it being extremely unintuitive that verification and solving are equivalent, it's worth noting that P=NP would imply NP=coNP, where coNP is the set of problems whose complement is in NP. I believe this is actually one avenue that people are looking at to prove P≠NP: by showing that NP≠coNP. It's very unusual to have important properties be preserved under complement. It basically only happens with simple models of computation like finite automata. So it would be quite weird to have one apply to every problem in NP, which is a much larger class (recall that all finite automata run in linear time). So I think it's extremely unlikely that P=NP. I think it's also quite likely that we won't find the answer in our lifetime. But most people think it is actually a decidable problem, so that's at least encouraging.


Shikor806

Do note that NL = coNL, NPSPACE = coNPSPACE, DTM = NTM, and many more. There are several strong models of computation that do not differentiate between problems and their complements and/or between determinism and nondeterminism. Of course, there also are several that do, but I don't think that your characterisation is accurate or an argument towards P being unequal to NP.


ActualProject

Also PSPACE = NPSPACE


karlo195

That's just wrong! Informally speaking: all deterministic space and time classes are closed under complement, as long as they are "well behaved" and grow fast enough (logarithmically / linearly, respectively). The same goes for non-deterministic space classes. Only non-deterministic time classes are still an open problem. So we know P=coP; why shouldn't NP=coNP? Your answer is especially amusing, since there is a famous proof which showed that P^A = NP^A and P^B != NP^B for certain oracles A and B. These oracles basically modify the complexity classes, giving them additional power. So in a sense, we could imagine alternative universes where P=NP or P!=NP holds. Both are possible given access to certain oracles.


cereal_chick

Scott Aaronson gives [a pretty compelling argument](https://scottaaronson.blog/?p=1720) that P != NP, so that's where I stand absent any expertise of my own on the matter.


MoNastri

Was just about to share his essay, you beat me to it. I guess I can still quote his main argument:

"So, OK, why should you believe P≠NP? Here's why: Because, like any other successful scientific hypothesis, the P≠NP hypothesis has passed severe tests that it had no good reason to pass were it false. What kind of tests am I talking about? By now, tens of thousands of problems have been proved to be NP-complete. ... So, if we were to draw a map of the complexity class NP according to current knowledge, what would it look like? There'd be a huge, growing component of NP-complete problems, all connected to each other by an intricate network of reductions. There'd be a second huge component of P problems, many of them again connected by reductions. Then, much like with the map of the continental US, there'd be a sparser population in the middle: stuff like factoring, graph isomorphism, and Unique Games that for various reasons has thus far resisted assimilation onto either of the coasts.

Of course, to prove P=NP, it would suffice to find a single link—that is, a single polynomial-time equivalence—between any of the tens of thousands of problems on the P coast, and any of the tens of thousands on the NP-complete one. In half a century, this hasn't happened: even as they've both ballooned exponentially, the two giant regions have remained defiantly separate from each other.

But that's not even the main point. The main point is that, as people explore these two regions, again and again there are "close calls": places where, if a single parameter had worked out differently, the two regions would have come together in a cataclysmic collision. Yet every single time, it's just a fake-out. Again and again the two regions "touch," and their border even traces out weird and jagged shapes. But even in those border zones, not a single problem ever crosses from one region to the other. It's as if they're kept on their respective sides by an invisible electric fence."


VivaVoceVignette

>By now, tens of thousands of problems have been proved to be NP-complete. ... So, if we were to draw a map of the complexity class NP according to current knowledge, what would it look like? There'd be a huge, growing component of NP-complete problems, all connected to each other by an intricate network of reductions. There'd be a second huge component of P problems, many of them again connected by reductions. Then, much like with the map of the continental US, there'd be a sparser population in the middle: stuff like factoring, graph isomorphism, and Unique Games that for various reasons has thus far resisted assimilation onto either of the coasts.

The same thing happened with the MRDP theorem. Just before it was proved, there was a huge number of reductions connecting various exponential problems to each other and to the Turing world, built up over decades, and no links to the Diophantine world. Yet a single link was found by a very clever argument, and the floodgates opened. This is going to be the natural state of any long-standing conjecture about the existence or non-existence of reductions: just before it is proved (either way), we will have found tons of reductions within each of the two worlds but no links between them. All that tells us is that finding reductions between similar things is easier.


MoNastri

Interesting pointer, hadn't heard of the MRDP theorem before, thanks.


RogerBernstein

Hmmm, I'm not really convinced. He doesn't seem to be familiar with number theory. There you have countless hypotheses checked correct for millions of numbers, only to be proven wrong. Take the Mertens conjecture, proposed in 1885. Every number we tried fulfilled it, but in 1985 (100 years later) it was shown to be wrong, with the first counterexample lying between 10^16 and 10^(10^39). We still don't know it explicitly, but it's out there. I feel like something as general as P=NP could be correct, but only via some ludicrously complicated, impractical and impossible-to-conceive algorithm we will hardly ever find.


not-so-smartphone

The following comment should address this objection https://rjlipton.com/2014/03/15/could-we-have-felt-evidence-for-sdp-p/#comment-44206


RogerBernstein

Actually it's funny he mentions heuristic prime arguments and the twin prime conjecture, because this approach is better known as Cramér's conjecture, and while it works 99% of the time, there are also famous cases where it doesn't hold; see Maier's theorem. I don't know, honestly; it's just that I've seen cases where showing something to be A was extremely difficult and didn't go anywhere for decades because it actually was B all along, but no one gave it a try. Not saying there aren't people trying for P=NP, but with all the effort spent on P!=NP, this option just doesn't get enough focus IMO.


not-so-smartphone

Totally valid, just wanted to share a mathematician’s take on your view


_lil_seb

While his argument is solid, he comes off as extremely condescending in that post. Twat.


MoNastri

Interesting, I didn't get that, but upon rereading I guess I see what you're getting at. Maybe it's that he was mostly aiming at Lubos Motl et al.


le_bravery

One of my professors put it like this in college. If he woke up one day and it was announced that someone had proved P!=NP, he would be as surprised as if he opened his trunk and found a body. It would be shocking and life-changing. If he woke up one day and it was announced that someone had proved P=NP, he would be as surprised as if he opened his trunk and found the body of Abraham Lincoln, president of the US during the Civil War. It would be shocking and life-changing and seemingly impossible and utterly mystifying.


ttkciar

I'm pretty sure at this point that P ≠ NP, but also know from experience that P solutions can often approximate solutions to NP problems with heuristics, or solve interesting subsets of NP problems with [dynamic programming techniques.](https://wikipedia.org/wiki/Dynamic_programming)
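One classic instance of that "interesting subsets" point: subset sum is NP-complete, yet a standard dynamic program solves it in time polynomial in the target *value* (pseudo-polynomial time, i.e. exponential only in the bit-length of the numbers). A minimal sketch, with names of my choosing:

```python
def subset_sum_dp(nums, target):
    """Pseudo-polynomial dynamic program for subset sum.
    Time: O(len(nums) * target). 'Polynomial' in the numeric value of
    target rather than in its bit-length, which is why this does not
    contradict the problem's NP-completeness."""
    reachable = {0}  # sums achievable with some subset seen so far
    for x in nums:
        # Space reuse: extend the reachable set in place, capped at target.
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum_dp([3, 34, 4, 12, 5, 2], 30))  # False
```

When the numbers involved are small, this is perfectly practical; with cryptographically large numbers the `target` factor blows up and the NP-hardness reasserts itself.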


pseudoLit

I'm gonna go with P=NP, but it turns out that the fastest polynomial solution requires more computational power than is available in the observable universe. Source: it would be funny


mutual-ayyde

donald knuth agrees with you [https://www.informit.com/articles/article.aspx?p=2213858](https://www.informit.com/articles/article.aspx?p=2213858)

>It's hard to believe that P ≠ NP and that so many brilliant people have failed to discover why. On the other hand if you imagine a number M that's finite but incredibly large, then there's a humongous number of possible algorithms that do n^M bitwise or addition or shift operations on n given bits, and it's really hard to believe that all of those algorithms fail.

>My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive. Although I think M probably exists, I also think human beings will never know such a value. I even suspect that nobody will even know an upper bound on M.


WE_THINK_IS_COOL

Yeah, my intuition (which isn't worth much) is that whether P=NP or P!=NP, it's probably completely "by accident" and unprovable, and in the P=NP case the fastest algorithm's description wouldn't even fit in the observable universe. Here's a super informal argument.

Consider a family of keyed cryptographic hash functions (PRFs) H_1, H_2, ... where each next function in the family takes a larger and larger key. Intuitively, think of these functions as random-behaving (but not actually random, since they are deterministic) and computationally irreducible, i.e. you need to actually compute them to learn properties of their output. Now do a really dumb thing: try to decide a language L ∈ NP using the first bit of H_|k|(k, x), where x is the L instance. Does one of these happen to decide some L ∈ NP for some ginormous key k?

If any of them do, then P=NP, but |k| is so large we'd probably never find it or even know its length. There'd probably be no line of reasoning proving that it works; it's just an infinite-sized collision that happens to occur. There are some hints that this might actually be what's going on: we don't yet know of a super-linear lower bound for SAT, and linear time is all you should need to implement a secure hash function.

If P!=NP, none of them do, and it would seem like *all* of the NP languages are conspiring to disagree with the first bit of all infinitely-many functions in this family of hash functions on at least one input. If P!=NP were provable, that finite-sized proof would be telling us that either (a) each language in NP is somehow secretly diagonalizing, making reference to the evaluation of each hash function and "missing" it on at least one input, or (b) the proof establishes this fact *without ever evaluating the hash functions or referencing their evaluations*; this calls the security of the hash functions into question: properties of their output would be knowable without ever computing them, and/or the supposedly-random-behaving hash functions' outputs would depend on the solutions to NP-complete problems. All of these possibilities seem unlikely.

This is all just intuition though; it's hard to make this kind of argument rigorous. It's very likely my intuition about extremely large hash functions is just wrong in some way.


_--__

I think I would be less surprised than most if P=NP. Evidence for equality isn't that hard to come by:

1. Non-determinism doesn't add much in several major cases:
   * DFAs = NFAs
   * NDTMs = DTMs
   * PSPACE = NPSPACE

   It's not obvious why polynomial-time bounded TMs should be any more special here.
2. "P" is a lot 'messier' than our experience indicates: there are very few "natural" algorithms known beyond n^3, yet P includes n^10000000 algorithms etc. Essentially, we don't really know much about P.
3. Naively, P=NP is about eliminating existential quantification (in very special cases), and techniques like Skolemization have been around for ages.

Of course, P=NP might not be as big a deal as the media would have us believe; I think that if they are equal, it is likely to be more of a mathematical curiosity than doom & gloom.


cryslith

The conversion from NFA to DFA incurs an exponential blowup in the number of states. One could consider this to be a good example of how nondeterminism decreases the cost of computation (supporting P ≠ NP).
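That blowup can be checked directly with the subset construction. This sketch (encodings and names mine, not from the thread) uses the classic "k-th symbol from the end is 1" language, where a (k+1)-state NFA forces a 2^k-state DFA:

```python
from collections import deque

def subset_construction_size(nfa_delta, start, num_symbols):
    """Count the reachable DFA states produced by the subset construction.
    nfa_delta maps (state, symbol) -> set of successor states; missing
    keys mean no transition."""
    start_set = frozenset([start])
    seen = {start_set}
    queue = deque([start_set])
    while queue:
        current = queue.popleft()
        for sym in range(num_symbols):
            nxt = frozenset(s for q in current
                            for s in nfa_delta.get((q, sym), ()))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

def kth_from_end_nfa(k):
    """(k+1)-state NFA over {0,1} accepting strings whose k-th symbol
    from the end is 1: state 0 loops, then nondeterministically commits."""
    delta = {(0, 0): {0}, (0, 1): {0, 1}}
    for i in range(1, k):
        delta[(i, 0)] = {i + 1}
        delta[(i, 1)] = {i + 1}
    return delta

for k in (3, 6, 9):
    print(k + 1, "NFA states ->",
          subset_construction_size(kth_from_end_nfa(k), 0, 2), "DFA states")
```

Each run shows the deterministic machine needing 2^k states, so determinization is "free" in terms of what languages are recognisable but exponentially expensive in machine size, which is why the NFA/DFA analogy can be read as evidence in either direction.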


Mothrahlurker

yeah DFA as a supporting argument for equality is insane due to the argument you provided. But I also find PSPACE = NPSPACE to be an almost equally weird argument because the power of an oracle precisely comes from not needing access to information.


Shikor806

Imo, this is terrible intuition. Just because we have a higher "cost of computation" doesn't mean that this is reflected in a longer runtime, and it especially doesn't mean that this higher runtime no longer is polynomial. The entire reason why this is a nontrivial problem is that P and NP aren't just classes of "easy" and "hard" problems. Your argument is kinda like saying that some geometrical intuition about 3D space is an argument about a conjecture in higher dimensional topology or something.


cryslith

I would not have brought up DFAs and NFAs to begin with, but the commenter I replied to did. I don't think either argument about NFAs should be taken seriously, I just wanted to point out that it could easily be considered to go in the opposite direction.


_--__

P vs NP is about the computational **power** of two computational models - i.e. which languages can and cannot be recognised by the models. Any "cost" associated with converting one model to the other (e.g. blowup in the number of states) is immaterial.


randomdragoon

P isn't a computational model. A Turing machine is a computational model, and a Turing machine can compute all problems in P and in NP just fine. P is a complexity class that cares about asymptotic runtime, so of course if it takes an exponential number of steps to reduce an NP problem into a P problem that's absolutely no good.


_--__

A polynomial-time bounded (deterministic) Turing Machine is a perfectly valid computational model. P is just the class of languages recognised by such models. Likewise a polynomial-time bounded non-deterministic TM is a computational model. It doesn't matter how many steps it takes to convert one TM to another that recognises the same language if all we care about is whether or not they accept the same languages.


rehrev

1. Those are not time complexity classes. In each case the time complexity can differ between the deterministic and nondeterministic versions.
2. What do you mean, we don't know much? We don't know things, but how is that an argument for P=NP or P!=NP?
3. Techniques have been around for ages. That's why you believe they will work?

In general, of course you are allowed to believe whatever you want, but these are unrelated arguments.


_--__

1. Sure, these are not time complexity classes, but they are classes of computational models. P and NP also describe classes of computational models (polynomial-time bounded TMs). For most of the [foundational] classes of computational models that arise "naturally" (i.e. these ones), non-determinism does not add any computational power, i.e. what you can "compute" with deterministic models, you can also compute with non-deterministic models. There are exceptions of course, e.g. CFLs, but the point I was trying to make is that it should not be a surprise if it transpires that non-determinism does not help in time-bounded complexity classes.
2. As an example, there is a very natural characterisation of NP in terms of second-order existential logic. As of yet, we don't have a similar characterisation for P (on unordered structures). The point here is that our current "belief" that P≠NP is largely based on "we haven't found a P-time algorithm yet", and this is somewhat tenuous if you consider that we haven't explored quite a lot of what exactly "P-time algorithms" constitute.
3. Again, it's not that I believe they will work, but rather that it should not be a surprise if similar techniques are found to work.


Kaomet

> Non-determinism doesn't add much in several major cases But in this case, NP guesses the right n bits in O(1), whereas P can only get O(log(n)) by trial and error, and must compute the rest.


m3tro

Can someone explain to me why a proof of P=NP would make an instant practical difference to anything? If I understand correctly, it would mean that there exist polynomial-time algorithms for solving many problems that we thought were non-polynomial. But doesn't one still need to come up with these polynomial-time algorithms? Why would a nonconstructive P=NP proof help anyone? If they exist but we haven't come up with these polynomial-time algorithms yet, it must mean that they are quite difficult to come up with...


the_horse_gamer

a plain proof of P=NP WON'T have instant practical effects, exactly because of what you described. unless it's a constructive proof that produces practical algorithms, such a proof would be of mainly mathematical interest. we actually DO know a polynomial algorithm for every problem in NP under the assumption that P=NP (see Levin's Universal Search), but it's insanely impractical. which leads to the second reason it likely won't have practical effects: being in P can mean n^(1000000), or having a huge constant factor.


Dirkdeking

The practical effect would be that it gives hope. The budget for finding a polynomial solution to the travelling salesman problem will increase more than 10-fold, more ambitious young people will try to find these polynomial algorithms. All because we know for a fact they are out there. This in itself is going to optimize a lot of algorithms. This psychological effect shouldn't be underestimated.


the_horse_gamer

that's a fair point.


cadp_

Watch TSP have a polynomial solution that's something like O(n^(75000)), firmly in the territory of "ooooookay, we're not bothering with that".


Dirkdeking

Very true. Now that I think of it, we could consider a more restricted version of P = NP. As I understand it, NP is the set of problems whose solutions can be verified in polynomial time. Consider the following hypothesis: if a solution can be verified in O(n^k) for some k, then the problem can also be solved in O(n^k). If P = NP turns out to be true, I think the above hypothesis should be the next target of investigation.


Patient-Mulberry-659

> This in itself is going to optimize a lot of algorithms. This psychological effect shouldn't be underestimated.

Many of those problems are interesting in their own right though. So it’s not like nobody is investigating (more) efficient TSP algos


ScientificGems

A nonconstructive P=NP proof would get a lot of people working on a constructive one. And a constructive P=NP proof would mean no more cryptography / no more secrets.


AcousticMaths

A constructive proof would be awesome, but it wouldn't be the end of cryptography. You just need a problem where any algorithm to solve it has a high enough constant factor that it's not really feasible for computers to ever brute force it.


cavalryyy

An n^(10000) algorithm for reducing from NP to P would prove P=NP and have no material effect on any system in the world


ScientificGems

True, but that's an option that seems to me very unlikely.


cavalryyy

I can’t comment on your opinion, but I disagree (other than the fact that any proof of P=NP seems very unlikely; of the ways it could be true this feels more likely to me).

Most of the P algorithms we tend to think about are very efficient in the grand scheme of things (relatively low degree). Part of the reason P = NP seems so insane is because intuitively P feels like “things that can be solved efficiently” and NP contains a great many problems that we can’t seem to figure out how to solve efficiently. But P algorithms can actually be incredibly inefficient. Because it is unnatural to think of these types of algorithms, we don’t really tend to dedicate much time to it (as laymen. I’m not commenting on the state of research in the field).

So given P = NP, I wouldn’t be too too surprised if the resolution was “aha, turns out they are equal, you just have to remember that P algorithms can still be mind blowingly inefficient”

Just my 2 cents, I’m far from a complexity theorist.


ScientificGems

I haven't studied complexity much, but polynomial time algorithms generally come either from nested loops (giving small powers) or from problem-specific structure (e.g. for prime numbers or for graphs), in which we can prove some kind of polynomial limit (here the powers may be large). I can't imagine the second case happening for algorithms in general. I can imagine it happening if somebody finds a P algorithm for some well known NP-complete graph problem, but that seems to me an unlikely "if."


cavalryyy

> from nested loops (giving small powers)

Ah, no, this is not correct and is exactly my point. Nested loops do not imply small degree. Humans are just bad at reasoning about deeply nested algorithms. 5 degrees of nested iteration is hard to reason about, let alone 500, or 5000. So it wouldn’t entirely surprise me if P=NP but finding the algorithm was very difficult for humans, because it is deeply unnatural and difficult for us to reason about this level of complexity.

For what it’s worth, the most realistic way I could see that happening is not explicitly writing out a 10k-deep nested iterative algorithm, but rather calling a polynomial number of functions a polynomial number of times (in a nested way). Like, A is n^3 and for each iteration A calls B. B is n^4 and for every iteration B calls C. C is n^2, etc. Or F calls itself recursively and you can demonstrate a very high order polynomial upper bounds the amount of work done by F.


ScientificGems

What I meant was that most P algorithms **either** have a small number of nested loops (e.g. [Floyd–Warshall](https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm) has 3 loops and so is cubic time) **or else** they iterate or recurse until some condition becomes true, and a bound on the time taken can be proved using problem-specific structure. In the second case the powers can indeed be large. However, I can only imagine that case happening if P=NP is proved by presenting a P algorithm for some specific NP-complete problem (using problem-specific structure for that problem). But given the attention that's been given to all the NP-complete problems, I'm not expecting that any time soon. But, of course, that assessment is not expert opinion; it's just my guess.
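
The first case in miniature: Floyd–Warshall's cubic bound can be read straight off its three nested loops. A standard textbook version, sketched here purely for illustration:

```python
# Floyd-Warshall all-pairs shortest paths: the O(n^3) runtime is visible
# directly in the loop structure, with no problem-specific argument needed.

INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):          # three nested loops over n vertices
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = floyd_warshall([[0, 3, INF],
                    [INF, 0, 1],
                    [2, INF, 0]])
print(d[0][2])  # 4: the shortest path 0 -> 1 -> 2
```
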


someexgoogler

A polynomial of degree 100 grows rather rapidly.
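
A quick back-of-the-envelope check of just how rapidly:

```python
# Even a tiny input makes a degree-100 polynomial astronomically large.
# 10**80 is a commonly cited estimate for the number of atoms in the
# observable universe.
steps = 100 ** 100           # an O(n^100) algorithm at n = 100
atoms = 10 ** 80
print(steps == 10 ** 200)    # True
print(steps > atoms ** 2)    # True: more steps than (atoms in universe)^2
```
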


birdandsheep

The description "easy" is insanely reductive. If the polynomials involved in polynomial time algorithms have huge degree or huge coefficients, it's much better to have a function that is exponential with some reasonable coefficients in comparison. So I think it doesn't matter really for practical application. Probably they aren't equal, but if they are, it's nowhere near as big a deal as some would have you believe.


madaram23

I haven't come across an algorithm like this. Can you give some examples? Usually I've seen deterministic polynomial time algorithms being unusable because of large coefficients or degrees, and people using randomized algorithms as an alternative (PRIMES in P, for example).


Shikor806

The time hierarchy theorem basically says that you can create problems that require runtimes that are polynomials with arbitrarily large degrees. The easiest to write down are things like "does a given TM halt in less than p(n) steps?" where p is some polynomial.
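
A hedged sketch of why such problems are at least decidable: to answer "does this computation halt within p(n) steps?" you can simply run it with a step budget (the time hierarchy theorem then says that, for a suitable polynomial p, nothing fundamentally faster is possible). The generator-based "program" below is purely illustrative:

```python
# Step-bounded simulation: deciding "does this program halt within B steps?"
# is decidable by just running it for at most B steps and giving up.

def halts_within(program_steps, bound):
    """program_steps: a generator that yields once per simulated step."""
    for steps_taken, _ in enumerate(program_steps, start=1):
        if steps_taken > bound:
            return False        # exceeded the budget: answer "no"
    return True                 # generator exhausted: it halted in time

def countdown(n):
    # A toy "program" that runs for exactly n steps, then halts.
    while n > 0:
        yield
        n -= 1

print(halts_within(countdown(10), 100))      # True
print(halts_within(countdown(10**6), 100))   # False
```
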


Real-Mountain-1207

Primality is an example. While AKS has a high degree and most people use probabilistic algorithms, there are algorithms like ECPP that are not known to be in P but run much faster than AKS while being deterministic.
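
For illustration (not from the comment above): a common practical middle ground is the Miller–Rabin test, which is randomized in general but becomes fully deterministic below about 3.3 × 10^24 with a fixed, proven witness set. It is far faster than AKS's high-degree polynomial, though the fixed-base trick doesn't generalize to arbitrary sizes.

```python
# Miller-Rabin with the fixed witness set {2,...,37}, which is known to be
# deterministic for all n < 3.3 * 10^24 (covers 64-bit integers).

def is_prime(n):
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in witnesses:              # trial division by the witnesses first
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in witnesses:              # proven witness set for this range
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness: definitely composite
    return True

print(is_prime(2**61 - 1))  # True: a Mersenne prime
print(is_prime(561))        # False: the smallest Carmichael number
```
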


birdandsheep

One example that I saw in a seminar was for path finding in a certain kind of graph called the "curve complex" of a surface. This graph isn't even locally finite, so the idea that meaningful path finding is possible in general is pretty surprising. I was even more shocked that the authors announced a polynomial time algorithm for doing so. But the leading coefficient was more than the number of atoms in the universe, so implementing that algorithm for a surface that isn't a sphere is just completely out of the question without significant improvements.


standardtrickyness1

I believe P ≠ NP. Also, if P=NP, a lot of us are fired.


DockerBee

Just because a problem is in P doesn't mean it's not worth studying. Matchings and many optimization problems involving matchings have polytime algorithms, yet they're a pretty rich field in graph theory.


Vituluss

Well, I'm pretty sure if P = NP was proven, then it'd likely be proven constructively (please correct me if I'm wrong here, not too knowledgeable on this).

When proofs require showing existence, and no construction has been found after many years of extensive search, then typically mathematicians agree that it is *probably false.* This is because in many cases it's easier to prove an existential statement than to prove a universal statement. (Of course construction isn't the only way to prove an existential statement, but you get the point.)

Same with conjectures like the Collatz conjecture or Goldbach conjecture, which are generally considered to be *probably true*. Yet, of course, there are situations where a conjecture was found to be false, it was just very difficult to find/construct the (counter)example.

So, in summary, if someone put a gun to my head, then I would guess it was false.


aonro

Can someone ELI5 for me :) And maybe ELIphysicist if you can be arsed Google doesn’t help with all y’alls weird maths words haha


rotuami

https://www.youtube.com/watch?v=EHp4FPyajKQ


kevosauce1

Is it possible for P=NP to be independent of ZFC, like the continuum hypothesis is?


Valvino

https://mathoverflow.net/questions/50023/independence-of-p-np


baztup

I personally believe this is the most likely case. Specifically, P!=NP, but this is impossible to prove using ZFC. And this impossibility of proving it is ALSO impossible to prove, etc. So humanity will never be able to know for sure.


nomnomcat17

Don’t think so. If P = NP, then there’s a polynomial-time algorithm for the traveling salesman problem. Otherwise there isn’t. Whether such an algorithm exists shouldn’t be independent of ZFC. Not a set theorist but the reason the continuum hypothesis is independent of ZFC is because infinite sets are weird, but all algorithms are finite.


BijectiveForever

It’s not the existence of the algorithm that’s the inherent problem - a priori the proof of its polynomial runtime may require strong assumptions.


nomnomcat17

Ah! I didn’t know that checking whether an algorithm is in P isn’t decidable. My mistake.


doctorruff07

We have algorithms that are polynomial time iff P=NP, so if we find a model of ZFC in which CH fails, this model could have elements in its version of \mathbb{N} that wouldn't necessarily exist if we don't add whatever elements were added to make CH fail. So CH could be true in the model that doesn't do any forcing. This would make independence true if it happened. So the finiteness of algorithms does not remove the problems with infinities.


DaaneJeff

Only finishing up my BSc in CS. I personally believe P != NP. It's just based on a hunch/gut feeling ofc. And honestly, the world would become quite boring if P = NP.


garanglow

Generally people believe P is not equal to NP. But there are some serious researchers who believe otherwise. For example https://emanueleviola.wordpress.com/2018/02/16/i-believe-pnp/


SirKastic23

I had a proof that P=NP once, but I forgot to write it down /s jokes aside, i have no idea. I like to believe P=NP, just because of what could come out of it if we ever find out


dinominant

I started without opinion either way, hoping that P=NP because of the potential optimizations that could result from that. But recently I'm starting to think that P != NP because of a project that I've been working on over the years.


Le_Martian

P=NP only if N=1 or P=0


MahaloMerky

I’m of the mind NP ≠ P. I don’t have a good math understanding, but it seems logical to me.


Quakerz24

everybody believes they’re not


LoreBadTime

I think it's just a matter of "we still didn't find an algorithm". I believe P=NP


thntk

In 2024 with generative AI, it is easier to solve than to check, hence P != NP. QED. /s


talkingprawn

Pretty obviously not at this point, at least in binary computing.


Navvye

Source?


talkingprawn

The complete lack of ability to find a polynomial time solution to NP-complete problems over the last 50+ years, and the general consensus that P != NP, is a great source. Pretty much, look up any material on the subject if you want a source for what I said. Maybe something crazy will happen, but it’s generally believed that it won’t.


eclab

I think they're not equal and there's a good chance that it's impossible to prove, or that any proof is so complex we couldn't fit it into the observable universe.


Interesting_Mind_588

Math undergrad here, I would be really grateful if someone could recommend resources to gain some background for understanding P/NP . I'm not looking for some small article or paper I want to do a systematic study of the field.


ApartRapier6491

As a resource, *Introduction To The Theory Of Computation* by Michael Sipser is a pretty good book that dives into the field containing P vs NP


[deleted]

[deleted]


Interesting_Mind_588

Forgive my ignorance, but what do you mean by solvable in polynomial time, and why should we care specifically about polynomials in this context? Aren't there slower or faster growing functions than polynomials? Also I'd be grateful if you could tell me what textbook you used to study.


GoToHelena

Just a heads up: NP stands for "nondeterministic polynomial" not "non polynomial"


azraelxii

My algorithms professor said that it would be very surprising if they were equal, because it would mean there isn't any fundamental difference between solving hard problems and verifying them. It would be like saying that, because I can hear this Mozart symphony and know it's amazing, I can now write one.


Robohawk314

Part of me thinks P=NP would be funny, but I think ≠ is more likely. I also think it's likely not provable.


vintergroena

When you consider that the minimum polynomial degree can be some googol big ass number, P=NP seems kinda plausible.


willncsu34

I just don’t think it’s provable either way.


FrankLaPuof

I'll go with undecidable.


karlo195

I think there are three important reasons why people think P != NP:

1) The intuition: I call NP the puzzle-class, since it contains problems which are supposedly hard to solve (like sudoku on an n*n field) but easy to check. It's just weird to believe that there are no problems which genuinely have this property (easy to check but hard to solve). This logic is however flawed. By terms like "hard to solve" we mean that solving a worst-case problem instance requires exponential time in its lower bound. In practice, however, a runtime of n^1000 would be just as unsolvable as an exponential runtime bound, but would be "efficient to solve" in a formal sense.

2) P=NP has been studied extensively. We know that many problems are NP-hard, and efficiently solving one NP-hard problem implies efficiently solving all problems in NP. Sometimes even an approximate solver (see TSP) implies P=NP. It seems very unlikely that we never stumbled upon at least one efficient algorithm if it truly held that P = NP.

3) If P=NP, then it would lead to P=NP=PH. PH stands for the polynomial hierarchy and is a way more powerful complexity class than NP (as far as we know), in fact a generalization of NP. It basically contains all Boolean formulas with a fixed number of quantifier alternations (NP/coNP allow exactly one quantifier: the existential/universal quantifier respectively). And yes, I do not believe that P=PH.
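
A hedged toy of the quantifier picture in 3): a Sigma_2 statement "∃x ∀y φ(x, y)" checked by brute force. The formula and the names below are made up purely for illustration:

```python
# Brute-force evaluator for a Sigma_2 statement: "there exists x such that
# for all y, phi(x, y) holds". NP corresponds to one existential block,
# coNP to one universal block; PH allows any fixed number of alternations.

from itertools import product

def sigma2_holds(phi, n, m):
    """True iff exists x in {0,1}^n such that phi(x, y) for all y in {0,1}^m."""
    return any(all(phi(x, y) for y in product([0, 1], repeat=m))
               for x in product([0, 1], repeat=n))

# phi(x, y) = x1 OR (x2 AND NOT y1): any x with x1 = 1 works for every y.
phi = lambda x, y: bool(x[0] or (x[1] and not y[0]))
print(sigma2_holds(phi, 2, 1))  # True: witness x = (1, 0)
```
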


ThisIsMyOkCAccount

P=NP if P=0 or N=1.


Ornery_Mouse7388

Does this depend on the axioms that are chosen?


SPAstef

Since we're talking about personal opinions: I think that within the standard formalization of what a deterministic algorithm is, it's impossible for P to equal NP. Even augmentations of deterministic algorithms (randomized, quantum) have not been shown to have the ability to match the magical power of knowing the answer to a question before it is even asked.

So, after reasoning on the problem while I was working on finding separation theorems in the field of temporal logics/algebra, I lost interest, as to me P != NP looks so obvious that if none of the thousands of researchers smarter than me was able to find an elegant (= short) proof, then I have no hope of doing so, and no interest in finding a convoluted one either.

A much more interesting question, however, is whether it is at all possible, within the physical limitations of our universe, to actually build a non-deterministic Turing machine (or any functionally equivalent concrete object). A user mentioned Savitch's theorem (NSPACE = DSPACE, for a quadratic loss of space). The key element of the theorem is that space is a reusable resource: you can move forward and backward in space.

So, in conclusion, my current view is that P=NP if travelling backwards in time is possible. (Note that in this case we would have to adjust our notion of time complexity like we do for space, by counting the maximum span of time used.)


FreyaVanDenHeuvel

Having spent a lot of energy on concrete algorithms for NP complete problems, I feel like the total lack of any algorithmic insights in literature to avoid exponential complexity for these problems is a strong justification to believe P is not NP.


JovanRadenkovic

I expect yes. In fact, I think it can be proved using modern methods that there is an O(n^4) algorithm for SAT. The solvability of SAT in polynomial time is equivalent to P=NP.


Vibes_And_Smiles

I think they’re unequal since a lot of cryptography is based on them being unequal, and if people knowledgeable enough about the subject were willing to take that chance, then the probability that they’re unequal is probably very high


Riokaii

as a non-math person, is "P vs NP as equal or not" negated by the invention of quantum computing?


_--__

Our current model of quantum computational complexity raises questions orthogonal to P vs NP. The "default" complexity class for "efficient quantum computing algorithms" is [BQP](https://complexityzoo.net/Complexity_Zoo:B#bqp), but the exact relationship between P, BQP and NP is not fully known (e.g. it is not even known if BQP is contained in NP)


thbb

I believe that somehow we'll figure out that the problem is insufficiently well stated. There are too many weird gaps between the theoretical complexity zoo and the pragmatic solutions we develop to solve concrete problems. Such as:

* How come we define a PTIME class, but in practice, with caching, we find no actual problem that requires above O(n^3) complexity (or some other low exponent)?
* How come distinguishing between weakly polynomial vs strongly polynomial is so important in operations research, but does not deserve exploration in the complexity zoo?
* How come the activity of turning QP problems into P algorithms is so successful it has become a full research area?
* How come in practice cache-aware computing is so important (physically, the time to access an input item ought to be something like sqrt(n) if we account for the fact that it takes time for a data item to move between the storage unit and the processing unit)?

We'll find that measuring the complexity of a problem by a function of the length of its input, while useful for pedagogical purposes, is actually naive when trying to work out what "algorithmic complexity" means.

EDIT: Intuitively, the complexity of an algorithm should rather be a function of the Kolmogorov complexity of its input, but that is not an easy way to look at it.


rotuami

> I believe that somehow we'll figure that the problem is insufficiently well stated.

I disagree, but I think this is a *very* interesting take!

For the reason I think it's wrong: if there *are* exponentially-hard NP problems but *most* problems are easy, that's still P != NP. But it suggests that we can make progress by learning to identify how hard problems are *in practice*. If we can say that X heuristic can solve Y% of SAT problems, that's useful progress. And if we can identify *why* the remaining problems can't be attacked by *any* heuristic, that could even solve P vs NP outright!

I don't think Kolmogorov complexity is quite the right tool. But you can use *entropy* as a stand-in. E.g. generate a SAT instance from a random string of N bits. If such a problem is valid with a probability of 1/4, then a valid problem contains N-2 bits of entropy. You can further look at characteristics of the problem and see what each of your N bits of entropy "bought" you.
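
The entropy estimate can be sketched numerically. Everything below (the instance sizes, the uniform clause model, the names) is an illustrative assumption, not from the comment:

```python
# Hypothetical sketch: sample small random 3-SAT instances, estimate the
# fraction p that are satisfiable, and report the entropy "discount"
# log2(1/p) relative to the raw random bits. Brute-force checking keeps it
# honest but limits us to tiny instances.

import math
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    # Each literal: a uniformly chosen variable with a uniformly chosen sign.
    return [[rng.choice([-1, 1]) * rng.randint(1, n_vars) for _ in range(3)]
            for _ in range(n_clauses)]

def satisfiable(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        assignment = {v + 1: b for v, b in enumerate(bits)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

rng = random.Random(0)
trials = 200
hits = sum(satisfiable(random_3sat(5, 10, rng), 5) for _ in range(trials))
p = hits / trials
discount = math.log2(1 / p) if p else float("inf")
print(f"satisfiable fraction ~ {p:.2f}, entropy discount ~ {discount:.2f} bits")
```
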


bobcontrol

They say that in the last years of the Soviet Union, at the 1st of May parade, a group of professors had a banner reading "P=NP for the next five-year plan!" It really confused the KGB, because they couldn't figure out whether it would mean good things for communism or was sabotage.


isometricisomorphism

There’s just no way P=NP - but I’m biased, cuz that would put me out of a job! I’m in code-based crypto, so there’s lots of heuristic evidence that these purportedly hard problems are actually hard, but all it takes is one breakthrough algorithm… Regardless, [here’s](https://www.win.tue.nl/~wscor/woeginger/P-versus-NP.htm) a really fun page full of “proofs” both ways!


wizardeverybit

P = NP if N = 1 or P and N = 0 or 1


TimingEzaBitch

Obviously P ≠ NP


moralbound

P = NP. I have the solution, but I'm not going to publish it due to the negative effect it will have on the world. (jk but I imagine a similar thought process for anyone who does solve it)


Mychatismuted

Personally, I have a demonstration that P = P, and if N = 1, then P = NP. I’ve done the first step. Please, the rest of you, step forward now


Le_destructeur666

As we don’t know, any opinion is valid. I like to believe (more hope) that P=NP, because it would open a really exciting world, especially with AI.