
As per usual, the comments here have a loose understanding of NP-hardness. Anyone discussing the limited space available to store the levels simply doesn't understand what NP-hard means: a problem can still be NP-hard even if your input size is small.


If the input size is bounded, then the problem (call it H) can't be NP-hard unless P = NP.

Proof: A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time reduction R from L to H.

By assumption, H has finitely many instances, so precompute a lookup table T holding the yes/no answer for each of them. Looking up an answer in T takes O(1) time, which is trivially polynomial in the size of L's instance.

The algorithm "T ∘ R" (first apply the reduction R, then read the answer out of the table) is then a polynomial-time algorithm for L. Since this works for every L in NP, P = NP.
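
To make the shape of the argument concrete, here is a minimal sketch in Python against a toy bounded problem (a SUBSET-SUM variant with artificially tiny instances; the problem, bounds, and names are mine, purely for illustration): the answer table is built once by brute force, and afterwards every instance is answered by a constant-time lookup.

    from itertools import product

    # Toy stand-in for a problem whose instance space is finite:
    # SUBSET-SUM restricted to 3 weights in 0..3 and a target in 0..9.
    def brute_force(weights, target):
        # Exponential in the instance size, but it only ever runs on the
        # finitely many instances below, so its total cost is a constant.
        return any(
            sum(w for w, pick in zip(weights, picks) if pick) == target
            for picks in product([0, 1], repeat=len(weights))
        )

    # Precompute the entire answer table T once (4^3 * 10 = 640 entries).
    TABLE = {
        (weights, target): brute_force(weights, target)
        for weights in product(range(4), repeat=3)
        for target in range(10)
    }

    def decide(weights, target):
        # O(1): every instance of the bounded problem is already answered.
        return TABLE[(weights, target)]

    print(decide((1, 2, 3), 5))  # True  (2 + 3 == 5)
    print(decide((1, 1, 1), 9))  # False (the largest reachable sum is 3)

The table is simply part of the program text; however painful it was to produce, the per-query cost no longer depends on it.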


Creating the lookup table took non-polynomial time. Go back to school.


It's irrelevant how long the lookup table took to create: P = NP if such an algorithm exists, regardless of how difficult it is for humans to find the algorithm.


2 to the billion is still constant....
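
Spelled out, taking the parent's figure at face value:

    build the table (once):  2^(10^9) steps   -- huge, but a constant, independent of n
    answer one query:        poly(n) for R  +  O(1) lookup  =  poly(n)

The preprocessing term is enormous, but it doesn't depend on the input size n, so it never shows up in the asymptotic running time of the decision procedure.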



