As per usual, the comments here have a loose understanding of NP-hardness. Anyone discussing limited space to store the levels simply doesn't understand what NP-hard means: a problem can still be NP-hard even if your input size is small.
However, if the input size is *bounded* (so the problem has only finitely many instances), then the problem H can't be NP-hard unless P = NP.
Proof:
A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time reduction R from L to H.
By assumption, H has finitely many instances, so precompute the yes/no answer to every instance and store it in a lookup table T. Accessing the lookup table takes constant time, which is trivially polynomial in L's instance size.
The algorithm "T . R" (run the reduction, then look up the answer) is then a polynomial-time algorithm for L. Since this works for every L in NP, P = NP.
It's irrelevant how long the lookup table took to create: P = NP as soon as such an algorithm exists, regardless of how difficult it is for humans to find it.
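The argument can be made concrete with a toy sketch. Everything here is hypothetical: `solve_H_somehow` stands in for an arbitrarily expensive solver, and `R` stands in for a polynomial-time reduction from some L in NP; the point is only that once T exists, solving L costs poly time for R plus a constant-time lookup.

```python
from itertools import product

def solve_H_somehow(instance: str) -> bool:
    # Stand-in for an arbitrarily expensive solver for H.
    # (Hypothetical: here it just checks the parity of 1-bits.)
    return instance.count("1") % 2 == 1

# H is assumed to have bounded instances, say bit strings of length <= 3.
# Because there are finitely many, precompute every answer once into T.
# However long this takes, it is a fixed one-time cost, not part of the
# running time of the final algorithm.
T = {
    "".join(bits): solve_H_somehow("".join(bits))
    for n in range(4)
    for bits in product("01", repeat=n)
}

def R(x: str) -> str:
    # Stand-in for a polynomial-time reduction from some L in NP to H.
    # (Hypothetical: here it just truncates to 3 bits.)
    return x[:3]

def solve_L(x: str) -> bool:
    # "T . R": run the reduction, then a constant-time table lookup.
    # Total time: poly(|x|) for R, plus O(1) for the lookup.
    return T[R(x)]
```

The precomputation loop above may be exponentially slow, which is exactly the point of the last paragraph: `solve_L` is still a polynomial-time algorithm whether or not anyone ever actually runs the precomputation.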