###### tags: `one-offs`
# Learning from Complexity Lower Bounds
**Overview**: In this note, I reflect on complexity-theoretic lower bounds, and on productive ways to learn from negative results in general.
## Algorithmic Hardness
At some point, I was driven to reflect on the nature of (computational) lower bounds, i.e. theorems which say something like:
if you want to solve a problem $P$, with _any_ algorithm which uses only the information $I$, uniformly well over some class of problems $C$, then its running time must be at least $T(P, I, C)$.
At some level, I honestly used to find these results vaguely upsetting. The optimist in me didn't like the idea that, even for the most ingenious algorithm, this barrier would still be insurmountable. Of course, in some important respects, this is not actually what these results are saying: there is a genuine gap to be bridged between the mathematical content of these statements and their practical implications.
To take a more optimistic view of these results, one can read them as telling you where to look for better answers. If you think that it is possible to do something cleverer than an 'optimal' algorithm (as dictated by the theorem), then you might implicitly be making use of some information which is not contained in $I$, or thinking about a strict subset of the problems in $C$, or otherwise deviating from the assumptions of the theorem. Formalising these deviations can then be a useful contribution.
Separately, it is also plainly useful (though not necessarily joyful) to be informed that a problem is hard in some essential way. The reasoning here is that such information offers guidance on how to allocate computational effort in a more general setting. For example, if your current algorithm for some interesting task requires running an inner loop which attempts to solve a high-dimensional density estimation problem (which is known to be hard), then the negative result suggests that there would be value in reformulating your problem or algorithm to avoid that subproblem.
A bonus phenomenon which sometimes arises in this context is that even quite subtle deviations from the original statement of a theorem can change things drastically. A well-known example of an NP-hard problem is the Travelling Salesman Problem (TSP), which takes as input a graph whose edges are labelled with weights. It is known that if these weights are compatible with a metric structure (which is a very reasonable assumption in certain applications), then a tour whose length is within a factor of $3/2$ of the optimum can be found in polynomial time (via the Christofides algorithm), even though finding an exact optimum remains NP-hard.
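To make the role of the metric assumption concrete, here is a minimal sketch in Python. It is not the Christofides algorithm, but the simpler MST-doubling heuristic, which already uses the triangle inequality to guarantee (in polynomial time) a tour at most twice the optimum; the function name, the point set, and the use of Euclidean distances are all hypothetical and chosen purely for illustration.

```python
import math

def mst_tsp_2_approx(points):
    """Approximate metric TSP on 2-D points via MST doubling: build a
    minimum spanning tree, walk it in preorder, and shortcut repeated
    vertices. Under the triangle inequality the resulting tour is at
    most twice the optimum (the 3/2 guarantee needs Christofides)."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # Prim's algorithm: grow an MST from vertex 0, recording each vertex's parent.
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0.0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u

    # Preorder walk of the MST; "shortcutting" repeated vertices is exactly
    # where the metric (triangle inequality) assumption gets used.
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))

    length = sum(dist(tour[i], tour[(i + 1) % n]) for i in range(n))
    return tour, length

# Hypothetical cities in the plane, for illustration only.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
tour, length = mst_tsp_2_approx(cities)
print(tour, round(length, 2))
```

The design point is that without the metric assumption, no such constant-factor guarantee is possible (unless P = NP); with it, even this very simple heuristic comes with a provable bound.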
All in all, it is understandable to find negative results frustrating. The healthy response is to learn what you can from the negative implications, and sometimes to use them to guide you towards better assumptions and more positive results.