The perils of employing heuristics of unknown quality are often disregarded in practice, all in the interest of 'time to market' and 'practical' solutions for NP-Hard optimization problems. See, for example, the papers and presentations of Dr. Gerald Brown and the late Dr. Rick Rosenthal on this topic (also see an older post on 'the paradox of optimality'). Importantly, Dr. Brown reminds us of the huge difference between 'known unknowns' and 'unknown unknowns' before we make the poor assumption that NP-Hardness automatically calls for a quick, randomized heuristic. Dr. Michael Trick's recent blog entry on NP-Hardness is illuminating. Such heuristics do have a role to play in O.R. practice, depending on the business problem at hand. We attempt to illustrate this, for a non-technical audience in particular, using a simple example:
The 2-D Laughing Stock Problem
PICTURE 1: shows the feasible region (a polygon), the optimal solution, and the one the heuristic algorithm found.
PICTURE 2: shows the new constraint added by the user that reduces the feasible space. The previous heuristic solution is now infeasible, so the solver re-optimizes.
PICTURE 3: shows the new heuristic solution, which is near-globally optimal. The bewildering user experience so far: the user has added a highly restrictive constraint, yet the app returns a dramatically better solution, one even better than what was previously presented as "optimal". Imagine driving a car that has such heuristics built into its steering response.
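The three-picture story above can be reproduced in a few lines. Below is a minimal sketch, with entirely made-up numbers: a toy 2-D linear program (maximize x + y over a polygon) attacked by a deliberately naive greedy heuristic that pushes x as far as it can, then y. The heuristic stalls at a poor corner; when a restrictive constraint is added, the same heuristic lands near the true optimum. The constraints, step size, and the `greedy` routine are all invented for illustration, not any particular solver's method.

```python
def feasible(x, y, constraints):
    """Check x, y >= 0 and every constraint a*x + b*y <= c."""
    return x >= 0 and y >= 0 and all(a * x + b * y <= c for a, b, c in constraints)

def greedy(constraints, step=0.5):
    """Naive coordinate-ascent heuristic for maximizing x + y:
    push x as far as possible, then push y. Easily gets stuck."""
    x = y = 0.0
    while feasible(x + step, y, constraints):
        x += step
    while feasible(x, y + step, constraints):
        y += step
    return x, y

# Toy polygon: x + 2y <= 12 and 3x + y <= 15 (plus x, y >= 0).
base = [(1, 2, 12), (3, 1, 15)]
x1, y1 = greedy(base)
# The heuristic stalls at (5.0, 0.0), objective 5.0 -- the true optimum
# is (3.6, 4.2) with objective 7.8, at the intersection of the two lines.

# The user now adds a highly restrictive constraint: x <= 3.5.
tightened = base + [(1, 0, 3.5)]
x2, y2 = greedy(tightened)
# Same heuristic now reaches (3.5, 4.0), objective 7.5 -- dramatically
# better than before, and near the global optimum of 7.8.
```

The restrictive constraint happens to wall off the corner the greedy ascent was getting stuck in, steering it toward the good region instead. That is exactly the bewildering experience in the pictures: the quality of the answer depends on an accident of geometry, not on anything the heuristic can guarantee.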