Major OR/MS conferences typically coincide with new releases of existing commercial optimization software products like CPLEX and Gurobi, as well as the introduction of new ones. In the last few days, we have seen the entry of an exciting new solver product, Sulum Optimization. Dr. Bixby's eagerly awaited updates on the state of the solver universe speak of truly breathtaking improvements. So how important are such off-the-shelf solvers in practice?
A complementary 'state of the union' question worth asking is: 'How much progress have humans made in analyzing decision problem formulations in practice?' Glancing through journal papers from the last three decades and comparing that work with the approaches taken by researchers in the 1940s-1970s, I couldn't help but feel that there has been no improvement, perhaps even a regression, along this dimension. It's tough to quantify the answer, but we can try to describe reasonable boundary conditions.
1. A terrible formulation is one which cannot be readily analyzed to determine an acceptable solution to the original business problem. This is not the same as 'finding a feasible solution to the mathematical optimization model', although the two questions are somewhat related. Yet a whole lot more time and publicity is directed toward the latter. Apparently quite a few MI-Hard (mission-impossibly hard) problems have been discovered out there, especially in the last three decades. These instances are so mysterious that even an acceptable solution remains elusive despite the obviously massive progress in solvers and hardware. What does this tell us? In practice, an acceptable solution is usually either already available or can be found using OR and business insight. I certainly haven't yet come across such an MI-Hard instance across multiple industries.
2. A viable formulation is one which you can analyze to find an excellent answer to the original problem by partial or complete enumeration, within a reasonable amount of time. In other words, you are doing your job as a practitioner well if your design simply avoids crappy solutions to the original problem, leaving you with the task of scanning just the few remaining good ones. This process is merely a refinement of the 'acceptability analysis' in (1) (Watson would ask, 'What is constraint programming?'). Not only is the process of being a 'human solver' quite enjoyable, it also goes a long way toward convincing your employer and your customers that a competitor cannot simply buy their own solver and achieve parity. Plus it builds street cred for you and for OR. On the other hand, a few journals love to have MI-Hard problems subjected to 'shock and awe' within their pages.
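To make the idea of 'acceptability analysis followed by partial enumeration' concrete, here is a minimal sketch in Python. Everything in it is hypothetical and purely illustrative (the sites, costs, regions, and business rules are invented, not drawn from any real problem): a domain rule first prunes away the unacceptable combinations, and the handful of survivors is small enough to enumerate directly, with no solver in sight.

```python
from itertools import combinations

# Hypothetical example: choose 3 of 6 candidate warehouse sites.
# The 'acceptability analysis' (business insight) rules out most
# combinations up front; the few that remain can be enumerated.

sites = {
    "A": {"cost": 120, "region": "east"},
    "B": {"cost": 95,  "region": "east"},
    "C": {"cost": 150, "region": "west"},
    "D": {"cost": 110, "region": "west"},
    "E": {"cost": 140, "region": "south"},
    "F": {"cost": 100, "region": "south"},
}

def acceptable(combo):
    # Assumed domain rules (for illustration only): cover at
    # least two distinct regions and stay within a total budget.
    regions = {sites[s]["region"] for s in combo}
    total_cost = sum(sites[s]["cost"] for s in combo)
    return len(regions) >= 2 and total_cost <= 330

# Partial enumeration: only acceptable combinations survive.
candidates = [c for c in combinations(sites, 3) if acceptable(c)]

# Scan the survivors for the cheapest one.
best = min(candidates, key=lambda c: sum(sites[s]["cost"] for s in c))
print(sorted(best), sum(sites[s]["cost"] for s in best))
```

The point is not the toy numbers but the shape of the work: the pruning rule comes from understanding the business, and once it is in place, the remaining search is trivial.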
There may be rare exploratory formulations that require the full power of such solvers to generate insight, when very little is practically known about the original problem. In general, though, solvers are useful in practice the way health insurance is useful in health care. Some feel that buying the best or most expensive health insurance will somehow translate into an equivalent improvement in, and stability of, our health, forgetting that fundamental choices like exercise, lifestyle, and diet are far more important factors. In fact, if we find that we are using our health insurance a lot, it is all the more important to question those fundamental choices.
'Which is the best solver today' is an interesting question, but the best decision model you build will be one that almost certainly does not depend on the answer.