Sunday, December 11, 2011

Solver precision: a consumer perspective

This is a quick post motivated by a useful question posed on this nice blog on OR software, which wondered about the need for so many decimal points of precision in commercial solvers. As Prof. Paul Rubin touched upon in his brief response, there are always going to be a few numerically unstable instances that may justify this level of precision. The question as well as the response were quite instructive, and this post mostly adds context.

Speaking from the perspective of a consumer who deploys such solvers, this quest for precision is not just an academic exercise. Ill-conditioned instances are inevitable in certain business situations and are not (yet) rare. A customer (e.g., a store manager using an algorithmic pricing product) routinely stress-tests their newly purchased or updated decision-support system that has such a solver hidden inside it. Customers will sometimes specify extreme input values to gain a 'human feel' for the responsiveness of the product their management asked them to use. In fact, even routine instances can occasionally defeat the best data-driven scaling strategies, and that old nemesis, degeneracy, always lurks in ambush.
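
As a toy illustration of what 'poorly scaled' can look like, here is a small, entirely hypothetical LP whose coefficients span many orders of magnitude, solved as-is and after a simple row/column rescaling. This is just a sketch using SciPy's linprog; any particular solver (including the HiGHS backend used here) may or may not stumble on an instance this tiny, but these are the kinds of coefficient ranges that stress feasibility and optimality tolerances.

```python
# A tiny, hypothetical LP with badly mismatched coefficient magnitudes:
#   minimize    1e9*x1 + 1e-6*x2
#   subject to  2e8*x1 + 3e-7*x2 >= 4e8
#               5e-9*x1 + 7e5*x2 >= 1e6
#               x1, x2 >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([1e9, 1e-6])
A_ub = -np.array([[2e8, 3e-7],
                  [5e-9, 7e5]])       # negate to express >= as <=
b_ub = -np.array([4e8, 1e6])

raw = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")
print("raw instance   :", raw.status, raw.fun)

# Simple equilibration: scale each row, then each column, by its largest
# absolute coefficient; solve the scaled problem; undo the scaling.
row_scale = np.abs(A_ub).max(axis=1)
col_scale = np.abs(A_ub / row_scale[:, None]).max(axis=0)
A_s = A_ub / row_scale[:, None] / col_scale
b_s = b_ub / row_scale
c_s = c / col_scale

scaled = linprog(c_s, A_ub=A_s, b_ub=b_s, bounds=[(0, None), (0, None)],
                 method="highs")
x_original_units = scaled.x / col_scale   # map the solution back
print("scaled instance:", scaled.status, c @ x_original_units)
```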

Such solvers also have to be robust enough to work 'out of the box' for millions of varied instances (to the extent possible) across industries and across changing business models within any given industry. Being 'optimally precise' may pay off in terms of reduced support and maintenance costs for the vendor who writes the solver code, as well as for the consumer who deploys it, and fewer headaches for the end user, the customer.

With the information available in journals and on the Internet, it is realistically possible for a DIY practitioner to assemble a reasonably fast and robust solver to handle their own company's LPs, but the task of building high-precision, enterprise-level MIP solvers is best left to the specialists.

Thursday, December 1, 2011

OR models for discouraging criminal intent

Dr. Subramanian Swamy, the brilliant lawyer who exposed the 2G scam in India, is trying to raise awareness about this issue amongst the Indian public via a series of talks around the country. In one of his talks, he white-boards a simple but useful high-level model (shown below) to determine the level of financial penalty that should be imposed to deter future scams. This is especially important in the case of crooked politicians, who tend to be thick-skinned and are willing to ride out their years in jail in order to enjoy their ill-gotten stash after 'serving out their term'. Conditioning on the chance of getting caught (p),

E[reward] = (1 - p)R - pD

where R = reward from the scam, and D = penalty to be paid if caught (the primary decision variable). Swamy's point was that, in the Indian context, even though 'p' is relatively small for various reasons, all is not lost if 'D' can be made sufficiently large (proportional to 'R' times the odds of evasion) to ensure that the expected reward is unattractive enough.
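
To make the break-even point explicit: setting E[reward] <= 0 in the expression above gives D >= R(1 - p)/p, i.e., the penalty must be at least the reward scaled by the odds of evasion. A minimal sketch, with made-up numbers purely for illustration:

```python
# Minimal sketch of the deterrence condition, with made-up numbers.
# E[reward] = (1 - p)*R - p*D  <= 0   whenever   D >= R*(1 - p)/p

def min_deterrent_penalty(p, R):
    """Smallest penalty D that makes the expected reward non-positive."""
    return R * (1.0 - p) / p

def expected_reward(p, R, D):
    return (1.0 - p) * R - p * D

p, R = 0.05, 1_000_000          # 5% chance of getting caught, 1M scam reward
D = min_deterrent_penalty(p, R)
print(f"penalty needed: {D:,.0f}")                                   # 19,000,000
print(f"E[reward] at that penalty: {expected_reward(p, R, D):,.0f}") # 0
```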

Let's apply this model to the roads of India, where traffic violations are the norm. A given O-D (origin-destination) path is formed by several links, where link i is associated with a detection probability p(i), reward R(i), and penalty D(i). Crooks try to find the path of least resistance, e.g., the smallest cumulative 'p' or the maximum cumulative E[reward]. The police can counter this by solving a p-median-type problem (a different 'p') to optimally position their scarce resources (traffic cops) so as to maximize an aggregate measure of expected penalty weighted by link traffic volumes; a simplified sketch follows below. Given budget restrictions, these scarce resources have become scarcer still.

However, it appears that in many other parts of the world, police departments have been smart enough to create a multiplier effect via an 'illusion of ubiquity': a 'ghost archetype' that periodically and randomly enforces the law at prominent locations in a particularly visible and pitiless manner. Recognizing the 'human element', in this case the nonlinear 'customer response' to certain deployment scenarios, can lead to a solution that increases the perceived probability p(i) on certain key links of the transportation network (beyond its actual statistical rate) without increasing actual capacity. Similarly, the presence of crooked, 'bribe-friendly' cops at busy intersections can lead to a nonlinear drop in public trust and in perceived p(i), regardless of any increase in overall capacity.
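
Here is a simplified, hypothetical sketch of the placement idea, not the full p-median formulation: given per-link violation volumes, fines, and the bump in detection probability that a posted cop provides, pick k links by brute force to maximize the total expected penalty. All of the data and the exact objective below are illustrative assumptions.

```python
# Hypothetical sketch: choose which k links get a traffic cop so that the
# total expected penalty "felt" by violators is maximized. Brute-force
# stand-in for the p-median-type model described above; all data made up.
from itertools import combinations

# link id: (daily violation volume, fine D(i), baseline p(i), p(i) with a cop)
links = {
    "A": (5000, 500, 0.02, 0.30),
    "B": (3000, 500, 0.05, 0.40),
    "C": (8000, 200, 0.01, 0.25),
    "D": (1000, 800, 0.10, 0.50),
    "E": (6000, 300, 0.03, 0.35),
}
k = 2  # number of cops available

def total_expected_penalty(posted):
    """Sum over links of volume * (effective detection probability) * fine."""
    total = 0.0
    for link, (vol, fine, p0, p1) in links.items():
        p_eff = p1 if link in posted else p0
        total += vol * p_eff * fine
    return total

best = max(combinations(links, k), key=total_expected_penalty)
print("post cops on:", best)
print("expected penalty per day:", total_expected_penalty(best))
```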

To summarize, OR-based decision-support models can help make a dent in corruption via analytically driven decisions that maximize the perceived penalty for criminal or corrupt acts, which in turn leads to an actual, incremental reduction in the relevant crime/scam rate, despite a scarcity of resources.