An interesting aspect of OR and business analytics practice is the chance to observe the impact of a decision support system (DSS) on paying customers who deploy it in real-life situations to improve business decisions. A good DSS tends to make users more innovative - they feel more secure that they have 'science' and automation working for them, and try to maximize benefits and progress toward becoming super-users.
A robust DSS should anticipate and react sensibly to a wide range of user inputs. In particular, if the DSS is built on a historical, data-driven analytical model, inputs within the historical range produce outputs that are, roughly speaking, 'interpolations' - more reliable. On the other hand, an input not seen in prior history is (again, roughly speaking) like operating a heavy machine outside its functional limits: it may produce extrapolations that make analytical sense but little business sense. The latter scenario makes the DSS look 'silly', because a business-savvy user would have provided a far better manual answer for such new inputs - unlike the DSS, which is just a computer program, they 'know'.
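A minimal sketch (my own, not from any particular product) of what such an operating-range guard might look like, assuming a single numeric input and a generic model function - when the input falls outside the historical range, the DSS defers to the user rather than extrapolate:

```python
def in_historical_range(history, x):
    """Return True if x lies within the observed historical range."""
    return min(history) <= x <= max(history)

def dss_recommend(history, model, x):
    # Interpolation: the model's output is grounded in data it has seen.
    if in_historical_range(history, x):
        return model(x)
    # Extrapolation: hand the decision back to the business-savvy user.
    return None  # sentinel meaning "defer to manual judgment"

# Toy example: a hypothetical linear model fit on demand history 100..200.
demand_history = [100, 140, 160, 200]
toy_model = lambda x: 2 * x + 10

dss_recommend(demand_history, toy_model, 150)  # within range -> 310
dss_recommend(demand_history, toy_model, 500)  # outside range -> None
```

Real systems would use a richer notion of "range" (e.g., distance from the training distribution), but even this crude guard keeps the DSS from answering confidently where it has no business to.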
A bad DSS is one that users do not trust - and there's more to trust than good analytics. Users may turn off the analytical portions and just use the automation features. It is usually equally, if not more, important to get the non-analytical portions right, so that customers are always getting some value for money. Then there's my pet 'timely failure' theory, gleaned the hard way: if "infeasible" is an utterly unavoidable DSS output for whatever reason, make sure such failures are immediate. Nothing irritates Twitter-era humans more than waiting 20 minutes to find out that nothing is going to happen.
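The 'timely failure' idea can be sketched in a few lines - the example below is hypothetical (a made-up capacity check in front of a long-running solve), but it shows the pattern: run a cheap necessary condition up front so an impossible request dies in milliseconds, not after the full 20-minute solve:

```python
def quick_feasibility_check(demand, total_capacity):
    # Cheap necessary condition: total demand cannot exceed total capacity.
    # Passing this does not guarantee feasibility; failing it guarantees
    # infeasibility, which is exactly what we want to catch early.
    return sum(demand) <= total_capacity

def solve(demand, total_capacity):
    if not quick_feasibility_check(demand, total_capacity):
        # Fail immediately, before any expensive optimization starts.
        raise ValueError("infeasible: total demand exceeds total capacity")
    # ... the expensive 20-minute optimization would run here ...
    return "plan"
```

The check is deliberately one-sided: it can never reject a feasible instance, only short-circuit an obviously infeasible one - the user hears "no" right away instead of after a long, silent wait.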
In the ongoing Cricket World Cup, the new Umpire Decision Review System (UDRS), which predicts 'outs' using the Hawk-Eye ball-tracking technology, is turning out to be an example of a good DSS. The machine-free accuracy rate of umpires was 92%, which jumps to 98% with DSS-assisted decisions. If the DSS determines that the predicted result is marginal, the umpire's original decision stands.
In the past, if the users (umpires) had a slight doubt, they would err on the side of caution. In cricket, that would amount to a 'not-out' ruling in favor of the batsman. That behavior appears to have changed a bit. Now, decisions are slightly more 'aggressive', i.e., in favor of what the DSS decision is likely to be if a TV referral were made. If the DSS is the more accurate predictor (which it appears to be), then this is likely to be a positive change in user behavior, on average. A big chunk of the 6% error reduction may well have come from a reduction in the number of 'false not-outs'. Given that cricket today is skewed in favor of batsmen, this is a welcome development.
On the other hand, cricket administrators have to work hard on defining a simple and effective operating range for the DSS. Otherwise, detractors (batsmen!) will use this as an excuse to get the DSS banished. Similarly, the time the incumbent process takes to process a review and return a response is far too long. A batsman who is going to be ruled 'out' by a damn machine after playing an idiotic shot doesn't want the global television audience to see replay after embarrassing replay of his moment of madness.