The last post related to cricket was quite a while ago (that the Indian cricket team has been repeatedly thrashed since then is a mere coincidence). This post focuses again on the Decision Review System (DRS), a technology-aided decision-support system for cricket umpires. The toolkit includes multiple video cameras, heat-sensing 'hot spot' technology, and ball-tracking devices that record the point of impact, plus a set of predictive algorithms that forecast the counterfactual trajectory of the cricket ball (you can be forecast 'out' in cricket). Despite the best efforts of cricket's custodians, considerable user unease with the DRS persists. In fact, it has recently been acknowledged that the decision support system has had a significant impact on the game (user response: altering playing styles and inducing more 'OUT' decisions from umpires), something this blog predicted a year ago. Reasons for discomfort also include the lack of uniformity in its deployment, the incremental dollar cost of the DRS versus incremental returns, and, equally importantly from a fan and player perspective, DRS reliability (both real and perceived). This post will focus on the last two issues.
The International Cricket Council (ICC) has focused almost exclusively on improving the technology (e.g. an increased number of video frames per second). The main argument here is that while an improvement in the unconditional success rate of the DRS may seem impressive, it would be more helpful if statistics were calculated and presented conditional on the corresponding human decisions. Toward this, let's look at this MBA-ish 2x2 decision matrix (sorry). Strictly speaking, the terms 'correct' and 'incorrect' in the matrix mean 'almost surely correct' and 'almost surely incorrect', respectively.
1. The ICC has a wonderful set of umpires in their 'elite panel' that referee the most important inter-nation test matches (these elite umpires are a scarce resource, and their globe-trotting schedule optimization is yet another operations research problem - perhaps a good topic for part-8 of this series). Prior to the DRS, the umpires achieved a respectable success rate of more than 90%. Consequently, in such situations, the DRS getting it right is a relatively uninteresting event. This situation is denoted as the neutral zone (top-left box). The focus is therefore on the remaining 7-10% of the time when the decisions are contentious.
2. Clearly the case where the umpire is wrong and the DRS is right (as judged by video and predicted-trajectory evidence) is a win-win for the DRS and players. This is the green zone (bottom left) and appears to be the exclusive area of the ICC's focus as far as technological improvements go. However, it is not necessarily desirable to accord top priority to the goal of achieving further improvements in this statistic.
3. The problems arise when the DRS occasionally produces visibly and audibly confounding results. This is represented by the top-right box, the 'high conflict zone'. In some instances, it could be because of technological gaps or operator error (there was a recent example where an umpire whose sole job consisted of watching the TV replay and hitting one of two buttons managed to hit the wrong one). However, in other instances, the predictive component of the DRS that is used to probabilistically judge LBW (leg-before-wicket) 'OUT' decisions appears to be flawed because:
a. The greater the required length (or duration) of the predicted segment, the noisier the forecast trajectory.
b. The smaller the observed portion of the ball's trajectory available for 'training' (especially after the ball spins and bounces off the cricket pitch), the less reliable the prediction.
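Both effects can be illustrated with a toy simulation. The sketch below is a deliberately simplified 1-D stand-in for the real 3-D ball-tracking problem: it fits a straight line to the first few noisy 'camera frames' of a path and extrapolates to the stumps at t=1, showing how the average extrapolation error grows when fewer post-bounce frames are available for fitting. The trajectory, frame counts, and noise level are all made-up numbers for illustration, not actual DRS parameters.

```python
import random
import statistics

random.seed(7)

# Hypothetical 1-D stand-in for the post-bounce ball path: a straight-line
# drift sampled at evenly spaced camera frames from t=0 to t=1 (the stumps).
FRAMES = 50
T = [i / (FRAMES - 1) for i in range(FRAMES)]
TRUE_SLOPE, TRUE_INTERCEPT = 0.8, 0.1
true_path = [TRUE_SLOPE * ti + TRUE_INTERCEPT for ti in T]

def fit_line(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def mean_error_at_stumps(n_train, trials=300, noise=0.01):
    """Average absolute extrapolation error at t=1 when only the first
    n_train noisy frames are available for fitting."""
    errs = []
    for _ in range(trials):
        noisy = [y + random.gauss(0.0, noise) for y in true_path]
        slope, intercept = fit_line(T[:n_train], noisy[:n_train])
        errs.append(abs(slope * 1.0 + intercept - true_path[-1]))
    return statistics.mean(errs)

err_short = mean_error_at_stumps(10)  # little of the path observed
err_long = mean_error_at_stumps(40)   # most of the path observed
print(f"mean error with 10 frames: {err_short:.4f}")
print(f"mean error with 40 frames: {err_long:.4f}")
```

Extrapolating far beyond a short, noisy training window amplifies the fitting noise, which is exactly the worry in points (a) and (b) above: a ball that bounces close to the batsman leaves the tracker with few observed frames and a long distance to predict.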
The umpire's years of refereeing experience, and the other human cognitive faculties that help him arrive at a decision, are pitted against hardware and algorithmic prediction prowess. The challenge is to account for the many degrees of freedom of a rotating cricket ball in motion while also taking into account the effect of the cricket pitch and local conditions.
4. There may be rare, irritating cases where, despite best efforts, uncertainty prevails and both the umpire and the DRS manage to get it wrong (bottom-right box).
If the ICC can provide data on the frequency of observations that fall in each of these 4 boxes, we can of course calculate the conditional probability of a correct decision given the DRS response using elementary conditional probability, and compare it with the corresponding results for the manual system. For example, how likely is it that the batsman is actually OUT given that the DRS overruled an umpire's original 'NOT OUT' decision? Such analyses help quantify the impact of the false positives and false negatives that comprise the conflict-zone observations. In particular, the false-positive rate, i.e. the case where a batsman is ruled 'OUT' by the DRS when he is actually NOT OUT, should be minimized given the nature of this sport.
Recommendations
The biggest stumbling block appears to be the top-right box (high-conflict zone) that erodes user trust every time the DRS wrongly overrules what appears to be a sound cricketing decision by the umpire. As a priority, the ICC should isolate and eliminate those components that increase the occurrence of such situations. The likely candidates for culling will be the trajectory predictor and existing flawed versions of 'hot spot'. These innovations should be reintroduced at a later stage only after sufficient improvements have been made (while also keeping the resultant cost down) to ensure that the expected failure rates are well under control. Viewed from this perspective, a recent decision by the Indian cricket board to do away with the predictive component of the ball-tracking technology is actually the right one.
@dualnoise on twitter