Sunday, July 14, 2013

Analytics and Cricket - XI: Using DRS Optimally

The Ashes
The first Ashes cricket test, which concluded earlier today in England, triggered this post.
(pic source link: guardian.co.uk)

This post is again related to the Decision Review System (DRS), which combines machine and human intelligence to support evidence-driven out/not-out decisions in cricket. The previous cricket-related post can be found here, where a reader critiqued an earlier post on the false-positives issue of DRS (a post that saw a lot of visits last week during this match). It is now apparent that England won the match thanks in part to their superior use of DRS compared to Australia.

DRS
The DRS consists of 3 components:
1. The human (a team of umpires, cameramen, and hardware operators)

2. The hardware (hot-spot cameras, slow-mo cameras, snick-o-meters, etc.). These are the data-gathering devices.

3. The Analytics and Software (ball tracking and trajectory prediction, aka the 'Hawkeye'-based system)

This gives us three separate (but possibly correlated) sources of error:
a. Operator error

b. Hardware error: Technology limitations (resolution, video frame rates, hardware sensitivity, etc.) may at times be inadequate for sporting action that occurs at speeds approaching 100 mph or spin rates around 2000 rpm.

c. Prediction algorithm error: Given such variations in sporting action, a forecast of the future trajectory of the ball is also subject to uncertainty.

(pic source link: dailymail.co.uk)

A smart user, after sufficient experience, will be able to grasp the strengths and limitations of the system. In test cricket, a team is allowed no more than two unsuccessful DRS reviews per innings, so reviews are a scarce resource that must be used cleverly to maximize benefit. In fact, the DRS is an example of a situation where the use of a decision support system (DSS) itself involves decision-making under uncertainty via a meta-optimization model.

Optimal Usage of DRS
There are several factors that dictate when a cricket captain should pull the trigger and invoke DRS to try to overturn an on-field umpiring decision.
i. The probability of success (p)
ii. The incremental reward, given a successful review (R)
iii. The immediate payoff of an unsuccessful review = payoff of the status quo (normalized to 0)
iv. The expected future value of having k reviews still available in the inventory: f(k), increasing and concave in k, with f(0) = 0.

Reviewing based only on (ii) is like a "Hail Mary" and banks on hope. On the other hand, paying exclusive attention to (i) may not be the best approach either, since it can result in a captain using up the reviews quickly, reducing the chances of taking advantage of the DRS later, "when you need it the most". A captain who doesn't use DRS at all (or uses it too late to have an impact) leaves unclaimed reward on the table.

Probability Model
We'll start with a simple model. It's not perfect, or the best, but merely a good starting point for further negotiations.

The value of do-nothing  = f(k).
The value of a DRS review = p[R + f(k)] + (1-p)[f(k-1)] = pR + pf(k) + (1-p)f(k-1).

It is beneficial to go for a review when the value of a review exceeds the value of do-nothing, i.e., when pR + pf(k) + (1-p)f(k-1) > f(k). This simplifies to:
pR > (1-p)[f(k) - f(k-1)], or

p/(1-p) > [f(k) - f(k-1)]/R

i.e., the odds of success must exceed the ratio of the marginal future value of a review to the incremental reward.

In other words:
it is good to review when the odds of overturning the on-field decision exceed the ratio of the expected cost of losing a DRS review to the expected incremental reward.
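
To make the rule concrete, here is a minimal Python sketch of the criterion above. The function names are illustrative, and f is assumed to be supplied as a callable with f(0) = 0; this is a sketch of the decision rule, not a prescription from the model.

def review_is_worthwhile(p, R, f, k):
    """Decide whether invoking DRS beats doing nothing.

    p : probability that the on-field decision is overturned
    R : incremental reward of a successful review (status quo normalized to 0)
    f : expected future value of holding k reviews; must satisfy f(0) == 0
    k : unsuccessful reviews still available (k >= 1)
    """
    value_do_nothing = f(k)
    value_review = p * (R + f(k)) + (1 - p) * f(k - 1)
    # Equivalent check: p / (1 - p) > (f(k) - f(k - 1)) / R
    return value_review > value_do_nothing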

Use Case: for fifty-fifty calls (p = 0.5) the odds equal 1, so with a single DRS review left in the inventory you would want to review only if you are convinced that the present reward is likely to exceed the value of not having DRS for the remainder of the innings (R > f(1)). For a fixed reward, the RHS increases steeply after the first unsuccessful review because f is concave. To be really safe, you want to risk the second and final unsuccessful review only when you can trigger a truly game-changing decision that greatly increases the chances of winning the match. In general, R may be neither a strictly increasing nor a decreasing function of time. This is especially true in limited-overs cricket, where a game-changing event can occur very early in the game. In soccer, baseball, or basketball, however, R can reasonably be approximated as an increasing function of time, so it generally makes sense to save the review for the end-game. In any sporting event, including cricket, that is heading for a close finish, it may be beneficial to delay the use of a review.
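
As a toy numerical illustration of this point, assume a hypothetical concave future-value function f(k) = V(1 - gamma^k) and an arbitrary reward R = 5; the numbers carry no cricketing meaning and are only meant to show the direction of the effect.

# Hypothetical concave future-value function: f(0) = 0, diminishing returns in k.
def f(k, V=10.0, gamma=0.6):
    return V * (1.0 - gamma ** k)

def break_even_p(R, f, k):
    """Probability p at which p/(1-p) equals (f(k) - f(k-1))/R."""
    threshold = (f(k) - f(k - 1)) / R
    return threshold / (1.0 + threshold)

R = 5.0  # illustrative incremental reward
print(round(break_even_p(R, f, k=2), 2))  # ~0.32 with both reviews in hand
print(round(break_even_p(R, f, k=1), 2))  # ~0.44 with only one review left

Losing the first review raises the break-even probability for the second, which matches the intuition of saving the last review for a genuinely game-changing moment.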

In the Ashes test, Michael Clarke, the Australian captain, appeared to pay more attention to 'R' and less attention to 'f'; he was left without recourse at a crucial stage, and this hurt his team. On the other hand, the England skipper Alastair Cook delayed the use of DRS: the last wicket of a closely contested match fell when the game had reached a climax (R = R_max), and it was DRS-induced. Thus, optimally delaying DRS involves constantly assessing and updating risk versus reward and pulling the trigger when the odds are in your favor.

Analytical Decision Support Systems
A smart organization will be aware of the strengths, weaknesses, and value of DSS-based decisions. In some industries characterized by shrinking margins, even small incremental gains in market share or profitability from a DSS can alter the competitive landscape. This motivates an interesting question: if two firms employ the same decision-analytics suite provided by the same vendor, does the advantage necessarily cancel out? Or, as in cricket, can one firm do a better job of maximizing value from the DSS to gain a competitive advantage?

Updated July 17, 2013:
It turns out that wicketkeeper Matt Prior was instrumental in ensuring England's good DRS strategy. As we all know, a good prior saves your posterior during crunch time!
