Tuesday, October 12, 2010

Popular Mechanics and the Frankenstein design principle

Popular Mechanics is a favorite magazine for the sheer variety of mechanical gizmos it packs into its pages (check out the Jaguar supercar). It is a permanent fixture in many auto-service shop waiting rooms, where we spend a good part of a lifetime. The October 2010 issue of Popular Mechanics was a pleasant surprise - not one but two OR-related topics were covered. The first was a short article on optimizing the waiting experience in queues. The metriclastic US population (is that a new word?), which shuns neat divisions by 10, rightly resents having to spell 'Q' using five letters, one of which is Q, and simply calls it a 'line'. On the other hand, 'line theory' is not too informative. Pop-mech doesn't mention OR explicitly, but we know that queuing theory and OR are inseparable.

The queuing article (online version here), among other things, mentions that a smart researcher in Taiwan, Pen-Yuan Liao, derived an equation to compute a 'Balking Index' that tells you when and how many customers are likely to flee a long line and 'defect' to a better one in a multi-Q system. Obviously, information like this helps determine optimal staffing levels that meet required service levels, minimize costs, and improve the customer experience. Beyond analyzing people standing in line, queuing models have many diverse applications and form a whole field of study. There are also expert comments in the magazine article from Dr. Richard Larson of MIT, whose name would be familiar to the OR fraternity.
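The article doesn't reproduce Liao's balking-index equation, but the staffing question it feeds into is easy to illustrate. Here's a minimal Python sketch - my own, not from the article, with made-up arrival and service rates - that uses the classical Erlang C formula for an M/M/c queue to find the smallest number of servers keeping the probability of waiting below a target service level:

    import math

    def erlang_c(c, a):
        """Erlang C: probability an arriving customer must wait in an
        M/M/c queue, where a = arrival_rate / service_rate (offered load)."""
        idle_terms = sum(a**k / math.factorial(k) for k in range(c))
        wait_term = (a**c / math.factorial(c)) * (c / (c - a))
        return wait_term / (idle_terms + wait_term)

    def min_servers(arrival_rate, service_rate, max_wait_prob):
        """Smallest staffing level c whose probability of waiting
        stays at or below the target service level."""
        a = arrival_rate / service_rate
        c = math.floor(a) + 1          # need c > a for a stable queue
        while erlang_c(c, a) > max_wait_prob:
            c += 1
        return c

    # Made-up example: 40 arrivals/hour, 10 served/hour per clerk,
    # and we want fewer than 20% of customers to have to wait at all.
    print(min_servers(40, 10, 0.20))   # -> 7

Balking makes the real model messier (impatient customers thin out the line), but the basic trade-off is the same: each extra server buys a chunk of service level at a staffing cost.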

The second article talks about risk management in the context of the Gulf-Coast oil spill. This tab's summary of the article from an OR perspective is this: when it comes to designing and operating complex systems, there needs to be a greater emphasis on managing the conditional expectations associated with low-probability, high-consequence events. This is in addition to tracking traditional risk metrics (expected cost, probability of failure, etc.), i.e. we should be tracking multiple risk objectives, something I recall working on many years ago as a grad student.
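To make the 'conditional expectation of a rare disaster' point concrete, here is a small Python sketch - again my own illustration with made-up numbers, not anything from the article - contrasting the mean loss with the conditional value-at-risk (CVaR), i.e. the expected loss given that you land in the worst tail:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical loss model: routine operating losses plus a rare,
    # catastrophic event (low probability, high consequence).
    routine = rng.normal(1.0, 0.3, size=100_000)
    disaster = rng.binomial(1, 0.001, size=100_000) * 500.0
    losses = routine + disaster

    alpha = 0.99
    var = np.quantile(losses, alpha)        # 99% Value-at-Risk
    cvar = losses[losses >= var].mean()     # expected loss in the worst 1% tail

    print(f"mean loss: {losses.mean():.2f}")   # looks harmless
    print(f"99% VaR  : {var:.2f}")             # still looks harmless
    print(f"99% CVaR : {cvar:.2f}")            # the tail tells another story

The punchline: the mean barely registers the catastrophe, and even the VaR threshold can miss it, while the conditional tail expectation is dominated by it. That's exactly why tracking a single expected-cost number isn't enough.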

The Frankenstein principle
Dr. Petroski, a civil engineering professor at Duke University, is quoted in the article: "When you have a robust system, you tend to relax". And that has sadly proven true. BP continually pushed the risk envelope to boost profits without paying adequate attention to the associated safety trade-off. In part, this was due to a false sense of safety in a historically robust system. There's always a first time, and shockingly, there was no plan 'B' in the event of a catastrophic failure. Bhopal, Chernobyl, the Gulf Coast, and a few more like these have occurred in just the last 30 years. BP engineers were left having to prove that components would fail rather than answer the question "is it safe to operate?" - two completely different propositions. This practice of hastening project completion by placing an unfair burden of proof on scientists and engineers may be widespread.

The Frankenstein design principle extends Murphy's law. It simply states that if, for some crazy reason, you want to build something monstrously complex, then at least design it assuming a priori that at some point it will fall apart and come back to snack on your a posteriori.
