This is a quick post motivated by a useful question posed in this nice blog on OR software, which wondered about the need for so many decimal places of precision in commercial solvers. As Prof. Paul Rubin touched upon in his brief response, there are always going to be a few numerically unstable instances that may justify this level of precision. Both the question and the response were quite instructive; this post mostly adds context.
Speaking from the perspective of a consumer who deploys such solvers, this quest for precision is not just an academic exercise. Ill-conditioned instances are inevitable in certain business situations and are not (yet) rare. A customer (e.g. a store manager using an algorithmic pricing software product) routinely stress tests their newly-purchased/updated decision-support system that has such a solver hidden inside it. Customers will sometimes specify extreme input values to gain a 'human feel' for the responsiveness of the product that their management asked them to use. In fact, even routine instances can occasionally defeat the best of data-driven scaling strategies, and the old nemesis, degeneracy, always lurks in ambush.
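To make "ill-conditioned" concrete, here is a toy sketch (my own illustration, not from the original discussion) of why precision matters: a 2x2 linear system whose two equations are almost parallel. The `solve_2x2` helper is a hypothetical name; it just applies Cramer's rule.

```python
# Toy illustration of an ill-conditioned system: the two equations are
# nearly parallel, so a change in the fourth decimal place of the data
# swings the answer completely. (Hypothetical example for exposition.)

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ x = [b1, b2] by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    x1 = (b1 * a22 - a12 * b2) / det
    x2 = (a11 * b2 - b1 * a21) / det
    return x1, x2

# Original data: the solution is (1, 1).
print(solve_2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001))

# Perturb one right-hand-side entry by 0.0001 (one part in ~20,000):
# the solution jumps to roughly (0, 2).
print(solve_2x2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0002))
```

A solver carrying too few digits internally cannot distinguish these two instances at all, which is one reason vendors err on the side of more precision than a typical instance seems to need.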
Such solvers also have to be robust enough to work 'out of the box' for millions of varied instances (to the extent possible) across industries and across changing business models within any given industry. Being 'optimally precise' may pay off in reduced support and maintenance costs for the vendor who writes the solver code, as well as for the consumer who deploys it, and in fewer headaches for the end user, the customer.
With the information available in journals and on the Internet, it is realistically possible for a DIY practitioner to assemble a reasonably fast and robust solver to handle their own company's LPs, but the task of building high-precision, enterprise-level MIP solvers is best left to the specialists.