Oftentimes, the difference between the theory and practice of O.R. is the same as that between carefully choreographed brain surgery at Boston General and brain repair performed at the MASH 4077. In the former case, the idea is to come back with a generic breakthrough that can safely be re-used for all surgeries. In the latter case, the aim is to get through the Korean war (albeit on TV) with the fewest casualties among the huge number of wounded who randomly show up.
The textbook approach prides itself on being data-agnostic and on coming up with new "small polynomial" techniques that exploit special structure in a problem class. Data agnosticism is something we practitioners can ill afford, since the proof is in the pudding. If we build the best recipe, optimized for no more than 6-7 guests, and we never expect to see more than ten guests, ever, then it is pointless to fuss over a complex recipe-generator that optimally serves a thousand guests but can't serve five as quickly and as well. And if there are thousands of such five-guest parties to be served, the simple recipe wins hands down. Of course, if we were optimally designing a dam or two, things would be a little different, since strict service levels come into play. So it is case-dependent, and bringing your skill and art to bear on such differences - putting your money where your model is - happens to be one of the reasons why O.R. practice is never dull.
Textbooks recommend that we exploit problem structure. Industrial optimization exploits problem structure to an extent, but it can and should also exploit the structure in the data. The Simplex method, which is making such a robust comeback via GUROBI, is worst-case exponential, yet it rarely does a bad job, because good implementations thrive on real-world LP data and the numerical properties of computers. In the real world, even strongly NP-hard optimization problems are quite manageable. Do not let textbooks scare you! After all, there were no computers around when architects in South India optimally designed the Brihadeeswara temple a thousand years ago - the design required that the tall temple's shadow be constrained to within its (convex) perimeter at any time of day. They achieved an elegant and 'feasible' design that stands 216 feet high while using incredibly heavy granite stones. And yes, the location of the symbolic 'idol' of the deity coincides with the centroid of the overall structure. Now this has got to be one great place for spiritual reflection - or for thinking O.R.!
It's a strange dichotomy in the global O.R. community. The universities are focused on 'defense' - a systematic O.R. approach that generates lower bounds to protect your answers, but is also capable of forcing turnovers, i.e., of being quickly converted into good-quality solutions with a little imagination - and they abhor seeing any "2^n" kind of numbers anywhere. The book on the Traveling Salesman Problem by the stalwarts of our discipline is quite fascinating in this context. The O.R. business community goes with whatever brings home the Dosa, exponential or otherwise, and there is healthy scorn for the 'defense' part. Approximation, local-optimum, and randomized methods can be thought of as part of the 'offense'. Offense gets the glory and can be used to quickly initiate a product, and that perhaps is why many a young practitioner is hooked on it. However, it is generally a good idea to consider using both approaches. Over the product life cycle, it is usually the best defense-offense combo that wins the game, all other things being equal.
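To make the defense-offense combo concrete, here is a minimal sketch on a tiny, made-up TSP instance (the city coordinates, function names, and the particular bound are illustrative choices of mine, not from any specific text): a greedy nearest-neighbour tour supplies the 'offense', while a simple two-cheapest-edges lower bound supplies the 'defense' that certifies how far from optimal the heuristic tour can possibly be.

```python
import math

# A tiny symmetric TSP instance: hypothetical city coordinates.
cities = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_cost(order):
    # Cost of the closed tour visiting cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(start=0):
    # 'Offense': greedy heuristic - always move to the closest unvisited
    # city. Fast, gives a feasible tour, but offers no guarantee by itself.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(cities[tour[-1]], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_edge_lower_bound():
    # 'Defense': every city is entered and left exactly once in any tour,
    # so half the sum, over cities, of each city's two cheapest incident
    # edges is a valid lower bound on the optimal tour length.
    total = 0.0
    for i in range(len(cities)):
        edges = sorted(dist(cities[i], cities[j])
                       for j in range(len(cities)) if j != i)
        total += edges[0] + edges[1]
    return total / 2

heuristic = tour_cost(nearest_neighbour())
bound = two_edge_lower_bound()
gap = (heuristic - bound) / bound
print(f"heuristic tour: {heuristic:.3f}, lower bound: {bound:.3f}, gap: {gap:.1%}")
```

The reported gap is the certificate: the heuristic tour is provably within that factor of the (uncomputed) optimum, which is exactly the lower bound 'protecting your answers' described above.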