VW has been hit hard by this: its stock price has been hammered, the CEO has left, and legal actions are just getting started. The head of its US operations has been called before Congress this Thursday, and Germany's version of the EPA, the KBA, has demanded a plan for a fix by tomorrow, October 7th.
I'm not in the auto industry, and don't own a VW, so I really don't have a dog in this fight. A quick glance at the About Me box in the right column shows I'm a radio engineer, so my only familiarity with the whole question of engine control software and how to optimize diesel engines comes from reading on the subject. It's miles away from my expertise, but I wanted to touch on the subject of benchmark tests like this. Benchmarks, standards, and making the numbers are part of engineering and a bigger part of marketing.
Consider the computer most of you are using to read this. Computers have been subject to benchmarking how fast they operate for as long as they've been consumer items. Electronic Design's editor Bill Wong writes:
... I was the first PC Labs Director for PC Magazine ... many decades ago. I helped put together the first benchmarks for PCs, printers, and networks. We distributed them via bulletin-board systems running on banks of modems as well as on 5.25-in floppy disks.

Surprised? You shouldn't be. People shopping for computers would read the benchmarks and tests to see which computer ran faster than the others, or ran the same speed but sold for less. Having good published benchmark numbers could be the difference between a successful and an unsuccessful model, and that's big bucks to a place like Dell or any other computer maker. It's the same concept in ham radio: influential writers will point out the importance of some specification, and soon all the hams shopping for that sort of radio will be diligently reading spec sheets to compare radios on that particular number. In both cases, when consumers are shopping by one particular benchmark, they'll buy the model that's an imperceptible amount better. In the VW case the benchmark was faked: when the engine was run in the test lab, doing programmed exercises on a dynamometer, it responded differently than it did with people driving. Buying the car based on those benchmarks cheated the buyers.
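To make the dynamometer point concrete, here's a minimal sketch of what that kind of "defeat device" logic amounts to. The function names, signals, and thresholds are all hypothetical illustrations, not VW's actual implementation; the reported idea is simply that the engine software can tell a scripted lab cycle apart from real driving and switch calibrations.

```python
def looks_like_dyno_test(speed_kmh: float, steering_angle_deg: float,
                         elapsed_s: float) -> bool:
    """Heuristic (hypothetical): wheels turning at road speed while the
    steering wheel never moves, for a sustained period, is characteristic
    of a programmed dynamometer cycle rather than real driving."""
    return speed_kmh > 0 and abs(steering_angle_deg) < 1.0 and elapsed_s > 60


def select_engine_map(speed_kmh: float, steering_angle_deg: float,
                      elapsed_s: float) -> str:
    """Pick an engine calibration. A defeat device runs the clean,
    low-NOx map only when it believes it is being tested."""
    if looks_like_dyno_test(speed_kmh, steering_angle_deg, elapsed_s):
        return "low_emissions_map"   # full exhaust treatment, less performance
    return "performance_map"         # better power/mileage, higher emissions
```

The cheat is not in either calibration by itself; it's in gating the choice on whether anyone appears to be watching.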
Eventually we had graphics benchmarks, which is where a lot of the cheating occurred. In some instances, video drivers were specifically written to check whether a benchmark was being run and to adjust how the driver performed, often by doing nothing at all. In one sense, this is a valid optimization: skipping a set of operations that does no useful work, such as a draw that doesn't change what's on the screen, can improve the performance of the overall system. Of course, the added check itself takes time and could lower performance, too.
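The driver trick above can be sketched in a few lines. Everything here is a hypothetical illustration, assuming made-up benchmark names and a simplified draw path; the point is the difference between a general optimization (skip work that changes nothing) and one that only kicks in when a benchmark is detected.

```python
# Hypothetical benchmark executables the driver watches for.
KNOWN_BENCHMARKS = {"3dbench.exe", "winbench.exe"}


def should_skip_draw(process_name: str, frame_changes_pixels: bool) -> bool:
    """Decide whether the driver skips a draw call.

    Skipping a draw that changes no pixels is a legitimate optimization
    in general; gating that shortcut on the benchmark's process name is
    the questionable part, because only the benchmark sees the speedup.
    """
    if process_name.lower() in KNOWN_BENCHMARKS:
        return not frame_changes_pixels  # "optimize" only under the benchmark
    return False  # ordinary applications always get their draw calls
```

An honest driver would apply the same no-op elimination to every process, not just the ones whose names appear on published review benchmarks.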
The bottom line is that there are huge incentives for companies to find ways to benchmark better and look better to buyers. Software-controlled engines are ripe for this sort of thing. It's especially likely to happen when the designers believe the test conditions are completely different from real-world use and see the test as just a barrier between them and selling their cars. They tell themselves customers will be happier with cars that perform better in real driving than with cars tuned to pass regulatory tests.
But the regulators are the ones who can ruin your business and arrest you.