Did NHTSA let Tesla off the hook too easily? Absolutely.

Likewise, the problems raised in the comments here about cars interfering with each other, or signals bouncing off of tunnel walls or bridges, and other common examples are not fundamentally different from problems managed successfully in other systems. Yes, we have problems with cellphones and other commercial systems, but there's a fundamental difference between systems designed to be robust in the face of understood problems and systems that aren't. The high-reliability world approaches things differently than the commercial "let's slap this together and ship it" world. The wired phone in your house will fail if everyone in the neighborhood tries to use their phone at once, because the system is designed for only about 10% of the lines being in use. It doesn't have to be that way; it's just cheaper that way and has been proven to work well enough.
As I said in the section on image recognition, I think we have to conclude an image system can't do everything we need it to do, but could it "work well enough"? I don't think so. The problem with image recognition systems is that they take millions of training examples to become reasonably accurate on a small set of things, and here we have a nearly limitless set of things the car has to recognize and know what to do about. Do we need nearly infinite training time?
In a larger sense, the problem with the whole idea of Advanced Driver Assistance Systems (ADAS) is that individually, all the pieces of hardware can be envisioned to do the job, but taken as a system, there isn't a chance that the software can work.
If we think of autonomous cars as "cruise control that includes the steering wheel" and simply keeps the car in its lane, even that's getting dicey. A fully autonomous car that you get into and tell, "take me to work" or "take me to grandpa's house," and that does so while you just sit as passive cargo, is probably more like 50 years away. I did five hours of driving yesterday, and I really would have liked to not have to deal with traffic in the rain and let JARVIS drive, but JARVIS is a comic book character. Note to fans of the "Singularity," when computers suddenly surpass the sum of all human intelligence: let's just call it vaporware. People have been talking about this coming for close to 30 years. It's much like nuclear fusion reactors, which I first read were "20 years away" back in 1971. If it happens before then, all bets are off, but as I've said many times and places (most recent), AI is over-hyped.
Driving is a perfect example of the old saying, "hours and hours of boredom interrupted by moments of sheer terror." What do you do when you see a truck stopped on your street for a delivery? You probably slow or even stop to look for other traffic and an opportunity to go around the truck. Unless the ADAS car is programmed to do so, it's not going to know what to do. What if there's a broken-down car in the middle of your lane? Or any one of a thousand oddities you see on the road in the course of a year? Those are the easy problems. Without true intelligence, the software has to recognize it's a truck, understand the laws, understand what it can and can't do, and choose the right option. What about watching for kids darting out between cars on the same street, or riding their bikes on the shoulder of the road?
As I concluded last time, it's impossible to teach the computer "if a child runs out after that ball, slam on the brakes, and if you can't stop, hit something like a parked car." If it were my kid in the intersection, I'd still prefer a real human mother to any computer driving a car, because a real human mother is very likely to care more about any child than about damaging her own car. A computer isn't going to understand the concept of "child" or "person." That would take a much more sophisticated AI system than we have now. I'm not talking about the actor-voiced "Watson" in the commercials; I'm talking about a system that could pass a really advanced Turing test, where you could talk to the computer for a long time and not know it's artificial. Good luck with questions like "do I hit the adult on the bike or injure my passengers by hitting the parked bus?"
The truth of the matter is that driving has difficult moments, and not all accidents are due to someone drunk, texting, or on the phone. Some accidents come down to not having any good options left.
As many commenters have pointed out, the biggest risk in these systems is the combined mad dash to put them into place on the part of industry and the Fed.gov itself. The public is less sanguine about autonomous cars, with an AAA survey I reported on in September showing
...three-quarters of Americans reported feeling afraid to ride in a self-driving car. One year later, a new AAA survey found that fear is unchanged. While the majority are afraid to ride in a fully self-driving vehicle, the latest survey also found that the majority (59%) of Americans are keen to have autonomous features in their next vehicle. This marked contrast suggests that American drivers are ready to embrace autonomous technology, but they are not yet ready to give up full control.

Commenter Dan said on the second post,
I suspect that sooner rather than later we are going to see one of these autonomous vehicles murder a family of children or a school bus full of kindergarten kids, and the legal fallout will end this experimentation for a very long time.

and I've frankly been waiting to hear that news with dread. I'm sure the ambulance chasers are champing at the bit for it to happen. They like to go after the deepest pockets possible, and some of the deepest pockets around are in play here. I can see them going to court even though the "expert" NHTSA ruled in Tesla's favor in the decapitation accident. Juries are under no requirement to follow the NHTSA, to my understanding. If a jury awards a big punitive judgment in a case like this, or like the one Dan describes, it could well put the development of these systems on ice.