Monday, January 23, 2017

NHTSA Lets Tesla Off the Hook

We've written in this space before about a Tesla "autonomous" car that mistook a truck for the sky and got its driver killed last May.  The National Highway Traffic Safety Administration released the results of its investigation into the accident last Thursday (1/19) (pdf warning), finding no defects in Tesla's Automatic Emergency Braking (AEB) and Autopilot systems.  To quote from the EE Times piece on the report by correspondent Junko Yoshida:
Definitely, Tesla is exonerated.

Did NHTSA let Tesla off the hook too easily? Absolutely.

What lessons or guidance, if any, did NHTSA’s findings offer to the rest of the automotive industry? Very little.
For technical analysis, Yoshida went to Mike Demler of the Linley Group, a technology analysis and consulting company in the electronics industry.  Demler had a lot to say, and none of it was terribly flattering.  He said NHTSA had essentially decided the case already, through other decisions it had made along the way.
“The critical issue here is that the Tesla fatality involved the combination of AEB and Autopilot,” said Demler. As the report shows, NHTSA “already accepts that AEB reduces accidents, and that the Tesla AEB performed according to the current state-of-the-art,” he explained. “But what happens when you combine AEB with Autopilot?”

He noted, “It would be different if Autopilot just added lane-centering to Adaptive Cruise Control (ACC) and AEB. [If that’s the case], Autopilot’s primary function isn’t safety, but it’s convenience.”
But it didn't just involve the routine lane-centering of Autopilot, which is like cruise control for steering: "keep it between the dashed lines."  This accident involved a truck turning left in front of the car and the Tesla's inability to tell a truck from the sky.  I'd argue that once a child learns the words for truck and sky, at around age three, no child would make that mistake, but the car's decision cost the life of the Tesla's owner/driver.  Furthermore, NHTSA's investigation said the truck should have been "observable" for 7 full seconds before the crash, time in which the driver should have been able to apply the brakes and take control away from the car - if he knew he had to.  NHTSA says:
ODI (the Office of Defects Investigation)’s analysis of Tesla’s AEB system finds that 1) the system is designed to avoid or mitigate rear-end collisions; 2) the system’s capabilities are in-line with industry state of the art for AEB performance through MY 2016; and 3) braking for crossing path collisions, such as that present in the Florida fatal crash, are outside the expected performance capabilities of the system.
The question is whether the driver knew not to let the system drive in that circumstance, and whether the car tried to warn him at some point.
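To put that seven-second window in perspective, here's a rough back-of-the-envelope sketch.  The speed, reaction time, and deceleration are my own assumed numbers, not anything from the NHTSA report, but they show the car covers far more road in seven seconds than an alert driver needs to stop:

```python
# Rough, illustrative numbers only -- not taken from the NHTSA report.
MPH_TO_MPS = 0.44704

speed_mph = 74                   # assumed highway speed
speed = speed_mph * MPH_TO_MPS   # about 33 m/s

visible_time = 7.0               # seconds the truck was "observable" per NHTSA
reaction_time = 1.5              # typical driver perception-reaction time
decel = 7.0                      # m/s^2, hard braking on dry pavement

distance_covered = speed * visible_time
stopping_distance = speed * reaction_time + speed**2 / (2 * decel)

print(f"Distance covered in {visible_time:.0f} s: {distance_covered:.0f} m")
print(f"Stopping distance (react + brake):  {stopping_distance:.0f} m")
```

With those assumptions the car travels roughly 230 meters in the seven seconds, while only about 130 meters would be needed to react and stop - if the driver knew the car wasn't going to do it for him.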

There really is a lot to this story and it's worth your time.  Realize that in absolving Tesla of responsibility in this accident, NHTSA is implicitly encouraging more use of systems like this that are vulnerable to a crossing vehicle.  NHTSA is saying Tesla can't be held liable in a lawsuit; the problem was the driver.  In accepting, as Demler put it, "that AEB reduces accidents, and that the Tesla AEB performed according to the current state-of-the-art," the agency is committing to more cars with AEB systems.  Furthermore, since we all know agencies like NHTSA work at a snail's pace, Tesla was able to end-run around them by updating the car's software "over the air" before the investigation was complete, and NHTSA apparently took Tesla's account of that fix uncritically.

The closing words go to Mike Demler, as quoted in EE Times:
“NHTSA was obviously careful to apply a very narrow focus to this investigation,” Demler observed. “I’d say it was about 90% AEB, and only 10% Autopilot.”

NHTSA “could have dug into Autopilot more, at least as a warning to other manufacturers building similar systems,” he noted. In summary what they said was “we accept Tesla’s descriptions of how Autopilot works, the warnings they provide, and the fix they implemented.”

That’s pretty weak, Demler added.

The Florida Highway Patrol accident report on the May fatality.

11 comments:

  1. Firstly, the truck may have been 'observable' for 7 seconds, but for how many of those seconds was the situation 'correctable'? That is, the driver may have assumed the autopilot would avoid the truck. Once that assumption was proven false, there was no time left to correct.
    Secondly, the gov't had to show Tesla as not liable. Indeed, autopilots from all manufacturers will be shown to be not liable. Otherwise, lawsuits would forever shackle the carmakers from producing auto-driving cars. The cars may 'learn', but there is an infinite number of unlearned situations that will be encountered (albeit with progressively lower probability).

  2. Somewhere inside that software are the moral choices of whether you run over three grandmas or one baby, crash into the crowd or the bridge abutment. It would be interesting to know what choices your autopilot will make. I assume the self-loathing mental illness expressed in religions like government and green power will make the programmers choose to sacrifice the car's driver, and that drivers will be okay with this choice.

    Replies
    1. Well, point taken, but I don't think the software is smart enough to frame the question as three grandmas or a baby; it's only one blob vs. another. No software is capable of distinguishing humans from any other radar or sonar return.

      The Tesla buyers, like the Department of Transportation (DOT) and NHTSA, seem to automatically think robotic drivers will be better than humans, so that any number of people killed off while Tesla debugs their software is acceptable. Humans would kill more. For evidence, I quote a comment from the EE Times article:

      Another writer against Tesla. Nicely done.

      No mention that the NHTSA found that there was a ~40% drop in crashes (air bag deployments) before/after autopilot hardware was installed in vehicles. The improvement in crashes in autopilot-only (highway) miles was likely far greater, but wasn't used to keep an apples-for-apples comparison.


      See? 40% fewer air bag deployments in cars with Autopilot, so it's better than humans already! According to this comment.
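      For what it's worth, that ~40% figure is a simple before/after rate comparison: crashes (air bag deployments) per million miles driven before and after Autosteer was installed. Here's a minimal sketch of that arithmetic with made-up mileage and crash counts - not NHTSA's actual data - which also shows why the comparison only means much if the "before" and "after" miles are comparable:

      ```python
      # Illustrative numbers only -- not NHTSA's actual mileage or crash counts.
      def crash_rate(crashes, million_miles):
          """Air bag deployments per million vehicle miles."""
          return crashes / million_miles

      before = crash_rate(crashes=130, million_miles=100)  # fleet before Autosteer
      after  = crash_rate(crashes=80,  million_miles=100)  # fleet after Autosteer

      print(f"Before: {before:.2f} per million miles")
      print(f"After:  {after:.2f} per million miles")
      print(f"Change: {100 * (after - before) / before:+.0f}%")  # about -38%
      # If the "after" miles skew toward easy highway driving, the rates
      # aren't apples-for-apples -- which is exactly the objection.
      ```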

    2. Was there an airbag deployment when the Tesla drove under the semi truck? If not, that accident counts in the 'better safety' column.

      You're the radio guy, but my guess is the higher conductivity of sheet steel would give a sharper amplitude rise in the radar return than a body.
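      For reference, the textbook radar range equation - a general form, not anything specific to Tesla's sensor - puts the same idea in terms of radar cross section rather than conductivity:

      $$P_r = \frac{P_t\, G^2\, \lambda^2\, \sigma}{(4\pi)^3 R^4}$$

      where P_r is received power, P_t transmitted power, G antenna gain, λ wavelength, σ the target's radar cross section, and R the range. A big, flat, metal trailer side presents a vastly larger σ than a person, so at the same range it should return far more power.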

    3. My Canon brand pocket camera is good at putting the focusing box around human faces, and consider its CPU limitations from its size and battery life. An autopilot will already have a good model of the world. Any blob that is the projection of a five-foot-tall human shape, with human-type rounded edges, can reasonably be assumed to be a human. I think it is easy to make a good first pass at categorizing isolated humans, road signs, parking meters, fireplugs, bicyclists, etc. None of these categorizations is perfect, but if your software has an 80% chance of discriminating between a parked car and a pedestrian, wouldn't you act on it?
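      To make that concrete, acting on "an 80% chance" just means gating the braking decision on a classifier's confidence. Here's a minimal sketch of such a rule - the labels, threshold, and range cutoff are hypothetical, not from any real autopilot stack:

      ```python
      # Hypothetical confidence-threshold logic, purely illustrative.
      from dataclasses import dataclass

      @dataclass
      class Detection:
          label: str         # e.g. "pedestrian", "parked_car"
          confidence: float  # classifier confidence, 0.0 to 1.0
          range_m: float     # distance to the object in meters

      def should_brake(det: Detection, threshold: float = 0.8) -> bool:
          """Brake for anything plausibly vulnerable that is close enough to matter."""
          vulnerable = det.label in {"pedestrian", "bicyclist"}
          return vulnerable and det.confidence >= threshold and det.range_m < 60.0

      print(should_brake(Detection("pedestrian", 0.82, 45.0)))  # True
      print(should_brake(Detection("parked_car", 0.95, 45.0)))  # False
      ```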

    4. Anon 1714 - As I understand it, the radar is aimed downwards because they had trouble with the emergency brakes activating for the overhead signs on Interstates and other big highways. But, yeah, the truck should have a much bigger radar signature.

      And it has been a while, but as I understand it, the car went under the truck without applying the brakes. Airbags depend on sensors behind the bumper, which wasn't touched. It just took off the roof and a few inches of the front and rear windows.

      And Anon 1734, I don't think it's quite that easy. You're viewing it from the perspective of a human looking at an unfocused image on a screen. The software is not as advanced as you are. The Canon has a pretty easy job in comparison.

  3. I suspect there is a certain amount of politics involved in the NHTSA actions. Mr. Musk is a heavy political contributor, and he expects a return on his investment. The bureaucracy understands what that means.

    And by the way, that is the reason for my concern as to how well the safety of the SpaceX launch and recovery process has been reviewed. The Air Force also understands that Mr. Musk is in favor with national politicians. From my time participating in launch safety reviews for a number of different payloads and launch vehicles, the same standards do not seem to be applied to SpaceX as are applied to ALMOST everyone else.

    Replies
    1. I've heard some of those stories about SpaceX. DrJim, who's a regular commenter here, has had some, too - I don't want to speak for you, Jim!

      In this case, I think NHTSA is convinced that autonomous cars are a good thing and they're pushing them. They seem to think the cars are better than humans, and if a bunch of folks get killed while Tesla, Google, et al. debug their software, well, it's still better than humans would do. I'm sure at some DOT meeting, someone said, "Do you honestly think a nation full of cousin-humping rednecks will be safer drivers than robots?"

  4. In the near future, your smart car won't drive you to the flash mob protest site, because it's an accident site and needs to be avoided for safety reasons. Smart cars make every car the back seat of a police car.

  5. "the system’s capabilities are in-line with industry state of the art "

    I could say the same thing about the Titanic - it was built to the very highest standards of watertight integrity (in wartime it would have been converted to a fast armed merchant cruiser, so the ship was built with watertight bulkheads to ensure survivability from battle damage), safety standards, etc., including the number of lifeboats.

    So it doesn't set me aquiver with confidence that the software is essentially "the best we can do".

    Phil B

  6. Humm, a last pardon by the Obama administration before leaving office. What an asswipe.
