Vision Systems Intelligence’s Magney made it clear: “The radar did recognize the truck as a threat. Radar generally does not know what the object is but radar does have the ability to distinguish certain profiles as a threat or not, and filter other objects out.”

The Florida Highway Patrol's report includes a diagram of the accident, which occurred a few miles northeast of Williston, Florida, and about 25 miles southwest of Gainesville.
If so, what was radar thinking?
Tesla’s blog post, followed by Elon Musk’s tweet, gives us a few clues as to what Tesla believes the radar saw. Tesla understands that the vision system was blinded (the CMOS image sensor was seeing “the white side of the tractor trailer against a brightly lit sky”). Although the radar shouldn’t have had any problems detecting the trailer, Musk tweeted, “Radar tunes out what looks like an overhead road sign to avoid false braking events.”
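Musk's tweet describes a classification heuristic rather than a hardware limit: targets that look like overhead structures get filtered out before they can trigger braking. Here's a minimal sketch of what such a filter could look like; every name, field, and threshold is an illustrative assumption, not anything from Tesla:

```python
# Hypothetical sketch of the heuristic Musk's tweet describes: a radar
# tracker that discards returns it classifies as overhead structures
# (bridges, sign gantries) to avoid false braking events. All names
# and thresholds here are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float            # distance to target, meters
    elevation_deg: float      # angle of the return above the horizon
    closing_speed_mps: float  # relative speed toward the car

def is_braking_threat(t: RadarTarget,
                      overhead_angle_deg: float = 4.0) -> bool:
    """Return True if this target should feed the braking logic.

    Any target whose elevation angle exceeds the threshold is assumed
    to be an overhead sign or bridge and is tuned out.
    """
    if t.elevation_deg > overhead_angle_deg:
        return False  # classified as overhead structure: ignored
    return t.closing_speed_mps > 0  # only objects we are closing on

# A trailer whose reflective edge sits ~4 m above the road, seen from
# 40 m away, presents an elevation angle of about 5.5 degrees:
trailer = RadarTarget(range_m=40.0, elevation_deg=5.5, closing_speed_mps=29.0)
print(is_braking_threat(trailer))  # False -- filtered out, no braking
```

The failure mode falls out directly: a high-riding trailer spanning the road can present the same angular profile as a sign gantry, and once a target is tagged as overhead it never reaches the braking logic at all.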
This is a worst-case accident. The driver should have been paying attention and taken control back from the car. The car was clearly not slowing down, so it evidently chose not to react to the threat. The car impacted at a very weak spot on most cars: the windshield and the roof over the driver's head. Most of the typical accident-mitigation schemes that deploy airbags and so on focus on front-end or rear-end collisions, even door collisions, and it's arguable that if the car had smashed into the truck's rear end or front end, the normal collision safety features would have helped the driver survive.
In light of that collision and the attention that it received from Federal regulators, a reasonable person might think that Tesla had removed the autopilot option or made it require constant input from the driver (a control movement or something to indicate "I'm here and awake"). Apparently not. EE Times reports on this story from the Tesla owners' forums about a collision of a Tesla Model S running the latest software early this month:
I was on the last day of my 7-day deposit period. I was really excited about the car. So I took my friend to a local Tesla store and we went for a drive. AP was engaged. As we went up a hill, the car was NOT slowing down approaching a red light at 50 mph. The salesperson suggested that my friend not brake, letting the system do the work. It didn't. The car in front of us had come to a complete stop. The salesperson then said, "brake!". Full braking didn't stop the car in time and we rear-ended the car in front of us HARD. All airbags deployed. The car was totaled. I have heard from a number of AP owners that there are limitations to the system (of course) but, wow! The purpose of this post isn't to assign blame, but I mention this for the obvious reason that AP isn't autonomous and it makes sense to have new drivers use this system in very restricted circumstances before activating it in a busy urban area. Last, but not least: I cancelled my order until I know more about what happened.

I'd be inclined to blame this on the car salesman, since he's the one who suggested they not brake in order to demo the autopilot system, but that doesn't change the fact that it's ultimately the autopilot that failed to do its job. Yes, again, the driver should have intervened, but the Tesla autopilot is looking less and less "ready for prime time".
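As for why last-moment braking had no chance from 50 mph, a back-of-the-envelope stopping-distance check tells the story. The deceleration and reaction-time figures below are generic assumptions, not measurements from this incident:

```python
# Back-of-the-envelope stopping distance from 50 mph.
# Assumptions (not from the incident): dry-pavement deceleration of
# about 0.8 g and a 1-second delay before full braking is applied.

G = 9.81                          # gravitational acceleration, m/s^2
speed_mps = 50.0 * 0.44704        # 50 mph is roughly 22.4 m/s

decel = 0.8 * G                   # assumed braking deceleration
reaction_s = 1.0                  # assumed delay before braking starts

reaction_dist = speed_mps * reaction_s        # distance covered while reacting
braking_dist = speed_mps**2 / (2 * decel)     # v^2 / (2a)
total = reaction_dist + braking_dist

print(f"{reaction_dist:.0f} m reacting + {braking_dist:.0f} m braking "
      f"= {total:.0f} m to stop")             # roughly 22 + 32 = 54 m
```

Call it roughly 54 meters between the word "brake!" and a stopped car. The decision to slow down had to come seconds earlier, in exactly the window the autopilot owned.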
The company told us that the AutoPilot operated exactly as designed in this situation by alerting the driver to brake and asking him to take control. Tesla says the driver failed to do so because of a miscommunication inside the car.

The EE Times article shows some of the software screens that Tesla drivers are prompted with, and it seems to this old guy that the approach Tesla is taking is the Silicon Valley software startup approach: present the users with a screen that tells them the specific things they're responsible for knowing, much like the EULA you get with a software package. Hopefully it's not 9,000 pages long like a software EULA. You know how it goes: you're just itching to open that new software package you shelled out for, but first they want you to read the massive EULA that tells you that if you complain, you'll be Bill Gates' towel boy.
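Tesla's description of the design, alert the driver and ask him to take control, amounts to a small supervisory state machine. Here's a minimal sketch under stated assumptions (the class name, timeout, and periodic control loop are all hypothetical; this is not Tesla's code):

```python
# Hypothetical alert-and-takeover supervisor: warn the driver when a
# threat is detected, then escalate if no driver input arrives within
# a fixed window. Names and timeout are assumptions, not Tesla's design.

import time

TAKEOVER_TIMEOUT_S = 5.0   # assumed: seconds the driver has to respond

class AutopilotSupervisor:
    def __init__(self):
        self.alert_started = None   # None means no alert in progress

    def on_threat_detected(self):
        """Start the handoff: tell the driver to brake and take over."""
        if self.alert_started is None:
            self.alert_started = time.monotonic()
            print("ALERT: obstacle ahead -- take control and brake")

    def on_driver_input(self):
        """Driver responded (steering torque, pedal, button): stand down."""
        self.alert_started = None

    def tick(self):
        """Call every control cycle; escalate if the driver stays quiet."""
        if (self.alert_started is not None and
                time.monotonic() - self.alert_started > TAKEOVER_TIMEOUT_S):
            print("No driver response -- applying emergency braking")
```

The design questions are what the escalation step actually does and how long the timeout runs; a "miscommunication inside the car" suggests the weak link is the human half of that handshake.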
There's a clash of cultures going on here. The big automakers are used to long development cycles and a burning need to prove high levels of reliability and performance before they put something on the market. Tesla and Google are more like Silicon Valley startups: get something on the market and keep tweaking it. Reading the comments on that Tesla owner's blog, it's clear that the Model S fanboys have the same mindset. It's a cliché among electronics hardware engineers that if we did the same sorts of things that software companies do, we'd be doing hard time in prison. Release a box that doesn't really meet its promises, then sell an upgrade that makes it do what it was supposed to do in the first place? That's the standard approach to software. I find it hard to imagine that Tesla fanboys would be making excuses for Tesla if the hardware engineers delivered a car that traveled half as far on a charge as advertised, or that didn't run at all under some circumstances. That's essentially what the autopilot software/systems engineers are doing.