Vision Systems Intelligence’s Magney made it clear, “The radar did recognize the truck as a threat. Radar generally does not know what the object is but radar does have the ability to distinguish certain profiles as a threat or not, and filter other objects out.”

Here's a diagram of the accident from the Florida Highway Patrol's report. The accident occurred a few miles northeast of Williston, Florida, and about 25 miles southwest of Gainesville, Florida:
If so, what was radar thinking?
Tesla’s blog post, followed by Elon Musk’s tweet, gives us a few clues as to what Tesla believes the radar saw. Tesla understands that the vision system was blinded (the CMOS image sensor was seeing “the white side of the tractor trailer against a brightly lit sky”). Although the radar shouldn’t have had any problems detecting the trailer, Musk tweeted, “Radar tunes out what looks like an overhead road sign to avoid false braking events.”
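Just to make Musk's one-liner concrete, here's a minimal sketch of what a "tune out overhead structures" heuristic might look like. To be clear, this is my illustration, not Tesla's code: the track fields, the height estimate, and every threshold here are assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float            # distance to the return
    elevation_deg: float      # angle of the return above the sensor's boresight
    closing_speed_mps: float  # how fast we are closing on it

def estimated_height_m(track: RadarTrack, sensor_height_m: float = 0.5) -> float:
    """Very rough height of the return above the road, from range and elevation."""
    return sensor_height_m + track.range_m * math.sin(math.radians(track.elevation_deg))

def is_braking_threat(track: RadarTrack, own_speed_mps: float) -> bool:
    """Treat a return as a threat unless it looks like a stationary overhead structure."""
    looks_overhead = estimated_height_m(track) > 4.5       # above truck height (assumed cutoff)
    looks_stationary = abs(track.closing_speed_mps - own_speed_mps) < 1.0
    if looks_overhead and looks_stationary:
        return False  # "tuned out" -- if the trailer's high, flat side were classified
                      # this way, nothing downstream would ever command the brakes
    return True

# Example: a stationary return 60 m out, well above the hood line, gets ignored.
sign_like = RadarTrack(range_m=60.0, elevation_deg=5.0, closing_speed_mps=33.0)
print(is_braking_threat(sign_like, own_speed_mps=33.0))  # False
```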
The truck, vehicle 1 (V01), turned in front of the Tesla (V02), which did not slow down. Instead, the Tesla went (mostly) under the truck, decapitating the driver, 45-year-old Joshua Brown, and then traveled off the road, eventually coming to rest in a nearby field. Looking at that area on Bing Maps, it could well have run over someone in that field; there are houses there.
This is a worst-case accident. The driver should have been paying attention and taken control back from the car. The car was clearly not slowing down; it did not react to the threat. The impact came at a very weak spot on most cars: the windshield and the roof over the driver's head. Most of the typical accident mitigation schemes that deploy airbags and so on focus on front-end or rear-end collisions, or even door collisions, and it's arguable that if the car had smashed into the truck's rear end or front end, the normal collision safety features would have helped the driver survive.
In light of that collision and the attention it received from Federal regulators, a reasonable person might think that Tesla had removed the autopilot option or made it require constant input from the driver (a control movement or something to indicate "I'm here and awake"). Apparently not. EE Times reports this story from the Tesla owners' forums about a collision early this month involving a Tesla Model S running the latest software.
I was on the last day of my 7-day deposit period. I was really excited about the car. So I took my friend to a local Tesla store and we went for a drive. AP was engaged. As we went up a hill, the car was NOT slowing down approaching a red light at 50 mph. The salesperson suggested that my friend not brake, letting the system do the work. It didn't. The car in front of us had come to a complete stop. The salesperson then said, "Brake!" Full braking didn't stop the car in time and we rear-ended the car in front of us HARD. All airbags deployed. The car was totaled. I have heard from a number of AP owners that there are limitations to the system (of course) but, wow! The purpose of this post isn't to assign blame, but I mention this for the obvious reason that AP isn't autonomous and it makes sense to have new drivers use this system in very restricted circumstances before activating it in a busy urban area.

I'd be inclined to blame this on the car salesman, since he's the one who suggested they not brake in order to demo the autopilot system, but that doesn't change the fact that it's ultimately the autopilot that failed to do its job. Yes, again, the driver should have intervened, but the Tesla autopilot is looking less and less "ready for prime time".
Last, but not least: "I cancelled my order until I know more about what happened."
For their part, Tesla said their autopilot should not have been used in this "city traffic" scenario. They maintain that even under this misuse of their car, the system did what it was supposed to do and that the problem that caused the accident was poor communication among the occupants of the car.
The company told us that the AutoPilot operated exactly as designed in this situation by alerting the driver to brake and asking him to take control. Tesla says the driver failed to do so because of a miscommunication inside the car.

The EE Times article shows some of the software screens that Tesla drivers are prompted with, and it seems to this old guy that the approach Tesla is taking is the Silicon Valley software startup approach: they present the users with a screen that tells them the specific things they're responsible for knowing, much like the EULA you get on a software package. Hopefully it's not 9,000 pages long like a software EULA. You know, you're just itching to open that new software package you just shelled out for, but they want you to read the massive EULA that tells you that if you complain you'll be Bill Gates' towel boy.
There's a clash of cultures going on here. The big automakers are used to long development cycles and a burning need to prove high levels of reliability and performance before they put something on the market. Tesla and Google are more like Silicon Valley startups: let's get something on the market and we'll keep tweaking it. Reading the comments on that Tesla owners' forum, it's clear that the Model S fanboys have the same mindset. It's a cliché among electronics hardware engineers that if we did the same sorts of things that software companies do, we'd be doing hard time in prison. Release a box that doesn't really meet its promises, then sell an upgrade that makes it do what it was supposed to in the first place? That's the standard approach to software. I find it hard to imagine that Tesla fanboys would be making excuses for Tesla if the hardware engineers delivered a car that traveled half as far on a charge as it was advertised to go, or that didn't run at all under some circumstances. That's essentially what the autopilot software/systems engineers are doing.
If this had been a case of getting rear-ended by a non-Tesla car, the driver would have been given a ticket.
So who gets the ticket in this case? Who is "at fault"?
And the lawyers will get paid regardless....
The company told us that the AutoPilot operated exactly as designed in this situation....
And there you go.
I've spent way too much time debugging code to have any trust whatsoever in autopilots, whether they're on an Airbus A340 or a Tesla. In limited (read: tightly controlled) environments, code that's had the absolute snot wrestled out of it may be fairly reliable; trying to digitize the randomness of an analog environment is a prescription for major, major trouble.
What other lighting (or weather, or environmental) conditions will the AP / radar systems not work in? If it isn't smart enough to detect a 53 ft trailer, what else won't it detect? The comment you pulled from the Tesla forum should scare the bejeebers out of anyone who drives a small car, rides a motorcycle, owns a bicycle, or routinely performs bipedal ambulation.
Or owns a tractor trailer......
I'm not interested in "who is the responsible party"; having my next of kin sue the manufacturer isn't high on my to-do list.
It is impossible to run all the real world scenarios, hence autopilot will continue to kill people.
Of course the other side of the argument is that it will kill fewer people, and that may be statistically correct, but, as in the real cases above, it will cause different accidents for which it should be as liable as a drunk driver.
But maybe there is a middle ground. I believe that all automated systems should be limited to highly controlled environments, such as select highways (no cross traffic or other hazards that can lead to a higher rate of randomness). This could not only allow a big drop in accidents and deaths, but also likely improve throughput on congested roads.
There is just too much randomness on crowded city streets; at best, maybe the controller can be used as a backup pilot to manual control.
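The "middle ground" described above is essentially an operational-domain gate in front of the automation. Here's a minimal sketch of that idea; the RoadContext fields and thresholds are invented for illustration and don't reflect any shipping system.

```python
from dataclasses import dataclass

@dataclass
class RoadContext:
    road_class: str          # e.g. "limited_access_highway", "arterial", "city_street"
    has_cross_traffic: bool
    has_pedestrian_access: bool
    speed_limit_mph: int

def autopilot_permitted(ctx: RoadContext) -> bool:
    """Allow engagement only in a tightly controlled environment; refuse everywhere else."""
    return (
        ctx.road_class == "limited_access_highway"
        and not ctx.has_cross_traffic
        and not ctx.has_pedestrian_access
        and ctx.speed_limit_mph >= 45   # rules out parking lots and most city streets
    )

# On anything that fails the gate, the automation would be limited to a backup
# role (warnings, emergency braking) behind the human driver.
interstate = RoadContext("limited_access_highway", False, False, 70)
downtown   = RoadContext("city_street", True, True, 30)
print(autopilot_permitted(interstate), autopilot_permitted(downtown))  # True False
```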
I agree that there is a different mentality between the tech world and old industry. The techies have not been exposed to the possibility of direct human death from their products (some arms of tech have, such as Bently Nevada, but not Microsoft, Google, and Facebook). They do need to slow down their launches and get more data under their belts. It will likely take big lawsuits to prompt such a reaction, and they will certainly get many.
Sort of fusing both Nosmo King's and Unknown's comments:
...having my next of kin sue the manufacturer isn't high on my to-do list. You and me both!
As I understand it, there are still situations where aircraft autopilots get confused and give control back to the pilots. Aircraft software of that importance is developed to standards way tougher than what the Tesla engineers appear to be following, and the aircraft operating conditions are much more constrained than a car's. There are no planes darting out of side streets, no kids standing around on clouds waiting to cross the "highway in the sky", and there are no big aircraft turning left in front of oncoming traffic.
Autopilot might be useful on the interstate, the most predictable driving that we ever encounter, but it will be decades before it will be trustworthy on local, smaller roads. If ever.
For better or worse, the onslaught of lawsuits and the heavy hand of federal regulations constrained the automakers to slower development cycles and more careful design. At a major avionics corporation, our design cycles for new products took longer than the life of most consumer products, especially small ones like a smartphone. That was because of tougher development standards. The commercial software folks competing for your dashboard could profit from doing the same.
Drones are well on their way to revolutionizing war. Robots, too, it seems, are going to fulfill the dreams of so many science fiction writers of the past. Robotic vehicles could be the third leg of this triad. Tesla (and others) may be breaking ground for driving cars on city streets, but coincidentally they are breaking ground for trucks/cars with bombs to be used by terrorists or, for that matter, by those fighting terrorism.
...there are still situations where aircraft autopilots get confused and give control back to the pilots
That's a pretty important acknowledgement right there. I'm not able to discuss detail, but Where I Used To Be involved stuff of sufficient importance that it involved "defaulting to the human" as a matter of course. That human was required to be trained to a level infinitely beyond what's expected of "automobile d̶r̶i̶v̶e̶r̶s̶ operators." FWIW, even given the incredible level of training, those humans made errors, which is why the organizational structure had multiple layers of even more incredibly trained and very highly experienced senior personnel directly and closely overseeing the first layer of incredibly well trained personnel. Some stuff absolutely, positively requires a fully applied zero tolerance for error.
Me sitting on a motorcycle at a traffic light, or in a car with my family at that same light, is one of those things.
I'm loath to suggest "regulation" of such activities because "regulation" suggests "government involvement" and that inevitably leads to "random unidentifiable bureaucrats covering their asses with no respect or recourse for whoever suffers from their incompetence."
Elon Musk has sufficient financial resources to buy off governmental action and most of the victims of his company's technical shortcomings, even if that involves expensive attorneys.
I fully understand people aren't perfect and they screw up, which has the potential to cause my demise; I also fully understand the limitations of "humans developing software", which may produce the same result. The big difference is that the controlling human is directly involved and has the opportunity to initiate corrective action based on millennia of stimuli input, while software has no such advantage.
According to the AMA (American Motorcyclist Association), the new DOT "Road to Zero" (fatalities) effort includes no motorcyclist input at all.
Self-driving cars worry me: do they see a bike? I have noticed, for example, that the speed radars communities put up, the ones with the cautionary flashing signs, do not return very good readings with motorcycles; some bikes seem to be highly radar reflective and others not so much.
The visibility of a bike to radar depends on a design aspect called radar cross section. It's a complex subject and I never worked with it beyond its mention in a class or two. Both the bike's shape and the materials it's made of change the amount of reflection.
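To put rough numbers on that: the standard radar range equation says the received echo scales linearly with the target's RCS and falls off as the fourth power of range. Here's a quick sketch; the gain, frequency, and RCS values below are ballpark assumptions for illustration, not measurements of any particular car or bike.

```python
import math

def received_power_w(p_tx_w, antenna_gain, wavelength_m, rcs_m2, range_m):
    """Classic monostatic radar range equation: Pr = Pt*G^2*lambda^2*sigma / ((4*pi)^3 * R^4)."""
    return (p_tx_w * antenna_gain**2 * wavelength_m**2 * rcs_m2) / ((4 * math.pi)**3 * range_m**4)

WAVELENGTH_77GHZ_M = 3e8 / 77e9   # ~3.9 mm, the common automotive radar band

# Same range, same radar -- only the target's RCS changes (values are rough ballparks).
car_echo  = received_power_w(1.0, 100.0, WAVELENGTH_77GHZ_M, rcs_m2=100.0, range_m=50.0)
bike_echo = received_power_w(1.0, 100.0, WAVELENGTH_77GHZ_M, rcs_m2=1.0,   range_m=50.0)

ratio_db = 10 * math.log10(car_echo / bike_echo)
print(f"car echo is {ratio_db:.0f} dB stronger than the bike's")  # -> 20 dB
```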
Most sensor-operated traffic lights don't 'read' motorcycles either - they don't have enough metal to set off the coils in the pavement.
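For what it's worth, that's essentially how the pavement loops work: a vehicle's metal lowers the loop's inductance, which raises the oscillator frequency, and the controller only declares a vehicle present if the shift clears its sensitivity setting. A toy sketch, with all of the numbers invented for illustration:

```python
import math

def loop_frequency_khz(inductance_uH: float, capacitance_nF: float = 100.0) -> float:
    """Resonant frequency of the loop's LC oscillator: f = 1 / (2*pi*sqrt(L*C))."""
    l_h = inductance_uH * 1e-6
    c_f = capacitance_nF * 1e-9
    return 1.0 / (2 * math.pi * math.sqrt(l_h * c_f)) / 1e3

baseline_khz = loop_frequency_khz(100.0)           # empty loop (assumed 100 uH)
car_khz      = loop_frequency_khz(100.0 * 0.97)    # a car might lower inductance a few percent
bike_khz     = loop_frequency_khz(100.0 * 0.995)   # a small bike, much less

SENSITIVITY_PCT = 1.0   # the controller's detection threshold (assumed)
for name, f_khz in (("car", car_khz), ("bike", bike_khz)):
    shift_pct = 100.0 * (f_khz - baseline_khz) / baseline_khz
    verdict = "detected" if shift_pct > SENSITIVITY_PCT else "missed"
    print(f"{name}: {shift_pct:.2f}% frequency shift -> {verdict}")
```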
They'll probably make cyclists hang corner reflectors all over their bikes!
Vehicle autopilots of any type must be very carefully designed, no-fault, absolutely reliable systems - which, as others have said, CANNOT be rushed and which Silicon Valley computer types have little understanding of or patience for. Years ago I heard the saying 'if phones had as many problems as computers do, there would be riots' and it applies here also; this is a life-critical system and not something to take lightly, but it seems that Tesla, Google, etc. ARE taking it lightly, and then they blame someone else instead of fixing their system - with an attitude like that, they won't improve.
As problems like this continue, this sloppiness with people's lives will turn the public against them and hold back development of technologies like this.
It is interesting that the news talks a lot about new entrants to the field (Google, Tesla, etc.) and very little about established players like CMU (Carnegie Mellon University), which has been operating self-driving cars since the early 1990s.