Saturday, January 26, 2019

It's the Unknown Unknowns That Get You

Former Secretary of Defense Donald Rumsfeld said something profoundly important at a 2002 press conference.  Talking about going into Iraq before the start of Gulf War II, he said there were different levels of things they needed to account for:
There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.
Naturally, the language generated all sorts of derision from members of the press who aren't familiar with formalized problem-solving.  Most of the engineers I know just quietly nodded in understanding, or thought it was pretty unremarkable.  Generally speaking, it's the unknown unknowns that cause your biggest problems, as Rumsfeld said in that speech.

The problem of unknown unknowns is one of the biggest challenges facing autonomous vehicles and advanced driver-assistance systems (ADAS).  I find it a bit funny that there's a growing undercurrent in the industry that goes, "hey!  this is harder than we thought!"  Suddenly, the predictions are becoming less rosy.  There are questions about whether they're pushing ahead into road tests a bit too soon.

Trade magazine Microwaves & RF takes a dive into some of the issues and comes away talking about unknown unknowns.

It seems to me that one of the biggest problems in ADAS has been that an assumption has been taken as fact: the assumption that autonomous cars will be safer than human-driven cars.  This is far from proven.  It's a popular assumption because it makes sense on some level: we all know about accidents caused by inattention, like falling asleep at the wheel, texting, or other distractions.  We also know about accidents caused by following too closely, where a robotic system can have faster reflexes (perhaps aided by vehicle-to-vehicle communication).  Just as I expect my CNC machine not to get distracted and turn a screw too many times, we expect the autonomous car not to text another car or otherwise stop paying attention.

What escapes our notice is that humans deal with an astonishing number of variables while driving, from changes in the environment (rain, sleet, dust, degraded lane lines, etc.) to changes in the road itself.  Everyone recognizes that younger, less experienced drivers make more mistakes than older, more experienced drivers, yet they expect the "younger, less experienced" autonomous cars to make even fewer mistakes.

Maybe the safety record of human drivers really isn't that bad.  Maybe driving is very hard and people are really quite good at it.

The RAND Corporation did a study (pdf warning) to compute the number of miles some autonomous systems would have to drive to demonstrate the same or better safety as a human driver.  The results are surprising.  They conclude these systems would need to drive 275 million miles to statistically prove they're equivalent to the safety a human driver provides.  This plot shows the number of vehicle miles, in millions, versus the requirement and the number of miles driven by the fleets of three testing companies.
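As a sanity check on that number (my own back-of-the-envelope reasoning, not necessarily RAND's exact method), 275 million is about what you get from a zero-failure demonstration test: treat fatalities as rare Poisson events at the US human-driver rate of roughly 1.09 per 100 million miles, and ask how many fatality-free miles it takes to show with 95% confidence that the autonomous rate is no worse.

```python
import math

# Back-of-the-envelope sketch (my assumption, not necessarily RAND's exact method):
# if fatalities are Poisson events with rate r per mile, then driving m miles
# with zero fatalities demonstrates the rate is no worse than r with
# confidence c when exp(-r * m) <= 1 - c, i.e. m >= -ln(1 - c) / r.
human_fatality_rate = 1.09e-8  # ~1.09 fatalities per 100 million miles (US human drivers)
confidence = 0.95

miles_needed = -math.log(1 - confidence) / human_fatality_rate
print(f"{miles_needed / 1e6:.0f} million miles")  # prints "275 million miles"
```

The fact that this simple calculation lands right on the study's headline figure suggests how unforgiving the statistics of rare events are: the rarer the failure you're trying to rule out, the more exposure you need to rule it out.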

If a test car drove 45 mph, 275 million miles would take it 6.11 million hours, or just about 700 years of driving 24/7/365.  Sure, that could be divided up: have 100 autonomous vehicles driving at 45 mph and you reduce that to 7 years.  Congratulations, you've now created the largest autonomous vehicle fleet in the world.
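For what it's worth, the arithmetic behind those numbers is straightforward:

```python
MILES_NEEDED = 275_000_000   # from the RAND study
SPEED_MPH = 45               # assumed constant test speed
HOURS_PER_YEAR = 24 * 365    # driving around the clock, every day

hours = MILES_NEEDED / SPEED_MPH            # ≈ 6.11 million hours
years_one_car = hours / HOURS_PER_YEAR      # ≈ 700 years for a single car
years_fleet_of_100 = years_one_car / 100    # ≈ 7 years for a 100-car fleet
```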

Don't say you're going to use artificial intelligence; that just puts the question off onto verifying the AI.  As the article puts it:
There’s a general agreement that the only way autonomous vehicles can become a reality is through the application of machine learning. The possible scenarios a vehicle could encounter are basically infinite and it’s impossible to hard-code the algorithms to successfully negotiate all of them. Instead, massive data sets are being recorded along with how humans react to the driving scenarios that are then fed into neural networks.

While this allows design engineers to reasonably tackle the problem of algorithm design, it makes the test engineer’s job much harder. Algorithms are now a black box. This requires more extensive testing because you don’t have a fundamental understanding of the code that can be used to generate test scenarios. Rather, you need to test against almost every conceivable scenario to ensure the algorithms function properly.
Right now, the industry is in "Wild Wild West" mode: everything is changing as developers try to get an edge on their competitors.  Worldwide safety and testing standards haven't been finalized.  This all combines to create a situation that demands a closer look at everything.  Test engineers have a rough task in front of them.


  1. Vehicles that are both autonomous AND safe will EVENTUALLY be common. That is a
    'when not if' issue. Sooner or later the hardware, programming, and heuristics will achieve the level of complexity required for a computer to learn as well as a human does and to apply prior experience to NEW and NOVEL situations to arrive at the correct solutions. Can't say when that will happen, but it will....if we don't get Skynet/Terminators first and get wiped out. The question is HOW MANY HUMANS MUST DIE during the process of creating, testing, and perfecting these systems. And at what level do we say ENOUGH and put an end to the carnage to save innocent lives? History shows us that politicians and big business don't actually care about the costs in human lives if a profit is to be had. So we can't expect the law or the government to actually protect us from being killed by driverless cars in kindergarten. To ensure the safety of those we care about will require extrajudicial and extragovernmental actions.
    As long as there is money to be made and control over people to be had, we can expect the new paradigm of computer-controlled transport to be FORCED UPON US.

  2. They've been saying it's harder than they thought for what now, 20 years?

  3. Speaking of "unknown unknowns"....

    ..."Algorithms are now a black box.

    Since we do not know what AI will learn (or even how much it may learn), how it will learn what it learns, what it will do with the knowledge, and what additional learning will take place as it absorbs the first-degree learning - and, seemingly, have no way to query the AI so we can figure out or predict the learning cycle - I'll head out on the proverbial limb and make a prediction:

    This will not end well.

    By that I do not mean "Skynet," although that may be one possibility, but there are infinite potential paths between expecting the computer to drive to the supermarket and "Skynet," any of which may produce an unexpected result, including multiple brand-new "unexpected result trees."

  4. They don't care. They know full well that, with the Right People running the government, they are at no risk of loss. THEY won't be the ones killed by ADAS. It will instead be Mere Citizens. And they want to get rid of those damn Deplorables anyway. They'll be glad to start with 18 wheelers:

    1. I almost went down the rabbit hole of self-driving trucks, but I try to keep the posts from getting too long and rambling.

      Long-haul trucks are likely to be an early application for autonomous vehicles, but only trucks that get on and off the interstate system at controlled locations. Driving on an interstate is considerably easier than a lot of other self-driving tasks; the system mostly just needs to keep it between the lines and not hit anything. They're not likely to have to deal with pedestrians, crossing traffic, or much slower vehicles. It would probably still be best to isolate those trucks, though.

    2. I would think that trains offer the ideal environment for testing. Total control over the vehicles and the environment. The fact that train companies are not testing self-driving engines says to me that the trust level has a long, long way to go.

    3. The thing about trains that might be stifling autonomous systems is unions. Since the rail industry is in bed with unions, it may be that there's no visible payoff for companies working in rail systems.

  5. It certainly will be a long time before we see self-driving cars as envisioned - like so many things we saw in Popular Mechanics back in the '50s, '60s, etc.

    One of the more interesting effects of self-driving cars is the anticipated impact on organ donation - they are expecting a lot fewer potential donors, since the #1 source of organs is auto accident victims who are otherwise healthy young adults. Wow. They are hoping that organs can be lab-grown before this "problem" of not enough accidental deaths is realized.

  6. I've never seen any discussion of the different types of driving skills either, or what they're hoping the AIs will learn. Stuff like the training bodyguards or security get. Or even chauffeurs.

    I think they'll reduce the problem space, declaring AI only lanes and pathways, or maybe even whole roads.

    What I hope could come out of it is tele-operation. No reason home-bound or handicapped people couldn't be the human in the loop, or even the entire brains behind the remote operation. Kind of like taxi drivers or Uber, but they log into your vehicle and 'do the driving for you.' It could even be automated so that if you get in drunk, the car calls for you.

    I don't think we'll see vehicles as capable as even the worst humans anytime soon. And once a car kills its occupants to 'save' someone else, the ardor will cool. No one will believe that killing the occupants was 'the only choice.'


    1. Tele-operation is a neat concept. It would require wide-bandwidth radio links for the real-time control. Time lag could be fatal, so minimizing it would be a design priority.

      The aviation industry has talked about having people on the ground remotely operating the aircraft as a way around the looming pilot shortage. This could be a way around the shortage of long-haul truckers that they say is looming. In both cases, I'm not sure what you gain, because the operator has to be fully qualified and fully involved. It's not like you can get away with using people without the same skill sets, or use one pilot for several aircraft or one driver for several trucks.

      One of the reasons that Uber car killed the woman in Arizona is that the human driver who was supposed to regain control was "zoned out" and not attentive enough to take over.

      "I think they'll reduce the problem space, declaring AI only lanes and pathways, or maybe even whole roads."

      One of the earliest things I saw on autonomous cars was that it was far cheaper to pack more traffic density on a road by putting electronics in the vehicles than it was to build more lanes.

      We'd want the vehicles to have separate lanes or roads but there isn't enough money to build an alternative road system.

  7. Maybe they can run them with scaled-up versions of this:


    1. That's pretty cool! I'm sure there are some trade-offs between more moving air in his eight cylinders and the losses in the gears, but if all he cares about is making a moving display, you've got to admire it. Especially since it's scrap material, like hair-spray cans and aluminum bottles.

      I know that NASA has a few design notes on the use of Stirling engines for places where fuel is hard to supply, so they can run on heat from the sun or on what would otherwise be wasted heat, but I haven't looked into them deeply.

  8. The fact that we haven't got self-driving trains (which is a one-dimensional problem of a point object on a line) down pat yet should have been a huge flashing red light warning to engineers that self-driving cars would be a pipe dream for decades before any hope of being ready for prime time.

    We've already seen how the current crop uses random pedestrians as their beta-testers. (Once, apiece, generally.)

    Cars are generally a two-dimensional problem set. (At least, until we get to multi-level parking structures.)

    Self-driving planes and ships, in three dimensions?

    Sh'yeah, about the time you grow your third set of teeth.

    Unless we're going to apply ship and plane navigational criteria (1 mile horizontal and 1000' vertical separation for airliners, for instance) to autos.

    One car per horizontal mile for autos on the roads and streets ought to bring some lovely implications for highway management, shouldn't it?
    Short of that...yeah, probably not in my lifetime.

    Because testing in Phoenix suburbs is going to become a problem when they turn one loose in Manhattan, let alone a Rome traffic roundabout. And then the beta-testing pedestrian problem starts approaching the kill ratio of Nazi camp guards.

    1. But they don't care. Because their current testing is primarily eliminating Deplorables, which is their goal anyway. And once it moves to the hives, ABCNNBCBS and their dead-tree fellow travelers will ascribe the deaths to Globull Warming, just as they already do for everything else. And the hive dwellers will lap it up!

    2. And most of those dead pedestrians will only be Goyim, anyway. They won't count anymore then than they did back when the tribe was helping Lenin and Stalin.

  9. I like your example: "need to drive 275 million miles to statistically prove". If they wired 10% of vehicles to observe real driving in all conditions, that could be data for machines to learn from and apply accordingly in self-driving cars. So would/could computers know when and where? I don't see that happening. Like someone said, I'm still waiting for my flying car.

  10. Two important issues regarding automated vehicles.

    1. Snow. Slippery, inconsistent road surfaces and poor visibility.

    2. Terror. Autonomous vehicles as bomb delivery systems.

  11. It’s been a real struggle developing autonomous flight systems for UAVs in traffic. The problem is much harder for cars. Aircraft can maneuver in three dimensions plus adjust their speed, and the congestion is light compared to road traffic. Cars are limited to roads and lanes, have to deal with all kinds of traffic, have to recognize and obey traffic lights and signs, and have to deal with emergency vehicles. A car’s maneuver options vary constantly depending upon the adjacent and following cars, weather conditions, and the condition of the car itself (flat tire). Things (like children and dogs) suddenly appear in front of cars.

    The environment is easier for the smart car to handle if all road vehicles are part of an autonomous networked system and under control of a real-time master transportation co-ordinator. But that adds a new and very complicated problem of its own.

    The navigation problem is harder for cars as well. Except for landing and take-off, if a plane has a 100-ft position error, it’s nothing. For a car, 2 feet can be a disaster. A car needs to stay on the road and in its lane whether it’s in a city or on a rarely used rural road in the middle of Kansas. Is the USA going to rework and maintain every road in the nation, implant sensors, put down special paint or markers?

    And what’s the standard and protocol for these sensors, for things such as collision avoidance? Uber, Google, GM, Apple, etc. can’t all have different standards. The aircraft companies have fought each other for decades over these types of issues, because a small word change in a standard can mean billions of dollars to a company. Car companies will be the same.

    All of this drives the end result to a government-run and government-controlled transportation system, with no cars/trucks (except for cops and politicians) under the control of a human driver. Maybe not even personally owned vehicles at all.

    So it’s not just designing a smart car. It’s an entire system; it requires a national infrastructure, and it’s technical and political. But it’s not going to happen soon – just doing the car is hard enough. I don’t expect it in my lifetime.

    1. "Aircraft can maneuver in 3 dimensions plus adjust their speed, and the congestion is light compared to road traffic."
      I like to say that aircraft autopilots don't have to be alert for kids stepping off a cloud and running in front of them. No horse-drawn vehicles, no bicycles, nothing but very similarly equipped airplanes up there. They can separate them by having them take off a minute apart and keep the same velocity.

      The problem is much bigger for cars. Aviation has decades of experience with the companies working together on industry boards to decide how these systems work (collision avoidance systems and such). The auto industry is going to have to do that, too.