Tuesday, April 7, 2020

Meanwhile... Boeing To Redo Failed Starliner Test from December

It probably won't surprise any readers that after the troubles (one, two, three) with last December's unmanned Starliner capsule mission, Boeing has committed to flying another mission to show its crewed capsule can work.  The announcement came Monday evening. 
"We are committed to the safety of the men and women who design, build and ultimately will fly on the Starliner just as we have on every crewed mission to space," Boeing said in a statement. "We have chosen to refly our Orbital Flight Test to demonstrate the quality of the Starliner system. Flying another uncrewed flight will allow us to complete all flight test objectives and evaluate the performance of the second Starliner vehicle at no cost to the taxpayer."
Readers will remember that the flight met only the lowest-level objectives that were set for it.  It never achieved an orbit that would allow the Starliner to dock with the Space Station, and the week-long mission was shortened to two days.  In the aftermath, several potentially disastrous software problems were found.  That's when it hit the fan.
...This led NASA's chief of human spaceflight, Doug Loverro, to designate the December flight a "high-visibility close call" in March.

Asked to explain why he did this, Loverro told reporters at the time, "We could have lost a spacecraft twice during this mission." As a result of this, NASA has begun to investigate Boeing's safety culture. The agency also formally opened a process during which its Safety Office will investigate space-agency elements that may have led to the incident—likely focusing on why NASA did not detect the errors in Starliner's flight software.
Boeing says it will pay for this mission in its entirety, having budgeted $410 million for it.  No date has been given for the flight, not even a preliminary "No Earlier Than" (NET) estimate.  Sources tell Ars Technica to expect it no earlier than next fall. 

That means that SpaceX could potentially fly more than one of its Crew Dragon missions to the ISS before Boeing is able to fly a manned Starliner mission - I expect that would be NET early 2021. 

All of this is subject to the additional restrictions and re-scheduling of everything, everywhere, due to the virus pandemic. 


Starliner test vehicle on the pad, 12/19/19. Trevor Mahlmann photo.





7 comments:

  1. Between the screw ups on Starliner and 737 Max, I think Boeing has a REAL problem with their current software development approach.

    Sweet justice would be that Boeing launch Starliner on top of a Falcon 9 to reduce costs.

    1. It's not just that. I heard within the last few days that the FAA issued an Airworthiness Directive against the 787, requiring operators to apply the "Universal Software Fix" - turn the aircraft off and on every 51 days. To prevent "several potentially catastrophic failure scenarios".

      That one strikes close to home because the company I retired from did a *lot* of work on the 787. It carries several radios I worked on. I don't do software, but was surrounded by people who did. Now I have to wonder where the software came from.

  2. So, are they going to tie firecrackers to a cat's tail? Are they going to use Everclear for fuel? Sorry, but my opinion of Boeing's engineering acumen would indicate that they should be building plastic Revell models instead of real spacecraft.

    "If it were our airplane, it would be crashing." - Quick Change

  3. My uncle and I have some differences of opinion on software engineering philosophy. He's an avionics engineer, and he may know what he's talking about in his field. Still, the "restart every x days" bug seems reminiscent of something I was discussing.

    I was complaining about some random bit of nonsense banging about the software engineering field: Pointers being "deprecated" in compiled languages. I said it was a necessary language feature to be able to refer to and manipulate memory - to do in language what the processor *does*. My uncle disagreed and said (paraphrasing) that programmers are too dumb to manipulate their own memory and had to be protected from making any mistakes - hence the towering pile of difficult object oriented abstraction that takes the place of telling the processor what to do.

    I replied that all this garbage between the programmer and the hardware is just making it more difficult to reason about what the hardware is doing, and if you can't understand what you're doing, you can't avoid mistakes. I gave an example of passing large objects by reference instead of by value. If you can't refer to where it lives, you have to make a copy every time you descend into another call - this will fill up your memory and make things terribly slow.

    My uncle did mention that that was more or less exactly what happened in some bug they spent months trying to track down. I was thinking giant gigabyte meshes, but when you're talking about embedded processors with microscopic (tens to hundreds of kB) memory, even arrays can be too large to manipulate by copying them every time they're passed as an argument.

    I still think that the programmers need full low-level control and just need to understand what they're doing. There are disciplined ways of managing things so that you don't make mistakes. On the other hand, some complicated object that does all sorts of garbage collection and manipulation in the background makes the memory-behavior of the program an unintelligible black box. You *can't* protect the hardware from the programmer.

    MadRocketSci

    1. I'm guessing you spent time programming in assembly language, down to the metal.

      Back in the early '80s, the Pentagon raised a warning flag about software in DOD jobs. They said it was becoming the bane of DOD contracts, that virtually all jobs were delivered late because of SW and it didn't work as it was supposed to when delivered. Back then, I had very little visibility into the software content of projects.

      Some years later, when I went into commercial avionics, the industry had long ago adopted a standard for SW called DO-178. It added TONS of monitoring, reviewing and other overhead to SW development. No line of code, no procedure, nothing was allowed without a requirement; that requirement was reviewed, and the code was inspected and tested.

      It led to the cynical saying, "one day to write the code, one week to document it, one month to write the requirement." In low to the ground commercial companies, they say, "we'll fix it in the software" as if that's the cheapest way to handle things. In the avionics world, it was becoming a thing to "fix it in the hardware".

      It became a regular occurrence during development to be testing radios and find the software did exactly what the documents said it should, but it was the wrong thing.

      It's why I've been saying for years that Software will be what ends the human race. Not AI per se, I started saying this ages before there was a lot of talk about AI, just SW in general.

    2. Fighting for space on the headstone of our civilization: Microsoft Powerpoint. It's a serious contender just on its own.

      Actually, that's an interesting point - software doesn't just allow us to compute the solutions to hard mathematical problems: It also provides a very poor way of interacting with the world and each other, possibly diverting all our communications into a deranged set of channels. I remarked the other day that it takes 15 minutes to boot up CATIA for a CAD experience that's so bad it's like typing a novel with mittens. It took older draftsmen 15 minutes to lay out an airplane concept sketch with calculations and center of mass.

      MadRocketSci

  4. It sounds like someone was trying to use some sort of garbage collector to clean up leaky memory. Garbage collectors never really clean when they're supposed to and never as completely as if you had ensured no memory leaks to begin with. (Why Java is not a systems language!)

    Ideally you wouldn't have any dynamic allocation/deallocation to begin with and you would know your memory layout on the device!

    MadRocketSci
