Hacker News

But what does it say about their QA or lack of it?

Just to be clear, I’m not faulting Airbus. I take issue with the shallow snark at Boeing. The JetBlue incident was serious.

Airbus isn’t immune to controversies, like AF447 or the Habsheem air show crash in 1988.

As software developers, we should perhaps refrain from criticizing aeronautical engineers' QA standards.

"push to prod, let the users debug for us" would at least, I'd hope, offer lower ticket prices for said users.

Early in my career, I worked for a subcontractor to Boeing Commercial Airplanes. I've worked in Silicon Valley ever since. As a swag, the % of budget spent on verification/validation for flight-critical software was 5x versus my later jobs. Early in the job, we watched a video about some plane that navigated into a mountain in New Zealand. That got my attention.

On the other hand, the software development practices were slow to modernize in many cases, e.g. FORTRAN 66 (but eventually with a preprocessor).


Likely Air New Zealand Flight 901, which crashed into Mount Erebus in Antarctica (not in New Zealand proper) in 1979. https://en.wikipedia.org/wiki/Mount_Erebus_disaster

Yes, thanks for the identification.

As an aerospace software engineer, I would guess that, if this actually was triggered by some abnormal solar activity, it was probably an edge case that nobody thought of or thought was relevant for the subsystem in question.

Testing is (should be!) extremely robust, but only tested to the required parameters. If this incident put the subsystem in some atmospheric conditions nobody expected and nobody tested for, that does not suggest that the entire QA chain was garbage. It was a missed case -- and one that I expect would be covered going forward.

Aviation systems are not tested to work indefinitely or infinitely -- that's impossible to test, impossible to prove or disprove. You wouldn't claim that some subsystem works (say, for a quick easy example) in all temperature ranges; you would define a reasonable operational range, and then test to that range. What may have happened here is that something occurred outside of an expected range.
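To make the "test to a defined range, not to all inputs" point concrete, here is a minimal sketch. Everything in it is illustrative: the temperature envelope, the `sensor_reading_valid` function, and the boundary values are assumptions, not anything from a real avionics spec.

```python
# Hypothetical sketch: a subsystem is verified only within its declared
# operational envelope, not "for all inputs". All names and numbers here
# are made up for illustration.

OPERATIONAL_RANGE_C = (-55.0, 70.0)  # assumed qualification temperature range

def sensor_reading_valid(temp_c: float) -> bool:
    """Toy model of a check whose behavior is only specified in-range."""
    lo, hi = OPERATIONAL_RANGE_C
    if not (lo <= temp_c <= hi):
        # Outside the qualified envelope, behavior is unspecified:
        # the verification effort made no claims here.
        raise ValueError("outside qualified envelope: behavior unspecified")
    return True

# The test sweeps the qualified range, including both boundary points...
for t in (-55.0, -40.0, 0.0, 25.0, 70.0):
    assert sensor_reading_valid(t)
# ...but deliberately says nothing about conditions beyond it.
```

The point of the sketch is that a clean pass on this test is entirely consistent with a failure at, say, an environmental input nobody declared in the envelope.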


> Habsheem air show crash in 1988

Surprisingly, Google couldn't handle the typo: Habsheim. https://en.wikipedia.org/wiki/Air_France_Flight_296Q

3 dead, 133 survivors, started (undisputedly) with pilots intentionally and with approval trying to fly a plane full of passengers 30 meters off the ground, possibly with safety systems intentionally disabled for demonstration purposes.

Past that, there are disputes about what kept them from applying enough power and elevator to get away from the trees they were flying towards, including allegations that the black box data was swapped/faked.


In most capitalist organizations, QA begs for more time. "Getting to market" and "this year's annual report" are what help cause situations like this, not the working class, who want to do a good job.

A problem on 1 flight in a gazillion, and you complain about QA?

Yes? How do you know it’s not? They roll back to a previous version. How do they know that version isn’t prone to the same issue?

Not involved with this particular matter. What I would want to see is logs of the behavior of the failing subsystem and details of the failing environment. This may be able to be reproduced in an environmental testing lab, a systems rig lab, or possibly even in a completely virtual avionics test environment. If stimulating the subsystem with the same environmental input results in the error as experienced on the plane, then a fix can be worked from there. And likewise, a rollback to a previous version could be tested against the same environment.
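The replay workflow described above can be sketched in a few lines. This is a toy, not any real avionics test harness: `EnvSample`, `reproduces_fault`, the radiation field, and both "builds" are invented for illustration, under the assumption that the fault is deterministic given the recorded environmental inputs.

```python
# Hypothetical sketch of log-replay testing: feed recorded environmental
# samples into a candidate software build and check whether the recorded
# fault reappears. All names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class EnvSample:
    t: float          # timestamp of the recorded sample
    radiation: float  # e.g. a measured particle flux during the event

def reproduces_fault(build: Callable[[EnvSample], bool],
                     log: Iterable[EnvSample]) -> bool:
    """Return True if the candidate build faults on any recorded sample."""
    return any(build(sample) for sample in log)

# Toy "builds": the faulty one trips above an environmental threshold the
# designers never anticipated; the rollback candidate is checked against
# the same recorded environment rather than assumed safe.
faulty_build = lambda s: s.radiation > 100.0
rollback_build = lambda s: s.radiation > 500.0

event_log = [EnvSample(0.0, 20.0), EnvSample(1.0, 250.0)]
assert reproduces_fault(faulty_build, event_log)        # fault reproduced
assert not reproduces_fault(rollback_build, event_log)  # rollback holds here
```

Note what the last line does and does not show: the rollback passes against this one recorded environment, which is exactly the earlier commenter's worry -- it says nothing about whether the previous version is prone to the same issue under conditions not in the log.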


