Early in my career, I worked for a subcontractor to Boeing Commercial Airplanes. I've worked in Silicon Valley ever since. As a SWAG, the percentage of budget spent on verification/validation for flight-critical software was about 5x what I've seen at my later jobs.
Early in the job, we watched a video about a plane that navigated into a mountain (the Air New Zealand Mount Erebus accident). That got my attention.
On the other hand, the software development practices were slow to modernize in many cases, e.g. FORTRAN 66 (though eventually with a preprocessor).
As an aerospace software engineer, I would guess that, if this actually was triggered by some abnormal solar activity, it was probably an edge case that nobody thought of or thought was relevant for the subsystem in question.
Testing is (should be!) extremely robust, but it only covers the required parameters. If this incident put the subsystem into atmospheric conditions nobody expected and nobody tested for, that does not suggest the entire QA chain was garbage. It was a missed case -- and one that I expect will be covered going forward.
Aviation systems are not tested to work indefinitely or under all possible conditions -- that's impossible to test, impossible to prove or disprove. You wouldn't claim that a subsystem works (say, for a quick, easy example) across all temperature ranges; you would define a reasonable operational range, and then test to that range. What may have happened here is that something occurred outside of an expected range.
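For illustration only, here's a minimal Python sketch of what "define an operational range, then test to it" looks like; the subsystem, the tolerance, and the temperature envelope are all hypothetical, not taken from any real avionics program.

    # Hypothetical example: a subsystem is qualified against a defined
    # operational envelope, not against "all possible conditions".
    OPERATING_TEMP_C = range(-55, 71)  # illustrative -55..+70 C envelope

    class SensorModel:
        """Toy stand-in for a subsystem under test (not real avionics code)."""
        def read_pressure_altitude(self, temp_c: float) -> float:
            # Pretend the output drifts slightly with temperature.
            return 10000.0 + 0.01 * temp_c

    def test_altitude_within_tolerance_over_operational_range():
        sensor = SensorModel()
        for temp_c in OPERATING_TEMP_C:
            alt = sensor.read_pressure_altitude(temp_c)
            # Requirement-style check: stay within tolerance across the defined
            # range. Conditions outside the range simply aren't in the campaign.
            assert abs(alt - 10000.0) <= 5.0, f"out of tolerance at {temp_c} C"

    if __name__ == "__main__":
        test_altitude_within_tolerance_over_operational_range()
        print("within tolerance across the defined envelope")

Anything outside that envelope is, by design, untested -- which is the point being made above.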
Three dead, 133 survivors, and it started (undisputedly) with pilots intentionally, and with approval, trying to fly a plane full of passengers 30 meters off the ground, possibly with safety systems intentionally disabled for demonstration purposes.
Past that, there are disputes about what kept them from applying enough power and elevator to get away from the trees they were flying towards, including allegations that the black box data was swapped or faked.
In most capitalist organizations, QA begs for more time. "Getting to market" and "this year's annual report" are what help cause situations like this, not the working class, who want to do a good job.
Not involved with this particular matter. What I would want to see is logs of the behavior of the failing subsystem and details of the failing environment. This may be reproducible in an environmental testing lab, a systems rig lab, or possibly even in a completely virtual avionics test environment. If stimulating the subsystem with the same environmental input produces the same error as experienced on the plane, then a fix can be worked from there. And likewise, a rollback to a previous version could be tested against the same environment.
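As a rough sketch of that replay idea (Python, with entirely made-up names -- load_flight_log, FaultDetected, and the builds are hypothetical, not from any real avionics toolchain): feed the recorded environmental inputs into the subsystem under test, check whether the fault reproduces, then run the same recording against a candidate fix or a rolled-back version.

    # Hypothetical sketch: replay recorded environmental inputs through a
    # simulated subsystem to see whether the in-flight fault reproduces.
    from dataclasses import dataclass
    from typing import Callable, Iterable

    @dataclass
    class EnvSample:
        t_s: float              # seconds since start of recording
        radiation_flux: float   # whatever environmental inputs were logged
        temperature_c: float

    class FaultDetected(Exception):
        """Raised by the simulated subsystem when it enters the failure state."""

    def replay(samples: Iterable[EnvSample],
               subsystem_step: Callable[[EnvSample], None]) -> bool:
        """Feed recorded samples into the subsystem; True if the fault reproduces."""
        try:
            for sample in samples:
                subsystem_step(sample)
        except FaultDetected:
            return True
        return False

    # Usage sketch (loader and builds are hypothetical):
    # samples = load_flight_log("incident_flight.csv")
    # reproduced = replay(samples, current_build.step)            # current software
    # rollback_clean = not replay(samples, previous_build.step)   # prior version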
Just to be clear, I’m not faulting Airbus. I take issue with the shallow snark at Boeing. The JetBlue incident was serious.
Airbus isn’t immune to controversies, like AF447 or the Habsheim air show crash in 1988.