All joking aside, as a former NASA Space Shuttle engineer I'm very impressed by this private-enterprise venture into heavy-lift launch services. People have often speculated about how much cheaper launch services might be if they were in the private sector -- now we can find out. The preliminary signs are very good.
I read your IAmA where you had a reply to a message, now deleted, in which you must have been asked about aerial circumnavigation. You mentioned that you'd use a solar balloon, and that you merely would have to have a strategy for darkness.
I've spent a fair amount of time thinking about this, and it may interest you to know that:
Water vapor has a lower molar mass than air, so at the same temperature and pressure it is less dense.
The phase transition of water vapor releases more than enough energy to heat enough outside air to make up the volume lost as the vapor turns to liquid.
So, a possible strategy for darkness is a two-bladder balloon: one bladder with water vapor + air, one with just air. Then, when the water vapor condenses, transfer the heat to air sourced from outside the balloon via a counterflow heat exchanger, and put that warmed air in the air bladder. You won't stay at precisely the same height, but you will retain volume and you can stay buoyant.
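A quick back-of-envelope check of the condensation-energy claim, using round-number assumptions (latent heat of condensation ~2257 kJ/kg, steam specific volume ~1.67 m^3/kg near 100 C, ambient air ~1.2 kg/m^3 with c_p ~1.005 kJ/(kg*K)):

```python
# All figures are round-number assumptions for a sanity check, not a design.
L_VAP = 2257.0    # kJ released per kg of water vapor condensing
V_STEAM = 1.67    # m^3 that 1 kg of vapor occupied before condensing
RHO_AIR = 1.2     # kg/m^3, ambient air
CP_AIR = 1.005    # kJ/(kg*K), specific heat of air

air_mass = V_STEAM * RHO_AIR           # air needed to refill the lost volume
delta_t = L_VAP / (air_mass * CP_AIR)  # warming the latent heat could provide

print(air_mass)   # ~2 kg of replacement air
print(delta_t)    # ~1100 K of potential warming, far more than buoyancy needs
```

So the latent-heat budget is enormous; the practical limit would be the heat exchanger, not the energy supply.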
Many things work well when your thermal energy storage is positively buoyant.
Regards,
Come visit the next time you're free around Berkeley or Oakland.
This is an interesting altitude-control scheme. When I think about this problem, I consider a method that involves a flexible helium container (in the simplest form, a helium balloon) and a control system that involves pumping helium out of the balloon into a high-pressure tank to descend, and releasing it from the tank to the balloon to rise. It's a variation on the submarine buoyancy method, which also requires power to run pumps.
The advantage is that the method doesn't just throw away the helium, it recycles it, but the recycling requires a lot of power. And it's way more complicated than your method.
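Rough numbers for the pump-to-tank scheme, with assumed sea-level densities and made-up envelope/payload sizes:

```python
# Assumed round numbers: air ~1.225 kg/m^3, helium ~0.179 kg/m^3 at sea level.
AIR, HE = 1.225, 0.179

def net_lift_kg(envelope_m3, payload_kg):
    # Displaced-air buoyancy minus helium and payload weight, in kg-equivalent.
    return envelope_m3 * (AIR - HE) - payload_kg

spare = net_lift_kg(100, 90)   # a hypothetical 100 m^3 envelope, 90 kg payload
print(spare)                   # ~14.6 kg of spare lift

# Pumping helium into the tank keeps its weight on board but shrinks the
# displaced volume, so each m^3 compressed away costs a full AIR kg of lift.
print(spare / AIR)             # ~12 m^3 to compress for neutral buoyancy
```

The interesting wrinkle is that compressed helium still weighs the same, so the lift lost per cubic meter is the full air density, not the air-helium difference.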
Some key parts: "Lutus designed electronics for the NASA Space Shuttle and created a mathematical model of the solar system that was used by the Jet Propulsion Laboratory during the Viking Mars mission" and then "... he started writing computer programs on his first personal computer, an Apple II. In the 80s, he would eventually program Apple Writer,"
I also recommend his book about sailing around the world "Confessions of a Long-Distance Sailor"!
My all-time favorite sailing book quote, from a book titled "Never Again", about a sail in the terrifying "South 40s" near Antarctica: "After the mast blew off, the boat became much more stable." :)
Why 'f * ing'? (Sorry, I'm Italian... of course I know the meaning of the word, but I would like to know why you use it for Paul Lutus. TIA)
I'm not the OP, but in some circles the four-letter expletive is used to emphasize the importance or intensity of something. E.g. "William [freaking] Shatner" means the original, unique, William Shatner, not some other William Shatner. "So [freaking] good!" means extremely good.
Hey I think I met this guy. I was in Houston for a NASA internship, he had a telescope set up in front of a Barnes & Noble, and he was introducing random passers by to a view of Jupiter (or Saturn) in the telescope.
That reminds me of a euphemism that I found delightful that I heard once while working for a space company... "achieving submerged geostationary orbit," I think it was.
I always thought "controlled flight into terrain" was a good euphemism for "pilot error leading to crash". If you don't think about it, "controlled flight into terrain" almost sounds good: It's controlled! It's flight! Terrain is involved!
It really means "The plane crashed because of pilot error, not because the pilot lost control due to a mechanical fault or extreme weather."
> I always thought "controlled flight into terrain" was a good euphemism for "pilot error leading to crash".
Usually, but not always. The "controlled flight" part means the airplane wasn't either broken or outside its normal control envelope. The implication is that the crash resulted from something other than an inability to control the airplane. The usual assumption at that point is pilot error, but there are other possibilities -- malfunctioning navaids, bad charts, bad instructions from the ground such as incorrect headings or altimeter settings.
Once an airliner pilot asked for clearance into La Guardia in NYC along a path that led across the downtown area after dark. He was given a flight level in meters but understood his assigned altitude to be in feet. He was flying between the buildings when ATC and the pilot sorted out their unit-of-measurement difficulties. The only reason ATC knew something was wrong was that the aircraft had an altitude-reporting transponder -- otherwise we might have had a 9/11-level catastrophe much earlier.
On that topic, in 1945 a B-25 flew into the Empire State Building in fog, something most people have long since forgotten:
To summarize: the aircraft model with which the crew were most familiar had a pitot-heat switch activated by an upward movement (the ergonomic standard direction), but this aircraft required a downward movement (or the reverse, I forget which, but they were reversed). This meant they went through the checklist and set all the controls, but this one control was set wrong through no fault of the pilots (more the fault of the manufacturer). Result: no pitot heat in winter conditions.
Next, during the after-dark flight, the pitot tube froze solid in icing conditions, after which the airspeed indicator behaved like an altimeter, at a time when the aircraft was in a climb. This made the pilots think they were overspeeding the aircraft, and they responded by pulling back on the controls. The stick shaker worked as it should have (warning of an approaching stall), but the pilots interpreted this as ... get this ... mach buffet. The pitot system kept delivering seemingly higher and higher airspeeds as they climbed and (in reality) approached stall speed.
The aircraft finally stalled fully, deeply, and unrecoverably. For those who don't know this: you must not ever stall an airliner, because they are balanced front to back to save fuel, and the side effect is that once stalled, the aircraft cannot be recovered and will flat-spin right into the ground.
Small private planes will typically nose down in a stall, often recovering right away for an inexperienced pilot, but airliners have different priorities, one of which is economical operation. An economical airplane cannot afford to have a constant air pressure on the top of the elevator control surfaces, so this is designed out. But in trade, you must not ever stall the aircraft or you will lose it.
In the final analysis, multiple factors were involved (as usual), but fixing that one thing -- the pitot-heat switch whose activation direction was opposite the ergonomic standard -- could all by itself have prevented the crash.
It's my favorite story about the value of adopting consistent ergonomic standards -- to increase a quantity or activate something, controls should move up, or to the right, or clockwise. To deactivate or decrease a quantity, the reverse. How hard is that?
For those reading this who aren't pilots: pilots aren't ghouls, we like reading accident reports because every report teaches us something that we might use to save ourselves and/or our passengers if we encounter the same conditions.
Yes, I need to retract my prior comment about the engine "exploding" [1]; the failure mode was much more benign, and the surrounding engines were never under threat. However, the nozzle was very clearly ejected following the shut-down, so for all intents and purposes, it became an ex-engine. And the Falcon didn't flinch. Impressive.
Watching this private space stuff manifest is absolutely thrilling. I was tangentially involved in things way back in the early 90's, when the private space geeks were regarded as crackpots by the mainstream aerospace community, if indeed they were acknowledged at all. They certainly didn't have any money (aside from odd forays such as Andrew Beal's[2] and Gary Hudson's[3] abortive adventures), and often did come across as crackpots, but for the life of me it seemed like they had a valid point. To see those sort of folks finally succeed is just infinitely thrilling.
Just a heads up - I tried to send a message to you on your website (arachnoid.com) and got the following error:
Error: embedded tags.
Warning: eregi() [function.eregi]: REG_EMPTY in /home/arachn5/public_html/messages/processMessage.php on line 84
Please press the "back" button to correct your entry.
I don't have any HTML in my message, and I'd really like to send it :-) Do you have an email address I could use please?
> Do you have an email address I could use please?
Yes, but for obvious reasons I don't want to post it in this forum.
Please post a plain-text message to my message board and I will reply. Then you can embed links and tags in your reply if you want. That way neither of us needs to post our e-mail addresses in a public forum.
I was curious about how reliable we should predict a Falcon 9 launch to be based on the stated design parameters (survives the loss of any 2 engines) and current launch data.
For comparison, IIRC the failure rate for unmanned missions is 10-15%, and for manned missions it is about 2%.
An important detail is what happens when 3 or more engines fail simultaneously. Can they turn the mission into a soft landing, or is the "payload" totally doomed? (For example, the Apollo XIII mission was a failure, but no one died.)
For a crewed mission, yes: the mission would be aborted, the Dragon's launch-abort thrusters would fire to get the capsule away from the rocket as fast as possible, and the crew would descend by parachute to an ocean landing.
One factor that I don't think has been answered: if this engine shutdown had been the 3rd, and so would have caused a mission loss, could the engine have been kept running anyway?
It isn't clear whether it was on the way to complete engine failure or it was shut down to play it safe.
Two thoughts here.
1) You're working from an awfully small sample, especially for a component that is supposed to be of such high reliability. Then again, look at the probabilistic assessments of Shuttle success vs. the actual record.
2) The Merlin 1C engine is only going to make a few more flights afaik. Starting some time in 2013 they'll be switching to Merlin 1D.
Indeed, predictions are extremely hard. Especially about the future.
At least I am explicit in the assumptions behind that model. They are:
1. A priori, all possible reliability numbers for a single engine are assumed equally likely. (This can be debated endlessly, but you need SOME prior for Bayesian analysis. If we had more data, then the prior would matter a lot less, but we don't so it does.)
2. Failure of engines is independent.
3. The rocket actually operates according to design parameters. That is it will survive the loss of any 2 engines, but not the loss of 3.
4. Past performance is a predictor of future performance.
Unfortunately #2 is extremely unlikely to be true. Engine failures can be correlated in very unexpected ways. The SpaceX guys have worked very hard to make the engines as independent as possible, but with so many failure modes it is impossible to say that an engine failure (i.e. explosion) will have no impact on its neighbors.
Actually, in the NAS Oceana F/A-18 crash, two different engines failed for different reasons. The first one was a right engine compressor failure, the second was an apparent afterburner blowout. A twin engine jet crashed as a result.
Do you have a primary source? I'm genuinely interested in the report if it is available. Google just gives news stories saying "OMG a plane crashed into an apartment building!" which isn't helpful to the engineer in me.
Here is the mishap report http://goo.gl/GuHG5
I mentioned this because double engine failures due to separate causes do actually occur, although rarely.
Short version: The right engine compressor failed due to apparent fuel ingestion, causing a major over-temp. The noise was mistakenly attributed to a blown tire, so the pilot left the gear down. This required MAX afterburner on the remaining engine to recover, except that engine had an afterburner blowout and didn't provide MAX power, and the jet departed controlled flight due to low speed.
Thanks for the report. A really interesting read for the engineer in me. Under "primary cause analysis" on page 18 of the report (pg 24 of the pdf) it says that they may have actually been related:
In summary: after the right engine failed due to fuel ingestion, the left engine had to push some air over to the non-functioning engine (for cooling, I assume, but it isn't stated). When the left engine's afterburner did not light, its "relight logic" did not trigger, possibly because of the reduced airflow. The engineer who wrote the relight logic assumed that the temperature would drop at a certain rate when the afterburner failed to light. Because the engine was working to assist the failed engine, that temperature drop did not happen, and thus the afterburner did not attempt to automatically relight itself.
> 4. Past performance is a predictor of future performance.
This is your real error. SpaceX is effectively acting in perpetual “test” mode. New lessons are learned with each flight and each rocket test, and that greatly informs all follow-on operations.
The type of analysis you've done is reasonable for a fleet of deployed 747s. It is (forgive me) entirely useless for this sort of endeavour at its current stage.
I wonder what your analysis of SpaceX's future performance would have been in Oct 2008, right after their first successful Falcon 1 launch and three failed priors?
Horizontal axis is how many engines failed, and vertical axis is how many times that number of engines failed in the simulation.
The code that generated this is:
from scipy.stats import beta, binom
from pylab import hist, show
# Draw 100,000 candidate single-engine failure rates p from the uniform
# Beta(1,1) prior, keep only the "worlds" where exactly 1 of 9 engines
# failed (matching the SpaceX launch), then simulate a fresh launch in each.
failures = [binom(9, p).rvs() for p in beta(1, 1).rvs(100000) if binom(9, p).rvs() == 1]
hist(failures, bins=9, range=(0, 9))
show()
The way this code works is it first picks a `p` from the prior. This `p` represents the failure probability for a single engine. Then we simulate the number of engines that fail when you have 9 of them, and filter out just the worlds where that number is equal to 1 as in the SpaceX launch. Or equivalently, we annihilate all the worlds that have a different observation than ours -- this is a central tenet in Bayesian statistics in contrast to frequentist statistics: we only base our inferences on the things that happened and not on things that hypothetically could have happened but didn't. In a slogan you could say "our fantasies are irrelevant". Then in the worlds that remain, where the same observation was made as in our world, we simulate the number of engines that fail on a new rocket launch and collect the results in a histogram.
So the posterior probability of the entire rocket failing, for a uniform prior, is around 25%. A uniform prior means that you believe all single-engine failure rates are equally likely: it's equally likely that engines fail with 10% probability as with 63% probability. Tweak the first two parameters of beta to change your prior belief. If you've seen n engines fail in your life and k engines succeed, then setting the first parameter to n+1 and the second to k+1 is a reasonable choice (so the current setting corresponds to not having seen any rocket launches prior to this one). For example, if you've seen 2 engines fail and 99 succeed, you use `beta(3, 100).rvs(100000)`: https://dl.dropbox.com/u/388822/rocketfailuredistr_for_optim... Hardly any probability mass left for entire-rocket failure :)
In Bayesian analysis you have some set of prior theories with associated probabilities, you observe data, you alter your estimate of the likelihood of those theories, and then your next prediction becomes a weighted average of those predictions.
Your analysis is a reasonable first cut, but the above question is a good one, and your reply is rather incorrect.
First, you haven't really done a Bayesian analysis, for several reasons.
The easiest problem to fix is, you didn't specify any priors. We could validate your style of calculation by assuming single-engine failures are IID with parameter "p," uniform on the unit interval. (A beta distribution would be the standard conjugate prior.)
If we go ahead and make the uniform-p assumption, then what you've calculated is a most-likely posterior value of "p" (35/36). (This is the maximum-likelihood estimate for p.) But in a Bayesian world, "p" has a full posterior distribution, not just a most-likely value.
So, still in the Bayesian world, the probability of failure (P(N_fail > 2)) must be calculated on the basis of not the most likely p, but the posterior of "p". You didn't do this; you just used the most-likely p.
Which brings us to the second problem with your reply. You really can get a confidence value on P(N_fail > 2). You can get a full posterior distribution! It will be a 1-dimensional density on [0,1]. And you could calculate this, either by simulation or by an analytic procedure (because it's a low-dimensional problem).
This posterior on P(N_fail > 2) would be the answer to the parent's question. It would probably be rather "fat", validating the intuition that we don't have much data.
My reply might be a little smarty-pants. Sorry if it is! As I said, your analysis is a reasonable first cut.
The easiest problem to fix is, you didn't specify any priors.
Read the whole thing more carefully. I started with the calculation for the maximum likelihood estimate, but I ended with a prior with equal a priori likelihoods for failure rates 0%, 0.01%, 0.02%, ..., 99.99%, 100%. This is a reasonable discretization of uniform on the unit interval.
My reply might be a little smarty-pants. Sorry if it is!
I would suggest that before indulging a tendency to be a smarty-pants, it is good to read the whole thing.
You're right, the second half of your post in effect puts a two-atom prior on "p", with zero everywhere else, and then goes on to use a sequence of such atoms, which would approximate a uniform prior. It's more standard to use smooth priors, because we don't have precise information, but you are right, I was not reading carefully.
You are still in error that there is not a way to describe the uncertainty in your estimate of the posterior probability of system failure. It has a posterior distribution, like everything else in a Bayesian analysis. You would compute it as I described -- Monte Carlo would be easiest.
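A minimal Monte Carlo sketch of that computation, assuming (as elsewhere in the thread) 1 failure observed in 36 engine-flights and a uniform Beta(1,1) prior, which gives a Beta(2,36) posterior on the single-engine failure rate p:

```python
import numpy as np
from scipy.stats import beta, binom

rng = np.random.default_rng(0)

# Posterior on the single-engine failure rate p: uniform Beta(1,1) prior
# updated on 1 failure in 36 engine-flights gives Beta(2, 36).
p_samples = beta(2, 36).rvs(size=100_000, random_state=rng)

# For each sampled p, the chance of losing the rocket: 3 or more of the
# 9 engines failing, i.e. P(N_fail > 2) = 1 - P(N_fail <= 2).
p_loss = 1 - binom.cdf(2, 9, p_samples)

print(np.mean(p_loss))                     # posterior mean loss probability
print(np.percentile(p_loss, [2.5, 97.5]))  # a wide ("fat") 95% interval
```

The interval does come out fat, spanning a couple of orders of magnitude, which matches the intuition that four launches is very little data.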
The question _does_ make sense - you have some probability distribution that represents your belief, you observe data and form an updated probability distribution (your posterior) that represents your new belief incorporating the observed data, and the question asked (in a roundabout way) what the variance of the posterior distribution was.
I interpret the question along the lines of "if they do a hundred launches, what's your probability distribution on the number of them that fail?"
If I have a normal coin and a coin which is double-sided (but I don't know which side), I'll give 50% for both of them coming up heads next time I toss. But if I toss them a hundred times each, my probability distributions for them look totally different.
I suspect this information is encoded in your priors, but I don't know offhand how to access it.
So thinking about this some more: we're assuming engines have some "true failure rate" which we're trying to divine from evidence. 2% is currently the mean of your distribution on TFR, but the relevant question is what's the variance of your distribution?
Uniform prior, updated on the evidence that 1/36 engines have failed, I think this gives P(TFR = x) = x(1-x)^35 / (int x(1-x)^35 dx from 0 to 1). Apparently the integral is 1/1332, so P(TFR=x) = 1332·x·(1-x)^35. But that seems to have a mean of 5%, compared to your value of 2%, so I may have done something dumb?
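For what it's worth, the arithmetic checks out numerically, and the 5% vs 2% discrepancy may just be mean vs mode: the posterior here is Beta(2,36), whose mean is 1/19 ≈ 5.3% while its mode is 1/36 ≈ 2.8%. A quick check:

```python
from scipy.integrate import quad

# Normalizing constant: the integral of x(1-x)^35 over [0,1] should be 1/1332.
integral, _ = quad(lambda x: x * (1 - x)**35, 0, 1)
print(1 / integral)   # ~1332

# Posterior mean: integral of x * 1332*x*(1-x)^35 over [0,1] = 1/19.
mean, _ = quad(lambda x: 1332 * x**2 * (1 - x)**35, 0, 1)
print(mean)           # ~0.0526
```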
You can answer that question for each prior. The probability of, say, 3 failures is the sum over all priors of the probability that that prior is true, times the probability that it would leave you with 3 failures.
I leave writing a program to calculate this as an exercise to the reader. I've put enough time in on this one already, I have paying work to get back to.
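For the curious, a minimal sketch of that exercise, using the discretized 0%, 0.01%, ..., 100% prior described upthread and assuming the data is 1 failure in 36 engine-flights:

```python
import numpy as np
from scipy.stats import binom

# Candidate single-engine failure rates 0%, 0.01%, ..., 100%,
# all equally likely a priori.
p = np.linspace(0, 1, 10001)
prior = np.full_like(p, 1 / len(p))

# Update each candidate rate on the data: 1 failure in 36 engine-flights.
posterior = prior * binom.pmf(1, 36, p)
posterior /= posterior.sum()

# Predictive probability of exactly 3 failures on the next 9-engine launch:
# sum over candidate rates of P(rate | data) * P(3 failures | rate).
pred3 = np.sum(posterior * binom.pmf(3, 9, p))
print(pred3)
```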
Not really. If you'd built your model from a billion launches and 20 million failures, a dozen failures in a row wouldn't change your beliefs much at all. You've built your model off far less data.
How many failures are required to change your model from 2% to, say, 4%?
It is easy enough to modify the linked script for any scenario that you want.
If 2 engines failed on the next launch, the model would predict a 4.8% chance of failure on the following launch.
As for your "build your model from a billion launches and 20 million failures" comment, if I had that much data, then it wouldn't much matter what reasonable set of priors that I started with, I'd wind up convinced that the true failure rate was very close to 2%.
Note that the prior that I am talking about is the distribution of possible theories before I saw ANY data.
I could. But that would be inappropriate here. The lower bound of any such interval would be astonishingly close to a failure rate of 0, while the maximum likelihood estimator is itself fairly close to 0. Therefore any reasonable set of priors usually puts you in the upper tail.
For instance one such interval has a MLE of an engine failure rate of 2.78%, a lower bound of 0.1% and an upper bound of around 15%. (Corresponding rocket failure rates range from 1 in 10 million to about 9.9%.)
Good stuff OP, nice to see Bayesian methods getting more attention recently. However, after reading your blog post, I can't help but get the feeling this is how a natural frequentist would approach the problem. This method effectively relies on the known statistical record of past events to form the priors.
Often a more useful and appropriate construct in the Bayesian world, is the use of a belief network or Bayesian Network. This is a probabilistic directed acyclic graph (DAG) that encodes priors, often in the form of subjective beliefs (yes subjectivity can be useful), including specific domain knowledge.
Common example: consider a naive Bayes classifier (a specialized form of belief network) that identifies individual pieces of spam. Do we arrive at the spam score by entering the probabilities of past events into a simple model based on Bayes' theorem?
No, it's trained using the vast amount of domain knowledge and pattern recognition (through our experience and own estimation of what 'spam' is) encoded in our minds, that provide the priors. Thus, even though there is a large amount of subjectivity involved, the overall result can objectively be measured, within a given utility function. Incidentally, this is often what makes many hardcore empiricists 'nervous', and hence avoid belief networks altogether.
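A toy illustration of where the subjective priors enter (the training "corpus" here is entirely made up; the point is that the labels reflect our beliefs about what spam is):

```python
from collections import Counter
from math import log

# Hypothetical, hand-labeled training data: our subjective notion of spam.
spam_docs = ["win money now", "free money offer", "win a free prize"]
ham_docs = ["meeting at noon", "project status update", "lunch at noon"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_c, ham_c = word_counts(spam_docs), word_counts(ham_docs)
spam_total, ham_total = sum(spam_c.values()), sum(ham_c.values())
vocab = set(spam_c) | set(ham_c)

def spam_score(msg):
    # Log-posterior odds of spam vs ham, with add-one (Laplace) smoothing.
    score = log(len(spam_docs) / len(ham_docs))
    for w in msg.split():
        score += log((spam_c[w] + 1) / (spam_total + len(vocab)))
        score -= log((ham_c[w] + 1) / (ham_total + len(vocab)))
    return score

print(spam_score("free money"))      # positive: leans spam
print(spam_score("status meeting"))  # negative: leans ham
```

The classifier's output is only as good as those encoded beliefs, which is exactly the subjectivity-versus-utility trade-off described above.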
Coming back to the Falcon 9: A piece of prior information outside the scope of historic safety records, for example, one of the lead engineers having a nagging doubt about a particular technical risk based on some observed phenomenon, could have an impact on the real world probability of the next event being a failure. (Which is a pretty useful thing to know!)
In fact, this exact scenario happened in 2003 with the disastrous destruction of the Space Shuttle Columbia. [1] An engineer spotted something wrong on previous flights, but management failed to heed the warning[2]. This could quite possibly have been averted, if a risk mitigation model were in place to account for such evidence.
Looking forward, it's quite possible to imagine a future where this decision making has been outsourced to a sophisticated AI built on a Bayes net, with far more accurate real-world modeling of risk and failure probabilities, weighing more evidence than any human or committee could hope to compete with.
While I've nothing against frequentist approaches (albeit the Bayesian one naturally makes more intuitive sense to me), a minor drawback is the reliance on the past to predict the future. For example, if you had safety records on 1 million previous flights, one might be tempted to say, "well, that's that then, we now know objectively the probability of failures in the future -- end of story." But the 1,000,001st flight may be designed to fly on a completely different type of technology, one that will significantly change the safety record of space flight for the next "x" years. Thus, using a Bayesian approach that accounts for all relevant priors, it would in theory be possible to reflect a more accurate probability for the 1,000,001st flight before it took place.
Lastly Bayes nets are not the best tool for every job, and do have drawbacks in certain situations. They are vulnerable to things like Bayesian poisoning or confirmation bias. A Bayesian approach is only as useful as the ongoing real world relevancy and accuracy of the priors. As the old adage goes, GIGO - garbage in, garbage out.
> for example, one of the lead engineers having a nagging doubt about a particular technical risk based on some observed phenomenon, could have an impact on the real world probability of the next event being a failure. (Which is a pretty useful thing to know!)
You should know that there's no such thing as "real world probability". The rocket will crash, or it will not, period.[1] Probability, as a measure of your own ignorance, is subjective.[2] Your main point still stands though: knowing about the uncertainty of that lead engineer certainly should influence your assessments of the risks involved.
[1] What will actually happen is, the universe splits into many "worlds" (blobs of amplitude in configuration space), a fraction of which will have the rocket crash, and the rest won't. That's the closest thing we have to "real world probability", though it really isn't: the laws of physics as we currently know them are still deterministic.
Indeed. We pretty much agree then. If you re-read "real world probability" in context, I was talking specifically about a belief network: the degree of justified belief that an outcome will occur. All beliefs are by definition 'subjective' and occur in a mind.
Actually my current thinking over the last decade mostly aligns with what could be described as physicalist view of the reality, so even 'subjective' thoughts, ideas, concepts etc exist objectively in a physical sense as well (glia cells, neurons etc). (but that's a whole other topic ;)
I simply worded it 'real world' because I was attempting (perhaps ineloquently, I will concede) to differentiate between the frequentist and Bayesian understandings of the term probability, because they differ [1].
Bayesian favors bringing in a priori beliefs into the model whereas a posteriori consideration of a problem, as occurs in frequentist approaches, favor isolation of the model.
>What will actually happen is, the universe splits into many "worlds"
Interesting, you state that so.. assertively :p I'd give the chance of a many worlds interpretation corresponding well with our physical reality, a low probability event, with a pretty high credibility interval ;)
I'd give the chance of a many worlds interpretation corresponding well with our physical reality, a low probability event, with a pretty high credibility interval ;)
There is a theorem that if an experiment and observational apparatus are both quantum mechanical systems, then the many worlds hypothesis describes what happens when that experiment is observed with that apparatus. If quantum mechanics is merely a good approximation of some better theory, then to whatever extent it is a good approximation of the system, the many world hypothesis remains a good description of that interaction.
Therefore your confidence that the many-world's hypothesis is an inaccurate description of what happens when you observe the outcome of a quantum mechanical experiment is an insistence that your brain and body are not well-described by the best theory that physics has for how the world works.
I think most physicists agree that at the bottom, we have a distribution of "complex amplitude"[1] over a "configuration space"[2]. But as you can see from my second link, many (most?) physicists insist that we can derive a "probability" from a complex number. Note that such probability would then be an actual real world probability, where the universe itself is uncertain about what to do. True non-determinism.
It's only natural. At the experimental level, the researcher does observe Born statistics. Same setup, different results, so there is probability in the territory after all.
There's a problem with that, however: the equations, which make such wonderfully accurate predictions, (i) are deterministic, and (ii) do not state at any point that the blob of amplitude we don't see disappears in a puff of smoke. They merely say that the blobs eventually stop interacting. The same way that if you launch a photon into outer space, never to meet it again, it won't disappear the instant it reaches the boundary of our observable universe. If you insist on a mono world, you have to assert that the other blob, despite being predicted by those otherwise accurate equations, somehow doesn't exist when you don't see it.
One way to do it is to believe that, contrary to what the equations say, the blob you don't see does disappear in a puff of smoke. Its amplitudes are literally zeroed out behind your back. In hindsight, this one looks nuts to me. I mean, how can we justify distrusting accurate equations in a way that doesn't even make experimental predictions?
Another way is to call the square moduli of those amplitudes "probabilities", and pretend that because they're probabilities, the blob you are not in isn't real. But the equations do not make any difference between the two blobs. Then how come the other blob is less real than our own?
To me, those two explanations really feel bizarre. You have to start from a mono-world assumption to come up with them. An easy mistake to make, since personal experience tells us all the time that there is only one world. A bit like a leaf in a binary tree: its ancestors form a line, not a tree. But Kolmogorov complexity says a literal interpretation of the equations (which means many worlds) is simpler than anything else we currently know about. So to hell with personal experience (which, by the way, is responsible for much worse wackiness than mono-world thinking).
Now there is a way out: we can admit that current physics implies many worlds, but insist that real physics probably doesn't. Current physics is not complete, after all. We may have big surprises. This argument is certainly much saner than the Copenhagen interpretation -- so much so that it does lower my probability for many worlds somewhat. Just not enough to squash my confidence. :-)
You are criticizing me for trying to rely on data rather than a complex subjective model based on information from the beliefs of people that I have never met and have no input from? There is no way for me to attempt that approach that does not come down to some form of "making shit up".
More generally you are right that I prefer to work off of data rather than subjective opinion. Data I understand. Subjective opinion is valuable, but suffers from major potential biases. Correcting for that can be very hard.
>"Panels designed to relieve pressure within the engine bay were ejected to protect the stage and other engines."
Can anyone that knows something about the Falcon 9 design or rockets in general shed some light on this? That sentence makes it sound like the panels were purposefully jettisoned, which doesn't make sense to me. What do those panels do, what do they look like, and where are they?
Purposefully jettisoning panels is a way of preventing an explosion from damaging the rest of the ship. The rocket directs its energy in one direction, but what if something prevents the energy from going that way? The energy, in the form of expanding gases, has to go somewhere. If these panels didn't break away to give the energy somewhere to go, it could have been forced into another engine or into the ship's body, causing a much bigger problem.
When they say "jettisoned", it makes it sound like one of the flight computers decided to drop the panels. Wouldn't it be more the case that the panels are deliberately designed to be blown off? I have a hard time believing that any control system could react in time to an engine explosion...
The panels may have been designed to jettison once they reached a certain pressure differential, or they may be jettisoned by computer.
How quickly do engines explode? Is it faster than two cars colliding? Computers deploy airbags quickly because electricity travels along the wires in your car's chassis faster than the car travels into something else.
Visualize a bumper with sensors - as the bumper is deformed by a collision, a sensor shifts an electron in the copper wire, and the electron next to it shifts. There's a cascade of shifting electrons along the wire, and it races backwards through the car's body, chased by the destruction of the car as it collides with another object. The cascade of electrons hits the air bag computer, which begins another cascade of electrons to the air bag. The wave of destruction has covered most of the distance to your windshield by now. The air bag deploys as molecules of air rush from their high-pressure canister to fill it. As the bag hits its most pressurized point, your car is coming to a stop as its kinetic energy is combined with energy from the other object.
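The timescales in that description can be sanity-checked with rough numbers. A quick sketch (all values are illustrative assumptions, not measured automotive data) shows the electrical cascade is over a million times faster than the mechanical crash:

```python
# Rough timescale comparison for airbag deployment.
# All numbers are illustrative assumptions, not measured data.

SIGNAL_SPEED = 2e8    # m/s, signal propagation in copper wire (~2/3 of c)
WIRE_LENGTH = 3.0     # m, sensor-to-controller wiring run (assumed)
CRASH_SPEED = 15.0    # m/s, roughly a 54 km/h impact (assumed)
CRUSH_DISTANCE = 0.5  # m, crumple-zone travel during the collision (assumed)

signal_time = WIRE_LENGTH / SIGNAL_SPEED   # time for the electrical cascade
crash_time = CRUSH_DISTANCE / CRASH_SPEED  # time for the car to crumple

print(f"signal: {signal_time * 1e9:.0f} ns, crash: {crash_time * 1e3:.0f} ms")
# The signal arrives nanoseconds after impact, leaving tens of
# milliseconds for the inflator to fill the bag mid-crash.
```

With these assumptions the signal takes about 15 ns while the crash unfolds over about 33 ms, which is why the bag can be fully inflated before the occupant moves far.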
Beautiful. A small correction – most airbags use pyrotechnic inflators, not compressed air.
Sidenote about airbags: they have to be folded to fit inside their module, so as it inflates it's also unfolding. In order to make sure it unfolds properly they coat it in a lubricant that can't evaporate – either talcum powder or cornstarch depending on the vehicle.
I learned this only after I scrambled out of the car my sister put in a ditch thinking it was on fire. The best part? It was a diesel car.
Who knows, maybe the engineers consider it a feature because after an accident it sure gets people out of the car quick!
I would venture to guess they're built to work both ways. If the computer has sufficient time and information to make the decision it releases the panels. Ex. Engine failed, shut it down, blow panel as precaution to drop pressure in that engine compartment.
They're also most likely built as the designated point of failure, like a safety valve on a boiler. If the pressure builds too high they are blown out by the pressure. Ex. engine goes critical and explodes before computer can react, the panels fail before other structural components do.
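That guessed-at dual-path design can be sketched as control logic. This is entirely hypothetical (SpaceX's actual avionics are not public): the computer commands a release when it has time to react, while a mechanical burst threshold acts as the passive backstop, like the boiler safety valve mentioned above.

```python
# Hypothetical sketch of a dual-path pressure-relief panel:
# active release commanded by the flight computer, plus passive
# release at a mechanical burst threshold. Thresholds are invented.

COMMAND_PRESSURE = 100.0  # kPa differential at which the computer releases it (assumed)
BURST_PRESSURE = 150.0    # kPa differential at which the panel fails mechanically (assumed)

def panel_released(pressure_diff_kpa: float, computer_alive: bool) -> str:
    """Return how (or whether) the panel comes off at a given pressure differential."""
    if computer_alive and pressure_diff_kpa >= COMMAND_PRESSURE:
        return "commanded release"  # computer saw the anomaly in time
    if pressure_diff_kpa >= BURST_PRESSURE:
        return "mechanical burst"   # passive backstop, like a boiler safety valve
    return "attached"

print(panel_released(120, computer_alive=True))   # computer acts first
print(panel_released(160, computer_alive=False))  # passive path saves the day
print(panel_released(50, computer_alive=True))    # nothing to do
```

The point of the two paths is that the passive one needs no reaction time at all, covering the "engine goes critical before the computer can react" case.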
They said "ejected", not "jettisoned". You can be 'ejected' from a vehicle if you're not wearing your seatbelt, so the word doesn't require active response.
They probably refer to breakaway fairing panels above the engine isolation armor. SpaceX has an armor "cup" around the dangerous parts of the engine, and put fairings on top of that for some reason; probably aerodynamics.
Their statement suggests these came off when pressures changed radically during engine cutoff, which is plausible. The rocket was near maximum aerodynamic pressure, and removing all the pressure the motor generates is a big swing. I find it more likely that the nozzle (bell) shattered, as it is more exposed to both aero and combustion pressures. Maybe both happened.
When a liquid engine fails energetically, it's usually going to be a failure in the chamber. Nozzle or throat burn-through or other failure may also happen. The turbo pump might also let go and shred some stuff nearby. Plumbing failures may also kill the engine, but not destroy much; there are plenty of valves to fix leakage before it gets bad.
Here you can see the armored tub around the chamber section of each engine. It is meant to contain any problems. Mostly these would be hot gas from holes burned in the chamber or throat, fuel or oxidizer from leaks in the plumbing, or shrapnel from the turbo pump coming apart. Note that the other major failure mode, excessive vibration, cannot be armored against, but that is more a design thing than a random failure.
Most to all of these failures are easily detected by various pressure and flow sensors, and usually before they become big problems. Turn off the propellant valves, and the engine rapidly becomes safe, though off. But the armor does keep the neighbors safe from any problems, presuming it holds.
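The detect-and-shut-down sequence described here can be sketched as a monitoring loop. This is a toy model, with sensor names and limits invented for illustration, not anything resembling SpaceX's actual flight software:

```python
# Toy engine-health monitor: compare sensor readings against nominal
# bands and command a shutdown on any out-of-band reading.
# Sensor names and limits are assumptions for illustration only.

NOMINAL_BANDS = {
    "chamber_pressure": (90.0, 110.0),    # arbitrary units
    "turbopump_rpm": (9000.0, 11000.0),
    "fuel_flow": (45.0, 55.0),
}

def check_engine(readings: dict) -> bool:
    """Return True to keep running, False to close the propellant valves."""
    for sensor, (low, high) in NOMINAL_BANDS.items():
        if not (low <= readings[sensor] <= high):
            return False  # out of band: close the valves, engine goes safe
    return True

healthy = {"chamber_pressure": 100.0, "turbopump_rpm": 10000.0, "fuel_flow": 50.0}
failing = {"chamber_pressure": 60.0, "turbopump_rpm": 10000.0, "fuel_flow": 50.0}
print(check_engine(healthy))  # keep running
print(check_engine(failing))  # chamber pressure dropped: shut it down
```

The key design property is that the safe action on any anomaly is simple and fast (close the valves), which is what makes "easily detected, usually before they become big problems" plausible.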
I would guess it is good enough for most failures.
Here you can see pretty white fairings hiding everything but the nozzles. Note that they appear to be a bunch of different pieces. Probably that is because they are meant to break away individually if something goes wrong.
Those are what I think the statement refers to. I doubt they are actively jettisoned, but may certainly be designed to pop off in an over-pressure situation. One could probably compare vs the video to see if the pieces look like that, the nozzle, or other engine parts. The corner fairing might also have failed under some circumstances.
Having seen the video in slow mo a few times, I think their assessment is plausible. Probably they will be able to tell what happened from telemetry. I don't know if they are still trying to recover stages, but if they are, they might get their hands on some physical evidence. Whether we will ever see any of this data, though, I don't know. They've been reasonably open in the past.
"Falcon 9 did exactly what it was designed to do. Like the Saturn V (which experienced engine loss on two flights) and modern airliners, Falcon 9 is designed to handle an engine out situation and still complete its mission. No other rocket currently flying has this ability."
Chills down my spine as I read this. I try to write eloquently, but sometimes the fact should stand alone: "No other rocket currently flying has this ability."
This reminds me of a major feature Chrome had when it launched: withstanding a crash of any of the open tabs. I thought it was a funny feature to have... other browsers just preferred not to crash in the first place. Then I tried Chrome and realized it was a great feature for them, because Chrome kept crashing all the time (back then).
What I wanted to say is that while it's a great thing for Falcon 9 to have fail-over in its early flights (as we could see yesterday), I wouldn't worry too much about "other rockets" not having this. Soyuz rockets are a great example: a 100% success rate for manned flights to the ISS, and over 97% for all Soyuz launches since 1966 [1].
I saw the engine flame out and the panels come apart during the launch. I expected some comment on the launch radio, because it was obviously not SOP (engines aren't supposed to flame out, let alone shed debris), yet, thanks to good design, the entire launch remained nominal.
I am now getting hopeful that I'll be able to experience zero-G before I die. SpaceX team, you are my heroes. Keep up the great work!
The main mission (Dragon/ISS) wasn't affected, but a secondary payload (an ORBComm test satellite) was left in the Dragon's insertion orbit, and didn't get the scheduled secondary boost --- apparently because, after the delayed orbital insertion, the reboost would have gone too close to ISS, according to an ORBComm press release:
Am I the only one who got confused by the first sentence? "The Dragon spacecraft is on its way to the International Space Station this morning and is performing nominally...".
Is everything fine now, or is it not? "Nominally" in this sense to me means that something is amiss, but reading the rest of the article seems to imply that everything is on track.
Having listened to five decades of rocket launches, I would say that nominal means working within mission parameters. There may be glitches, but it is going to work, i.e. the payload is going into orbit. The engine failure wasn't planned, but there was enough redundancy in the system for it to succeed.
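That distinction — subsystem glitches versus mission-level status — can be made concrete with a toy sketch (names and logic invented for illustration): a flight stays "nominal" as long as mission parameters are met, however many anomalies get logged along the way.

```python
# Toy illustration of "nominal": the mission stays nominal as long as
# mission-level parameters are met, even with subsystem anomalies logged.
# Function and field names are invented for illustration.

def mission_status(anomalies: list, on_target_orbit: bool) -> str:
    """Mission-level call: nominal iff the payload is going where planned."""
    if on_target_orbit:
        return f"nominal ({len(anomalies)} anomalies logged)"
    return "off-nominal"

# Yesterday's flight, roughly: one engine down, panels gone, Dragon on track.
print(mission_status(["engine 1 shutdown", "panels ejected"], on_target_orbit=True))
print(mission_status([], on_target_orbit=False))
```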
Ah. That explains it. I did not know that meaning of the word, and the explanation I checked at:
http://dictionary.reference.com/browse/nominal
only showed the meanings that I was familiar with.
I'm getting the sense that it's used because "numbers != reality", so it's a subtle reminder that they're basing their call on sensor data, rather than actually having their head inside the engine.
I find that to be a sufficient explanation, though I'm just guessing.
Wikipedia has a better explanation than I or the other comments so far could provide: http://en.wikipedia.org/wiki/Real_versus_nominal_value, and its "secondary" meaning is not at all limited to aerospace linguistics.
I see this confusion fairly often. Outside of the status of systems/missions etc., nominally often means something like 'in name only'. So to say it's proceeding 'nominally' to someone not familiar with its meaning in this domain sounds like "well, it's going according to 'plan', but ..."
Hm. I have never heard that meaning of the word previously, and the link I previously provided (http://dictionary.reference.com/browse/nominal) also did not show that specific meaning (unless I am too blind to see it!). I do find it interesting though that you seem to think that it is commonly used in this sense, so I guess I am not moving in the right technical (aeronautics) circles :-)
Thanks for the feedback.
Here’s the Oxford English Dictionary (funny how words work – I know all five of those meanings of nominal, but when I hear it I first think of the fifth, while you seem to think first of the first three):
nominal |ˈnɒmɪn(ə)l|
adjective
1 (of a role or status) existing in name only: Thailand retained nominal independence under Japanese military occupation.
• relating to or consisting of names.
2 (of a price or charge) very small; far below the real value or cost: they charge a nominal fee for the service.
3 (of a quantity or dimension) stated or expressed but not necessarily corresponding exactly to the real value: EU legislation allowed variation around the nominal weight (that printed on each packet).
• Economics expressed in terms of current prices or figures, without making allowance for changes over time: the nominal exchange rate.
4 Grammar relating to or functioning as a noun: a nominal group.
5 informal (chiefly in the context of space travel) functioning normally or acceptably.
It's common enough for space launches that there was joking on twitter last night about playing a drinking game- take a drink every time someone said "nominal". But most people decided that might be a little too much alcohol even for a drinking game. :-)
Heh, no aerospace background for me, merely computer engineering. Maybe that's still close enough for the term to have crept around with that meaning. Now go forth and help spread this usage far and wide! ;)
"Nominal" is the favorite word of aerospace engineers. Both in the sense that they like to use it a lot, and in the sense that they very much like to hear that things are going according to the plan.
It may not be following the optimal path, but it is absolutely within the planned contingencies: "Falcon 9 is designed to handle an engine out situation and still complete its mission."
The engine failed prior to MECO-1 (where the first two scheduled shutdowns occur) so there were still 8 lit engines. There's no capability to restart the engines that have been shut down.
It's amazing that this could have worked as a Falcon 7. One engine failed, and they still could have made it even with another engine malfunctioning. Bearing in mind that launch costs for SpaceX are forecast to be significantly lower than the old Shuttle's, this is all the more impressive.
There was a mishap though, since the ORBComm satellite wasn't put anywhere near the right orbit. I have no idea whether the satellite's onboard propulsion will be enough to get it into the desired orbit, but if they do manage it, it will be a testament to the value of fault-tolerant space operations.
I have to admit that I'm impressed at how well the system they built compensated for the mechanical failure. It looks like they have some good people building things.
One aspect that's worth consideration is the private/corporate aspects of spaceflight. When there were failures in Apollo and the shuttle, the public had a right to know everything that happened since we'd paid for everything. SpaceX has been super cool about disclosure here, but how long can we count on that? At some point, there's too much money at stake for them to maintain full transparency.
What's more important than disclosure to the public after something has gone wrong, is disclosure to the customers (e.g. NASA) and future astronauts ahead of time if something has been identified as a potential problem before launch. The Space Shuttle Challenger disaster occurred because managers failed to acknowledge the warnings of engineers, who recommended postponing the launch due to cold weather which could prevent some O-rings from functioning properly. Having the ability to acknowledge and correct problems ahead of time, even if it means a launch delay and potentially some lost profits, will ultimately pave the path for a sustainable private space industry.
But I think that SpaceX understands that NASA is funded by the public and it will be easier to get the support of NASA if they have the support of the public. I would suspect that for this reason they will continue a decent amount of public disclosure.
I saw the launch live and I saw the debris ejecting about 90 seconds in. What I found particularly amazing was that SpaceX either didn't know it, didn't want to confirm it, or had just realised that their design worked, because all I heard after that was my favourite aerospace term: "situation nominal".
Incorrect. From the fine article (SpaceX's release):
"It is worth noting that Falcon 9 shuts down two of its engines
to limit acceleration to 5 g's even on a fully nominal flight.
The rocket could therefore have lost another engine and still
completed its mission."
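The quoted margin can be illustrated with back-of-the-envelope math (round hypothetical numbers, not actual Falcon 9 specs): with engines out, the remaining ones simply burn longer to deliver roughly the same total impulse, as long as the stage carries reserves for the extra burn time.

```python
# Back-of-the-envelope engine-out math with hypothetical round numbers
# (NOT Falcon 9 specs). Ignores gravity losses and throttling, which
# make the real penalty for a longer burn somewhat worse.

N_ENGINES = 9
NOMINAL_BURN = 180.0  # s, first-stage burn with all engines lit (assumed)

def burn_time(engines_running: int) -> float:
    """Same total impulse spread over fewer engines -> proportionally longer burn."""
    total_impulse_units = N_ENGINES * NOMINAL_BURN  # engine-seconds of thrust
    return total_impulse_units / engines_running

print(f"{burn_time(9):.1f} s with 9 engines")
print(f"{burn_time(8):.1f} s with 8 engines")
print(f"{burn_time(7):.1f} s with 7 engines")
```

Under these assumptions, losing one engine stretches the burn from 180 s to about 202 s, and losing two stretches it to about 231 s — consistent with the idea that a nine-engine stage which throttles down to limit acceleration anyway has impulse to spare.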
No, it just became a "Falcon 8" while enroute.
All joking aside, as a former NASA Space Shuttle engineer I'm very impressed by this private-enterprise venture into heavy-lift launch services. People have often speculated about how much cheaper launch services might be if they were in the private sector -- now we can find out. The preliminary signs are very good.