Actually it wasn't about the hydrogen that much. More like the hull being painted with flammable stuff. With today's materials it couldn't have burned like that. So any airship design of today NOT using hydrogen is wasting buoyancy, and wasting a rare (on Earth) element that could be put to use for more important things.
FWIW it was about hydrogen - the Hindenburg was designed around helium (and thus didn't have various safeties around hydrogen), but due to embargoes against Nazi Germany they couldn't get the necessary helium, so they filled it up with hydrogen against the original spec.
Yes. But still the paint burned first. And the hydrogen didn't explode; there was no "Knallgas" (a detonating hydrogen-oxygen mix). Even in all that chaos, the opportunity to mix with air in the right ratio to enable that didn't arise. It just flared off.
One could even argue that all that flaring off generated some lift by updraft, making it crash more softly and slowly.
Hydrogen is not picky about fuel-air mixture; it will explode at any concentration between 4% and 74% (in air). I rewatched the footage and it sure looks like an explosion to me.
The thermite paint hypothesis is interesting but a bunch of hydrogen airships exploded. The Hindenburg was partly made from metal recovered from the R101. The R101 exploded on her maiden voyage.
We might expect it to look different, but it would appear that that's exactly what a hydrogen explosion looks like. By what means do you believe the camera, at least a hundred meters away, was shaken?
Did it shake from a blast? Or was it just hastily turned around to catch the flames?
I've watched many videos about that in the past, even ones with 3D point-cloud overlays.
Not in the mood to analyze this one further. I have doubts about it being really 'real time', conversion errors, whatever.
Maybe our understandings of 'explosion' differ. By explosion I mean something coming apart fast, in an instant, with a bang, things flying away, a shockwave.
For sure, in my mind a deflagration is a type of explosion, but I certainly don't mean to quibble about terms or to litigate this video more than is interesting to you.
I guess for me, I don't know whether it was hydrogen leaking around the rear or thermite in the paint which caused the ignition, and I don't know whether a helium airship would've also caught fire and how disastrous such a fire would've been. But I do know that what happened next was that the hydrogen ignited and the ship blew up.
That being said, I think airships are a criminally underexplored mode of transit, and the Hindenburg shouldn't be a reason to abandon them altogether. At a minimum, we're much more experienced in handling hydrogen now, and modern hydrogen blimps don't seem to blow up all that often.
It's not a problem of reflectivity, it's a problem of resolution. In order to detect something distinctly from other things (i.e. resolve that thing), you must be able to distinguish its reflected energy from that of other things by separating them along one or more dimensions. Range is usually a good discriminator, but there are many things at (nearly) equal range to the radar. Azimuth is typically not great, because azimuth resolution requires a physically wide aperture, and real estate on the bumper is expensive. Doppler is great for moving things because it's easy to design a waveform with fine Doppler resolution, and most moving things (cars, bikes, people) don't move at exactly the same speed as other moving things. However, nonmoving things have a very consistent velocity of precisely 0, and there are lots of them. So they can be very hard to resolve, and thus to detect.
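To put rough numbers on it, here's a quick Python back-of-the-envelope sketch; the carrier, bandwidth, CPI, aperture, and range below are assumed values for a 77 GHz automotive radar, not figures from any particular system:

    c = 3e8            # speed of light (m/s)
    f_c = 77e9         # assumed carrier frequency (Hz)
    B = 1e9            # assumed sweep bandwidth (Hz)
    T_cpi = 20e-3      # assumed coherent processing interval (s)
    D = 0.10           # assumed aperture width on the bumper (m)
    R = 50.0           # assumed target range (m)

    wavelength = c / f_c
    range_res = c / (2 * B)               # ~0.15 m: range separates things well
    vel_res = wavelength / (2 * T_cpi)    # ~0.10 m/s: Doppler separates movers well
    az_res = wavelength / D               # ~0.04 rad of angular resolution
    cross_range_res = R * az_res          # ~2 m at 50 m: azimuth is the coarse axis

    print(range_res, vel_res, cross_range_res)

Range and Doppler come out fine-grained, cross-range at typical distances is a couple of meters, and Doppler does nothing to separate one stationary object from another.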
In practice, the difference between pulsed radar and continuous-wave radar is a continuum rather than a dichotomy. Historically, FMCW (frequency-modulated continuous wave) had a high duty cycle - though not 100%, since the ramp generators need finite time to reset (you can alternate between up- and down-ramps to get closer). For some applications, though, requirements force you toward short ramps and long PRIs, and thus low duty cycles, but the name (FMCW) sticks.
This is true for some earlier, lower-fidelity radars, but as driver assistance and self-driving have developed, so have the requirements and capabilities of the radar systems. Newer systems generally have shorter PRIs for higher Doppler bandwidth, and much higher duty cycles for more energy on target - the FCC limits power, so you've got to get energy from the time axis. Both of these things make the interference problem harder.
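A toy calculation of that trade-off, with assumed numbers (the PRIs and ramp length are illustrative, not from the thread):

    c, f_c = 3e8, 77e9
    wavelength = c / f_c                 # ~3.9 mm at 77 GHz

    def unambiguous_speed(pri):
        # Doppler is sampled once per ramp, so |f_d| < 1/(2*PRI); v = f_d * wavelength / 2
        return wavelength / (4 * pri)

    print(unambiguous_speed(50e-6))      # ~19 m/s unambiguous with a 50 us PRI
    print(unambiguous_speed(10e-6))      # ~97 m/s with a 10 us PRI

    t_ramp, pri = 9e-6, 10e-6
    print(t_ramp / pri)                  # 0.9 duty cycle: transmitting almost all the time

Shorter PRIs widen the unambiguous Doppler span, and filling nearly the whole PRI with the ramp is how you get energy on target when peak power is capped - both of which also mean your transmitter is on whenever everyone else's is.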
I agree that you can't know what is "real" without looking at our universe from outside it, but that, in and of itself, doesn't imply that every model must be flawed in the sense of making predictions that aren't 100% consistent with observation. We could stumble across the "real" model, or something equivalent to it (in the sense of identical predictions) - we'd have no way of knowing whether the model is "real", but it could still be right.
Such a model would look extraordinarily compelling to rely on all the time, for all purposes, while remaining capable of being false in some critical way that is very difficult (or impossible with the technology of the time) to detect.
I'd prefer the Unix way: a plurality of explicitly more limited, but more numerous and conceptually diverse, models.
I find laying of blame to be the most egregious waste of time, for work as well as personal issues. People who insist on it are, by and large, not people you want to spend time or money with.
I was confused by that terminology at first, too, but it appears their product has an "offer" phase where the meeting organizer suggests multiple times, and a "book" phase in which the invitees accept the meeting and it's booked. Which doesn't necessarily mean that the meeting is attended, but it is a higher level of confirmation/buy-in from the invitees than what happens at my work (FANG) where organizers just throw meetings on everyone's calendar.
From an information-theoretic perspective (which is the perspective Nyquist was originally coming from, though it didn't yet have that name), you don't need to mix the signal down. Assuming it is truly band-limited, you can sample the signal directly at RF and reproduce it from those samples. You will, however, need to modulate the reproduced signal back into the original band, which means you need to know where that band is - perhaps this is the detail you're pointing out?
Another way of looking at it is that sampling inherently does the mixing down to baseband. Although it may not be exactly the baseband you want if the spectrum isn't cleanly symmetric about a multiple of the sample frequency.
I've worked on ultrasound systems that definitely worked this way, not just in theory but also in practice. Bandpass filter 20–40 kHz, sample directly at 40 kHz (giving 20 kHz bandwidth). No mixer step involved, but your spectrum becomes inverted (e.g. if you do an FFT, a 22 kHz tone will be in the 18 kHz bin, not the 2 kHz bin as you would perhaps expect).
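That inversion is easy to reproduce numerically; here's a small numpy sketch using the 22 kHz tone and 40 kHz rate from above (the FFT length is an arbitrary choice, picked so the tone lands exactly on a bin):

    import numpy as np

    fs = 40e3                          # sample rate (Hz)
    n = 4000                           # FFT length, chosen so 22 kHz hits an exact bin
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * 22e3 * t)   # tone inside the 20-40 kHz passband

    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    print(freqs[np.argmax(spec)])      # 18000.0, not 22000.0: the band comes out inverted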
Aliasing makes more sense (to me, anyway) if you think about the spectrum of complex signals, in which a real-valued signal is modeled as the sum of positive- and negative-frequency components.
In the sampling operation, every sinusoid is shifted down to the "natural baseband" by adding or subtracting whatever multiple of the sampling frequency places the resulting frequency within +/- half the sampling frequency. So for your example of 22 kHz, that real frequency has two components: +22 kHz, which gets shifted down to -18 kHz (= 22 kHz - 40 kHz), and -22 kHz, which gets shifted up to +18 kHz (= -22 kHz + 40 kHz).
Note that this "natural baseband" is an abstraction of our own invention. You can just as easily think of the spectrum as ranging from 0 Hz to the sampling frequency f_s, rather than from -f_s/2 to +f_s/2. The fact that some prefer one view over the other is precisely why fftshift exists.
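Reusing the 22 kHz / 40 kHz numbers from above, a quick sketch showing that the same FFT supports either reading:

    import numpy as np

    fs, n = 40e3, 4000
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * 22e3 * t)         # real tone: +22 kHz and -22 kHz components

    spec = np.fft.fft(x)
    f_wrapped = np.fft.fftfreq(n, 1 / fs)    # 0 .. fs view (negative half wrapped)
    f_centered = np.fft.fftshift(f_wrapped)  # -fs/2 .. +fs/2 view of the same bins
    print(f_centered[0], f_centered[-1])     # -20000.0 19990.0

    # The two strongest bins: +22 kHz - 40 kHz = -18 kHz, and -22 kHz + 40 kHz = +18 kHz.
    idx = np.argsort(np.abs(spec))[-2:]
    print(np.sort(f_wrapped[idx]))           # [-18000.  18000.]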
To clarify: "band-limited" usually means X(w) = 0 for abs(w) > B for some B, where X is the frequency spectrum. And that's the definition Shannon used in the original proof, which is where the idea of Nyquist Frequency comes from.
If you add the further constraint that the signal is "bandpass-limited", i.e. X(w) = 0 except where A < abs(w) < B for some A, B, then yes, you can undersample.
And that's where the information-theory idea comes in: the information contained in the band only "needs" a sampling rate of twice the bandwidth to be reconstructed perfectly.
You can think of aliasing as somewhat orthogonal to that, in the sense that you need 2x the bandwidth so you don't corrupt the signal itself, but 2x the max frequency so you don't alias anything else into it. (I say this realizing that aliasing is what would cause the former corruption, hence "somewhat".)
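A tiny sketch of that distinction, using a small helper (names are just illustrative) that computes where a real tone lands after sampling:

    def alias(f, fs):
        """Apparent frequency (0..fs/2) of a real tone at f Hz after sampling at fs Hz."""
        f = f % fs
        return min(f, fs - f)

    fs = 40e3
    print(alias(22e3, fs))   # 18000.0: an in-band tone folds, but stays within the 20 kHz of captured bandwidth
    print(alias(57e3, fs))   # 17000.0: an out-of-band interferer lands in-band unless the filter removes it first

So sampling at 2x the bandwidth is enough to carry the information, but anything above the band that survives the anti-alias filter gets folded right on top of it.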