Interesting article, but man, the first section could have used some more graphics. Drawings of the sphere/circle he talks about are included with almost any antenna datasheet, and looking at the patterns for various types of antennas is helpful.
For example, the last page of a datasheet [1] for a ceramic chip antenna (the size of a capacitor or resistor, the kind used on small electronics like smartwatches or glasses, with -0.5 dBi gain) shows that sphere in various cross sections. The last page of a datasheet [2] for a "whip" antenna shows an E-plane pattern (looking at the side of the antenna as it stands up) that looks like something from a Rorschach test, but the H-plane pattern (looking down at the antenna) is almost a perfect circle.
Most (all?) of these diagrams come from RF testing rooms like the one in this shot [3]. You can't see it in the photo, but there is usually a rotating platform in the floor, with a coax cable carrying the RF signal to and from the test instrumentation.
I have no idea how the diagrams in datasheets are actually produced... But you can certainly generate all of them by simulation, which is frequently done when researching/designing new antenna types.
I discourage anyone wanting to understand the physics of beamforming antennae from reading this article. The author has a very poor grasp of the concepts.
Can you elaborate? Without some kind of analysis of the mistakes in the article, we don't really have a way to judge whether we should believe your statement about the author, nor can we improve our understanding.
A more descriptive listing of some of the issues with the article, or links to better resources, would rock.
Any time someone appeals to quantum physics to offer a handwaving explanation of something that classical physics covers just fine, watch out. Using path integrals to introduce interferometry is a waste of time, and in this case, just plain wrong.
Both the beamforming Wikipedia article and the OP basically say "by having several out of phase antennas you can create constructive interference at the intended receiver of the signal". What am I missing?
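As a minimal sketch of that sentence (assuming a uniform linear array with half-wavelength spacing and free-space propagation; the carrier frequency and element count below are made up), the per-element phase shifts make the contributions add in phase toward the intended direction and largely cancel elsewhere:

    import numpy as np

    # Toy uniform linear array: phase each element so its field arrives
    # in phase toward theta0; in other directions the phasors mostly cancel.
    c = 3e8
    f = 2.4e9                    # Wi-Fi-ish carrier (assumed)
    lam = c / f
    d = lam / 2                  # half-wavelength element spacing
    N = 8                        # number of elements (assumed)
    theta0 = np.deg2rad(30)      # direction of the intended receiver

    k = 2 * np.pi / lam
    n = np.arange(N)
    weights = np.exp(-1j * k * d * n * np.sin(theta0))   # per-element phases

    def array_factor(theta):
        # Magnitude of the summed element contributions in direction theta
        return np.abs(np.sum(weights * np.exp(1j * k * d * n * np.sin(theta)))) / N

    for deg in (0, 30, 60):
        print(deg, round(array_factor(np.deg2rad(deg)), 3))
    # ~1.0 at 30 degrees (constructive), much smaller elsewhere (destructive)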
OP (and also Richard Feynman) believes that QM provides a more intuitive explanation of the elusive "why" behind wave interference. Commenters here don't like that.
Well, to start, the antenna generates its own noise since it is not 100% efficient; i.e. it has a loss (ohmic) resistance in addition to its radiation resistance. If you put a real antenna in a box made of perfect electrical conductor, it would still generate noise, so you really don't halve the noise by increasing the gain 3 dB, you double the signal.
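A rough back-of-the-envelope for that point (all numbers below are made up for illustration): the ohmic part of the antenna resistance contributes its own thermal noise, so the antenna noise temperature is a mix of what it "sees" and its own physical temperature, weighted by the efficiency.

    import math

    k_B = 1.380649e-23                 # Boltzmann constant, J/K
    R_rad, R_loss = 50.0, 5.0          # radiation vs. loss resistance (illustrative)
    eta = R_rad / (R_rad + R_loss)     # radiation efficiency, ~0.91 here

    T_sky, T_phys = 100.0, 290.0       # brightness temp seen vs. physical temp, K
    T_ant = eta * T_sky + (1 - eta) * T_phys   # the lossy part adds its own noise

    B = 20e6                           # 20 MHz channel
    P_noise_dBm = 10 * math.log10(k_B * T_ant * B / 1e-3)
    print(round(eta, 3), round(T_ant, 1), round(P_noise_dBm, 1))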
Here is a good tutorial on phased arrays. It's actually a lot easier to think about on the receive side: a skewed wavefront arrives, and the phase shifters (delay elements) compensate for the per-element delay.
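To make the receive-side picture concrete (a sketch with an assumed geometry, not a real design): a plane wave arriving at angle theta hits each element of a linear array a little later than the previous one, and the phase shifters/delay lines just undo that stagger before the signals are summed.

    import numpy as np

    # A plane wave from angle theta reaches each element with an extra
    # delay of d*sin(theta)/c; compensating that delay (or the equivalent
    # carrier phase) lines the copies back up before summation.
    c = 3e8
    f = 5.8e9                          # carrier (assumed)
    d = 0.5 * c / f                    # half-wavelength spacing
    theta = np.deg2rad(20)             # arrival angle (assumed)
    N = 4
    n = np.arange(N)

    tau = n * d * np.sin(theta) / c    # per-element arrival delay, seconds
    phase = 2 * np.pi * f * tau        # equivalent narrowband phase shift

    received = np.exp(-1j * phase)             # what each element sees
    aligned = received * np.exp(+1j * phase)   # phase shifters undo the stagger

    print(np.abs(received.sum()) / N)  # < 1: the copies partially cancel
    print(np.abs(aligned.sum()) / N)   # 1.0: coherent sum after compensation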
OK, while that seems like a valid tidbit if one has to implement this stuff, it doesn't seem like that's a helpful complication to introduce when trying to explain the basic concept.
I think abstrakraft's comment may partially be a reaction to the informal "fast and loose" style of your article. That said, I think there are also quite a number of factual inaccuracies. I won't go into all of them, but will comment on one paragraph from the article.
"(There's another concept called "antenna efficiency" which basically says you can adjust your scoop to resonate at a particular frequency, rejecting noise outside that frequency. That definitely works - but all antennas are already designed for this. That's why you get different antennas for different frequency ranges. Nowadays, the only thing you can do by changing your antenna size is to screw up the efficiency. You won't be improving it any further. So let's ignore antenna efficiency. You need a good quality antenna, but there is not really such a thing as a "better" quality antenna these days, at least for wifi.)"
Here you are conflating the issues of bandwidth and efficiency. Efficiency is how much RF power is lost to dissipative mechanisms in the antenna compared to how much is actually radiated. What you go on to describe is the bandwidth of the antenna.
Your explanation that the bandwidth of the antenna is somehow chosen to reject noise for the receiver also demonstrates a misunderstanding of how all modern receiver architectures work. The noise bandwidth is either set by an IF filter in a superheterodyne receiver, or, in the case of a synchronous digital receiver, in the matched filters. Antenna bandwidth may occasionally be chosen to reject interferers, but this is a different concept entirely. I suspect in consumer wifi antennas, if bandwidth is designed at all beyond "enough", it would be to maximize impedance bandwidth to allow for loading.
It is also patently incorrect to say that "the only thing you can do by changing your antenna size is to screw up the efficiency". Larger antenna apertures provide more focusing, and therefore more directivity, and for a given efficiency more gain. Considering an array of antennas as a single antenna is one example of this. Another would be dish or horn type antennas, which can be made with larger and larger apertures for more and more gain.
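To put a number on the aperture point (an idealized dish with an assumed efficiency; not a real design): directivity scales with aperture area, D = 4*pi*A_e/lambda^2, and gain is efficiency times directivity.

    import math

    # Bigger aperture -> more directivity -> more gain, for fixed efficiency.
    c = 3e8
    f = 2.4e9
    lam = c / f
    eta = 0.6                               # assumed aperture efficiency

    for diameter in (0.3, 1.0, 3.0):        # dish diameters in meters
        A = math.pi * (diameter / 2) ** 2   # physical aperture area
        D = 4 * math.pi * A / lam ** 2      # directivity (linear)
        G_dBi = 10 * math.log10(eta * D)    # gain = efficiency * directivity
        print(f"{diameter} m dish: {G_dBi:.1f} dBi")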
There are other issues throughout the article, but it's really not my intention to go line by line. It's also not my intention to discourage you from investigating this stuff further. I'm a practicing electrical engineer, and I feel every day all I learn is how much more I have yet to learn. Suffice it to say many of these concepts are very nuanced and complex (more so than your article reflects), so I applaud you for trying to figure this out on your own. Just don't think it will be nearly as simple as learning a new programming language. A couple basic things I would recommend you look up: the difference between directivity and gain, and the difference between coherent vs non-coherent combination. If you can, find someone who really, really knows this stuff to bounce your thoughts off of. Best of luck!
A traditional radio would be like an LED and a photocell; you can blink the LED and detect it with the photocell, but the maximum data rate is fairly small.
MIMO is more like vision. With a 2D array of pixels and a 2D array of photocells, you can transmit a vast amount of information through the same volume of space, and nearby equipment can reuse the same colors without much interference.
With smarter radios and more antennas, radio still has a long way to go before we hit the universe's data cap.
Except everything bounces around, so it's like trying to decipher TV by watching the flickering on the wall. :) Actually, MIMO makes use of this (it works better in 'bouncy' non-line-of-sight multipath environments). With just a small antenna separation, the spatial channel characteristics can vary a lot as the signals might take completely different paths, and the receiver is able to separate them from each other.
It's a good way of understanding spatial streams. Each pixel is a highly-directional transmit antenna, and each of your rods and cones is a highly-directional receiver. The result is 1920x1080 streams that you can interpret all at once!
(Of course, given the extremely wide bandwidth of visible light and the high SNR between your TV and other light sources, you don't need that many spatial streams to beam the information representing HDTV across space. But humans.)
MIMO doesn't operate via "spatial" streams in that sense; the antennas are not "highly directional" to the extent that a tx antenna and an rx antenna get an interference-free channel.
True on a very technical level. MIMO is required to implement spatial streams. It's also the foundation for beamforming and diversity coding. But technically, MIMO itself is not beamforming or spatial streams.
So while not a 100% perfect example, I don't think the original comment is all that terrible.
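For what it's worth, here is a minimal sketch of what the spatial streams actually rely on (a made-up 2x2 channel matrix and a zero-forcing receiver, purely illustrative): as long as multipath keeps the channel matrix well-conditioned, the receiver can separate the two streams, even though nothing is physically "aimed" anywhere.

    import numpy as np

    # Toy 2x2 spatial multiplexing: two streams, two rx antennas, a random
    # "rich multipath" channel H. A zero-forcing receiver (H^-1) separates
    # the streams as long as H is well-conditioned.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

    x = np.array([1 + 1j, -1 + 1j])    # two QPSK-ish symbols, one per stream
    noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
    y = H @ x + noise                  # what the two rx antennas actually see

    x_hat = np.linalg.solve(H, y)      # zero-forcing: undo the channel mixing
    print(np.round(x_hat, 2))          # close to the transmitted symbols
    print("condition number:", round(np.linalg.cond(H), 1))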
There is a lot not to like about this article. With tutorial articles, I am of the opinion that all analogies are false and therefore bad. And resorting to quantum physics to describe the impact of Maxwell's equations is another bad signal.
The thing to notice about SNR is that you can increase it by increasing amplification at the sender (where the background noise is fixed but you have a clear copy of the signal) but not at the receiver.
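A quick numeric illustration of that claim (idealized: the noise is added in the channel, before any receiver gain): gain at the receiver multiplies signal and noise alike, while extra transmit power only scales the signal.

    import math

    signal = 1.0          # arbitrary units
    channel_noise = 0.1   # fixed background noise added before the receiver

    def snr_db(s, n):
        return 10 * math.log10(s / n)

    print(snr_db(signal, channel_noise))             # baseline: 10 dB
    print(snr_db(2 * signal, 2 * channel_noise))     # 2x rx gain: still 10 dB
    print(snr_db(2 * signal, channel_noise))         # 2x tx power: 13 dB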
This presumes that the signal that you are looking for is above the detectability threshold at the receiver. It may well not be, if you are trying to capture a weak signal.
Let's put aside the maser amplifiers used in radio astronomy and the size of radio telescope antennas. Or the fact that your wifi router or dongle has an amplifier inside it. Let's do an experiment.
I have two instances of Kismet running in my lab here in the leafy suburbs. One has a 3.5-inch antenna attached to a fit-PC with internal wifi; the other has a 14-inch antenna attached to an Alfa adapter, with Kismet running on Kali in a VM on my Mac laptop. The 14-inch antenna is successfully decoding 167 wifi networks and the 3.5-inch antenna sees 35 networks. I don't know what the relative gain ratings of these antennas are, or what the relative quality of the radios is, but the dramatic difference on the receive side is pretty telling.
"all analogies are false and therefore bad. And resorting to quantum physics to describe the impact of Maxwell's equations is another bad signal."
That (EDIT: The analogy used in the article) is not just a good analogy, but summarizes our best understanding of both light and quantum physics. (Feynman knew exactly what he was talking about -- he invented/discovered some of the fundamentals). Whether you'd use quantum physics as an analogy to explain light, or light as an analogy to explain quantum physics is a matter of taste -- which one you find more intuitive and which less intuitive -- which depends on what you've been told by others previously.
I disagree with this statement. If you try to learn everything from first principles, you'll never get anywhere. Students need analogies to understand the big picture while learning the details.
"That is not just a good analogy, but summarizes our best understanding of both light and quantum physics"
And the Lebesgue integral is a more rigorously defined operator than the Riemann integral, but would you use it to introduce calculus to first-year students? Of course not - you teach the basics, and in grad school you let the students that need to worry about the more complicated stuff take Real Analysis.
There are certainly connections between interferometry and quantum physics. However, I don't see how the author's explanation of the subject is enhanced by using quantum physics. For the purpose of this article, classical physics explains the phenomenology just fine. As I said in another comment below, and wglb said above, appealing to quantum physics when it is entirely unnecessary is a bad smell, and is usually nothing more than a distraction to make the author sound more sophisticated.
The Lebesgue integral can be explained as intuitively as the Riemann integral (https://en.wikipedia.org/wiki/Lebesgue_integration#mediaview...). One can certainly have the intuition for both without too much knowledge of real analysis. And in general, I believe that analogies are very useful.
"The author explains in the article how to understand the behaviour of light."
That isn't the argument. What I'm arguing is the (lack of) utility of the explanation. In what way does it enhance, support, or in any way contribute to the article? His argument is essentially:
A. We think of light as traveling in straight lines.
B. But because quantum physics, it doesn't! It travels along infinitely many paths.
C. But all of these paths cancel out except the straight line path.
D. And that's how beamforming works. Except it's beam-un-forming.
So he brings up a quantum phenomenon to complicate the scenario, then immediately reduces it back to the original scenario, without ever explaining how that original scenario works (i.e. how waves add together to create constructive and destructive interference: note that the words "wave" and "phase" never occur in the same paragraph), then claims that this is somehow elucidating. So what's the benefit?
tl;dr: One puzzler is that an antenna's coupling to radiation is frequency dependent, while Johnson noise in the microwave regime is flat with frequency. Then, considering detailed balance, how does an antenna matched to a terminated coax cable establish thermal equilibrium with space? What happens is that the 1/f^2 of the antenna's effective aperture is cancelled by the f^2 (in the long-wavelength regime) of the Rayleigh-Jeans blackbody radiation formula.
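A numerical check of that cancellation (isotropic antenna, single polarization, Rayleigh-Jeans limit; the frequencies are arbitrary): the lambda^2/(4*pi) effective aperture falls as 1/f^2, the Rayleigh-Jeans brightness rises as f^2, and the received power per unit bandwidth comes out to kT at any microwave frequency, matching the Johnson noise of the terminating resistor.

    import math

    k_B = 1.380649e-23
    c = 3e8
    T = 290.0                          # kelvin

    def received_psd(f):
        # Isotropic antenna in a blackbody cavity, one polarization:
        # P/B = (1/2) * A_e * B_nu * 4*pi, and the f dependence cancels.
        lam = c / f
        A_e = lam ** 2 / (4 * math.pi)           # effective aperture ~ 1/f^2
        B_nu = 2 * k_B * T * f ** 2 / c ** 2     # RJ brightness       ~ f^2
        return 0.5 * A_e * B_nu * 4 * math.pi

    for f in (1e9, 10e9, 100e9):
        print(f, received_psd(f), k_B * T)       # all equal k*T, in W/Hz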
It's _QED_ (Quantum Electrodynamics), which is both a book (compiled lecture notes) and a video series; it's on YouTube. It was an informal lecture series for non-science majors, with very little math.
I think the "phased-array radar" you're referring to would be more precisely called "active-phased-array radar". Yes, the concepts are similar. By providing a calibrated time delay between elements of the array, the direction of focusing can be controlled. I'm not sure how this is done in practice for actual radars. If it is done in hardware, the array would have a single steerable pattern. If instead each array element has a clock-synchronized direct conversion digitizer, the beamforming can be done in the digital domain and the number of logical beams would be limited by the DSP capacity available.
That is a good summary. Some radars use phase shifters, and some true time delay elements. They are equivalent for narrow bandwidths, but the true time delay is needed for wide bandwidths.
In case anyone else is interested: the distinction between phase delay and time delay comes into play if bandwidth is large as you say, but in one more case as well. If the array is large enough that the time delay across the array is on the order of a symbol period you must use a true time delay. If you use a phase delay instead of a true time delay, energy from separate symbols will be added together to make the final bit decision and the resulting ISI will increase BER.
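A small squint sketch to go with that (uniform linear array, phases computed at a nominal carrier f0 but applied across a band; all numbers assumed): the phase-shifter beam only points where intended at f0, while applying the actual time delays keeps the peak in the steered direction at every frequency.

    import numpy as np

    # Phase shifters set for f0 versus true time delays, both evaluated at
    # a frequency 10% above f0. The phase-shifter beam squints off target.
    c = 3e8
    f0 = 10e9                          # nominal carrier (assumed)
    N = 16
    d = 0.5 * c / f0                   # half-wavelength spacing at f0
    n = np.arange(N)
    theta0 = np.deg2rad(30)            # intended steering direction

    tau = n * d * np.sin(theta0) / c   # true time delays
    phi0 = 2 * np.pi * f0 * tau        # phases a phase shifter would apply

    def peak_direction(f, use_ttd):
        thetas = np.deg2rad(np.linspace(-90, 90, 3601))
        prop = 2 * np.pi * f * n[:, None] * d * np.sin(thetas)[None, :] / c
        comp = (2 * np.pi * f * tau)[:, None] if use_ttd else phi0[:, None]
        af = np.abs(np.exp(1j * (prop - comp)).sum(axis=0))
        return np.rad2deg(thetas[np.argmax(af)])

    f = 11e9
    print("phase shifters:", round(peak_direction(f, False), 1))  # ~27 deg, squinted
    print("true time delay:", round(peak_direction(f, True), 1))  # ~30 deg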
[1] http://www.johansontechnology.com/images/stories/ip/rf-anten...
[2] ftp://ftp2.nearson.com/Drawings/Antenna/SG102N-2450V2.pdf
[3] http://upload.wikimedia.org/wikipedia/commons/d/dc/Large_Dri...