UBC does, or at least did, implement it across many of their physics and math courses. I'm not personally aware of ongoing discussions, but there are many faculty members and collaborators there working on this stuff, as well as many students who served as the guinea pigs.
Thanks for this; I have only just had a chance to have a look. (I am less interested in strategic thinking than I am in details. I have a text and I want to make it work better in some of these ways. So I am asking: What kinds of questions? What kinds of graphics? ...) This looks very helpful.
So you're correct that there is a Fourier-transform analogy for the uncertainty principle, but in the context of FMCW lidars (which brought up the question of velocity vs. position uncertainty), the measurement of frequency actually determines both the position and the velocity. It's actually a problem for most FMCW lidars: you only get 1-2 frequency measurements and somehow need to disentangle which part is the range frequency and which is the Doppler (velocity) frequency. A massive amount of effort has been put into developing lidar methods and architectures that solve this problem well.
But in summary, the uncertainty principle as encountered in quantum mechanics has ~nothing to do with a trade-off between range accuracy and velocity accuracy. It's possible that it could come into play in a very detailed treatment of FMCW lidar SNR, in the context of counting return photons, but it's not generally necessary even there. The time-frequency uncertainty plays a role in that the range and velocity resolution both get better the longer you stare at a signal. So for a given amount of reflected light, at a given range/velocity, there is a fundamental lower bound on how long you must integrate to a) get a signal at all and b) achieve a desired precision.
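The time-frequency point can be sketched numerically: the frequency resolution of an observation of length T is on the order of 1/T, so staring longer sharpens both range and velocity estimates. The numbers below are purely illustrative, not from any particular lidar.

```python
# Back-of-envelope only: an FFT over an observation window of length
# t_obs has a bin width of roughly 1/t_obs, so both range and velocity
# estimates sharpen the longer you stare at the return.

def freq_resolution_hz(t_obs_s):
    """Approximate frequency resolution (Hz) for a window of t_obs_s seconds."""
    return 1.0 / t_obs_s

for t_obs in (1e-4, 1e-3, 1e-2):
    print(f"T = {t_obs:g} s -> df ~ {freq_resolution_hz(t_obs):g} Hz")
```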
It seems to be an extract from "Foundations of Field Computation" by Bruce MacLennan, if you want to read the whole thing: http://web.eecs.utk.edu/~bmaclenn/FFC.pdf
He and Dr. Marcolli have a bunch of interesting material on their websites if you like this sort of thing.
It really depends on the type of RADAR being used. If it's an FMCW radar, typically you will get a beat signal whose frequency corresponds to the target range. That frequency will vary with range, and in order to be well resolved you have to observe it for something like one period (1/frequency). So that puts a fundamental lower bound on how long you have to integrate. There are lots of tricks to improve things, and there are lots of variants of the standard radar hardware/methods, but I suspect that's what OP was referring to.
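For concreteness, here's a sketch of the standard sawtooth-FMCW beat-frequency relation, f_b = 2·B·R / (c·T). The chirp bandwidth, chirp period, and target range below are made-up example numbers, not any specific unit's parameters.

```python
C = 3.0e8  # speed of light, m/s

def fmcw_beat_hz(range_m, chirp_bw_hz, chirp_period_s):
    """Beat frequency f_b = 2*B*R / (c*T) for a sawtooth FMCW chirp."""
    return 2.0 * chirp_bw_hz * range_m / (C * chirp_period_s)

# Hypothetical unit: 1 GHz chirp over 1 ms, target at 150 m.
f_b = fmcw_beat_hz(150.0, 1e9, 1e-3)  # -> 1 MHz beat tone
t_min = 1.0 / f_b                     # ~1 us: one beat period to resolve it
```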
It depends entirely on what configuration of RADAR you've got, and what you're pointing it at. You can build a system that will result in returns with beat frequencies of just about anything, depending on the Tx modulation and the range. The question is whether it will be useful for your application in terms of range/velocity resolution, latency, integration time etc.
Like I said, I was just speculating about why OP specifically mentioned 10-100ms. The light does indeed travel pretty quickly (although, as anybody in the radar/lidar industry will tell you, not nearly quickly enough!), however the round trip time is just the minimum latency you have to eat to get any information about your target. Once you have light coming back, you need to integrate for some amount of time to achieve your desired SNR. That time could be very small, or it could be infinite if there are no photons coming back. Let's randomly say that your target range is such that the round trip time is 1us, and your Tx bandwidth is situated such that the beat frequency of the return is 1kHz. Your job is to estimate that frequency, so you have to observe the waveform (by integrating samples for an FFT, typically) for at least one cycle of the RF waveform. That would require that you wait 1us for the light to fly, and then wait another 1ms for the RF to cycle once. So your measurement latency is ~1ms. Now that's not 100ms, but perhaps you need more than one cycle to give a good estimate of the frequency, and then even more because the target is faint and there aren't many photons coming back. You could plausibly arrive at some much higher number, like 10-100ms.
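Spelling out the arithmetic above (the numbers are hypothetical, as in the text):

```python
# Latency sketch: a 1 us round trip plus one cycle of a 1 kHz beat tone.
round_trip_s = 1e-6          # light flight time out and back
beat_hz = 1e3                # beat frequency of the return
one_cycle_s = 1.0 / beat_hz  # minimum observation: one beat cycle
latency_s = round_trip_s + one_cycle_s
# ~1.001 ms total: the beat period, not the flight time, dominates.
# Needing N cycles (or more photons) multiplies the second term.
```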
I'm not sure if that was OP's point, but that's all I'm saying ;-)
Yeah, it definitely can be shorter than 100ms. See my sibling comment. It just depends on the type of radar being used and the target range/velocity. Certainly for shortish range targets and mmwave radars on reasonably reflective targets you can get a signal with decent SNR in shorter time frames.
Actually most pulsed lidar systems use pretty high peak powers, which result in an eye-safe average power only because they are on for a sequence of very short pulses (~1ns). It's pretty unlikely OP is actually experiencing vision problems due to lidars, but it's probably possible for some kind of weird interaction to happen in the eyeball due to the high power pulses. Even so, it's unlikely to be causing any damage, temporary or otherwise. I would hope it doesn't happen to him/her while driving or something, though.
I suppose it will be a good while before we can approach an empirical proof for this sort of thing, since FMCW lidars are still very scarce, even more so than pulsed lidars right now. However, even if every car does have an FMCW lidar, the conditions required to get them to interfere with each other are:
a) Have identical laser wavelength. Not just '905nm' or '1550nm', but _precisely_ the same wavelength. This is very hard to do even if you try.
b) Have a coincident beam path. Again, this needs to be very precisely aligned.
c) Have an overlapping coherence area. This is a bit technical, but it is a higher bar than just having spots spatially overlapping.
d) Have coherent, matching phase fronts at the detector. Again, this is a fairly technical subject; these properties vary along the beam path and transversely, and they also vary with time, temperature, and many other things. The source lidar is able to 'interfere' with itself (in other words, get a signal) because it compensates for all of these effects with a local copy of the outgoing laser light. Other lidars' outgoing beams will in general, even for 100 cars, not be 'synced up' in this way.
Moreover, those conditions are just the intrinsic interference rejection properties of coherent lidars. Layered on top of that is that two lidars need to be using the same type of modulation, bullseye each other as they scan around the FOV, and provide enough photons to actually contribute to the signal. Then, if you satisfy all of those prerequisites, the interfering lidar also needs to overcome any heuristic/algorithmic rejection of spurious signals. Finally, if all of those conditions match up and you get a signal to punch through, and it's strong enough to overcome the true signal, and you can't tell that it's an erroneous signal, then it will result in one bad/missing point in a frame of thousands of points, present for one frame.
You're correct, however, that there is a saturation issue. If you just DOS the photodiodes with photons you can potentially prevent any signals from getting through. But again, this isn't super easy to do. The detectors will almost certainly be balanced, not single ended, and AC coupled. So you really have to blast the photodiode, effectively bringing it up to its damage threshold so it is just flooded with current and can't do anything, and/or just breaks. The raw laser light doesn't do much, both because the DC signal is rejected and because the balanced detectors will reject common mode signals (clearly you know this already). You also have the same issue with needing to shine into a very narrow field of view, at the right time, for long enough to matter.
Unfortunately, FMCW lidar is sweeping the lasers across the same band. You don't need the exact frequency, just a beat frequency that's within your detection BW.
Also balanced detectors have something called common mode rejection. This is not infinite. In high volume applications it’s difficult for this to be >25dB but you can buy some devices >35dB.
Given that lidar dynamic range is ~100dB, you will definitely see the DC. I've not thought about this too much, but it seems like an issue for the AGC, even if your demodulator won't be bothered by it.
It's true that the laser frequency is sweeping, but it very well may not be over the same band. The sweep bandwidth in a typical lidar is likely in the 1-10GHz range. The carrier frequency of the laser that this modulation is riding on is probably in the neighborhood of 200THz. Let's say you're using a telecom laser at 1550nm. The actual wavelength of that laser will be centered on some channel in the 1530-1580nm band, with each channel spaced by say 100GHz. So already each laser might intentionally be in a different channel, depending on chance and how many cars are there. But even if they are in the same channel, the chirp bandwidth is small compared to the channel bandwidth, so there will likely be at most only partial overlap, depending on where the respective center frequencies actually are. Unless your lidar is using a very expensive, very fiddly laser system, this center frequency will be drifting around within the channel all the time. It varies with temperature, mechanical stress, output power and a bunch of other things, depending on the type of laser. However, even if the lasers are magically in the same channel, and perfectly locked to the same center frequency, you still need the light to be coherent to produce an interfering RF signal. They will not be coherent.
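To put rough numbers on the channel argument (illustrative arithmetic only, using the band and grid figures mentioned above):

```python
C = 3.0e8  # speed of light, m/s

# Frequency span of a 1530-1580 nm band on a 100 GHz channel grid:
f_hi_hz = C / 1530e-9                      # ~196 THz
f_lo_hz = C / 1580e-9                      # ~190 THz
n_channels = (f_hi_hz - f_lo_hz) / 100e9   # ~62 channels in the band

# Even a 10 GHz chirp occupies at most ~10% of one 100 GHz channel,
# so two lasers landing in the same channel still may not overlap.
chirp_fraction = 10e9 / 100e9              # 0.1
```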
Certainly the balanced detectors will have finite CMRR. In general you definitely have to make a good detector but it doesn't need to reject to 100dBc. A photodiode might have 100dB of dynamic range, but most likely your RF front end does not, and more importantly for most applications you will be dominated by photon shot noise, so you don't need to push common mode signals all the way to your electronic noise floor. 35dB of rejection works wonders.
The tools to do this exist. It's usually called 'blind source separation', as in "What are the N distinct audio signals which sum up to best explain a given compound signal, without knowing the possible source signals ahead of time." Usually it's done with some sort of matrix factorization, Principal Component Analysis, and/or Independent Component Analysis. It's also used for non-audio signals, like pulling the discrete firings out of noisy EEG signals. It's definitely not a foolproof solution but in a lot of applications it can get you going, at least.
By the problem setup it isn't blind source separation. It is sound = song + other: a mixture model with two components.
Edit: If you know the song, it should be something simple: do a cross-correlation of the audio with the known song, find the peak, solve for the gain, and subtract the scaled and shifted song from the original track. It will be rubbish if the gain and timing have errors; you might need to do it in little chunks and interpolate the gains and shifts.
Edit 2: More generally, you might want to worry about the song having passed through some unknown transfer function (i.e. it is being played and recorded through shitty equipment). Then you have an interesting inverse problem. If everything is linear it will involve a regularized deconvolution. It will be tricky then.
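The first, single-gain version can be sketched like this in numpy. It's a minimal sketch: it assumes one constant gain, an integer-sample delay, and no filtering by the playback chain (the harder case from Edit 2 is not handled).

```python
import numpy as np

def subtract_known_song(mix, song):
    """Remove a known song from a longer recording: cross-correlate to
    find the delay, least-squares fit a single gain, then subtract."""
    # Peak of the full cross-correlation gives the song's delay in `mix`.
    corr = np.correlate(mix, song, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(song) - 1)

    # Align a copy of the song with the mix at that delay.
    shifted = np.zeros_like(mix)
    start, end = max(lag, 0), min(lag + len(song), len(mix))
    shifted[start:end] = song[start - lag:end - lag]

    # Least-squares gain, then subtract the scaled, shifted song.
    gain = np.dot(mix, shifted) / np.dot(shifted, shifted)
    return mix - gain * shifted, lag, gain

# Synthetic check: bury a "song" in background at a known delay/gain.
rng = np.random.default_rng(0)
song = rng.standard_normal(1000)
mix = 0.1 * rng.standard_normal(2000)
mix[300:1300] += 0.5 * song
residual, lag, gain = subtract_known_song(mix, song)
```

As the edits above note, real audio would need chunked gains/shifts and probably deconvolution; this only shows the core correlate-fit-subtract step.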
It is still reducible to the more general blind source problem, right? We can conveniently "forget" that we know what the sources are, so now we are blindfolded and can still use the same techniques to solve it.
Sorry I didn't see your responses until now. Indeed, there are many ways to slice the specific problem. I was specifically responding to the parent's statement:
> It would be a game changer if someone were to come up with a novel method of decomposing audio into discrete components
It's something that has been generally addressed and ~works. It will obviously depend on the specifics of the application, and yes if you can constrain the problem space further you ought to do better!
I find it odd to state that ASC's lidars work, and just cost too much until there is sufficient volume, but then turn around and say that the pulsed lidar groups with working products (that work better than flash lidar, in fact) are too expensive. Presumably the same applies to them. ASC's focal plane arrays may be ICs that scale well, but they still need lasers, electronics, housings, lenses, and more. Just like everyone else.
It also seems odd to state that Lumotive isn't novel. The fact that people have made SLM-based phased-array beam steering is hardly germane to whether Lumotive can put together a robust, wide-aperture, high-speed, high-resolution beam steering technology, let alone a complete lidar. There is simply a lot more to getting this stuff to work than just reading some paper from 2004 about SLM-based steering in the lab.
Also, no lidar OEMs have a "buy" button. You're suggesting that this is because it's all a scam? I think it's because they don't need or want to sell to you. They are all out making partnerships with large vendors. I'll grant that some companies do appear to have vaporware products (most famously Quanergy's solid state product), but the reason there are many companies working at it is because there is a need and lots of commercial potential. The commercial potential in this case comes from the automotive industry, so there is basically zero incentive to sell at a consumer level. Not even Velodyne, who definitely has real products, sells over the web. Even Ouster, who prides itself on being the "available now" high performance lidar company, doesn't have a "buy" button. Perhaps if you email their sales guys and then send them $4k they will send you a unit, but it remains that the business model isn't sustained on individual sales.
Finally, the continental lidar you linked has pretty bad angular resolution (~1deg) and no stated range. The latter probably being because flash lidar is at a fundamental disadvantage to scanned lidar. Instead of all the photons going to one place, they go everywhere. This scales very badly with range, so they will be power (SNR) limited. The only way to overcome this would be to have correspondingly more sensitive detectors, which I do not believe is the case. Even if they gang together many small flash lidars that look at narrow FOVs, those lidars would probably have to be close to one pixel wide to compete on SNR while staying eye safe, in which case you end up losing the "scales on a chip" economics. This might be one reason why Ouster still spins a pixel wide array rather than strobing many.
If you think that Zhang odometry paper is good, you should see the stuff his grad student came up with for his thesis[0]. The results look almost as good as the figures from the Ouster blog post! ;-)
If you're talking about how the human eye perceives light or dark objects, then yes it can be more complicated. Especially when colour is involved. That said, what they're talking about here when they say 'reflectivity' is the ratio of the power returned from an object to the power you shined onto it with your laser. The reflectivity will depend on the object surface texture, material, wavelength of light etc., but it only loosely corresponds to what you perceive as bright or dark.
The metric really just reflects (:-D) the signal-to-noise ratio and dynamic range of their sensor. Max range, min/max reflectivity, SNR, accuracy, and everything else are all intertwined, so it's very difficult to compare things on equal footing unless you know exactly how it was measured. LIDAR OEMs seem to have settled on a 10%/80% rule of thumb.
My point is, I'm not sure if 20% is actually a good threshold to be measuring. What are the levels of reflectance that will be encountered in practice? I.e. if a car is black (or splattered in dirt), what percentage are we talking about? Visually a lot of cars on the road are darker than "middle gray".
Amusingly, the question of "where to set the threshold" is actually a problem for pulsed lidar in general. What constitutes the pulse returning, and how do you know which pulse is dominant? In any case, my point was that I don't think they are implying that 20% is "the one true threshold" that determines if a LIDAR is sensitive enough. Indeed, many OEMs specify the bottom bracket at 10% already, and there will always be other variables such as weather and texture that will ruin your day no matter what threshold you choose.
As with software performance, it's all about reasonable benchmarking. If you're in some lidar application, for example building a self driving car, and you currently use Velodyne HDL-64 sensors that can register returns from 10% targets at 80m, then Livox's specifications give you a clue as to how their unit might compare in a similar circumstance. That's all. Past that, you have to rig up a test with the unit yourself and profile; it's the only way. I'd also add that many objects would appear different in brightness if you looked at them under a pure wavelength like 905nm, rather than the white light your eye sees.
All that said, your concerns aren't misplaced. One of the leaders in the 'new wave' lidar OEMs is Luminar, and one of their original value propositions was that they went to a different wavelength (1550nm) which has a higher eye-safe power limit. This means that they could pump out higher energy pulses, and thus get more photons back from low reflectivity targets such as tires and dark cars. The jury is still out on what works best to cover the real world range of reflectivities, largely because there are just a lot more variables at play than a simple threshold would imply.
https://cwsei.ubc.ca/