
I don't understand from your table how the combined use of GPS+GLONASS would have a worse 'best case', 2.00 for GPS alone vs 2.37 for GPS+GLONASS.


There probably is some general statistical explanation for why you can get worse accuracy from combining multiple sources than from each source by itself, but it seems irrelevant here.

In an actual receiver implementation you really get worse accuracy for a simpler reason: most of the hardware resources (i.e. receiver channels) are shared between GPS and GLONASS, so using GLONASS ties up resources that could otherwise track GPS, and vice versa.
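
To make that concrete, here's a rough sketch assuming a hypothetical receiver with a fixed pool of 12 shared tracking channels (the channel count is made up; real chipsets differ):

    # Hypothetical shared-channel receiver: one fixed pool of tracking
    # channels has to be split between whichever constellations are enabled.
    TOTAL_CHANNELS = 12  # assumed budget, purely illustrative

    def channels_per_constellation(enabled):
        """Split the shared channel pool evenly across enabled constellations."""
        return TOTAL_CHANNELS // len(enabled)

    print(channels_per_constellation(["GPS"]))             # 12 channels tracking GPS
    print(channels_per_constellation(["GPS", "GLONASS"]))  # 6 each: fewer GPS
                                                           # satellites tracked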


I don't think this is true. Most of the cheap GPS chips I see on sparkfun can track 60 satellites at once, and you're not going to be seeing that many at one time anyway.


I would imagine it's for the same reason that wearing two watches doesn't improve your knowledge of the current time. You're more likely to be within the larger margin of error (i.e., the later of the two readings is probably no later than the true time), but what can you say about the earlier one? Since either watch is equally likely to be correct, the only thing you can do is average them. But in the case where one of the watches was actually correct, averaging has just introduced error.
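
A minimal numeric sketch of that averaging argument (the offsets are invented): one watch happens to be exact, the other runs two minutes fast, and the average lands in between, better than the worse watch but worse than the better one.

    # Made-up example: averaging an exact clock with a biased one.
    true_time = 12 * 60.0          # minutes since midnight
    watch_a = true_time            # this watch happens to be exactly right
    watch_b = true_time + 2.0      # this one runs two minutes fast

    combined = (watch_a + watch_b) / 2.0

    print(abs(watch_a - true_time))   # 0.0 -> best single source
    print(abs(watch_b - true_time))   # 2.0 -> worst single source
    print(abs(combined - true_time))  # 1.0 -> better than the worst,
                                      #        worse than the best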


So, if I understand you correctly, the average error decreases but the best-case error actually increases?


I don't know the explanation, but I know this comes up a lot in biometrics. E.g. using fingerprints + facial geometry to identify someone is much less accurate than using either one alone. Bruce Schneier has written about this in his book Beyond Fear, and I'm sure there are explanations online somewhere.
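
One way this is usually explained is in terms of false-accept and false-reject rates: if the system requires both modalities to match (an AND rule), an impostor has to fool both sensors, but a single false reject is enough to lock out a legitimate user. A rough sketch with made-up, assumed-independent error rates:

    # Hypothetical error rates for two biometric modalities (numbers invented,
    # independence assumed for simplicity).
    finger_far, finger_frr = 0.001, 0.02   # false accept rate, false reject rate
    face_far, face_frr = 0.010, 0.05

    # AND fusion: accept only if BOTH modalities match.
    and_far = finger_far * face_far                   # impostor must fool both
    and_frr = 1 - (1 - finger_frr) * (1 - face_frr)   # one rejection fails the user

    print(and_far)  # ~1e-05 -> far fewer false accepts than either alone
    print(and_frr)  # ~0.069 -> but more false rejects than either alone

So whether the combination looks "more" or "less" accurate depends on which error rate you care about and how the two scores are fused.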


If you expect that more information will decrease performance, your algorithms are seriously borked.




