
Check against FLT_EPSILON. Oh boy.

The reason is floating point precision errors, sure, but that check is not going to solve the problem.

Take the difference of two numbers with large exponents, where the result should be algebraically zero but isn't quite zero numerically? This check fails to catch it. Take the difference of two numbers with very small exponents, where the result is not actually algebraically zero? This check says it's zero.
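
A minimal Go sketch of both failure modes, using 0x1p-23 (the value C's float.h gives FLT_EPSILON for 32-bit floats):

  package main

  import (
      "fmt"
      "math"
  )

  const fltEpsilon = 0x1p-23 // C's FLT_EPSILON: 1 ULP at 1.0 for float32

  func nearlyZero(x float32) bool {
      return x > -fltEpsilon && x < fltEpsilon
  }

  func main() {
      // Near 1e8, adjacent float32 values are 8.0 apart, so a result
      // that is off by one rounding step sails past the check.
      big := float32(1e8)
      oneStep := math.Nextafter32(big, math.MaxFloat32) - big
      fmt.Println(oneStep, nearlyZero(oneStep)) // 8 false

      // Near 1e-8, two values that differ by a factor of three still
      // count as "zero" under the same check.
      small := float32(3e-8) - float32(1e-8)
      fmt.Println(small, nearlyZero(small)) // 2e-08 true
  }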


Yeah, at the least you'll need an understanding of ULPs[0] before you can write code that's safe in this way. And understanding ULPs means understanding that no single constant is going to be applicable across the FLT or DBL range.

[0] https://en.wikipedia.org/wiki/Unit_in_the_last_place
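
As a sketch of what a ULP-aware check looks like (assuming finite float32 inputs of the same sign; a real version must also handle NaN, infinities and signed zero):

  package main

  import (
      "fmt"
      "math"
  )

  // ulpDiff counts the representable float32 values between a and b.
  // Sketch only: assumes both inputs are finite and share a sign.
  func ulpDiff(a, b float32) uint32 {
      ia, ib := math.Float32bits(a), math.Float32bits(b)
      if ia > ib {
          return ia - ib
      }
      return ib - ia
  }

  func main() {
      // A tolerance in ULPs scales with the operands' magnitude,
      // unlike any fixed epsilon (the 4 here is an arbitrary choice).
      fmt.Println(ulpDiff(1e8, 1e8+8) <= 4) // true: adjacent values
      fmt.Println(ulpDiff(1e-8, 3e-8) <= 4) // false: far apart in ULPs
  }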


You can add the guideline, but then people would just skip the "I asked" part and post the answer straight away. Apart from the obvious LLM-esque structure of most of those bot answers, how could you tell when someone has polished an answer so much that it reads like a genuine human one?

Obligatory xkcd https://xkcd.com/810/


15 years ago ... needs updating in regard to how things panned out.

The point still stands. The human body still isn't going to change. That's why an insulin pump can afford all kinds of rigorous engineering, while web-facing infrastructure needs to be able to adapt quickly to change.


> That's why an insulin pump can afford all kinds of rigorous engineering, while web-facing infrastructure needs to be able to adapt quickly to change.

The only reason we have a web in the first place is rigorous engineering. The whole thing was meant to be decentralized; if you're going to purposefully centralize a critical feature, you don't get to say "oh, we need to adapt quickly to change, so let's abandon rigor".

That's just irresponsible. In that case we'd be better off without CF. And I don't see CF arguing this; in fact, I'm pretty sure CF would be more than happy to expend the extra cycles, so maybe stop trying to make them look bad?


The best thing about gccgo is that it is not burdened with the weirdness of golang's calling convention, so the FFI overhead is basically the same as calling an extern function from C/C++. Take a look at [0] to see how badly golang's cgo call latency compares to C. gccgo is not listed there, but from my own testing it's on par with C/C++.

[0]: https://github.com/dyu/ffi-overhead
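
For reference, the kind of call such benchmarks measure, as a minimal cgo sketch (the C function is a made-up stand-in, not from the benchmark repo):

  package main

  /*
  // trivial C function compiled by cgo, standing in for a real library
  static int add(int a, int b) { return a + b; }
  */
  import "C"

  import "fmt"

  func main() {
      // With the gc toolchain, every C.add call switches off the
      // goroutine stack, which is what the benchmark latency reflects;
      // gccgo uses GCC's native calling convention, so the same call
      // costs roughly as much as a plain C function call.
      fmt.Println(C.add(40, 2)) // 42
  }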


> The best thing about gccgo is that it is not burdened with the weirdness of golang's calling convention

Interesting. I saw Go breaking from the C ABI as the primary reason to use it; otherwise you might as well use Java or Rust.


Isn't that horribly out of date? More recent benchmarks elsewhere, performed after some Go improvements, show Go's C FFI having drastically lower overhead, by at least an order of magnitude, IIUC.


Might as well post it here since it's open access.


LAPACK is still Fortran, even in OpenBLAS, where they only have f2c-translated code and hardly any assembly kernels.


I always install WinCompose specifically for this. A Greek letter is then as simple as pressing Compose, then *, then the related Roman letter (e.g. Compose, *, a for alpha). Though I still struggle to remember which key maps to which for the letters with no real one-to-one mapping (like theta).


Brings back memories. There was a time when the fork, libav, became the default on Ubuntu, and ffmpeg commands would print "this program is no longer maintained" or words to that effect. That was how I learned there was a fork, and I thought ffmpeg was going to die as a result, since libav initially saw heavy development activity compared to ffmpeg. Surprise: ffmpeg outlived its fork!

This post talks about the situation back then: https://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html


It's really unintuitive how a mere increase in the number of dimensions can totally change the scale of things. This is the crux of the curse of dimensionality in machine learning. If you simply stacked the bills high, the stack would almost reach the sun (but not quite); yet laid out as roughly a square (20000 x 50000 bills), it spans only a couple of miles per side, and packed into a rough cube of 100-bill stacks, something like 65 x 150 x 1026, it comes out to only about 12 yards per side.

EDIT: I screwed up the 1D calculation. A stack of 10 million bundles of 100 $100 bills (10^9 bills) only reaches about 67 miles, nowhere near the sun.
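
A sketch of the arithmetic in Go, using rough US-bill dimensions (6.14 x 2.61 x 0.0043 inches; figures assumed, not from the thread):

  package main

  import "fmt"

  // Rough dimensions of a US bill, in inches (assumed figures).
  const (
      length    = 6.14
      width     = 2.61
      thickness = 0.0043
      inPerMile = 63360.0
      inPerYard = 36.0
  )

  func main() {
      const bills = 1e9 // 10^9 hundred-dollar bills

      // 1D: one tall stack of every bill.
      fmt.Printf("stack:  %.0f miles tall\n", bills*thickness/inPerMile) // ~68

      // 2D: a 20000 x 50000 carpet of bills.
      fmt.Printf("carpet: %.1f x %.1f miles\n",
          20000*length/inPerMile, 50000*width/inPerMile) // ~1.9 x ~2.1

      // 3D: a 65 x 150 base of stacks, each 1026 bundles of 100 bills tall.
      fmt.Printf("cube:   %.0f x %.0f x %.0f yards\n",
          65*length/inPerYard, 150*width/inPerYard,
          1026*100*thickness/inPerYard) // ~11 x ~11 x ~12
  }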


Aren't a clock and a gyroscope precisely the two things the missile needs to know where it is at all times? It can then know where it isn't, by subtracting where it is from where it isn't (or where it isn't from where it is), to get the positional deviation. Combine that with the clock deviation and it can derive velocity and acceleration, then use all three pieces of information to generate corrective commands.


In principle, yes. That's called inertial navigation, and while it works for a while, the errors involved are big enough that without a feedback loop you'll be happy if it lands anywhere near the target area. And with the payload sizes of the day, "near" wasn't nearly precise enough, so these were only usable as terror weapons. Unfortunately, not much seems to have changed in 80 years, in spite of our ability to target much more precisely.
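
To see why the error grows without a feedback loop: a constant sensor bias b, integrated twice, gives a position error of b*t^2/2. A toy Go sketch (the bias value is made up for illustration):

  package main

  import "fmt"

  func main() {
      const (
          bias = 0.001 // m/s^2, a made-up constant accelerometer bias
          dt   = 0.1   // s, integration step
      )
      v, x := 0.0, 0.0
      for t := 0.0; t < 300; t += dt { // five minutes of flight
          v += bias * dt // bias integrates into a velocity error...
          x += v * dt    // ...which integrates into a position error
      }
      // ~45 m from a tiny bias alone; real sensor errors of the era
      // were orders of magnitude larger, hence "anywhere near the target".
      fmt.Printf("position error after 5 min: %.0f m\n", x)
  }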


Yes, but not precisely; there is an uncertainty that is difficult to remove.

However, if you combine it with modern image recognition and you know your target, it will probably be enough.

