Hacker News

Without -0 or Infs, IEEE-754 would be significantly worse for doing scientific computations. I personally find these to be among the most useful and important features of floating point arithmetic.


Do you have an example? I have a hard time believing this, since any scientific data has uncertainty that makes it essentially random which sign your zero is supposed to have. Similarly with Inf: if any intermediate result gives an Inf, you've lost all precision, and the final result is probably going to be NaN anyway.


> random as to what sign your zero is supposed to be

The purpose of negative zero is to preserve the sign of underflow, so you can e.g. get the branch cuts right when you are implementing some complex function. Cf. https://grouper.ieee.org/groups/msc/ANSI_IEEE-Std-754-2019/b...

Functions with singularities are ubiquitous in physics and many other branches of science, https://dlmf.nist.gov
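A tiny illustration of the point (Python sketch; the cmath docs state that its branch cuts respect the sign of zero):

```python
import cmath

# sqrt has a branch cut along the negative real axis; the sign of the
# zero imaginary part records which side of the cut we approached from.
above = cmath.sqrt(complex(-4.0, 0.0))   # approached from above: +2j
below = cmath.sqrt(complex(-4.0, -0.0))  # approached from below: -2j
```

Without -0, both inputs would be the same number and the second answer would be unobtainable.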


I understand that that is the purpose, but the problem is as soon as you have any error in your computations, you lose the sign of the zero. From the paper you posted:

(ii) Rounding errors can obscure the singularities. That is why, for example, sqrt(z^2-1) = sqrt((z+1)(z-1)) fails so badly when either z^2=1 very nearly or when z^2<0 very nearly. To avoid this problem, the programmer may have to decompose complex arithmetic expressions into separate computations of real and imaginary parts, thereby forgoing some of the advantages of a compact notation.

(iii) Careless handling can turn infinity or the sign of zero into misinformation that subsequently disappears leaving behind only a plausible but incorrect result. That is why compilers must not transform z-1 into z-(1+0i) as we have seen above, nor -(-x-x^2) into x+x^2 as we shall see below, lest a subsequent logarithm or square root produce a nonzero imaginary part whose sign is opposite to what was intended.

Branch cuts are a fundamentally dumb idea in finite-precision math: if your input has any uncertainty (which it always does), you don't know which side of the branch cut you are on. With a massive amount of manual work you can sometimes hack the system well enough for extremely contrived cases, but in practice it will never be useful, because within the broader scope of the problem, if you've hit a singularity you don't know which side you hit it from.

Functions with singularities are ubiquitous in physics and other branches of science, but symbolic math is needed to correctly track them.


Using symbolic math requires an amount of CPU/memory that is exponential in the number of operations applied, so it is often (usually) not a practical or even possible choice.

You may think signed zero is a "fundamentally dumb idea" but it has helped a lot of people to accomplish their work, so... shrug.

The basic issue is that numerical analysis is hard, and implementing numerical algorithms involves plenty of edge cases, so whoever is writing that code needs to have a pretty good understanding of the problem and the tools, and to be willing to spend time on careful reasoning. Picking a slightly (or very) different number representation with different trade-offs doesn't really make it easier.

Even if you (somehow) had a perfect number representation with infinite resolution and nearly free operations, you'd still have to be careful when implementing numerical algorithms, and you'd have to understand error analysis and numerical stability to design new ones.
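A small concrete instance of that kind of care, not tied to any particular representation: compensated (Kahan) summation, sketched in Python. Naively summing many rounded terms accumulates error linearly; the compensation variable recovers the low-order bits lost at each step.

```python
def kahan_sum(xs):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in xs:
        y = x - c            # fold the previous step's error back in
        t = total + y        # low-order bits of y may be lost here...
        c = (t - total) - y  # ...but this recovers them exactly
        total = t
    return total
```

For example, `kahan_sum([0.1] * 1000)` stays within a few ulps of 100.0, while a naive running sum drifts noticeably further.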


There's also Arb if you need guaranteed intervals to make sure you are on the right side of the branch cut. I completely agree that care and numerical analysis are necessary. My point is that once you've done the analysis, your actual answers aren't infinite (hopefully), and if your calculation returns Inf, your program is wrong. Given that, you might as well make the value of your function exactly on the branch cut NaN and be done with it.
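That last suggestion might look like this hypothetical wrapper (`sqrt_strict` is my name, not a standard function):

```python
import cmath
import math

def sqrt_strict(z: complex) -> complex:
    # Refuse to pick a side: exactly on the branch cut (negative real
    # axis, zero imaginary part of either sign), answer NaN.
    if z.real < 0.0 and z.imag == 0.0:   # -0.0 == 0.0, so both signs match
        return complex(math.nan, math.nan)
    return cmath.sqrt(z)
```

Off the cut it agrees with `cmath.sqrt`; on the cut, the caller is forced to notice.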


> if your calculation returns Inf, it means that your program is wrong

In my own programs, e.g. working with rational approximations, with functions that are meromorphic on large parts of their domain (possibly with branch cuts), or representing the sphere via the stereographic projection, Inf is a commonly expected and entirely correct result, both of intermediate calculations and of final results, and it is also a commonly expected program input. In my experience reading other people's code, it is relatively common for Inf to be a perfectly reasonable result of a numerical calculation. In some contexts, underflow to 0 or overflow to Inf is a more precise result than 1.000000000000000. It all depends on what your numbers represent and what you are trying to compute.

(In some programs, it is even fine and ordinary to expect 0/0 or Inf/Inf resulting in NaN in intermediate computations, though these typically need to be checked for and special-cased so they can be recomputed to get the correct non-NaN but possibly Inf result.)
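A toy version of that pattern (hypothetical function, pure Python): naively evaluating x/(1+x) at x = Inf gives Inf/Inf = NaN, but detecting the Inf input lets the code recompute the correct limit.

```python
import math

def ratio(x: float) -> float:
    # Naive x / (1 + x) turns an Inf input into inf/inf == nan,
    # so the Inf case is detected and recomputed as the limit value.
    if math.isinf(x):
        return 1.0           # x/(1+x) -> 1 as x -> +/-inf
    return x / (1.0 + x)
```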



