What you are saying here expresses some misunderstandings, and may confuse readers.
There's no reason to prefer floating point values with any particular exponent, as long as you don't get too close to the ends of the representable range, which for double precision is roughly googol^3 or 1/googol^3, i.e. about 10^308 or 10^(-308). (These numbers are absurdly big/small, and you are generally only going to reach them by multiplying a long list of big or small numbers together; if you need to do that you might need to occasionally renormalize the result and track the exponent separately, or work with logarithms instead.) Even for single precision, the limits are about 10^38 or 10^(-38), which is still a very, very wide range.
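For concreteness, here's a quick Rust sketch of those limits and of the take-logarithms workaround (names are mine, standard library only):

```rust
fn main() {
    // Exponent limits: f64 tops out near 1.8e308 (roughly googol^3),
    // f32 near 3.4e38.
    println!("f64 max: {:e}", f64::MAX);
    println!("f64 min positive: {:e}", f64::MIN_POSITIVE);
    println!("f32 max: {:e}", f32::MAX);

    // Multiplying a long list of tiny numbers underflows to zero...
    let p = vec![1e-200_f64; 3];
    let naive: f64 = p.iter().product();
    println!("naive product: {naive}"); // 0 -- 1e-600 is not representable

    // ...so sum logarithms instead (or renormalize periodically and
    // track the exponent on the side).
    let log_product: f64 = p.iter().map(|x| x.ln()).sum();
    println!("ln(product): {log_product}"); // ~-1381.55, perfectly finite
}
```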
> want to retain as much precision as possible and still use floats, don't store it in a float with range [0.0,100.0]. Store it with the range [0.0,1.0]
This doesn't make sense to me. There are just as many floating point numbers between 64 and 128 as there are between 0.5 and 1.0, and the same number between 32 and 64, between 0.25 and 0.5, etc. All you did in multiplying by a constant is twirl up the mantissa bits and shift the exponent by ~7. Unless you care about the precise rounding in the ~16th decimal digit, there is limited practical difference. (Well, one tiny difference is you are preventing some of the integer-valued percentages from having an exact representation, if for some reason you care about that. On the flip side, if you need to compose these percentages or apply them to some other quantity the range 0–1 is generally more convenient because you won't have to do an extra division by 100.)
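You can check the "just as many" claim directly: positive IEEE-754 doubles have monotonically increasing bit patterns, so subtracting bit patterns counts the representable values in a range. A quick Rust sketch:

```rust
fn main() {
    // Consecutive positive doubles have consecutive bit patterns, so the
    // number of doubles in [a, b) is the difference of the bit patterns.
    let count = |a: f64, b: f64| b.to_bits() - a.to_bits();

    println!("[0.5, 1.0):   {}", count(0.5, 1.0));    // 2^52
    println!("[64, 128):    {}", count(64.0, 128.0)); // also 2^52
    // And the two candidate storage ranges barely differ: [0.0, 100.0)
    // holds only ~0.6% more doubles than [0.0, 1.0), since the extra
    // factor of 100 adds only ~7 binades out of ~1022.
    println!("[0.0, 1.0):   {}", count(0.0, 1.0));
    println!("[0.0, 100.0): {}", count(0.0, 100.0));
}
```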
> if you're dealing with angles, you should not store them in the range [0.0,360.0), but instead store them either as radians [0-2π), or better: [-π,π), or store them as [-1.0,1.0) and use trig routines designed to work with that range.
Floats from 0 to 360 are a perfectly fine representation for angles, though you may want to use -180 to 180 if you want to accumulate or compare many very small angles in either direction, since there is much more precision near e.g. -0.00001 than near 359.99999. (Of course, if whatever software libraries you are using expect radians, it can be convenient to use radians as a representation, but it won't be any more or less precise.)
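You can see that precision gap directly by computing the spacing between adjacent doubles at each magnitude, e.g. in Rust:

```rust
fn main() {
    // One ulp: distance from x to the next representable double above it
    // (fine for the positive values used here).
    let ulp = |x: f64| f64::from_bits(x.to_bits() + 1) - x;

    println!("ulp near 0.00001:   {:e}", ulp(0.00001));   // ~1.7e-21
    println!("ulp near 359.99999: {:e}", ulp(359.99999)); // ~5.7e-14
    // Same physical angle, ~7 orders of magnitude difference in resolution.
}
```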
The reason pure mathematicians (and, as a consequence, most scientists) use radians instead is that the trig functions are easier to write down as power series and easier to do calculus with (using pen and paper) when expressed in radians, because it eliminates an annoying extra constant: in degrees, the derivative of sin x is (π/180)·cos x rather than just cos x.
Using numbers in the range -1 to 1 can be more convenient than radians mainly because π is not exactly representable in floating point (it can sometimes be nice to get an exact answer for arcsin(1) or the like), and because there are other mathematical tools which are nice to express in the interval [-1, 1].
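To make the π point concrete (Rust; a sinpi-style routine is hypothetical here, not part of the standard library):

```rust
fn main() {
    use std::f64::consts::PI;

    // PI is merely the double nearest to pi, so sin(PI) is not 0: it is
    // (approximately) the gap between PI and the true pi.
    println!("sin(PI) = {:e}", PI.sin()); // ~1.2246e-16

    // By contrast, in a half-turn representation the quarter turn is the
    // exact value 0.5, so a routine like sinpi(x) = sin(pi * x) -- not in
    // the standard library -- can return exactly 1.0 for sinpi(0.5).
}
```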
Aside: If you are using your angles and trig functions for doing geometry (rather than, say, approximating periodic functions), let me instead recommend representing your angles as a pair of numbers (cos a, sin a), and then using vector algebra instead of trigonometry, ideally avoiding angle measures altogether except at interfaces with people or code expecting them. You'll save a lot of transcendental function evaluations and your code will be easier to write and reason about.
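A minimal sketch of that idea in Rust (the Rot type and method names are just illustration):

```rust
// An angle stored as (cos a, sin a), i.e. a point on the unit circle.
#[derive(Clone, Copy, Debug)]
struct Rot { cos: f64, sin: f64 }

impl Rot {
    // The only place a transcendental function is evaluated.
    fn from_angle(a: f64) -> Self {
        Rot { cos: a.cos(), sin: a.sin() }
    }
    // Angle addition via the usual identities -- no trig calls.
    fn compose(self, o: Rot) -> Rot {
        Rot {
            cos: self.cos * o.cos - self.sin * o.sin,
            sin: self.sin * o.cos + self.cos * o.sin,
        }
    }
    // Rotate a 2D vector.
    fn apply(self, x: f64, y: f64) -> (f64, f64) {
        (self.cos * x - self.sin * y, self.sin * x + self.cos * y)
    }
}

fn main() {
    let r45 = Rot::from_angle(std::f64::consts::FRAC_PI_4);
    let r90 = r45.compose(r45);            // two 45-degree turns, no sin/cos
    println!("{:?}", r90.apply(1.0, 0.0)); // ~(0.0, 1.0)
}
```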
Aside #2: The biggest thing you should worry about with floating point arithmetic is places in your code where two nearly equal numbers get subtracted. This results in "catastrophic cancellation" that can eat up most of your precision. For example, you need to be careful when writing code to find the roots of quadratic equations, and shouldn't just naïvely use the "quadratic formula" or one of the two roots will often be very imprecise.
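The standard fix, sketched in Rust (this is the textbook Vieta's-formulas trick, not any particular library's implementation):

```rust
// Avoid computing -b + sqrt(disc) when it would cancel: take the root where
// the two terms share a sign, then recover the other from x1*x2 = c/a.
fn solve_quadratic(a: f64, b: f64, c: f64) -> Option<(f64, f64)> {
    let disc = b * b - 4.0 * a * c;
    if disc < 0.0 {
        return None; // complex roots; a == 0 is also not handled in this sketch
    }
    // b + signum(b)*sqrt(disc) never cancels: both terms have the same sign.
    let q = -0.5 * (b + b.signum() * disc.sqrt());
    Some((q / a, c / q))
}

fn main() {
    // x^2 - 1e8*x + 1 = 0 has roots ~1e8 and ~1e-8.
    let (x1, x2) = solve_quadratic(1.0, -1e8, 1.0).unwrap();
    println!("stable: {x1:e}, {x2:e}");
    // The naive formula loses most of the small root's digits:
    let naive_small = (1e8 - (1e16_f64 - 4.0).sqrt()) / 2.0;
    println!("naive:  {naive_small:e}"); // ~7.45e-9 instead of ~1e-8
}
```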
The quadratic solver implementation in kurbo is designed to be fast and reasonably precise for a wide range of inputs. But for a definitive treatment of how to solve quadratic equations, see "The Ins and Outs of Solving Quadratic Equations with Floating-Point Arithmetic" by Goualard[2]. I thought I understood the problem space pretty well; then I came across that paper.
Accurate representation of a single quantity is one thing. Doing several mathematical operations with that quantity while _maintaining_ accuracy is another.
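Right. The classic illustration is summation: every individual addition below is correctly rounded, yet the naive running total drifts, while Kahan's compensated summation carries the lost low-order bits along. A Rust sketch:

```rust
fn kahan_sum(xs: &[f64]) -> f64 {
    let mut sum = 0.0_f64;
    let mut comp = 0.0_f64; // running compensation for lost low-order bits
    for &x in xs {
        let y = x - comp;     // re-inject what was lost last time
        let t = sum + y;      // big + small: low bits of y fall off the end
        comp = (t - sum) - y; // algebraically zero; in floats, the lost bits
        sum = t;
    }
    sum
}

fn main() {
    let xs = vec![0.1_f64; 10_000_000];
    let naive: f64 = xs.iter().sum();
    println!("naive: {naive}");            // noticeably off from 1000000
    println!("kahan: {}", kahan_sum(&xs)); // accurate to an ulp or two
}
```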