
The real value being:

80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000

that is, the approximation was about 80% of the real value. Half of the error was in Stirling's formula, and the other half in approximating 2^50.

Edit: nevermind. Stirling's formula is surprisingly accurate. https://www.google.co.uk/#safe=off&q=%2854%2Fe%29^54*sqrt%28...
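
For instance, a quick Python check of Stirling against the exact value quoted above (using n = 52):

    import math

    n = 52
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    exact = math.factorial(n)
    print(stirling / exact)  # -> ~0.9984, i.e. Stirling is only about 0.16% low

The leading correction to Stirling's formula is roughly a factor of e^(1/(12n)), so for n = 52 the formula comes out only about 0.16% low.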



There are two major sources of error. One is in using 2.7 as the value for e. That's wrong by about 0.677%, and then that's raised to the power of 54. That makes the result too big by about 44%.

Then it approximates 2^10 by 10^3, which is wrong by 2.4%. That's raised to the power of 5, and the result is too small by 12.6%.

So we divide by 1.44 and multiply by about 1.125 to get the corrected answer of 7.8 * 10^67, which is extremely close to the actual answer of 8.066 * 10^67.

Finally, I used sqrt(2*pi*54) as 18, whereas it's closer to 18.42, another 2.3%. We should then get 7.8125 * 18.42/18, which is about 7.995.

Nailed it.

Those corrections can (mostly) also be done (or at least approximated) by hand - maybe I should write a follow-up.
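
In the meantime, here's a quick Python check of the three correction factors described above (just the factors themselves, not the full hand calculation):

    import math

    print((math.e / 2.7) ** 54)              # -> ~1.44, using 2.7 for e, raised to the 54th power
    print((1024 / 1000) ** 5)                # -> ~1.126, using 10^3 for 2^10, five times over
    print(math.sqrt(2 * math.pi * 54) / 18)  # -> ~1.023, using 18 for sqrt(2*pi*54)

    print(math.factorial(52))                # -> the exact value quoted at the top of the thread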


Is there a shorthand for calculating how large an X% error becomes when the value is raised to the power of Y?

For relatively small powers, simple multiplication seems to give results that are close enough, e.g. 0.677% * 54 = 36.6%, and 2.4% * 5 = 12%. But this is obviously not going to work once you start raising numbers to even higher powers.
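
For instance, a rough Python comparison of the simple multiplication against the exact compounded error, for those two cases:

    # linear rule of thumb (error * power) vs. the exact compounded error
    for err, power in [(0.00677, 54), (0.024, 5)]:
        linear = err * power
        exact = (1 + err) ** power - 1
        print(power, round(linear * 100, 1), round(exact * 100, 1))

    # -> 54 36.6 44.0
    # -> 5 12.0 12.6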


Short answer: yes.

Slightly longer answer ...

For very small errors, multiplying by the power is right. You can see that because of the standard expansion:

(1+e)^n ~ 1 + ne + n(n-1)e^2/2 + ...

When e is small enough, e^2 is very small, so we can ignore everything that follows.

When e is slightly larger we can use more terms in the expansion, and that works nicely. That happens in the post about the birthday problem, where an extra term is used from the expansion for log(n).

So you don't get it for free, but it's not a lot of extra work.

In the case of 1.024^5, which is in the post, we get:

    e=0.024
    n=5
    print(n * e)

    -> 0.12
So the first approximation of the error is 12%

    print(n*e + n*(n-1)/2*e**2)

    -> 0.12575999999999998
So the extended approximation is 12.576%. We can compare that with the actual error:

    print((1+e)**5)

    -> 1.125899906842624
Pretty close.

Even longer answer: Binomial Theorem.


So, let's call the correct value, whatever it is, 'A'. Then "off by X%" really means that the number you used is (1 + X/100) * A. (Note: if you underestimated, you need to make X negative for this to work.)

When you raise that to the power Y, you get (1 + X/100)^Y * A^Y, whereas the correct result is just A^Y. So you get (1 + X/100)^Y times what you should've gotten.

You can convert that back into a percentage directly if you want, but if X is small (ie, if the original value is "close enough"), you can use the binomial expansion [1] and just keep the first few terms. If you do that, you eventually end up with:

Percentage the final result is out = Y * X + (1/200) * Y(Y-1) * X^2 + ...

[1]: https://en.wikipedia.org/wiki/Binomial_theorem
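
A quick Python sketch of that formula, keeping just the two terms shown (function names are only for illustration), using the 2.4% / power-of-5 example from upthread:

    def pct_error_exact(x, y):
        return ((1 + x / 100) ** y - 1) * 100

    def pct_error_approx(x, y):
        # first two terms of the binomial expansion, as percentages
        return y * x + (1 / 200) * y * (y - 1) * x ** 2

    print(pct_error_exact(2.4, 5))   # -> ~12.59
    print(pct_error_approx(2.4, 5))  # -> ~12.58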


A different way of thinking about it is with the log rules. Let's take a 2.4% error to the hundredth power, where the result is assuredly off by more than 240%.

    y = 1.024^100
    ln y = ln (1.024^100) = 100 * ln 1.024
Now, we can approximate ln x ≈ x - 1 for values of x near one, so we get

    ln y = 100 * 0.024 = 2.4
    y = exp 2.4 = e ^ 2.4 ~ 2.7^2.4
    2.7^2.4 = (27/10)^2.4 = (3^3/10) ^ 2.4
    2.7^2.4 = (3^3)^2.4 / (10^2.4)
    = 3^7.2 / (100 * 10^0.4)
    = 3*(3^6)*3^0.2 / (100 * 10^0.4)
    = 3*(729/100)*3^0.2/10^0.4
    = 3*(7.29)*(3/100)^0.2
    = 21.87*(0.03)^(1/5)
Finally, (1/2)^5 = 1/32 ~ 0.03

    y ~ 21.87 * 1/2 = 10.94
Pulling out Python, the actual answer is 10.72, so we're within a few percent.
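
For what it's worth, a quick Python check of the intermediate steps:

    import math

    print(1.024 ** 100)   # -> ~10.715, the actual answer
    print(math.exp(2.4))  # -> ~11.023, after ln(1.024) ~ 0.024
    print(2.7 ** 2.4)     # -> ~10.846, after e ~ 2.7
    print(21.87 * 0.5)    # -> 10.935, after (0.03)^(1/5) ~ 1/2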


How does the 2^10 to 10^3 approximation come about? All of the others made sense to me, but I couldn't figure that one out.


2 ^ 10 = 1024.

That's one of those "shout it from the rooftops" approximations for programmers. I use it constantly.


And this is why base-ten and base-two file and memory sizes are roughly equivalent (which is somewhat useful but also causes a lot of confusion):

    2^10 bytes = 1 kiB ≈ 1 kB = 10^3 bytes
    2^20 bytes = 1 MiB ≈ 1 MB = 10^6 bytes
    2^30 bytes = 1 GiB ≈ 1 GB = 10^9 bytes
    2^40 bytes = 1 TiB ≈ 1 TB = 10^12 bytes
and so on. The relative error does grow as you move to larger and larger powers, though.
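
A quick loop shows how fast that relative error grows:

    # relative error of 10^(3k) as an approximation of 2^(10k), as a percentage
    for k in range(1, 7):
        ratio = 2 ** (10 * k) / 10 ** (3 * k)
        print(10 * k, round((ratio - 1) * 100, 1))

    # -> 10 2.4
    # -> 20 4.9
    # -> 30 7.4
    # -> 40 10.0
    # -> 50 12.6
    # -> 60 15.3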


As the other answers point out, it follows from explicit computation. Let me add that there's no particularly good a priori reason to expect this particular coincidence. It just happens.


Compare 1000 with 1024.

As I say above, 2^10 is bigger than 10^3 by 2.4%. When we raise it to the power of 5 we are wrong by 12.6%.



