How does a computer/calculator compute logarithms?
Computers and calculators compute logarithms using geometric series and polynomial approximations from calculus. The natural logarithm's relation to $\frac{1}{1+x}$ is explained through the area under that curve. A polynomial series, derived by integrating the geometric series, computes natural logarithms, but it converges quickly only for $|x| < \frac{1}{2}$.
The article discusses how computers and calculators compute logarithms using geometric series and polynomial approximations derived from calculus. It explains the relationship between the natural logarithm and the function $\frac{1}{1+x}$: the area under that curve from $0$ to $x$ equals $\ln(1+x)$. Integrating the geometric series for $\frac{1}{1+x}$ term by term yields a polynomial series for the natural logarithm. This method has a limitation: the series converges quickly only for $|x| < \frac{1}{2}$, making it impractical for large inputs on its own. However, the identities of logarithms, such as $\ln(ab) = \ln a + \ln b$, let any input be reduced to a small interval where the series converges fast, so logarithms can be computed for any value.
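The range-reduction idea the summary alludes to can be sketched in a few lines of Python. This is an illustration, not the article's own code: it pulls out the power-of-two exponent with `math.frexp`, so the series only ever sees a small argument.

```python
import math

LN2 = 0.6931471805599453      # ln(2), precomputed once

def ln_series(u, terms=40):
    # ln(1+u) = u - u^2/2 + u^3/3 - ...  (the integrated geometric series)
    return sum((-1) ** (k + 1) * u ** k / k for k in range(1, terms + 1))

def ln(x):
    # Range-reduce with ln(m * 2**e) = ln(m) + e*ln(2), keeping the
    # series argument small so it converges quickly.
    m, e = math.frexp(x)       # x = m * 2**e with m in [0.5, 1)
    if m < math.sqrt(0.5):     # recenter m into [sqrt(1/2), sqrt(2))
        m *= 2.0
        e -= 1
    return ln_series(m - 1.0) + e * LN2

print(ln(200.0), math.log(200.0))   # the two should agree closely
```

The recentering step keeps $|m - 1| < 0.415$, well inside the interval where the series converges quickly.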
Related
There's more to mathematics than rigour and proofs (2007)
The article explores mathematical education stages: pre-rigorous, rigorous, and post-rigorous. It stresses combining formalism with intuition for effective problem-solving, highlighting the balance between rigor and intuition in mathematics development.
Implementing General Relativity: What's inside a black hole?
Implementing general relativity for black hole exploration involves coordinate systems, upgrading metrics, calculating tetrads, and parallel transport. Tetrads transform vectors between flat and curved spacetime, crucial for understanding paths.
Shape Rotation 101: An Intro to Einsum and Jax Transformers
Einsum notation simplifies tensor operations in libraries like NumPy, PyTorch, and Jax. Jax Transformers showcase efficient tensor operations in deep learning tasks, emphasizing speed and memory benefits for research and production environments.
Identifying Leap Years (2020)
David Turner explores optimizing leap year calculations for performance gains by using bitwise operations and integer bounds. He presents efficient methods, mathematical proofs, and considerations for signed integers, highlighting limitations pre-Gregorian calendar.
Synthesizer for Thought
The article delves into synthesizers evolving as tools for music creation through mathematical understanding of sound, enabling new genres. It explores interfaces for music interaction and proposes innovative language models for text analysis and concept representation, aiming to enhance creative processes.
But the polynomials themselves are usually generated through some other technique, especially the Remez algorithm (https://en.wikipedia.org/wiki/Remez_algorithm) and modern improvements to it like Sollya's (https://sollya.org/) LLL-based floating-point polynomial fitting. It's still an active area of research; the RLibM project (https://people.cs.rutgers.edu/~sn349/rlibm/) from Rutgers has introduced a totally new way to fit low-precision polynomials (say, up to 32 bits) using massive linear programming problems (roughly, 10^4 constraints on 10 variables).
Source: am researcher in this field.
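A real Remez implementation is beyond a comment, but interpolation at Chebyshev nodes is a simple, near-minimax proxy for what these fitters do. This stdlib-only sketch (the interval and degree are my own choices, not RLibM's or Sollya's) fits a degree-7 polynomial to ln(1+x) on a typical reduced interval and measures its worst sampled error:

```python
import math

def cheb_nodes(n, a, b):
    # n Chebyshev points on [a, b]: near-optimal interpolation nodes
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def newton_coeffs(xs, ys):
    # Divided-difference coefficients for the Newton interpolation form
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(c, xs, x):
    # Horner-style evaluation of the Newton-form polynomial
    r = c[-1]
    for i in range(len(c) - 2, -1, -1):
        r = r * (x - xs[i]) + c[i]
    return r

a, b = math.sqrt(0.5) - 1, math.sqrt(2) - 1    # a typical reduced interval
xs = cheb_nodes(8, a, b)                       # 8 nodes -> degree-7 polynomial
cs = newton_coeffs(xs, [math.log1p(t) for t in xs])
err = max(abs(newton_eval(cs, xs, t) - math.log1p(t))
          for t in (a + (b - a) * i / 10000 for i in range(10001)))
print(err)    # worst sampled error of the degree-7 fit
```

Remez proper would then iteratively nudge the coefficients to equalize the error peaks, shaving off another small constant factor.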
The Wang LOCI-2 operated with 10 digits of precision, and its native hardware could only add and subtract. Rather than doing multiplication, division, and square root directly, the calculator computed logs and antilogs via successive addition and subtraction. Multiplying two numbers might return 199.9999999 instead of 200.0, but since it was built for mathematically literate engineers, that was acceptable.
There were other calculators that could be programmed to compute logs and antilogs, but they were slow. On the LOCI machines, the results were instant (on a human timescale).
Description of the LOCI-2:
https://www.oldcalculatormuseum.com/wangloci.html
Just found this description of the log algorithm -- it used only six constants in its iterative add/subtract/shift algorithm to compute logs and antilogs.
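The LOCI-2's actual algorithm needed just six constants; here is a floating-point sketch of the general shift-and-add idea (my reconstruction of the technique, not Wang's implementation): greedily factor the argument into terms of the form (1 + 2^-k) and sum tabulated logarithms of those factors.

```python
import math

# Table of constants ln(1 + 2**-k) -- the only "hard" numbers needed.
K = 30
C = [math.log(1 + 2.0 ** -k) for k in range(1, K + 1)]

def ln_shift_add(x):
    # Greedily factor x (in [1, 2)) as a product of (1 + 2**-k) terms.
    # Multiplying by (1 + 2**-k) is a shift and an add in fixed point,
    # and each accepted factor adds one tabulated constant to the result.
    assert 1.0 <= x < 2.0
    y, r = 1.0, 0.0
    for k in range(1, K + 1):
        f = 1.0 + 2.0 ** -k
        while y * f <= x:
            y *= f            # y creeps up toward x from below
            r += C[k - 1]     # accumulate ln of the accepted factor
    return r + (x / y - 1.0)  # first-order fix-up for the tiny remainder

print(ln_shift_add(1.5), math.log(1.5))
```

Since every multiply here is by 1 plus a power of two, hardware that can only shift, add, and subtract can run the whole loop, which is presumably why it was instant on a human timescale.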
This little machine was my first calculator, a present from my Dad. Man, I loved that thing. It was so small that I could easily hold it, and strike the keys, with one hand. That single-hand aspect proved helpful in my undergraduate physics and chemistry labs. The RPN methodology also made it easy to work through complicated formulae without getting lost in parentheses.
Over time, the keys stopped working well. And then stopped working at all. I kept it for a year or two, but then got rid of it, having moved on to another calculator that was more powerful but much less fun. I don't even remember the brand.
Looking back four decades, I wish I had kept the Sinclair Scientific, for a memento. But I did not appreciate that a dead little machine like that, in its thin plastic case, would bring back so many fond memories. There's a life lesson in that.
Slightly OT, but what made infinite sums a lot more sane to me was the understanding that the sums are in fact not infinite - it's just syntactic sugar for a perfectly ordinary limit over a series of finite sums, each with one more term than the last.
E.g.,
sum(1 / 2^n) for n from 1 to +infinity
"really" means: lim(sum(1 / 2^n) for n from 1 to m) for m -> +infinity
So no infinite loops of additions are actually involved.

So, here is one way to evaluate the discussed series across the centered interval; I think it needs only 9 terms for "near IEEE double" precision, NOT the author's 15 terms:
import sys, math                    # arg = number of terms of series
n = int(sys.argv[1]) if len(sys.argv) > 1 else 3

def lnATH(x):                       # Ln via ArcTanH form/series
    u = (x - 1)/(x + 1)             # Can expand loop to expr
    return sum(2/(2*j+1)*u**(2*j+1) for j in range(n))

x  = math.sqrt(0.5)                 # sqrt(1/2) centers error
x1 = x*2                            # IEEE FP exponent gets within 1 octave
m  = 3840                           # 4K monitor; adjust to whatever
dx = (x1 - x)/m
for i in range(m):                  # Plot error in favorite tool
    print(x, lnATH(x) - math.log(x))
    x += dx
If you run it with 8 terms you begin to see round-off effects; with 9 terms, the error becomes dominated by them. Anyway, 15/9 = 1.66x speed cost, which seemed enough to be noteworthy. I mean, he does call it "log_fast.py" after all. (4 terms seem adequate for IEEE single precision, though you might be 1 bit shy of 24 bits of relative precision at the very edges of octaves, if that matters to you.)
[1] https://github.com/zachartrand/SoME-3-Living/blob/main/scrip... (and yeah, the code says "16 terms" & article "15 terms", but any way you slice it, I think it's a lot of extra terms).
https://play.google.com/store/apps/details?id=com.limpidfox....
I think I have some news for the writer of the article.