June 21st, 2024

How does a computer/calculator compute logarithms?

Computers and calculators compute logarithms using geometric series and polynomial series derived from calculus. The natural logarithm's relation to $\frac{1}{1+x}$ is explained through the area under the curve. A polynomial series derived by integrating the geometric series computes natural logarithms, but it converges quickly only for $|x| < \frac{1}{2}$.

Read original article

The article discusses how computers and calculators compute logarithms using geometric series and polynomial series derived from calculus. It explains the relationship between the natural logarithm and the function $\frac{1}{1+x}$: the area under that curve from $0$ to $x$ is exactly $\ln(1+x)$. Integrating the geometric series $\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots$ term by term yields a polynomial series for the natural logarithm. However, this series converges quickly only for $|x| < \frac{1}{2}$, making it impractical for large values. Logarithms have special properties that remove this limitation: because $\ln(ab) = \ln a + \ln b$, any positive input can be reduced to the range where the series converges quickly, extending the method to logarithms of any value.
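For concreteness, here is a minimal Python sketch of that idea (illustrative, not the article's code): the integrated series for $\ln(1+x)$ plus range reduction via $\ln(m \cdot 2^e) = \ln m + e \ln 2$.

    import math

    def ln_series(x, terms=40):
        # Integrating 1/(1+t) = 1 - t + t^2 - ... term by term gives
        # ln(1+x) = x - x^2/2 + x^3/3 - ...; fast only for small |x|.
        return sum((-1) ** (k + 1) * x**k / k for k in range(1, terms + 1))

    def ln(v):
        # Range reduction: v = m * 2^e with m in [0.5, 1), so
        # ln(v) = ln(m) + e*ln(2) and |m - 1| <= 1/2 for the series.
        m, e = math.frexp(v)
        return ln_series(m - 1.0) + e * math.log(2)

    print(ln(200.0), math.log(200.0))  # the two values should agree closely

(Here `math.log(2)` stands in for a stored constant; a real implementation would not call the library function it is trying to implement.)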

Related

There's more to mathematics than rigour and proofs (2007)

The article explores mathematical education stages: pre-rigorous, rigorous, and post-rigorous. It stresses combining formalism with intuition for effective problem-solving, highlighting the balance between rigor and intuition in mathematics development.

Implementing General Relativity: What's inside a black hole?

Implementing general relativity for black hole exploration involves coordinate systems, upgrading metrics, calculating tetrads, and parallel transport. Tetrads transform vectors between flat and curved spacetime, crucial for understanding paths.

Shape Rotation 101: An Intro to Einsum and Jax Transformers

Einsum notation simplifies tensor operations in libraries like NumPy, PyTorch, and Jax. Jax Transformers showcase efficient tensor operations in deep learning tasks, emphasizing speed and memory benefits for research and production environments.

Identifying Leap Years (2020)

David Turner explores optimizing leap-year calculations for performance gains by using bitwise operations and integer bounds. He presents efficient methods, mathematical proofs, and considerations for signed integers, highlighting limitations for dates before the Gregorian calendar.

Synthesizer for Thought

The article explores how synthesizers, grounded in a mathematical understanding of sound, evolved into tools for music creation that enabled new genres. It examines interfaces for interacting with music and proposes analogous language models for text analysis and concept representation, aiming to enhance creative processes.

14 comments
By @pavpanchekha - 5 months
This is a well done post, but please note that computers do not in fact use Taylor series for evaluating these functions. They do use polynomials! (Or, sometimes, rational functions, but I think the recent trend has been toward polynomials only.) They also use transformations like the log(1+x) - log(1-x) one described in this article. (That one is used in fdlibm, a very influential math library, though more modern libraries seem not to use it.)

But the polynomials themselves are usually generated through some other technique, especially the Remez algorithm (https://en.wikipedia.org/wiki/Remez_algorithm) and modern improvements to it like Sollya's (https://sollya.org/) LLL-based floating-point polynomial fitting. It's still an active area of research; the RLibM project (https://people.cs.rutgers.edu/~sn349/rlibm/) from Rutgers has introduced a totally new way to fit low-precision polynomials (say, up to 32 bits) using massive linear programming problems (roughly, 4 constraints on 10 variables).

Source: am researcher in this field.
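As a rough illustration of the fitting idea (ordinary least squares via numpy.polyfit, a crude stand-in for the Remez/minimax fitting described above; no real libm does it this way):

    import numpy as np

    # Fit a degree-6 polynomial to log1p on [-0.25, 0.25].
    # Real math libraries use minimax (Remez-style) fits; plain
    # least squares here only conveys the flavor of the approach.
    xs = np.linspace(-0.25, 0.25, 10001)
    coeffs = np.polyfit(xs, np.log1p(xs), 6)

    max_err = np.max(np.abs(np.polyval(coeffs, xs) - np.log1p(xs)))
    print(coeffs)  # highest-degree coefficient first
    print(f"max abs error on the interval: {max_err:.2e}")

A minimax fit would push the maximum error down further for the same degree, and tools like Sollya additionally constrain the coefficients to be exactly representable floats.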

By @tasty_freeze - 5 months
Wang Labs made an amazing (for its time) calculator, the LOCI-1, quickly replaced by the much more capable LOCI-2. They date from 1964/1965 and were built with core memory and 1,200 discrete transistors, no ICs.

It operated with 10 digits of precision, and its native hardware could add and subtract. Rather than implementing multiplication, division, and square root directly, the calculator computed logs and antilogs via successive addition and subtraction. Multiplying two numbers might return 199.9999999 instead of 200.0, but since it was built for engineers who were mathematically literate, that was acceptable.

There were other calculators that could be programmed to compute logs and antilogs, but they were slow. On the LOCI machines, the results were instant (on a human timescale).

Description of the LOCI-2:

https://www.oldcalculatormuseum.com/wangloci.html

Just found this description of the log algorithm -- it used only six constants in its iterative add/subtract/shift algorithm to compute logs and antilogs:

https://osgalleries.org/journal/pdf_files/20.2/V20.2P51.pdf
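For flavor, here is a generic shift-and-add (pseudo-division) logarithm sketch in Python, the family of algorithms the linked note describes; the LOCI's actual radix and constants differ, so treat this purely as an illustration:

    import math

    # Precomputed constants ln(1 + 2^-k) are the only table needed.
    K = 30
    LN_CONST = [math.log(1 + 2.0 ** -k) for k in range(1, K + 1)]
    LN2 = math.log(2)

    def ln_shift_add(x):
        # For x in [1, 2); reduce other inputs by powers of 2 first.
        # Multiply x up toward 2 by factors (1 + 2^-k); each such
        # multiply is just an add-and-shift in fixed point.
        acc = 0.0
        for k in range(1, K + 1):
            f = 1 + 2.0 ** -k
            while x * f <= 2.0:
                x *= f           # fixed point: x += x >> k
                acc += LN_CONST[k - 1]
        # Now x ~= 2, and ln(original x) = ln 2 + ln(x/2) - acc.
        return LN2 + (x / 2 - 1) - acc   # ln(t) ~= t - 1 near 1

    print(ln_shift_add(1.75), math.log(1.75))  # should agree closely

Only additions, subtractions, and shifts are needed in fixed point, which is exactly what made this approach practical on add/subtract-only hardware.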

By @bluenose69 - 5 months
For anyone who remembers the Sinclair Scientific calculator, the site http://files.righto.com/calculator/sinclair_scientific_simul... has a good explanation of its way of calculating. It is unlike other machines, overcoming serious electronic limitations with some simple but clever approximate methods.

This little machine was my first calculator, a present from my Dad. Man, I loved that thing. It was so small that I could easily hold it, and strike the keys, with one hand. That single-hand aspect proved helpful in my undergraduate physics and chemistry labs. The RPN methodology also made it easy to work through complicated formulae without getting lost in parentheses.

Over time, the keys stopped working well. And then stopped working at all. I kept it for a year or two, but then got rid of it, having moved on to another calculator that was more powerful but much less fun. I don't even remember the brand.

Looking back four decades, I wish I had kept the Sinclair Scientific as a memento. But I did not appreciate that a dead little machine like that, in its thin plastic case, would bring back so many fond memories. There's a life lesson in that.

By @xg15 - 5 months
> While these equations of polynomials contain a finite number of terms, we can have polynomials with an infinite number of terms. These are called series, and one of the simplest types of series is called a geometric series.

Slightly OT, but what made infinite sums a lot more sane to me was the understanding that the sums are in fact not infinite - it's just syntactic sugar for a perfectly ordinary limit over a sequence of finite sums, each with one more term than the last.

E.g.,

  sum(1 / 2^n) for n from 1 to +infinity

"really" means:

  lim(sum(1 / 2^n) for n from 1 to m) for m -> +infinity

So no infinite loops of additions are actually involved.
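A quick numeric check of that reading (illustrative snippet): the partial sums of sum(1/2^n) visibly approach the limit 1.

    # Partial sums sum(1/2^n, n=1..m) approach the limit 1 as m grows.
    for m in (1, 2, 5, 10, 20):
        s = sum(1 / 2**n for n in range(1, m + 1))
        print(m, s)  # 0.5, 0.75, 0.96875, 0.99902..., 0.99999...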
By @cb321 - 5 months
The article's impl of its own series [1] centers on the octave [2/3, 4/3] but the error is actually more balanced across [1/sqrt(2), sqrt(2)]. (A series expansion of the error probably explains this - some of the experts here like pavpanchekha | pclmulqdq might be able to explain pithily.)

So, here is one way to evaluate the discussed series across the centered interval, and I think it only needs 9 terms for "near IEEE double", NOT the author's 15 terms:

    import sys, math # arg = number of terms of series
    n = int(sys.argv[1]) if len(sys.argv) > 1 else 3
    def lnATH(x):         # Ln via ArcTanH form/series
      u = (x - 1)/(x + 1) # Can expand loop to expr
      return sum(2/(2*j+1)*u**(2*j+1) for j in range(n))
    x  = math.sqrt(0.5)   # sqrt(1/2) centers error
    x1 = x*2     # IEEE FP exponent gets within 1 octave
    m  = 3840    # 4K monitor; adjust to whatever
    dx = (x1 - x)/m
    for i in range(m):  # Plot error in favorite tool
      print(x, lnATH(x) - math.log(x))
      x += dx
If you run it with 8 terms you begin to see round-off effects. With 9 terms, the error becomes dominated by such effects.

Anyway 15/9 = 1.66X speed cost which seemed enough to be noteworthy. I mean, he does call it "log_fast.py" after all. (4 seems adequate for IEEE single precision - though you might be 1-bit shy of 24-bits of relative precision at the very edges of octaves if that matters to you.)

[1] https://github.com/zachartrand/SoME-3-Living/blob/main/scrip... (and yeah, the code says "16 terms" & article "15 terms", but any way you slice it, I think it's a lot of extra terms).

By @owlbite - 5 months
If I remember correctly, real libm implementations rarely use an analytically derived approximation; they just fit a polynomial or rational approximation to minimize the error over the required range and precision.
By @r00tanon - 5 months
Good article. I, too, am fascinated by calculators and how they work, especially the HP scientific RPN calculators. So much so, that I ended up developing this:

https://play.google.com/store/apps/details?id=com.limpidfox....

By @agumonkey - 5 months
Is there a recursive computation for logarithms, like there is for subtraction or division?
By @Thoreandan - 5 months
By @bvan - 5 months
Great post and fascinating series of comments.
By @myth2018 - 5 months
Btw: I'm looking for efficient ways to represent numbers of arbitrary precision and to perform mathematical operations on them. It's for a toy financial calculator I'm writing in my spare time. My knowledge is really lacking in this field and I'm not even sure what I should be asking/searching for. Could someone suggest some sources? Preferably online, but books would also be very welcome.
By @constantcrying - 5 months
>for if our calculators and computers calculated logarithms inaccurately, as well as exponentials, trig functions, and square roots, to name but a few, a lot of scientific and engineering work would be broken and end in catastrophe.

I think I have some news for the writer of the article.

By @rhelz - 5 months
It's polynomials all the way down.