August 10th, 2024

Someone's been messing with Python's floating point subnormals

A technical issue with Python packages was identified in which the `gevent` library altered floating point behavior process-wide, triggering warnings in Huggingface Transformers and risking numerical inaccuracies in applications.

The article discusses a technical issue encountered while using Python packages, in this case Huggingface Transformers, which produced warnings about floating point subnormals. The author discovered that a small number of Python packages are compiled with the `-ffast-math` compiler option, which alters floating point behavior by treating subnormal numbers as zero; this can lead to incorrect numerical results in applications that rely on standard floating point operations. The problem was traced to the `gevent` library, which was setting the FTZ/DAZ flags for the whole process and causing the unexpected behavior. To gauge how widespread the issue is, the author analyzed the top 25% of projects on PyPI by download count, looking for other libraries with the same problem. The article highlights the challenges of ensuring numerical accuracy in Python applications and the pitfalls of enabling compiler optimizations without understanding their implications.

- A small number of Python packages can cause incorrect numerical results due to the `-ffast-math` compiler option.

- The `gevent` library was identified as a culprit for altering floating point behavior.

- The author analyzed the top 25% of PyPI projects to find other libraries with similar issues.

- The article emphasizes the importance of understanding compiler optimizations in numerical computing.

- Users should be cautious when using libraries that may inadvertently affect floating point calculations.
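A quick way to probe for this from Python is to compute a value that should be subnormal and see whether it survives. This is a minimal sketch: it checks the process's effective floating point behavior, not which package changed it.

```python
import sys

# The smallest positive *normal* double; halving it yields a subnormal
# under standard IEEE 754 arithmetic.
smallest_normal = sys.float_info.min      # ~2.2250738585072014e-308
probe = smallest_normal / 2               # subnormal result

if probe == 0.0:
    print("FTZ/DAZ appears to be set: subnormal results are flushed to zero")
else:
    print("Subnormals are intact:", probe)
```

Because the flags are set when the offending shared library is loaded, the probe only detects packages imported before it runs.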

16 comments
By @spacebacon - 2 months
2001, GCC 3.0: the `-ffast-math` flag was introduced, enabling aggressive floating-point optimizations at the cost of strict IEEE compliance.

2004, GCC 3.4: refinements and extensions, including better handling of subnormal numbers and more aggressive operation reordering.

2007, GCC 4.2: the `-funsafe-math-optimizations` flag was introduced, offering more granular control over specific optimizations.

2010, GCC 4.5: enhanced vectorization capabilities, particularly for SIMD hardware using the SSE and AVX instruction sets.

2013, GCC 4.8: more granular control via flags like `-fno-math-errno`, improving efficiency by assuming mathematical functions do not set `errno`.

2017, GCC 7.0: improved performance for complex number arithmetic, benefiting scientific and engineering applications.

2021, GCC 11.0: optimizations leveraging modern CPU architectures and instruction sets such as AVX-512.

2024, GCC 13.0 (experimental): additional optimizations focused on new CPU features and better handling of edge cases.

Sources: GCC documentation archives; release notes from various GCC versions; [GCC Wiki](https://gcc.gnu.org/wiki/); [Krister Walfridsson's blog](https://kristerw.github.io)

By @reisse - 2 months
As always, -funsafe-math-optimizations are neither fun nor safe
By @gary_0 - 2 months
(2022)

I remember this one because of this part:

> Unbeknownst to me, even with --dry-run pip will execute arbitrary code found in the package's setup.py. In fact, merely asking pip to download a package can execute arbitrary code (see pip issues 7325 and 1884 for more details)! So when I tried to dry-run install almost 400K Python packages, hilarity ensued. I spent a long time cleaning up the mess, and discovered some pretty poor setup.py practices along the way. But hey, at least I got two free pictures of anime catgirls, deposited directly into my home directory. Convenient!

By @pimlottc - 2 months
What are the real-world consequences of having the floating point behavior changed? The article mentions some types of iterative algorithms, but it's not clear how often those would be used. I'd be interested to know what actual issues arose in any downstream projects.
By @jhj - 2 months
The original sin here is that 1980s designs carry over: the processor retains FP unit state, rather than each instruction indicating which subnormal flush mode (or rounding mode, or whatever) it wishes to use, with no retained FP unit state. See also: the IEEE FP exception design (e.g., signaling NaNs) causing havoc with SIMD, deep pipelining, out-of-order execution, etc.
By @Arech - 2 months
That's very important info for some pythonistas, thanks for sharing!
By @dhosek - 2 months
The bit about the behavior propagating through shared libraries is yet another reason to prefer static linking.
By @sulam - 2 months
<dang>, this is (2022).
By @bjornsing - 2 months
Is there a simple way to check if my Python script is affected? Because I guess numpy only complains if the FPU has been screwed up before it loads (not if some other package loaded later does it)?
By @mguijarr - 2 months
Contrary to what is said in the article, gevent has the fix; it was merged as https://github.com/gevent/gevent/commit/e29bd2ee11ca5f78cc9c... two years ago.
By @im3w1l - 2 months
After thinking about this a bit, my conclusion is that it's a deficiency in the calling convention. It should specify the floating point flags as either caller saved or callee saved.
By @SillyUsername - 2 months
Well that explains the LLMs getting their answers wrong ;)
By @JSDevOps - 2 months
I just wanted to say—this is an absolutely brilliant write-up. Thanks so much for all your hard work.
By @joshlk - 2 months
> I have never met a scientist who can resist the lure of fast-but-dangerous math

This made me chuckle

By @csours - 2 months
Kind of related topic: Using libraries is a pain - Is there any evidence that people are using LLM coding tools to write library functions instead of importing libraries? How would we tell? Can you think of second order effects from this?
By @throwaway81523 - 2 months
This is worth reading all the way through. Ouch.