Fixed-point arithmetic as a replacement for soft floats
Fixed-point arithmetic offers advantages over floating-point in embedded systems, improving performance and reducing binary size. It enables efficient fractional calculations on platforms without floating-point hardware, gaining speed without sacrificing accuracy.
The blog post discusses the advantages of using fixed-point arithmetic as a substitute for floating-point arithmetic in embedded systems, particularly on platforms like the Cortex-M0+ that lack hardware support for floating-point operations. Fixed-point types, defined in ISO TR 18037, represent fractional values as scaled integers, so arithmetic can be performed with standard integer instructions.

This approach roughly doubled the speed of the author's classification algorithms while also reducing binary size, without compromising accuracy. The author highlights the limitations of floating-point arithmetic in embedded contexts, where software emulation is costly.

The post also details the implementation of fixed-point types in a Google project, including the adaptation of mathematical functions like expf and sqrtf to work with fixed-point types. The author notes that careful normalization techniques can prevent overflow and maintain precision, ultimately enhancing performance. The transition to fixed-point arithmetic not only streamlines operations but also aligns with the constraints of embedded systems, making it a viable alternative to traditional floating-point methods.
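The scaled-integer idea can be sketched in plain C with a hypothetical Q16.16 type (16 integer bits, 16 fractional bits). This illustrates the mechanism behind the fixed-point types the post describes, not the project's actual code:

```c
#include <stdint.h>

/* A minimal Q16.16 fixed-point sketch: 16 integer bits, 16 fractional
 * bits, stored in a plain int32_t. Illustrative only; names and format
 * are assumptions, not taken from the article. */
typedef int32_t q16_16;

#define Q16_ONE (1 << 16)

static inline q16_16 q16_from_int(int x)         { return (q16_16)(x << 16); }
static inline q16_16 q16_add(q16_16 a, q16_16 b) { return a + b; }

/* Multiplication widens to 64 bits so the intermediate product cannot
 * overflow, then shifts back down to the Q16.16 scale. */
static inline q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

/* Division pre-shifts the dividend up by 16 bits for the same reason. */
static inline q16_16 q16_div(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a << 16) / b);
}
```

Every operation above compiles to ordinary integer instructions, which is why the approach pays off on cores like the Cortex-M0+ that would otherwise trap into a soft-float library.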
- Fixed-point arithmetic can replace floating-point arithmetic in embedded systems for better performance.
- Using fixed-point types can lead to significant speed improvements and reduced binary size.
- The implementation of mathematical functions for fixed-point types is crucial for successful integration.
- Normalization techniques help prevent overflow and maintain precision in calculations.
- Fixed-point types are particularly beneficial for platforms lacking hardware support for floating-point operations.
Related
Do not taunt happy fun branch predictor
The author shares insights on optimizing AArch64 assembly code by reducing jumps in loops. Replacing ret with br x30 improved performance, leading to an 8.8x speed increase. Considerations on branch prediction and SIMD instructions are discussed.
Floating Point Math
Floating point math in computing can cause inaccuracies in decimal calculations due to binary representation limitations. Different programming languages manage this with varying precision, affecting results like 0.1 + 0.2.
Neo Geo Dev: Fixed Point Numbers
The article explains fixed point numbers for the Neo Geo, enabling decimal-like calculations using integers. It discusses their advantages, drawbacks, and practical coding examples for game development, emphasizing precision management.
Creating invariant floating-point accumulators
The blog addresses challenges in creating invariant floating-point accumulators for the astcenc codec, emphasizing the need for consistency across SIMD instruction sets and the importance of adhering to IEEE754 rules.
What is the best pointer tagging method?
The article analyzes pointer tagging methods for optimizing memory and performance, noting that practical performance varies by architecture, compiler optimizations are crucial, and untagged pointers often outperform tagged ones.
https://github.com/PetteriAimonen/libfixmath
I saw this getting forked for a custom SDK for the PS1 (which doesn't have an FPU).
In the fixed-point code I used straight C but wrote functions for sin, cos, and reciprocal square root.
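A sketch of what such a hand-rolled fixed-point sine might look like (hypothetical, not the commenter's code): angles as 32-bit binary angles (a full turn = 2^32, common in console-era code), output in Q16.16, using quarter-wave symmetry plus a cheap parabolic approximation that is exact at the axis crossings and within a few percent elsewhere:

```c
#include <stdint.h>

typedef int32_t q16_16;   /* Q16.16: value = raw / 65536.0 */
#define Q16_ONE (1 << 16)

/* Game-grade sine: approximate each quarter wave with u*(2 - u),
 * which hits 0 and 1 exactly at the endpoints. */
static q16_16 q16_sin(uint32_t angle)
{
    uint32_t quad = angle >> 30;          /* which quarter turn (0..3) */
    uint32_t frac = (angle << 2) >> 16;   /* position in quarter, Q16 in [0,1) */

    if (quad & 1) frac = Q16_ONE - frac;  /* mirror 2nd and 4th quarters */

    /* u*(2 - u) in Q16.16: widen for the multiply, shift back */
    q16_16 s = (q16_16)(((int64_t)frac * (2 * Q16_ONE - frac)) >> 16);

    return (quad & 2) ? -s : s;           /* negate the lower half turn */
}
```

A table plus interpolation, or a higher-order polynomial, trades memory or cycles for accuracy; the quarter-wave folding stays the same either way.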
I can't see getting just a 2x improvement over soft float, and using LLVM-specific features and modifying libraries eliminates portability.
All irrational operations (sqrt, sin, cos, and so on) return Num in a consistent way.