September 16th, 2024

The Byte Order Fallacy

The article argues that a machine's native byte order is irrelevant to most programmers, advocating code that handles data streams independently of it and avoiding the complexity and bugs introduced by practices like conditional byte swapping.

Read original article

The article discusses the misconception surrounding byte order in programming, particularly in relation to data processing. It argues that the native byte order of a computer is largely irrelevant for most programmers, as the focus should be on the byte order of the data being processed. The author emphasizes that code should be written to handle data streams independently of the machine's byte order, using simple extraction methods that are portable across different architectures. The article critiques common practices that involve byte swapping and conditional compilation based on byte order, highlighting that such approaches often lead to unnecessary complexity and bugs. The author cites examples, including issues with Adobe Photoshop, to illustrate how byte order mismanagement can complicate software functionality. Ultimately, the piece advocates for a clearer understanding of byte order, suggesting that programmers should avoid overcomplicating their code with unnecessary byte order checks.
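A minimal C sketch of that style of extraction (an illustrative sketch, not code quoted from the article; the function names are mine). The same code runs unchanged on machines of either byte order, because it only refers to the byte order of the data stream:

```c
#include <stdint.h>

/* Illustrative sketch: data points at 4 bytes of a stream; the stream's
 * byte order, not the machine's, decides which expression to use. */
uint32_t from_le(const unsigned char *data) {   /* stream is little-endian */
    return (uint32_t)data[0]       | (uint32_t)data[1] << 8
         | (uint32_t)data[2] << 16 | (uint32_t)data[3] << 24;
}

uint32_t from_be(const unsigned char *data) {   /* stream is big-endian */
    return (uint32_t)data[3]       | (uint32_t)data[2] << 8
         | (uint32_t)data[1] << 16 | (uint32_t)data[0] << 24;
}
```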

- The native byte order of a computer is generally irrelevant for data processing.

- Code should be designed to handle data streams independently of machine byte order.

- Common practices like byte swapping can introduce complexity and bugs.

- Proper handling of byte order can lead to simpler, more portable code.

- Mismanagement of byte order can cause significant software issues, as illustrated by real-world examples.

Related

The Byte Order Fiasco

Handling endianness in C/C++ poses challenges; the article emphasizes correct integer deserialization to prevent undefined behavior. Adherence to the C standard is crucial to avoid unexpected compiler optimizations. Code examples demonstrate proper deserialization techniques using masking and shifting for portability across systems. Mastery of these concepts is vital for robust C code, even though APIs for byte swapping are available.
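The kind of pitfall that article turns on is integer promotion: in an expression like `p[7] << 56`, the byte is promoted to a 32-bit `int` before the shift, which is undefined behavior. A minimal sketch of the fix (masking and widening before shifting), assuming untrusted little-endian input and a hypothetical helper name:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: decode a little-endian 64-bit integer from untrusted
 * bytes. Converting each byte to unsigned and widening to uint64_t before
 * shifting keeps every shift inside an unsigned 64-bit type, so nothing
 * overflows a promoted int. */
uint64_t load_le64(const char *p) {
    uint64_t v = 0;
    for (size_t i = 0; i < 8; i++)
        v |= (uint64_t)((unsigned char)p[i] & 0xFF) << (8 * i);
    return v;
}
```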

Beyond Clean Code

The article explores software optimization and "clean code," emphasizing readability versus performance. It critiques the belief that clean code equals bad code, highlighting the balance needed in software development.

Good programmers worry about data structures and their relationships

Good programmers prioritize data structures over code, as they enhance maintainability and reliability. Starting with data design simplifies complexity, aligning with Unix philosophy and aiding senior engineers in system documentation.

Do low-level optimizations matter? Faster quicksort with cmov (2020)

The article emphasizes the importance of low-level optimizations in sorting algorithms, highlighting how modern CPU features and minimizing conditional branches can enhance performance, particularly with the new `swap_if` primitive.
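As a rough illustration of the idea (a sketch of the pattern, not the article's code), a conditional swap can be written so that the condition feeds data selection rather than control flow, which lets the compiler emit `cmov` instead of a branch:

```c
/* Illustrative sketch of a branchless conditional swap: both assignments are
 * selects on `cond`, so an optimizing compiler can lower them to cmov
 * instead of a conditional jump. */
static inline void swap_if(int cond, int *a, int *b) {
    int old_a = *a, old_b = *b;
    *a = cond ? old_b : old_a;
    *b = cond ? old_a : old_b;
}
```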

Byte Ordering: On Holy Wars and a Plea for Peace (1980)

The document explains floating-point number storage, emphasizing consistent bit order's importance. It discusses Little-Endian and Big-Endian systems, their implications for data processing, and advocates for unified data representation to reduce compatibility issues.

19 comments
By @LegionMammal978 - 5 months
IME, there's one big thing that often keeps my programs from being unaffected by byte order: wanting to quickly splat data structures into and out of files, pipes, and sockets, without having to encode or decode each element one-by-one. The only real way to make this endian-independent is to have byte-swapping accessors for everything when it's ultimately produced or consumed, but adding all the code for that is very tedious in most languages. One can argue that handling endianness is the responsible thing to do, but it just doesn't seem worthwhile when I practically know that no one will ever run my code on a big-endian processor.
By @iscoelho - 5 months
If you are using C/C++ for any new app, there is a possibility you are writing code that has a performance requirement.

- mmap/io_uring/drivers and additional "zero-copy" code implementations require consideration about byte order.

- filesystems, databases, network applications can be high throughput and will certainly benefit from being zero-copy (with benefits anywhere from +1% to +2000% in performance.)

This is absolutely not "premature optimization." If you're a C/C++ engineer, you should know off the top of your head how many cycles syscalls & memcpys cost. (Spoiler: They're slow.) You should evaluate your performance requirements and decide if you need to eliminate that overhead. For certain applications, if you do not meet the performance requirements, you cannot ship.

By @chasil - 5 months
TCP/IP is big-endian, which is likely the largest footprint for these concerns.

"htonl, htons, ntohl, ntohs - convert values between host and network byte order"

The cheapest big-endian modern device is a Raspberry Pi running a NetBSD "eb" release, for those who want to test their code.

https://wiki.netbsd.org/ports/evbarm/
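A typical use of those functions, sketched with illustrative names (the buffer handling here is my own, not from the man page):

```c
#include <arpa/inet.h>   /* htonl: host byte order -> network (big-endian) */
#include <stdint.h>
#include <string.h>

/* Illustrative sketch: write a 32-bit length prefix in network byte order. */
void write_len_prefix(uint8_t *out, uint32_t len) {
    uint32_t be_len = htonl(len);         /* convert once, at the boundary */
    memcpy(out, &be_len, sizeof be_len);  /* copy the 4 converted bytes */
}
```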

By @rwmj - 5 months
Unless you're dealing with binary data, in which case byte order matters very much, and if you forget to convert it you're causing a world of pain for someone.

He even has an example where he just pushes the problem off to someone else: "if the people at Adobe wrote proper code to encode and decode their files". Yeah, hope they weren't ignoring byte order issues.

By @genpfault - 5 months
(2012)

Original thread w/104 comments:

https://news.ycombinator.com/item?id=3796378

By @AstralStorm - 5 months
Really, except for networking (including, say, Bluetooth), nobody is big-endian anymore. So how about just not leaking that from the network layer.

And do not define any data format to be big endian anymore. Define it as little endian (do not leave it undefined) and everyone will be happy.

By @Laremere - 5 months
This is a reasonable way to do things, and I've used it before. However I just used Zig's method here, and like it a lot: https://ziglang.org/documentation/master/std/#std.io.Reader....

Given a reader (file, network, and buffers can all be turned into readers), you can call readInt. It takes the type you want and the endianness of the encoding. It's easy to write, it documents itself, and it's highly efficient.

By @ultrahax - 5 months
As a games coder I was glad when the xbox 360 / ps3 era came to an end; getting big endian clients talking to little endian servers was an endless source of bugs.
By @benlivengood - 5 months
The other case where it matters is SIMD instructions, where you're serializing or deserializing multiple fields at once. But SIMD operations are usually architecture-specific to begin with, so if you shuffle bytes into and out of the native packed formats, the shuffles are tied to the endianness of that format, and you can forget about byte order outside of those shuffle transformations.
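For example (a sketch assuming x86 with SSSE3; other ISAs have their own shuffle/permute instructions), swapping the byte order of four packed 32-bit lanes at once:

```c
#include <tmmintrin.h>  /* SSSE3: _mm_shuffle_epi8 */

/* Illustrative sketch: reverse the byte order of each of the four 32-bit
 * lanes in v. The mask gives, for each output byte, the index of the input
 * byte to take. */
static __m128i bswap32x4(__m128i v) {
    const __m128i mask = _mm_setr_epi8(3, 2, 1, 0, 7, 6, 5, 4,
                                       11, 10, 9, 8, 15, 14, 13, 12);
    return _mm_shuffle_epi8(v, mask);
}
```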
By @_nalply - 5 months
What he said: if you read bytes with some byte order, you compose them yourself correctly; no byte swapping, just reading byte by byte and converting the bytes to the number value you need. The architecture's byte order stays implicit as long as you use the architecture's tools to convert the bytes.

Rust, for example, has from_be_bytes(), from_le_bytes() and from_ne_bytes() methods for the number primitives u16, i16, u32, and so on. They all take a byte array of the correct length, interpret it as big-, little- or native-endian, and convert it to the number.

The first two methods work fine on all architectures, and that's what this article is about.

The third method, however, is architecture-dependent and should not be used for network data, because it would behave differently on different machines, which is exactly what you don't want. In fact, let me cite this part from the documentation. It's very polite but true.

> As the target platform’s native endianness is used, portable code likely wants to use from_be_bytes or from_le_bytes, as appropriate instead.

By @fracus - 5 months
I don't like these ambiguous titles. From the title I thought I was going to read that byte order doesn't matter, when in fact the title should be "a computer's byte order is irrelevant to high-level languages". At least state the fallacy in unambiguous terms in the first sentence. In any case, it was an interesting read.
By @nativeit - 5 months
> If you wrote it on a PC and tried to read it on a Mac, though, it wouldn't work unless back on the PC you checked a button that said you wanted the file to be readable on a Mac. (Why wouldn't you? Seriously, why wouldn't you?)

As a non-SWE, whenever I see checkboxes to enable options that maximize compatibility, I often assume there’s an implicit trade-off, so if it isn’t checked by default, I don’t enable such things unless strictly necessary. I don’t have any solid reason for this, it’s just my intuition. After all, if there were no good reasons not to enable Mac compatibility, why wouldn’t it be the default?

Edit: spelling error with “implicit”

By @e4m2 - 5 months
Be aware that if you actually want to do as the article prescribes, don't just copy and paste -- you can't take anything at face value in C: https://news.ycombinator.com/item?id=31718292.
By @wmf - 5 months
He's right that you shouldn't use ifdefs, but I think a macro like le32toh() is far clearer and more concise than a bunch of shifts and ors.
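For instance (assuming glibc's <endian.h>; the BSDs provide these macros in <sys/endian.h> instead):

```c
#include <endian.h>   /* le32toh on glibc; BSDs keep it in <sys/endian.h> */
#include <stdint.h>
#include <string.h>

/* Illustrative sketch: read a little-endian 32-bit value from a buffer and
 * return it in host byte order. */
uint32_t read_le32(const unsigned char *p) {
    uint32_t raw;
    memcpy(&raw, p, sizeof raw);  /* memcpy avoids unaligned-access issues */
    return le32toh(raw);          /* no-op on little-endian hosts, byte swap otherwise */
}
```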

Also, a lot of comments in this thread have nothing to do with the article and appear to be responses to some invisible strawman.

By @nuancebydefault - 5 months
The byte order matters in all cases where there is I/O, be it files, network streams, inter-chip communication, ... For data that stays on the same processor, or for files that are only accessed by processors of the same endianness, there really is no issue, even when doing bit manipulation.
By @eternityforest - 5 months
If Network Byte Order wasn't a thing, we could all just pretend big endian doesn't exist outside of mainframes.
By @wakawaka28 - 5 months
Characters are not necessarily 8 bits. So you need to do a bit more to have true portability.
By @wiredfool - 5 months
Unless you’re writing code to decode image file formats.