June 23rd, 2024

SquirrelFS: Using the Rust compiler to check file-system crash consistency

The paper introduces SquirrelFS, a crash-safe file system that uses Rust's typestate pattern to enforce the order of operations at compile time. Its crash-consistency mechanism, Synchronous Soft Updates, maintains the order of metadata updates to ensure crash safety. Because correctness guarantees are encoded in the types themselves, SquirrelFS needs no separate proofs: crash consistency is checked during compilation. Evaluations show SquirrelFS performs similarly to or better than NOVA and WineFS.

Read original article

The paper titled "SquirrelFS: using the Rust compiler to check file-system crash consistency" introduces a novel approach to developing crash-safe file systems for persistent memory. By leveraging Rust's typestate pattern, the authors enforce a specific order of operations at compile time. They propose a crash-consistency mechanism called Synchronous Soft Updates, which focuses on maintaining the order of updates to file-system metadata to ensure crash safety. SquirrelFS, the file system built using this approach, embeds correctness guarantees into its typestate, eliminating the need for separate proofs. Compilation of SquirrelFS quickly verifies crash consistency, with successful compilation indicating safety and errors pinpointing bugs for correction. Comparative evaluations against existing file systems like NOVA and WineFS demonstrate that SquirrelFS achieves similar or better performance across various benchmarks and applications.
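
To make the typestate idea concrete, here is a minimal Rust sketch of the pattern. The state names and the PmObject type are invented for illustration and are not SquirrelFS's actual API: each step of a persistent-memory update returns a value in a new type-level state, so code that commits an object without flushing and fencing it first simply does not compile.

    // Minimal typestate sketch (illustrative only, not SquirrelFS's types):
    // each update step consumes the old state and returns the next one.
    use std::marker::PhantomData;

    struct Dirty;
    struct InFlight;
    struct Durable;

    struct PmObject<State> {
        bytes: [u8; 64],
        _state: PhantomData<State>,
    }

    impl PmObject<Dirty> {
        fn write(bytes: [u8; 64]) -> Self {
            PmObject { bytes, _state: PhantomData }
        }
        // Issue a cache-line write-back; the data is in flight, not yet durable.
        fn flush(self) -> PmObject<InFlight> {
            // (real code would emit CLWB/CLFLUSHOPT here)
            PmObject { bytes: self.bytes, _state: PhantomData }
        }
    }

    impl PmObject<InFlight> {
        // A store fence orders the flush before everything that follows.
        fn fence(self) -> PmObject<Durable> {
            // (real code would emit SFENCE here)
            PmObject { bytes: self.bytes, _state: PhantomData }
        }
    }

    // Only a durable object may be linked into persistent metadata.
    fn commit(obj: PmObject<Durable>) {
        let _ = obj.bytes;
    }

    fn main() {
        let obj = PmObject::write([0u8; 64]);
        // commit(obj);              // compile error: expected PmObject<Durable>
        commit(obj.flush().fence()); // the only order the compiler accepts
    }

In this style, a successful build is itself evidence that every update path performs its steps in the required order, which is the sense in which compilation checks crash consistency.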

Related

Eight million pixels and counting: improving texture atlas allocation in Firefox (2021)

Improving texture atlas allocation in WebRender with the guillotiere crate reduces texture memory usage. The guillotine algorithm was replaced due to fragmentation issues, leading to a more efficient allocator. Visualizing the atlas in SVG aids debugging. Rust's simplicity and Cargo fuzz testing are praised for code development and robustness. Enhancements in draw call batching and texture upload aim to boost performance on low-end Intel GPUs by optimizing texture atlases.
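
As a rough illustration of the allocator the post describes, here is a small example using the guillotiere crate's published API (the atlas and allocation sizes are made up for the example):

    // Sketch of texture-atlas allocation with the guillotiere crate.
    use guillotiere::{size2, AtlasAllocator};

    fn main() {
        // One 2048x2048 texture atlas; sizes chosen arbitrarily here.
        let mut atlas = AtlasAllocator::new(size2(2048, 2048));

        // Each successful allocation carries an id and the rectangle it landed in.
        let a = atlas.allocate(size2(256, 128)).expect("atlas full");
        let b = atlas.allocate(size2(512, 512)).expect("atlas full");
        println!("a at {:?}, b at {:?}", a.rectangle, b.rectangle);

        // Deallocating by id lets the allocator merge neighboring free
        // rectangles, which counters the fragmentation issues the post
        // describes with the original guillotine algorithm.
        atlas.deallocate(a.id);
        atlas.deallocate(b.id);
    }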

Memory sealing for the GNU C Library

The GNU C Library is gaining support for the Linux mseal() system call, which hardens a process by preventing further changes to sealed parts of its address space. Adhemerval Zanella's patch series adds the wrapper, improving protection against memory manipulation in upcoming releases.
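
For orientation, a hedged sketch of what sealing looks like from user space, written in Rust against the raw system call (462 is mseal's x86_64 syscall number, added in Linux 6.10; the glibc wrapper this patch series adds would replace the raw syscall(2) invocation):

    // Seal an anonymous mapping so later mprotect/munmap/mremap on it fail.
    // Assumes Linux 6.10+ and the libc crate; 462 is x86_64-specific.
    use libc::{c_long, mmap, mprotect, syscall, MAP_ANONYMOUS, MAP_FAILED,
               MAP_PRIVATE, PROT_READ, PROT_WRITE};

    const SYS_MSEAL: c_long = 462; // check your architecture's syscall table

    fn main() {
        unsafe {
            let len: usize = 4096;
            let addr = mmap(std::ptr::null_mut(), len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            assert_ne!(addr, MAP_FAILED);

            // Seal it: the kernel now rejects changes to this mapping.
            let ret = syscall(SYS_MSEAL, addr, len, 0usize);
            assert_eq!(ret, 0);

            // A permission change on a sealed range fails with EPERM.
            let r = mprotect(addr, len, PROT_READ);
            assert_eq!(r, -1);
        }
    }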

Memory Model: The Hard Bits

This chapter explores OCaml's memory model, covering relaxed-memory behavior, compiler optimizations, weakly consistent memory, and the DRF-SC guarantee (data-race-free programs behave sequentially consistently). It clarifies what counts as a data race, classifies memory locations, and simplifies reasoning for programmers. Examples highlight data-race scenarios and atomicity.
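
The chapter's examples are in OCaml; as a rough analogue in Rust's atomics vocabulary, the classic message-passing case looks like this (the Release/Acquire pair is what rules out the racy outcome on weakly ordered hardware):

    // Message passing: without the Release/Acquire pair, a weakly ordered
    // machine could let the reader see FLAG set before DATA's write.
    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static FLAG: AtomicBool = AtomicBool::new(false);

    fn main() {
        let writer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            FLAG.store(true, Ordering::Release); // publishes the DATA store
        });
        let reader = thread::spawn(|| {
            while !FLAG.load(Ordering::Acquire) {} // synchronizes with Release
            assert_eq!(DATA.load(Ordering::Relaxed), 42); // guaranteed to pass
        });
        writer.join().unwrap();
        reader.join().unwrap();
    }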

Copy-on-Write Performance and Debugging

The article discusses Copy-on-Write (CoW) linking in Dev Drive for Windows systems, enhancing performance during repo builds. CoW benefits C# projects, with upcoming Windows updates enabling CoW by default for faster builds.

Homegrown Rendering with Rust

Embark Studios is building a creative platform for user-generated content that emphasizes gameplay over graphics. They use Rust for 3D rendering, including the experimental "kajiya" renderer, built as a learning project. The team aims to simplify rendering for user-generated content on top of the Vulkan API, and is working to grow Rust's ecosystem for GPU programming.

5 comments
By @metadat - 5 months
Does this have a practical use? It's definitely a novel application of a property of Rust. It's also been my impression that filesystem consistency is largely a solved problem thanks to write-ahead logs (WAL) and the like.

It's nice the authors included a link to the underlying source code in the last paragraph:

https://github.com/utsaslab/squirrelfs
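
For context on the write-ahead-log approach the comment contrasts SquirrelFS with, here is a minimal sketch (file names and record layout are invented for the example): the update is made durable in the log before the data file is touched, so after a crash the record is either replayed or discarded cleanly.

    // Minimal WAL sketch: log first, fsync, then update data in place.
    use std::fs::OpenOptions;
    use std::io::{Seek, SeekFrom, Write};

    fn apply(offset: u64, payload: &[u8; 8]) -> std::io::Result<()> {
        // 1. Append the intended update to the log and force it to disk.
        let mut log = OpenOptions::new().create(true).append(true).open("wal.log")?;
        log.write_all(&offset.to_le_bytes())?;
        log.write_all(payload)?;
        log.sync_all()?; // the log record must be durable before the data write

        // 2. Only now update the data file in place.
        let mut data = OpenOptions::new().create(true).write(true).open("data.bin")?;
        data.seek(SeekFrom::Start(offset))?;
        data.write_all(payload)?;
        data.sync_all()?;

        // 3. Recovery (not shown) replays wal.log records whose data-file
        //    writes may not have completed, then truncates the log.
        Ok(())
    }

    fn main() -> std::io::Result<()> {
        apply(0, b"hello!!\n")
    }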

By @klysm - 5 months
This is definitely part of the future for better storage systems. Too much responsibility is currently in the hands of programmers (like me) to not make any mistakes. The storage layer can be a high consequence place to make a mistake!
By @Shoop - 5 months
I'm guessing that the synchronous update architecture they're using really only makes sense for persistent memory, and that this couldn't easily be adapted to conventional hard drives or SSDs?
By @KennyBlanken - 5 months
This is great but may ultimately be useless, because the storage industry has a long history of not obeying SCSI/IDE/SATA/NVMe commands around flushes and lying to the OS about when data has been committed to disk. You can't trust a drive to write stuff when you tell it to, even when you tell it REALLY NO, SERIOUSLY, WRITE THAT SHIT DOWN NOW...and certainly not in the order you tell it to.

Long ago an Apple engineer told me that one of the reasons Apple sold 68k Macs with Apple-branded SCSI hard drives (and disk utilities that wouldn't work with non-Apple drives) and continued to use Apple-branded drives well into the PowerPC and IDE era was because Apple had been burned by writing operating system / filesystem code that took what drives reported at face value and assumed they were doing what they were supposed to according to the specs for the interface standard they'd supposedly been certified to meet.

Sure, it also enabled markups, but it wasn't just about having a big margin on storage, and there are additional costs involved. It's also why Apple/Quantum drives were often slower - they were actually doing what they were supposed to be doing, whereas everyone else was doing what was fastest to juice reviews, and magazine reviewers were either too cozy with manufacturers, or too lazy/stupid, to actually test whether a drive was doing what it was supposed to be doing.

The groups behind the interface standards? Trademark money printer go brrrrrrrrrrrrrr. Those golf balls and yachts and supercars aren't gonna drive themselves.

And yes, this has persisted into the age of NVMe flash. Power failures with SATA and NVMe flash drives can be a real risky business, because the controller is shifting data every which way: between OS memory (some NVMe drives use host memory for caching at the hardware level, which is real snake-oil salesman shit Barnum would be proud of), drive RAM, SLC-mode short-term storage, and higher-level cells (MLC, QLC, etc.). The controller has to keep track of all of this, in addition to its tables mapping what the OS considers physical blocks to where those blocks are actually written, due to wear leveling.

A power failure with an SSD can potentially brick the drive, effectively permanently.

Do reviewers test for this? Nope. Most of them didn't even realize manufacturers were sending higher-spec NVMe drives out for review and in initial distribution, and then very quickly shifting to much cheaper controllers and flash, or they were in on it and kept their mouths shut so they didn't lose access to review sample hardware.

Do not trust your storage, at all. Fault tolerance, backups, and power protection suitable for the importance of the data...

By @johnisgood - 5 months
How does it compare to NOVA-Fortis, PMFS, Strata, Ziggurat (Rust), SplitFS, and Aerie?