June 24th, 2024

Microsoft shelves its underwater data center

Microsoft has ended its underwater data center experiment despite observing improved server longevity underwater. The company is shifting its focus to other projects, such as AI supercomputers and nuclear ambitions, and will not pursue further underwater efforts.

Microsoft has ended its Project Natick underwater data center experiment, citing a shift in focus. The project, which started in 2013, explored the feasibility of underwater data centers. Microsoft reported fewer server failures in the underwater setup than in land-based servers, attributing the difference to seawater's temperature stability and the inert nitrogen atmosphere protecting the hardware. Despite this success in server longevity, Microsoft has decided not to pursue further underwater data center projects and plans to apply the lessons learned, particularly around energy efficiency and operations, to other cases. While China has initiated its own submerged server project, Microsoft has not indicated any future underwater endeavors. Instead, it is redirecting its efforts toward collaborations such as building a $100 billion AI supercomputer data center with OpenAI and exploring modular nuclear reactors. The decision reflects Microsoft's strategic realignment toward more focused research and development initiatives.

Related

20x Faster Background Removal in the Browser Using ONNX Runtime with WebGPU

Running background removal in the browser with ONNX Runtime, WebGPU, and WebAssembly achieves a 20x speedup, reducing server load, improving scalability, and keeping user data on-device. ONNX models run efficiently with WebGPU support, offering near-real-time performance. IMG.LY aims to make its design tools more accessible and efficient by leveraging these technologies.

Creating New Installation Media for MS-DOS 4.0

Microsoft released the MS-DOS 4.00 source code in 2024, prompting the community to build MS-DOS 4.01 because official installation media were missing. Recreating the installation media posed challenges, including creating stub executables and supplying missing files. Installing from the rebuilt floppies produced mixed results, and various disk images are provided for download. Feedback is welcomed on fabulous.community.

My weekend project turned into a 3 years journey

Anthony's note-taking app journey spans three years, evolving from a secure Markdown tool into a complex Electron/React project with code-execution capabilities. Facing challenges publishing to app stores, he prioritized user feedback and simplicity, opting for a custom online deployment instead.

Merlin Labs aims to test an AI-powered KC-135 within a year

Two startups, Merlin Labs and EpiSci, are collaborating on self-flying tankers and advanced dogfighting AI. They aim to build AI pilots that reduce dependence on human pilots in Air Force missions.

Homegrown Rendering with Rust

Embark Studios is building a creative platform for user-generated content that emphasizes gameplay over graphics. The team uses Rust for 3D rendering, including the experimental "kajiya" renderer built for learning purposes, and aims to simplify rendering for user-generated content on top of the Vulkan API. They also seek to grow Rust's ecosystem for GPU programming.

10 comments
By @tboyd47 - 5 months
> “I’m not building subsea data centers anywhere in the world.” She later added, “My team worked on it, and it worked. We learned a lot about operations below sea level and vibration and impacts on the server. So, we’ll apply those learnings to other cases.”

It doesn't sound like they really discontinued the project. It sounds like it got moved to another team.

By @gorgoiler - 5 months
The article alludes to the technique for dealing with failures inside a sealed underwater container: you never actually replace hardware but instead work around failures by reprovisioning services onto different nodes. Over time more and more nodes will fail until the container becomes exhausted and needs replacing.

That raises the question: what are the narrowest points of failure in a system like this? For example, it would be naive to have 140 fungible compute nodes if they all went through a single networking switch. Node workloads can be moved, but a switch failure makes the entire container useless.

The same principle applies physically: marine vessels have sealed bulkheads so that a hull breach at one end doesn't flood the other.

How do these vessels fan out all the other points of failure, and what are the error rates in those components?
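
A minimal sketch of that failure-domain trade-off, using hypothetical failure probabilities (the component-level Natick figures aren't public): a Monte Carlo estimate of how long a sealed container stays useful when its fungible nodes hang off one shared switch versus being split across several.

```python
import random

# Hypothetical annual failure probabilities, for illustration only --
# these are not actual Project Natick figures.
NODE_FAIL_P = 0.05      # chance a compute node dies in a given year
SWITCH_FAIL_P = 0.02    # chance a switch dies in a given year
NODES = 140             # fungible compute nodes in the sealed container
MIN_USEFUL = 70         # container counts as "exhausted" below this
TRIALS = 2_000

def container_lifetime(num_switches: int) -> int:
    """Years until too few nodes are reachable to be worth keeping.

    Nodes are split evenly across switches, and a node is only
    reachable while its switch is alive -- so a single shared switch
    is a single point of failure for the whole container.
    """
    nodes = [NODES // num_switches] * num_switches
    switch_ok = [True] * num_switches
    years = 0
    while sum(n for n, ok in zip(nodes, switch_ok) if ok) >= MIN_USEFUL:
        years += 1
        for i in range(num_switches):
            # Each surviving node independently fails this year.
            nodes[i] -= sum(random.random() < NODE_FAIL_P
                            for _ in range(nodes[i]))
            if switch_ok[i] and random.random() < SWITCH_FAIL_P:
                switch_ok[i] = False
    return years

for switches in (1, 2, 4):
    mean = sum(container_lifetime(switches) for _ in range(TRIALS)) / TRIALS
    print(f"{switches} switch(es): mean useful life ~{mean:.1f} years")
```

Varying the constants shows which component dominates the container's useful life, which is exactly the "narrowest point of failure" question.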

By @cm2187 - 5 months
Wouldn't it be easier to just submerge some big heat exchanger?
By @yelnatz - 5 months
That's an 87.5% improvement in reliability compared to land-based data centers.

I wonder what factors led them to discontinue the project.
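
For reference, that figure follows from the failure data Microsoft reported for Project Natick, where the submerged servers failed at roughly one-eighth the rate of their land-based counterparts:

```latex
\text{relative failure rate} = \frac{1}{8}
\qquad\Longrightarrow\qquad
\text{reduction in failures} = 1 - \frac{1}{8} = 0.875 = 87.5\%
```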

By @ChrisArchitect - 5 months
Some more discussion about the project in 2020 when they presented some of the findings:

https://news.ycombinator.com/item?id=24470954

By @kulor - 5 months
Without knowing the security measures in place, it strikes me that a submersible data centre would be a ripe target for any determined nefarious actor looking to destroy or infiltrate it.
By @oldpersonintx - 5 months
such a data center would have been under a lot of pressure to perform