Predicting the Future of Distributed Systems
Object storage is increasingly integrated into transactional and analytical systems, enhancing reliability. Organizations face challenges in adopting new programming models due to perceived investment risks and uncertainty about technology longevity.
The evolution of distributed systems is marked by the integration of object storage into transactional and analytical frameworks, which is seen as a step-change in value. However, the adoption of new programming models remains challenging due to the perception of high investment risks and the difficulty of distinguishing one-way-door from two-way-door decisions. Object storage has matured and is increasingly used across a wide range of systems, offering features that enhance reliability and simplicity. The future of programming models may involve extracting code from applications into infrastructure, allowing for better management and security. This transition could lead to more portable and secure business logic, ultimately making updates and maintenance easier. The landscape is characterized by a plethora of emerging technologies, but uncertainty about their longevity and effectiveness complicates decision-making for organizations.
- Object storage is becoming integral to both transactional and analytical systems.
- The distinction between one-way-door and two-way-door decisions is crucial for effective technology adoption.
- New programming models may shift code from applications to infrastructure for better management.
- The future of distributed systems is marked by innovation but also uncertainty regarding technology longevity.
- Organizations face challenges in rationalizing investments in new technologies due to perceived risks.
Related
A Eulogy for DevOps
DevOps, introduced in 2007 to improve development and operations collaboration, faced challenges like centralized risks and communication issues. Despite advancements like container adoption, obstacles remain in managing complex infrastructures.
Is it time to version observability?
The article outlines the transition from Observability 1.0 to 2.0, highlighting structured logs for better data analysis, improved debugging, and enhanced software development, likening its impact to virtualization.
Ask HN: Pragmatic way to avoid supply chain attacks as a developer
The article addresses the security risks of managing software dependencies, highlighting a specific incident of a compromised package. It debates the effectiveness of containers versus VMs and seeks practical solutions.
Don't Believe the Big Database Hype, Stonebraker Warns
Mike Stonebraker critiques the hype around new database technologies, asserting many are not beneficial, while emphasizing the enduring relevance of the relational model and SQL amidst evolving cloud architectures.
Continuous reinvention: A brief history of block storage at AWS
Marc Olson discusses the evolution of Amazon Web Services' Elastic Block Store (EBS) from basic storage to a system handling over 140 trillion operations daily, emphasizing the need for continuous optimization and innovation.
- Several commenters emphasize the importance of industry adoption of specific APIs to facilitate the transition to distributed systems.
- There is a recognition that economic factors, rather than purely technological advancements, drive the dominance of object storage solutions like S3.
- Some contributors express concerns about the marketing and recognition of new programming models and tools, highlighting the challenges faced by developers.
- Discussions around the potential for smarter storage solutions and the integration of synchronous and asynchronous APIs are prevalent.
- Commenters also touch on the future of AI in infrastructure, suggesting a shift towards more abstracted and user-friendly programming paradigms.
The author gets a lot of things right, but clearly doesn't know the space that well, since people have been building things along these lines for years. And making vague commentary instead of describing the nitty-gritty doesn't inspire much confidence.
I work on one such language/tool called mgmt config, but I have had virtually no interest and/or skill in marketing it. TBQH, I'm disenchanted by the fact that to get any recognition, it seems you need VCs, a three-year timeline, short-term goals, and a plan to be done by then or move on.
If you're serious about future infra, then it's all here:
https://github.com/purpleidea/mgmt/
Looking for coding help for some of the harder bits that people might wish to add, and for people to take it into production and find issues that we've missed.
The S3 API (object storage) is the accepted storage API, but you do not need AWS (but they are very good at this).
The Kafka API is the accepted stream/buffer/queue API, but you do not need Confluent.
SQL is the query language, but you do not need a relational database.
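The point of these three lines is that the API, not the vendor, is the contract: code written against the API's verbs doesn't care which backend sits behind them. As a toy illustration (the class and method names below are invented for this sketch, mirroring the shape of S3's verbs rather than any real client library), even an in-memory dict can stand in for an object store:

```python
class ObjectStore:
    """Toy in-memory backend exposing S3-style verbs (put/get/list/delete).

    Illustrative only: any S3-compatible backend (AWS, MinIO, Ceph, or
    this dict) could sit behind the same calls.
    """

    def __init__(self):
        self._buckets = {}  # bucket name -> {key: bytes}

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self._buckets.setdefault(bucket, {})[key] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._buckets[bucket][key]

    def list_objects(self, bucket: str, prefix: str = "") -> list:
        # S3 lists keys lexicographically, optionally filtered by prefix
        return sorted(k for k in self._buckets.get(bucket, {})
                      if k.startswith(prefix))

    def delete_object(self, bucket: str, key: str) -> None:
        self._buckets.get(bucket, {}).pop(key, None)


store = ObjectStore()
store.put_object("demo", "logs/2024/a.txt", b"hello")
store.put_object("demo", "logs/2024/b.txt", b"world")
print(store.list_objects("demo", prefix="logs/"))
```

The same logic applies to Kafka's produce/consume protocol and to SQL as a query surface: once the interface is the standard, the implementation behind it becomes a swappable, two-way-door decision.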
I was kinda expecting BigQuery to do this under the hood, but it seems like they don't, which is a shame. BigQuery isn't faster than, say, Trino on GCS, even though Google could do some major optimisations here.
From the 70s through the 90s or 00s everything was file system-based, and it was just assumed that the best way to store data in a distributed system - even a globally-distributed one - was some sort of distributed file system (e.g. Andrew File System, or research projects like OceanStore).
Nowadays the file system holds applications and configuration, but applications mostly store data in databases and object stores. In distributed systems this is done almost exclusively through system-specific network connections (e.g. port 3306 to MySQL, or HTTP for S3) rather than OS-level mounting of a file system.
(not counting HPC, where distributed file systems are used to preserve the developer look and feel of early non-distributed HPC systems)
It works great for stateless things, but not so great for stateful things. I guess this plays into state being persisted in object storage or DBs, which allows the application itself to be stateless.
In this post it is attributed to Jeff Bezos, but the phrase was popular in the Pacific Northwest before his rise.
See also the "Linux kernel management style" document that's been in the kernel since forever: https://docs.kernel.org/6.1/process/management-style.html
This was such a well-put comment that it truly made me grok the entire article in just this one statement.
---
Infrastructure needs to be invisible, and that is where the future of AI-enabled orchestration and abstraction will allow development to be more poetry than code: we describe complex logic paths and workflows in a language of intent, and all the components required to accomplish the desired outcome become a reality much more quickly and elegantly.
The real challenge ahead is the divide between those who have the capability and power of all the AI tools available to them, and those who are subjugated by those who do.
For example, an individual can build a lot with the current state of the available tool universe... but a more sophisticated and well-funded organization will have a lot more potential capability.
What I am really interested to know is whether there is a dark, dystopian cyberpunk AI underworld happening yet.
What's the state of BadActor/BigCorpo/BigSpy's capability and covert actions currently?
While we are distracted by AI clip art and celebrity-voice squabbles, top AI voices - people who founded organizations for alignment/governance/humane AI and warned of catastrophe - are seemingly being ignored. So what defines the state of things?
But yeah - extracting the code and letting logic be handled as something portable, clonable, and easily refactorable is where we are already headed. It's amazing and terrifying at the same time.
I'm thankful for all my cyberpunk fantasy reading, thinking, and imagining, and for my tiny part in the overall evolution of today's tech world - having the opportunity to be here, to work with and build these things, and now to see the birth of AI and use it daily in my real life.
Such an amazing moment in Human History to be here through this.