January 21st, 2025

Why AI reminds me of cloud computing

The article compares the evolution of AI to cloud computing, noting misconceptions, the uncertain future of AI, the impact on labor, and unresolved legal challenges regarding copyright and data usage.

The article discusses the parallels between the evolution of artificial intelligence (AI) and cloud computing, highlighting that both fields have experienced significant hype and misunderstanding. The author reflects on early misconceptions about cloud computing, such as misconceptions about its utility and security advantages, and suggests that similar misinterpretations are occurring with AI today. Despite the excitement surrounding large language models (LLMs), the author emphasizes that the future of AI is uncertain and may diverge from current predictions. The current phase of AI is largely driven by deep learning and neural networks, which require substantial resources for training. The author notes that while LLMs can generate useful outputs, they also produce nonsensical results and raise concerns about bias and explainability. The implications of AI for the labor market are discussed, with a focus on how AI could either complement or commodify expertise. Additionally, legal issues surrounding copyright and the use of public data for training AI models are highlighted as potential challenges. The author concludes that while AI is a transformative technology, its trajectory is difficult to predict, and surprises are likely as the field continues to evolve.

- The evolution of AI mirrors the early development of cloud computing, with both fields facing misconceptions.

- Large language models (LLMs) are a significant focus in AI but can produce unreliable outputs.

- AI's impact on the labor market could either enhance or diminish the value of expertise.

- Legal challenges regarding copyright and data usage for AI training remain unresolved.

- The future of AI is unpredictable, with potential for unexpected developments.

7 comments
By @billyp-rva - about 1 month
I don't like the comparison.

> But. And here's where the comparison to cloud comes in; the details of that evolution seem a bit fuzzy.

Maybe I have rose-tinted glasses on, but cloud computing was never "fuzzy" the way LLMs are. Cloud offerings were (and are, even more so now) platforms. At the time, the concept of a technical platform was very well understood, with plenty of prior art; .NET is an example that leaps to mind. The trade-off was that you gave up control and submitted to vendor lock-in, but the platform abstracted away small details so you could focus on your business. In short, cloud wasn't a huge leap, conceptually.

With LLMs, conversely, there isn't really much you can point to and say "this is a natural progression of ____". It's an entirely new thing, with entirely new problems.

By @openrisk - about 1 month
> the details are hard to predict

What does not feel too risky to predict, though, are some general directions:

a) the era of "GPU"-style computing is here to stay. During the long era of exponential CPU speedups, vectorized computing architectures were very niche (HPC). Going forward, it's clear there are potentially various economically viable "mass-market" applications of linear algebra (see the sketch after point b). This may even change the economics of building silicon chips from the ground up. Which brings us to the other main point,

b) the era of algorithmic computing is also just starting. Right now there is an almost maniacal obsession with LLMs. It's not entirely useless hype, as it is trailblazing a path where much else can follow. But conceptually it's just one little corner in the vast space of data-processing algorithms.
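
To make the contrast in (a) concrete, here is a toy sketch (my own illustration, not from the article): the same dot product written once as a scalar loop and once as a single vectorized linear algebra operation, with NumPy standing in for the GPU-style model.

    # Toy illustration of the two computing models discussed above.
    import numpy as np

    n = 1_000_000
    a = np.random.rand(n)
    b = np.random.rand(n)

    # CPU-era mental model: one multiply-add per loop iteration.
    dot_scalar = 0.0
    for i in range(n):
        dot_scalar += a[i] * b[i]

    # Vectorized model: the same work expressed as one linear algebra
    # operation, which maps naturally onto SIMD units and GPUs.
    dot_vectorized = a @ b

    assert np.isclose(dot_scalar, dot_vectorized)

The point isn't NumPy itself; it's that hardware and economics increasingly reward expressing work in the second form.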

While the general direction of travel seems reasonably established (for now), the details of what comes to pass depend a lot on both the aforementioned economics and the governance around the use of algorithms. Thus far the tech industry has had a free pass; it's unlikely that this will continue.

By @neuroelectron - 27 days
You're right, but I think the takeaway here isn't that we can't predict where new technology is going; it's more about how profitability affects how the technology gets implemented (where it's most profitable).

The most obvious profit is replacing moderators, but that's not really making money, just saving on infra. Targeted advertising is also low-hanging fruit, but people are resistant to advertising and block it.

Astroturfing, community organization, and similar domains are where it really shines. And I think it's being hidden well. People see obvious, non-contributing AI slop, but they don't anticipate that most of their online interactions are with bots, or that the entire belief structure is algorithmically determined and enforced.

By @golly_ned - about 1 month
I don't think I got anything from this article over and above what has already been written about AI elsewhere, in greater depth and detail.

By @pknomad - about 1 month
Maybe I'm slow, but I'm failing to find the part where AI reminds the author specifically of cloud computing. The general premise seems to be "a lot of early promises never panned out"... but you can say that about pretty much any fad or exciting trend.

By @sandspar - about 1 month
The author flutters around but doesn't land on a point.