Why AI reminds me of cloud computing
The article compares the evolution of AI to cloud computing, noting misconceptions, the uncertain future of AI, the impact on labor, and unresolved legal challenges regarding copyright and data usage.
The article discusses the parallels between the evolution of artificial intelligence (AI) and cloud computing, highlighting that both fields have experienced significant hype and misunderstanding. The author reflects on early misconceptions surrounding cloud computing, such as its utility and security advantages, and suggests that similar misinterpretations are occurring with AI today. Despite the excitement surrounding large language models (LLMs), the author emphasizes that the future of AI is uncertain and may diverge from current predictions. The current phase of AI is largely driven by deep learning and neural networks, which require substantial resources for training. The author notes that while LLMs can generate useful outputs, they also produce nonsensical results and raise concerns about bias and explainability. The implications of AI for the labor market are discussed, with a focus on how AI could either complement or commodify expertise. Additionally, legal issues surrounding copyright and the use of public data for training AI models are highlighted as potential challenges. The author concludes that while AI is a transformative technology, its trajectory is difficult to predict, and surprises are likely as the field continues to evolve.
- The evolution of AI mirrors the early development of cloud computing, with both fields facing misconceptions.
- Large language models (LLMs) are a significant focus in AI but can produce unreliable outputs.
- AI's impact on the labor market could either enhance or diminish the value of expertise.
- Legal challenges regarding copyright and data usage for AI training remain unresolved.
- The future of AI is unpredictable, with potential for unexpected developments.
Related
The AI Summer
The article explores AI technology's growth, focusing on ChatGPT's rise and challenges in enterprise adoption of Language Models. It stresses the need for practical LLM tools and discusses integration complexities and CIO caution. It questions LLM hype and advocates for structured advancement.
Do AI Companies Work?
AI companies developing large language models face high costs and significant annual losses. Continuous innovation is crucial for competitiveness, as older models quickly lose value to open-source alternatives.
The phony comforts of AI skepticism
The article explores contrasting views on generative AI, highlighting its potential benefits in various fields, significant investment, and ongoing advancements, while acknowledging valid concerns about its risks and limitations.
Trustworthiness in the Age of AI
The perception of trust in AI has shifted from reliability to recognizing fallibility, particularly with Large Language Models, which generate probabilistic outputs that can mislead users about their accuracy.
AI Scaling Laws
The article examines AI scaling laws, emphasizing ongoing investments by major labs, the importance of new paradigms for model performance, and the need for better evaluations amid existing challenges.
> But. And here's where the comparison to cloud comes in; the details of that evolution seem a bit fuzzy.
Maybe I have rose-tinted glasses on, but cloud computing was never "fuzzy" the way LLMs are. Cloud offerings were (and even more so now, are) platforms. At the time, the concept of a technical platform was very well understood, with plenty of prior art; .NET is an example that leaps to mind. The trade-off was that you gave up control and submitted to vendor lock-in, but the platform abstracted away small details so you could focus on your business. In short, cloud wasn't a huge leap, conceptually.
With LLMs, by contrast, there isn't much you can point to and say "this is a natural progression of ____". It's an entirely new thing, with entirely new problems.
What does not feel too risky to predict, though, are some general directions:
a) the era of "GPU"-style computing is here to stay. During the long era of exponential CPU speedups, the architectures of vectorized computing were a very niche concern (HPC). Going forward, it's clear there are potentially various economically viable "mass-market" applications of linear algebra. This may even change the economics of building silicon chips from the ground up. Which brings us to the other main point,
b) the era of algorithmic computing is also just starting. Right now there is an almost maniacal obsession with LLMs. It's not entirely useless hype, as it is trailblazing a path where much else can follow. But conceptually it's just one little corner in the vast space of data processing algorithms.
While the general direction of travel seems reasonably established (for now), the details of what comes to pass depend a lot on both the aforementioned economics and the governance around the use of algorithms. Thus far the tech industry has had a free pass. It's unlikely that this will continue.
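The "GPU-style" computing mentioned in (a) is, at its core, expressing work as bulk linear-algebra primitives instead of one-element-at-a-time loops. A minimal sketch of that contrast (using NumPy as a stand-in for vectorized hardware; the function name `scalar_dot` is just illustrative):

```python
import numpy as np

def scalar_dot(a, b):
    # CPU-era, loop-oriented style: one multiply-add at a time.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

rng = np.random.default_rng(0)
a = rng.random(1_000)
b = rng.random(1_000)

# The same reduction expressed as a single linear-algebra primitive,
# which vectorized hardware (SIMD units, GPUs) executes in bulk.
vectorized = float(a @ b)
looped = scalar_dot(a, b)
assert abs(vectorized - looped) < 1e-9
```

The point isn't the dot product itself but the shape of the program: once work is phrased as array operations, it can be dispatched to whatever vector hardware the market produces, which is what makes the "mass-market linear algebra" bet plausible.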
The most obvious profit is replacing moderators, but that's not really making money, just saving on infrastructure. Targeted advertising is also low-hanging fruit, but people are resistant to advertising and block it.
Astroturfing, community organization, and similar domains are where it really shines. And I think it's being hidden well. People see obvious, non-contributing AI slop, but they don't anticipate that most of their online interactions are with bots, or that the entire belief structure is algorithmically determined and enforced.