August 19th, 2024

What comes after the AI crash?

Concerns about a generative AI bubble point to a likely market correction, continued misuse of AI technologies, ongoing harms such as misinformation, environmental costs from data centers, and the need for vigilance after a crash.

The article discusses the potential aftermath of the anticipated AI market crash, highlighting concerns about the generative AI bubble. Critics have long warned that the hype surrounding AI technologies, particularly since the launch of ChatGPT, has inflated tech stock valuations. While a market correction seems inevitable, the focus should shift to the consequences of such a crash. The author notes that generative AI, despite its limitations and the false promises made by proponents like OpenAI's Sam Altman, will not disappear but may be misused in various sectors, including military and social services. The ongoing harms of AI, such as misinformation and negative impacts on labor conditions, could persist even after the bubble bursts. Additionally, the expansion of data centers to support AI infrastructure raises environmental concerns. The article emphasizes the need for vigilance and proactive measures to address the potential negative outcomes of generative AI, as the industry may seek to continue its operations regardless of the social and environmental costs.

- The AI market is experiencing a bubble, with concerns about overvaluation and impending correction.

- Generative AI technologies may persist and be misused even after the bubble bursts.

- Ongoing harms from AI, such as misinformation and labor exploitation, need to be addressed.

- The expansion of data centers for AI raises significant environmental issues.

- Vigilance is necessary to mitigate the negative impacts of AI technologies post-crash.

5 comments
By @DoctorOetker - about 2 months
I presume a down-to-earth sobering up: "Deep Mathematics", "Deep Differential Algorithms and Differential Datastructures", and "Deep Physics".

There is a gradual historical transfer of problems: they first reside in a vague "Aristotelian logic" phase, then pass through ever more formal phases, first quantitative and descriptive, and only then normatively formalized (i.e. figures of merit, etc.).

When physicists and engineers apply RMAD (reverse-mode automatic differentiation) to known and understood (but computationally intensive) total-potential, Lagrangian, or total-figure-of-merit functions, people don't call it AI, not even necessarily machine learning.
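
A toy sketch of that point (my own illustration, not from the comment; the spring chain, constants, and step size are all made up): reverse-mode AD computing the gradient of a plain physics objective, then gradient descent relaxing the system to its minimum-energy configuration.

    import jax
    import jax.numpy as jnp

    def total_potential(x):
        # x: positions of the free interior nodes of a 1-D chain of unit springs
        pos = jnp.concatenate([jnp.array([0.0]), x, jnp.array([1.0])])  # pin endpoints
        spring = 0.5 * jnp.sum((pos[1:] - pos[:-1]) ** 2)  # elastic energy
        load = jnp.sum(0.1 * x)                            # small linear load on free nodes
        return spring + load

    grad_V = jax.grad(total_potential)  # reverse-mode AD in one call

    x = jnp.zeros(8)             # initial guess for the free nodes
    for _ in range(500):         # plain gradient descent toward static equilibrium
        x = x - 0.1 * grad_V(x)

    print(x)  # relaxed configuration

Nobody would label that loop "AI"; it is just gradient descent on a known, fully specified energy.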

When RMAD is used to optimize vague, intuition-like reasoning such as LLMs, we do call it AI.
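
For contrast, a sketch of the identical jax.grad entry point aimed at a small neural-network loss; the autodiff machinery is unchanged, only the objective differs (the network shape, data, and learning rate here are illustrative assumptions):

    import jax
    import jax.numpy as jnp

    def mlp_loss(params, x, y):
        # mean squared error of a one-hidden-layer network
        w1, b1, w2, b2 = params
        h = jnp.tanh(x @ w1 + b1)
        pred = h @ w2 + b2
        return jnp.mean((pred - y) ** 2)

    key = jax.random.PRNGKey(0)
    k1, k2, k3 = jax.random.split(key, 3)
    params = (0.5 * jax.random.normal(k1, (2, 16)), jnp.zeros(16),
              0.5 * jax.random.normal(k2, (16, 1)), jnp.zeros(1))

    x = jax.random.normal(k3, (64, 2))
    y = jnp.sum(x ** 2, axis=1, keepdims=True)  # toy regression target

    grad_loss = jax.grad(mlp_loss)   # the same RMAD call as in the physics case
    for _ in range(200):
        g = grad_loss(params, x, y)  # gradients w.r.t. params
        params = tuple(p - 0.05 * gp for p, gp in zip(params, g))

Same call, different objective: the "AI" label tracks the vagueness of the objective, not the differentiation machinery.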

Given the gradual transfer of problems from the vague domain to the explicit domain (formalize or fossilize), what comes "after AI" is the very same RMAD, now applied to find optima for formally specified systems that are understood in the reductionist sense but not in the emergent sense (i.e. a computer does not behave like one giant transistor).

By @christkv - about 2 months
Spending on training might be getting close to its peak, but I can't imagine that spending on inference will slack off. For inference, though, I imagine it will be all about power efficiency, to bring costs down over time for end users and businesses.
By @wegfawefgawefg - about 2 months
This is a worthless article completely devoid of substance. The author just wants to see AI fail and is virtue-signaling about potential harms or something.

The truth is we do not know whether AI will keep growing or stagnate, in the same way we don't know each year whether Moore's law will keep going or die.

The only way AI stops getting better is if computers stop delivering more compute per kWh.