AI Winter Is Coming
The AI ecosystem shows a disparity between producers and promoters, with academia criticized for superficial research and industry withholding valuable findings, potentially leading to another downturn in AI development.
The article discusses the current state of the AI ecosystem, highlighting a disparity between producers and promoters in academia and industry. It criticizes academia for becoming a "paper mill," where the pressure to publish leads to a proliferation of superficial research, often with catchy titles but lacking substance. Issues such as citation rings and reproducibility crises are prevalent, exemplified by a scandal involving Stanford students who faked results. In industry, valuable research is often unpublished to maintain competitive advantages, while published work tends to be non-critical or serves marketing purposes. This environment has fostered an "AI echo chamber," where influencers misinterpret and misrepresent research, leading to inflated perceptions of AI capabilities among non-technical audiences. The article warns that this trend could lead to another "AI winter," similar to past downturns in data science and crypto, suggesting that while promoters may chase the next trend, genuine producers will continue to advance the field.
- The AI ecosystem is experiencing a surplus of promoters compared to producers.
- Academia faces issues like superficial research and a lack of reproducibility.
- Industry research often remains unpublished to protect competitive advantages.
- Misrepresentation of AI capabilities is prevalent among influencers.
- The current trends may lead to another downturn in AI development.
Related
Don't Pivot into AI Research
The article highlights challenges for computer science students entering machine learning, noting that large companies dominate the field, potentially leading to lower salaries and prestige for researchers.
There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk
AI mimics human intelligence but lacks true understanding, posing systemic risks. Over-reliance may lead to failures, diminish critical thinking, and fail to create enough jobs, challenging economic stability.
There's Just One Problem: AI Isn't Intelligent
AI mimics human intelligence without true understanding, posing systemic risks and undermining critical thinking. Economic benefits may lead to job quality reduction and increased inequality, failing to address global challenges.
What comes after the AI crash?
Concerns about a generative AI bubble highlight potential market corrections, misuse of AI technologies, ongoing harms like misinformation, environmental issues from data centers, and the need for vigilance post-crash.
The Continued Trajectory of Idiocy in the Tech Industry
The article critiques the tech industry's hype cycles, particularly around AI, which distract from past failures. It calls for accountability and awareness of ethical concerns regarding user consent in technology.
I wish I had a better heuristic, but the best I’ve found on Twitter is pseudonymous users with anime profile pics. These are people who don’t care about boosting a product. They’re possibly core contributors to a lesser-known but essential Python library. They understand a single thing very deeply. They don’t post all day because they are busy producing.
Hardly quietly. Thorp published "Beat the Market" in 1967, detailing his formulae six years before Black and Scholes published the model that later earned Scholes a Nobel.
> But beneath the surface, there are rampant issues: citation rings, reproducibility crises, and even outright cheating. Just look at the Stanford students who claimed to fine-tune LLaMA3 to be multimodal with vision at the level of GPT-4V, only to be exposed for faking their results. This incident is just the tip of the iceberg, with arXiv increasingly resembling BuzzFeed more than a serious academic repository.
Completely agreed. Academia is terminally broken. The citation rings don't bother me. Bibliometrics are the OG karma -- basically, fake internet points. Who cares?
The much bigger problem is that those totally corrupt circular influence rings extend into program director positions and grant review committees at federal funding agencies. Most of those people are themselves academics (on leave, visiting, etc.) who depend on money from the exact sources they are reviewing for. So this time it's their friends' turn, and next time it's their turn. And don't dare tell me that this isn't how it works. I've been in too many of those rooms.
It's gotten incredibly bad in ML in particular. Our government needs to cut these people off. I am sick of my tax money going to these assholes (via the NSF, DARPA, etc.). Just stop funding the entire subfield for a few years, tbh. It's that bad.
On the private sector side, I think that the speculative AI bubble will deflate, but also that some real value is being created and many large institutions are actually behaving quite reasonably compared to previous nonsense cycles. You just have to realize we're mid-to-late cycle, and companies/groups that aren't finding PMF with LLM tech in the next 2-3 years are probably not great bets.
--
For all the valid criticisms of "AI"[1] today, it's creating too much value to disappear completely and there's no particular reason[2] to expect progress to halt.
[1]: scare quotes because a lot of people today are mis-using the term "AI" to exclusively mean "LLMs," and that's just wrong. There's a lot more to AI than LLMs.
[2]: yes, I'm aware of neural scaling laws and some related charts showing a slow-down in progress, and the arguments around not having enough (energy|data|whatever) to continue to scale LLMs. But see [1] above - there is more to AI than LLMs.
The fall of data science??? When did that happen? I’m not squarely in the field, but I thought I would have heard about it.
It started with Heroku, but now it has gained VC attention in the form of Next/Vercel, Laravel Cloud, Void(0), Deno Deploy, and Bun's yet-to-be-announced solution. I'm probably forgetting one or two.
Don't get me wrong, they are legit solutions. But the VC money currently being poured into influencers to push these solutions makes them seem much more appealing than they would be otherwise.
Very powerful, albeit sad, statement.
> the real producers will keep moving forward, building a more capable future for AI.
This is one of many signal flares going up.
Do something or cash out of the AI space. Engineers are tired.
“Meanwhile, data scientists and statisticians who oftentimes lack engineering skills are now being pushed to write Python and “do AI,” often producing nothing more than unscalable Jupyter Notebooks”
Most data scientists are already well versed in Python. There are so many platforms emerging that abstract away a lot of the infra required to build semi-scalable applications.
Broadly agree, but I think predicting an AI winter isn't as useful as handicapping how deep it will be, and still building useful things regardless.
Has data science or the modern data stack fallen? And what relevance does crypto (I assume cryptocurrency) have to an AI winter?
If anyone had this knowledge, they wouldn't tell us; they'd keep their market edge and make a bet for their own selfish greed.
Anything else is PR
Discuss amongst yourselves: Rhode Island, neither a road nor an island.
But I've been hearing the refrain of this article for a decade now. I just don't believe it anymore.