Artificial intelligence is losing hype
AI investment is declining, with a 15% drop in major firms' share prices. Only 4.8% of American businesses use AI, and skepticism about its effectiveness is increasing.
Artificial intelligence (AI) is experiencing a decline in hype, particularly among investors in Silicon Valley. Following a peak last month, share prices of major AI firms have dropped by 15%, raising concerns about the technology's ability to generate the anticipated profits. Observers are increasingly questioning the effectiveness of large language models, which underpin popular services like ChatGPT. Despite significant investments from big tech companies, Census Bureau data indicate that only 4.8% of American businesses currently use AI in production, down from 5.4% earlier this year. The outlook for future adoption remains flat, with a similar percentage of companies planning to implement AI in the coming year. This trend suggests growing skepticism about the immediate benefits of AI, in contrast with the extravagant promises tech firms have made about its potential.
- AI investment is declining, with a 15% drop in share prices of major firms.
- Only 4.8% of American companies currently use AI, down from 5.4%.
- Skepticism is rising regarding the effectiveness of large language models.
- Future adoption of AI among businesses appears stagnant.
- The initial hype surrounding AI is waning as investors reassess its profitability.
Related
Big Tech says AI is booming. Wall Street is starting to see a bubble
Big Tech's investment in AI is projected to reach $60 billion annually by 2026, but analysts doubt profitability, predicting only $20 billion in revenue, amid concerns about a potential investment bubble.
Big Tech says AI is booming. Wall Street is starting to see a bubble
Big Tech's heavy investments in AI, particularly by Google, Microsoft, and Nvidia, raise concerns of a financial bubble, with analysts doubting sustainability and predicting a significant revenue shortfall by 2026.
Investors Are Suddenly Getting Concerned That AI Isn't Making Any Serious Money
Investors are worried about AI profitability, with analysts warning of a potential bubble. Companies like Google and Microsoft face high costs and unclear monetization strategies, raising concerns about sustainability.
There's No Guarantee AI Will Ever Be Profitable
Silicon Valley tech companies are investing heavily in AI, with costs projected to reach $100 billion by 2027. Analysts question profitability, while proponents see potential for significant economic growth.
Big Tech Fails to Convince Wall Street That AI Is Paying Off
Big Tech companies like Amazon, Microsoft, and Alphabet face investor skepticism as AI investments yield disappointing sales results, leading to stock declines, while Meta and Apple report better outcomes from their AI initiatives.
- Many users express that while AI has transformative potential, its current implementations often fall short of expectations, leading to disappointment.
- There is a divide between those who find AI tools like LLMs (Large Language Models) immensely useful in their workflows and those who view them as overhyped and unreliable.
- Concerns about the sustainability of AI investments are prevalent, with some predicting a potential market correction or "bust" due to inflated expectations.
- Users highlight the importance of distinguishing between genuine advancements in AI and the hype surrounding it, suggesting that practical applications are still evolving.
- Some commenters emphasize the need for continued research and development in AI, arguing that while current models have limitations, the field is still progressing.
But as to the hype, we are in a brief pause before the election where no company wants to release anything that would hit the news cycle in a bad way and cause knee-jerk legislation. Are there new architectures and capabilities waiting? Likely some. Sora showed state-of-the-art video generation, OpenAI has demoed an impressive voice mode, and Anthropic has teased that Opus 3.5 will be even more capable. OpenAI also clearly has some gas in the tank, as it has focused on releasing small models such as GPT-4o and 4o mini. And many have been musing about agents and methods to improve system-2-like reasoning.
So while there's a soft moratorium on showing scary new capabilities, there is still evidence of progress being made behind the scenes. But what will a state-of-the-art model look like when all of these techniques have been scaled up on brand-new exascale data centers?
It might not be AGI, but I think it will at least be enough for the next hype-driven investment bubble.
So I've seen how the field has progressed and have also been able to look at it from a perspective most AI/engineering people don't: what does this artificial intelligence look like compared to biological intelligence? And I must say I am absolutely astonished people don't see this as opening the floodgates to staggeringly powerful artificial intelligence. We've run the 4-minute mile. There are hundreds of billions of dollars going into figuring out how to get to the next level, and it's clear we are close. Forget what the current models are doing; it is what the next big leap (most likely with some new architecture change) will bring.
In focusing on intelligence we forget that it's most likely a much easier challenge than decentralized cheap autonomy, which is what took the planet 4 billion years to figure out. Once that was done, intelligence as we recognize it took an eye-blink. Just as with powered flight, we don't need biological intelligence to transform the world. Artificial intelligence that guzzles electricity, is brittle, and has blind spots, but is still capable of 1000 times more than the best among us, is going to be here within the next decade. It's not here yet, no doubt, but I have yet to see any reasoned argument for why it is far more difficult and will take far longer. We are in for radical non-linear change.
This wasn't the case with GPT-4/o. This capability is very new.
When I spoke to a colleague at Microsoft about these changes, they were floored. Microsoft has made itself synonymous with AI, yet the company is barely even leveraging it. The big cos have put in the biggest investments, but they will also be the slowest to change their processes and workflows to realize the shift.
Feels like one of those "the future is here, it's just not evenly distributed yet" moments. When a tool like Sonnet is released, it's not as if big tech cos are going to transform overnight. There's a massive capability overhang that will take some time to work itself through these (now) slow-moving companies.
I assume it was the same with the internet/dot-com crash.
- Business advice, including marketing, reaching out to investors, understanding SAFE notes (follow-up questions after watching the Y Combinator videos), and customer interview design. All of which, as an engineer, I had never done before.
- Creating SQL queries for all kinds of business metrics, including monthly/daily active users, breakdown of users by country, abusive-user detection, and more (see the sketch after this list).
- Automated unit test creation. Not just the happy path, either.
- Automated data repository creation, based on a one-shot example and MySQL text output describing the tables involved. From this, I have super fast data repositories that use raw SQL to get/write data.
- Helping with challenging code problems that would otherwise need hours of searching Google or reading the docs.
- Database and query optimization.
- Code review. This has caught edge-case bugs that normal testing did not detect.
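For the SQL-metrics item above, here is a minimal sketch of a monthly-active-users query. The `events(user_id, created_at)` table is a hypothetical stand-in, and SQLite is used only so the snippet is self-contained:

```python
# Minimal sketch of a monthly-active-users query, assuming a hypothetical
# `events` table with `user_id` and `created_at` (ISO-8601 text) columns.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, created_at TEXT);
    INSERT INTO events VALUES
        (1, '2024-07-01'), (1, '2024-07-15'),
        (2, '2024-07-20'), (3, '2024-08-02');
""")

MAU_QUERY = """
    SELECT strftime('%Y-%m', created_at) AS month,
           COUNT(DISTINCT user_id)       AS monthly_active_users
    FROM events
    GROUP BY month
    ORDER BY month;
"""

for month, mau in conn.execute(MAU_QUERY):
    print(month, mau)  # prints: 2024-07 2, then 2024-08 1
```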
I'm going to try out aider + Claude 3.5 Sonnet on my codebases. I have heard good things about it and some rave reviews on X/Twitter. I watched a video where an engineer had a bug, described it to some tool (which wasn't specified, but I suspect aider), then Claude created a test to reproduce the bug and then fixed the code. The test passed; they then did a manual test and the bug was gone.
All in all, it assists us in new ways. Somebody took a picture of a car part that had no markings and it identified it, found the maker/manufacturer/SKU, and gave all the details. That stuff is useful.
But now we're looking at inauthentic stuff. Artists and writers being plagiarized, job cuts (for said marketing/pitches, BS presentations to downsize teams). It's not just losing its hype; it's losing any promise of building humanity up for the better. It's just more buzzwords, more 'glamour', more 'pop' shoved in our faces.
The layoffs aren't looking pretty.
Works well to help us code though. Viva, sysadmins unite.
For what it's worth, hype doesn't mean sustainability anyway. If all the jokers move on to a new fad, it's hardly any skin off the back of anyone taking this seriously; they've been through worse times.
We are running out of textual data to train on… so they have switched to VIDEO. Geez, now they can train on all the VIDEOS on the internet.
And when they finally get bots working, they will have limitless streams of TACTILE data…
Writing it off as the next fad seems fun. But to be honest, I was shocked by what OpenAI did the first time, so they have my respect. I don't think many of us saw it coming. And I think writing off their creativity again may not be wise.
So when they say the bubble is about to break… I get it. But I don’t see how.
I hardly ever pay for anything.
But I gladly spend money on AI to get the answers I need. It just makes my work work!
I would also say the economic benefit of this tech for workers is that it will 2x the average worker as they catch on. Seriously, I am a 2x coder compared to what I was because of this.
Therefore, if I, a person who hardly ever spends money, am willing to pay for it… I think eventually all businesses will realize all their employees need it, driving massive revenue for those who sell it.
But it may not be the companies we think.
There are a lot of smallish tasks/problems that people and systems need to deal with, some of which waste notable real engineering capacity, even though a highschooler could do them quite easily by hand.
Example: find out if a text contains an email address, including all the shenanigans people use to mask it (it may not be allowed, … whatever). From a purely coding standpoint this is a cat-and-mouse game of ever-improving regex solutions to catch the more sophisticated patterns, but there will always be uncaught new variants, or simply errors that produce false positives. A highschooler, though, can be given a text and instantly spot the email address (or confirm none is in there).
In order to "solve" these types of small problems, LLMs are pretty much fantastic. It needs to only be reliable enough to produce a structured answer within a few attempts and cheap enough to not be a concern for finance/operations. Thats why for me it makes absolutely sense that the #1 priority for OpenAI since GPT4 has been building smaller/faster/cheaper models. Automators need exactly that, not genius-level AGI.
I also think we're not even scratching the surface of how many tasks can be automated away within the current constraints/flaws of LLMs (hallucination, accuracy, …). Everyone tries to hype up some super-generic powerful future (that usually falls flat after a while), whereas the true value of LLMs is in the many small things where hardcoding a solution is expensive but an intern could do it right away.
Seemingly every non-tech company in the world has been trying to figure out an "AI strategy," driven by hype and FOMO, but most corporate executives have no clue as to what they're doing or ought to be doing. They are spending money on poorly thought-out ideas.
Meanwhile, every tech company providing "AI services" has been spending money like a drunken sailor, fueled by hype and FOMO. None of these AI services are generating enough revenue to cover the cost of development, training, or even, in many cases, inference.
Nvidia, the dominant software-plus-hardware platform (CUDA is a big deal), appears to be the only financial beneficiary of all this hype and FOMO.
According to the OP, the business of "AI" is losing hype, suggesting we're approaching a bust.
On the other hand, we are nowhere near approaching hard limits on LLMs. When LLMs start to be trained on smaller subject areas with massive hand-curated examples for solving problems, they will reach expert performance in those narrow technical areas. These specialized models will then be combined into general-purpose mixture-of-experts (MoE) style systems.
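As a hedged sketch of what "specialized models combined into an MoE-style system" could look like at the application level (the `classify` and expert callables here are hypothetical, not any real library's API):

```python
# Hedged sketch of the idea above: route each query to a narrow specialist
# model. `experts` maps a domain label to a hypothetical model callable;
# `classify` is an assumed domain classifier, and `fallback` a generalist.
from typing import Callable, Dict

def route(query: str,
          classify: Callable[[str], str],
          experts: Dict[str, Callable[[str], str]],
          fallback: Callable[[str], str]) -> str:
    domain = classify(query)                # e.g. "sql", "legal", "chem"
    expert = experts.get(domain, fallback)  # fall back to a generalist model
    return expert(query)
```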
Then new approaches beyond LLMs, RL, etc. will be discovered, perfected, made more efficient.
Seriously, any hard limits are far into the future.
In other words, a lot of people seem to think that human attention spans are what determine everything, but the technological cycles at work here are much, much deeper.
Personally I have used Midjourney and ChatGPT in ways that will have huge impacts on many activities and industries. Denying that because of media trendiness about AI seems shortsighted.
• text generators
• code generators
• image generators
• video generators
• speech generators
• sound/music generators
• various robotics vision and control systems (often trained in virtual environments)
• automated factories / warehouses / fulfillment centers
• self-driving cars (trucks/planes/trains/boats/bikes/whatever)
• scientific / reasoning / math AIs
• military AIs
I find all of these categories already have useful AIs. And they are getting better all the time. The progress might slow down here and there, but it keeps on going.
Self-driving was pretty bad a year ago, and now we have Tesla FSD driving uninterrupted for multiple hours in complex city environments.
Image generators now exceed 99.9% of humans in painting/drawing abilities.
Text generators are decent. There are hallucination issues, and they are not creative at the best human level, but I'd say they write better than 90% of humans. When it comes to poetry/lyrics, they all still suck pretty badly.
Video generators are in their infancy - we get decent quality, but absolutely mental imagery.
Reasoning is the weakest point, in my opinion. Current-gen models are just not good at reasoning. Sometimes they are brilliant, but then they make very silly mistakes that a 10-year-old child wouldn't make. You just can't rely on their logical abilities. I have really high hopes for that area. If they can figure out reasoning, our science research will become a lot more reliable and much faster.
I couldn't care less about hype in general, and the LLM hype in particular. I especially didn't bother going to a new web site (ChatGPT) or installing new IDEs, etc.
I checked Codeium's mycompany-customized landing page: a one-liner vim plug-in installation and copy-pasting an auth token.
I started typing in the very same editor, very same environment, very same everything, and it just works; most of the time it guesses well what I want to write, so I just press tab to accept, and voila.
I wasn't expecting such a seamless experience.
I still haven't integrated its "chat" functionality into my workflow (maybe I won't at all). I'm not hyped about it, it just feels like a companion to already working (and correct) code completion.
I read a lot about other people's usage (I'm a devXP engineer), and I feel like, for whatever reason, there is more love/hype/faith in their chosen AI companion than actual improvement they could gain by taking the humble route: understanding the code, reading (and writing) docs, reasoning about the engineering solution.
Like everything, AI is now losing hype, but somehow (in my bubble) engineers still seem high on it. I also see that this will further distill the set of people I look up to and want to collaborate with, because of that aforementioned humbleness, as opposed to mindlessly accepting text-predicted solutions.
That’s gonna be a bad take I think.
Meanwhile we’re seeing the first of the new generation of on-device inference chips being shipped as commodity edge compute.
When the devices you use every day — cars, doorbells, TV remotes, points-of-sale, roombas — can interpret camera and speech input locally in the time it takes to draw a frame and with low enough power to still give you 10h between charges: then we’ll be due another round of innovation.
The article points to how few parts of the economy are leveraging the text-only API products currently available. That still feels very Web 1.0, for me.
- AI is currently hyped to the gills
- Companies may find it hard to improve profits using AI in the short term
- A crash may come
- We may be close to AGI
- Current models are flawed in many ways
- Current-level generative AI is good enough to serve many use cases
Reality is nobody truly knows - there's disagreement on these questions among the leaders in the field.
An observation to add to the mix:
I've had to deliberately work full time with LLMs in all kinds of contexts since they were released. That means forcing myself to use them for tasks whether they are "good at them" yet or not. I found that a major inhibitor to my adoption was my own set of habits around how I think and do things. We aren't used to offloading certain cognitive/creative tasks to machines. We still have the muscle memory of wanting to grab the map when we've got GPS in front of us. I found that once I pushed through this barrier and formed new habits, it became second nature to create custom agents for all kinds of purposes to help me in my life. One learns what tasks to offload to the AI and how to offload them, and when and how one needs to step in to pair them with the different capabilities of the human mind.
I personally feel that pushing oneself to be an early adopter holds real benefit.
We have to realize that there is a ton of money right now behind pushing AI everywhere. We have entire conventions telling leadership that this year "is the time to move AI to prod" or that it's time for "moving past the skeptics".
We have investors seemingly asking every company they invest in "how are you using generative AI?" before investing. We have Microsoft, Google, and Apple (to a lesser degree) forcing AI down our throats whether we like it or not, ignoring any reliability (inaccuracy) issues.
FFS Microsoft is pushing AI as a serious branding part of Windows going forward.
We have too much money committed to pushing the idea that we already have general AI, too much marketing, etc.
Consumer hype and money are going to be very different things in this situation. I do think a bust is going to happen, but I don't think the "hype" has died down in any meaningful way. I think, and I hope, it will die down; we keep seeing how the technology simply can't do what they claim. But I honestly don't think that will happen until something catastrophic does, and it is going to be ugly when it does. Hopefully your company won't be so reliant on it that it can't recover.
AI ain't going nowhere. And it certainly isn't overhyped. LLMs, however, certainly are.
Then again, I find them a good interface for assistants, and for actual AI and APIs that they can call on your behalf.
NVDA's high closes were $135.58 on June 18, down to $134.91 on July 10, and $130 at close today. Its highest sale ever is $140.76. So today's close is about 8% off its highest sale ever and 4% off its highest close ever, not a big thing for a volatile stock. Its earnings are next week and we'll see how it does.
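A quick sanity check of those drawdown figures:

```python
# Quick sanity check of the drawdown percentages quoted above.
high_sale, high_close, today = 140.76, 135.58, 130.00

off_sale = (1 - today / high_sale) * 100    # ~7.6%, rounds to ~8%
off_close = (1 - today / high_close) * 100  # ~4.1%, rounds to ~4%

print(f"{off_sale:.1f}% off highest sale, {off_close:.1f}% off highest close")
```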
Nvidia and SMCI are the ones who have been earning money selling equipment for "AI". For Microsoft, Google, Facebook, Amazon, OpenAI, etc., it is all big up-front capital expenditure which they (and the scolding investment-bank analysts) hope to recoup in the future.
Among which audience? Is the hype necessary for further development? We attained much, if not all, of the recent achievements without hype. If anything, I'm strongly in favor of AI losing all the hype so that our researchers can focus on what's necessary, not on what will win the loudest applause from so fickle a crowd. I'd be worried if AI were attracting fewer researchers than, say, two or three years ago. That doesn't seem to be the case.
The future is most definitely exciting though, and sadly quite scary, too.
Those who do not know history are doomed to repeat it.
But then, the current hype wasn't there to produce something useful, but for "serial entrepreneurs" to get investor money. They'll just move to the next hyped thing.
Yann LeCun had a great tweet on this:
Sometimes, the obvious must be studied so it can be asserted with full confidence:
- LLMs can not answer questions whose answers are not in their training set in some form,
- they can not solve problems they haven't been trained on,
- they can not acquire new skills or knowledge without lots of human help,
- they can not invent new things.
Now, LLMs are merely a subset of AI techniques. Merely scaling up LLMs will not lead to systems with these capabilities.
link https://x.com/ylecun/status/1823313599252533594?ref_src=twsr...
To focus on this:
- LLMs can not answer questions whose answers are not in their training set in some form,
- they can not solve problems they haven't been trained on.
Given that we are close to the maximum size of the training set, this means they are not going to improve without some technical breakthrough that is completely unknown at the moment. Going from "not intelligent" to "intelligent" is a massive shift.
Things that ARE coming to an end:
- Startups whose entire business model is to just provide a wrapper around OpenAI's API.
- Social Media "AI Influencers" and their mindless "7 Ways To Become A Millionaire With ChatGPT" videos.
- Non-technical pundits claiming we are 1-2 years from AGI (and AGI talk in general).
- The stock market assigning insane valuations to any company that claims to somehow be "doing AI".
Things that are NOT coming to an end:
- Ongoing R&D in AI (and not just LLMs).
- Companies at the frontier of AI (OpenAI, Anthropic, Mistral, Google, Meta) releasing ever more capable models and tooling around those models.
- Forward looking companies in all industries using AI both to add capabilities to their products and to drive efficiencies in internal processes.
For the record, before spelling the recipes out, it made sure I understood that collecting elk eggs may be unlawful in some jurisdictions.
I think part of it is due to the politically and internet-induced death of nuance. But part of it I can't fully understand.
Personally, I think it's rather useful. I don't consider myself a heavy user and still use it almost every day to help code, and I ask it a lot of questions about specific and general stuff. For me it has partially or totally replaced Stack Overflow, Google Search, Google Translate, and most tech references. In the office I see people using it all the time; there's almost always a ChatGPT window open on one of the displays.
I think it's very difficult to say this is 100% hype and/or a "phase". It's almost a proven fact that it's useful and that people will want it in their lives, even if it never improves again, ever. It's a new tool in the toolbox, and there will be businesses providing it as a service, or perhaps we will get to open-source general availability.
On the other extreme, all the AI doomerism and AGI stuff seems to me almost as unfounded as before generative AI. Sure, it's likely we'll get to AGI one day. But if you thought we were 100 years away, I don't think ChatGPT put us any closer, and I just don't get people who now say 5. I'd rather they worried about the impact of image-gen AI on deepfakes and misinformation. That's _already_ happening.
They need to find a different derogatory slur to refer to tech workers, ideally one that isn't sexist and doesn't erase the contributions of women to the industry.
I have mixed feelings. On the one hand, I have a ton of schadenfreude for the AI maximalists (see: Leopold Aschenbrenner and the $1 trillion cluster that will never be), hype men (LinkedIn gurus and Twitter “technologists” that post threads with the thread emoji regurgitating listicles) or grifters (see: Rabbit R1 and the LAM vaporware).
On the other hand, I’m worried about another AI winter. We don’t need more people figuring out how to make bigger models, we need more fundamental research on low-resource contexts. Transformers are really just a trick to be able to ingest the whole internet. But there are many times where we don’t have a whole internet worth of data. The failure of LLMs on ARC is a pretty clear indication we’re not there yet (although I wouldn’t consider ARC sufficient either).
AI follows more of a seasonal pattern, with recurring AI winters. Can we expect a new winter soon?
> “An alarming number of technology trends are flashes in the pan.”
This is a pattern that seems to keep recurring, but it does not stop the tech bros from pushing the marketing beyond reality.
Raising money in the name of the future will give you results similar to self-driving cars or VR: the potential is crazy, but it is not going to double your money in a couple of financial years. This should help serious initiatives find better-aligned investors.
The Economist, seriously?
The first started with simple non-ML image manipulation and video analysis (like spotting baggage left unmoved for a certain amount of time in a hall, trespassing alerts for gates, and so on) and has reached the level of live video analysis for autonomous driving. The second dates back a very long time, perhaps to Conrad Gessner's library of Babel, the Bibliotheca Universalis (~1545), with a simple consideration: a book is good for developing and sharing a specific topic, a newspaper for knowing "at a glance" the most relevant facts of yesterday, and so on, but we still need something to elicit specific bits of information out of "the library" without a human having to read everything manually. Search engines do work, but they have limits. LLMs are the failed promise of being able to distill information (into a model) and then extract it, well distilled, on user prompt. That's the promise; the reality is that pattern matching/prediction can't get much further, for the same problem we have with images: there is no intelligence.
For an LLM, if a known scientist (as per tags in some part of the model's ingested information) says, joking in a forum, that eating a small rock a day is good for your health, the LLM will suggest that practice simply because it has no concept of a joke. Similarly, having no knowledge of humans, a hand with ten fingers is perfectly sound to it.
That's the essence of the bubble: PR people and people without domain knowledge have seen Stable Diffusion produce an astronaut riding a horse, asked ChatGPT some questions, and said "WOW! OK, not perfect, but it's just a matter of time." The answer is no, it will NOT be, at least with the current tech. There are some uses, like automatic translation, imperfect but good enough that one human translator can do the job of 10 before, or low-importance ID checks done with electronic IDs plus face recognition, so a single human guard can operate 10 gates alone in an airport, intervening only where face recognition fails. Essentially a FEW low-skill jobs might be automated; the rest is just classic automation, like banks closing offices simply because people use internet banking and pay with digital means, so there is almost no need to pick up or deposit cash, and no reason to go to the bank anymore. The potential so far can't grow much more, so the bubble bursts.
Meanwhile, big tech wants to keep the bubble up, because LLM training is not a thing single humans can do at home, the way we can run a home server for our email, VoIP phone system, file sharing, and so on. Yes, it's doable in a community, like search with YaCy or maps with OpenStreetMap, but the need for data and patient manual tagging is simply too cumbersome for a real community-born model to match or surpass one from Big Tech. Since IT knowledge has very lately, and in a very limited way, started to spread just enough to endanger the big tech model… they need something users can't do at home on a desktop. And that's a part of the fight.
Another part is the push toward no-ownership for the 99%, the better to lock in and enslave. So far the cloud+mobile model has created lock-in, but users can still get their data and host things themselves; if they no longer operate computers at all, just "smart devices", the option to download and self-host is next to none. Hence the push for autonomous taxis instead of personal cars, connected dishwashers that send 7+ GB/day home, and so on. This does not technically work, so despite the immense amount of money and the struggle of the biggest players, people are starting to smell a rat and the mood drops.
Q: How many N's are there in Normation?
A: There is one N in the word "Normation"
Note that the answer is the same when asked n's instead of N's.
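A trivial check confirms the correct case-insensitive count is two, not one:

```python
# Trivial check of the letter count the model gets wrong above.
word = "Normation"
print(word.lower().count("n"))  # 2: the leading "N" and the trailing "n"
```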
And this is but one example of many simple cases demonstrating that these models are indeed not reasoning in a manner similar to humans. The outputs are nonetheless useful enough that I myself use Claude and GPT-4o for some work, but with full awareness that I must review the outputs wherever factual accuracy is required.