August 20th, 2024

Artificial intelligence is losing hype

AI investment is declining, with a 15% drop in major firms' share prices. Only 4.8% of American businesses use AI, and skepticism about its effectiveness is increasing.


Artificial intelligence (AI) is experiencing a decline in hype, particularly among investors in Silicon Valley. Following a peak in share prices last month, stocks of major AI firms have dropped by 15%, raising concerns about the technology's ability to generate the anticipated profits. Observers are increasingly questioning the effectiveness of large language models, which underpin popular services like ChatGPT. Despite significant investments from big tech companies, data from the Census Bureau indicates that only 4.8% of American businesses are currently utilizing AI for production, a decrease from 5.4% earlier this year. The outlook for future adoption remains stagnant, with a similar percentage of companies planning to implement AI in the coming year. This trend suggests a growing skepticism about the immediate benefits of AI technology, contrasting with the extravagant promises made by tech firms regarding its potential.

- AI investment is declining, with a 15% drop in share prices of major firms.

- Only 4.8% of American companies currently use AI, down from 5.4%.

- Skepticism is rising regarding the effectiveness of large language models.

- Future adoption of AI among businesses appears stagnant.

- The initial hype surrounding AI is waning as investors reassess its profitability.

AI: What people are saying
The comments reflect a diverse range of opinions on the current state and future of AI technology, particularly in light of declining investment and skepticism.
  • Many users express that while AI has transformative potential, its current implementations often fall short of expectations, leading to disappointment.
  • There is a divide between those who find AI tools like LLMs (Large Language Models) immensely useful in their workflows and those who view them as overhyped and unreliable.
  • Concerns about the sustainability of AI investments are prevalent, with some predicting a potential market correction or "bust" due to inflated expectations.
  • Users highlight the importance of distinguishing between genuine advancements in AI and the hype surrounding it, suggesting that practical applications are still evolving.
  • Some commenters emphasize the need for continued research and development in AI, arguing that while current models have limitations, the field is still progressing.
78 comments
By @futureshock - 6 months
This should really be retitled to “The AI investment bubble is losing hype.” LLMs as they exist today will slowly work their way into new products and use cases. They are an important new capability and one that will change how we do certain tasks.

But as to the hype, we are in a brief pause before the election, where no company wants to release anything that would hit the news cycle in a bad way and cause knee-jerk legislation. Are there new architectures and capabilities waiting? Likely some. Sora showed state-of-the-art video generation, OpenAI has demoed an impressive voice mode, and Anthropic has teased that Opus 3.5 will be even more capable. OpenAI also clearly has some gas in the tank, as they have focused on releasing small models such as GPT-4o and 4o mini. And many have been musing about agents and methods to improve System 2-like reasoning.

So while there’s a soft moratorium on showing scary new capabilities, there is still evidence of progress being made behind the scenes. But what will a state-of-the-art model look like when all of these techniques have been scaled up on brand new exascale data centers?

It might not be AGI, but I think it will at least be enough for the next hype-driven investment bubble.

By @ChaitanyaSai - 6 months
I've trained as a neuroscientist and written a book about consciousness. I've worked in machine learning and built products for over 20 years and now use AI a fair bit in the ed-tech work we do.

So I've seen how the field has progressed and have also been able to look at it from a perspective most AI/engineering people don't -- what does this artificial intelligence look like when compared to biological intelligence. And I must say I am absolutely astonished people don't see this as opening the floodgates to staggeringly powerful artificial intelligence. We've run the 4-minute mile. There are hundreds of billions of dollars going into figuring out how to get to the next level, and it's clear we are close. Forget what the current models are doing; what matters is what the next big leap (most likely with some new architecture change) will bring.

In focusing on intelligence we forget that it's most likely a much easier challenge than decentralized cheap autonomy, which is what took the planet 4 billion years to figure out. Once that was done, intelligence as we recognize it took an eye-blink. Just as with powered flight, we don't need biological intelligence to transform the world. Artificial intelligence that guzzles electricity, is brittle, and has blind spots, but is still capable of 1000 times more than the best among us, is going to be here within the next decade. It's not here yet, no doubt, but I have yet to see any reasoned argument for why it is far more difficult and will take far longer. We are in for radical non-linear change.

By @_acco - 6 months
AI (specifically Claude Sonnet via Cursor) has completely transformed my workflow. It's changed my job description as a programmer. (And I've been doing this for 13y – no greenhorn!)

This wasn't the case with GPT-4/o. This capability is very new.

When I spoke to a colleague at Microsoft about these changes, they were floored. Microsoft has made themselves synonymous with AI, yet their company is barely even leveraging it. The big cos have put in the biggest investments, but also will be the slowest to change their processes and workflows to realize the shift.

Feels like one of those "future is here, not evenly distributed yet" moments. When a tool like Sonnet is released, it's not like big tech cos are going to transform overnight. There's a massive capability overhang that will take some time to work itself through these (now) slow-moving companies.

I assume it was the same with the internet/dot-com crash.

By @aussieguy1234 - 6 months
If anything, I'm getting more hyped up over time. Here are the things I've used LLMs for, with success in all areas, as a solo technical founder.

Business Advice including marketing, reaching out to investors, understanding SAFE notes (follow up questions after watching the Y Combinator videos), customer interview design. All of which, as an engineer, I had never done before.

Create SQL queries for all kinds of business metrics, including Monthly/Daily Active Users, breakdown of users by country, abusive user detection, and more (see the sketch after this list).

Automated unit test creation. Not just the happy path either.

Automated data repository creation, based on a one-shot example and MySQL text output describing the tables involved. From this, I have super-fast data repositories that use raw SQL to get/write data.

Helping with challenging code problems that would otherwise need hours of searching google or reading the docs.

Database and query optimization.

Code Review. This has caught edge case bugs that normal testing did not detect.
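As an illustration of the SQL-metrics item above, here is a minimal sketch of the kind of query an LLM can draft; the SQLite database and the events table schema are invented for this example:

```python
import sqlite3

# Hypothetical schema, invented for this sketch:
#   events(user_id TEXT, created_at TEXT)  -- created_at stored as ISO-8601
conn = sqlite3.connect("app.db")
monthly_active_users = """
    SELECT strftime('%Y-%m', created_at) AS month,
           COUNT(DISTINCT user_id)       AS mau
    FROM events
    GROUP BY month
    ORDER BY month;
"""
for month, mau in conn.execute(monthly_active_users):
    print(month, mau)
```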

I'm going to try out aider + Claude Sonnet 3.5 on my codebases. I have heard good things about it and some rave reviews on X/Twitter. I watched a video where an engineer had a bug, described it to some tool (which wasn't specified, but I suspect aider), then Claude created a test to reproduce the bug and then fixed the code. The test passed; they then did a manual test and the bug was gone.

By @gennarro - 6 months
I tried to do some AI database cleanup this weekend - simple stuff like zip lookup and standardizing spacing and caps - and ChatGPT managed to screw it up over and over. It's the sort of thing where a little error means the answer is totally wrong, so I spent an hour refining the query and then addressing edge cases etc. I could have just done it all in Excel in less time with less chance of random (hard to catch) errors.
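For comparison, the deterministic route is short and every rule is explicit, so nothing can be silently improvised; a minimal sketch in Python, assuming hypothetical name and ZIP fields:

```python
import re

def clean_row(name: str, zip_code: str) -> tuple[str, str]:
    """Deterministic cleanup: collapse whitespace, normalize caps, pad ZIPs."""
    name = re.sub(r"\s+", " ", name).strip().title()
    zip_code = re.sub(r"\D", "", zip_code).zfill(5)  # keep digits, left-pad to 5
    return name, zip_code

print(clean_row("  john   SMITH ", "501"))  # -> ('John Smith', '00501')
```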
By @mrinfinitiesx - 6 months
Good. It's decent for summarizing, giving me bullet points, and explaining things like I'm 5. It makes it easy to code things that I don't want to code or spend time figuring out how to do in new languages. Other than that, I see no real-world applications outside of listening to Burger King orders and putting them on a screen for people to make them. Simple support requests, and of course making buzzword-esque documents that you can feed into a deck-maker for presentations and stuff.

All in all, it helps assist us in new ways. Had somebody take a picture of a car part that had no markings and it identified it, found the maker/manufacturer/SKU and gave all the details etc. That stuff is useful.

But now we're looking at inauthentic stuff. Artists and writers being plagiarized, job cuts (for said marketing/pitches, BS presentations to downsize teams). It's not just losing its hype, it's losing any hype in building humanity for the better. It's just more buzzwords, more 'glamour', more 'pop' shoved in our faces.

The layoffs aren't looking pretty.

Works well to help us code though. Viva, sysadmins unite.

By @ianbutler - 6 months
This supposed “cycle” has been crazy: it’s been about 1.5 years since GPT-4 came out, which is really the first generally capable model. I think a lot of this “cycle” is the media’s wishful thinking. Humans, especially humans in large bureaucracies, just don't move this quickly. Enterprises have barely had time to dip their toes in.

For what it’s worth hype doesn’t mean sustainability anyway. If all the jokers go onto a new fad it’s hardly the skin off the back of anyone taking this seriously, they’ve been through worse times.

By @h_tbob - 6 months
To be honest, I was surprised by ChatGPT. I didn’t think we were close.

We are running out of textual data to train on… so now they have switched to VIDEO. Geez, now they can train on all the VIDEOS on the internet.

And when they finally get bots working, they will have limitless streams of TACTILE data…

Writing it off as the next fad seems fun. But to be honest, I was shocked by what openai did the first time. So they have my respect. I don’t think many of us saw it coming. And I think writing their creativity off again may not be wise.

So when they say the bubble is about to break… I get it. But I don’t see how.

I hardly ever pay for anything.

But I gladly spend money on ai to get the answers I need. Just makes my work work!

Also I would say the economic benefit of this tech for workers is that it will 2x the average worker as they catch on. Seriously I am a 2x coder compared to what I was because of this.

Therefore if I, a person who hardly ever spends money, have to buy it… I think eventually all businesses will realize all their employees need it, driving massive revenue for those who sell it.

But it may not be the companies we think.

By @anonyfox - 6 months
The sweet spot of the current LLMs (not whatever the next gen might or might not improve on) for me is similar to suddenly having an army of idiots at my fingertips.

There are a lot of smallish tasks/problems people/systems need to deal with; some of them even waste notable real engineering capacity, yet a high-schooler could do them quite easily by hand.

Example: find out if a text contains an email address, including all kinds of shenanigans people do to mask it (may not be allowed, ... whatever). From a purely coding standpoint, this is a cat-and-mouse game of improving regex solutions to also find the more sophisticated patterns, but there will always be uncaught/new ways, or simply errors that produce false positives. But a high-schooler can be given a text and instantly spot the email address (or confirm none is in there).
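A minimal sketch of that cat-and-mouse regex approach; the two de-obfuscation rules below are just examples of the rules one keeps adding, and plain "bob AT example DOT com" already slips through:

```python
import re

# Undo two common obfuscations, then look for a plain address.
# Every new masking trick needs a new rule: "bob AT example DOT com"
# (no brackets) already slips through this version.
DEOBFUSCATIONS = [
    (re.compile(r"\s*[\[\(]\s*at\s*[\]\)]\s*", re.IGNORECASE), "@"),
    (re.compile(r"\s*[\[\(]\s*dot\s*[\]\)]\s*", re.IGNORECASE), "."),
]
EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def find_email(text: str) -> str | None:
    for pattern, replacement in DEOBFUSCATIONS:
        text = pattern.sub(replacement, text)
    match = EMAIL.search(text)
    return match.group(0) if match else None

print(find_email("reach me: bob [at] example [dot] com"))  # -> bob@example.com
```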

In order to "solve" these types of small problems, LLMs are pretty much fantastic. It needs to only be reliable enough to produce a structured answer within a few attempts and cheap enough to not be a concern for finance/operations. That's why, for me, it makes absolute sense that the #1 priority for OpenAI since GPT-4 has been building smaller/faster/cheaper models. Automators need exactly that, not genius-level AGI.
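For the LLM route, here is a sketch of that "structured answer within a few attempts" loop, using the OpenAI Python SDK's JSON mode; the model choice and prompt wording are illustrative assumptions, not a prescription:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_email(text: str, attempts: int = 3) -> str | None:
    """Ask a small, cheap model for a structured answer; retry a few times."""
    prompt = (
        "Does the following text contain an email address, even an obfuscated "
        'one? Answer as JSON: {"email": "<address>"} or {"email": null}.\n\n'
        + text
    )
    for _ in range(attempts):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # JSON mode
        )
        try:
            return json.loads(response.choices[0].message.content)["email"]
        except (json.JSONDecodeError, KeyError):
            continue  # malformed answer: cheap enough to just ask again
    return None
```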

Also, I think we're still not even scratching the surface of how many tasks can be automated away within the current constraints/flaws of LLMs (hallucination, accuracy, ...). Everyone tries to hype up some super generic powerful future (that usually falls flat after a while), whereas the true value of LLMs is in the many small things where hardcoding solutions is expensive but an intern could do it right away.

By @julienchastang - 6 months
As usual, when we see a thread on this topic on HN, the reactions tend to be bimodal: either "Yes, AI has transformed my workflow" (which is where I mostly fall), or "No, it's over-hyped." The latter often comes with an anecdote about how an LLM failed at a relatively simple task. I speculate that this diversity in opinion might be related to whether or not the user is employing a pro-tier LLM. Personally, I've been very impressed by ChatGPT-4 across a wide range of tasks, from debugging K8s logs to coding and ideation. I also wonder if some of the negative reactions stem from bad luck with an initial "first contact" with an LLM, where the results fell flat for any number of reasons (e.g., poor prompting), leading the user to conclude that it's not worth their time.
By @cs702 - 6 months
The OP is not about AI as a field of research. It's about whether the gobs of money invested in "AI" products and services in recent years, fueled by hype and FOMO, will earn a return, and whether we are approaching the bust of a classic boom-bust over-investment cycle.

Seemingly every non-tech company in the world has been trying to figure out an "AI strategy," driven by hype and FOMO, but most corporate executives have no clue as to what they're doing or ought to be doing. They are spending money on poorly thought-out ideas.

Meanwhile, every tech company providing "AI services" has been spending money like a drunken sailor, fueled by hype and FOMO. None of these AI services are generating enough revenue to cover the cost of development, training, or even, in many cases, inference.

Nvidia, the dominant software-plus-hardware platform (CUDA is a big deal), appears to be the only financial beneficiary of all this hype and FOMO.

According to the OP, the business of "AI" is losing hype, suggesting we're approaching a bust.

By @mark_l_watson - 6 months
Yes and no. Hype over ‘API wrapper’ projects and startups will crash a bit, I think.

On the other hand, we are nowhere near approaching hard limits on LLMs. When LLMs start to be trained for smaller subject areas with massive hand-curated examples for solving problems, they will reach expert performance in those narrow tech areas. These specialized models will be combined in general-purpose MoEs.

Then new approaches beyond LLMs, RL, etc. will be discovered, perfected, made more efficient.

Seriously, any hard limits are far into the future.

By @keiferski - 6 months
It’s certainly possible that AI is being overhyped, and I think in some cases it definitely is - but being tired of hearing about it in no way correlates to its actual usefulness.

In other words, a lot of people seem to think that human attention spans are what determine everything, but the technological cycles at work here are much, much deeper.

Personally I have used Midjourney and ChatGPT in ways that will have huge impacts on many activities and industries. Denying that because of media trendiness about AI seems shortsighted.

By @bufferoverflow - 6 months
AI is not one thing at the moment. We have multiple systems that are being developed in parallel:

• text generators

• code generators

• image generators

• video generators

• speech generators

• sound/music generators

• various robotics vision and control systems (often trained in virtual environments)

• automated factories / warehouses / fulfillment centers

• self-driving cars (trucks/planes/trains/boats/bikes/whatever)

• scientific / reasoning / math AIs

• military AIs

I find all of these categories already have useful AIs. And they are getting better all the time. The progress might slow down here and there, but it keeps on going.

Self-driving was pretty bad a year ago, and now we have Tesla FSD driving uninterrupted for multiple hours in complex city environments.

Image generators now exceed 99.9% of humans in painting/drawing abilities.

Text generators are decent. There are hallucination issues, and they are not creative at the best human level, but I'd say they write better than 90% of humans. When it comes to poetry/lyrics, they all still suck pretty badly.

Video generators are in their infancy - we get decent quality, but absolutely mental imagery.

Reasoning is the weakest point, in my opinion. Current-gen models are just not good at reasoning. Sometimes they are brilliant, but then they make very silly mistakes that a 10-year-old child wouldn't make. You just can't rely on their logical abilities. I have really high hopes for that area. If they can figure out reasoning, our science research will become a lot more reliable and a lot faster.

By @kmarc - 6 months
My employer PoC'd, collaborated with and eventually bought Codeium's solution.

I couldn't care less about the LLM hype (or any hype, for that matter). I especially didn't bother going to a new website (ChatGPT) or installing new IDEs, etc.

I checked Codeium's mycompany-customized landing page: a one-liner vim plug-in installation and copy-pasting an auth token.

I started typing in the very same editor, very same environment, very same everything, and the thing just works: most of the time it guesses well what I want to write, so I just press tab to accept, and voila.

I wasn't expecting such a seamless experience.

I still haven't integrated its "chat" functionality into my workflow (maybe I won't at all). I'm not hyped about it, it just feels like a companion to already working (and correct) code completion.

I read a lot about other people's usage (I'm a devXP engineer), and I feel like, for whatever reason, there is more love/hype/faith in their chosen AI companion than actual improvement they could get by humbly understanding the code, reading (and writing) docs, and reasoning about the engineering solution.

As with everything, AI is now losing hype, but somehow (in my bubble) it seems like engineers are still high on it. But I also see that this will further distill the set of people who I look up to and want to collaborate with, because of that mentioned humbleness, as opposed to just accepting text-predicted solutions mindlessly.

By @11thEarlOfMar - 6 months
Not until we've seen a plethora of AI startups go public with no revenue.
By @ummonk - 6 months
Whether and to what extent AI can be monetized is an open question. But there's no question that LLMs are already seeing extensive use in everyday office work and already making large improvements to productivity.
By @mensetmanusman - 6 months
I’m just surprised something nearly replaced google in my lifetime.
By @technick - 6 months
I was out at Defcon this year and it was all about AI this, AI that, AI will solve the world's problems, AI will catch all threats, blah blah blah blah...
By @zombot - 6 months
Such bad timing! As I was just about to replace my dentist, my oncologist, my GP, my tax attorney, and my investment banker with CrapGPT, expecting baldness and cancer to be cured by tomorrow, not to mention a get-rich-quick scheme for everybody and their grandmother. Son, I am disappoint.
By @jimjimjim - 6 months
But what about all those organizations that have "Do something with AI" as the goal for the quarter? All those bonuses driving people to somehow add AI to products. All the poor devs that have been told to replace features driven by deterministic code with AI good-enough-ness.
By @ssimoni - 6 months
Hilarious. The article tries to go even one step further past the loss of hype, making an additional argument that AI might not be in a hype cycle at all. Meaning they conjecture that it might not even come out of the trough of disillusionment to mass adoption.

That’s gonna be a bad take I think.

By @someonehere - 6 months
It’s become an invaluable resource for my team in debugging scripts we’ve written for our services. There are a couple of third-party integrations that have been helping us greatly increase our release of features and fixes for problems in our company.
By @gorgoiler - 6 months
Asking an API to write three paragraphs of text still takes tens of seconds and requires working internet and an expensive data center.

Meanwhile we’re seeing the first of the new generation of on-device inference chips being shipped as commodity edge compute.

When the devices you use every day — cars, doorbells, TV remotes, points-of-sale, roombas — can interpret camera and speech input locally in the time it takes to draw a frame and with low enough power to still give you 10h between charges: then we’ll be due another round of innovation.

The article points to how few parts of the economy are leveraging the text-only API products currently available. That still feels very Web 1.0, for me.

By @e-clinton - 6 months
Claude 3.5 is vastly better than 4o. I produce new features at a rate that's 2-3x faster than I could without it. It's not perfect and isn't great in all use cases, but overall transformational. I've been coding for 20+ years.
By @castigatio - 6 months
I think many things can be true at the same time:

- AI is currently hyped to the gills

- Companies may find it hard to improve profits using AI in the short term

- A crash may come

- We may be close to AGI

- Current models are flawed in many ways

- Current-level generative AI is good enough to serve many use cases

Reality is nobody truly knows - there's disagreement on these questions among the leaders in the field.

An observation to add to the mix:

I've had to deliberately work full time with LLMs in all kinds of contexts since they were released. That means forcing myself to use them for tasks whether they are "good at them" yet or not. I found that a major inhibitor to my adoption was my own set of habits around how I think and do things. We aren't used to offloading certain cognitive/creative tasks to machines. We still have the muscle memory of wanting to grab the map when we've got GPS in front of us. I found that once I pushed through this barrier and formed new habits, it became second nature to create custom agents for all kinds of purposes to help me in my life. One learns what tasks to offload to the AI and how to offload them, and when and how one needs to step in to pair them with the different capabilities of the human mind.

I personally feel that pushing oneself to be an early adopter holds real benefit.

By @nbzso - 6 months
Honestly, I am enjoying it. From "Dave will replace you" to "this is not working well" in 6 months. A new record. Logically, everyone around me forgot that I patiently explained the limits of stochastic parrots and the false hope placed on synthetic data. If we lived in a remotely responsible place, some people would have their heads rolling down the stairs. The psychological damage to the workforce from AI hype is comparable only to the negative effect of social networks on society. :)
By @dbrueck - 6 months
For me, the most amazing thing about LLMs is translation between written languages (human, not programming). I can only speak to the translation between English and Spanish on everyday topics, but ChatGPT often produces translations that are near native speaker quality, and even when it doesn't, the results are almost always far, far above the "good enough to communicate clearly" threshold. It's incredible.
By @nerdjon - 6 months
I feel like I have to disagree, even though I really don't want to. This technology is seriously overhyped.

We have to realize that there is a ton of money right now behind pushing AI everywhere. We have entire conventions for leadership pushing that a year later "is the time to move AI to Prod" or "Moving past the skeptics".

We have investors seemingly asking every company they invest in "how are you using generative AI" before investing. We have Microsoft, Google, and Apple (to a lesser degree) forcing AI down our throats whether we like it or not and ignoring any reliability (inaccurate) issues.

FFS Microsoft is pushing AI as a serious branding part of Windows going forward.

We have too much money committed to pushing the idea that we already have general AI, too much marketing, etc.

Consumer hype and money in this situation are going to be very different things. I do think a bust is going to happen, but I don't think the "hype" has died down in any meaningful way. I think and I hope it will die down, as we keep seeing how the technology simply can't do what they are claiming. But I honestly don't think it is going to happen until something catastrophic happens, and it is going to be ugly when it does. Hopefully your company won't be so reliant on it that it can't recover.

By @moi2388 - 6 months
Well, maybe because people and companies still overwhelmingly seem to think LLMs == AI.

AI ain’t going nowhere. And certainly isn’t overhyped. LLMs however, certainly are overhyped.

Then again I find it a good interface for assistants and actual AI and APIs that it can call on your behalf

By @Ologn - 6 months
> Since peaking last month the share prices of Western firms driving the AI revolution have dropped by 15%.

NVDA's high closes were $135.58 on June 18, down to $134.91 on July 10th, and a $130 close today. Its highest sale is $140.76. So its close today is 8% off its highest sale ever, and 4% off its highest close ever, not a big thing for a volatile stock. Its earnings are next week and we'll see how it does.

Nvidia and SMCI are the ones who have been earning money selling equipment for "AI". For Microsoft, Google, Facebook, Amazon, OpenAI, etc., it is all big initial capital expenditure which they (and the scolding investment bank analysts) hope to recoup in the future.

By @bulbosaur123 - 6 months
Same way mobile phones are losing their hype...they've become ubiquitous
By @carlmr - 6 months
I'm really wondering if we're going to see a lack of people with CS degrees a few years from now because of Jensen Huang saying AI will do all that and we should stop learning how to program.
By @yawboakye - 6 months
> artificial intelligence is losing hype.

among which audience? is the hype necessary for further development? we attained much, if not all, of the recent achievements without hype. if anything, i'm strongly in favor of ai losing all the hype so that our researchers can focus on what's necessary, not what will win the loudest applause from so fickle a crowd. i'd be worried if ai were attracting fewer researchers than, say, two or three years ago. that doesn't seem to be the case.

By @justmarc - 6 months
Maybe it's because people are finding out that it's actually not as intelligent as they thought it would be in its current iteration.

The future is most definitely exciting though, and sadly quite scary, too.

By @m3kw9 - 6 months
Hype is relative to your circle, where you are getting your info, and how the algorithm targets you with info that interests you. So yes, the hype is tiring for that Economist journalist, but many have not even heard of or used it, and then there is everyone in between. As for myself, there is hype, but to me it seems justified based on how good the LLMs currently are.
By @nottorp - 6 months
https://en.wikipedia.org/wiki/AI_winter

Those who do not know history are doomed to repeat it.

But then, the current hype wasn't there to produce something useful, but for "serial entrepreneurs" to get investor money. They'll just move to the next hyped thing.

By @naasking - 6 months
Yes, hype happens because something new that can potentially be applied to many problems triggers lots of experimentation and discussion. Once people figure out the problems to which it's well-suited and ill-suited, experimentation and discussion die down and there's just application. Nothing to see here, this is standard and expected.
By @cleandreams - 6 months
The problem is that current generative AI is not actually intelligent.

Yann LeCun had a great tweet on this:

Sometimes, the obvious must be studied so it can be asserted with full confidence:

- LLMs can not answer questions whose answers are not in their training set in some form,

- they can not solve problems they haven't been trained on,

- they can not acquire new skills or knowledge without lots of human help,

- they can not invent new things.

Now, LLMs are merely a subset of AI techniques. Merely scaling up LLMs will not lead to systems with these capabilities.

link https://x.com/ylecun/status/1823313599252533594?ref_src=twsr...

To focus on this:

- LLMs can not answer questions whose answers are not in their training set in some form,

- they can not solve problems they haven't been trained on

Given that we are close to the maximum size of the training set, this means they are not going to improve without some technical breakthrough that is completely unknown at the moment. Going from "not intelligent" to "intelligent" is a massive shift.

By @KingOfCoders - 6 months
Which is great, the internet exploded when TV stopped talking about "the internet" and everyone just used it.
By @DebtDeflation - 6 months
Things that are coming to an end:

- Startups whose entire business model is to just provide a wrapper around OpenAI's API.

- Social Media "AI Influencers" and their mindless "7 Ways To Become A Millionaire With ChatGPT" videos.

- Non-technical pundits claiming we are 1-2 years from AGI (and AGI talk in general).

- The stock market assigning insane valuations to any company that claims to somehow be "doing AI".

Things that are NOT coming to an end:

- Ongoing R&D in AI (and not just LLMs).

- Companies at the frontier of AI (OpenAI, Anthropic, Mistral, Google, Meta) releasing ever more capable models and tooling around those models.

- Forward looking companies in all industries using AI both to add capabilities to their products and to drive efficiencies in internal processes.

By @mikhael28 - 6 months
As long as Zuck keeps releasing open-source models, the moat will continue to disappear from these companies. Only expensive, corporate processing tiers will exist and everyone will run stuff locally. Not a lot of money to be made from local processing.
By @taberiand - 6 months
Sure it's not all it's cracked up to be but I sure hope there's a sweet spot where I can run the latest models for a cheap price ($20 / month is a steal), and it doesn't instead crash to the point where they get turned off
By @kderbyma - 6 months
Honestly, the only thing I have found somewhat useful with LLMs is getting smarter tab complete, occasionally filling out small methods in classes, and finally writing unit tests and adding documentation. It saves me a little time, mostly by improving test coverage and readability. But until I give it some examples, it usually hallucinates even method names within the class, ones that are very similar but slightly different, and some of the time saved is lost by having to fix its mistakes. I would say it's improving my LoC output by maybe 5-15% max, but the tab complete is nice when writing code.
By @pif - 6 months
The most useful ChatGPT has been for me consisted in teaching me some nice recipes for elk eggs.

For the record, before spelling the recipes out, it made sure I understood that collecting elk eggs may be unlawful in some jurisdictions.

By @moridin - 6 months
Good, maybe now we can focus on building killer apps rather than hype-posturing.
By @lz400 - 6 months
AI is a very strange thing, where 2 seemingly smart coders use it and one comes out thinking it's obviously revolutionary and the other one thinking it's a waste of time, and where 2 seemingly smart journalists use it and one thinks AGI and the end of the world is nigh and the other one thinks the market will crash when the hype dies down.

I think part of it is due to the politically and internet-induced death of nuance. But part of it I can't fully understand.

Personally I think it's rather useful. I don't consider myself a heavy user and still use it almost every day to help code, I ask it a lot of questions about specific and general stuff. It's partially or totally substituted for me: Stack Overflow, Google Search, Google Translate, most tech references. In the office I see people using it all the time, there's almost always a chatgpt window open in some of the displays.

I think it's very difficult to say this is 100% hype and/or a "phase". It's almost a proven fact it's useful and people will want it in their lives. Even if it never improves again, ever. It's a new tool in the toolbox and there will be businesses providing it as a service, or perhaps we will get to open source general availability.

On the other extreme, all the AI doomerism and AGI stuff to me seems almost as unfounded as before generative AI. Sure, it's likely we'll get to AGI one day. But if you thought we were 100 years away, I don't think chatgpt put us any closer to it and I just don't get people who now say 5. I'd rather they worried about the impact of image gen AI in deepfakes and misinformation. That's _already_ happening.

By @jdefr89 - 6 months
It’s hilarious seeing people getting LLMs to do tasks that traditional discrete algorithms already do perfectly. “Let’s use an LLM to do basic arithmetic!” Like, that’s not what they are built for. We want more generalization… So much to unpack here, and I’m tired of having to explain these basic things. You will know our models got more powerful if they can do something like solve the ARC challenge, not by cramming them with new updated information we know they will already process a certain way…
By @laichzeit0 - 6 months
That's great. Then don't use it? I however find it immensely useful and will continue to use it.
By @scubadude - 6 months
I'm still waiting for the Virtual Reality from 1996 to change the world. Colour me surprised that AI is being found to be 90% hype.
By @matrix87 - 6 months
> Silicon Valley’s tech bros are having a difficult few weeks.

they need to find a different derogatory slur to refer to tech workers

ideally one that isn't sexist and doesn't erase the contributions of women to industry

By @janalsncm - 6 months
“AI” never existed, at least AGI never did. AI that works is called machine learning and it’s not going away because it actually drives revenue at many companies. But the people who are working on that were working on it before blockchain and they’ll be working on it long after the next hype cycle runs out of steam. Unlike grifting, actual expertise takes time.

I have mixed feelings. On the one hand, I have a ton of schadenfreude for the AI maximalists (see: Leopold Aschenbrenner and the $1 trillion cluster that will never be), hype men (LinkedIn gurus and Twitter “technologists” that post threads with the thread emoji regurgitating listicles) or grifters (see: Rabbit R1 and the LAM vaporware).

On the other hand, I’m worried about another AI winter. We don’t need more people figuring out how to make bigger models, we need more fundamental research on low-resource contexts. Transformers are really just a trick to be able to ingest the whole internet. But there are many times where we don’t have a whole internet worth of data. The failure of LLMs on ARC is a pretty clear indication we’re not there yet (although I wouldn’t consider ARC sufficient either).

By @XCSme - 6 months
I think text-to-SQL is quite cool and works reasonably well.
By @bentt - 6 months
All we need to see is a computer that operates itself according to what you ask it to do, and the hype will be back. For some reason nobody's really showing this, but it seems obvious. Maybe it's too dangerous.
By @gsky - 6 months
It's the only one creating tech jobs at the moment
By @Kuinox - 6 months
Why is there no journalist's name on this article?
By @freemoney99 - 6 months
These days you can't be a respected news outlet if you don't regularly have an article/post/blog about AI losing hype. Wondering when that fad will reach its peak...
By @ryoshu - 5 months
Good. Time to build.
By @j-a-a-p - 6 months
TL;DR: the article is not so much about AI; it is more about Gartner's hype cycle. According to The Economist's data, only 25% of tech hypes follow this pattern. Many more (no percentage given) are just a flash in the pan.

AI is following more of a seasonal pattern, with AI winters; can we expect a new winter soon?

By @dwighttk - 6 months
Good… it’s all hype
By @bpiroman - 6 months
I use ChatGPT almost every day as part of my coding workflow
By @j_timberlake - 6 months
They were writing pro-AI articles less than 2 months ago. They can just post AI-hype and AI-boredom articles so both sides will give them clicks. It's like an alternate form of Gell-Mann Amnesia that you're feeding.
By @rldjbpin - 6 months
while this field is now paying my bills, i am lowkey happy to see this notion in the mainstream.

> “An alarming number of technology trends are flashes in the pan.”

this has been a trend that seems to keep recurring but does not stop the tech bros from pushing the marketing beyond the realities.

raising money in the name of the future will give you similar results to self-driving cars or vr. the potential is crazy, but it is not going to make you double your money in a couple of financial years. this should help serious initiatives find better-aligned investors.

By @meindnoch - 6 months
The only time I found LLMs useful was creating fake heartwarming stories to farm likes from boomers on Facebook.
By @signa11 - 6 months
can someone please post an archive link to this article ? thank you !
By @iainctduncan - 6 months
The real take away from this article is that the Gartner hype cycle is bullshit.
By @olalonde - 6 months
> Silicon Valley’s tech bros

The Economist, seriously?

By @robertlf - 6 months
Why post an article that's behind a paywall? How many of us can read it?
By @rambojohnson - 6 months
blah blah blah
By @megamike - 6 months
tell me I am already bored with it next.....
By @kkfx - 6 months
ML was born in two master branches: one is image manipulation, where video manipulation follows; the other is textual search and generation, toward the holy grail of semantic search.

The first started with simple non-ML image manipulation and video analysis (like finding baggage left unmoved for a certain amount of time in a hall, or trespassing alerts for gates) and reached the level of live video analysis for autonomous driving. The second dates back a very long time, maybe to Conrad Gessner's library of Babel, the Bibliotheca Universalis (~1545), with a simple consideration: a book is good for developing and sharing a specific topic, a newspaper for knowing "at a glance" the most relevant facts of yesterday, and so on, but we still need something to elicit specific bits of information out of "the library" without a human having to read everything manually. Search engines do work, but they have limits. LLMs are the failed promise of being able to juice information (into a model) and then extract it, well distilled, on user prompt. That's the promise; the reality is that pattern matching/prediction can't do much here, for the same problem we have with images: there is no intelligence.

For an LLM, if a known scientist (as per tags in some parts of the model's ingested information) says, joking in a forum, that eating a small rock a day is good for your health, the LLM will suggest the practice, simply because it has no concept of a joke. Similarly, having no knowledge of humans, a hand with ten fingers looks perfectly sound to it.

That's the essence of the bubble: PR people and people without knowledge saw Stable Diffusion producing an astronaut riding a horse, asked ChatGPT some questions, and said "WOW! OK, not perfect, but it will be just a matter of time," and the answer is no, it will NOT be, at least with the current tech. There are some uses: automatic translation, imperfect but good enough that 1 human translator can do the job of 10; low-importance ID checks done with electronic IDs plus face recognition, so a single human guard can operate 10 gates alone in an airport, intervening only where face recognition fails. Essentially, a FEW low-skill jobs might be automated; the rest is just classic automation, like banks closing offices simply because people use internet banking and pay with digital means, so there is almost no need to pick up or deposit cash anymore, and no reason to go to the bank. The potential can't grow much more, so the bubble bursts.

Meanwhile, big tech wants to keep the bubble up, because LLM training is not something single humans can do at home, the way we can run a home server for email, a VoIP phone system, file sharing, and so on. Yes, it's doable as a community, like search with YaCy or maps with OpenStreetMap, but the need for data and patient manual tagging is simply too cumbersome for a community-built model to match or surpass one done by big tech. Since IT knowledge has very lately, and in a very limited way, started to spread just enough to endanger the big tech model, they need something users can't do at home on a desktop. That's part of the fight.

Another part is the push toward no-ownership for the 99%, the better to lock in/enslave. So far the cloud+mobile model has created lock-in, but users can still get their data and host things themselves; if they do not operate computers anymore, just "smart devices," the option to download and self-host is next to none. Hence the push for autonomous taxis instead of personal cars, connected dishwashers that send 7+ GB/day home, and so on. This does not technically work, so despite the immense amount of money and the struggles of the biggest players, people are starting to smell a rat and the mood drops.

By @zelcon - 6 months
Copium
By @omnee - 6 months
I just asked Google's Gemini the following question:

Q: How many N's are there in Normation?

A: There is one N in the word "Normation"

Note that the answer is the same when asked n's instead of N's.
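The character-level ground truth is easy to check; the correct count is two:

```python
# The count the model missed: "Normation" contains two n's.
print("Normation".lower().count("n"))  # -> 2
```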

And this is but one example of many simple cases demonstrating that these models are indeed not reasoning in a manner similar to humans. However, the outputs are useful enough that I myself use Claude and GPT-4o for some work, but with full awareness that I must review the outputs in cases where factual accuracy is required.