September 29th, 2024

Do AI Companies Work?

AI companies developing large language models face high costs and significant annual losses. Continuous innovation is crucial for competitiveness, as older models quickly lose value to open-source alternatives.

The article discusses the sustainability and viability of AI companies, particularly those developing large language models (LLMs). It highlights the significant financial burdens these companies face, with OpenAI reportedly burning through $7 billion annually while attempting to raise $6.5 billion. The costs associated with building and improving LLMs are expected to rise, as advancements in technology require increasingly complex computations. Despite these challenges, there is a strong belief in the potential profitability of LLMs, leading to a competitive environment where companies must continuously innovate to avoid obsolescence. The rapid pace of technological advancement means that older models quickly lose value, and open-source alternatives are becoming more competitive. The article suggests that AI companies have limited options: either invest heavily to stay ahead or risk being outpaced by competitors. Unlike traditional cloud providers, AI companies face a unique threat of disruption due to the ease of access to computing resources. The author concludes that the market's irrationality may be the only way for these companies to remain solvent, as they navigate a landscape where ongoing investment is crucial for survival.

- AI companies face high operational costs, with significant annual losses reported.

- Continuous innovation is essential to maintain competitiveness in the rapidly evolving AI landscape.

- The value of older models diminishes quickly, with open-source alternatives gaining traction.

- Unlike traditional cloud services, AI companies can be disrupted more easily due to accessible resources.

- Market irrationality may be necessary for AI companies to sustain their operations.

AI: What people are saying
The comments reflect a diverse range of opinions on the challenges and future of AI companies developing large language models (LLMs).
  • Many commenters express skepticism about the sustainability of current business models, highlighting the high costs and competition from open-source alternatives.
  • There is a consensus that continuous innovation is essential, with some suggesting that the rapid pace of development may lead to a "race to the bottom" in terms of model quality and profitability.
  • Several comments emphasize the importance of user experience and product differentiation, arguing that companies need to focus on building a strong brand and ecosystem.
  • Some participants discuss the potential for AI to become a commodity, with concerns that LLMs may lose their unique value as technology advances.
  • The discussion also touches on the broader implications of AI in society, including its role in national security and the potential for AGI development.
62 comments
By @LASR - 7 months
I lead an applied AI research team at a mid-sized public enterprise products company. I've been saying this in my professional circles quite often.

We talk about scaling laws, superintelligence, AGI, etc. But there is another threshold - the ability for humans to leverage superintelligence. It's just incredibly hard to innovate on products that fully leverage superintelligence.

At some point, AI needs to connect with the real world to deliver economically valuable output. The rate-limiting step is there, not smarter models.

In my mind, already with GPT-4, we're not generating ideas fast enough on how best to leverage it.

Getting AI to do work involves getting AI to understand what needs to be done from highly bandwidth-constrained humans using mouse / keyboard / voice to communicate.

Anyone using a chatbot has already felt the frustration of "it doesn't get what I want", and also "I have to explain so much that I might as well just do it myself".

We're seeing much less of "it's making mistakes" these days.

If we have open-source models that match GPT-4 on AWS / Azure etc., there's not much point in going with players like OpenAI / Anthropic who may have even smarter models. We can't even use the dumber models fully.

By @llm_trw - 7 months
I've found that everything that works stops being called AI.

Logic programming? AI until SQL came out. Now it's not AI.

OCR, computer algebra systems, voice recognition, checkers, machine translation, go, natural language search.

All solved, all not AI any more, yet all were AI before they got solved by AI researchers.

There's even a name for it: https://en.m.wikipedia.org/wiki/AI_effect

By @bhouston - 7 months
I think we are in the middle of a steep S-curve of technology innovation. It is far from plateauing and there are still a bunch of major innovations that are likely to shift things even further. Interesting time and these companies are riding a wild wave. It is likely some will actually win big, but most will die - similar to previous technology revolutions.

The ones that win will win not just on technology, but on talent retention, business relationships/partnerships, deep funding, marketing, etc. The whole package, really. Losing is easy; miss out on one of these for a short period of time and you've easily lost.

There is no major moat, except great execution across all dimensions.

By @bradhilton - 7 months
Kind of feels like the ride-sharing early days. Lots of capital being plowed into a handful of companies to grab market share. Economics don't really make sense in the short term because the vast majority of cash flows are still far in the future (Zero to One).

In the end the best funded company, Uber, is now the most valuable (~$150B). Lyft, the second best funded, is 30x smaller. Are there any other serious ride sharing companies left? None I know of, at least in the US (international scene could be different).

I don't know how the AI rush will work out, but I'd bet there will be some winners and that the best capitalized will have a strong advantage. Big difference this time is that established tech giants are in the race, so I don't know if there will be a startup or Google at the top of the heap.

I also think that there could be more opportunities for differentiation in this market. Internet models will only get you so far and proprietary data will become more important potentially leading to knowledge/capability specialization by provider. We already see some differentiation based on coding, math, creativity, context length, tool use, etc.

By @bcherny - 7 months
This article, and all the articles like it, are missing most of the puzzle.

Models don’t just compete on capability. Over the last year we’ve seen models and vendors differentiate along a number of lines in addition to capability:

- Safety

- UX

- Multi-modality

- Reliability

- Embeddability

And much more. Customers care about capability, but that’s like saying car owners care about horsepower — it’s a part of the choice but not the only piece.

By @flappyeagle - 7 months
This is like when VCs were funding all kinds of ride share, bike share, food delivery, cannabis delivery, and burning money so everyone gets subsidized stuff while the market figures out wtf is going on.

I love it. More goodies for us

By @cageface - 7 months
It seems very difficult to build a moat around a product when the product is supposed to be a generally capable tool and the input is English text. The more truly generally intelligent these models get the more interchangeable they become. It's too easy to swap one out for another.
By @martin_drapeau - 7 months
The fundamental question is how to monetize AI?

I see 2 paths:

- Consumers - the Google way: search and advertise to consumers

- Businesses - the AWS way: attract businesses to use your API and lock them in

The first is fickle. Will OpenAI become the door to the Internet? You'd need people to stop using Google Search and rely on ChatGPT for that to happen. Short term you can charge a subscription, but long term it will most likely become a commodity with advertising.

The second is tangible. My company is plugged directly into the OpenAI API. We build on it. Still very early and not so robust, but getting better and cheaper and faster over time. Active development. No reason to switch to something else as long as OpenAI leads the pack.
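
As a rough illustration of how thin that integration layer can be, here is a minimal sketch of calling the OpenAI chat API with the official Python client (the model name and prompts are placeholders, not a recommendation):

```python
# Minimal sketch of building on the OpenAI API (official python client >= 1.0).
# Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # whichever model leads the pack today
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this support ticket: ..."},
    ],
)
print(response.choices[0].message.content)
```

The thinness of this layer is exactly why the article argues switching costs are low; the lock-in comes from everything built around it.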

By @insane_dreamer - 7 months
The biggest problem that I have with AI is how people extrapolate out of thin air, going from "LLMs can craft responses that sound exactly like a human" to "LLMs can solve climate change and world hunger". Those are two orthogonal skills and nothing we've seen so far indicates that LLMs would be able to make that jump, any more than I expect someone with a PhD in linguistics to solve problems faced by someone with a PhD in applied physics working on solving nuclear fusion.
By @ManlyBread - 7 months
I am wondering where exactly are the AI products. I am not talking about boring, nerd stuff but rather about products that are well-known (to the point where non-technical users have heard of them) and are used on a daily basis by a significant amount of people. Currently only ChatGPT fits this bill despite the fact that the "AI revolution" is nearly two years old at this point.
By @tqi - 7 months
Personally, I think the more interesting question is: if not AI, then what? I doubt we will go back to Metaverse (lol) or Crypto. B2B SaaS seems safe but stagnant (and competing over a largely fixed pie of IT spend). It feels like it doesn't matter if the companies work -- until there is something shinier to invest in, AI will be the de facto choice for LPs chasing returns (and VCs chasing carry).
By @MangoCoffee - 7 months
>Therefore, if you are OpenAI, Anthropic, or another AI vendor, you have two choices. Your first is to spend enormous amounts of money to stay ahead of the market. This seems very risky though

Regarding point 6, Amazon invested heavily in building data centers across the US to enhance customer service and maintain a competitive edge. It was risky.

This strategic move resulted in a significant surplus of computing power, which Amazon successfully monetized. In fact, it became the company's largest profit generator.

After all, startups and businesses are all about taking risk, ain't it?

By @rurban - 7 months
Don't mix up LLMs with AI. Not every AI company works on top of LLMs; many are doing vision or robotics or even old-school AI.

Our system works, is AI, is profitable, doing vision. Vision scales. There's a little bit of LLM classification. And robotics also, but this part is not really AI, just a generic industry robot.

By @arnaudsm - 7 months
LLM intelligence has plateaued for the past 12 months. Open source models are catching up while being 20x smaller than the original gpt4 (gemma2/llama3.2/qwen2.5).

AI companies are promising AGI to investors to survive a few more years before they probably collapse and don't deliver on that promise.

LLMs are now a commodity. It's time for startups to build meaningful products with them!
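
For a sense of how accessible those smaller open models have become, here is a hedged sketch of running one locally with Hugging Face transformers (the model id is an example and is gated behind a license acceptance):

```python
# Sketch: running a small open-weight model locally.
# Assumes `transformers` and `torch` are installed; the model id is an
# example and may require accepting a license on the Hugging Face Hub.
from transformers import pipeline

pipe = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")
out = pipe("Why do commodity LLMs favor product builders?", max_new_tokens=100)
print(out[0]["generated_text"])
```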

By @mepian - 7 months
The title reminds me of this classic paper, "If It Works, It's Not AI: A Commercial Look at Artificial Intelligence Startups": https://dspace.mit.edu/handle/1721.1/80558
By @gdiamos - 7 months
The top of the Nasdaq is full of companies that build computers (from phones to data centers), not companies that only do algorithms.

A clever AI algorithm run on rented compute is not a moat.

By @SamBam - 7 months
> If the proprietary models stop moving forward, the open source ones will quickly close the gap.

This is the Red Queen hypothesis in evolution. You have to keep running faster just to stay in place.

On its face, this does seem like a sound argument that all the $$ following LLMs is irrational:

1. No matter how many billions you pour into your model, you're only ever, say, six months away from a competitor building a model that's just about as good. And so you already know you're going to need to spend an increased number of billions next year.

2. Like the gambler who tries to beat the house by doubling his bet each time, at some point there must be a number where that many billions is considered irrational by everybody.

3. Therefore it seems irrational to start putting in even the fewer billions of dollars now, knowing the above two points.

By @patrickhogan1 - 7 months
ChatGPT benefits from network effects, where user feedback on the quality of its answers helps improve the model over time. This reduces its reliance on external services like ScaleAI, lowering development costs.

Larger user base = increased feedback = improved quality of answers = moat
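
What that flywheel looks like on the data side, as a minimal sketch (the schema and storage are assumptions for illustration, not OpenAI's actual pipeline):

```python
# Sketch: capturing thumbs-up/down feedback as preference data that could
# later feed fine-tuning. Schema and storage are illustrative assumptions.
import json
import time

def log_feedback(prompt: str, answer: str, thumbs_up: bool,
                 path: str = "feedback.jsonl") -> None:
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "answer": answer,
        "rating": 1 if thumbs_up else -1,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_feedback("What is RLHF?", "Reinforcement learning from human feedback.", True)
```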

By @themanmaran - 7 months
Companies in the business of building models are forced to innovate on two things at once.

1. Training the next generation of models

2. Providing worldwide scalable infrastructure to serve those models (ideally at a profit)

It's hard enough to accomplish #1, without worrying about competing against the hyperscalers on #2. I think we'll see large licensing deals (similar to Anthropic + AWS, OpenAI + Azure) as one of the primary income sources for the model providers.

With the second (and higher-margin) being user-facing subscriptions. Right now 70% of OpenAI's revenue comes from ChatGPT + enterprise GPT. I imagine Anthropic is similar, given the amount of investment in their generative UI. At the end of the day, model providers might just be consumer companies.

By @streetcat1 - 7 months
The competition for big LLM AI companies is not other big LLM AI companies, but rather small LLM AI companies with good-enough models. This is a classic innovator's dilemma. For example, I can imagine a team of cardiologists creating a fine-tuned LLM.
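
As a sketch of how cheap that kind of specialization has become, this is roughly what a LoRA setup looks like with the Hugging Face peft library (model id and hyperparameters are illustrative, not a recipe):

```python
# Sketch: parameter-efficient fine-tuning (LoRA) of a small open model - the
# kind of thing a domain team could attempt without frontier-scale compute.
# Model id and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adapt only the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of base weights
# ...then train on the domain corpus (e.g. cardiology notes) as usual.
```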
By @phreeza - 7 months
One thing I have found myself wondering is why this didn't play out similarly with Google in the early days.

I guess their secret sauce was just so good and so secret that neither established players nor copycat startups were able to replicate it, the way it happened with ChatGPT? Why is the same not the case here, is it just because the whole LLM thing grew out of a relatively open research culture where the fundamentals are widely known? OTOH PageRank was also published before the founding of Google.

I'd be curious to hear if anyone has theories or insight here.

By @t43562 - 7 months
Obviously there's a huge technical change waiting in the wings because we don't need billions of dollars to make a human. Nor does a human need hundreds of kilowatts of electricity to think.
By @insane_dreamer - 7 months
Apple might emerge as one of the winners here, despite being one of the last to come out with an LLM, because it already has the infrastructure, or delivery system, for hundreds of millions of users to interact with an LLM. Google has a similar foothold worldwide, but its major weakness is that it's cannibalizing its Search cash cow (how do you make $ from ads with LLMs?), whereas in Apple's case an LLM enhances its product (finally a smart Siri). OpenAI doesn't have that except through Microsoft.
By @fnordpiglet - 7 months
Having been there for the dotcom boom and bust, from pre-IPO Netscape to the brutal collapse of the market, it's hard to say dotcoms don't work. There was clearly something there of immense value, but it took a lot of experimentation with business models and maturation of technology, as well as the fundamental communications infrastructure of the planet. All told, it feels like we've really only gained a smooth groove in the last 10 years.

I see no reason why AI will be particularly different. It seems difficult to make the case AI is useless, but it’s also not particularly mature with respect to fundamental models, tool chains, business models, even infrastructure.

In both cases speculative capital flowed into the entire industry, which brought us losers like pets.com but winners like Amazon.com, Netflix.com, Google.com, etc. Which of the AI companies today are the next generation of winners and losers? Who knows. And when the music stops will there be a massive reckoning? I hope not, but it’s always possible. It probably depends on how fast we converge to “what works,” how many grifters there are, how sophisticated equity investors are (and they are much more sophisticated now than they were in 1997), etc.

By @tinyhouse - 7 months
I like the article and agree with many of the arguments. A few comments though.

1. It's not that easy to switch between providers. There's no lock-in of course, but once you build a bunch of code that is provider-specific (structured outputs, prompt caching, JSON mode, function calls, prompts designed for a specific provider, specific tools used by the OpenAI Assistant, etc.) you need a good reason to switch (like a much better or cheaper model); see the sketch after this list.

2. All of these companies try to build some ecosystem around themselves, especially in the enterprise. The problem is that Google and Microsoft have a huge advantage here because they have all the integrations.

3. The consumer side. It's not just LLMs; it's image, video, voice, and more. You cannot ignore that ChatGPT could rival Google in usage within a few years. As long as they can deliver good models, users are not going to switch so quickly. It's a huge market, just like Google's. Pretty much everyone in the world is going to use ChatGPT or some alternative in the next few years. My 9-year-old and her friends already use it. No reason why they cannot monetize their huge user base like Google did.
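
On point 1, a hedged sketch of the provider-specific surface area that accumulates (JSON mode and a tool definition, shown with OpenAI-specific parameter names):

```python
# Sketch of provider-specific code per point 1: JSON mode plus a tool
# definition in OpenAI's request shape. Porting this to another vendor means
# rewriting the request structure and re-testing prompts, not just a URL swap.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # OpenAI-specific JSON mode
    tools=[{                                  # OpenAI-specific tool schema
        "type": "function",
        "function": {
            "name": "lookup_order",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    }],
    messages=[{"role": "user", "content": "Return the order status as JSON."}],
)
```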

By @Nevermark - 7 months
> it’s burning through $7 billion a year to fund research and new A.I. services and hire more employees

And at some point one of these companies will reach the point it does not need as many employees. And has a model capable of efficiently incorporating new learning without having to reset and relearn from scratch.

That is what AGI is.

Computing resources for inference and incremental learning will still be needed, but when the AGI itself is managing all/much of that, including continuing to find efficiencies, ... profitability might be unprecedented.

The speed of advance over the last two decades has been steady and exponential. There are not many (or any) credible signals that a technical wall is about to be encountered.

Which is why I believe that I, I by myself, might get there. Sort of, kind of, probably not, probably just kidding. Myself.

--

Another reason companies are spending billions is to defend their existing valuations. Google's value could go to zero if they don't keep up. Other companies likewise.

It is the new high stakes ante for large informational/social service relevance.

By @agentultra - 7 months
They eventually have to turn a profit or pass the hot potato. Maybe they’ll be the next generation of oligarchs supported by the state when they all go bankrupt but are too essential to fail.

My guess is that profitability is becoming increasingly difficult and nobody knows how yet… or whether it will be possible.

Seems like the concentration of capital is forcing the tech industry to take wilder and more ludicrous bets each year.

By @arbuge - 7 months
> What, then, is an LLM vendor’s moat? Brand? Inertia? A better set of applications built on top of their core models? An ever-growing bonfire of cash that keeps its models a nose ahead of a hundred competitors?

Missed one I think... the expertise accumulated in building the prior-generation models, which are not themselves that useful anymore.

Yes, it's true that expertise will be lost if everybody leaves, a point he briefly mentions in the article. But presumably AWS would also be in trouble, sooner or later, if everybody who knows how things work left. Retaining at least some good employees is table stakes for any successful company long-term.

Brand and inertia also don't quite capture the customer lock-in that happens with these models. It's not just that you have to rewrite the code to interface with a competitor's LLM; it's that that LLM might now behave very differently than the one you were using earlier, and give you unexpected (and undesirable) results.

By @ryukoposting - 7 months
I don't really buy the "cost of hardware" portion of the argument, even if everything else seems sound. In 1992, if you wanted 3D graphics, you'd call up SGI and drop $25,000 on an Indigo 2. Six years later, you could walk into Circuit City and buy a Voodoo2 for 300 bucks, slap it in the PC you already owned, and call it a day.

I know we aren't in the 90s. I know that the cost of successive process nodes has grown exponentially, even when normalizing for inflation. But, still. I'd be wary of betting the farm on AI being eternally confined to giant, expensive special-purpose hardware.

This stuff is going to get crammed into a little special purpose chip dangling off your phone's CPU. Either that, or GPU compute will become so commodified that it'll be a cheap throw-in for any given VPS.

By @Der_Einzige - 7 months
#2 is dead wrong, and shows that the author is not aware of the exciting research currently happening in the parameter-efficient fine-tuning and representation/activation engineering space.

The idea that you need huge amounts of compute to innovate in a world of model merging and activation engineering shows a failure of imagination, not a failure to have the necessary resources.

PyReft, Golden Gate Claude (steering/control vectors), orthogonalization/abliteration, and the hundreds of thousands of LoRA and other adapters available on websites like civit.ai are proof that the author doesn't know what they're talking about re: point #2.

And I'm not even talking about the massive software/hardware improvements we are seeing in training/inference performance. I don't even need that; I just need evidence that we can massively improve off-the-shelf models with almost no compute resources, which I have.
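
For readers new to activation engineering, a toy, hedged sketch of the steering-vector idea in PyTorch - shift a layer's activations along a fixed direction via a forward hook (a real control vector would be derived from contrasting activations on paired prompts; here it's random):

```python
# Toy sketch of activation steering: add a fixed "control vector" to a
# layer's output with a forward hook. The layer and vector are stand-ins
# for a real transformer block and a learned steering direction.
import torch
import torch.nn as nn

hidden = 64
layer = nn.Linear(hidden, hidden)            # stand-in for a transformer block
steering_vector = torch.randn(hidden) * 0.1  # placeholder direction

def steer(module, inputs, output):
    # returning a tensor from a forward hook replaces the layer's output
    return output + steering_vector

handle = layer.register_forward_hook(steer)
steered = layer(torch.randn(1, hidden))      # activations shifted along the vector
handle.remove()
```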

By @janoc - 7 months
The mistake in that article is the assumption that these companies collecting those gigantic VC funding rounds are looking to stay ahead of the pack and be there even 10 years down the road.

That's a fundamental misunderstanding of the (especially) US startup culture in the last maybe 10-20 years. Only very rarely is the goal of the founders and angel investors to build an actual sustainable business.

In most cases the goal is to build enough perceived value - through wild growth financed by VC money and by fueling hype - that a subsequent IPO will let the founders and initial investors recoup their investment + get some profit on top. Or, find someone to acquire the company before it reaches the end of its financial runway.

And then let the poor schmucks who bought the business hold the bag (and foot the bill). Nobody cares if the company becomes irrelevant or even goes under at that point anymore - everyone involved has recouped their expense already. If the company stays afloat - great, that's a bonus, but not required.

By @culebron21 - 7 months
I think the article contradicts itself, missing some simple math. Contradicting points:

1. It takes huge and increasing costs to build newer models, and the models have approached an asymptote.

2. A startup can take an open-source model and put you out of business in 18 months (with the Coca-Cola example).

The sheer cost of LLMs is what protects the incumbents from being attacked by startups. Microsoft's operating profit in 2022 was $72B, which is 10x bigger than the running cost of OpenAI. And even if 2022 was unusually successful, profits of $44B still dwarf OpenAI's burn.

If OpenAI manages to ramp up investment like Uber, it may stay alive, otherwise it's tech giants that can afford running some LLM. ...if people will be willing to pay for this level of quality (well, if you integrate it into MS Word, they actually may want it).

By @w10-1 - 7 months
Two models to address how/why/when AI companies make sense:

(1) High integration (read: switching) costs: any deployment of real value is carefully tested and tuned for the use-case (support for product x, etc.). The use cases typically don't evolve that much, so there's little benefit to re-incurring the cost for new models. Hence, customers stay on old technology. This is the rule rather than the exception e.g., in medical software.

(2) The Instagram model: it was valuable with a tiny number of people because they built technology to do one thing wanted by a slice of the market that was very interesting to the big players. The potential of the market set the time value of the delay in trying to replicate their technology, at some risk of being a laggard to a new/expanding segment. The technology gave them a momentary head start when it mattered most.

Both cases point to good product-market fit based on transaction cost economics, which leads me to the "YC hypothesis":

The AI infrastructure company that best identifies and helps the AI integration companies with good product-market fit will be the enduring leader.

If an AI company's developer support consists of API credits and online tutorials about REST APIs, it's a no-go. Instead, like YC and VCs, it should have a partner model: partners use considerable domain skills to build relationships with companies to help them succeed, and partners are selected and supported in accordance with the results of their portfolio.

The partner model is also great for attracting and keeping the best emerging talent. Instead of years of labor per startup or elbowing your way through bureaucracies, who wouldn't prefer to advise a cohort of the best prospects and share their successes? Unlike startups or FAANG, you're rewarded not for execution or loyalty, but for intelligence in matching market needs.

So the question is not whether the economics of broadcast large models work, but who will gain the enduring advantage in supporting AI eating the software that eats the world?

By @steveBK123 - 7 months
I do wonder if they become another category like voice assistants where people just expect to get them free as part of an existing ecosystem.

Or search/social media where people are happy to pay $0 to use it in exchange for ads.

Sure, some people are paying now, but it's nowhere near the cost of operating these models, let alone developing them.

Also the economics may not accrue to the parts of the stack people think. What if the model is commodity and the real benefits accrue to the GOOG/AAPL/MSFT of the world that integrate models, or to the orgs that gatekept their proprietary data properly and now can charge for querying it?

By @mistercow - 7 months
> If a competitor puts out a better model than yours, people can switch to theirs by updating a few lines of code.

This may become increasingly the case as models get smarter, but it’s often not the case right now. It’s more likely to be a few lines of code, a bunch of testing, and then a bunch of prompt tweaking iterations. Even within a vendor and model name, it’s a good idea to lock to a specific version so that you don’t wake up to a bunch of surprise breakages when the next version has different quirks.
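
A small illustration of that pinning advice (the snapshot name is an example of the dated versions vendors publish):

```python
# Sketch: pin a dated model snapshot instead of a floating alias, so prompt
# behavior doesn't change underneath you when the vendor updates the default.
from openai import OpenAI

client = OpenAI()

MODEL = "gpt-4-0613"  # pinned snapshot, not the floating "gpt-4" alias

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Classify this ticket: ..."}],
)
```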

By @scotty79 - 7 months
> you probably don’t want to stake your business on always being the first company to find the next breakthrough

That's like the better half of the entire Apple business model that brought them success: find the next hot thing (like a capacitive touchscreen or high-density displays), make exclusive deals with hardware providers so you can release a device using that new tech, and feed on it for some time till the upstream starts leaking tech left and right and your competition can finally catch up.

By @trash_cat - 7 months
Did Apple work when they launched the iPhone without the App Store? It's a very similar question. There is this obsession with talking about tangible business value (and that it's nowhere to be seen) despite OpenAI setting records for daily users. Right now it is a consumer product, and it will take time before we understand how to organize businesses around them. It took us about 20 years to get from the hype of the .com bubble to the tech giants.
By @29athrowaway - 7 months
Just like supermarkets know what products you are buying, LLM inference providers know what requests you are making. And just like supermarkets spot the most profitable products and then clone them, LLM inference providers could come up with their own versions of the most profitable products built on LLMs.

My prediction is that the thin layer built on top of LLMs will be eaten up starting from the most profitable products.

By using inference APIs you are doing their market research for free.

By @valine - 7 months
This period of model scaling at all costs is going to be a major black eye on the industry in a couple of years. We already know that language models are few-shot learners at inference time, and yet OpenAI seems to be happy throwing petaflops of compute at training models the slow way.

The question is how you can use in-context learning to optimize the model weights. It's a fun math problem, and it certainly won't take a billion-dollar supercomputer to solve it.

By @cutemonster - 7 months
> Your second choice is… I don’t know?

What about: lobby for AI regulations that prevent new competitors from arising, and hopefully kill off a few?

By @iamgopal - 7 months
Are we increasing or decreasing the "knowledge" entropy (in the closed system of Earth) by doing AI?
By @sillyLLM - 7 months
I think a selling point for LLMs would be to match you with people you'd find perfect for a given use case. For example, a team for a job, a wife, real friends, clubs for sharing hobbies, or the best people at whatever you want to accomplish. Unfortunately, we and LLMs alike don't know how to match people in that way.
By @fsckboy - 7 months
>The market needs to be irrational for you to stay solvent.

It's a telling quote he chose. If you think AI is overinvested, you should short AI companies, and that's where the quote comes from. Problem is, even if you're right, the market can stay irrational longer than you can afford to hold your short.

By @est - 7 months
AI as a workforce could be comparable to interns: they work a 24/7 shift but fail at the task from time to time.
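
In code, treating the model like a fallible intern usually means wrapping calls in validation and bounded retries; a minimal sketch (the callables are stand-ins):

```python
# Sketch: validate the model's output and retry a bounded number of times,
# the way you'd double-check an intern's work. Callables are stand-ins.
def ask_with_retries(ask, validate, attempts: int = 3) -> str:
    last = ""
    for _ in range(attempts):
        last = ask()
        if validate(last):
            return last
    raise ValueError(f"no valid answer after {attempts} attempts: {last!r}")

answer = ask_with_retries(
    ask=lambda: "42",                     # replace with a real model call
    validate=lambda s: s.strip().isdigit(),
)
```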
By @ocean_moist - 7 months
The title should be “Do LLM building companies work?”. The article fails to address companies that will be using LLMs or companies innovating on other models/architectures.

I don’t think most people looking to build an AI company want to build an LLM and call it a company.

By @doganugurlu - 7 months
A very long-winded way of saying "LLMs are a commoditized offering despite being so new."
By @cratermoon - 7 months
I would say, without qualification, "no". https://www.techpolicy.press/challenging-the-myths-of-genera...
By @matchagaucho - 7 months
>2) pushing the frontier further out will likely get more difficult.

The upside risk is premised on this point. It'll get so cost-prohibitive to build frontier models that only 2-3 players will be left standing (to monetize).

By @paulvnickerson - 7 months
And here I am spending all my time with boring xgboost classifiers...
By @shahzaibmushtaq - 7 months
Yes, AI companies can only work if they all somehow agree to slow things down a little bit instead of competing to release a better model like every month.
By @charlieyu1 - 7 months
TSMC is by far the biggest chip company in the world, and they are still investing lots of money in research.
By @23B1 - 7 months
It is ironic that this article focuses on the business logic, considering that this is the same myopia you find at these AI companies.

Not that physical/financial constraints are unimportant, but they often can be mitigated in other ways.

Some background: I was previously at one of these companies that got hoovered up in the past couple of years by the bigs. My job was sort of squishy, but it could be summarized as 'brand manager' insofar as it was my job to aid in shaping the actual tone, behaviors, and personality of our particular product.

I tell you this because in full disclosure, I see the world through the product/marketing lens as opposed to the engineer lens.

They did not get it.

And by they I mean founders whose names you've heard of, people with absolute LOADS of experience in building and shipping technology products. There were no technical or budgetary constraints at this early stage; we were moving fast and trying shit. But they simply could not understand why we needed to differentiate and how that'd make us more competitive.

I imagine many technology companies go through this, and I don't blame technical founders who are paranoid about this stuff; it sounds like 'management bullshit' and a lot of it is, but at some point all organizations who break even or take on investors are going to be answerable to the market, and that means leaving no stone unturned in acquiring users and new revenue streams.

All of that to say, I do think a lot of these AI companies have yet to realize that there's a lot to be done user-experience-wise. The interface alone - a text prompt (!?) - is crazy out-of-touch to me. And average users have no idea how to set up a good prompt, yet everyone is making it hard for them to learn.

All of these decisions are pretty clearly made by someone who is technology-oriented, not user-oriented. There's no work I'm aware of being done on tone, or personality frameworks, or linguistics, or characterization.

Is the LLM high on numeracy? Is it doing code switching/matching, and should it? How is it qualifying its answers by way of accuracy in a way that aids the user learning how to prompt for improved accuracy? What about humor or style?

It just completely flew over everyone's heads. This may have been my fault. But I do think that the constraints you see to growth and durability of these companies will come down to how they're able to build a moat using strategies that don't require $$$$ and that cannot be easily replicated by competition.

Nobody is sitting in the seats at Macworld stomping their feet for Sam Altman. A big part of that is giving customers more than specs or fiddly features.

These companies need to start building a brand fast.

By @willmadden - 7 months
Let's hope the moat isn't regulatory...
By @goldfeld - 7 months
No, clearly, AIs do all the work for them.
By @coding123 - 7 months
I don't get the author's point. At one point he's saying that because the race to get bigger and better will never end, we'll need ever-larger compute, more and more and more, so in the end the companies will fail.

I don't see it this way. In plumbing I could have chosen to use 4" pipe throughout my house. I chose 3". Heck, I could have purchased commercial pipe that's 12", or even 36". It would have changed a lot of the design of my foundation.

Just because there is something much bigger and can handle a lot more poop, doesn't mean it's going to be useful for everyone.

By @plaidfuji - 7 months
I get the sense that the value prop of LLMs should first be cut into two categories: coding assistant, and everything else.

LLMs as coding assistants seem to be great. Let's say that every working programmer will need an account and will pay $10/month (or their employer will). What's a fair comp for valuation? GitHub? That's about $10Bn. Atlassian? $50Bn.
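
Back-of-envelope, with loudly assumed inputs (the developer count and revenue multiple are guesses, not sourced figures):

```python
# Back-of-envelope valuation for the coding-assistant category.
# Every input here is an assumption for illustration.
developers = 25_000_000   # rough guess at working programmers worldwide
price_per_month = 10      # the $10/month from above
annual_revenue = developers * price_per_month * 12          # $3B ARR
revenue_multiple = 10     # generous SaaS-style multiple
valuation = annual_revenue * revenue_multiple               # ~$30B
print(f"ARR ${annual_revenue/1e9:.0f}B, implied value ${valuation/1e9:.0f}B")
```

Which lands in the same ballpark as the GitHub/Atlassian comps above.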

The “everything else” bin is hard to pin down. There are some clear automation opportunities in legal, HR/hiring, customer service, and a few other fields - things that feel like $1-$10Bn opportunities.

Sure, the costs are atrocious, but what’s the revenue story?

By @highfrequency - 7 months
Enjoyed the article and thought many of the points were good.

Here's a counterargument.

> In other words, the billions that AWS spent on building data centers is a lasting defense. The billions that OpenAI spent on building prior versions of GPT is not, because better versions of it are already available for free on Github.

The money that OpenAI spends on renting GPUs to build the next model is not what builds the moat. The moat comes from the money/energy/expertise that OpenAI spends on the research and software development. Their main asset is not the current best model GPT-4; it is the evolving codebase that will be able to churn out GPT-5 and GPT-6. This is easy to miss because the platform can only churn out each model when combined with billions of dollars of GPU spend, but focusing on the GPU spend misses the point.

We're no longer talking about a thousand line PyTorch file with a global variable NUM_GPUs that makes everything better. OpenAI and competitors are constantly discovering and integrating improvements across the stack.

The right comparison is not OpenAI vs. AWS, it's OpenAI vs. Google. Google's search moat is not its compute cluster where it stores its index of the web. Its moat is the software system that incorporates tens of thousands of small improvements over the last 20 years. And similar to search, if an LLM is 15% better than the competitors, it has a good shot at capturing 80%+ of the market. (I don't have any interest in messing around with a less capable model if a clearly better one exists.)

Google was in some sense "lucky" that when they were beginning to pioneer search algorithms, the hardware (compute cluster) itself was not a solved problem the way it is today with AWS. So they had a multidimensional moat from the get-go, which probably slowed early competition until they had built up years' worth of process complexity to deter new entrants.

Whereas LLM competition is currently extremely fierce for a few reasons: NLP was a ripe academic field with a history of publishing and open source, VC funding environment is very favorable, and cloud compute is a mature product offering. Which explains why there is currently a proliferation of relatively similar LLM systems:

> Every LLM vendor is eighteen months from dead.

But the ramp-up time for competitors is only short right now because the whole business model (pretrain massive transformers -> RLHF -> chatbot interface) was only discovered 18 months ago (ChatGPT launched at the end of 2022) - and at that point all of the research ideas were published. By definition, the length of a process complexity moat can't exceed how long the incumbent has been in business! In five years, it won't be possible to raise a billion dollars and create a state of the art LLM system, because OpenAI and Anthropic will have been iterating on their systems continuously. Defections of senior researchers will hurt, and can speed up competitor ramp-time slightly, but over time a higher proportion of accumulated insights is stored in the software system rather than the minds of individual researchers.

Let me emphasize: the billions of dollars of GPU spend is a distraction; we focus on it because it is tangible and quantifiable, and it can feel good to be dismissive and say "they're only winning because they have tons of money to simply scale up models." That is a very partial view. There is a tremendous amount of incremental research going on - no longer published in academic journals - that has the potential to form a process complexity moat in a large and relatively winner-take-all market.

By @winddude - 7 months
Probably why OpenAI wants to build their own 5 GW nuclear plant.
By @jillesvangurp - 7 months
The question is too broad. What's an AI company? It could be anything. The particular sub class here that is implied is companies that are spending many billions to develop LLMs.

The business model for those is to produce amazing LLM models that are hard to re-create unless you have similar resources and then make money providing access, licensing, etc.

What are those resources? Compute, data, and time. And money. You can compensate for a lack of time by throwing compute at the problem, or by using less/more data. Which is a different way of saying: spend more money. So it's no surprise that this space is dominated by trillion-dollar companies with near-infinite budgets and a small set of Silicon Valley VC-backed companies that are getting multi-billion-dollar investments.

So the real question is whether these companies have enough of a moat to defend their multi billion dollar investments. The answer seems to be no. For three reasons: hardware keeps getting cheaper, software keeps getting better, and using the models is a lot cheaper than creating them.

Creating GPT-3 was astronomically expensive a few years ago and now it is a lot cheaper by a few orders of magnitude. GPT-3 is of course obsolete now. But I'm running Llama 3.2 on my laptop and it's not that bad in comparison. That only took 2 years.

Large scale language model creation is becoming a race to the bottom. The software is mostly open source and shared by the community. There is a lot of experimentation happening but mostly the successful algorithms, strategies, and designs are quickly copied by others. To the point where most of these companies don't even try to keep this a secret anymore.

So that means new, really expensive LLMs have a short shelf life where competitors struggle to replicate the success and then the hardware gets cheaper and others run better algorithms against whatever data they have. Combine that with freely distributed models and the ability to run them on cheap infrastructure and you end up with a moat that isn't that hard to cross.

IMHO all of the value is in what people do with these models. Not necessarily in the models. They are enablers. Very expensive ones. Perhaps a good analogy is the value of Intel vs. that of Microsoft. Microsoft made software that ran on Intel chips. Intel just made the chips. And then other chip manufacturers came along. Chips are a commodity now. Intel is worth a lot less than MS. And MS is but a tiny portion of the software economy. All the value is in software. And a lot of that software is OSS. Even MS uses Linux now.

By @AI_beffr - 7 months
The hype will never die. All the smartest people in industry and government believe that there is a very high probability that this technology is near the edge of starting the AGI landslide. You don't need AGI to start the AGI landslide; you just need AI tools that are smart enough to automate the process of discovering and building the first AGI models. Every conceivable heuristic indicates that we are near the edge. And because of this, AI has now become a matter of national security. The research and investment won't stop, because it can't, because it is now an arms race. This won't just fizzle out. It will be probed and investigated to absolute exhaustion before anyone feels safe enough to stop participating in the race. If you have been keeping up, you will know that high-level federal bureaucrats are now directly involved in OpenAI.