OpenAI is set to lose $5B this year
OpenAI's projected costs for 2024 are $7 billion, with a potential $5 billion loss. Revenue from ChatGPT is about $2 billion annually, indicating a significant financial shortfall.
OpenAI's training and inference costs are projected to reach $7 billion in 2024, with the company potentially facing a $5 billion loss. As of March, OpenAI was expected to spend nearly $4 billion on Microsoft Azure servers running inference workloads for ChatGPT. The company operates approximately 350,000 servers equipped with Nvidia A100 chips, around 290,000 of them dedicated to ChatGPT and running at near full capacity. Training ChatGPT and new models could add another $3 billion this year. OpenAI benefits from discounted rates from Microsoft, paying about $1.30 per A100 server per hour.

The workforce, now around 1,500 employees, is estimated to cost $1.5 billion, significantly higher than the initial projection of $500 million for 2023.

On the revenue side, OpenAI generates about $2 billion annually from ChatGPT and anticipates nearly $1 billion from API access to its large language models (LLMs). Recent figures indicate monthly revenue of $283 million, suggesting potential full-year sales between $3.5 billion and $4.5 billion. This outlook points to a substantial shortfall, and the company will likely need additional funding within the next year to cover operational costs and losses.
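The reported Azure figure can be sanity-checked with back-of-the-envelope arithmetic, assuming the quoted $1.30 per server-hour rate applies to all ~350,000 A100 servers running around the clock (the article only states the fleet runs "near full capacity", so continuous operation is an assumption here):

```python
# Back-of-the-envelope check of OpenAI's reported Azure inference spend.
# Figures from the article: ~350,000 A100 servers at a discounted
# Microsoft rate of about $1.30 per server-hour.
SERVERS = 350_000
RATE_PER_SERVER_HOUR = 1.30   # USD, discounted Azure rate
HOURS_PER_YEAR = 24 * 365

annual_cost = SERVERS * RATE_PER_SERVER_HOUR * HOURS_PER_YEAR
print(f"Estimated annual inference cost: ${annual_cost / 1e9:.2f}B")
# ≈ $3.99B, consistent with the ~$4B Azure spend reported as of March
```

The close match suggests the ~$4 billion figure is essentially this rate-times-fleet calculation, before any training costs are counted.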
Related
AI industry needs to earn $600B per year to pay for hardware spend
The AI industry faces challenges in meeting a $600 billion annual revenue target needed to offset massive hardware investments. Concerns over the profitability gap are pushing companies to innovate and create value for sustainability.
AI models that cost $1B to train are underway, $100B models coming
AI training costs are rising exponentially, with models now reaching $1 billion to train. Companies are developing more powerful hardware to meet demand, but concerns about societal impact persist.
OpenAI slashes the cost of using its AI with a "mini" model
OpenAI launches GPT-4o mini, a cheaper model enhancing AI accessibility. Meta to release Llama 3. Market sees a mix of small and large models for cost-effective AI solutions.
AI paid for by Ads – the GPT-4o mini inflection point
OpenAI released the gpt-4o mini model at $0.15 per 1M input tokens and $0.60 per 1M output tokens, enabling cost-effective AI content creation. Despite low costs, profitability per page view remains minimal. Future AI-generated blogs prompt discussions on the internet's evolution.
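The published token prices imply a vanishingly small generation cost per article, which is why the economics hinge on ad revenue rather than model cost. A rough sketch, where the prompt and article lengths are illustrative assumptions rather than figures from the article:

```python
# Rough per-article generation cost with gpt-4o mini, using the
# published token prices. Token counts are illustrative assumptions.
INPUT_PRICE = 0.15 / 1_000_000    # USD per input token
OUTPUT_PRICE = 0.60 / 1_000_000   # USD per output token

prompt_tokens = 200     # assumed short prompt
output_tokens = 1_500   # assumed ~1,000-word generated article

cost = prompt_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE
print(f"Cost per generated article: ${cost:.6f}")
# ≈ $0.00093 — under a tenth of a cent per article
```

At well under a cent per article, even a single ad impression per page view would comfortably exceed the generation cost, which is the "inflection point" the headline refers to.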
xAI's Memphis Supercluster has gone live, with up to 100,000 Nvidia H100 GPUs
Elon Musk launches xAI's Memphis Supercluster with 100,000 Nvidia H100 GPUs for AI training, aiming for advancements by December. Its actual online status is unclear; SemiAnalysis estimates roughly 32,000 GPUs are operational. Plans for a 150 MW data-center expansion are pending utility agreements. xAI is partnering with Dell and Supermicro, targeting full operation by fall 2025. Musk's tongue-in-cheek launch time was also noted.
https://fortune.com/2024/06/13/apple-not-paying-openai-chatg...
This is exactly what the news industry is super concerned that AI will do, except done by humans and the news industry themselves.