YC closes deal with Google for dedicated compute cluster for AI startups
Google Cloud has launched a dedicated Nvidia GPU and TPU cluster for Y Combinator startups, offering $350,000 in cloud credits and support to enhance AI development and innovation.
Google Cloud has launched a dedicated cluster of Nvidia GPUs and Google tensor processing units (TPUs) for Y Combinator (YC) startups, aimed at supporting early-stage AI development. This initiative is part of Google Cloud's strategy to engage with promising AI startups by providing the resources they need to build and train AI models. Each participating startup from YC's Summer 2024 cohort will receive $350,000 in cloud credits over two years, along with additional support including $12,000 in Enhanced Support credits and a free year of Google Workspace Business Plus.

The goal is to foster long-term partnerships: historically, about 5% of YC startups have achieved unicorn status. YC partner Diana Hu noted that many early-stage AI startups are compute-constrained, which makes this partnership particularly valuable. The dedicated cluster will let these startups manage their high-performance computing needs efficiently, especially for training AI models, which typically demand large amounts of compute in bursts rather than continuous operation. The collaboration is expected to attract more AI startups to Y Combinator and improve their ability to innovate and grow. Other venture capital firms, such as Andreessen Horowitz, are also investing in GPU resources for AI startups, pointing to a broader industry trend of providing essential computing power to emerging companies.
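To make the "bursts rather than continuous operation" point concrete, here is a minimal, illustrative sketch of the kind of workload a startup might run on such a cluster: a single JAX training step that uses whatever accelerators (Nvidia GPUs or TPUs) the VM exposes. The model shape, batch size, and data are made-up placeholders, not anything prescribed by the YC/Google program.

```python
# Minimal sketch of a burst-style training step on whatever accelerators
# (Nvidia GPUs or TPUs) a Google Cloud VM exposes. Model and batch sizes
# are illustrative placeholders only.
import jax
import jax.numpy as jnp

devices = jax.devices()  # e.g. CUDA devices or TPU cores, depending on the VM
print(f"Found {len(devices)} accelerator(s): {devices}")

def init_params(key, in_dim=512, hidden=1024, out_dim=10):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (in_dim, hidden)) * 0.02,
        "w2": jax.random.normal(k2, (hidden, out_dim)) * 0.02,
    }

def loss_fn(params, x, y):
    h = jax.nn.relu(x @ params["w1"])
    preds = h @ params["w2"]
    return jnp.mean((preds - y) ** 2)

@jax.jit
def train_step(params, x, y, lr=1e-3):
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (256, 512))  # stand-in for a burst of training data
y = jax.random.normal(key, (256, 10))
params = train_step(params, x, y)
print("one step done, loss:", float(loss_fn(params, x, y)))
```

In practice a team would spin up such instances only for the duration of a training run and tear them down afterward, which is exactly the bursty usage pattern the article describes.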
Related
Show HN: Find AI – Perplexity Meets LinkedIn
The website offers an AI-powered search engine for tech companies and individuals. Users can find specific matches like startup founders with Ph.D.s in AI or female AI startup founders in NYC. It provides insights into YC startups by Stanford alumni and chief of staff candidates in edtech. Users can explore boutique recruiting firms and membership clubs. The platform aims to assist in finding customers, hires, and investments.
xAI's Memphis Supercluster has gone live, with up to 100,000 Nvidia H100 GPUs
Elon Musk has launched xAI's Memphis Supercluster, built around 100,000 Nvidia H100 GPUs for AI training, with advances promised by December. Its actual online status is unclear; SemiAnalysis estimates about 32,000 GPUs are operational. Plans for a 150MW data center expansion are pending utility agreements. xAI is partnering with Dell and Supermicro and targeting full operation by fall 2025. Musk's tongue-in-cheek launch time was also noted.
Show HN: We made glhf.chat – run almost any open-source LLM, including 405B
The platform allows running various large language models via Hugging Face repo links, using vLLM and a GPU scheduler (a rough vLLM sketch follows this list). It offers free beta access, with plans for competitive pricing post-beta based on multi-tenant model running.
Four co's are hoarding billions worth of Nvidia GPU chips. Meta has 350K of them
Meta has launched Llama 3.1, a large language model that outperforms GPT-4o on some benchmarks. The model's development involved significant investment in Nvidia GPUs, reflecting high demand for AI training resources.
Tensorfuse (YC W24) Is Hiring
Tensorfuse, a Y Combinator-backed startup in Bengaluru, seeks a Systems Engineer to develop a serverless GPU runtime. The role offers ₹2M - ₹3M salary, requiring skills in Rust or Go and Kubernetes.
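For context on the glhf.chat entry above: vLLM's offline Python API can load most Hugging Face model repos directly from their repo id. The snippet below is a rough, illustrative sketch, not the site's actual serving code; the model id, prompt, and parallelism setting are placeholders, and a 405B-class model would need many GPUs with tensor parallelism.

```python
# Rough sketch of serving a Hugging Face-hosted model with vLLM's offline API.
# The repo id, prompt, and tensor_parallel_size are illustrative placeholders;
# a 405B-class model must be sharded across many GPUs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # any Hugging Face repo id
    tensor_parallel_size=1,                         # raise for multi-GPU sharding
)
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain GPU scheduling in one short paragraph."], params)
print(outputs[0].outputs[0].text)
```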
- Many commenters emphasize the importance of cloud credits for early-stage startups, noting how they facilitate rapid development.
- There is some confusion regarding whether the credits are exclusive to Y Combinator startups or available to all AI startups.
- Several users share personal experiences with cloud credits from other providers, highlighting their value in startup growth.
- Some comments question Google's need to offer these credits, suggesting it may indicate a lack of external demand for their GPU and TPU resources.
- One user discusses their own startup's approach to providing compute resources, indicating a competitive landscape in cloud services.
next idea I'd love to see: professors getting grants/cloud credits to teach classes on the gcloud
My startup (Hot Aisle) is all about building, managing and deploying dedicated compute clusters for businesses. At the enterprise level of compute, there is a lot that goes into making this happen, so we are effectively the capex / opex for businesses that don't want to do this themselves, but want to have a lot more control over the compute they are running on.
The twist is that while we can deploy any compute that our customers want, we are starting with AMD instead of Nvidia. The goal is to work towards offering alternatives to a single provider of compute for all of AI.
You can't do this for others unless you also do it for yourself. As such, we're building our own first cluster of 16x Dell chassis with 128 MI300X GPUs deployed into a Tier 5 data center as our initial rollout. Full technical details are on our website. It has been a long road to get here, and we hope to be online and available for rental at the end of this month.
One of my goals has also been to get Dell / AMD / Advizex (our var) to offer compute credits on our cluster. Those credits would then get turned around into future purchases to grow into more clusters. It becomes a developer flywheel... the more developers on the hardware, the more hardware needed, the more we buy. This is something unfamiliar to their existing models, so wish me luck in convincing them. Hopefully this announcement helps my story. =)
Edit: Getting downvoted. Would love to hear some dialog for why. I don't really consider this an advertisement, so apologies if you're clicking that button for that reason. I'm really just excited about learning about validation of my business model and explaining why.
P.S. Yes I know that the server requirements are very low (explained by dang and others) and I also know there are many plugins and hacks to get dark mode ;-)
In contrast with Microsoft, which is GPU-limited (they don't have enough to sell).