August 7th, 2024

Launch HN: Release (YC W20) – Orchestrate AI Infrastructure and Applications

Release.ai, founded by Erik, Tommy, and David, offers a platform for orchestrating AI applications, providing free GPU cycles, prioritizing data security, and featuring a workflow engine with deployment templates.

Release.ai, founded by Erik, Tommy, and David after their time at TrueCar, first launched on Hacker News in 2020 with a product focused on simplifying staging environments. The team has since pivoted to address the growing need for orchestrating AI applications and infrastructure. The platform lets users manage AI workloads efficiently and offers a sandbox account with limited free GPU cycles for experimentation: the free plan includes 5 free compute hours on an Amazon EC2 g5.2xlarge instance and 100 free managed-environment hours per month. Release.ai emphasizes security and privacy, letting users keep their data and models within their own cloud accounts. The platform integrates AI applications into existing software development workflows, features a workflow engine that automates complex tasks, and simplifies the management of GPU resources across multiple clouds using Kubernetes. To speed development of AI applications, it provides a library of over 20 one-click deployment templates for popular open-source frameworks. The founders are eager for user feedback and encourage users to explore the platform.
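For readers unfamiliar with how GPU scheduling on Kubernetes works in general, a minimal pod spec that requests a GPU looks like the sketch below. This is standard Kubernetes, not Release.ai's API; the pod and image names are hypothetical, and the `nvidia.com/gpu` resource assumes the NVIDIA device plugin is installed on the cluster's GPU nodes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference          # hypothetical workload name
spec:
  restartPolicy: Never
  containers:
    - name: server
      image: example.com/llm-server:latest   # hypothetical image
      resources:
        limits:
          nvidia.com/gpu: 1    # schedule onto a node with one free NVIDIA GPU
```

An orchestration layer that spans multiple clouds ultimately comes down to generating and placing manifests of this shape on whichever cluster has capacity; presumably Release.ai's templates package this plumbing behind the one-click deployments described above.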

- Release.ai focuses on orchestrating AI applications and infrastructure.

- Users can experiment with a sandbox account offering free GPU cycles.

- The platform prioritizes data security by keeping data within users' cloud accounts.

- It features a workflow engine for automating AI application tasks.

- Over 20 templates are available for easy deployment of AI frameworks.

18 comments
By @JoeCortopassi - 2 months
I've noticed that while a bunch of developers have played with LLMs for toy projects, few seem to have any actual experience taking them to prod in front of real users. I've personally had to do so for a few startups, and it's like trying to nail Jell-O to a tree. Every random thing you change, from prompts to models, yields massively different/unpredictable results.

I think because of this, a bunch of companies/tools have tried to hop into this space and promised the world, but often people are best served by just hitting OpenAI/GPT directly and jiggling the results until they get what they want. If you're not comfortable doing that, there are even companies that do that for you, so you can just focus on the prompt itself.

So that being said, help me understand why I should be adding this whole system/process to my workflow, versus just hitting OpenAI/Anthropic/Google directly?

By @BurritoKing - 2 months
This looks awesome. Getting started with AI development is daunting, and I really like how this focuses on integrating with a bunch of open source frameworks and then deploying them into your own cloud (I always prefer to run the infrastructure; it feels weird to rely on something that's a complete black box).

The sandbox environment with free GPU hours is a cool way to try things out without a big commitment too. It's nice seeing a product that genuinely seems to address the practical challenges of AI deployment. Looking forward to seeing how the platform develops!

By @bradhe - 2 months
Super interesting that you guys have been working on this since 2020, if I'm reading the post title correctly. Would love to know the iterations you've gone through.

By @todd3834 - 2 months
This is very cool! I love seeing tooling targeting inference. I feel like Stable Diffusion and Llama have to be the primary use cases for these types of services. DALL-E is super lacking, and GPT does actually start to get pretty expensive once you are using it in production.

By @michaelmior - 2 months
This looks cool, but I'm a little confused about the pricing model. It sounds like I'm paying you for every hour my jobs are running on my own infrastructure, if I'm reading it right. That seems like a really odd way to price things if true.

By @the_pascal - 2 months
How does this compare to managed offerings like Google Gemini and AWS Bedrock? Thanks in advance, and congratulations on the new product!!

By @mcsplat2 - 2 months
How do you hook up data to an environment? And what data sources do you support (Snowflake, etc.)?

By @mchiang - 2 months
This is cool. I'd like to give it a try. Press a button and get GPU access to build apps on.

By @tommy_mcclung - 2 months
We might not have made it clear in the post how to sign up for the sandbox. Just head to http://release.ai and click on "Start Free Trial".

By @jakubmazanec - 2 months
Why do you have such a generic name? It will make searching so much harder.
By @bluelightning2k - 2 months
Seems pretty vague. Something to do with half self-hosting open source LLMs with a proprietary docker-like template thing.

Am I on the right track?

By @nextworddev - 2 months
Pretty insane that these folks are YC 2020 and just now pivoted again. Shows how hard company building is.

By @sidcool - 2 months
Congrats on launching. Interesting pivot.

By @jodotgiff - 2 months
Great system, no complaints!!

By @drawnwren - 2 months
I'm pretty much exactly your target market. I run a Kubernetes, Docker, Poetry DevOps hell at an ML startup and was curious how your product helped. You got about 2 minutes of my time. I scanned your website, and I have no idea what you do or whether you fix my problem.

Not trying to be negative, but I think there may be a 30 second pitch to be made to people like me that isn't made on your site.

By @billclerico - 2 months
congrats!!

By @bijutoha - 2 months
What has been the feedback from early users regarding the ease of transitioning from your original product focus to the current AI orchestration platform?