September 25th, 2024

Hetzner introduces GPU server for AI training

Hetzner Online offers dedicated server hosting from €35.70, featuring high-performance options like the GEX130 GPU server. Services include root access, DDoS protection, domain registration, and flexible payment terms.

Hetzner Online offers a range of dedicated server hosting options, with prices starting from €35.70 for server auctions and €46.41 for dedicated servers. The product lineup includes the EX-Line, AX-Line, RX-Line, SX-Line, and GPU-Line servers, covering different performance needs and budgets. The GEX130 dedicated GPU server is highlighted for its high performance: it features an NVIDIA RTX™ 6000 Ada Generation graphics card, making it suitable for AI workloads and other computationally demanding tasks. The servers come with full root access, dedicated IP addresses, and DDoS protection. Additional services include domain registration, SSL certificates, and managed server options. Customers can choose from multiple operating systems and benefit from 24/7 support. There is no minimum contract term, and several payment methods are accepted.

- Hetzner Online provides dedicated server hosting starting from €35.70.

- The GEX130 GPU server is designed for high-performance tasks, ideal for AI and data processing.

- Customers have access to various features, including root access, DDoS protection, and multiple OS options.

- The company offers additional services like domain registration and SSL certificates.

- There is no minimum contract term, and multiple payment methods are accepted.

13 comments
By @andersa - 4 months
Hmm. Seems like a bad deal.

This is a monthly reservation for a single 6000 Ada for $940. You can get the same on RunPod for $670.

And to actually train stuff you'd likely want nodes with more of them, like 8, or just different GPUs altogether (like A100/H100/etc).

By @loughnane - 4 months
What’s the most cost-effective option for hosting an LLM these days? I don’t need to train, I just want to use one of the Llama models for inference to reduce my reliance on 3rd parties.
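One low-cost route, if you rent (or own) a single GPU, is to load an open-weights Llama checkpoint with the Hugging Face transformers pipeline and run inference yourself. A minimal sketch follows; the model name and prompt are illustrative assumptions, and a gated checkpoint requires accepting its license on the Hub first.

```python
# Minimal self-hosted inference sketch.
# Assumptions: torch, transformers, and accelerate are installed, one GPU with
# enough VRAM is available, and the checkpoint below (an illustrative choice)
# is accessible.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model choice
    torch_dtype=torch.bfloat16,                # half precision to fit on one card
    device_map="auto",                         # place weights on the available GPU(s)
)

out = pipe(
    "Explain in one paragraph why self-hosting inference can reduce costs.",
    max_new_tokens=200,
    do_sample=False,
)
print(out[0]["generated_text"])
```

Whether this beats a pay-per-token API depends entirely on utilization: a rented GPU only pays off if it is kept busy.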
By @yk - 4 months
So 1kEUR/month for a 6kEUR GPU. Pretty sure there are a lot of drug dealers who wish they had gone into cloud training instead.
By @Blaec - 4 months
CoCalc offers on-demand GPU servers with H100s starting at $2.01 per hour (metered per second) through its integration with Hyperstack... It also has more budget-friendly options, like RTX A4000s at $0.18 per hour.

https://cocalc.com/features/compute-server

In case you are not familiar, CoCalc is a real-time collaborative environment for education and research that you can access via your web browser at https://cocalc.com/

By @krick - 4 months
What's currently the cheapest/easiest way to deal with relatively lightweight GPU tasks, that are not lightweight enough for my PC?

Consider this use case: I want to upload 50 GB of audio somewhere and run Whisper (the biggest model) on it. I imagine the processing should be doable in minutes on a powerful GPU and should be very cheap; the script will be about 20 LOC, but I'll spend some time setting things up, uploading the data, and so on (which, for example, makes Colab a no-go for this). Any recommendations?

Also, when they say it's "per hour", do they mean an hour of GPU time, or an hour of me renting the equipment, so to speak?
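For reference, a rough sketch of the ~20-line batch job described in the comment above, assuming the openai-whisper package and ffmpeg are installed on the rented machine and the audio has already been uploaded to a local directory (paths and file pattern are illustrative):

```python
# Rough sketch of a batch transcription run on a rented GPU box.
# Assumptions: openai-whisper and ffmpeg are installed, the audio sits in
# ./audio/, and the GPU has enough VRAM for the largest checkpoint.
from pathlib import Path
import whisper

model = whisper.load_model("large")  # biggest multilingual Whisper model

for audio_file in sorted(Path("audio").glob("*.mp3")):
    result = model.transcribe(str(audio_file))
    # Write the transcript next to the audio file.
    audio_file.with_suffix(".txt").write_text(result["text"], encoding="utf-8")
    print(f"{audio_file.name}: {len(result['text'])} characters transcribed")
```

On hourly billing, providers generally charge for the time the instance is reserved, not for GPU utilization, so upload and setup time count against the clock.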

By @dist-epoch - 4 months
The pricing is surprising: Hetzner typically has extremely low prices, yet here they are 50%-70% more expensive than the competition, and you also pay a one-time setup cost.
By @lvl155 - 4 months
I always look at these prices and think it’s a complete rip-off for anyone running fewer than 4 GPUs.
By @gosub100 - 4 months
Do any of these offer training data as a service? It seems like they could charge a premium for a continuous multicast of a large dataset over, say, a 10G or faster connection: a one-to-many relay, charging customers to sit under the firehose.
By @mromanuk - 4 months
I use RunPod or Vast for training my (small) models (a few million parameters), mostly on RTX 4090s with up to 4 GPUs. Training is a sporadic task, so it's not worth it for me to book monthly (at these prices).
By @rkwasny - 4 months
According to the benchmarks: https://github.com/mag-/gpu_benchmark

The RTX 6000 Ada performs roughly on par with an A100.

By @justmarc - 4 months
Although R2 and B2 are excellent alternatives to S3, go get them, Hetzner!

Hetzner is a great, reliable company with fantastic offerings and excellent support.