
Genesis Cloud
Best for GDPR-Compliant AI, European Data Sovereignty, Image Generation, Fine-Tuning
Genesis Cloud is a Munich-based GPU cloud provider founded in 2018, specializing in making GPU compute accessible, sustainable, and GDPR-compliant within Europe. All Genesis Cloud data centers are powered by 100% renewable energy from Iceland and Norway.
All infrastructure is located within the European Economic Area (EEA). Standard Contractual Clauses and Data Processing Agreements are available, data is never transferred to US servers, and security management is ISO 27001 certified.
| Spec | Details |
| --- | --- |
| GPU Models | RTX 4090 24GB, RTX 3090 24GB, A100 PCIe 80GB, A100 SXM4 80GB, L40 48GB |
| GPU Types | A100, RTX 3090, RTX 4090 |
| Headquarters | Munich, Germany |
| Founded | 2018 |
| Availability | Available Now |
| Website | genesiscloud.com ↗ |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
Genesis Cloud GPU cloud pricing starts from $0.30/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
Genesis Cloud offers RTX 4090 24GB, RTX 3090 24GB, A100 PCIe 80GB, A100 SXM4 80GB, L40 48GB GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
Genesis Cloud operates data centers in EU Central, EU West, Germany. Choosing a region close to your users minimises latency and can help with data residency compliance requirements.
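One quick way to compare candidate regions before committing is to time a TCP handshake to each region's endpoint from where your users (or your training data) live. A minimal sketch in Python; the hostnames below are placeholders, not real Genesis Cloud endpoints, so substitute the actual region endpoints from the provider's documentation:

```python
import socket
import time

def tcp_connect_latency(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the TCP handshake time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0

# Placeholder hostnames for illustration only - replace with the real
# endpoint of each region you are considering.
for region_host in ["eu-central.example.com", "eu-west.example.com"]:
    try:
        print(f"{region_host}: {tcp_connect_latency(region_host):.1f} ms")
    except OSError as exc:
        print(f"{region_host}: unreachable ({exc})")
```

A TCP handshake only approximates application latency, but it is usually enough to rank regions relative to each other.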
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to Genesis Cloud and other matching providers. You'll receive proposals within 24 hours — no commitment required.
Genesis Cloud offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
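Once an instance is provisioned, you can verify the interconnect yourself rather than relying on the spec sheet. A short sketch using NVIDIA's standard `nvidia-smi` tool (run on the GPU instance itself):

```shell
# Print the GPU interconnect topology matrix. Entries like NV1/NV2
# indicate NVLink between a GPU pair; PIX/PHB/SYS indicate PCIe paths.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi topo -m
else
    echo "nvidia-smi not found - run this on the GPU instance itself"
fi
```

For multi-node InfiniBand, the spec sheet or provider support is still the authoritative source, since node-internal tools only show what is visible from one machine.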
