
Cudo Compute
Best for Sustainable AI Compute, Green HPC, EU-based AI Inference
Cudo Compute is a London-based GPU cloud provider founded in 2019, with a mission to make AI compute sustainable and cost-efficient. All Cudo data centers run on 100% renewable energy, including hydroelectric power from Norway, geothermal energy from Iceland, and certified renewable power agreements in the UK and Germany.
For AI companies subject to GDPR, Cudo's European data centers provide a compliant environment where training data never leaves EU jurisdiction. Data Processing Agreements (DPAs) are available.
| Spec | Details |
| --- | --- |
| GPU Models | H100 SXM5 80GB, H100 PCIe 80GB, A100 80GB, RTX 4090, L40S, V100 |
| GPU Types | A100, H100, L40S, RTX 4090 |
| Headquarters | London, United Kingdom |
| Founded | 2019 |
| Availability | Available Now |
| Website | www.cudocompute.com |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
Cudo Compute GPU cloud pricing starts from $0.20/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
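As a rough way to compare reservation models before requesting a quote, you can estimate a run's total cost from an assumed hourly rate, GPU count, and expected runtime. The sketch below uses placeholder rates for illustration only; they are not Cudo Compute's actual prices.

```python
# Rough GPU cost estimator. The rates below are placeholder assumptions,
# not actual Cudo Compute pricing -- request a quote for real numbers.

def estimate_cost(hourly_rate: float, num_gpus: int, hours: float) -> float:
    """Total cost = hourly rate per GPU x number of GPUs x hours."""
    return hourly_rate * num_gpus * hours

# Example: compare hypothetical on-demand vs. reserved rates for a
# 7-day fine-tuning run on 8 GPUs.
ASSUMED_RATES = {"on-demand": 2.50, "reserved": 1.80}  # USD per GPU-hour (assumed)

for model, rate in ASSUMED_RATES.items():
    cost = estimate_cost(rate, num_gpus=8, hours=24 * 7)
    print(f"{model}: ${cost:,.2f}")
```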
Cudo Compute offers H100 SXM5 80GB, H100 PCIe 80GB, A100 80GB, RTX 4090, L40S, V100 GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
Cudo Compute operates data centers in Germany, Singapore, the United Kingdom, and the US East region. Choosing a region close to your users minimises latency and can help meet data residency requirements, as sketched below.
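One simple way to pick a region is to time a TCP handshake to each candidate endpoint from where your users (or your inference clients) actually run. The hostnames in this sketch are placeholders, not real Cudo Compute endpoints; substitute the addresses from your own account.

```python
# Minimal region latency probe: times a TCP handshake to each candidate
# endpoint and reports the fastest. Hostnames are placeholders, not real
# Cudo Compute endpoints -- substitute the addresses from your account.
import socket
import time

CANDIDATE_REGIONS = {
    "uk": "example-uk.endpoint.invalid",            # placeholder hostname
    "de": "example-de.endpoint.invalid",            # placeholder hostname
    "sg": "example-sg.endpoint.invalid",            # placeholder hostname
    "us-east": "example-us-east.endpoint.invalid",  # placeholder hostname
}

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the round-trip time of a single TCP connect, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

results = {}
for region, host in CANDIDATE_REGIONS.items():
    try:
        results[region] = tcp_latency_ms(host)
    except OSError:
        results[region] = float("inf")  # unreachable from this network

best = min(results, key=results.get)
print(f"Lowest-latency region: {best} ({results[best]:.1f} ms)")
```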
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to Cudo Compute and other matching providers. You'll receive proposals within 24 hours — no commitment required.
Cudo Compute offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
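At the framework level, NVLink and InfiniBand matter because multi-GPU training synchronises gradients through a communication backend that can exploit them. The sketch below is a generic PyTorch DistributedDataParallel setup with the NCCL backend, not a Cudo-specific recipe; the model and training loop are toy placeholders.

```python
# Minimal PyTorch DistributedDataParallel sketch for a multi-GPU node.
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
# The model and data are toy placeholders; NCCL uses NVLink/InfiniBand
# automatically when the interconnect is present.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # NCCL backend for NVIDIA GPUs
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):                        # placeholder training loop
        batch = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                            # gradients all-reduced across GPUs
        optimizer.step()
        if dist.get_rank() == 0 and step % 20 == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```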
