
FluidStack
Best for Enterprise AI Training, Multi-Tenant GPU Clusters, Cost-Effective H100 Access
FluidStack is a London-based GPU cloud provider founded in 2019, focused on delivering enterprise-grade H100 and A100 GPU clusters across the US and EU. It positions itself as an alternative to CoreWeave for teams that want large-scale cluster access without hyperscaler complexity.
FluidStack enterprise plans include dedicated account management, a 99.9% uptime SLA, priority GPU allocation, custom VPC networking, and ISO 27001 compliance; SOC 2 Type II certification is in progress.
| Spec | Details |
| --- | --- |
| GPU Models | H100 SXM5 80GB, H100 PCIe 80GB, A100 SXM4 80GB, A100 PCIe, L40S 48GB, RTX 4090 |
| GPU Types | A100, H100, L40S, RTX 4090 |
| Headquarters | London, United Kingdom |
| Founded | 2019 |
| Availability | Available Now |
| Website | fluidstack.io |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (on-demand vs. spot vs. reserved), contract length, and region. Get an exact quote →
FluidStack GPU cloud pricing starts from $0.89/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
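To put those variables in perspective, here is a back-of-the-envelope cost sketch. Only the $0.89/hr floor comes from FluidStack's published starting price; the spot and on-demand rates below are hypothetical placeholders, so treat the output as illustrative only and get a real quote for budgeting.

```python
# Back-of-the-envelope GPU cost estimate. Only the $0.89/hr floor comes
# from FluidStack's published "from" pricing; the other rates are
# hypothetical placeholders -- always confirm via a quote.

HOURLY_RATES = {
    "reserved": 0.89,    # published "from" price (best case, long contract)
    "spot": 1.20,        # hypothetical: spot pricing varies with demand
    "on_demand": 2.40,   # hypothetical: on-demand is typically priciest
}

def estimate_cost(num_gpus: int, hours: float, plan: str) -> float:
    """Total cost = GPUs x hours x hourly rate for the chosen plan."""
    return num_gpus * hours * HOURLY_RATES[plan]

# Example: a 7-day fine-tuning run on 8 GPUs.
for plan in HOURLY_RATES:
    print(f"{plan:>10}: ${estimate_cost(8, 24 * 7, plan):,.2f}")
```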
FluidStack offers H100 SXM5 80GB, H100 PCIe 80GB, A100 SXM4 80GB, A100 PCIe, L40S 48GB, RTX 4090 GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
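Once an instance is provisioned, it is worth confirming the hardware matches what was quoted, since the SXM and PCIe variants of the same GPU differ in interconnect bandwidth. A minimal check, assuming PyTorch is installed on the instance:

```python
import torch

# Confirm the provisioned instance exposes the GPU model you were quoted
# (e.g. an H100 SXM5 rather than the PCIe variant).
assert torch.cuda.is_available(), "No CUDA device visible -- check drivers"

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
```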
FluidStack operates data centers in EU Central, Germany, the United Kingdom, US East, and US West. Choosing a region close to your users minimises latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to FluidStack and other matching providers. You'll receive proposals within 24 hours — no commitment required.
FluidStack offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
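Before committing to a long training run, a quick NCCL all-reduce across the cluster exercises those interconnects end to end (NVLink within a node, InfiniBand between nodes). A minimal smoke-test sketch, assuming a torchrun-style launch; the exact launch mechanics on FluidStack may differ:

```python
import os
import torch
import torch.distributed as dist

# Minimal NCCL all-reduce smoke test for a multi-GPU cluster. Launch with
# torchrun (e.g. `torchrun --nproc_per_node=8 --nnodes=2 ...`), which sets
# RANK, WORLD_SIZE and LOCAL_RANK. NCCL uses NVLink inside a node and
# InfiniBand across nodes where available.

def main() -> None:
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor of ones; after the all-reduce every
    # rank should hold WORLD_SIZE in every element.
    x = torch.ones(1024, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print(f"all-reduce OK across {dist.get_world_size()} ranks, "
              f"value={x[0].item():.0f}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

If the test runs but cross-node throughput looks poor, confirm with the provider that the nodes share an InfiniBand fabric rather than plain Ethernet.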
