
Best for Enterprise LLM Training, HPC, AI Inference at Scale
CoreWeave is the leading purpose-built cloud for AI and HPC workloads, founded in 2017 and headquartered in Livingston, NJ. The company has secured over $23 billion in enterprise contracts and went public on NASDAQ (CRWV) in March 2025.
CoreWeave operates the world's largest independently owned GPU cluster, with over 250,000 GPUs including the H100 SXM5 80GB, H100 NVL 94GB, A100 SXM4 80GB, L40S 48GB, and A40.
CoreWeave is Kubernetes-native, with its SUNK (Slurm on Kubernetes) platform enabling HPC-style batch workloads. 400 Gb/s InfiniBand networking connects H100 nodes.
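On a Kubernetes-native platform, batch workloads are typically submitted as standard Kubernetes objects. As an illustrative sketch only (the container image and job name are placeholders, and SUNK-specific scheduling options are not shown), a batch Job requesting a single NVIDIA GPU might look like:

```yaml
# Minimal Kubernetes batch Job requesting one NVIDIA GPU.
# nvidia.com/gpu is the standard NVIDIA device-plugin resource name;
# image and command are hypothetical placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: gpu-training-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: train
          image: nvcr.io/nvidia/pytorch:24.01-py3  # example container image
          command: ["python", "train.py"]
          resources:
            limits:
              nvidia.com/gpu: 1  # request a single GPU
```

Multi-node distributed training would additionally rely on the cluster's InfiniBand fabric and a scheduler-aware launcher, which is where a Slurm-on-Kubernetes layer comes in.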
| Spec | Details |
| --- | --- |
| GPU Models | H100 SXM5 80GB, H100 NVL 94GB, A100 SXM4 80GB, L40S, A40, RTX A6000 |
| GPU Types | A100, H100, L40S |
| Headquarters | Livingston, NJ, USA |
| Founded | 2017 |
| Availability | Available Now |
| Website | coreweave.com ↗ |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
CoreWeave GPU cloud pricing starts from $0.99/hr, depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
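Since quoted rates are hourly, a quick back-of-the-envelope monthly estimate can help when comparing providers. A minimal sketch — the $0.99/hr figure is the indicative starting rate from this page, while the $2.50/hr 8-GPU rate below is purely hypothetical:

```python
# Rough monthly-cost estimator for hourly GPU rates.
# Rates are indicative only; actual pricing depends on GPU model,
# reservation type, contract length, and region.

def monthly_cost(hourly_rate: float, gpus: int = 1, hours: float = 730) -> float:
    """Estimate a monthly bill: rate x GPU count x hours (~730 hrs/month)."""
    return round(hourly_rate * gpus * hours, 2)

print(monthly_cost(0.99))           # single GPU at the starting rate -> 722.7
print(monthly_cost(2.50, gpus=8))   # hypothetical 8-GPU on-demand rate -> 14600.0
```

Reserved or spot capacity typically trades flexibility for a lower hourly rate, so the same estimator applied to a quoted reserved rate shows the break-even point for committing.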
CoreWeave offers H100 SXM5 80GB, H100 NVL 94GB, A100 SXM4 80GB, L40S, A40, RTX A6000 GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
CoreWeave operates data centers in EU West, United Kingdom, US East, US West. Choosing a region close to your users minimises latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to CoreWeave and other matching providers. You'll receive proposals within 24 hours — no commitment required.
CoreWeave offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
