
Compare 20 cloud providers offering the NVIDIA H100 (80 GB HBM3 VRAM). Find real-time pricing, availability, and get matched with verified providers instantly.
The NVIDIA H100 is one of the most powerful data-center GPUs available, purpose-built for accelerating transformer-based large language models. With 80 GB of HBM3 memory and NVLink 4.0 support, the H100 is the gold standard for training GPT-class models at scale.
Pricing for H100 cloud compute varies widely by provider. On-demand rates typically range from $1.50–$5/hr per GPU for single-instance access. For larger multi-GPU clusters (8x, 16x, or 64x GPU nodes), enterprise pricing with SLAs is negotiated directly with providers. Reserved capacity offers 30–60% discounts vs. on-demand pricing.
When evaluating providers for H100 GPU cloud, consider:

Amazon Web Services (AWS)
Best for Enterprise Production, Model Deployment, Massive Scale
Locations: Global (30+ regions)
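As a rough illustration of the on-demand vs. reserved trade-off mentioned above, the sketch below estimates monthly per-GPU cost. The $1.50/hr rate and the 30% discount are just the low ends of the ranges quoted on this page, not live quotes from any provider:

```python
def monthly_gpu_cost(hourly_rate, hours_per_month=730, discount=0.0):
    """Estimated monthly cost for one GPU at a given hourly rate.

    `discount` is the reserved-capacity discount as a fraction,
    e.g. 0.30 for the low end of the 30-60% band quoted above.
    """
    return hourly_rate * hours_per_month * (1 - discount)

# Hypothetical figures: low end of the on-demand range quoted above.
on_demand = monthly_gpu_cost(1.50)
reserved = monthly_gpu_cost(1.50, discount=0.30)
print(f"on-demand: ${on_demand:,.2f}/mo, reserved: ${reserved:,.2f}/mo")
```

Actual savings depend on the commitment term and provider, so treat this as a back-of-the-envelope check, not a quote.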

Best for Enterprise LLM Training, HPC, AI Inference at Scale
Locations: US East (NJ, VA), US West (CA), EU West (UK, Sweden, Netherlands)

Best for AI Innovation, TPU Training, MLOps (Vertex AI)
Locations: Global (35+ regions)

Best for Finetuning Open Source Models, Serverless inference endpoints
Locations: US, EU

Best for LLM Serverless APIs, Fast Image Generation, Voice AI
Locations: US East, US West

Best for LLM Training, AI Research, Fine-Tuning
Locations: US East (Texas), US West (California, Utah), Europe (UK)

Best for Enterprises, OpenAI Integrations, Hybrid Cloud
Locations: Global (60+ regions)

Best for Production AI Model Serving, Custom Model Inference
Locations: US East, US West

Best for Serverless Image Generation, LLM API inference, Open-Source Model Hosting
Locations: US, EU

Best for Distributed Computing, Ray workload scaling, LLM hosting
Locations: US East, US West

Best for Serverless Inference, Ad-hoc Python scripts, Quick Prototyping
Locations: US East, US West, Europe


Best for Scale-to-zero Inference, Custom Model Serving, Low-Latency APIs
Locations: US, EU

Best for AI Inference, Image Generation, Fine-Tuning, Budget ML
Locations: US East, US West, EU West (Norway, France), Asia Pacific (Singapore)

Best for Edge AI, Application Developers requiring unified infrastructure, Web Apps + AI
Locations: Global (30+ Data Centers)


Best for European Enterprise AI, Massive Scale LLM Training, HPC
Locations: EU (Finland)

Best for Enterprise LLM Pre-training, Large-Scale AI Research, Foundation Model Development
Locations: US West (Colorado, Nevada), US East (Virginia)

Best for Enterprise AI Training, Multi-Tenant GPU Clusters, Cost-Effective H100 Access
Locations: UK (London, Manchester), US West (California), US East (Virginia), EU Central (Germany, France)

Best for ML Notebooks, AI Model Development, Research, Computer Vision
Locations: US East (New York), US West (California), EU West (Netherlands, UK)
NVIDIA H100 is commonly used for LLM training, large-scale AI research, and multi-modal model training. Its 80 GB of HBM3 VRAM makes it suitable for running large models that don't fit in smaller GPUs' memory.
NVIDIA H100 cloud pricing varies by provider and region, but typically ranges from $1.50/hr to $8/hr for single-GPU instances. Multi-GPU cluster pricing scales proportionally. Use the filters above to compare current market rates.
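Since multi-GPU cluster pricing scales roughly proportionally with GPU count, a quick sketch can estimate the cost of a training run. The $3/hr rate, node size, and run length below are illustrative placeholders, not quotes from any listed provider:

```python
def run_cost(hourly_rate_per_gpu, num_gpus, hours):
    """Estimated total cost of a training run, assuming per-GPU
    pricing scales proportionally across the cluster (as noted above)."""
    return hourly_rate_per_gpu * num_gpus * hours

# Hypothetical example: an 8x H100 node at $3/hr per GPU for a 72-hour run.
print(f"${run_cost(3.00, 8, 72):,.2f}")
```

For reserved or spot capacity, apply the relevant discount to the hourly rate before multiplying.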
ComputeStacker currently lists 20 providers offering H100 GPU cloud access. These include a mix of hyperscalers, specialist AI cloud providers, and bare-metal GPU hosting services.
Yes — most providers on ComputeStacker offer on-demand hourly pricing for H100 instances. Reserved and spot pricing options are also available from many providers, offering discounts of 30–70% for committed usage.