

NVIDIA Base Command
Best for Fortune 500 companies managing massive, dedicated DGX AI supercomputing clusters.
NVIDIA Base Command is NVIDIA's enterprise software platform for managing massive AI supercomputers. While not a public cloud in the traditional sense, Base Command powers NVIDIA's DGX SuperPODs and is offered through select cloud partners. It abstracts the complexity of managing multi-node clusters with thousands of GPUs, letting enterprise data science teams schedule workloads, manage datasets, and monitor training jobs from a single pane of glass. It is the gold standard for Fortune 500 companies training foundation models on dedicated infrastructure.
| GPU Models | H100, B200, DGX SuperPOD |
| Headquarters | Santa Clara, CA |
| Founded | 1993 |
| Availability | Waitlist |
| Website | nvidia.com |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
NVIDIA Base Command GPU cloud pricing starts at $25.00/hr, depending on GPU type, reservation model (on-demand, spot, or reserved), and region. Use the quote form to get exact pricing for your specific workload.
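To put the indicative rate into context, here is a minimal back-of-the-envelope sketch. The $25.00/hr figure is the starting rate quoted above, and the utilisation levels are assumptions for illustration only; request a quote for real numbers.

```python
# Rough monthly cost estimate for a single instance. The hourly rate is the
# indicative starting rate quoted on this page; real pricing varies by GPU
# model, reservation type, contract length, and region.
HOURLY_RATE_USD = 25.00   # indicative starting rate per instance-hour
HOURS_PER_MONTH = 730     # average hours in a month

def monthly_cost(hourly_rate: float, utilisation: float = 1.0) -> float:
    """Estimated monthly spend for one instance at a given utilisation (0-1)."""
    return hourly_rate * HOURS_PER_MONTH * utilisation

# A fully utilised instance vs. one running training jobs ~40% of the time.
print(f"24/7 usage: ${monthly_cost(HOURLY_RATE_USD):,.0f}/month")
print(f"40% usage:  ${monthly_cost(HOURLY_RATE_USD, 0.4):,.0f}/month")
```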
NVIDIA Base Command offers H100, B200, and DGX SuperPOD GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
NVIDIA Base Command operates data centers globally. Choosing a region close to your users minimises latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to NVIDIA Base Command and other matching providers. You'll receive proposals within 24 hours β no commitment required.
NVIDIA Base Command offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
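For readers gauging whether their workload fits this class of infrastructure, the sketch below shows the general shape of a multi-node training job that benefits from NVLink and InfiniBand interconnects. It is a generic PyTorch DDP example assumed to be launched with torchrun, not Base Command-specific code; the model, batch size, and hyperparameters are placeholders.

```python
# Minimal multi-node training skeleton using PyTorch DDP over NCCL.
# Illustrative only: the Linear "model" stands in for an LLM, and the random
# tensors stand in for a real data loader.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun (or the cluster scheduler) sets RANK, LOCAL_RANK and WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):  # replace with a real data loader and loss
        x = torch.randn(8, 4096, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()      # gradients are all-reduced across nodes here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The all-reduce that DDP performs during the backward pass is exactly the traffic pattern that saturates node-to-node links, which is why the interconnect details in the Specs tab matter for large-scale runs.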
