

Best for Enterprise LLM Pre-training, Large-Scale AI Research, Foundation Model Development
Voltage Park is a San Francisco-based GPU cloud startup founded in 2023. The company acquired more than 24,000 NVIDIA H100 GPUs through a direct partnership with NVIDIA. Voltage Park focuses exclusively on large-scale enterprise AI training, working directly with AI labs and foundation model companies.
| Attribute | Details |
| --- | --- |
| GPU Models | H100 SXM5 80GB, H100 NVL 94GB, A100 SXM4 80GB |
| GPU Types | A100, H100 |
| Headquarters | San Francisco, CA, USA |
| Founded | 2023 |
| Availability | Limited |
| Website | voltagepark.com ↗ |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
Voltage Park GPU cloud pricing starts from $2.25/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
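As a rough back-of-the-envelope illustration (the $2.25/hr figure above is indicative only; actual rates depend on GPU model, reservation type, and contract length), total cost for a training job is simply GPU count × hours × per-GPU hourly rate:

```python
def estimate_training_cost(gpu_count: int, hours: float, hourly_rate: float = 2.25) -> float:
    """Estimate total job cost in USD; hourly_rate is per GPU-hour (indicative)."""
    return gpu_count * hours * hourly_rate

# e.g. an 8x GPU node running a 72-hour fine-tuning job
cost = estimate_training_cost(gpu_count=8, hours=72)
print(f"Estimated cost: ${cost:,.2f}")  # → Estimated cost: $1,296.00
```

Use the quote form to replace the default rate with the exact quote for your workload.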
Voltage Park offers H100 SXM5 80GB, H100 NVL 94GB, A100 SXM4 80GB GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
Voltage Park operates data centers in the US East and US West regions. Choosing a region close to your users minimises latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to Voltage Park and other matching providers. You'll receive proposals within 24 hours — no commitment required.
Voltage Park offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
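If you want to verify the interconnect once you have shell access to an instance, `nvidia-smi topo -m` prints the GPU topology matrix, where NVLink connections appear as `NV<n>` entries. Below is a minimal sketch of a helper that checks such output for NVLink links; the sample matrix is illustrative, not captured from Voltage Park hardware.

```python
import re

def has_nvlink(topo_output: str) -> bool:
    """Return True if an `nvidia-smi topo -m` matrix contains any NVLink (NV<n>) entry."""
    return bool(re.search(r"\bNV\d+\b", topo_output))

# Illustrative, abbreviated sample of `nvidia-smi topo -m` output:
sample = """\
        GPU0    GPU1    CPU Affinity
GPU0     X      NV18    0-63
GPU1    NV18     X      0-63
"""
print(has_nvlink(sample))  # True
```

A matrix showing only `PHB`, `PXB`, or `SYS` entries would indicate the GPUs communicate over PCIe or the CPU interconnect instead, which matters for multi-GPU training throughput.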
