

Best for Researchers and enterprise teams tackling massive, intractable optimization and logistical ML problems.
D-Wave Leap is the world’s first quantum cloud service, giving developers immediate, real-time access to D-Wave’s quantum annealing computers. While completely different from GPU-based deep learning, quantum annealing is increasingly applied to hard machine learning optimization problems, logistics routing, and financial modeling that are intractable for classical solvers at scale. D-Wave Leap provides a hybrid solver service, combining classical compute with quantum acceleration for these highly specific, complex optimization challenges.
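The optimization problems the hybrid solver service targets are typically expressed as a QUBO (quadratic unconstrained binary optimization) objective. A minimal sketch of what the annealer minimizes, using brute-force search in place of quantum hardware (the `Q` matrix below is a made-up toy problem; real workloads go through D-Wave's Ocean SDK rather than code like this):

```python
from itertools import product

# Hypothetical toy QUBO: minimize
#   E(x) = sum_i Q[(i,i)] * x_i + sum_{i<j} Q[(i,j)] * x_i * x_j
# over binary variables x_i in {0, 1}. A quantum annealer samples
# low-energy states of exactly this kind of objective; here we just
# enumerate all assignments, which only works for tiny problems.

Q = {
    (0, 0): -1, (1, 1): -1, (2, 2): -1,  # linear biases
    (0, 1): 2, (1, 2): 2,                # quadratic couplings
}

def energy(x, Q):
    """Evaluate the QUBO objective for one binary assignment."""
    return sum(bias * x[i] * x[j] for (i, j), bias in Q.items())

# Exhaustive search over all 2^3 binary assignments.
best = min(product((0, 1), repeat=3), key=lambda x: energy(x, Q))
print(best, energy(best, Q))  # → (1, 0, 1) -2
```

The couplings penalize turning on adjacent variables together, so the minimum picks the two non-adjacent ones. Real problems have thousands of variables, which is where exhaustive search fails and annealing (or the hybrid classical-quantum solvers) comes in.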
| Hardware | Quantum Annealer (Advantage System) |
| Headquarters | Burnaby, Canada |
| Founded | 1999 |
| Availability | Available Now |
| Website | cloud.dwavesys.com ↗ |
💡 Pricing note: Rates shown are indicative. Final pricing depends on solver type (QPU vs. hybrid), usage volume, contract length, and region. Get an exact quote →
D-Wave Leap pricing is usage-based, metered on solver access time rather than per-GPU hourly rates, and varies by solver type and plan. Use the quote form to get exact pricing for your specific workload.
D-Wave Leap offers access to Quantum Annealer (Advantage System) hardware. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
D-Wave Leap operates data centers in the EU and North America. Choosing a region close to your users minimizes latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to D-Wave Leap and other matching providers. You'll receive proposals within 24 hours — no commitment required.
D-Wave Leap is not a GPU cloud: its quantum annealers and hybrid solvers target combinatorial optimization rather than large language model training or fine-tuning. For deep learning workloads, consider a GPU-focused provider instead.
