

Best for Established AI businesses needing long-term, dedicated bare-metal servers with massive global bandwidth capacity.
LeaseWeb is a global infrastructure provider that has quietly built out a large, highly competitive fleet of dedicated GPU servers. Operating data centers across Europe, North America, and Asia, LeaseWeb offers unmetered, highly customizable bare-metal machines loaded with NVIDIA hardware. For AI companies operating at scale, such as those running continuous data scraping, data pipeline processing, and steady-state inference, LeaseWeb offers long-term contract pricing that significantly undercuts hourly public cloud models.
| GPU Models | A100, RTX A6000, Tesla T4 |
| Headquarters | Amsterdam, Netherlands |
| Founded | 1997 |
| Availability | Available Now |
| Website | leaseweb.com → |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
LeaseWeb GPU cloud pricing starts from $0.70/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
LeaseWeb offers A100, RTX A6000, Tesla T4 GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
LeaseWeb operates data centers in the EU, APAC, and the US. Choosing a region close to your users minimises latency and can help with data residency compliance requirements.
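One practical way to compare regions before committing is to measure round-trip connect time to an endpoint in each candidate region. A minimal sketch in Python, where the per-region hostnames are hypothetical placeholders (substitute real endpoints for your deployment):

```python
import socket
import time

def tcp_latency_ms(host, port=443, timeout=2.0):
    """Measure TCP connect time to host:port in milliseconds; None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

def pick_region(latencies):
    """Return the region key with the lowest measured latency, skipping unreachable ones."""
    reachable = {region: ms for region, ms in latencies.items() if ms is not None}
    return min(reachable, key=reachable.get) if reachable else None

if __name__ == "__main__":
    # Hypothetical per-region endpoints -- replace with your actual hosts.
    regions = {
        "EU":   "eu.example.com",
        "US":   "us.example.com",
        "APAC": "apac.example.com",
    }
    measured = {region: tcp_latency_ms(host) for region, host in regions.items()}
    print(pick_region(measured))
```

A single TCP connect is a rough proxy; averaging several probes per region gives a more stable comparison.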
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to LeaseWeb and other matching providers. You'll receive proposals within 24 hours, no commitment required.
LeaseWeb offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
