
Best for Companies looking to drastically reduce inference costs by optimizing models to run on cheaper GPUs.
Deci AI is a deep learning development platform focused on inference acceleration. Using its proprietary AutoNAC (Automated Neural Architecture Construction) technology, Deci helps developers build or optimize AI models that run significantly faster on any given hardware. Through its optimized cloud inference platform, Deci lets companies cut their cloud compute bills by achieving higher throughput on cheaper GPUs (such as the T4 or L4) rather than relying exclusively on expensive A100s or H100s to serve models.
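The economics behind this claim can be sketched with simple arithmetic: what matters for serving cost is dollars per request, not dollars per hour. The snippet below illustrates the trade-off; all hourly rates and throughput figures are hypothetical placeholders, not Deci AI benchmarks or quotes.

```python
# Hypothetical illustration: cost per 1M inferences falls when an optimized
# model sustains enough throughput on a cheaper GPU. All numbers below are
# assumed placeholders, not measured or quoted rates.

def cost_per_million(hourly_rate_usd: float, throughput_rps: float) -> float:
    """USD to serve 1,000,000 requests at a sustained requests/sec rate."""
    seconds_needed = 1_000_000 / throughput_rps
    return hourly_rate_usd * seconds_needed / 3600

# Assumed example: a baseline model on an A100 vs. an optimized model on a T4.
a100_cost = cost_per_million(hourly_rate_usd=3.00, throughput_rps=400)
t4_cost = cost_per_million(hourly_rate_usd=0.80, throughput_rps=250)

print(f"A100 baseline:  ${a100_cost:.2f} per 1M requests")
print(f"T4 optimized:   ${t4_cost:.2f} per 1M requests")
```

Under these assumed numbers the T4 serves a million requests for well under half the A100's cost, even though its raw throughput is lower, because its hourly rate is a fraction of the A100's.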
| GPU Models | L4, T4, A10G, A100 |
| Headquarters | Tel Aviv, Israel |
| Founded | 2019 |
| Availability | Available Now |
| Website | deci.ai ↗ |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
Deci AI GPU cloud pricing starts from $0.80/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
Deci AI offers L4, T4, A10G, and A100 GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
Deci AI operates data centers in EU Central and US West. Choosing a region close to your users minimizes latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to Deci AI and other matching providers. You'll receive proposals within 24 hours — no commitment required.
Deci AI offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
