
Yandex Cloud
Best for Enterprises deploying ML applications specifically targeting the CIS and Eastern European markets.

Tencent Cloud
Best for Asia-focused Enterprise AI
Tencent Cloud is one of the world’s largest cloud providers, offering a vast array of services tailored for enterprise scale. Their GPU instances (CVM-GPU) are widely used for deep learning, cloud gaming, and video rendering, particularly by organizations with a strong presence in the Asia-Pacific region.
Like other hyperscalers, Tencent provides a fully integrated ecosystem, including high-performance computing (HPC) clusters and managed Kubernetes. While they offer robust A100 and older-generation instances, access to the latest H100s outside mainland China may be constrained by availability and geopolitical export restrictions.
| GPU Models | NVIDIA A100, NVIDIA V100, NVIDIA T4 |
| Headquarters | Shenzhen, China |
| Founded | 2010 |
| Availability | Available Now |
| Website | www.tencentcloud.com |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
Tencent Cloud GPU cloud pricing starts from $2.50/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
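To see how the listed starting rate translates into a budget, a quick back-of-the-envelope estimate helps. The sketch below is illustrative only: the $2.50/hr figure is the starting rate quoted above, and the utilization and GPU-count parameters are hypothetical inputs you would adjust for your own workload.

```python
# Rough monthly cost estimate for hourly-billed GPU instances.
# $2.50/hr is the listed starting rate; actual rates vary by GPU
# model, region, and reservation type (on-demand vs. spot vs. reserved).

HOURS_PER_MONTH = 730  # average hours in a month (24 * 365 / 12)

def monthly_cost(hourly_rate: float, gpus: int = 1, utilization: float = 1.0) -> float:
    """Estimate monthly spend for `gpus` instances billed hourly."""
    return hourly_rate * gpus * HOURS_PER_MONTH * utilization

# A single instance at the starting rate, running 24/7:
print(monthly_cost(2.50))  # → 1825.0
```

Spot or reserved pricing would lower the effective hourly rate, so treat this as an upper bound for on-demand use.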
Tencent Cloud offers A100, V100, T4 GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
Tencent Cloud operates data centers in Asia, Europe, and North America. Choosing a region close to your users minimises latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to Tencent Cloud and other matching providers. You'll receive proposals within 24 hours — no commitment required.
Tencent Cloud offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.


Best for Developers deploying generative AI, TTS, or voice agents who need instant serverless scaling and sub-second cold starts.

Best for Managed AI Endpoints