

TensorDock
Best for Budget Compute, Side Projects, Decentralized Rendering
TensorDock is a cost-effective marketplace that aggregates GPU compute from data centers and individual node operators around the globe. By decentralizing its infrastructure, TensorDock can offer high-end consumer cards like the RTX 4090 and enterprise cards like the A100 at some of the lowest prices on the market.
| GPU Models | RTX 4090, RTX 3090, A100, L40S |
| Headquarters | Seattle, WA, USA |
| Founded | 2021 |
| Availability | Available Now |
| Website | tensordock.com ↗ |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
TensorDock GPU cloud pricing starts from $0.10/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
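To budget a workload before requesting a quote, you can sketch a rough monthly estimate from an hourly rate. The rates below are illustrative placeholders consistent with the advertised "$0.10/hr" starting price, not quoted TensorDock prices; always confirm actual pricing through the quote form.

```python
# Rough monthly cost estimate for a single GPU instance.
# NOTE: these hourly rates are illustrative assumptions, not
# quoted TensorDock prices.
ASSUMED_HOURLY_RATES = {
    "RTX 3090": 0.10,   # spot, low end of the advertised range
    "RTX 4090": 0.35,
    "A100": 1.20,
}

def monthly_cost(gpu: str, hours_per_day: float, days: int = 30) -> float:
    """Estimated monthly cost in USD for one GPU at the assumed rate."""
    return ASSUMED_HOURLY_RATES[gpu] * hours_per_day * days

# Example: a spot RTX 3090 running 8 hours a day for a month
print(f"${monthly_cost('RTX 3090', 8):.2f}/month")  # $24.00/month
```

Spot instances are typically the cheapest but can be preempted, so this kind of estimate is a lower bound; reserved or on-demand rates for the same card will be higher.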
TensorDock offers RTX 4090, RTX 3090, A100, L40S GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
TensorDock operates data centers in Asia Pacific, EU Central, and US East. Choosing a region close to your users minimizes latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to TensorDock and other matching providers. You'll receive proposals within 24 hours — no commitment required.
TensorDock offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.

Replicate
Best for Serverless Image Generation, LLM API inference, Open-Source Model Hosting

Best for No-code Finetuning, AI Application Developers, Quick Prototyping

Best for LLM Training, AI Research, Fine-Tuning