

Best for enterprise ML and data engineering teams requiring heavy pipeline orchestration, complete reproducibility, and managed Kubernetes infrastructure.
Union.ai provides managed, enterprise-grade compute infrastructure for Flyte, the open-source ML orchestration engine originally developed at Lyft. For large data science and AI teams, Union.ai offers a robust control plane that manages Kubernetes clusters, orchestrates complex DAGs (directed acyclic graphs) for model training, and tracks large-scale data pipelines. It abstracts away the heavy DevOps work required for production ML, letting data scientists run demanding workloads with full reproducibility.
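To illustrate the execution model behind DAG orchestration, here is a minimal, self-contained sketch of how an orchestrator schedules pipeline tasks in dependency order (Kahn's algorithm). The task names and dependencies are illustrative only; this is not Union.ai's or Flyte's actual API.

```python
from collections import deque

def topological_order(dag):
    """Return tasks in an order that respects dependencies.

    `dag` maps each task name to the list of tasks it depends on.
    Raises ValueError if the graph contains a cycle.
    """
    indegree = {task: len(deps) for task, deps in dag.items()}
    downstream = {task: [] for task in dag}
    for task, deps in dag.items():
        for dep in deps:
            downstream[dep].append(task)

    # Start with tasks that have no unmet dependencies.
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in sorted(downstream[task]):
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(dag):
        raise ValueError("cycle detected in pipeline DAG")
    return order

# Hypothetical four-step training pipeline.
pipeline = {
    "ingest": [],
    "clean": ["ingest"],
    "train": ["clean"],
    "evaluate": ["train"],
}
print(topological_order(pipeline))  # ['ingest', 'clean', 'train', 'evaluate']
```

A real orchestrator layers retries, caching, and resource scheduling on top of this ordering, but dependency resolution is the core guarantee that makes runs reproducible.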
| GPU Models | T4g, T4, L4, A10G, A100, V100, L40S, H100, H200, B200 |
| Compute Type | Managed Orchestrated Compute (AWS, GCP) |
| Headquarters | Bellevue, WA |
| Founded | 2021 |
| Availability | Available Now |
| Website | union.ai ↗ |
| GPU Model | Instance Type | Hourly Rate |
|---|---|---|
| T4g | On-Demand | $0.15/hr |
| T4 | On-Demand | $0.29/hr |
| L4 | On-Demand | $0.34/hr |
| A10G | On-Demand | $0.40/hr |
| A100 | On-Demand | $0.62/hr |
| V100 | On-Demand | $0.65/hr |
| L40S | On-Demand | $0.73/hr |
| H100 | On-Demand | $1.38/hr |
| H200 | On-Demand | $1.58/hr |
| B200 | On-Demand | $2.85/hr |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
Union.ai GPU cloud pricing starts from $0.15/hr depending on GPU type, reservation model (on-demand vs. spot vs. reserved), and region. Use the quote form to get exact pricing for your specific workload.
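As a quick sanity check before requesting a quote, you can sketch a back-of-the-envelope estimate from the indicative on-demand rates in the table above. The helper below is illustrative only; actual pricing depends on reservation type, region, and contract terms.

```python
# Indicative on-demand rates ($/hr) taken from the pricing table above.
RATES_PER_HOUR = {"T4": 0.29, "L4": 0.34, "A100": 0.62, "H100": 1.38}

def estimate_cost(gpu, hours, count=1):
    """Rough on-demand cost for `count` GPUs running for `hours` hours."""
    return round(RATES_PER_HOUR[gpu] * hours * count, 2)

# Eight H100s running around the clock for ~30 days (720 hours).
print(estimate_cost("H100", hours=720, count=8))  # 7948.8
```

Spot and reserved pricing would come in lower, which is why the quote form asks about reservation model and expected utilisation.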
Union.ai offers GPU instances through managed, orchestrated compute on AWS and GCP. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
Union.ai runs workloads on AWS and GCP infrastructure with global region coverage. Choosing a region close to your users minimises latency and can help meet data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to Union.ai and other matching providers. You'll receive proposals within 24 hours — no commitment required.
Union.ai offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.
