

Best for MLOps Teams, Spot Instance Arbitrage, Dynamic Cloud Scaling
Mithril approaches the GPU shortage differently. Instead of just building another data center, they provide a powerful AI resource orchestration layer that connects ML teams to a multi-cloud GPU marketplace. Their platform intelligently routes training jobs to wherever compute is cheapest and most available.
For MLOps teams looking to slash their AWS or GCP bills, Mithril automates the difficult process of managing spot instances. Because it keeps training jobs checkpointed and resilient, Mithril can seamlessly migrate workloads across providers to secure the best possible pricing.
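The migration described above depends on standard checkpoint-and-resume mechanics: a job periodically persists its state so a preempted spot instance can be replaced and training picks up where it left off. A minimal sketch of that pattern in plain Python (the file name, state layout, and training loop are illustrative assumptions, not Mithril's actual API):

```python
import json
import os

CKPT = "train_state.json"  # hypothetical checkpoint path

def save_checkpoint(step, weights):
    # Write to a temp file, then atomically rename, so a preemption
    # mid-write never leaves a corrupt checkpoint behind.
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "weights": weights}, f)
    os.replace(tmp, CKPT)

def load_checkpoint():
    # Resume from the last saved state if one exists, else start fresh.
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            state = json.load(f)
        return state["step"], state["weights"]
    return 0, [0.0]

def train(total_steps=10, checkpoint_every=2):
    step, weights = load_checkpoint()  # may resume on a different host
    while step < total_steps:
        weights = [w + 0.1 for w in weights]  # stand-in for a real update
        step += 1
        if step % checkpoint_every == 0:
            save_checkpoint(step, weights)
    return step, weights
```

Because all progress lives in the checkpoint file (in practice, object storage shared across clouds), an orchestrator can kill the job on one provider and restart the same script on a cheaper one with no lost work beyond the last checkpoint interval.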
| GPU Models | A100, H100, L40S |
| Headquarters | Palo Alto, CA, USA |
| Founded | 2022 |
| Availability | Available Now |
| Website | mithril.ai ↗ |
💡 Pricing note: Rates shown are indicative. Final pricing depends on GPU model, reservation type (spot vs. on-demand), contract length, and region. Get an exact quote →
Mithril GPU cloud pricing starts at $1.00/hr and varies with GPU type, reservation model (on-demand, spot, or reserved), and region. Use the quote form to get exact pricing for your specific workload.
Mithril offers A100, H100, L40S GPU instances. Availability varies by region and configuration. Contact the provider through ComputeStacker for current availability.
Mithril operates data centers globally. Choosing a region close to your users minimizes latency and can help with data residency compliance requirements.
Use the "Get a Quote" button on this page to submit your GPU requirements. ComputeStacker will forward your request to Mithril and other matching providers. You'll receive proposals within 24 hours — no commitment required.
Mithril offers high-performance GPU infrastructure suitable for large language model training and fine-tuning workloads. For large-scale distributed training, check the Specs tab for NVLink and InfiniBand interconnect availability.

Best for Batch processing, Image Generation APIs, Highly parallel cheap inference

Best for Enterprise IT requiring automated, isolated bare-metal servers with high bandwidth.

Best for AI engineers and studios requiring raw, un-virtualized bare-metal access to the latest NVIDIA H100 and Ada architecture.