Mirantis integrates NVIDIA Run:ai to speed AI deployment
Mirantis has integrated NVIDIA Run:ai with its k0rdent AI platform, allowing complete AI environments to be deployed in minutes.
The integration targets enterprises and neocloud providers building private AI infrastructure and looking to automate the steps between provisioning GPU systems and preparing them for training and inference workloads.
The combined setup automates deployment and lifecycle management for NVIDIA Run:ai through k0rdent AI. This includes the software and infrastructure layers between bare-metal systems and the AI workload environment used by data science and operations teams.
Organisations adopting private AI systems often face a lengthy process after buying or installing GPU hardware. That can include setting up orchestration software, aligning operator dependencies, configuring networking, allocating resources and validating the environment before workloads can run.
Under the integration, k0rdent AI automates the installation and configuration of components including ingress and external DNS, cert-manager, the NVIDIA GPU Operator, NVIDIA Network Operator, Dynamic Resource Allocation Operator, MPI Operator, Training Operator and NVIDIA Run:ai platform templates. The platform also manages dependency sequencing, configuration validation and infrastructure readiness checks.
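The announcement does not include configuration detail, but k0rdent manages clusters declaratively through Kubernetes custom resources, with add-on services listed against a cluster definition and installed in dependency order. A hypothetical sketch of how such a bundle might be expressed (the API version, field names and template names here are illustrative assumptions, not the actual k0rdent AI catalog):

```yaml
# Illustrative only: field and template names are assumptions,
# not the shipped k0rdent AI definitions.
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: gpu-cluster-01
spec:
  template: baremetal-gpu-template      # hypothetical bare-metal cluster template
  credential: baremetal-credential
  serviceSpec:
    services:                           # installed in dependency order
      - name: cert-manager
        template: cert-manager
      - name: ingress
        template: ingress-nginx
      - name: nvidia-gpu-operator
        template: gpu-operator
      - name: nvidia-network-operator
        template: network-operator
      - name: runai-platform
        template: runai
```

In a model like this, the platform rather than the operator is responsible for sequencing the operators, validating their configuration and confirming readiness before workloads are admitted.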
Run:ai provides the workload and GPU orchestration layer in the broader AI factory stack. It lets users submit training jobs, run inference tasks and launch notebooks through a user interface, command-line interface or API without directly managing Kubernetes clusters or GPU settings.
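To illustrate the level of abstraction involved, a brief sketch using command shapes from Run:ai's public CLI documentation (the project, image and job names are placeholders, and exact flags may differ by CLI version):

```shell
# Submit a training job requesting one GPU; Run:ai's scheduler places it
# on an available node without the user writing Kubernetes manifests.
runai submit train-job-1 \
  --image nvcr.io/nvidia/pytorch:24.01-py3 \
  --gpu 1 \
  --project team-a

# Check scheduling status and stream logs for the same job.
runai describe job train-job-1 --project team-a
runai logs train-job-1 --project team-a
```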
The integration is intended to reduce the operational burden on IT and data science teams, especially where deployment and management responsibilities are split across departments. It is also aimed at multi-tenant environments and operators seeking more standardised deployment across regions and infrastructure types.
Richard Borenstein, Senior Vice President of Business Development at Mirantis, said customers face an operational bottleneck. "Enterprises don't struggle to purchase GPUs; they struggle to operationalize them," he said.
"By automating the deployment of NVIDIA Run:ai as part of a full-stack AI factory platform, k0rdent AI enables organizations to move from infrastructure delivery to workload execution in a fraction of the time, with repeatable, production-grade outcomes," Borenstein said.
NVIDIA positioned the tie-up around deployment speed and infrastructure use in shared environments. "Enterprises and cloud providers are looking for ways to accelerate the path from infrastructure to production AI," said Omri Geller, Vice President and General Manager at NVIDIA.
"By integrating NVIDIA Run:ai with Mirantis k0rdent AI, customers can automate the deployment of AI factory environments, enabling faster time to value, improved GPU utilization, and more efficient scaling of AI workloads across multi-tenant environments," Geller said.
Certified setup
The integration has been tested through NVIDIA Run:ai's certification programme and achieved partner-certified status after more than 100 functional tests. These covered workload submission, scheduling behaviour, multi-tenant operations and platform lifecycle management.
Mirantis is also a validated member of the NVIDIA AI Cloud Ready initiative. In practice, this positions k0rdent AI within a broader push by infrastructure suppliers to offer pre-tested software stacks for AI deployments, rather than leaving customers to assemble and validate each layer themselves.
Regulated use
The integrated platform supports air-gapped deployments, with no dependency on external networks. That is likely to matter to government users and regulated sectors that need isolated infrastructure.
k0rdent AI also supports rack-scale GPU systems, including NVIDIA Grace Blackwell NVL72, through the NVIDIA NCX Infra Controller, which is used for bare-metal lifecycle automation.
For neocloud operators, the automation could support on-demand deployment and removal of full Run:ai environments to improve use of installed GPU capacity. For enterprise users, the same approach is intended to provide a more consistent way to deploy AI platforms across teams, locations and infrastructure estates.
Mirantis serves enterprise customers including Adobe, Ericsson, Inmarsat, MetLife, PayPal and Societe Generale.