
CIQ unveils Rocky Linux Pro AI for GPU inference at scale

Thu, 12th Mar 2026

CIQ has launched Rocky Linux from CIQ Pro AI, a version of its Rocky Linux distribution aimed at AI inference and other GPU-accelerated production workloads.

The product, referred to as RLC Pro AI, bundles PyTorch with Nvidia's CUDA and DOCA OFED software stack. CIQ has also listed additional hardware partners and frameworks on its roadmap.

Interest in GPU-based infrastructure has grown as organisations move more machine learning and AI workloads into production. CIQ argues that operating system choices affect how effectively organisations use GPU hardware, especially when deploying workloads at scale.

Stack approach

RLC Pro AI uses the CIQ Linux Kernel and ships with GPU drivers, libraries, and frameworks that CIQ says it has tuned and validated for AI workloads. The distribution is designed to support deployments ranging from bare metal to Kubernetes, including on-premises infrastructure.
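
CIQ has not published its validation tooling, but a minimal sketch of the kind of first-boot check an operator might run to confirm the bundled driver, CUDA runtime, and PyTorch build can see the GPUs, assuming a standard CUDA-enabled PyTorch install, looks like this:

```python
import torch

# Report the framework, CUDA runtime, and visible GPU devices.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)
for i in range(torch.cuda.device_count()):
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
```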

CIQ says the distribution is built for the hardware enterprises are buying today and is intended to be production-ready from first boot, with immediate support for current Nvidia GPU accelerators.

The focus on validation and pre-configuration reflects a broader industry push to standardise AI software stacks. Organisations building AI platforms often need repeatable performance across development and production, and consistent configurations across environments and sites.

Performance claims

CIQ says RLC Pro AI ships with pre-tuned kernel settings, PyTorch flags, and CUDA configurations to reduce manual tuning and limit configuration drift after updates.
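
CIQ has not detailed the exact settings it ships, but the kinds of PyTorch and CUDA flags typically tuned for GPU inference, shown here purely as a generic illustration rather than CIQ's configuration, look like this:

```python
import torch

# Generic examples of inference-oriented settings; not CIQ's actual defaults.
torch.backends.cudnn.benchmark = True          # let cuDNN pick the fastest kernels for fixed input shapes
torch.backends.cuda.matmul.allow_tf32 = True   # allow TF32 matmuls on Ampere and newer GPUs
torch.backends.cudnn.allow_tf32 = True         # allow TF32 in cuDNN convolutions
torch.set_float32_matmul_precision("high")     # trade a little precision for throughput

# Placeholder model and batch, just to show the serving-time pattern.
model = torch.nn.Linear(4096, 4096).cuda().eval()
x = torch.randn(32, 4096, device="cuda")

with torch.inference_mode():                   # disable autograd bookkeeping when serving
    y = model(x)
```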

It also says organisations running inference at scale can achieve higher throughput on existing GPU deployments "from day one". CIQ did not provide benchmark figures, but says it has validated performance gains against use cases.

CIQ also points to scaling economics, arguing that higher throughput from existing hardware can reduce the resources needed to meet output targets, affecting infrastructure requirements across nodes, clusters, and fleets.

CIQ says RLC Pro AI provides a consistent stack and performance profile across environments, spanning the AWS, Google Cloud Platform, and Microsoft Azure public clouds as well as bare metal and sovereign on-premises infrastructure.

Product line

RLC Pro AI is part of the Rocky Linux from CIQ Pro product family, which also includes RLC+NVIDIA, RLC Pro, and RLC Pro Hardened.

CIQ positions itself as the founding support and services partner of Rocky Linux. It also sells tools and platforms around the operating system layer, including Ascender Pro for IT automation, Fuzzball for cloud HPC orchestration, Warewulf Pro for cluster provisioning, and Apptainer, a container system used in high-performance computing.

The announcement also reflects continued activity around "sovereign" infrastructure, meaning deployments that prioritise local control, data residency, and policy compliance. CIQ has highlighted sovereign on-premises infrastructure as one of the environments it targets for inference workloads.

Gregory Kurtzer, CEO of CIQ and founder of Rocky Linux, said operating systems have not kept pace with how organisations use GPU infrastructure in production.

"The OS is where GPU ROI is won or lost, and the industry has ignored it for too long," Kurtzer said. "Organisations are committing hundreds of millions of dollars to GPU infrastructure and running it on operating systems that were never designed for it. RLC Pro AI simplifies and de-risks AI infrastructure investments while driving cutting edge performance and simplicity."

Bjorn Hovland, president of CIQ, said cost pressures on GPU compute are shaping infrastructure decisions across organisations of different sizes.

"GPU compute is the most constrained and expensive resource in AI infrastructure today," Hovland said. "RLC Pro AI gives organizations more from the infrastructure they have already paid for, and those economics hold whether you are a startup running a single GPU node or an enterprise managing a thousand."