
HPE unveils AI Grid to power distributed edge inference

Thu, 19th Mar 2026

HPE has launched HPE AI Grid, a distributed infrastructure package that connects AI factories and far-edge inference sites using Nvidia's AI Grid reference architecture.

The product targets service providers running AI workloads across many locations. It aims to deliver predictable, low-latency connectivity between central compute sites, regional facilities, and edge endpoints, treating distributed inference as a single system.

The offering sits within NVIDIA AI Computing by HPE, a portfolio that combines HPE servers and networking with Nvidia accelerated computing and networking components. The AI Grid brand signals a focus on geographically distributed deployments rather than single-site AI infrastructure.

"We're redefining how AI is delivered by moving intelligence to where data and users live and making the network the dependable fabric for real-time experiences," said Rami Rahim, Executive Vice President, President and General Manager, Networking, HPE. "HPE AI Grid with NVIDIA gives service providers a secure, scalable way to operate distributed inference as a single system-delivering predictable, ultra-low latency performance so customers can innovate faster, reduce risk, and create new services."

What it includes

HPE AI Grid aligns with Nvidia's reference architecture and packages networking, optics, servers, and management tools into a single stack. It uses Juniper assets for routing and wide-area networking, alongside HPE compute platforms for edge and rack deployments.

The networking layer includes HPE Juniper telco-grade multicloud routing and coherent optics, along with cloud-native, multi-tenant security, firewalls, WAN automation, and orchestration. HPE also highlights "zero-touch" deployment and lifecycle operations.

On the compute side, AI Grid uses HPE ProLiant Compute edge and rack servers, configurable with Nvidia RTX PRO 6000 Blackwell GPUs. The stack also includes Nvidia BlueField DPUs, Spectrum-X Ethernet switches, ConnectX SuperNICs, and AI blueprints focused on inference.

Nvidia describes the AI Grid concept as a way to manage where AI workloads run across different infrastructure tiers. "An AI Grid unifies geographically distributed AI clusters to place AI workloads where they run best, balancing performance, cost, and latency across AI factories, regional sites, and the edge," said Chris Penrose, Global Vice President, Telco, NVIDIA. "Together with HPE, we're bringing that vision to life by combining NVIDIA's accelerated computing and networking with HPE's telco-grade multicloud routing and edge infrastructure to create a single, intelligent fabric for distributed inference."
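Neither company has published scheduling details, but the placement idea Penrose describes can be sketched in a few lines of Python. The following is a hypothetical illustration only: the site names, weights, and cost figures are invented, and a production scheduler would weigh far more (data locality, sovereignty, GPU type, congestion).

    # Hypothetical sketch of "place AI workloads where they run best":
    # score candidate sites by weighting latency against cost.
    # All names and figures are illustrative, not HPE or Nvidia specifications.
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        tier: str            # "factory", "regional", or "edge"
        rtt_ms: float        # round-trip latency to the end user
        cost_per_hour: float # fully loaded compute cost
        free_gpus: int       # unallocated accelerator capacity

    def place(workload_gpus: int, max_rtt_ms: float, sites: list[Site],
              w_latency: float = 0.6, w_cost: float = 0.4) -> Site | None:
        """Pick the feasible site with the best latency/cost trade-off."""
        feasible = [s for s in sites
                    if s.free_gpus >= workload_gpus and s.rtt_ms <= max_rtt_ms]
        if not feasible:
            return None  # no tier meets the SLA; caller must queue or degrade
        return min(feasible,
                   key=lambda s: w_latency * s.rtt_ms + w_cost * s.cost_per_hour)

    sites = [
        Site("ai-factory-dub", "factory", rtt_ms=28.0, cost_per_hour=3.10, free_gpus=64),
        Site("regional-cork", "regional", rtt_ms=9.0, cost_per_hour=4.25, free_gpus=8),
        Site("edge-cell-4721", "edge", rtt_ms=2.5, cost_per_hour=6.80, free_gpus=2),
    ]

    # A latency-critical job lands at the edge; a large, latency-tolerant
    # job falls back to the central AI factory.
    print(place(workload_gpus=2, max_rtt_ms=5.0, sites=sites).name)    # edge-cell-4721
    print(place(workload_gpus=16, max_rtt_ms=50.0, sites=sites).name)  # ai-factory-dub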

Service provider uses

HPE is pitching AI Grid to operators that want to run inference close to users and devices. It cites retail personalisation, predictive maintenance in manufacturing, local edge inference in healthcare, and carrier-grade AI services as examples that require low latency and consistent performance.

A central theme is turning existing service provider sites into inference points. Operators can use locations that already have power and connectivity as part of the grid model. HPE also refers to "RAN-ready AI grids," linking the approach to telecom footprints and distributed facilities.

Comcast has begun AI field trials using its distributed network, focused on "real-time edge AI inferencing" across its footprint. Initial trials included HPE ProLiant servers running small language models from Personal AI, part of HPE's Unleash AI partner programme, using Nvidia GPUs. The trial also covered an AI-powered "front desk" service for small businesses.

Operator interest

HPE also cites interest from TELUS and CityFibre, with both framing the technology around edge inference and network-based service delivery.

"HPE and NVIDIA have been strategic partners in building TELUS' Sovereign AI Factory, Canada's fastest and most powerful supercomputer, which is enabling researchers, businesses, and institutions to innovate at scale," said Nazim Benhadid, Executive Vice-president and Chief Technology Officer, TELUS. "As TELUS looks to bring AI closer to customers, advance AI-powered network optimization and deliver faster service, HPE AI Grid powered by NVIDIA is a solution we are interested in exploring further as we continue our transformational AI journey."

CityFibre points to latency expectations and security requirements for customer-facing services. "Our customers increasingly expect millisecond responsiveness, low-latency connectivity and comprehensive security to support their applications and services," said Neil McRae, CTIO, CityFibre. "We're exploring how AI Grid from HPE, based on NVIDIA's reference architecture, could support distributed AI inferencing and bring intelligence closer to users and data. By leveraging our fiber network assets, we see potential to combine high-performance connectivity with intelligent services for customers."

Financing terms

HPE Financial Services is attaching financing offers to the AI Grid push, including 0% financing on networking AIOps software such as HPE Juniper Networking Mist. It is also offering financing that it says provides the equivalent of 10% cash savings on AI-ready networking leases.
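HPE has not disclosed how the 10% figure is calculated. As a purely hypothetical illustration, if the savings were applied against the cash price of the equipment, the arithmetic would look like this (every number invented):

    # Hypothetical reading of "equivalent of 10% cash savings" on a lease.
    # HPE has not published the actual lease mechanics; all figures are invented.
    cash_price = 500_000.00           # up-front cash price of the networking gear
    term_months = 36                  # assumed lease term

    target_total = cash_price * 0.90  # total payments 10% below the cash price
    monthly_payment = target_total / term_months

    print(f"Total lease payments: {target_total:,.2f}")    # 450,000.00
    print(f"Monthly payment:      {monthly_payment:,.2f}")  # 12,500.00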

Pricing and detailed deployment requirements have not been provided. More announcements are expected as field trials progress and operators move from pilots to broader rollouts across regional and edge sites.