Composable Infrastructure Solutions

The H3 Platform Composable Infrastructure solutions enable you to dynamically compose and decompose PCIe/CXL devices to tailor your own composable IT infrastructure, catering to varying application requirements in a dynamic business setting. The main goal of these solutions is to seamlessly allocate the GPU and memory resources each application needs, as the need arises.


Why Composable Infrastructures?

A composable infrastructure provides the flexibility to reconfigure services as needed for dynamically changing workloads. H3 was the first to introduce a PCIe Gen 4 GPU chassis, surpassing Gen 3. Gen 4 doubles the total bandwidth, reaching roughly 32 GB/s in each direction on an x16 link, and also reduces system latency. The increased bandwidth and reduced latency are crucial in providing greater capacity for data-heavy tasks.
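As a rough sanity check on that figure, a PCIe Gen 4 lane signals at 16 GT/s with 128b/130b encoding, so an x16 link delivers about 31.5 GB/s per direction. The short calculation below uses the standard PCIe rates, not H3-specific measurements:

```python
# Back-of-the-envelope PCIe bandwidth per direction (standard PCIe rates).
def pcie_bw_gbps(gt_per_s: float, lanes: int, encoding_efficiency: float) -> float:
    """Usable bandwidth in GB/s for one direction of a PCIe link."""
    return gt_per_s * lanes * encoding_efficiency / 8  # bits -> bytes

gen3 = pcie_bw_gbps(8.0, 16, 128 / 130)   # ~15.8 GB/s per direction
gen4 = pcie_bw_gbps(16.0, 16, 128 / 130)  # ~31.5 GB/s per direction
print(f"Gen 3 x16: {gen3:.1f} GB/s, Gen 4 x16: {gen4:.1f} GB/s")
```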



H3 Platform’s Solution Benefits

H3 Platform’s solutions are tailored in-rack solutions that realise PCIe switch-mediated dynamic allocation of devices, such as AI accelerators, GPUs, storage modules, and NICs, through a central management system. Devices are pooled outside the servers, ready to be assigned or reclaimed on request, enabling flexible resource expansion and allocation that reduces idle rates and TCO.

Using a composable infrastructure can increase your IT efficiency in the following ways:


  • Reliability

    Heat poses a challenge for PCIe 4.0 devices, as higher clock rates generate more heat. To address this, a specially designed chassis provides adequate cooling for optimal performance. Additionally, users can opt to receive email alerts for PCIe and GPU memory errors, enabling timely issue resolution.

  • Manageability

    H3 offers user management functions for various roles within the solution, providing different levels of access based on user needs. The three roles are super admin, user admin, and user.

  • Serviceability

    The GPU chassis is crafted for easy servicing, as component changes are swift and straightforward, requiring no special tools, training, or specialisation.

Product Highlights

1. H3 CXL Memory Expansion Box


The H3 CXL Memory Expansion Box offers memory composability and real-time monitoring. The appliance provides advanced memory management features such as slicing memory into smaller units, sharing memory among hosts, and mapping memory to hosts. It also allows surprise memory add and remove, configurable host memory addresses, and a summary of device assignments. A real-time topology view of the dynamic memory clustering gives insight into memory utilisation and performance.
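As an illustration only, the sketch below shows what composing a memory slice to a host could look like through a REST-style management API. The endpoint paths, payload fields, and MGMT_URL are hypothetical placeholders, not H3 Platform’s documented interface.

```python
import requests

# Hypothetical management endpoint; replace with the appliance's real API.
MGMT_URL = "https://cxl-box.example.local/api/v1"

def assign_memory_slice(host_id: str, size_gib: int) -> dict:
    """Carve a memory slice from the CXL pool and map it to a host (illustrative)."""
    resp = requests.post(
        f"{MGMT_URL}/memory/slices",
        json={"host": host_id, "size_gib": size_gib, "shared": False},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. returns the slice ID and the host memory address it was mapped to
    return resp.json()

if __name__ == "__main__":
    print(assign_memory_slice("host-01", 256))
```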

• Total capacity of up to 4 TB of memory

• Up to 8 (x16) CXL AIC memory modules

• Up to 14 (x8) or 7 (x16) host connections

• Supports CXL memory expansion, pooling, and sharing

H3 CXL Memory Datasheet

2. H3 Falcon 5012 - PCIe Gen5 GPU Solution


Revolutionising system dynamics, the Falcon 5012 introduces advanced GPU management, enabling up to 8 hosts to dynamically share 10 devices on demand. This streamlines resource allocation and saves significant setup time. The solution allows GPU resources to be seamlessly removed from one host and added to another, so that resources can be rearranged intelligently as workloads change.
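For illustration, here is a hypothetical sketch of reassigning a pooled GPU between hosts through a management API; the endpoints and function names are assumptions, not the Falcon 5012’s actual interface.

```python
import requests

# Hypothetical Falcon management endpoint; not the documented H3 API.
MGMT_URL = "https://falcon.example.local/api/v1"

def move_gpu(gpu_id: str, from_host: str, to_host: str) -> None:
    """Release a GPU from one host and assign it to another (illustrative only)."""
    # Detach the device from its current host.
    requests.post(f"{MGMT_URL}/gpus/{gpu_id}/release",
                  json={"host": from_host}, timeout=30).raise_for_status()
    # Attach it to the new host; the PCIe fabric handles the hot-add there.
    requests.post(f"{MGMT_URL}/gpus/{gpu_id}/assign",
                  json={"host": to_host}, timeout=30).raise_for_status()

if __name__ == "__main__":
    move_gpu("gpu-03", from_host="host-02", to_host="host-05")
```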

• GPU dynamic provisioning

• 4U 19” rack-mounted disaggregated GPU solution

• 8 dual slots for GPUs, 4 slots for NICs, and 2 slots for re-timers

• 4 (x16) server host configuration

H3 also offers PCIe Gen 4 solutions: the Falcon 4000 series.

Falcon 5012 Datasheet

Falcon 4000 Datasheet

3. PCIe 5.0 NVMe MR-IOV Solution


PCIe 5.0 NVMe MR-IOV Solution - composable NVMe™ SSD virtual functions to meet the highest performance requirements.

NVMe MR-IOV allows multiple root ports to share an SR-IOV NVMe device. It significantly improves the utilisation and efficiency of SSDs with multi-root capability while preserving SSD I/O performance in a virtualised environment, just as SR-IOV does. Multiple hosts can simultaneously use the Gen 5 NVMe namespaces of the SSDs in the NVMe chassis via virtual functions over the PCIe fabric. This Gen 5 NVMe MR-IOV system comprises 24 E3.S NVMe SSDs and provides up to 12 host connections via external PCIe cables. The SSDs are disaggregated from the host machines and installed in the chassis; the physical functions of these SSDs therefore reside in the chassis rather than with the hosts.

SR-IOV functionality is enabled within the system in the same way. Each SSD exposes 32 virtual functions that are pre-assigned to the connected hosts. Users can then allocate NVM namespaces to any connected host machine by creating the required namespaces and attaching them to the virtual functions assigned to that host.
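To make that workflow concrete, here is a hypothetical sketch of creating a namespace and attaching it to a host’s pre-assigned virtual function via a management API; the endpoint names and fields are illustrative assumptions, not the actual Falcon 5208 interface.

```python
import requests

# Hypothetical chassis management endpoint; not the documented H3 API.
MGMT_URL = "https://falcon5208.example.local/api/v1"

def provision_namespace(ssd_id: str, size_gib: int, host_id: str) -> dict:
    """Create a namespace on a chassis SSD and attach it to a host's VF (illustrative)."""
    # 1. Create the namespace on the SSD's physical function, which lives in the chassis.
    ns = requests.post(f"{MGMT_URL}/ssds/{ssd_id}/namespaces",
                       json={"size_gib": size_gib}, timeout=30)
    ns.raise_for_status()
    ns_id = ns.json()["namespace_id"]

    # 2. Attach the namespace to one of the virtual functions pre-assigned to the host,
    #    after which the host sees it as a local NVMe device.
    attach = requests.post(f"{MGMT_URL}/hosts/{host_id}/attachments",
                           json={"ssd": ssd_id, "namespace_id": ns_id}, timeout=30)
    attach.raise_for_status()
    return attach.json()

if __name__ == "__main__":
    print(provision_namespace("ssd-07", 512, "host-03"))
```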

• Total capacity of up to 360 TB of storage

• Up to 24x E3.S NVMe™ SSDs

• 12 (x8) or 6 (x16) host connections

• Supports PCIe multi-root sharing of NVMe™ SSDs through virtual functions

Falcon 5208 Datasheet