Feature Description
Mellanox PeerDirect™: PeerDirect™ communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 Lx advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
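As a minimal sketch of the idea, assuming rdma-core (libibverbs), the CUDA runtime, and a loaded nvidia-peermem (formerly nv_peer_mem) kernel module, a GPU buffer can be registered directly with the adapter so RDMA traffic bypasses the host-memory staging copy; device selection and error handling are simplified:

```c
/* PeerDirect-style sketch (assumptions: rdma-core verbs, CUDA runtime,
 * nvidia-peermem loaded so the NIC can DMA GPU memory peer-to-peer). */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate the buffer on the GPU instead of host RAM. */
    void *gpu_buf;
    size_t len = 1 << 20;
    cudaMalloc(&gpu_buf, len);

    /* Registering the GPU pointer lets the adapter read/write it over the
     * PCIe bus directly, eliminating the GPU -> CPU staging copy. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (!mr) { perror("ibv_reg_mr on GPU memory"); return 1; }
    printf("GPU buffer registered: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```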
CPU Offload: Adapter functionality that reduces CPU overhead, leaving more CPU cycles available for computation tasks. Includes Open vSwitch (OVS) offload using ASAP²™ (see the sketch after this list):
• Flexible match-action flow tables
• Tunneling encapsulation / decapsulation
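To make the match-action model concrete, here is a purely illustrative C sketch of a flow entry carrying a VXLAN-encapsulation action. The struct names and fields are invented for illustration; this is not the adapter's actual hardware table layout:

```c
/* Conceptual match-action flow entry: what a rule classifies on and the
 * action it applies. Illustration only, not a hardware format. */
#include <stdint.h>
#include <stdio.h>

struct flow_match {              /* fields the rule matches on */
    uint8_t  dst_mac[6];
    uint32_t src_ip, dst_ip;     /* network byte order in practice */
    uint16_t dst_port;
    uint8_t  ip_proto;
};

enum flow_action_type { ACT_FORWARD, ACT_DROP, ACT_VXLAN_ENCAP, ACT_VXLAN_DECAP };

struct flow_action {
    enum flow_action_type type;
    uint32_t vni;                /* VXLAN network identifier for encap */
    uint16_t out_port;           /* egress port for forward/encap */
};

struct flow_entry {
    struct flow_match  match;
    struct flow_action action;
    uint64_t           hits;     /* per-rule counter, as offloads report */
};

int main(void)
{
    /* Example rule: encapsulate matching TCP/80 traffic into VXLAN VNI 42. */
    struct flow_entry e = {
        .match  = { .dst_port = 80, .ip_proto = 6 },
        .action = { .type = ACT_VXLAN_ENCAP, .vni = 42, .out_port = 1 },
    };
    printf("rule: proto=%u dport=%u -> encap vni=%u port=%u\n",
           e.match.ip_proto, e.match.dst_port, e.action.vni, e.action.out_port);
    return 0;
}
```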
Quality of Service (QoS): Support for port-based Quality of Service, enabling applications to meet their latency and SLA requirements.
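Port-based QoS acts on traffic classes; one common way applications participate is by marking packets with a DSCP value that a QoS policy can map to a class. The following is a generic POSIX-sockets sketch, not a ConnectX-specific API; the DSCP value and its mapping to hardware queues are assumptions of the example:

```c
/* Mark a socket's traffic with a DSCP value so port-based QoS (on the
 * adapter or switch) can map it to a traffic class. DSCP 46 (Expedited
 * Forwarding) is a common low-latency marking; the class mapping itself
 * is configuration-dependent. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    int dscp = 46;               /* Expedited Forwarding */
    int tos  = dscp << 2;        /* DSCP occupies the upper 6 bits of TOS */
    if (setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0) {
        perror("setsockopt(IP_TOS)");
        return 1;
    }
    printf("socket marked with DSCP %d (TOS 0x%02x)\n", dscp, tos);
    return 0;
}
```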
Hardware-based I/O Virtualization: ConnectX-4 Lx provides dedicated adapter resources and guaranteed isolation and protection for virtual machines within the server.
Storage Acceleration: A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access.
• NVMe over Fabrics offloads for the target machine
• Erasure Coding
• T10-DIF Signature Handover (guard-tag CRC sketch below)
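As one concrete piece of the T10-DIF handover: each protected block carries a guard tag, a CRC-16 with polynomial 0x8BB7, which the adapter can generate, verify, and strip in hardware. A minimal software reference for that checksum, assuming 512-byte blocks, is sketched below:

```c
/* Software reference for the T10-DIF guard tag: CRC-16, polynomial 0x8BB7,
 * initial value 0, no bit reflection. This is the per-block checksum the
 * adapter computes and validates during signature handover. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint16_t t10dif_crc(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t sector[512];                 /* one 512-byte block */
    for (size_t i = 0; i < sizeof(sector); i++)
        sector[i] = (uint8_t)i;          /* sample data pattern */
    printf("guard tag: 0x%04x\n", t10dif_crc(sector, sizeof(sector)));
    return 0;
}
```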
SR-IOV: ConnectX-4 Lx SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.
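As a minimal sketch, assuming a Linux host and the standard sysfs SR-IOV interface, virtual functions can be enabled by writing the desired count to sriov_numvfs; the interface name eth0 and the VF count are placeholders, and root privileges plus SR-IOV-enabled firmware are required:

```c
/* Enable SR-IOV virtual functions via the standard Linux sysfs interface.
 * "eth0" is a placeholder interface name. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    int num_vfs = 4;

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }

    /* Writing N creates N virtual functions, each of which can be passed
     * through to a VM with its own isolated slice of adapter resources. */
    fprintf(f, "%d\n", num_vfs);
    fclose(f);
    printf("requested %d VFs via %s\n", num_vfs, path);
    return 0;
}
```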
NC-SI: The adapter supports a Network Controller Sideband Interface (NC-SI), MCTP over SMBus, and MCTP over PCIe as a Baseboard Management Controller (BMC) interface.
High-Performance Accelerations:
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing