NVIDIA ConnectX-7 MCX755106AS-HEAT NDR 400Gb/s InfiniBand Smart Adapter
Product Details:
| Brand Name: | Mellanox |
|---|---|
| Model Number: | MCX755106AS-HEAT (900-9x7AH-00788-DTZ) |
| Document: | Connectx-7 infiniband.pdf |
Payment & Shipping Terms:
| Minimum Order Quantity: | 1 piece |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | Supplied per project/batch |
Detailed Information
| Model Number: | MCX755106AS-HEAT (900-9x7AH-00788-DTZ) | Ports: | 2-port |
|---|---|---|---|
| Technology: | InfiniBand | Interface Type: | OSFP56 |
| Dimensions: | 16.7 cm x 6.9 cm | Origin: | India / Israel / China |
| MOQ: | 1 piece | Host Interface: | PCIe Gen5 x16 |
| Highlights: | NVIDIA ConnectX-7 InfiniBand adapter, 400Gb/s Mellanox network card, Smart adapter with NDR support |
Product Description
NVIDIA ConnectX-7 MCX755106AS-HEAT NDR 400Gb/s InfiniBand Smart Adapter
Flagship model: PCIe Gen5 x16, InfiniBand & RoCE, dual-port 400G. An ultra-low-latency 400Gb/s RDMA network adapter engineered for AI factories, HPC clusters, and hyperscale cloud data centers. The ConnectX-7 MCX755106AS-HEAT integrates in-network computing, hardware security offloads, and advanced virtualization to accelerate modern scientific computing and software-defined infrastructures.
Product Overview
The NVIDIA ConnectX-7 family delivers groundbreaking performance with up to 400Gb/s bandwidth per port, supporting both InfiniBand (NDR/HDR/EDR) and Ethernet (up to 400GbE). Model MCX755106AS-HEAT features a PCIe Gen5 host interface (up to x32 lanes), dual-port 400Gb/s density, multi-host capability, and advanced engines for GPUDirect RDMA, NVMe-oF acceleration, and inline cryptography. Built for demanding AI training, simulation, and real-time analytics, this adapter reduces CPU overhead while maximizing data throughput and security.
With on-board memory for rendezvous offload, SHARP collective acceleration, and ASAP2 SDN offloads, ConnectX-7 transforms standard servers into high-performance network nodes with near-zero jitter and nanosecond-precision timing (IEEE 1588v2 Class C).
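To make the software side concrete, here is a minimal, hedged sketch (not taken from NVIDIA documentation) that uses libibverbs from rdma-core to enumerate RDMA devices and print the state of port 1, a quick way to confirm the adapter is visible and its link is up. Device names such as mlx5_0 and the use of port number 1 are assumptions that depend on your system.

```c
/* Minimal libibverbs sketch: enumerate RDMA devices and query port 1.
 * Assumes rdma-core (libibverbs) is installed; compile with -libverbs. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* state, width and speed are reported as raw values here;
             * interpretation depends on the rdma-core version in use. */
            printf("%s: port 1 state=%d active_width=%u active_speed=%u\n",
                   ibv_get_device_name(devs[i]), port.state,
                   (unsigned)port.active_width, (unsigned)port.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```

Compile with something like `gcc query_ports.c -libverbs` (file name illustrative) and run on the host where the adapter is installed.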
Key Features
- NDR InfiniBand & 400GbE ready - Up to 400Gb/s per port, dual-port configuration delivering 800Gb/s aggregate bandwidth; supports NDR, HDR, EDR InfiniBand and 400/200/100/50/25/10GbE.
- PCIe Gen5 x16 (up to x32 lanes) - High-throughput host interface with TLP processing hints, ATS, PASID, and SR-IOV.
- In-Network Computing - Hardware offload of collective operations (SHARP), rendezvous protocol, burst buffer offload.
- GPUDirect RDMA & GPUDirect Storage - Direct GPU-to-NIC data path, accelerating deep learning and data analytics.
- Hardware Security Engines - Inline IPsec/TLS/MACsec encryption/decryption (AES-GCM 128/256-bit) + secure boot with hardware root-of-trust.
- Advanced Storage Acceleration - NVMe-oF (over Fabrics/TCP), NVMe/TCP offload, T10-DIF signature handover, iSER, NFS over RDMA, SMB Direct.
- ASAP2 SDN & VirtIO Acceleration - OVS offload, VXLAN/GENEVE/NVGRE encapsulation, connection tracking, and programmable parser.
- Precision Timing - PTP (IEEE 1588v2) with 12ns accuracy, SyncE, time-triggered scheduling, packet pacing.
Technology: Inside ConnectX-7
Built on 7nm process, ConnectX-7 integrates multiple hardware acceleration engines that offload the CPU and deliver deterministic performance. Key technological pillars include:
- RDMA over Converged Ethernet (RoCE) - Zero-Touch RoCE for low-latency Ethernet fabrics.
- Dynamically Connected Transport (DCT) & XRC - Efficient MPI and HPC communication.
- On-Demand Paging (ODP) & User Memory Registration (UMR) - Simplifies memory management for large-scale applications (see the sketch after this list).
- Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) - In-network data reduction for MPI collectives.
- Multi-Host Technology - Enables up to 4 independent hosts to share a single adapter, optimizing server utilization.
- PLDM & SPDM Manageability - Firmware update, monitoring, and device attestation for enterprise security.
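As a rough illustration of the ODP item above, the following hedged C sketch registers a memory region with the IBV_ACCESS_ON_DEMAND flag via libibverbs. It is a minimal example assuming rdma-core is installed and the first RDMA device supports ODP; the buffer size and access flags are chosen for illustration only.

```c
/* Minimal sketch: register a memory region with On-Demand Paging (ODP).
 * Assumes rdma-core is installed; compile with -libverbs. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "Failed to open device / allocate PD\n");
        return 1;
    }

    size_t len = 1 << 20;        /* 1 MiB buffer, illustrative */
    void *buf = malloc(len);

    /* IBV_ACCESS_ON_DEMAND asks the HCA to fault pages in as needed,
     * so the region does not have to be pinned up front. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_ON_DEMAND);
    if (!mr)
        perror("ibv_reg_mr (ODP may be unsupported on this setup)");
    else
        printf("ODP MR registered: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    if (mr) ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```

If registration fails, the error message is a hint that ODP may not be enabled for this device, transport, or driver build.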
Typical Deployments
- AI & Machine Learning Clusters - Large-scale training with NCCL, UCX, and GPUDirect RDMA.
- HPC Simulation & Research - Weather modeling, genomics, molecular dynamics requiring low-latency MPI.
- Hyperscale Cloud & SDDC - Overlay networking, NFV acceleration, secure multi-tenancy (SR-IOV).
- Enterprise Storage Systems - NVMe-oF target offload and distributed file systems (Lustre, GPUDirect Storage).
- 5G Edge & Telecom - Time-sensitive infrastructures with Class C PTP and MACsec security.
Compatibility
Operating Systems & Virtualization: In-box drivers for Linux (RHEL, Ubuntu, Rocky Linux), Windows Server, VMware ESXi (SR-IOV), and Kubernetes (CNI plugins). Optimized for NVIDIA HPC-X, UCX, OpenMPI, MVAPICH, MPICH, OpenSHMEM, NCCL, and UCC.
Hardware Compatibility: Standard PCIe Gen5 slots (x16 mechanical, x16/x32 electrical). Certified with major server platforms from Dell, HPE, Supermicro, Lenovo, and NVIDIA DGX systems.
Interoperability: Fully compliant with InfiniBand Trade Association Spec 1.5, IEEE 802.3 for Ethernet, and PCI-SIG Gen5 specifications.
Specifications - ConnectX-7 MCX755106AS-HEAT
| Parameter | Detail |
|---|---|
| Product Model | MCX755106AS-HEAT |
| Form Factor | PCIe HHHL (Half Height Half Length), FHHL bracket optional |
| Host Interface | PCIe Gen5.0 x16 (up to 32 lanes, supporting bifurcation & Multi-Host) |
| Network Protocols | InfiniBand (NDR/HDR/EDR) & Ethernet (400GbE, 200GbE, 100GbE, 50GbE, 25GbE, 10GbE) |
| Port Configuration | Dual-port QSFP-DD (2x 400Gb/s NDR, 800Gb/s aggregate) |
| InfiniBand Speeds | NDR 400Gb/s per port, HDR 200Gb/s, EDR 100Gb/s, FDR (compatible) |
| Ethernet Speeds | 400/200/100/50/25/10GbE NRZ/PAM4 |
| On-board Memory | Integrated in-network memory for rendezvous offload & burst buffer |
| Security Offloads | Inline IPsec, TLS, MACsec (AES-GCM 128/256-bit), Secure Boot, Flash Encryption |
| Storage Offloads | NVMe-oF (TCP/Fabrics), NVMe/TCP, T10-DIF, SRP, iSER, NFS over RDMA, SMB Direct |
| Timing & Sync | IEEE 1588v2 PTP (12ns accuracy), SyncE, programmable PPS, time-triggered scheduling |
| Virtualization | SR-IOV, VirtIO acceleration, VXLAN/NVGRE/GENEVE offload, Connection tracking (L4 firewall) |
| Manageability | NC-SI, MCTP over SMBus/PCIe, PLDM (Monitor/Firmware/FRU/Redfish), SPDM, SPI flash, JTAG |
| Remote Boot | InfiniBand boot, iSCSI, UEFI, PXE |
| Power Consumption | Not publicly specified - dual-port high-performance adapter requires adequate airflow; please confirm before ordering |
| Operating Temperature | 0°C to 55°C (with appropriate chassis cooling) |
Note: Some parameters may vary based on firmware and system configuration. Consult NVIDIA documentation or contact Starsurge for specific validation.
Advantages - Built for Modern Data Centers
- Lowest Total Cost of Ownership - Offloads networking, storage, and security tasks from the CPU, lowering power and cooling costs per Gb/s.
- Future-Ready Bandwidth - PCIe Gen5 and NDR 400G per port eliminate bottlenecks for next-gen GPU servers and AI clusters.
- Enterprise Security - Hardware inline encryption (IPsec/TLS/MACsec) and a secure chain of trust help meet compliance requirements (FIPS, DoD).
- Seamless Integration - Full compatibility with major distros, hypervisors, and container orchestration platforms.
Frequently Asked Questions (FAQ)
Q: Is the MCX755106AS-HEAT compatible with both InfiniBand and Ethernet switches?
A: Yes. ConnectX-7 supports dual-protocol operation; you can use it in InfiniBand mode (NDR fabric) or Ethernet mode (RoCE). The adapter auto-detects or can be configured via firmware.
Q: Does this dual-port model support GPUDirect Storage on both ports simultaneously?
A: Absolutely. GPUDirect Storage and GPUDirect RDMA are fully supported across both ports, enabling direct data movement between storage and GPU memory without CPU bounce buffers.
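As a rough illustration of that direct data path, here is a hedged sketch (not taken from NVIDIA sample code) that allocates a CUDA device buffer and registers it with libibverbs so the NIC can DMA straight to GPU memory. It assumes the CUDA toolkit, rdma-core, and the nvidia-peermem (or legacy nv_peer_mem) kernel module are installed; buffer size and device selection are illustrative.

```c
/* Minimal GPUDirect RDMA sketch: register GPU memory directly with the NIC.
 * Build against the CUDA runtime and libibverbs, e.g. linking -lcudart -libverbs. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "Failed to open RDMA device\n");
        return 1;
    }

    /* Allocate a buffer in GPU memory. */
    void *gpu_buf = NULL;
    size_t len = 1 << 20; /* 1 MiB, illustrative */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* With nvidia-peermem loaded, ibv_reg_mr() can register the GPU pointer,
     * letting the adapter DMA to/from GPU memory with no CPU bounce buffer. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr)
        perror("ibv_reg_mr on GPU memory (is nvidia-peermem loaded?)");
    else
        printf("GPU MR registered: lkey=0x%x\n", mr->lkey);

    if (mr) ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

A production GPUDirect RDMA or GPUDirect Storage deployment would normally go through NCCL, UCX, or cuFile rather than raw verbs; the sketch only shows the underlying registration step.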
Q: What PCIe bifurcation options are available for MCX755106AS-HEAT?
A: It supports standard bifurcation (x16, x8/x8) and NVIDIA Multi-Host, allowing up to 4 independent hosts with appropriate PCIe switch platforms.
Q: Can I use this adapter for NVMe/TCP offload?
A: Yes. Hardware offload for NVMe over TCP reduces CPU utilization and improves latency for software-defined storage.
Q: Where can I find the latest firmware and drivers?
A: In-box drivers ship with mainstream Linux kernels, and NVIDIA's MLNX_OFED package provides the full driver stack. NVIDIA's official Mellanox tools (mlxconfig, flint) support firmware updates. Starsurge also provides curated firmware packages.
Precautions & Ordering Notes
- Ensure server motherboard provides PCIe Gen5 slot with adequate cooling (active airflow recommended for 400G operation).
- For Multi-Host configurations, verify platform support and cabling requirements (splitter cables may be needed).
- Optical modules / DAC cables are sold separately; use NVIDIA-qualified transceivers for compliance.
- Maximum power consumption is not published by NVIDIA; typical thermal design assumes a 30W-40W range under heavy load for the dual-port model. Please validate against your chassis thermal guide.
- For cryptographic features (IPsec/TLS offload), additional licensing may be required; please confirm with sales.
Compatibility Matrix (Simplified)
| Component / Ecosystem | Supported | Notes |
|---|---|---|
| NVIDIA DGX H100 / GH200 | Yes | Certified with dual-port configuration |
| VMware vSphere / ESXi | Yes (SR-IOV) | Driver support included |
| Linux kernel 5.x+ | Yes (In-box) | MLNX_OFED recommended |
| Windows Server 2022 | Yes | Native RDMA / RoCE |
| Kubernetes / CNI | Yes | Multus, SR-IOV CNI |
| OpenMPI / MVAPICH | Yes | Optimized for InfiniBand verbs |
Buyer Checklist - ConnectX-7 Adoption
- Check host platform: PCIe Gen5 slot (or Gen4 with backward compatibility, though bandwidth will be limited).
- Confirm cable type: 400G NDR (OSFP/QSFP-DD) or splitter for 2x200G per port.
- Verify cooling: Dual-port high-power adapter requires at least 350 LFM airflow.
- For security features: Confirm if secure boot and crypto offload are needed (HEAT model includes hardware root-of-trust and full inline engines).
- Software stack: MLNX_OFED or inbox driver version compatibility with your kernel/OS.
Want to know more about this product?







