NVIDIA Quantum MQM8700-HS2F 200G InfiniBand Switch 40-Port 16Tb/s
Product Details:

| Brand Name: | Mellanox |
|---|---|
| Model Number: | MQM8700-HS2F (920-9B110-00FH-0MD) |
| Documents: | MQM8700 series.pdf |
Payment & Shipping Terms:

| Minimum Order Quantity: | 1 piece |
|---|---|
| Price: | Negotiable |
| Packaging Details: | Outer carton |
| Delivery Time: | Based on inventory |
| Payment Terms: | T/T |
| Supply Ability: | By project / batch |
Detailed Information:

| Model: | MQM8700-HS2F (920-9B110-00FH-0MD) | Condition: | New and original |
|---|---|---|---|
| Uplink Connectivity: | 200 Gb/s | Keywords: | Mellanox network switch |
| Throughput: | 16 Tb/s | Compliance: | RoHS-6, CE, FCC, ISO, ETS |
| Highlight: | NVIDIA InfiniBand switch 200G, Mellanox network switch 40-port, Quantum MQM8700 switch 16Tb/s | | |
Product Description
High‑performance fixed‑configuration switch delivering 40 ports of 200Gb/s (or 80 ports of 100Gb/s) with 16 Tb/s non‑blocking throughput, in‑network computing acceleration, and ultra‑low latency — purpose‑built for HPC, AI clusters, and hyperscale data centers.
The NVIDIA Quantum QM8700 series (including MQM8700‑HS2F) represents a new class of 200G InfiniBand smart switches, engineered to eliminate bottlenecks in AI, high‑performance computing, and cloud storage environments. With up to forty 200Gb/s ports in a compact 1U form factor, the switch delivers 16 Tb/s aggregate throughput with cut‑through latency below 130 nanoseconds. Built on NVIDIA’s scalable InfiniBand architecture, the QM8700‑HS2F features an embedded x86 dual‑core processor, integrated Subnet Manager (up to 2,000 nodes), and support for NVIDIA SHARP™ technology, accelerating collective operations by offloading communication from servers to the network fabric.
Designed for extreme flexibility, each 200Gb/s QSFP56 port can be split into two independent 100Gb/s ports, doubling the radix for dense top‑of‑rack deployments. The QM8700‑HS2F variant (P2C airflow) is a fully managed switch running MLNX‑OS®, ideal for enterprises seeking high‑performance networking with simple out‑of‑the‑box management, CLI, WebUI, SNMP, and JSON APIs.
- 200Gb/s InfiniBand per port – forty QSFP56 ports supporting 200G or 100G split modes, non‑blocking architecture.
- In‑Network Computing Acceleration – NVIDIA SHARP™ technology enables in‑switch data aggregation, reducing MPI, NCCL, and SHMEM communication time by orders of magnitude.
- High Radix & Split Capability – Convert 40x 200G ports into 80x 100G ports for double‑density topologies without extra switches.
- Advanced Congestion Management – Adaptive routing, static routing, and quality of service (QoS) to eliminate hot spots and maximize effective fabric bandwidth.
- Integrated Subnet Manager – On‑board SM supports up to 2,000 nodes for quick cluster bring‑up; fully managed via MLNX‑OS.
- Redundant & Hot‑Swappable PSU – 1+1 redundant power, 80 Plus Gold certified, ENERGY STAR compliant, with power optimization on partial port usage.
- Comprehensive Management – CLI, WebUI, SNMP, JSON, plus optional UFM™ for external advanced fabric orchestration (MQM8790 variant).
- Backward Compatible – Seamless interoperability with previous InfiniBand generations (EDR, FDR).
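The split-port arithmetic above can be sanity-checked with a short Python sketch (the helper name is ours, purely illustrative, not an NVIDIA API). Note that the headline 16 Tb/s figure counts both directions of the full-duplex links:

```python
# Illustrative helper to check the QM8700 port/bandwidth arithmetic.
# Function name and structure are ours, not a vendor API.

def fabric_capacity(ports: int, gbps_per_port: int, split: bool = False):
    """Return (effective_ports, aggregate_tbps) for a fixed switch,
    optionally splitting each port into two half-speed ports."""
    if split:
        ports, gbps_per_port = ports * 2, gbps_per_port // 2
    # Non-blocking aggregate throughput counts both directions (full duplex).
    aggregate_tbps = ports * gbps_per_port * 2 / 1000
    return ports, aggregate_tbps

print(fabric_capacity(40, 200))              # native mode: 40 x 200G
print(fabric_capacity(40, 200, split=True))  # breakout mode: 80 x 100G
```

Either way the aggregate stays at 16 Tb/s; breakout only trades per-port speed for radix.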
Unlike conventional switches that only forward packets, NVIDIA Quantum switches embed scalable hierarchical aggregation and reduction protocol (SHARP) engines directly in the silicon. Data traversing the switch can be processed — aggregated, reduced, or broadcast — without multiple round‑trips to server endpoints. This dramatically accelerates collective operations like all‑reduce, barrier, and broadcast, which are critical for deep learning frameworks (TensorFlow, PyTorch via NCCL) and MPI‑based HPC simulations. The result is up to 10x performance gains for communication‑intensive workloads and reduced CPU overhead, freeing compute resources for actual application processing.
The QM8700‑HS2F also supports adaptive routing and congestion control algorithms that automatically balance traffic across multiple paths, delivering near‑line‑rate throughput even under high contention.
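To make the benefit of in-network reduction concrete, the sketch below (our own simplified model, not NVIDIA's implementation) compares per-host traffic for an all-reduce under the classic ring algorithm versus switch-side aggregation, where each host sends its contribution once and the fabric returns the reduced result:

```python
# Simplified traffic model for an all-reduce of `msg_mb` megabytes
# across `n` hosts. Illustrative only; real NCCL/SHARP behavior differs.

def ring_allreduce_traffic(n: int, msg_mb: float) -> float:
    """Per-host megabytes sent by the classic ring all-reduce:
    2*(n-1)/n times the message size traverses each host's link."""
    return 2 * (n - 1) / n * msg_mb

def in_network_allreduce_traffic(n: int, msg_mb: float) -> float:
    """With switch-side aggregation (SHARP-style), each host sends its
    contribution once; the switch reduces and broadcasts the result."""
    return msg_mb

for n in (8, 64, 512):
    ring = ring_allreduce_traffic(n, 100.0)
    sharp = in_network_allreduce_traffic(n, 100.0)
    print(f"{n:4d} hosts: ring {ring:6.1f} MB/host, in-network {sharp:6.1f} MB/host")
```

As the host count grows, ring traffic approaches twice the message size per host, while in-network aggregation stays flat — and also removes the multiple latency-bound round trips between hosts.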
- AI & Machine Learning Clusters – Large‑scale GPU‑based systems (NVIDIA DGX, HGX) requiring 200G interconnect with SHARP for NCCL acceleration.
- High‑Performance Computing (HPC) – Research labs, national labs, and universities running MPI workloads, weather simulation, and computational fluid dynamics.
- Hyperscale & Cloud Data Centers – Fat‑tree, DragonFly+, and multi‑dimensional torus topologies for scalable, high‑bisection bandwidth fabrics.
- Enterprise & Financial Services – Ultra‑low latency trading platforms and database acceleration requiring predictable network performance.
- Top‑of‑Rack (ToR) & End‑of‑Row – Double‑density 100Gb/s per server connectivity using split port capability.
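For the fat-tree deployments mentioned above, the standard non-blocking two-tier Clos counts give a quick feel for cluster scale per switch radix. A minimal sketch (textbook formulas, not vendor sizing guidance):

```python
# Back-of-envelope sizing for a non-blocking two-tier fat tree built
# from radix-k switches. Standard Clos/fat-tree counts, illustrative only.

def two_tier_fat_tree(k: int):
    """Return (hosts, leaf_switches, spine_switches) for a non-blocking
    two-tier fat tree of radix-k switches."""
    hosts_per_leaf = k // 2   # half of each leaf's ports face the hosts
    spines = k // 2           # the other half are uplinks, one per spine
    leaves = k                # each spine port connects a distinct leaf
    return hosts_per_leaf * leaves, leaves, spines

print(two_tier_fat_tree(40))  # QM8700 native 200G radix
print(two_tier_fat_tree(80))  # effective radix with 100G breakout
```

With the native radix of 40 this yields 800 hosts at 200 Gb/s; splitting to an effective radix of 80 yields 3,200 hosts at 100 Gb/s from the same switch model.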
The QM8700 series works seamlessly with NVIDIA ConnectX‑6, ConnectX‑7, and BlueField DPU adapters, supporting both InfiniBand and mixed fabrics. It is backward compatible with previous InfiniBand speeds (EDR 100Gb/s, FDR 56Gb/s), fully interoperable with existing NVIDIA Quantum fabric switches, and can be managed via the Unified Fabric Manager (UFM) for telemetry and predictive monitoring. Operating system support includes major Linux distributions (RHEL, Ubuntu, Rocky Linux) and NVIDIA‑certified GPU servers.
| Parameter | Detail |
|---|---|
| Model Number | MQM8700-HS2F |
| Ports & Speed | 40 QSFP56 ports; up to 200Gb/s per port; supports split into 80 ports of 100Gb/s |
| Aggregate Throughput | 16 Tb/s non‑blocking |
| Switching Latency | < 130ns (cut‑through) |
| Management | Fully managed, on‑board x86 dual core CPU (Broadwell ComEx D‑1508 2.2GHz), 8GB system memory; MLNX‑OS, CLI, WebUI, SNMP, JSON, Subnet Manager integrated |
| Power Supply | 1+1 redundant hot‑swappable, 100‑127VAC / 200‑240VAC, 80 Plus Gold, ENERGY STAR |
| Airflow | P2C (port‑to‑power) — MQM8700-HS2F, standard depth |
| Dimensions (HxWxD) | 1.7 x 17 x 23.2 in (43.6 x 433.2 x 590.6 mm), 1U |
| Weight | With 2 PSUs: 12.48 kg / 27.5 lbs |
| Operating Temperature | 0°C to 40°C |
| Certifications | CE, FCC, VCCI, ICES, RCMS, RoHS compliant |
| Warranty | 1 year limited hardware warranty (extendable options available) |
| Orderable Part Number (OPN) | Description | Airflow | Management |
|---|---|---|---|
| MQM8700-HS2F | NVIDIA Quantum 200Gb/s InfiniBand switch, 40 QSFP56, dual AC PSU, x86 dual core, standard depth, P2C airflow, rail kit | P2C (Port to Power) | Managed (MLNX-OS) |
| MQM8700-HS2R | Same as above but C2P airflow (power‑to‑port) | C2P | Managed |
| MQM8790-HS2F | Unmanaged variant, P2C airflow, suitable for UFM external management | P2C | Externally managed (UFM ready) |
| MQM8790-HS2R | Unmanaged, C2P airflow | C2P | Externally managed |
Note: For high‑density 100G deployments using split cables, both managed and unmanaged SKUs provide the same port flexibility. Choose MQM8700‑HS2F for integrated subnet management and full OS access.
- Superior ROI – Reduce capital expenditure with double‑density 100G port capacity and lower switch count for large fabrics.
- Energy Efficient – Dynamic power scaling based on port utilization, lowering operational costs.
- SHARP™ Acceleration – Up to 10x faster collective communications without consuming host CPU cycles.
- Simplified Management – On‑board Subnet Manager eliminates need for external SM servers for clusters up to 2,000 nodes.
- Scalable Topologies – Native support for Fat Tree, DragonFly+, and Torus to future‑proof data center growth.
- Proven Ecosystem – Backed by NVIDIA's comprehensive software stack and 24/7 partner support.
Starsurge Group provides end‑to‑end lifecycle services for NVIDIA Quantum switches, including pre‑sales architecture consulting, proof‑of‑concept testing, and global logistics. Our experienced technical team offers remote troubleshooting, firmware upgrades, and RMA coordination. Warranty extension options and 24x7 priority support available upon request. Multilingual support for EMEA, Americas, and APAC regions ensures rapid response for mission‑critical deployments.
- Ensure ambient operating temperature remains between 0°C and 40°C; maintain proper rack ventilation.
- Use only qualified QSFP56 optics or DAC cables listed in NVIDIA compatibility guide.
- Airflow direction (P2C or C2P) must match data center cooling scheme. MQM8700-HS2F uses P2C (port‑to‑power).
- Power supply must be connected to appropriate AC voltage (100‑240VAC) with grounding.
- Firmware updates: always backup configuration before upgrading via MLNX‑OS.
- Weight ~12.5kg with two PSUs — use proper mechanical lift for rack mounting.
Hong Kong Starsurge Group Co., Limited is a technology‑driven provider of network hardware, IT services, and system integration solutions. Founded in 2008, the company serves customers worldwide with products including network switches, NICs, wireless access points, controllers, cabling, and infrastructure equipment. Backed by an experienced sales and technical team, Starsurge supports industries such as government, healthcare, manufacturing, education, finance, and enterprise.
With a customer‑first approach, Starsurge focuses on reliable quality, responsive service, and tailored solutions. As an authorized partner for leading networking brands, we deliver global logistics, custom software development, and multilingual support — helping clients build efficient, scalable, and dependable network infrastructure.
| Component | Supported Models / Types |
|---|---|
| Adapters | NVIDIA ConnectX‑6, ConnectX‑7, BlueField‑2 / BlueField‑3 InfiniBand |
| Cables & Optics | QSFP56 DAC (passive up to 3m, active up to 5m), AOC, optical transceivers (SR4, LR4) |
| Operating Systems | Linux (RHEL 8/9, Ubuntu 20.04/22.04, Rocky Linux), Windows Server with InfiniBand stack |
| Management Platforms | MLNX-OS, NVIDIA UFM, Prometheus/Grafana via SNMP exporter |
| Topology Support | Fat Tree, DragonFly+, 2D/3D Torus, SlimFly |
- Confirm airflow direction (P2C for MQM8700-HS2F) matches your rack cooling scheme.
- Verify required port speed (200G native or 100G breakout) and cable assembly type.
- Check power input: dual redundant AC with C13/C14 connectors.
- Ensure rack depth supports 23.2 inches (standard depth).
- Plan for Subnet Manager: on‑board SM covers up to 2,000 nodes; larger clusters may need additional SM instances.
- Validate software license requirements for advanced features (UFM optional).
- NVIDIA Quantum QM9700 Series (NDR 400G InfiniBand)
- NVIDIA ConnectX‑6 VPI Adapter Cards (100Gb/s Dual‑port)
- NVIDIA BlueField‑3 DPU for infrastructure acceleration
- Starsurge Custom Rack Integration Kits & QSFP56 Cables (passive/active)
- NVIDIA UFM Telemetry Platform for large‑scale fabric management







