EMC 105-001-013-00 ISILON H/F 2-PORT 40GBE IB VPI CARD

$45.05 (Ex. GST)

In stock: 2

The CX354A features two QSFP ports, each capable of operating in either 40 Gb/s Ethernet mode or 56 Gb/s FDR InfiniBand mode (roughly 54 Gb/s effective data rate after FDR's 64b/66b encoding overhead). The VPI (Virtual Protocol Interconnect) technology allows each port to be configured independently for Ethernet or InfiniBand, providing maximum flexibility in hybrid or multi-protocol environments.
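On a Linux host with the Mellanox Firmware Tools (MFT) package installed, per-port protocol selection on ConnectX-3 VPI cards is typically done with `mlxconfig`. A hedged sketch (the `/dev/mst/...` device path is an example and varies by host):

```shell
# Start the Mellanox Software Tools service and list detected devices
# (run `mst status` to find the actual device path on your host)
mst start
mst status

# Example: set port 1 to Ethernet and port 2 to InfiniBand
# LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet, 3 = VPI/auto-sense
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=1

# A reboot (or driver reload) is required for the new port types to take effect
```

The same `mlxconfig query` command can be used beforehand to inspect the current port configuration.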
In EMC/Isilon deployments, this card was one of the most commonly used cluster interconnect NICs in Isilon H-series (hybrid) and F-series (all-flash) nodes, enabling ultra-fast node-to-node communication across the scale-out NAS cluster. It supports RDMA (Remote Direct Memory Access) over both native InfiniBand and RoCE (RDMA over Converged Ethernet), delivering near-zero CPU overhead and sub-microsecond latencies for data transfers.
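Outside of OneFS, RDMA capability on a card like this can be verified with the standard InfiniBand userspace utilities. A minimal sketch, assuming the `infiniband-diags` and `perftest` packages are installed:

```shell
# List RDMA-capable devices and their link layer (InfiniBand vs. Ethernet/RoCE)
ibv_devinfo | grep -E 'hca_id|link_layer|state'

# On the InfiniBand side, check port state and signaling rate (56 for FDR)
ibstat

# Quick RDMA bandwidth sanity check between two nodes (perftest package):
#   on the server node:  ib_write_bw
#   on the client node:  ib_write_bw <server-address>
```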
The card includes extensive hardware offloads for TCP/UDP/IP stateless processing, large send offload (LSO), receive side scaling (RSS), iSCSI/SMB Direct acceleration, FCoE (Fibre Channel over Ethernet), and advanced virtualization features like SR-IOV (up to 127 virtual functions per port).
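For the SR-IOV feature specifically, enabling virtual functions on a ConnectX-3 under Linux involves both a one-time firmware setting and a driver/kernel step. A hedged sketch (device path, interface name, and VF count are examples):

```shell
# One-time: enable SR-IOV in the adapter firmware (reboot required afterwards)
mlxconfig -d /dev/mst/mt4099_pci_cr0 set SRIOV_EN=1 NUM_OF_VFS=8

# With the mlx4 driver, VFs can be requested at module load time
echo "options mlx4_core num_vfs=8 probe_vf=0" > /etc/modprobe.d/mlx4_sriov.conf

# Or, on newer kernels, via the standard sysfs SR-IOV interface
# (eth2 is an example interface name)
echo 8 > /sys/class/net/eth2/device/sriov_numvfs
```

The resulting virtual functions appear as separate PCIe devices that can be passed through to guests with VFIO/KVM.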

Model / EMC Part Number: 105-001-013-00 (EMC/Isilon branding)
Mellanox Model: CX354A (MCX354A-QCBT or equivalent)
Ports: 2 × QSFP (40 Gb/s Ethernet or 56 Gb/s FDR InfiniBand per port)
Dual-Protocol Support: 40GbE or FDR IB (VPI – each port independently configurable)
PCIe Interface: PCIe 3.0 x8 (backward compatible with PCIe 2.0)
Primary Use: Cluster interconnect in EMC Isilon H-series (hybrid) and F-series (all-flash) nodes
Form Factor: Low-profile PCIe card (half-height bracket; full-height optional)
Cooling: Passive (requires server chassis airflow)
Transceivers/Cables Supported: QSFP+ DAC (direct-attach copper), AOC (active optical), 40GbE SR/LR transceivers, FDR IB cables
Launch Era: Early 2010s (major deployments 2012–2018)
Status (2025–2026): Legacy / End-of-Life – no new production or active support from Dell EMC or NVIDIA (NVIDIA acquired Mellanox in 2020)

Ports: 2 × QSFP cages

Data Rate per Port: 40 Gb/s (Ethernet) or 56 Gb/s FDR InfiniBand (~54 Gb/s effective after 64b/66b encoding)

Aggregate Bandwidth: 80 Gbps (Ethernet mode) / 112 Gbps (IB FDR mode)

Host Interface: PCIe 3.0 x8 (8 GT/s per lane; ~63 Gb/s usable throughput per direction, which bounds sustained dual-port throughput)

RDMA Support: Native InfiniBand RDMA + RoCE v1/v2 (RDMA over Converged Ethernet)

Hardware Offloads: TCP/UDP/IP stateless offloads, LSO, RSS, iSCSI/SMB Direct, FCoE, SR-IOV (up to 127 VFs per port)
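On Linux, the Ethernet-mode stateless offloads listed above can be inspected and toggled with the standard `ethtool` utility. A sketch assuming an example interface name of `eth2`:

```shell
# Show which offloads are currently enabled
ethtool -k eth2 | grep -E 'tcp-segmentation|rx-checksum|tx-checksum|scatter'

# Large send offload is exposed as TCP segmentation offload (TSO) in ethtool
ethtool -K eth2 tso on

# RSS: show the current receive-queue layout, then set the channel count
ethtool -l eth2
ethtool -L eth2 combined 8
```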

Latency: Sub-microsecond with RDMA/RoCE (extremely low in IB FDR mode)

Power Consumption: ~10–15 W typical

Operating Temperature: 0°C to 55°C (commercial grade)

Management/Drivers: Mellanox OFED (Linux), inbox drivers (Windows), VMware ESXi, Isilon OneFS custom integration
