Switching Mellanox ConnectX VPI Ports between Ethernet and InfiniBand

The Mellanox ConnectX-5 VPI adapter supports both Ethernet and InfiniBand port modes; the mode of each port must be set explicitly with the Mellanox firmware tools (mst and mlxconfig), as shown below.

Check Status

# mst status -v
MST modules:
------------
    MST PCI module is not loaded
    MST PCI configuration module is not loaded
PCI devices:
------------
DEVICE_TYPE             MST                           PCI       RDMA            NET                                     NUMA  
ConnectX4(rev:0)        /dev/mst/mt4115_pciconf3      8b:00.0   mlx5_3                                                  1     
ConnectX4(rev:0)        /dev/mst/mt4115_pciconf2      84:00.0   mlx5_2                                                  1     
ConnectX4(rev:0)        /dev/mst/mt4115_pciconf1      0c:00.0   mlx5_1                                                  0     
ConnectX4(rev:0)        /dev/mst/mt4115_pciconf0      05:00.0   mlx5_0                                                  0     

Start MST

# mst start
Starting MST (Mellanox Software Tools) driver set
Loading MST PCI module - Success
Create devices
Unloading MST PCI module (unused) - Success
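
(Optional) Query the current port type before changing it. The device path below is the one from the listing above and should be adjusted to your own system; the output is only an illustration of a port still in InfiniBand mode.

# mlxconfig -d /dev/mst/mt4115_pciconf2 query | grep LINK_TYPE
         LINK_TYPE_P1                        IB(1)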

Change the port type to Ethernet (LINK_TYPE = 2)

# mlxconfig -d /dev/mst/mt4115_pciconf2 set LINK_TYPE_P1=2
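
mlxconfig asks for confirmation, and the new setting only takes effect after the adapter is reset. Reboot the host or, if the tool is available on your system, reset the firmware in place (the device path is the same one used above). On a dual-port VPI card, set LINK_TYPE_P2 as well if the second port should also run Ethernet.

# mlxfwreset -d /dev/mst/mt4115_pciconf2 reset

To switch the port back to InfiniBand later, set the link type to 1 instead:

# mlxconfig -d /dev/mst/mt4115_pciconf2 set LINK_TYPE_P1=1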

After the reboot or firmware reset, check that the port type has changed to Ethernet

# ibdev2netdev
mlx5_0 port 1 ==> ens1np0 (Down)
mlx5_1 port 1 ==> enp12s0np0 (Down)
mlx5_2 port 1 ==> enp132s0np0 (Up)
mlx5_3 port 1 ==> enp139s0np0 (Down)
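
You can also confirm the change from the RDMA side; after the reset, ibstat should report Ethernet as the link layer for that device (illustrative output, trimmed):

# ibstat mlx5_2 | grep "Link layer"
		Link layer: Ethernet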

A relook at InfiniBand and Ethernet Trends on Top500

I have put up an article on the Top500 interconnect trends from the Nvidia perspective. The Next Platform has also published an article that takes a closer look at the InfiniBand and Ethernet trends.

Taken from The Next Platform, “The Eternal Battle Between InfiniBand and Ethernet”:

The penetration of Ethernet rises as the list fans out, as you might expect, with many academic and industry HPC systems not being able to afford InfiniBand or not willing to switch away from Ethernet. And as those service providers, cloud builders, and hyperscalers run Linpack on small portions of their clusters for whatever political or business reasons they have. Relatively slow Ethernet is popular in the lower half of the Top500 list, and while InfiniBand gets down there, its penetration drops from 70 percent in the Top10 to 34 percent in the complete Top500.

Nvidia’s InfiniBand has 34 percent share of Top500 interconnects, with 170 systems, but what has not been obvious is the rise of Mellanox Spectrum and Spectrum-2 Ethernet switches on the Top500, which accounted for 148 additional systems. That gives Nvidia a 63.6 percent share of all interconnects on the Top500 rankings. That is the kind of market share that Cisco Systems used to enjoy for two decades in the enterprise datacenter, and that is quite an accomplishment.

References:

The Eternal Battle Between InfiniBand and Ethernet, The Next Platform