The Intel MPI Library supports multiple, dynamically selectable network fabric device drivers that provide different communication channels between MPI processes. The default communication method uses a built-in TCP (Ethernet, or sockets) device driver. Before the introduction of Intel® MPI Library 4.0, alternative devices were selected on the command line using the I_MPI_DEVICE environment variable. Starting with Intel® MPI Library 4.0, the I_MPI_FABRICS environment variable is used instead, and I_MPI_DEVICE is considered deprecated syntax. The following table lists the network fabric types for I_MPI_FABRICS that are supported by Intel MPI Library 4.0 and its successors:
| Interconnection-Device-Fabric Values for the I_MPI_FABRICS Environment Variable | Description |
|---|---|
| shm | Shared-memory |
| dapl | Network fabrics that support DAPL*, such as InfiniBand*, iWarp*, Dolphin*, and XPMEM* (through DAPL*) |
| tcp | Network fabrics that support TCP/IP, such as Ethernet and InfiniBand* (through IPoIB*) |
| tmi | Network fabrics with tag matching capabilities through the Tag Matching Interface (TMI), such as Qlogic* and Myrinet* |
| ofa | Network fabrics, such as InfiniBand* (through OpenFabrics* Enterprise Distribution (OFED*) verbs), provided by the Open Fabrics Alliance* (OFA*) |
The environment variable I_MPI_FABRICS has the following syntax:
I_MPI_FABRICS=<fabric>|<intra-node fabric>:<inter-node fabric>
where:
<fabric> placeholder can have the values shm, dapl, tcp, tmi, or ofa
<intra-node fabric> placeholder can have the values shm, dapl, tcp, tmi, or ofa
<inter-node fabric> placeholder can have the values dapl, tcp, tmi, or ofa
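As a minimal sketch of this syntax (the process count, host setup, and executable name here are placeholders, not taken from the original text), one might select shared memory within a node and DAPL between nodes as follows:

```bash
# Set the fabrics in the environment: shm intra-node, dapl inter-node
export I_MPI_FABRICS=shm:dapl
mpiexec -n 4 ./my_mpi_app

# Alternatively, pass the variable directly on the mpiexec command line
mpiexec -genv I_MPI_FABRICS shm:tcp -n 4 ./my_mpi_app
```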
The next section provides examples for using the I_MPI_FABRICS environment variable within the mpiexec command line.