Fabric-Based Collective Offload Solution

This blog entry is summarised from the excellent article “Achieving Breakthrough MPI Performance with Fabric Collectives Offload” by Voltaire. It is also a continuation of the earlier entry “Performance Penalty for MPI Communication”.

A. Fabric-based collective offload solution.
There are three principles:

  1. Network Offload –
    Floating-point computation for collective operations is offloaded from the server CPU to the network switch. These operations are small enough to be handled comfortably by the switch CPU and its cache.
  2. Topology-aware orchestration –
    The Fabric Subnet Manager (SM), which has complete knowledge of the fabric's physical topology, ensures that the collective logical tree is laid out to optimise the collective communication accordingly (a sketch of such a logical tree follows this list).
  3. Communication isolation –
    Collective communication is isolated from the rest of the fabric traffic by making use of a VLAN.
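To make the idea of a collective logical tree concrete, here is a minimal, purely illustrative sketch of a binomial-tree sum reduction written with plain MPI point-to-point calls. This is not the FCA mechanism itself; FCA builds an equivalent tree inside the fabric and lets the switch CPU perform the arithmetic, whereas this version runs entirely on the hosts. All names and values are my own.

```c
/* Hand-rolled binomial-tree sum reduction to rank 0 (illustrative only).
   This is the kind of logical tree that a topology-aware manager would
   instead map onto the physical switch hierarchy. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double value = (double)rank;   /* each rank's local contribution */

    /* Binomial tree: at step k, ranks with the k-th bit set send their
       partial sum to the partner (rank - 2^k) and drop out; the partner
       accumulates and continues up the tree. */
    for (int step = 1; step < size; step <<= 1) {
        if (rank & step) {
            MPI_Send(&value, 1, MPI_DOUBLE, rank - step, 0, MPI_COMM_WORLD);
            break;                 /* this rank is done */
        } else if (rank + step < size) {
            double incoming;
            MPI_Recv(&incoming, 1, MPI_DOUBLE, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            value += incoming;
        }
    }

    if (rank == 0)
        printf("tree sum = %f\n", value);

    MPI_Finalize();
    return 0;
}
```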

Adapter-based collective offload

  1. The adapter-based offload approach delegates collective communication management and progress, as well as computation if needed, to the Host Channel Adapter (HCA). This addresses OS noise shielding, but cannot be expected to improve the entire set of collective inefficiencies, such as fabric congestion and topology effects. According to the article, this approach has scalability issues: as the size of the job increases, so does the number of HCA resources used, which in turn increases memory consumption and cache misses, resulting in added latency for the collective operation.

Voltaire Solution

Voltaire uses the fabric-based collective offload approach in its Fabric Collective Accelerator (FCA) software. The solution is composed of a manager that orchestrates the initialisation of the collective communication tree and an MPI library that offloads the computation onto the Voltaire switch CPU. For more details, do look at “Achieving Breakthrough MPI Performance with Fabric Collectives Offload”; you will find very useful graphs and details on this solution there.

PDF Document: Achieving Breakthrough MPI Performance with Fabric Collectives Offload by Voltaire

Performance Penalty for MPI Communication

This blog entry is summarised from the excellent article “Achieving Breakthrough MPI Performance with Fabric Collectives Offload” by Voltaire.

According to the paper:

What are MPI collectives?

  1. MPI is the de facto standard for communication among processes that model a parallel program running on a distributed-memory system.
  2. MPI functions include point-to-point communication between pairs of processes and group (collective) communication between many nodes.
  3. For some collectives, the operation involves a mathematical group operation performed across the results of each process, such as summation or determining the min/max value (see the sketch after this list).
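As a concrete illustration (my own minimal example, not taken from the paper), the snippet below uses MPI_Allreduce, a typical collective, to sum one value per process and to find the minimum across all processes:

```c
/* Minimal illustration of a collective with a mathematical group operation:
   every rank contributes one value, and MPI combines them with MPI_SUM
   and MPI_MIN across the whole communicator. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)(rank + 1);   /* each process's local result */
    double sum, min;

    /* Every rank participates; none can finish until all have contributed. */
    MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(&local, &min, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.1f, min = %.1f\n", sum, min);

    MPI_Finalize();
    return 0;
}
```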

What prohibits x86 cluster application performance scalability?

  1. The cluster’s network and collective operations. A collective operation is group communication that has to wait for all members of the group to participate before it can conclude. In other words, the slowest member will impact the overall performance (the timing sketch after this list illustrates this).
  2. Applications can spend up to 50% to 60% of their time on collectives, and this inefficiency grows as the number of nodes increases.
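The following sketch (my own toy example, with arbitrary sleep times) shows the “slowest member” effect: each rank fakes a different amount of compute before entering the same collective, and the fast ranks end up spending that difference waiting inside the collective.

```c
/* Sketch of the "slowest member" effect: each rank sleeps a different
   amount to emulate load imbalance, then everyone enters MPI_Allreduce.
   Time spent inside the collective is dictated by the slowest rank. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Emulated compute phase: rank r "works" for r * 100 ms. */
    usleep(rank * 100000);

    double local = 1.0, global;
    double t0 = MPI_Wtime();
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double wait = MPI_Wtime() - t0;

    /* Fast ranks report long collective times: they sat waiting for the
       slowest participant before the reduction could complete. */
    printf("rank %d of %d waited %.3f s in the collective\n", rank, size, wait);

    MPI_Finalize();
    return 0;
}
```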

Problems with collective scalability

A. Cluster Hotspot and Congestion

  1. A non-blocking fabric configuration does not eliminate the problem, even though it provides higher I/O throughput. This is because application communication patterns are rarely evenly distributed, so “hot-spots” do occur.
  2. Collective messages are affected by congestion due to the “many-to-one” pattern of group communication and the large number of collective messages travelling over the fabric (see the sketch after this list).
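As an illustration of the many-to-one traffic pattern (again my own example, with an arbitrary buffer size), the snippet below gathers a buffer from every rank onto rank 0, so all messages converge on the links nearest the root at the same time:

```c
/* Sketch of the "many-to-one" traffic pattern behind collective congestion:
   every rank sends a buffer to rank 0 simultaneously via MPI_Gather. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 4096            /* doubles per rank, an illustrative size */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *sendbuf = malloc(CHUNK * sizeof(double));
    for (int i = 0; i < CHUNK; i++)
        sendbuf[i] = (double)rank;

    double *recvbuf = NULL;
    if (rank == 0)
        recvbuf = malloc((size_t)size * CHUNK * sizeof(double));

    /* All ranks transmit towards rank 0 at once: the classic incast
       pattern that creates hot-spots on the links nearest the root. */
    MPI_Gather(sendbuf, CHUNK, MPI_DOUBLE,
               recvbuf, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("rank 0 received %d doubles from %d ranks\n", size * CHUNK, size);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```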

B. Server OS noise

  1. In a non-real-time OS environment, many tasks and events can cause a running process to be context-switched out in favour of other tasks, only returning to the collective operation some time later. This “OS noise” includes hardware interrupts, page faults, swap-ins and preemption of the main program (the jitter sketch after this list shows one way to observe it).
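One common way to see OS noise, not taken from the paper but a standard fixed-work probe I am sketching here with arbitrary loop counts, is to time the same small piece of work many times: any iteration that is much slower than the minimum was interrupted by the OS, and in a collective that delayed rank holds up everyone else.

```c
/* Fixed-work "noise" probe: every iteration does identical work, so a
   large outlier in per-iteration time is jitter injected by the OS
   (interrupts, page faults, preemption). */
#include <mpi.h>
#include <stdio.h>

#define ITERS 10000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    volatile double x = 0.0;
    double min_t = 1e9, max_t = 0.0;

    for (int i = 0; i < ITERS; i++) {
        double t0 = MPI_Wtime();
        for (int k = 0; k < 20000; k++)   /* identical work every iteration */
            x += 1.0e-6;
        double dt = MPI_Wtime() - t0;
        if (dt < min_t) min_t = dt;
        if (dt > max_t) max_t = dt;
    }

    /* A max far above the min indicates the process was descheduled or
       interrupted mid-loop; in a collective, that rank delays everyone. */
    printf("rank %d: min %.6f s, max %.6f s per iteration\n", rank, min_t, max_t);

    MPI_Finalize();
    return 0;
}
```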

For more information on how this MPI performance penalty can be resolved, do look at the blog entry “Fabric-Based Collective Offload Solution” above.