Checking Processes Running on the GPGPU

If you wish to check which processes are running on the GPU, it is quite easy.

watch -n 1 nvidia-smi

Look at the Processes table at the bottom of the output. It shows which GPU is running which process, together with the corresponding PID and process name. Quite useful.
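
If you prefer to pull the same information programmatically, the NVML C API that ships with the NVIDIA driver exposes it as well. The following is only a minimal sketch, assuming nvml.h and the driver's libnvidia-ml library are installed (built with something like gcc list_procs.c -o list_procs -lnvidia-ml, where the file name is just for illustration); error handling is kept to a minimum.

/* Sketch: list compute processes per GPU via NVML.
 * Assumes nvml.h and -lnvidia-ml from the NVIDIA driver are available. */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    unsigned int dev_count, i, p;

    if (nvmlInit() != NVML_SUCCESS)
        return 1;
    nvmlDeviceGetCount(&dev_count);

    for (i = 0; i < dev_count; i++) {
        nvmlDevice_t dev;
        nvmlProcessInfo_t procs[64];   /* up to 64 processes per GPU for this sketch */
        unsigned int nprocs = 64;

        nvmlDeviceGetHandleByIndex(i, &dev);
        /* Compute (CUDA) processes currently using this GPU */
        if (nvmlDeviceGetComputeRunningProcesses(dev, &nprocs, procs) != NVML_SUCCESS)
            continue;

        for (p = 0; p < nprocs; p++) {
            char name[256] = "unknown";
            nvmlSystemGetProcessName(procs[p].pid, name, sizeof(name));
            printf("GPU %u: PID %u (%s), %llu MiB used\n", i, procs[p].pid, name,
                   (unsigned long long)(procs[p].usedGpuMemory >> 20));
        }
    }

    nvmlShutdown();
    return 0;
}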

Getting On Board with Nvidia GPGPU on CentOS KVM

  1. For vGPU testing you’ll need a license, which can be requested here:
    https://www.nvidia.com/object/nvidia-enterprise-account.html
  2. Other documentation for installing vGPU on Red Hat / CentOS is here:
    https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#red-hat-el-kvm-install-configure-vgpu
  3. Virtual GPU Software Quick Start Guide
    https://linuxcluster.wordpress.com/2019/01/28/virtual-gpu-software-quick-start-guide/

In summary, the steps are:
– Install the NVIDIA Virtual GPU Manager software in the host/hypervisor to virtualize the GPUs
– Install the NVIDIA GPU drivers inside the guest OS of the VMs
– Install a license server (FlexNet-based) for licensing
– Configure the license server, and configure the settings within each VM to connect to it


Nvidia Tesla versus Nvidia GTX Cards

References

  1. Performance Comparison between NVIDIA’s GeForce GTX 1080 and Tesla P100 for Deep Learning
  2. Comparison of NVIDIA Tesla/Quadro and NVIDIA GeForce GPUs


Nvidia EULA

The key clause is 2.1.3, which disallows datacenter deployment, commercial hosting, and broadcast services:
http://www.nvidia.com/content/DriverDownload-March2009/licence.php?lang=us&type=GeForce


FP64 64-bit (Double Precision) Floating Point Calculation


Figure taken from “Comparison of NVIDIA Tesla/Quadro and NVIDIA GeForce GPUs” (Reference 2 above)

FP16 16-bit (Half Precision) Floating Point Calculation


Figure taken from “Comparison of NVIDIA Tesla/Quadro and NVIDIA GeForce GPUs” (Reference 2 above)
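
As general IEEE 754 background (not taken from the comparison referenced above), the difference between the two formats is simply the bit budget. A normalized binary floating-point value is

value = (-1)^s × 1.f × 2^(e − bias)

FP64: 1 sign bit, 11 exponent bits (bias 1023), 52 fraction bits, giving roughly 15–16 significant decimal digits.
FP16: 1 sign bit, 5 exponent bits (bias 15), 10 fraction bits, giving roughly 3–4 significant decimal digits and a largest finite value of 65504.

This is why FP64 throughput matters for scientific workloads, while FP16 is mostly of interest for deep learning, where reduced precision is tolerable.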

Developing a Linux Kernel Module using GPUDirect RDMA

Taken from Developing a Linux Kernel Module using GPUDirect RDMA

1.0 Overview

GPUDirect RDMA is a technology introduced in Kepler-class GPUs and CUDA 5.0 that enables a direct path for data exchange between the GPU and a third-party peer device using standard features of PCI Express. Examples of third-party devices are: network interfaces, video acquisition devices, storage adapters.

GPUDirect RDMA is available on both Tesla and Quadro GPUs.

A number of limitations can apply, the most important being that the two devices must share the same upstream PCI Express root complex. Some of the limitations depend on the platform used and could be lifted in current/future products.

A few straightforward changes must be made to device drivers to enable this functionality with a wide range of hardware devices. This document introduces the technology and describes the steps necessary to enable a GPUDirect RDMA connection to NVIDIA GPUs on Linux.


1.1 How GPUDirect RDMA Works

When setting up GPUDirect RDMA communication between two peers, all physical addresses are the same from the PCI Express devices’ point of view. Within this physical address space are linear windows called PCI BARs. Each device has at most six BAR registers, so it can have up to six active 32-bit BAR regions; 64-bit BARs consume two BAR registers. The PCI Express device issues reads and writes to a peer device’s BAR addresses in the same way that they are issued to system memory.

Traditionally, resources like BAR windows are mapped to user or kernel address space using the CPU’s MMU as memory mapped I/O (MMIO) addresses. However, because current operating systems don’t have sufficient mechanisms for exchanging MMIO regions between drivers, the NVIDIA kernel driver exports functions to perform the necessary address translations and mappings.
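
To make the first half of that paragraph concrete, this is roughly how a Linux PCI driver normally locates and maps one of its own BARs as MMIO. It is only a hypothetical sketch using the standard kernel PCI API (pci_resource_start(), pci_resource_len(), pci_iomap()); the probe function name and the choice of BAR 0 are made up for illustration.

/* Sketch: conventional MMIO mapping of a device's own BAR in a PCI driver. */
#include <linux/pci.h>
#include <linux/io.h>

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        resource_size_t start, len;
        void __iomem *regs;
        int err;

        err = pci_enable_device(pdev);
        if (err)
                return err;

        /* Physical bus address and size of BAR 0 ... */
        start = pci_resource_start(pdev, 0);
        len   = pci_resource_len(pdev, 0);
        pr_info("BAR0: start=%pa len=%pa\n", &start, &len);

        /* ... mapped into the kernel's virtual address space as MMIO */
        regs = pci_iomap(pdev, 0, len);
        if (!regs)
                return -ENOMEM;

        return 0;
}

GPUDirect RDMA exists precisely because a third-party driver cannot obtain a peer GPU’s BAR mappings this way on its own; the NVIDIA kernel driver has to hand them over through the exported functions mentioned above.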

To add GPUDirect RDMA support to a device driver, a small amount of address mapping code within the kernel driver must be modified. This code typically resides near existing calls to get_user_pages().

The APIs and control flow involved with GPUDirect RDMA are very similar to those used with standard DMA transfers.
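
As an illustration of the kind of change involved, the sketch below pins a range of GPU virtual memory with nvidia_p2p_get_pages() from the NVIDIA driver’s nv-p2p.h, which is the API the GPUDirect RDMA documentation describes for this purpose. Everything apart from the nvidia_p2p_* calls (structure names, callback contents, error handling) is simplified and made up for illustration, and newer drivers deprecate the token/VA-space arguments, which is why zeros are passed here.

/* Sketch: pinning GPU memory for third-party DMA via the GPUDirect RDMA API. */
#include <linux/module.h>
#include <linux/kernel.h>
#include "nv-p2p.h"

struct example_mapping {
        u64 gpu_va;                               /* GPU virtual address (GPU-page aligned) */
        u64 len;                                  /* length of the pinned region */
        struct nvidia_p2p_page_table *page_table; /* filled in by the NVIDIA driver */
};

/* Invoked by the NVIDIA driver if it has to revoke the mapping (for example
 * when the owning CUDA context is destroyed); the pages must not be used
 * after this callback runs. */
static void example_free_callback(void *data)
{
        struct example_mapping *m = data;

        nvidia_p2p_free_page_table(m->page_table);
        m->page_table = NULL;
}

static int example_pin_gpu_memory(struct example_mapping *m)
{
        int ret;
        u32 i;

        ret = nvidia_p2p_get_pages(0, 0, m->gpu_va, m->len,
                                   &m->page_table,
                                   example_free_callback, m);
        if (ret)
                return ret;

        /* Each entry holds a physical address that a third-party PCIe device
         * can target directly with DMA reads and writes. */
        for (i = 0; i < m->page_table->entries; i++)
                pr_info("GPU page %u at physical 0x%llx\n", i,
                        (unsigned long long)m->page_table->pages[i]->physical_address);

        return 0;
}

static void example_unpin_gpu_memory(struct example_mapping *m)
{
        if (m->page_table)
                nvidia_p2p_put_pages(0, 0, m->gpu_va, m->page_table);
}

The control flow mirrors an ordinary get_user_pages()-based DMA path: pin the memory, program the device with the returned physical addresses, and release with nvidia_p2p_put_pages() when done (or clean up in the free callback if the GPU side goes away first).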

References:

Read more at: http://docs.nvidia.com/cuda/gpudirect-rdma/index.html