NVIDIA Special Address at SIGGRAPH 2021

NVIDIA and SIGGRAPH share a long history of innovation and discovery. Over the last 25 years our community has seen giant leaps forward, driven by brilliant minds and curious explorers. We are now at the opening moments of an AI-powered revolution in computer graphics, with massive advancements in rendering, AI, simulation, and compute technologies across every industry. With open standards and connected ecosystems, we are on the cusp of a new way to interact and exist in shared virtual worlds.

NVIDIA Special Address | MWC Barcelona 2021

In a special address at MWC Barcelona 2021, NVIDIA announced its partnership with Google Cloud to create the industry’s first AI-on-5G open innovation lab that will speed AI application development for 5G network operators.

Additional announcements included:

  ● Extending the 5G ecosystem with Arm CPU cores on NVIDIA BlueField-3 DPUs
  ● Launching NVIDIA CloudXR 3.0 with bidirectional audio for remote collaboration

Performance Required for Deep Learning

There is a question I wanted to answer about deep learning: which system, network, and protocol capabilities actually speed up training and/or inferencing? The two workloads do not necessarily need the same level of resources. The points below come from an NVIDIA presentation; a short illustrative sketch follows each list.

Training:

  1. Scalability requires ultra-fast networking
  2. Same hardware needs as HPC
  3. Extreme network bandwidth
  4. RDMA
  5. SHARP (Mellanox Scalable Hierarchical Aggregation and Reduction Protocol)
  6. GPUDirect (https://developer.nvidia.com/gpudirect)
  7. Fast Access Storage
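
To make the training items concrete, here is a minimal data-parallel training sketch in Python. PyTorch, its NCCL backend, and the torchrun launcher are my assumptions and are not named in the presentation; the point is that on suitable hardware the gradient all-reduce in such a loop is exactly where RDMA, GPUDirect, and SHARP-capable switches pay off.

```python
# Minimal sketch: multi-GPU data-parallel training with PyTorch's NCCL backend.
# NCCL is an assumption here, not something the presentation specified. On
# suitable hardware its collectives use RDMA and GPUDirect, and can offload
# reductions to SHARP-capable InfiniBand switches.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model; DDP all-reduces its gradients across all GPUs each step.
    model = DDP(nn.Linear(1024, 10).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(64, 1024, device=local_rank)   # stand-in for real data
        y = torch.randint(0, 10, (64,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()      # gradient all-reduce happens over the network here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=8 train.py` (adding `--nnodes` and rendezvous arguments for multi-node runs), the per-step gradient exchange is what demands the extreme network bandwidth listed above, while the data pipeline feeding `x` and `y` is where fast-access storage matters.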

Inferencing:

  1. Highly Transactional
  2. Ultra-low Latency
  3. Instant Network Response
  4. RDMA
  5. PeerDirect, GPUDirect
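
For inferencing, the emphasis shifts from bandwidth to per-request latency. Below is a minimal latency-measurement sketch, again assuming PyTorch (the same idea applies to TensorRT or any other runtime); the batch-size-1 request and the pinned-memory copy reflect the "highly transactional" and "ultra-low latency" points above, and the copy path is the host-side analogue of what PeerDirect/GPUDirect provide for NIC-to-GPU transfers.

```python
# Minimal sketch: measuring single-request inference latency on one GPU.
# PyTorch is an assumption here; the technique (warm-up, CUDA events,
# pinned-memory copies) carries over to other runtimes.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).cuda().eval()

# Pinned host memory enables asynchronous DMA copies to the GPU,
# trimming per-request transfer latency.
request = torch.randn(1, 1024).pin_memory()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

with torch.inference_mode():
    for _ in range(10):                              # warm-up iterations
        model(request.cuda(non_blocking=True))
    torch.cuda.synchronize()

    start.record()
    output = model(request.cuda(non_blocking=True))  # one transactional request
    end.record()
    torch.cuda.synchronize()

print(f"single-request latency: {start.elapsed_time(end):.3f} ms")
```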