This new 100-qubit quantum processor is built with ultracold atoms

Image: ColdQuanta

By cooling atoms down to near absolute zero and then controlling them with lasers, a company has successfully created a 100-qubit quantum processor that compares to the systems developed by leading quantum players to date. ColdQuanta, a US-based company that specializes in the manipulation of cold atoms, unveiled the new quantum processor unit, which will form the basis of the company’s 100-qubit gate-based quantum computer, code-named Hilbert, launching later this year after final tuning and optimization work. There are various different approaches to quantum computing, and among those that have risen to prominence in the last few years are superconducting systems, trapped ions, photonic quantum computers and even silicon spin qubits.

ZDNet “Quantum computing: This new 100-qubit processor is built with atoms cooled down near to absolute zero”

The article can be found here: “Quantum computing: This new 100-qubit processor is built with atoms cooled down near to absolute zero”

Resolving GNU MP not found on CentOS 7

If you are installing a package such as scDblFinder via BiocManager:

BiocManager::install("scDblFinder")

You may encounter an error like

configure: error: GNU MP not found, or not 4.1.4 or up, see http://gmplib.org

The fix is obvious from the error message. If you are using CentOS 7, you can easily fix it via yum:

% yum install gmp-devel
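
After installing the headers, you can confirm the package is present and retry the Bioconductor install. A minimal sketch from the shell (the Rscript one-liner is just one way to re-run the install that failed):

% rpm -q gmp-devel                                    # confirm the GNU MP development headers are installed
% Rscript -e 'BiocManager::install("scDblFinder")'    # re-run the install that previously failed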

Compiling R-4.1.0 with GNU

The R Project for Statistical Computing

Prerequisites

gnu-6.5
m4-1.4.18
gmp-6.1.0
mpfr-3.1.4
mpc-1.0.3
isl-0.18
gsl-2.1

Compiling PCRE first is important, or you will face an error like

configure: error: PCRE2 library and headers are required, or use --with-pcre1 and PCRE >= 8.32 with UTF-8 support
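
If you need to build PCRE yourself, a minimal sketch of the PCRE1 build is shown below. The /usr/local/pcre-8.42 prefix matches the path used in the R configure line further down; the tarball name assumes you have already downloaded the pcre-8.42 sources, and R needs PCRE built with UTF-8 support, hence the two --enable flags.

% tar -xzf pcre-8.42.tar.gz
% cd pcre-8.42
% ./configure --prefix=/usr/local/pcre-8.42 --enable-utf --enable-unicode-properties
% make -j 8
% make install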

After you have compiled PCRE, you can proceed with the compilation of R-4.1.0

% ./configure --prefix=/usr/local/R-4.1.0 --with-pcre1=/usr/local/pcre-8.42 --with-blas --with-lapack --enable-R-shlib

If there are no issues, the configure summary should look like this:

R is now configured for x86_64-pc-linux-gnu

  Source directory:            .
  Installation directory:      /usr/local/R-4.1.0

  C compiler:                  gcc  -g -O2
  Fortran fixed-form compiler: gfortran  -g -O2

  Default C++ compiler:        g++ -std=gnu++14  -g -O2
  C++11 compiler:              g++ -std=gnu++11  -g -O2
  C++14 compiler:              g++ -std=gnu++14  -g -O2
  C++17 compiler:
  C++20 compiler:
  Fortran free-form compiler:  gfortran  -g -O2
  Obj-C compiler:              gcc -g -O2 -fobjc-exceptions

  Interfaces supported:        X11
  External libraries:          pcre1, readline, BLAS(generic), LAPACK(generic), curl
  Additional capabilities:     PNG, JPEG, NLS, ICU
  Options enabled:             shared R library, R profiling

  Capabilities skipped:        TIFF, cairo
  Options not enabled:         shared BLAS, memory profiling

  Recommended packages:        yes

Make and Make Install the Files

% make -j 8
.....
gcc -I"/home/user1/Downloads/R-4.1.0/include" -DNDEBUG   -I/usr/local/include   -fpic  -g -O2  -c anova.c -o anova.o
gcc -I"/home/user1/Downloads/R-4.1.0/include" -DNDEBUG   -I/usr/local/include   -fpic  -g -O2  -c anovapred.c -o anovapred.o
gcc -I"/home/user1/Downloads/R-4.1.0/include" -DNDEBUG   -I/usr/local/include   -fpic  -g -O2  -c branch.c -o branch.o
gcc -I"/home/user1/Downloads/R-4.1.0/include" -DNDEBUG   -I/usr/local/include   -fpic  -g -O2  -c bsplit.c -o bsplit.o
gcc -I"/home/user1/Downloads/R-4.1.0/include" -DNDEBUG   -I/usr/local/include   -fpic  -g -O2  -c choose_surg.c -o choose_surg.o
.....
% make install
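
Once the install completes, a quick way to check the build is to put the new prefix on your PATH and ask R for its version. This is just a sketch; adjust the prefix if you configured a different one.

% export PATH=/usr/local/R-4.1.0/bin:$PATH    # make the new build the first R on the PATH
% R --version                                 # should report R version 4.1.0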

References:

  1. Compiling R by Toby Dylan

A relook at InfiniBand and Ethernet Trends on Top500

I have put up an article on the Top500 interconnect trends from the Nvidia perspective. There is another article, put up by The Next Platform, that takes a closer look at the InfiniBand and Ethernet trends.

Taken from The Next Platform “The Eternal Battle Between Infiniband and Ethernet”

The penetration of Ethernet rises as the list fans out, as you might expect, with many academic and industry HPC systems not being able to afford InfiniBand or not willing to switch away from Ethernet, and as those service providers, cloud builders, and hyperscalers run Linpack on small portions of their clusters for whatever political or business reasons they have. Relatively slow Ethernet is popular in the lower half of the Top500 list, and while InfiniBand gets down there, its penetration drops from 70 percent in the Top10 to 34 percent in the complete Top500.

Nvidia’s InfiniBand has 34 percent share of Top500 interconnects, with 170 systems, but what has not been obvious is the rise of Mellanox Spectrum and Spectrum-2 Ethernet switches on the Top500, which accounted for 148 additional systems. That gives Nvidia a 63.6 percent share of all interconnects on the Top500 rankings. That is the kind of market share that Cisco Systems used to enjoy for two decades in the enterprise datacenter, and that is quite an accomplishment.


References:

The Eternal Battle Between Infiniband and Ethernet

Quantum Computing just got desktop sized

Quantum computing is coming on leaps and bounds. Now there’s an operating system available on a chip, thanks to a Cambridge University-led consortium whose vision is to make quantum computers as transparent and well known as the Raspberry Pi. This “sensational breakthrough” is likened by the Cambridge Independent Press to the moment during the 1960s when computers shrank from being room-sized to sitting on top of a desk. Around 50 quantum computers have been built to date, and they all use different software – there is no quantum equivalent of Windows, iOS or Linux. The new project will deliver an OS that allows the same quantum software to run on different types of quantum computing hardware.

Redshark “Quantum Computing just got desktop sized”

For more information, do take a look at Quantum Computing just got desktop sized

IBM researchers demonstrate the quantum advantage over classical computing

IBM researchers have finally proven in a real-world experiment that quantum computers are superior to classical devices – although for now, only at a miniature scale. 

Big Blue’s quantum team set out to discover if today’s quantum devices, despite their limitations, could be used to complete a task that cannot be done on a classical system.  

Since quantum computing is still in its infancy, the researchers leveled the playing field between the two methods by designing a microscopic experiment with limited space – that is, a limited amount of available memory.

ZDNet

For more Information, see IBM researchers demonstrate the advantage that quantum computers have over classical computers

Top 500 Interconnect Trends

Published twice a year and publicly available at www.top500.org, the TOP500 supercomputing list ranks the world’s most powerful computer systems according to the Linpack benchmark rating system.

Taken from Nvidia Networking

Summary of Findings for Nvidia Networking.

  • NVIDIA GPU or Network (InfiniBand, Ethernet) accelerate 342 systems or 68% of overall TOP500 systems
  • InfiniBand accelerates seven of the top ten supercomputers in the world
  • NVIDIA BlueField DPU and HDR InfiniBand Networking accelerate the world’s 1st academic cloud-native supercomputer at Cambridge University
  • NVIDIA InfiniBand and Ethernet networking solutions connect 318 systems or 64% of overall TOP500 platforms
  • InfiniBand accelerates 170 systems, 21% growth compared to June 2020 TOP500 list
  • InfiniBand accelerates #1, #2 supercomputers in the US, #1 in China, #1, #2 and #3 in Europe
  • NVIDIA 25 gigabit and faster Ethernet solutions connect 62% of total Ethernet systems

Rapid Growth in HPC Storage

The article is taken from On-Prem No Longer Centre Stage for Broader HPC Storage

AI/ML, more sophisticated analytics, and larger-scale HPC problems all bode well for the on-prem storage market in high performance computing (HPC) and are an even bigger boon for cloud storage vendors.

Nossokoff points to several shifts in the storage industry and among the top supercomputing sites, particularly in the U.S. that reflect changing priorities with storage technologies, especially with the mixed file problems AI/ML introduce into the traditional HPC storage hierarchy. “We’re seeing a focus on raw sequential large block performance in terms of TB/s, high-throughput metadata and random small-block IOPS performance, cost-effective capacity for increasingly large datasets in all HPC workloads, and work to add intelligent placement of data so it’s where it needs to be.”

In addition to keeping pace with the storage tweaks to suit AI/ML as well as traditional HPC, there have been shifts in the vendor ecosystem this year as well. These will likely have an impact on what some of the largest HPC sites do over the coming years as they build and deploy their first exascale machines. Persistent memory is becoming more common, and companies like Samsung are moving from NVMe to CXL, which is an indication of where that might fit in the future HPC storage and memory stack. Companies like Vast Data, which were once seen as up-and-coming players in the on-prem storage hardware space for HPC, have transformed into software companies, Nossokoff says.

On-Prem No Longer Centre Stage for Broader HPC Storage – NextPlatform

UDP Tuning to maximise performance

There is an interesting article on how your UDP traffic can achieve maximum performance with a few tweaks. The article is taken from UDP Tuning

The most important factors mentioned in the article are listed below, with a small tuning sketch after the list:

  • Use jumbo frames: performance will be 4-5 times better using 9K MTUs
  • Packet size: best performance is MTU size minus packet header size. For example, for a 9000-byte MTU, use 8972 for IPv4 and 8952 for IPv6.
  • Socket buffer size: for UDP, buffer size is not related to RTT the way it is for TCP, but the defaults are still not large enough. Setting the socket buffer to 4M seems to help a lot in most cases.
  • Core selection: UDP at 10G is typically CPU limited, so it’s important to pick the right core. This is particularly true on Sandy/Ivy Bridge motherboards.
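
As a rough sketch of what those tweaks look like on a Linux host (the interface name eth0 and the iperf3 test line are illustrative assumptions, not taken from the article):

% ip link set dev eth0 mtu 9000              # enable jumbo frames; the switch and NIC must support a 9K MTU
% sysctl -w net.core.rmem_max=4194304        # allow 4M receive socket buffers
% sysctl -w net.core.wmem_max=4194304        # allow 4M send socket buffers
% iperf3 -u -b 0 -l 8972 -A 2 -c <receiver>  # UDP test with 8972-byte payloads (IPv4), pinned to core 2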

Do take a look at the article UDP Tuning

AMD HPC User Forum Networking Meeting at ISC21

AMD HPC User Forum Networking Meeting

For more information, see AMD HPC User Forum Networking Meeting

Wednesday, June 30, 2021
7:00am – 8:15am (PDT)
10:00am – 11:15am (EDT)
4:00pm – 5:15pm (CEST)

To Register Live at ISC21: AMD HPC User Forum Networking Meeting – Registration (eventscloud.com)

  • 7:00 – 7:15 am: Opening Remarks
    • Mike Norman, PhD, Director, SDSC
    • Brad McCredie, PhD, Corporate Vice President, AMD
  • 7:15 – 7:20 am: Introduction of Forum
    • Mary Thomas, PhD, AMD User Forum President, Computational Data Scientist, SDSC
  • 7:20 – 7:50 am: Forum Members discuss their work and value of Forum
    • Mahidhar Tatineni, PhD, SDSC, (User Forum Special Interest Group)
    • Alastair Basden, PhD, HPC Technical Manager, Durham University
    • Lorna Smith, Programme Manager, EPCC, University of Edinburgh
    • Sagar Dolas, Program Lead – Future Computing & Networking, Surf
    • Hatem Ltaief, PhD, Principal Research Scientist, KAUST
    • Marc O’Brien, Cancer Research UK Cancer Institute, Cambridge University
  • 7:50 – 8:00 am: Q/A