One Hundred Year Study on Artificial Intelligence, or AI100

A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications, and even abuses, of AI technology.

“In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives,” Brown University computer scientist Michael Littman, who chaired the report panel, said in a news release.

“That’s really exciting, because this technology is doing some amazing things that we could only dream about five or ten years ago,” Littman added. “But at the same time the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks.”

Those risks include deep-fake images and videos that are used to spread misinformation or harm people’s reputations; online bots that are used to manipulate public opinion; algorithmic bias that infects AI with all-too-human prejudices; and pattern recognition systems that can invade personal privacy by piecing together data from multiple sources.

The report says computer scientists must work more closely with experts in the social sciences, the legal system and law enforcement to reduce those risks.

Intel Ponte Vecchio playing Catch Up with AMD and Nvidia

Intel recently announced details of its forthcoming data center GPU, the Xe HPC, code-named Ponte Vecchio (PVC). Intel daringly implied that the peak performance of the PVC GPU would be roughly twice that of today’s fastest GPU, the Nvidia A100. PVC and Sapphire Rapids (the multi-tile next-gen Xeon) are being used to build Aurora, Argonne National Laboratory’s exascale supercomputer, in 2022, so this technology should finally be just around the corner.

Intel is betting on this first-generation data center GPU for HPC to finally catch up with Nvidia and AMD, both for HPC (64-bit floating point) and AI (8- and 16-bit integer and 16-bit floating point). The Xe HPC device is a multi-tile, multi-process-node package with new GPU cores, HBM2e memory, a new Xe Link interconnect, and PCIe Gen 5, implemented with over 100 billion transistors. That is nearly twice the size of the 54-billion-transistor Nvidia A100 chip. At that size, power consumption could be an issue at high frequencies. Nonetheless, the Xe design clearly demonstrates that Intel gets it: packaging smaller dies helps reduce development and manufacturing costs, and can improve time to market.

Intel Lays Down The Gauntlet For AMD And Nvidia GPUs by Forbes

No MEAM parameter file in pair coefficients Errors in LAMMPS

If you are encountering an error like the one below, you may want to check how LAMMPS locates potential files.

ERROR: No MEAM parameter file in pair coefficients (../pair_meamc.cpp:243)

When a pair_coeff command using a potential file is specified, LAMMPS looks for the potential file in two places. First it looks in the location specified. E.g. if the file is specified as “niu3.eam”, it is looked for in the current working directory. If it is specified as “../potentials/niu3.eam”, then it is looked for in the potentials directory, assuming it is a sister directory of the current working directory. If the file is not found, it is then looked for in one of the directories specified by the LAMMPS_POTENTIALS environment variable. Thus if this is set to the potentials directory in the LAMMPS distribution, you can use those files from anywhere on your system, without copying them into your working directory. Environment variables are set in different ways for different shells. Here is an example setting for bash:

 export LAMMPS_POTENTIALS=/path/to/lammps/potentials

For more information, do read LAMMPS Documentation https://docs.lammps.org/stable/pair_coeff.html
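To make the lookup order concrete, here is a minimal shell sketch that mimics LAMMPS's two-step search. The demo directory and the niu3.eam file are created purely for illustration, and the resolve_potential function name is mine, not part of LAMMPS:

```shell
# Mimic LAMMPS's potential-file lookup: the specified location first,
# then the directory named by LAMMPS_POTENTIALS.
demo=$(mktemp -d)
mkdir -p "$demo/potentials"
touch "$demo/potentials/niu3.eam"
export LAMMPS_POTENTIALS="$demo/potentials"

resolve_potential() {
    if [ -f "$1" ]; then
        echo "$1"                          # found in the specified location
    elif [ -f "$LAMMPS_POTENTIALS/$1" ]; then
        echo "$LAMMPS_POTENTIALS/$1"       # found via the environment variable
    else
        echo "potential file $1 not found" >&2
        return 1
    fi
}

resolve_potential niu3.eam   # prints the path under $LAMMPS_POTENTIALS
```

If neither location yields the file, LAMMPS reports an error such as the MEAM one above, so checking both the working directory and LAMMPS_POTENTIALS is usually the quickest diagnosis.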

Supporting Science with HPC

Article is taken from Supporting Science with HPC from Scientific-Computing

HPC integrators can help scientists and HPC research centres through the provisioning and management of HPC clusters. As the number of applications and potential user groups for HPC continues to expand, supporting domain-expert scientists’ use of and access to HPC resources is increasingly important.

While just ten years ago a cluster would have been used by only a few departments at a university, there is now a huge pool of potential users from non-traditional HPC applications. These include artificial intelligence (AI) and machine learning (ML), as well as big data and advanced analytics applied to data sets from research areas that previously had little interest in HPC systems.

This culminates in a growing need to support and facilitate the use of HPC resources in academia and in research and development. Organisations can either employ staff to support this infrastructure themselves, or outsource some or all of these processes to companies experienced in the management and support of HPC systems.

KIM_SimulatorHeaders.h: No such file or directory during compilation of lammps

You may encounter an error such as the one below during compilation of LAMMPS, as set up in Compiling LAMMPS-15Jun20 with GNU 6 and OpenMPI 3.

After running the command,

% make g++_openmpi -j 16

you may encounter

In file included from ../style_pair.h:112:0,
                 from ../force.cpp:20:
../pair_kim.h:70:34: fatal error: KIM_SimulatorHeaders.h: No such file or directory
 #include "KIM_SimulatorHeaders.h"

This is because a header file cannot be found by /usr/local/lammps-29Oct20/src/pair_kim.h. At line 70 of that file, point the include at the actual location of KIM_SimulatorHeaders.h:

/* #include "KIM_SimulatorHeaders.h" */
#include "/usr/local/lammps-29Oct20/lib/kim/kim-api-2.1.3/build/installed-kim-api-2.1.3/include/kim-api/KIM_SimulatorHeaders.h"

Again, after running the command,

% make g++_openmpi -j 16

you may encounter

mpicxx -g -O3  -DLAMMPS_GZIP -I../../lib/colvars -DLMP_USER_OMP -I../../lib/voronoi/includelink -DLMP_PYTHON -I../../lib/poems -DLMP_MPIIO -I../../lib/message/cslib/src -I../../lib/latte/includelink -DLMP_KOKKOS  -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1     -I/usr/include/python2.7 -I/usr/include/python2.7   -I./ -I../../lib/kokkos/core/src -I../../lib/kokkos/containers/src -I../../lib/kokkos/algorithms/src --std=c++11 -fopenmp -I./ -I../../lib/kokkos/core/src -I../../lib/kokkos/containers/src -I../../lib/kokkos/algorithms/src   -DLMP_KIM_CURL   -c ../kim_param.cpp
../kim_param.cpp:73:34: fatal error: KIM_SimulatorHeaders.h: No such file or directory
 #include "KIM_SimulatorHeaders.h"

This is because the same header cannot be found by /usr/local/lammps-29Oct20/src/kim_param.cpp. At line 73 of that file, point the include at the actual location of KIM_SimulatorHeaders.h:

/* #include "KIM_SimulatorHeaders.h" */
#include "/usr/local/lammps-29Oct20/lib/kim/kim-api-2.1.3/build/installed-kim-api-2.1.3/include/kim-api/KIM_SimulatorHeaders.h"

The same issue may occur in kim_init.cpp, fix_store_kim.cpp, pair_kim.h, and other files.
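Rather than editing each affected file by hand, the same substitution can be scripted. This is only a sketch: the patch_kim_includes function name is mine, and the two paths in the example call are the ones used in this post; adjust them to your own tree.

```shell
# Rewrite every bare KIM_SimulatorHeaders.h include so it points at the
# installed kim-api copy of the header.
patch_kim_includes() {
    src_dir="$1"   # LAMMPS src directory
    kim_inc="$2"   # directory that actually contains KIM_SimulatorHeaders.h
    grep -rl '#include "KIM_SimulatorHeaders.h"' "$src_dir" 2>/dev/null |
    while read -r f; do
        sed -i "s|#include \"KIM_SimulatorHeaders.h\"|#include \"$kim_inc/KIM_SimulatorHeaders.h\"|" "$f"
    done
}

# Example call with the paths used in this post:
patch_kim_includes /usr/local/lammps-29Oct20/src \
    /usr/local/lammps-29Oct20/lib/kim/kim-api-2.1.3/build/installed-kim-api-2.1.3/include/kim-api
```

After patching, rerun `make g++_openmpi -j 16` as before.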

Issues when launching ABAQUS/CAE

I was using FastX3 and launched ABAQUS 2020 CAE:

% module load abaqus/2020
% abaqus cae
X Error: code 2 major 153 minor 3: BadValue (integer parameter out of range for operation).
X Error: code 167 major 153 minor 5: GLXBadContext.
X Error: code 167 major 153 minor 26: GLXBadContext.
X Error: code 167 major 153 minor 4: GLXBadContext.
failed to create drawable
failed to create drawable
failed to create drawable
..... (repeated many more times)

Warning: There was a problem creating an OpenGL feedback context for printing.
If you encounter any problems with printing, verify that indirect GLX context
creation is enabled on your X server. For more information on indirect / direct
GLX context and how to enable indirect GLX context creation, see Knowledge Base
article QA00000043316

The issue is that on some graphics devices, Abaqus/CAE and Abaqus/Viewer may fail when hardware acceleration is turned on. To mitigate this, you can turn off hardware acceleration, at the cost of some graphics performance. This can be done by simply running:

% abaqus cae --mesa

ABAQUS maintains a list of graphics devices that have passed its tests, which you may want to take a look at:

ABAQUS 2020 Graphics Devices

PBS Professional MoM Access Configuration Parameters

Taken from PBS Professional Admin Guide

The Configuration Parameters can be found at /var/spool/pbs/mom_priv/config

$restrict_user <value>
  • Controls whether users not submitting jobs have access to this machine. When True, only those users running jobs are allowed access.
  • Format: Boolean
  • Default: off
$restrict_user_exceptions <user_list>
  • List of users who are exempt from access restrictions applied by $restrict_user. Maximum number of names in list is 10.
  • Format: Comma-separated list of usernames; space allowed after comma
$restrict_user_maxsysid <value>
  • Allows system processes to run when $restrict_user is enabled. Any user with a numeric user ID less than or equal to value is exempt from restrictions applied by $restrict_user.
  • Format: Integer
  • Default: 999

Example

To restrict user access to those running jobs, add:

$restrict_user True

To specify the users who are allowed access whether or not they are running jobs, add:

$restrict_user_exceptions User1, User2

To allow system processes to run, specify the maximum numeric user ID by adding:

$restrict_user_maxsysid 999
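Putting the three parameters together, /var/spool/pbs/mom_priv/config might contain the following (the usernames here are purely illustrative). Note that pbs_mom must re-read its configuration, e.g. via a HUP signal or a service restart, before changes take effect:

```
$restrict_user True
$restrict_user_exceptions admin1, admin2
$restrict_user_maxsysid 999
```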

Setting up FreeSurfer 7.2.0 for CentOS-7

FreeSurfer is an open source software suite for processing and analyzing (human) brain MRI images.

  • Skullstripping
  • Image Registration
  • Subcortical Segmentation
  • Cortical Surface Reconstruction
  • Cortical Segmentation
  • Cortical Thickness Estimation
  • Longitudinal Processing
  • fMRI Analysis
  • Tractography
  • FreeView Visualization GUI
  • and much more..

The FreeSurfer downloads can be found here

The tar.gz for the Linux install can be found here

% tar -zxvpf freesurfer-linux-centos7_x86_64-7.2.0.tar.gz
% cd freesurfer

Prepare the Environment in your .bashrc

export FREESURFER_HOME=/usr/local/freesurfer
export FSFAST_HOME=$FREESURFER_HOME/fsfast
export FSF_OUTPUT_FORMAT=nii.gz
export FMRI_ANALYSIS_DIR=$FREESURFER_HOME/fsfast
export FUNCTIONALS_DIR=$FREESURFER_HOME/sessions
export FS_OVERRIDE=0
export MNI_DIR=$FREESURFER_HOME/mni
export MINC_BIN_DIR=$FREESURFER_HOME/mni/bin
export MINC_LIB_DIR=$FREESURFER_HOME/mni/lib
export MNI_DATAPATH=$FREESURFER_HOME/mni/data
export PERL5LIB=$FREESURFER_HOME/mni/share/perl5
export MNI_PERL5LIB=$FREESURFER_HOME/mni/share/perl5
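In addition to the variables above, FreeSurfer's standard setup normally sources its bundled environment script as the final step (a valid license.txt in $FREESURFER_HOME is also required). A sketch of the last line to add to .bashrc:

```
source $FREESURFER_HOME/SetUpFreeSurfer.sh
```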

oneAPI DevSummit, Asia-Pacific and Japan

This one-day, LIVE virtual conference features talks, panels, and a hands-on learning experience focused on using oneAPI, DPC++, and AI/ML to accelerate performance of cross-architecture workloads (CPU, GPU, FPGA, and other accelerators).

Register now to:

  • Connect with fellow developers and innovators.
  • Learn about the latest developer tools for oneAPI.
  • Hear from thought leaders in industry and academia who are working on innovative cross-platform, multi-vendor oneAPI solutions.
  • Discover real world projects using oneAPI to accelerate data science and AI pipelines.
  • Dive into a hands-on session on Intel® oneAPI toolkits for HPC and AI applications.
  • Join a vibrant community supporting each other using oneAPI, DPC++ and AI.

To Register

Full Event Schedule