One Hundred Year Study on Artificial Intelligence, or AI100

A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications, and even abuses, of AI technology.

“In the past five years, AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives,” Brown University computer scientist Michael Littman, who chaired the report panel, said in a news release.

“That’s really exciting, because this technology is doing some amazing things that we could only dream about five or ten years ago,” Littman added. “But at the same time the field is coming to grips with the societal impact of this technology, and I think the next frontier is thinking about ways we can get the benefits from AI while minimizing the risks.”

Those risks include deep-fake images and videos that are used to spread misinformation or harm people’s reputations; online bots that are used to manipulate public opinion; algorithmic bias that infects AI with all-too-human prejudices; and pattern recognition systems that can invade personal privacy by piecing together data from multiple sources.

The report says computer scientists must work more closely with experts in the social sciences, the legal system and law enforcement to reduce those risks.

Intel Ponte Vecchio Playing Catch-Up with AMD and Nvidia

Intel recently announced details of its forthcoming data center GPU, the Xe HPC, code-named Ponte Vecchio (PVC). Intel daringly implied that the peak performance of the PVC GPU would be roughly twice that of today’s fastest GPU, the Nvidia A100. PVC and Sapphire Rapids (the multi-tile next-gen Xeon) are being used to build Aurora, Argonne National Laboratory’s exascale supercomputer, in 2022, so this technology should finally be just around the corner.

Intel is betting on this first-generation datacenter GPU for HPC to finally catch up with Nvidia and AMD, both for HPC (64-bit floating point) and AI (8- and 16-bit integer and 16-bit floating point). The Xe HPC device is a multi-tile, multi-process-node package with new GPU cores, HBM2e memory, a new Xe Link interconnect, and PCIe Gen 5, implemented with over 100 billion transistors. That is nearly twice the transistor count of the 54-billion-transistor Nvidia A100. At that size, power consumption could be an issue at high frequencies. Nonetheless, the Xe design clearly demonstrates that Intel gets it: packaging smaller dies helps reduce development and manufacturing costs and can improve time to market.

Intel Lays Down The Gauntlet For AMD And Nvidia GPUs by Forbes

“No MEAM parameter file in pair coefficients” Errors in LAMMPS

If you are encountering an error like the one below, you may want to check how LAMMPS locates potential files:

ERROR: No MEAM parameter file in pair coefficients (../pair_meamc.cpp:243)

When a pair_coeff command using a potential file is specified, LAMMPS looks for the potential file in two places. First, it looks in the location specified. For example, if the file is specified as “niu3.eam”, it is looked for in the current working directory. If it is specified as “../potentials/niu3.eam”, then it is looked for in the potentials directory, assuming it is a sister directory of the current working directory. If the file is not found there, it is then looked for in one of the directories specified by the LAMMPS_POTENTIALS environment variable. Thus, if this variable is set to the potentials directory of the LAMMPS distribution, you can use those files from anywhere on your system without copying them into your working directory.

Environment variables are set in different ways for different shells. Here is an example setting for bash-like shells:

 export LAMMPS_POTENTIALS=/path/to/lammps/potentials
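
For csh-like shells, the equivalent setting would be:

 setenv LAMMPS_POTENTIALS /path/to/lammps/potentials

Note that this particular message refers to the MEAM parameter file expected on the pair_coeff line itself. For the meam/c pair style, the pair_coeff arguments are a MEAM library file, the list of elements to read from it, a MEAM parameter file (or NULL), and a mapping of LAMMPS atom types to elements. As a minimal sketch, assuming a two-type Ni/Al system and placeholder file names (substitute the library and parameter files for your own system), a working setup might look like:

 pair_style meam/c
 pair_coeff * * library.meam Ni Al NiAl.meam Ni Al

Here library.meam is the MEAM library file, “Ni Al” lists the elements taken from it, NiAl.meam is the parameter file the error is complaining about (NULL is allowed if no parameter file is needed), and the final “Ni Al” maps atom types 1 and 2 to those elements.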

For more information, see the LAMMPS documentation: https://docs.lammps.org/stable/pair_coeff.html