Encountering SSH “Permission denied, please try again.”

If you encounter errors like “Permission denied, please try again.” during SSH, there are a few steps to consider.

One possibility is that you have been locked out: /var/log/secure may give some clues to the cause, such as SSH lockout rules being triggered. There is one interesting writeup which could shed light on this possibility: Configure lockout rules for SSH login

Another possibility is that the permissions on your .ssh directory and key files are set incorrectly. The recommended modes are:

  1. .ssh directory: 700 (drwx------)
  2. public key (.pub file): 644 (-rw-r--r--)
  3. private key (id_rsa): 600 (-rw-------)
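
The modes above can be applied in one pass. A minimal sketch, assuming the default key name id_rsa (the mkdir/touch lines only make the sketch self-contained; on a real host the files already exist):

```shell
# Make the sketch runnable anywhere; on a real host ~/.ssh already exists
mkdir -p ~/.ssh && touch ~/.ssh/id_rsa ~/.ssh/id_rsa.pub

# Tighten permissions: sshd refuses keys that other users can read
chmod 700 ~/.ssh
chmod 644 ~/.ssh/id_rsa.pub
chmod 600 ~/.ssh/id_rsa

# Verify: stat prints the octal mode, one file per line
stat -c '%a' ~/.ssh ~/.ssh/id_rsa.pub ~/.ssh/id_rsa   # → 700, 644, 600
```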

PackagesNotFoundError and Conda Install

If you are installing packages with conda, you may encounter an issue like this:

PackagesNotFoundError: The following packages are not available from current channels:

- c-compiler
- fortran-compiler
- cxx-compiler

There is a workaround below. It tells conda to also look in the conda-forge channel when it searches for packages.

conda config --append channels conda-forge
conda install c-compiler cxx-compiler fortran-compiler pkg-config

## Package Plan ##

  environment location: /usr/local/anaconda3-2022/envs/sagemath_env

  added / updated specs:
    - c-compiler
    - cxx-compiler
    - fortran-compiler
    - pkg-config

The following packages will be downloaded:

    package                    |            build
    _libgcc_mutex-0.1          |             main           3 KB
    _openmp_mutex-5.1          |            1_gnu          21 KB
    binutils-2.36.1            |       hdd6e379_2          27 KB  conda-forge
    binutils_impl_linux-64-2.36.1|       h193b22a_2        10.4 MB  conda-forge
    binutils_linux-64-2.36     |      hf3e587d_10          24 KB  conda-forge
    c-compiler-1.5.0           |       h166bdaf_0           5 KB  conda-forge
    cxx-compiler-1.5.0         |       h924138e_0           5 KB  conda-forge
    fortran-compiler-1.5.0     |       h2a4ca65_0           5 KB  conda-forge
    gcc-10.4.0                 |      hb92f740_10          24 KB  conda-forge
    gcc_impl_linux-64-10.4.0   |      h7ee1905_16        46.7 MB  conda-forge
    gcc_linux-64-10.4.0        |      h9215b83_10          25 KB  conda-forge
    gfortran-10.4.0            |      h0c96582_10          24 KB  conda-forge
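
For reference, the --append command simply adds an entry to the channels list in ~/.condarc. A sketch of the resulting fragment, written to a temp file here so it can be inspected without conda installed:

```shell
# What `conda config --append channels conda-forge` effectively leaves
# in ~/.condarc (sketched in /tmp so no conda install is required)
cat > /tmp/condarc.example <<'EOF'
channels:
  - defaults
  - conda-forge
EOF

# --append places conda-forge *after* defaults, so the default channel
# still wins when a package exists in both; --prepend would reverse that
grep -c 'conda-forge' /tmp/condarc.example   # → 1
```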

Detecting and Shutting Down VNC Server in CentOS-7

To list the ports and the Xvnc session’s associated user, as root, enter:

# lsof -i -P | grep vnc
Xvnc        2267     root    5u  IPv6    76766      0t0  TCP *:6003 (LISTEN)
Xvnc        2267     root    6u  IPv4    76767      0t0  TCP *:6003 (LISTEN)
Xvnc        2267     root    9u  IPv4    76775      0t0  TCP *:5903 (LISTEN)
Xvnc        2267     root   10u  IPv6    76776      0t0  TCP *:5903 (LISTEN)
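
As a side note, the ports in the lsof output map directly back to display numbers: VNC display :N listens on TCP port 5900+N, and its X11 counterpart on 6000+N, so *:5903 and *:6003 above both belong to display :3.

```shell
# VNC display :N listens on TCP 5900+N; its X11 port is 6000+N
display=3
echo $((5900 + display))   # → 5903 (VNC port seen in lsof)
echo $((6000 + display))   # → 6003 (matching X11 port)
```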

Apparently, there is an Xvnc instance running. To do a quick shutdown:

# systemctl |grep vnc
vncserver@:1.service                                                                             loaded active running   Remote desktop service (VNC)
  system-vncserver.slice                                                                           loaded active active    system-vncserver.slice
# systemctl stop vncserver@:1.service
# systemctl stop system-vncserver.slice

Check the Xvnc socket again:

# systemctl stop xvnc.socket
# systemctl status xvnc.socket
* xvnc.socket - XVNC Server
   Loaded: loaded (/usr/lib/systemd/system/xvnc.socket; disabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: [::]:5900 (Stream)
Accepted: 0; Connected: 0
# systemctl |grep vnc

If, however, you are interested in setting up VNC, there is a good article for you to consider:
Remote-desktop to a host using VNC

RapidFile Toolkit v2.0 for FlashBlade

What is RapidFile Toolkit?

RapidFile Toolkit is a set of supercharged tools for efficiently managing millions of files using familiar Linux command line interfaces. RapidFile Toolkit is designed from the ground up to take advantage of Pure Storage FlashBlade’s massively parallel, scale-out architecture, while also supporting standard Linux file systems. RapidFile Toolkit can serve as a high performance, drop-in replacement for Linux commands in many common scenarios, which can increase employee efficiency, application performance, and business productivity. RapidFile Toolkit is available to all Pure Storage customers.


Benefits of RapidFile Toolkit according to the site

Increase SysAdmin Productivity

  • Up to 20X faster than Linux Core Utilities
  • Accelerates file management and analytics

Faster Data Movement & Analytics

  • Accelerates Perforce Checkout by up to 20X
  • Rapid file copy to and from scratch space

Faster & Simpler Data Pipelines

  • Indexes file systems up to 20X faster, reducing metadata caching time
  • Supports EDA, Genomics, DevOps, HPC, Analytics, Apache Spark, and AI/ML


  Linux command    RapidFile Toolkit v2.0    Description
  ls               pls                       Lists files & directories
  find             pfind                     Finds matching files
  du               pdu                       Summarizes file space usage
  rm               prm                       Removes files & directories
  chown            pchown                    Changes file ownership
  chmod            pchmod                    Changes file permissions
  cp               pcopy                     Copies files & directories
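
Since the toolkit commands are positioned as drop-in replacements, in many scripts only the binary name changes. A hedged sketch: the p* lines are commented out because they exist only after installing the toolkit, and exact flag parity is assumed; the coreutils lines run anywhere.

```shell
# Set up a small demo tree so the coreutils lines are runnable
mkdir -p /tmp/rft_demo && touch /tmp/rft_demo/a.log /tmp/rft_demo/b.txt

# coreutils version
find /tmp/rft_demo -name '*.log'
# RapidFile equivalent (parallel execution, same find-style predicates):
# pfind /tmp/rft_demo -name '*.log'

du -s /tmp/rft_demo
# pdu /tmp/rft_demo
```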

To download, you have to be a Pure Storage customer or partner.

Download URL (login required)


Ganglia and Gmond Python module for GPUs

If you are running a cluster with NVIDIA GPUs, there now exists a python module for monitoring NVIDIA GPUs using the newly released Python bindings for NVML (NVIDIA Management Library). These bindings are under BSD license and allow simplified access to GPU metrics like temperature, memory usage, and utilization.

Nvidia Developer – Ganglia Monitoring System

To install the Ganglia plug-in on your Ganglia installation, and for more information, see the download links on the Nvidia Developer page above.


Graphite – highly scalable real-time graphing system

Graphite is an interesting project. If you wish to take a deeper look at it, the official Graphite Documentation is very comprehensive.

But some pointers could be useful.

Point 1: What is Graphite?

Graphite is a highly scalable real-time graphing system. As a user, you write an application that collects numeric time-series data that you are interested in graphing, and send it to Graphite’s processing backend, carbon, which stores the data in Graphite’s specialized database. The data can then be visualized through graphite’s web interfaces.

Graphite 1.2.0 Documentation

Point 2: Architecture

Graphite consists of 3 software components:

  1. carbon – a Twisted daemon that listens for time-series data
  2. whisper – a simple database library for storing time-series data (similar in design to RRD)
  3. graphite webapp – A Django webapp that renders graphs on-demand using Cairo

Point 3: Who should be using Graphite?

Anybody who would want to track values of anything over time. If you have a number that could potentially change over time, and you might want to represent the value over time on a graph, then Graphite can probably meet your needs.

Specifically, Graphite is designed to handle numeric time-series data. For example, Graphite would be good at graphing stock prices because they are numbers that change over time. Whether it’s a few data points or dozens of performance metrics from thousands of servers, Graphite is for you. As a bonus, you don’t necessarily know the names of those things in advance (who wants to maintain such huge configuration?); you simply send a metric name, a timestamp, and a value, and Graphite takes care of the rest!

Graphite 1.2.0 Documentation
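
The “metric name, timestamp, value” interface mentioned above is literally Graphite’s plaintext protocol: one line of `metric value timestamp` sent to carbon’s TCP port (2003 by default). A minimal sketch; the metric name and host are illustrative, and the nc line is commented out since it needs a running carbon daemon:

```shell
# Build one datapoint in carbon's plaintext format: "metric value timestamp"
metric_line="servers.web01.load 0.42 $(date +%s)"
echo "$metric_line"

# Push it to a carbon-cache listening on the default plaintext port:
# echo "$metric_line" | nc -q0 localhost 2003
```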

Point 4: Tools

Ganglia, a tool used by many High Performance Computing (HPC) clusters worldwide, can be integrated with Graphite. Other tools that work with Graphite can be found here.

Point 5: Get the book…

3 Tenets of Monitoring and Approach to IT Monitoring

I read the book Monitoring with Graphite by O’Reilly. Please read the book further; it is a good read. I’m just penning my own thoughts.

The author mentions something quite interesting that I had not really thought of: monitoring can be divided into 3 main categories:

  1. Fault Detection
  2. Alerting
  3. Capacity Planning

Fault Detection

Fault Detection is identifying when a resource becomes unavailable or starts to perform poorly. Traditionally, system administrators employ thresholds to recognise the delta in a system’s behaviour.
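
A threshold check of this kind can be sketched in a few lines of shell; the metric and cutoff values here are purely illustrative:

```shell
# Classic threshold-based fault detection: flag a fault when the
# observed value crosses a fixed cutoff (values are illustrative)
load=2.7
threshold=2.0
# awk does the floating-point comparison; exit status 0 means "exceeded"
if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l > t) }'; then
    echo "FAULT: load $load exceeds threshold $threshold"
fi
```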


Alerting

Alerting is the moment the monitoring system identifies a fault: the recipient(s) are alerted through some means, perhaps email or SMS, so that further actions can be taken by the recipient(s).

Capacity Planning

Capacity planning is the ability to study trends in the data and use that knowledge to make informed decisions about adding capacity now or in the near future. You can use Graphite to work on the time-series data.

Pull and Push Model

Pull Model – The traditional approach to IT monitoring centers around a polling agent that spends resources connecting to remote hosts or appliances to determine their current status. However, the traditional pull method has limitations in integrating trending and monitoring, and often different software stacks are required.

Push Model – Metrics are pushed from the sources to a unified storage repository, providing a consolidated set of data to drive both IT responses and business decisions. The advantage is that collection tasks are decentralised, so we no longer need to scale the collection system vertically; it scales horizontally with the architecture. One of the interesting aspects of the push model is that we can isolate the functional responsibilities of the monitoring system.

chsh -s /bin/tcsh and the “you (user) don’t exist” error

Sometimes, as a non-root user, you wish to change your shell and you get an error:

$ chsh -s /bin/tcsh
chsh you (user xxxxxxxxx) don't exist

This error occurs when the user ID and password are managed through LDAP or Active Directory, so there is no local account in /etc/passwd, which is where chsh first looks. I used Centrify, where we can configure the default shell environment in AD. But there is a simple workaround if you do not want to bother your system administrator.
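
You can confirm where the account record actually comes from with getent, which queries every NSS source (local files, LDAP, sssd/Centrify) rather than just /etc/passwd; field 7 of the record is the shell the directory hands out:

```shell
# getent consults all NSS sources, so it shows the account even when
# /etc/passwd does not; field 7 of the colon-separated record is the shell
getent passwd "$(id -un)" | cut -d: -f7
```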

First, check that you have tcsh installed. I have it!

$ chsh -l

Next Step: Check your current shell

$ echo "$SHELL"

Step 3: Write a simple .profile file

$ vim ~/.profile
if [ "$SHELL" != "/bin/tcsh" ]; then
    export SHELL="/bin/tcsh"
    exec /bin/tcsh -l    # -l: start a login shell
fi

Step 4: In your .bashrc, just add the “source ~/.profile”

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
source ~/.profile

Source the .bashrc again

$ source ~/.bashrc