Using TurboVNC 0.6 and VirtualGL 2.1.4 to Run OpenGL Applications Remotely on CentOS

 
Acknowledgement:
Much of this material comes from the VirtualGL documentation, with additional notes of my own from the field.

 

1. What is VirtualGL?
According to VirtualGL Project Website:

“VirtualGL is an open source package which gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration… With VirtualGL, the OpenGL commands and 3D data are instead redirected to a 3D graphics accelerator on the application server, and only the rendered 3D images are sent to the client machine…” For more information, see “What is VirtualGL?” on the Project Website.

2. System Requirements:
See the VirtualGL System Requirements page on the Project Website.


3. Download and Install TurboJPEG and VirtualGL on CentOS 5.x.

For more detailed information, see the
User Guide for VirtualGL 2.1.4

a. Download and Install TurboJPEG

Go to the VirtualGL Download page

# rpm -Uvh turbojpeg*.rpm

b. Download and Install VirtualGL

Go to the VirtualGL Download page

# rpm -Uvh VirtualGL*.rpm
 
4. Accessing the 3D X Server
According to the Project Website, VirtualGL requires access to the application server’s 3D graphics card so that it can create off-screen pixel buffers (Pbuffers) and redirect the 3D rendering from applications into these Pbuffers.

a. Shut down the Display Manager

# init 3

 

5. Configure the VirtualGL Server

a. Run

# /opt/VirtualGL/bin/vglserver_config

b. Only users in the vglusers group can use VirtualGL

Restrict local X server access to vglusers group (recommended)?
[Y/n] Y

c. Only users in the vglusers group can run OpenGL applications on the VirtualGL server

Restrict framebuffer device access to vglusers group (recommended)?
[Y/n] Y

d. Disabling the XTEST extension prevents VirtualGL users from inserting keystrokes or mouse events and thus hijacking local X sessions on that X server.

Disable XTEST extension (recommended)?
[Y/n] Y

e. If you chose to restrict X server or framebuffer device access to the vglusers group, then add root and the relevant users to the vglusers group

# vim /etc/group
vglusers:x:1800:root,user1,user2
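A quick way to confirm membership after the edit is `id -nG` (note that a user must log out and back in before new group membership takes effect). The `in_group` helper below is my own convenience sketch, not part of VirtualGL:

```shell
# Hypothetical helper: succeed if user $1 is a member of group $2.
# `id -nG user` prints the user's group names separated by spaces.
in_group() {
  id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

if in_group user1 vglusers; then
  echo "user1 is in vglusers"
else
  echo "user1 is NOT in vglusers (or has not logged back in yet)"
fi
```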

f. Restart the Display Manager.

# init 5
(To verify that the application server is ready to run VirtualGL, log out of the server, log back into the server using SSH, and execute the following commands in the SSH session)

g. If you restricted 3D X server access to vglusers

xauth merge /etc/opt/VirtualGL/vgl_xauth_key
xdpyinfo -display :0
/opt/VirtualGL/bin/glxinfo -display :0

h. If you did not restrict 3D X server access

xdpyinfo -display :0
/opt/VirtualGL/bin/glxinfo -display :0

6. SSH Server Configuration

# vim /etc/ssh/sshd_config
X11Forwarding yes
# UseLogin No

(Restart the SSH daemon so the change takes effect)
# service sshd restart

7. Check that OpenGL is using the hardware drivers and that direct rendering is enabled, to maximise performance.

# glxinfo |grep render

For example,

direct rendering: Yes
OpenGL renderer string: Quadro FX 3450/4000 SDI/PCI/SSE2
GL_NVX_conditional_render, GL_SGIS_generate_mipmap, GL_SGIS_texture_lod,
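This check can be scripted. The `check_direct` function below is a sketch of my own (not part of VirtualGL) that simply inspects glxinfo output on stdin:

```shell
# Sketch: warn if direct rendering is not enabled.
check_direct() {
  if grep -q "direct rendering: Yes"; then
    echo "OK: direct rendering is enabled"
  else
    echo "WARNING: indirect or software rendering in use"
  fi
}

/opt/VirtualGL/bin/glxinfo -display :0 | check_direct
```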

8. Install and configure the TurboVNC Server on the VirtualGL Server

a. First, uninstall the CentOS default VNC server

# yum remove vnc-server

b. Go to the VirtualGL Download page to obtain and install the TurboVNC RPM

# rpm -Uvh turbovnc*.rpm

c. The default RPM install places TurboVNC under /opt. You can create soft links in /usr/bin to the /opt/TurboVNC binaries

# ln -s /opt/TurboVNC/bin/vncserver /usr/bin/vncserver
# ln -s /opt/TurboVNC/bin/vncviewer /usr/bin/vncviewer
# ln -s /opt/TurboVNC/bin/vncpasswd /usr/bin/vncpasswd
# ln -s /opt/TurboVNC/bin/Xvnc /usr/bin/Xvnc
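The same four links can also be created in a loop. For illustration this sketch writes into a scratch directory from mktemp; on a real server substitute /usr/bin for $bindir:

```shell
# Loop form of the soft links above. mktemp -d gives a scratch
# directory so the sketch can be tried without touching /usr/bin.
bindir="$(mktemp -d)"
for f in vncserver vncviewer vncpasswd Xvnc; do
  ln -sf "/opt/TurboVNC/bin/$f" "$bindir/$f"
done
ls "$bindir"
```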

9. Using TurboVNC

Open a VNC session by simply typing

# vncserver
New 'kittycool:1 (root)' desktop is kittycool:1

Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/kittycool:1.log

(In the VNC Client, type)
kittycool:1

To test whether the VirtualGL server is working, type

# cd /opt/VirtualGL/bin
# ./vglrun glxgears


You should see the glxgears animation rendering smoothly, at a rate consistent with hardware acceleration. Congratulations!

Installing and checking /proc/mounts using Nagios Plugins on CentOS 5

Most of this blog entry's material is taken from “Checking /proc/mounts on remote server” from the NagiosWiki. The Nagios version is 3.x and the OS platform is CentOS 5.x.

We basically require two Nagios plugins, check_disk and check_nrpe, to use this excellent NRPE check.

On the Remote Server, install the NRPE daemon and the disk plugin

# yum install nagios-nrpe nagios-plugins-disk

On the Remote Server, add the following command definition inside nrpe.cfg

# vim /etc/nagios/nrpe.cfg
command[check_disks_proc_mounts]=/usr/lib/nagios/plugins/check_disk -w 15% -c 10% $(for x in $(cat /proc/mounts |awk '{print $2}')\; do echo -n " -p $x "\; done)
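The command substitution above builds one -p argument for every mount point in /proc/mounts (the mount point is field 2 of each line). The sketch below, using sample /proc/mounts content with illustrative paths, shows what check_disk ends up being called with:

```shell
# Sample /proc/mounts content (illustrative paths only):
mounts='/dev/sda1 / ext3 rw 0 0
/dev/sda2 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0'

# Field 2 is the mount point; emit " -p <mountpoint>" for each line:
args=$(echo "$mounts" | awk '{printf " -p %s", $2}')
echo "check_disk -w 15% -c 10%$args"
```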

On the Nagios Server,

Ensure you have the check_nrpe plugin installed, and test it

# yum install nagios-nrpe
# cd /usr/lib64/nagios/plugins
# ./check_nrpe -H monitored-server -c check_disks_proc_mounts

DISK OK - free space: / 28106 MB (53% inode=98%); /boot 81 MB (86% inode=99%);
/dev/shm 1887 MB (100% inode=99%);| /=24543MB;47188;49964;0;55516
/boot=12MB;83;88;0;98 /dev/shm=0MB;1603;1698;0;1887

Add the following definition in your commands.cfg file

define  command {
        command_name    check_nrpe_disk_procs
        command_line    $USER1$/check_nrpe -H $HOSTNAME$ -c check_disks_proc_mounts -t 20
        }

Add the following service definition (assuming, of course, that your host is already in your config)

define service{
        use                    local-service
        host_name              monitored_server
        service_description    check_disk on proc mounts
        check_command          check_nrpe_disk_procs
}

Hooray, it is done!

Incorporating PNP 0.4.x (PNP is not Perfparse) with Nagios 3 and CentOS 5

This blog entry is taken in part from Integrating PNP (PNP is not Perfparse) with CentOS 4.x / Nagios 2.x from NagiosWiki and the book Nagios, 2nd Edition, from No Starch Press.

1. What is PNP4Nagios?
PNP4Nagios is an addon to Nagios which analyzes performance data provided by plugins and stores it automatically in RRD databases.

2. Which version will you be covering?
I’ll be using pnp4nagios 0.4.x, which fits CentOS 5.x quite well, as it does not pull in newer components that might break existing dependencies.
Download pnp4nagios 0.4.x from the download website

3. What prerequisites do I need?
Install rrdtool.
Make sure you have the RPMForge Repository installed. For more information, see LinuxToolkit (Red Hat Enterprise Linux / CentOS Linux Enable EPEL (Extra Packages for Enterprise Linux) Repository).

# yum install rrdtool


4. Download and configure

# wget http://sourceforge.net/projects/pnp4nagios/files/PNP/pnp-0.4.14/pnp-0.4.14.tar.gz/download
# tar -zxvf pnp-0.4.14.tar.gz
# cd pnp-0.4.14
# ./configure --sysconfdir=/etc/pnp --prefix=/usr/share/nagios

(For more ./configure options, see ./configure --help)
Output is as follows:

*** Configuration summary for pnp 0.4.14 05-02-2009 ***
General Options:
------------------------- -------------------
Nagios user/group:  nagios nagios 
Install directory:  /usr/share/nagios 
HTML Dir:           /usr/share/nagios/share
Config Dir:          /etc/pnp 
Location of rrdtool binary: /usr/bin/rrdtool Version 1.4.4
RRDs Perl Modules:  FOUND (Version 1.4004)
RRD Files stored in:   /usr/share/nagios/share/perfdata 
process_perfdata.pl Logfile: /usr/share/nagios/var/perfdata.log
Perfdata files (NPCD) stored in: /usr/share/nagios/var/spool/perfdata/

Review the options above for accuracy. If they look okay,
type 'make all' to compile.
# make all
# make install

If make or make install fails, you may not have the gcc-c++ compiler installed.

# yum install gcc-c++

Create a soft link

# ln -s /usr/share/nagios/share /usr/share/nagios/pnp


5. Passing Performance Data to the PNP data collector.
To switch on performance data processing, edit the main Nagios configuration file (usually /etc/nagios/nagios.cfg):

# PROCESS PERFORMANCE DATA OPTION
# This determines whether or not Nagios will process performance
# data returned from service and host checks.  If this option is
# enabled, host performance data will be processed using the
# host_perfdata_command (defined below) and service performance
# data will be processed using the service_perfdata_command (also
# defined below).  Read the HTML docs for more information on
# performance data.
# Values: 1 = process performance data, 0 = do not process performance data

process_performance_data=1
# HOST AND SERVICE PERFORMANCE DATA PROCESSING COMMANDS
# These commands are run after every host and service check is
# performed.  These commands are executed only if the
# enable_performance_data option (above) is set to 1.  The command
# argument is the short name of a command definition that you
# define in your host configuration file.  Read the HTML docs for
# more information on performance data.

host_perfdata_command=process-host-perfdata
service_perfdata_command=process-service-perfdata

5a. Defining the process-service-perfdata and process-host-perfdata commands

(From inside Nagios configuration directory usually /etc/nagios/)

# cd objects
# vim commands.cfg
define command {
  command_name    process-service-perfdata
  command_line    /usr/bin/perl /usr/share/nagios/libexec/process_perfdata.pl
}

define command {
  command_name    process-host-perfdata
  command_line    /usr/bin/perl /usr/share/nagios/libexec/process_perfdata.pl -d HOSTPERFDATA
}

6. Final check

# cd /etc/nagios
# nagios -v nagios.cfg
(Check for any error. If there is no error, restart the service)
# service nagios restart
(Restart httpd)
# service httpd restart

7. Take a look at the graphs!

http://YourServerIPAddress.org/nagios/share/

Installing Check Disk IO Plugins via NRPE on CentOS 5.x

This blog entry is modified from “Check Disk IO via NRPE” from the Nagios Wiki

1. What is check_diskio?
check_diskio is a simple Nagios plugin for monitoring disk I/O on Linux 2.4 and 2.6 systems.

2. Where do I get information and download check_diskio?
Go to http://freshmeat.net/projects/check_diskio/

3. Installation Guide
A. Ensure the Perl package is installed. You will need Perl 5.8.x and the required modules, and you need the RPMforge Repository installed (from the Linux Toolkit blog).

At the Client Machine.

# yum install perl
# tar -zxvf check_diskio-3.2.2.tar.gz
# cd check_diskio-3.2.2
# less INSTALL (Read the INSTALL Readme file)
# perl Makefile.PL INSTALLSCRIPT=/usr/lib64/nagios/plugins
(You will see a list of warnings of prerequisites)

B. Install the prerequisite Perl modules. This may not be a complete list.

# yum install perl-Nagios*
# yum install perl-List*
# yum install perl-Readonly*
# yum install perl-Number-Format
# yum install perl-File-Slurp*
# yum install perl-Array-Unique

C. Finally, compile and install

# perl Makefile.PL INSTALLSCRIPT=/usr/lib64/nagios/plugins
# make
# make install
(You will see the check_diskio at /usr/lib64/nagios/plugins)

D. Edit the nrpe.cfg file on the client machine. If you have not yet installed the Nagios NRPE plugin, just do a

# yum install nagios-nrpe

D1. Edit /etc/nagios/nrpe.cfg on the client machine,

# vim /etc/nagios/nrpe.cfg
command[check_diskio]=/usr/lib64/nagios/plugins/check_diskio --device=/dev/sda -w 200 -c 300
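The -w 200 and -c 300 thresholds are in sectors per second. check_diskio derives that rate from /proc/diskstats samples; the sketch below uses made-up snapshot numbers (not the plugin's actual code) just to show the arithmetic:

```shell
# Two illustrative /proc/diskstats lines for sda, 5 seconds apart.
# After major/minor/name, field 3 is sectors read and field 7 is
# sectors written, i.e. $6 and $10 of the whole line.
s1="8 0 sda 100 0 1000 0 50 0 2000 0 0 0 0"
s2="8 0 sda 120 0 1600 0 60 0 2400 0 0 0 0"
interval=5

# sectors/s = ((reads2+writes2) - (reads1+writes1)) / interval
printf '%s\n%s\n' "$s1" "$s2" | awk -v t="$interval" \
  'NR==1 { a = $6 + $10 } NR==2 { print (($6 + $10) - a) / t " sectors/s" }'
```

With these sample numbers the sketch prints 200 sectors/s, right at the -w warning threshold above.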

At the Server, just make sure you have installed the Nagios NRPE plugin as in step D. Finally, to ensure that the remote server plugin is OK, run a test from the Nagios Server

# cd /usr/lib64/nagios/plugins
# ./check_nrpe -H remote-server -c check_diskio
CHECK_DISKIO OK - sda 194 sectors/s | WRITE=194;200;300 READ=0;200;300 TOTAL=194;200;300

Hooray, it is done!