Set hostname using hostnamectl for CentOS 7

1. Listing hostname using “hostnamectl” or “hostnamectl status”

[root@localhost ~]# hostnamectl
Static hostname: helloworld.com
Icon name: computer-server
Chassis: server
Machine ID: aaaaaaaaaaaaa
Boot ID: ddddddddddd
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-327.el7.x86_64
Architecture: x86-64

2. Setting static hostname using hostnamectl

# hostnamectl set-hostname "helloworld.com" --static

3. Deleting static hostname using hostnamectl

# hostnamectl set-hostname "" --static

Commands for sending signals by explicit request

A. Foreground Processes:

You can use the keyboard to send a signal to the current foreground process by pressing a control-key sequence.

1. Suspend foreground process

# Ctrl+z

2. Kill foreground process

# Ctrl+c

3. Core Dump

# Ctrl+\
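
These key sequences are just convenient ways for the terminal to deliver standard signals: Ctrl+C sends SIGINT, Ctrl+Z sends SIGTSTP, and Ctrl+\ sends SIGQUIT. A minimal sketch of what a process sees when Ctrl+C is pressed (here the script delivers SIGINT to itself with kill instead of a real keypress):

```shell
#!/bin/sh
# Sketch: handle SIGINT the way a foreground process sees Ctrl+C.
caught=0
trap 'echo "Caught SIGINT (Ctrl+C equivalent)"; caught=1' INT

# Deliver SIGINT to this shell, simulating a Ctrl+C keypress:
kill -INT $$

[ "$caught" -eq 1 ] && echo "handler ran instead of the process dying"
```

Without the trap, SIGINT would terminate the shell; with it, the handler runs and execution continues.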

B. Background Process

  1. Check the list of signals supported by kill
# kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
..................................
# kill -9 PID

2. Look up processes based on name and kill them

# pgrep -l -u user1
7000 bash
7001 sleep
...............
# pkill -SIGKILL -u user1

Notes: SIGTERM is the default signal; SIGKILL is a commonly misused administrator favourite. Since SIGKILL cannot be handled or ignored, it is always fatal. However, it forces termination without allowing the killed process to run its self-cleanup routines. It is recommended to send SIGTERM first, then retry with SIGKILL only if the process fails to respond.
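
The recommended escalation can be scripted. A minimal sketch, using a throwaway sleep process as the target (replace the PID with your real target):

```shell
#!/bin/sh
# Start a disposable target process.
sleep 300 &
pid=$!

# Polite request first: the process may catch SIGTERM and clean up.
kill -TERM "$pid"
sleep 1

# kill -0 sends no signal; it only tests whether the PID still exists.
if kill -0 "$pid" 2>/dev/null; then
    echo "no response, escalating to SIGKILL"
    kill -KILL "$pid"
else
    echo "terminated by SIGTERM"
fi
wait "$pid" 2>/dev/null
```

With sleep as the target, SIGTERM alone is normally enough; a hung process that ignores SIGTERM would take the SIGKILL branch.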

3. Kill processes running in tty3

# pkill -SIGKILL -t tty3

4. Use the pstree command to view a process tree for the system or a single user.

# pstree -p root
init(1)─┬─NetworkManager(1785)
        ├─abrtd(2232)

Introduction to Systemd on CentOS 7

A few terms we need to grasp:

  1. Daemons are processes that wait or run in the background performing various tasks.
  2. To listen for connections, a daemon uses a socket.
  3. A service often refers to one or more daemons.

If you are moving from CentOS 6 to CentOS 7, you may be wondering why you need to move to systemd. Here are its main features:

  1. Parallelization capabilities, which increase the boot speed of a system
  2. Automatic service dependency management, which can prevent long time-outs.
  3. A method of tracking related processes together by using Linux control groups
  4. On-demand starting of daemons without requiring a separate service
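
These features come together in unit files. A hypothetical service unit for illustration (the name myapp and its path are made up; they are not from any real package):

```ini
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=Example daemon
# Dependency ordering is declared, not scripted:
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
# "systemctl enable myapp" links the unit into this target:
WantedBy=multi-user.target
```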

Listing unit files with systemctl

  1. Query the state of all units
    # systemctl
  2. Query the state of selected service
    # systemctl --type=service
  3. List full output of the selected service. Useful for detailed checks and investigation
    # systemctl status sshd.service -l
  4. To check whether a particular service is active and enabled to start at boot time
    # systemctl is-active sshd
    # systemctl is-enabled sshd
  5. List the active state of all loaded units. --all will include inactive units
    # systemctl list-units --type=service
    # systemctl list-units --type=service --all
  6. View the enabled and disabled settings for all units
    # systemctl list-unit-files --type=service
    
  7. View only failed services.
    # systemctl --failed --type=service

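The is-active and is-enabled checks above can be combined into a quick audit loop. A sketch, assuming systemctl is available and using sshd and crond purely as example service names:

```shell
#!/bin/sh
# Report the active and enabled state for a list of services.
for svc in sshd crond; do
    if command -v systemctl >/dev/null 2>&1; then
        printf '%s: active=%s enabled=%s\n' "$svc" \
            "$(systemctl is-active "$svc" 2>/dev/null)" \
            "$(systemctl is-enabled "$svc" 2>/dev/null)"
    else
        printf '%s: systemctl not available here\n' "$svc"
    fi
done
```
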
Controlling System Services

a. Status of a Service

# systemctl status sshd.service

b. Disable a Service

# systemctl disable sshd.service

c. Enable and verify the status of a Service

# systemctl enable sshd.service
# systemctl is-enabled sshd.service

d. Reload configuration file of a running service

# systemctl reload sshd.service

Using the rhevm-manage-domains Tool to Join an AD Domain

Do note that for Red Hat Enterprise Virtualization you must attach a directory server to the Manager using the Domain Management Tool, engine-manage-domains. This configuration is for RHEV 3.5. The supported directory servers are:
Active Directory
Identity Management (IdM)
Red Hat Directory Server 9 (RHDS 9)
OpenLDAP

# rhevm-manage-domains add --domain=your_active_directory_domain --user=spmstest --provider=ad

Restart the ovirt-engine

# service ovirt-engine restart

Install Windows 7 with VirtIO Drivers on RHEV 3.4

One of the best ways to improve the performance of Microsoft Windows guests is to use paravirtualised devices and drivers for KVM in the guests. This provides close to bare-metal performance (up to 95%). These drivers are provided by the virtio-win RPM package in RHEL.

1. The virtio-win packages can be found on the RHEL server under /usr/share/virtio-win/

# ls -l /usr/share/virtio-win/
drwxr-xr-x. 4 root root      4096 Oct 13 10:24 drivers
drwxr-xr-x. 2 root root      4096 Oct 13 10:24 guest-agent
-rw-r--r--. 1 root root   2949120 May 29  2014 virtio-win-1.7.1_amd64.vfd
-rw-r--r--. 1 root root 149004288 May 29  2014 virtio-win-1.7.1.iso
-rw-r--r--. 1 root root   2949120 May 29  2014 virtio-win-1.7.1_x86.vfd
lrwxrwxrwx. 1 root root        26 Oct 13 10:24 virtio-win_amd64.vfd -> virtio-win-1.7.1_amd64.vfd
lrwxrwxrwx. 1 root root        20 Oct 13 10:24 virtio-win.iso -> virtio-win-1.7.1.iso
lrwxrwxrwx. 1 root root        24 Oct 13 10:24 virtio-win_x86.vfd -> virtio-win-1.7.1_x86.vfd

You may want to add these images to the ISO Library; alternatively, you can attach the virtio-win-*.vfd image to the virtual floppy drive before manually installing Windows.

2. The rhev-tools-setup.iso image can be found in /usr/share/rhev-guest-tools-iso

# ls -l /usr/share/rhev-guest-tools-iso
-rw-r--r--. 1 root root    177272 Jul 29 16:30 LICENSES
-rw-r--r--. 1 root root 350181376 Jul 29 16:30 RHEV-toolsSetup_3.4_9.iso
lrwxrwxrwx. 1 root root        57 Oct 13 10:23 rhev-tools-setup.iso -> /usr/share/rhev-guest-tools-iso/RHEV-toolsSetup_3.4_9.iso
-rw-r--r--. 1 root root       924 Jul 29 16:30 SOURCES

3. Upload drivers and tools to the ISO Storage Domain

# rhevm-iso-uploader -u admin@internal list 
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ISO Storage Domain Name   | Datacenter                | ISO Domain Status
dmn_ixora_iso_vol         | RH_Resource               | active
# rhevm-iso-uploader -u admin@internal --iso-domain=dmn_ixora_iso_vol upload  /usr/share/virtio-win/virtio-win-1.7.1_x86.vfd /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):

4. Create the new VM using the RHEV Manager

Remember to click “Run Once”.
Under “Boot Options”:
– Attach the virtio-win-1.7.1_amd64.vfd to the floppy drive
– Attach the MS Windows 7 ISO to the CD drive

RHEV_Windows7_OS

5. Update the Ethernet drivers and the PCI Simple Communication Controller by attaching the RHEV tools ISO
RHEV_Windows7_OS_Drivers

6. Once you have booted into Windows 7, go to Device Manager and run an update for the missing Ethernet and storage controller drivers

Windows7_DeviceManager

You should see
– Red Hat VirtIO SCSI controller,
– Red Hat VirtIO SCSI pass-through controller
– Red Hat VirtIO Ethernet Adapter

Checking for Constant Time Stamp Counter

More recent Intel processors include a constant Time Stamp Counter (TSC), which ticks at the processor’s maximum rate regardless of the actual CPU running rate. While this makes timekeeping more consistent, it can skew benchmarks, where a certain amount of spin-up time is spent at a lower clock rate before the OS switches the processor to the higher rate. For more information, see Time Stamp Counter (Wikipedia).

To check whether your CPU supports a constant TSC, you can issue the command:

# grep -q constant_tsc /proc/cpuinfo && echo "Constant TSC detected" || echo "No Constant TSC Detected"
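
A slightly more detailed check reads the CPU flag list directly. Alongside constant_tsc it is worth looking for nonstop_tsc, which indicates that the TSC keeps ticking in deep sleep states. A sketch:

```shell
#!/bin/sh
# Inspect the kernel's CPU flag list for the two TSC-related flags.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null)
for f in constant_tsc nonstop_tsc; do
    case " $flags " in
        *" $f "*) echo "$f: present" ;;
        *)        echo "$f: absent"  ;;
    esac
done
```
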

Creation of Logical Networks on RHEV

Logical networks allow segregation of different types of traffic. According to the Red Hat training manual, it is recommended to have 3 types of networks:

  1. Management Network – Network to connect hypervisor (RHEV-H) nodes to the RHEV Manager
  2. Storage Network – Network to connect RHEV-H to NFS and iSCSI. If you are using FC, there is no need to create a separate network since the traffic already goes over the SAN
  3. Public Network – Connects the gateway router, the RHEV-M system and the RHEV-H nodes

 

Configure Logical Networks

Data Centre > Networks > New Logical Network

NewLogicalNetwork

 

Configure Network MAC Address Pool

1. View Current Settings

# rhevm-config -g MacPoolRanges
MacPoolRanges: 00:1a:4a:9f:7e:00-00:1a:4a:9f:7e:ff version: general

2. Set the MAC range to 52:54:00:56:00:01 – 52:54:00:56:00:FF

# rhevm-config -s MacPoolRanges=52:54:00:56:00:01-52:54:00:56:00:FF
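
Only the last octet varies across the pool configured above, so its 255 addresses can be enumerated with printf. A sketch printing the first five:

```shell
#!/bin/sh
# Enumerate MACs in 52:54:00:56:00:01 - 52:54:00:56:00:FF (last octet only).
i=1
while [ "$i" -le 5 ]; do
    printf '52:54:00:56:00:%02X\n' "$i"
    i=$((i + 1))
done
```
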

3. Restart the ovirt-engine service

# service ovirt-engine restart

Displaying SPICE on the VM network for RHEV 3.4

By default, the SPICE graphics server uses the management network to display the console, and usually the management network is not visible to users.

For RHEV 3.4, this can be easily resolved on the RHEV Manager console:

  1. Portal > Networks
  2. Click on the Network you wish SPICE graphic Server to display on
  3. Click “Manage Network”
  4. Click “Display Network”

 

Once configured, remember to REBOOT all the VMs to activate the changes.

RHEV_SPICE

 

 

Installing and Configuring Red Hat Enterprise Virtualisation

Step 1: Ensure that you have subscribed to Red Hat Virtualisation Channels. For more information, see Subscribing to Red Hat Virtualisation Manager Channels

Step 2: Install RHEVM packages.

This will take a while… about 1.6GB of downloads.

# yum install rhevm rhevm-reports

Step 3: Link the Directory Server to the Red Hat Enterprise Virtualisation Manager.

See Using the rhevm-manage-domains Tool to Join an AD Domain

Step 4: Configure the RHEV Manager

# rhevm-setup

Step 5: Go to the website – Administration Portal

RHEV_portal

Step 6: Log on to the Portal as admin

RHEV_Portal2

Step 7: Create Data Centre

Step 8: Create and Populate a New ISO NFS Storage Domain

Step 9: Creation of Logical Network

Step 10: Creation of Windows 7 with Virtio