VMware-NVIDIA AI-Ready Enterprise platform

NVIDIA and VMware have formed a strategic partnership to transform the data center and bring AI and modern workloads to every enterprise.

NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software, optimized, certified, and supported by NVIDIA to run on VMware vSphere with NVIDIA-Certified Systems. It includes key enabling technologies from NVIDIA for rapid deployment, management, and scaling of AI workloads in the modern hybrid cloud.

For more information, see NVIDIA AI Enterprise.

Virtual Machine 3D Support Option is Greyed Out for Windows 2008 and 2012 on VMware

The solution is taken from the VMware KB article Enable 3D Support option is greyed out in virtual machine video card settings (2092210).

Modify the VMX file of the Virtual Machine

1. Take a backup of the virtual machine's VMX file
2. Open the VMX file with a text editor and add this line at the end:

mks.enable3d = TRUE

*You can use vi to edit the VMX file.
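If you are working from the ESXi shell, a quick way to append the line is shown below; the datastore and VM folder names are only examples, so substitute the actual path to your VM's VMX file:

# echo 'mks.enable3d = TRUE' >> /vmfs/volumes/datastore1/MyVM/MyVM.vmx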

3. Check the VMID of the guest OS you are reloading:

# vim-cmd "vmsvc/getallvms"
Vmid        Name .........
2

4. Reload the virtual machine's configuration file by running this command:

# vim-cmd vmsvc/reload <VMID>
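In the example output above, the Vmid of the guest is 2, so the reload command would be:

# vim-cmd vmsvc/reload 2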

Adding and Removing the 2nd Mellanox Ethernet Port as an Uplink to an Existing vSwitch Using the CLI

In the vSphere 5.1 Client, I was able to see the dual-port network adapter (vmnic22.p1, vmnic22.p2) after following Installing Mellanox ConnectX® EN 10GbE Drivers for VMware® ESX 5.x Server.

However, I was not able to use the 2nd port of the Mellanox ConnectX 10G card under vSphere Client > Configuration > Networking; it was not visible. Likewise, under vSphere Client > Configuration > Networking > Add Networking, the 2nd port did not appear as available.

I found a Mellanox document (MellanoxMLX4_ENDriverforVMwareESXi-5.xREADME) that is useful for resolving the issue. From page 10:

Adding the Device as an Uplink to an Existing vSwitch Using the CLI

Step 1: Log in to the ESXi server with root permissions

Step 2: To add an uplink to a vSwitch, run:

# esxcli network vswitch standard uplink add -u <uplink_name> -v <vswitch_name>

* <uplink_name> refers to the name used by ESXi for the network adapter. For example, vmnic22.p2 is the uplink name.
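For example, to add the 2nd Mellanox port to a vSwitch named vSwitch0 (the vSwitch name here is only an example; use the name of your own vSwitch):

# esxcli network vswitch standard uplink add -u vmnic22.p2 -v vSwitch0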

Step 3: Check that the uplink was added successfully. Run:

# esxcli network vswitch standard list -v <vswitch_name>

Removing the Device as an Uplink from an Existing vSwitch Using the CLI

Step 1: Log into the ESXi server with root permissions

Step 2: To remove an uplink from a vSwitch, run:

# esxcli network vswitch standard uplink remove -u <uplink_name> -v <vswitch_name>
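Continuing the same example, removing the 2nd Mellanox port from vSwitch0 (again, substitute your own vSwitch name) would be:

# esxcli network vswitch standard uplink remove -u vmnic22.p2 -v vSwitch0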

Upgrading Mellanox ConnectX® EN 10GbE Drivers for VMware® ESX 5.x Server

Do read the blog entry Installing Mellanox ConnectX® EN 10GbE Drivers for VMware® ESX 5.x Server.

Step 1: At the VMware ESX 5.x hypervisor console,

  • Press F2
  • Select “Troubleshooting Options”
  • Select “Enable ESXi Shell” and “Enable SSH”

Step 2: Download the VMware ESXi 5.0 Driver for Mellanox ConnectX Ethernet Adapters

Step 3: Unzip the mlx4_en-mlnx-1.6.1.2-offline_bundle-471530.zip

Step 4: The upgrade process is similar to a new install, except the command that should be issued is the following:

# esxcli software vib update -v {VIBFILE}

In the example above, this would be:

# esxcli software vib update -v /tmp/net-mlx4-en-1.6.1.2-1OEM.500.0.0.406165.x86_64.vib
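After the upgrade, you can confirm that the driver VIB is registered at the new version; the grep filter below is just one convenient way to narrow the list:

# esxcli software vib list | grep mlx4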

Installing Mellanox ConnectX® EN 10GbE Drivers for VMware® ESX 5.x Server

If you have a Mellanox Technologies MT27500 Family [ConnectX-3] 10G Ethernet card, it may not be automatically detected by the VMware ESX 5.x hypervisor. You have to install the driver manually into VMware ESX 5.x.

Step 1: At the VMware ESX 5.x hypervisor console,

  • Press F2
  • Select “Troubleshooting Options”
  • Select “Enable ESXi Shell” and “Enable SSH”

Step 2: Download the VMware ESXi 5.0 Driver for Mellanox ConnectX Ethernet Adapters

Step 3: Unzip the mlx4_en-mlnx-1.6.1.2-offline_bundle-471530.zip

Step 4: Read the README file

VMware uses a file package called a VIB (VMware Installation Bundle) as the  mechanism for installing or upgrading software packages on an ESX server.

The file may be installed directly on an ESX server from the command line, or through the VMware Update Manager (VUM).

Step 5: For a New Installation (from the README, modified)

For new installs, you should perform the following steps:

Step 5a: Copy the VIB to the ESX server.  Technically, you can place the file anywhere that is accessible to the ESX console shell, but for these instructions, we’ll assume the location is in ‘/tmp’.

Here’s an example of using the Linux ‘scp’ utility to copy the file from a local system to an ESX server located at 10.10.10.10:

# scp net-mlx4-en-1.6.1.2-1OEM.500.0.0.406165.x86_64.vib root@10.10.10.10:/tmp

Step 5b: Issue the following command (full path to the VIB must be specified):

# esxcli software vib install -v {VIBFILE}

In the example above, this would be:

# esxcli software vib install -v /tmp/net-mlx4-en-1.6.1.2-1OEM.500.0.0.406165.x86_64.vib
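Once the install completes (the host usually needs a reboot before the new vmnics appear), you can confirm that the Mellanox ports are now visible to ESXi with:

# esxcli network nic list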

Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems

NetApp has written a technical paper, “Performance Report: Multiprotocol Performance Test of VMware® ESX 3.5 on NetApp Storage Systems”, on performance testing using FCP, iSCSI, and NFS on VMware ESX 3.5. Do read the article for the full details; I have listed only the summary.

Fibre Channel Protocol Summary

  1. FC achieved up to 9% higher throughput than the other protocols while requiring noticeably lower CPU utilization on the ESX 3.5 host compared to NFS and iSCSI.
  2. FC storage infrastructures are generally the most costly of all the protocols to install and maintain. FC infrastructure requires expensive Fibre Channel switches and Fibre Channel cabling in order to be deployed.

iSCSI Protocol Summary

  1. Using the VMware iSCSI software initiator, we observed performance was at most 7% lower than FC.
  2. Software iSCSI also exhibited the highest maximum ESX 3.5 host CPU utilization of all the protocols tested.
  3. iSCSI is relatively inexpensive to deploy and maintain, as it runs on a standard TCP/IP network.

NFS Protocol Summary

  1. NFS performance was at most 9% lower than FC. NFS also exhibited ESX 3.5 host CPU utilization that was on average higher than FC but lower than iSCSI.
  2. Running on a standard TCP/IP network, NFS does not require the expensive Fibre Channel switches, host bus adapters, and Fibre Channel cabling that FC requires, making NFS a lower-cost alternative.
  3. NFS provides further storage efficiencies by allowing on-demand resizing of datastores and by increasing the storage savings gained when using deduplication. Both of these advantages provide additional operational savings through storage simplification.

Installing Chelsio driver CD on an ESX 4.x host

This article is taken and modified from Installing the VMware ESX/ESXi 4.x driver CD on an ESX 4.x host (VMware Knowledge Base)

Step 1: Download the Chelsio Drivers for ESX

Download the relevant drivers for your specific cards from the Chelsio Download Centre.

Step 2: Follow the instructions from VMware

Note: This procedure requires you to place the host in Maintenance Mode, which requires downtime and a reboot to complete the installation. Ensure that any virtual machines that need to stay live are migrated, or plan for proper downtime if migration is not possible.
  1. Download the driver CD from the vSphere Download Center.
  2. Extract the ISO on your local workstation using a third-party ISO reader (such as WinISO). Alternatively, you can mount the ISO via SSH with these commands:

    mkdir /mnt/iso
    mount -o loop filename.iso /mnt/iso

    Note: Microsoft operating systems after Windows Vista include a built-in ISO reader.

  3. Use the Datastore Browser in the vSphere Client to upload the ZIP file that was extracted from the ISO to your ESX host.

    Alternatively, you can use a program like WinSCP to upload the file directly to your ESX host. However, you require root privileges to the host to perform the upload.

  4. Log in to the ESX host as root directly from the Service Console or through an SSH client such as PuTTY.
  5. Place the ESX host in Maintenance Mode from the vSphere Client.
  6. Run this command from the Service Console or your SSH Client to install the bundled package:

    esxupdate --bundle=<name of bundled zip> update

  7. When the package has been installed, reboot the ESX host by typing reboot from the Service Console.

Note: VMware does not endorse or recommend any particular third party utility, nor are the above suggestions meant to be exhaustive.
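As a rough sketch of steps 6 and 7 from the Service Console, assuming the extracted bundle was uploaded to /tmp and with the bundle name left as a placeholder (use the actual ZIP file name from the Chelsio download):

# esxupdate --bundle=/tmp/<name of bundled zip> update
# reboot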

Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 2)

Note for Installation of the VM:

Remember to add the PCI/PCIe device to the VM. Upon adding it, you should be able to see “10:00.4 | Chelsio Communications Chelsio T4 10GB Ethernet”. See the screenshot above.

Proceed with the installation of the VM; you should be able to see the Ethernet settings. Then proceed with the installation of the OFED and Chelsio drivers.

Information:

  1. Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 1)

Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 1)

For Part 1, the article is taken from Configuring VMDirectPath I/O pass-through devices on an ESX host. In Part 2, we will deal with the Chelsio T4 card configuration after the pass-through has been configured.

1. Configuring pass-through devices

To configure pass-through devices on an ESX host:
  1. Select an ESX host from the Inventory of VMware vSphere Client.
  2. On the Configuration tab, click Advanced Settings. The Pass-through Configuration page lists all available pass-through devices. Note: A green icon indicates that a device is enabled and active. An orange icon indicates that the state of the device has changed and the host must be rebooted before the device can be used.
  3. Click Edit.
  4. Select the devices and click OK. Note: If you have a chipset with VT-d, when you click Advanced Settings in the vSphere Client, you can select which devices are dedicated to VMDirectPath I/O.
  5. When the devices are selected, they are marked with an orange icon. Reboot for the change to take effect. After rebooting, the devices are marked with a green icon and are enabled. Note: The configuration changes are saved in the /etc/vmware/esx.conf file. The entry is recorded at the parent PCI bridge; if two devices are under the same PCI bridge, only one entry is recorded. In this example, the PCI slot number where the device was connected is 00:0b:0, and it is recorded as: /device/000:11.0/owner = “passthru”. Note: 11 is the decimal equivalent of the hexadecimal 0b. A quick way to verify this entry is shown below.
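After the reboot, one simple way to confirm the owner entry mentioned above is to search esx.conf from the console; this is shown only as a convenience check:

# grep -i passthru /etc/vmware/esx.conf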

2. To configure a PCI device on a virtual machine:

  1. From the Inventory in vSphere Client, right-click the virtual machine and choose Edit Settings.
  2. Click the Hardware tab.
  3. Click Add.
  4. Choose the PCI Device.
  5. Click Next. Note: When the device is assigned, the virtual machine must have a memory reservation for the full configured memory size; a sketch of the resulting VMX entries is shown below.
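For reference, a minimal sketch of how these settings typically appear in the VM's .vmx file; the memory value is a placeholder for the full configured memory size mentioned in the note above, and vSphere also records the device's PCI address and vendor/device IDs in additional pciPassthru0.* keys:

pciPassthru0.present = "TRUE"
sched.mem.min = "<full configured memory size in MB>"

You normally do not edit these by hand; they are shown only to make it easier to recognize a pass-through device when reviewing a VMX file.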

 

3. Information

  1. Configuring VMDirectPath I/O pass-through devices on an ESX host with Chelsio T4 Card (Part 2)