NetApp Virtual Storage Console (VSC) 5.0 Plugin for vCenter 5.5

VSC 5.0 is a major release that includes a new look and seamless integration with the VMware vSphere Web Client. New features in this release include support for the following:

  • The VMware vSphere Web Client
  • VASA Provider for clustered Data ONTAP®
  • SnapVault integration as a backup job option for clustered Data ONTAP
  • Adding a virtual machine or datastore to an existing backup job
  • Numerous bug fixes

VSC 5.0 discontinues support for the following:

  • vCenter 5.1 and earlier
  • VMware Desktop client
  • 32-bit Windows installations
  • mbralign
  • Single File Restore
  • Datastore Remote Replication
  • Flash Accel

To download, see http://mysupport.netapp.com/NOW/download/software/vsc_win/5.0/

For documentation, see NetApp Virtual Storage Console (VSC) 5.0

Tracking NetApp Cluster-Mode Performance

To track the performance of NetApp storage running in Cluster-Mode, use the statistics show-periodic command:

netapp-cluster1::> statistics show-periodic
cluster:summary: cluster.cluster: 9/9/2014 09:33:29
cpu    total                   data     data     data cluster  cluster  cluster     disk     disk
busy      ops  nfs-ops cifs-ops busy     recv     sent    busy     recv     sent     read    write
---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
5%      303      303        0   2%   4.86MB    223KB      0%   1.16MB   1.17MB    685KB    571KB
5%      312      312        0   3%   8.27MB    359KB      0%   1.11MB   1001KB    679KB   39.4KB
8%      300      300        0   2%   7.29MB    495KB      0%   2.87MB   3.30MB   2.66MB   59.1KB
6%      158      158        0   1%   3.53MB    168KB      0%   2.16MB   1.51MB   2.71MB   11.1MB
5%      184      184        0   2%   4.48MB   1.22MB      0%   1.99MB   1.97MB   1.21MB   10.9MB
5%      213      213        0   1%   2.82MB    222KB      0%    902KB    749KB    240KB    671KB
3%      144      144        0   1%   2.32MB    762KB      0%    559KB    685KB   96.6KB   15.8KB
4%      199      199        0   1%   3.73MB    881KB      0%    796KB    715KB    390KB   39.6KB
7%      164      164        0   1%   4.49MB    365KB      0%   2.34MB   2.43MB   2.52MB   8.33MB
7%      115      115        0   2%   4.07MB    154KB      0%   1.23MB   1.25MB   2.41MB   9.80MB
3%      224      224        0   1%   2.72MB    163KB      0%   1.80MB    721KB    407KB    996KB
4%      220      220        0   1%   4.38MB   1.32MB      0%    451KB   1.54MB    199KB    110KB
5%      124      124        0   1%   2.97MB    157KB      0%    315KB    273KB    251KB   15.8KB
7%      153      153        0   0%   1.76MB    139KB      0%    220KB    268KB   2.54MB   1.28MB
4%      120      120        0   0%   1.30MB   80.4KB      0%    417KB    325KB   2.86MB   13.9MB
cluster:summary: cluster.cluster: 9/9/2014 09:34:01
cpu    total                   data     data     data cluster  cluster  cluster     disk     disk
busy      ops  nfs-ops cifs-ops busy     recv     sent    busy     recv     sent     read    write
---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
Minimums:
3%      115      115        0   0%   1.30MB   80.4KB      0%    220KB    268KB   96.6KB   15.8KB
Averages for 15 samples:
5%      195      195        0   1%   3.93MB    451KB      0%   1.22MB   1.19MB   1.32MB   3.84MB
Maximums:
8%      312      312        0   3%   8.27MB   1.32MB      0%   2.87MB   3.30MB   2.86MB   13.9MB
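If you capture the command output above to a file, the samples can be parsed and averaged offline as a quick sanity check. The following is a minimal Python sketch, not a NetApp tool; it assumes the clustered Data ONTAP 8.2 column layout shown above, where each sample row begins with the cpu-busy percentage.

```python
# Parse captured `statistics show-periodic` output and average the
# cpu-busy and total-ops columns. Sketch only; assumes the column
# layout shown above (cpu busy first, total ops second).

def parse_samples(text):
    samples = []
    for line in text.splitlines():
        fields = line.split()
        # Sample rows start with a percentage, e.g. "5%"; headers and
        # separator rows do not. Note the min/avg/max summary rows
        # also match this shape, so trim them from a full capture.
        if fields and fields[0].endswith("%") and fields[0][:-1].isdigit():
            cpu_busy = int(fields[0].rstrip("%"))
            total_ops = int(fields[1])
            samples.append((cpu_busy, total_ops))
    return samples

def averages(samples):
    n = len(samples)
    return (sum(c for c, _ in samples) / n,
            sum(o for _, o in samples) / n)

# Two sample rows copied from the capture above.
output = """\
5%      303      303        0   2%   4.86MB    223KB      0%   1.16MB   1.17MB    685KB    571KB
8%      300      300        0   2%   7.29MB    495KB      0%   2.87MB   3.30MB   2.66MB   59.1KB
"""
print(averages(parse_samples(output)))  # (6.5, 301.5)
```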

Assigning ownership to disks for NetApp Data ONTAP 8.2P2 Cluster-Mode

After replacing failed disks in a NetApp storage system, you can manually assign ownership to the newly replaced but unassigned disks by using the following commands.

1. Show Storage Ownership Information

My-NetApp-Cluster::> storage disk show -spare
Original Owner: acai-cluster1-01
  Checksum Compatibility: block
                                                            Usable Physical
    Disk            HA Shelf Bay Chan   Pool  Type    RPM     Size     Size Owner
    --------------- ------------ ---- ------ ----- ------ -------- -------- --------
    cluster1-01:0b.00.9
                    0b     0   9    B  Pool0  BSAS   7200   1.62TB   1.62TB cluster1-01
Original Owner: cluster1-02
  Checksum Compatibility: block
                                                            Usable Physical
    Disk            HA Shelf Bay Chan   Pool  Type    RPM     Size     Size Owner
    --------------- ------------ ---- ------ ----- ------ -------- -------- --------
    cluster1-02:0a.00.7
                    0a     0   7    B  Pool0  BSAS   7200   1.62TB   1.62TB cluster1-02
.....
.....

2. Display all unowned disks by entering the following command:

My-NetApp-Cluster::> storage disk show -container-type unassigned


                     Usable           Container
Disk                   Size Shelf Bay Type        Position   Aggregate Owner
---------------- ---------- ----- --- ----------- ---------- --------- --------
cluster1-01:0b.00.9
                     1.62TB     0   9 spare       present    -         cluster1-01
cluster1-02:0a.00.7
                     1.62TB     0   7 spare       present    -         cluster1-02

3. Assign each disk by entering the following command:

My-NetApp-Cluster::> storage disk assign -disk cluster1-01:0b.00.9 -owner cluster1-01
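When many disks have been replaced, typing each assign command by hand gets tedious. As a hedged sketch (not a NetApp utility), the commands can be generated from the captured unowned-disk listing in step 2; here the owner is simply inferred from the node prefix of the disk name, which assumes each disk should go back to the node it is attached to — adjust if your environment differs.

```python
# Generate `storage disk assign` commands from the wrapped two-line
# `storage disk show -container-type unassigned` output above.
# Sketch only: assumes disk names look like "<node>:<port>.<shelf>.<bay>"
# and that each disk should be assigned to its attached node.

def assign_commands(listing):
    cmds = []
    for line in listing.splitlines():
        line = line.strip()
        # Disk-name rows contain a colon ("cluster1-01:0b.00.9");
        # the wrapped detail rows and separators do not.
        if ":" in line and not line.startswith("-"):
            disk = line.split()[0]
            node = disk.split(":")[0]  # e.g. cluster1-01
            cmds.append(f"storage disk assign -disk {disk} -owner {node}")
    return cmds

# Listing copied from step 2 above.
listing = """\
cluster1-01:0b.00.9
                     1.62TB     0   9 spare       present    -         cluster1-01
cluster1-02:0a.00.7
                     1.62TB     0   7 spare       present    -         cluster1-02
"""
for cmd in assign_commands(listing):
    print(cmd)
```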

Identifying broken disks for NetApp Data ONTAP 8.2P2 Cluster-Mode

1. Identify the Cluster Nodes

My-NetApp-Cluster::> cluster show

Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01      true    true
cluster1-02      true    true
cluster1-03      true    true
cluster1-04      true    true
4 entries were displayed.

2. Check for Broken Disk

My-NetApp-Cluster::> run -node cluster1-01 vol status -f
RAID Disk Device   HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
--------- ------   ------------- ---- ---- ---- ----- --------------    --------------
failed   0a.00.12 0a    0   12  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816 
failed   0b.00.9  0b    0   9   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
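If you capture the nodeshell output above, the failed device names can be pulled out programmatically, for example to feed a monitoring or alerting script. A minimal sketch, assuming the `vol status -f` layout shown above where failed rows begin with the word "failed":

```python
# Extract failed disk devices from captured `vol status -f` output.
# Sketch only; assumes the nodeshell layout shown above.

def failed_disks(text):
    disks = []
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[0] == "failed":
            disks.append(fields[1])  # Device column, e.g. "0a.00.12"
    return disks

# Output copied from step 2 above.
output = """\
RAID Disk Device   HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
failed   0a.00.12 0a    0   12  SA:A   -  BSAS  7200 1695466/3472315904 1695759/3472914816
failed   0b.00.9  0b    0   9   SA:B   -  BSAS  7200 1695466/3472315904 1695759/3472914816
"""
print(failed_disks(output))  # ['0a.00.12', '0b.00.9']
```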

3. Get System Information

My-NetApp-Cluster::> run -node cluster1-01 sysconfig -a
NetApp Release 8.2P2 Cluster-Mode: Sat Jul 20 20:31:47 PDT 2013
.....
.....
.....

4. Get further Information

My-NetApp-Cluster::> run -node cluster1-01 sysconfig -r
Aggregate storage1_aggr1 (online, mixed_raid_type, hybrid) (block checksums)
.....
.....

Multiprotocol Performance Test of VMware ESX 3.5 on NetApp Storage Systems

NetApp has written a technical paper, “Performance Report: Multiprotocol Performance Test of VMware® ESX 3.5 on NetApp Storage Systems”, on performance testing using FCP, iSCSI, and NFS on VMware ESX 3.5. The full article is worth reading for the details; I have listed only the summary.

Fibre Channel Protocol Summary

  1. FC achieved up to 9% higher throughput than the other protocols while requiring noticeably lower CPU utilization on the ESX 3.5 host compared to NFS and iSCSI.
  2. FC storage infrastructures are generally the most costly of all the protocols to install and maintain. FC infrastructure requires expensive Fibre Channel switches and Fibre Channel cabling in order to be deployed.

iSCSI Protocol Summary

  1. Using the VMware iSCSI software initiator, we observed performance was at most 7% lower than FC.
  2. Software iSCSI also exhibited the highest maximum ESX 3.5 host CPU utilization of all the protocols tested.
  3. iSCSI is relatively inexpensive to deploy and maintain, as it runs on a standard TCP/IP network.

NFS Protocol Summary

  1. NFS performance was at most 9% lower than FC. NFS also exhibited maximum ESX 3.5 host CPU utilization that was on average higher than FC but lower than iSCSI.
  2. Running on a standard TCP/IP network, NFS does not require the expensive Fibre Channel switches, host bus adapters, and cabling that FC requires, making NFS the lower-cost alternative of the two protocols.
  3. NFS provides further storage efficiencies by allowing on-demand resizing of datastores and increasing the storage savings gained when using deduplication. Both of these advantages yield additional operational savings through storage simplification.
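The relative numbers in the three summaries can be made concrete with a little arithmetic. Taking FC throughput as the baseline — the 1000 MB/s figure below is purely illustrative; only the percentage deltas come from the report:

```python
# Worst-case relative protocol throughput per the report's summary:
# iSCSI at most 7% below FC, NFS at most 9% below FC.
# The FC baseline is a made-up illustration, not a measured value.

fc_baseline = 1000.0  # hypothetical MB/s
worst_case_delta = {"FC": 0.0, "iSCSI": 0.07, "NFS": 0.09}

for proto, delta in worst_case_delta.items():
    print(f"{proto}: {fc_baseline * (1 - delta):.0f} MB/s worst case")
```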