Thursday, August 22, 2013

Netapp SnapVault And OSSV Backup Implementation

Netapp SnapVault or OSSV software is another technology similar to the famous Netapp snapshot feature. This is a block-level incremental backup technology that allows you to back up all the typical types of clients: Windows, all flavors of Unix, VMware and, in some cases, database servers. What this software does, and what is really quite cool, is create a full point-in-time backup copy of the filesystems on its own disk-based storage. The best way to describe this product is to say that Netapp SnapVault is a heterogeneous disk-to-disk backup solution.
The Netapp SnapVault product provides an excellent backup solution. In the event of a massive data loss from some kind of corruption on your filer, the backed-up data can be restored from the SnapVault filer very quickly, which can greatly reduce downtime. SnapVault has some big advantages over traditional tape backups, the big one being tape related: you don't have to worry about media errors, problems finding tapes in the robotics, or even offsite tapes. You also avoid the cost of tapes, and possibly the cost of an entire tape backup system if you chose to buy a tape silo and software. The recovery time for restores is very fast compared to tape in most cases. Backup windows are almost a thing of the past; depending on how much data changes on your filer, backups could have almost no impact.
The SnapVault configuration consists of two entities. There is the notion of a SnapVault client as well as a SnapVault storage server. A SnapVault client in this case can be a Netapp filer, or an MS Windows or Unix server. Those are the clients that have data that needs to be backed up. The SnapVault storage server has to be a Netapp filer. This is where the data gets sent to and *backed up*.

Two Types of Backups

Netapp to Netapp Backups – As the name implies, this type of backup involves two Netapps, one being the actual backup server.
Server to Netapp Backups – The Open Systems SnapVault (OSSV) client software provided by Netapp needs to be installed on the client. With the Netapp SnapVault client software installed, the actual *backup server*, the SnapVault server, can retrieve the data from the client. The SnapVault software protects clients by maintaining multiple *snapshotted* versions of read-only copies on the SnapVault server. The replicated data found on the SnapVault server can be accessed by the client via either NFS or CIFS. This allows the client systems to restore entire folders or even a single file.
  • SnapVault replicates the data from a primary system path to qtrees on a SnapVault secondary filer.
  • A filer can act as a primary, a secondary or both. Possibly depending on its licenses, but I don’t think this needs to be licensed anymore.
  • The primary system can either be a filer or an open system with a SnapVault agent installed on it.
  • When the primary system is a filer, the path to be replicated can be a qtree, non-qtree data on a volume, or a volume path.
  • The SnapVault secondary manages a set of snapshots to preserve old versions of the data.
  • The replicated data on the secondary may be accessed via NFS or CIFS just like regular data.
  • The primary filers can restore qtrees directly from the secondary.
  • When the initial full backup is performed, the SnapVault filer stores the data on a qtree and creates a snapshot image of the volume.
When SnapVault is set up, a complete copy of the data set is initially pulled across the network to the SnapVault filer. This first baseline, or *full backup*, could take quite a while to complete depending on the amount of data to be processed, but each subsequent backup will transfer only the data blocks that have changed since the previous backup. SnapVault creates a new Snapshot copy with every transfer and allows you to keep a large number of copies according to a schedule configured by the backup administrator. Each backup copy uses disk space that is directly proportional to the number of data blocks that have changed from the previous copy.

Setup of a Netapp Filer to Filer SnapVault Configuration

First, the source filer: the filer with the data to be backed up. This assumes no licenses are needed.
Enable the SnapVault feature.
myfiler> options snapvault.enable on
myfiler> options snapvault.access host=mydestfiler 
Now for my destination filer configuration. Again, assuming no license. This is where all the backups are stored.
mydestfiler> options snapvault.enable on 
mydestfiler> options snapvault.access host=myfiler
It's best to disable regular snapshots on the destination volume, so disable the scheduled snapshots as follows:
 mydestfiler> snap sched storage_vault 0 0 0
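Note that these examples assume the destination volume storage_vault already exists on the secondary. If it doesn't, something along these lines would create it first; the aggregate name and size here are hypothetical:
mydestfiler> vol create storage_vault aggr1 2t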

Start SnapVault Process

We have to fire off our first, or initial, backup. This is the initial baseline, or full backup, and it will be a big one. The command sets the source as myfiler:/vol/mydata/qtree_2do and sends it to mydestfiler:/vol/storage_vault/qtree_2do.
mydestfiler> snapvault start -S myfiler:/vol/mydata/qtree_2do  mydestfiler:/vol/storage_vault/qtree_2do
I should note that if YOU DON'T DO A START, as in the snapvault start command, your schedules will not run. I show some schedule samples below. Now you can watch the status of the first full backup; as I said, it could take a while. This can be run on either Netapp filer, using the following command:
mydestfiler> snapvault status
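For completeness, restores are driven from the primary once the baseline and schedules are in place. Here is a hedged sketch using the same paths as above, run on the source filer to pull the qtree back from the secondary:
myfiler> snapvault restore -S mydestfiler:/vol/storage_vault/qtree_2do /vol/mydata/qtree_2do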

SnapVault Scheduling

All that's left to do is set up a schedule on the destination filer for regular backups using the snapvault snap sched command. Use schedule names prefixed with sv_ for SnapVault. Below we create the schedules: 1 hourly, 2 daily and 2 weekly SnapVault snapshots. On the secondary, add the -x flag when the schedule should also pull the changed data from the primary before the snapshot is taken. The syntax is confusing, but here is how it looks:
snapvault snap sched [ -f ] [ -x ] [ -o options ] [ volname [ snap_name [ schedule ]]] – and remember that the leading count@ part of the schedule is the number of snapshots to keep
Execute SnapVault backups Monday through Friday and keep 24 snapshot copies, with transfers beginning at 2 am:
mydestfiler> snapvault snap sched storage_vault sv_daily 24@mon-fri@2
Here is a working example. The commands below show how to create hourly, daily and weekly SnapVault snapshots.
mydestfiler> snapvault snap sched storage_vault sv_hourly 1@0-22
mydestfiler> snapvault snap sched storage_vault sv_daily  2@23
mydestfiler> snapvault snap sched storage_vault sv_weekly 2@sat@21
This will display your SnapVault schedule.
myfiler> snapvault snap sched
Get a list of all SnapVault snapshots
mydestfiler> snap list
If you have to modify a SnapVault schedule, re-run snapvault snap sched with the new values. On the secondary, include the -x flag so that the schedule also transfers the changed data from the primary before taking the snapshot:
mydestfiler> snapvault snap sched -x storage_vault sv_daily 4@mon-fri@2
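To sanity-check which qtrees and schedules are actually configured on the secondary, the status command also has a configuration view:
mydestfiler> snapvault status -c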

Netapp CLI NFS Config Commands


A quick and simple Netapp NFS configuration guide with commands and options to help explain and remove the mysteries. Netapps provide highly dependable NFS services; as the name implies, it's a network appliance. You really can just turn it on and not worry much about outages, unless someone trips on a power cord or two. Below is a compilation of exportfs and /etc/exports configuration options commonly used.
Rules for exporting resources
• Specify the complete pathname, meaning the path must begin with a /vol prefix
• You cannot export /vol, which is not a pathname to a file, directory or volume; export each volume separately
• When exporting a resource to multiple targets, separate the target names with a colon (:)
• Hostnames are resolved using DNS, NIS or /etc/hosts, in the order set in /etc/nsswitch.conf
Examples of exporting resources with NFS on the CLI
> exportfs -a
> exportfs -o rw=host1:host2 /vol/volxyz
Exportable resources can be a volume, a directory/qtree, or a file. Target examples from /etc/exports are shown below.
Host – use the hostname or IP address
/vol/vol0/home -rw=myhost
/vol/vol0/home -root=myhost,rw=hishost:therehost
Netgroup – use the NIS netgroup name. Although I love NIS, it is rare nowadays, but the Netapp supports it. Don't think NIS+.
/vol/vol0/home -rw=the-nisgroup
Subnet – specify the subnet address
/vol/vol0/home -rw="192.168.100.0/24"
DNS – use DNS subdomain
/vol/vol0/home -rw=".sap.dev.mydomain.com"
Displays all current exports in memory
> exportfs
To export all file system paths specified in the /etc/exports file
> exportfs -a
Adds exports to the /etc/exports file and to memory. Default export options are always "rw" (all hosts) and security set to "sec=sys"
> exportfs -p [options] path
> exportfs -p rw=hostxyz /vol/vol2/sap
Exports a file system path temporarily without adding a corresponding entry to the /etc/exports file; a handy short-term method.
> exportfs -i -o ro=hostB /vol/vol1/lun2
Reloads the exports from the /etc/exports file
> exportfs -r
Unexports all exports defined in the /etc/exports file
> exportfs -uav
Unexports a specific export
> exportfs -u /vol/vol2/homes
Unexports an export and removes it from /etc/exports file. This one is handy.
> exportfs -z /vol/vol0/home
To verify the actual path to which a volume is exported
> exportfs -s /vol/vol2/vms-data
To display list of clients mounting from the storage system
> showmount -a filerabc
 To display list of exported resources on the storage system
>showmount -e filerabc
 To check NFS target to access cache
> exportfs -c clientaddr path [accesstype] [securitytype]
> exportfs -c host1 /vol/vol2 rw
To remove access cache entries for a specific path
> exportfs -f [path]
To flush the entire access cache
> exportfs -f
Access restrictions specify what operations an NFS client can perform on a resource
• Default is read-write (rw) for all hosts with UNIX Auth_SYS (sys) security
• "ro" option provides read-only access to all hosts
• "ro=" option provides read-only access to specified hosts
• "rw=" option provides read-write access to specified hosts
• "root=" option specifies that root on the target host has root permissions
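Putting a few of these options together, here is a sketch of a single export rule; the path and host names are hypothetical. Add the line to /etc/exports and reload it with exportfs -r:
/vol/vol1/proj -sec=sys,rw=host1:host2,root=adminhost
> exportfs -r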

Netapp MultiMode VIFs and Load Balancing

Netapp MultiMode VIFs allow you, the administrator, to add some reliability and performance to your filer networking configuration. As with everything in the tech world, there are many different ways of doing things, and many ways to build and configure highly available, high-performance networks. Netapps support the Ethernet technology that is known in other environments as Ethernet trunking, bonding, EtherChannel or port channeling (Cisco): the joining of 2 or more physical links into one big, fast and available link. Unix systems have been doing this for many years, and Netapps use MultiMode VIFs for Ethernet channeling. This used to be a complicated technology that involved special handshaking with switches, and not all switches handled it well, since there were and still are different angles on this type of technology: load balanced, or active/passive so that if one link fails the other takes over, hopefully cabled to a different switch. It has since become very common, and the flavor used to increase performance on the Ethernet fabric is the MultiMode VIF. A MultiMode VIF is NetApp's term for EtherChannel or port channeling. Earlier I used the terms trunking and bonding, which are not really accurate; trunking usually refers to a form of VLAN trunking, so I won't use the trunk term any more.

Netapp Defined Types of MultiMode VIFs

Static MultiMode VIF
A Netapp static MultiMode VIF can be compared to a hard-coded or static EtherChannel: the physical links are hard coded into the channel. The ports do not negotiate and there is no auto-detection of the physical port status; the interfaces are simply told to channel. On the switch side, the static EtherChannel has to be enabled on each port.
The Netapp command to create a static MultiMode VIF is
ourfiler> vif create multi <vif_name> [-b {rr|mac|ip}] <interface_list>
Dynamic MultiMode VIF
A dynamic MultiMode VIF is an LACP EtherChannel. LACP is an acronym (as are all things tech) for Link Aggregation Control Protocol, and it complies with the IEEE 802.3ad standard for port channeling. LACP allows devices to exchange PDUs over the links that are channeled together. A dynamic MultiMode VIF is intelligent; it is dynamic in that link status is checked and appropriate action is taken if a physical link is down. There is handshaking, communication and negotiation between devices. A perfect example of this handshaking is the handling of some type of physical connection problem: the switch would remove the interface from the channel and, with the link aggregation protocol, a notification would be sent from the switch to the Netapp controller telling it that the interface was removed. This allows the Netapp controller to gracefully handle the situation; it removes its end of the link and does not blindly send frames down a broken link. This is in contrast to a static MultiMode VIF, which would just continue trying to transmit and would generate errors. Those errors would alert the highly trained admins, at which point they would manually remove the link, fix it, or take some other kind of action.
The command to create a dynamic MultiMode VIF is
ourfiler> vif create lacp <vif_name> [-b {rr|mac|ip}] <interface_list>

MultiMode VIF Load Balancing Options

MultiMode VIF Load Balancing with MAC Load Balancing
MAC-based load balancing is a rarer type of algorithm nowadays. There are many conditions which can cause traffic to be distributed unevenly, such as large bursts of traffic being sent on just a single link. MAC-based load balancing makes its calculations based on the source and destination MAC address pair: the source being the connecting host and the destination MAC being the one associated with the MultiMode VIF, seen in the output of the ifconfig command. When your filer is on a more complex network with multiple subnets and routers, MAC-based load balancing starts to show its cracks. Ethernet frames get rewritten when connections go through routers, therefore flat networks work best for this technology. If your filer is on a flat network, no routers etc., then this type of MultiMode VIF load balancing won't experience as many problems and is a viable option for a load balancing solution.
MultiMode VIF Load Balancing with Source and Destination Pairs
This MultiMode VIF load-balancing method is the most common and the one used by many of the Netapp MultiMode VIF setups today. Somewhat complex calculations are made to determine which link to use: the source and destination addresses are hashed and the result is divided by the actual number of physical interfaces or links in the VIF, and the remainder maps to one of the physical interfaces, which is then used. The ports are bonded for high availability, not aggregated single-stream throughput. If you had 2 x 1 Gig Ethernet interfaces bonded into one MultiMode VIF using EtherChannel, this wouldn't mean you would have 2 Gbps of throughput for a single conversation. It simply means that you would have 1 Gbps of throughput with an additional communication link for redundancy. Because the physical links are bonded into one VIF, all the physical links are used, but bandwidth doesn't increase proportionally to the number of links for any single flow: no single transmission burst on a physical link can exceed the 1 Gbps speed.
MultiMode VIF Load Balancing with Round Robin
Round-robin load balancing has been around for a long time, and it has its good and bad points. One of the shortcomings of round-robin technology is that it runs the risk of packets arriving out of order, causing untold and unnecessary messes. Oddly enough, in some rare cases you can find installations that utilize this feature and have worked for years without any problems. Round robin, as the name implies, sends ethernet frames out over the active links in the channel in turn, which theoretically provides an evenly distributed transmission. Problems arise when a frame on one link arrives after a later frame sent on another link. This could easily occur because of link congestion or something similar, and errors would be generated; it would essentially confuse the hell out of the system. Round robin is supported but very rarely used. I will leave it at that.


MultiMode VIF IP-Based Load Balancing
MultiMode VIF IP-based load balancing, I believe, is the default load balancing type. I also think it is the most common type of MultiMode VIF found in enterprise environments today. The principles behind it are more or less the same as the MAC-based algorithm I described above; the only real difference is that source and destination IP addresses are used instead of MAC addresses. Using MultiMode VIF IP-based load balancing results in a more evenly distributed network compared to MAC-based load balancing. MultiMode VIF IP-based load balancing uses the last octet of the IP address, so care must be taken when your environment has multiple subnets, as this can affect load balancing performance.
IP aliasing can also be used as a technique to distribute traffic over different links. Netapp VIFs and physical interfaces can have aliases placed on them; really, they are additional IP addresses mapped to the VIF. Typically you don't want more aliases than physical links on the VIF. Years ago, we used to connect each NIC to a separate segment or VLAN, so the filer had 8 different IP addresses for 8 different subnets, and 8 different segments too. This distributed the load nicely, and in some cases we had bonded NICs. That was in the 90's. The MultiMode VIFs are slicker for many reasons. It's important to coordinate your CIFS or NFS clients to use all the aliased IP addresses that have been assigned, e.g. nas0, nas1, nas2 and nas3; split your clients up to use these name/IP combos evenly. Multiple physical interfaces are partnered together for a good reason: failover in the event of a physical link failure of some kind, and so that failover of a controller will move the failed controller's interfaces to the remaining controller. When an alias is mapped to an interface and you have partnered the physical interfaces, the aliases always travel to the clustered controller in the event of a failover. The aliases themselves are not clustered or partnered, just the physical devices. This is important and ultimately the desired result.
Sample Static MultiMode VIF
SSH into filer01 and set up the MultiMode VIF using the first 4 interfaces, e0a through e0d.
# ssh filer01 -l netappguy
filer01> vif create multi our-vif01 -b ip e0a e0b e0c e0d
filer01> ifconfig our-vif01 172.16.31.10 netmask 255.255.255.0 mtusize 1500 partner cisco-partner-001
filer01> ifconfig our-vif01 alias 172.16.31.11 netmask 255.255.255.0
filer01> ifconfig our-vif01 alias 172.16.31.12 netmask 255.255.255.0
filer01> ifconfig our-vif01 alias 172.16.31.13 netmask 255.255.255.0
LACP Dynamic MultiMode VIF
SSH into filer01 and set up the MultiMode VIF using the first 4 interfaces, e0a through e0d.
# ssh filer01 -l netappguy
filer01> vif create lacp our-vif01 -b ip e0a e0b e0c e0d
filer01> ifconfig our-vif01 172.16.31.10 netmask 255.255.255.0 mtusize 1500 partner cisco-partner-001
filer01> ifconfig our-vif01 alias 172.16.31.11 netmask 255.255.255.0
filer01> ifconfig our-vif01 alias 172.16.31.12 netmask 255.255.255.0
filer01> ifconfig our-vif01 alias 172.16.31.13 netmask 255.255.255.0
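One thing to keep in mind: commands typed at the prompt do not survive a reboot. To make the VIF persistent, the same vif create and ifconfig lines need to go into /etc/rc on the filer's root volume; this sketch reuses the hypothetical names and addresses from the LACP example above:
vif create lacp our-vif01 -b ip e0a e0b e0c e0d
ifconfig our-vif01 172.16.31.10 netmask 255.255.255.0 mtusize 1500 partner cisco-partner-001
ifconfig our-vif01 alias 172.16.31.11 netmask 255.255.255.0
To verify the channel and see the state of each link, run:
filer01> vif status our-vif01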

Netapp – Common Deduplication Misconfiguration

Netapp deduplication misconfigurations are quite common, and nothing to be ashamed of. Netapps have a lot of features to work with, as does the tech world as a whole, and getting all these parts working together well is sometimes tricky. Implementing a simple dedupe schedule in your environment can save you time and, most importantly, disk. But there are some configurations which are not optimal and need to be planned out carefully. This is a quick, high-level look at what some of these misconfigurations are.

Never Enabling Deduplication on VMWare Implementations!

The first big one is never turning on dedupe for VMware workloads when the systems are initially set up, and also forgetting the -s or scan option. It is recommended that deduplication be enabled on all VMware configurations. When using the Virtual Storage Console (VSC) plugin for vCenter to create VMware datastores, the plugin always enables deduplication, and it is strongly recommended that dedupe be enabled right away for an assortment of reasons. When you enable deduplication on a NetApp volume, the controller starts tracking any new blocks that are written to that volume, and following a scheduled deduplication pass the controller will look at those new blocks and eliminate any duplicates.
But if you already had some VMs on that volume before deduplication was enabled, those VMs will never be examined or deduplicated, which will result in very poor deduplication savings. There is still a simple solution: the administrator can start a deduplication pass using the VSC with the "scan" option enabled. This can also be done from the command line with the "-s" switch.
The fix is to scan the volume using the -s flag
 > sis start -s /vol/my_vol_2b_deduped
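Once the scan is running, you can keep an eye on its progress and then see how much space came back; two quick checks against the same hypothetical volume:
> sis status /vol/my_vol_2b_deduped
> df -s /vol/my_vol_2b_deduped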

Misaligned VMware VM's

Misaligned VMs with differing operating systems on the same underlying storage subsystem are well known to cause poor deduplication results; this has been well documented. At a low level, if the starting offset of one guest VM operating system type is different from the starting offset of another, then almost no blocks will align and the data will not deduplicate effectively. Not only will your deduplication be less efficient, there will also be more load on your storage controller, and not just the NetApp controller: any storage array controller will be affected. There is a free tool for NetApp customers called MBRalign, which is part of the VSC, that will help remedy this problem. As you align your VMs, you will see your deduplication savings rise and your array controller's workload decrease.

LUN Reservations and Provisioning

Over the years thin provisioning has been beaten down and has had a bad reputation, not necessarily justified though. NetApp controllers have multiple levels of reservations, depending on the requirements with respect to VMware. One type is the volume reservation. A volume reservation carves space out of the large storage pool, which is the aggregate, and ensures that whatever object you place onto that volume has the space it needs. We create a LUN for VMware using this volume. In some cases, storage admins also reserve space for the LUN, which in turn removes that space from the available space in the volume. But there is no need to do this: you have already reserved the space with the volume reservation, so there is no need to reserve it again with the LUN reservation. If you use LUN reservation, it means that the unused space in the LUN will always consume the space reserved. That is, a 500GB LUN with space reservation turned on will consume 500GB of space with no data in it. Deduplicating a space-reserved LUN will win you some space from the data that was consumed, but the unused space will remain reserved.
Here is an interesting working example:
• 600GB LUN on a 1TB volume – the 600GB LUN is reserved, with no data on the LUN
• Volume shows 600GB used
• Add 270GB of data onto the LUN
• Volume still shows 600GB used
• Dedupe the 270GB of data down to 100GB
• Volume reports 430GB used, since it reclaimed 170GB from the operation
• Remove the LUN reservation: the data only takes up 100GB and the volume reports 900GB free
Simply removing the LUN reservation will result in an actual saving from deduplication, and this can be done on a live volume with VMs running. Once the final deduplication savings are visible, typically within the 60-70% range, the volume size can be adjusted to match the actual use of the LUN, based on the actual amount of data on the LUN. BTW, like all things Netapp, volumes can be resized on a live system too.
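For reference, both steps are single commands in 7-mode; the LUN path, volume name and new size here are hypothetical:
myfiler> lun set reservation /vol/vol1/vmware_lun disable
myfiler> vol size vol1 800g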

Large amounts of data in the VMs

As a design option, there are times when the VMDK files are used for both boot and data. Not necessarily a misconfiguration, it is more of a design choice: it is a much simpler design to keep the data and boot for one VM in a single folder. With this configuration, systems are still able to achieve high deduplication ratios when application data is mixed with the operating system data blocks. The problem arises when there are large data files, like the ones used by databases, image repositories or mailboxes for mail servers. Large data files do not deduplicate very well and lower the overall efficiency. The NetApp will still deduplicate the operating system blocks as well as the data blocks around these large sections of used blocks.

Netapp Deduplication Explained

NetApp and the WAFL Filesystem
Any operating system needs some kind of a filesystem to interface with storage. Unixes traditionally used AdvFS, UFS or VxFS. Linux distros use assorted ext-type filesystems. Newer Solaris systems use ZFS (which I personally think is the greatest filesystem ever!); ZFS is actually an LVM and a filesystem in a box. I digress. Netapps have their own filesystem called WAFL, an acronym for Write Anywhere File Layout. It's an approach to writing data to storage locations that tries to minimize the conventional parity RAID write penalties. This is done by storing system metadata, such as inode maps, block maps, and inodes, in a structured fashion such that the filesystem is able to write file system metadata blocks anywhere on the disk. Blocks are never updated in place; new blocks are created and the pointers are changed to point to the newly created blocks. This is different from many of the traditional Unix filesystems. The WAFL filesystem can then, in theory, line up multiple writes to the same RAID stripes. This type of striping is ideal for Netapp's RAID 4 parity scheme, which uses dedicated parity drives. Conventional thinkers, philosophers and techies have historically ragged on RAID 4, but as far as I'm concerned, Netapp's implementation of the WAFL filesystem with its full-stripe writes has won me over. I won't speak for the philosophers and other techies, but I bet you will find they agree with me. So the above description is a good high-level look at Netapp's WAFL filesystem, and it lays the foundation for deduplication.

Netapp’s Deduplication

Netapp's deduplication, as the name implies, removes duplication. At a high level, it's really as simple as that; at a low level, tons of stuff happens. Deduplication doesn't happen at the WAFL level, or even the aggregate level. Data ONTAP's deduplication feature only applies at the volume level, and more specifically to the FlexVol type of volume, not the traditional volume. Netapp's FlexVol isn't free, you do have to fork out some money for it. If you are seriously interested in saving and recovering a lot of wasted space, like when you are working with a VMware installation for example, deduplication fits the bill. VMware does an astoundingly good job at duplicating and wasting space, and deduplication works superbly in this environment. Any data within a single NetApp volume is fair game for being deduplicated, whether it's raw data or LUNs on a volume. NetApp's deduplication feature can run as a post process, triggered by a watermark setting for a volume, or run as a scheduled job. Using a hypothetical watermark of 75% of capacity, deduplication would start automatically if that watermark was reached. Also, as I said, a deduplication run can be scheduled for any day of the week, time, etc., which is good because you can schedule this type of intensive work during off-peak hours. Deduplication uses a fairly involved algorithm to determine whether a block is worthy of being deduplicated. It works at the block level on the active volume and uses the "world class" WAFL block-sharing mechanism. Each data block has a digital signature which is compared with all other signatures on the volume. If a data block's signature is an exact match, a byte-for-byte comparison is done for all of the data within the matching block. If the byte-for-byte comparison proves the blocks are identical, the duplicate block is tossed out the window and the disk space is reclaimed. Being tossed out the window is a little too simple: in real life, reclaiming a duplicate data block involves updating the indirect inode that pointed to it and incrementing the block reference count for the block that's being kept, which ultimately releases the duplicate block. For the record, an initial deduplication run has to be done to create the digital signatures for comparison. It's a very effective house cleaning technique, one which admittedly I should apply at home as well.
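To make this concrete, here is a hedged sketch of enabling deduplication on a FlexVol, giving it an off-peak schedule, kicking off the initial scan, and checking the savings; the volume name is hypothetical:
myfiler> sis on /vol/vol1
myfiler> sis config -s sun-sat@23 /vol/vol1
myfiler> sis start -s /vol/vol1
myfiler> df -s /vol/vol1
Using auto in place of the day@hour schedule makes a pass kick off once enough new data has accumulated, which lines up with the watermark behavior described above.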

Netapp Qtree Creation Options From the Command Line

You need a volume to work with. In this case, a volume called vol1 was created.
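For reference, creating that volume is a one-liner; a quick sketch with a hypothetical aggregate name and size:
myfiler> vol create vol1 aggr0 500g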
I create a qtree
myfiler> qtree create /vol/vol1/tomcat-5.6-dev
The next example shows how to create a qtree named java-src on the root volume and set its security style to the ntfs security model. FWIW, this is just showing the syntax if the root volume, vol0, is used; I wouldn't recommend doing that. Create a new volume as demonstrated above and use vol1 or something like that. So here is how it looks:
myfiler> qtree create java-src
myfiler> qtree security java-src ntfs
myfiler> cifs shares -add java-dev /vol/vol0/java-src
myfiler> cifs access java-dev BobTheAdmin Full Control
myfiler> qtree create /vol/vol1/forth-asm-src
myfiler> qtree security /vol/vol1/forth-asm-src ntfs
myfiler> cifs shares -add forth-dev /vol/vol1/forth-asm-src
myfiler> cifs access forth-dev BobTheAdmin Full Control
This sets the security style of the root volume vol0 to unix.
myfiler> qtree security / unix
Configures vol1's security style to use the unix security model
myfiler> qtree security /vol/vol1/ unix
This disables oplocks for the apache-mods qtree
myfiler> qtree oplocks /vol/vol1/apache-mods disable
The following example enables oplocks for the vol1 volume.
myfiler> qtree oplocks /vol/vol1/ enable
Displays the status for all volumes and qtrees. It will show the security style, oplocks attribute, and SnapMirror status for all volumes and qtrees on the filer
myfiler> qtree status
Displays the security style, oplocks attribute, and SnapMirror status for vol1
myfiler> qtree status vol1
Shows stats for the qtree. NFS ops, CIFS ops etc..
myfiler> qtree stats
Displays the statistics for the qtrees in vol1 of the filer
myfiler> qtree stats vol1
How to clear the qtree statistics counters for the qtrees in the vol_sap volume on the filer
myfiler> qtree stats -z vol_sap

Netapp Bulk Qtree Creation Script

Creating qtrees in bulk is something that has to be done rarely, but when it does, it's easier to do with a script rather than one command at a time. I've only had to do this twice in my Netapp career, but it was a lifesaver both times. Below is a simplified view of the process.
Create a file with any name you want; I called it bulk-qtrees. It should contain the same command structure that would be used to create qtrees from the command line on the filer. Mine looks like:
qtree create /vol/vol1/tomcat-6.1-BOMG
qtree create /vol/vol1/tomcat-6.1-BOME
qtree create /vol/vol1/tomcat-6.1-BOME3
qtree create /vol/vol1/tomcat-6.1-BOMU
qtree create /vol/vol1/tomcat-6.1-BOM8
qtree create /vol/vol1/WAR-Files-DEBUG1
qtree create /vol/vol1/WAR-Files-DEBUG-dev
qtree create /vol/vol1/WAR-Files-DEBUG-staging
qtree create /vol/vol1/WAR-Files-DEBUG-prod
qtree create /vol/vol1/science-data-test
qtree create /vol/vol1/science-data-ab
qtree create /vol/vol1/science-data-trusted
qtree create /vol/vol1/science-data-imported-south
qtree create /vol/vol1/science-data-imported-south-w
qtree create /vol/vol1/science-data-imported-south-e
qtree create /vol/vol1/science-data-imported-south-central
qtree create /vol/vol1/science-data-export-prod
qtree create /vol/vol1/science-data-export-prod-sync
qtree create /vol/vol1/science-data-export-prod-out
qtree create /vol/vol1/science-data-export-prod-dup
qtree create /vol/vol1/web-int-dev
qtree create /vol/vol1/web-int-stage
qtree create /vol/vol1/web-int-prod
qtree create /vol/vol1/web-ext-dev
qtree create /vol/vol1/web-ext-stage
qtree create /vol/vol1/web-ext-prod
qtree create /vol/vol1/web-val-dev
qtree create /vol/vol1/web-val-stage
qtree create /vol/vol1/web-val-prod

Process the Qtree Creation Script

Place the file you just created above on /vol/vol0, which should be mounted with root permissions on some system. SSH into the filer and run the following command from the command line:
> source /vol/vol0/bulk-qtrees
Your qtrees are now created.
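If the qtree names follow a pattern, like the web-* entries above, the input file can be generated on the admin host with a small shell loop instead of typing each line; a sketch assuming a bash or sh shell and the hypothetical names from this example:
# for app in web-int web-ext web-val; do for env in dev stage prod; do echo "qtree create /vol/vol1/$app-$env"; done; done >> bulk-qtrees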

Netapp Failed Disk Rebuild Time Factors

If a disk fails outright, as in it is truly a dead drive from some type of hardware component failure, this requires reconstruction using the parity information stored across the RAID group. This is the life blood of RAID-DP, Netapp's double-parity RAID. On smaller, busy systems, a rebuild will take longer because the reconstruction process is given lower priority than real work in order not to impose additional performance overhead. There are many factors that influence the rebuild time. Rebuilds of large RAID groups will take longer because there are that many more blocks to calculate parity for, so you can count on this affecting the time to rebuild. The number of back-end FC-AL loops is very important if you have a large aggregate and a busy system: on an FC-AL type filer, the more FC-AL loops you have, the more bandwidth you have to handle the increased I/O required to rebuild. FC-AL goes up to 400-500 MB/s and this has to be shared. Other factors include the type of disk that failed, the storage controller model, and even the version of Data ONTAP, all of which can impact rebuild times.
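The priority trade-off mentioned above is tunable, and the rebuild can be watched from the CLI; a hedged sketch using standard 7-mode commands (the perf_impact values are low, medium and high):
ourfiler> options raid.reconstruct.perf_impact medium
ourfiler> aggr status -r
ourfiler> sysconfig -r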
Most Netapp disk failures are seen as soft failures, where too many blocks are flagged as bad rather than a fatal hardware failure; this is the old bad-block-table-is-full kind of mess. From Data ONTAP 7.1 onwards, these failed drives take advantage of Rapid RAID Recovery, which rebuilds by copying the good blocks to a spare drive. This helps speed up the rebuild and recovery time, in some cases significantly.