Friday, November 8, 2013

How to improve NetApp storage performance

How to improve NetApp storage performance?

There is no single answer to this question, but performance can be improved in several ways:
If a volume or LUN resides on an ATA/SATA disk aggregate, migrate it to an FC/SAS disk aggregate.
For NFS/CIFS, instead of serving traffic from a single interface, configure a multi-mode VIF for better bandwidth and fault tolerance.
It is always advised to keep aggregate/volume utilization below 90%.
Avoid backing up multiple volumes at the same point in time.
Aggregate/volume/LUN reallocation can be run to redistribute data across disks for better striping performance (see the sample commands after this list).
Schedule scrubbing and deduplication scans after business hours.
Avoid connecting different types of shelves in the same loop.
Avoid mixing disks of different speeds or types in the same aggregate.
Always keep sufficient spare disks to replace failed disks; otherwise reconstruction takes longer and hurts performance.
Keep firmware/software at the versions recommended by NetApp.
Use NearStore functionality so that backups are not taken directly from the source filer.
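
A minimal sketch of related Data ONTAP 7-mode commands, assuming a hypothetical volume vol1 and interfaces e0a/e0b (all names are illustrative):

vif create multi vif1 e0a e0b     (multi-mode VIF for NFS/CIFS traffic)
reallocate start -f /vol/vol1     (redistribute blocks for better striping)
sysstat -x 1                      (watch CPU, disk, and protocol load)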

Unable to map a LUN to a Solaris server, but there is no issue on the Solaris server side. How do you resolve the issue?

FROM THE STORAGE SIDE (example commands follow this checklist):
Verify the iSCSI/FCP license is added on the storage system.
Verify the iSCSI/FCP session is logged in from the server side.
Verify the LUNs are mapped to the corresponding igroup.
Verify the correct host type was specified while creating the igroup and LUN.
Verify the correct IQN/WWPN is added to the igroup.
Verify zoning is properly configured on the switch side.
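
A few 7-mode commands that help verify these points:

license                  (confirm iscsi/fcp is licensed)
igroup show              (check igroup type, ostype, and initiators)
lun show -m              (check LUN-to-igroup mappings)
iscsi session show       (or: fcp show initiators)
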
How to create a LUN for a Solaris server?

lun create -s <size> -t solaris /vol/vol1/lunname
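
A hedged end-to-end sketch, assuming a hypothetical 100 GB LUN, an igroup named sol_igroup, and a placeholder Solaris IQN:

igroup create -i -t solaris sol_igroup iqn.1986-03.com.sun:01:example
lun create -s 100g -t solaris /vol/vol1/sol_lun1
lun map /vol/vol1/sol_lun1 sol_igroup 0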

How do you create a qtree and set its security style?

qtree create /vol/vol1/qtreename
qtree security /vol/vol1/qtreename unix|ntfs|mixed

How do you copy data from one filer to another?

Use ndmpcopy or SnapMirror.
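
A minimal ndmpcopy sketch, assuming a hypothetical path /vol/vol1/home and a destination filer named dst_filer:

ndmpcopy /vol/vol1/home dst_filer:/vol/vol1/home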

How to resize the aggregate?

aggr add <aggrname> <number-of-disks>
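
For example (the aggregate name and disk count are illustrative):

aggr status -s     (check available spares first)
aggr add aggr1 4   (add four disks to aggr1)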

How to increase the size of a volume?

Traditional volume:

vol add <volname> <number-of-disks>

Flexible volume:

vol size <volname> +60g
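
For example, growing a hypothetical FlexVol named vol1 by 60 GB and then verifying:

vol size vol1 +60g
df -h vol1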

What is a qtree?

A qtree is a logical partition of a volume.

What is the default snap reserve in aggregate?

5%

What is a Snapshot?

A read-only copy of the active file system.

Which RAID types does NetApp support, and what is the difference between them?

RAID-DP (double parity: row parity plus a diagonal-parity disk) and RAID4 (striping with a single dedicated parity disk).

What are the protocols you are using?

Mention the protocols you use, for example NFS, CIFS, iSCSI, and FC.

What is the difference between iSCSI and FCP?

iSCSI sends SCSI blocks over TCP/IP.

FCP sends SCSI blocks over a Fibre Channel medium.

What is the iSCSI port number you are using?

860 and 3260

What is the difference between ndmp copy and vol copy?

ndmpcopy uses NDMP (Network Data Management Protocol, the same protocol used for tape backup) to copy data.

vol copy transfers an entire volume to the same or another aggregate.
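
A vol copy sketch, assuming a hypothetical source volume vol1 and an already-created, restricted destination volume vol1_copy:

vol restrict vol1_copy
vol copy start vol1 vol1_copy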

What is the difference between ONTAP 7 & 8?

In ONTAP 7, an individual (32-bit) aggregate is limited to a maximum of 16 TB, whereas ONTAP 8 supports the new 64-bit aggregates, so an individual aggregate can extend to 100 TB (depending on the platform).

What are the steps need to perform to configure SnapMirror?

The SnapMirror configuration process consists of the following four steps:

a. Install the SnapMirror license on the source and destination systems: license add <code>

b. On the source, specify the host name or IP address of the SnapMirror destination systems you wish to authorize to replicate this source system.

options snapmirror.access host=dst_hostname1,dst_hostname2

c. For each source volume or qtree to replicate, perform an initial baseline transfer. For volume SnapMirror, restrict the destination volume first:

vol restrict dst_vol

Then initialize the volume SnapMirror baseline, using the following syntax on the destination:

snapmirror initialize -S src_hostname:src_vol dst_hostname:dst_vol

For a qtree SnapMirror baseline transfer, use the following syntax on the destination:

snapmirror initialize -S src_hostname:/vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree

d. After the initial transfer completes, set the SnapMirror mode of replication by creating the /etc/snapmirror.conf file in the destination's root volume.
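
A hedged example of a /etc/snapmirror.conf entry, assuming hypothetical filers fas1 and fas2 and a destination volume vol1_mirror; the fields are source, destination, arguments, and a minute/hour/day-of-month/day-of-week schedule (this one updates at 15 minutes past every hour):

fas1:vol1 fas2:vol1_mirror - 15 * * *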


While performing a baseline transfer you get an error message. What troubleshooting steps do you take?

Check that both hosts are reachable by running the "ping" command
Check whether TCP ports 10566 and 10000 are open
Check whether the SnapMirror license is installed on both the source and the destination

Explain the different types of replication modes.

The SnapMirror Async mode replicates Snapshot copies from a source volume or qtree to a destination volume or qtree. Incremental updates are based on a schedule or are performed manually using the snapmirror update command. Async mode works with both volume SnapMirror and qtree SnapMirror.

SnapMirror Sync mode replicates writes from a source volume to a destination volume at the same time they are written to the source volume. SnapMirror Sync is used in environments that have zero tolerance for data loss.

SnapMirror Semi-Sync provides a middle-ground solution that keeps the source and destination systems more closely synchronized than Async mode, but with less impact on performance.
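
As a hedged illustration, the mode is selected per relationship in /etc/snapmirror.conf (hostnames and volume names below are hypothetical):

fas1:vol1 fas2:vol1_mirror - 0 23 * *     (async: update daily at 23:00)
fas1:vol2 fas2:vol2_mirror - sync          (Sync mode)
fas1:vol3 fas2:vol3_mirror - semi-sync     (Semi-Sync mode)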

How do you configure multiple paths in SnapMirror?

Add a connection name line in the snapmirror.conf file
/etc/snapmirror.conf
FAS1_conf = multi (FAS1-e0a,FAS2-e0a) (FAS1-e0b,FAS2-e0b)
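
The connection name can then replace the source hostname in the relationship entries of the same file, for example (the volume names are illustrative):

FAS1_conf:vol1 FAS2:vol1_mirror - 15 * * *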

Explain how deduplication works.

In the context of disk storage, deduplication refers to any algorithm that searches for duplicate data objects (for example, blocks, chunks, or files) and discards the duplicates. When duplicate data is detected, it is not retained; instead, a "data pointer" is modified so that the storage system references the exact copy of the data object already stored on disk. This deduplication feature works well with datasets that contain a lot of duplicate data (for example, full backups).

What is the command used to see amount of space saved using deduplication?

df -s <volume name>

Command used to check progress and status of deduplication?

sis status
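
A hedged sketch of enabling deduplication on a hypothetical volume vol1 and checking the result:

sis on /vol/vol1          (enable deduplication on the volume)
sis start -s /vol/vol1    (scan the existing data on the first run)
sis status /vol/vol1      (check progress and state)
df -s vol1                (show the space saved)
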
How do you set up a SnapVault Snapshot schedule?

pri> snapvault snap sched vol1 sv_hourly 22@0-22

This schedule is for the home directories volume vol1.
It creates hourly Snapshot copies at hours 0-22 (i.e., none at 11:00 p.m.).
It keeps 22 copies, nearly a full day of hourly copies.
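
On the SnapVault secondary, a corresponding schedule that first transfers new data from the primary and then creates its own copy might look like this (a sketch; the volume and schedule names are illustrative):

sec> snapvault snap sched -x vol1 sv_hourly 22@0-22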

What is metadata?

Metadata is data that provides information about other data. In WAFL, the metadata includes:

1. Inode file
2. Used block bitmap file
3. Free block bitmap file

How do you shut down a filer through the RLM?

telnet <rlm ip address>
RLM_Netapp> system power off
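
Other RLM power commands that are useful here (a sketch; check your RLM firmware's help for the exact set):

RLM_Netapp> system power status
RLM_Netapp> system power cycle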

After creating an iSCSI LUN and mapping it to a particular igroup, the client is not able to access the LUN. What troubleshooting steps do you take?

Check whether the IQN specified in the igroup is correct
Check whether the created LUN or its containing volume is offline/restricted
Check the iSCSI service status (see the example commands below)
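
A few commands that help check these points (the LUN path and igroup name are illustrative):

iscsi status                      (is the iSCSI service running?)
igroup show sol_igroup            (verify the initiator IQN)
lun show -v /vol/vol1/sol_lun1    (verify the online state and mapping)
lun online /vol/vol1/sol_lun1     (bring the LUN online if needed)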

In CIFS, how do you check which clients are generating the most load?

cifs top

How do you check CIFS performance statistics?

cifs stat

What do you do if a customer reports that a particular CIFS share is responding slowly?

Check the read/write load using "cifs stat" and "sysstat -x 1" (see the example below).
If disk and CPU utilization are high, the problem is on the filer side.
CPU utilization will be high when there is heavy disk read/write activity, for example during tape backups and scrub activities.
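
For example, a quick first pass using only the commands already mentioned above:

sysstat -x 1     (watch CPU, disk utilization, and CIFS ops per second)
cifs stat        (cumulative CIFS statistics)
cifs top         (identify the busiest CIFS clients)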

What is degraded mode? If you have parity for failed disks, why does the filer go into degraded mode?

When a disk fails, the RAID group reconstructs its data from parity on the fly, so the aggregate runs in degraded mode (with lower performance and reduced protection) until the data is rebuilt onto a spare. If a spare disk is not added within 24 hours (the raid.timeout default), the filer shuts down automatically to avoid further disk failures and data loss.

Have you ever done an ONTAP upgrade? From which version to which version, and for what reason?

Yes, I have done an ONTAP upgrade from version 7.2.6.1 to 7.3.3 because of a lot of bugs in the old version.

How do you create a LUN?

lun create -s <lunsize> -t <host type> <lunpath>

Production Manager?

The production manager does the planning, coordinating, and controlling of the process.

Performance Manager?

The performance manager analyzes the performance trends of applications, systems, and services.

How do you monitor the filers?

Using DFM (Data Fabric Manager), or via SNMP, you can monitor the filers (a sample SNMP setup follows).
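
A hedged sketch of pointing a filer at a monitoring host via SNMP (the community string and hostname are illustrative):

options snmp.enable on
snmp community add ro public
snmp traphost add dfm_host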

What are the prerequisites for a cluster?

The cluster interconnect cables should be connected.

Shelf connectivity should be properly cabled for both controllers.

The cluster license should be enabled on both nodes.

Interfaces should be properly configured for failover.

The cluster should be enabled.

What are the scenarios you have for a cluster failover?

If the disk shelf power or a shelf port is down, failover will not happen because the partner cannot access the mailbox disks. The mailbox disks store the cluster configuration data.

What is the difference between cf takeover and cf forcetakeover?

If the partner's shelf power is off (for example, in a MetroCluster site failure), cf forcetakeover is required; otherwise a normal cf takeover will work.
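
A short sketch of the related commands (the -d flag applies to MetroCluster/SyncMirror configurations):

cf status               (check the failover state)
cf takeover             (normal takeover of the partner)
cf forcetakeover -d     (forced takeover when the partner's disks are unreachable)
cf giveback             (return resources to the partner)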
