LVM
Layers:
	Qtree and/or subdirectories, exportable
	  |
	Volume (TradVol, FlexVol), exportable; snapshots are configured at this level.
	  |
	aggregate (OnTap 7.0 and up)
	  |
	plex (relevant mostly in mirrored configs)
	  |
	raid group
	  |
	disk
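eg, the stack maps onto commands roughly like this (a rough 7-mode sketch; aggr1,
vol1, qtree1 and the disk count are made-up names, not from any real config):
	aggr create aggr1 24            # aggregate from 24 spare disks
	vol create vol1 aggr1 200g      # carve a 200 GB FlexVol out of the aggregate
	qtree create /vol/vol1/qtree1   # qtree inside the volume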
Disks - Physical hardware device :)
Spares are global; a spare automatically replaces a failed disk in any raid group.
The system will pick a spare of the correct size.
If no hot spare is available when a disk fails, the filer runs in degraded mode, and
by default shuts down after 24 hours! (options raid.timeout, in hours)
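eg, if 24 hours is too aggressive (sketch; 48 is an arbitrary value):
	options raid.timeout            # display the current value
	options raid.timeout 48         # allow 48 hours in degraded mode before shutdown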
sysconfig -d            # display all disks and some sort of id
sysconfig -r            # info about usable and physical disk size,
                        # as well as which raid group each disk belongs to
disk zero spare         # zero all spare disks so they can be added quickly to a volume
vol status -s           # check whether spare disks are zeroed
web gui: Filer, Status
= display number of spares avail on system
web gui: Storage, Disk, Manage
= list of all disks, size, parity/data/spare/partner info,
which vol the disk is being used for.
(raid group info is omitted)
Disk Naming:
2a.17   SCSI adaptor 2, disk scsi id 17
3b.97   SCSI adaptor 3, disk scsi id 97
a = the main channel, typically for the filer's normal use
b = secondary channel, typically hooked to the partner's disks, for takeover use only.
Raid-DP?
2 parity disks per raid group instead of 1 in raid4.
If you are going to have a large volume/aggregate that spans 2 raid groups (in
a single plex), then you may as well use raid-dp.
Larger raid group sizes save storage by using fewer parity disks,
at the expense of slightly less data safety in case of multi-disk failures.
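eg, creating an aggregate with raid-dp and a larger raid group (sketch; names and
counts are made up):
	aggr create aggr1 -t raid_dp -r 16 32   # 2 raid groups of 16 disks (14 data + 2 parity each)
	aggr status -r aggr1                    # verify the raid group layout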
Plex
- a mirrored volume/aggregate has two plexes, one for each complete copy of the
  data.
- raid4/raid_dp has only one plex; raid groups are "serialized".
aggregate - OnTap 7.0 addition, a layer b/w volume and disk. With this, NetApp
	    recommends creating one huge aggregate that spans all disks with the
	    same RAID level, then carving out as many volumes as desired.
QTree - "Quota Tree", store security style config, oplock, disk space usage an
d file limits.
Multiple qtrees per volume. QTrees are not req, NA can hae simple/plain
subdir at the the "root level" in a vol, but such dir cannot be converted
to qtree.
Any files/dirs not explicitly under any qtree will be placed in a
default/system QTree 0.
qtree create /vol/vol1/qtree1           # create a qtree under vol1
qtree security /vol/vol1/qtree1 unix    # set unix security mode for the qtree
                                        # could also be ntfs or mixed
qtree oplocks /vol/vol1/qtree1 enable   # enable oplocks (windows access can perform caching)
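to verify the qtree settings afterward (sketch, from memory):
	qtree status vol1                   # list qtrees in vol1 w/ security style and oplock setting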
Config Approach
Aggregate:
Create the largest aggregate possible, 1 per filer head is fine, unless traditional vols are needed.
Can create as many FlexVols as desired, since a FlexVol can grow and shrink as needed
(see the vol size sketch below).
Max vol per aggregate = 100 ??
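growing/shrinking a FlexVol is just (sketch; vol1 and the sizes are made up):
	vol size vol1           # display current size
	vol size vol1 +10g      # grow vol1 by 10 GB
	vol size vol1 -5g       # shrink vol1 by 5 GB (FlexVol only)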
TradVol vs QTree?
- use fewer traditional volumes when possible, since each volume has parity disk overhead
  and a space fragmentation problem.
- use QTree as the size management unit.
FlexVol vs QTree?
- Use a volume per "conceptual management unit".
- Use different vols to separate production data vs test data.
- QTrees should still be created under the volume instead of simple plain subdirectories
  at the "root" of the volume.
  This way, quotas can be turned on, if only to monitor space usage.
- One FlexVol per project is good. Start the vol small and expand as needed.
  Shrink it as the project dies off.
- Use QTrees for different pieces of the same project.
- Depending on the backup approach, smaller volumes may make backup easier.
  Should try to limit volumes to 3 TB or less.
Quotas
mount the root dir of the netapp volume on a unix or windows machine.
vi (/) etc/quotas (in dos, use edit, not notepad!!)
then telnet to the netapp server and issue the command quota resize vol1 .
	quota on vol1
	quota off vol0
	quota report
	quota resize    # update/re-read quotas (per-vol)
	                # for user quota creation, may need to turn quota off,on for the
	                # volume for changes to be parsed correctly.
Netapp quotas support hard limit, threshold, and soft limit.
However, only the hard limit returns an error to the FS. The rest are largely useless,
and the quota command on linux is not functional :(
Best Practices:
Other than user home directory, probably don't want to enforce quota limits.
However, still good to turn on quota so that space utilization can be monitored.
/etc/quotas
##                                            hard limit | thres |soft limit
##Quota Target         type                  disk       files | hold |disk  file
##-------------        -----                 ----       ----- ----- ----- -----
*                      tree@/vol/vol0        -          -     -     -     -    # monitor usage on all qtrees in vol0
*                      tree@/vol/vol1        -          -     -     -     -
*                      tree@/vol/vol2        -          -     -     -     -
/vol/vol2/qtree1       tree                  200111000k 75K   -     -     -    # enforce qtree quota; kb is easier to compare on report
/vol/vol2/qtree2       tree                  -          -     1000M -     -    # enable threshold notification for qtree (useless)
*                      user@/vol/vol2        -          -     -     -     -    # provide usage based on file ownership, w/in specified volume
tinh                   user                  50777000k  -     5M    7M    -    # user quota, on ALL fs ?! may want to avoid
tinh                   user@/vol/vol2        10M        -     5M    7M    -    # enforce user's quota w/in a specified volume
tinh                   user@/vol/vol2/qtree1 100M       -     -     -     -    # enforce user's quota w/in a specified qtree

# exceptions for +/- space can be specified for a given user/location
# 200111000k = 200 GB
# 50777000k  =  50 GB
#   they make the output of quota report a bit easier to read
# * = default user/group/qtree
# - = placeholder, no limit enforced, just enables stats collection
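after editing /etc/quotas, a typical activation sequence (sketch, using vol2 from above):
	quota resize vol2       # enough when only limits on existing entries changed
	quota off vol2
	quota on vol2           # full re-read, needed when entries were added/removed
	quota report vol2       # verify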
Snapshot
Snapshots are configured at the volume level. Thus, if different data need to have
different snapshot characteristics, they should be in different volumes rather than
just in different QTrees.
WAFL automatically reserves 20% of a volume for snapshot use.
snap list vol1
snap create vol1 snapname       # manual snapshot creation
snap sched                      # print all snapshot schedules for all volumes
snap sched vol1 2 4             # scheduled snapshots for vol1: keep 2 weekly, 4 daily,
                                # 0 hourly snapshots
snap sched vol1 2 4 6           # same as above, but keep 6 hourly snapshots
snap sched vol1 2 4 6@9,16,20   # same as above, but take the hourly snapshots only at
                                # 9:00, 16:00 and 20:00
snap reserve vol1               # display the percentage of space reserved for snapshots (def=20%)
snap reserve vol1 30            # set 30% of volume space for snapshots
vol options vol1 nosnap on      # turn off snapshots; it is for the whole volume!
gotchas, as per netapp:
"There is no way to tell how much space will be freed by deleting a particular s
napshot or group of snapshots."
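later OnTap releases have commands that help estimate it (hedged, check the man
pages of the release at hand; hourly.2 is just an example snapshot name):
	snap delta vol1                 # rate of change between snapshots
	snap reclaimable vol1 hourly.2  # estimate space freed by deleting hourly.2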
NFS
(/) etc/exports
is the file containing what is exported, and who can mount the root fs as root. Unix
NFS related only.
	/vol/vol0       -access=sco-i:10.215.55.220,root=sco-i:10.215.55.220
	/vol/vol0/50gig -access=alaska:siberia,root=alaska
Unlike most Unices, NetApp allows export of both ancestors and descendants.
other options:
-sec=sys # unix security, ie use uid/gid to define access
# other options are kerberos-based.
Besides having exports for nfs and shares for cifs,
there is another setting for the fs security permission style: unix, ntfs, or mixed.
This controls the behavior of chmod and file ACLs.
Once the edit is done, telnet to the netapp and issue cmd:
exportfs -a             # re-add all exports as per the new file
exportfs -u /vol/vol1   # unexport vol1 (everything else remains intact)
exportfs -au            # remove all exports that are no longer listed in etc/exports
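to add a new export and persist it in one step (sketch; host and path are made up):
	exportfs                                  # list what is currently exported
	exportfs -p rw=alaska /vol/vol1/qtree1    # export now AND append to /etc/exports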
The hostname-resolution bug seen with Solaris and Linux NFS seems to exist on
NetApp also. Hosts listed in exports sometimes need to be given by IP address, or
an explicit entry in the hosts file needs to be set up. Somehow, sometimes
the hostname does not get resolved thru DNS :(
options nfs.per_client_stats.enable on
	# enable the collection of detailed nfs stats per client
options nfs.v3.enable on
options nfs.tcp.enable on
# enable NFS v3 and TCP for better performance.
nfsstat         # display nfs statistics, separate v2 and v3
nfsstat -z      # zero the nfsstat counters
nfsstat -h      # show detailed nfs statistics, several lines per client, since last zeroed
nfsstat -l      # show 1 line of stats per client, since boot (non-resettable stats)
CIFS
cifs disable # turn off CIFS service
cifs enable
cifs setup      # configure domainname, wins. only works when cifs is off.
cifs testdc # check registration w/ Windows Domain Controller
wcc             WAFL cache control, often used to check windows-to-unix mapping
   -u uid/uname     uname may be a UNIX account name or a numeric UID
   -s sid/ntname    ntname may be an NT account name or a numeric SID
                    a SID is a long string for the domainname; the last 4-5 digits
                    are the user. All computers in the same domain use the domain SID.
   -x               remove entries from the WAFL cache
   -a               add entries
   -d               display stats
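eg (sketch; tinh and ad\tinh are made-up accounts):
	wcc -u tinh         # show the NT account that unix user tinh maps to
	wcc -s ad\tinh      # show the unix account that the NT user maps to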
(??) another map does the reverse, mapping windows back to NFS when the fs is NFS
and access is from windows.
(or was it the same file?). It was pretty stupid in that it needed all users to
be explicitly mapped.
CIFS Commands
cifs setup                  # configure CIFS, requires the CIFS service to be restarted
                            # - register computer to windows domain controller
                            # - define WINS server
options cifs.wins_server    # display which WINS server the machine is using
                            # prior to OnTap 7.0.1, this is read only
cifs domaininfo             # see DC info
cifs testdc                 # query DCs to see if they are okay
cifs prefdc print           # (display) which DC is used preferentially
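also handy when poking at CIFS:
	cifs shares             # list all shares and their access control
	cifs sessions           # show connected windows clients/users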
FilerView
FilerView is the Web GUI. If the SSL certificate is broken, it may load up a blank
page.
secureadmin status
secureadmin disable ssl
secureadmin setup -f ssl    # follow the prompts to set up a new ssl cert
SSH
To allow root login to the netapp w/o a password, add root's id_dsa.pub to
vol1/etc/sshd/root/.ssh/authorized_keys
Beware of the security implications!
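ssh itself is enabled via secureadmin (sketch; prompts vary by release, and "filer"
is a placeholder hostname):
	secureadmin setup ssh       # generate host keys, enable sshd
	secureadmin status          # confirm ssh/ssl state
	ssh root@filer version      # from the admin host: should print the OnTap
	                            # version w/o a password prompt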
Config Files
all stored in etc folder.
resolv.conf
nsswitch.conf
# etc/exports
/vol/unix02 -rw=192.168.1.0/24:172.27.1.5:www,root=www
/vol/unix02/dir1 -rw=10.10.10.0/8
# can export subdirs with separate permissions
# issue exportfs -a to reread file
Logs
(/) etc/messages.*      unix syslog style logs. can be configured to use a remote
                        syslog host.
(/) etc/log/auditlog
	logs all filer-level commands, not changes done on the FS.
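related options (from memory, double check on the release at hand):
	options auditlog.enable on      # make sure auditlogging is on
	options auditlog.max_file_size  # display the size at which the auditlog rotates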
The "root" of vol 0,1,etc in the netapp can be choose as the netapp root and sto
re the /etc directory,
where all the config files are saved. eg.
/mnt/nar_200_vol0/etc
/mnt/na4_vol1/etc
other command that need to be issued is to be done via telnet/rsh/ssh to the net
app box.
Howto
Create new vol, qtree, and make access for CIFS
vol create win01 ...
qtree create /vol/win01/wingrow
qtree security /vol/win01/wingrow ntfs
qtree oplocks /vol/win01/wingrow enable
cifs shares -add wingrow /vol/win01/wingrow -comment "Windows share growing"
# cifs access wingrow ad\tinh "Full Control"   # share-level control is usually redundant
cifs access -delete wingrow Everyone
cifs access wingrow "authenticated users" "Full Control"
# still need to go to the folder and set file/folder permissions:
# add the corresponding department (MMC share, permission, type in am\Dept-S
# then alt+k to complete the list (ie, checK names)).
# also remove inherit-from-parent, thereby taking out full access for everyone.
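same idea for an NFS qtree, re-using the commands above (sketch; unix01, nfsgrow
and the hosts are made up):
	vol create unix01 ...
	qtree create /vol/unix01/nfsgrow
	qtree security /vol/unix01/nfsgrow unix
	exportfs -p rw=alaska:siberia,root=alaska /vol/unix01/nfsgrow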