Legato NetWorker Commands Index:
ansrd
ascdcode
cdi_block_limits
cdi_bsf
cdi_bsr
cdi_eod
cdi_filemark
cdi_fsf
cdi_fsr
cdi_get_config
cdi_get_status
cdi_inq
cdi_load_unload
cdi_locate
cdi_offline
cdi_rewind
cdi_set_compression
cdi_space
cdi_ta
cdi_tapesize
cdi_tur
changers
dasadmin
ddmgr
EMASS_silo
erase
generate_test_tape
hadump
hafs
hagentd
hagetconf
haprune
hascsi
hasubmit
hasys
hpflip
IBM_silo
ielem
inquire
jbconfig
jbexercise
jbverify
ldunld
lgtolic
lgtolmd
libcdi
libscsi
libsji
libstlemass
libstlibm
libstlstk
lrescan
lreset
lus_add_fp_devs
lusbinfo
lusdebug
mini_el
mm_data
mminfo
mmlocate
mmpool
mmrecov
msense
mt
ndmpjbconf
networker
nsr (1)
nsr (5)
nsr_archive_request
nsr_client
nsr_crash
nsr_data
nsr_device
nsr_directive
nsr_getdate
nsr_group
nsr_ize
nsr_jukebox
nsr_label
nsr_layout
nsr_license
nsr_migration
nsr_notification
nsr_policy
nsr_pool
nsr_regexp
nsr_resource
nsr_schedule
nsr_service
nsr_shutdown
nsr_stage
nsr_storage_node
nsr_support
nsr_usergroup
nsradmin
nsralist
nsrarchive
nsrcap
nsrcat
nsrck
nsrclone
nsrcnct
nsrd
nsrexec
nsrexecd
nsrhsmck
nsrhsmclear
nsrhsmd
nsrhsmls
nsrhsmnfs
nsrhsmrc
nsrhsmrecall
nsrib
nsriba
nsrim
nsrindexasm
nsrindexd
nsrinfo
nsrjb
nsrlic
nsrls
nsrmig
nsrmm
nsrmmd
nsrmmdbasm
nsrmmdbd
nsrmon
nsrndmp_clone
nsrndmp_recover
nsrndmp_save
nsrpmig
nsrports
nsrretrieve
nsrssc
nsrstage
nsrtrap
nsrwatch
nwadmin
nwarchive
nwbackup
nwrecover
nwretrieve
pathownerignore
pmode
preclntsave
pstclntsave
read_a_block
recover
relem
resource
save
savefs
savegrp
savepnpc
scanner
sjiielm
sjiinq
sjimm
sjirdp
sjirdtag
sjirelem
sjirjc
sjisn
sn
ssi
stk_eject
STK_silo
stli
sym2xdm
tapeexercise
tur
uasm
writebuf
* - Windows Only
* mt
* nsrlpr
* nsrperf
nsr_storage_node
NAME
nsr_storage_node - description of the storage node feature
SYNOPSIS
The storage node feature provides central server control of distributed devices for saving and recovering client data.
DESCRIPTION
A storage node is a host that has directly attached devices that are used and controlled by a NetWorker server. These devices are called remote devices, because they are remote from the server. Clients may save and recover to these remote devices by altering their "storage nodes" attribute (see nsr_client(5)). A storage node may also be a client of the server, and may save to its own devices. The main advantages provided by this feature are central control of remote devices, reduction of network traffic, faster local saves and recovers on a storage node, and support of heterogeneous server and storage node architectures.
Several attributes affect this feature. Within the NSR resource (see nsr_service(5)) there are the "nsrmmd polling interval", "nsrmmd restart interval" and "nsrmmd control timeout" attributes. These attributes control how often the remote media daemons (see nsrmmd(1)) are polled, how long to wait between restart attempts, and how long to wait for remote requests to complete.
Within the "NSR device" resource (see nsr_device(5)), the resource's name accepts the "rd=hostname:dev_path" format when defining a remote device, where "hostname" is the hostname of the storage node and "dev_path" is the device path of the device attached to that host. There are also hidden attributes called "save mount timeout" and "save lockout", which allow a pending save mount request to time out and a storage node to be locked out from upcoming save requests.
Within the "NSR client" resource (see nsr_client(5)), there are "storage nodes" and "clone storage nodes" attributes. The "storage nodes" attribute is used by the server when selecting a storage node for a client's save data. The "clone storage nodes" attribute is used during cloning, to direct cloned data from a volume on the storage node (the node represented by this client resource).
The "NSR jukebox" resource (see nsr_jukebox(5)) contains the "read hostname" attribute. When all of a jukebox's devices are not attached to the same host, this attribute specifies the hostname used to select a storage node for recover and read-side clone requests. For recover requests, this attribute is used if the required volume is not mounted and the client's "storage nodes" attribute does not match one of the owning hosts in the jukebox. For clone requests, this attribute is used if the required volume is not mounted.
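As an illustration of the "rd=hostname:dev_path" naming form, the following nsradmin(1) fragment sketches how a remote device might be defined by hand; the hostname, device path, and media type shown are placeholder values, and in most configurations jbconfig(1) creates such device resources automatically.
     nsradmin> create type: NSR device; name: "rd=sn1.example.com:/dev/rmt/0cbn"; media type: 8mm
     nsradmin> print type: NSR device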
INSTALL AND CONFIGURE
In order to install a storage node, choose the client and storage node packages, where given the choice. For those platforms that do not offer a choice, the storage node binaries are included in the client package.
A jukebox or device on a storage node is configured by running jbconfig(1) on that node, after adding root@storage_node to the server's administrator list (where root is the user running jbconfig(1) and storage_node is the hostname of the storage node). This administrator list entry may be removed after jbconfig(1) completes. As with jbconfig(1), when running scanner(1) on a storage node, root@storage_node must be on the administrator list.
When a device is defined (or enabled) on a storage node, the server attempts to start a media daemon (see nsrmmd(1)) on that node. In order for the server to know whether the node is alive, it polls the node every "nsrmmd polling interval" minutes. When the server detects a problem with the node's daemon or with the node itself, it attempts to restart the daemon every "nsrmmd restart interval" minutes, until either the daemon is restarted or the device is disabled (by setting the device's "enabled" attribute to "no").
In addition to a storage node enabler for each storage node, each jukebox needs its own jukebox enabler.
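As a sketch of this procedure (the hostnames below are placeholders, and the administrator list on a given server will contain its own entries), the temporary administrator entry might be added with nsradmin(1) on the server before running jbconfig(1) on the storage node. Note that the update command replaces the attribute's entire value, so existing entries must be repeated along with the new one.
     server# nsradmin
     nsradmin> . type: NSR
     nsradmin> print administrator
     nsradmin> update administrator: root@server.example.com, root@sn1.example.com
     nsradmin> quit
     sn1# jbconfig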
OPERATION
A storage node is assignable for work when it is considered functional by the server: nsrexecd(1) is running, a device is enabled, nsrmmd(1) is running, and the node is responding to the server's polls.
When a client save starts, the client's "storage nodes" attribute is used to select a storage node. This attribute is a list of storage node hostnames, which are considered in order for assignment to the request. The exception to this node assignment approach is when the server's index or bootstrap is being saved: these save sets are always directed to the server's local devices, regardless of the server's "storage nodes" attribute. Hence, the server always needs at least one local device to back up such data. These save sets can later be cloned to a storage node, as can any save set.
If a storage node is created first (by defining a device on the host), and a client resource for that host is then added, that hostname is added to its "storage nodes" attribute. This addition means the client will back up to its own devices. However, if a client resource already exists, and a device is later defined on that host, then the client's hostname must be added manually to the client's "storage nodes" attribute. This attribute is an ordered list of hostnames; add the client's own name as the first entry.
The volume's location field is used to determine the host location of an unmounted volume. The server looks for a device or jukebox name in this field, as would be added when a volume resides in a jukebox. Volumes in a jukebox are considered to be located on the host to which the jukebox is connected. The location field can be used to bind a stand-alone volume to a particular node by manually setting this field to any device on that node (using the "rd=" syntax). For jukeboxes that do not have all of their devices attached to the same host, see the previous description of the "read hostname" attribute.
There are several commands that interact directly with a device, and so must run on a storage node. These include jbconfig(1), nsrjb(1) and scanner(1), in addition to those in the device driver package. Invoke these commands directly on the storage node rather than on the server.
A recover request may require volumes that reside on different storage nodes. Such a request would be divided into two sub-requests, one to read volumeA from storage node A and another to read volumeB from storage node B.
A clone request involves two sides, the source that reads data and the target that writes data. These two sides may be on the same host or on different hosts, depending on the configuration. The source host is determined first and then the target host. If the volume is mounted, the source host is determined by its current mount location. If the volume is not mounted at the time of the clone request and it resides in a jukebox, then the source host is determined by the value of the jukebox's "read hostname" attribute. Once the source host is known, the target host is determined by examining the "clone storage nodes" attribute of the client resource of the source host. If this attribute has no value, the "clone storage nodes" attribute of the server's client resource is consulted. If that attribute also has no value, the "storage nodes" attribute of the server's client resource is used.
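The following nsradmin(1) fragment sketches two of the selections described above, using placeholder hostnames: placing a storage node's own hostname first in its "storage nodes" attribute so that it saves to its own devices, and setting "clone storage nodes" on that node's client resource so that clone writes are directed to another node.
     nsradmin> . type: NSR client; name: sn1.example.com
     nsradmin> update storage nodes: sn1.example.com, server.example.com
     nsradmin> update clone storage nodes: sn2.example.com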
LIMITATIONS
A server cannot be a storage node of another server.
SEE ALSO
jbconfig(1), mmlocate(1), nsr_client(5), nsr_device(5), nsr_jukebox(5), nsr_service(5), nsrclone(1), nsrexecd(1), nsrjb(1), nsrmmd(1), nsrmon(1), scanner(1).