
Basics

Master (NIM master):
The one and only machine in a NIM environment that has permission to run commands remotely on NIM clients. The NIM master holds all the NIM resources. A client can only have one master, and a master cannot be a client of any other master. The NIM master must be at an equal or higher level than the client.

Client (NIM client):


Any standalone machine or LPAR in a NIM environment other than the NIM master.
Clients use resources that reside on the NIM master to perform software
maintenance, backups, and other operations.
Resource (NIM resources):
This can be a single file or a whole filesystem that is used to provide some
sort of information to, or perform an operation on a NIM client. Resources are
allocated to NIM clients using NFS and can be allocated to multiple clients at
the same time. Resources can be: mksysb, spot, lpp_source, machines...
Allocate/Allocation:
This process is what allows your NIM client to access resources in NIM. The
master uses NFS to perform the allocation process. Resources can be allocated to
one or more NIM clients at the same time. You can check which resources are
allocated to clients with the lsnim -a <type> command. For cleanup purposes, the
allocated resources must be deallocated.
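The allocate/check/deallocate cycle described above can be sketched as follows (the resource and client names lpp5305 and aix21 are placeholders):

```shell
# Allocate an lpp_source to the client (run on the NIM master):
nim -o allocate -a lpp_source=lpp5305 aix21

# Check which clients the lpp_source resources are allocated to:
lsnim -a lpp_source

# Deallocate it again for cleanup:
nim -o deallocate -a lpp_source=lpp5305 aix21
```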
nimsh (NIM service handler):
For environments where the standard rsh protocols are not secure enough, nimsh
may be implemented. With nimsh, the primary port is 3901, and it listens for
service requests. The primary port is used for stdin and stdout while stderr is
redirected to secondary, which is port 3902.
more info: http://www-01.ibm.com/support/docview.wss?uid=isg3T1010383
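A quick way to verify the nimsh setup described above (the client hostname is a placeholder):

```shell
# On the client: check that the nimsh subsystem is running
lssrc -s nimsh

# From the master: check that the primary nimsh port (3901) is reachable
# (3901 carries stdin/stdout, 3902 carries stderr)
telnet aix21 3901
```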
-----------------------------------NIM DATABASE:
The NIM database is stored in the AIX Object Data Management (ODM) repository on
the NIM master and is divided into four classes: machines, networks, resources,
groups.
machines: shows the machines in NIM (master, clients)
networks: shows what type of network (topology: ent, Token-Ring...) can be used
resources: shows resource types: mksysb, spot ...
groups: shows defined groups of machines or resources
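The four classes can be listed directly from the ODM-backed NIM database:

```shell
# List the members of each NIM database class:
lsnim -c machines     # master and clients
lsnim -c networks     # defined networks
lsnim -c resources    # mksysb, spot, lpp_source ...
lsnim -c groups       # groups of machines or resources
```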
Client Install

Install from NIM:

NIM performs network installs by using a client/server model based on the bootp/tftp
protocols for receiving a network boot image.
1. config for netboot
Configure client to be booted from master. Then reboot client and in SMS choose
network boot.
2. /etc/bootptab
When a NIM client is configured to be booted from the NIM master, the client
requests info from the server about the boot image.
The bootpd daemon will use the /etc/bootptab configuration file to pass
information to the client (server, gateway IP...).
tail /etc/bootptab
aix21.domain.com:bf=/tftpboot/aix21.domain.com:ip=10.200.50.50:ht=ethernet:sa=50.20.100.48:gw=10.200.50.1:sm=255.255.255.0:
3. /tftpboot
After client receives bootp reply, next step is transferring the boot image to
the client. It is achieved with the help of tftp.
root@aixnim1: /etc # ls -l /tftpboot
lrwxrwxrwx   1 root system      34 Dec 19 18:36 aix21.domain.com -> /tftpboot/spot_5200-08.chrp.mp.ent
-rw-r--r--   1 root system    1276 Dec 19 18:36 aix21.domain.com.info
...
-rw-r--r--   1 root system 9260943 Dec  8 15:31 spot_5200-08.chrp.mp.ent
(this contains the boot image (kernel), which the client uses until it can
NFS mount the SPOT)
Once the boot image is obtained, the client requests (using tftp) a configuration
file (.info).
This contains info about which server contains the install image and other
necessary install resources.
4.-5.-6. NFS mount of resources

After the boot image is loaded into memory at the client, the SPOT and other
resources are NFS mounted in the client's RAM file system.
The SPOT consists of filesets (device drivers, BOS install programs) that are used
during boot.
After the install finishes:
Upon completion of install, the client sends state information to the master via
'nimclient' calls to the master's nimesis daemon.
The NIM master then deallocates all install resources from the client.
The deallocation process consists of:
- Removing files from the tftp directory
- Removing the client entry from /etc/bootptab
- Unexporting NFS resources from the client (removing entries from /etc/exports)
- Updating client information in the NIM database (machine state)
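The deallocation steps above can be verified with a few commands (the client name aix21 is from the example above):

```shell
tail /etc/bootptab               # the client's entry should be gone
ls -l /tftpboot                  # the boot image link should be removed
showmount -e                     # nothing should be exported to the client
lsnim -l aix21 | grep Cstate     # back to "ready for a NIM operation"
```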
----------------------SERVER SIDE (PUSH) INSTALL:
For new installations and mksysb restores:
1. install NIM filesets
bos.sysmgt.nim.master
bos.sysmgt.nim.spot
2. configure machine as NIM master
smitty nimconfig
(Netw. name and Primary Netw. Install Interface should be set)
3. Define lpp_source
smitty nim_mkres -> choose lpp_source
(Server of Resource : master, and the others...)
4. After lpp_source, define SPOT
smitty nim_mkres -> choose spot
(Server of Resource: master, and the others...)
5. After the 2 basic resources are defined, add the first NIM client
smitty nim_mkmac
6. start AIX installation to the client
IN SMITTY:
smitty nim_task_inst -> select install
(install method should be rte, and SPOT and LPP (what we created above))
(if mksysb restore, then mksysb)
Select:
Accept new license agreements -> yes
Initiate reboot and installation now -> no (this should be done manually,
because it is the first time installation)
IN COMMAND LINE:
1. allocate spot and lpp_source to the client:
nim -o allocate -a spot=spot_5300-05 -a lpp_source=5300-05 aix21
2. initiate the install (it will set the bootlist to network boot on the client
and then reboot the client)
nim -o bos_inst -a source=rte -a installp_flags=agX -a accept_licenses=yes aix21
(a:apply, g:install prereqs, X:expand filesystems)
7. Additional checks before installation
- verify that the correct entry has been added in the /etc/bootptab file:
tail /etc/bootptab -> it will show a line with the name of the host and other settings
- verify that boot files have been created in the /tftpboot directory
- a symbolic link with the hostname (this is a link to the boot image (kernel))
- a <client name>.info file, which is used to set variables during installation
- lsnim -l <hostname> -> Cstate will be changed from "ready" to "enabled"
- showmount -e -> it should show what is exported to the system
(it is a <hostname>.script file, where you can see some details: ip...)
8. Start client installation
power on the client -> go into SMS menu
- 2. Setup Remote IPL -> select the desired adapter
- 1. IP parameters -> then if needed Client/Server/Gateway IP Address and Subnet
Mask can be set
**VERY IMPORTANT**
If master and client are on the SAME subnet, and ping does not work with the given
gateway, then set the Gateway IP to the NIM master's IP address!
There were a few firmware levels that made you use 0.0.0.0 if the master and
client were on the same network.
- then go back (ESC) to do a Ping Test (1. Execute Ping Test)
- then go back to the main menu (M) -> 5. Select Boot Options -> 1. Select
Install/Boot device -> 6. Network
- select the needed adapter -> 2. Normal Mode Boot -> 1. Yes
(later these need to be chosen: language, disk to be installed on (mksysb: 2 disks are
needed, lpp_source: only 1 disk is needed))
9. Redo checks - and during install:
-/etc/bootptab -> no longer contains the client's entry
-/tftpboot -> no longer contains the link
-lsnim -l <hostname> -> Cstate must be set back to "ready for nim op."
-nim -o showlog -a log_type=boot aix21
If install unsuccessful:
nim -Fo reset aix21
nim -Fo deallocate -a subclass=all aix21
----------------------AT CLIENT SIDE (PULL):
BOS installation from the client:
1. lpp_source and SPOT are allocated first to the client
nimclient -o allocate -a lpp_source=LPP_53_ML4 -a spot=SPOT_53_ML4
2. starting BOS installation
nimclient -o bos_inst -a accept_license=yes
OS Update from the client:
Performing an update_all (cust) operation from the client:
nimclient -o cust -a lpp_source=lpp5305 -a fixes=update_all
----------------------Debugging an installation
If there are problems during BOS install, you can use 911.
Type 911 at the installation menu screen, and it will turn on debug mode, which will
show much additional info during the install.

----------------------NIM MASTER BACKUP AND RESTORE TO A NEW LPAR:


1) Take a backup of the NIM database (into a file in rootvg on the NIM master)
smitty nim -> perform nim admin. -> backup/restore nim database
2) make an mksysb of master, to itself.
3) restore mksysb to new LPAR:
smitty nim -> perform nim softw. install. -> install and upd. softw. -> install base
op. system -> ... -> mksysb
after you have chosen client, mksysb, spot, change this to 'no'
Remain NIM client after install? [no]
(This eliminates the removal of bos.sysmgt.nim.master, nim.spot filesets)
4) After the restore, copy to the new LPAR the various lpp_source, scripts,
bosinst_data, resources you want to preserve.
- If you want to keep the IP of the old NIM server on the new LPAR:
5) change hostname and IP on the old NIM server to something different
6) set hostname and IP on the new LPAR (using the original IP of the old NIM server)
7) restore nim database: smitty nim -> perform nim admin. -> backup/restore nim
database
- If new IP will be used on the new LPAR:
5) On the new LPAR, run the nim_master_recover command to restore the NIM database
(It will likely look for the copied resources in the exact path and filenames they
had on the old NIM server)
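The smitty menus above call NIM method scripts underneath; a hedged command-line sketch (the method paths and the backup file location are from memory and should be verified on your system):

```shell
# On the old master: back up the NIM database to a file
/usr/lpp/bos.sysmgt/nim/methods/m_backup_db /export/nim/nimdb.backup

# On the new LPAR (after the mksysb restore): restore the database
/usr/lpp/bos.sysmgt/nim/methods/m_restore_db /export/nim/nimdb.backup
```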

-----------------------------------/ETC/NIMINFO:
This file always exists on the NIM master and usually exists on a NIM client as
well. It is a text file that contains hostname information for the client, tells
the client who its master is, and records communication port and protocol
information. This file should not be edited manually. If it contains incorrect
information, it should be removed and recreated.
rebuild /etc/niminfo of NIM master:
on NIM master: nimconfig -r
rebuild /etc/niminfo of NIM client:
on NIM master: smitty nim -> perf. nim adm. -> manage machines -> specify new
master
(select the client, then the NIM master; if the already-used master name is given,
it rebuilds /etc/niminfo on the client)
on NIM client: niminit -a master=<MASTER_HOSTNAME> -a name=<CLIENT_NIM_NAME>
(niminit -a master=aixnim01 -a name=aix01 -a connect=nimsh (it will use nimsh,
default is rsh))
(niminit -av name=aix31 -a master=aixnim1.domain.com -a master_port=1058 (-v:
verbose mode))
-----------------------------------Commands on Master:
/var/adm/ras                   this directory contains the NIM master log files
/var/adm/ras/nimsh.log         log of nimsh (connection problems with a client can be checked here)
/var/adm/ras/nimlog            general nimlog file, can be viewed with: alog -f /var/adm/ras/nimlog -o (shows failed NIM operations)
lsnim                          shows the classes of the NIM database: machines, networks, resources, groups (it is stored in the ODM)
lsnim -c machines              lists the elements of the given class
lsnim -t <type>                lists the resources of that type (spot, lpp_source, mksysb, standalone...) (e.g. lsnim -t spot)
lsnim -l <resource>            shows the attributes of the resource (e.g. lsnim -l spot_5300_09)
lsnim -O <resource>            shows valid NIM operations for that resource (remove, change...) (e.g. lsnim -O lpp5300)
nim -o check <resource>        checks the status of a resource (nim -Fo check <resource>)
                               (on lpp_source: it creates the .toc and checks filesets for the simages attribute)
                               (on spot: rebuilds the SPOT network boot images, if necessary, and changes state to "ready for use")
                               (on machine: checks the status; if Cstate is not OK, it will inform about that)
nim -o lslpp <client>          lists the client's installed filesets (good command for checking the connection between master and client)
lsnim -a spot                  shows which spot is allocated to which client (you can check lpp_source and mksysb as well)
nim -o deallocate -a spot=<spot> <client>      deallocates the specified spot from the given client
nim -Fo deallocate -a subclass=all <client>    deallocates all allocated resources from the given client
                               (-F is force; typically you should only need this flag with the "reset" operation)
nim -Fo reset <resource>       resets a NIM object state to "ready for NIM operation" (needed if an operation failed/stopped)
                               (on machine: Cstate will be: ready for a NIM operation)
                               (on spot: Rstate will be: ready for use)
nim -o remove <resource>       removes an object (the object definition is removed from the NIM db, but dir and filesets remain)
                               (if you remove a spot, the directory will be removed as well (unless you unmount it before the command))
-----------------------------------Commands on Client:
nimclient -l -L aix31                            list all available resources for the client (aix31)
nimclient -o allocate -a lpp_source=lpp5305      allocate an lppsource to the client
nimclient -o deallocate -a lpp_source=lpp5305    deallocate an lppsource from the client
nimclient -l -c resources aix31                  show allocated resources for the client
nimclient -Fo reset                              reset the NIM client state
------------------------------------

How to reset/deallocate resources:
If resources were allocated to a client and later the operation failed, or you want
to do a cleanup:
1. check what is allocated:
-lsnim -a spot; lsnim -a lpp_source; lsnim -a mksysb    <--it will show which resource is allocated to which clients
-lsnim -l <resource> | grep alloc_count                 <--it will show how many clients it is allocated to
-lsnim -l <resource> | grep state                       <--it will show the Rstate/Cstate (Resource/Current state) of a resource
-showmount -e; tail /etc/bootptab                       <--it will show if anything is exported to a client
2. reset the client state: (it will reset the Cstate/Rstate of a resource to
"ready for use")
-nim -Fo reset <client>
3. deallocate the given resources:
-nim -o deallocate -a spot=<spot> <client>
(nim -Fo deallocate -a subclass=all <client>)
-----------------------------------Preparing a system for maintenance (network) boot:
# nim -Fo reset <client>                                 <--resets the state of the client (if it was not "ready for NIM operations")
# nim -o deallocate -a subclass=all <client>             <--deallocates all resources from the client (if lpp_source/spot was allocated to it before)
# nim -o maint_boot -a spot=spot_5300-11-04 <client>     <--prepares the system for network boot
(after boot, if needed later, you can do reset and deallocate again)
-----------------------------------SOME CHECKS FOR COMMON PROBLEMS:
ON MASTER:
- check the communication between nim master and client: nim -o lslpp <nim client>
- check if there are allocations to the client: lsnim -a spot ... (reset the client,
deallocate the resource)
ON CLIENT:
- if rsh is used:
-check correct connection (for "connection refused" error: inetd.conf,
.rhosts file)
-check if firewall is blocking communication (telnet to rsh ports)
- if nimsh is used:
-check nimsh log: /var/adm/ras/nimsh.log
-check if nimsh is running: lssrc -s nimsh (restart can help: stopsrc -s
nimsh; startsrc -s nimsh)
-check /etc/niminfo file (if there is invalid entry, correct on master and
recreate /etc/niminfo)
- for authentication (cpuid) problems in the log:
on the client check the cpuid: uname -m
on the master compare it with the stored value for the client: lsnim -l <client>
if it differs, change it to the correct value (smitty nim -> perform nim adm. ->
manage machines -> change show char.)
or you can turn off cpuid validation on the master: nim -o change -a
validate_cpuid=no master (/etc/niminfo on the client may need to be recreated)
(if validate_cpuid is set to yes, lsnim -l master will not show its value; it is
shown only when it is set to no)
- for authentication errors in the log:
may be a problem with reverse resolution: in /etc/niminfo there is only a
hostname, but /etc/hosts gives back an FQDN:
# grep NIMSH_AUTH /etc/niminfo
export NIMSH_AUTH="master01|FF00FF00FF00"
# host 192.168.1.1
master01.domain.com is 192.168.1.1
Change /etc/niminfo to include the domain and restart:
# grep NIMSH_AUTH /etc/niminfo
export NIMSH_AUTH="master01.domain.com|FF00FF00FF00"
# stopsrc -s nimsh; startsrc -s nimsh
-----------------------------------on nimclient in /var/adm/ras/nimsh.log:
error: remote value passed, '00080EC2D550', does not match environment value
'00080E82D990'
This means the NIM client does not store the correct cpu id of the NIM master in
its /etc/niminfo file.
(Could come up after an LPM move of the NIM master.)
1. check both values:
stored value on nim client:
# cat /etc/niminfo | grep MASTERID
export NIM_MASTERID=00080E82D990
actual value on nim master:
# uname -m
00080EC2D550
2. correct /etc/niminfo file on client
vi /etc/niminfo and change it to the actual value
(output of uname -m from nim master)
3. restart nimsh on nim client:
stopsrc -s nimsh
startsrc -s nimsh
Client (machines)

NIM CLIENT:
On NIM master clients should be defined as NIM objects (machines). The types of
a NIM client can be:
-standalone: it is not dependent on any NIM resources for functioning (this
is used mostly).
-diskless, dataless: these need certain resources (I have never seen clients
with this type)
Settings/attributes of a NIM client can be checked on the NIM master: lsnim -l
<client>
# lsnim -l aix31
aix31:
   class          = machines
   type           = standalone
   comments       = autosysb:
   connect        = shell
   platform       = chrp
   netboot_kernel = mp
   if1            = VLAN448_Admin_10_200_30_0 aix31 0 ent1
   ...

----------------------------------------------------Creating a NIM client:
1. checking communication between master and client:
from master: rsh aix222 date (if there is a problem: update .rhosts on the client,
open the firewall port, or check nimsh)
from client: telnet aixnim1 1058
(if nimsh is used, port 3901 is needed)
2. create nim client: smitty nim -> nim admin...
after giving the hostname (aix222):
* NIM Machine Name                         [aix222]           <--any name what nim will use
* Machine Type                             [standalone]
* Hardware Platform Type                   [chrp]
  Kernel to use for Network Boot           [mp]
  Communication Protocol used by client    [shell]            <--we chose shell (it is rsh) (for nimsh other ports are needed)
  Primary Network Install Interface
*   Cable Type                             tp                 <--twisted pair (we chose it) (bn is coaxial cable (we don't use it))
    Network Speed Setting                  [100]              <--checked interface's speed: netstat -v
    Network Duplex Setting                 [full]             <--same as above
*   NIM Network                            [ent-Network1]
*   Network Type                           ent
*   Ethernet Type                          Standard
*   Subnetmask                             [255.255.255.0]    <--set subnetmask
*   Default Gateway Used by Machine        [50.50.110.3]      <--set client default gateway
*   Default Gateway Used by Master         [50.20.100.1]
*   Host Name                              aix222
(If you created a network for this new client earlier (which is recommended),
network settings will be filled automatically.)
--------------------------
create a client with the command: nim -o define -t standalone -a if1="net_10_1_1 lpar55 0 ent0" LPAR55
-net_10_1_1    the name of the NIM network to which this interface connects
-lpar55        hostname associated with the interface
-0             MAC address of the interface (if the MAC address will not be set, 0 can be used)
-ent0          logical device name of the network adapter
-LPAR55        the name of the resource to create (the host will be referred to by this name in NIM commands)
-------------------------3. after that, create /etc/niminfo file
on client: niminit -a master=aixnim01 -a name=aix222
----------------------------------------------------NIM commands from a client:
niminit -a master=aixnim01 -a name=aix222 -a master_port=1058    rebuild /etc/niminfo from the master (aixnim01) on the client (aix222) using the given port
nimclient -l -l <client>            retrieve data from the nim master about the client (same as on master: lsnim -l <client>)
nimclient -l -L -t spot <client>    list available SPOT resources on the nim master
nimclient -l -p -s pull_ops         list the operations which may be initiated from this machine
----------------------------------------------------Running commands from NIM master on a client via nimsh
If you cannot reach a client (console/ssh/telnet does not work on a nim client),
you can use nimsh to run commands on LPARs. (nimsh (port 3901) should be able to
communicate to the client)
1. vi nim_script.ksh    <--on the nim master create a file (script) with commands you would like to run on a nim client
#!/usr/bin/ksh
hostname
oslevel -s
ps -ef
2. nim -o define -t script -a server=master -a location=/root/nim_script.ksh nim_script    <--create a nim resource from that script
3. /usr/lpp/bos.sysmgt/nim/methods/m_cust -a script=nim_script lpar11
<--running this script (nim_script) on nim client (lpar11)
(it takes a few seconds to show the output of those commands)

LPP SOURCE
LPP_SOURCE:
An lpp_source is a directory similar to the AIX install CDs. It contains AIX Licensed Program Products (LPPs) in Backup
File Format (BFF) and RPM Package Manager (RPM) packages that you can install.

root@aixnim1: / # lsnim -l 5300-TL_00_to_08_works


5300-TL_00_to_08_works:
   class      = resources
   type       = lpp_source
   arch       = power
   Rstate     = ready for use
   prev_state = unavailable for use
   location   = /nim/lppsource/5300-TL_00_to_08_works
   simages    = yes    <--yes is needed for creation of a SPOT, or for installation over the network
...
operations:
showres = show contents of a resource
lslpp = list LPP information about an object
check = check the status of a NIM object
lppmgr = eliminate unnecessary software images in an lpp_source
update = add or remove software to or from an lpp_source
nim -o showres lpp5300                            lists filesets in the lpp_source
nim -Fo check <lpp_source>                        checks and rebuilds the .toc file, and determines if all filesets are included for simages=yes
nim -o remove lpp5300                             removes the lpp_source object (the object definition is removed but the directory/filesets remain)
nim -o lppmgr <lpp_source>                        removes duplicate filesets from an lpp_source
nim -o lppmgr -a lppmgr_flags=rub <lpp_source>    removes (-r) all duplicate updates (-u) and duplicate base levels (-b)
nim -o update -a source=/dev/cd0 -a packages=all 5305_lpp    adds software to an lpp_source
---------------------Creating a NIM lpp_source (above 5300): Base (5300) + TL update (installable for a new system)
1. Replicating a base level lpp_source (giving it the name lpp5304, which we will extend with other filesets later)
using an lpp_source (lpp5300) as source:
nim -o define -t lpp_source -a server=master -a source=lpp5300 -a location=/nim/lppsource/lpp5304 lpp5304
using a directory (pathname) as a source:
nim -o define -t lpp_source -a server=master -a source=/nim/lppsource/lpp5300 -a location=/nim/lppsource/lpp5304 lpp5304
----another way if we don't give the "source":
nim -o define -t lpp_source -a server=master -a location=/nim/lpp_sources/OSFilesets/bb/DVD_1/installp/ppc 530008_bb
(here the "location" is an already existing directory, and the filesets will not be copied anywhere; only the nim object
will be created)
----2. Download a TL level and create an lpp_source from it + check (remove) duplicate filesets
-creating an lpp_source from TL update directory
nim -o define -t lpp_source -a server=master -a location=/nim/lppsource/530004 TL5304
(it may warn us that the simages attribute cannot be set, so it cannot be used for BOS install; it is only a TL update, so this
is OK)
-checking and removing duplicate filesets
nim -o lppmgr -a lppmgr_flags=rub TL5304
(r:remove, u:update filesets, b:base levels)

3. Updating the base level lpp_source (in point 1 named as lpp5304) with TL update lpp_source (TL5304)
-updating base level lpp_source from a TL directory
nim -o update -a show_progress=yes -a packages=all -a source=TL5304 lpp5304
(source: the downloaded update lpp_source; lpp5304: the base lpp_source that we wanted to update with it)
-checking and removing duplicate filesets
nim -o lppmgr TL5304
---------------------Creating a NIM lpp source + TL update (with SMITTY)
(An lpp_source from TL6 SP6 DVD image will be updated by TL7 SP3 filesets downloaded from FixCentral)
1. I copied 2 AIX DVDs into 1 directory:
cp -prh /iso/installp/ppc/* /nim/lppsource/TL6_SP6_base
(for both DVDs)
2. lpp_source creation
smitty nim -> nim administration -> manage resources -> define a resource (lpp_source)
Resource Name           [TL6_SP6_base]
Resource Type           lpp_source
Server of Resource      [master]
Location of Resource    [/nim/lppsource/TL6_SP6_base]

then it showed this:
...
Now checking for missing install images...
All required install images have been found. This lpp_source is now ready.
(if the lpp_source will be used for install, simages must be yes; check with lsnim -l <lpp_source>)
3. Checking if there are duplicate filesets (no language filesets were removed)
smitty nim -> nim administration -> manage resources -> perform operations (lpp_source name -> lppmgr)
TARGET lpp_source                                  TL6_SP6_base
PREVIEW only? (remove operation will NOT occur)    yes
REMOVE DUPLICATE software                          yes
REMOVE SUPERSEDED updates                          yes
REMOVE LANGUAGE software                           no
PRESERVE language                                  [C]
REMOVE NON-SIMAGES software                        no
SAVE removed files                                 no
DIRECTORY for storing saved files                  []
EXTEND file systems if space needed?               yes
4. Downloaded from FixCentral and then copied TL update filesets to NIM
(It was TL7 SP3)
5. Create lpp_source from the TL update directory
Resource Name           [TL7_SP3_update]
Resource Type           lpp_source
Server of Resource      [master]
Location of Resource    [/nim/lppsource/TL7_SP3_update]
then it showed this:

warning: 0042-267 c_mk_lpp_source: The defined lpp_source does not have the
"simages" attribute because one or more of the following packages are missing:
bos
bos.net
bos.diag
...
(it is OK, we do not need simages=yes, because it is only a TL update lpp_source, not a base install)
(removed duplicates (superseded filesets) as in step 3)
6. Update TL6_SP6_base with TL7_SP3_update
smitty nim -> nim administration -> manage resources -> perform operations (the base we want to update -> update)
TARGET lpp_source             TL6_SP6_base
SOURCE of Software to Add     TL7_SP3_update
SOFTWARE Packages to Add      [all]

After this, TL6_SP6_base was renamed, because it now contains everything.


I did the following:
- I copied the content of TL6_SP6_base to a new directory (TL7_SP3_all)
- created a new lpp_source from directory TL7_SP3_all
- removed TL6_SP6_base
NIMADM
AIX migration (upgrade) with nimadm:
AIX migration (or upgrade) is the process of moving from one version of AIX to another version (for example from
AIX 5.3 to 6.1 or 7.1).
This method will preserve all user configurations, and will update the installed filesets and optional software products.
The main advantage of a migration installation compared to a new and complete overwrite is that most of the
filesets and data are preserved on the system. It keeps directories such as /home, /usr, and /var, logical volume
information, and configuration files. The /tmp file system is not preserved during the migration of the system.
Migration can be achieved by:
- Migration by using NIM: a full description can be found at http://www.ibm.com/developerworks/aix/library/au-aixsystem-migration-installation/index.html
- Migration by using a CD or DVD drive: the DVD must be inserted, and the instructions on the screen have to be followed
- Migration by using an alternate disk migration: this can be done with the command "nimadm"
- Migration by using mksysb: this can be done by using NIM or with the command "nimadm"
----------------------------Migration with nimadm:
NIMADM: Network Install Manager Alternate Disk Migration
(It means installation occurs through the network and it is written to another disk.)
The nimadm command is a utility that creates a copy of rootvg on a free disk and simultaneously migrates it to a new
version of AIX in the background. This command is called from the NIM master, and it will copy the NIM client's
rootvg to the NIM master (via rsh). It performs the migration on the NIM master, and after that copies the data back to the
NIM client (to the specified disk). When the system is rebooted, the new AIX version will be loaded.
Advantages:
- Migration happens while the system is online; the only downtime is the reboot time.
- The extra load is only on the NIM master; the NIM client does not suffer any additional overhead.
- If there are problems with the new version, fallback is only 1 reboot back to the old image.
-----------------------------

Migration with Local Disk Caching vs. NFS:


By default nimadm uses NFS for transferring data from the client. Local disk caching on the NIM master allows you to avoid
too many NFS writes, which can be useful on slow networks (where NFS is a bottleneck). This function can be invoked
with the "-j VGname" flag. With this flag nimadm command creates file systems on the specified volume group (on the
NIM master) and uses streams (rsh) to cache all of the data from the client to these file systems. (Without this flag NFS
read/write operations will be used for data transfer and NFS tuning may be required to optimize nimadm performance.)
Local disk caching can improve performance on slow networks, and allows TCB-enabled systems to be
migrated with nimadm.
(Trusted Computing Base is a security feature which periodically checks the integrity of the system. Some info about
TCB can be found here: http://www.ibm.com/developerworks/forums/thread.jspa?threadID=183572)
----------------------------PREREQUISITES, PREPARATION, MIGRATION, POST_MIGRATION
I. Prerequisites on NIM master:
1. NIM master level
NIM master must be at the same or higher level than the level being migrated to.
2. lpp_source and spot level
The selected lpp_source and SPOT must match the AIX level to which you are migrating.
3. bos.alt_disk_install.rte
The same level of bos.alt_disk_install.rte must be installed in the rootvg (on NIM master) and in the SPOT which will
be used.
Check on NIM master: lslpp -l bos.alt_disk_install.rte
Check in SPOT:
nim -o lslpp -a filesets='bos.alt_disk_install.rte' <spot_name>
(It is not necessary to install the alt_disk_install utilities on the client)
0505-205 nimadm: The level of bos.alt_disk_install.rte installed in SPOT
spot_6100-06-06 (6.1.6.16) does not match the NIM master's level (7.1.1.2).
If the NIM master is on 7.1 but you would like to migrate from 5.3 to 6.1, the SPOT and installed versions will differ. 2 ways
to correct this:
- install the 7.1 version of this fileset into the 6.1 SPOT, or
- remove the 7.1 version of this fileset from NIM, temporarily install the 6.1 version, and after the migration install the 7.1
version back.
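The first option can be sketched with a cust operation on the SPOT (the lpp_source and SPOT names are examples; the lpp_source must contain the master's 7.1 level of the fileset):

```shell
# Install the master's level of bos.alt_disk_install.rte into the SPOT:
nim -o cust -a lpp_source=lpp_7100 \
    -a filesets=bos.alt_disk_install.rte spot_6100-06-06
```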
4. free space in vg
The VG that will be used for migration must have enough free space (about the size of the client's rootvg).
5. rsh
Check if client can be reached via RSH from NIM master: rsh <client_name> oslevel -s
6. NFS mount (only if Local Disk Caching is not used)
NIM master must be able to perform NFS mounts and read/write operations. (If "-j VGname" is used in the nimadm
command, this is not needed!)

II. Prerequisites on NIM client:


1. Hardware and firmware levels
The client's hardware and software must be at the required level to support the level that is being migrated to.
2. free disk
Client must have a free disk, large enough to clone rootvg

3. NFS mount
NIM client must be able to perform NFS mounts and read/write operations.
4. multibos
The nimadm command is not supported with the multibos command when there is a bos_hd5 logical volume.
5. lv names
lv names must not be longer than 11 characters (because they will get an alt_ prefix during migration, and the AIX
limitation is 15 characters for an lv).
6. TCB (Trusted Computing Base is a security feature which periodically checks the integrity of the system.)
If you use disk caching option (-j flag) it does not matter if TCB is turned on or off (usually it is not turned on)
However if you omit "-j" flag (NFS read/write) TCB should be turned off. (TCB needs to access file metadata which
is not visible over NFS).
Command to check if TCB is enabled/disabled: odmget -q attribute=TCB_STATE PdAt
7. ncargs (Specifies the maximum allowable size of the ARG/ENV list when running exec() subroutines.)
This is a bug: if ncargs is customized to a value less than 256, it resets all other sys0 attributes to their default values.
So make sure the ncargs value is at least 256: lsattr -El sys0 -a ncargs (chdev -l sys0 -a ncargs='256')

III. Preparation on NIM client:


1. create mksysb
2. check filesets, commit: lppchk -v, installp -s (smitty commit if needed)
3. pre_migration script: /usr/lpp/bos/pre_migration (it will show you if anything must be corrected, output is in
/home/pre_migration...)
4. save actual config (mounts, routes, filesystems, interfaces, lsattr -El sys0, vmo -a, no -a, ioo -a ...)
5. save some config files (/etc/motd, /etc/sendmail.cf, /etc/ssh... (/home won't be overwritten, so these can be saved there))
for ssh this can be used:
# ssh -v dummyhost 2>&1 | grep "Reading configuration" (it will show location of ssh_config: debug1: Reading
configuration data /etc/ssh/ssh_config)
# cp -pr <path_to_ssh_dir> /home/pre_migration.<timestamp>/ssh
6. free up disk: unmirrorvg, reducevg, bosboot, bootlist
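Steps 4-5 can be scripted into a timestamped save directory. A self-contained sketch (it writes a dummy file under /tmp so it runs anywhere; on a real client you would use /home and the actual /etc files):

```shell
# Sketch: save config files into a timestamped pre-migration directory.
# /tmp and the dummy file keep the sketch self-contained; on a real client
# use /home and the real config files.
ts=$(date +%Y%m%d_%H%M%S)
dest=/tmp/pre_migration.$ts
mkdir -p "$dest"

# On a real client, for example:
#   for f in /etc/motd /etc/sendmail.cf; do cp -p "$f" "$dest"; done
#   cp -pr /etc/ssh "$dest"/ssh
echo "dummy config" > "$dest/motd"
ls "$dest"
```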

IV. Migration (on NIM master):


nimadm -j nimadmvg -c aix_client1 -s spot_6100-06-06 -l lpp_source_6100-0606 -d hdisk1 -Y
-j: specifies the VG on the master which will be used for the migration (filesystems will be created here and the client's
data is cached here via rsh)
-c: client name
-s: SPOT name
-l: lpp_source name
-d: hdisk name for the alternate root volume group (altinst_rootvg)
-Y: agrees to the software license agreements for software that will be installed during the migration.
Migration logs can be found in the /var/adm/ras/alt_mig directory. There will be 12 phases; after that, you will get back
the prompt.
Check if altinst_rootvg exists on the client and the bootlist is set correctly.
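A small dry-run sketch that assembles the call from variables so it can be reviewed before execution (the names are the ones used in this example):

```shell
# Dry-run sketch: build the nimadm command from variables and print it
# for review before actually running it.
CACHE_VG=nimadmvg          # -j: cache VG on the master
CLIENT=aix_client1         # -c: NIM client name
SPOT=spot_6100-06-06       # -s: SPOT resource
LPP=lpp_source_6100-0606   # -l: lpp_source resource
DISK=hdisk1                # -d: target disk for altinst_rootvg

CMD="nimadm -j $CACHE_VG -c $CLIENT -s $SPOT -l $LPP -d $DISK -Y"
echo "$CMD"    # review, then run it and watch /var/adm/ras/alt_mig
```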

V. Post migration checks on client:


1. check filesets: oslevel -s, lppchk -v, instfix -i | grep ML (update/correct/commit other softwares/filesets if needed)
2. check config, config files: (sys0, vmo, tunables: tuncheck -p -f /etc/tunables/nextboot) (maxuproc: lsattr -El sys0,
chdev -l sys0 -a maxuproc=<value>)
3. post_migration script: /usr/lpp/bos/post_migration (it can run for a long time, 5-10 minutes)
4. others: mksysb, smtctl, rsh, rootvg mirror
------------------------------------------------------------------------------------NIMADM MIGRATION with MKSYSB and ALT_DISK_MKSYSB
This is a different method: you migrate an mksysb to a higher level, and then restore that mksysb to a free disk.
I did this migration from 6.1 TL6 SP6 to 7.1 TL2 SP2
1. on client: update alt_disk filesets to the new version.
(this is needed for alt_disk_mksysb, because the mksysb and the alt_disk filesets have to be on the same level)
I updated these filesets to 7.1 (however AIX was on 6.1):
bos.alt_disk_install.boot_images  7.1.2.15  COMMITTED
bos.alt_disk_install.rte          7.1.2.15  COMMITTED
2. on client: unmirror rootvg, free up a disk
(if rootvg is mirrored you should unmirror it and free up 1 disk, so 1 disk will be enough for the mksysb restore;
otherwise you will get mklv failures, because the system cannot fulfill the allocation request.)
# unmirrorvg rootvg hdiskX
# reducevg rootvg hdiskX
# bosboot -ad /dev/hdiskY
# bootlist -m normal hdiskY
3. on client: create mksysb locally:
# mksysb -ie /mnt/bb_lpar61_mksysb
(copy it to a NIM master which is already at the level we want to migrate to)
4. on NIM master: create a resource from the mksysb file
# nim -o define -t mksysb -a server=master -a location=/nim/mksysb/bb_lpar61_mksysb bb_lpar61_mksysb
5. on NIM master: migrate mksysb resource to new AIX level
# nimadm -T bb_lpar61_mksysb -O /nim/mksysb/bb_lpar71_mksysb -s spot_7100-02-02 -l lpp_7100-02-02 -j nimvg
-Y -N bb_lpar71_mksysb
-T - existing AIX 6.1 NIM mksysb resource
-O - path to the new migrated mksysb resource
-s - spot used for the migration
-l - lpp_source used for the migration
-j - volume group which will be used on NIM master to create file systems temporarily (with alt_ prefix)
-Y - agrees to license agreements
-N - name of the new AIX 7.1 mksysb resource
(after the new mksysb image has been created, copy it to the client)

6. on client: restore new 7.1 mksysb to a free disk with alt_disk_mksysb


# alt_disk_mksysb -m /mnt/bb_lpar71_mksysb -d hdiskX -k
-m - path to the mksysb
-d - disk used for restore
-k - keep user defined device configuration
(after that you can reboot system to the new rootvg.)
----------------------------nimadm fails: 0505-160, 0505-213
Solution:
#nim -o showres lpp_source |grep sysmgt.websm.webaccess
The problem is with the package sysmgt.websm.webaccess. The fileset sysmgt.websm.webaccess, part of the
sysmgt.websm package, starts processes from its post_i script that cause problems for installations in SPOT
environments, and it also affects alternate disk install migration (nimadm).
The workaround is to remove the sysmgt.websm package from the lpp_source, rebuild the .toc, and then do the
nimadm process again.
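A dry-run sketch of those workaround steps; the commands are only echoed, and the lpp_source path and name are examples (inutoc rebuilds the .toc, nim -Fo check refreshes the resource state):

```shell
# Dry-run sketch of the sysmgt.websm workaround; commands are echoed only.
# The lpp_source name and path below are example values.
LPP_NAME=lpp_source_6100-0606
LPP_DIR=/nim/lpp_source/6100-0606/installp/ppc

echo "rm $LPP_DIR/sysmgt.websm*"    # remove the package from the lpp_source
echo "inutoc $LPP_DIR"              # rebuild the .toc file
echo "nim -Fo check $LPP_NAME"      # refresh the NIM resource state
```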
----------------------------/usr/sbin/nimadm[1147]: domainname: not found
I have found this: http://www-01.ibm.com/support/docview.wss?uid=isg1IV32979
If bos.net.nis.client is not installed on the NIM master, nimadm will output the error:
/usr/sbin/nimadm[1147]: domainname: not found.
It has no impact on the nimadm process; only the error message should have been suppressed.
Local fix:
None; the error message can be ignored, the code behind it will treat domainname as empty.
----------------------------umount: error unmounting /dev/lv11: Device busy
umount: error unmounting /dev/lv10: Device busy
umount: error unmounting /dev/lv01: Device busy
0505-158 nimadm: WARNING, unexpected result from the umount command.
0505-192 nimadm: WARNING, cleanup may not have completed successfully.
This happens because there are processes running in the above-mentioned filesystems.
These filesystems were created during the migration, but as AIX was upgraded, a fileset started running this process there:

root 14811364       1   0 12:21:19  pts/2  0:06 /usr/java5/jre/bin/java -Dderby.system.home=/usr/ibm/common/acsi/repos -Xrs -Djava.library.path=...

This process is some System Director stuff, so if you don't have that you can get rid of it.
Workaround:
After restarting the nimadm migration, I monitored these filesystems (fuser -cux <fs_name>).
At around 70% of the installation this process popped up. I waited until the install reached about 90%, then did "kill <pid>".
After this nimadm was successful.
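The monitoring step can be sketched as a loop over the affected filesystems. fuser is mocked here so the sketch runs anywhere; on AIX you would drop the mock and parse the real `fuser -cux` output the same way:

```shell
# Sketch of the workaround: check each filesystem with fuser and report the
# PIDs holding it. fuser is mocked so the sketch is self-contained; on AIX,
# remove the mock function and the real fuser -cux is used instead.
fuser() { echo "14811364"; }    # mock: pretend one java process holds the fs

for fs in /dev/lv01 /dev/lv10 /dev/lv11; do
    for pid in $(fuser -cux "$fs"); do
        echo "would run: kill $pid   (holding $fs)"
    done
done
```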
SPOT:
Essentially the SPOT is a /usr filesystem just like the one on your NIM master. Everything that a machine requires in
a /usr file system, such as the AIX kernel, executable commands, libraries, and applications are included in the SPOT.
During a client install the client needs to run commands (mkvg, mklv...); these commands are available in the SPOT.
During the installation, the client machine NFS mounts this resource in order to access the code needed for the
installation process. Device drivers, the BOS install program, and other necessary code needed to perform a base
operating system installation are found inside the SPOT.
The SPOT is responsible for:
- Creating a boot image to send to the client machine over the network.
- Running the commands needed to install the NIM client.
You can think of it as having multiple "mini-systems" on your NIM master, because each SPOT is its own /usr
filesystem. You can upgrade it, add fixes to it, use it to boot a client system....etc.
You can also create a SPOT from a NIM mksysb resource. This SPOT, however, is not as versatile as one created from
an lpp_source: it cannot be upgraded with fixes and can only be used with the mksysb resource it was created from.
When a SPOT is created, network boot images are constructed in the /tftpboot directory using code from the newly
created SPOT. When a client performs a network boot, it uses tftp to obtain a boot image from the server. After the boot
image is loaded into memory at the client, the SPOT is mounted in the client's RAM file system to provide all
additional software support required to complete the operation.
root@aixnim1: / # lsnim -l spot_5300_09
spot_5300_09:
   class        = resources
   type         = spot
   plat_defined = chrp
   Rstate       = ready for use
   prev_state   = ready for use
   location     = /nim/spot/spot_5300_09/usr    <--shows the location
   ...

operations:
   reset      = reset an object's NIM state
   cust       = perform software customization
   showres    = show contents of a resource
   maint      = perform software maintenance
   lslpp      = list LPP information about an object
   fix_query  = perform queries on installed fixes
   showlog    = display a log in the NIM environment
   check      = check the status of a NIM object
   lppchk     = verify installed filesets
   update_all = update all currently installed filesets
creating a SPOT (only the top directory should be specified, the SPOT directory will be created automatically):
nim -o define -t spot -a server=master -a location=/nim/spot -a source=5300-09-03 -a installp_flags=-aQg spot_5300-09-03

resetting a SPOT (if an operation failed, with this the resource state (Rstate) will be updated, and SPOT is ready to use):
nim -Fo reset spot_5300-09-03
It is preferable, however, to run a force check on the SPOT instead:
checking a SPOT (verifies the usability of a SPOT, and rebuild network boot image if necessary and change its state to
"ready for use"):
nim -Fo check spot_5300-09-03
checking the contents of the spot (verifies that software was installed successfully on a spot resource):
nim -o lppchk -a show_progress=yes spot_5200_08
Creating a SPOT from an mksysb (created spot can be used only for this mksysb):
smitty nim_mkres -> spot -> enter the values needed (the Source of Install Image should be the mksysb)
checking if a SPOT contains a fileset:
nim -o showres 'spot_5300-11-04_bb1' | grep bos.alt_disk_install.rte
nim -o lslpp -a filesets="bos.alt_disk_install.rte" spot_5300-11-04_bb1
checking a SPOT level (similar to instfix -i | grep ML):
root@aixnim1: / # nim -o fix_query spot_5200-08 | grep ML
All filesets for 5.2.0.0_AIX_ML were found.
All filesets for 5200-01_AIX_ML were found.
All filesets for 5200-02_AIX_ML were found.
update a spot with an lpp_source:
nim -o cust -a fixes=update_all -a lpp_source=5305_lpp 5305_spot
A SPOT is an installed entity, like any other AIX system, so it can run into cases where it has broken filesets, broken
links, or missing/corrupt files. These are fixed in the same manner as on any other system:
nim -o lppchk -a lppchk_flags="v" 5305_spot    <--use the "Force Overwrite" or "Force Reinstall" options for "-v" errors
nim -o lppchk -a lppchk_flags="l" 5305_spot    <--use the "-ul" flags for missing links from "-l" errors
nim -o lppchk -a lppchk_flags="c" 5305_spot    <--replace bad files for any "-c" output
---------------------------------Spot creation with SMITTY:
smitty nim -> perform nim administration -> manage resources -> define a resource (spot)
Resource Name                [spot_TL7_SP3]
Resource Type                spot
Server of Resource           [master]
Source of Install Images     [TL7_SP3]
Location of Resource         [/nim/spots]
...
COMMIT software updates?     yes

---------------------------------Spot update with SMITTY:


(bos.alt_disk_install.rte fileset will be added to a spot)
smitty nim -> perform nim softw. inst. -> inst. and upd. softw. -> Inst. softw. (spot -> lpp_source)
Installation Target          spot_TL7_SP3
LPP_SOURCE                   TL7_SP3
Software to Install          [+ 6.1.7.2 Alt. Disk Inst. Runt.]   <--after F4 -> bos.alt_disk_install.rte with F7
...
installp Flags
COMMIT software updates?     [yes]
SAVE replaced files?         [no]