DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any data loss or damage caused by trying any of the commands/methods mentioned in this blog. You use the commands/methods/scripts at your own risk. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.
LVM in Linux step by step
LVM stands for Logical Volume Manager.
With LVM, we can create logical partitions that can span across one or more physical hard drives. First, the hard drives are divided into physical volumes, then those physical volumes are combined together to create the volume group and finally the logical volumes are created from volume group.
The LVM commands listed in this article were run on an Ubuntu distribution, but they are the same on other Linux distributions.
Before we start, install the lvm2 package as shown below.
$ sudo apt-get install lvm2
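This article uses Ubuntu's apt-get; on RPM-based distributions the package is also named lvm2, so something like the following should work (an assumption, not verified here):

$ sudo yum install lvm2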
To create an LVM, we need to run through the following steps.
- Select the physical storage devices for LVM
- Create the Volume Group from Physical Volumes
- Create Logical Volumes from Volume Group
Select the Physical Storage Devices for LVM – Use pvcreate, pvscan, pvdisplay Commands
In this step, we need to choose the physical volumes that will be used to create the LVM. We can create the physical volumes using pvcreate command as shown below.
$ sudo pvcreate /dev/sda6 /dev/sda7
Physical volume "/dev/sda6" successfully created
Physical volume "/dev/sda7" successfully created
As shown above, two physical volumes are created – /dev/sda6 and /dev/sda7. If the physical volumes are already created, you can view them using the pvscan command as shown below.
$ sudo pvscan
PV /dev/sda6 lvm2 [1.86 GB]
PV /dev/sda7 lvm2 [1.86 GB]
Total: 2 [3.72 GB] / in use: 0 [0 ] / in no VG: 2 [3.72 GB]
You can view the list of physical volumes with attributes like size, physical extent size, total physical extent size, the free space, etc., using the pvdisplay command as shown below.

$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sda6
VG Name
PV Size 1.86 GB / not usable 2.12 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 476
Free PE 456
Allocated PE 20
PV UUID m67TXf-EY6w-6LuX-NNB6-kU4L-wnk8-NjjZfv
--- Physical volume ---
PV Name /dev/sda7
VG Name
PV Size 1.86 GB / not usable 2.12 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 476
Free PE 476
Allocated PE 0
PV UUID b031x0-6rej-BcBu-bE2C-eCXG-jObu-0Boo0x
Note: PE (Physical Extents) are the equal-sized chunks into which LVM divides a physical volume. The default extent size is 4MB.
Create the Volume Group – Use vgcreate, vgdisplay Commands
A volume group is a pool of storage that consists of one or more physical volumes. Once you have created the physical volumes (PV), you can create the volume group (VG) from them.
In this example, the volume group vol_grp1 is created from the two physical volumes as shown below.
$ sudo vgcreate vol_grp1 /dev/sda6 /dev/sda7
Volume group "vol_grp1" successfully created
LVM processes storage in terms of extents. We can also change the extent size (from the default 4MB) using the -s flag, as shown below.
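For example, to create the same volume group with 16MB extents instead of the default 4MB (a sketch reusing the device names from this article):

$ sudo vgcreate -s 16M vol_grp1 /dev/sda6 /dev/sda7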
The vgdisplay command lists the created volume groups.
$ sudo vgdisplay
--- Volume group ---
VG Name vol_grp1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.72 GB
PE Size 4.00 MB
Total PE 952
Alloc PE / Size 0 / 0
Free PE / Size 952 / 3.72 GB
VG UUID Kk1ufB-rT15-bSWe-5270-KDfZ-shUX-FUYBvR
LVM Create: Create Logical Volumes – Use lvcreate, lvdisplay Commands
Now, everything is ready to create the logical volumes from the volume group. The lvcreate command below creates a logical volume of 80MB (20 extents of 4MB each).
$ sudo lvcreate -l 20 -n logical_vol1 vol_grp1
Logical volume "logical_vol1" created
Use the lvdisplay command as shown below to view the available logical volumes with their attributes.

$ sudo lvdisplay
--- Logical volume ---
LV Name /dev/vol_grp1/logical_vol1
VG Name vol_grp1
LV UUID ap8sZ2-WqE1-6401-Kupm-DbnO-2P7g-x1HwtQ
LV Write Access read/write
LV Status available
# open 0
LV Size 80.00 MB
Current LE 20
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0

After creating the appropriate filesystem on the logical volume, it is ready to use for storage.
$ sudo mkfs.ext3 /dev/vol_grp1/logical_vol1
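Once the filesystem is created, the logical volume can be mounted like any other block device (a sketch; /mnt/data is an assumed mount point, not from the original setup):

$ sudo mkdir -p /mnt/data
$ sudo mount /dev/vol_grp1/logical_vol1 /mnt/data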
LVM resize: Change the size of the logical volumes – Use lvextend Command
We can extend the size of a logical volume after creating it by using the lvextend utility as shown below. This changes the size of the logical volume from 80MB to 100MB.
$ sudo lvextend -L100 /dev/vol_grp1/logical_vol1
Extending logical volume logical_vol1 to 100.00 MB
Logical volume logical_vol1 successfully resized
We can also add additional size to a specific logical volume as shown below.

$ sudo lvextend -L+100 /dev/vol_grp1/logical_vol1
Extending logical volume logical_vol1 to 200.00 MB
Logical volume logical_vol1 successfully resized
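Note that lvextend grows only the logical volume, not the filesystem inside it. For the ext3 filesystem created earlier, the filesystem would be grown separately, roughly as follows (a sketch; depending on the kernel and e2fsprogs version, the volume may need to be unmounted first):

$ sudo resize2fs /dev/vol_grp1/logical_vol1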
AIX as DNS client
DNS server IP addresses/hostnames are defined in /etc/resolv.conf on AIX servers.
Here is an example of /etc/resolv.conf
nameserver 192.168.2.12
nameserver 192.168.2.13
nameserver 192.168.2.14
search india.cope.com usa.cope.com uk.cope.com
Let us see a few tips and tricks on using nslookup.
1. To look up an address in a non-interactive way:
$ nslookup webserv
Server: dnserver1.india.cope.com
Address: 192.168.2.12
Name: webserv.india.cope.com
Address: 192.168.2.211
$
2. To look up an address in an interactive way:
$nslookup
Default Server: dnserver1.india.cope.com
Address: 192.168.2.12
> websrv
Server: dnserver1.india.cope.com
Address: 192.168.2.12
Name: webserv.india.cope.com
Address: 192.168.2.211
> exit
$
3. To look up a hostname (reverse lookup) in a non-interactive way:
$ nslookup 192.168.2.211
Server: dnserver1.india.cope.com
Address: 192.168.2.12
Name: webserv.india.cope.com
Address: 192.168.2.211
$
4. To look up a hostname in an interactive way:
$ nslookup
Default Server: dnserver1.india.cope.com
Address: 192.168.2.12
> 192.168.2.211
Server: dnserver1.india.cope.com
Address: 192.168.2.12
Name: webserv.india.cope.com
Address: 192.168.2.211
> exit
$
5. To look up MX data:
$ nslookup
Default Server: dnserver1.india.cope.com
Address: 192.168.2.12
> set q=mx
> rajs
Server: dnserver1.india.cope.com
Address: 192.168.2.12
Name: rajs.india.cope.in
Address: 0.0.0.0
> exit
$
6. How to query a specific DNS server for an address?
We can do this in both interactive and non-interactive ways.
The example below queries for the IP address of the host websrv using the DNS server "192.168.2.15", which is not specified in the /etc/resolv.conf file.
Interactive Way:
$nslookup
Default Server: dnserver1.india.cope.com
Address: 192.168.2.12
> server 192.168.2.15
Default Server: dnserver4.india.cope.com
Address: 192.168.2.15
> websrv
Server: dnserver4.india.cope.com
Address: 192.168.2.15
Name: webserv.india.cope.com
Address: 192.168.2.211
> exit
$
Non-Interactive Way:
$ nslookup websrv 192.168.2.15
Server: dnserver4.india.cope.com
Address: 192.168.2.15
Name: webserv.india.cope.com
Address: 192.168.2.211
7. What is the difference between authoritative and non-authoritative answers?
When you query a name for the first time, the answer comes (via your configured DNS server) from a name server that is authoritative for the zone; this is called an authoritative answer. The answer is then kept in the DNS server's cache for a certain time.
When you repeat the same query while the entry is still cached, the answer is served from that cache instead of from an authoritative name server; this is called a non-authoritative answer.
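For example, repeating the earlier lookup while the entry is still cached would typically be flagged as shown below (illustrative output based on the example data above):

$ nslookup webserv
Server: dnserver1.india.cope.com
Address: 192.168.2.12

Non-authoritative answer:
Name: webserv.india.cope.com
Address: 192.168.2.211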
8. How will you specify an alternate DNS server when using nslookup ?
For using 192.168.2.24 as an alternate DNS server,
$ nslookup - 192.168.2.24
This will query the alternate server instead of the DNS servers configured in /etc/resolv.conf file.
9. How will you query an MX record on an alternate server?
$ nslookup -type=mx bashi.usa.cope.com 192.168.2.24
10. How will you debug while querying a DNS server?
$ nslookup
Default Server: dnserver1.india.cope.com
Address: 192.168.2.12
> set debug
> webserv
Server: dnserver1.india.cope.com
Address: 192.168.2.12
;; res_nmkquery(QUERY, websrv.india.cope.com, IN, A)
------------
Got answer:
HEADER:
opcode = QUERY, id = 54305, rcode = NOERROR
header flags: response, authoritive answer, want recursion, recursion available
questions = 1, answers = 1, authority records = 0, additional = 0
QUESTIONS:
websrv.india.cope.com, type = A, class = IN
ANSWERS:
-> webserv.india.cope.com
internet address = 192.168.2.211
ttl = 3600 (1H)
------------
Name: webserv.india.cope.com
Address: 192.168.2.211
Each DNS packet is composed of 5 sections, as given below:
- Header Section
- Question Section
- Answer Section
- Authority Section
- Additional Section
11. You can set options for the nslookup command using the 'set' sub-command.
Here are a few options:
port=53 By default, the DNS service uses port 53. If you have a DNS service running on a different port, you can use the port option to specify the port number.
timeout=10 It is used to specify the timeout value. If the name server doesn't respond in 10 seconds, nslookup will send the query again.
debug To turn on debug mode
nodebug To turn off debug mode
querytype=A By default, nslookup looks for an A record. If you type an IP address, it will look for a PTR record. You can change the querytype to MX or SOA.
12. How will you come out of the interactive nslookup session?
You can use exit command or type ^D (control+D) to come out of the session.
0301-168 bosboot: The current boot logical volume, /dev/ does not exist on /dev/hdisk0
root@yyxxxx4:/dev
# bosboot -ad /dev/ipldevice
0516-602 lslv: Logical volume name not entered.
Usage: lslv [-L] [-l | -m] [-n DescriptorPV] LVname
lslv: [-L] [-n DescriptorPV] -p PVname [LVname]
Lists the characteristics of a logical volume.
0301-168 bosboot: The current boot logical volume, /dev/, does not exist on /dev/hdisk0.
Solution:
Verify the current layout and save the base customized information:
# lsvg -p rootvg              (shows hdisk0)
# lslv -m hd5                 (hd5 is on the 1st partition of hdisk0)
# savebase -v                 (successful)
Remove and recreate hd5:
# rmlv hd5
# mklv -y hd5 -t boot -a e rootvg 1 hdisk0
Recreate the /dev/ipldevice link:
# cd /dev
# rm ipldevice
# ln /dev/rhdisk0 /dev/ipldevice
# bosboot -ad /dev/ipldevice  (same error)
# bootinfo -B hdisk0          (returns 1)
# ln /dev/rhd5 /dev/ipl_blv
# cd /dev                     (check that the major/minor numbers match)
# bosboot -ad /dev/ipldevice  (works!)
alt_disk_install was able to proceed now.
EMC ODM definitions cleanup
Before making any changes, collect host logs to document the current configuration. At a minimum, save the output of: inq, lsdev -Cc disk, lsdev -Cc adapter, lspv, and lsvg.
Shutdown the application(s), unmount the file system(s), and varyoff all volume groups except for rootvg. Do not export the volume groups.
# varyoffvg <vg_name>
Check with lsvg -o (confirm that only rootvg is varied on)
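On a host that is ready for cleanup, the check might look like this (illustrative output):

# lsvg -o
rootvg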
If no PowerPath, skip all steps with power names.
For CLARiiON configuration, if Navisphere Agent is running, stop it:
# /etc/rc.agent stop
Remove paths from Powerpath configuration:
# powermt remove hba=all
Delete all hdiskpower devices:
# lsdev -Cc disk -Fname | grep power | xargs -n1 rmdev -dl
Remove the PowerPath driver instance:
# rmdev -dl powerpath0
Delete all hdisk devices. For Symmetrix devices, use this command:
# lsdev -CtSYMM* -Fname | xargs -n1 rmdev -dl
For CLARiiON devices, use this command:
# lsdev -CtCLAR* -Fname | xargs -n1 rmdev -dl
Confirm with lsdev -Cc disk that there are no EMC hdisks or hdiskpowers.
Remove all Fiber driver instances:
# rmdev -Rdl fscsiX    (X being the driver instance number, i.e. 0, 1, 2, etc.)
Verify through lsdev -Cc driver that there are no more fiber driver instances (fscsi).
Put the adapter instances into the Defined state:
# rmdev -l fcsX(X being adapter instance number, i.e. 0,1,2, etc.)
Create the hdisk entries for all EMC devices:
# emc_cfgmgr
or
# cfgmgr -vl fcsX    (X being each adapter instance which was rebuilt)
Skip this part if no PowerPath.
Configure all EMC devices into PowerPath:
# powermt config
Check the system to see if it now displays correctly:
# powermt display
# powermt display dev=all
# lsdev -Cc disk
# /etc/rc.agent start
Recovering EMC dead paths
If you run:
# powermt display dev=all
and you notice that there are "dead" paths, then these are the commands to run in order to set these paths back to "alive" again, of course AFTER ensuring that any SAN related issues are resolved.
To have PowerPath scan all devices and mark any dead devices as alive, if it finds that a device is in fact capable of doing I/O commands, run:
# powermt restore

To delete any dead paths, and to reconfigure them again:
# powermt reset
# powermt config
Or you could run:
# powermt check
Using the “tar” and “gzip” commands
In UNIX, files are packed using the tar (Tape ARchive) utility; an archive created this way is commonly referred to as a “tarball”. The packed files are then typically compressed and stored using the GNU zip utilities.
The purpose of this post is not to give a detailed and exhaustive description of the “tar” and “gzip” commands, but to present the essentials in an easy way. I hope you enjoy the post!
Basically, the “tar” (Tape ARchive) command packs/unpacks files, and the gzip (GNU zip) command compresses/uncompresses them.
tar
The “tar” command allows us to group and ungroup (pack and unpack) a set of files and/or folders into a single file. To pack some folders and files into a single file, we can run the “tar” command with any of the following equivalent parameter forms:
tar -cvf application.tar *
tar -cvf application.tar java PERL requirements.txt config.xml
tar cvf application.tar java PERL requirements.txt config.xml
Regardless of how the parameters are written (with or without the leading hyphen), below is a description of the parameters used:
Option | Meaning | Description |
-c | Create | Create a new archive |
-v | Verbose | Verbosely list files which are processed |
-f | File=ARCHIVE | Use archive file or device |
Otherwise, the “tar” command can be used for the reverse process, to unpack or extract a set of files directly from the “tar” file (see the example after the table):
Option | Meaning | Description |
-x | eXtract | Extract files from an archive |
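For example, to extract the archive created earlier into the current directory (a usage sketch):

tar xvf application.tar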
gzip
This command simply allows compressing a file of any type (with the “tar” extension or any other). To compress a file, and to uncompress it back with gunzip, we can use the following syntax:

gzip myfile.tar
gunzip myfile.tar.gz
The “tar” command also allows compressing the resulting “tar” file in one step, via the “z” option as shown below:
tar cvzf archive_name.tar.gz dirname
tar and gzip
Oftentimes the meaning of the “tar” and “gzip” commands can be confusing, because they are used together on the same command line through the pipe operator “|”.
Example 1
tar cvf - * | gzip > oracle.tar.gz
The first command (before the pipe) is “tar”, which packs all files in the current directory and writes the archive to standard output (the “-”).
After this, the “tar” stream is compressed by “gzip” (after the pipe), and the final file “oracle.tar.gz” is produced by the redirection operator “>”.
Example 2
gunzip < oracle.tar.gz | tar xvf -
In this example the “oracle.tar.gz” file is unzipped, and the result is a “tar” stream that is passed to the “tar” command to unpack. With the hyphen, the data resulting from the “gunzip” command is used as input to the tar command.
Useful list
Below you can see a list of very useful actions included these commands:
- List the contents of “tar” file
tar tvf archive_name.tar
- Extract a single file from “tar” file
tar xvf oracle.tar java/MyLib.java
- Add a file to an existing ”tar” file
tar rvf oracle.tar conf.cnf
- Untar an archive to a different directory
tar -zxf oracle.tar.gz -C ora
How to add a new network gateway or static route on Red Hat Enterprise Linux host?
To add a static route or gateway on a Red Hat Enterprise Linux host, such as when adding one for a second (or tertiary) interface, use the /etc/sysconfig/network-scripts/route-<interface> files. These configuration files are read during network service initialization. For example, to add static routes for eth0, create the file /etc/sysconfig/network-scripts/route-eth0 and add the routes as explained below. There are two possible formats for this file: the first uses ip command arguments and the second uses network/netmask directives.
Format 1:

For ip commands, the ifup-route script supplies ip route add and the contents of the file are all parameters necessary to set up the route. For example, to set up a default route plus one static route, the file would contain the following:

default via X.X.X.X
Y.Y.Y.Y via Z.Z.Z.Z

e.g.

default via 192.168.1.1
10.10.10.0/24 via 192.168.1.2

In the above, X.X.X.X is the gateway IP address of the default route. The second line adds another static route, where Y.Y.Y.Y is the network and Z.Z.Z.Z is its gateway IP address. Multiple lines will be parsed as individual routes.
Format 2:

The alternative format is as follows:

ADDRESS<N>=X.X.X.X
NETMASK<N>=Y.Y.Y.Y
GATEWAY<N>=Z.Z.Z.Z

e.g.

ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.1.2
ADDRESS1=20.20.20.0
NETMASK1=255.255.255.0
GATEWAY1=192.168.1.2

This format deals with three fields: GATEWAY, NETMASK, and ADDRESS. Each field has a number appended to it indicating which route it relates to. In the above example, Z.Z.Z.Z is the gateway IP address. Note that multiple entries must be numbered sequentially (for example ADDRESS1=, NETMASK1=, GATEWAY1=) and must not skip a value (0 must be followed by 1, not a number greater than 1).
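Once the route-<interface> file is in place, the routes are applied the next time the interface is brought up. To apply them immediately, the network service can be restarted (a sketch, assuming a brief network interruption is acceptable on the host):

# service network restart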
View mksysb content & restore individual files
To view information about a mksysb backup file use:
# lsmksysb -lf P2_1202_TL7.mk

VOLUME GROUP: rootvg
BACKUP DATE/TIME: Tue Feb 21 18:08:29 GMT+01:00 2012
UNAME INFO: AIX power2s 1 6 00C4489D4C00
BACKUP OSLEVEL: 6.1.7.0
MAINTENANCE LEVEL: 6100-07
BACKUP SIZE (MB): 41216
SHRINK SIZE (MB): 8421
VG DATA ONLY: no

rootvg:
LV NAME    TYPE     LPs PPs PVs LV STATE     MOUNT POINT
hd6        paging   16  32  2   open/syncd   N/A
hd5        boot     1   2   2   closed/syncd N/A
hd8        jfs2log  1   2   2   open/syncd   N/A
hd3        jfs2     4   8   2   open/syncd   /tmp
hd1        jfs2     1   2   2   open/syncd   /home
hd11admin  jfs2     1   2   2   open/syncd   /admin
livedump   jfs2     2   4   2   open/syncd   /var/adm/ras/livedump
fslv00     jfs2     72  144 2   open/syncd   /usr/sys/inst.images
hd4        jfs2     22  44  2   open/syncd   /
hd2        jfs2     34  68  2   open/syncd   /usr
hd9var     jfs2     2   4   2   open/syncd   /var
hd10opt    jfs2     5   10  2   open/syncd   /opt
To get the LPP info from the mksysb use:
# lsmksysb -Lf P2_1202_TL7.mk

Fileset                            Level     State Type Description (Uninstaller)
----------------------------------------------------------------------------
ICU4C.rte                          6.1.7.0   C     F    International Components for Unicode
Java5.sdk                          5.0.0.430 C     F    Java SDK 32-bit
Java5_64.sdk                       5.0.0.430 C     F    Java SDK 64-bit
Java6.sdk                          6.0.0.280 A     F    Java SDK 32-bit
Tivoli_Management_Agent.client.rte 3.7.1.0   C     F    Management Framework Endpoint Runtime
X11.adt.bitmaps                    6.1.0.0   C     F    AIXwindows Application Development Toolkit Bitmap Files
X11.adt.imake                      6.1.6.0   C     F    AIXwindows Application Development Toolkit imake
.....
For the list of files contained in the mksysb:

# lsmksysb -f P2_1202_TL7.mk
6666 ./bosinst.data
11 ./tmp/vgdata/rootvg/image.info
11861 ./image.data
187869 ./tmp/vgdata/rootvg/backup.data
0 ./opt
0 ./opt/IBM
0 ./opt/IBM/perfpmr
2249 ./opt/IBM/perfpmr/Install
2616 ./opt/IBM/perfpmr/PROBLEM.INFO
9818 ./opt/IBM/perfpmr/README
26741 ./opt/IBM/perfpmr/config.sh
A specific file from the mksysb backup can be restored using the restorevgfiles command. In the following example the file will be restored to the current directory (/tmp/restore); using the -d flag, an alternative restore location can be specified.

(/tmp/restore) # restorevgfiles -f /export2/P2_1202_TL7.mk ./root/j1
New volume on /export2/P2_1202_TL7.mk:
Cluster size is 51200 bytes (100 blocks).
The volume number is 1.
The backup date is: Tue Feb 21 18:09:12 GMT+01:00 2012
Files are backed up by name.
The user is root.
x 6 ./root/j1
The total size is 6 bytes.
The number of restored files is 1.
==================================================================
(/tmp/restore) # ls -la */*
-rw-r--r-- 1 root system 6 Feb 17 11:16 root/j1

Here -f specifies the path to the mksysb image file, and ./root/j1 is the file to be extracted from the image.
Restore or Install AIX with a mksysb image using NIM
A mksysb resource is a file containing the image of the root volume group of a machine (created with the AIX mksysb command). It is used to restore a machine when it has crashed, or to install it from scratch (also known as “cloning” a client). In a NIM environment, we usually install AIX on new LPARs or VIO clients from an existing mksysb rather than from a fresh AIX CD. Installing the OS from an existing mksysb helps keep the customization the same for all LPARs.
Assumptions:
1. The NIM client (In our example NIM Client is webmanual01) is defined on the NIM master (In our example nim01)
2. The client’s hostname and IP address are listed in the /etc/hosts file.
3. The mksysb image has been transferred or restored from TSM and resides in the NIM master nim01:/export/nim/mksysb.The size and sum command output match that from the source mksysb image.
Create a mksysb resource:
Run smit nim_mkres –> You should see a “Resource type” listing displayed –> Scroll through the menu list and select mksysb.
Hit Enter and you will see the menu below:
Define a Resource

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                      [Entry Fields]
* Resource Name                            [webmanual01mksysb]
* Resource Type                            mksysb
* Server of Resource                       [master]
* Location of Resource                     [/export/nim/mksysb/webmanual01.mksysb.0]
  Comments                                 []
  Source for Replication                   []
  -OR-
  System Backup Image Creation Options:
  CREATE system backup image?              no
  NIM CLIENT to backup                     []
  PREVIEW only?                            no
  IGNORE space requirements?               no
[MORE...10]
Prepare Bos install on client:
Run smit nim_tasks –> Select Install and Update Software and press Enter –> Then select Install the Base Operating System on Standalone Clients –> Select the target definition (i.e. the client which will be restored) –> Select the installation type: mksysb –> Select the mksysb resource webmanual01mksysb which you created in the last step –> Select the SPOT for the restore/installation.
The entire bos_inst smit panel should now be displayed:
Install the Base Operating System on Standalone Clients

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                                    [Entry Fields]
* Installation Target                                    webmanual01
* Installation TYPE                                      mksysb
* SPOT                                                   spot61_TL06_SP3
  LPP_SOURCE                                             []
  MKSYSB                                                 webmanual01mksysb
  BOSINST_DATA to use during installation                []
  IMAGE_DATA to use during installation                  []
  RESOLV_CONF to use for network configuration           []
  Customization SCRIPT to run after installation         []
  Customization FB Script to run at first reboot         []
  ACCEPT new license agreements?                         []
  Remain NIM client after install?                       [yes]
  PRESERVE NIM definitions for resources on this target? [yes]
  FORCE PUSH the installation?                           [no]
  Initiate reboot and installation now?                  [no]
  -OR-
  Set bootlist for installation at the next reboot?      [no]
  Additional BUNDLES to install                          []
  -OR-
  Additional FILESETS to install                         [] (bundles will be ignored)
On NIM nim01 server ,check for correct client setup and start:
a. Check the subserver bootps is active
# lssrc -t bootps
Service    Command             Description           Status
bootps     /usr/sbin/bootpd    bootpd /etc/bootptab  active
b. Check the subserver tftp is active
# lssrc -t tftp
Service    Command             Description  Status
tftp       /usr/sbin/tftpd     tftpd -n     active

c. Tail /etc/bootptab; you should see the client network info listed, per the example below:

webmanual01:bf=/tftpboot/webmanual01:ip=10.190.120.90:ht=ethernet:sa=10.190.120.120:sm=255.255.255.0:

d. showmount -e should list 2 filesystems being NFS-exported to the client (webmanual01):

/export/nim/mksysb/webmanual01.mksysb.0 webmanual01
/export/nim/scripts/webmanual01.script webmanual01
Boot the client webmanual01 into SMS mode using the HMC and select option 2 (Setup Remote IPL):

PowerPC Firmware
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options

Select the Ethernet adapter in the screen below. I will select 2, as I know it is configured:

PowerPC Firmware
-------------------------------------------------------------------------------
NIC Adapters
Device                             Location Code              Hardware Address
1. Port 1 - IBM 2 PORT 10/100/100  U787B.001.WEBDEV-P1-C1-T1  001a6491a656
2. Port 2 - IBM 2 PORT 10/100/100  U787B.001.WEBDEV-P1-C1-T2  001a6491a657

Now select 1 for IP Parameters:

PowerPC Firmware
-------------------------------------------------------------------------------
Network Parameters
Port 2 - IBM 2 PORT 10/100/1000 Base-TX PCI-X Adapter : U787B.001.WEBDEV-P1-C1-
1. IP Parameters
2. Adapter Configuration
3. Ping Test
4. Advanced Setup: BOOTP

Now fill in the parameters as below:

PowerPC Firmware
-------------------------------------------------------------------------------
IP Parameters
Port 2 - IBM 2 PORT 10/100/1000 Base-TX PCI-X Adapter : U787B.001.WEBDEV-P1-C1-
1. Client IP Address   [10.190.120.17]
2. Server IP Address   [10.190.120.24]
3. Gateway IP Address  [10.120.112.1]
4. Subnet Mask         [255.255.255.000]

Now go back to the previous menu and select Ping Test (option 3).
Make sure you get a ping success; otherwise the NIM install will fail. If the ping fails, check the address info entered, check that the adapter selected is correct, and check that the adapter is connected to the network. When the ping test is OK, proceed to the next step:
Type menu item number and press Enter or select Navigation key: 3

PowerPC Firmware
-------------------------------------------------------------------------------
Ping Test
Port 2 - IBM 2 PORT 10/100/1000 Base-TX PCI-X Adapter : U787B.001.WEBDEV-P1-C1
Speed, Duplex: auto,auto
Client IP Address: 10.190.120.17
Server IP Address: 10.190.120.24
Gateway IP Address: 10.190.112.1
Subnet Mask: 255.255.255.000
Protocol: Standard
Spanning Tree Enabled: 0
Connector Type:
1. Execute Ping Test
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen
X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key: 1

.---------------------.
|  Attempting Ping... |
`---------------------'

(lots of output)

.-----------------.
|  Ping Success.  |
`-----------------'
Go back to Main Menu and Select 5 for Select Boot Options
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
Now select 1 (Select Install/Boot Device):

Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup

Then select 6 for Network:

PowerPC Firmware
-------------------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen
X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key: 6

Then select the adapter identified in the previous step (option 2):

PowerPC Firmware
-------------------------------------------------------------------------------
Select Device
Device  Current   Device
Number  Position  Name
1.      -         Ethernet ( loc=U787B.001.WEBDEV-P1-C1-T1 )
2.      -         Ethernet ( loc=U787B.001.WEBDEV-P1-C1-T2 )
Now select Normal Mode Boot and Select 1 to Exit System Management Services
PowerPC Firmware
-------------------------------------------------------------------------------
Select Task
Ethernet ( loc=U787B.001.WEBDEV-P1-C1-T2 )
1. Information
2. Normal Mode Boot
3. Service Mode Boot
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen
X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key: 2

PowerPC Firmware
-------------------------------------------------------------------------------
Are you sure you want to exit System Management Services?
1. Yes
2. No
Type menu item number and press Enter or select Navigation key:

At this point, you should see the BOOTP packets increase until finally the system is able to load the minimal kernel and start the NIM install. Once loaded, you reach the AIX install menu below; from there I have selected to restore the OS on hdisk0.

Welcome to Base Operating System Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings
    2 Change/Show Installation Settings and Install
    3 Start Maintenance Mode for System Recovery
    4 Configure Network Disks (iSCSI)
    5 Select Storage Adapters
System Backup Installation and Settings

Either type 0 and press Enter to install with the current settings, or type the number of the setting you want to change and press Enter.

Setting:                                    Current Choice(s):
1 Disk(s) where you want to install ......  hdisk0
    Use Maps..............................  No
2 Shrink File Systems.....................  No
3 Import User Volume Groups...............  Yes
4 Recover Devices.........................  Yes

>>> 0 Install with the settings listed above.
Now the system will install; after the installation completes, you need to do the post-customization steps as per your environment.
NTP Client Configuration in AIX
Below are the steps for NTP client configuration in AIX .
1) Using the “ntpdate” command, verify that you have a server suitable for synchronization:
#ntpdate -d ip.address.of.ntpserver
2) Client configuration for NTP is defined in the configuration file /etc/ntp.conf:
#cat /etc/ntp.conf
server <NTP.SERVER.IP>
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
3) start the xntpd daemon
#startsrc -s xntpd
4) To make this permanent after a reboot, uncomment the following line in /etc/rc.tcpip:
vi /etc/rc.tcpip
start /usr/sbin/xntpd “$src_running”
5) check the service status
# lssrc -s xntpd
Subsystem Group PID Status
xntpd tcpip 3997772 active
6) Check the time synchronization with the server:
#ntpq -p
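A sketch of what synchronized ntpq -p output might look like (the server address and timing values are illustrative, not from the original post; the asterisk marks the peer currently selected for synchronization):

     remote           refid     st t when poll reach   delay   offset    disp
==========================================================================
*192.168.1.1     .GPS.           1 u   36   64  377     0.65    0.027    0.09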
What is VIOS ?
The Virtual I/O Server is an appliance that provides virtual storage and Shared Ethernet Adapter capability to client logical partitions. It allows a physical adapter with attached disks on the Virtual I/O Server partition to be shared by one or more partitions, enabling clients to consolidate and potentially minimize the number of physical adapters required.
VIOS is a special purpose partition that can serve I/O resources to other partitions. The type of LPAR is set at creation. The VIOS LPAR type allows for the creation of virtual server adapters, where a regular AIX/Linux LPAR does not.
• VIOS works by owning a physical resource and mapping that physical resource to virtual resources. Client LPARs can connect to the physical resource via these mappings.
• VIOS is not a hypervisor, nor is it required for sub-CPU virtualization. VIOS can be used to manage other partitions in some situations when an HMC is not used. This is called IVM (Integrated Virtualization Manager).
Depending on the configuration, VIOS may or may not be a single point of failure. When client partitions access I/O via a single path that is delivered by a single VIOS, then that VIOS represents a potential single point of failure for those client partitions.
• VIOS is typically configured in pairs along with various multipathing / failover methods in the client for virtual resources to prevent the VIOS from becoming a single point of failure.
• Active memory sharing and partition mobility require a VIOS partition. The VIOS partition acts as the controlling device for backing store for active memory sharing. All I/O to a partition capable of partition mobility must be handled by VIOS as well as the process of shipping memory between physical systems.
Veritas Cluster Cheat sheet
VCS is built on three components: LLT, GAB, and VCS itself. LLT handles kernel-to-kernel communication over the LAN heartbeat links, GAB handles shared disk communication and messaging between cluster members, and VCS handles the management of services.
Once cluster members can communicate via LLT and GAB, VCS is started.
In the VCS configuration, each Cluster contains systems, Service Groups, and Resources. A Service Group contains a list of systems belonging to that group, a list of systems on which the Group should be started, and Resources. A Resource is something controlled or monitored by VCS, like network interfaces, logical IPs, mount points, physical/logical disks, processes, files, etc. Each resource corresponds to a VCS agent which actually handles VCS control over the resource.
VCS configuration can be set either statically through a configuration file, dynamically through the CLI, or both. LLT and GAB configurations are primarily set through configuration files.
Configuration
VCS configuration is fairly simple. The three configurations to worry about are LLT, GAB, and VCS resources.
LLT
LLT configuration requires two files: /etc/llttab and /etc/llthosts.
llttab contains information on node-id, cluster membership, and heartbeat links. It should look like this:
# llttab -- low-latency transport configuration file
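# (the lines below are a minimal illustrative sketch, not from the original
# post; the node name, cluster ID, and link device names are assumptions
# and vary by platform)
set-node node01
set-cluster 100
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -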
GAB
GAB requires only one configuration file, /etc/gabtab. This file lists the number of nodes in the cluster and also, if there are any communication disks in the system, configuration for them. Ex:
/sbin/gabconfig -c -n2
tells GAB to start GAB with 2 hosts in the cluster.
LLT and GAB
VCS uses two components, LLT and GAB to share data over the private networks among systems.
These components provide the performance and reliability required by VCS.
LLT | LLT (Low Latency Transport) provides fast, kernel-to-kernel comms and monitors network connections. The system admin configures the LLT by creating a configuration file (llttab) that describes the systems in the cluster and private network links among them. The LLT runs in layer 2 of the network stack |
GAB | GAB (Group membership and Atomic Broadcast) provides the global message order required to maintain a synchronised state among the systems, and monitors disk comms such as that required by the VCS heartbeat utility. The system admin configures GAB driver by creating a configuration file ( gabtab). |
LLT and GAB files
/etc/llthosts | The file is a database, containing one entry per system, that links the LLT system ID with the hosts name. The file is identical on each server in the cluster. |
/etc/llttab | The file contains information that is derived during installation and is used by the utility lltconfig. |
/etc/gabtab | The file contains the information needed to configure the GAB driver. This file is used by the gabconfig utility. |
/etc/VRTSvcs/conf/config/main.cf | The VCS configuration file. The file contains the information that defines the cluster and its systems. |
Gabtab Entries
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf -i /dev/dsk/c1t2d0s2 -s 144 -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -s 1123
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -s 1124
/sbin/gabconfig -c -n2
gabdiskconf | -i Initialises the disk region; -s Start Block; -S Signature |
gabdiskhb (heartbeat disks) | -a Add a gab disk heartbeat resource; -s Start Block; -p Port; -S Signature |
gabconfig | -c Configure the driver for use; -n Number of systems in the cluster |
LLT and GAB Commands
Verifying that links are active for LLT | lltstat -n |
verbose output of the lltstat command | lltstat -nvv | more |
open ports for LLT | lltstat -p |
display the values of LLT configuration directives | lltstat -c |
lists information about each configured LLT link | lltstat -l |
List all MAC addresses in the cluster | lltconfig -a list |
stop the LLT running | lltconfig -U |
start the LLT | lltconfig -c |
verify that GAB is operating | gabconfig -a Note: port a indicates that GAB is communicating, port h indicates that VCS is started |
stop GAB running | gabconfig -U |
start the GAB | gabconfig -c -n <number of nodes> |
override the seed values in the gabtab file | gabconfig -c -x |
GAB Port Memberbership
List Membership | gabconfig -a |
Unregister port f | /opt/VRTS/bin/fsclustadm cfsdeinit |
Port Function | a gab driver b I/O fencing (designed to guarantee data integrity) d ODM (Oracle Disk Manager) f CFS (Cluster File System) h VCS (VERITAS Cluster Server: high availability daemon) o VCSMM driver (kernel module needed for Oracle and VCS interface) q QuickLog daemon v CVM (Cluster Volume Manager) w vxconfigd (module for cvm) |
Cluster daemons
High Availability Daemon | had |
Companion Daemon | hashadow |
Resource Agent daemon | <resource>Agent |
Web Console cluster management daemon | CmdServer |
Cluster Log Files
Log Directory | /var/VRTSvcs/log |
primary log file (engine log file) | /var/VRTSvcs/log/engine_A.log |
Starting and Stopping the cluster
"-stale" instructs the engine to treat the local config as stale "-force" instructs the engine to treat a stale config as a valid one |
hastart [-stale|-force] |
Bring the cluster into running mode from a stale state using the configuration file from a particular server | hasys -force <server_name> |
stop the cluster on the local server but leave the application/s running, do not failover the application/s | hastop -local |
stop cluster on local server but evacuate (failover) the application/s to another node within the cluster | hastop -local -evacuate |
stop the cluster on all nodes but leave the application/s running | hastop -all -force |
Cluster Status
display cluster summary | hastatus -summary |
continually monitor cluster | hastatus |
verify the cluster is operating | hasys -display |
Cluster Details
information about a cluster | haclus -display |
value for a specific cluster attribute | haclus -value <attribute> |
modify a cluster attribute | haclus -modify <attribute name> <new> |
Enable LinkMonitoring | haclus -enable LinkMonitoring |
Disable LinkMonitoring | haclus -disable LinkMonitoring |
Users
add a user | hauser -add <username> |
modify a user | hauser -update <username> |
delete a user | hauser -delete <username> |
display all users | hauser -display |
System Operations
add a system to the cluster | hasys -add <sys> |
delete a system from the cluster | hasys -delete <sys> |
Modify a system attributes | hasys -modify <sys> <modify options> |
list a system state | hasys -state |
Force a system to start | hasys -force |
Display the systems attributes | hasys -display [-sys] |
List all the systems in the cluster | hasys -list |
Change the load attribute of a system | hasys -load <system> <value> |
Display the value of a systems nodeid (/etc/llthosts) | hasys -nodeid |
Freeze a system (No offlining system, No groups onlining) | hasys -freeze [-persistent][-evacuate] Note: main.cf must be in write mode |
Unfreeze a system ( reenable groups and resource back online) | hasys -unfreeze [-persistent] Note: main.cf must be in write mode |
Dynamic Configuration
The VCS configuration must be in read/write mode in order to make changes. While it is in read/write mode the configuration is considered stale, and a .stale file is created in $VCS_CONF/conf/config. When the configuration is put back into read-only mode, the .stale file is removed.
Change configuration to read/write mode | haconf -makerw |
Change configuration to read-only mode | haconf -dump -makero |
Check what mode cluster is running in | haclus -display |grep -i 'readonly' 0 = write mode 1 = read only mode |
Check the configuration file | hacf -verify /etc/VRTSvcs/conf/config Note: you can point to any directory as long as it has main.cf and types.cf |
convert a main.cf file into cluster commands | hacf -cftocmd /etc/VRTSvcs/conf/config -dest /tmp |
convert a command file into a main.cf file | hacf -cmdtocf /tmp -dest /etc/VRTSvcs/conf/config |
Service Groups
add a service group | haconf -makerw hagrp -add groupw hagrp -modify groupw SystemList sun1 1 sun2 2 hagrp -autoenable groupw -sys sun1 haconf -dump -makero |
delete a service group | haconf -makerw hagrp -delete groupw haconf -dump -makero |
change a service group | haconf -makerw hagrp -modify groupw SystemList sun1 1 sun2 2 sun3 3 haconf -dump -makero Note: use the "hagrp -display <group>" to list attributes |
list the service groups | hagrp -list |
list the groups dependencies | hagrp -dep <group> |
list the parameters of a group | hagrp -display <group> |
display a service group's resource | hagrp -resources <group> |
display the current state of the service group | hagrp -state <group> |
clear a faulted non-persistent resource in a specific grp | hagrp -clear <group> [-sys] <host> <sys> |
Change the system list in a cluster | # remove the host hagrp -modify grp_zlnrssd SystemList -delete <hostname> # add the new host (don't forget to state its position) hagrp -modify grp_zlnrssd SystemList -add <hostname> 1 # update the autostart list hagrp -modify grp_zlnrssd AutoStartList <host> <host> |
Service Group Operations
Start a service group and bring its resources online | hagrp -online <group> -sys <sys> |
Stop a service group and takes its resources offline | hagrp -offline <group> -sys <sys> |
Switch a service group from system to another | hagrp -switch <group> to <sys> |
Enable all the resources in a group | hagrp -enableresources <group> |
Disable all the resources in a group | hagrp -disableresources <group> |
Freeze a service group (disable onlining and offlining) | hagrp -freeze <group> [-persistent] note: use the following to check "hagrp -display <group> | grep TFrozen" |
Unfreeze a service group (enable onlining and offlining) | hagrp -unfreeze <group> [-persistent] note: use the following to check "hagrp -display <group> | grep TFrozen" |
Enable a service group. Enabled groups can only be brought online | haconf -makerw hagrp -enable <group> [-sys] haconf -dump -makero Note to check run the following command "hagrp -display | grep Enabled" |
Disable a service group. Stop from bringing online | haconf -makerw hagrp -disable <group> [-sys] haconf -dump -makero Note to check run the following command "hagrp -display | grep Enabled" |
Flush a service group and enable corrective action. | hagrp -flush <group> -sys <system> |
Resources
add a resource | haconf -makerw hares -add appDG DiskGroup groupw hares -modify appDG Enabled 1 hares -modify appDG DiskGroup appdg hares -modify appDG StartVolumes 0 haconf -dump -makero |
delete a resource | haconf -makerw hares -delete <resource> haconf -dump -makero |
change a resource | haconf -makerw hares -modify appDG Enabled 1 haconf -dump -makero Note: list parameters "hares -display <resource>" |
change a resource attribute to be globally wide | hares -global <resource> <attribute> <value> |
change a resource attribute to be locally wide | hares -local <resource> <attribute> <value> |
list the parameters of a resource | hares -display <resource> |
list the resources | hares -list |
list the resource dependencies | hares -dep |
Resource Operations
Online a resource | hares -online <resource> [-sys] |
Offline a resource | hares -offline <resource> [-sys] |
display the state of a resource( offline, online, etc) | hares -state |
display the parameters of a resource | hares -display <resource> |
Offline a resource and propagate the command to its children | hares -offprop <resource> -sys <sys> |
Cause a resource agent to immediately monitor the resource | hares -probe <resource> -sys <sys> |
Clearing a resource (automatically initiates the onlining) | hares -clear <resource> [-sys] |
Resource Types
Add a resource type | hatype -add <type> |
Remove a resource type | hatype -delete <type> |
List all resource types | hatype -list |
Display a resource type | hatype -display <type> |
List a particular resource type | hatype -resources <type> |
Change a particular resource type's attributes | hatype -value <type> <attr> |
Resource Agents
add an agent | pkgadd -d . <agent package> |
remove an agent | pkgrm <agent package> |
change an agent | n/a |
list all ha agents | haagent -list |
Display an agent's run-time information, i.e. has it started, is it running? | haagent -display <agent_name> |
Display agents faults | haagent -display |grep Faults |
Resource Agent Operations
Start an agent | haagent -start <agent_name>[-sys] |
Stop an agent | haagent -stop <agent_name>[-sys] |
Show line numbers while monitoring log files using the tail -f command
You can combine the tail -f command with either the cat or awk command:
Method 1:
# tail -f syslog|cat -n
Method 2:
# tail -f syslog|awk '{print NR,$0}'
You should get output similar to the below:
1 Mar 4 15:21:07 oraserver local1:info Oracle Audit[1433636]:
2 Mar 4 15:21:07 oraserver local1:info Oracle Audit[4198698]:
3 Mar 4 15:21:07 oraserver local1:info Oracle Audit[5456076]:
4 Mar 4 15:21:07 oraserver local1:info Oracle Audit[6545472]:
5 Mar 4 15:21:09 oraserver local1:info Oracle Audit[5456078]:
6 Mar 4 15:21:09 oraserver local1:info Oracle Audit[1609878]:
7 Mar 4 15:21:09 oraserver local1:info Oracle Audit[5456078]:
8 Mar 4 15:21:17 oraserver auth|security:info sshd[6545478]:
9 Mar 4 15:21:17 oraserver auth|security:info sshd[5456086]:
10 Mar 4 15:21:46 oraserver daemon:info CCIRMTD[295062]:
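As a third option, the nl utility also numbers the lines of a stream, so the following should work as well (an assumption; note that nl numbers only non-empty lines by default):

# tail -f syslog | nl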