DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any data loss or damage caused by trying any of the commands or methods mentioned in this blog. Use the commands/methods/scripts at your own risk. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.


Restore or Install AIX with a mksysb image using NIM

A mksysb resource is a file containing an image of the root volume group of a machine (created with the AIX mksysb command). It is used to restore a machine after a crash, or to install a machine from scratch (also known as “cloning” a client). In our environment, we usually install AIX on new LPARs or VIO clients from an existing mksysb rather than from a fresh AIX CD. Installing the OS from an existing mksysb helps keep the customization the same across all LPARs.

Assumptions:

1. The NIM client (in our example, webmanual01) is defined on the NIM master (in our example, nim01).
2. The client’s hostname and IP address are listed in the /etc/hosts file.
3. The mksysb image has been transferred or restored from TSM and resides on the NIM master in nim01:/export/nim/mksysb. The size and sum command output match those of the source mksysb image.
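
Before continuing, these assumptions can be verified from the NIM master; a quick sketch using this example's names:

# lsnim -l webmanual01                              # client must be defined as a standalone machine
# grep webmanual01 /etc/hosts                       # hostname/IP resolution
# ls -l /export/nim/mksysb/webmanual01.mksysb.0     # image is in place
# sum /export/nim/mksysb/webmanual01.mksysb.0       # compare against the sum from the source system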

Create a mksysb resource:

Run smit nim_mkres –> You should see a “Resource type” listing displayed –> Scroll through the menu list and select mksysb.
Press Enter and you will see the menu below:

Define a Resource

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                            [Entry Fields]
* Resource Name                       [webmanual01mksysb]
* Resource Type                         mksysb
* Server of Resource                   [master]
* Location of Resource                 [/export/nim/mksysb/webmanual01.mksysb.0]
Comments                                []

Source for Replication                  []
-OR-
System Backup Image Creation Options:
CREATE system backup image?            no
NIM CLIENT to backup                     []
PREVIEW only?                           no
IGNORE space requirements?              no
[MORE...10]
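
If you prefer the command line, the same mksysb resource can be defined with a single nim command; a sketch using this example's names:

# nim -o define -t mksysb -a server=master \
      -a location=/export/nim/mksysb/webmanual01.mksysb.0 webmanual01mksysb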

Prepare Bos install on client:

Run smit nim_tasks –> Select Install and Update Software and press Enter –> Select Install the Base Operating System on Standalone Clients –> Select the target definition (i.e. the client which will be restored) –> Select the installation type mksysb –> Select the mksysb resource webmanual01mksysb which you created in the last step –> Select the SPOT for the restore/installation.

The entire bos_inst SMIT panel should now be displayed:
Install the Base Operating System on Standalone Clients

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                             [Entry Fields]
* Installation Target                    webmanual01
* Installation TYPE                       mksysb
* SPOT                                       spot61_TL06_SP3
LPP_SOURCE
MKSYSB                                   webmanual01mksysb

BOSINST_DATA to use during installation         []
IMAGE_DATA to use during installation           []
RESOLV_CONF to use for network configuration     []
Customization SCRIPT to run after installation        []
Customization FB Script to run at first reboot          []
ACCEPT new license agreements?                   []
Remain NIM client after install?                       [yes]
PRESERVE NIM definitions for resources on
this target?                                     [yes]
FORCE PUSH the installation?                           [no]
Initiate reboot and installation now?         [no]
-OR-
Set bootlist for installation at the next reboot?    [no]
Additional BUNDLES to install                     []
-OR-
Additional FILESETS to install                   []
(bundles will be ignored)
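
For reference, the panel above can also be driven from the command line. A hedged equivalent using this example's values (boot_client=no matches "Initiate reboot and installation now? no"):

# nim -o bos_inst -a source=mksysb \
      -a mksysb=webmanual01mksysb \
      -a spot=spot61_TL06_SP3 \
      -a accept_licenses=yes \
      -a boot_client=no webmanual01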


On the NIM server nim01, check for correct client setup and start:

a. Check that the bootps subserver is active:
#lssrc -t bootps
Service Command Description Status
bootps /usr/sbin/bootpd bootpd /etc/bootptab active

b. Check that the tftp subserver is active:
#lssrc -t tftp
Service Command Description Status
tftp /usr/sbin/tftpd tftpd -n active
c. Tail /etc/bootptab; you should see the client network info listed, as in the example below:
webmanual01:bf=/tftpboot/webmanual01:ip=10.190.120.90:ht=ethernet:sa=10.190.120.120:sm=255.255.255.0:
d. showmount -e should list 2 filesystems being NFS-exported to the client (webmanual01):
/export/nim/mksysb/webmanual01.mksysb.0 webmanual01
/export/nim/scripts/webmanual01.script webmanual01
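e. Optionally confirm that the installation has been enabled for the client; a sketch using this example's client name (the Cstate should read something like "BOS installation has been enabled"):
# lsnim -l webmanual01 | grep Cstate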
Boot client webmanual01 into SMS mode using the HMC and select option 2:
PowerPC Firmware
-------------------------------------------------------------------------------
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
Select the Ethernet adapter as on the screen below. I will select 2, as I know that port is the one configured:
PowerPC Firmware
-------------------------------------------------------------------------------
NIC Adapters
Device                               Location Code              Hardware Address
1. Port 1 - IBM 2 PORT 10/100/1000   U787B.001.WEBDEV-P1-C1-T1  001a6491a656
2. Port 2 - IBM 2 PORT 10/100/1000   U787B.001.WEBDEV-P1-C1-T2  001a6491a657
Now select 1 for IP Parameters, as follows:
PowerPC Firmware
-------------------------------------------------------------------------------
Network Parameters
Port 2 - IBM 2 PORT 10/100/1000 Base-TX PCI-X Adapter : U787B.001.WEBDEV-P1-C1-
1. IP Parameters
2. Adapter Configuration
3. Ping Test
4. Advanced Setup: BOOTP
Now fill in the parameters as below:
PowerPC Firmware
-------------------------------------------------------------------------------
IP Parameters
Port 2 - IBM 2 PORT 10/100/1000 Base-TX PCI-X Adapter : U787B.001.WEBDEV-P1-C1-
1. Client IP Address  [10.190.120.17]
2. Server IP Address  [10.190.120.24]
3. Gateway IP Address [10.190.112.1]
4. Subnet Mask        [255.255.255.000]
Now go back to the previous menu and select Ping Test (option 3).
Make sure you get a ping success; otherwise the NIM install will fail. On a ping failure, check the address info entered; check that the adapter is correct and that it is connected to the network. When the ping test is OK, proceed to the next step:
Type menu item number and press Enter or select Navigation key:3
PowerPC Firmware
-------------------------------------------------------------------------------

Ping Test
Port 2 - IBM 2 PORT 10/100/1000 Base-TX PCI-X Adapter : U787B.001.WEBDEV-P1-C1
Speed, Duplex: auto,auto
Client IP Address: 10.190.120.17
Server IP Address: 10.190.120.24
Gateway IP Address: 10.190.112.1
Subnet Mask: 255.255.255.000
Protocol: Standard
Spanning Tree Enabled: 0
Connector Type:

1. Execute Ping Test
-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:1

.---------------------.
| Attempting Ping... |
`---------------------'

(lots of output)

.-----------------.
| Ping Success. |
`-----------------'


Go back to the Main Menu and select 5 for Select Boot Options:
Main Menu
1. Select Language
2. Setup Remote IPL (Initial Program Load)
3. Change SCSI Settings
4. Select Console
5. Select Boot Options
Now select 1, Select Install/Boot Device, then select 6 for Network. Then select the adapter identified in the previous step (option 2):
Multiboot
1. Select Install/Boot Device
2. Configure Boot Device Order
3. Multiboot Startup
PowerPC Firmware
-------------------------------------------------------------------------------
Select Device Type
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices

-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:6
PowerPC Firmware

-------------------------------------------------------------------------------
Select Device
Device Current Device
Number Position Name
1. - Ethernet
( loc=U787B.001.WEBDEV-P1-C1-T1 )
2. - Ethernet
( loc=U787B.001.WEBDEV-P1-C1-T2 )

Now select Normal Mode Boot, and then select 1 (Yes) to exit System Management Services:
PowerPC Firmware
-------------------------------------------------------------------------------
Select Task

Ethernet
( loc=U787B.001.WEBDEV-P1-C1-T2 )

1. Information
2. Normal Mode Boot
3. Service Mode Boot

-------------------------------------------------------------------------------
Navigation keys:
M = return to Main Menu
ESC key = return to previous screen X = eXit System Management Services
-------------------------------------------------------------------------------
Type menu item number and press Enter or select Navigation key:2

PowerPC Firmware

-------------------------------------------------------------------------------
Are you sure you want to exit System Management Services?
1. Yes
2. No
At this point you should see the BOOTP packet count increase until the system loads the minimal kernel and starts the NIM install. Once loaded, you reach the AIX install menu shown below; from there I have selected to restore the OS onto hdisk0.
Type menu item number and press Enter or select Navigation key:

Welcome to Base Operating System Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings

2 Change/Show Installation Settings and Install

3 Start Maintenance Mode for System Recovery

4 Configure Network Disks (iSCSI)

5 Select Storage Adapters
System Backup Installation and Settings

Either type 0 and press Enter to install with the current settings, or type the
number of the setting you want to change and press Enter.

Setting: Current Choice(s):

1 Disk(s) where you want to install ...... hdisk0
Use Maps............................. No
2 Shrink File Systems..................... No
3 Import User Volume Groups............... Yes
4 Recover Devices......................... Yes

>>> 0 Install with the settings listed above.

Now the system will install. After the installation completes, perform the post-install customization steps required for your environment.

NTP Client Configuration in AIX

Below are the steps for NTP client configuration in AIX.

1) Using the ntpdate command, verify that a server is suitable for synchronization:
#ntpdate -d ip.address.of.ntpserver

2) Client configuration for NTP is defined in the /etc/ntp.conf configuration file:
#cat /etc/ntp.conf
server <NTP.SERVER.IP>
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace

3) Start the xntpd daemon:
#startsrc -s xntpd

4) To make the change permanent across reboots, uncomment the following line in /etc/rc.tcpip:
vi /etc/rc.tcpip
start /usr/sbin/xntpd “$src_running”
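
Alternatively, many AIX levels provide the chrctcp helper to make the same change without editing the file by hand; a sketch (verify the flags on your level):

# /usr/sbin/chrctcp -S -a xntpd    # -a uncomments the xntpd entry in /etc/rc.tcpip, -S also starts it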

5) Check the service status:
# lssrc -s xntpd
Subsystem Group PID Status
xntpd tcpip 3997772 active

6) Check the time sync with the server:
#ntpq -p

What is VIOS ?

The Virtual I/O Server is an appliance that provides virtual storage and Shared Ethernet Adapter capability to client logical partitions. It allows a physical adapter with attached disks on the Virtual I/O Server partition to be shared by one or more partitions, enabling clients to consolidate and potentially minimize the number of physical adapters required.


VIOS is a special purpose partition that can serve I/O resources to other partitions. The type of LPAR is set at creation. The VIOS LPAR type allows for the creation of virtual server adapters, where a regular AIX/Linux LPAR does not.

• VIOS works by owning a physical resource and mapping that physical resource to virtual resources. Client LPARs can connect to the physical resource via these mappings.

• VIOS is not a hypervisor, nor is it required for sub-CPU virtualization. VIOS can be used to manage other partitions in some situations when an HMC is not used. This is called IVM (Integrated Virtualization Manager).

• Depending on the configuration, VIOS may or may not be a single point of failure. When client partitions access I/O via a single path delivered by a single VIOS, that VIOS represents a potential single point of failure for those client partitions.

• VIOS is typically configured in pairs along with various multipathing / failover methods in the client for virtual resources to prevent the VIOS from becoming a single point of failure.

• Active memory sharing and partition mobility require a VIOS partition. The VIOS partition acts as the controlling device for the backing store for active memory sharing. All I/O to a partition capable of partition mobility must be handled by VIOS, as must the process of shipping memory between physical systems.
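
To make the mapping concept above concrete, these are the kinds of commands run from the VIOS (padmin) restricted shell; the device names (hdisk5, vhost0, ent0, ent2) and the virtual target device name are assumptions for this sketch:

$ mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi_aix1      # map a physical disk to a client's virtual SCSI server adapter
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 # bridge a physical adapter to a virtual Ethernet (Shared Ethernet Adapter)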


VIO server setups

VIO Server General setup
(diagram not available)

VIO server Detail
(diagram not available)

HA VIO server setup
(diagram not available)

VIO VLAN Setup
(diagram not available)
Veritas Cluster Cheat sheet

VCS is built on three components: LLT, GAB, and VCS itself. LLT handles kernel-to-kernel communication over the LAN heartbeat links, GAB handles shared-disk communication and messaging between cluster members, and VCS handles the management of services.

Once cluster members can communicate via LLT and GAB, VCS is started.
In the VCS configuration, each cluster contains systems, Service Groups, and Resources. A Service Group contains a list of systems belonging to that group, a list of systems on which the group should
be started, and Resources. A Resource is something controlled or monitored by VCS, such as network interfaces, logical IPs, mount points, physical/logical disks, processes, files, etc. Each resource
corresponds to a VCS agent which actually handles VCS control over the resource.

VCS configuration can be set either statically through a configuration file, dynamically through the CLI, or both. LLT and GAB configurations are primarily set through configuration files.

Configuration

VCS configuration is fairly simple. The three configurations to worry about are LLT, GAB, and VCS resources.

LLT

LLT configuration requires two files: /etc/llttab and /etc/llthosts.
llttab contains information on the node ID, cluster membership, and heartbeat links. It should look like this:
# llttab -- low-latency transport configuration file
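# (the original example was truncated; the lines below are a minimal sketch --
#  the node name, cluster number, and link device names are assumptions, and
#  the link syntax varies by platform)
set-node aix1
set-cluster 101
link en1 /dev/en:1 - ether - -
link en2 /dev/en:2 - ether - -

# The matching /etc/llthosts would then contain one line per node,
# in the form "<LLT node id> <hostname>":
0 aix1
1 aix2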



GAB

GAB requires only one configuration file, /etc/gabtab. This file lists the number of nodes in the cluster and also, if there are any communication disks in the system, their configuration. For example:

/sbin/gabconfig -c -n2

tells GAB to start with 2 nodes in the cluster.

LLT and GAB

VCS uses two components, LLT and GAB, to share data over the private networks among systems.
These components provide the performance and reliability required by VCS.

LLT LLT (Low Latency Transport) provides fast, kernel-to-kernel comms and monitors network connections. The system admin configures LLT by creating a configuration file (llttab) that describes the systems in the cluster and the private network links among them. LLT runs at layer 2 of the network stack.
GAB GAB (Group membership and Atomic Broadcast) provides the global message order required to maintain a synchronised state among the systems, and monitors disk comms such as those required by the VCS heartbeat utility. The system admin configures the GAB driver by creating a configuration file (gabtab).

LLT and GAB files

/etc/llthosts The file is a database, containing one entry per system, that links the LLT system ID with the host's name. The file is identical on each server in the cluster.
/etc/llttab The file contains information that is derived during installation and is used by the utility lltconfig.
/etc/gabtab The file contains the information needed to configure the GAB driver. This file is used by the gabconfig utility.
/etc/VRTSvcs/conf/config/main.cf The VCS configuration file. The file contains the information that defines the cluster and its systems.

Gabtab Entries

/sbin/gabdiskconf - i /dev/dsk/c1t2d0s2 -s 16 -S 1123
/sbin/gabdiskconf - i /dev/dsk/c1t2d0s2 -s 144 -S 1124
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 16 -p a -s 1123
/sbin/gabdiskhb -a /dev/dsk/c1t2d0s2 -s 144 -p h -s 1124
/sbin/gabconfig -c -n2

gabdiskconf
-i   Initialises the disk region
-s   Start Block
-S   Signature
gabdiskhb (heartbeat disks)
-a   Add a gab disk heartbeat resource
-s   Start Block
-p   Port
-S   Signature
gabconfig
-c   Configure the driver for use
-n   Number of systems in the cluster.

LLT and GAB Commands


Verifying that links are active for LLT lltstat -n
verbose output of the lltstat command lltstat -nvv | more
open ports for LLT lltstat -p
display the values of LLT configuration directives lltstat -c
lists information about each configured LLT link lltstat -l
List all MAC addresses in the cluster lltconfig -a list
stop the LLT running lltconfig -U
start the LLT lltconfig -c
verify that GAB is operating gabconfig -a
Note: port a indicates that GAB is communicating, port h indicates that VCS is started
stop GAB running gabconfig -U
start the GAB gabconfig -c -n <number of nodes>
override the seed values in the gabtab file gabconfig -c -x

GAB Port Membership


List Membership gabconfig -a
Unregister port f /opt/VRTS/bin/fsclustadm cfsdeinit
Port Function:
a   gab driver
b   I/O fencing (designed to guarantee data integrity)
d   ODM (Oracle Disk Manager)
f   CFS (Cluster File System)
h   VCS (VERITAS Cluster Server: high availability daemon)
o   VCSMM driver (kernel module needed for Oracle and VCS interface)
q   QuickLog daemon
v   CVM (Cluster Volume Manager)
w   vxconfigd (module for cvm)

Cluster daemons


High Availability Daemon had
Companion Daemon hashadow
Resource Agent daemon <resource>Agent
Web Console cluster management daemon CmdServer

Cluster Log Files

Log Directory /var/VRTSvcs/log
primary log file (engine log file) /var/VRTSvcs/log/engine_A.log

Starting and Stopping the cluster


"-stale" instructs the engine to treat the local config as stale
"-force" instructs the engine to treat a stale config as a valid one
hastart [-stale|-force]
Bring the cluster into running mode from a stale state using the configuration file from a particular server hasys -force <server_name>
stop the cluster on the local server but leave the application/s running, do not failover the application/s hastop -local
stop cluster on local server but evacuate (failover) the application/s to another node within the cluster hastop -local -evacuate
stop the cluster on all nodes but leave the application/s running hastop -all -force

Cluster Status


display cluster summary hastatus -summary
continually monitor cluster hastatus
verify the cluster is operating hasys -display

Cluster Details



information about a cluster haclus -display
value for a specific cluster attribute haclus -value <attribute>
modify a cluster attribute haclus -modify <attribute name> <new>
Enable LinkMonitoring haclus -enable LinkMonitoring
Disable LinkMonitoring haclus -disable LinkMonitoring

Users


add a user hauser -add <username>
modify a user hauser -update <username>
delete a user hauser -delete <username>
display all users hauser -display

System Operations


add a system to the cluster hasys -add <sys>
delete a system from the cluster hasys -delete <sys>
Modify a system attributes hasys -modify <sys> <modify options>
list a system state hasys -state
Force a system to start hasys -force
Display the systems attributes hasys -display [-sys]
List all the systems in the cluster hasys -list
Change the load attribute of a system hasys -load <system> <value>
Display the value of a systems nodeid (/etc/llthosts) hasys -nodeid
Freeze a system (No offlining system, No groups onlining) hasys -freeze [-persistent][-evacuate]
Note: main.cf must be in write mode
Unfreeze a system ( reenable groups and resource back online) hasys -unfreeze [-persistent]
Note: main.cf must be in write mode

Dynamic Configuration 

The VCS configuration must be in read/write mode in order to make changes. While the configuration is in read/write mode it is considered
stale, and a .stale file is created in $VCS_CONF/conf/config. When the configuration is put
back into read-only mode the .stale file is removed.
Change configuration to read/write mode haconf -makerw
Change configuration to read-only mode haconf -dump -makero
Check what mode cluster is running in haclus -display |grep -i 'readonly'
0 = write mode
1 = read only mode
Check the configuration file hacf -verify /etc/VRTSvcs/conf/config
Note: you can point to any directory as long as it has main.cf and types.cf
convert a main.cf file into cluster commands hacf -cftocmd /etc/VRTSvcs/conf/config -dest /tmp
convert a command file into a main.cf file hacf -cmdtocf /tmp -dest /etc/VRTSvcs/conf/config

Service Groups


add a service group haconf -makerw
  hagrp -add groupw
  hagrp -modify groupw SystemList sun1 1 sun2 2
  hagrp -autoenable groupw -sys sun1
haconf -dump -makero
delete a service group haconf -makerw
  hagrp -delete groupw
haconf -dump -makero
change a service group haconf -makerw
  hagrp -modify groupw SystemList sun1 1 sun2 2 sun3 3
haconf -dump -makero
Note: use the "hagrp -display <group>" to list attributes
list the service groups hagrp -list
list the groups dependencies hagrp -dep <group>
list the parameters of a group hagrp -display <group>
display a service group's resource hagrp -resources <group>
display the current state of the service group hagrp -state <group>
clear a faulted non-persistent resource in a specific grp hagrp -clear <group> [-sys] <host> <sys>
Change the system list in a cluster # remove the host
hagrp -modify grp_zlnrssd SystemList -delete <hostname>
# add the new host (don't forget to state its position)
hagrp -modify grp_zlnrssd SystemList -add <hostname> 1
# update the autostart list
hagrp -modify grp_zlnrssd AutoStartList <host> <host>

Service Group Operations


Start a service group and bring its resources online hagrp -online <group> -sys <sys>
Stop a service group and takes its resources offline hagrp -offline <group> -sys <sys>
Switch a service group from system to another hagrp -switch <group> to <sys>
Enable all the resources in a group hagrp -enableresources <group>
Disable all the resources in a group hagrp -disableresources <group>
Freeze a service group (disable onlining and offlining) hagrp -freeze <group> [-persistent]
note: use the following to check "hagrp -display <group> | grep TFrozen"
Unfreeze a service group (enable onlining and offlining) hagrp -unfreeze <group> [-persistent]
note: use the following to check "hagrp -display <group> | grep TFrozen"
Enable a service group. Enabled groups can only be brought online haconf -makerw
  hagrp -enable <group> [-sys]
haconf -dump -makero
Note to check run the following command "hagrp -display | grep Enabled"
Disable a service group. Stop from bringing online haconf -makerw
  hagrp -disable <group> [-sys]
haconf -dump -makero
Note to check run the following command "hagrp -display | grep Enabled"
Flush a service group and enable corrective action. hagrp -flush <group> -sys <system>

Resources


add a resource haconf -makerw
  hares -add appDG DiskGroup groupw
  hares -modify appDG Enabled 1
  hares -modify appDG DiskGroup appdg
  hares -modify appDG StartVolumes 0
haconf -dump -makero
delete a resource haconf -makerw
  hares -delete <resource>
haconf -dump -makero
change a resource haconf -makerw
  hares -modify appDG Enabled 1
haconf -dump -makero
Note: list parameters "hares -display <resource>"
change a resource attribute to be globally wide hares -global <resource> <attribute> <value>
change a resource attribute to be locally wide hares -local <resource> <attribute> <value>
list the parameters of a resource hares -display <resource>
list the resources hares -list  
list the resource dependencies hares -dep

Resource Operations


Online a resource hares -online <resource> [-sys]
Offline a resource hares -offline <resource> [-sys]
display the state of a resource( offline, online, etc) hares -state
display the parameters of a resource hares -display <resource>
Offline a resource and propagate the command to its children hares -offprop <resource> -sys <sys>
Cause a resource agent to immediately monitor the resource hares -probe <resource> -sys <sys>
Clearing a resource (automatically initiates the onlining) hares -clear <resource> [-sys]

Resource Types

Add a resource type hatype -add <type>
Remove a resource type hatype -delete <type>
List all resource types hatype -list
Display a resource type hatype -display <type>
List a particular resource type hatype -resources <type>
Display the value of a resource type attribute hatype -value <type> <attr>

Resource Agents


add an agent pkgadd -d . <agent package>
remove an agent pkgrm <agent package>
change an agent n/a
list all ha agents haagent -list
Display an agent's run-time information, i.e. has it started, is it running? haagent -display <agent_name>
Display agent faults haagent -display | grep Faults

Resource Agent Operations


Start an agent haagent -start <agent_name> [-sys]
Stop an agent haagent -stop <agent_name> [-sys]

Show the line number while monitoring the log files using tail -f command

You can combine tail -f with either the cat or awk command:

Method 1:

# tail -f syslog|cat -n

Method 2: 

# tail -f syslog|awk '{print NR,$0}'
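
Method 3, assuming the POSIX nl utility is available on your system:

# tail -f syslog | nl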

You should get output similar to the one below:

 1 Mar 4 15:21:07 oraserver local1:info Oracle Audit[1433636]: 
 2 Mar 4 15:21:07 oraserver local1:info Oracle Audit[4198698]:
 3 Mar 4 15:21:07 oraserver local1:info Oracle Audit[5456076]: 
 4 Mar 4 15:21:07 oraserver local1:info Oracle Audit[6545472]: 
 5 Mar 4 15:21:09 oraserver local1:info Oracle Audit[5456078]: 
 6 Mar 4 15:21:09 oraserver local1:info Oracle Audit[1609878]: 
 7 Mar 4 15:21:09 oraserver local1:info Oracle Audit[5456078]: 
 8 Mar 4 15:21:17 oraserver auth|security:info sshd[6545478]: 
 9 Mar 4 15:21:17 oraserver auth|security:info sshd[5456086]: 
 10 Mar 4 15:21:46 oraserver daemon:info CCIRMTD[295062]: 
This perfpmr package contains a number of performance tools and some instructions.  Some of these tools are products available with AIX.  Some of the tools are prototype internal tools (setpri, setsched, iomon, getevars, pmcount, lsc, fcstat2, memfill, getdate, perfstat_trigger) and are not generally available to customers. 

All results generated by the Program are estimates and averages based on certain assumptions and conditions. Each environment has its own unique set of requirements that no tool can entirely account for. No representation is made that the results will be accurate or achieved in any given IBM installation environment. The result is based on specific configurations and run time environments. Customer results will vary. Any configuration recommended by the Program should be tested and verified. Any code provided is for illustrative purposes only.

AIX 7.1 PERFORMANCE DATA COLLECTION PROCESS

  Note: The act of collecting performance data will add load on the system. HACMP users may want to extend the Dead Man Switch timeout or shut down HACMP prior to collecting perfpmr data to avoid accidental failovers.

 I.   INTRODUCTION


      This package contains a set of tools and instructions for collecting the data needed to analyze an AIX performance problem.  This tool set runs on AIX V7.1.

 II.  HOW TO OBTAIN AND INSTALL THE TOOLS ON AN IBM RISC SYSTEM/6000.

      A. OBTAINING THE PACKAGE

           The package will be distributed as a compressed "tar" file available electronically.

            From the internet:
            ==================
            'ftp://ftp.software.ibm.com/aix/tools/perftools/perfpmr'


      B. INSTALLING THE PACKAGE

           The following assumes the tar file is in /tmp and named  'perf71.tar.Z'.

           a. login as root or use the 'su' command to obtain root  authority

           b. create perf71 directory and move to that directory (this example assumes the directory built  is under /tmp)

              # mkdir /tmp/perf71
              # cd /tmp/perf71

           c. extract the shell scripts out of the compressed tar file:

              # zcat /tmp/perf71.tar.Z | tar -xvf -

 III. HOW TO COLLECT DATA FOR AN AIX PERFORMANCE PROBLEM


      A. Purpose:

           1. This section describes the set of steps that should be followed to collect performance data.

           2. The goal is to collect a good base of information that can be used by AIX technical support specialists or development lab programmers to get started in analyzing and solving the performance problem. This process may need to be repeated after analysis of the initial set of data is completed and/or AIX personnel may want to dial-in to the customer's machine if appropriate for  additional data collection/analysis.

      B. Collection of the Performance Data on Your System

           1. Detailed System Performance Data:

              Detailed performance data is required to analyze and solve a performance problem. Follow these steps to  invoke the supplied shell scripts:

              NOTE:  You must have root user authority when executing these shell scripts.

                a. Create a data collection directory and 'cd' into this  directory.
                   Allow at least 45MB*#of_logicalcpus of unused space in whatever file system is used.


                   *IMPORTANT* - DO NOT COLLECT DATA IN A REMOTELY MOUNTED FILESYSTEM SINCE IPTRACE MAY HANG

                   For example using /tmp filesystem:
                       # mkdir /tmp/perfdata
                       # cd /tmp/perfdata

                b. HACMP users:
                     It is generally recommended that the HACMP deadman switch interval be lengthened while performance data is being collected.

                c. Collect our 'standard' PERF71 data for 600 seconds (600 seconds = 10 minutes).  Start the data collection while the problem is already occurring with the command:

                     /directory_where_perfpmrscripts_are_installed/perfpmr.sh 600

                   The perfpmr.sh shell provided will:
                   - immediately collect a 5 second trace (trace.sh 5)
                   - collect 600 seconds of general system performance data (monitor.sh 600).
                   - collect hardware and software configuration information (config.sh).

                   In addition, if it finds the following programs available  in the current execution path, it will:
                   - collect 10 seconds of iptrace information (iptrace.sh 10)
                   - collect 10 seconds of filemon information (filemon.sh 10)
                   - collect 60 seconds of tprof information (tprof.sh 60)

                    NOTE:  Since a performance problem may mask other problems, it is not uncommon to fix one issue and then collect more data to work on another issue.

                d. Answer the questions in the text file called 'PROBLEM.INFO' in the data collection directory created above.  This background information about your problem helps us better understand what is going wrong.

 IV. HOW TO SEND THE DATA TO IBM.


      A. Combine all the collected data into a single binary 'tar' file and compress it:

           Put the completed PROBLEM.INFO in the same directory where the data was collected (ie. /tmp/perfdata in the following example).  Change to the parent directory, and use the tar command as follows:

       Either use:
           # cd /tmp; perfpmr.sh -o perfdata -z pmr#.pax.gz
       or
           # cd /tmp/perfdata   (or whatever directory was used to collect the data)
           # cd ..
           # pax -xpax -vw perfdata | gzip -c > pmr#.pax.gz


      B. Submission of testcase to IBM:

           Internet 'ftp' access:
           ----------------------
             The quickest method to get the data analyzed is for the customer to ftp the data directly to IBM. Data placed on the server listed below cannot be accessed by unauthorized personnel.  Please contact your IBM representative for the PMR#, BRANCH#, and COUNTRY#.  IBM uses all 3 to uniquely associate your data with your problem tracking record.

               'ftp testcase.software.ibm.com'
                Userid:  anonymous
                password:  your_internet_email_address
                           (ie. smith@austin.ibm.com)
               'cd toibm/aix'
               'bin'
               'put  PMR#.BRANCH#.COUNTRY#.pax.gz'
                  (ie. '16443.060.000.pax.gz')
               'quit'

            If the transfer fails with an error, it's possible that a file already exists by the same name on the ftp server. In this case, add something to the name of the file to differentiate it from the file already on the ftp site (ex. 16443.060.000.july18.pax.gz).

             Notify your IBM customer representative you have submitted the data.  They will then update the defect report to indicate the data is available for analysis.
 

Cloning a rootvg using alternate disk installation


Using this scenario, you can clone AIX® running on rootvg to an alternate disk on the same system, install a user-defined software bundle, and run a user-defined script to customize the AIX image on the alternate disk.
The information in this how-to scenario was tested using specific versions of AIX. The results you obtain might vary significantly depending on your version and level of AIX.
Because the alternate disk installation process involves cloning an existing rootvg to a target alternate disk, the target alternate disk must not be already assigned to a volume group.

In this scenario you will do the following:
  1. Prepare for the alternate disk installation
  2. Perform the alternate disk installation and customization
  3. Boot off the alternate disk
  4. Verify the operation

Step 1. Prepare for the alternate disk installation

  1. Check the status of physical disks on your system. Type:
    # lspv
    Output similar to the following displays:
    hdisk0         0009710fa9c79877    rootvg    active
    hdisk1         0009710f0b90db93    None
    We can use hdisk1 as our alternate disk because no volume group is assigned to this physical disk.
  2. Check to see if the alt_disk_copy fileset has been installed by running the following:
    # lslpp -L bos.alt_disk_install.rte
    Output similar to the following displays if the alt_disk_copy fileset is not installed:
    lslpp: 0504-132  Fileset bos.alt_disk_install.rte not installed.
  3. Using volume 1 of the AIX installation media, install the alt_disk_copy fileset by running the following:
    # geninstall -d/dev/cd0 bos.alt_disk_install.rte
    Output similar to the following displays:
    +-----------------------------------------------------------------------------+
                                    Summaries:                                     
    +-----------------------------------------------------------------------------+
                                                                                   
    Installation Summary                                                           
    --------------------                                                           
    Name                        Level           Part        Event       Result     
    -------------------------------------------------------------------------------
    bos.alt_disk_install.rte    5.3.0.0         USR         APPLY       SUCCESS    
  4. Create a user-defined bundle called /usr/sys/inst.data/user_bundles/MyBundle.bnd that contains the following filesets:
    I:bos.content_list
    I:bos.games

  5. Create the /home/scripts directory:
    mkdir /home/scripts
  6. Create a user-defined customization script called AddUsers.sh in the /home/scripts directory:
    touch /home/scripts/AddUsers.sh
    chmod 755 /home/scripts/AddUsers.sh
  7. Edit /home/scripts/AddUsers.sh to contain the following lines:
    mkuser johndoe
    touch /home/johndoe/abc.txt
    touch /home/johndoe/xyz.txt

Step 2. Perform the alternate disk installation and customization

  1. To clone the rootvg to an alternate disk, type the following at the command line to open the SMIT menu:
    # smit alt_clone
  2. Select hdisk1 in the Target Disk to Install field.
  3. Select the MyBundle bundle in the Bundle to Install field.
  4. Insert volume one of the installation media.
  5. Type /dev/cd0 in the Directory or Device with images field.
  6. Type /home/scripts/AddUsers.sh in the Customization script field.
  7. Press Enter to start the alternate disk installation.
  8. Check that the alternate disk was created, by running the following:
    # lspv
    Output similar to the following displays:
    hdisk0         0009710fa9c79877    rootvg             
    hdisk1         0009710f0b90db93    altinst_rootvg     

Step 3. Boot from the alternate disk

  1. By default, the alternate-disk-installation process changes the boot list to the alternate disk. To check this run the following:
    # bootlist -m normal -o   
    Output similar to the following displays:
    hdisk1
  2. Reboot the system. Type:
    # shutdown -r
    The system boots from the boot image on the alternate disk (hdisk1).

Step 4. Verify the operation

  1. When the system reboots, it will be running off the alternate disk. To check this, type the following:
    # lspv
    Output similar to the following displays:
    hdisk0         0009710fa9c79877    old_rootvg  
    hdisk1         0009710f0b90db93    rootvg      
    
  2. Verify that the customization script ran correctly, by typing the following:
    # find /home/johndoe -print       
    Output similar to the following displays:
    /home/johndoe                     
    /home/johndoe/.profile            
    /home/johndoe/abc.txt             
    /home/johndoe/xyz.txt             
  3. Verify that the contents of your software bundle were installed, by typing the following:
    # lslpp -Lb MyBundle                                         
    Output similar to the following displays:
      Fileset                      Level  State  Description                      
      ----------------------------------------------------------------------------
      bos.content_list           5.3.0.0    C    AIX Release Content List         
      bos.games                  5.3.0.0    C    Games 
     
     

alt_disk in AIX

alt_disk_copy:

Required filesets:
    bos.alt_disk_install.boot_images
    bos.alt_disk_install.rte
    bos.msg.en_US.alt_disk_install.rte

alt_disk_copy -d <hdisk to clone rootvg>                 this will clone the rootvg to the specified disk
alt_disk_copy -e /etc/exclude.rootvg -d <hdisk>      this will use the exclude list during the cloning
alt_disk_copy -T -d <hdisk>                                      it will convert jfs to jfs2 on the new target disk (from 6.1 TL4 only)
alt_rootvg_op -X <cloned rootvg to destroy>          this will destroy the cloned rootvg (alt_rootvg_op -X altinst_rootvg)
alt_rootvg_op -W -d <hdisk>                                   this will wake up a disk (cloned filesystems will be mounted with prefix /alt_)
alt_rootvg_op -S -t <hdisk>                                      this will put cloned rootvg to sleep (before that it will do a bosboot)
                                                     (-S: put to sleep earlier "waked up" vg, -t: rebuilds the alt. bootimage before sleep)
alt_rootvg_op -v <new cloned rootvg name> -d <hdisk> this will rename the given cloned rootvg name
                                                     (after wake-up and sleep the cloned vg name will be changed, in this case it is useful)

alt_disk_mksysb -m /mnt/aix1mksysb -d hdisk1 -k      this will restore the given mksysb (aix1mksysb) to hdisk1 (-k: keep device configuration)

/var/adm/ras/alt_disk_inst.log                       alt_disk log file
----------------------------------

alt_disk_copy: (copy hdisk0 to hdisk1)
LV names can't be longer than 11 characters (because of the alt_ prefix).
Do not remove the disk that was used during boot (otherwise there will be problems with bosboot).

-unmirrorvg rootvg hdisk1   
-reducevg rootvg hdisk1       
-bosboot -ad hdisk0       
-bootlist -m normal hdisk0   
-alt_disk_copy -d hdisk1       
-bootlist -m normal hdisk0

after booting from hdisk1:
root@aix11: / # lspv
hdisk0          00cf5d8fe9c88a34                    old_rootvg
hdisk1          00cf5d8fadcaa9a9                    rootvg          active


booting from the old disk:
root@aix11: / # lspv
hdisk0          00cf5d8fe9c88a34                    rootvg          active
hdisk1          00cf5d8fadcaa9a9                    altinst_rootvg


removing the new image (keeping the old one):
-alt_rootvg_op -X altinst_rootvg         <--removing the new image from hdisk1
-chpv -c hdisk1                          <--clear that pv what contained the removed image
-extendvg -f rootvg hdisk1               <--extend the currently used rootvg with the cleared disk (hdisk1)
-mirrorvg -S rootvg hdisk1               <--mirroring rootvg to hdisk1 (checking: lsvg rootvg | grep STALE)(-S: -background sync)
-bosboot -ad hdisk0; bosboot -ad hdisk1  <--recreate the bootimage
-bootlist -m normal hdisk0 hdisk1        <--setup correct bootlist (checking: bootlist -m normal -o)

------------------------------------

Changing lv names (to avoid 11 characters problem):
1. # mkszfile                            <--creates the image.data file of rootvg
2. # vi image.data                       <--edit image.data
3. # alt_disk_copy -d hdiskX -i /image.data -B    <--pass the image.data file to alt_disk_copy
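
The names to shorten live in the lv_data stanzas of the generated image.data file; the fragment below is a sketch of what to look for (the value shown is an assumption):

lv_data:
        VOLUME_GROUP= rootvg
        LOGICAL_VOLUME= averylonglvname    <--shorten any name longer than 11 characters here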


--------------------------------------

ONLINE UPDATE WITH ALT_DISK_INSTALL:

unmirrorvg rootvg hdisk1                 <--removing mirror ( check: lsvg -p rootvg)
chpv -c hdisk1                           <--clears boot record
reducevg rootvg hdisk1                   <--free up hdisk1
bosboot -ad hdisk0                       <--creates boot record
bootlist -m normal hdisk0                <--sets boot list (check: bootlist -m normal -o)

installp -s                              <--check if anything can be committed
copy new bos.rte.install                 <--will be needed for checking if update will be successful (cd to this directory)
install_all_updates -pYd .               <--preview of new bos.rte.install
install_all_updates -Yd .                <--installs new bos.rte.install

oslevel -sg 5300-09-01-0847              <--shows which fileset is greater than current service pack, it will show bos.rte.install
instfix -i | grep SP                            <--it will show where to update (53-09-020849_SP)
oslevel -sl 53-09-020849                 <--shows which filesets should be update

cd /mnt/5300-09-SP2                      <--go to servicepack dir
install_all_updates -pYd .               <--preview check

alt_disk_copy -d hdisk1 -b update_all -l /mnt/5300-09-SP2     <--this will do the update

shutdown -Fr                                      <--new OS will boot up
smitty commit                                    <--if needed

alt_rootvg_op -X old_rootvg               <--removes cloned old OS
chpv -c hdisk0                                     <--clears bootrecord
extendvg -f rootvg hdisk0                   <--add hdisk0 to rootvg
mirrorvg -S rootvg hdisk0                   <--mirror rootvg (-S: in background)
bosboot -a                                              <--creates boot record
bootlist -m normal hdisk0 hdisk1         <--set bootlist

AIX as DNS client - Tips & Tricks

nslookup is the command used to query DNS servers. Normally nslookup looks up the hostname for an IP address, or the IP address for a hostname.

DNS server IP addresses are defined in /etc/resolv.conf on AIX servers.

Here is an example of /etc/resolv.conf

nameserver 192.168.2.12
nameserver 192.168.2.13
nameserver 192.168.2.14
search india.cope.com usa.cope.com uk.cope.com

Let us look at a few tips and tricks for using nslookup.

1. To look up an address non-interactively:

$ nslookup webserv
Server:  dnserver1.india.cope.com
Address:  192.168.2.12

Name:    webserv.india.cope.com
Address:  192.168.2.211
$

2. To look up an address interactively:

$nslookup
Default Server:  dnserver1.india.cope.com
Address:  192.168.2.12

> websrv
Server:  dnserver1.india.cope.com
Address:  192.168.2.12

Name:    webserv.india.cope.com
Address:  192.168.2.211

> exit
$

3. To look up a hostname non-interactively:

$ nslookup 192.168.2.211
Server:  dnserver1.india.cope.com
Address:  192.168.2.12

Name:    webserv.india.cope.com
Address:  192.168.2.211
$

4. To look up a hostname interactively:

$ nslookup
Default Server:  dnserver1.india.cope.com
Address:  192.168.2.12

> 192.168.2.211
Server:  dnserver1.india.cope.com
Address:  192.168.2.12

Name:    webserv.india.cope.com
Address:  192.168.2.211

> exit
$

5. To look up MX data:

$ nslookup
Default Server:  dnserver1.india.cope.com
Address:  192.168.2.12

> set q=mx
> rajs
Server:  dnserver1.india.cope.com
Address:  192.168.2.12

Name:    rajs.india.cope.in
Address:  0.0.0.0
> exit
$

6. How do you query a specific DNS server for an address?

We can do this in both interactive and non-interactive ways.
The example below queries for the IP address of the host websrv using the DNS server 192.168.2.15, which is not specified in the /etc/resolv.conf file.

Interactive Way:

$nslookup
Default Server:  dnserver1.india.cope.com
Address:  192.168.2.12

> server 192.168.2.15
Default Server:  dnserver4.india.cope.com
Address:  192.168.2.15

> websrv
Server:  dnserver4.india.cope.com
Address:  192.168.2.15

Name:    webserv.india.cope.com
Address:  192.168.2.211

> exit
$

Non-Interactive Way:

$ nslookup websrv 192.168.2.15

Server:  dnserver4.india.cope.com
Address:  192.168.2.15

Name:    webserv.india.cope.com
Address:  192.168.2.211

7. What is the difference between authoritative and non-authoritative answers?

When you query for something for the first time, the answer comes from the DNS server; it is displayed and also stored in the local cache. This is called an authoritative answer, i.e. the answer is obtained directly from the DNS server. The answer is kept in the cache for a certain time.

But when you make the same query a second time, the answer comes from the cache instead of the DNS server. This is called a non-authoritative answer.

8. How will you specify an alternate DNS server when using nslookup?

For using 192.168.2.24 as an alternate DNS server,
$ nslookup - 192.168.2.24

This will query the alternate server instead of the DNS servers configured in /etc/resolv.conf file.

9. How will you query an MX record on an alternate server?

$ nslookup -type=mx bashi.usa.cope.com 192.168.2.24

10. How will you debug while querying a DNS server?


$ nslookup
Default Server:  dnserver1.india.cope.com
Address:  192.168.2.12
> set debug
> webserv

Server:  dnserver1.india.cope.com
Address:  192.168.2.12

;; res_nmkquery(QUERY, websrv.india.cope.com, IN, A)
------------
Got answer:
    HEADER:
        opcode = QUERY, id = 54305, rcode = NOERROR
        header flags:  response, authoritive answer, want recursion, recursion available
        questions = 1,  answers = 1,  authority records = 0,  additional = 0

    QUESTIONS:
        websrv.india.cope.com, type = A, class = IN
    ANSWERS:
    ->  webserv.india.cope.com
        internet address = 192.168.2.211
        ttl = 3600 (1H)

------------
Name:    webserv.india.cope.com
Address:  192.168.2.211

11. Each DNS packet is composed of 5 sections, as given below:
  1. Header Section
  2. Question Section
  3. Answer Section
  4. Authority Section
  5. Additional Section

12. You can use options with the nslookup command via the 'set' sub-command.
Here are few options ...

port=53          By default, the DNS service uses port 53. If you have a DNS service on a different port, you can use the port option to specify the port number.

timeout=10    It is used to specify the timeout value. If the name server doesn't respond in 10 seconds, nslookup will send the query again.

debug              To turn on debug mode

nodebug         To turn off debug mode

querytype=A  By default, nslookup looks for an A record. If you type an IP address, it will look for a PTR record. You can change the querytype to MX or SOA.

13. How will you come out of the interactive nslookup session?

You can use the exit command or type ^D (Ctrl+D) to come out of the session.

Paging space commands in AIX


Below are the commands for managing paging space in AIX.

To monitor paging space utilization:

lsps -a or lsps -s


To create an additional paging space:

mkps -s <#LPs> <vgname> <disk> or smit mkps


To activate a paging space:

swapon <device file name>


To deactivate a paging space:

swapoff <device file name>


To remove a paging space (must be inactive):

rmps <paging space name> or smit rmps


To increase the size of a paging space:

chps -s <#LPs> <paging space name> or smit chps


To decrease the size of a paging space:

chps -d <#LPs> <paging space name> or smit chps


Activate paging space at restart:

chps -a y <paging space name>
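
A quick worked example tying these commands together (the disk, volume group, and paging space names are assumptions):

# mkps -a -n -s 8 rootvg hdisk1    # create an 8-LP paging space, active now (-n) and at restart (-a)
# lsps -a                          # note the generated name, e.g. paging00
# chps -s 4 paging00               # grow it by 4 LPs
# swapoff /dev/paging00            # deactivate it before removal
# rmps paging00                    # remove it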

Prevent and Detect Orphaned Mksysb NIM resources

In order to have a working mksysb resource in a NIM environment, you need two items: the NIM mksysb resource, and the mksysb file in the filesystem that it points to.
The NIM mksysb resource is stored in the ODM and has a "location" attribute that points to the file in the filesystem.

However, if something (or someone) deletes the mksysb file from the filesystem but doesn't delete the NIM mksysb resource, you are left with an orphaned mksysb NIM resource. The mksysb resource will still show up in NIM and still appear to be usable; however, any operation that tries to use it will fail, since its backing mksysb file isn't present.


How to prevent orphaned NIM mksysb resources:

The best way to prevent an orphaned NIM mksysb resource is to never delete mksysb files from the filesystem using "rm".   Instead, if you no longer need a NIM mksysb, use the "nim" command to delete it and specify that the backing mksysb file should be deleted as well.   This can be done with a command such as this:
nim -o remove -a rm_image=yes aix3_mksysb
Substitute "aix3_mksysb" for the name of the mksysb that you want to delete.  The "-a rm_image=yes" tells NIM to not only delete the NIM resource from the ODM, but to also delete the backing mksysb file from the filesystem.

Detect orphaned NIM mksysb resources


Here is a handy one-line script that will check all your NIM mksysb resources and tell you if you have any orphaned mksysb resources that don't have a backing file present:

for mksysb in `lsnim -t mksysb | awk '{print $1}'`; do printf "%-20s " $mksysb; location=`lsnim -l $mksysb | grep location | awk '{print $3}'`; [ -e "$location" ] && echo " OK         $location" || echo " Not Found  $location"; done

The output looks like this:
aix1_mksysb           OK         /tmp/aix1_mksysb
aix2_mksysb           OK         /tmp/aix2_mksysb
aix3_mksysb           OK         /tmp/aix3_mksysb
aix4_mksysb           OK         /tmp/aix4_mksysb
aix5_mksysb           OK         /tmp/aix5_mksysb
aix6_mksysb           Not Found  /tmp/aix6_mksysb
aix7_mksysb           OK         /tmp/aix7_mksysb
aix8_mksysb           OK         /tmp/aix8_mksysb

Based on the output we can clearly see that all of the mksysb resources are good except for aix6_mksysb, which doesn't have a backing mksysb file present in the filesystem.
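
If you find an orphaned resource like aix6_mksysb above, clean it up by removing just the NIM definition; there is no backing file left, so rm_image isn't needed:

nim -o remove aix6_mksysb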

Using savevg on AIX to save time creating filesystems, LV's and volume groups

If you ever need to build multiple servers that will all have the same volume groups, logical volumes, and filesystems you can use "savevg" and "restvg" to save yourself a bunch of time and duplicated work. 
This also works if you are ever asked to build a new server that should be setup with the same VG/LV/FS's as an older server. 
You start by setting up one of the servers with the volume groups, logical volumes, and filesystems that you will need.  Next, use the "savevg -r" command to back up just the volume group/LV/filesystem structure information.  With the "-r" flag it doesn't back up any data in the filesystems, which makes it quick and the backup file very small. 
In this example we want to duplicate the "appvg" structure on to another server:
# lsvg -l appvg
appvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv02             jfs2log    1       1       1    open/syncd    N/A
applv05             jfs2       52      52      4    open/syncd    /app5
fslv05              jfs2       25      50      2    open/syncd    /app2
loglv03             jfslog     1       1       1    closed/syncd  N/A
#
# savevg -r -f /appvg.savevg appvg

Creating information file for volume group appvg.................................................................

Backing up user Volume Group information files only.
Creating list of files to back up.
Backing up 6 files

6 of 6 files (100%)
0512-038 savevg: Backup Completed Successfully.

 
 
Next you copy the "/appvg.savevg" file on to all the servers that you want to setup the VG/LV/FS's on.  You can use something like "scp" or "sftp" or another protocol to transfer the file. 
On the other servers, you run "restvg" to restore the VG/LV/FS structures from the file:
 
# restvg -r -f /appvg.savevg hdisk1 hdisk2

Will create the Volume Group:   appvg
Target Disks:   hdisk1 hdisk2
Allocation Policy:
        Shrink Filesystems:     no
        Preserve Physical Partitions for each Logical Volume:   no

Enter y to continue: y
0516-1254 /usr/sbin/mkvg: Changing the PVID in the ODM.
appvg
loglv02
applv05
fslv05
loglv03
#
# lsvg -l appvg
appvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv02             jfs2log    1       1       1    open/syncd    N/A
applv05             jfs2       52      52      2    open/syncd    /app5
fslv05              jfs2       25      50      2    open/syncd    /app2
loglv03             jfslog     1       1       1    closed/syncd  N/A
 
As you can see, the "restvg -r" command restored the volume group, logical volumes, and filesystems, and even mounted the filesystems for you!  Note that the fslv05 LV was mirrored on the original server, and when restored it is still mirrored.   The original volume group on the source server was on 4 hdisks, but the destination server had only 2 hdisks.   This isn't a problem; restvg can handle it as long as you have enough disks to accommodate the mirroring and enough total space for all the logical volumes.  You can even use the "-s" flag on restvg to attempt to shrink the filesystems if the destination hdisks aren't large enough to hold all the original LVs. 

The Shell Scripts that make up AIX

Over the years I've noticed that a lot of the core utilities on AIX are actually shell scripts. 
Here are some examples of these utilities on AIX that are either shell scripts (ksh/csh) or in some cases Perl scripts:
 
mksysb
oslevel
mkcd / mkdvd
useradd
userdel
usermod
prtconf
bosboot
mklv
shutdown
snap
lsconf
dsh
lsmksysb
savevg
which
chpv
chvg
cplv
exportvg
extendlv
migratelp
migratepv
mirrorvg
mktcpip
mkwpar
multibos
reducevg
reorgvg
replacepv
rmlv
rmlvcopy
splitlvcopy
splitvg
unmirrorvg
varyoffvg
 
As you can see, there are some pretty important commands in this list.  And this is just a small sample of them.  On my AIX server I found that there are over 400 scripts included as part of base AIX!  You can see a full list of all the scripts that make up your system by running a command like this:
 
for dir in `echo $PATH | tr ":" " "`; do
    for file in `ls -1 "$dir" 2>/dev/null`; do
        [ -x "$dir/$file" ] && file "$dir/$file"
    done
done | grep -i script
 
It is pretty cool that so many of the core commands/utilities on AIX are made up of shell scripts.  For one, it shows that shell scripts can take on very important and critical tasks.  It can also be extremely helpful to be able to review the scripts if you are having issues with any of these commands.  And these scripts can be an excellent learning tool: they are extremely well written and robust scripts, many of which have been used for decades on thousands and thousands of servers. 

Display the contents of gzip text files without unzipping

To save disk space on my Unix server, I have compressed thousands of reports[0-1000].txt files using the gzip program. However, I sometimes need to cat a .gz file (or open it in the vim text editor) for reference purposes. I also have log files stored on my server in compressed format via the gzip command. How do I display a compressed Apache log file without using the cat command? How do I display the contents of gzip text files on screen without unzipping them?

You can easily display compressed files on Linux or Unix without first uncompressing them, using gzip-aware versions of the cat, less, and more commands. In this example, we show the contents of a text file called resume.txt.gz that has been compressed using gzip and friends. Open the terminal and then type the following commands.

Syntax

 

Display resume.txt.gz on screen, with cat-like syntax:

zcat resume.txt.gz

Display access_log_1.gz one screen at a time:

 zmore access_log_1.gz

Or try zless (less-like syntax):

zless access_log_1.gz

Search access_log_1.gz for the 1.2.3.4 IP address, with grep-like syntax:

zgrep '1.2.3.4' access_log_1.gz

For extended regular expressions use zegrep, with egrep-like syntax:

zegrep 'regex' access_log_1.gz
zegrep 'regex1|regex2' access_log_1.gz
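
The question also mentions vim: recent vim builds ship with a gzip plugin enabled by default, so a compressed file can usually be opened and edited in place (assuming a stock vim install):

vim resume.txt.gz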