DISCLAIMER: The blog owner takes no responsibility for any data loss or damage caused by trying any of the commands/methods mentioned in this blog. Use the commands/methods/scripts at your own risk. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.
Kubectl commands - Kubernetes
Kubectl is a command line interface for running commands against Kubernetes clusters. This overview covers
kubectl
syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the kubectl reference documentation.
List all services
kubectl get services
kubectl get svc
List everything
kubectl get all --all-namespaces
Describe service <name>
kubectl describe svc <name>
Get services sorted by name
kubectl get services --sort-by=.metadata.name
List all pods
kubectl get pods
Watch pods continuously
kubectl get pods -w
Get version information
kubectl version
Get cluster information
kubectl cluster-info
Get the configuration
kubectl config view
Output information about a node
kubectl describe node <node-name>
List the replication controllers
kubectl get rc
List the replication controllers in specific <namespace>
kubectl get rc -n <namespace-name>
Describe replication controller <name>
kubectl describe rc <name>
Delete pod <name>
kubectl delete pod <name>
Delete replication controller <name>
kubectl delete rc <name>
Delete service <name>
kubectl delete svc <name>
Remove <node> from the cluster
kubectl delete node <name>
Show metrics for nodes
kubectl top nodes
Show metrics for pods
kubectl top pods
Watch the kubelet logs
watch -n 2 cat /var/log/kubelet.log
Get logs from pod <name>, optionally selecting container <$container>
kubectl logs -f <name> [-c <$container>]
Execute <command> in pod <pod-name>, optionally selecting container <$container>
kubectl exec <pod-name> <command> [-c <$container>]
Initialize your master node
kubeadm init
Join a node to your Kubernetes cluster
kubeadm join --token <token> <master-ip>:<master-port>
Create namespace <name>
kubectl create namespace <namespace>
Allow Kubernetes master nodes to run pods
kubectl taint nodes --all node-role.kubernetes.io/master-
Reset current state
kubeadm reset
List all secrets
kubectl get secrets
Launch a pod called <name>, using image <image-name>
kubectl run <name> --image=<image-name>
Create a service described in <manifest.yaml> file
kubectl create -f <manifest.yaml>
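As an illustration, a minimal manifest for such a service might look like this (all names and ports here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical service name
spec:
  selector:
    app: my-app           # pods carrying this label receive the traffic
  ports:
    - protocol: TCP
      port: 80            # port exposed by the service
      targetPort: 8080    # port the pods actually listen on
```

Saving this as manifest.yaml and running kubectl create -f manifest.yaml would create the service.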
Validate yaml file with dry run
kubectl create --dry-run --validate -f sysaix.yaml
Scale replication controller
kubectl scale rc <name> --replicas=<count>
Drain a node (evict all its pods) for maintenance
kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
Explain resource
kubectl explain pods
kubectl explain svc
Open a bash terminal in a pod
kubectl exec -it sysaixpod sh
Check pod environment variables
kubectl exec sysaixpod env
Filter pods by label
kubectl get pods -l owner=emre
List statefulset
kubectl get sts
Scale statefulset
kubectl scale sts <stateful_set_name> --replicas=5
Delete statefulset only (not pods)
kubectl delete sts <stateful_set_name> --cascade=false
View all events
kubectl get events --all-namespaces
How to enable Accelerated Networking on an existing VM - AZURE
First, not all VM sizes support Accelerated Networking; the supported operating systems and VM sizes can be found here:
https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-cli
https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-powershell
so I’ve deallocated the VM and changed the size to DS3 v2, which is a supported VM size.
Then enable Accelerated Networking on the NIC, following the PowerShell or CLI steps in the links above.
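For reference, the equivalent Azure CLI steps are sketched below. The resource group, VM and NIC names are placeholders from my setup, so adjust them to yours; the VM must be deallocated before the NIC can be updated:

```shell
# Deallocate the VM so its NIC can be modified
az vm deallocate --resource-group myRG --name myVM
# Enable Accelerated Networking on the NIC
az network nic update --resource-group myRG --name myNic --accelerated-networking true
# Start the VM again
az vm start --resource-group myRG --name myVM
```
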
Zabbix JMX monitoring setup for Linux server
JMX monitoring can be used to monitor JMX counters of a Java application.
JMX monitoring has native support in Zabbix in the form of a Zabbix daemon called “Zabbix Java gateway”, introduced since Zabbix 2.0.
To retrieve the value of a particular JMX counter on a host, Zabbix server queries the Zabbix Java gateway, which in turn uses the JMX management API to query the application of interest remotely.
What is Java and JMX monitoring?
JMX monitoring can be used to measure all JMX counters of a Java application. Java Management Extensions (JMX) is a Java Community Process (JSR-3) specification for managing and monitoring Java applications. Via a so-called Java gateway, the Zabbix server can address the JMX monitoring services, read data from a Java application and save and process it as an item.
A typical use case is the monitoring of the memory consumption of a Java application or of the Java runtime environment in which the application is operated.
Zabbix Java gateway
The so-called Zabbix Java gateway is a special poller process that can retrieve data via JMX. Unlike the previously mentioned poller processes, it is not an "internal" process within the Zabbix server: the Java gateway is a stand-alone daemon that provides data to the Zabbix server through a TCP port. The Java gateway is written in Java and requires a JRE.
Enable Java Gateway
To use JMX monitoring, you must install the Zabbix Java gateway, which is typically not included in the default installation. If you are using the official Zabbix DEB or RPM packages, install the Java gateway as follows. The Java gateway requires no further configuration and can be started immediately.
apt-get install zabbix-java-gateway # Debian/Ubuntu
/etc/init.d/zabbix-java-gateway start # Debian/Ubuntu
yum install zabbix-java-gateway # Red Hat/CentOS
service zabbix-java-gateway start # Red Hat/CentOS
systemctl enable zabbix-java-gateway # Red Hat/CentOS
You should now find startup and shutdown scripts in the /opt/zabbix-java-gateway/sbin/zabbix_java folder.
cd /opt/zabbix-java-gateway/sbin/zabbix_java
./startup.sh
Test if the Java gateway is running.
# ps -ef| grep -i java
Java Gateway as data provider for the Zabbix server
Enter the Java gateway connection details in the file /etc/zabbix/zabbix_server.conf:
JavaGateway=<Java gateway IP address>
JavaGatewayPort=10052
If the host on which the JMX application is running is monitored by a Zabbix proxy, specify the connection parameters in the proxy configuration file instead.
By default, the server does not start any processes related to JMX monitoring. If you wish to use it, you have to specify the number of pre-forked instances of Java pollers, in the same way you specify regular pollers and trappers:
StartJavaPollers=5
Activate at least one Java poller so the server can request item data from the Java gateway and process it. Do not forget to restart the server or proxy once you are done configuring them.
NOTE: This is a one time setup in the zabbix server and need NOT be modified again for every new server that we add to the Zabbix WebUI.
Enabling remote JMX monitoring for Java application
A Java application does not need any additional software installed, but it needs to be started with the command-line options specified below to have support for remote JMX monitoring.
As a bare minimum, if you just wish to get started by monitoring a simple Java application on a local host with no security enforced, start it with these options:
export CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10052 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=ClientIP -Djava.net.preferIPv4Stack=true"
NOTE: Communication between Java gateway and the monitored JMX application should not be firewalled.
So open the gateway port in the internal firewall of the Linux client, or disable the firewall if your security policy allows it.
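For example, on a RHEL/CentOS 7 client with firewalld active you could either allow just the port used in this setup (10052 here) or stop the firewall entirely; a sketch, assuming firewalld is the firewall in use:

```shell
# Option 1: allow only the JMX/gateway port through firewalld
firewall-cmd --permanent --add-port=10052/tcp
firewall-cmd --reload
# Option 2: disable the local firewall entirely (not recommended for production)
systemctl stop firewalld
systemctl disable firewalld
```
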
Configuring JMX interfaces and items in Zabbix frontend
With the Java gateway running, the server knowing where to find it, and a Java application started with support for remote JMX monitoring, it is time to configure the interfaces and items in the Zabbix GUI. Before retrieving data from the JMX interface, you must specify for each host the IP address and TCP port on which the JMX interface of the Java program listens. Navigate to the host configuration and add a JMX interface. If you want to monitor several Java programs via JMX on one host, you must use different TCP ports.
Apply the appropriate JMX template (such as a generic JMX template) to the host in the Zabbix web UI, and you will see the JMX availability icon turn green once monitoring is enabled.
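When creating items of type "JMX agent" on that interface, the item key names the MBean and attribute to query. For example, JVM heap usage can be read with a key like the following (the object name is standard for the Java platform; the item name and update interval are up to you):

```
jmx["java.lang:type=Memory","HeapMemoryUsage.used"]
```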
Modern way of /etc/motd - FireMotd for linux
While developing, playing or working on Linux systems, a dynamic MotD generator script can quickly give you an overview of all used components of your Linux systems.
FireMotD can show you this information in a sanitized and colorful way while you log in with SSH or console.
Depending on the chosen theme, FireMotD will output all the information defined in that theme on your server.
We need EPEL repository to be installed and enabled in the server to install the dependency packages.
Install Dependencies
You need to install the required dependencies as shown below.
yum install bc sysstat jq moreutils
After installing the dependencies, clone/download or copy FireMotD to the root directory as shown.
git clone https://github.com/OutsideIT/FireMotD.git
Change to the FireMotD directory and run the commands below.
You need to have make installed on the system, if you want to use the Makefile.
To install to /usr/local/bin/FireMotD
sudo make install
With this you should be able to run FireMotD from anywhere on your system. If not, you need to add /usr/local/bin to your $PATH variable. To adjust the installation path, change the variable IDIR=/usr/local/bin in the Makefile to the path you want.
To install bash autocompletion support
sudo make bash_completion
With this you can use TAB to autocomplete parameters and options with FireMotD. This does not require the sudo make install above (system install), but it does require the bash-completion package to be installed and working. Then log out and back in, or source the bash completion file, e.g. . /etc/bash_completion.d/FireMotD
If you don't have root access, just install everything on your user's folder and source the file from your user's .profile file
Crontab to get system information
Root privilege is required for this operation. Only /etc/crontab and the files in /etc/cron.d/ have a username field.
The recommended way to generate /var/tmp/FireMotD.json is by creating a separate cron file for firemotd like this:
sudo vim /etc/cron.d/firemotd
# FireMotD system updates check (randomly execute between 0:00:00 and 5:59:59)
0 0 * * * root perl -e 'sleep int(rand(21600))' && /usr/local/bin/FireMotD -S &>/dev/null
But you can also put it in root's crontab (without the user field):
sudo crontab -e
# FireMotD system updates check (randomly execute between 0:00:00 and 5:59:59)
0 0 * * * perl -e 'sleep int(rand(21600))' && /usr/local/bin/FireMotD -S &>/dev/null
Adding FireMotD to run on login
Choosing where to run your script is somewhat situational: some files run only on remote logins, others only on local logins, or both. Find out what best suits your needs in each case.
To add FireMotD to a single user
Edit the user's ~/.profile file, ~/.bash_profile file, or the ~/.bashrc file
nano ~/.profile
Add the FireMotD call at the end of the file (choose your theme)
/usr/local/bin/FireMotD -t blue
To add FireMotD to all users
You may call FireMotD from a few different locations for running globally.
E.g. /etc/bash.bashrc or /etc/profile.
We use /etc/profile, so add the FireMotD call to that file.
Enable EPEL Repository for RHEL/CentOS 7.x
What is EPEL
The EPEL project is not part of RHEL/CentOS, but it provides lots of open source packages for major Linux distributions, covering networking, system administration, programming, monitoring and so on. Most EPEL packages are maintained by the Fedora project.
Why we use EPEL repository?
Provides lots of open source packages to install via Yum.
Epel repo is 100% open source and free to use.
It does not duplicate any core packages and causes no compatibility issues.
All EPEL packages are maintained by the Fedora project.
How To Enable EPEL Repository in RHEL/CentOS 7?
# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -ivh epel-release-latest-7.noarch.rpm
# yum repolist
Add SWAP to Linux VM’s on Azure
Every virtual machine (VM) on Azure has what we call a temporary (ephemeral) disk, which is recommended to be used ONLY for temporary storage such as swap files, or for data that does not need to survive a reboot; any data stored on this drive will be lost.
To create a swap file in the directory that's defined by the ResourceDisk.MountPoint parameter, you can update the /etc/waagent.conf file by setting the following three parameters:
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=xx
Note: The xx placeholder represents the desired number of megabytes (MB) for the swap file.
The size is in MB; for instance, to create a swap file of 4 GB you could use these lines:
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=4096
Restart the WALinuxAgent service by running one of the following commands, depending on the system in question:
Ubuntu: service walinuxagent restart
Red Hat/CentOS: service waagent restart
Run one of the following commands to show the new swap space in use after the restart:
dmesg | grep swap
swapon -s
cat /proc/swaps
file /mnt/resource/swapfile
free| grep -i swap
How to reduce LVM partition size in RHEL and CentOS
Sometimes, when we are running out of disk space on a Linux box and the partition was created on LVM, we can free up space in the volume group by reducing the LVM with the lvreduce command. In this article we will discuss the steps required to reduce the size of an LVM partition safely on CentOS and RHEL servers. The steps below apply when the LVM partition is formatted with an ext file system.
Scenario: Suppose we want to reduce /home, which is on an LVM partition and formatted as ext4, by 2 GB.
[root@cloud ~]# df -h /home/
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_cloud-LogVol00   12G  9.2G  1.9G  84% /home
Step 1: Unmount the file system
Use the umount command as shown below.
[root@cloud ~]# umount /home/
Step 2: Check the file system for errors using the e2fsck command.
[root@cloud ~]# e2fsck -f /dev/mapper/vg_cloud-LogVol00
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg_cloud-LogVol00: 12/770640 files (0.0% non-contiguous), 2446686/3084288 blocks
Note: In the e2fsck command above, we use the ‘-f’ option to force a check of the file system even if it is clean.
Step 3: Shrink the file system on /home to the desired size.
As shown in the scenario above, the size of /home is 12 GB; reducing it by 2 GB makes it 10 GB.
[root@cloud ~]# resize2fs /dev/mapper/vg_cloud-LogVol00 10G
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/vg_cloud-LogVol00 to 2621440 (4k) blocks.
The filesystem on /dev/mapper/vg_cloud-LogVol00 is now 2621440 blocks long.
Step 4: Now reduce the logical volume size using the lvreduce command.
[root@cloud ~]# lvreduce -L 10G /dev/mapper/vg_cloud-LogVol00
WARNING: Reducing active logical volume to 10.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LogVol00? [y/n]: y
Reducing logical volume LogVol00 to 10.00 GiB
Logical volume LogVol00 successfully resized
Step 5 (optional): To be on the safe side, check the reduced file system for errors again.
[root@cloud ~]# e2fsck -f /dev/mapper/vg_cloud-LogVol00
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg_cloud-LogVol00: 12/648960 files (0.0% non-contiguous), 2438425/2621440 blocks
Step 6: Mount the file system and verify its size.
[root@cloud ~]# mount /home/
[root@cloud ~]# df -h /home/
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_cloud-LogVol00  9.9G  9.2G  208M  98% /home
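As a side note, newer LVM releases can shrink the file system and the logical volume in one step with the -r (--resizefs) flag, which avoids ordering mistakes between resize2fs and lvreduce. A sketch using the same volume as above:

```shell
# Unmount first, then let lvreduce run e2fsck/resize2fs itself before shrinking the LV
umount /home
lvreduce -r -L 10G /dev/mapper/vg_cloud-LogVol00
mount /home
```
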
Add Multiple IP address to single NIC in Suse Linux - Azure
Some of you might be wondering why we would assign multiple IP addresses to a single network card. There can be many reasons: say, for example, you are doing some testing on your Linux box that requires two or more network cards. Would you buy a new one? No, it is not necessary.
You can set multiple IP series, for example 192.168.1.0, 192.168.2.0, 192.168.3.0 etc., for a network card, and use all of them at the same time. Sounds useful? Of course, it is!
This method might be helpful when setting up Internet sharing servers, like Squid proxy.
To add multiple IP addresses to a SUSE Linux VM on Azure, you can follow the steps below:
Log in to the Azure portal, navigate to VM - Networking - Network Interface, then click the NIC
Click "Add" to add a secondary private IP address with static assignment
Log in to the VM, then run "yast" to add the second IP address
Press F4 (Edit), then F3 (Add), input the IP address assigned in the portal, and press F10 (Next/OK) to save and exit
Now the VM has added the second IP address
And it can be pinged from another VM within the same VNet
Now you have successfully configured multiple addresses for a single NIC.
$ sudo yast
$ sudo ifconfig
$ ip a
Install AzCopy on Linux - Fastest way to copy in Azure
There are two versions of AzCopy that you can download. AzCopy on Linux is built with .NET Core Framework, which targets Linux platforms offering POSIX style command-line options. AzCopy on Windows is built with .NET Framework, and offers Windows style command-line options.
This article covers AzCopy on Linux.
Installation on Linux
Install and enable the .NET SDK
In your command prompt, run the following commands:
yum install rh-dotnet20 -y
scl enable rh-dotnet20 bash
Once you have installed .NET Core, download and install AzCopy.
wget -O azcopy.tar.gz https://aka.ms/downloadazcopyprlinux
tar -xf azcopy.tar.gz
sudo ./install.sh
You can remove the extracted files once AzCopy on Linux is installed. Alternatively, if you do not have superuser privileges, you can run AzCopy using the shell script 'azcopy' in the extracted folder.
The basic syntax for AzCopy commands is:
azcopy --source <source> --destination <destination> [Options]
The following examples demonstrate various scenarios for copying data to and from Microsoft Azure Blobs and Files. Refer to the azcopy --help menu for a detailed explanation of the parameters used in each sample.
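As a sketch, uploading a local directory to a blob container and downloading it back might look like this (the storage account, container, paths and key are placeholders; this is the AzCopy-on-Linux style syntax described above):

```shell
# Upload all files under /var/myfiles to the container "mycontainer"
azcopy --source /var/myfiles \
       --destination https://myaccount.blob.core.windows.net/mycontainer \
       --dest-key <storage-account-key> --recursive
# Download the container contents back to a local folder
azcopy --source https://myaccount.blob.core.windows.net/mycontainer \
       --destination /tmp/restore \
       --source-key <storage-account-key> --recursive
```
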
If a user faces issues while running the azcopy command (errors asking for dotnet files), add the line below to that user's .bashrc file.
source scl_source enable rh-dotnet20
If, for example, the oracle user needs to run this command, add the path below to the PATH section of oracle's .bash_profile:
/opt/rh/rh-dotnet20/root/usr/bin/
If you use azcopy in a shell script, add the line below to the script as well.
source scl_source enable rh-dotnet20
Now run azcopy on the server and it will show the available options; you can explore them and use the command as required.