DISCLAIMER: Please note that the blog owner takes no responsibility of any kind for any type of data loss or damage caused by trying any of the commands/methods mentioned in this blog. You may use the commands/methods/scripts at your own responsibility. If you find something useful, a comment would be appreciated to let other viewers know that the solution/method worked for you.


Introduction to kubernetes - k8s

For a new learner, Kubernetes (commonly referred to as K8s) can have a steep learning curve. This article gives a brief conceptual overview to kickstart your journey.



Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

Kubernetes Cluster

A Kubernetes cluster has two types of components:

1. Master nodes, which run the Kubernetes control-plane daemons (kube-apiserver, kube-proxy, kube-dns, the Kubernetes dashboard, and so on).
2. Cluster (worker) nodes, belonging to one or more node pools, which provide the underlying physical resources for all the containers.

Node pools

A homogeneous set of physical resources that provides the underlying capacity for the cluster. A cluster can have one or more node pools (instance groups), with labels that tell the Kubernetes scheduler what can be run on the provided hardware.
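
For example, labels attached to nodes are what the scheduler (and your pod specs, via node selectors) can match on. A minimal sketch, assuming a node named node1 and a hypothetical disktype label:

kubectl label nodes node1 disktype=ssd
kubectl get nodes -l disktype=ssd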

Containers

A container is a running instance of a single image (generally a Docker image) containing the executable, runtime, system libraries and everything else the application needs. This is analogous to the Docker containers used by other orchestration tools.

Pod

Pods are the smallest deployable units of computing that can be created and managed in Kubernetes. A pod is a set of one or more containers deployed and run together, sharing the same local network space. The recommendation is to run one container per pod so that you can control a process atomically at the Kubernetes level, but multi-container pods are not uncommon.
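
As a minimal sketch (mypod and the nginx image are only example names), you can launch a single-container pod directly from the command line; --restart=Never tells older kubectl versions to create a bare Pod rather than a Deployment:

kubectl run mypod --image=nginx --restart=Never
kubectl get pod mypod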

ReplicaSet

A ReplicaSet ensures that a specified number of pod replicas are running at any time. In other words, a ReplicaSet makes sure that a pod or a homogeneous set of pods is always up and available.

Deployment

A Deployment controller provides a declarative wrapper on top of ReplicaSets, defining the template used to build the homogeneous pods and the number of replicas. Even if you only need to run one container instance of a particular service, it is recommended to create a Deployment for it with a configuration of 1 replica.
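
As a minimal sketch (web and the nginx image are only example names), you can create a Deployment and adjust its replica count later:

kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl get deployment web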

Horizontal Pod Autoscaler

With Horizontal Pod Autoscaling, Kubernetes automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics).
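
As a minimal sketch, assuming the hypothetical web Deployment from above and a working metrics source (such as metrics-server), you could let Kubernetes scale it between 2 and 5 replicas based on CPU utilization:

kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80
kubectl get hpa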

DaemonSet

A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:
  1. running a cluster storage daemon, such as glusterd, ceph, on each node.
  2. running a logs collection daemon on every node, such as fluentd or logstash.
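
To see which DaemonSets a cluster already runs (kube-proxy and log collectors are often deployed this way, though this depends on the cluster setup), you can list them in the kube-system namespace:

kubectl get daemonsets -n kube-system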

StatefulSets

  1. Used to manage stateful applications.
  2. Provides guarantees about the ordering and uniqueness of its Pods.
  3. A StatefulSet maintains a sticky identity for each of its Pods (see the sketch below).
  4. Typically used for cases where predictability of deletion and creation order is important.
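
As a minimal sketch, a StatefulSet named web with 3 replicas creates pods with stable ordinal names (web-0, web-1, web-2), started in order and removed in reverse order; web and the app=web label are hypothetical names used only for illustration:

kubectl get sts web
kubectl get pods -l app=web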

Service

A Service is an abstraction that defines a logical set of Pods and a policy by which to access them — sometimes called a micro-service. So if a deployment from the previous step exposed a web service through a port, then a service is how you would expose the deployment.

A service can be exposed externally using a load balancer, or internally through a cluster IP that is reachable from within the K8s cluster. The load balancer is a cloud-native LB such as an ELB in AWS. The cluster IP is a virtual IP that is routable only within the cluster.

Additionally, Kubernetes gives each service a DNS A record of the form $service-name.$namespace.svc.cluster.local, resolvable inside the cluster.
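
As a minimal sketch, again using the hypothetical web Deployment: expose it inside the cluster on port 80 via a cluster IP, or externally through a cloud load balancer. The ClusterIP service is then resolvable as web.<namespace>.svc.cluster.local:

kubectl expose deployment web --port=80 --target-port=80
kubectl expose deployment web --port=80 --type=LoadBalancer --name=web-public
kubectl get svc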

Configmap

Every K8s cluster can have one or more ConfigMaps. A ConfigMap is a key-value store whose entries can be made available to each container, for example as environment variables.
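
A minimal sketch with hypothetical names: create a ConfigMap from literal key-value pairs and inspect it; the keys can then be referenced from a pod spec as environment variables:

kubectl create configmap app-config --from-literal=LOG_LEVEL=debug
kubectl describe configmap app-config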

Network Topology


Every Kubernetes cluster satisfies the following requirements:
  1. all pods can communicate with all other pods without NAT
  2. all nodes can communicate with all pods (and vice-versa) without NAT
  3. the IP that a pod sees itself as is the same IP that others see it as
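
You can observe this flat network by listing pod IPs: every pod gets its own IP, shown in the IP column, and that address is directly reachable from other pods and nodes without NAT:

kubectl get pods -o wide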

Running Commands as Another User via sudo

You want one user to run commands as another, without sharing passwords.
Suppose you want user smith to be able to run a given command as user jones. 


Add a line like the following to /etc/sudoers (edit it with visudo):

smith  ALL = (jones) /usr/local/bin/mycommand

User smith then runs:

smith$ sudo -u jones /usr/local/bin/mycommand
smith$ sudo -u jones mycommand        # works if /usr/local/bin is in $PATH

User smith will be prompted for his own password, not jones’s. The ALL keyword, which matches anything, in this case specifies that the line is valid on any host. sudo exists for this very reason! To authorize root privileges for smith, replace “jones” with “root” in the example above.


Kubectl commands - Kubernetes

Kubectl is a command line interface for running commands against Kubernetes clusters. This overview covers kubectl syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the kubectl reference documentation.

List all services

kubectl get services 
kubectl get svc

List everything

kubectl get all --all-namespaces

Describe service <name>

kubectl describe svc <name>

Get services sorted by name

kubectl get services --sort-by=.metadata.name

List all pods

kubectl get pods

Watch pods continuously

kubectl get pods -w

Get version information

kubectl version

Get cluster information

kubectl cluster-info

Get the configuration

kubectl config view

Output information about a node

kubectl describe node <node-name>

List the replication controllers

kubectl get rc

List the replication controllers in specific <namespace>

kubectl get rc -n <namespace-name>

Describe replication controller <name>

kubectl describe rc <name>

Delete pod <name>

kubectl delete pod <name>

Delete replication controller <name>

kubectl delete rc <name>

Delete service <name>

kubectl delete svc <name>

Remove node <name> from the cluster

kubectl delete node <name>

Show metrics for nodes

kubectl top nodes

Show metrics for pods

kubectl top pods

Watch the kubelet logs

watch -n 2 cat /var/log/kubelet.log

Get logs from pod <name>, optionally selecting container <$container>

kubectl logs -f <name> [-c <$container>] 

Execute <command> in pod <name>, optionally selecting container <$container>

kubectl exec <name> [-c <$container>] -- <command>

Initialize your master node

kubeadm init

Join a node to your Kubernetes cluster

kubeadm join --token <token> <master-ip>:<master-port>

Create namespace <name>

kubectl create namespace <name>

Allow Kubernetes master nodes to run pods

kubectl taint nodes --all node-role.kubernetes.io/master-

Reset current state

kubeadm reset

List all secrets

kubectl get secrets

Launch a pod called <name>, using image <image-name>

kubectl run <name> --image=<image-name>

Create a service described in <manifest.yaml> file

kubectl create -f <manifest.yaml>

Validate yaml file with dry run

kubectl create --dry-run --validate -f sysaix.yaml

Scale replication controller

kubectl scale rc <name> --replicas=<count>

Drain node <node-name> (evict all pods)

kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets

Explain resource

kubectl explain pods
kubectl explain svc

Open a shell in a pod (here, the example pod sysaixpod)

kubectl exec -it sysaixpod sh

Check pod environment variables

kubectl exec sysaixpod env

Filter pods by label

kubectl get pods -l owner=emre

List statefulset

kubectl get sts

Scale statefulset

kubectl scale sts <stateful_set_name> --replicas=5

Delete statefulset only (not pods)

kubectl delete sts <stateful_set_name> --cascade=false

View all events

kubectl get events --all-namespaces

How to enable Accelerated Networking on an existing VM - AZURE

First, not all VM sizes support Accelerated Networking; the supported operating systems and VM sizes can be found here:

https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-cli

https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-powershell

So I deallocated the VM and changed the size to DS3 v2, which is a supported VM size.


Then run the following PowerShell Commands

$nic = Get-AzureRmNetworkInterface -ResourceGroupName "bigvnetgroup" -Name "server2016344"
$nic.EnableAcceleratedNetworking = $true
$nic | Set-AzureRmNetworkInterface
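
Alternatively, the same setting can be toggled with the Azure CLI; this is only a sketch that assumes the same resource group and NIC names as in the PowerShell example and that the VM is still deallocated:

az network nic update --resource-group bigvnetgroup --name server2016344 --accelerated-networking true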

Zabbix JMX monitoring setup for Linux server

JMX monitoring can be used to monitor JMX counters of a Java application.

JMX monitoring has native support in Zabbix in the form of a Zabbix daemon called “Zabbix Java gateway”, introduced in Zabbix 2.0.

To retrieve the value of a particular JMX counter on a host, Zabbix server queries the Zabbix Java gateway, which in turn uses the JMX management API to query the application of interest remotely.

What is Java and JMX monitoring?


JMX monitoring can be used to measure all JMX counters of a Java application. Java Management Extensions (JMX) is a Java Community Process (JSR-3) specification for managing and monitoring Java applications. Via a so-called Java gateway, the Zabbix server can address the JMX monitoring services, read data from a Java application and save and process it as an item.

A typical use case is the monitoring of the memory consumption of a Java application or of the Java runtime environment in which the application is operated.


Java Gateway and Java Pollers

Zabbix Java gateway


The so-called Zabbix Java gateway is a special poller-like process that can retrieve data via JMX. Unlike the previously mentioned poller processes, it is not an "internal" process within the Zabbix server. The Java gateway is a stand-alone daemon that provides data to the Zabbix server through a TCP port. The Java gateway is Java software and therefore requires a JRE.

Enable Java Gateway

To use JMX monitoring, you must install the Zabbix Java gateway, which is typically not included in the default installation. If you are using the official Zabbix DEB or RPM packages, install the Java gateway as follows. The Java gateway requires no configuration and can be started immediately.

apt-get install zabbix-java-gateway # Debian/Ubuntu
/etc/init.d/zabbix-java-gateway start # Debian/Ubuntu

yum install zabbix-java-gateway # Red Hat/CentOS
service zabbix-java-gateway start # Red Hat/CentOS
systemctl enable zabbix-java-gateway # Red Hat/CentOS


You should now find the startup and shutdown scripts in the /opt/zabbix-java-gateway/sbin/zabbix_java folder.

cd /opt/zabbix-java-gateway/sbin/zabbix_java

 ./startup.sh


Test if the Java Gateway is running.

# ps -ef | grep -i java


Java Gateway as data provider for the Zabbix server


Now that the Java gateway is running, you have to tell the Zabbix server where to find it. This is done by specifying the JavaGateway and JavaGatewayPort parameters in the server configuration file.

If the host on which the JMX application is running is monitored by a Zabbix proxy, then you specify the connection parameters in the proxy configuration file instead.

JavaGateway=<Java gateway IP address>
JavaGatewayPort=10052

By default, the server does not start any processes related to JMX monitoring. If you wish to use it, however, you have to specify the number of pre-forked instances of Java pollers. You do this in the same way you specify regular pollers and trappers.
StartJavaPollers=5

Do not forget to restart the server or proxy once you are done configuring them.

Enter the Java gateway in the file /etc/zabbix/zabbix_server.conf. On the Zabbix server, activate at least one Java poller, which requests the item data from the Java gateway and forwards it to the server processes. After you make the changes, restart the Zabbix server.

NOTE: This is a one time setup in the zabbix server and need NOT be modified again for every new server that we add to the Zabbix WebUI.

Enabling remote JMX monitoring for Java application


A Java application does not need any additional software installed, but it needs to be started with the command-line options specified below to have support for remote JMX monitoring.

As a bare minimum, if you just wish to get started by monitoring a simple Java application on a local host with no security enforced, start it with the options below (the example uses CATALINA_OPTS and therefore applies to Tomcat; for other Java applications pass the same -D options on the Java command line):

export CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10052 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=ClientIP -Djava.net.preferIPv4Stack=true"

NOTE: Communication between Java gateway and the monitored JMX application should not be firewalled.
So disable the server-level internal firewall on the Linux client using the command below.

systemctl disable firewalld


Configuring JMX interfaces and items in Zabbix frontend

With Java gateway running, server knowing where to find it and a Java application started with support for remote JMX monitoring, it is time to configure the interfaces and items in Zabbix GUI.

Before retrieving data from the JMX interface, you must specify for each host on which IP address and on which TCP port the JMX interface of the Java program listens. Navigate to the host configuration and add a JMX interface. If you want to monitor several Java programs via JMX on a host, you must use different TCP ports.




Apply the appropriate JMX template (for example, a generic JMX template) to the host in the Zabbix web UI, and you will see the JMX availability icon turn green once it is enabled.

Modern way of /etc/motd - FireMotd for linux

While developing, playing or working on Linux systems, a dynamic MotD generator script can quickly give you an overview of all used components of your Linux systems.
FireMotD can show you this information in a sanitized and colorful way when you log in via SSH or the console.
Depending on the chosen theme, FireMotD will output all information defined in the theme for your server.
The EPEL repository needs to be installed and enabled on the server to install the dependency packages.

Install Dependencies


You need to install the required dependencies as shown below.
yum install bc sysstat jq moreutils
After installing the dependencies, clone/download or copy the FireMotD repository to the root directory as shown.
git clone https://github.com/OutsideIT/FireMotD.git
Change to the FireMotD directory and run the commands below.
You need to have make installed on the system if you want to use the Makefile.

To install to /usr/local/bin/FireMotD

sudo make install
With this you can probably run FireMotD from anywhere in your system. If not, you need to add /usr/local/bin to your $PATH variable. To adjust the installation path, change the var IDIR=/usr/local/bin in the Makefile to the path you want.
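
For example, a sketch assuming a bash login shell, to add /usr/local/bin to your PATH permanently:

echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bashrc
source ~/.bashrc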

To install bash autocompletion support

sudo make bash_completion
With this you can use TAB to autocomplete parameters and options with FireMotD. This does not require the sudo make install above (system install), but it does require the bash-completion package to be installed and working. Then you should log out and back in, or source the bash completion file, e.g. $ . /etc/bash_completion.d/FireMotD

If you don't have root access, just install everything in your user's folder and source the file from your user's .profile file.

Crontab to get system information


Root privilege is required for this operation. Only /etc/crontab and the files in /etc/cron.d/ have a username field.

The recommended way to generate /var/tmp/FireMotD.json is by creating a separate cron file for firemotd like this:

sudo vim /etc/cron.d/firemotd 

# FireMotD system updates check (randomly execute between 0:00:00 and 5:59:59)
0 0 * * * root perl -e 'sleep int(rand(21600))' && /usr/local/bin/FireMotD -S &>/dev/null

But you can also put it in root's crontab (without the user field):

sudo crontab -e 

# FireMotD system updates check (randomly execute between 0:00:00 and 5:59:59)
0 0 * * * perl -e 'sleep int(rand(21600))' && /usr/local/bin/FireMotD -S &>/dev/null

Adding FireMotD to run on login


Choosing where to run the script is somewhat situational. Some files only run on remote logins, others on local logins, or both. You should find out what best suits your needs in each case.

To add FireMotD to a single user


Edit the user's ~/.profile file, ~/.bash_profile file, or the ~/.bashrc file
nano ~/.profile

Add the FireMotD call at the end of the file (choose your theme)
/usr/local/bin/FireMotD -t blue

To add FireMotD to all users


You may call FireMotD from a few different locations to have it run globally.
E.g. /etc/bash.bashrc or /etc/profile.

Here we use /etc/profile, so add the line to that file, for example as shown below.
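
For example (the blue theme is just one possible choice, as in the single-user case above):

echo '/usr/local/bin/FireMotD -t blue' | sudo tee -a /etc/profile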

Color outputs

Available themes include Blue, Red and Gray.


Enable EPEL Repository for RHEL/CentOS 7.x

What is EPEL


EPEL (Extra Packages for Enterprise Linux) is an open-source, free, community-based repository project from the Fedora team which provides high-quality add-on software packages for Linux distributions including RHEL (Red Hat Enterprise Linux) and CentOS.

The EPEL project is not a part of RHEL/CentOS, but it is designed for major Linux distributions, providing lots of open-source packages for networking, system administration, programming, monitoring and so on. Most of the EPEL packages are maintained by the Fedora repository.

Why we use EPEL repository?


It provides lots of open-source packages to install via yum.
The EPEL repo is 100% open source and free to use.
It does not duplicate core packages and introduces no compatibility issues.
All EPEL packages are maintained by the Fedora repository.

How To Enable EPEL Repository in RHEL/CentOS 7?


First, you need to download the package using wget and then install it using rpm on your system to enable the EPEL repository. Use the links below based on your Linux OS version. (Make sure you are the root user.)
# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -ivh epel-release-latest-7.noarch.rpm

Run the following command to verify that the EPEL repository is enabled. Once you have run it, you will see the epel repository in the list.
# yum repolist


Add SWAP to Linux VM’s on Azure


Every virtual machine (VM) on Azure has what we call a temporary (ephemeral) disk, which is recommended to be used ONLY as temporary storage; that includes swap files or data that does not need to be available or saved after a reboot. The data stored on this drive will be lost.
To create a swap file in the directory that's defined by the ResourceDisk.MountPoint parameter, you can update the /etc/waagent.conf file by setting the following three parameters:

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=xx

Note: the xx placeholder represents the desired size of the swap file in megabytes (MB). For instance, to create a swap file of 4 GB you could use these lines:

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=4096


Restart the WALinuxAgent service by running one of the following commands, depending on the system in question:

Ubuntu: service walinuxagent restart
Red Hat/CentOS: service waagent restart
Run one of the following commands to show the new swap space that's being used after the restart:

dmesg | grep swap
swapon -s
cat /proc/swaps
file /mnt/resource/swapfile
free| grep -i swap

How to reduce LVM partition size in RHEL and CentOS

Sometimes, when we are running out of disk space on a Linux box and the partition was created on LVM, we can free up space in the volume group by reducing the LVM with the lvreduce command. In this article we will discuss the steps required to safely reduce the size of an LVM on CentOS and RHEL servers. The steps below apply when the LVM partition is formatted with an ext filesystem (ext2/3/4).
Scenario: suppose we want to reduce /home, which is on an LVM partition and formatted as ext4, by 2 GB.
[root@cloud ~]# df -h /home/
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/vg_cloud-LogVol00
                       12G   9.2G  1.9G  84%  /home

Step:1 Unmount the file system

Use the umount command below:
[root@cloud ~]# umount /home/

Step:2 Check the file system for errors using the e2fsck command.

[root@cloud ~]# e2fsck -f /dev/mapper/vg_cloud-LogVol00
 e2fsck 1.41.12 (17-May-2010)
 Pass 1: Checking inodes, blocks, and sizes
 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 /dev/mapper/vg_cloud-LogVol00: 12/770640 files (0.0% non-contiguous), 2446686/3084288 blocks
Note: In the above e2fsck command, we use the option ‘-f’ to forcefully check the file system, even if the file system is clean.

Step:3 Reduce or shrink the size of /home to the desired size.

As shown in the scenario above, the size of /home is 12 GB; reducing it by 2 GB brings it down to 10 GB.
[root@cloud ~]# resize2fs /dev/mapper/vg_cloud-LogVol00 10G
 resize2fs 1.41.12 (17-May-2010)
 Resizing the filesystem on /dev/mapper/vg_cloud-LogVol00 to 2621440 (4k) blocks.
 The filesystem on /dev/mapper/vg_cloud-LogVol00 is now 2621440 blocks long.

Step:4 Now reduce the size of the logical volume using the lvreduce command.

[root@cloud ~]# lvreduce -L 10G /dev/mapper/vg_cloud-LogVol00
 WARNING: Reducing active logical volume to 10.00 GiB
 THIS MAY DESTROY YOUR DATA (filesystem etc.)
 Do you really want to reduce LogVol00? [y/n]: y
 Reducing logical volume LogVol00 to 10.00 GiB
 Logical volume LogVol00 successfully resized

Step:5 (Optional) To be on the safer side, check the reduced file system for errors again.

[root@cloud ~]# e2fsck -f /dev/mapper/vg_cloud-LogVol00
 e2fsck 1.41.12 (17-May-2010)
 Pass 1: Checking inodes, blocks, and sizes
 Pass 2: Checking directory structure
 Pass 3: Checking directory connectivity
 Pass 4: Checking reference counts
 Pass 5: Checking group summary information
 /dev/mapper/vg_cloud-LogVol00: 12/648960 files (0.0% non-contiguous), 2438425/2621440 blocks

Step:6 Mount the file system and verify its size.

[root@cloud ~]# mount /home/
 [root@cloud ~]# df -h /home/
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/vg_cloud-LogVol00
                       9.9G  9.2G  208M  98% /home