

🚀DevOps Zero to Hero: 💡Day 15-Mastering Shell Scripting — Basics to Advanced🔥

 

Shell scripting is an essential skill for DevOps engineers, as it empowers you to automate tasks, streamline processes, and manage infrastructure more efficiently. In this comprehensive guide, we’ll take you on a journey from the basics of shell scripting to advanced techniques, complete with real-world examples and code snippets.

Table of Contents

  1. Introduction to Shell Scripting
  2. Getting Started with Bash
  3. Basic Script Structure
  4. Variables and Data Types
  5. Control Structures
  6. Functions
  7. File Handling
  8. Advanced Techniques
  9. Best Practices
  10. Real-world Examples

1. Introduction to Shell Scripting

Shell scripting is the art of writing scripts that run in a command-line shell. In the DevOps world, this usually means using Bash (Bourne Again Shell), which is the default shell on most Linux systems.

Shell scripts are used for various purposes, such as automating repetitive tasks, configuring servers, and managing deployments.

2. Getting Started with Bash

Before diving into scripting, make sure you have a basic understanding of Bash. You can start by opening a Linux terminal and trying out simple commands like ls, pwd, and echo.

3. Basic Script Structure

A Bash script typically starts with a shebang line that specifies the interpreter to use. Here’s a simple script:

#!/bin/bash
# This is a comment
echo "Hello, World!"
  • The #!/bin/bash line tells the system to use the Bash interpreter.
  • Comments start with # and are ignored by the shell.
  • echo is used to print text to the console.

4. Variables and Data Types

In Bash, you can declare variables like this:

name="DevOps"

Bash has no explicit data types. Variables are treated as strings by default, but you can perform arithmetic operations using (( )):

count=5
((count++))
echo "Count: $count"

5. Control Structures

Control structures help you make decisions and control the flow of your scripts. Here are some common ones:

  • If statements:
if [ "$var" == "value" ]; then
    echo "Variable is equal to 'value'"
fi
  • For loops:
for fruit in apple banana cherry; do
    echo "I like $fruit"
done
  • While loops:
count=0
while [ $count -lt 5 ]; do
    echo "Count: $count"
    ((count++))
done

6. Functions

Functions allow you to modularize your code. Here’s how to define and call a function:

say_hello() {
    echo "Hello, $1!"
}

say_hello "Alice"

7. File Handling

Dealing with files is common in DevOps tasks. You can read, write, and manipulate files in Bash:

  • Reading a file line by line (IFS= and -r preserve leading whitespace and backslashes):
while IFS= read -r line; do
    echo "Line: $line"
done < file.txt
  • Writing to a file:
echo "Hello, World!" > output.txt

8. Advanced Techniques

To become a proficient DevOps scripter, you should explore advanced techniques; a short script combining them follows the list below:

  • Command-line arguments: Parse and use command-line arguments in your scripts.
  • Error handling: Implement error-checking and logging in your scripts.
  • Regular expressions: Use regex for pattern matching and text manipulation.
  • Piping and redirection: Combine commands using pipes (|) and redirect input/output.
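
To make these ideas concrete, here is a minimal sketch that combines all four: it takes its inputs as command-line arguments, validates them, matches an extended regular expression with grep, and pipes the matches through tail. The script name, the default pattern, and the paths are illustrative placeholders.

#!/bin/bash
set -euo pipefail   # stop on errors, unset variables, or failed pipeline stages

# Usage: ./scan_log.sh <log_file> [pattern]
if [ "$#" -lt 1 ]; then
    echo "Usage: $0 <log_file> [pattern]" >&2
    exit 1
fi

log_file="$1"
pattern="${2:-ERROR|WARN}"   # default extended regex if no second argument is given

if [ ! -f "$log_file" ]; then
    echo "Error: '$log_file' does not exist" >&2
    exit 1
fi

# Regular expression matching: count the lines that match the pattern
matches=$(grep -cE "$pattern" "$log_file" || true)
echo "$matches line(s) in $log_file match /$pattern/"

# Piping: show only the last five matching lines
grep -E "$pattern" "$log_file" | tail -n 5 || true

You could invoke it as, for example, ./scan_log.sh application.log "ERROR|Timeout".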

9. Best Practices

Follow these best practices for writing maintainable and efficient shell scripts (a minimal skeleton applying them follows the list):

  • Use meaningful variable and function names.
  • Comment your code to explain complex logic.
  • Modularize your code with functions.
  • Test your scripts thoroughly before deploying them.
  • Use version control to track changes.
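
As a reference point, here is one possible skeleton that applies these practices; the script name, variables, and the deploy step are hypothetical placeholders rather than a prescribed layout.

#!/bin/bash
# deploy_app.sh - illustrative skeleton only; names and paths are placeholders
set -euo pipefail

readonly APP_NAME="my-app"            # meaningful, read-only configuration
readonly RELEASE_DIR="/opt/my-app"    # hypothetical target directory

log() {
    # Helper so every message is timestamped consistently
    echo "[$(date '+%F %T')] $*"
}

deploy() {
    log "Deploying $APP_NAME to $RELEASE_DIR"
    # ... real deployment steps would go here ...
}

main() {
    deploy
    log "Done"
}

main "$@"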

10. Real-world Examples

Here are some real-world scenarios where shell scripting is invaluable:

  • Automating deployments: Write scripts to deploy applications and configurations.
  • Server provisioning: Automate server setup and configuration.
  • Backup and cleanup: Schedule backups and perform routine system maintenance.
  • Monitoring and alerts: Use scripts to monitor system metrics and send alerts.
  • Log analysis: Analyze log files for errors and trends.

File Handling and Text Processing

a. Searching for Keywords in Log Files

Suppose you need to search for specific keywords in log files for troubleshooting. You can use grep for this:

#!/bin/bash

search_term="error"
log_file="application.log"
if grep -q "$search_term" "$log_file"; then
    echo "Found '$search_term' in $log_file"
else
    echo "No '$search_term' found in $log_file"
fi

b. Parsing CSV Files

You often need to work with CSV files in DevOps tasks. Here’s a script that reads a CSV file and extracts data:

#!/bin/bash

csv_file="data.csv"
while IFS=',' read -r col1 col2 col3; do
    echo "Column 1: $col1, Column 2: $col2, Column 3: $col3"
done < "$csv_file"

Automation and Server Management

a. Automating Software Updates

Automation is crucial in DevOps. You can create a script to update your system and installed packages:

#!/bin/bash

# Update system packages
sudo apt update
sudo apt upgrade -y
# Update Docker containers (if applicable)
docker-compose -f /path/to/docker-compose.yaml pull
docker-compose -f /path/to/docker-compose.yaml up -d

b. Server Backup Script

Creating regular backups of your servers is essential. Here’s a simple backup script using rsync:

#!/bin/bash

backup_dir="/backup"
source_dir="/var/www/html"
# Create a backup directory
mkdir -p "$backup_dir"
# Perform the backup
rsync -av "$source_dir" "$backup_dir"

Error Handling and Logging

a. Logging Script Output

Logging helps you keep track of script execution and errors:

#!/bin/bash

log_file="/var/log/my_script.log"
# Redirect stdout and stderr to a log file
exec > "$log_file" 2>&1
echo "Script started at $(date)"
# Your script logic here
echo "Script finished at $(date)"

b. Error Handling

You can add error handling to your scripts using set -e to exit on error:

#!/bin/bash

set -e
# Your script logic here
# If an error occurs, the script will exit here
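
Building on set -e, a common pattern is to register a trap on ERR so the script reports what failed before exiting; the sketch below is illustrative, and the commands inside it are placeholders.

#!/bin/bash
set -euo pipefail

# Report the failing line before the script exits because of set -e
trap 'echo "Error: command failed on line $LINENO" >&2' ERR

cp /etc/hosts /tmp/hosts.bak   # placeholder command that normally succeeds
false                          # placeholder failure; fires the trap, then the script exits
echo "This line is never reached"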

Automation with Cron Jobs

Cron jobs are scheduled tasks in Unix-like systems. You can use them for regular DevOps tasks:

# Edit the crontab using 'crontab -e'
# This script will run every day at midnight
0 0 * * * /path/to/your/script.sh
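
If you would rather install the entry from a script than edit the crontab interactively, one widely used pattern appends it to the current crontab; the script path below is a placeholder.

#!/bin/bash
# Append a daily midnight job for the current user without opening an editor.
# 2>/dev/null hides the "no crontab for user" message on first use.
(crontab -l 2>/dev/null; echo "0 0 * * * /path/to/your/script.sh") | crontab -

# Confirm the schedule was installed
crontab -l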

Managing Environment Variables

Managing environment variables is crucial for configuration in DevOps:

#!/bin/bash

# Define environment variables
export DATABASE_URL="mysql://username:password@localhost/database"
# Use environment variables in your scripts
echo "Database URL: $DATABASE_URL"

Check web status

Let’s create a script to automate a common DevOps task — checking the status of a web server. Create a file named check_web_status.sh and add the following code:

#!/bin/bash

# Define a function to check the status of a website
check_website() {
    local url="$1"
    local response=$(curl -s -o /dev/null -w "%{http_code}" "$url")

    if [ "$response" == "200" ]; then
        echo "Website $url is up and running!"
    else
        echo "Website $url is down!"
    fi
}
# Call the function with a sample website
check_website "https://www.example.com"

In this script:

  • We use the curl command to send an HTTP request to the website.
  • The -s flag makes curl operate in silent mode, suppressing progress and error messages.
  • -o /dev/null discards the response body.
  • -w "%{http_code}" instructs curl to print only the HTTP response code.
  • We compare the response code to determine if the website is up or down.

Run the script with ./check_web_status.sh, and it will check the status of "https://www.example.com" and provide the result.
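
Because the check lives in a function, it is easy to reuse. For example, you could append a loop like the one below to check_web_status.sh to test several sites in one run; the URL list is illustrative.

# Appended to check_web_status.sh, after the check_website function is defined
for url in "https://www.example.com" "https://www.example.org"; do
    check_website "$url"
done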

Automate server monitoring

Let’s create a script to automate server monitoring by checking CPU usage. Create a file named monitor_cpu.sh and add the following code:

#!/bin/bash

# Get CPU usage percentage
cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2 + $4}')
echo "CPU Usage: $cpu_usage%"

In this script:

  • We use the top command to get CPU usage information.
  • top -bn1 runs top in batch mode for a single iteration.
  • grep "Cpu(s)" extracts the line with CPU usage details.
  • awk '{print $2 + $4}' calculates the sum of user and system CPU usage percentages.

Run the script with ./monitor_cpu.sh, and it will display the current CPU usage percentage.
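
To turn the measurement into a simple alert, you can compare it against a threshold. Bash arithmetic is integer-only, so the sketch below hands the floating-point comparison to awk; the 80% threshold is an arbitrary example.

#!/bin/bash

threshold=80
cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2 + $4}')

# awk exits with status 0 (success) only when usage exceeds the limit
if awk -v usage="$cpu_usage" -v limit="$threshold" 'BEGIN { exit !(usage > limit) }'; then
    echo "WARNING: CPU usage is ${cpu_usage}% (threshold: ${threshold}%)"
else
    echo "CPU usage is ${cpu_usage}% - within limits"
fi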

Monitor disk space

Let’s create a script to automate server monitoring by checking disk space. Create a file named monitor_disk_space.sh and add the following code:

#!/bin/bash

# Set the threshold for disk usage (in percentage)
threshold=90
# Get the disk usage percentage
disk_usage=$(df -h / | tail -n 1 | awk '{print $5}' | sed 's/%//')
if [ "$disk_usage" -ge "$threshold" ]; then
echo "Disk space is running low! Disk Usage: $disk_usage%"
else
echo "Disk space is within acceptable limits. Disk Usage: $disk_usage%"
fi

In this script:

  • We set a threshold for disk usage (in this case, 90%).
  • We use the df command to get disk usage information for the root filesystem (/).
  • tail -n 1 extracts the last line of the df output.
  • awk '{print $5}' extracts the fifth column, which contains the usage percentage.
  • We compare the usage percentage to the threshold and provide a warning if it exceeds the limit.

Run the script with ./monitor_disk_space.sh, and it will check the disk space usage and issue a warning if it's above the threshold.

Automate package installations using Functions

Let’s create a script to automate the installation of packages using a function. Create a file named install_packages.sh and add the following code:

#!/bin/bash

# Define a function to install packages
install_packages() {
    local package_manager=""

    # Check which package manager is available
    if [ -x "$(command -v apt-get)" ]; then
        package_manager="apt-get"
    elif [ -x "$(command -v yum)" ]; then
        package_manager="yum"
    else
        echo "Error: No supported package manager found."
        exit 1
    fi

    echo "Updating package lists..."
    sudo $package_manager update -y
    echo "Installing packages..."
    sudo $package_manager install -y package1 package2 package3
}
# Call the function to install packages
install_packages

In this script:

  • We define a function install_packages that checks for available package managers (apt-get or yum) and installs specified packages.
  • We use command -v to check if a command is available.
  • The -y flag is used to automatically answer yes to prompts during package installation.

Run the script with ./install_packages.sh, and it will update the package lists and install the specified packages based on the available package manager.
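
Tying this back to command-line arguments, a small variation is to pass the package names in instead of hard-coding them; the sketch below keeps the same package-manager detection and is intended only as an illustration.

#!/bin/bash
# Usage: ./install_packages.sh git curl htop

install_packages() {
    local package_manager=""

    if [ -x "$(command -v apt-get)" ]; then
        package_manager="apt-get"
    elif [ -x "$(command -v yum)" ]; then
        package_manager="yum"
    else
        echo "Error: No supported package manager found." >&2
        exit 1
    fi

    sudo "$package_manager" install -y "$@"   # "$@" holds the packages passed to the function
}

if [ "$#" -eq 0 ]; then
    echo "Usage: $0 package1 [package2 ...]" >&2
    exit 1
fi

install_packages "$@"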

These are just a few examples of how shell scripting can be applied in real-world DevOps scenarios. As you gain experience, you’ll encounter more complex tasks that require custom scripts tailored to your infrastructure and requirements. Remember to follow best practices, document your scripts, and continually refine your skills to become a more proficient DevOps engineer.

In conclusion, mastering shell scripting is a critical skill for DevOps engineers. This guide provides you with a solid foundation and real-world examples to help you become proficient in shell scripting and streamline your DevOps tasks. Happy scripting!

🚀DevOps Zero to Hero: 💡Day 14 — Communication & Collaboration Tools🛠

 

Welcome back to our 30-day DevOps odyssey! Today, on Day 14, we’re immersing ourselves in the captivating world of collaboration and communication tools, the driving force behind impeccable teamwork and efficient project management in any DevOps arena. We’ll venture into the realms of Slack, Microsoft Teams, and Atlassian tools, and discover how they intertwine seamlessly with DevOps instruments to streamline your workflow. Let’s start this informative journey into the core of effective communication.

Elevating Collaboration through Cutting-edge Tools

In the dynamic universe of software development, the seamless exchange of ideas among team members is not just a luxury but a necessity. Effective DevOps teams thrive on cohesive communication channels to discuss ideas, troubleshoot challenges, and synchronize efforts. Today, we’re diving into three heavyweights among collaboration tools, reshaping the DevOps collaboration paradigm:

1. Slack: Real-time Conversations Redefined

At the forefront of collaborative messaging apps, Slack emerges as a unifying platform for instant communication. With its versatile features encompassing channels, private groups, direct messaging, and fluid file sharing, Slack transforms into a robust nexus for team cohesion.

Key Features:
Channels: Slack channels are dedicated spaces for specific topics, projects, or departments. Team members can join relevant channels to participate in discussions and share updates related to their work.

Private Groups: Private groups allow selected team members to have secure, confidential discussions away from the public channels.

Direct Messages: Team members can send direct messages to one another for quick and private communication.

File Sharing: Slack supports easy file sharing, allowing teams to exchange documents, code snippets, images, and other files.

2. Microsoft Teams: An Ecosystem of Unified Collaboration

Microsoft Teams emerges as the epitome of integrated collaboration, seamlessly weaving together chat, video conferencing, file storage, and application integration within the Microsoft 365 ecosystem. It’s the ultimate workspace, especially for teams capitalizing on Microsoft tools.

Key Features:
Chat: Like Slack, Microsoft Teams enables real-time messaging in channels and private chats.

Video Conferencing: Teams supports seamless video conferencing, allowing team members to hold virtual meetings, conduct stand-ups, and collaborate face-to-face.

File Sharing and Collaboration: Teams integrates with Microsoft SharePoint and OneDrive, making it easy to share and collaborate on documents, presentations, and other files.

Application Integration: Teams integrates with a wide range of Microsoft and third-party apps, streamlining workflows and centralizing information.

3. Atlassian Tools: Orchestrating Development and Management

The Atlassian toolkit, boasting Jira, Confluence, and Bitbucket, empowers teams with advanced project management and development workflows. Jira assists in tracking projects, Confluence fosters collaborative documentation, and Bitbucket streamlines the management of Git repositories.

Key Features:
Jira: Jira is an issue and project tracking tool that allows teams to plan, track, and manage their work. It provides customizable workflows, issue prioritization, and reporting capabilities.

Confluence: Confluence is a collaborative documentation platform where teams can create, share, and organize project documentation, meeting notes, and knowledge bases.

Bitbucket: Bitbucket is a web-based Git repository management solution that allows teams to host, review, and manage code repositories securely.

Architecting Effective Communication Channels

Now, let’s blueprint communication channels that are the cornerstone of effective collaboration:

1. Channel Segmentation: In platforms like Slack and Microsoft Teams, crafting dedicated channels for specific projects or themes maintains focus and organization. Channels like #development and #operations ensure discussions stay on track.

2. Channel Guidelines: Clearly outlining the purpose and guidelines for each channel prevents clutter and keeps conversations purposeful. For instance, a channel like #feedback invites open expression, while guidelines for the #development channel keep conversations code-centric.

3. Harnessing Tags and Mentions: The art of tagging and mentioning directs communication effectively. Tag a team member using “@username” to bring their insight into discussions, while “@channel” guarantees crucial announcements don’t slip through the cracks.

4. A Culture of Open Communication: Channels like #ideas incubate open dialogue, nurturing innovation and collective growth.

5. The Evolution of Channels: Just as projects evolve, so should communication channels. Regular review ensures they stay relevant, productive, and aligned with project goals.

Integration: Elevating Synergy with DevOps Tools

The harmonious blend of collaboration platforms with DevOps tools yields unparalleled efficiency. Integrations amplify visibility, streamline workflows, and catalyze real-time feedback. Let’s embark on a journey through common integration scenarios, complete with tangible examples:

1. CI/CD Integration: By melding CI/CD tools like Jenkins or CircleCI with Slack or Microsoft Teams, teams remain informed about code changes, builds, and deployments.

Example: Jenkins Integration with Slack. Use Jenkins’ Slack plugin to send automated notifications to a designated Slack channel whenever a build is triggered or a deployment takes place.

// Jenkinsfile
stage('Build') {
    steps {
        // Build your code here
        slackSend(channel: '#build-notifications', message: "Build successful!")
    }
}

2. Issue Tracking Integration: The fusion of issue tracking tools like Jira or GitHub Issues with collaboration platforms ensures that the entire team is privy to critical updates and discussions.

Example: GitHub Issues Integration with Microsoft Teams. Use a GitHub Actions workflow to trigger notifications in a linked Microsoft Teams channel every time a new issue is created or updated.

# .github/workflows/issue-notifications.yml
on:
  issues:
    types: [opened, edited]

jobs:
  notify_teams:
    runs-on: ubuntu-latest
    steps:
      - name: Notify Microsoft Teams
        uses: microsoft/Teams-Notify@v1
        with:
          title: New GitHub Issue
          message: A new issue has been created or updated.

3. Monitoring Alerts Integration: Integrating monitoring tools such as Prometheus or Grafana with Slack ensures swift alerts during incidents.

Example: Prometheus Alerts Integration with Slack. Configure Prometheus alerting rules and route the resulting alerts, via Alertmanager, to a dedicated Slack channel when predefined thresholds are breached.

# prometheus.yml
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

rule_files:
  - alerts.rules.yml

# alerts.rules.yml
groups:
  - name: example
    rules:
      - alert: HighCpuUsage
        # 100% minus the average idle percentage over 5 minutes, i.e. overall CPU usage above 90%
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: High CPU usage detected

4. Documentation Integration: Integrating documentation platforms like Confluence with collaboration tools fosters easy access to and discussion of project knowledge.

Example: Confluence Integration with Microsoft Teams. Connect your Confluence space with a Microsoft Teams channel, allowing direct access to crucial project documentation.

# In a Microsoft Teams conversation
@Confluence Document: "Explore our DevOps Best Practices for the latest insights!"

Embrace the DevOps Collaboration Edge

By using teamwork tools and communication methods along with DevOps strategies, teams can achieve greater efficiency, clear understanding, and success. Strong communication channels and smart tool combinations make teamwork smoother, help respond quickly to changes, and give teams the ability to create excellent software products. As you continue learning about DevOps with us, make the most of collaboration tools to improve your skills and reach the highest level of DevOps expertise and success.

Thank you for joining us on Day 14! Stay tuned for more captivating insights as we march toward DevOps brilliance. Happy collaborating!

🚀DevOps Zero to Hero: 💡Day 13 🐧Leveraging Linux for DevOps: Powering Efficiency and Collaboration🐧

 

Introduction

In the ever-evolving landscape of software development and IT operations, the DevOps approach has emerged as a game-changer. It focuses on breaking down the traditional silos between development and operations teams, fostering collaboration, and promoting continuous integration and delivery. At the core of this methodology lies the utilization of robust tools and technologies, with Linux leading the charge as the preferred operating system for DevOps practices. In this article, we will explore the role of Linux in DevOps, its advantages, and some of the key tools that make this combination a force to be reckoned with.

Linux and DevOps: A Synergistic Relationship

Linux, the open-source operating system, forms the backbone of many DevOps practices due to its inherent flexibility, stability, and a vast array of tools available within its ecosystem. Its ability to integrate seamlessly into various environments, from on-premises servers to cloud-based solutions, is a key factor in its popularity among DevOps professionals.

Linux comes in a variety of distributions, often referred to as “distros,” each tailored to specific use cases and preferences. These distributions offer different package managers, default desktop environments, software repositories, and levels of support. Here are some of the most popular and notable Linux distributions:

  1. Ubuntu: Known for its user-friendliness and strong community support, Ubuntu is a widely used distribution. It offers regular releases and Long Term Support (LTS) versions for stable environments. The default desktop environment is GNOME, but there are official flavors with different desktops like KDE, Xfce, and more.
  2. Debian: Debian is one of the oldest and most respected Linux distributions. It emphasizes stability and reliability, making it a popular choice for servers. Ubuntu itself is based on Debian, and many other distributions are also derived from it.
  3. Fedora: Fedora is focused on innovation and tends to include cutting-edge software. It often serves as a testing ground for new technologies that might eventually make their way into Red Hat Enterprise Linux (RHEL). Fedora Workstation is a user-friendly version with the GNOME desktop.
  4. CentOS: CentOS was known for providing a free, community-supported version of RHEL. However, the project shifted its focus to CentOS Stream, which tracks RHEL’s development more closely. CentOS Stream is seen as a rolling-release testing environment for RHEL.
  5. Red Hat Enterprise Linux (RHEL): This distribution is geared towards enterprise environments, offering long-term support, certifications, and specialized tools. It’s known for stability, security, and scalability, making it a popular choice for corporate servers.
  6. Arch Linux: Arch Linux is a distribution for more experienced users who appreciate a DIY approach. It provides a rolling-release model, where packages are updated continuously. Arch Linux offers a high degree of customization and control over the system.
  7. openSUSE: openSUSE offers two main variants: Leap and Tumbleweed. Leap focuses on stability and is suitable for servers and workstations. Tumbleweed is a rolling-release version with more up-to-date software.
  8. Kali Linux: Kali Linux is a specialized distribution designed for penetration testing and cybersecurity professionals. It comes preloaded with a wide range of security tools.
  9. Linux Mint: Linux Mint aims to provide a polished and user-friendly experience. It offers different desktop environments, such as Cinnamon and Xfce, and includes multimedia codecs by default.
  10. Manjaro: Based on Arch Linux, Manjaro aims to make Arch more accessible to a broader audience. It provides an easier installation process, pre-installed software, and access to the Arch User Repository (AUR).
  11. Gentoo: Gentoo is a distribution for enthusiasts who enjoy extreme customization. It uses a source-based package management system that compiles software on the user’s machine, allowing for optimized performance.
  12. Slackware: One of the oldest distributions, Slackware follows a simple and minimalist philosophy. It’s known for its adherence to the UNIX principles and straightforward approach.

Each Linux distribution has its strengths and is tailored to specific use cases. DevOps professionals may choose distributions based on factors such as familiarity, support, stability, or the specific tools and technologies they need for their projects. The beauty of the Linux ecosystem lies in its diversity, enabling users to find a distribution that aligns perfectly with their needs and preferences.

Advantages of Linux for DevOps

1. Flexibility and Customizability: DevOps environments require customization to meet specific project needs. Linux allows practitioners to tailor their systems to exact requirements, from minimalistic server setups to complex, multifunctional environments.

2. Automation Capabilities: Automation is a cornerstone of DevOps. Linux’s command-line interface (CLI) and scripting capabilities enable developers and operations teams to automate repetitive tasks, streamline processes, and maintain consistency across the development and deployment pipeline.

3. Strong Security: Linux’s robust security features and permissions system are crucial in safeguarding sensitive data and applications. Its open-source nature also facilitates rapid response to security vulnerabilities, enhancing overall system security.

4. Rich Package Management: Package managers like APT (Advanced Package Tool) and YUM (Yellowdog Updater, Modified) simplify software installation and updates, aiding in the management of dependencies and ensuring consistent environments across development, testing, and production stages.

5. Containerization and Orchestration: Linux has played a pivotal role in the rise of containerization technologies like Docker and container orchestration platforms like Kubernetes. These tools revolutionize application deployment by offering portability, scalability, and efficient resource utilization.

Key DevOps Tools in the Linux Ecosystem

1. Docker: This platform enables the creation, distribution, and execution of applications within lightweight, isolated containers. Docker accelerates development by providing consistent environments and simplifying application deployment.

2. Kubernetes: Building on the concept of containerization, Kubernetes automates the deployment, scaling, and management of containerized applications. It ensures high availability, efficient resource utilization, and easy scaling.

3. Ansible: As a configuration management and automation tool, Ansible leverages SSH to automate tasks, including application deployment, server provisioning, and configuration management, across a range of Linux systems.

4. Jenkins: An open-source automation server, Jenkins supports the entire DevOps lifecycle, from building, testing, and deploying to monitoring and reporting. It integrates well with various Linux distributions.

5. Git: Although not exclusive to Linux, Git is a distributed version control system that is widely used in DevOps workflows. It enables collaborative code management and version tracking, crucial for ensuring code quality and traceability.

Linux Filesystem Structure

Linux follows a hierarchical filesystem structure that organizes files and directories in a logical manner. The root directory, denoted by ‘/’, serves as the starting point for all other directories. Here’s a breakdown of key directories and their purposes:

  1. /bin and /sbin: Essential system binaries and administrator binaries respectively, used for core system functionality and management.
  2. /etc: Configuration files for system and application settings.
  3. /home: Home directories for users, containing personal files and configurations.
  4. /var: Variable data files, such as log files, spool files, and temporary files.
  5. /tmp: Temporary files that are cleared upon reboot.
  6. /usr: User programs and data, including subdirectories like /usr/bin, /usr/lib, and /usr/share.
  7. /opt: Optional software packages, often added by users or third-party software.
  8. /lib: Libraries needed for programs in /bin and /sbin.
  9. /dev: Device files representing hardware devices.
  10. /proc: Virtual filesystem that provides information about running processes and system resources.

Basic Linux Commands for DevOps

pwd (Print Working Directory): Displays the current directory’s absolute path.

pwd

ls (List): Lists files and directories in the current directory.

ls

ls -l   # Detailed list
ls -a   # Show hidden files

cd (Change Directory): Moves to a specified directory.

cd /path/to/directory 
cd .. # Move up one directory
cd ~ # Move to the user's home directory

mkdir (Make Directory): Creates a new directory.

mkdir new_directory

rm (Remove): Deletes files or directories. Be extra careful while using these remove commands.

rm file.txt 
rm -r directory # Recursive deletion

cp (Copy): Copies files or directories.

cp file.txt /path/to/destination 
cp -r directory /path/to/destination # Recursive copy

mv (Move): Moves or renames files or directories.

mv file.txt new_location 
mv old_name new_name

touch: Creates an empty file or updates the timestamp of an existing file.

touch file.txt

cat (Concatenate): Displays the content of a file.

cat file.txt

echo: Prints text to the terminal or a file.

echo "Hello, DevOps!" 
echo "Hello, DevOps!" > greeting.txt # Redirect to a file

chmod (Change Mode): Changes file permissions.

chmod +x script.sh   # Make a script executable

chown (Change Ownership): Changes file or directory ownership.

chown user:group file.txt

ps (Process Status): Lists currently running processes.

ps aux

top: Interactive process viewer, showing system statistics and active processes.

top

df (Disk Free): Displays filesystem disk space usage.

df -h   # Human-readable format

These are just a few of the fundamental commands that can significantly enhance your productivity as a DevOps practitioner. By mastering these commands and understanding the Linux filesystem structure, you’ll be better equipped to manage servers, automate tasks, and streamline your DevOps workflows effectively. Linux’s flexibility, combined with your command-line prowess, will empower you to excel in the dynamic world of DevOps.

Conclusion

Linux has emerged as a linchpin in the DevOps movement, offering an environment rich in tools, flexibility, and automation capabilities. Its open-source nature aligns well with the principles of collaboration and continuous improvement that DevOps espouses. By harnessing Linux’s power, DevOps teams can streamline their workflows, enhance security, and accelerate the delivery of high-quality applications. As the DevOps landscape continues to evolve, Linux is poised to remain a central player in shaping the future of efficient, collaborative, and agile software development and IT operations.