Linuxtechi

9 tee Command Examples in Linux

The Linux tee command is a command-line tool that reads from standard input and writes the result to standard output and to files at the same time. In other words, tee lets you kill two birds with one stone: it reads from standard input and writes the result both to one or more files and to standard output simultaneously. In this guide, we shed more light on the Linux tee command and use a few examples to demonstrate its usage.

Tee Command Syntax

The tee command syntax is quite simple and takes the following format:

$ tee [OPTIONS] filename

Here are some of the options that you can use with tee command:

linux-tee-command-options

In tee command’s syntax, filename refers to one or more files.

With that in mind, let’s check out a few examples of how the command is used.

Example 1) Basic usage of tee command

As described earlier, the main function of the tee command is to display the output of a command (stdout) and save it to a file. In the example below, we inspect the block devices on our system and pipe the results to the tee command, which displays the output on the terminal while simultaneously saving it to a new file called block_devices.txt.

$ lsblk | tee block_devices.txt

lsblk-tee-command-output-linux

Feel free to examine the contents of the block_devices.txt file using the cat command as shown:

$ cat block_devices.txt

Example 2) Save command output to multiple files using tee

Additionally, you can write a command’s output to several space-separated files as shown in the syntax below.

$ command | tee file1 file2 file3 . . .

In the following example, we have invoked the hostnamectl command to print the hostname of our system among other details, and saved the standard output to two files, file1.txt and file2.txt:

$ hostnamectl | tee file1.txt file2.txt

tee-command-output-files-linux

Once again, you can confirm the existence of the output in the two files using the cat command as shown:

$ cat file1.txt
$ cat file2.txt

Example 3) Suppress output of tee command

If you want to hide or suppress the output that tee prints on the screen, redirect the output to /dev/null as shown:

$ command | tee file > /dev/null

For example,

$ df -Th | tee file4.txt > /dev/null

tee-command-suppress-output

Example 4) Append output to a file with tee command

By default, the tee command overwrites the contents of a file. To append the output and prevent the erasure of the current content, use the -a or --append option.

$ command | tee -a file

In the example below, we have appended the output of the date command to file1.txt, which already contains the hostnamectl output from the earlier example.

$ date | tee -a file1.txt

Append-output-tee-command-linux

Example 5) Use tee together with sudo command

Suppose that, as a sudo user, you want to write to a file that is owned by the root user. Naturally, any elevated operation will require that you invoke sudo before the command.

To achieve this, simply prefix the tee command with sudo as shown below.

$ echo "10.200.50.20 db-01" | sudo tee -a /etc/hosts/

tee-with-sudo-command-linux
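For comparison, here is a small sketch of why the sudo has to go on tee itself rather than on the producing command. With a plain shell redirection, the redirection is performed by your unprivileged shell before sudo ever runs, so writing to a root-owned file fails (the host entry is the same illustrative one used above):

$ echo "10.200.50.20 db-01" | sudo tee -a /etc/hosts     # works: tee itself runs as root and opens /etc/hosts
$ sudo echo "10.200.50.20 db-01" >> /etc/hosts           # fails with "Permission denied": the >> redirection is done by your non-root shell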

So, tee receives the …


9 Quick chmod Command Examples in Linux

Chmod command in Linux is used to change or assign permissions on files and directories. In Linux / Unix systems, accessibility to files and directories is determined by file ownership and permissions. In a previous article, we looked at how to manage file & directory ownership using the chown command. In this tutorial, we look at the chmod command.

The chmod command, short for change mode, is used to manage file and directory permissions and determines who can access them. Let’s now dive in and explore the nature of file & directory permissions and how they can be modified.

Linux permissions

To better understand how the chmod command works, it’s prudent that we study the Linux file permissions model.

In Linux, we have 3 types of file permissions: read (r), write (w) and execute (x) permissions. These permissions determine which users can read, write or execute the files. You can assign these permissions using the text or octal (numeric) notation as we shall later discuss in this tutorial.

Permissions on files and directories can apply to the owner of the file (u), the group (g), or others (o):

  • u   –  Owner of the file
  • g   –  Group
  • o   –  Others

File permissions are listed using the ls -l command. The -l flag produces a long listing that includes the file permissions. The permissions are arranged in three sets: the user, the group and others, respectively.

To get a better understanding of file permissions, we are going to list the contents of our directory as shown:

$ ls -l

file-directory-permissions-linux

Starting from the extreme left, the first character/symbol indicates the file type. A hyphen (-) indicates that the file is a regular file. The symbol d indicates that it is a directory. Symbol l indicates that it’s a symbolic link.

The remaining nine characters are segmented into 3 triplets each bearing three symbols r(read), w(write) and x(execute). As pointed out earlier, the first segment points to the owner permissions, the second indicates the group permissions and the last portion specifies the permissions that other users have on the file or directory.

From the output, we can see that we have 2 files (hello.txt & reports.zip) and a single directory.

Let’s examine the first file

-rw-rw-r-- 1 linuxtechi linuxtechi   35 Aug 17 15:42 hello.txt

For the first file, the -rw-rw-r-- permissions imply that the owner of the file has read and write permissions, the group also has read & write permissions, while other users only have read permissions. The same permissions also apply to the reports.zip compressed file.

Let’s look at the directory’s permissions:

drwxrwxr-x 2 linuxtechi linuxtechi 4096 Aug 17 15:43 sales

We can see that the owner of the directory and the group have all the permissions (read, write and execute) while other users have read and execute permissions only.

Triple hyphen symbols (---) indicate that no permissions have been granted for the owner of the file, the group, or other users.
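As a quick cross-check of how each triplet maps to a numeric value, the stat command can print both forms side by side. This is an illustrative aside using the hello.txt file and sales directory from the listing above:

$ stat -c "%A %a %n" hello.txt sales    # r=4, w=2, x=1, so rw- = 6, rwx = 7, r-x = 5
                                        # prints e.g. "-rw-rw-r-- 664 hello.txt" and "drwxrwxr-x 775 sales"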

Using chmod command to set file & directory permissions

Having looked at the file permissions and how to view them, let’s now focus on how to modify these permissions.

The chmod command in Linux is used to change file and directory permissions using either text (symbolic) or numeric (octal) notation. It takes …


Top 12 Command Line Tools to Monitor Linux

Being a Linux administrator is not an easy job. It takes a lot of time, patience, and hard work to keep systems up and running. But Linux system admins can take a breather, as they have some help in the form of command-line monitoring tools. These tools help them keep tabs on Linux server performance and fix anything found to be abnormal. In this article, we will look at the top 12 command line tools to monitor Linux performance.

1) Top

Without any doubt, the top command is the number one command-line tool to monitor Linux. It is one of the most widely used commands by Linux system administrators all over the world. It quickly provides details about all running processes in an ordered list that keeps updating in real time. Besides process names, it also displays memory usage, CPU usage and more.

top-command-line-tool-monitor-linux
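For scripting or logging, a non-interactive snapshot can also be taken in batch mode; a small sketch using standard top options:

$ top -b -n 1 | head -20    # -b: batch mode, -n 1: a single iteration, trimmed to the first 20 lines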

Also Read: 25 Top Command Examples to Monitor Linux Server Performance

2) vmstat

vmstat is the command line utility that occupies the 2nd position in our list. Its main task is to display virtual memory statistics. It also helps you display various other information, including system processes, CPU activity, paging, block IO, kernel threads and disks. vmstat is part of the default installation in almost all Linux distributions, so it is available straight away after installation.

vmstat-command-output-linux
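A common way to watch the statistics over time is to pass an interval and a count, both of which are standard vmstat arguments:

$ vmstat 2 5    # print statistics every 2 seconds, 5 times in total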

3) lsof

If you want to look at all the files currently open on the system, then you need to make use of the lsof command. It is also used to monitor all processes currently in use. One of the major advantages of this command is that it helps administrators see the files currently in use when a disk cannot be unmounted; using this command, these files can be identified easily. The lsof command is not available after a default Linux OS installation, so first we have to install it using the following commands:

For CentOS / RHEL

$ sudo yum install -y lsof              // CentOS 7 / RHEL 7 or before
$ sudo dnf install -y lsof              // CentOS 8 / RHEL 8

For Ubuntu / Debian

$ sudo apt install -y lsof
Or
$ sudo apt-get install -y lsof

To use the lsof command, type lsof and hit Enter.

lsof-command-output-linux
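A couple of more targeted invocations, relevant to the unmount scenario mentioned above (the /mnt/data path and the user name are purely illustrative):

$ sudo lsof +D /mnt/data     # recursively list open files under a mount point that refuses to unmount
$ sudo lsof -u linuxtechi    # list files opened by a specific user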

Also Read : 18 Quick ‘lsof’ command examples for Linux Geeks

4) tcpdump

Tcpdump is another command line utility that allows Linux system administrators and network engineers to monitor all TCP/IP packets transferred over a network. Using tcpdump, one can also save all the packets in a separate file for analysis in the future.

Tcpdump is not part of the default OS installation, so before you start using it, first install it via the following commands:

$ sudo yum install tcpdump -y    // CentOS 7 / RHEL 7 or before
$ sudo dnf install tcpdump -y    //CentOS 8 / RHEL 8
$ sudo apt install tcpdump -y    // Ubuntu / Debian

To start capturing packets on a specific interface, run the following command:

# tcpdump -i enp0s3

tcpdump-command-line-tool-linux
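To save the capture for later analysis, as mentioned above, the packets can be written to a file and read back; a short sketch (capture.pcap is an illustrative file name, enp0s3 is the interface from the example above):

# tcpdump -i enp0s3 -w capture.pcap    # write captured packets to a file instead of the screen
# tcpdump -r capture.pcap              # read the saved capture back for analysis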

Also Read: How to capture and analyze packets with tcpdump command on Linux

5) netstat

Netstat is one of the oldest command line utilities used for network troubleshooting. Using netstat, we can easily find network …


How to Add Remote Linux Host to Cacti for Monitoring

In the previous guide, we demonstrated how you can install Cacti monitoring server on CentOS 8. This tutorial goes a step further and shows you how you can add and monitor remote Linux hosts on Cacti. We are going to add remote Ubuntu 20.04 LTS and CentOS 8 systems to the cacti server for monitoring.

Let’s begin.

Step 1)  Install SNMP service on Linux hosts

SNMP, short for Simple Network Management Protocol is a protocol used for gathering information about devices in a network. Using SNMP, you can poll metrics such as CPU utilization, memory usage, disk utilization, network bandwidth etc. This information will, later on, be graphed in Cacti to provide an intuitive overview of the remote hosts’ performance.

With that in mind, we are going to install and enable SNMP service on both Linux hosts:

On Ubuntu 20.04

To install snmp agent, run the command:

$ sudo apt install snmp snmpd -y

On CentOS 8

$ sudo dnf install net-snmp net-snmp-utils -y

SNMP starts automatically upon installation. To confirm this, check the status by running:

$ sudo systemctl status snmpd

If the service is not running yet, start and enable it on boot as shown:

$ sudo systemctl start snmpd
$ sudo systemctl enable snmpd

We can clearly see that the service is up and running. By default, SNMP listens on UDP port 161. You can verify this using the netstat command as shown:

$ sudo netstat -pnltu | grep snmpd

netstat-snmp-linux

Step 2) Configuring SNMP service

So far, we have succeeded in installing snmp service and confirmed that it is running as expected. The next course of action is to configure the snmp service so that data can be collected and shipped to the Cacti service.

The configuration file is located at /etc/snmp/snmpd.conf

For Ubuntu 20.04

We need to configure a few parameters. First, locate the sysLocation and sysContact directives. These define your Linux client’s physical location and contact information.

Default-syslocation-syscontact-snmpd-linux

Therefore, feel free to provide your client’s location.

Syslocation-Syscontact-snmpd-ubuntu-20-04

Next, locate the agentaddress directive. This refers to the IP address and the port number that the agent will listen on.

Default-agent-address-snmpd-ubuntu-20-04

Adjust the directive as shown below where 192.168.2.106 is my client system’s address.

agentaddress  udp:192.168.2.106:161

AgentAddress-cacti-server-Ubuntu-20-04

The directive will now allow the system’s local IP to listen for snmp requests. Next up, add the following view directive above the other view directives:

view     all      included     .1      80

View-Directive-snmpd-Ubuntu-20-04

Next, change the rocommunity attribute shown below

rocommunity  public default -V systemonly
to:
rocommunity  public default -V all

rocommunity-snmpd-linux
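After saving the changes to snmpd.conf, the service typically needs a restart for them to take effect (a small step assumed here, not shown in the screenshots):

$ sudo systemctl restart snmpd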

Finally, to ensure the snmp service is working as expected, run the command below on the Linux host.

$ sudo snmpwalk -v 1 -c public -O e 192.168.2.106

You should get some massive output as shown.

snmpwalk-command-cacti-ubuntu-20-04

For CentOS 8

In CentOS 8, the configuration is slightly different. First, locate the line that begins with the com2sec  directive as shown:

default-com2sec-directive-snmpd-centos8

We will specify a new security name known as AllUser and delete the notConfigUser as shown:

Update-com2sec-directive-snmpd-centos8

Next, locate the line that starts with the group directive as shown.

Default-Group-directive-snmpd-centos8

We will modify the second attribute and specify AllGroup as the group name and AllUser as the security name, as previously defined.

Change-group-directive-snmpd-centos8

In the view section, add this line

view    AllView         included        .1

View-Directive-snmpd-centos8

Finally, locate the line beginning with the access directive.

Default-access-directive-snmpd-centos8

Modify …


How to Install Zimbra Mail Server on CentOS 8 / RHEL 8

A mail server is one of the most important servers for any organization, as most communication is done via email. There are a number of free and enterprise mail servers available in the IT world. Zimbra is one of the highest rated mail servers, and it comes in open-source and enterprise editions. In this article, we cover how to install and configure a single-node open-source Zimbra mail server on a CentOS 8 / RHEL 8 system.

Zimbra is also known as Zimbra Collaboration Suite (ZCS) because it consists of a number of components such as an MTA (Postfix), a database (MariaDB), LDAP and the mailboxd UI. Below is the architecture of Zimbra:

Zimbra-Architecure-Overview

Minimum System Requirements for Open Source Zimbra Mail Server

  • Minimal CentOS 8/ RHEL 8
  • 8 GB RAM
  • 64-bit Intel / AMD CPU (1.5 GHz)
  • Separate Partition as /opt with at least 5 GB free space
  • Fully Qualified Domain Name (FQDN), like ‘zimbra.linuxtechi.com’
  • Stable Internet Connection with Fixed Internal / Public IP

Following are my Zimbra Lab Setup details:

  • Hostname: zimbra.linuxtechi.com
  • Domain: linuxtechi.com
  • IP address: 192.168.1.60
  • DNS Server: 192.168.1.51
  • SELinux : Enabled
  • Firewall : Enabled

Before jumping into the installation steps, let’s verify the DNS records (A & MX) for our Zimbra server. Log in to your CentOS 8 / RHEL 8 system and use the dig command to query the DNS records.

Note: In case the dig command is not available, install the ‘bind-utils’ package.
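For reference, installing it on CentOS 8 / RHEL 8 would look like this:

# dnf install bind-utils -y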

Run the following dig command to query the A record of our Zimbra server:

# dig -t A zimbra.linuxtechi.com

DNS-A-Record-Zimbra-CentOS8-RHEL8

Run the following dig command to query the MX record for our domain ‘linuxtechi.com’:

# dig -t MX linuxtechi.com

Query-MX-Record-Zimbra-dig-command-CentOS8

Above outputs confirm that DNS records are configured correctly for our Zimbra mail server.

Read Also : How to Setup DNS Server (Bind) on CentOS 8 / RHEL8

Note: Before starting the Zimbra installation, please make sure no MTA (or mail server) is configured on the system. In case one is installed, first disable its service and remove its package:

# systemctl stop postfix
# dnf remove postfix -y

Let’s dive into Zimbra installation steps,

Step 1) Apply Updates, add entry in hosts file and reboot your system

Add the hostname entry to the hosts file by running the following echo command:

# echo "192.168.1.60  zimbra.linuxtechi.com" >> /etc/hosts

Run the beneath command to apply all the available updates,

# dnf update -y

Once all the updates have been installed, reboot your system.

# reboot

Step 2) Download Open source Zimbra Collaboration suite

As we discussed above, Zimbra comes in two editions. To download the open-source edition from the command line, run the following commands:

# dnf install wget tar perl net-tools nmap-ncat -y
# wget https://files.zimbra.com/downloads/8.8.15_GA/zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz

Step 3) Start Zimbra Installation via installation script

Once the compressed Zimbra tar file downloaded in step 2 is available, extract it in your current working directory using the tar command:

# tar zxpvf zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz
# ls -l
total 251560
-rw-------. 1 root root      1352 Aug 30 10:46 anaconda-ks.cfg
drwxrwxr-x. 8 1001 1001      4096 Jun 29 11:39 zcs-8.8.15_GA_3953.RHEL8_64.20200629025823
-rw-r--r--. 1 root root 257588163 Jul  1 07:16 zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz

Go to the extracted directory and execute install script to …


How to Install Cacti Monitoring Tool on CentOS 8 / RHEL 8

Cacti is a free and open source front-end network monitoring tool used to monitor and graph time-series metrics of various IT resources in your LAN. It uses the RRDtool to poll services at specified intervals and thereafter represent them on intuitive graphs.

Cacti monitors various metrics such as CPU, memory and bandwidth utilization, disk space, filesystems and running processes, to mention a few. You can monitor devices such as servers, routers, switches and even firewalls. Additionally, you can configure alerts so that in case of system downtime you receive notifications by email. In this guide, we will walk you through the installation of the Cacti monitoring tool on CentOS 8 / RHEL 8. At the time of writing this tutorial, the latest Cacti version is 1.2.14.

Step 1) Install Apache web server

Cacti is a web-based graphing tool, and therefore we need to install a web server on which the monitoring tool will run. Here, we are going to install the Apache web server. To do so, execute the command:

$ sudo dnf install httpd -y

Step 2) Install PHP and additional PHP extensions

The front-end of the Cacti monitoring tool is purely PHP-driven, so we need to install PHP and the required PHP modules. Therefore, execute the command:

$ sudo dnf install -y php php-xml php-session php-sockets php-ldap php-gd php-json php-mysqlnd php-gmp php-mbstring php-posix php-snmp php-intl

Step 3) Install MariaDB database server

Cacti requires its own database to store its configuration, as well as all the data needed to populate its graphs.

MariaDB is a fork and a drop-in replacement for MySQL. It’s considered more robust and feature-rich and while MySQL would still work, MariaDB comes highly recommended. To install the MariaDB server, run the command:

$ sudo dnf install -y mariadb-server mariadb

Step 4) Install SNMP and RRD tool

Next, we are going to install SNMP and RRDtool which are essential in gathering and processing system metrics.

$ sudo dnf install -y net-snmp net-snmp-utils net-snmp-libs rrdtool

Step 5)  Start and enable services

Having installed all the necessary services required for Cacti to run, we are going to start them as shown:

$ sudo systemctl start httpd
$ sudo systemctl start snmpd
$ sudo systemctl start mariadb

Additionally, ensure to enable them on boot, such that they automatically start upon booting or a reboot.

$ sudo systemctl enable httpd
$ sudo systemctl enable snmpd
$ sudo systemctl enable mariadb

Step 6) Create a database for Cacti

In this step, we are going to create a database and a user for Cacti, and grant all privileges on that database to the cacti user.
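These statements are run from inside the MariaDB shell; a typical way to open it (assuming the MariaDB root credentials set earlier) is shown below, after which you can run the following commands:

$ sudo mysql -u root -p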

CREATE DATABASE cactidb;
GRANT ALL ON cactidb.* TO 'cacti'@'localhost' IDENTIFIED BY 'cactipassword';
FLUSH PRIVILEGES;
EXIT;

Be sure to note down the database name, user and password as these will be required later on in the installation process.

Create-Cactidb-CentOS8

Next, we need to import the mysql_test_data_timezone.sql file into the mysql database as shown:

$ mysql -u root -p mysql < /usr/share/mariadb/mysql_test_data_timezone.sql

Then log in to the mysql database and grant the cacti user access to the mysql.time_zone_name table.

GRANT SELECT ON mysql.time_zone_name TO 'cacti'@'localhost';
FLUSH PRIVILEGES;
EXIT;

Grant-access-cacti-user-centos8

Some database tuning is recommended …


How to Change File/Group Owner with chown Command in Linux

Short for change ownership, the chown command is a command-line utility that is used to change the user or group ownership of a file, a directory, or even a link. The Linux philosophy is such that every file or directory is owned by a specific user or group with certain access rights.

Using different examples, we will try to see the various use cases of the chown command. The chown command employs quite a simple and straightforward syntax:

$ chown OPTIONS USER:GROUP file(s)

Let’s briefly flesh out the parameters:

The attribute USER refers to the username of the user that will own the file. You can specify either the username or the UID (user ID). Meanwhile, the GROUP option indicates the name of the new group that the file will acquire after running the command. The file option represents a regular file, a directory, or even a symbolic link; these are the three entities whose ownership can be altered.

A few points to note:

1) When the  USER option is specified alone, ownership of the file/directory changes to that of the specified user while the group ownership remains unchanged. Here’s an example:

$ chown john file1.txt

In the above command, user ownership of the file file1.txt changes from the current user to the user john.

2) If the USER option is followed by a colon (i.e. USER:) and the group name is not provided, then the user takes ownership of the file but the file’s group ownership switches to the user’s login group. For example:

$ chown john: file1.txt

In this example, the user john takes ownership of the file file1.txt, but the group ownership of the file changes to john’s login group.

3) When both the user and group options are specified, separated by a colon (i.e. USER:GROUP) without any spaces in between, the file takes the user and group ownership as specified:

$ chown john:john file1.txt

In the above example, the file takes the user and group ownership of user john.

4) When the USER option is left out and instead the group option is preceded by a colon (i.e. :GROUP), then only the group ownership of the file changes.
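For instance, following the pattern of the earlier points (the group name developers is purely illustrative):

$ chown :developers file1.txt

Here, the user ownership of file1.txt is untouched and only its group ownership changes to developers.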

How to view file permissions

To view file permissions, simply use the ls -l command followed by the file name

$ ls -l filename

For example:

$ ls -l file1.txt

list-file-permissions-linux

From the output, we can see that the file is owned by the user linuxtechi and belongs to the group linuxtechi, as shown in the 3rd and 4th columns respectively.

How to change file owner with chown command

Before changing ownership, always invoke sudo if you are not working as the root user. This gives you elevated privileges to change the user and group ownership of a file.

To change file ownership, use the syntax:

$ sudo chown user filename

For example,

$ sudo chown james file1.txt

Change-file-owner-linux-chown-command

From the output, you can clearly see that the ownership of the file has changed from linuxtechi to user james.

Alternatively, instead of using the username, you can pass the UID of the user. To get the UID, view the /etc/passwd file.

$ cat /etc/passwd | grep username

From the example below, we can …


How to Configure NFS based Persistent Volume in Kubernetes

It is recommended to place a pod’s data on a persistent volume so that the data remains available even after pod termination. In Kubernetes (k8s), NFS based persistent volumes can be used inside pods. In this article we will learn how to configure a persistent volume and a persistent volume claim, and then we will discuss how we can use the persistent volume via its claim name in k8s pods.

I am assuming we have a functional k8s cluster and an NFS server. Following are the details of the lab setup:

  • NFS Server IP = 192.168.1.40
  • NFS Share = /opt/k8s-pods/data
  • K8s Cluster = One master and two worker Nodes

Note: Make sure the NFS server is reachable from the worker nodes, and try to mount the NFS share on each worker once for testing.
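A quick manual test from a worker node might look like this, using the lab values above (the /mnt mount point is arbitrary, and the worker needs the NFS client utilities, e.g. the nfs-common or nfs-utils package, installed):

$ sudo mount -t nfs 192.168.1.40:/opt/k8s-pods/data /mnt    # mount the share temporarily
$ ls /mnt                                                   # confirm the share contents are visible
$ sudo umount /mnt                                          # clean up after the test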

Create an index.html file inside the NFS share, because we will be mounting this share in an nginx pod later in the article.

$ echo "Hello, NFS Storage NGINX" > /opt/k8s-pods/data/index.html

Configure NFS based PV (Persistent Volume)

To create an NFS based persistent volume in K8s, create a yaml file on the master node with the following contents:

$ vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /opt/k8s-pods/data
    server: 192.168.1.40

Save and exit the file

NFS-PV-Yaml-File-K8s

Now create the persistent volume using the above yaml file:

$ kubectl create -f nfs-pv.yaml
persistentvolume/nfs-pv created

Run the following kubectl command to verify the status of the persistent volume:

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   10Gi       RWX            Recycle          Available           nfs                     20s

The above output confirms that the PV has been created successfully and is available.

Configure Persistent Volume Claim

To mount a persistent volume inside a pod, we have to specify its persistent volume claim. So, let’s create a persistent volume claim using the following yaml file:

$ vi nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Save and exit file.

NFS-PVC-Yaml-k8s

Run the beneath kubectl command to create the PVC using the above yaml file:

$ kubectl create -f nfs-pvc.yaml
persistentvolumeclaim/nfs-pvc created

After executing the above, the control plane will look for a persistent volume that satisfies the claim’s requirements with the same storage class name, and it will then bind the claim to that persistent volume; an example is shown below:

$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   10Gi       RWX            nfs            3m54s

$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   10Gi       RWX            Recycle          Bound    default/nfs-pvc   nfs                     18m

The above output confirms that the claim (nfs-pvc) is bound to the persistent volume (nfs-pv).

Now we are ready to use the NFS based persistent volume inside pods.

Use NFS based Persistent Volume inside a Pod

Create an nginx pod using the beneath yaml file; it will mount the persistent volume claim on ‘/usr/share/nginx/html’.

$ vi nfs-pv-pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  volumes:
    - name: nginx-pv-storage
      persistentVolumeClaim:
        claimName: nfs-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "nginx-server"
      volumeMounts:
        - mountPath: 

How to Install Cockpit Web Console on Ubuntu 20.04 Server

Cockpit is a free and open source web console tool for Linux administrators, used for day-to-day administrative and operational tasks. Initially Cockpit was only available for RHEL based distributions, but nowadays it is available for almost all Linux distributions. In this article we will demonstrate how to install Cockpit on Ubuntu 20.04 LTS Server (Focal Fossa) and what administrative tasks can be performed with the Cockpit web console.

Installation of Cockpit on Ubuntu 20.04 LTS Server

Since Ubuntu 17.04, the cockpit package has been available in the default package repositories, so the installation is straightforward using the apt command:

$ sudo apt update
$ sudo apt install cockpit -y

apt-install-cockpit-ubuntu-20-04-lts-server

Once the cockpit package is installed successfully, start its service using the following systemctl command:

$ sudo systemctl start cockpit

Run the following to verify the status of cockpit service,

$ sudo systemctl status cockpit

cockpit-service-status-ubuntu-20-04

The above output confirms that cockpit has been started successfully.
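To have Cockpit come up automatically after a reboot, the socket unit can also be enabled (an extra step not shown in the screenshots; Cockpit is socket-activated):

$ sudo systemctl enable --now cockpit.socket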

Access Cockpit Web Console

Cockpit listens on TCP port 9090. In case a firewall is configured on your Ubuntu 20.04 server, you have to allow port 9090 in the firewall.

$ ss -tunlp | grep 9090
tcp   LISTEN  0       4096                  *:9090               *:*

Run following ‘ufw’ command to allow cockpit port in OS firewall,

$ sudo ufw allow 9090/tcp
Rule added
Rule added (v6)

Now access the Cockpit web console using the following URL:

https://<Your-Server-IP>:9090

Cockpit-Ubuntu-20-04-Login-Console

Use the root credentials or a sudo user’s credentials to log in; in my case ‘pkumar’ is the sudo user for my setup.

Cockpit-Dashboard-Ubuntu-20-04

Perfect, the above screen confirms that we are able to access and log in to the Cockpit dashboard. Let’s see the different administrative tasks that can be performed from this dashboard.

Administrative Task from Cockpit Web Console on Ubuntu 20.04 LTS Server

When we log in to the dashboard for the first time, it shows basic information about our system like package updates, RAM & CPU utilization, and hardware and system configuration.

1)    Apply System Updates

One of the important administrative tasks is to apply system updates, and from the cockpit web console we can easily do this: go to the ‘System Updates’ option, where you will get the available updates for your system; an example is shown below.

Software-Updates-Cockpit-Ubuntu-20-04-Server

If you wish to install all available updates, then click on the “Install All Updates” option.

Apply-Updates-Cockpit-Ubuntu-20-04

We will get a message on the screen to reboot the system after applying the updates, so go ahead and click on “Restart System”.

2)    Managing KVM Virtual Machine with cockpit

We can also manage KVM VMs using the cockpit web console, but by default the ‘Virtual Machines’ option is not enabled. To enable this option, install ‘cockpit-machines’ using the apt command:

$ sudo apt install cockpit-machines -y

Once the package is installed, log out and log back in to the Cockpit console.

Virtual-Machines-Ubuntu-20-04-Cockpit

3)    View System Logs

From the ‘Logs’ tab we can view our system logs. System logs can also be filtered based on their severity.

Logs-ubuntu-20-04-cockpit

4)    Manage Networking with Cockpit

System networking can easily be managed via the networking tab in the cockpit web console. Here we can view our system’s Ethernet card speeds, and we have features like creating bonding and bridge interfaces. Apart from this we …
