Servers

How to Add Remote Linux Host to Cacti for Monitoring

In the previous guide, we demonstrated how to install the Cacti monitoring server on CentOS 8. This tutorial goes a step further and shows you how to add and monitor remote Linux hosts on Cacti. We are going to add remote Ubuntu 20.04 LTS and CentOS 8 systems to the Cacti server for monitoring.

Let’s begin.

Step 1) Install the SNMP service on the Linux hosts

SNMP, short for Simple Network Management Protocol, is a protocol used for gathering information about devices on a network. Using SNMP, you can poll metrics such as CPU utilization, memory usage, disk utilization and network bandwidth. This information will later be graphed in Cacti to provide an intuitive overview of the remote hosts’ performance.

With that in mind, we are going to install and enable SNMP service on both Linux hosts:

On Ubuntu 20.04

To install the SNMP agent, run the command:

$ sudo apt install snmp snmpd -y

On CentOS 8

$ sudo dnf install net-snmp net-snmp-utils -y

The SNMP service starts automatically upon installation. To confirm this, check its status by running:

$ sudo systemctl status snmpd

If the service is not running yet, start and enable it on boot as shown:

$ sudo systemctl start snmpd
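$ sudo systemctl enable snmpd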

We can clearly see that the service is up and running. By default, SNMP listens on UDP port 161. You can verify this using the netstat command as shown.

$ sudo netstat -pnltu | grep snmpd

netstat-snmp-linux
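If the netstat command is not available (on CentOS 8 it is provided by the optional net-tools package), the ss command reports the same information:

$ sudo ss -pnltu | grep snmp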

Step 2) Configure the SNMP service

So far, we have installed the SNMP service and confirmed that it is running as expected. The next course of action is to configure the service so that data can be collected and shipped to the Cacti server.

The configuration file is located at /etc/snmp/snmpd.conf
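Open it with a text editor of your choice, for example:

$ sudo vi /etc/snmp/snmpd.conf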

For Ubuntu 20.04

We need to configure a few parameters. First, locate the sysLocation and sysContact directives. These define your Linux client’s physical location and administrative contact.

Default-syslocation-syscontact-snmpd-linux

Therefore, feel free to provide your client’s location and contact details.
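As an illustration, the entries could look like the following (the location and contact values below are only placeholders; substitute your own):

sysLocation    Server Room A, Head Office
sysContact     Admin <admin@example.com>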

Syslocation-Syscontact-snmpd-ubuntu-20-04

Next, locate the agentaddress directive. This refers to the IP address and the port number that the agent will listen on.

Default-agent-address-snmpd-ubuntu-20-04

Adjust the directive as shown below, where 192.168.2.106 is my client system’s IP address.

agentaddress  udp:192.168.2.106:161

AgentAddress-cacti-server-Ubuntu-20-04

The agent will now listen for SNMP requests on the system’s local IP. Next up, add the following view directive above the other view directives:

view     all      included     .1      80

View-Directive-snmpd-Ubuntu-20-04

Next, change the rocommunity attribute shown below

rocommunity  public default -V systemonly
to:
rocommunity  public default -V all

rocommunity-snmpd-linux

Finally, restart the snmpd service for the changes to take effect:
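$ sudo systemctl restart snmpd

Then, to confirm that the SNMP service is working as expected, run the command below on the Linux host.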

$ sudo snmpwalk -v 1 -c public -O e 192.168.2.106

You should get a large amount of output, as shown.

snmpwalk-command-cacti-ubuntu-20-04

For CentOS 8

On CentOS 8, the configuration is slightly different. First, locate the line that begins with the com2sec directive as shown:

default-com2sec-directive-snmpd-centos8

We will specify a new security name called AllUser in place of the default notConfigUser, as shown:

Update-com2sec-directive-snmpd-centos8
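The resulting line should look something like the following (public is the community string assumed here; replace it with your own read-only community string if it differs):

com2sec AllUser       default         public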

Next, locate the line that starts with the group directive as shown.

Default-Group-directive-snmpd-centos8

We will modify the second attribute and specify AllGroup as the group name and AllUser as the security name, as previously defined.

Change-group-directive-snmpd-centos8
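After the edit, the group line would read roughly as follows (v2c is assumed here; keep whichever SNMP version your existing line uses):

group   AllGroup        v2c             AllUser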

In the view section, add this line

view    AllView         included        .1

View-Directive-snmpd-centos8

Finally, locate the line beginning with the access directive.

Default-access-directive-snmpd-centos8

Modify …


How to Install Zimbra Mail Server on CentOS 8 / RHEL 8

A mail server is one of the most important servers for any organization, as most communication is done via email. There are a number of free and enterprise mail servers available in the IT world. Zimbra is one of the highest-rated mail servers and comes in open source and enterprise editions. In this article, we cover how to install and configure a single-node, open-source Zimbra mail server on a CentOS 8 / RHEL 8 system.

Zimbra is also known as the Zimbra Collaboration Suite (ZCS) because it consists of a number of components such as an MTA (Postfix), a database (MariaDB), LDAP, Mailboxd and the web UI. Below is the architecture of Zimbra:

Zimbra-Architecure-Overview

Minimum System Requirements for Open Source Zimbra Mail Server

  • Minimal CentOS 8/ RHEL 8
  • 8 GB RAM
  • 64-bit Intel / AMD CPU (1.5 GHz)
  • Separate Partition as /opt with at least 5 GB free space
  • Fully Qualified Domain Name (FQDN), like ‘zimbra.linuxtechi.com’
  • Stable Internet Connection with Fixed Internal / Public IP

Following are my Zimbra Lab Setup details:

  • Hostname: zimbra.linuxtechi.com
  • Domain: linuxtechi.com
  • IP address: 192.168.1.60
  • DNS Server: 192.168.1.51
  • SELinux : Enabled
  • Firewall : Enabled

Before jumping into the installation steps, let’s verify the DNS records (A & MX) for our Zimbra server. Log in to your CentOS 8 / RHEL 8 system and use the dig command to query the DNS records.

Note: In case the dig command is not available, install the ‘bind-utils’ package:
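# dnf install bind-utils -y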

Run the following dig command to query the A record of our Zimbra server:

# dig -t A zimbra.linuxtechi.com

DNS-A-Record-Zimbra-CentOS8-RHEL8

Run the following dig command to query the MX record for our domain ‘linuxtechi.com’:

# dig -t MX linuxtechi.com

Query-MX-Record-Zimbra-dig-command-CentOS8

The above outputs confirm that the DNS records are configured correctly for our Zimbra mail server.

Read Also : How to Setup DNS Server (Bind) on CentOS 8 / RHEL8

Note: Before starting the Zimbra installation, please make sure no MTA (mail server) is configured on the system. If one is installed, first stop its service and remove its package:

# systemctl stop postfix
# dnf remove postfix -y

Let’s dive into Zimbra installation steps,

Step 1) Apply Updates, add entry in hosts file and reboot your system

Add the hostname entry to the hosts file by running the following echo command:

# echo "192.168.1.60  zimbra.linuxtechi.com" >> /etc/hosts

Run the command below to apply all the available updates:

# dnf update -y

Once all the updates have been installed, reboot your system:

# reboot

Step 2) Download Open source Zimbra Collaboration suite

As we discussed above, Zimbra comes in two editions; here we will install the open source edition.

To download it from the command line, run the following commands:

# dnf install wget tar perl net-tools nmap-ncat -y
# wget https://files.zimbra.com/downloads/8.8.15_GA/zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz

Step 3) Start Zimbra Installation via installation script

Once the compressed Zimbra tar file from step 2 has been downloaded, extract it in your current working directory using the tar command:

# tar zxpvf zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz
# ls -l
total 251560
-rw-------. 1 root root      1352 Aug 30 10:46 anaconda-ks.cfg
drwxrwxr-x. 8 1001 1001      4096 Jun 29 11:39 zcs-8.8.15_GA_3953.RHEL8_64.20200629025823
-rw-r--r--. 1 root root 257588163 Jul  1 07:16 zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz

Go to the extracted directory and execute install script to …


How to Install Cacti Monitoring Tool on CentOS 8 / RHEL 8

Cacti is a free and open source front-end network monitoring tool used to monitor and graph time-series metrics of various IT resources on your LAN. It polls services at specified intervals and uses RRDtool to represent the results on intuitive graphs.

Cacti monitors various metrics such as CPU, memory and bandwidth utilization, disk space, filesystems and running processes, to mention but a few. You can monitor devices such as servers, routers, switches and even firewalls. Additionally, you can configure alerts so that in case of system downtime you receive notifications via email. In this guide, we will walk you through the installation of the Cacti monitoring tool on CentOS 8 / RHEL 8. At the time of writing this tutorial, the latest Cacti version is 1.2.14.

Step 1) Install Apache web server

Cacti is a web-based graphing tool, and therefore we need to install a web server on which the monitoring tool will run. Here, we are going to install the Apache web server. To do so, execute the command:

$ sudo dnf install httpd -y

Step 2) Install PHP and additional PHP extensions

The front end of the Cacti monitoring tool is purely PHP-driven, so we need to install PHP and the required PHP modules. Execute the command:

$ sudo dnf install -y php php-xml php-session php-sockets php-ldap php-gd php-json php-mysqlnd php-gmp php-mbstring php-posix php-snmp php-intl

Step 3) Install MariaDB database server

During installation, Cacti requires its own database to store its settings. The same database also stores all the data that is needed to populate the graphs.

MariaDB is a fork of and a drop-in replacement for MySQL. It’s considered more robust and feature-rich, and while MySQL would still work, MariaDB comes highly recommended. To install the MariaDB server, run the command:

$ sudo dnf install -y mariadb-server mariadb

Step 4) Install SNMP and RRD tool

Next, we are going to install SNMP and RRDtool which are essential in gathering and processing system metrics.

$ sudo dnf install -y net-snmp net-snmp-utils net-snmp-libs rrdtool

Step 5)  Start and enable services

Having installed all the necessary services required for Cacti to run, we are going to start them as shown:

$ sudo systemctl start httpd
$ sudo systemctl start snmpd
$ sudo systemctl start mariadb

Additionally, enable them so that they start automatically upon booting or a reboot.

$ sudo systemctl enable httpd
$ sudo systemctl enable snmpd
$ sudo systemctl enable mariadb

Step 6) Create a database for Cacti

In this step, we are going to create a database and a user for Cacti, and then grant all privileges on the database to that user.
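First, log in to the MariaDB shell as the root database user, for example:

$ sudo mysql -u root -p

Then run the following statements at the MariaDB prompt: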

CREATE DATABASE cactidb;
GRANT ALL ON cactidb.* TO cacti@localhost IDENTIFIED BY 'cactipassword';
FLUSH PRIVILEGES;
EXIT;

Be sure to note down the database name, user and password as these will be required later on in the installation process.

Create-Cactidb-CentOS8

Next, we need to import mysql_test_data_timezone.sql into the mysql database as shown.

$ mysql -u root -p mysql < /usr/share/mariadb/mysql_test_data_timezone.sql

Then log in to the MySQL shell and grant the cacti user access to the mysql.time_zone_name table.

GRANT SELECT ON mysql.time_zone_name TO cacti@localhost;
FLUSH PRIVILEGES;
EXIT;

Grant-access-cacti-user-centos8

Some database tuning is recommended …


How to Change File/Group Owner with chown Command in Linux

Short for change ownership, the chown command is a command-line utility used to change the user or group ownership of files, directories and even links. The Linux philosophy is such that every file or directory is owned by a specific user or group with certain access rights.

Using different examples, we will explore the various use cases of the chown command. The chown command employs quite a simple and straightforward syntax:

$ chown [OPTIONS] USER[:GROUP] file(s)

Let’s briefly flesh out the parameters:

The USER attribute refers to the username of the user that will own the file. You can specify either the username or the UID (User ID). Meanwhile, the GROUP option indicates the name of the new group that the file will acquire after running the command. The file option represents a regular file, a directory or even a symbolic link. These are the three entities whose ownership can be altered.

A few points to note:

1) When the USER option is specified alone, ownership of the file/directory changes to that of the specified user while the group ownership remains unchanged. Here’s an example:

$ chown john file1.txt

In the above command, user ownership of the file file1.txt changes from the current user to the user john.

2) If the USER option is followed by a colon, i.e. USER:, and the group name is not provided, then the user takes ownership of the file and the file’s group ownership switches to the user’s login group. For example:

$ chown john: file1.txt

In this example, the user john takes ownership of the file file1.txt, but the group ownership of the file changes to john’s login group.

3) When both the user and group are specified, separated by a colon, i.e. USER:GROUP, without any spaces in between, the file takes on the user and group ownership as specified:

$ chown john:john file1.txt

In the above example, the file takes the user and group ownership of user john.

4) When the USER option is left out and the group is instead preceded by a colon, i.e. :GROUP, then only the group ownership of the file changes.
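For example, assuming a group named developers exists on the system, the following changes only the group ownership of file1.txt to developers while the user owner stays the same:

$ chown :developers file1.txt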

How to view file permissions

To view file permissions, simply use the ls -l command followed by the file name

$ ls -l filename

For example:

$ ls -l file1.txt

list-file-permissions-linux

From the output, we can see that the file is owned by the user linuxtechi and belongs to the group linuxtechi, as shown in the 3rd and 4th columns respectively.

How to change file owner with chown command

Before changing ownership, always invoke sudo if you are not working as the root user. This gives you the elevated privileges needed to change the user and group ownership of a file.

To change file ownership, use the syntax:

$ sudo chown user filename

For example,

$ sudo chown james file1.txt

Change-file-owner-linux-chown-command

From the output, you can clearly see that the ownership of the file has changed from linuxtechi to user james.

Alternatively, instead of using the username, you can pass the user’s UID. To get the UID, view the /etc/passwd file.

$ cat /etc/passwd | grep username

From the example below, we can …


Rolling in the Deep(fake)

In this era of fake news, it’s almost a given that you’ve come across videos like Jon Snow apologizing for the rather underwhelming finale episode of Game of Thrones, Mark Zuckerberg boasting of being the owner of people’s stolen data, and Steve Buscemi attending the Golden Globes wearing what people remember as Tilda Swinton’s gown. These videos have been our exposure to deepfakes, the 21st century equivalent of photoshopped pictures. These videos are made possible through artificial intelligence that produces fake images of events that never happened. Most deepfakes have become tools for embarrassing personalities and ruining their reputations, with some becoming effective in “fooling” unsuspecting viewers and, in millennial-speak, “canceling” personalities.

Given how gullible we have all become to these fake videos, we need to inform ourselves about them lest we become unresisting victims. How they are made, how to tell the difference between real and deepfake graphics, what the consequences of deepfakes are, and what possible solutions exist are some of the things we need to know. After all, there will come a time when the technology becomes more common and accessible to just about everyone. When this time comes, deepfakes will not only victimize celebrities and world-renowned personalities but even ordinary citizens like you and me. You definitely wouldn’t want to be left scratching your head when you become the target of a deepfake.

What do you need to make a deepfake?

At the moment, it takes a lot to produce a deepfake. A standard laptop or desktop PC won’t do. You need a high-end desktop armed with professional-grade graphics cards and storage capacity. After all, one deepfake video alone may need at least 40,000 high-definition pictures of the person you would like to put in the video. HD photos are such huge files that cloud storage is actually preferred. Fast computing power is also a must, or you will end up taking a month just to come up with a video that’s two minutes long. You also need tools and apps, which cost according to the quality of their outputs.

Right now, it’s actually not difficult to make deepfake videos, thanks to free apps and programs that allow even ordinary people to make them. Cost-free source codes and machine-learning algorithms are abundant online, and the only things you need to make yourself a video are time and materials.

(Via: https://koreajoongangdaily.joins.com/2020/05/17/features/deepfake-artificial-intelligence-pornography/20200517190700189.html)

 

Can you tell the difference?

Prior to 2019, when technology experts were asked how one can detect a deepfake, they would readily answer that it’s all in the eyes. Eyes in deepfake videos don’t move naturally, much less blink. After all, you can’t really find a lot of photos where a celebrity blinks or closes his or her eyes. But then, deepfake technology is quick to provide solutions, so no sooner was the blinking weakness revealed than a workaround emerged.

It comes down to being super vigilant. Organizations can help by making sure employees undergo a thorough cybersecurity awareness programme that is updated frequently to inform them about the latest threats, and how to react.

(Via: https://www.weforum.org/agenda/2020/01/how-to-spot-a-deepfake/)

 

Will deepfakes destroy the world?

Deepfakes are meant to embarrass, intimidate, and destabilize individuals. The best ones might send …


How to Configure NFS based Persistent Volume in Kubernetes

It is recommended to place a pod’s data on a persistent volume so that the data remains available even after the pod terminates. In Kubernetes (k8s), NFS based persistent volumes can be used inside pods. In this article we will learn how to configure a persistent volume and a persistent volume claim, and then discuss how to use the persistent volume via its claim name in k8s pods.

I am assuming we have a functional k8s cluster and an NFS server. Following are the details of the lab setup:

  • NFS Server IP = 192.168.1.40
  • NFS Share = /opt/k8s-pods/data
  • K8s Cluster = One master and two worker Nodes

Note: Make sure the NFS server is reachable from the worker nodes, and try mounting the NFS share on each worker once for testing, as shown below.
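A quick way to perform that test from a worker node is shown below (this assumes the NFS client utilities are installed on the worker and that /mnt is free to use as a temporary mount point):

$ sudo mount -t nfs 192.168.1.40:/opt/k8s-pods/data /mnt
$ sudo umount /mnt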

Create an index.html file inside the NFS share, because we will be mounting this share in an nginx pod later in the article.

$ echo "Hello, NFS Storage NGINX" > /opt/k8s-pods/data/index.html

Configure NFS based PV (Persistent Volume)

To create an NFS based persistent volume in k8s, create a yaml file on the master node with the following contents:

$ vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /opt/k8s-pods/data
    server: 192.168.1.40

Save and exit the file

NFS-PV-Yaml-File-K8s

Now create the persistent volume using the yaml file created above:

$ kubectl create -f nfs-pv.yaml
persistentvolume/nfs-pv created

Run the following kubectl command to verify the status of the persistent volume:

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   10Gi       RWX            Recycle          Available           nfs                     20s

The above output confirms that the PV has been created successfully and is available.

Configure Persistent Volume Claim

To mount a persistent volume inside a pod, we have to reference its persistent volume claim. So, let’s create a persistent volume claim using the following yaml file:

$ vi nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Save and exit file.

NFS-PVC-Yaml-k8s

Run the kubectl command below to create the PVC using the above yaml file:

$ kubectl create -f nfs-pvc.yaml
persistentvolumeclaim/nfs-pvc created

After executing the above, the control plane will look for a persistent volume that satisfies the claim’s requirements and has the same storage class name, and will then bind the claim to that persistent volume, as shown below:

$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   10Gi       RWX            nfs            3m54s
$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   10Gi       RWX            Recycle          Bound    default/nfs-pvc   nfs                     18m

The above output confirms that the claim (nfs-pvc) is bound to the persistent volume (nfs-pv).

Now we are ready to use the NFS based persistent volume inside the pods.

Use NFS based Persistent Volume inside a Pod

Create an nginx pod using the yaml file below; it will mount the persistent volume claim on ‘/usr/share/nginx/html’.

$ vi nfs-pv-pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  volumes:
    - name: nginx-pv-storage
      persistentVolumeClaim:
        claimName: nfs-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "nginx-server"
      volumeMounts:
        - mountPath: 

Exclusive Savvii Promo Codes for 2020

Savvii is a European managed WordPress hosting provider that started in 2013. Read our review here. Use one of our exclusive promo codes to get a discount on your next order. Use This Promo Code and Get €10 EUR Off Shared One Use This Code and Get €15 EUR Off the Shared Five Plan Exclusive […]

Source

from ThisHosting.Rocks https://thishosting.rocks/exclusive-savvii-promo-codes/…


How to Install Cockpit Web Console on Ubuntu 20.04 Server

Cockpit is a free and open source web console tool for Linux administrators, used for day-to-day administrative and operations tasks. Initially Cockpit was only available for RHEL based distributions, but nowadays it is available for almost all Linux distributions. In this article we will demonstrate how to install Cockpit on Ubuntu 20.04 LTS Server (Focal Fossa) and what administrative tasks can be performed from the Cockpit web console.

Installation of Cockpit on Ubuntu 20.04 LTS Server

Since Ubuntu 17.04, the cockpit package has been available in the default package repositories, so the installation is straightforward using the apt command:

$ sudo apt update
$ sudo apt install cockpit -y

apt-install-cockpit-ubuntu-20-04-lts-server

Once the cockpit package is installed successfully, start its service using the following systemctl command:

$ sudo systemctl start cockpit

Run the following command to verify the status of the cockpit service:

$ sudo systemctl status cockpit

cockpit-service-status-ubuntu-20-04

The above output confirms that cockpit has started successfully.

Access Cockpit Web Console

Cockpit listens on TCP port 9090. If a firewall is configured on your Ubuntu 20.04 server, then you have to allow port 9090 through the firewall.

$ ss -tunlp | grep 9090
tcp   LISTEN  0       4096                  *:9090               *:*

Run the following ‘ufw’ command to allow the cockpit port through the OS firewall:

$ sudo ufw allow 9090/tcp
Rule added
Rule added (v6)

Now access the Cockpit web console using the following URL:

https://<Your-Server-IP>:9090

Cockpit-Ubuntu-20-04-Login-Console

Use the root credentials or a sudo user’s credentials to log in; in my case, ‘pkumar’ is the sudo user for my setup.

Cockpit-Dashboard-Ubuntu-20-04

Perfect, the above screen confirms that we are able to access and log in to the Cockpit dashboard. Let’s see what different administrative tasks can be performed from this dashboard.

Administrative Tasks from the Cockpit Web Console on Ubuntu 20.04 LTS Server

When we log in to the dashboard for the first time, it shows basic information about our system such as package updates, RAM & CPU utilization, and hardware and system configuration.

1)    Apply System Updates

One important administrative task is to apply system updates, and from the Cockpit web console we can easily do this. Go to the ‘System Updates’ option, where you will see the available updates for your system; an example is shown below.

Software-Updates-Cockpit-Ubuntu-20-04-Server

If you wish to install all available updates, then click on the “Install All Updates” option.

Apply-Updates-Cockpit-Ubuntu-20-04

After applying the updates, we will get a message on the screen to reboot the system, so go ahead and click on “Restart System”.

2)    Managing KVM Virtual Machine with cockpit

It is also possible to manage KVM VMs using the Cockpit web console, but by default the ‘Virtual Machines’ option is not enabled. To enable this option, install ‘cockpit-machines’ using the apt command:

$ sudo apt install cockpit-machines -y

Once the package is installed, log out and log back in to the Cockpit console.

Virtual-Machines-Ubuntu-20-04-Cockpit

3)    View System Logs

From the ‘Logs’ tab we can view our system logs. Logs can also be filtered based on their severity.

Logs-ubuntu-20-04-cockpit

4)    Manage Networking with Cockpit

System networking can easily be managed via the Networking tab in the Cockpit web console. Here we can view our system’s Ethernet card speeds, and we have features like creating bonding and bridge interfaces. Apart from this we …


How to Install and Configure Jenkins on Ubuntu 20.04

Automation of tasks can be quite tricky, especially where multiple developers are submitting code to a shared repository. Poorly executed automation processes can often lead to inconsistencies and delays. And this is where Jenkins comes in. Jenkins is a free and open source continuous integration tool that’s predominantly used in the automation of tasks. It helps to streamline the continuous development, testing and deployment of newly submitted code.

In this guide, we will walk you through the installation and configuration of Jenkins on Ubuntu 20.04 LTS system.

Step 1:  Install Java with apt command

Being a Java application, Jenkins requires Java 8 or a later version to run without any issues. To check if Java is installed on your system, run the command:

$ java --version

If Java is not installed, you will get the following output.

Java-version-output-before-installation

To install Java on your system, execute the command:

$ sudo apt install openjdk-11-jre-headless

Install-Java-Ubuntu-20-04

After the installation, once again verify that Java is installed:

$ java --version

Java-version-command-ubuntu-20-04

Perfect! We now have OpenJDK installed. We can now proceed.

Step 2:  Install Jenkins via its official repository

With Java installed, we can now proceed to install Jenkins. First, import the Jenkins GPG key from the Jenkins repository as shown:

$ wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -

Next, add the Jenkins repository to the sources list file as shown:

$ sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'

Next, update the system’s package list.

$ sudo apt update

And install Jenkins as follows.

$ sudo apt install jenkins

Install-Jenkins-ubuntu-20-04

Once the installation is complete, Jenkins should start automatically. To confirm this, run the command:

$ sudo systemctl status jenkins

Jenkins-service-status-ubuntu-20-04

If by any chance Jenkins is not running, execute the following command to start it.

$ sudo systemctl start jenkins

Step 3: Configuring the firewall rules for Jenkins

As we have seen, Jenkins natively listens on port 8080, and if you have installed Jenkins on a server with UFW enabled, you need to open that port to allow traffic.

To enable firewall on Ubuntu 20.04 LTS run,

$ sudo ufw enable

To open port 8080 on ufw firewall, run the command:

$ sudo ufw allow 8080/tcp

Then reload the firewall to effect the changes.

$ sudo ufw reload

To confirm that port 8080 is open on the firewall, execute the command:

$ sudo ufw status

Ubuntu-firewall-status-output

From the output, we can clearly see that Port 8080 has been opened on the system.

Step 4:  Configure Jenkins with GUI

We are almost done now. The only thing remaining is to set up Jenkins using your favorite browser. So, head over to the URL bar and browse your server’s address as shown:

http://server-IP:8080

To check your server’s IP address, use the ifconfig command.

ifconfig-output-ubuntu-20-04
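If ifconfig is not available (it is provided by the net-tools package, which Ubuntu 20.04 does not install by default), the ip command shows the same information:

$ ip addr show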

You will get a page similar to the one below, prompting you to provide the administrator’s password. As per the instructions, the password is located in the file:

/var/lib/jenkins/secrets/initialAdminPassword

Unlock-Jenkins-Ubuntu-20-04

To view the password, simply switch to the root user (or use sudo) and use the cat command as shown:

$ cat /var/lib/jenkins/secrets/initialAdminPassword

Jenkins-password-Ubuntu-20-04

Copy the password, paste it into the text field shown, and click the “Continue” button.

Enter-Jenkins-Password-ubuntu-20-04

In the next step, select ‘Install suggested plugins‘ for simplicity’s sake.

Install-suggested-plugins-ubuntu-20-04

Thereafter, the installation of …


Getting Financially Fit Even During a Lockdown

While our ancestors braved pandemics before, and we have had our fair share of crises and dealt with them, I do not think the world has faced a disaster of this magnitude. The various ways we cope with the COVID-19 calamity are quite pragmatic, as this is the first time we are facing problems like this. There is no blanket, one-size-fits-all solution yet, so we try to manage the best we can. But then, there may be suggestions on how we can secure our survival that we can all try and apply. Thousands are losing their means of income with mass layoffs happening daily. Businesses have contracted, with some deciding to shut down permanently. We can try and extend a hand to everyone in need of help, but before we can do that, we need to look out for ourselves as well. After all, if we are not that strong and secure financially, how can we help others?

Thus, with still months (and even years, according to some) to go before we adopt a “new normal”, let us assess ourselves and see whether we can be confident with our state of fiscal being. That way, we can ease mental stress, focus our energies on the more important things such as keeping physically healthy, and be our full selves so we can possibly have something to give other people. Here are some ways we can be assured of strong financial health during this COVID-19 crisis.

 

Try to scrap unnecessary expenses

As you may already have been working from home for the past weeks, if not months, try to do an inventory of your expenses since the lockdown started. See which areas you can cut down on and calculate how much you can save. Be detailed down to the last cent. Remember, you should already have been saving on gas and transportation costs since you haven’t been driving your car anywhere. But then, you might already have been using those savings to buy something online or add to the grocery budget. Set aside these savings instead of funneling them into things you don’t necessarily need.

There are ways to boost your saving skills while you’re stuck at home, though, to get your finances looking a little healthier (even if only to blow everything you have on holidays, pub trips, and fast food the second this is all over).

(Via: https://metro.co.uk/2020/04/24/simple-ways-cut-spending-save-money-lockdown-12604973/)

 

Get reminded of how important an emergency fund is

Remember when you were encouraged to set aside a portion of your income so that you would have something “for a rainy day”? Well, that rainy day is the present, with people losing jobs and getting their salaries cut. If you don’t have an emergency fund, start one as soon as possible. Three to six months of what you usually earn a month would be best practice for an emergency fund. And if you have this fund already, congratulations! Just make sure you don’t spend your emergency fund on frivolous and unnecessary things.

An emergency fund forms the basis of financial security in any household. Having one has become even more important in light of the Covid-19 pandemic. In case you haven’t created one yet, you have little time to lose.
