Ceph is a free and open-source distributed storage solution through which we can easily provide and manage block storage, object storage and file storage. Ceph can be used in traditional IT infrastructure to provide centralized storage; apart from this, it is also used in private clouds (OpenStack & CloudStack). In Red Hat OpenStack, Ceph is used as a Cinder backend.
In this article, we will demonstrate how to install and configure a Ceph cluster (Mimic) on CentOS 7 servers.
The major components of a Ceph cluster are:
- Monitors (ceph-mon): As the name suggests, the Ceph monitor nodes keep an eye on the cluster state, the OSD map and the CRUSH map.
- OSDs (ceph-osd): These are the nodes that are part of the cluster and provide data storage, data replication and recovery functionality. OSDs also report state information back to the monitor nodes.
- MDS (ceph-mds): This is the Ceph metadata server; it stores the metadata of the Ceph file system (CephFS) and is not needed for block or object storage.
- Ceph deployment node: This node is used to deploy the Ceph cluster; it is also called the ceph-admin or ceph-utility node.
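Once the cluster is up, each of these components runs as a systemd service on its node. As a quick sanity check later on (a sketch assuming the hostnames and OSD IDs of this lab; your unit instance names may differ):

~]# systemctl status ceph-mon@ceph-monitor   # on the monitor node
~]# systemctl status ceph-osd@0              # on an OSD node, one unit per OSD ID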
My lab setup details:
- Ceph Deployment Node: (Minimal CentOS 7, RAM: 4 GB, vCPU: 2, IP: 192.168.1.30, Hostname: ceph-controller)
- OSD or Ceph Compute 1: (Minimal CentOS 7, RAM: 10 GB, vCPU: 4, IP: 192.168.1.31, Hostname: ceph-compute01)
- OSD or Ceph Compute 2: (Minimal CentOS 7, RAM: 10 GB, vCPU: 4, IP: 192.168.1.32, Hostname: ceph-compute02)
- Ceph Monitor: (Minimal CentOS 7, RAM: 10 GB, vCPU: 4, IP: 192.168.1.33, Hostname: ceph-monitor)
Note: Two NICs (eth0 & eth1) are attached to each node. On eth0, an IP from the VLAN 192.168.1.0/24 is assigned; on eth1, an IP from the VLAN 192.168.122.0/24 is assigned, and this interface provides internet access.
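You can confirm the addressing on each node with the commands below (assuming the interface names eth0/eth1 from this lab; they may differ on your systems):

~]# ip -4 addr show eth0
~]# ip -4 addr show eth1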
Let’s jump into the installation and configuration steps:
Step:1) Update the /etc/hosts File, Configure NTP, Create a User & Disable SELinux on all Nodes
Add the following lines to the /etc/hosts file on all the nodes so that the nodes can also be reached via their hostnames:
192.168.1.30    ceph-controller
192.168.1.31    ceph-compute01
192.168.1.32    ceph-compute02
192.168.1.33    ceph-monitor
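To verify that name resolution works, ping each node by hostname, for example:

~]# ping -c 2 ceph-monitor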
Configure NTP on all the Ceph nodes so that they all keep the same time and there is no clock drift:
~]# yum install ntp ntpdate ntp-doc -y
~]# ntpdate europe.pool.ntp.org
~]# systemctl start ntpd
~]# systemctl enable ntpd
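Once ntpd is running, you can check that the node is syncing against its time sources:

~]# ntpq -p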
Create a user named “cephadm” on all the nodes; we will use this user for Ceph deployment and configuration. Set a password of your choice (the placeholder below stands in for it):

~]# useradd cephadm && echo "<password>" | passwd --stdin cephadm
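You can confirm the user was created on each node:

~]# id cephadm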
Now assign admin rights to the cephadm user via sudo by executing the following commands:
~]# echo "cephadm ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadm
~]# chmod 0440 /etc/sudoers.d/cephadm
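To verify that passwordless sudo works for the new user, run the check below; it should print "root" without prompting for a password:

~]# su - cephadm -c 'sudo whoami'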
Disable SELinux on all the nodes using the sed command below; even the official Ceph site recommends disabling SELinux:
~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Now reboot all the nodes using the command below:

~]# reboot
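After the nodes come back up, confirm that SELinux is off; getenforce should report "Disabled":

~]# getenforce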
Step:2) Configure Passwordless Authentication from the Ceph Admin Node to all OSD and Monitor Nodes
From the ceph-admin node we will use the utility known as “ceph-deploy”; it will log in to each Ceph node, install the ceph packages and do all the required configuration. While accessing a Ceph node, it will not prompt us to enter the credentials of the cephadm user.
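A minimal sketch of that passwordless SSH setup, run as the cephadm user on the ceph-controller node (hostnames as per the lab setup above):

~]$ ssh-keygen                          # accept the defaults, leave the passphrase empty
~]$ ssh-copy-id cephadm@ceph-compute01
~]$ ssh-copy-id cephadm@ceph-compute02
~]$ ssh-copy-id cephadm@ceph-monitor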