Deploy Ceph Cluster using Cephadm on CentOS 8

Ach.Chusnul Chikam
7 min read · Oct 20, 2021


Ceph is an open-source, software-defined storage platform. It implements object storage on a single distributed computer cluster and provides 3-in-1 interfaces for object-, block-, and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability.

There are several different ways to install Ceph. Choose the method that best suits your needs. The Ceph documentation recommends cephadm. Cephadm installs and manages a Ceph cluster using containers and systemd, with tight integration with the CLI and dashboard GUI. Cephadm only supports Octopus and newer releases. It is fully integrated with the new orchestration API and fully supports the new CLI and dashboard features for managing cluster deployment. Cephadm requires container support (podman or docker) and Python 3.
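As a quick, optional sanity check (these packages are installed in step 4 below anyway), you can confirm whether a container engine and Python 3 are already present on a node:

# podman --version || docker --version
# python3 --version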

Prerequisites:

  • OS CentOS 8
  • 3 nodes, each with 3 additional disks (sdb, sdc, sdd)

Lab Environment:

Setup Ceph Cluster:

Let’s take a look at the steps required to set up a Ceph cluster using cephadm on CentOS 8. The steps look something like the following:

1. Log in as root and add the hosts to /etc/hosts
$ sudo -i
# tee -a /etc/hosts <<EOF
192.168.1.231 ceph-01 ceph-01.localdomain
192.168.1.232 ceph-02 ceph-02.localdomain
192.168.1.233 ceph-03 ceph-03.localdomain
EOF
Set the hostname on each of the 3 Ceph cluster nodes:
# hostname ceph-01
# ssh ceph-02 hostname ceph-02
# ssh ceph-03 hostname ceph-03
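Note that the hostname command only changes the running (transient) hostname. If you want the change to survive a reboot, hostnamectl can be used instead; a minimal sketch for the same three nodes:

# hostnamectl set-hostname ceph-01
# ssh ceph-02 hostnamectl set-hostname ceph-02
# ssh ceph-03 hostnamectl set-hostname ceph-03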

2. Set up passwordless SSH

# ssh-keygen
# ssh-copy-id ceph-01
# ssh-copy-id ceph-02
# ssh-copy-id ceph-03
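To confirm that key-based login works before continuing, each node should return its hostname without prompting for a password; a small optional check:

# for node in ceph-01 ceph-02 ceph-03; do ssh $node hostname; done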

3. Update and upgrade packages

# dnf update -y; dnf upgrade -y

4. Install python3, lvm2, and podman

# dnf install -y python3 lvm2 podman

5. Install cephadm using the curl-based installation method

Use curl to fetch the most recent version of the standalone script (here from the Pacific branch):
[root@ceph-01 ~]# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/pacific/src/cephadm/cephadm
Make the cephadm script executable:
[root@ceph-01 ~]# chmod +x cephadm
To install the packages that provide the cephadm command, run the following commands:
[root@ceph-01 ~]# ./cephadm add-repo --release pacific
[root@ceph-01 ~]# ./cephadm install
Install ceph-common and confirm that cephadm is now in your PATH by running which:
[root@ceph-01 ~]# dnf install -y ceph-common
[root@ceph-01 ~]# which cephadm
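As an extra check, cephadm can report the Ceph version it will deploy (this may pull the Ceph container image the first time it runs):

[root@ceph-01 ~]# cephadm version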

6. Bootstrap a new Ceph cluster

The first step in creating a new Ceph cluster is running the cephadm bootstrap command on the Ceph cluster’s first host. Running this command creates the cluster’s first monitor daemon, and that monitor daemon needs an IP address, so you must pass the IP address of the first host to cephadm bootstrap.

[root@ceph-01 ~]# cephadm bootstrap --mon-ip 192.168.1.231

To check the Ceph dashboard, browse to the IP address of ceph-01 at https://192.168.1.231:8443/, log in with the credentials printed in the cephadm bootstrap output, and then set a new password.
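If you need to look up the dashboard URL again later, the active manager can list the service endpoints it is exposing:

[root@ceph-01 ~]# ceph mgr services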

Verify Ceph CLI

Confirm that the ceph command is accessible with:
[root@ceph-01 ~]# ceph -v
ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

Check the status of the Ceph cluster. HEALTH_WARN is expected at this point because no OSDs have been added yet.

[root@ceph-01 ~]# ceph -s
  cluster:
    id:     588df728-316c-11ec-b956-005056aea762
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph-01 (age 14m)
    mgr: ceph-01.wgdjcn(active, since 12m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Verify that containers are running for each service and check the status of the systemd service for each container:

[root@ceph-01 ~]# podman ps
[root@ceph-01 ~]# systemctl status ceph-* --no-pager
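Cephadm names its systemd units after the cluster fsid and the daemon, roughly ceph-<fsid>@<daemon-type>.<daemon-id>.service, so a single daemon can also be inspected directly (using the fsid from the ceph -s output above):

[root@ceph-01 ~]# systemctl status ceph-588df728-316c-11ec-b956-005056aea762@mon.ceph-01.service --no-pager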

7. Adding hosts to the cluster
To add each new host to the cluster, perform two steps:

Install the cluster’s public SSH key in the new host’s root user’s authorized_keys file:
[root@ceph-01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-02
[root@ceph-01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-03
Tell Ceph that the new nodes are part of the cluster. Make sure python3 is installed and available on each new node (see the quick check after the host listing below).
[root@ceph-01 ~]# ceph orch host add ceph-02 192.168.1.232
Added host 'ceph-02' with addr '192.168.1.232'
[root@ceph-01 ~]#
[root@ceph-01 ~]# ceph orch host add ceph-03 192.168.1.233
Added host 'ceph-03' with addr '192.168.1.233'
[root@ceph-01 ~]#
[root@ceph-01 ~]# ceph orch host ls
HOST     ADDR           LABELS  STATUS
ceph-01  192.168.1.231  _admin
ceph-02  192.168.1.232
ceph-03  192.168.1.233
[root@ceph-01 ~]#
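A minimal way to double-check the python3 requirement mentioned above is to run it on the new nodes over SSH:

[root@ceph-01 ~]# ssh ceph-02 python3 --version
[root@ceph-01 ~]# ssh ceph-03 python3 --version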

8. Deploy OSDs to the cluster

Run this command to display an inventory of storage devices on all cluster hosts:

[root@ceph-01 ~]# ceph orch device ls
Hostname  Path      Type  Serial  Size   Health   Ident  Fault  Available
ceph-01   /dev/sdb  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-01   /dev/sdc  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-01   /dev/sdd  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-02   /dev/sdb  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-02   /dev/sdc  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-02   /dev/sdd  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-03   /dev/sdb  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-03   /dev/sdc  ssd           10.7G  Unknown  N/A    N/A    Yes
ceph-03   /dev/sdd  ssd           10.7G  Unknown  N/A    N/A    Yes
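The next step consumes every available device at once. If you would rather create OSDs only on specific devices, cephadm also supports adding them one at a time; a sketch using one of the disks listed above:

[root@ceph-01 ~]# ceph orch daemon add osd ceph-01:/dev/sdb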

To tell Ceph to consume any available and unused storage device, execute ceph orch apply osd --all-available-devices:

[root@ceph-01 ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
[root@ceph-01 ~]# ceph -s
  cluster:
    id:     588df728-316c-11ec-b956-005056aea762
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-01,ceph-02,ceph-03 (age 5m)
    mgr: ceph-01.wgdjcn(active, since 41m), standbys: ceph-02.rmltzq
    osd: 9 osds: 0 up, 9 in (since 10s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
[root@ceph-01 ~]#
[root@ceph-01 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         0.08817  root default
-5         0.02939      host ceph-01
 1    ssd  0.00980          osd.1         up   1.00000  1.00000
 4    ssd  0.00980          osd.4         up   1.00000  1.00000
 7    ssd  0.00980          osd.7         up   1.00000  1.00000
-7         0.02939      host ceph-02
 0    ssd  0.00980          osd.0         up   1.00000  1.00000
 3    ssd  0.00980          osd.3         up   1.00000  1.00000
 6    ssd  0.00980          osd.6         up   1.00000  1.00000
-3         0.02939      host ceph-03
 2    ssd  0.00980          osd.2         up   1.00000  1.00000
 5    ssd  0.00980          osd.5         up   1.00000  1.00000
 8    ssd  0.00980          osd.8         up   1.00000  1.00000
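Once the OSDs are up, the new capacity should also be visible; ceph df and ceph osd df are convenient for confirming that all nine devices are counted:

[root@ceph-01 ~]# ceph df
[root@ceph-01 ~]# ceph osd df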

9. Deploy ceph-mon (ceph monitor daemon)

Ceph-mon is the cluster monitor daemon for the Ceph distributed file system. One or more instances of ceph-mon form a Paxos part-time parliament cluster that provides extremely reliable and durable storage of cluster membership, configuration, and state. Add ceph-mon to all nodes using the placement option:

[root@ceph-01 ~]# ceph orch apply mon --placement="ceph-01,ceph-02,ceph-03"
Scheduled mon update...
[root@ceph-01 ~]# ceph orch ps | grep mon
mon.ceph-01 ceph-01 running (63m) 7m ago 63m 209M 2048M 16.2.6 02a72919e474 952d7
mon.ceph-02 ceph-02 running (27m) 7m ago 27m 104M 2048M 16.2.6 02a72919e474 f2d22
mon.ceph-03 ceph-03 running (25m) 7m ago 25m 104M 2048M 16.2.6 02a72919e474 bcc00
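As a side note, the placement can also be given as a bare daemon count instead of an explicit host list, in which case the orchestrator picks the hosts itself; a sketch of that form (not used in this walkthrough):

[root@ceph-01 ~]# ceph orch apply mon 3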

10. Deploy ceph-mgr (ceph manager daemon)
The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to provide additional monitoring and interfaces to external monitoring and management systems.

[root@ceph-01 ~]# ceph orch apply mgr --placement="ceph-01,ceph-02,ceph-03"
Scheduled mgr update...
[root@ceph-01 ~]# ceph orch ps | grep mgr
mgr.ceph-01.wgdjcn ceph-01 *:9283 running (64m) 8m ago 64m 465M - 16.2.6 02a72919e474 c58a64249f9b
mgr.ceph-02.rmltzq ceph-02 *:8443,9283 running (29m) 8m ago 29m 385M - 16.2.6 02a72919e474 36f7f6a02896
mgr.ceph-03.lhwjwd ceph-03 *:8443,9283 running (7s) 2s ago 6s 205M - 16.2.6 02a72919e474 c740f964b2de
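The active/standby split can also be confirmed from the CLI:

[root@ceph-01 ~]# ceph mgr stat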

11. Set labels on all nodes
The orchestrator supports assigning labels to hosts. Labels are free-form and have no particular meaning by themselves, and each host can have multiple labels. They can be used to specify the placement of daemons.

Set the osd-node label on all nodes:
[root@ceph-01 ~]# ceph orch host label add ceph-01 osd-node
[root@ceph-01 ~]# ceph orch host label add ceph-02 osd-node
[root@ceph-01 ~]# ceph orch host label add ceph-03 osd-node
Set the mon label on all nodes:
[root@ceph-01 ~]# ceph orch host label add ceph-01 mon
[root@ceph-01 ~]# ceph orch host label add ceph-02 mon
[root@ceph-01 ~]# ceph orch host label add ceph-03 mon
Set the mgr label on all nodes:
[root@ceph-01 ~]# ceph orch host label add ceph-01 mgr
[root@ceph-01 ~]# ceph orch host label add ceph-02 mgr
[root@ceph-01 ~]# ceph orch host label add ceph-03 mgr
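With these labels in place, placement specs can reference them instead of explicit host names; for example, the mon placement from step 9 could be expressed as follows (a sketch, not applied here):

[root@ceph-01 ~]# ceph orch apply mon --placement="label:mon"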

Now verify the Ceph cluster and confirm that everything is healthy.
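A final round of checks might look like this; the cluster should report HEALTH_OK and every daemon should show as running:

[root@ceph-01 ~]# ceph -s
[root@ceph-01 ~]# ceph health detail
[root@ceph-01 ~]# ceph orch ps
[root@ceph-01 ~]# ceph orch host ls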



#CEPH #Storage #CentOS8 #SDS
