File System Guide | Basic Ceph Administration

  1. Clients
    The CephFS clients perform I/O operations on behalf of applications using CephFS, such as ceph-fuse for FUSE clients and kcephfs for kernel clients. CephFS clients send metadata requests to an active Metadata Server; in return, the client receives the file metadata and can begin safely caching both metadata and file data.
  2. Metadata Servers (MDS)
    The MDS does the following:
  • Provides metadata to CephFS clients.
  • Manages metadata related to files stored on the Ceph File System.
  • Coordinates access to the shared Ceph cluster.
  • Caches hot metadata to reduce requests to the backing metadata pool.
  • Manages the CephFS clients’ caches to maintain cache coherence.
  • Replicates hot metadata between active MDS daemons.
  • Coalesces metadata mutations to a compact journal with regular flushes to the backing metadata pool.
    Note that CephFS requires at least one Metadata Server daemon (ceph-mds) to run.

Lab Environment:


  • A running Ceph cluster.
  • Installation of the Ceph Metadata Server daemons (ceph-mds).

Basic Administration:

1. Deploy Metadata Servers

[root@ceph-01 ~]# ceph orch apply mds my-mds --placement="ceph-01,ceph-02,ceph-03"
Scheduled update...
[root@ceph-01 ~]# ceph orch ps | grep mds
ceph-01  running (28s)  19s ago  28s  11.8M  -  16.2.6  02a72919e474  f815be961426
ceph-02  running (26s)  20s ago  26s  15.8M  -  16.2.6  02a72919e474  74e1da7386d0
ceph-03  running (24s)  20s ago  24s  12.6M  -  16.2.6  02a72919e474  c9568108123f
[root@ceph-01 ~]# for i in {01..03}; do mkdir -p /var/lib/ceph/mds/ceph-$i; done
[root@ceph-01 ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-01/keyring --gen-key -n mds.ceph-01
creating /var/lib/ceph/mds/ceph-01/keyring
[root@ceph-01 ~]# chown -R ceph. /var/lib/ceph/mds/ceph-01
[root@ceph-01 ~]# ceph auth add mds.ceph-01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-01/keyring
added key for mds.ceph-01
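The keyring steps above are shown only for ceph-01 and must be repeated on each MDS host. A small sketch that renders the same command sequence per host so it can be reviewed or piped to a shell; the host suffixes 01..03 are this lab's names, so adjust for your cluster:

```python
# Hypothetical helper: render the manual MDS keyring steps for one host.
# The commands mirror the transcript above exactly.

def mds_keyring_commands(host: str) -> list[str]:
    """Return the shell commands that create and register an MDS keyring."""
    keyring = f"/var/lib/ceph/mds/ceph-{host}/keyring"
    name = f"mds.ceph-{host}"
    return [
        f"mkdir -p /var/lib/ceph/mds/ceph-{host}",
        f"ceph-authtool --create-keyring {keyring} --gen-key -n {name}",
        f"chown -R ceph. /var/lib/ceph/mds/ceph-{host}",
        f'ceph auth add {name} osd "allow rwx" mds "allow" '
        f"mon \"allow profile mds\" -i {keyring}",
    ]

for host in ("01", "02", "03"):
    print("\n".join(mds_keyring_commands(host)))
```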

2. Create Ceph File System

[root@ceph-01 ~]# ceph osd pool create cephfs_data 64
pool 'cephfs_data' created
[root@ceph-01 ~]# ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
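The pg_num of 64 used above is fine for a small lab. The common rule of thumb is to target roughly 100 PGs per OSD, divide by the pool's replica count, and round up to the next power of two (on recent releases the PG autoscaler can manage this for you). A sketch of that calculation, where the 3 OSDs and size 3 are assumptions about the lab cluster, not values taken from it:

```python
# Rule-of-thumb PG sizing: (OSDs * target_per_osd) / replica_size,
# rounded up to the next power of two.

def suggested_pg_num(osd_count: int, replica_size: int, target_per_osd: int = 100) -> int:
    raw = osd_count * target_per_osd / replica_size
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

print(suggested_pg_num(3, 3))  # 3*100/3 = 100 -> next power of two is 128
```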
[root@ceph-01 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 4 and data pool 3
[root@ceph-01 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph-01 ~]# ceph mds stat
cephfs:1 {0=my-mds.ceph-03.pryhgc=up:active} 2 up:standby
[root@ceph-01 ~]# ceph fs status cephfs
cephfs - 0 clients
RANK  STATE   MDS                    ACTIVITY    DNS  INOS  DIRS  CAPS
 0    active  my-mds.ceph-03.pryhgc  Reqs: 0 /s   10    13    12     0
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  96.0k  28.4G
  cephfs_data      data        0  28.4G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)
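When scripting checks against the file system, the one-line `ceph fs ls` output shown above is easy to parse (most ceph commands also accept `--format json`, which is more robust for automation). A small sketch that turns that exact line into a dict:

```python
import re

# Parse a line of `ceph fs ls` text output, e.g.
# "name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]"

def parse_fs_ls(line: str) -> dict:
    m = re.match(
        r"name: (?P<name>\S+), metadata pool: (?P<meta>\S+), "
        r"data pools: \[(?P<data>[^\]]*)\]",
        line,
    )
    if not m:
        raise ValueError(f"unrecognized line: {line!r}")
    return {
        "name": m.group("name"),
        "metadata_pool": m.group("meta"),
        "data_pools": m.group("data").split(),
    }

print(parse_fs_ls("name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]"))
```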

3. Mount CephFS on a Ceph client

[root@ceph-01 ~]# ssh ceph-client
[root@ceph-client ~]# dnf -y install centos-release-ceph-pacific epel-release
[root@ceph-client ~]# dnf -y install ceph-fuse
[root@ceph-client ~]# scp ceph-01:/etc/ceph/ceph.conf /etc/ceph/
ceph.conf 100% 277 506.8KB/s 00:00
[root@ceph-client ~]# scp ceph-01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
ceph.client.admin.keyring 100% 151 330.5KB/s 00:00
[root@ceph-client ~]# chown ceph. /etc/ceph/ceph.*
[root@ceph-client ~]# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
[root@ceph-client ~]# chmod 600 admin.key
[root@ceph-client ~]# mount -t ceph ceph-01:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@ceph-client ~]# df -hT
[root@ceph-client ~]# curl <image-url> -o /mnt/Ubuntu-Focal.img
[root@ceph-client ~]# df -hT /mnt
[root@ceph-client ~]# ls -lh /mnt
total 542M
-rw-r--r--. 1 root root 542M Oct 25 15:30 Ubuntu-Focal.img
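To make the kernel-client mount survive reboots, it can be listed in /etc/fstab. A sketch using this lab's paths (the secret file created above lives in root's home directory, i.e. /root/admin.key); `_netdev` delays the mount until networking is up:

```
# /etc/fstab entry for the CephFS kernel client
ceph-01:6789:/  /mnt  ceph  name=admin,secretfile=/root/admin.key,_netdev  0 0
```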

4. Removing CephFS and Pools

[root@ceph-client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 891M 0 891M 0% /dev
tmpfs tmpfs 909M 0 909M 0% /dev/shm
tmpfs tmpfs 909M 8.6M 900M 1% /run
tmpfs tmpfs 909M 0 909M 0% /sys/fs/cgroup
/dev/mapper/cl-root xfs 35G 2.1G 33G 6% /
/dev/sda2 ext4 976M 198M 712M 22% /boot
/dev/sda1 vfat 599M 7.3M 592M 2% /boot/efi
tmpfs tmpfs 182M 0 182M 0% /run/user/0
tmpfs tmpfs 182M 0 182M 0% /run/user/1000
ceph-01:6789:/ ceph 29G 544M 28G 2% /mnt
[root@ceph-client ~]# umount /mnt
[root@ceph-01 ~]# ceph orch ls | grep mds
mds.my-mds  3/3  3m ago  64m  ceph-01;ceph-02;ceph-03
[root@ceph-01 ~]# ceph orch stop mds.my-mds
Scheduled to stop on host 'ceph-01'
Scheduled to stop on host 'ceph-02'
Scheduled to stop on host 'ceph-03'
[root@ceph-01 ~]# ceph fs rm cephfs --yes-i-really-mean-it
[root@ceph-01 ~]# ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
[root@ceph-01 ~]# ceph config set mon mon_allow_pool_delete true
[root@ceph-01 ~]# ceph osd pool ls | grep cephfs
[root@ceph-01 ~]# ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
pool 'cephfs_data' removed
[root@ceph-01 ~]# ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
pool 'cephfs_metadata' removed




Ach.Chusnul Chikam
Cloud Consultant | RHCSA | RHCE in Red Hat OpenStack | Google Cloud ACE | AWS SAA