File System Guide | Basic Ceph Administration

Ach.Chusnul Chikam
6 min read · Oct 25, 2021


The Ceph File System (CephFS) is a POSIX-compatible file system built on top of Ceph’s distributed object store, RADOS (Reliable Autonomic Distributed Object Storage). CephFS provides file access to a Ceph cluster and uses POSIX semantics wherever possible. For example, in contrast to many other common network file systems like NFS, CephFS maintains strong cache coherency across clients: the goal is for processes using the file system to behave the same whether they are on different hosts or on the same host. In some cases, however, CephFS diverges from strict POSIX semantics.

The Ceph File System has two primary components:

  1. Clients
    The CephFS clients perform I/O operations on behalf of applications using CephFS, such as ceph-fuse for FUSE clients and kcephfs for kernel clients. CephFS clients send metadata requests to an active Metadata Server. In return, the CephFS client learns of the file metadata and can begin safely caching both metadata and file data.
  2. Metadata Servers (MDS)
    The MDS does the following:
  • Provides metadata to CephFS clients.
  • Manages metadata related to files stored on the Ceph File System.
  • Coordinates access to the shared Ceph cluster.
  • Caches hot metadata to reduce requests to the backing metadata pool store.
  • Manages the CephFS clients’ caches to maintain cache coherence.
  • Replicates hot metadata between active MDS.
  • Coalesces metadata mutations to a compact journal with regular flushes to the backing metadata pool.

CephFS requires at least one Metadata Server daemon (ceph-mds) to run.

The diagram below shows the component layers of the Ceph File System.

The bottom layer represents the underlying core storage cluster components:
- Ceph OSDs (ceph-osd), where the Ceph File System data and metadata are stored.
- Ceph Metadata Servers (ceph-mds), which manage Ceph File System metadata.
- Ceph Monitors (ceph-mon), which maintain the master copy of the cluster map.

The Ceph Storage protocol layer represents the Ceph native librados library for interacting with the core storage cluster. The CephFS library layer includes the CephFS libcephfs library, which works on top of librados and represents the Ceph File System. The top layer represents the two types of Ceph clients that can access the Ceph File System.
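As an illustration of this layering, the libcephfs library can be driven directly from Python (via the python3-cephfs binding) without mounting the file system at all. This is a minimal sketch, not part of the original walkthrough: it assumes a running cluster, a default /etc/ceph/ceph.conf with client.admin credentials, and the /demo path is hypothetical.

```python
# Sketch: talking to CephFS through libcephfs (python3-cephfs binding),
# which itself sits on top of librados. Requires a reachable Ceph cluster.
import os
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')  # reads mons and keyring
fs.mount()                                             # attach to the file system

fs.mkdir('/demo', 0o755)                               # hypothetical test directory
fd = fs.open('/demo/hello.txt', os.O_CREAT | os.O_WRONLY, 0o644)
fs.write(fd, b'hello from libcephfs\n', 0)             # write at offset 0
fs.close(fd)

fd = fs.open('/demo/hello.txt', os.O_RDONLY)
print(fs.read(fd, 0, 32))                              # read back from offset 0
fs.close(fd)

fs.unmount()
fs.shutdown()
```

Anything written this way is immediately visible to kernel and FUSE clients of the same file system, since all of them converge on the same RADOS pools.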

The diagram below shows more details on how the Ceph File System components interact with each other.

Lab Environment:


  • A running Ceph cluster.
  • Installation of the Ceph Metadata Server daemons (ceph-mds).

Basic Administration:

1. Deploy Metadata Servers

Each CephFS file system requires at least one MDS. The cluster operator will generally use an automated deployment tool to launch the required MDS daemons as needed. In this case, ceph orch apply mds is used, orchestrated with the --placement option:

[root@ceph-01 ~]# ceph orch apply mds my-mds --placement="ceph-01,ceph-02,ceph-03"
Scheduled update...
[root@ceph-01 ~]# ceph orch ps | grep mds
ceph-01  running (28s)  19s ago  28s  11.8M  -  16.2.6  02a72919e474  f815be961426
ceph-02  running (26s)  20s ago  26s  15.8M  -  16.2.6  02a72919e474  74e1da7386d0
ceph-03  running (24s)  20s ago  24s  12.6M  -  16.2.6  02a72919e474  c9568108123f

Configure the Ceph keyrings for the MDS daemons:

[root@ceph-01 ~]# for i in {01..03}; do mkdir -p /var/lib/ceph/mds/ceph-$i; done
[root@ceph-01 ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-01/keyring --gen-key -n mds.ceph-01
creating /var/lib/ceph/mds/ceph-01/keyring
[root@ceph-01 ~]# chown -R ceph. /var/lib/ceph/mds/ceph-01
[root@ceph-01 ~]# ceph auth add mds.ceph-01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-01/keyring
added key for mds.ceph-01
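The keyring commands above only cover ceph-01. The same pattern would be repeated for the remaining hosts; a sketch for ceph-02 (ceph-03 is analogous):

```shell
# Generate a keyring for the ceph-02 MDS and register it with the cluster
ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-02/keyring --gen-key -n mds.ceph-02
chown -R ceph. /var/lib/ceph/mds/ceph-02
ceph auth add mds.ceph-02 osd "allow rwx" mds "allow" mon "allow profile mds" \
    -i /var/lib/ceph/mds/ceph-02/keyring
```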

2. Create Ceph File System

A Ceph file system requires at least two RADOS pools, one for data and one for metadata.

[root@ceph-01 ~]# ceph osd pool create cephfs_data 64
pool 'cephfs_data' created
[root@ceph-01 ~]# ceph osd pool create cephfs_metadata 64
pool 'cephfs_metadata' created
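The PG count chosen above (64) is only a starting value; in Pacific the pg_autoscaler module is enabled by default and may adjust it over time. Its view of the new pools can be inspected with:

```shell
# Show per-pool PG targets as computed by the pg_autoscaler
ceph osd pool autoscale-status
```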

Once the pools are created, you may enable the file system using the ceph fs new command. Once a file system has been created, one of your MDS daemons will enter the active state and the others will become standbys. Notice the available size:

[root@ceph-01 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 4 and data pool 3
[root@ceph-01 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph-01 ~]# ceph mds stat
cephfs:1 {0=my-mds.ceph-03.pryhgc=up:active} 2 up:standby
[root@ceph-01 ~]# ceph fs status cephfs
cephfs - 0 clients
RANK  STATE   MDS                    ACTIVITY     DNS   INOS  DIRS  CAPS
 0    active  my-mds.ceph-03.pryhgc  Reqs: 0 /s    10    13    12     0
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  96.0k  28.4G
  cephfs_data      data       0  28.4G
MDS version: ceph version 16.2.6 (ee28fb57e47e9f88813e24bbf4c14496ca299d31) pacific (stable)

3. Mount CephFS on a Ceph client

Access the Ceph client host, then install the required packages:

[root@ceph-01 ~]# ssh ceph-client
[root@ceph-client ~]# dnf -y install centos-release-ceph-pacific epel-release
[root@ceph-client ~]# dnf -y install ceph-fuse

Copy the related files from the Ceph cluster to the client for authentication:

[root@ceph-client ~]# scp ceph-01:/etc/ceph/ceph.conf /etc/ceph/
ceph.conf 100% 277 506.8KB/s 00:00
[root@ceph-client ~]# scp ceph-01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
ceph.client.admin.keyring 100% 151 330.5KB/s 00:00
[root@ceph-client ~]# chown ceph. /etc/ceph/ceph.*
[root@ceph-client ~]# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
[root@ceph-client ~]# chmod 600 admin.key

Mount the file system, then verify that it is mounted and shows the correct size:

[root@ceph-client ~]# mount -t ceph ceph-01:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@ceph-client ~]# df -hT
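Since ceph-fuse was installed earlier, the same file system can also be mounted through FUSE instead of the kernel client. A sketch, assuming the admin keyring was already copied to /etc/ceph as above:

```shell
# Mount CephFS via FUSE using client.admin from /etc/ceph
ceph-fuse -n client.admin /mnt
# A FUSE mount is released with fusermount rather than plain umount
fusermount -u /mnt
```

The kernel client generally performs better, while ceph-fuse is easier to update independently of the kernel; both speak to the same MDS and OSDs.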

Try writing a file to /mnt:

[root@ceph-client ~]# curl -o /mnt/Ubuntu-Focal.img
[root@ceph-client ~]# df -hT /mnt
[root@ceph-client ~]# ls -lh /mnt
total 542M
-rw-r--r--. 1 root root 542M Oct 25 15:30 Ubuntu-Focal.img
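Back on a cluster node, the written data should now be visible in the data pool. Two ways to confirm this:

```shell
# Per-pool usage; cephfs_data should now account for the ~542M image
ceph df
# The file is striped into RADOS objects (4 MB by default) in the data pool
rados -p cephfs_data ls | head
```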

On the Ceph Dashboard, File Systems will look like this:

4. Removing CephFS and Pools

To delete the CephFS and its pools, follow these steps. First, unmount /mnt on the client host:

[root@ceph-client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 891M 0 891M 0% /dev
tmpfs tmpfs 909M 0 909M 0% /dev/shm
tmpfs tmpfs 909M 8.6M 900M 1% /run
tmpfs tmpfs 909M 0 909M 0% /sys/fs/cgroup
/dev/mapper/cl-root xfs 35G 2.1G 33G 6% /
/dev/sda2 ext4 976M 198M 712M 22% /boot
/dev/sda1 vfat 599M 7.3M 592M 2% /boot/efi
tmpfs tmpfs 182M 0 182M 0% /run/user/0
tmpfs tmpfs 182M 0 182M 0% /run/user/1000
ceph-01:6789:/ ceph 29G 544M 28G 2% /mnt
[root@ceph-client ~]# umount /mnt

Back on the Ceph cluster node, stop the MDS daemons using the ceph orch command, then delete the CephFS:

[root@ceph-01 ~]# ceph orch ls | grep mds
mds.my-mds  3/3  3m ago  64m  ceph-01;ceph-02;ceph-03
[root@ceph-01 ~]# ceph orch stop mds.my-mds
Scheduled to stop on host 'ceph-01'
Scheduled to stop on host 'ceph-02'
Scheduled to stop on host 'ceph-03'
[root@ceph-01 ~]# ceph fs rm cephfs --yes-i-really-mean-it
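Stopping the daemons leaves the service spec in the orchestrator, so cephadm may restart them later. Assuming the service name mds.my-mds from the earlier ceph orch apply, the spec itself can be removed with:

```shell
# Remove the MDS service definition so cephadm stops managing these daemons
ceph orch rm mds.my-mds
```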

To delete the pools, execute the following; the pool name must be passed twice to confirm the removal:

[root@ceph-01 ~]# ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
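This error means pool deletion is disabled cluster-wide by default, as a safety measure. It can be enabled through the centralized configuration store:

```shell
# Allow pools to be deleted; consider setting it back to false afterwards
ceph config set mon mon_allow_pool_delete true
```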

If you get this error when deleting pools, enable the mon_allow_pool_delete option on the monitors first. The pools can then be deleted:

[root@ceph-01 ~]# ceph osd pool ls | grep cephfs
[root@ceph-01 ~]# ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
pool 'cephfs_data' removed
[root@ceph-01 ~]# ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
pool 'cephfs_metadata' removed


#CEPH #Storage #CentOS8 #SDS
