Block Device Guide | Basic Ceph Administration

Ach.Chusnul Chikam
5 min read · Oct 23, 2021


A block is a set length of bytes in a sequence, for example, a 512-byte block of data. Many blocks combined into a single file can be used as a storage device that you can read from and write to. Block-based storage interfaces are the most common way to store data on rotating media such as:

  • Hard drives
  • CD/DVD discs
  • Floppy disks
  • Traditional 9-track tapes

Ceph block devices are thin-provisioned, resizable, and store data striped across multiple Object Storage Devices (OSDs) in a Ceph storage cluster. Ceph block devices are also known as Reliable Autonomic Distributed Object Store (RADOS) Block Devices (RBDs).
Ceph block devices deliver high performance with infinite scalability to Kernel-based Virtual Machines (KVMs) such as the Quick Emulator (QEMU), and to cloud computing systems like OpenStack that rely on libvirt and QEMU to integrate with Ceph block devices. You can use the same storage cluster to operate the Ceph Object Gateway and Ceph block devices simultaneously. As a storage administrator, being familiar with Ceph's block device commands can help you effectively manage the Ceph cluster.

Prerequisites:

  • A running Ceph cluster.
  • Root-level access to the client node.

Basic Administration:

1. Creating a block device pool

Use the rbd help command to display help for a particular rbd command and its subcommands. You must create a pool before you can specify it as a source for block device images. To create an RBD pool, execute the following:

[root@ceph-01 ~]# ceph osd pool create my-pool 128
pool 'my-pool' created
[root@ceph-01 ~]# ceph osd pool application enable my-pool rbd
[root@ceph-01 ~]# rbd pool init -p my-pool
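
As mentioned above, rbd help prints the usage and options of any subcommand; for example, to see everything that rbd create accepts:

[root@ceph-01 ~]# rbd help create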

2. Creating a block device image

Before adding a block device to a node, create an image for it in the Ceph storage cluster. To create a block device image, execute the following command:

[root@ceph-01 ~]# rbd create data --size 10G --pool my-pool
[root@ceph-01 ~]# rbd ls -l my-pool
NAME SIZE PARENT FMT PROT LOCK
data 10 GiB 2

To retrieve information about an image within a pool, execute the following, replacing IMAGE_NAME with the name of the image and POOL_NAME with the name of the pool:

[root@ceph-01 ~]# rbd --image data  -p my-pool info
rbd image 'data':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 3963beb71cee
block_name_prefix: rbd_data.3963beb71cee
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Sat Oct 23 08:09:01 2021
access_timestamp: Sat Oct 23 08:09:01 2021
modify_timestamp: Sat Oct 23 08:09:01 2021
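
Most rbd subcommands also accept the shorter POOL_NAME/IMAGE_NAME spec, so the same information can be retrieved with:

[root@ceph-01 ~]# rbd info my-pool/data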

3. Resizing a block device image

Ceph block device images are thin provisioned. They do not actually use any physical storage until you begin saving data to them. However, they do have a maximum capacity that you set with the --size option. To increase or decrease the maximum size of a Ceph block device image:

[root@ceph-01 ~]# rbd resize --image data -p my-pool --size 12G
Resizing image: 100% complete...done.
[root@ceph-01 ~]#
[root@ceph-01 ~]# rbd ls -l my-pool
NAME SIZE PARENT FMT PROT LOCK
data 12 GiB 2
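
Growing an image is always safe, but shrinking one discards any data beyond the new size, so rbd requires an explicit flag for it. A minimal sketch of shrinking the same image back to 10 GiB (only do this if nothing on the image still needs that space):

[root@ceph-01 ~]# rbd resize --image data -p my-pool --size 10G --allow-shrink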

4. Mapping and mounting a block device image on a client host

On the Ceph client node, install required packages:

[root@ceph-01 ~]# ssh ceph-client
Last login: Sat Oct 23 07:58:58 2021 from 192.168.1.231
[root@ceph-client ~]# dnf -y install centos-release-ceph-pacific epel-release
[root@ceph-client ~]# dnf -y install ceph-common

Copy the required configuration and keyring files from a cluster node to the client node:

[root@ceph-client ~]# scp root@ceph-01:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
ceph.conf 100% 277 39.4KB/s 00:00
[root@ceph-client ~]#
[root@ceph-client ~]# scp root@ceph-01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
ceph.client.admin.keyring 100% 151 21.7KB/s 00:00
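
Copying the admin keyring is fine for a lab, but in production you would normally create a dedicated, pool-scoped client key on a cluster node and copy that instead. A sketch, assuming a new client named client.rbd (not part of the original walkthrough); the client would then pass --id rbd to its rbd commands:

[root@ceph-01 ~]# ceph auth get-or-create client.rbd mon 'profile rbd' osd 'profile rbd pool=my-pool' -o /etc/ceph/ceph.client.rbd.keyring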

Map the image:

[root@ceph-client ~]# rbd ls -p my-pool -l
NAME SIZE PARENT FMT PROT LOCK
data 12 GiB 2
[root@ceph-client ~]# rbd map -p my-pool data
/dev/rbd0
[root@ceph-client ~]# rbd showmapped
id pool namespace image snap device
0 my-pool data - /dev/rbd0
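
On older client kernels, rbd map can fail because the image was created with features (such as object-map, fast-diff, or deep-flatten) that the kernel RBD driver does not support. If that happens, one option is to disable the offending features and map again, for example:

[root@ceph-client ~]# rbd feature disable my-pool/data object-map fast-diff deep-flatten
[root@ceph-client ~]# rbd map -p my-pool data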

Format the device with XFS:

[root@ceph-client ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=16, agsize=196608 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=3145728, imaxpct=25
= sunit=16 swidth=16 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Discarding blocks...Done.

Mount the file system and verify that it is mounted with the correct size:

[root@ceph-client ~]# mount /dev/rbd0 /mnt
[root@ceph-client ~]# df -hT

Try writing a file to /mnt:

[root@ceph-client ~]# curl https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 -o /mnt/CentOS-8.qcow2
[root@ceph-client ~]#
[root@ceph-client ~]# df -hT /mnt
Filesystem Type Size Used Avail Use% Mounted on
/dev/rbd0 xfs 12G 802M 12G 7% /mnt
[root@ceph-client ~]# ls -lh /mnt
total 683M
-rw-r--r--. 1 root root 683M Oct 23 08:54 CentOS-8.qcow2

5. Removing a block device image

To delete block device images or pools, follow these steps. First, unmount /mnt on the client node.

[root@ceph-client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 600M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 38.4G 0 part
├─cl-root 253:0 0 34.4G 0 lvm /
└─cl-swap 253:1 0 4G 0 lvm [SWAP]
sr0 11:0 1 1024M 0 rom
rbd0 252:0 0 12G 0 disk /mnt
[root@ceph-client ~]# umount /mnt

Display the mapped block devices, then unmap the block device image:

[root@ceph-client ~]# rbd device list
id pool namespace image snap device
0 my-pool data - /dev/rbd0
[root@ceph-client ~]# rbd device unmap /dev/rbd/my-pool/data

To remove a block device image from a pool, execute the following, replacing IMAGE_NAME with the name of the image to remove and POOL_NAME with the name of the pool:

[root@ceph-01 ~]# rbd -p my-pool ls
data
[root@ceph-01 ~]# rbd rm -p my-pool data
Removing image: 100% complete...done.
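
Removing the image leaves the pool itself in place. If you also want to delete the pool, note that pool deletion is disabled by default and has to be allowed on the monitors first; a sketch, assuming you really intend to destroy my-pool and everything in it:

[root@ceph-01 ~]# ceph config set mon mon_allow_pool_delete true
[root@ceph-01 ~]# ceph osd pool delete my-pool my-pool --yes-i-really-really-mean-it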

#CEPH #Storage #CentOS8 #SDS


Ach.Chusnul Chikam

Cloud Consultant | RHCSA | CKA | AWS SAA | OpenStack Certified | OpenShift Certified | Google Cloud ACE | LinkedIn: https://www.linkedin.com/in/achchusnulchikam