Prerequisites

To follow this article, you will need:

  • Ubuntu 20.04 server set up on each node, including a root account and a non-root account with sudo privileges
  • Docker installed on all nodes; please see Installing Docker-CE for details (a quick check is shown below)
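If you are not sure whether Docker is already present, a minimal check along these lines (assuming the standard docker CLI and systemd service shipped by Docker-CE) can be run on every node:

# Confirm the Docker CLI is installed and the daemon is running
docker --version
systemctl is-active docker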


We will deploy Ceph via cephadm on 3 nodes, with the following configuration:


Role       IP             OS
Monitor    192.168.68.3   Ubuntu 20.04
datanode   192.168.68.4   Ubuntu 20.04
datanode   192.168.68.5   Ubuntu 20.04


Process

On each of the nodes, first edit /etc/hosts so that the hostnames resolve to the addresses above:

192.168.68.3 node1 
192.168.68.4 node2 
192.168.68.5 node3 
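To confirm that the names resolve and the nodes can reach each other (assuming ICMP is not blocked between them), a quick check can be run from any node:

# Each hostname should resolve and answer a single ping
ping -c 1 node1
ping -c 1 node2
ping -c 1 node3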


Set the hostname on each node by running the matching command on that node:

sudo hostnamectl set-hostname node1
sudo hostnamectl set-hostname node2
sudo hostnamectl set-hostname node3
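To confirm the change took effect, the current hostname can be displayed on each node:

# Show the static hostname that was just set
hostnamectl status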


Set up NTP on all of the nodes:

sudo apt-get install vim ntp ntpdate ntp-doc -y
sudo ntpdate 0.us.pool.ntp.org
sudo hwclock --systohc
sudo systemctl start ntp
sudo systemctl enable ntp
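Cephadm checks for working time synchronization during bootstrap, so it is worth verifying it now; the ntp package ships the ntpq utility (peer names will vary):

# List NTP peers; at least one server should show as reachable
ntpq -p
# Confirm the ntp service is running
systemctl status ntp --no-pager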

Append the following configuration to /etc/sysctl.conf:

kernel.pid_max = 4194303
fs.aio-max-nr = 1048576


Reload the sysctl parameters with:

sudo sysctl -p
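The values can be read back to confirm they are in effect:

# Both values should match what was added to /etc/sysctl.conf
sysctl kernel.pid_max fs.aio-max-nr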


Create a user ceph-admin (on every node) with passwordless sudo:

sudo useradd -d /home/ceph-admin -m ceph-admin -s /bin/bash
sudo passwd ceph-admin
echo "ceph-admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-admin
sudo chmod 0440 /etc/sudoers.d/ceph-admin
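A quick check that the sudoers entry works as intended:

# Should list the allowed commands for ceph-admin, including "NOPASSWD: ALL"
sudo -l -U ceph-admin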


Set up SSH access for this user. First, switch to ceph-admin on node1:

su - ceph-admin
cd $HOME

Generate a key pair and copy it to all of the nodes:

ssh-keygen
ssh-copy-id ceph-admin@node1 && ssh-copy-id ceph-admin@node2 && ssh-copy-id ceph-admin@node3
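Before continuing, confirm that key-based login works from node1; each command should print the remote hostname without asking for a password:

ssh ceph-admin@node2 hostname
ssh ceph-admin@node3 hostname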


Create a file ~/.ssh/config and add the following configuration:


Host *
   UserKnownHostsFile /dev/null
   StrictHostKeyChecking ask
   IdentitiesOnly yes
   ConnectTimeout 0
   ServerAliveInterval 300
Host node1
   Hostname node1
   User ceph-admin
   UserKnownHostsFile /dev/null
   StrictHostKeyChecking ask
   IdentitiesOnly yes
   ConnectTimeout 0
   ServerAliveInterval 300
Host node2
   Hostname node2
   User ceph-admin
   UserKnownHostsFile /dev/null
   StrictHostKeyChecking ask
   IdentitiesOnly yes
   ConnectTimeout 0
   ServerAliveInterval 300
Host node3
   Hostname node3
   User ceph-admin
   UserKnownHostsFile /dev/null
   StrictHostKeyChecking ask
   IdentitiesOnly yes
   ConnectTimeout 0
   ServerAliveInterval 300
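SSH expects this file to be readable only by its owner; tightening the permissions avoids a "Bad owner or permissions" error:

# Restrict the SSH client configuration to the ceph-admin user
chmod 600 ~/.ssh/config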


Ceph Initial Deployment


Now we are ready to deploy Ceph from node1 (192.168.68.3):

su - ceph-admin
sudo curl --silent --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm -o /usr/local/bin/cephadm
sudo chmod +x /usr/local/bin/cephadm
sudo /usr/local/bin/cephadm add-repo --release octopus
sudo /usr/local/bin/cephadm install
if [[ ! -d "/etc/ceph" ]]; then sudo mkdir -p /etc/ceph; fi
sudo /usr/local/bin/cephadm bootstrap --mon-ip 192.168.68.3

The output of the above will be similar to:

Output
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit ntp.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit ntp.service is enabled and running
Host looks OK
Cluster fsid: 49dd71cc-975c-11eb-821c-145e450203c1
Verifying IP 192.168.68.3 port 3300 ...
Verifying IP 192.168.68.3 port 6789 ...
Mon IP 192.168.68.3 is in CIDR network 192.168.68.0/24
Pulling container image docker.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network...
Creating mgr...
Verifying port 9283 ...
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/10)...
mgr not available, waiting (2/10)...
mgr not available, waiting (3/10)...
mgr not available, waiting (4/10)...
mgr not available, waiting (5/10)...
mgr not available, waiting (6/10)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 5...
Mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to to /etc/ceph/ceph.pub
Adding key to root@localhost's authorized_keys...
Adding host node1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling mgr prometheus module...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for Mgr epoch 13...
Mgr epoch 13 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

         URL: https://node1:8443/
        User: admin
    Password: c15qt8acjq

You can access the Ceph CLI with:

    sudo /usr/local/bin/cephadm shell --fsid 49dd71cc-975c-11eb-821c-145e450203c1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

    ceph telemetry on

For more information see:

    https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.


Log in to the Ceph shell (this actually drops us into a container), using the command given in the message above:

 sudo /usr/local/bin/cephadm shell --fsid 49dd71cc-975c-11eb-821c-145e450203c1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
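Inside the shell the full Ceph CLI is available; a quick status check (the exact health details will differ from cluster to cluster) looks like this:

# Overall cluster status: health, monitors, managers, OSDs
ceph -s
# Just the health summary
ceph health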


Exit from the container:

exit


Install the auxiliary Ceph packages (ceph-common) so that the ceph command is available directly on the host; the same packages will also be needed later on the other cluster nodes (node2, node3) in order to mount Ceph:

sudo cephadm install ceph-common
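To confirm that the client packages are installed and that the host can reach the cluster (using the keyring that bootstrap wrote to /etc/ceph):

# Print the installed client version
ceph -v
# Query the cluster directly from the host
sudo ceph status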

Copy the cluster's public SSH key to the root user on the other nodes:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3

or, alternatively, print the key:

cat /etc/ceph/ceph.pub

and append it to /root/.ssh/authorized_keys on node2 and node3.

Add the new nodes to Ceph via the orchestrator (orch):

sudo ceph orch host add node2
sudo ceph orch host add node3


Set the public network for our Ceph cluster:

sudo ceph config set mon public_network 192.168.68.0/24
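The setting can be read back to confirm it was applied:

# Should print the CIDR that was just configured
sudo ceph config get mon public_network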

Add the mon label to node1:

sudo ceph orch host label add node1 mon

List the hosts known to Ceph:

sudo ceph orch host ls

The output of the above would be similar to:

Output

HOST   ADDR   LABELS  STATUS
node1  node1  mon
node2  node2
node3  node3
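As a final sanity check (the exact service counts depend on what has started by this point), the overall cluster health can be inspected from node1:

# Overall cluster health and service summary
sudo ceph -s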


This completes the basic cluster bootstrapping. Please refer to the Ceph documentation for further configuration of volumes on the nodes.