This article uses the terms "nodes" and "cluster" to refer to Ceph servers. These terms do not refer to VMmanager nodes and clusters.

Before connecting Ceph storage to the VMmanager cluster, you should pre-configure the Ceph cluster nodes. This article provides general information about the installation. It is recommended that you create a Ceph cluster according to the official documentation.

Before creating a cluster, make sure that the hardware meets the system requirements. It is recommended to use Ceph version 13.2.0 or later.

Requirements for cluster nodes


The cluster should include the following physical or virtual servers:

  • data server (OSD);
  • at least three monitor servers (MON);
  • administrative server (ADM);
  • monitoring service (MGR);
  • metadata server (MDS). It is required only if you use the CephFS file system (see the sketch below).
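
The example in this article does not deploy an MDS. If you plan to use CephFS, a minimal sketch, assuming the administrative server and node aliases configured later in this article, would be to add the metadata server with ceph-deploy:

  # hypothetical example: deploy the metadata server on the first cluster node
  ceph-deploy mds create ceph1
  CODE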

Servers must meet the following requirements:

  1. A server with the VMmanager platform cannot be used as a cluster node.
  2. It is not recommended to use VMmanager cluster nodes as Ceph cluster nodes: this places a high load on the server and complicates recovery in the event of a failure.
  3. It is recommended to use servers located in the same rack and the same network segment.
  4. It is recommended to use a high-speed network connection between the cluster nodes.
  5. The same operating system must be installed on all servers.
  6. Port 6789/TCP must be open on the monitor servers, and ports 6800 to 7300/TCP must be open on the data servers (see the example after this list).
  7. All cluster nodes must have an unmounted partition or disk for Ceph to use as storage.
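
The node preparation example below opens these ports through the predefined firewalld services (ceph-mon and ceph). As an alternative sketch, assuming firewalld and the public zone, the same port ranges can be opened explicitly:

  firewall-cmd --zone=public --add-port=6789/tcp --permanent
  firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
  firewall-cmd --reload
  CODE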

Example of cluster node preparation


This example describes how to create a cluster using the following servers:

  1. ceph-cluster-1 with IP address 172.31.245.51. Roles — MON, OSD, ADM, MGR.
  2. ceph-cluster-2 with IP address 172.31.246.77. Roles — MON, OSD.
  3. ceph-cluster-3 with IP address 172.31.246.82. Roles — MON, OSD.

 If the error "RuntimeError: NoSectionError: No section: 'ceph'" occurs while executing the ceph-deploy command, rerun the command.

All servers

  1. Install Ceph software:
    1. Execute the command:

      yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
      CODE
    2. Create the file /etc/yum.repos.d/ceph.repo and add the following lines to it:

      [ceph-noarch]
      name=Ceph noarch packages
      baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
      enabled=1
      gpgcheck=1
      type=rpm-md
      gpgkey=https://download.ceph.com/keys/release.asc
      CODE
    3. Execute the command:

      yum update
      CODE
  2. Install the NTP software. This prevents problems caused by clock drift between the nodes. See the sketch after this list for enabling the service.

    yum install ntp ntpdate ntp-doc
    CODE
  3. Create a ceph user and set the necessary permissions:

    useradd -d /home/ceph -m ceph
    passwd ceph
    echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    chmod 0440 /etc/sudoers.d/ceph
    CODE
  4. Create aliases for cluster nodes in /etc/hosts:

    172.31.245.51 ceph1.example.com ceph1
    172.31.246.77 ceph2.example.com ceph2
    172.31.246.82 ceph3.example.com ceph3
    CODE
  5. Add Ceph services to firewalld settings:

    firewall-cmd --zone=public --add-service=ceph-mon --permanent
    firewall-cmd --zone=public --add-service=ceph --permanent
    firewall-cmd --reload
    CODE
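
As a follow-up to step 2, this is a minimal sketch for keeping the system clocks synchronized, assuming CentOS 7 with systemd and the ntpd service name:

  # run the NTP daemon at boot and start it now
  systemctl enable ntpd
  systemctl start ntpd
  # check that time sources are reachable
  ntpq -p
  CODE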

Administrative server

  1. Install the ceph-deploy and python-setuptools packages:

    yum install ceph-deploy python-setuptools
    CODE
  2. Create SSH keys and copy them to all cluster nodes:

    ssh-keygen
    ssh-copy-id ceph@ceph1
    ssh-copy-id ceph@ceph2
    ssh-copy-id ceph@ceph3
    CODE
  3. Add lines to the file ~/.ssh/config:

    Host ceph1
      Hostname ceph1
      User ceph
    Host ceph2
      Hostname ceph2
      User ceph
    Host ceph3
      Hostname ceph3
      User ceph
    CODE
  4. Create the my-cluster directory for the configuration and ceph-deploy files and change to that directory:

    mkdir my-cluster
    cd my-cluster
    CODE
  5. Create the cluster configuration file:

    ceph-deploy new ceph1 ceph2 ceph3
    CODE

    ceph1, ceph2, ceph3 — the cluster nodes that act as monitor servers

    When using Ceph storage with a single cluster node, set "osd_crush_chooseleaf_type" to 0 in the ceph.conf configuration file.

  6. Add information about the cluster node network to the ceph.conf configuration file: 

    echo "public_network = 172.31.240.0/20" >> ceph.conf
    CODE
  7. Install Ceph on the cluster nodes:

    ceph-deploy install ceph1 ceph2 ceph3
    CODE
  8. Deploy the monitoring service (MGR):

    ceph-deploy mgr create ceph1
    CODE
  9. Create and initialize the monitor servers:

    ceph-deploy mon create-initial
    CODE
  10. Copy the configuration file and the admin keyring to the cluster nodes:

    ceph-deploy admin ceph1 ceph2 ceph3
    CODE
  11. Erase the /dev/sdb drives on the data servers:

    ceph-deploy disk zap ceph1 /dev/sdb
    ceph-deploy disk zap ceph2 /dev/sdb
    ceph-deploy disk zap ceph3 /dev/sdb
    CODE
  12. Add data servers to the cluster:

    ceph-deploy osd create --data /dev/sdb ceph1
    ceph-deploy osd create --data /dev/sdb ceph2
    ceph-deploy osd create --data /dev/sdb ceph3
    CODE

    /dev/sdb — storage device on the cluster node
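
After the data servers are added, you can verify that the cluster has assembled correctly. This is a minimal check, assuming it is run by the ceph user on a node that received the configuration files in step 10 (for example, ceph1):

  # show the overall cluster status; HEALTH_OK means the monitors and OSDs are working
  sudo ceph -s
  # list the OSDs and confirm that all of them are up and in
  sudo ceph osd tree
  CODE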