The virtual machine (VM) drive is a hard disk image. It can be located on local or network storage.

VMmanager supports the following VM drive formats:

  • RAW — contains raw or minimally processed data. In this format, the VM occupies as much disk space as was allocated to it during creation;
  • Qcow2 — the disk image format of the QEMU software. In this format, the disk space occupied by the VM depends on the actual amount of data in the machine (see the example below).
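
As an illustration, here is a minimal sketch of how both formats can be created and inspected with the qemu-img utility (the image names and size are arbitrary examples):

    # Create a 10 GB RAW image (a plain byte-for-byte disk image)
    qemu-img create -f raw example.raw 10G

    # Create a 10 GB Qcow2 image (grows as data is written to it)
    qemu-img create -f qcow2 example.qcow2 10G

    # Compare the virtual size with the disk space actually used
    qemu-img info example.raw
    qemu-img info example.qcow2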

When creating a cluster, you select the storages that will be used in it.

VMmanager supports the following types of storage:

  • File storage — the file system of a cluster node;
  • LVM — Logical Volume Manager. Allows you to use different areas of the same hard drive and/or areas from different hard drives as one logical volume;
  • Ceph — software-defined fault-tolerant distributed network storage. Allows you to set up a scalable cluster on several nodes;
  • ZFS — a file system supporting large data volumes;
  • Network LVM — LVM on a SAN. The cluster nodes work with the storage as a block device using the iSCSI protocol;
  • NAS — network storage that provides file-level access to data. NAS can be used to store VM images and linked clones.

Support for formats of each storage type in clusters with KVM virtualization:

Storage type     RAW     Qcow2
File storage     ✗       ✓
LVM              ✓       ✗
Ceph             ✓       ✗
Network LVM      ✓       ✗
NAS              ✗       ✓

✓ — supported, ✗ — not supported

In clusters with KVM virtualization, you can use file storage, LVM, Network LVM, NAS, or Ceph; in clusters with LXD virtualization — only ZFS. Several storages of different types can be connected to a cluster with KVM virtualization. For example, you can create a cluster with two file storages, one LVM storage, and one Ceph storage.

When a cluster is created, you choose one of the storages as the main one. This storage will be used by default for deploying VMs. If a cluster node loses access to the main storage, VMmanager will assign the "Corrupted" status to that node.

File storage


The file system of a cluster node is used as the storage. The size of the storage is limited to one partition of one disk.

When selecting this type of storage, you can specify:

  • storage name;
  • directory for the VM. Default is /vm;
  • directory for storing images and backups. Default is /image;
  • directory for storing operating systems (OS). Default is /share.

    Directory names on all nodes of the cluster must be the same.
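
Because the directory names must match on every node, you can create the default directories on each cluster node as follows (a minimal example using the default paths listed above):

    # Create the default storage directories on a cluster node
    mkdir -p /vm /image /share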

LVM


Logical Volume Manager (LVM) is a subsystem that allows you to use different areas of the same hard disk and/or areas from different hard disks as one logical volume.

The size of file systems of logical volumes is not limited to one disk, as the volume can be located on different disks and partitions.

Key LVM terms:

  • Physical Volume (PV) — a disk partition or an entire disk;
  • Volume Group (VG) — a set of physical volumes combined into one virtual device;
  • Logical Volume (LV) — a logical partition created within a volume group (see the sketch below).
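
For reference, here is a minimal sketch of how these three layers are created with the standard LVM tools (the device names are arbitrary examples; lvm0 matches the default volume group name used below):

    # Mark two disks as physical volumes (PV)
    pvcreate /dev/sdb /dev/sdc

    # Combine them into one volume group (VG) named lvm0
    vgcreate lvm0 /dev/sdb /dev/sdc

    # Carve a 20 GB logical volume (LV) out of the group
    lvcreate -L 20G -n vm_disk lvm0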

When selecting this type of storage, you can specify:

  • volume group name for the VM. Default is lvm0;
  • directory for storing images and backups. Default is /image;
  • directory for storing OS. Default is /share.

    The names of volume groups and directories for storage on all nodes of the cluster must be the same.

When adding a cluster node with LVM, VMmanager checks the VGs on the node and searches for a VG with the specified name:

  • if it finds one or more VGs with that name, the node will be added;
  • if no such VG is found, an error message will be shown and the node won't be added. You can check the VGs on a node in advance, as shown below.
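
To check in advance which volume groups exist on a node, you can use the standard LVM reporting commands (a hedged example; lvm0 is the default group name mentioned above):

    # List all volume groups on the node
    vgs

    # Show details of the expected group
    vgdisplay lvm0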

LVM storage supports only the RAW virtual disk image format. Read more about LVM in the official documentation.

Ceph


Ceph is a software-defined fault-tolerant distributed network storage.

With Ceph you can set up a scalable node cluster. If any drive, node or group of nodes fails, Ceph automatically recovers a copy of the lost data on other nodes. The data recovery process does not cause any downtime in the cluster operation.

Ceph provides various options for client access to data: a block device, a file system, or object storage. VMmanager supports RBD — a distributed block device with a kernel client and a QEMU/KVM driver. When RBD is used, virtual disks are split into several objects and stored in this form in the distributed Ceph storage (RADOS). RBD supports only the RAW virtual disk image format.

There are two ways to store data in Ceph RBD: replication and erasure coding. The chosen method determines the size of the data parts and the number of their copies.

During replication, copies of the incoming data — replicas — are created. Replicas are stored on different nodes of the cluster.

With erasure coding, the incoming data is divided into K parts of equal size. Additionally, M parts of the same size are created for data recovery. All parts are distributed among K+M nodes of the cluster — one part per node. The cluster maintains operability and data integrity as long as the number of failed nodes does not exceed M. For example, with K=4 and M=2, the data is spread across six nodes, and the cluster survives the failure of any two of them.
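
As a hedged illustration of the two schemes, this is roughly how replicated and erasure-coded pools are created with the standard Ceph tools (the pool names, placement group counts, and K/M values are arbitrary examples):

    # Replicated pool: each object is stored as 3 full copies
    ceph osd pool create rbd-replicated 64 64 replicated
    ceph osd pool set rbd-replicated size 3

    # Erasure-coded pool: data split into K=4 data parts plus M=2 recovery parts
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2
    ceph osd pool create rbd-erasure 64 64 erasure ec-4-2

    # Using RBD on an erasure-coded pool additionally requires enabling overwrites
    ceph osd pool set rbd-erasure allow_ec_overwrites true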

The functionality and operability of a Ceph cluster are maintained by the Ceph services:

  • MON — monitoring service;
  • OSD — storage service;
  • MDS — metadata server. Only necessary if CephFS is used.

In small clusters, one server may be used in two roles, for example, as both data storage and monitor. In large clusters, it is recommended to run the services on separate servers.

Ceph storage is used only for VM disks. VM images and backups are stored on the cluster node in the /image/ directory.

For more information on connecting Ceph, see the section Configuring Ceph RBD.

ZFS


ZFS is a file system combined with a logical volume manager. Advantages of ZFS:

  • support for large file and partition sizes;
  • files can be stored across several devices;
  • on-the-fly checksum verification and file encryption;
  • snapshot creation.

Read more about ZFS in the official documentation.

ZFS uses virtual data storage pools. The pool is created from virtual devices — physical disks or RAIDs.

VMmanager uses ZFS only in clusters with LXD virtualization. VM images are stored on the cluster node, while LXD containers with VMs and operating systems are stored in the ZFS pool. For more information about preparing a cluster node to work with ZFS, refer to LXD.

Before connecting the server to the cluster, configure the ZFS pool on the server:

  1. Before installing the OS, leave an unpartitioned area on the server disk.
  2. Install the zfsutils-linux utility:

    sudo apt install zfsutils-linux

  3. Create the ZFS pool:

    zpool create <zpool_name> <device>

    <zpool_name> — pool name

    <device> — partition name
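
After creating the pool, you can verify its state (a hedged example; use whatever pool name you chose above):

    # Confirm that the pool exists and is healthy
    zpool status
    zpool list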

Network LVM


To connect network LVM storage, VMmanager uses a SAN. A SAN (Storage Area Network) is a technology for connecting external storage devices to servers. Server operating systems work with the connected devices as if they were local.

When you add a storage to VMmanager, the platform automatically configures the block device: it creates a PV and a VG.
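
For context, this is roughly how a cluster node attaches a SAN block device over iSCSI with the standard open-iscsi tools (the portal address and target name are hypothetical examples; VMmanager performs this configuration itself):

    # Discover targets offered by the storage portal
    iscsiadm -m discovery -t sendtargets -p 192.168.1.100

    # Log in to a discovered target; its LUN appears as a local block device
    iscsiadm -m node -T iqn.2024-01.com.example:storage -p 192.168.1.100 --login

    # The new device can then be used for PV and VG creation
    lsblk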

NAS


Read more in the article NAS.