A cluster is a set of servers located at a single site. The distinctive features of such servers (cluster nodes) are their shared location and the high data transfer speed between them. This article describes the requirements for cluster nodes.

The platform and a cluster node cannot be located on the same server. Technical support is not provided for VMmanager in this configuration.

Node homogeneity


We recommend that all nodes in a cluster be homogeneous in terms of network settings, routing, and software versions. This provides the best conditions for migrating virtual machines between cluster nodes.

The platform does not support clusters that contain both Red Hat-based (CentOS, AlmaLinux) and Debian-based nodes.

Virtualization support 


A KVM cluster node must support hardware virtualization at the CPU level. To check whether virtualization is supported on Intel (VT-x) or AMD (AMD-V) CPUs, run:

grep -P 'vmx|svm' /proc/cpuinfo
BASH

If the output is not empty, the CPU supports virtualization.

To use virtualization, make sure it is enabled in the server BIOS.
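The check above can be wrapped in a small helper that reads any cpuinfo-style text, which makes it easy to verify. This is a sketch; the vmx (Intel VT-x) and svm (AMD-V) flag names are standard.

```shell
# check_virt: read /proc/cpuinfo-style text from stdin and report whether
# the CPU advertises hardware virtualization (vmx = Intel VT-x, svm = AMD-V).
check_virt() {
    if grep -qE '\b(vmx|svm)\b'; then
        echo "supported"
    else
        echo "not supported"
    fi
}

# Check the live system:
check_virt < /proc/cpuinfo
```

Note that even when the flag is present, the /dev/kvm device only appears after virtualization is enabled in the server BIOS.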

Hardware requirements


The cluster node must be a physical server with the following characteristics:


              Minimum     Recommended
CPU           2.4 GHz     3 GHz
CPU cores     4           8
RAM           8 GB        16 GB
Disk space    1 TB        2 TB

Motherboard

We recommend using a server motherboard. A cluster node with a desktop motherboard may not work properly.

If you experience disk subsystem performance issues, enable the maximum performance power mode in the BIOS settings.

CPU

Intel and AMD processors with the x86_64 architecture are supported. ARM processors are not supported.

Disks and storage

When partitioning the disk, we recommend allocating the main volume to the root directory.

Before adding a node to an existing cluster, configure all storage used in the cluster on the server. LXD clusters require configuring a ZFS storage. Read more in LXD.

Software requirements


Operating system

The operating system (OS) requirements depend on the type of virtualization and cluster network configurations:


              KVM                             LXD

Switching     AlmaLinux 8.6, 8.7, 8.8, 8.9    Ubuntu 20.04

Routing       AlmaLinux 8.6, 8.7, 8.8, 8.9    —

IP fabric     AlmaLinux 8.6, 8.7, 8.8, 8.9    Ubuntu 20.04

Minor OS versions and OS kernel versions may differ between the cluster nodes.

Always use an unmodified OS in a minimal installation: without third-party repositories or pre-installed additional services.

To keep the system software homogeneous, we recommend updating the OS on the cluster nodes periodically.

AlmaLinux

For AlmaLinux versions below 8.8-3.el8, before adding a cluster node, follow the instructions in the Knowledge Base article Almalinux repositories GPG key validation error.

CentOS

Cluster nodes with CentOS 8 are not supported. If you have CentOS 8 installed, you can migrate to AlmaLinux 8 OS according to the instructions.

The CentOS 7 operating system:

  • is not supported for new product installations;
  • is supported for existing product installations until its EOL on June 30, 2024.

Software

For the cluster node to work correctly, do not change the default command prompt greeting in the .bashrc file.

For the platform to connect to a cluster node, the dmidecode and python3 packages must be installed on the node. If they are not included in the OS, install them manually.
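Before adding the node, you can verify that the required commands are present with a quick check — a sketch, assuming the package names match the command names:

```shell
# check_cmd: report whether a command is available on this node.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: missing"
    fi
}

# The platform needs dmidecode and python3 on the node:
for cmd in dmidecode python3; do
    check_cmd "$cmd"
done
```

On AlmaLinux, missing packages can be installed with `dnf install dmidecode python3`; on Ubuntu, with `apt install dmidecode python3`.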

Disabling SELinux service

The SELinux service is used as an additional security feature for the operating system. We recommend disabling the SELinux service, as it slows down the operation of the platform and may prevent its correct installation.

Once SELinux is disabled, the server resources will remain protected by the built-in discretionary access control service.

To disable SELinux:

  1. Check the service status:

    sestatus
    CODE
  2. If the reply contains enforcing or permissive:

    1. In the /etc/selinux/config file, set the SELINUX parameter to disabled:

      sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
      CODE
    2. Reboot the server.

OS update

After updating the OS on the cluster node, restart the libvirt service:

systemctl restart libvirtd
BASH

System time

The system time on the cluster node must be synchronized with the time on the server with the platform. You can use chrony software to synchronize the time.

For servers in OVH data center

When installing the OS through the OVH control panel, enable the Install original kernel option. Use the original OS kernel for the cluster servers to work correctly.

Network settings


The network configuration of the cluster nodes must meet the following requirements:

  • each server must have a unique hostname;
  • in clusters with the "Switching" and "Routing" configuration types, the server must have Internet access. This is required to download OS templates and software packages from external sources;
  • the IP address must be static, assigned to a physical interface of the server or to a VLAN, and set in the network interface configuration file (without using DHCP);
  • if IPv6 addresses are allocated to VMs in a cluster with the "Switching" configuration type, one of the addresses of the IPv6 network must be assigned to the node interface;
  • the default gateway must respond to ping;
  • network interface names must contain only Latin letters.

See the requirements to cluster nodes with two network interfaces in Main and additional network.

If the server is located in a Hetzner data center, we do not recommend using the vSwitch feature. This feature limits the total number of MAC addresses used by physical and virtual server interfaces to 32.

The platform does not manage VLAN connections if IP addresses are assigned to them. To add a host that uses an IP address assigned to a VLAN for the default route:

  1. Configure the bridge manually.
  2. When adding a node, enable the Do not configure network automatically option.

Example settings for OS AlmaLinux

Initial configuration

/etc/sysconfig/network-scripts/ifcfg-bond0.4

DEVICE="bond0.4"
VLAN="yes"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR=10.3.0.62
PREFIX=24
NETMASK=255.255.255.0
GATEWAY=10.3.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
CODE

/etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE="bond0"
TYPE="Bond"
BONDING_MASTER="yes"
ONBOOT="yes"
BOOTPROTO="none"
BONDING_OPTS="mode=active-backup"
CODE

/etc/sysconfig/network-scripts/ifcfg-enp1s0f0

TYPE=Ethernet
DEVICE=enp1s0f0
UUID=b47d0044-55ca-40d2-be0c-3d4a65a9985d
ONBOOT=yes
DEFROUTE=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
NAME="System enp1s0f0"
SLAVE=yes
MASTER=bond0
CODE

Configuration with a bridge

/etc/sysconfig/network-scripts/ifcfg-bond0.4

DEVICE="bond0.4"
VLAN="yes"
ONBOOT="yes"
BOOTPROTO="none"
BRIDGE="vmbr0"
CODE

/etc/sysconfig/network-scripts/ifcfg-vmbr0

DEVICE="vmbr0"
TYPE="Bridge"
ONBOOT="yes"
BOOTPROTO="none"
STP="no"
IPV6INIT="no"
IPADDR=10.3.0.62
PREFIX=24
NETMASK=255.255.255.0
GATEWAY=10.3.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
CODE

The MTU settings are made at the operating system (OS) level of the cluster node. The platform supports jumbo frames (frames with MTU over 1500 bytes).

/etc/hosts file

Make sure that the /etc/hosts file has an entry for the server in the format:

<server IP address> <server hostname>
CODE
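The presence of such an entry can be checked with a small helper — a sketch; the IP address and hostname below are hypothetical placeholders, use your server's actual values:

```shell
# hosts_has_mapping: read hosts-file text from stdin and check that the given
# hostname appears on a non-comment line.
hosts_has_mapping() {
    grep -vE '^[[:space:]]*#' | grep -qwE -- "$1"
}

# Example with placeholder values 192.0.2.10 / node1.example.com:
printf '192.0.2.10 node1.example.com node1\n' | hosts_has_mapping node1 \
    && echo "entry present"
```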

/etc/resolv.conf file

Make sure that the /etc/resolv.conf file has entries in the format:

nameserver <IP address of the DNS server>
CODE

If the IP address of the systemd-resolved local service (127.0.0.53) is specified as the DNS server, check that the DNS server addresses are specified in /etc/systemd/resolved.conf:

DNS=<servers list>
CODE
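The stub-resolver case can be detected with a small check — a sketch:

```shell
# uses_stub_resolver: read resolv.conf text from stdin and detect the
# systemd-resolved stub address 127.0.0.53.
uses_stub_resolver() {
    grep -qE '^nameserver[[:space:]]+127\.0\.0\.53'
}

if uses_stub_resolver < /etc/resolv.conf; then
    echo "stub resolver in use - set DNS= in /etc/systemd/resolved.conf"
fi
```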

Incoming connection settings

KVM virtualization

Allow incoming connections to the ports:

  • 22/tcp — SSH service;
  • 179/tcp, 4789/udp — Virtual networks (VxLAN);
  • 5900-6900/tcp — QEMU VNC, SPICE. If access is only provided through the server with VMmanager, the port range must be open to the network connecting the cluster nodes;
  • 16514/tcp — libvirt virtual machines management service;
  • 49152-49215/tcp — libvirt migration services.
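With firewalld (the AlmaLinux default), the list above can be opened as follows — a sketch that only prints the commands so you can review them before applying; adjust it if you use a different firewall:

```shell
# Ports required on a KVM cluster node (from the list above).
kvm_ports="22/tcp 179/tcp 4789/udp 5900-6900/tcp 16514/tcp 49152-49215/tcp"

# Print the firewalld commands; to apply, pipe the output to sh and then
# run: firewall-cmd --reload
for p in $kvm_ports; do
    echo "firewall-cmd --permanent --add-port=$p"
done
```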

LXD virtualization

Allow incoming connections to the ports:

  • 22/tcp — SSH service;
  • 179/tcp, 4789/udp — Virtual networks (VxLAN);
  • 8443/tcp — LXD container management service.