Server requirements for the cluster
A cluster is a set of servers located at a single site. Cluster nodes are distinguished by their shared location and the high-speed data links between them. This article describes the requirements for cluster nodes.
The platform and a cluster node can run on the same server only when the platform has been migrated to a VM in an HA cluster.
In all other cases, the platform and a cluster node cannot be located on the same server. Technical support for VMmanager is not provided for such a configuration.
Nodes homogeneity
We recommend that all nodes in a cluster be homogeneous in terms of network settings, routing, and software versions. This provides the best conditions for virtual machine migration between cluster nodes.
The platform does not support clusters with both Red Hat (CentOS, AlmaLinux) and Debian based OS nodes.
Virtualization support
A KVM cluster node must support hardware virtualization at the CPU level. To check whether an Intel or AMD CPU supports it, run:
grep -P 'vmx|svm' /proc/cpuinfo
If the output is not empty, the CPU supports virtualization.
To use virtualization, make sure it is enabled in the server BIOS.
Hardware requirements
The cluster node must be a physical server with the following characteristics:
| | Minimum | Recommended |
|---|---|---|
| CPU frequency | 2.4 GHz | 3 GHz |
| Number of cores | 4 | 8 |
| RAM | 8 GB | 16 GB |
| Disk space | 1 TB | 2 TB |
Motherboard
We recommend using a server motherboard. A cluster node with a desktop motherboard may not work properly.
If you experience disk subsystem performance problems, we recommend enabling the maximum-performance power mode in the BIOS settings.
CPU
Supported processors are Intel and AMD with x86_64 architecture. Processors with ARM architecture are not supported.
Disk and storages
When partitioning the disk, we recommend allocating most of the disk space to the root partition.
Before adding a node to an existing cluster, configure all storage used in the cluster on the server. LXD clusters require configuring a ZFS storage. Read more in LXD.
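As an illustration, a ZFS storage pool for LXD can be created on a dedicated disk before the node is added. The pool name `default` and the device `/dev/sdb` below are example values; use the names and devices configured for your cluster:

```shell
# Create an LXD storage pool backed by ZFS on an empty disk
# ("default" and "/dev/sdb" are example values)
lxc storage create default zfs source=/dev/sdb

# Verify the pool was created
lxc storage list
```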
Software requirements
Operating system
The operating system (OS) requirements depend on the type of virtualization and cluster network configurations:
| | KVM | LXD |
|---|---|---|
| Switching | AlmaLinux 8.6, 8.7, 8.8; CentOS 7 x64 | Ubuntu 20.04 |
| Routing | AlmaLinux 8.6, 8.7, 8.8; CentOS 7 x64 | - |
| IP fabric | AlmaLinux 8.6, 8.7, 8.8 | Ubuntu 20.04 |
Minor OS versions and OS kernel versions may differ between the cluster nodes.
Always use an unmodified OS in a minimal installation: without third-party repositories or pre-installed additional services.
Support for CentOS 8 ended in January 2022. Running a cluster node on a CentOS 8 server is not supported. You can migrate to AlmaLinux 8 according to the instructions.
Software
For the cluster node to work correctly, do not change the default command prompt greeting in the .bashrc file.
In order for the platform to connect to the cluster node, the dmidecode and python3 software packages must be installed on the node. If this software is not included in the OS, install it manually.
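For example, the packages can be installed with the OS package manager (package names may vary slightly between distributions):

```shell
# AlmaLinux / CentOS:
dnf install -y dmidecode python3

# Ubuntu (LXD clusters):
apt-get install -y dmidecode python3
```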
Disabling SELinux service
The SELinux service is used as an additional security feature for the operating system. We recommend disabling the SELinux service, as it slows down the operation of the platform and may prevent its correct installation.
Once SELinux is disabled, the server resources will remain protected by the built-in discretionary access control service.
To disable SELinux:
1. Check the service status:
   sestatus
2. If the output contains enforcing or permissive:
   1. In the /etc/selinux/config file, set the SELINUX parameter to disabled:
      sed -i s/^SELINUX=.*$/SELINUX=disabled/ /etc/selinux/config
   2. Reboot the server.
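As an optional intermediate step, SELinux can be switched to permissive mode immediately, without waiting for the reboot. Note that setenforce only changes the runtime mode; the change in /etc/selinux/config is still required for the setting to persist:

```shell
# Switch SELinux to permissive mode until the next reboot
setenforce 0

# Verify the current mode
sestatus
```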
OS update
After updating the OS on the cluster node, restart the libvirt service:
systemctl restart libvirtd
System time
The system time on the cluster node must be synchronized with the time on the server with the platform.
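One possible way to synchronize the time is the chrony service, a common choice on AlmaLinux; the platform does not mandate a specific NTP client:

```shell
# Install and start the chrony time synchronization service
dnf install -y chrony
systemctl enable --now chronyd

# Check the synchronization status
chronyc tracking
```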
For servers in OVH data center
When installing the OS through the OVH control panel, enable the Install original kernel option. Use the original OS kernel for the cluster servers to work correctly.
Network settings
The network configuration of the cluster nodes must meet the following requirements:
- each server must have a unique hostname;
- in clusters with the "Switching" and "Routing" configuration types, the server must have access to the Internet;
- the IP address must be assigned to a physical interface of the server or to a VLAN interface;
- the IP address must be static and set via the network interface configuration file (without using DHCP);
- if IPv6 addresses are allocated to VMs in a cluster with the "Switching" configuration type, one of the addresses of the IPv6 network must be assigned to the node interface;
- the default gateway must be reachable by the ping utility.
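These requirements can be checked with standard utilities; in this sketch the gateway address is extracted from the routing table:

```shell
# Show the current hostname (must be unique within the cluster)
hostnamectl status

# Ping the default gateway taken from the routing table
gw=$(ip route | awk '/^default/ {print $3; exit}')
ping -c 3 "$gw"
```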
See the requirements to cluster nodes with two network interfaces in Main and additional network.
If the server is located in a Hetzner data center, we do not recommend using the vSwitch feature. This feature limits the total number of MAC addresses used by physical and virtual server interfaces to 32.
The platform does not manage VLAN connections that have IP addresses assigned to them. To add a node that uses an IP address assigned to a VLAN for the default route:
- Configure the bridge manually.
- When adding a node, enable the Do not configure network automatically option.
Example settings for AlmaLinux OS
Initial configuration
/etc/sysconfig/network-scripts/ifcfg-bond0.4
DEVICE="bond0.4"
VLAN="yes"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR=10.3.0.62
PREFIX=24
NETMASK=255.255.255.0
GATEWAY=10.3.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
/etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE="bond0"
TYPE="Bond"
BONDING_MASTER="yes"
ONBOOT="yes"
BOOTPROTO="none"
BONDING_OPTS="mode=active-backup"
/etc/sysconfig/network-scripts/ifcfg-enp1s0f0
TYPE=Ethernet
DEVICE=enp1s0f0
UUID=b47d0044-55ca-40d2-be0c-3d4a65a9985d
ONBOOT=yes
DEFROUTE=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
NAME="System enp1s0f0"
SLAVE=yes
MASTER=bond0
Configuration with a bridge
/etc/sysconfig/network-scripts/ifcfg-bond0.4
DEVICE="bond0.4"
VLAN="yes"
ONBOOT="yes"
BOOTPROTO="none"
BRIDGE="vmbr0"
/etc/sysconfig/network-scripts/ifcfg-vmbr0
DEVICE="vmbr0"
TYPE="Bridge"
ONBOOT="yes"
BOOTPROTO="none"
STP="no"
IPV6INIT="no"
IPADDR=10.3.0.62
PREFIX=24
NETMASK=255.255.255.0
GATEWAY=10.3.0.1
DNS1=8.8.8.8
DNS2=8.8.4.4
The MTU settings are made at the operating system (OS) level of the cluster node. The platform supports jumbo frames (frames with MTU over 1500 bytes).
/etc/hosts file
Make sure that the /etc/hosts file has an entry for the server in the format:
<server IP address> <server hostname>
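For example (the address and hostname below are illustrative):

```
192.0.2.10 node1.example.com
```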
/etc/resolv.conf file
Make sure that the /etc/resolv.conf file has entries in the format:
nameserver <IP address of the DNS server>
If the IP address of the systemd-resolved local service (127.0.0.53) is specified as the DNS server, check that the DNS server addresses are specified in /etc/systemd/resolved.conf:
DNS=<servers list>
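To quickly check which DNS servers are configured, the nameserver entries can be filtered out of the file:

```shell
# List the configured DNS servers
grep '^nameserver' /etc/resolv.conf
```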
Incoming connection settings
KVM virtualization
Allow incoming connections to the ports:
- 22/tcp — SSH service;
- 179/tcp, 4789/udp — Virtual networks (VxLAN);
- 5900-6900/tcp — QEMU VNC, SPICE. If access is only provided through the server with VMmanager, the port range must be open to the network connecting the cluster nodes;
- 16514/tcp — libvirt virtual machines management service;
- 49152-49261/tcp — libvirt migration services.
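As an example, the ports listed above can be opened with firewalld, assuming firewalld manages the node's firewall; adjust the commands to the firewall tool actually in use:

```shell
# Open the ports required by a KVM cluster node
firewall-cmd --permanent --add-port=22/tcp
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --permanent --add-port=5900-6900/tcp
firewall-cmd --permanent --add-port=16514/tcp
firewall-cmd --permanent --add-port=49152-49261/tcp

# Apply the permanent rules
firewall-cmd --reload
```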
LXD virtualization
Allow incoming connections to the ports:
- 22/tcp — SSH service;
- 179/tcp, 4789/udp — Virtual networks (VxLAN);
- 8443/tcp — LXD container management service.
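For example, with ufw on Ubuntu (assuming ufw is the firewall in use on the node):

```shell
# Open the ports required by an LXD cluster node
ufw allow 22/tcp
ufw allow 179/tcp
ufw allow 4789/udp
ufw allow 8443/tcp
```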