High Availability

Our modern society depends heavily on information provided by computers over the network. Mobile devices amplified that dependency, because people can access the network any time from anywhere. If you provide such services, it is very important that they are available most of the time.

We can mathematically define the availability as the ratio of (A) the total time a service is capable of being used during a given interval to (B) the length of the interval. It is normally expressed as a percentage of uptime in a given year.

Table 1. Availability - Downtime per Year

  Availability %    Downtime per year
  99                3.65 days
  99.9              8.76 hours
  99.99             52.56 minutes
  99.999            5.26 minutes
  99.9999           31.5 seconds
  99.99999          3.15 seconds
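
A quick check of these numbers: a year has 365 × 24 × 60 = 525,600 minutes, so at 99.99% availability the allowed downtime is (1 − 0.9999) × 525,600 ≈ 52.56 minutes, matching the table.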

There are several ways to increase availability. The most elegant solution is to rewrite your software, so that you can run it on several hosts at the same time. The software itself needs to have a way to detect errors and do failover. This is relatively easy if you just want to serve read-only web pages. But in general this is complex, and sometimes impossible because you cannot modify the software yourself. The following solutions work without modifying the software:

  • Use reliable "server" components

Note
Computer components with the same functionality can have varying reliability numbers, depending on the component quality. Most vendors sell components with higher reliability as "server" components - usually at a higher price.
  • Eliminate single point of failure (redundant components)

    • use an uninterruptible power supply (UPS)

    • use redundant power supplies on the main boards

    • use ECC-RAM

    • use redundant network hardware

    • use RAID for local storage

    • use distributed, redundant storage for VM data

  • Reduce downtime

    • rapidly accessible administrators (24/7)

    • availability of spare parts (other nodes in a {pve} cluster)

    • automatic error detection (ha-manager)

    • automatic failover (ha-manager)

Virtualization environments like {pve} make it much easier to reach high availability because they remove the "hardware" dependency. They also support setting up and using redundant storage and network devices. So if one host fails, you can simply start those services on another host within your cluster.

Even better, {pve} provides a software stack called ha-manager, which can do that automatically for you. It is able to automatically detect errors and do automatic failover.

{pve} ha-manager works like an "automated" administrator. First, you configure what resources (VMs, containers, …​) it should manage. ha-manager then observes correct functionality, and handles service failover to another node in case of errors. ha-manager can also handle normal user requests which may start, stop, relocate and migrate a service.

But high availability comes at a price. High quality components are more expensive, and making them redundant at least doubles the costs. Additional spare parts increase costs further. So you should carefully calculate the benefits, and compare them with those additional costs.

Tip
Increasing availability from 99% to 99.9% is relatively simple. But increasing availability from 99.9999% to 99.99999% is very hard and costly. ha-manager has typical error detection and failover times of about 2 minutes, so you can get no more than 99.999% availability.

Requirements

  • at least three cluster nodes (to get reliable quorum)

  • shared storage for VMs and containers

  • hardware redundancy (everywhere)

  • hardware watchdog - if not available we fall back to the Linux kernel software watchdog (softdog)

  • optional hardware fencing devices

Resources

We call the primary management unit handled by ha-manager a resource. A resource (also called "service") is uniquely identified by a service ID (SID), which consists of the resource type and a type-specific ID, e.g.: vm:100. That example would be a resource of type vm (virtual machine) with the ID 100.

For now we have two important resource types - virtual machines and containers. One basic idea here is that we can bundle related software into such a VM or container, so there is no need to compose one big service from other services, as was done with rgmanager. In general, a HA enabled resource should not depend on other resources.
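
For example, putting an existing virtual machine with ID 100 under HA management could look like the following sketch (the subcommands are those of the ha-manager CLI described in this document; check ha-manager help for the exact syntax of your version):

    # add VM 100 as a HA managed resource
    ha-manager add vm:100

    # show the configured resources and the current HA status
    ha-manager config
    ha-manager status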

How It Works

This section provides a detailed description of the {PVE} HA manager internals. It describes how the CRM and the LRM work together.

To provide High Availability two daemons run on each node:

pve-ha-lrm

The local resource manager (LRM) controls the services running on the local node. It reads the requested states for its services from the current manager status file and executes the respective commands.

pve-ha-crm

The cluster resource manager (CRM) controls the cluster-wide actions of the services, processes the LRM results, and includes the state machine which controls the state of each service.

Note
Locks in the LRM & CRM
Locks are provided by our distributed configuration file system (pmxcfs). They are used to guarantee that each LRM is active and working. As an LRM only executes actions when it holds its lock, we can mark a failed node as fenced if we can acquire its lock. This lets us then recover the failed HA services securely, without the failed (but maybe still running) LRM interfering. All this gets supervised by the CRM, which currently holds the manager master lock.

Local Resource Manager

The local resource manager (pve-ha-lrm) is started as a daemon on boot and waits until the HA cluster is quorate and thus cluster wide locks are working.

It can be in three states:

  • wait for agent lock: the LRM waits for our exclusive lock. This is also used as an idle state if no service is configured.

  • active: the LRM holds its exclusive lock and has services configured.

  • lost agent lock: the LRM lost its lock; this means a failure happened and quorum was lost.

After the LRM gets into the active state, it reads the manager status file in /etc/pve/ha/manager_status and determines the commands it has to execute for the services it owns. For each command a worker gets started; these workers run in parallel and are limited to a maximum of 4 by default. This default setting may be changed through the datacenter configuration key "max_worker".

Note
Maximal Concurrent Worker Adjustment Tips
The default value of 4 maximal concurrent workers may be unsuited for a specific setup. For example, 4 live migrations may happen at the same time, which can lead to network congestion with slower networks and/or big (memory-wise) services. Ensure that no congestion happens even in the worst case, and lower the "max_worker" value if needed. On the contrary, if you have a particularly powerful, high-end setup you may also want to increase it.
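
As a sketch, the setting would go into the datacenter configuration; the exact key name and layout are assumptions based on the key named above, so verify them against the datacenter.cfg documentation for your version:

    # /etc/pve/datacenter.cfg (illustrative excerpt)
    max_worker: 2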

Each command requested by the CRM is uniquely identifiable by a UID. When the worker finishes, its result will be processed and written to the LRM status file /etc/pve/nodes/<nodename>/lrm_status. There the CRM may collect it and let its state machine - respectively the command's output - act on it.

The actions on each service between CRM and LRM are normally always synced. This means that the CRM requests a state uniquely marked by a UID, and the LRM then executes this action one time and writes back the result, which is also identifiable by the same UID. This is needed so that the LRM does not execute an outdated command. The exceptions are the stop and the error command; these two do not depend on the result produced and are always executed in the case of the stopped state, and once in the case of the error state.

Note
Read the Logs
The HA stack logs every action it takes. This helps to understand what happened in the cluster, and also why it happened. Here it is important to see what both daemons, the LRM and the CRM, did. You may use journalctl -u pve-ha-lrm on the node(s) where the service is, and the same command for pve-ha-crm on the node which is the current master.
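
For example (these are the systemd units named above):

    # on the node(s) where the service runs
    journalctl -u pve-ha-lrm

    # on the current CRM master node
    journalctl -u pve-ha-crm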

Cluster Resource Manager

The cluster resource manager (pve-ha-crm) starts on each node and waits there for the manager lock, which can only be held by one node at a time. The node which successfully acquires the manager lock gets promoted to the CRM master.

It can be in three states:

  • wait for agent lock: the CRM waits for our exclusive lock. This is also used as an idle state if no service is configured.

  • active: the CRM holds its exclusive lock and has services configured.

  • lost agent lock: the CRM lost its lock; this means a failure happened and quorum was lost.

Its main task is to manage the services which are configured to be highly available and to always try to bring them into the wanted state, e.g.: an enabled service will be started if it is not running; if it crashes, it will be started again. Thus it dictates to the LRM the wanted actions.

When a node leaves the cluster quorum, its state changes to unknown. If the current CRM can then secure the failed node's lock, the services will be 'stolen' and restarted on another node.

When a cluster member determines that it is no longer in the cluster quorum, the LRM waits for a new quorum to form. As long as there is no quorum the node cannot reset the watchdog. This will trigger a reboot after 60 seconds.

Configuration

The HA stack is well integrated into the Proxmox VE API2. So, for example, HA can be configured via ha-manager or the PVE web interface, both of which provide an easy-to-use tool.

The resource configuration file can be located at /etc/pve/ha/resources.cfg and the group configuration file at /etc/pve/ha/groups.cfg. Use the provided tools to make changes; there shouldn't be any need to edit them manually.
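
For illustration, a resources.cfg entry binding a VM resource to a group could look roughly like this (the sectioned layout is an assumption based on the usual pmxcfs configuration format; prefer the tools over hand-editing):

    vm: 100
        group mygroup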

Node Power Status

If a node needs maintenance, you should first migrate and/or relocate all services which need to keep running to another node. After that you can stop the LRM and CRM services. But note that the watchdog triggers if you stop them with active services.

Fencing

What Is Fencing

Fencing ensures that, on a node failure, the failed node is rendered unable to do any damage, and that no resource runs twice when it gets recovered from the failed node.

Configure Hardware Watchdog

By default, all watchdog modules are blocked for security reasons, as they are like a loaded gun if not correctly initialized. If you have a hardware watchdog available, remove its module from the blacklist and restart the watchdog-mux service.
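
As a hedged sketch of what this can look like on current installations (the module name iTCO_wdt is only an example, and the defaults file location may differ on your version):

    # /etc/default/pve-ha-manager (illustrative)
    # select the hardware watchdog module to use (default is softdog)
    WATCHDOG_MODULE=iTCO_wdt

    # restart the watchdog multiplexer so it picks up the change
    systemctl restart watchdog-mux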

Groups

A group is a collection of cluster nodes which a service may be bound to.

Group Settings

nodes

list of group node members

restricted

resources bound to this group may only run on nodes defined by the group. If no group node member is available the resource will be placed in the stopped state.

nofailback

the resource won’t automatically fail back when a more preferred node (re)joins the cluster.
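
For illustration, a groups.cfg entry using these settings might look roughly like this (the layout is an assumption based on the usual pmxcfs configuration format; use the provided tools to create and edit groups):

    group: mygroup
        nodes node1,node2,node3
        restricted 1
        nofailback 1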

Recovery Policy

There are two service recovery policy settings which can be configured specifically for each resource.

max_restart

maximal number of tries to restart a failed service on the current node. The default is set to one.

max_relocate

maximal number of tries to relocate the service to a different node. A relocation only happens after the max_restart value is exceeded on the current node. The default is set to one.

Note that the relocate count state will only reset to zero when the service had at least one successful start. That means if a service is re-enabled without fixing the error, only the restart policy gets repeated.
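
For example, allowing two restart attempts before a single relocation could look roughly like this as a resources.cfg excerpt (illustrative only; the values can also be set through the usual management tools):

    vm: 100
        max_restart 2
        max_relocate 1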

Error Recovery

If after all tries the service state could not be recovered, it gets placed into an error state. In this state the service won't get touched by the HA stack anymore. To recover from this state you should follow these steps (a command sketch follows the list):

  • bring the resource back into a safe and consistent state (e.g.: killing its process)

  • disable the HA resource to place it in a stopped state

  • fix the error which led to this failure

  • after you fixed all errors you may enable the service again
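
A minimal sketch of this cycle for a hypothetical resource vm:100, using the enable/disable operations described below (check ha-manager help for the exact syntax of your version):

    # put the resource into the stopped state so the HA stack leaves it alone
    ha-manager disable vm:100

    # ... investigate and fix the underlying problem ...

    # re-enable the resource once the cause is fixed
    ha-manager enable vm:100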

Service Operations

This is how the basic user-initiated service operations (via ha-manager) work.

enable

the service will be started by the LRM if not already running.

disable

the service will be stopped by the LRM if running.

migrate/relocate

the service will be moved to another node; migrate keeps the service running (live migration), while relocate stops it on the source and starts it again on the target node.

remove

the service will be removed from the HA managed resource list. Its current state will not be touched.

start/stop

start and stop commands can be issued to the resource-specific tools (like qm or pct); they will forward the request to the ha-manager, which will then execute the action and set the resulting service state (enabled, disabled).
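
For example, moving a HA managed VM to another node could look like this (targetnode is a placeholder node name; check ha-manager help for the exact syntax of your version):

    # keep the VM running while it moves (live migration)
    ha-manager migrate vm:100 targetnode

    # or stop it, move it, and start it again on the target
    ha-manager relocate vm:100 targetnode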

Service States

stopped

Service is stopped (confirmed by LRM)

request_stop

Service should be stopped. Waiting for confirmation from LRM.

started

Service is active and the LRM should start it ASAP if not already running.

fence

Wait for node fencing (service node is not inside quorate cluster partition).

freeze

Do not touch the service state. We use this state while we reboot a node, or when we restart the LRM daemon.

migrate

Migrate service (live) to other node.

error

Service disabled because of LRM errors. Needs manual intervention.