Example: Failover Cluster in Which All Nodes Run Hyper-V
Applies To: Windows Server 2008
This topic provides diagrams showing an example of a failover cluster in which the nodes run Hyper-V. For examples of other designs, see Evaluating Failover Cluster Design Examples.
In this example, the fictitious company A. Datum needs to make available several client-server applications that run on two different operating systems. The company accomplishes this through server consolidation, that is, by running the applications in two virtual machines on a single physical server using Hyper-V. To increase the availability of the virtual machines and avoid making the physical server a single point of failure, A. Datum runs Hyper-V in the context of a two-node failover cluster. This means that either of two physical servers (the two nodes in the failover cluster) can run either or both of the virtual machines at any given time. The two nodes in the failover cluster use very similar hardware, run the same version of Windows Server 2008, and have exactly the same software updates (patches).
Important
For details about this design, see Design for a Failover Cluster in Which All Nodes Run Hyper-V and Requirements and Recommendations for Failover Clusters in Which All Nodes Run Hyper-V.
This topic illustrates the following:
Clustered instances of Hyper-V before failover
Clustered instances of Hyper-V when a node develops difficulties
Clustered instances of Hyper-V after failover
Clustered instances of Hyper-V before failover
When the failover cluster begins providing service, the clustered instances of the two virtual machines running in Hyper-V—called VMachine1 and VMachine2—are owned by Node 1. This is shown in the following diagram.
Node 1 uses a shared bus or iSCSI connection to the cluster storage and has ownership of any disks (or LUNs) assigned to VMachine1 or VMachine2. Node 1 also uses a network to send regular signals, called “heartbeat” signals, to Node 2, and receives heartbeat signals from Node 2. In this way, both nodes have a way of determining whether the other node is functioning and whether it is able to communicate through the network. In addition, both nodes are also on at least one network that connects them to clients and to the administrator of the cluster.
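The heartbeat exchange described above amounts to a timeout-based liveness check: each node notes when it last heard from its peer, and declares the peer unavailable after enough silence. The following is a conceptual Python sketch of that logic only, not the actual cluster network protocol; the interval and miss-threshold values are illustrative assumptions.

```python
import time

# Conceptual sketch of heartbeat-based liveness detection between two
# cluster nodes. The real failover cluster protocol is more involved;
# the interval and miss threshold here are illustrative assumptions.

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeats (assumed value)
MISSED_THRESHOLD = 5       # missed beats before declaring the peer down

class HeartbeatMonitor:
    def __init__(self, interval=HEARTBEAT_INTERVAL, threshold=MISSED_THRESHOLD):
        self.interval = interval
        self.threshold = threshold
        self.last_seen = time.monotonic()

    def record_heartbeat(self):
        """Called whenever a heartbeat arrives from the peer node."""
        self.last_seen = time.monotonic()

    def peer_is_down(self, now=None):
        """True once the peer has been silent for `threshold` intervals."""
        now = time.monotonic() if now is None else now
        return (now - self.last_seen) > self.interval * self.threshold

monitor = HeartbeatMonitor()
monitor.record_heartbeat()
print(monitor.peer_is_down())                       # peer just seen: False
print(monitor.peer_is_down(time.monotonic() + 10))  # 10 s of silence: True
```

Because each node runs such a check against the other, both sides can independently determine whether the peer is still functioning and reachable over the network.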
Clustered instances of Hyper-V when a node develops difficulties
At some point, Node 1 develops difficulties and nearly stops functioning. It loses the ability to send regular heartbeat signals across the network to Node 2. The following diagram shows the brief interval just after Node 1 stops sending the signals.
Clustered instances of Hyper-V after failover
Shortly after heartbeat signals stop arriving from Node 1, Node 2 begins an orderly process of taking over the functionality of VMachine1 and VMachine2. For each of the virtual machines, Node 2 brings resources (such as virtual disks) online in an orderly fashion, as specified (by the administrator) in the failover cluster configuration. If one resource depends on another resource being present, the dependent resource is brought online after the resource it depends on. Clients experience only a brief interruption in service, one that most users do not notice. The following diagram shows failover occurring.
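The dependency-ordered bring-online step described above is, in effect, a topological sort: every resource comes online only after the resources it depends on. The sketch below illustrates the ordering idea in Python; the resource names and dependency map are invented for illustration and are not taken from an actual cluster configuration.

```python
# Conceptual sketch: compute a bring-online order in which every
# resource comes up after the resources it depends on (a topological
# sort via depth-first traversal). Names and dependencies are
# illustrative only, not real cluster resource types.

def online_order(dependencies):
    """Return a bring-online order. `dependencies` maps each resource
    to the list of resources that must be online before it."""
    order, seen = [], set()

    def visit(resource):
        if resource in seen:
            return
        seen.add(resource)
        for dep in dependencies.get(resource, []):
            visit(dep)          # bring dependencies online first
        order.append(resource)

    for resource in dependencies:
        visit(resource)
    return order

# Hypothetical dependencies for one clustered virtual machine:
deps = {
    "VMachine1": ["VM1-Config", "Disk-LUN1"],
    "VM1-Config": ["Disk-LUN1"],
    "Disk-LUN1": [],
}
print(online_order(deps))
# The disk comes online first, then the configuration, then the VM.
```

In the cluster itself, the administrator expresses these relationships as resource dependencies in the failover cluster configuration, and the cluster service enforces the ordering automatically during failover.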
A similar process can be initiated by a system administrator for scheduled downtime. For example, if Node 1 is running correctly and is the current owner of VMachine1 and VMachine2, but software updates need to be applied to Node 1, the administrator can use the Failover Cluster Management snap-in to deliberately move VMachine1 and VMachine2 to Node 2 so that the software updates can be applied. Of course, when applying software updates to a cluster node, it is important to apply the same updates to the other cluster nodes as soon as possible. This helps ensure that all cluster nodes respond in the same way.
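The planned-maintenance procedure above follows a simple pattern: identify every clustered virtual machine the node currently owns, move each one to another node, and only then apply updates. The Python sketch below models that pattern conceptually; the function names and the ownership table are hypothetical stand-ins for actions the administrator performs through the Failover Cluster Management snap-in, not a real clustering API.

```python
# Conceptual sketch of draining a node before maintenance: move every
# clustered virtual machine off the node that will be patched. The
# ownership table and move function are hypothetical stand-ins for
# administrative actions in the Failover Cluster Management snap-in.

def drain_node(node, groups, move, current_owner):
    """Move every group currently owned by `node`; return what moved."""
    moved = []
    for group in groups:
        if current_owner(group) == node:
            move(group)        # planned move: orderly offline, then online
            moved.append(group)
    return moved

# Hypothetical starting state: Node1 owns both virtual machines.
owners = {"VMachine1": "Node1", "VMachine2": "Node1"}

def move_to_node2(group):
    owners[group] = "Node2"    # stand-in for a deliberate, planned move

moved = drain_node("Node1", list(owners), move_to_node2, owners.get)
print(moved)   # both VMs were moved
print(owners)  # Node2 now owns both; Node1 is free to be patched
```

Once the node is drained, updates can be applied and the virtual machines moved back, and the same procedure is then repeated for the other node so that both nodes carry identical updates.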
Additional references
Design for a Failover Cluster in Which All Nodes Run Hyper-V
Checklist: Failover Cluster in Which All Nodes Run Hyper-V (https://go.microsoft.com/fwlink/?LinkId=129123)