Do I need a second domain for the client network besides the domain in the management network for the HCI cluster?

Hermann Stys
2021-05-18T14:45:21.487+00:00

Hi,
I am looking for an explanation of the correct procedure for a two-node HCI cluster (Windows Server 2019) when a domain is also used on the client network. In an HCI cluster, the two nodes normally join a domain through the management network to get the full feature set (e.g. live migration). Since the client network should normally not have access to the management network, I would like to know whether we should create an independent domain for the virtual machines in a separate forest, without a trust to the domain responsible for the cluster. Could such a domain be hosted only on the cluster nodes, or should its domain controller also be installed on a physical computer, as is done for the cluster's domain?

An answer would be really important to me, as I have not found the usual approach described in any of my books or on the internet. If a different approach is common here, please explain it, as I would like to avoid creating a second domain just to manage the virtual servers, personal computers, and users.
I would be very glad to receive an answer.

Best regards

Hermann

Azure Stack HCI

Accepted answer
  Trent Helms - MSFT, Microsoft Employee
    2021-05-20T12:30:32.553+00:00

    Hi Hermann,

    I hope I am able to answer your question. Ultimately, I do not see a need to create a separate domain for your virtual machines. When it is said that the VMs shouldn't have access to the management network, this refers to the network segment itself (such as the VLAN/subnet), not the actual domain. It is good practice to have a dedicated VLAN/subnet that you can use to access and manage your cluster nodes, but even this is not a hard-and-fast rule. You can create this separation in a couple of different ways:

    1. You can have a dedicated network card (such as an on-board 1 or 10 Gbps card) assigned to a dedicated VLAN/subnet for management purposes. Obviously, if this is also the network you want to use for Live Migration, a faster card is better. Your compute (i.e. VM) traffic can then be physically separated onto other NICs in the system, and your virtual switch would not need to share a virtual adapter with the host.
    2. You could use a converged setup. In this case, your virtual switch is bound to one or more physical NICs (using a Switch-Embedded Team) and a virtual adapter is created and shared with the host. You again separate your traffic based on VLAN assignments: assign the host virtual adapter to the 'management' VLAN and keep your VMs on other VLANs for their respective traffic (this could be one VLAN or several, depending on how you want to separate your traffic); see the sketch after this list.
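    If it helps, here is a minimal PowerShell sketch of the converged setup in option 2. The adapter names ("NIC1"/"NIC2"), switch name, VM name, and VLAN IDs (10 for management, 20 for VMs) are placeholders for illustration, not prescribed values:

    ```powershell
    # Create a Switch-Embedded Team (SET) from two physical NICs,
    # without automatically creating a host virtual adapter
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $false

    # Add a host virtual adapter and place it on the management VLAN
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" `
        -Access -VlanId 10

    # Keep VM traffic on its own VLAN(s)
    Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 20
    ```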

    With this said, I did not mention the storage traffic. As this is an S2D cluster, you also want two or more dedicated VLANs/subnets for your storage traffic. Depending on your setup, you may have this traffic on its own set of dedicated NICs (I see many customers create switchless storage networks by connecting the NICs directly between the nodes), or you can use a fully converged setup and have all three traffic types (management, compute, and storage) going over the same physical NICs, separating the traffic types by VLAN/subnet.
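    As a sketch of the converged storage variant (again with placeholder names, placeholder VLAN IDs 711/712, and assuming RDMA-capable physical NICs), the storage virtual adapters could be carved out of the same switch:

    ```powershell
    # Two host virtual adapters for SMB/storage traffic, one per storage VLAN
    Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB1" -Access -VlanId 711
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB2" -Access -VlanId 712

    # Pin each storage vNIC to one physical team member for predictable paths
    Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "NIC1"
    Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" -PhysicalNetAdapterName "NIC2"

    # Enable RDMA on the storage vNICs
    Enable-NetAdapterRDMA -Name "vEthernet (SMB1)","vEthernet (SMB2)"
    ```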

    I think the key is just keeping your traffic logically separated, but you can certainly still join your VMs to the same domain.
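    To verify the logical separation afterwards, the VLAN assignments can be checked per adapter (a quick sketch; run on each node):

    ```powershell
    # VLAN assignment of the host virtual adapters
    Get-VMNetworkAdapterVlan -ManagementOS

    # VLAN assignment of every VM network adapter on this host
    Get-VMNetworkAdapterVlan -VMName *
    ```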

    I hope this information is helpful!
    Trent

