Why NLB?
When implementing Microsoft servers in your infrastructure, you will sometimes need Network Load Balancing (NLB). Services like the ISA proxy, through which all users connect with Single Sign-On to SharePoint and Exchange, are simply too important not to be redundant and scalable. Where Exchange mailbox servers can use Cluster Continuous Replication for redundancy and protection against database corruption, Exchange Client Access Servers (CAS) use NLB to achieve redundancy and scalability.
Unicast Mode
The most obvious and MS-preferred way to set up NLB is unicast mode. However, when you run the servers on VMware ESX it will not work straight away. Unicast mode relies on flooding of MS NLB packets: hardware servers will obey and flood your network, but a virtual server on VMware ESX will not. The virtual switch inside the ESX host stops the flooding because of a default setting, "Notify Switches = Yes". Turning this setting to "No" allows NLB unicast to work.
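For completeness, here is a minimal sketch of how that setting could be changed programmatically, assuming the pyVmomi Python bindings; the host name, credentials and vSwitch name are placeholders, and unticking the "Notify Switches" option in the vSwitch NIC teaming properties in the client GUI achieves exactly the same thing.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect straight to the ESX host (hostname, credentials and vSwitch name are placeholders).
context = ssl._create_unverified_context()
si = SmartConnect(host="esx01.example.local", user="root", pwd="***", sslContext=context)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    net_sys = hosts[0].configManager.networkSystem
    for vswitch in net_sys.networkInfo.vswitch:
        if vswitch.name != "vSwitch1":            # the vSwitch carrying the NLB NICs
            continue
        spec = vswitch.spec
        if spec.policy and spec.policy.nicTeaming:
            spec.policy.nicTeaming.notifySwitches = False   # "Notify Switches = No"
            net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)
finally:
    Disconnect(si)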
So NLB unicast mode works on hardware servers and on VMware ESX virtual servers. But what about the flooding of packets on your network: how severe is it? To find out, I installed two servers in NLB unicast mode and a client with a network sniffer (Wireshark; http://www.wireshark.org/). Sniffing for 5 minutes yielded 120,000 MS-NLB packets on the client's network interface. Now imagine all that traffic on every single interface in your server network; that is unacceptable.
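If you want to repeat that measurement without Wireshark, a rough Python equivalent using scapy looks like this (assuming scapy is installed and the script runs with capture privileges; the interface name and the cluster MAC are placeholders, substitute the cluster MAC of your own NLB setup):

from scapy.all import Ether, sniff

NLB_MAC = "02:bf:0a:00:00:0a"   # placeholder for the NLB cluster MAC address
IFACE = "eth0"                  # placeholder for the client's network interface

# Capture for five minutes and keep only frames destined for the cluster MAC.
frames = sniff(iface=IFACE, timeout=300,
               lfilter=lambda p: p.haslayer(Ether) and p[Ether].dst == NLB_MAC)
print(f"NLB frames seen in 5 minutes: {len(frames)}")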
Multicast Mode
Fortunately, there is also NLB multicast mode. This will not work straight away either. Once you've set your NLB cluster to multicast mode, you will not be able to reach the cluster's IP address (VIP). This is because NLB multicast uses a multicast MAC address with a unicast IP address. Whether you're on ESX or on hardware, your physical switch will refuse to learn this combination, because a multicast MAC address should obviously not have a unicast IP address. The solution is to force the physical switch to accept it by adding a static ARP entry, following these steps:
- On an NLB server, run: nlb ip2mac to find the multicast MAC address.
- On the physical switch (Cisco in this example), add the following line to the global config: arp <ip address> <mac address> ARPA (example: arp 10.0.0.10 0100.25fe.25fe ARPA). A small helper sketch for this line follows after these steps.
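Since I had to look up that mapping anyway, here is a small illustrative Python helper (my own convenience script, not part of NLB) that derives the multicast MAC the way plain, non-IGMP multicast mode does it, namely 03-BF followed by the four octets of the VIP, and prints the matching Cisco ARP line. Treat the output of nlb ip2mac on the server as authoritative if the two differ.

def nlb_multicast_mac(vip: str) -> str:
    # Plain multicast mode derives the cluster MAC as 03-BF-w-x-y-z,
    # where w.x.y.z are the octets of the cluster IP (VIP).
    octets = [int(o) for o in vip.split(".")]
    hexstr = "".join(f"{b:02x}" for b in [0x03, 0xBF] + octets)
    # Cisco dotted-triplet notation: xxxx.xxxx.xxxx
    return f"{hexstr[0:4]}.{hexstr[4:8]}.{hexstr[8:12]}"

vip = "10.0.0.10"
print(f"arp {vip} {nlb_multicast_mac(vip)} ARPA")
# prints: arp 10.0.0.10 03bf.0a00.000a ARPA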
You will now be able to reach your cluster IP address. Check the network with Wireshark and it will be a calm sea: no flooding there.
Conclusion
So we might as well conclude: always run NLB in multicast mode. It can't be a coincidence that the latest Service Pack for ISA 2006 fixes NLB multicast mode support. Probably the only real alternative is creating a dedicated VLAN for the unicast storm and placing your NLB cluster interfaces in it.
A note on additional VIPs for ISA arrays
On an ISA NLB array it is very common to have multiple virtual IP addresses. Although the nlb ip2mac command can show you the MAC address of any given IP, the MAC address for any additional VIP is the same as that of the primary VIP. So on your physical switch you will have to add ARP entries mapping the different IP addresses to that same MAC address, as in the sketch below.
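A tiny extension of the earlier sketch makes this concrete: every VIP in the array gets its own static ARP line, all pointing at the primary VIP's MAC (the addresses and MAC below are made-up examples; take the real MAC from nlb ip2mac).

primary_mac = "03bf.0a00.000a"                   # MAC reported for the primary VIP
vips = ["10.0.0.10", "10.0.0.11", "10.0.0.12"]   # primary VIP plus additional VIPs
for vip in vips:
    # One static ARP entry per VIP, all sharing the primary VIP's MAC address.
    print(f"arp {vip} {primary_mac} ARPA")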