Set up VLANs
Unfortunately, we've only got one switch to work with, but at least it's gigabit and managed. I set up three VLANs: one to carry public network traffic, one for the first iSCSI subnet, and one for the second iSCSI subnet. Why two VLANs for iSCSI? I don't want broadcast traffic from one subnet interfering with the other, as there will be no default gateway on either subnet to assist with routing. Learning the VLAN configuration for your particular switch model is left as an exercise for the reader, but if you've got an HP ProCurve, they do provide some sample configurations.
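As a rough point of reference, a ProCurve VLAN setup looks something like the below. The VLAN IDs and port ranges here are assumptions for illustration; substitute your own.

    vlan 100
       name "Public"
       untagged 1-8
       exit
    vlan 110
       name "iSCSI-A"
       untagged 9-12
       exit
    vlan 120
       name "iSCSI-B"
       untagged 13-16
       exit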
Configure SAN
Thanks to Allan Hirt's (blog | twitter) book, I was able to plan my LUNs, and even then, as requirements changed, I had to redo them several times. On my SAN, an HP P2000 G3 iSCSI bundle with twelve 2.5" SAS disks, HP uses the term "volume" to refer to LUNs. Don't confuse this with the term volume used to describe a filesystem container on a partition. All told, we needed eight LUNs: quorum, MSDTC, two data volumes, two log volumes, and two backup volumes. I avoided the use of mountpoints to keep the configuration easily readable; in the SMB market, mountpoints are basically unheard of.
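For illustration, the layout came out something like this. The drive letters are hypothetical; the point is one LUN per purpose:

    Quorum   -> Q:
    MSDTC    -> M:
    Data 1   -> F:
    Data 2   -> G:
    Log 1    -> H:
    Log 2    -> I:
    Backup 1 -> J:
    Backup 2 -> K: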
Configure IP addressing
For Public, it's simple enough. For the Private (or heartbeat) network, we used a cross-over cable and 10.50.50.0/24. Then, to differentiate our iSCSI controllers, we used 10.10.10.0/24 and 10.10.20.0/24 for Controller A and Controller B, respectively. Those subnets are overzealous, but for the sake of supportability, I chose a commonly known subnet mask.
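If you'd rather script the addressing than click through adapter properties, netsh will do it. A minimal sketch; the interface names and host addresses are assumptions, and note there's deliberately no default gateway on the iSCSI or heartbeat subnets:

    rem Interface names below are assumptions; match them to your renamed NICs
    netsh interface ipv4 set address name="iSCSI-A" static 10.10.10.11 255.255.255.0
    netsh interface ipv4 set address name="iSCSI-B" static 10.10.20.11 255.255.255.0
    netsh interface ipv4 set address name="Heartbeat" static 10.50.50.11 255.255.255.0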
Configuring iSCSI
This is where things got heady and confusing. At this point there are about ten different IPs in the mix, plus those VLANs and a bunch of identically colored cables. When picking a target for discovery, go with the IP you know and love on Controller A. Remember to check Enable multi-path when adding a Target. It'll choose the default binding for the initiator (0.0.0.0), which might not be what we want, but it's a start; it's the sessions that really matter. Don't forget to switch to the Volumes and Devices tab and use Auto Configure to associate those LUNs.
Connected to a Quick Connect Target
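The discovery portion can also be done from the command line with iscsicli, if you want something repeatable. A sketch, assuming Controller A answers at a hypothetical portal address of 10.10.10.100:

    rem Register Controller A as a target portal on the default iSCSI port
    iscsicli AddTargetPortal 10.10.10.100 3260
    rem List the target IQNs the portal exposes, then log in to the one you want
    iscsicli ListTargets
    iscsicli QLoginTarget <target-iqn-from-the-list>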
Go ahead and add a second target portal for Controller B on the Discovery tab. Then make sure that the MPIO feature is installed and run mpclaim -n -i -a "" to claim all of the MPIO-enabled drives. You'll likely have to reboot after this. Note that my HP P2000 G3 didn't have a vendor-supplied DSM because it's so new, but the Microsoft DSM works just fine. Repeat on your second host and reboot. Now in the MPIO control panel (Windows 2008 R2 only), you should see MSFT2005iSCSIBusType_0x9 as a claimed device. That's about it for your visits to the MPIO tool.
Now let's confirm that we've got multiple paths, or add them explicitly if you need particular source and target IPs. On the Targets tab, click on Properties. Here you'll see the sessions connected to your SAN. It's the multi-pathing of these sessions that keeps things running.
The list of sessions in Target properties
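You can pull the same session list without the GUI; iscsicli will dump every active session, its connections, and the devices behind them:

    iscsicli SessionList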
Drill down further on a particular session by clicking on Devices. Here you'll see which LUNs are associated with that connection. If MPIO is correctly enabled, the MPIO button will be available.
The list of disks associated with a session
Click on the MPIO button after highlighting a LUN to look at its paths and policies.
MPIO settings on a device
You can see that I've got mine already configured with three paths at this point. I've already set the policy to Fail Over Only, and two of those paths have been set to Standby. To confirm each path's settings, highlight it and click on Details.
Details of an MPIO path
Now you can see what each connection is using for source and target. You can see from the example that it's explicitly set to use the Microsoft iSCSI Initiator, and its source is the second iSCSI NIC's IP hitting the third port on Controller A.
Now that seems like too much clicking, doesn't it? For my deployment, this was fast enough, and it reinforced the learning process visually. You can leverage the mpclaim command line tool to set these policies via scripts.
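A sketch of what that scripting looks like; the disk number here is an assumption you'd confirm from the first command's output:

    rem Show the MPIO-managed disks and their current load balance policies
    mpclaim -s -d
    rem Show the individual paths behind a specific disk (disk 0 is hypothetical)
    mpclaim -s -d 0
    rem Set that disk's policy to Fail Over Only (policy 1)
    mpclaim -l -d 0 1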
Open Disk Management, bring the disks Online, then Initialize them. On the same node, you can format and assign drive letters. After this, the failover cluster installation will perform the magic to make them clustered resources.
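Diskpart can script this step too; a sketch for a single LUN, where the disk number, label, and drive letter are assumptions (the 64K allocation unit is the common recommendation for SQL Server data volumes):

    rem Run inside diskpart, or feed it in with diskpart /s <scriptfile>
    select disk 1
    attributes disk clear readonly
    online disk
    convert gpt
    create partition primary
    format fs=ntfs unit=64K label="SQLData1" quick
    assign letter=F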
Installing the Failover Clustering Feature
Now it's a matter of running some wizards. At this point, the configuration passed validation successfully. Everything is wizard driven, so I recommend Allan Hirt's book, Pro SQL Server 2008 Failover Clustering from Apress. It's what you need to understand the theory behind the process, and it points out several pitfalls during the architecture process, something I had to learn ad hoc while running this last-minute project. And if you want to get perfect validation, you'll need to read Nic Cain's (blog | twitter) post on the hidden cluster adapter. If you're looking for more information, check out both Allan's and Nic's blogs; Nic's running a series on large cluster deployments right now.
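For what it's worth, Windows 2008 R2 also ships a FailoverClusters PowerShell module, so the validate-and-create steps the wizards walk through can be scripted. A minimal sketch with hypothetical node and cluster names:

    # Load the failover clustering cmdlets (Windows 2008 R2)
    Import-Module FailoverClusters
    # Run the same validation the wizard does
    Test-Cluster -Node SQLNODE1, SQLNODE2
    # Create the cluster; the name and static address here are assumptions
    New-Cluster -Name SQLCLUSTER01 -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.50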