Friday, July 15, 2011
At my local PASS chapter meeting last Wednesday, I discovered I was behind on the latest news: SQL Server 2008 R2 Service Pack 1 had been released! Having just done a cluster deployment and three internal upgrades, I had to get a deployment plan in order.
Thankfully, before diving in, I caught up on the latest from Microsoft Release Services. Apparently the RTM of SP1 is essentially Cumulative Updates 1 through 6. Really? I'm ahead of the curve? In my zeal for fresh deployments (and for a customer's requirements), I had slipstreamed Cumulative Update 8 into all of my recent deployments. Those changesets don't get included until Service Pack 1 Cumulative Update 1. That certainly saves me some time!
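For reference, slipstreaming a CU into SQL Server 2008 R2 setup looks roughly like the sketch below. The KB number and paths are placeholders rather than the actual CU8 package name, so treat this as the shape of the command, not a copy-paste recipe.

    # Extract the CU package to a folder (KB number and paths are placeholders)
    .\SQLServer2008R2-KB9999999-x64.exe /x:C:\SQLCU8
    # Launch the base setup with the CU slipstreamed in via /CUSource
    D:\setup.exe /Action=Install /CUSource=C:\SQLCU8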
Now if I could only find time to finish deploying SSRS internally.
Wednesday, July 13, 2011
MPIO, iSCSI, and the Cluster
I've been working away at a Windows Server 2008 R2-based cluster for the eventual deployment of two SQL Server 2008 R2 instances. I've dug, and I've dug, and I've dug, but I couldn't find a good order of configuration for getting the basics of an iSCSI SAN configured for multi-pathing, so here's what I've experienced.
Set Up VLANs
Unfortunately, we've only got one switch to work with, but at least it's gigabit and managed. I set up three VLANs: one to carry public network traffic, one for the first iSCSI subnet, and one for the second iSCSI subnet. Why two VLANs for iSCSI? I don't want broadcast traffic from one subnet interfering with the other, as there will be no default gateway on either subnet to assist with routing. Learning the VLAN configuration for your particular switch model is left as an exercise for the reader, but if you've got an HP ProCurve, HP does provide some sample configurations.
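As a minimal sketch, the three VLANs on a ProCurve-style CLI might look something like this; the VLAN IDs, names, and port ranges below are assumptions for illustration, not my production values.

    vlan 10
       name "Public"
       untagged 1-8
       exit
    vlan 20
       name "iSCSI-A"
       untagged 9-12
       exit
    vlan 30
       name "iSCSI-B"
       untagged 13-16
       exit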
Configure SAN
Thanks to Alan Hirt's (blog | twitter) book, I was able to plan my LUNs, and even then, as requirements changed, I had to redo them several times. For my SAN, an HP P2000 G3 iSCSI bundle with twelve 2.5" SAS disks, HP uses the term "volume" to refer to LUNs. Don't confuse this with the term volume used to describe a filesystem container on a partition. All told, we needed eight LUNs: quorum, MSDTC, two data volumes, two log volumes, and two backup volumes. I avoided the use of mountpoints to keep the configuration easily readable; in the SMB market, mountpoints are basically unheard of.
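Laid out for the two instances, the eight LUNs break down roughly as follows; the names and per-instance mapping are illustrative, and sizes are omitted since they depend entirely on your workload.

    Quorum    witness disk for the cluster
    MSDTC     distributed transaction coordinator
    Data1     data files, instance 1
    Log1      log files, instance 1
    Backup1   backups, instance 1
    Data2     data files, instance 2
    Log2      log files, instance 2
    Backup2   backups, instance 2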
Configure IP addressing
For Public, it's simple enough. For the Private (or heartbeat) network, we used a cross-over cable and 10.50.50.0/24. Then, to differentiate our iSCSI controllers, we used 10.10.10.0/24 and 10.10.20.0/24 for Controller A and Controller B, respectively. Those subnets are overly generous, but for the sake of supportability, I chose a commonly known subnet mask.
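Scripted, assigning the iSCSI addresses on a node might look like the sketch below; the interface names and host addresses are assumptions for illustration. Note that no default gateway is set on either iSCSI subnet, per the design above.

    # Hypothetical NIC names and host addresses -- adjust to your hardware
    netsh interface ipv4 set address name="iSCSI-A" static 10.10.10.11 255.255.255.0
    netsh interface ipv4 set address name="iSCSI-B" static 10.10.20.11 255.255.255.0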
Configure iSCSI
This is where things got heady and confusing. At this point there are about ten different IPs in the mix, plus those VLANs and a bunch of identically colored cables. When picking a target for discovery, go with the IP you know and love on Controller A. Remember to check Enable Multi-Path when adding a Target. It'll choose the default binding for the initiator (0.0.0.0), which might not be what we want, but it's a start; it's those sessions that really matter. Don't forget to switch to the Volumes and Devices tab and use Auto Configure to associate those LUNs.
Connected to a Quick Connect Target
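If you'd rather script the discovery than click through it, the built-in iscsicli tool covers the same ground; a minimal sketch, with a hypothetical portal address for Controller A:

    # Portal IP is a placeholder; 3260 is the default iSCSI port
    iscsicli AddTargetPortal 10.10.10.1 3260
    # List the target IQNs the portal exposes
    iscsicli ListTargets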
Go ahead and add a second target for Controller B on the Discovery tab. Then make sure that the MPIO feature is installed and run mpclaim -n -i -a to claim all of the MPIO-enabled drives. You'll likely have to reboot after this. Note that my HP P2000 G3 didn't have a vendor-supplied DSM because it's so new, so the Microsoft DSM will have to do, and it works just fine. Repeat on your second host and reboot. Now, in the MPIO control panel (Windows Server 2008 R2 only), you should see MSFT2005iSCSIBusType_0x9 as a claimed device. That's about it for your visits to the MPIO tool.
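As a command-line sketch of that step on Windows Server 2008 R2, using the ServerManager module's feature name and the same mpclaim switches from the paragraph above:

    # Install the MPIO feature
    Import-Module ServerManager
    Add-WindowsFeature Multipath-IO
    # Claim iSCSI-attached disks for the Microsoft DSM;
    # -n suppresses the automatic reboot, so reboot manually when convenient
    mpclaim -n -i -a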
Now let's confirm that we've got multiple paths, or add them explicitly if you need particular source and target IPs. On the Targets tab, click Properties. Here you'll see the sessions connected to your SAN; it's the multi-pathing of those sessions that keeps things running.
The list of sessions in Target properties
Drill down further on a particular session by clicking on Devices. Here you'll see what LUNs are associated with that connection. If MPIO is correctly enabled, the MPIO button will be available.
The list of disks associated with a session
Click the MPIO button after highlighting a LUN to look at its paths and policies.
MPIO settings on a device
You can see that I've got mine already configured with three paths at this point. I've already set the policy to Fail Over Only, and two of those paths have been set to Standby. To confirm each path's settings, highlight it and click Details.
Details of an MPIO path
Now you can see what each connection is using for source and target. You can see from the example that it's explicitly set to use the Microsoft iSCSI Initiator, and its source is explicitly set to the second iSCSI NIC's IP hitting the third port on Controller A.
Now that seems like too much clicking, doesn't it? For my deployment, this was fast enough, and it reinforced the learning process visually. You can leverage the mpclaim command-line tool to set these policies via scripts.
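A rough sketch of the scripted equivalent; the disk number is hypothetical, and 1 is mpclaim's numeric code for the Fail Over Only policy.

    # Show MPIO disks and their current load-balance policies
    mpclaim -s -d
    # Set MPIO disk 0 (placeholder number) to Fail Over Only (policy 1)
    mpclaim -l -d 0 1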
Open Disk Management, bring the disks Online, then Initialize them. On the same node, you can format them and assign drive letters. After this, the failover cluster installation will perform the magic to make them clustered resources.
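Scripted, one disk's worth of that looks roughly like the diskpart script below: save it as disk1.txt and run diskpart /s disk1.txt. The disk number, label, and drive letter are placeholders, and the 64K allocation unit is just a common choice for SQL Server data volumes.

    rem disk1.txt -- placeholders throughout
    select disk 1
    attributes disk clear readonly
    online disk
    create partition primary
    format fs=ntfs label="Data1" unit=64K quick
    assign letter=M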
Installing the Failover Clustering Feature
Now it's a matter of running some wizards. At this point, the configuration passed validation successfully. Everything is wizard-driven, so I recommend Alan Hirt's book, Pro SQL Server 2008 Failover Clustering from Apress. It's what you need to understand the theory behind the process, and it points out several pitfalls in the architecture process, something I had to learn ad hoc while running this last-minute project. And if you want to get perfect validation, you'll need to read Nic Cain's (blog | twitter) post on the hidden cluster adapter. If you're looking for more information, check out both Alan's and Nic's blogs; Nic's running a series on large cluster deployments right now.
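For completeness, the wizard steps have PowerShell equivalents on Windows Server 2008 R2; here's a minimal sketch with hypothetical node names, cluster name, and address.

    # Install the Failover Clustering feature on each node
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering
    # Validate the configuration, then create the cluster (names and IP are placeholders)
    Import-Module FailoverClusters
    Test-Cluster -Node SQLNODE1, SQLNODE2
    New-Cluster -Name SQLCLU01 -Node SQLNODE1, SQLNODE2 -StaticAddress 192.168.1.50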
Tuesday, July 12, 2011
The Joys of SSL
So you just upgraded to Exchange 2010, and you bought and installed your UCC certificate. Things look great, right? But that was five months ago.
The SharePoint team called, and they need an SSL certificate too. So can you use a UCC certificate? Yes, but it requires a lot more work. Oh, and the accounting department's web application? That can't handle Subject Alternative Names.
Time to call the aforementioned accounting department and get approval for purchasing that wildcard certificate. But the question is: do we just deploy it, or do we set up Active Directory Certificate Services?
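If the wildcard wins out, the CSR side is scriptable with the built-in certreq tool: save an INF like the sketch below as wildcard.inf and run certreq -new wildcard.inf wildcard.req. Every value here (subject, key length) is a placeholder, not a recommendation.

    ; wildcard.inf -- all values are placeholders
    [Version]
    Signature="$Windows NT$"
    [NewRequest]
    Subject = "CN=*.example.com, O=Example Corp, C=US"
    KeyLength = 2048
    MachineKeySet = TRUE
    Exportable = TRUE
    RequestType = PKCS10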
Wednesday, July 6, 2011
Organizational Charts
Having been a part of at least five organizational changes in my 30-month tenure with my current employer, I find this infographic speaks volumes.
http://www.bonkersworld.net/2011/06/27/organizational-charts/