Friday, December 16, 2011

SSL for SSRS Using a CNAME

Recently, I've been working on creating a basic reporting infrastructure.  For simplicity's sake, we're leveraging Log Shipping on a relatively small application database using SQL Server 2008 R2.  (Note: since this second instance is in active use, a second SQL Server license is required.)

Our server naming convention states that when a server's role is replaced, the new host must get a different name, so I want to make that transition transparent to clients in the future.  I thought, let's leverage CNAMEs.  I created a new CNAME record in our internal DNS and aliased it to the actual host name.

Now, we wanted to force clients to use only HTTPS when connecting to SSRS, so I needed to get a certificate issued and installed and set up the bindings.  Since this is for internal use and we have an Enterprise CA deployed, it should have been just a matter of requesting a certificate, except that I couldn't request a certificate based on the Web Server template from the Computer certificate store.

Certificate Templates on my Enterprise Certificate Authority
A quick hop over to the CA to adjust the template's security permissions, adding Domain Computers with Enroll rights, and I could request what I needed.

Adjusting security permissions for enrollment

And now I can finally get what I need:
Initiate a certificate request from the local Computer certificate store

Selecting a Web Server certificate
Here's the catch: I need to make my internal single-label CNAME DNS record the subject name and then add Subject Alternative Names for the current server that's hidden behind that CNAME.  SSRS will bind the site to the primary name on the certificate by default, so if I simply requested a computer certificate, my subject name would be the internal FQDN of my server, which would render my CNAME setup useless.

Adding my CNAME and SANs
Practical applications for doing this include manually performed failover, and if you're really doing things right, you'll need this certificate (or one like it) applied to every node in your NLB farm.  Also consider how convenient aliases are when deploying client applications.
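
If you'd rather script the request than click through the MMC, certreq can submit an INF-based request against the same template.  This is only a rough sketch with hypothetical names (reports.contoso.local for the CNAME, rpt01.corp.contoso.local for the real host, and the default WebServer template), so adjust it to your own CA and naming before trusting it:

# Sketch: request a Web Server certificate with the CNAME as the subject
# and the real host name as a Subject Alternative Name. Names are placeholders.
$inf = @"
[NewRequest]
Subject = "CN=reports.contoso.local"
MachineKeySet = TRUE
KeyLength = 2048
Exportable = FALSE

[RequestAttributes]
CertificateTemplate = WebServer

[Extensions]
; 2.5.29.17 is the Subject Alternative Name extension
2.5.29.17 = "{text}"
_continue_ = "dns=reports.contoso.local&"
_continue_ = "dns=rpt01.corp.contoso.local"
"@
Set-Content -Path .\ssrs.inf -Value $inf
certreq -new .\ssrs.inf .\ssrs.req        # build the request from the INF
certreq -submit .\ssrs.req .\ssrs.cer     # submit it to the Enterprise CA
certreq -accept .\ssrs.cer                # install it into the Computer store

Once accepted, the certificate should show up in the Computer store, ready to bind in the Reporting Services Configuration Manager.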

Sunday, December 4, 2011

Analyzing Vendor Recommendations

Recently, I received some feedback from a vendor regarding changes I made to our customer's space-constrained application server, which hosts two user databases in a SQL Server 2005 instance.  Let's look at some of these recommendations, shall we?  These responses have more value for the vendor as learning points and thus weren't communicated directly to my customer.  Rather, the discussion with my customer revolves around application ownership and recoverability, and how those items align with their overall business needs.

This is primarily in defense of my original actions, but it also surfaces some important misconceptions about SQL Server, provides documentation addressing those myths, and discusses how to redress the issues so that the customer is well served.

Vendor Recommendations
I realize it has been a while since I had setup my maintenance plan to backup the user databases and transaction logs, so I went in to see how they were doing.  But I see they are not there anymore.
Yes, I did change the existing maintenance plans.  They didn't cover system databases, which are key for recoverability.  
I also had a weekly maintenance plan to shrink the user databases.
The application server is space constrained, so for performance's sake, the original Shrink Database task was removed from the plan and replaced with T-SQL that targets only the log files with DBCC SHRINKFILE.  Combined with regular log backups, this lets the OS retain more free space for normal operation while still giving us point-in-time recovery for the user databases.  As a long-term solution, this system needs more storage provisioned, period.
The alerts you have setup have thresholds too low.  They don't take into account a database in "full" mode.  
The metrics gathered were from PerfMon counters, namely \\SERVER\MSSQL$INSTANCE:Databases\Percent Log Used >= 90.  This metric is valid for any recovery model and is easily illustrated by the well-known DBCC SQLPERF(LOGSPACE) query, which shows both the percentage of log used and the size of the log itself (see the sketch at the end of this exchange).  This metric matters in any case, and it's logged to the Windows event log as an Error, not a Warning; it's that important.
And you should never get an alert on the tempdb system database. 
A full transaction log, regardless of which database it belongs to, will halt data modifications.  One thing that may be missing from this system's configuration is a service account with the SE_MANAGE_VOLUME_NAME right, which allows instant file initialization when auto-growth kicks in; frequent auto-growth to cope with the utilization issue only generates more of these alerts.
I went out and looked at the physical size of these databases and log sizes.  They are very small compared to normal operation.  The alert settings need to be changed.
The database size isn't the issue in question (though with Shrink Database commands, there will be fragmentation issues); with a properly scheduled BACKUP LOG in conjunction with auto-growth enabled, a database log file will level out relative to the volume of transactions hitting the database.  Another item to consider is the number of VLFs, since there's a history of growing and shrinking these logs.
I am changing the user databases to "simple" recovery model for now.  Probably need to turn off the hourly transaction log backup for now.  
Yes, if you're changing recovery models, you're going to cause those explicitly typed BACKUP LOG jobs to fail.
I have run the shrink database on the user databases and on tempdb.
I'm glad database size is a concern, but we should look at sp_spaceused to see whether we simply need more space or whether we're potentially over-indexed.  And shrinking tempdb while it's in use is documented as a potential cause of corruption.
I also see that the SQL backups are only for three days.  Are these being swapped out so that there are actually more days being stored somewhere else (offsite maybe)?.
Looking at the larger picture, there is a VSS-aware third-party backup system in place.  The SQL Server backups exist for near-line recoverability and point-in-time recoverability.  Remember, we're space constrained, and you've just implemented a Shrink Database to address it, so we've only got so much space for backup retention. 
Please do not change any general or options properties on the user databases.  I will maintain these.  Just let me know if you need something changed as some items can have repercussions on the functionallity of applications.
It looks like you've got this under control.  The fact that compatibility_level is set to 80 and page_verify is set to CHECKSUM has me concerned.  Which do we really need?
The biggest point here is that user applications are two completely separate systems sharing the SQL server.  They really need separate backup and maintenance plans as the data is used and maintained differently.
Taking a holistic view of the application and server together, these backup plans may need crash consistency between the databases.  We've already seen that there are some unique issues with the compatibility_level and "other general settings". 
The SQL server is two service packs behind.  SQL Server 2005 should be on SP4.  
SQL Server 2005, even at Service Pack 4, will be out of mainstream support from Microsoft as of 12/2011.  And this doesn't jibe with the compatibility_level request from earlier: if I'm looking to future-proof this application and move to SQL Server 2012, a database pinned to compatibility level 80 won't be supported there.
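
As promised above, here's a quick way to eyeball log usage across every database without waiting for an alert to fire.  A minimal sketch, assuming the SQL Server 2008 or 2008 R2 client tools are installed wherever you run it from, with the instance name swapped for yours:

# Load the SQL cmdlets (shipped as snap-ins with the SQL Server 2008 / 2008 R2 tools)
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue
Add-PSSnapin SqlServerProviderSnapin100 -ErrorAction SilentlyContinue

# DBCC SQLPERF(LOGSPACE) returns one row per database with log size and percent used
Invoke-Sqlcmd -ServerInstance "SERVER\INSTANCE" -Query "DBCC SQLPERF(LOGSPACE);" |
    Where-Object { $_.'Log Space Used (%)' -ge 90 } |
    Select-Object 'Database Name', 'Log Size (MB)', 'Log Space Used (%)'

The numbers here come from the same place as the Percent Log Used PerfMon counter, so the two views should agree.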

How can we respond productively to these recommendations as consultant generalists and DBAs?

Communicate

Get the stakeholders into a stand-up meeting.  Get the overall goals for the application written down.  What are our recovery-time needs?  What are our point-in-time recovery needs?  Who's responsible for which part of a disaster recovery?  How about performance?  What's the life-cycle of this application?

Delegate

Once the questions are answered, don't be afraid to let the 3rd party vendor handle the application the way your customer wants.  They've elected to utilize this vendor.  But be cautious, and take precautions.  In this example, I'll have to let the customer know we'll still be getting alerts and inform them of the ramifications of ignoring or permanently suppressing those notifications.

Act

Put your plan into action.  And, of course, backup, backup, backup.  Then test, test, test!  There's a ton of great information in the links above, so go do your homework so that you can act on knowledge, not assumptions.

Monday, November 28, 2011

A New Challenger Has Arrived

Historically, I've run this blog on hand-coded HTML, WordPress, and Textpattern.  Those platforms ran on Linux, Mac, and Solaris hosts.  And now it's all Google.
But what I really miss is my favorite CMS, Textpattern.  So, in the interest of restoring some of my historical posts (namely, my Japan trips) and challenging myself to learn the .NET development stack and the Windows Azure platform, I've set out to recode Textpattern.  Okay, maybe not the whole thing, since Azure supports PHP, but at least get it into my RDBMS of choice, SQL Server.  So, where to begin?

Development Platform

To ease my way into the Windows development world, I'm going to cheat and start with WebMatrix.  It's fast, easy, and free, and it has a one-click installer for everything I need, namely:
  • WebMatrix IDE
  • MySQL 5
  • SQL Server Compact 4.0
  • IIS Express 7.5
  • PHP and its extensions
After I've installed the lot, it's a matter of getting the data replayed into a new database.

Getting Your Data Back

Before restoring, I had to brush up on my MySQL syntax, but I quickly found what I needed.  First, open the MySQL prompt, connecting with the root password you set during the install.  Then create a target database and replay the dump:
mysql> create database textpattern;
mysql> use textpattern;
mysql> source C:\path\to\mysql-dump.sql;
mysql> exit;
That's all there is to it!

I can has data back!


Connecting The Dots

Next I created a new project in WebMatrix at the root folder of my extracted download of the latest version of Textpattern.



If you'd like to take a look at your data from within WebMatrix, set up a database connection to MySQL as in the screenshot below.

Setup a MySQL Connection

Look at those pretty tables

Then I renamed config-dist.php to config.php and edited it to match the MySQL settings I had configured, just as if I were deploying Textpattern for the first time.

Launching the Textpattern admin interface for the first time forced a minor schema upgrade.  From there, I updated my website URL to match my development environment for relative URIs, and that was all I had to do!

Take a quick look at my 404 page, which is clearly not Apache!
Definitely not Kansas

Friday, November 18, 2011

Why, SSRS, Why?

Once again, I've been digging into SSRS reporting.  Today, I thought to myself, I should add a multi-valued parameter to this report for filtering my tablix aggregations.  So I created a dataset that used a self-join to gather a list of Managers.  Easy enough; that'll work great for a drop-down selection.  But let's say a manager wants to look only at certain employees that they manage.  A multi-valued parameter would be great for this!  

I'd like a side of extra values in an array, please.
Add one dataset for that scenario, and re-order the parameters for proper cascading.

Then it's just a matter of adding my managed-employees multi-valued parameter to the tablix filter, right?  Okay, here it is; I'll just double-click it and get the right syntax.

What's wrong with this picture?

So, why am I only getting one member's aggregates in my tablix?  This is some serious fail!

Oh, of course!
A few searches through MSDN for syntax and we discover that, for multi-valued parameters, appending "(0)" to the parameter reference in the expression (e.g., =Parameters!YourParam.Value(0) rather than =Parameters!YourParam.Value) tells SSRS to use only the first value in your array.  I'm glad that got added by default when the parameter was an explicitly typed array.  Not very conducive to learning syntax on the fly.  But now we've got one (significantly more readable) report.

Tuesday, October 25, 2011

More Schema Fun

So let's say you don't have any sort of Exchange on premises for an Office 365 deployment.  You have Active Directory on premises for authentication and some local storage for things like QuickBooks or large volumes of CAD drawings.  Office 365 seems like a good fit for a small organization because you just don't need those extra servers and management overhead.

Wait, wait! Don't tell me!

But once again, schema matters: when you create a new user, even though you set their eUPN accordingly, they get email aliases with that pesky onmicrosoft.com domain as the primary address.  So you think, it's just Exchange 2010, I'll use PowerShell remoting and do some wizardry behind the scenes.  Nope, fail.  You can't.  The primary SMTP address comes from an Active Directory attribute that's automatically populated in domains with on-premises Exchange when you create a user's mailbox.

Enter the ADSIEdit

If you've been an Exchange administrator for a while, you've probably played with the proxyAddresses attribute before; I've mentioned it previously in the context of adding a secondary alias.  But if you don't even have a primary alias, you'll be seeing this:

proxyAddresses is undefined

Launching ADSIEdit from Administrative Tools on the Start Menu gets us going.  Connect to your Default naming context and drill down to your user object.  Right-clicking the user lets you select Properties.  Double-check that Show only attributes that have values is unchecked.

Drilling down through the list, you'll see that proxyAddresses is blank.  Set it to SMTP:youralias@vanitydomain.com, and make sure you have one and only one address with SMTP capitalized.  Then hop over to your Directory Synchronization host and re-run the Directory Synchronization Configuration wizard.
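
If you'd rather not click through ADSIEdit for every user, the same attribute edit can be scripted.  A sketch, assuming the Active Directory PowerShell module is available (a Windows Server 2008 R2 DC or the AD management gateway) and using a placeholder account name:

Import-Module ActiveDirectory
# Stamp a primary SMTP address (capital SMTP = primary) on a user that has none; jdoe is a placeholder
Set-ADUser -Identity jdoe -Add @{proxyAddresses = 'SMTP:youralias@vanitydomain.com'}
# Verify the result before kicking off directory synchronization
Get-ADUser -Identity jdoe -Properties proxyAddresses | Select-Object Name, proxyAddresses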

Other Cool Stuff

Let's say you want to see where else you've missed this attribute.  If you launch ADUC, you can create a custom query and configure an LDAP filter:

(&(objectClass=User)(!(proxyAddresses=*SMTP*)))
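
The same filter works from PowerShell, assuming the Active Directory module is available; a quick sketch:

Import-Module ActiveDirectory
# Users without any SMTP entry in proxyAddresses (mirrors the ADUC custom query above)
Get-ADUser -LDAPFilter '(&(objectClass=user)(!(proxyAddresses=*SMTP*)))' -Properties proxyAddresses |
    Select-Object Name, UserPrincipalName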

Thursday, October 20, 2011

Office 365: Schema Matters

We just pushed an a la carte deployment of Microsoft Online services in order to eliminate the on-premises requirements for Lync.  In the process, we discovered that our legacy OCS 2007 internal-only deployment caused some schema issues for us.  We had politely decommissioned the OCS host by un-registering the various roles, uninstalling the application, then eventually disjoining it from the domain.

The evils of directory synchronization

But this environment is running ADFS 2.0 and DirSync for a unified single sign-on experience.  That means any AD schema extensions we have for Exchange or Lync/OCS are propagated to the cloud.  The most critical attribute is msRTCSIP-UserEnabled.  Anyone who logged into OCS in the legacy deployment has this attribute, and during the decommissioning procedures above, the installers set it to FALSE.  Give it three hours, and now you have licensed users in Office 365 whose AD attributes say the opposite.  And you can't change it from the cloud; you have to change it in your AD and force a synchronization.

Don't let this happen to you.
ADModify.net to the rescue

Enter my favorite ADDS tool (and soon to be yours), ADModify.net.  This tool uses LDAP calls to ADDS, scripting mass changes behind the scenes.  This is useful not only for resolving this problem, but also for setting your eUPN to your vanity domain for all desired Office 365 users.  It's also a portable app you can keep on a USB drive for quickly making changes at customer sites.

Before we solve our problem, don't forget to add %sAMAccountName% in the Legacy Account field of the Account tab, or you'll blank out everyone's UPNs!  It's simple to resolve, but easy to miss.

Set that eUPN, but don't forget %sAMAccountName%

Moving on to the Custom tab, check Make a customized attribute modification, specify our troublesome attribute, msRTCSIP-UserEnabled, and set it to TRUE.  Note that this will add the attribute to any objects that don't currently have it, so select users judiciously.

Set a custom attribute, for everyone.

Hit Go!, then jump on your DirSync server, force a synchronization, and it's off to the races for Lync.
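
If ADModify.net isn't handy, the same bulk change can be sketched in PowerShell, assuming the Active Directory module is available and you scope the query to only the users whose OCS decommission left the attribute behind:

Import-Module ActiveDirectory
# Find users whose legacy OCS decommission left msRTCSIP-UserEnabled set to FALSE
$ocsUsers = Get-ADUser -LDAPFilter '(msRTCSIP-UserEnabled=FALSE)'

# Flip the attribute back to TRUE so DirSync tells Office 365 the right story
$ocsUsers | Set-ADUser -Replace @{'msRTCSIP-UserEnabled' = $true}

# Then force a sync from the DirSync host, as in the secondary-alias walkthrough further down this page:
# & "$env:ProgramFiles\Microsoft Online Directory Sync\DirSyncConfigShell.ps1"
# Start-OnlineCoexistenceSync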

Why do I care?

You're probably asking why this even matters.  Well, with DirSync, if you have an attribute Microsoft cares about (pretty much anything from a co-existence scenario), then Microsoft is going to take that object's attribute and put it in their AD.  Say you're in an organization that never had on-premises Exchange and you move to Office 365.  With SSO and DirSync, you won't be able to set the msExchHideFromAddressLists attribute, because your schema doesn't have it.  And you can't make that change via remote PowerShell or the Exchange Online console, because Microsoft expects it to come from your on-premises AD.

So, know your limits.  And also know your recent schema changes and how to look for differences between objects.

Friday, August 26, 2011

Add a secondary alias to a federated Office 365 mailbox

Today, Joe called and asked me why he couldn't get emails at joseph@contoso.com.  Well, his account was set up as joe@contoso.com, and that's also his UPN in Active Directory.  His business cards have already been printed and he's been sending from this mailbox, so we can't just re-create things.

Office 365, open the pod bay doors

Now, you could just log into Office 365, click Manage under Exchange Online, find the user's mailbox, and add the address under E-mail Addresses.  But since you're using SSO with federation, you're getting this lovely message:
I'm sorry, Dave. I can't do that.
Really, open the bay doors, please

Well, if you managed to get SSO and Federation configured, you're probably already familiar with the key to this process, ADSIEdit.  Launch it and connect to the Default naming context.

Step 1: Get the Blue Key
Drill down through the tree until you find your user object, right-click it, and select Properties.

Step 2: Get the Red key
 Locate the proxyAddresses property and edit it.  Add your secondary alias using the following format (capitalization matters, significantly!):

smtp:useralternate@contoso.com

Note the existing alias with a capitalized SMTP.  That one is the primary, so we don't want to mess with it.  Click OK all the way back out of ADSIEdit and head over to your Directory Synchronization server.  Launch %programfiles%\Microsoft Online Directory Sync\DirSyncConfigShell.ps1 and type:

Start-OnlineCoexistenceSync

Later, log into the management console of Exchange Online and you should see the results you need:

Look, it's the pod bay, finally!

And according to the references below, if we extend the Active Directory schema for Exchange 2010 SP1, we can leverage additional Active Directory properties that will replicate up to Office 365 and take effect.  Or we could all learn PowerShell remoting, which I've used a few times already for other items, like mass contact creation, but I'll leave that as an exercise for the reader.

References:




Wednesday, August 17, 2011

Refactoring sp_spaceused

A while back, I refactored sp_spaceused into CTEs.  One query returns the total for the database; the other returns totals for individual user objects within the current execution context.  This only accounts for on-page data, as the original stored procedure does.  I keep finding myself running the object-level query to determine which tables are index-heavy or are ready for historical archiving, both of which are especially handy when you're working within the storage limitation of Express Edition.  (Sample use cases)

Disclaimer: By following any of these activities you hold the author harmless of any effects, adverse or otherwise.  There's no fitness or guarantee for this information. 

WITH size_calculations (objectid, reserved_pages, used_pages, pages, num_rows)
AS (
    SELECT [object_id] AS [objectid]
      , SUM(reserved_page_count) AS [reserved_pages]
      , SUM(used_page_count) AS [used_pages]
      , SUM(CASE
                WHEN (index_id < 2)
                THEN (in_row_data_page_count + lob_used_page_count + row_overflow_used_page_count)
                ELSE lob_used_page_count + row_overflow_used_page_count
            END) AS [pages]
      , SUM(CASE
                WHEN (index_id < 2) THEN row_count
                ELSE 0
            END) AS [num_rows]
    FROM sys.dm_db_partition_stats
    GROUP BY [object_id]
)
SELECT OBJECT_NAME(SC.objectid) AS [name]
  , SC.num_rows AS [rows]
  , (SC.reserved_pages * 8) AS [reserved_kb]
  , (SC.pages * 8) AS [data_kb]
  , ((CASE WHEN SC.used_pages > SC.pages THEN (SC.used_pages - SC.pages) ELSE 0 END) * 8) AS [index_size_kb]
  , ((CASE WHEN SC.reserved_pages > SC.used_pages THEN (SC.reserved_pages - SC.used_pages) ELSE 0 END) * 8) AS [unused_kb]
FROM size_calculations AS SC
INNER JOIN sys.objects AS O
    ON SC.[objectid] = O.[object_id]
WHERE O.[type] = 'U'
ORDER BY SC.reserved_pages DESC;

Monday, August 8, 2011

Where are my SQL Servers?

How can you discover where SQL Servers are in your enterprise environment?

There's always the handy sqlcmd -L, but how can you know you've found everything?

How about this for starters:

csvde -f out.csv -r "(&(objectClass=Computer)(servicePrincipalName=*SQL*))" -l name,servicePrincipalName

It's left as an exercise for the reader to find the intersection of the results of the above tools, along with validation.
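
If your domain has a Windows Server 2008 R2 domain controller (or the Active Directory management gateway) serving Active Directory Web Services, the AD module can run roughly the same query as csvde; a sketch:

Import-Module ActiveDirectory
# Computers that have registered any SQL-related service principal name
Get-ADComputer -LDAPFilter '(servicePrincipalName=*SQL*)' -Properties servicePrincipalName |
    Select-Object Name, @{Name = 'SQLSPNs'; Expression = { ($_.servicePrincipalName | Where-Object { $_ -like '*SQL*' }) -join '; ' }} |
    Export-Csv -Path .\sql-spns.csv -NoTypeInformation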

Monday, August 1, 2011

Fixing Old Laptops

For the last ten years, I've been the computer guy in the family.  That means when I'm home or visiting friends, there's an awkward moment after "How have you been?" and before "So, my computer..."

Last week, I went up to see my sister-in-law and her family, and her girls were complaining about their old "homework" laptop.  It was essentially the same hardware as my ThinkPad x41t, which actually can manage to run Windows 7 Ultimate.  That being said, with Windows XP on a Pentium M with a 4500RPM IDE drive, things just aren't going to be that fast.  And there was malware involved.  So where do we start?


Disclaimer: By following any of these activities you hold the author harmless of any effects, adverse or otherwise.  There's no fitness or guarantee for this information. 

Safe Mode Cleaning

First, boot the computer into Safe Mode with Networking.  At this point, I like to clean up as much temporary data as possible to reduce malware-removal scan time.  Tools like Piriform's CCleaner work great for cleaning up the local profile.  If you need to clean other user profiles, you're on your own, or you're logging in and out several times to run the tool.

Follow up with a scan from a tool like Malwarebytes (a current favorite) and you've got a head start on the malware problem.

Additionally, I typically like to check a few registry locations, though the majority of the startup items can be located with Trend Micro's HijackThis.  Be cautious, as many of the keys it enumerates are, in fact, supposed to exist.

Disk Problems

In Safe Mode, you can also check the health of the logical disk using chkdsk.  If I'm having a performance issue loading or opening programs, it's important to confirm that disk read errors aren't causing re-reads.  Realistically, the speed at which disks operate today causes more read errors than you know, but automatic correction happens under the hood.  What I'm looking for is significant logical errors on the disk, which can also indicate pending disk failure.

If chkdsk reports more errors than a volume bitmap error, then I'll schedule a chkdsk /f on the next reboot to get those index errors repaired.

Useless Programs and Features

Once back in Windows, I'll load up Add or Remove Programs and audit the installed applications.  Frequently there's a plethora of toolbars, which can drag down browser performance.  Occasionally there are two versions of Java installed; I recommend keeping only the latest, most up-to-date version, unless you have an application compatibility issue, such as the web management software for Cisco PIX firewalls (requires Java SE 1.4.14, I believe).

The biggest culprit I see slowing down older hardware running Windows XP is the Windows Desktop Search 4.0 feature that's installed via Windows Update.  It's a great tool, and I leverage it on my Windows 7 workstation, but that's newer hardware.  It's the Microsoft equivalent to Google Desktop.  If you don't need it, then you don't need it crawling across your slow disk, reading and indexing files in the background.

Defragmentation

Speaking of disks, how about defragmenting?  Diskeeper swears by it as a performance tool; however, the measurable results can be negligible.  Rather, if you have the option to re-install Windows from scratch and happen to have a Windows 7 install CD lying around, consider using it to pre-format your disks for the Windows XP install so that you can take advantage of disk partition alignment, which applies not only to servers but to workstations as well.

As for actually defragmenting, I prefer both Piriform's Defraggler and Sysinternals' contig.  The former allows for a more efficient whole-disk defragmentation, including defragmenting free space.  Both utilities allow for single-file defragmentation.

Additionally, some system files, namely the paging file and registry hives, cannot be defragmented in user mode.  Instead, you could defragment free space, set your paging file to zero, reboot, set your paging file back to a 1:1 ratio with your physical memory, and reboot again, hoping that the file gets created in contiguous free space.  Or you could just use PageDefrag.  That tool does a boot-time defrag of the paging file and registry hives, just like a chkdsk /f.  I don't run it on a regular basis, only when I'm being incredibly aggressive about disk fragmentation.

General System Optimizations

Do you really need those fancy themes?  Or Fast User Switching on a computer with a single account?

Right-clicking My Computer and selecting Properties brings up all the tabs we need.  Under the Remote tab, are you even using Remote Desktop or Remote Assistance?  How about unchecking those options?  How about System Restore?  Checking the turn-off box will also clear out the disk space it uses, which is where old malware files tend to hide.

The Advanced tab is where the real magic happens.  For Performance settings, choose Adjust for best performance.  You'll lose that fancy blue color, but it's a few less bitmaps to load into memory.  Switch over to the Advanced tab under Performance and click Change to set your paging file size.  Regardless of the size you choose (I prefer a 1:1 ratio to physical memory), set the Custom Size minimum and maximum to the same value.  Why should Windows waste disk IO and CPU resources managing and resizing the paging file?  Don't forget to click Set to actually apply the setting, and then reboot later.  If you can put the paging file on a different volume that resides on a different physical disk, you may see improved performance.

The Startup and Recovery section of the Advanced tab is also where you control the BSOD behavior.  If at all possible, set it not to automatically reboot and to write a full memory dump.  This allows you to use a tool like NirSoft's BlueScreenView to inspect the root cause of the crash.  (It's left as an exercise for the reader to learn about crash dump analysis.)

The next stop is services.msc.  Now that we've got the windowing system down to the minimums, does the Themes service need to be running?  Nah, let's set it to Stopped and Disabled.  If we're using a single account on the system, stop and disable Fast User Switching.  What about Wireless Zero Configuration on a desktop system?  Or Print Spooler?  If you disabled System Restore earlier, you can also disable its service.  Windows Indexer?  Windows Search (which you hopefully uninstalled already)?  Be very judicious about disabling services, though, as some of the dependencies are not obvious and you risk creating an unstable system.
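
If you're doing this on more than one machine and PowerShell happens to be installed (it's an optional download on XP), the service changes can be scripted.  A sketch, with the XP service names written from memory, so verify them with Get-Service first:

# Stop and disable services we decided we don't need (names as I recall them on XP)
'Themes', 'FastUserSwitchingCompatibility', 'WZCSVC' | ForEach-Object {
    Stop-Service -Name $_ -ErrorAction SilentlyContinue
    Set-Service  -Name $_ -StartupType Disabled
}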

The End Result

Once I went through all of these items, I was able to reduce CPU usage, stabilize a dying disk, and reduce memory utilization by a whopping 50%.  The main problems I found were malware, logical disk corruption, and Windows Search causing high IO.

Friday, July 15, 2011

Mega-Patch Tuesday

At my local PASS chapter meeting last Wednesday, I discovered I was behind on the latest news.  SQL Server 2008 R2 Service Pack 1 was released!  Having just done a cluster deployment and three internal upgrades, I had to get a plan in order for deployment.

Beforehand, I thankfully discovered the latest from Microsoft Release Services.  Apparently the RTM of SP1 essentially contains Cumulative Updates 1 through 6.  Really?  I'm ahead of the curve?  In my zeal for fresh deployments (and for a customer's requirements), I had slipstreamed Cumulative Update 8 into all of my recent deployments.  Those changesets don't get included until Service Pack 1 Cumulative Update 1.  That certainly saves me some time!

Now if I could only find time to finish deploying SSRS internally.

Wednesday, July 13, 2011

MPIO, iSCSI, and the Cluster

I've been working away at a Windows Server 2008 R2 based cluster for the eventual deployment of two SQL Server 2008 R2 instances.  I've dug, and I've dug, and I've dug, but I couldn't find a good order of configuration for getting the basics of the iSCSI SAN configured for multi-pathing, so here's what I've experienced.

Setup VLANs

Unfortunately, we've only got one switch to work with, but at least it's gigabit and managed.  I set up three VLANs: one to carry public network traffic, one for the first iSCSI subnet, and one for the second iSCSI subnet.  Why two VLANs for iSCSI?  I don't want broadcast traffic from one interfering with the other, as there will be no default gateway on either subnet to assist with routing.  It's left as an exercise for the reader to learn the VLAN configuration for your particular switch model.  But if you've got an HP ProCurve, they do provide some sample configurations.

Configure SAN

Thanks to Allan Hirt's (blog | twitter) book, I was able to plan my LUNs, and even then, as requirements changed, I had to redo them several times.  For my SAN, an HP P2000 G3 iSCSI bundle with twelve 2.5" SAS disks, HP uses the term "volume" to refer to LUNs.  Don't confuse this with a volume in the sense of a filesystem container on a partition.  All told, we needed eight LUNs: quorum, MSDTC, two data volumes, two log volumes, and two backup volumes.  I avoided the use of mount points to keep the configuration easily readable; in the SMB market, mount points are basically unheard of.

Configure IP addressing

For the Public network, it's simple enough.  For the Private (heartbeat) network, we used a crossover cable and 10.50.50.0/24.  Then, to differentiate our iSCSI controllers, we used 10.10.10.0/24 and 10.10.20.0/24 for Controller A and Controller B, respectively.  Those subnets are over-zealous, but for the sake of supportability, I chose a commonly known subnet mask.

Configuring iSCSI

This is where things got heady and confusing.  At this point there are about ten different IPs in the mix, plus those VLANs, and a bunch of identically colored cables.  When picking a target for discovery, go with the IP you know and love on Controller A.  Remember to check Enable multi-path when adding a target.  It'll choose the default binding for the initiator (0.0.0.0), which might not be what we want, but it's a start; it's the sessions that really matter.  Don't forget to switch to the Volumes and Devices tab and use Auto Configure to associate those LUNs.

Connected to a Quick Connect Target

Go ahead and add a second target for Controller B on the Discovery tab.  Then make sure the MPIO feature is installed and run mpclaim -n -i -a to claim all of the MPIO-capable drives.  You'll likely have to reboot after this.  Note that my HP P2000 G3 didn't have a vendor-supplied DSM because it's so new, so the Microsoft DSM will work just fine.  Repeat on your second host and reboot.  Now, in the MPIO control panel (Windows Server 2008 R2 only), you should see MSFT2005iSCSIBusType_0x9 as a claimed device.  That's about it for your visits to the MPIO tool.

Now let's confirm that we've got multiple paths, or add them explicitly if you need particular source and target IPs.  On the Targets tab, click Properties.  Here you'll see the sessions connected to your SAN.  It's the multi-pathing of those sessions that keeps things running.

The list of sessions in Target properties

Drill down further on a particular session by clicking on Devices.  Here you'll see what LUNs are associated with that connection.  If MPIO is correctly enabled, the MPIO button will be available.

The list of disks associated with a session


Click the MPIO button after highlighting a LUN to look at its paths and policies.

MPIO settings on a device

You can see that I've already got mine configured with three paths at this point.  I've set the policy to Fail Over Only, and two of those paths have been set to Standby.  To confirm each path's settings, highlight it and click Details.

Details of an MPIO path

Now you can see what each connection is using for its source and target.  You can see from the example that it's explicitly set to use the Microsoft iSCSI Initiator and its source is explicitly set to the second iSCSI NIC's IP, hitting the third port on Controller A.

Now that seems like too much clicking, doesn't it?  For my deployment, this was fast enough and it reinforced the learning process visually.  You can leverage the mpclaim command-line tool to set these policies via scripts.
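
For reference, the mpclaim side of that looks roughly like the lines below.  I'm quoting the switches from memory, so double-check them against mpclaim's built-in help before pointing this at production:

mpclaim -n -i -a       # claim all MPIO-capable devices, suppressing the reboot (run once per node)
mpclaim -s -d          # list the MPIO disks and their current load-balance policy
mpclaim -l -d 0 1      # set MPIO disk 0 to policy 1, Fail Over Only (matching the GUI setting above)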

Open Disk Management, bring the disks Online, then Initialize them.  On the same node, you can format them and assign drive letters.  After this, the failover cluster installation will perform the magic to make them clustered resources.

Installing Failover Cluster Feature

Now it's a matter of running some wizards.  At this point, the configuration passed validation successfully.  Everything is wizard driven, so I recommend Allan Hirt's book, Pro SQL Server 2008 Failover Clustering from Apress.  It's what you need to understand the theory behind the process, and it points out several pitfalls in the architecture process, something I had to learn ad hoc while running this last-minute project.  And if you want to get perfect validation, you'll need to read Nic Cain's (blog | twitter) post on the hidden cluster adapter.  If you're looking for more information, check out both Allan's and Nic's blogs; Nic is running a series on large cluster deployments right now.

Tuesday, July 12, 2011

The Joys of SSL

So you just upgraded to Exchange 2010, and you bought and installed your UCC certificate.  Things look great, right?  But that was five months ago.

The SharePoint team called, and they need an SSL certificate too.  So can you use a UCC certificate?  Yes, but it requires a lot more work.  Oh, and the accounting department's web application?  It can't handle Subject Alternative Names.

Time to call the aforementioned accounting department and get approval to purchase that wildcard certificate.  But the question is, do we just deploy it, or do we set up Active Directory Certificate Services?

Wednesday, July 6, 2011

Organizational Charts

Having been a part of at least five organizational changes in my 30 month tenure with my current employer, this infographic speaks volumes.

http://www.bonkersworld.net/2011/06/27/organizational-charts/

Friday, June 24, 2011

Cheating at Redirection

Recently, I was challenged to add a network share to a Documents library on a Windows Server 2008 Terminal Server, essentially to mimic the folder redirection users were familiar with in Windows XP.

Using a terminal server is the problem.  To add a folder, local or remote, to a library, it must be indexed.  Try it for yourself and you'll get an error stating that there's no index.  Your recourse is to index it or, if it's a remote share, enable Offline Files.  Offline Files?  On a Terminal Server?  Nope; disabled by design.

Okay, then I'll enable the Windows Search Service on the server hosting the share.  It already has the File Services role enabled, so I'll just add the role service.  And don't be confused if you see the Windows 2003 Indexing Service; it's not the same thing!  You want the Windows Search Service.  Then it's just a matter of adding your volume or folder via Indexing Options in Control Panel.

Back at the Terminal Server and... what do you mean I can't add it?  It's indexed!  Well, you need the Windows Search Service installed on the Terminal Server too.  But when you go to Add Features, it's not there.  Instead, you've got to pretend you're going to install the File Services role, uncheck File Server, and check Windows Search Service (as sketched below).
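
For what it's worth, the same role service can also be added from PowerShell on Windows Server 2008 R2 boxes (on plain Server 2008, servermanagercmd.exe is the scripted equivalent).  A sketch, with the FS-Search-Service name worth confirming via Get-WindowsFeature first:

Import-Module ServerManager
# Windows Search Service is a role service under File Services
Get-WindowsFeature *search*              # confirm the exact feature name on your build
Add-WindowsFeature FS-Search-Service     # install it (on the file server and the Terminal Server)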



Now your Terminal Server can read the remote index on that remote share and you can add it to the Documents library.  It all seems a little confusing to install a File Server role on a Terminal Server, doesn't it?

Tuesday, June 7, 2011

Dude, Where's My Maintenance Plan?

I just finished an in-place, cross-version, cross-SKU upgrade of SQL Server 2005 Express with Advanced Services to SQL Server 2008 R2 Standard.  It was actually very easy and required about an hour's downtime for this SMB client.

However, when I went to create a quick Maintenance Plan from SSMS in order to schedule some basic tasks, I discovered I couldn't create a new plan and edit it, though I could create one via the wizard.  A few clicks and about 22 MB of installers later, I realized I hadn't installed the one requirement: Management Tools - Complete.  Management Tools - Basic won't let you create SSIS packages via SSMS because you've got no SQL Agent to run them!

I need all of those tools, please!

Wednesday, May 25, 2011

Exchange Item-Level Manipulation with PowerShell

Shameless Plug

Mike Pfeiffer (blog | twitter) is a recently christened MCM for Exchange 2010.  His blog is awesome.  You should read it.  And then when you have a problem like I have, you should leverage what you've learned.

The Problem

Building on yesterday's Linked Mailbox creation issue for an Exchange server in a resource forest, I now need to perform item-level manipulation for Contacts and Calendars.  We've been working on a migration to Exchange 2010 from an obscure messaging system designed by a company in Redwood City, which shall not be named.  My co-worker, Andrew Healey (blog | twitter), has done an excellent job solving mail item synchronization.  But we're still stuck doing manual PST exports of non-mail items.

After completing our testing phase, we want to clear out all the non-mail items from users' mailboxes and then perform fresh PST imports, thus avoiding any chance of item duplication.  In terms of native capabilities, Exchange 2010's lowest level of granularity using PowerShell is the mailbox.  So, knowing a little about EWS from another project (where I was considering syncing Out-of-Office replies to a Public Folder vacation calendar), I found Mike Pfeiffer's article on this very topic.  His EWSMail module is phenomenal and incredibly well documented, but he did leave some things out on purpose.
The cmdlets in this module could use some enhancements though. They run under the security context of the logged on user, the EWS endpoint is set using autodiscover, and the Exchange version will default to Exchange 2010 SP1. If you want to extend this code, it might be useful to add parameters for specifying alternate credentials, manually setting the EWS URL, and specifying the Exchange version.
The Solution

First, we had to patch Exchange 2010 to SP1.  This saved some time (and was best practice anyway), since I didn't have to add version detection.  Incidentally, you can see an example where Mike instantiates an object with that exact property.

Secondly, I needed to extend his PowerShell module to allow me to pass the EWS URL as a parameter.  Adding the following snippets in the appropriate places did the job.

In the param() declarations, add the following snippet, updating the Position value accordingly:
[Parameter(Position = 4, Mandatory = $false)] [string] $WebServicesUrl
Then in the EWS Autodiscover section, I created some conditional logic:
#Determine the EWS end-point using Autodiscover, but only when no URL was passed
#(a [string] parameter defaults to an empty string, not $null, so test for that)
if ([string]::IsNullOrEmpty($WebServicesUrl)) { $service.AutodiscoverUrl($Mailbox) }
else { $service.Url = $WebServicesUrl }
It was after this that I ran into my biggest challenge: working out which data structure each item type uses.  Mike's script handles e-mail messages using the EmailMessage class without problems; typically you'll find those in the "Inbox" folder.  Calendar items use the Appointment class and reside in "Calendars", of course.  Contacts are the most logical, using the Contact class and residing in the "Contacts" folder.  Same with Tasks, using Task and residing in "Tasks".

Notes, on the other hand, were unique.  In terms of data structures, they're the same as e-mail messages, using the EmailMessage class, but they sit in the "Notes" folder.  Go figure!
Note: Don't forget to update the default folder if none is specified in the param() block.
Rather than creating conditional logic, I chose to duplicate the main Get-* and Remove-* scripts and update the class-binding line accordingly; e.g., for Contacts, I changed:
$email = [Microsoft.Exchange.WebServices.Data.EmailMessage]::Bind($service, $_.Id, $emailProps)
to
$email = [Microsoft.Exchange.WebServices.Data.Contact]::Bind($service, $_.Id, $emailProps)
Now after adding the additional Get-* and Remove-* scripts into the EWSMail.psm1 file, and dropping the whole module folder into my $env:PSModulePath, I was ready to go!

Final Details


I had to enable impersonation for my executing account:
New-ManagementRoleAssignment -Name:impersonationAssignmentName -Role:ApplicationImpersonation -User:CONTOSO\Administrator
Then launch the EMS and run the script below; it'll remove up to 2000 items at a time from every mailbox with a name like "Test*".  There will be no visual confirmation of success in EMS.

Remove the Where-Object {$_.Name -like "Test*"} filter and it'll process every mailbox.  See the text of the script below.
Import-Module EWSMail
$MBX = Get-Mailbox -ResultSize unlimited | Where-Object {$_.Name -like "Test*"}
$EWS = "https://exchange.corp.contoso.com/EWS/Exchange.asmx"
$Limit = 2000
$MBX | ForEach-Object {
    Get-EWSCalendarItem -Mailbox $_.WindowsEmailAddress -WebServicesUrl $EWS -ResultSize $Limit | Remove-EWSCalendarItem -WebServicesUrl $EWS -Confirm:$false
    Get-EWSContact -Mailbox $_.WindowsEmailAddress -WebServicesUrl $EWS -ResultSize $Limit | Remove-EWSContact -WebServicesUrl $EWS -Confirm:$false
    Get-EWSTask -Mailbox $_.WindowsEmailAddress -WebServicesUrl $EWS -ResultSize $Limit | Remove-EWSTask -WebServicesUrl $EWS -Confirm:$false
    Get-EWSMailMessage -Mailbox $_.WindowsEmailAddress -WebServicesUrl $EWS -ResultSize $Limit -Folder Notes | Remove-EWSMailMessage -Mailbox $_.WindowsEmailAddress -WebServicesUrl $EWS -Confirm:$false
}
Learning Points

Now, run all of this at your own risk.  And if you can do better, please do so and let me know.  More importantly, let Mike know.  Thanks to his template, I've learned a great deal about:


And I hope the rest of you never have to migrate between systems that don't provide vendor neutral protocols for all supplied services.

I do have one apology: I haven't attached my scripts here, as my blogging platform doesn't support attachments, and pasting ~500 lines of script inline would be beastly to read.

Tuesday, May 17, 2011

The Power of PowerShell

Today I was faced with the challenge of creating an enterprise's worth of Linked Mailboxes on Exchange 2010.  The Exchange server sits in a resource forest, adjacent to a production forest with two domains.  Linked mailboxes let us keep a disabled user account in the resource forest while granting access to the mailbox to a remote user account in a different, trusted forest.  All this and more, just to migrate off of a previously unheard-of MTA.

I tested things out manually with a test mailbox, following Microsoft's article on the topic without any trouble.  Quickly noting the PowerShell example, I thought, "Hey, let's pipe in a list of user accounts from the production forest's Active Directory servers."

It was a great idea until:


[PS] C:\>Get-ADUser -Filter {Name -eq "Test"} -SearchBase "DC=contoso,DC=com" -Server ad01.contoso.com:3268
Get-ADUser : Unable to contact the server. This may be because this server does not exist, it is currently down, or it does not have the Active Directory Web Services running.

Uh oh.  Can I use a different cmdlet to connect to Server 2008 RTM?  Nope, not that I could find.  Active Directory Web Services was introduced with Windows Server 2008 R2.  So how about a different tool?  Remember CSVDE?  I had used it once before to perform a parallel Active Directory migration.  Not pretty, but it gets the job done.

A little hack here, a little test there, and voila:
csvde -f users.csv -d "OU=Users,DC=contoso,DC=com" -s AD01 -r "(&(objectClass=user)(userAccountControl:1.2.840.113556.1.4.803:=2))" -l "name,cn,sn,description,givenName,displayName,proxyAddresses,sAMAccountName,userPrincipalName"

Okay, I've got a CSV with some pretty useful information.  How do I loop through it with PowerShell to make those linked mailboxes?  A little searching through Technet, and I find J. Vosloo's article on a similar procedure for migrating mailboxes.  Import-CSV is a pretty powerful tool.  Now I can filter and purify my data in Excel, then run my script without any trouble.

$CSV = Import-CSV -Path ".\users.csv"
Foreach ($line in $CSV) {
    $UPN = $line.sAMAccountName + "@resource.fabrikam.com"
    $LMA = "CONTOSO\" + $line.sAMAccountName
    New-Mailbox -Name $line.name -Alias $line.sAMAccountName -UserPrincipalName $UPN -SamAccountName $line.sAMAccountName -FirstName $line.givenName -Initials '' -LastName $line.sn -LinkedMasterAccount $LMA -LinkedDomainController 'ad01.contoso.com'
}

And there we go!  All my mailboxes are created and linked.  There's more fun things you can do, like specify a particular OU for your target mailboxes based off of the OU of the originating Master Account, but that's left as an exercise to the reader.

Tuesday, May 3, 2011

Playing "What Device Is It?"

If you ever have to identify a device on the network without logging into it via common protocols (HTTP, HTTPS, FTP, Telnet, SMTP, etc.), you can at least find out its NIC vendor.

First ping the device to get it into your ARP cache.

C:\>ping 172.16.100.254

Pinging 172.16.100.254 with 32 bytes of data:
Reply from 172.16.100.254: bytes=32 time<1ms TTL=64
Reply from 172.16.100.254: bytes=32 time<1ms TTL=64
Reply from 172.16.100.254: bytes=32 time<1ms TTL=64
Reply from 172.16.100.254: bytes=32 time<1ms TTL=64

Ping statistics for 172.16.100.254:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

Then, leverage ARP to check its MAC address.

C:\>arp -a | find ".254"
  172.16.100.254        00-90-7f-3c-e0-b8     dynamic

Then use Coffer's MAC Find tool to look up the manufacturer using the first six hex digits (the OUI) of the MAC address.



Coffer's MAC Find lookup results

I've had to leverage this many a time when hunting down rogue devices.  Unfortunately, workstations are less likely to carry an OEM-branded NIC, but embedded devices tend to yield more conclusive results.
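
And since this comes up often enough, here's a quick sketch that automates the first two steps, with the IP as a placeholder; the OUI it prints is what you paste into Coffer's MAC Find:

$ip = '172.16.100.254'
Test-Connection -ComputerName $ip -Count 2 | Out-Null     # prime the ARP cache
$line = (arp -a | Select-String ([regex]::Escape($ip)) | Select-Object -First 1).Line
$mac  = @($line -split '\s+' | Where-Object { $_ -match '^([0-9a-f]{2}-){5}[0-9a-f]{2}$' })[0]
$oui  = ($mac -split '-')[0..2] -join '-'
"MAC: $mac   OUI: $oui"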