Another solution for Autodiscover 401 woes in #MSExchange

Earlier tonight, I was helping a customer troubleshoot why users in their mixed Exchange 2013/2007 organization were getting 401 errors when trying to use Autodiscover to set up profiles. Well, more accurately, the Remote Connectivity Analyzer was getting a 401, and users were getting repeating authentication prompts. However, when we tested internally against the Autodiscover endpoints everything worked fine, and manual testing externally against the Autodiscover endpoint also worked.

So why did our manual tests work when the automated tests and Outlook didn’t?

Well, some will tell you it’s because of bad NTFS permissions on the virtual directory, while others will say it’s because of the loopback check being disabled. And in your case, that might in fact be the cause…but it wasn’t in mine.

In my case, the clue was in the Outlook authentication prompt (users and domains have been changed to protect the innocent):



I’m attempting to authenticate with the user’s UPN, and it’s failing… hey, wait a minute.

Re-run the Exchange Remote Connectivity Analyzer, this time with the Domain\Username syntax, and suddenly I pass the Autodiscover test. Time to go view the user account – and sure enough, the account’s UPN is not set to the primary SMTP address.

Moral of the story: check your UPNs.

Upgrade Windows 2003 crypto in #MSExchange migrations

Just had this bite me at one of my customers. Situation: Exchange Server 2007 on Windows Server 2003 R2, upgrading to Exchange Server 2013 on Windows Server 2012. We ordered a new SAN certificate from GoDaddy (requesting it from Exchange 2013) and installed it on the Exchange 2013 servers with no problems. When we installed it on the Exchange 2007 servers, however, the certificate would import, but it (and its chain) showed the dreaded red X.

Looking at the certificate, we saw the following error message:



If you look more closely at the certificates in GoDaddy’s G2 root chain, you’ll see they’re signed with both SHA-1 and SHA-256. The latter is the problem for Windows Server 2003 – its older cryptography library doesn’t handle the newer cipher algorithms.

The solution: Install KB968730 on your Windows Server 2003 machines, reboot, and re-check your certificate. Now you should see the “This certificate is OK” message we all love.

Load Balancing ADFS on Windows 2012 R2

Greetings, everyone! I ran across this issue recently with a customer’s Exchange Server 2007 to Office 365 migration and wanted to pass along the lessons learned.

The Plan

It all started so innocently: the customer was going to deploy two Exchange Server 2013 hybrid servers into their existing Exchange Server 2007 organization for a hybrid organization using directory synchronization and SSO with ADFS. They’ve been investing a lot of work into upgrading their infrastructure and have been upgrading systems to newer versions of Windows, including some spiffy new Windows Server 2012 Hyper-V servers. We decided that we’d deploy all of the new servers on Windows Server 2012 R2, the better to future-proof them. We were also going to use Windows NLB for the ADFS and ADFS proxy servers instead of their existing F5 BIG-IP load balancer, as the network team was in the middle of their own projects.

The Problem

There were actually two problems. The first, of course, was the combination of Hyper-V and Windows NLB. Unicast was obviously no good, multicast had its issues, and because we needed to get the servers up and running as fast as possible, we didn’t have time to explore using IGMP with multicast. Time to turn to the F5. The BIG-IP platform is pretty complex and full of features, but F5 is usually good about documentation. Sure enough, the F5 ADFS 2.0 deployment guide (Deploying F5 with Microsoft Active Directory Federation Services) got us most of the way there. If we had been deploying ADFS 2.0 on Server 2012 with the ADFS proxy role, I’d have been home free.

In Windows 2012 R2 ADFS, you don’t have the ADFS proxy role any more – you use the Web Application Proxy (WAP) role service component of the Remote Access role. However, that’s not the only change. If you follow this guide with Windows Server 2012 R2, your ADFS and WAP pools will fail their health checks (F5 calls them monitors) and the virtual server will not be brought online because the F5 will mistakenly believe that your pool servers are down. OOPS!

The Resolution

So what’s different and how do we fix it?

ADFS on Windows Server 2012 R2 is still mostly ADFS 2.0, but some things have changed – out with the ADFS proxy role, in with the WAP role service. That’s the most obvious change, but the real sticking point is under the hood in the guts of the Windows Server 2012 R2 HTTP server. In Windows Server 2012 R2, IIS and the Web server engine have a new architecture that supports the SNI extension to TLS. SNI is insanely cool: the connecting machine tells the server which host name it’s trying to reach as part of the HTTPS session setup, so that one IP address can be used to host multiple HTTPS sites with different certificates – just like HTTP/1.1 added the Host: header to HTTP.

But the fact that Windows 2012 R2 uses SNI gets in the way of the HTTPS health checks that the F5 ADFS 2.0 deployment guide has you configure. We were able to work around it by replacing the HTTPS health checks with TCP Half Open checks, which send a SYN to the pool servers on the target TCP port and mark the server up if the SYN-ACK comes back.
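To make the workaround concrete, here’s a minimal sketch (in Python, purely for illustration – not anything that runs on the F5) of what this style of check amounts to. F5’s actual TCP Half Open monitor never completes the handshake; an ordinary connect-and-close is the closest approximation you can get without raw sockets.

```python
import socket

def tcp_port_check(host, port, timeout=3.0):
    """Return True if the target accepts a TCP connection on the port.

    Approximates F5's TCP Half Open monitor: F5 sends a SYN and marks
    the pool member up when the SYN-ACK arrives; a full connect/close
    is the closest ordinary socket code can get without raw sockets.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out -> member down
        return False
```

Note what this check does not tell you: only that something is listening on the port, not that ADFS or WAP is actually healthy behind it – which is exactly why it’s a workaround rather than the long-term answer.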

For long-term use, the HTTPS health checks are better; they let you probe a specific URL and require a specific response back before a server in the pool is declared healthy. That beats ICMP or plain TCP checks, which only verify ping or port responses. It’s entirely possible for a machine to be up on the network, with IIS answering connections, while WAP or ADFS is misconfigured and the service isn’t actually viable. Good health checks save debugging time.

The Real Fix

As far as I know there’s no easy, supported way to turn SNI off, nor would I really want to; it’s a great standard that deserves to be widely deployed and supported, because it lets servers conserve IP addresses and host multiple HTTPS sites on fewer IP/port combinations with multiple certificates instead of big, heavy SAN certificates. Ultimately, load balancer vendors and clients need to ship SNI-aware fixes for their gear.

If you’re an F5 user, the right way is to read and follow the F5 DevCentral blog post Big-IP and ADFS Part 5 – “Working with ADFS 3.0 and SNI” to configure your BIG-IP device with a new SNI-aware monitor; you’re going to want it for all of the Windows Server 2012 R2 Web servers you deploy over the next several years. The process is a little convoluted – you have to upload a script to the F5 and pass in custom parameters, which just seems wrong (but is a true measure of just how powerful and beastly these machines really are). At the end of the day, though, you have a properly configured monitor that not only supports SNI connections to the correct host name, but also probes the specific URI to ensure that your servers are returning the ADFS federation XML.

An SNI-aware F5 monitor (from DevCentral)
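To show what makes a monitor “SNI-aware,” here’s a rough Python sketch (illustrative only – this is not the F5 script): the probe has to present the expected host name during the TLS handshake via the SNI field (server_hostname), or a Windows Server 2012 R2 endpoint may never offer the right certificate and the handshake fails – which is exactly what the stock HTTPS monitor was tripping over.

```python
import socket
import ssl

def sni_https_check(host, port=443, sni_name=None, timeout=5.0):
    """Probe an HTTPS endpoint the way an SNI-aware monitor must:
    send the expected host name in the TLS handshake (server_hostname)
    so the server can select the matching certificate.

    Returns True only if the TLS handshake completes; any network or
    TLS failure marks the server down, just as a monitor would.
    """
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as raw:
            with context.wrap_socket(raw, server_hostname=sni_name or host) as tls:
                return tls.version() is not None
    except OSError:  # covers socket errors, timeouts, and ssl.SSLError
        return False
```

A real monitor would go one step further after the handshake and request the federation metadata URI to confirm ADFS is actually answering, which is what the DevCentral monitor does.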

What do you do if you don’t have an F5 load balancer and your vendor doesn’t support SNI? Remember when I said that there’s no way to turn SNI off? That’s not totally true. You can go mess with the SNI configuration and change the SSL bindings in a way that seems to mimic the old behavior, but you run the risk of really messing things up. What you can do instead is follow the process in the TechNet blog post How to support non-SNI capable Clients with Web Application Proxy and AD FS 2012 R2.



As a side note, almost everyone seems to be calling the ADFS flavor on Windows Server 2012 R2 “ADFS 3.0.” Everyone, that is, except Microsoft. It’s not a 3.0; as I understand it, the biggest differences are in the underlying server architecture, not in the ADFS functionality on top of it per se. So don’t call it that, but recognize that most other people will. It’s just AD FS 2012 R2.

Why Virtualization Still Isn’t Mature

As a long-time former advocate for Exchange virtualization (and virtualization in general), it makes me glad to see other pros reaching the same conclusions I reached a while ago about the merits of Exchange virtualization. In general, it’s not a matter of whether you can solve the technological problems; I’ve spent years proving, for customer after customer, that you can. Tony does a great job of describing the specific mismatch between Exchange and virtualization. I agree with everything he said, but I’m going to go one further and say that part of the problem is that virtualization is still an immature technology.

Now when I say that, you have to understand: I believe that virtualization is more than just the technology you use to run virtual machines. It includes the entire stack. And obviously, lots of people agree with me, because the core of private cloud technology is creating an entire stack of technology to wrap around your virtualization solution, such as Microsoft System Center or OpenStack. These solutions include software defined networking, operating system configuration, dynamic resource management, policy-driven allocation, and more. There are APIs, automation technologies, de facto standards, and interoperability technologies. The goal is to reduce or remove the amount of human effort required to deploy virtual solutions by bringing every piece of the virtualization pie under central control. Configure policies and templates and let automation use those to guide the creation and configuration of your specific instances, so that everything is consistent.

But there’s a missing piece – a huge one – one that I’ve been saying for years. And that’s the application layer. When you come right down to it, the Exchange community gets into brawls with the virtualization community (and the networking community, and the storage community, but let’s stay focused on one brawl at a time please) because there are two different and incompatible principles at play:

  • Exchange is trying to be as aware of your data as possible and take every measure to keep it safe, secure, and available by making specific assumptions about how the system is deployed and configured.
  • Your virtualization product is trying to treat all applications (including Exchange) as if they are completely unaware of the virtualization stack and provide features and functionality whether they were designed for it or not.

The various stack solutions are using the right approach, but I believe they are doing it in the wrong direction; they work great in the second scenario, but they create exceptions and oddities for Exchange and other programs like Exchange that fit the first scenario. So what’s missing? How do I think virtualization stacks need to fix this problem?

Create a standard by which Exchange and other applications can describe what capabilities they offer and define the dependencies and requirements for those capabilities that must in turn be provided by the stack. Only by doing this can policy-driven private cloud solutions close that gap and make policies extend across the entire stack, continuing to reduce the chance of human error.

With a standard like this, virtualizing Exchange would become a lot easier. As an example, consider VM-to-host affinity. Instead of admins having to remember to manually configure Exchange virtual DAG members to not be on the same host, Exchange itself would report this requirement to the virtualization solution. DAG Mailbox servers would never be on the same host, and the FSW wouldn’t be on the same host as any of the Mailbox servers. And when host outages resulted in the loss of redundant hosts, the virtualization solution could raise an event for the monitoring system that explained the problem before the constraint was actually broken. But don’t stop there. This same standard could be applied to network configuration, allowing Exchange and other applications to have load balancing automatically provisioned by the private cloud solution. Or imagine deploying Exchange Mailbox servers into a VMware environment that’s currently using NFS. The minute the Mailbox role is deployed, the automation carves off the appropriate disk blocks and presents them as iSCSI to the new VM (either directly or through the hypervisor as an RDM, based on the policy) so that the storage meets Exchange’s requirements.
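As a sketch of what such a standard could enable, here’s a hypothetical anti-affinity check in Python (all names and structures are made up for illustration; no such API exists today): given a VM-to-host placement and the anti-affinity groups an application like Exchange might declare (DAG members plus the FSW), it reports any group with two or more members on the same host.

```python
from collections import defaultdict

def affinity_violations(placement, anti_affinity_groups):
    """Given placement (VM name -> host name) and groups of VMs that
    must not share a host, return tuples of co-located VMs that break
    the rule. An empty list means the constraint is satisfied."""
    violations = []
    for group in anti_affinity_groups:
        by_host = defaultdict(list)
        for vm in group:
            if vm in placement:  # unplaced VMs can't violate anything
                by_host[placement[vm]].append(vm)
        for vms in by_host.values():
            if len(vms) > 1:
                violations.append(tuple(vms))
    return violations
```

The point is that the application declares the groups and the stack enforces them, instead of an admin remembering to set the rules by hand on every hypervisor.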

Imagine the arguments that could solve. Instead of creating problems, applications and virtualization/private cloud stacks would be working together — the very model of maturity.

Windows 2012 R2 and #MSExchange: not so fast

Updated 9/18/2014: As of this writing, Windows Server 2012 R2 domain controllers are supported against all supported Microsoft Exchange environments:

  • Exchange Server 2013 with CU3 or later (remember, CU5 and CU6 are the two versions currently in support; SP1 is effectively CU4)
  • Exchange Server 2010 with SP3 and RU5 or later
  • Exchange Server 2007 with SP3 and RU13 or later

Take particular note that Exchange Server 2010 with SP2 (any rollup) and earlier are NOT supported with Windows Server 2012 R2 domain controllers.

Also note that if you want to enable the Windows Server 2012 R2 domain and forest functional levels, you must have Exchange Server 2013 SP1 or later OR Exchange Server 2010 SP3 with RU5 or later. Exchange Server 2013 CU3 and Exchange Server 2007 (any level) are not supported at these levels.


In the past couple of months since Windows Server 2012 R2 has dropped, a few of my customers have asked about rolling out new domain controllers on this version – in part because they’re using it for other services and they want to standardize their new build outs as much as they can.

My answer right now? Not yet.

Whenever I get a compatibility question like this, the first place I go is the Exchange Server Supportability Matrix on TechNet. Now, don’t let the relatively old “last update” time dismay you; the support matrix is generally only updated when major updates to Exchange (a service pack or new version) come out. (In case you haven’t noticed, Update Rollups don’t change the base compatibility requirements.)

Not this kind of matrix...

If we look on the matrix under the Supported Active Directory Environments heading, we’ll see that as of right now Windows Server 2012 R2 isn’t even on the list! So what does this tell us? The same thing I tell my kids instead of the crappy old “No means No” chestnut: only Yes means Yes. Unless the particular combination you’re looking for is listed, then the answer is that it’s not supported at this time.

I’ve confirmed this by talking to a few folks at Microsoft – at this time, the Exchange requirements and pre-requisites have not changed. Are they expected to? No official word, but I suspect if there is a change we’ll see it when Exchange 2013 SP1 is released; that seems a likely time given they’ve already told us that’s when we can install Exchange 2013 on Windows 2012 R2.

In the meantime, if you have Exchange, hold off on putting Windows 2012 R2 domain controllers in place. Will they work? Probably, but you’re talking about untested schema updates and an untested set of domain controllers against a very heavy consumer of Active Directory. I can’t think of any compelling reason to rush this one.

Finding Differences in Exchange objects (#DoExTip)

Many times when I’m troubleshooting Exchange issues, I need to compare objects (such as user accounts in Active Directory, or mailboxes) to figure out why there’s a difference in behavior. Often the difference is tiny and hard to spot; it may not even be visible through the GUI.

To do this, I first dump the objects to separate text files. How I do this depends on the type of object I need to compare. If I can output the object using Exchange Management Shell, I pipe it through Format-List and dump it to text there:

Get-Mailbox -Identity Devin | fl > Mailbox1.txt

If it’s a raw Active Directory object I need, I use the built-in Windows LDP tool and copy and paste the text dump to separate files in a text editor.

Once the objects are in text files, I use a text comparison tool, such as the built-in comparison tool in my preferred text editor (UltraEdit) or the standalone tool WinDiff. The key here is to quickly highlight the differences. Many of those differences aren’t important (metadata such as the last updated time, etc.), but I can spend my time looking over the properties that differ, rather than brute-force comparing everything about the objects.

I can hear many of you suggesting other ways of doing this:

  • Why are you using text outputs even in PowerShell? Why not export to XML or CSV?
    If I dump to text, PowerShell displays the values of multi-value properties and other property types that it doesn’t show if I export the object to XML or CSV. This is very annoying, as the missing values are typically the source of the key difference. Also, text files are easy for my customers to generate, bundle, and email to me without any worries that virus scanners or other security policies might intercept them.
  • Why do you run PowerShell cmdlets through Format-List?
    To make sure I have a single property per line of text file. This helps ensure that the text file runs through WinDiff properly.
  • Why do you run Active Directory dumps through LDP?
    Because LDP will dump practically any LDAP property and value as raw text as I access a given object in Active Directory. I can easily walk a customer through using LDP and pasting the results into Notepad while browsing to the objects graphically, much as they would with ADSI Edit. There are command-line tools that will export in other formats such as LDIF, but those are typically overkill and harder to use while browsing for what you need (you typically have to specify object DNs).
  • PowerShell has a Compare-Object cmdlet. Why don’t you use that for comparisons instead of WinDiff or text editors?
    First, it only works for PowerShell objects, and I want a consistent technique I can use for anything I can dump to text in a regular format. Second, Compare-Object changes its output depending on the object format you’re comparing, potentially making the comparison useless. Third, while Compare-Object is wildly powerful because it can hook into the full PowerShell toolset (sorting, filters, etc.) this complexity can eat up a lot of time fine-tuning your command when the whole point is to save time. Fourth, WinDiff output is easy to show customers. For all of these reasons, WinDiff is good enough.
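If you’d rather script the comparison than eyeball it in WinDiff, the same idea is easy to sketch with Python’s standard difflib module (illustrative only – this isn’t part of my usual workflow): feed it two of the one-property-per-line dumps described above and keep only the lines that changed.

```python
import difflib

def diff_property_dumps(dump_a, dump_b):
    """Compare two one-property-per-line object dumps (like Format-List
    output) and return only the differing lines, WinDiff-style."""
    a = [line.rstrip() for line in dump_a.splitlines()]
    b = [line.rstrip() for line in dump_b.splitlines()]
    return [line for line in difflib.unified_diff(a, b, lineterm="")
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]
```

Run against two mailbox dumps, a mismatched property – say, the UPN from the first story – pops right out as a -/+ pair.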

Using Out-GridView (#DoExTip)

My second tip in this series is going to violate the ground rules I laid out for it, because they’re my rules and I want to. This tip isn’t a tool or script. It’s a pointer to an insanely awesome feature of Windows PowerShell that just happens to nicely solve many problems an Exchange administrator runs across on a day-to-day basis.

I only found out about Out-GridView two days ago, the day that Tony Redmond’s Windows IT Pro post about the loss of the Message Tracking tool hit the Internet. A Twitter conversation started up, and UK Exchange MCM Brian Reid quickly chimed in with a link to a post from his blog introducing us to using the Out-GridView control with the message tracking cmdlets in Exchange Management Shell.

This is a feature introduced in PowerShell 2.0, so Exchange 2007 admins won’t have it available. What it does is simple: take a collection of objects (such as message tracking results, mailboxes, public folders — the output of any Get-* cmdlet, really) and display it in a GUI gridview control. You can sort, filter, and otherwise manipulate the data in-place without having to export it to CSV and get it to a machine with Excel. Brian’s post walks you through the basics.

In just two days, I’ve already started changing how I interact with EMS. There are a few things I’ve learned from Get-Help Out-GridView:

  • On PowerShell 2.0 systems, Out-GridView is the endpoint of the pipeline. However, if you’re running it on a system with PowerShell 3.0 installed (Windows Server 2012), Out-GridView can be used to interactively filter down a set of data and then pass it on in the pipeline to other commands. Think about being able to grab a set of mailboxes, fine-tune the selection, and pass them on to make modifications without having to get all the filtering syntax correct in PowerShell.
  • Out-GridView is part of the PowerShell ISE component, so it isn’t present if you don’t have ISE installed or are running on Server Core. Exchange can’t run on Server Core, but if you want to use this make sure the ISE feature is installed.
  • Out-GridView allows you to select and copy data from the gridview control. You can then paste it directly into Excel, a text editor, or some other program.

This is a seriously cool and useful tip. Thanks, Brian!