Leaving 3Sharp

3Sharp has been a fantastic place to work; for the last six and a half years, my co-workers and I have walked the road together. One of the realities of growth, though, is that you often reach the fork in the road where you have to move down different paths. Working with Paul, Tim, Missy, Kevin, and the rest of the folks who have been part of the Platform Services Group here at 3Sharp over the years has been a wild journey, but we were only one of three groups at 3Sharp; the other two groups are also chock-full of smart people doing wonderful things with SharePoint and Office. 3Sharp will be moving forward to focus on those opportunities, and the Platform Services Group (which focused on Exchange, OCS, Windows Server, Windows Mobile, and DPM) is closing its doors. My last day here will be tomorrow, Friday, October 16.

I think Ecclesiastes 3:1 says it best; in the King James Version, the poet says, “To every thing there is a season, and a time to every purpose under the heaven.” It has been my privilege to use this blog to talk about Exchange, data protection, and many other topics since my first post here five years ago (holy crap, has it really been five years???). With 3Sharp’s gracious permission and blessing, I’ll be duplicating all of the content I’ve posted here over on my personal blog, Devin on Earth. If you have a link or bookmark for this blog or are following me via RSS, please take a moment to update it now (Devin on Earth RSS feed). I’ve got a few new posts cooking, but this will be my last post here.

Thank you to 3Sharp and the best damn co-workers I could ever hope to work with over the years. Thank you, my readers. You all have helped me grow and solidify my skills, and I hope I returned the favor. I look forward to continuing the journey with many of you, even if I’m not sure yet where it will take me.

OneNote 2010 Keeps Your Brains In Your Head

Some months back, those of you who follow me on Twitter (@devinganger) may have noticed a series of teaser Tweets about a project I was working on that involved zombies.

Yes, that’s right, zombies. The RAHR-BRAINS-RAHR shambling undead kind, not the “mystery objects in Active Directory” kind.

Well, now you can see what I was up to.

I was working with long-time fellow 3Sharpie David Gerhardt on creating a series of 60-second vignettes for the upcoming Office 2010 application suite. Each vignette focuses on a single new area of functionality in one of the Office products. I got to work with OneNote 2010.

Here’s where the story gets good.

I got brought into the project somewhat late, after a bunch of initial planning and prep work had been done. The people who had been working on the project had decided that they didn’t want to do the same boring business-related content in their OneNote 2010 vignettes; oh, no! Instead, they hit upon the wonderful idea of using a Zombie Plan as the base document. Now, I don’t really like zombies, but this seemed like a great way to spice up a project!

The rest, as they say, is history. Check out the results (posted both at GetSharp and somewhere out on YouTube) for yourself:

One of the best parts of this project, other than getting a chance to learn about some of the wildly cool stuff the OneNote team is doing to enhance an already wonderful product, was the music selection. We worked a deal with local artist Dave Pezzner to use some of his short music clips for these videos. Dave is immensely talented and provided a wide selection of material, so I enjoyed being able to pick and choose just the right music for each video. It did occur to me how cool it would be if I could use Jonathan Coulton’s fantastic song Re: Your Brains, but somehow I think his people lost my query email. Such is life – and I think Mr. Pezzner’s music provided just the right accompaniment to the Zombie Plan content.

Enjoy!

Why Aren’t My Exchange Certificates Validating?

Updated 10/13: Updated the link to the blog article on configuring Squid for Exchange per the request of the author Owen Campbell. Thank you, Owen, for letting me know the location had changed!

By now you should be aware that Microsoft strongly recommends that you publish Exchange 2010/2007 client access servers (and Exchange 2003/2000 front-end servers) to the Internet through a reverse proxy like Microsoft’s Internet Security and Acceleration Server 2006 SP1 (ISA) or the still-in-beta Microsoft Forefront Threat Management Gateway (TMG). There are other reverse proxy products out there, such as the open source Squid (with some helpful guides on how to configure it for EAS, OWA, and Outlook Anywhere), but many of them can only be used to proxy the HTTP-based protocols (for example, the reverse proxy module for the Apache web server) and won’t handle the RPC component of Outlook Anywhere.

When you’re following this recommendation, you keep your Exchange CAS/HT/front-end servers in your private network and place the ISA Server (or other reverse proxy solution) in your perimeter (DMZ) network. In addition to ensuring that your reverse proxy is scrubbing incoming traffic for you, you can also gain another benefit: SSL bridging. SSL bridging is where there are two SSL connections – one between the client machine and the reverse proxy, and a separate connection (often using a different SSL certificate) between the reverse proxy and the Exchange CAS/front-end server. SSL bridging is awesome because it allows you to radically reduce the number of commercial SSL certificates you need to buy. You can use Windows Certificate Services to generate and issue certificates to all of your internal Exchange servers, creating them with all of the Subject Alternative Names that you need and desire, and still have a commercial certificate deployed on your Internet-facing system (nice to avoid certificate issues when you’re dealing with home systems, public kiosks, and mobile devices, no?) that has just the public common namespaces like autodiscover.yourdomain.tld and mail.yourdomain.tld (or whatever you actually use).

In the rest of this article, I’ll be focusing on ISA because, well, I don’t know Squid that well and haven’t actually seen it in use to publish Exchange in a customer environment. Write what you know, right?

One of the most irritating experiences I’ve consistently had when using ISA to publish Exchange securely is getting the certificate configuration on ISA correct. If you all want, I can cover certificate namespaces in another post, because that’s not what I’m talking about here – I actually find that relatively easy to deal with these days. No, what I find annoying about ISA and certificates is getting all of the proper root CA certificates and intermediate CA certificates in place. The process you have to go through varies depending on who you buy your certificates from. There are a couple, like GoDaddy, that offer inexpensive certificates that do exactly what Exchange needs for a decent price – but they require an extra bit of configuration to get everything working.

The problem you’ll see is two-fold:

  1. External clients will not be able to connect to Exchange services. This will be inconsistent; some browsers and some Outlook installations (especially those on new Windows installs or well-updated Windows installs) will work fine, while others won’t. You may have big headaches getting mobile devices to work, and the error messages will be cryptic and unhelpful.
  2. While validating your Exchange publishing rules with the Exchange Remote Connectivity Analyzer (ExRCA), you get a validation error on your certificate as shown in Figure 1.

ExRCA can't find the intermediate certificate on your ISA server
Figure 1: Missing intermediate CA certificate validation error in ExRCA

The problem is that some devices don’t have the proper certificate chain in place. Commercial certificates typically have two or three certificates in their signing chain: the root CA certificate, an intermediate CA certificate, and (optionally) an additional intermediate CA certificate. The secondary intermediate CA certificate is typically the source of the problem; it’s configured as a cross-signing certificate, which is intended to help CAs transition old certificates from one CA to another without invalidating the issued certificates. If your certificate was issued by a CA that has these in place, you have to have both intermediate CA certificates in place on your ISA server in the correct certificate stores.

By default, CAs will issue the entire certificate chain to you in a single bundle when they issue your cert. You have to import this bundle on the machine you issued the request from, or else you don’t get the private key associated with the certificate. Once you’ve done that, you need to re-export the certificate, with the private key and its entire certificate chain, so that you can import it into ISA. This is important because ISA needs the private key so it can decrypt the SSL session (required for bridging), and ISA needs the full certificate signing chain so that it can hand out missing intermediate certificates to devices that don’t have them (such as Windows Mobile devices that only have the root CA certificates). If a device doesn’t have the right intermediates, can’t download them itself (like Internet Explorer can), and can’t get them from ISA, you’ll get the certificate validation errors.
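To see why a missing intermediate breaks things, here’s a minimal sketch of how chain validation works, using plain dictionaries in place of real X.509 certificates (the names and fields here are illustrative, not a real PKI API). A verifier walks from the server certificate up toward a trusted root by matching each certificate’s issuer to another certificate’s subject; if the intermediate isn’t in the pool the server or proxy can offer, the walk stops short and validation fails:

```python
def build_chain(leaf, pool, trusted_roots):
    """Return the chain from leaf to a trusted root, or None if it can't be built."""
    chain = [leaf]
    current = leaf
    while current["issuer"] != current["subject"]:  # keep walking until self-signed
        parent = next((c for c in pool if c["subject"] == current["issuer"]), None)
        if parent is None:
            return None  # missing intermediate: the chain cannot be completed
        chain.append(parent)
        current = parent
    return chain if current["subject"] in trusted_roots else None

server_cert  = {"subject": "mail.example.tld",        "issuer": "Example Intermediate CA"}
intermediate = {"subject": "Example Intermediate CA", "issuer": "Example Root CA"}
root         = {"subject": "Example Root CA",         "issuer": "Example Root CA"}

# With the intermediate available, the chain completes up to the trusted root...
assert build_chain(server_cert, [intermediate, root], {"Example Root CA"}) is not None
# ...without it, validation fails, which is the symptom ExRCA reports in Figure 1.
assert build_chain(server_cert, [root], {"Example Root CA"}) is None
```

The device-side failure described above is exactly the second case: the client trusts the root but was never handed the intermediate.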

Here’s what you need to do to fix it:

  • Ensure that your server certificate has been exported with the private key and *all* necessary intermediate and root CA certificates.
  • Import this certificate bundle into your ISA servers. Before you do this, check the computer account’s personal certificate store and make sure any root or intermediate certificates that got accidentally imported there are deleted.
  • Using the Certificates MMC snap-in, verify that the certificate now shows as valid when you browse it on your ISA server, as shown in Figure 2.

Even though the Certificates MMC snap-in shows this certificate as valid, ISA won't serve it out until the ISA Firewall Service is restarted!
Figure 2: A validated server certificate signing chain on ISA Server

  • IMPORTANT STEP: restart the ISA Firewall Service on your ISA server (if you’re using an array, you have to do this on each member; you’ll want to drain the connections before restarting, so it can take a while to complete). Even though the Certificate MMC snap-in validates the certificate, the ISA Firewall only picks up the changes to the certificate chain on startup. This is annoying and stupid and has caused me pain in the past – most recently, with 3Sharp’s own Exchange 2010 deployment (thanks to co-worker and all around swell guy Tim Robichaux for telling me how to get ISA to behave).

Also note that many of the commercial CAs specifically provide downloadable packages of their root CA and intermediate CA certificates. Some of them get really confusing – they have different CAs for different tiers or product lines, so you have to match the server certificate you have with the right CA certificates. GoDaddy’s CA certificate page can be found here.

Some Thoughts on FBA (part 2)

As promised, here’s part 2 of my FBA discussion, in which we’ll talk about the interaction of ISA’s forms-based authentication (FBA) feature with Exchange 2010. (See part 1 here.)

Offloading FBA to ISA

As I discussed in part 1, ISA Server includes the option of performing FBA pre-authentication as part of the web listener. You aren’t stuck with FBA – you can use other pre-auth methods too. The thinking behind this is that ISA is the security server sitting in the DMZ, while the Exchange CAS is in the protected network. Why proxy an incoming connection from the Internet into the internal network (even with ISA’s impressive HTTP reverse proxy and screening functionality) if it doesn’t present valid credentials? In this configuration, ISA is configured for FBA while the Exchange 2010/2007 CAS or Exchange 2003 front-end server is configured for Windows Integrated or Basic as shown in Figure 1 (a figure so nice I’ll re-use it):

Figure 1: Publishing Exchange using FBA on ISA

Moving FBA off of ISA

Having ISA (and Threat Management Gateway, the 64-bit successor to ISA 2006) perform pre-auth in this fashion is nice and works cleanly. However, in our Exchange 2010 deployment, we found a couple of problems with it:

The early beta releases of Entourage for EWS wouldn’t work with this configuration; Entourage could never connect. If our users connected to the 3Sharp VPN, bypassing the ISA publishing rules, Entourage would immediately see the Exchange 2010 servers and do its thing. I don’t know if the problem was solved for the final release.

We couldn’t get federated calendar sharing, a new Exchange 2010 feature, to work. Other Exchange 2010 organizations would get errors when trying to connect to our organization. This new calendar sharing feature uses a Windows Live-based central brokering service to avoid the need to provision and manage credentials.

Through some detailed troubleshooting with Microsoft and other Exchange 2010 organizations, we finally figured out that our ISA FBA configuration was causing the problem. The solution was to disable ISA pre-authentication and re-enable FBA on the appropriate virtual directories (OWA and ECP) on our CAS server. Once we did that, not only did federated calendar sharing start working flawlessly, but our Entourage users found their problems had gone away too. For more details of what we did, read on.

How Calendar Sharing Works in Exchange 2010

If you haven’t seen other descriptions of the federated calendar sharing, here’s a quick primer on how it works. This will help you understand why, if you’re using ISA pre-auth for your Exchange servers, you’ll want to rethink it.

In Exchange 2007, you can share calendar data with other Exchange 2007 organizations. Doing so means that your CAS servers have to talk to their CAS servers, and the controls around it are not that granular. To make it work, you either need to establish a forest trust and grant permissions to the other forest’s CAS servers (to get detailed per-user free/busy information) or set up a separate user in your forest for the foreign forest to use (to get default per-org free/busy data). You also have to fiddle around with the Autodiscover service connection points and ensure that you’ve got pointers for the foreign Autodiscover SCPs in your own AD (and that the foreign systems have yours). You also have to publish Autodiscover and EWS externally (which you have to do for Outlook Anywhere anyway) and coordinate all your certificate CAs. While this doesn’t sound that bad, you have to do these steps for every single foreign organization you’re sharing with. That adds up, and it’s a poorly documented process – you’ll start at this TechNet topic about the Availability service and have to do a lot of chasing around to figure out how certificates fit in, how to troubleshoot it, and the SCP export and import process.

In Exchange 2010, this gets a lot easier; individual users can send sharing invitations to users in other Exchange 2010 organizations, and you can set up organization relationships with other Exchange 2010 organizations. Microsoft has broken up the process into three pieces:

  1. Establish your organization’s trust relationship with Windows Live. This is a one-time process that must take place before any sharing can take place – and you don’t have to create or manage any service or role accounts. You just have to make sure that you’re using a CA to publish Autodiscover/EWS that Windows Live will trust. (Sorry, there’s no list out there yet, but keep watching the docs on TechNet.) From your Exchange 2010 organization (typically through EMC, although you can do it from EMS) you’ll swap public keys (which are built into your certificates) with Windows Live and identify one or more accepted domains that you will allow to be federated. Needless to say, Autodiscover and EWS must be properly published to the Internet. You also have to add a single DNS record to your public DNS zone, showing that you do have authority over the domain namespace. If you have multiple domains and only specify some of them, beware: users that don’t have provisioned addresses in those specified domains won’t be able to share or receive federated calendar info!
  2. Establish one or more sharing policies. These policies control how much information your users will be able to share with external users through sharing invitations. The setting you pick here defines the maximum level of information that your users can share from their calendars: none, free/busy only, some details, or all details. You can create a single policy for all your users or use multiple policies to provision your users on a more granular basis. You can assign these policies on a per-user basis.
  3. Establish one or more sharing relationships with other organizations. When you want to view availability data of users in other Exchange 2010 organizations, you create an organization relationship with them. Again, you can do this via EMC or EMS. This tells your CAS servers to look up information from the defined namespaces on behalf of your users – contingent, of course, on the foreign organization having established the appropriate permissions in their organization relationships. If the foreign namespace isn’t federated with Windows Live, then you won’t be allowed to establish the relationship.

You can read more about these steps in the TechNet documentation and at this TechNet topic (although since Exchange 2010 is still in beta, the documentation is not all in place yet). You should also know that these policies and settings combine with the ACLs on users’ calendar folders, and as is typical in Exchange when there are multiple levels of permission, the most restrictive level wins.
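That “most restrictive level wins” rule is easy to sketch. In this hypothetical snippet (the level names roughly mirror the sharing-policy settings described above, but the function and ordering are my own illustration), the effective sharing level is simply the lowest of the levels in play:

```python
# Sharing levels from least to most permissive, mirroring the policy
# settings described above (names are illustrative, not the real API).
LEVELS = ["none", "free_busy", "some_details", "all_details"]

def effective_sharing(policy_level, folder_acl_level):
    """The most restrictive of the sharing policy and the folder ACL wins."""
    return min(policy_level, folder_acl_level, key=LEVELS.index)

# A generous policy can't override a locked-down calendar folder...
assert effective_sharing("all_details", "free_busy") == "free_busy"
# ...and a generous ACL can't override a restrictive policy.
assert effective_sharing("some_details", "all_details") == "some_details"
```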

What’s magic about all of this is that at no point along the way, other than the initial setup, do you have to worry consciously about the certificates you’re using. You never have to provide or provision credentials. As you create your policies and sharing relationships with other organizations – and other organizations create them with yours – Windows Live hovers silently in the background, acting as a trusted broker for the initial connections. When your Exchange 2010 organization interacts with another, your CAS servers receive a SAML token from Windows Live. This token is then passed to the foreign Exchange 2010 organization, which can validate it because of its own trust relationship with Windows Live. All this token does is validate that your servers really represent the claimed namespace – Windows Live plays no part in authorization, retrieving the data, or managing the sharing policies.
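The brokered-trust idea can be sketched in a few lines. This is emphatically not the real SAML wire format – real SAML tokens use XML and public-key signatures – but an HMAC shared with the broker stands in for the signature here, and it shows why each organization can validate tokens from organizations it has never exchanged credentials with: both sides trust the broker, not each other.

```python
import hashlib
import hmac
import json

# Stand-in for the broker's (Windows Live's) signing key. In reality this
# would be an asymmetric key pair, with each org holding the public half.
BROKER_KEY = b"broker-signing-key"

def issue_token(namespace):
    """Broker signs a claim that a request comes from the given namespace."""
    claim = json.dumps({"namespace": namespace}).encode()
    sig = hmac.new(BROKER_KEY, claim, hashlib.sha256).hexdigest()
    return claim, sig

def validate_token(claim, sig):
    """A foreign org validates the claim using only its trust of the broker."""
    expected = hmac.new(BROKER_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

claim, sig = issue_token("contoso.tld")
assert validate_token(claim, sig)          # foreign org accepts the claim
assert not validate_token(claim, "bogus")  # a tampered token is rejected
```

Note that the token only asserts namespace identity; as the paragraph above says, authorization and the actual data retrieval stay entirely between the two Exchange organizations.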

However, here’s the problem: when my CAS talks to your CAS, they’re using SAML tokens – not user accounts – to authenticate against IIS for EWS calls. ISA Server (and, IIRC, TMG) don’t know how to validate these tokens, so the incoming requests can’t authenticate and pass on to the CAS. The end result is that you can’t get a proper sharing relationship set up and you can’t federate calendar data.

What We Did To Fix It

Once we knew what the problem was, fixing it was easy:

  1. Modify the OWA and ECP virtual directories on all of our Exchange 2010 CAS servers to perform FBA. These are the only virtual directories that permit FBA, so they’re the only two you need to change:

     Set-OWAVirtualDirectory -Identity "CAS-SERVER\owa (Default Web Site)" -BasicAuthentication $TRUE -WindowsAuthentication $FALSE -FormsAuthentication $TRUE

     Set-ECPVirtualDirectory -Identity "CAS-SERVER\ecp (Default Web Site)" -BasicAuthentication $TRUE -WindowsAuthentication $FALSE -FormsAuthentication $TRUE
  2. Modify the Web listener on our ISA server to disable pre-authentication. In our case, we were using a single Web listener for Exchange (and only for Exchange), so it was a simple matter of changing the authentication setting to a value of No Authentication.
  3. Modify each of the ISA publishing rules (ActiveSync, Outlook Anywhere, and OWA):

     • On the Authentication tab, select the value No delegation, but client may authenticate directly.

     • On the Users tab, remove the value All Authenticated Users and replace it with the value All Users. This is important! If you don’t do this, ISA won’t pass any connections on!

You may also need to take a look at the rest of your Exchange virtual directories and ensure that the authentication settings are valid; many places will allow Basic authentication between ISA and their CAS servers and require NTLM or Windows Integrated from external clients to ISA.

Calendar sharing and ISA FBA pre-authentication are both wonderful features, and I’m a bit sad that they don’t play well together. I hope that future updates to TMG will resolve this issue and allow TMG to successfully pre-authenticate incoming federated calendar requests.

Stolen Thunder: Outlook for the Mac

I was going to write up a quick post about the release of Entourage for EWS (allowing it to work in native Exchange 2007, and, more importantly, Exchange 2010 environments) and the announcement that Office 2010 for the Mac would have Outlook, not Entourage, but Paul beat me to it, including my whole take on the thing. So go read his.

For those keeping track at home, yes, I still owe you a second post on the Exchange 2010 calendar sharing. I’m working on it! Soon!

EAS: King of Sync?

Seven months or so ago, IBM surprised a bunch of people by announcing that they were licensing Microsoft’s Exchange ActiveSync protocol (EAS) for use with a future version of Lotus Notes. I’m sure there were a few folks who saw it coming, but I cheerfully admit that I was not one of them. After about 30 seconds of thought, though, I realized that it made all kinds of sense. EAS is a well-designed protocol, I am told by my developer friends, and I can certainly attest to the relatively lightweight load it puts on Exchange servers as compared to some of the popular alternatives – enough so that BlackBerry add-ons that speak EAS have become a not-unheard-of alternative for many organizations.

So, imagine my surprise when my Linux geek friend Nick told me smugly that he now had a new Palm Pre and was synching it to his Linux-based email system using the Pre’s EAS support. “Oh?” said I, trying to stay casual as I was mentally envisioning the screwed-up mail forwarding schemes he’d put in place to route his email to an Exchange server somewhere. “Did you finally break down and migrate your email to an Exchange system? If not, how’d you do that?”

Nick then proceeded to point me in the direction of Z-Push, which is an elegant little open source PHP-based implementation of EAS. A few minutes of poking around and I became convinced that this was a wicked cool project. I really like how Z-Push is designed:

  • The core PHP module answers incoming requests for the http://server/Microsoft-Server-ActiveSync virtual directory and handles all the protocol-level interactions. I haven’t dug into this deeply, but although it appears it was developed against Apache, folks have managed to get it working on a variety of web servers, including IIS! I’m not clear on whether authentication is handled by the package itself or by the web server. Now that I think about it, I suspect it just proxies your provided credentials on to the appropriate back-end system so that you don’t have to worry about integrating Z-Push with your authentication sources.
  • One or more back-end modules (also written in PHP), which read and write data from various data sources such as your IMAP server, a Maildir file system, or some other source of mail, calendar, or contact information. These back-end modules are run through a differential engine to help cut down on the amount of synching the back-end modules must perform. It looks like the API for these modules is very well thought-out; they obviously want developers to be able to easily write backends to tie in to a wide variety of data sources. You can mix and match multiple backends; for example, get your contact data from one system, your calendar from another, and your email from yet a third system.
  • If you’re running the Zarafa mail server, there’s a separate component that handles all types of data directly from Zarafa, easing your configuration. (Hey – Zarafa and Z-Push…I wonder if Zarafa provides developer resources; if so, way to go, guys!)
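The differential engine mentioned above is the heart of the design. Here’s a hypothetical sketch of the idea in Python (the real project is PHP, and these function and field names are my own invention, not the Z-Push API): compare the last-synced snapshot of a folder against its current state and emit only the operations the device actually needs.

```python
def diff_states(old, new):
    """Compare two {item_id: revision} snapshots of a folder and return
    the add/change/delete operations needed to bring a device up to date."""
    adds    = [i for i in new if i not in old]
    deletes = [i for i in old if i not in new]
    changes = [i for i in new if i in old and new[i] != old[i]]
    return {"add": adds, "change": changes, "delete": deletes}

# State as of the device's last sync vs. what the back end sees now.
last_sync = {"msg1": "rev1", "msg2": "rev1", "msg3": "rev2"}
current   = {"msg1": "rev1", "msg3": "rev3", "msg4": "rev1"}

ops = diff_states(last_sync, current)
assert ops == {"add": ["msg4"], "change": ["msg3"], "delete": ["msg2"]}
```

Because the engine only ships deltas, a back-end module can stay simple (just report current state) while the device traffic stays small – which is presumably why the API makes it easy to bolt new data sources on.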

You do need to be careful about the back-end modules; because they’re PHP code running on your web server, poor design or bugs can slam your web server. For example, there’s currently a bug in how the IMAP back-end re-scans messages, and the resulting load can create a noticeable impact on an otherwise healthy Apache server with just a handful of users. It’s a good thing that there seems to be a lively and knowledgeable community on the Z-Push forums; they haven’t wasted any time in diagnosing the bug and providing suggested fixes.

Very deeply cool – folks are using Z-Push to provide, for example, an EAS connection point on their Windows Home Server, synching to their Gmail account. I wonder how long it will take for Linux-based “Exchange killers” (other than Zarafa) to wrap this product into their overall packages.

It’s products like this that help reinforce the awareness that EAS – and indirectly, Exchange – is a dominant enough force in the email market to make this kind of project not only potentially useful, but viable as an open source effort.

Comparing PowerShell Switch Parameters with Boolean Parameters

If you’ve ever taken a look at the help output (or TechNet documentation) for PowerShell cmdlets, you’ll see that they list several pieces of information about each of the various parameters the cmdlet can use:

  • The parameter name
  • Whether it is a required or optional parameter
  • The .NET variable type the parameter expects
  • A description of the behavior the parameter controls

Let’s focus on two particular types of parameters, the Switch (System.Management.Automation.SwitchParameter) and the Boolean (System.Boolean). While I never really thought about it much before reading a discussion on an email list earlier, these two parameter types seem to be two ways of doing the same thing. Let me give you a practical example from the Exchange 2007 Management Shell: the New-ExchangeCertificate cmdlet. Table 1 lists an excerpt of its parameter list from the current TechNet article:

Table 1: Selected parameters of the New-ExchangeCertificate cmdlet

Parameter | Description

GenerateRequest
(SwitchParameter)

Use this parameter to specify the type of certificate object to create.

By default, this parameter will create a self-signed certificate in the local computer certificate store.

To create a certificate request for a PKI certificate (PKCS #10) in the local request store, set this parameter to $True.

PrivateKeyExportable
(Boolean)

Use this parameter to specify whether the resulting certificate will have an exportable private key.

By default, all certificate requests and certificates created by this cmdlet will not allow the private key to be exported.

You must understand that if you cannot export the private key, the certificate itself cannot be exported and imported.

Set this parameter to $true to allow private key exporting from the resulting certificate.

On quick examination, both parameters control either/or behavior. So why the two different types? The mailing list discussion I referenced earlier pointed out the difference:

Boolean parameters control properties on the objects manipulated by the cmdlets. Switch parameters control behavior of the cmdlets themselves.

So in our example, a digital certificate has a property as part of the certificate that marks whether the associated private key can be exported in the future. That property goes along with the certificate, independent of the management interface or tool used. For that property, then, PowerShell uses the Boolean type for the -PrivateKeyExportable parameter.

On the other hand, the -GenerateRequest parameter controls the behavior of the cmdlet. With this parameter specified, the cmdlet creates a certificate request with all of the specified properties. If this parameter isn’t present, the cmdlet creates a self-signed certificate with all of the specified properties. The resulting object (CSR or certificate) carries no sign of which option was chosen – you could just as easily submit that CSR to another tool on the same machine to get it signed.
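The same distinction shows up in other command-line frameworks. As a rough analogy (Python’s argparse rather than PowerShell, and the parameter names here merely mirror New-ExchangeCertificate for illustration), a switch is present-or-absent and steers what the tool does, while a Boolean parameter takes an explicit value that ends up recorded on the resulting object:

```python
import argparse

parser = argparse.ArgumentParser()
# Switch-like: presence alone changes the tool's behavior, no value needed.
parser.add_argument("--generate-request", action="store_true")
# Boolean-like: takes an explicit true/false value that describes a property
# of the object being created.
parser.add_argument("--private-key-exportable",
                    type=lambda s: s.lower() == "true", default=False)

# Both specified: generate a request whose key will be exportable.
args = parser.parse_args(["--generate-request", "--private-key-exportable", "true"])
assert args.generate_request is True
assert args.private_key_exportable is True

# Omitting the switch flips the tool's behavior; the Boolean keeps its default.
args = parser.parse_args([])
assert args.generate_request is False
assert args.private_key_exportable is False
```

The asymmetry matches the rule quoted above: the switch never appears in the output object, but the Boolean value travels with it.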

I hope this helps draw the distinction. Granted, it’s one I hadn’t thought much about before today, but now that I have, it’s nice to know that there’s yet another sign of intelligence and forethought in the PowerShell architecture.