Leaving 3Sharp

3Sharp has been a fantastic place to work; for the last six and a half years, my co-workers and I have walked the road together. One of the realities of growth, though, is that you eventually reach a fork in the road where you have to move down different paths. Working with Paul, Tim, Missy, Kevin, and the rest of the folks who have been part of the Platform Services Group here at 3Sharp over the years has been a wild journey, but we were only one of three groups at 3Sharp; the other two groups are also chock-full of smart people doing wonderful things with SharePoint and Office. 3Sharp will be moving forward to focus on those opportunities, and the Platform Services Group (which focused on Exchange, OCS, Windows Server, Windows Mobile, and DPM) is closing its doors. My last day here will be tomorrow, Friday, October 16.

I think Ecclesiastes 3:1 says it best; in the King James Version, the poet says, “To every thing there is a season, and a time to every purpose under the heaven.” It has been my privilege to use this blog to talk about Exchange, data protection, and all the other topics I’ve covered since my first post here five years ago (holy crap, has it really been five years???). With 3Sharp’s gracious permission and blessing, I’ll be duplicating all of the content I’ve posted here over on my personal blog, Devin on Earth. If you have a link or bookmark for this blog or are following me via RSS, please take a moment to update it now (Devin on Earth RSS feed). I’ve got a few new posts cooking, but this will be my last post here.

Thank you to 3Sharp and the best damn co-workers I could ever hope to work with over the years. Thank you, my readers. You all have helped me grow and solidify my skills, and I hope I returned the favor. I look forward to continuing the journey with many of you, even if I’m not sure yet where it will take me.

OneNote 2010 Keeps Your Brains In Your Head

Some months back, those of you who follow me on Twitter (@devinganger) may have noticed a series of teaser Tweets about a project I was working on that involved zombies.

Yes, that’s right, zombies. The RAHR-BRAINS-RAHR shambling undead kind, not the “mystery objects in Active Directory” kind.

Well, now you can see what I was up to.

I was working with long-time fellow 3Sharpie David Gerhardt on creating a series of 60-second vignettes for the upcoming Office 2010 application suite. Each vignette focuses on a single new area of functionality in one of the Office products. I got to work with OneNote 2010.

Here’s where the story gets good.

I got brought into the project somewhat late, after a bunch of initial planning and prep work had been done. The people who had been working on the project had decided that they didn’t want to do the same boring business-related content in their OneNote 2010 vignettes; oh, no! Instead, they hit upon the wonderful idea of using a Zombie Plan as the base document. Now, I don’t really like zombies, but this seemed like a great way to spice up a project!

The rest, as they say, is history. Check out the results (posted both at GetSharp and somewhere out on YouTube) for yourself.

One of the best parts of this project, other than getting a chance to learn about some of the wildly cool stuff the OneNote team is doing to enhance an already wonderful product, was the music selection. We worked a deal with local artist Dave Pezzner to use some of his short music clips for these videos. Dave is immensely talented and provided a wide selection of material, so I enjoyed being able to pick and choose just the right music for each video. It did occur to me how cool it would be if I could use Jonathan Coulton’s fantastic song Re: Your Brains, but somehow I think his people lost my query email. Such is life – and I think Mr. Pezzner’s music provided just the right accompaniment to the Zombie Plan content.

Enjoy!

Why Aren’t My Exchange Certificates Validating?

Updated 10/13: Updated the link to the blog article on configuring Squid for Exchange per the request of the author Owen Campbell. Thank you, Owen, for letting me know the location had changed!

By now you should be aware that Microsoft strongly recommends that you publish Exchange 2010/2007 client access servers (and Exchange 2003/2000 front-end servers) to the Internet through a reverse proxy like Microsoft’s Internet Security and Acceleration Server 2006 SP1 (ISA) or the still-in-beta Microsoft Forefront Threat Management Gateway (TMG). There are other reverse proxy products out there, such as the open source Squid (with some helpful guides on how to configure it for EAS, OWA, and Outlook Anywhere), but many of them can only be used to proxy the HTTP-based protocols (for example, the reverse proxy module for the Apache web server) and won’t handle the RPC component of Outlook Anywhere.

When you’re following this recommendation, you keep your Exchange CAS/HT/front-end servers in your private network and place the ISA Server (or other reverse proxy solution) in your perimeter (DMZ) network. In addition to having your reverse proxy scrub incoming traffic for you, you gain another benefit: SSL bridging. SSL bridging is where there are two SSL connections – one between the client machine and the reverse proxy, and a separate connection (often using a different SSL certificate) between the reverse proxy and the Exchange CAS/front-end server. SSL bridging is awesome because it allows you to radically reduce the number of commercial SSL certificates you need to buy. You can use Windows Certificate Services to generate and issue certificates to all of your internal Exchange servers, creating them with all of the Subject Alternate Names that you need and desire, and still have a commercial certificate deployed on your Internet-facing system (nice to avoid certificate issues when you’re dealing with home systems, public kiosks, and mobile devices, no?) that has just the public common namespaces like autodiscover.yourdomain.tld and mail.yourdomain.tld (or whatever you actually use).
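As an aside, if you want a feel for what generating that internal SAN certificate request looks like from the Exchange Management Shell, here’s a minimal sketch (the hostnames and output path are placeholders; adjust the SANs to match your own namespaces, then submit the resulting request file to your internal Windows CA):

    New-ExchangeCertificate -GenerateRequest `
        -SubjectName "CN=mail.corp.example.com, O=Example" `
        -DomainName mail.corp.example.com, autodiscover.corp.example.com, cas01.corp.example.com `
        -PrivateKeyExportable $true | Set-Content -Path C:\certs\internal.req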

In the rest of this article, I’ll be focusing on ISA because, well, I don’t know Squid that well and haven’t actually seen it in use to publish Exchange in a customer environment. Write what you know, right?

One of the most irritating experiences I’ve consistently had when using ISA to publish Exchange securely is getting the certificate configuration on ISA correct. If you all want, I can cover certificate namespaces in another post, because that’s not what I’m talking about here – I actually find that relatively easy to deal with these days. No, what I find annoying about ISA and certificates is getting all of the proper root CA certificates and intermediate CA certificates in place. The process you have to go through varies depending on who you buy your certificates from. There are a couple of CAs, like GoDaddy, that offer inexpensive certificates that do exactly what Exchange needs – but they require an extra bit of configuration to get everything working.

The problem you’ll see is two-fold:

  1. External clients will not be able to connect to Exchange services. This will be inconsistent; some browsers and some Outlook installations (especially those on new Windows installs or well-updated Windows installs) will work fine, while others won’t. You may have big headaches getting mobile devices to work, and the error messages will be cryptic and unhelpful.
  2. While validating your Exchange publishing rules with the Exchange Remote Connectivity Analyzer (ExRCA), you get a validation error on your certificate as shown in Figure 1.

ExRCA can't find the intermediate certificate on your ISA server
Figure 1: Missing intermediate CA certificate validation error in ExRCA

The problem is that some devices don’t have the proper certificate chain in place. Commercial certificates typically have two or three certificates in their signing chain: the root CA certificate, an intermediate CA certificate, and (optionally) an additional intermediate CA certificate. The secondary intermediate CA certificate is typically the source of the problem; it’s configured as a cross-signing certificate, which is intended to help CAs transition old certificates from one CA to another without invalidating the issued certificates. If your certificate was issued by a CA that has these in place, you have to have both intermediate CA certificates in place on your ISA server in the correct certificate stores.
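If you want to see exactly what a client will see, you can walk the chain yourself. Here’s a quick PowerShell sketch (the thumbprint is a placeholder) that builds the chain for a certificate in the local computer store and lists every certificate it depends on – each intermediate it reports needs to be present in the Intermediate Certification Authorities store on your ISA server:

    # Load the server certificate from the computer's Personal store
    $cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Thumbprint -eq "PASTE-THUMBPRINT-HERE" }

    # Build the chain the same way a client would and report any gaps
    $chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
    if (-not $chain.Build($cert)) {
        $chain.ChainStatus | ForEach-Object { $_.StatusInformation }
    }

    # Every certificate in the chain, from the server cert up to the root
    $chain.ChainElements | ForEach-Object { $_.Certificate.Subject }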

By default, CAs will issue the entire certificate chain to you in a single bundle when they issue your cert. You have to import this bundle on the machine you issued the request from, or else you don’t get the private key associated with the certificate. Once you’ve done that, you need to re-export the certificate, with the private key and its entire certificate chain, so that you can import it into ISA. This is important because ISA needs the private key so it can decrypt the SSL session (required for bridging), and ISA needs the full certificate signing chain so that it can hand out missing intermediate certificates to devices that don’t have them (such as Windows Mobile devices that only have the root CA certificates). If the device doesn’t have the right intermediate, can’t download it itself (like Internet Explorer can), and can’t get it from ISA, you’ll get the certificate validation errors.
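For the export itself, the Certificates MMC export wizard does the job as long as you select “Include all certificates in the certification path if possible.” If you prefer the command line, here’s a one-line sketch – assuming your version of certutil supports -exportPFX (the password, serial number, and path are all placeholders):

    # "My" is the computer's Personal store; identify the cert by serial number or thumbprint
    certutil -p "P@ssw0rd" -exportPFX My 1234567890abcdef C:\certs\mail-with-chain.pfx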

Here’s what you need to do to fix it:

  • Ensure that your server certificate has been exported with the private key and *all* necessary intermediate and root CA certificates.
  • Import this certificate bundle into your ISA servers. Before you do this, check the computer account’s personal certificate store and make sure any root or intermediate certificates that got accidentally imported there are deleted.
  • Using the Certificates MMC snap-in, validate that the certificate now shows as valid when browsing the certificate on your ISA server, as shown in Figure 2.

Even though the Certificates MMC snap-in shows this certificate as valid, ISA won't serve it out until the ISA Firewall Service is restarted!
Figure 2: A validated server certificate signing chain on ISA Server

  • IMPORTANT STEP: restart the ISA Firewall Service on your ISA server (if you’re using an array, you have to do this on each member; you’ll want to drain the connections before restarting, so it can take a while to complete). Even though the Certificates MMC snap-in validates the certificate, the ISA Firewall only picks up the changes to the certificate chain on startup. This is annoying and stupid and has caused me pain in the past – most recently, with 3Sharp’s own Exchange 2010 deployment (thanks to co-worker and all-around swell guy Tim Robichaux for telling me how to get ISA to behave). There’s a quick sketch of the restart below.
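Once you know the restart is required, doing it is trivial. On a standalone ISA box it looks something like this – fwsrv is the service name I believe the Microsoft Firewall service uses on ISA 2006, but verify it on your own server first:

    # Check the name first:
    #   Get-Service | Where-Object { $_.DisplayName -like "*Firewall*" }
    # Then restart the Microsoft Firewall service so ISA re-reads the certificate chain
    Restart-Service -Name fwsrv -Force

    # Or, without PowerShell on the box:
    #   net stop fwsrv && net start fwsrv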

Also note that many of the commercial CAs specifically provide downloadable packages of their root CA and intermediate CA certificates. Some of them get really confusing – they have different CAs for different tiers or product lines, so you have to match the server certificate you have with the right CA certificates. GoDaddy’s CA certificate page can be found here.

Some Thoughts on FBA (part 2)

As promised, here’s part 2 of my FBA discussion, in which we’ll talk about the interaction of ISA’s forms-based authentication (FBA) feature with Exchange 2010. (See part 1 here.)

Offloading FBA to ISA

As I discussed in part 1, ISA Server includes the option of performing FBA pre-authentication as part of the web listener. You aren’t stuck with FBA – you can use other pre-auth methods too. The thinking behind this is that ISA is the security server sitting in the DMZ, while the Exchange CAS is in the protected network. Why proxy an incoming connection from the Internet through to your protected network (even with ISA’s impressive HTTP reverse proxy and screening functionality) if it doesn’t present valid credentials? In this configuration, ISA is configured for FBA while the Exchange 2010/2007 CAS or Exchange 2003 front-end server is configured for Windows Integrated or Basic, as shown in Figure 1 (a figure so nice I’ll re-use it):

Figure 1: Publishing Exchange using FBA on ISA

Moving FBA off of ISA

Having ISA (and Threat Management Gateway, the 64-bit successor to ISA 2006) perform pre-auth in this fashion is nice and works cleanly. However, in our Exchange 2010 deployment, we found a couple of problems with it:

The early beta releases of Entourage for EWS wouldn’t work with this configuration; Entourage could never connect. If our users connected to the 3Sharp VPN, bypassing the ISA publishing rules, Entourage would immediately see the Exchange 2010 servers and do its thing. I don’t know if the problem was solved for the final release.

We couldn’t get federated calendar sharing, a new Exchange 2010 feature, to work. Other Exchange 2010 organizations would get errors when trying to connect to our organization. This new calendar sharing feature uses a Windows Live-based central brokering service to avoid the need to provision and manage credentials.

Through some detailed troubleshooting with Microsoft and other Exchange 2010 organizations, we finally figured out that our ISA FBA configuration was causing the problem. The solution was to disable ISA pre-authentication and re-enable FBA on the appropriate virtual directories (OWA and ECP) on our CAS server. Once we did that, not only did federated calendar sharing start working flawlessly, but our Entourage users found their problems had gone away too. For more details of what we did, read on.

How Calendar Sharing Works in Exchange 2010

If you haven’t seen other descriptions of the federated calendar sharing, here’s a quick primer on how it works. This will help you understand why, if you’re using ISA pre-auth for your Exchange servers, you’ll want to rethink it.

In Exchange 2007, you could share calendar data with other Exchange 2007 organizations. Doing so meant that your CAS servers had to talk to theirs, and the controls around it were not that granular. To set it up, you either needed to establish a forest trust and grant permissions to the other forest’s CAS servers (to get detailed per-user free/busy information) or set up a separate user in your forest for the foreign forest to use (to get default per-org free/busy data). You also had to fiddle around with the Autodiscover service connection points and ensure that you had pointers for the foreign Autodiscover SCPs in your own AD (and that the foreign systems had yours). You also had to publish Autodiscover and EWS externally (which you have to do for Outlook Anywhere anyway) and coordinate all your certificate CAs. While this doesn’t sound that bad, you have to repeat these steps for every single foreign organization you’re sharing with. That adds up, and it’s a poorly documented process – you’ll start at this TechNet topic about the Availability service and have to do a lot of chasing around to figure out how certificates fit in, how to troubleshoot it, and the SCP export and import process.

In Exchange 2010, this gets a lot easier; individual users can send sharing invitations to users in other Exchange 2010 organizations, and you can set up organization relationships with other Exchange 2010 organizations. Microsoft has broken up the process into three pieces:

  1. Establish your organization’s trust relationship with Windows Live. This is a one-time process that must take place before any sharing can take place – and you don’t have to create or manage any service or role accounts. You just have to make sure that you’re using a CA to publish Autodiscover/EWS that Windows Live will trust. (Sorry, there’s no list out there yet, but keep watching the docs on TechNet.) From your Exchange 2010 organization (typically through EMC, although you can do it from EMS) you’ll swap public keys (which are built into your certificates) with Windows Live and identify one or more accepted domains that you will allow to be federated. Needless to say, Autodiscover and EWS must be properly published to the Internet. You also have to add a single DNS record to your public DNS zone, showing that you do have authority over the domain namespace. If you have multiple domains and only specify some of them, beware: users that don’t have provisioned addresses in those specified domains won’t be able to share or receive federated calendar info!
  2. Establish one or more sharing policies. These policies control how much information your users will be able to share with external users through sharing invitations. The setting you pick here defines the maximum level of information that your users can share from their calendars: none, free/busy only, some details, or all details. You can create a single policy for all your users or use multiple policies to provision your users on a more granular basis. You can assign these policies on a per-user basis.
  3. Establish one or more sharing relationships with other organizations. When you want to view availability data of users in other Exchange 2010 organizations, you create an organization relationship with them. Again, you can do this via EMC or EMS. This tells your CAS servers to look up information from the defined namespaces on behalf of your users – contingent, of course, on the foreign organization having established the appropriate permissions in its own organization relationships. If the foreign namespace isn’t federated with Windows Live, you won’t be allowed to establish the relationship. (There’s a rough EMS sketch of all three steps below.)
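To give you a feel for the shell side of this, here’s a rough EMS sketch of the three steps. The thumbprint and domain names are placeholders, and I’m glossing over details (such as Set-FederatedOrganizationIdentifier, which binds your accepted domains to the trust) – check the TechNet docs for the full procedure:

    # Step 1: establish the trust with Windows Live (thumbprint of your federation certificate)
    New-FederationTrust -Name "Microsoft Federation Gateway" -Thumbprint 1234567890ABCDEF

    # Step 2: a sharing policy that caps users at free/busy-only sharing
    New-SharingPolicy -Name "Free-Busy Only" -Domains "fabrikam.com: CalendarSharingFreeBusySimple"

    # Step 3: an organization relationship with a partner Exchange 2010 organization
    New-OrganizationRelationship -Name "Fabrikam" -DomainNames fabrikam.com `
        -FreeBusyAccessEnabled $true -FreeBusyAccessLevel LimitedDetails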

You can read more about these steps in the TechNet documentation and at this TechNet topic (although since the documentation is still in beta, it’s not all in place yet). You should also know that these policies and settings combine with the ACLs on users’ calendar folders, and as is the typical case in Exchange when there are multiple levels of permission, the most restrictive level wins.

What’s magic about all of this is that at no point along the way, other than the initial step, do you have to consciously worry about the certificates you’re using. You never have to provide or provision credentials. As you create your policies and sharing relationships with other organizations – and other organizations create them with yours – Windows Live is hovering silently in the background, acting as a trusted broker for the initial connections. When your Exchange 2010 organization interacts with another, your CAS servers receive a SAML token from Windows Live. This token is then passed to the foreign Exchange 2010 organization, which can validate it because of its own trust relationship with Windows Live. All this token does is validate that your servers are really coming from the claimed namespace – Windows Live plays no part in authorization, retrieving the data, or managing the sharing policies.

However, here’s the problem: when my CAS talks to your CAS, they’re using SAML tokens – not user accounts – to authenticate against IIS for EWS calls. ISA Server (and, IIRC, TMG) don’t know how to validate these tokens, so the incoming requests can’t authenticate and pass on to the CAS. The end result is that you can’t get a proper sharing relationship set up and you can’t federate calendar data.

What We Did To Fix It

Once we knew what the problem was, fixing it was easy:

  1. Modify the OWA and ECP virtual directories on all of our Exchange 2010 CAS servers to perform FBA. These are the only virtual directories that permit FBA, so they’re the only two you need to change:

     Set-OWAVirtualDirectory -Identity "CAS-SERVER\owa (Default Web Site)" -BasicAuthentication $TRUE -WindowsAuthentication $FALSE -FormsAuthentication $TRUE

     Set-ECPVirtualDirectory -Identity "CAS-SERVER\ecp (Default Web Site)" -BasicAuthentication $TRUE -WindowsAuthentication $FALSE -FormsAuthentication $TRUE
  2. Modify the Web listener on our ISA server to disable pre-authentication. In our case, we were using a single Web listener for Exchange (and only for Exchange), so it was a simple matter of changing the authentication setting to a value of No Authentication.
  3. Modify each of the ISA publishing rules (ActiveSync, Outlook Anywhere, and OWA):

     • On the Authentication tab, select the value No delegation, but client may authenticate directly.

     • On the Users tab, remove the value All Authenticated Users and replace it with the value All Users. This is important! If you don’t do this, ISA won’t pass any connections on!

You may also need to take a look at the rest of your Exchange virtual directories and ensure that the authentication settings are valid; many places will allow Basic authentication between ISA and their CAS servers and require NTLM or Windows Integrated from external clients to ISA.
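A quick way to audit those settings across the board is to dump the authentication-related properties for each virtual directory from EMS:

    Get-OwaVirtualDirectory         | Format-List Identity, *Authentication*
    Get-EcpVirtualDirectory         | Format-List Identity, *Authentication*
    Get-ActiveSyncVirtualDirectory  | Format-List Identity, *Authentication*
    Get-WebServicesVirtualDirectory | Format-List Identity, *Authentication*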

Calendar sharing and ISA FBA pre-authentication are both wonderful features, and I’m a bit sad that they don’t play well together. I hope that future updates to TMG will resolve this issue and allow TMG to successfully pre-authenticate incoming federated calendar requests.

Stolen Thunder: Outlook for the Mac

I was going to write up a quick post about the release of Entourage for EWS (allowing it to work in native Exchange 2007 and, more importantly, Exchange 2010 environments) and the announcement that Office 2010 for the Mac would have Outlook, not Entourage, but Paul beat me to it, including my whole take on the thing. So go read his.

For those keeping track at home, yes, I still owe you a second post on the Exchange 2010 calendar sharing. I’m working on it! Soon!

EAS: King of Sync?

Seven months or so ago, IBM surprised a bunch of people by announcing that they were licensing Microsoft’s Exchange ActiveSync protocol (EAS) for use with a future version of Lotus Notes. I’m sure there were a few folks who saw it coming, but I cheerfully admit that I was not one of them. After about 30 seconds of thought, though, I realized that it made all kinds of sense. EAS is a well-designed protocol, I am told by my developer friends, and I can certainly attest to the relatively lightweight load it puts on Exchange servers as compared to some of the popular alternatives – enough so that BlackBerry add-ons that speak EAS have become a not-unheard-of alternative for many organizations.

So, imagine my surprise when my Linux geek friend Nick told me smugly that he now had a new Palm Pre and was synching it to his Linux-based email system using the Pre’s EAS support. “Oh?” said I, trying to stay casual as I was mentally envisioning the screwed-up mail forwarding schemes he’d put in place to route his email to an Exchange server somewhere. “Did you finally break down and migrate your email to an Exchange system? If not, how’d you do that?”

Nick then proceeded to point me in the direction of Z-Push, which is an elegant little open source PHP-based implementation of EAS. A few minutes of poking around and I became convinced that this was a wicked cool project. I really like how Z-Push is designed:

  • The core PHP module answers incoming requests for the http://server/Microsoft-Server-ActiveSync virtual directory and handles all the protocol-level interactions. I haven’t dug into this deeply, but although it appears it was developed against Apache, folks have managed to get it working on a variety of web servers, including IIS! I’m not clear on whether authentication is handled by the package itself or by the web server. Now that I think about it, I suspect it just proxies your provided credentials on to the appropriate back-end system so that you don’t have to worry about integrating Z-Push with your authentication sources.
  • One or more back-end modules (also written in PHP), which read and write data from various data sources such as your IMAP server, a Maildir file system, or some other source of mail, calendar, or contact information. These back-end modules are run through a differential engine to help cut down on the amount of synching the back-end modules must perform. It looks like the API for these modules is very well thought-out; they obviously want developers to be able to easily write backends to tie in to a wide variety of data sources. You can mix and match multiple backends; for example, get your contact data from one system, your calendar from another, and your email from yet a third system.
  • If you’re running the Zarafa mail server, there’s a separate component that handles all types of data directly from Zarafa, easing your configuration. (Hey – Zarafa and Z-Push…I wonder if Zarafa provides developer resources; if so, way to go, guys!)

You do need to be careful about the back-end modules; because they’re PHP code running on your web server, poor design or bugs can slam your web server. For example, there’s currently a bug in how the IMAP back-end re-scans messages, and the resulting load can create a noticeable impact on an otherwise healthy Apache server with just a handful of users. It’s a good thing that there seems to be a lively and knowledgeable community on the Z-Push forums; they haven’t wasted any time in diagnosing the bug and providing suggested fixes.

Very deeply cool – folks are using Z-Push to provide, for example, an EAS connection point on their Windows Home Server, synching to their Gmail account. I wonder how long it will take for Linux-based “Exchange killers” (other than Zarafa) to wrap this product into their overall packages.

It’s products like this that help reinforce the awareness that EAS – and indirectly, Exchange – is a dominant enough force in the email market to make this kind of project not just potentially useful, but viable as an ongoing open source effort.

Comparing PowerShell Switch Parameters with Boolean Parameters

If you’ve ever taken a look at the help output (or TechNet documentation) for PowerShell cmdlets, you’ve seen that it lists several pieces of information about each of the various parameters the cmdlet can use:

  • The parameter name
  • Whether it is a required or optional parameter
  • The .NET variable type the parameter expects
  • A description of the behavior the parameter controls

Let’s focus on two particular types of parameters, the Switch (System.Management.Automation.SwitchParameter) and the Boolean (System.Boolean). While I never really thought about it much before reading a discussion on an email list earlier, these two parameter types seem to be two ways of doing the same thing. Let me give you a practical example from the Exchange 2007 Management Shell: the New-ExchangeCertificate cmdlet. Table 1 lists an excerpt of its parameter list from the current TechNet article:

Table 1: Selected parameters of the New-ExchangeCertificate cmdlet

Parameter: GenerateRequest (SwitchParameter)

Use this parameter to specify the type of certificate object to create. By default, the cmdlet creates a self-signed certificate in the local computer certificate store. To create a certificate request for a PKI certificate (PKCS #10) in the local request store, set this parameter to $True.

Parameter: PrivateKeyExportable (Boolean)

Use this parameter to specify whether the resulting certificate will have an exportable private key. By default, all certificate requests and certificates created by this cmdlet will not allow the private key to be exported. You must understand that if you cannot export the private key, the certificate itself cannot be exported and imported. Set this parameter to $true to allow private key exporting from the resulting certificate.

On quick examination, both parameters control either/or behavior. So why the two different types? The mailing list discussion I referenced earlier pointed out the difference:

Boolean parameters control properties on the objects manipulated by the cmdlets. Switch parameters control behavior of the cmdlets themselves.

So in our example, a digital certificate carries a property, as part of the certificate itself, that marks whether the associated private key can be exported in the future. That property goes along with the certificate, independent of the management interface or tool used. For that property, then, PowerShell uses the Boolean type for the -PrivateKeyExportable parameter.

On the other hand, the –GenerateRequest parameter controls the behavior of the cmdlet. With this parameter specified, the cmdlet creates a certificate request with all of the specified properties. If this parameter isn’t present, the cmdlet creates a self-signed certificate with all of the specified properties. The resulting object (CSR or certificate) carries no sign of which option was chosen – you could just as easily submit that CSR to another tool on the same machine to finish creating the certificate.
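You can see the distinction at the call site, too. Here’s a toy function – not part of EMS, just an illustration – showing both parameter types side by side:

    function New-DemoCertificate {
        param(
            [switch]$GenerateRequest,             # behavior: request vs. self-signed
            [bool]$PrivateKeyExportable = $false  # property stamped onto the result
        )
        if ($GenerateRequest) { "Creating a PKCS #10 request..." }
        else { "Creating a self-signed certificate..." }
        "Private key exportable: $PrivateKeyExportable"
    }

    # The switch is merely named; the Boolean demands an explicit value
    New-DemoCertificate -GenerateRequest -PrivateKeyExportable $true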

I hope this helps draw the distinction. Granted, it’s one I hadn’t thought much about before today, but now that I have, it’s nice to know that there’s yet another sign of intelligence and forethought in the PowerShell architecture.

Some Thoughts on FBA (part 1)

It’s funny how topics tend to come in clumps. Take the current example: forms-based authentication (FBA) in Exchange.

An FBA Overview

FBA was introduced in Exchange Server 2003 as a new authentication method for Outlook Web Access. It required OWA to be published using SSL – which was not yet common practice at that point in time – and in turn allowed credentials to be sent a single time via plain-text form fields. It’s taken a while for people to get used to, but FBA has definitely become an accepted practice for Exchange deployments, and it’s a popular way to publish OWA for Exchange 2003, Exchange 2007, and the forthcoming Exchange 2010.

In fact, FBA is so successful that the ISA Server group got into the mix by including FBA pre-authentication in ISA Server. With this model, instead of configuring Exchange for FBA, you configure your ISA server to present the FBA screen. Once the user logs in, ISA takes the credentials and submits them to the Exchange 2003 front-end server or Exchange 2007 (or 2010) Client Access Server using the appropriately configured authentication method (Windows Integrated or Basic). In Exchange 2007 and 2010, this allows each separate virtual directory (OWA, Exchange ActiveSync, RPC proxy, Exchange Web Services, Autodiscover, Unified Messaging, and the new Exchange 2010 Exchange Control Panel) to have its own authentication settings, while ISA transparently mediates them for remote users. Plus, ISA pre-authenticates those connections – only connections with valid credentials ever get passed on to your squishy Exchange servers – as shown in Figure 1:

Figure 1: Publishing Exchange using FBA on ISA

Now that you know more about how FBA, Exchange, and ISA can interact, let me show you one mondo cool thing today. In a later post, we’ll have an architectural discussion for your future Exchange 2010 deployments.

The Cool Thing: Kay Sellenrode’s FBA Editor

On Exchange servers, it is possible to modify both the OWA themes and the FBA page (although you should check about the supportability of doing so). Likewise, it is also possible to modify the FBA page on ISA Server 2006. This is a nice feature as it helps companies integrate the OWA experience into the overall look and feel of the rest of their Web presence. Making these changes on Exchange servers is a somewhat well-documented process. Doing them on ISA is a bit more arcane.

Fellow Exchange 2007 MCM Kay Sellenrode has produced a free tool to simplify the process of modifying the ISA 2006 FBA – named, aptly enough, the FBA Editor. You can find the tool, as well as a YouTube video demo of how to use it, from his blog. While I’ve not had the opportunity to modify the ISA FBA form myself, I’ve heard plenty of horror stories about doing so – and Kay’s tool is a very cool, useful community contribution.

In the next day or two (edit: or more), we’ll move on to part 2 of our FBA discussion – deciding when and where you might want to use ISA’s FBA instead of Exchange’s.

You, too, can Master Exchange

One of the biggest criticisms I’ve seen of the MCM program, even when it first was announced, was the cost – at a list price of $18,500 for the actual MCM program, discounting the travel, lodging, food, and opportunity cost of lost revenue, a lot of people are firmly convinced that the program is way too expensive for anybody but the bigger shops.

This discussion has of course gone back and forth within the Exchange community. I think part of the pushback comes from the fact that MCM is the next evolution of the Exchange Ranger program, which felt very elitist and exclusive (and by many accounts was originally designed to be, back when it was a Microsoft-only program designed to provide a higher degree of training for Microsoft consultants and engineers so they could better resolve their own customer issues). Starting off with that kind of background leaves a lot of lingering impressions, and the Exchange community has long memories. Paul has a great discussion of his point of view as a new MCM instructor and shares his take on the “is it worth it?” question.

Another reason for pushback is the economy. The typical argument is, “I can’t afford to take this time right now.” Let’s take a ballpark figure here, aimed at the coming May 4 rotation, just to have some idea of the kinds of numbers folks are thinking about:

  • Imagine a consultant working a 40-hour week. Her bosses would like her to be 90% (36 hours) billable. Given two weeks of vacation a year, that’s 50 weeks at 36 hours a week.
  • We’ll also imagine that she’s able to bill out at $100/hour. This brings her target annual revenue to $180,000 and sets her opportunity cost (lost revenue) at $3,600/week.
  • We’ll assume she has the prerequisites nailed (MCITP Enterprise Messaging, the additional AD exam for either Windows 2003 or Windows 2008, and the field experience). No extra cost there (otherwise it’s $150/test, or $600 total).
  • Let’s say her plane tickets are $700 for round-trip to Redmond and back.
  • And we’ll say that she needs to stay at a hotel, checking in Sunday May 3rd, checking out Sunday May 24th, at a daily rate of $200.
  • Let’s also assume she’ll need $75 a day for meals.

That works out to $18,500 (class fee) + $700 (plane) + 21 x $275 (hotel + meals) + 3 x $3,600 (opportunity cost of work she won’t be doing) — $18,500 + $700 + $5,775 + $10,800 = a whopping total of $35,775. That, many people argue, is far too much for what they get out of the course – it represents nearly 10 weeks of her regular revenue, or approximately 1/5th of her year’s revenue.

If those numbers were the final answer, they’d be right.

However, Paul has some great talking points in his post; although he focuses on the non-economic piece, I’d like to tie some of those back in to hard numbers.

  • The level of training. I don’t care how well you know Exchange. You will walk out of this class knowing a lot more, and you will be immediately able to take advantage of that knowledge to the betterment of your customers. Plus, you will have ongoing access to some of the best Exchange people in the world. I don’t know a single consultant out there who can work on a problem that stumps them for hours or days and consistently bill every single hour they spend showing no results. Most of us end up eating time, which shows up in the bottom line. For the sake of argument, let’s say that our consultant ends up spending 30% instead of 10% of her time working on issues that she can’t directly bill for because of things like this. That drops her opportunity cost from $3,600/week to $2,520, or $7,560 for the three weeks (and it means she’s only got an annual revenue of $126,000). If she can reduce that non-billable time, she can increase her efficiency and get more real billable work done in the same calendar period. We’ll say she can gain back 10% of that lost time and get down to only 20% lost time, or 32 billable hours a week.
  • The demonstration of competence. This is a huge competitive advantage for two reasons. First, it helps you land work you may not have been able to land before. This is great for keeping your pipeline full – always a major challenge in a rough economy. Second, it allows you to raise your billing rates. Okay, true, maybe you can’t raise your billing rates for all the work that you do for all of your customers, but even some work at a higher rate directly translates to your pocket book. Let’s say she can bill 25% of those 32 hours at $150/hour. That turns her week’s take into (8 x $150) + (24 x $100) = $1,200 + $2,400 = $3,600. That modest gain in billing rates right there compensates for the extra 10% loss of billing hours and pays for itself every 3-4 weeks.

Let’s take another look at those overall numbers again. This time, let’s change our ballpark with numbers more closely matching the reality of the students at the classes:

  • There’s a 30% discount on the class, so she pays only $12,950 (not $18,500).
  • We’ll keep the $700 for plane tickets.
  • From above, we know that her real lost opportunity cost is more like $7,560 (3 x $2,520 and not the $10,800 worst case).
  • She can get shared apartment housing with other students right close to campus for more like $67 a night (three bedrooms).
  • Food expenses are more typically averaged out to $40 per day. You can, of course, break the bank on this during the weekends, but during the days you don’t really have time.

This puts the cost of her rotation at $12,950 + $700 + (21 x $107) + $7,560, or $23,457. That’s only 66% – two-thirds – of the worst-case cost we came up with above. With her adjusted annual revenue of $126,000, this is only 19%, or just less than 1/5th of her annual revenue.

And it doesn’t stop there. Armed with the data points I gave above, let’s see how this works out for the future and when the benefits from the rotation pay back.

Over the year, our hypothetical consultant, working only a 40-hour work week (I know, you can stop laughing at me now) brings in 50 x $2,520 = $126,000.  The MCM rotation represents 19% of her revenue for the year before costs.

However, let’s figure out her earning potential in that same year: (47 x $3,600) – ($12,950 + $700 + $2,247) = $153,303. That’s better than a 20% increase.
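If you want to play with the numbers yourself, the whole ballpark fits in a few lines of PowerShell (plug in your own rates and expenses):

    $baseline  = 50 * 2520                          # $126,000 at 30% non-billable time
    $worstCase = 18500 + 700 + 21*275 + 3*3600      # $35,775
    $realCase  = 12950 + 700 + 21*(67+40) + 3*2520  # $23,457
    $afterMcm  = 47*3600 - (12950 + 700 + 21*107)   # $153,303: post-rotation year, net of costs
    "{0:P1} increase over baseline" -f (($afterMcm - $baseline) / $baseline)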

Will these numbers make sense for everyone? No, and I’m not trying to argue that they do. What I am trying to point out, though, is that the business justification for going to the rotation may actually make sense once you sit down and work out the numbers. Think about your current projects and how changes to hours and billing rates may improve your bottom line. Think about work you haven’t gotten or been unwilling to pursue because you or the customer felt it was out of your league. Take some time to play with the numbers and see if this makes sense for you.

If it does, or if you have any further questions, let me know.

Fixing interoperability problems between OCS 2007 R2 Public Internet Connectivity and AOL IM

One of the cool things you can do with OCS is connect your internal organization to various public IM clouds (MSN/Windows Live, Yahoo!, and AOL) using the Public Internet Connectivity, or PIC, feature. As you might imagine, though, PIC involves lots of fiddly bits that all have to work just right in order for there to be a seamless user experience. Recently, lots of people deploying OCS 2007 R2 have been reporting problems with PIC – specifically, in getting connectivity to the AOL IM cloud working properly.

Background

It turns out that the problem has to do with changes made to the default SSL cipher suite negotiation in Windows Server 2008. If you deployed your OCS 2007 R2 Edge roles on Windows Server 2003, you’d be fine; if you used Windows 2008, you’d see problems.

When an HTTP client and server connect (and most IM protocols use HTTPS or HTTP + TLS as a firewall-friendly transport[1]), one of the first things they do is negotiate the specific suite of cryptographic algorithms that will be used for that session. The cipher suite includes three components:

  • Key exchange method – this is the algorithm that defines the way that the two endpoints will agree upon a shared symmetric key for the session. This session key will later be used to encrypt the contents of the session, so it’s important for it to be secure. This key should never be passed in cleartext – and since the session isn’t encrypted yet, there has to be some mechanism to do it. Some of the potential methods allow digital signatures, providing an extra level of confidence against a man-in-the-middle attack. There are two main choices: RSA public-private certificates and Diffie-Hellman keyless exchanges (useful when there’s no prior communication or shared set of trusted certificates between the endpoints).
  • Session cipher – this is the cipher that will be used to encrypt all of the session data. A symmetric cipher is faster to process for both ends and reduces CPU overhead, but is in principle more vulnerable to discovery and attack (as both sides have to have the same key and therefore have to exchange it over the wire). The next choice is between a streaming cipher and a cipher block chaining (CBC) cipher. For streaming, you have RC4 (40 and 128-bit variants). For CBC, you can choose RC2 (40-bit), DES (40-bit or 56-bit), 3DES (168-bit), IDEA (128-bit), or Fortezza (96-bit). You can also choose none, but that’s not terribly secure.
  • Message digest algorithm – the message digest is a hash function used to create the Hashed Message Authentication Code (HMAC), which is used to help verify the integrity of the session data. It’s also used to guard against an attacker trying to replay this stream in the future and fool the server into giving up information it shouldn’t. In SSL 3.0, this is just a MAC. There are three choices: null (none), MD5 (128-bit), and SHA-1 (160-bit).
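To make those suite names less opaque, here’s a quick PowerShell parser sketch that splits a name into the three components described above (it only understands the common TLS_*_WITH_* form; the ECDHE suites with curve suffixes in the second list below would need slightly smarter handling):

    function Split-CipherSuite([string]$Name) {
        # e.g. TLS_RSA_WITH_RC4_128_MD5 -> key exchange RSA, cipher RC4_128, digest MD5
        if ($Name -match '^(TLS|SSL)_(.+)_WITH_(.+)_([^_]+)$') {
            New-Object PSObject -Property @{
                KeyExchange   = $Matches[2]
                SessionCipher = $Matches[3]
                Digest        = $Matches[4]
            }
        }
    }

    Split-CipherSuite "TLS_RSA_WITH_RC4_128_MD5"
    Split-CipherSuite "TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA"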

Problem

Windows Server 2003 uses the following suites for TLS 1.0/SSL 3.0 connections by default:

  1. TLS_RSA_WITH_RC4_128_MD5 (RSA certificate key exchange, RC4 streaming session cipher with 128-bit key, and 128-bit MD5 HMAC; a safe, legacy choice of protocols, although definitely aging in today’s environment)
  2. TLS_RSA_WITH_RC4_128_SHA (RSA certificate key exchange, RC4 streaming session cipher with 128-bit key, and 160-bit SHA-1 HMAC; a bit stronger than the above, thanks to SHA-1 being not quite as brittle as MD5 yet)
  3. TLS_RSA_WITH_3DES_EDE_CBC_SHA (you can work out the rest)
  4. TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
  5. TLS_RSA_WITH_DES_CBC_SHA
  6. TLS_DHE_DSS_WITH_DES_CBC_SHA
  7. TLS_RSA_EXPORT1024_WITH_RC4_56_SHA
  8. TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA
  9. TLS_DHE_DSS_EXPORT1024_WITH_DES_CBC_SHA
  10. TLS_RSA_EXPORT_WITH_RC4_40_MD5
  11. TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
  12. TLS_RSA_WITH_NULL_MD5
  13. TLS_RSA_WITH_NULL_SHA

Let’s contrast that with Windows Server 2008, which cleans out some cruft but adds support for quite a few new algorithms (compare with the list above to spot the new suites):

  1. TLS_RSA_WITH_AES_128_CBC_SHA (Using AES 128-bit as a CBC session cipher)
  2. TLS_RSA_WITH_AES_256_CBC_SHA (Using AES 256-bit as a CBC session cipher)
  3. TLS_RSA_WITH_RC4_128_SHA
  4. TLS_RSA_WITH_3DES_EDE_CBC_SHA
  5. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256 (AES 128-bit, SHA-1, NIST P-256 curve)
  6. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384 (AES 128-bit, SHA-1, NIST P-384 curve)
  7. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521 (AES 128-bit, SHA-1, NIST P-521 curve)
  8. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256 (AES 256-bit, SHA-1, NIST P-256 curve)
  9. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384 (AES 256-bit, SHA-1, NIST P-384 curve)
  10. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521 (AES 256-bit, SHA-1, NIST P-521 curve)
  11. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 (you can work out the rest)
  12. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384
  13. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521
  14. TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256
  15. TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384
  16. TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521
  17. TLS_DHE_DSS_WITH_AES_128_CBC_SHA
  18. TLS_DHE_DSS_WITH_AES_256_CBC_SHA
  19. TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
  20. TLS_RSA_WITH_RC4_128_MD5
  21. SSL_CK_RC4_128_WITH_MD5 (not sure)
  22. SSL_CK_DES_192_EDE3_CBC_WITH_MD5 (not sure)
  23. TLS_RSA_WITH_NULL_MD5
  24. TLS_RSA_WITH_NULL_SHA

Okay, so take a look at line 20 in the second list – see how TLS_RSA_WITH_RC4_128_MD5 got moved from first to darned near worst? Yeah, well, that’s because AES and SHA-1 are the strongest algorithms of their types likely to be commonly supported, so Windows 2008 moves the suites that use them to the top of its default offering. Unfortunately, this causes problems with PIC to AOL.

Solution

Now that we know what the problem is, what can we do about it? For the fix, check out Scott Oseychik’s post here.
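I won’t steal Scott’s thunder with the full steps, but the gist as I understand it is to push the older RC4/MD5 suite back to the front of Windows 2008’s negotiation order via the “SSL Cipher Suite Order” group policy. If you script it, the policy persists under a registry value along these lines – that’s my reading of where the GPO writes, the suite list here is illustrative only, and a reboot is required afterward:

    $key = "HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002"
    $suites = "TLS_RSA_WITH_RC4_128_MD5,TLS_RSA_WITH_RC4_128_SHA," +
              "TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA"
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name Functions -Value $suites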

[1] HTTPS is really Hop Through Tightened Perimeters Simply – aka the Universal Firewall Traversal Protocol.

ExMon released (no joke!)

If you’re tempted to think this is an April Fool’s Day joke, no worries – this is the real deal. Yesterday, Microsoft published the Exchange 2007-aware version of Exchange Server User Monitor (ExMon) for download.

“ExMon?” you ask. “What’s that?” I’m happy to explain!

ExMon is a tool that gives you a real-time look inside your Exchange servers to help find out what kind of impact your MAPI clients are having on the system. That’s right – it’s a way to monitor MAPI connections. (Sorry; it doesn’t monitor WebDAV, POP3, IMAP, SMTP, OWA, EAS, or EWS.) With this release, you can now monitor the following versions of Exchange:

  • Exchange Server 2007 SP1+
  • Exchange Server 2003 SP1+
  • Exchange 2000 Server SP2+

You can find out more about it from TechNet.

Even though the release date isn’t a celebration of April 1st, there is currently a bit of an unintentional joke, as shown by the current screenshot:

[screenshot]

Note that while the Date Published is March 31, the Version is only 06.05.7543 – which is the Exchange 2003 version published in 2005, as shown below:

[screenshot]

So, for now, hold off trying to download and use it. I’ll update this post when the error is fixed.

Two CCR White Papers from Missy

This actually happened last week, but I’ve been remiss in getting it posted (sorry, Missy!). Missy recently completed two Exchange 2007 white papers, both centered around the CCR story.

The first one, High Availability Choices for Exchange Server 2007: Continuous Cluster Replication or Single Copy Clustering, provides a thorough overview of the questions and issues to be considered by companies who are looking for Exchange 2007 availability:

  • Large mailbox support. In my experience, this is a major driver for Exchange 2007 migrations and for looking at CCR. Exchange 2007’s I/O performance increases have shifted the balance from the Exchange store always being I/O bound to now sometimes being capacity bound, depending on the configuration, and providing that capacity can be extremely expensive in SCC configurations (which typically rely on SANs). CCR offers some other benefits that Missy outlines.
  • Points of failure. With SCC, you still only have a single copy of the data – making that data (and that SAN frame) a SPOF. There are mitigation steps you can take, but those are all expensive. When it comes to losing your Exchange databases, storage issues are the #1 cause.
  • Database replication. Missy takes a good look at what replication means, how it affects your environment, and why CCR offers a best-of-breed solution for Exchange database replication. She also tackles the religious issue of why SAN-based availability solutions aren’t necessarily the best solution – and why people need to re-examine the question of whether Exchange-based availability features are the right way to go.
  • RTO and RPO. These scary TLAs are popping up all over the place lately, but you really need to understand them in order to have a good handle on what your organization’s exact needs are – and which solution is going to be the best fit for you.
  • Hardware and storage considerations. Years of cluster-based availability solutions have given many Exchange administrators and consultants a blind spot when it comes to how Exchange should be provisioned and designed. These solutions have limited some of the flexibility that you may need to consider in the current economic environment.
  • Cost. Talk about money and you always get people’s attention. Missy details several areas of hidden cost in Exchange availability and shows how CCR helps address many of these issues.
  • Management. It’s not enough to design and deploy your highly available Exchange solution – if you don’t manage and monitor it, and have good operational policies and procedures, your investment will be wasted. Missy talks about several realms of management.

I really recommend this paper for anyone who is interested in Exchange availability. It’s a cogent walkthrough of the major discussion points centering around the availability debate.

Missy’s second paper, Continuous Cluster Replication and Direct Attached Storage: High Availability without Breaking the Bank, directly addresses one of the key assumptions underneath CCR – that DAS can be a sufficient solution. Years of Exchange experience have slowly moved organizations away from DAS to SAN, especially when high availability is a requirement – and many people now write off DAS solutions out of habit, without realizing that Exchange 2007 has in fact enabled a major switch in the art of Exchange storage design.

In order to address this topic, Missy takes a great look at the history of Exchange storage and the technological factors that led to the initial storage design decisions and the slow move to SAN solutions. These legacy decisions continue to box today’s Exchange organizations into a corner with unfortunate consequences – unless something breaks the assumption that high availability demands SAN storage.

Missy then moves into how Exchange 2007 and CCR make it possible to use DAS, outlining the multiple benefits of doing so (not just cost – but there’s a good discussion of the money factor, too).

Both papers are outstanding; I highly recommend them.

Haz Firewall, Want Cheezburger

Although Windows Server 2008 offers an impressive built-in firewall, in some cases we Exchange administrators don’t want to have to deal with it. Maybe you are building a demo to show a customer, or a lab environment to reproduce an issue. Maybe you just want to get Exchange installed now and will loop back to deal with fine-tuning firewall issues later. Maybe you have some other firewall product you’d rather use. Maybe, even, you don’t believe in defense in depth – or don’t think a server-level firewall is useful.

Whatever the reason, you’ve decided to disable the Windows 2008 firewall for an Exchange 2007 server. It turns out that there is a right way to do it and a wrong way to do it.

The wrong way

[screenshot]

This seems pretty intuitive to long-term Exchange administrators who are used to Windows Server 2003. The problem is, the Windows Firewall service in Windows 2008 has been re-engineered and works a bit differently. It now includes the concept of profiles, a feature built into the networking stack at a low level, enabling Windows to identify the network you’re on and apply the appropriate set of configurations (such as enabling or disabling firewall rules and services).

Because this functionality is now tied into the network stack, disabling and stopping the Windows Firewall service can actually lead to all sorts of interesting and hard-to-fix errors.

The right way

Doing it the right way involves taking advantage of those network profiles.

Method 1 (GUI):

  1. Open the Windows Firewall with Advanced Security console (Start, Administrative Tools, Windows Firewall with Advanced Security).
  2. In the Overview pane, click Windows Firewall Properties.
  3. For each network profile (Domain network, Public network, Private network) that the server or image will be operating in, set Firewall state to Off. Typically, setting the Domain network profile is sufficient for an Exchange server, unless it’s an Edge Transport box.
  4. Once you’ve set all the desired profiles, click OK.
  5. Close the Windows Firewall with Advanced Security console.

[screenshot]

Method 2 (CLI):

  1. Open your favorite CLI interface: CMD.EXE or PowerShell.
  2. Type the following command:

     netsh advfirewall set profiles state off

    Fill in profiles with one of the following values:

    • DomainProfile — the Domain network profile. Typically the profile needed for all Exchange servers except Edge Transport.
    • PrivateProfile — the Private network profile. Typicall the profile you’ll need for Edge Transport servers if the perimeter network has been identified as a private network.
    • PublicProfile — the Public network profile. Typicall the profile you’ll need for Edge Transport servers if the perimeter network has been identified as a public network (which is what I’d recommend).
    • CurrentProfile — the currently selected network profile
    • AllProfiles — all network profiles
  3. Close the command prompt.
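For example, on a typical domain-joined Exchange 2007 server, the command (and a quick check that the change took) looks like this:

    netsh advfirewall set domainprofile state off
    netsh advfirewall show allprofiles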

[screenshot]

And there you have it – the right way to disable the Windows 2008 firewall for Exchange Server 2007, complete with FAIL/LOLcats.

A long-overdue status update

So, you haven’t seen a lot of me on the blog lately. The sad part is that I have three or four blog posts in various states of completion; I just seem to have very little time these days to work on them. I think part of it is that ever since my MCM Exchange 2007 class last October, I’ve felt like I had a big burden of unfinished business on my shoulders.

Happily, that’s not the case anymore. Yesterday I retook and passed the lab and received word that I have officially earned the coveted Microsoft Certified Master | Exchange 2007 certification. While I’m taking this moment to express my utmost relief, be assured I’ve got plenty more to say about it in an upcoming blog post, but it’ll have to wait.

I’ve also been re-awarded as an Exchange MVP — 3 years, wow! — and continue to go full-bore with that. I have become very deeply aware that my continued presence in the Microsoft communities is in large part due to the fantastic caliber of people who are involved in them. A friend once mentioned the “open source community” as if it were a single community, and I had to laugh; from my experience, it’s anything but. Consider the following examples:

  • KDE vs. Gnome
  • Linux vs. BSD
  • Linux distro vs. Linux distro
  • Sun Java vs. IBM Java
  • Tomcat vs. other Java frameworks
  • Sendmail vs. Postfix vs. Exim
  • Bernstein vs. everyone else
  • Stallman/FSF vs. everyone else

I made the initial mental leap from “Unix IT pro who knows Windows” to being a “Windows IT pro who knows Unix” because of the management challenges I saw Active Directory and Group Policy addressing, but I stayed for the people. Including people like you, reading my blog.

On that note, since I know many of you started reading me because of seeing me at conferences: I will not be at Spring Connections this year. I know, right? Anyway, it’s all for the best; things are shaping up to be busy and it will be nice to have one year when I’m not flying to Orlando. This is even more awesome because I will be at Tech-Ed, giving both a breakout session and an Interactive Theater session. More details as we get closer. I’ve also got a great project that I’m working on that I hope to be able to announce later.

Oh, hey, have you seen 3Sharp’s new podcasting site, built entirely on the Podcasting Kit for SharePoint that we were the primary developers for? I’ve got a few podcasts in the works…so if you’ve got any questions or ideas of short subjects you’d like me to talk about, let me know!

Alright, folks — it’s late and my Xbox is calling me! (My wife and kids probably want a word with me too.)

Outlook Performance Goodness

Microsoft has recently released a pair of Outlook 2007 updates (okay, technically, they’re updates for Outlook 2007 with SP1 applied) that you might want to look at installing sooner rather than later. These two updates are together being billed as the “February cumulative update” at KB 968009, which has some interesting verbiage about how many of the fixes were originally slated to be in Outlook 2007 SP2:

The fix list for the February CU may not be identical to the fix list for SP2, but for the purposes of this article, the February CU fixes are referred to synonymously with the fixes for SP2. Also, when Office suite SP2 releases, there will not be a specific package that targets only Outlook.

Let’s start with the small one, KB 697688. This one fixes some issues with keyboard shortcuts, custom forms, and embedded Web browser controls.

Okay, with that out of the way, let’s move on to juicy KB 961752, an unlooked-for roll-up containing a delectable selection of fixes. Highlights include:

  1. Stability fixes
  2. SharePoint/Outlook integration
  3. Multiple mailbox handling behavior
  4. Responsiveness

From reports that I’ve seen, users who have applied these two patches are reporting significantly better response times in Outlook 2007 cached mode, even when attaching to large mailboxes or mailboxes with folders that contain many items — traditionally, two scenarios that caused a lot of problems for Outlook because of the way the .ost stored local data. They’ve also reported that the “corrupted data file” problem that many people have complained about (Outlook takes so long to shut down that users kill it before the writes to the .ost fully complete) seems to have gone away.

Note that you may have an awkward moment after starting Outlook for the first time after applying these updates: you’re going to get a dialog something like this:

[screenshot: Outlook’s one-time “first use” data file upgrade dialog]

“Wait a minute,” you might say. “First use? Where’s my data?” Chillax [1]. It’s there — but in order to do the magic, Outlook is changing the structure of the existing .ost file. This is a one-time operation and it can take a little bit of time, depending on how much data you’ve got stuffed away in there (I’ve currently got on the order of 2GB or so, so you can draw your own rough estimates; I suspect it also depends on the number/depth of folders, items per folder, number of attachments, etc.)

Once the re-order is done, though, you get all the benefits. Faster startup, quicker shut-down, and generally more responsive performance overall. This is seriously crisp stuff, folks — I opened my Deleted Items folder (I hardly ever look in there, I just occasionally nuke it from orbit) and SNAP! everything was there as close to instantly as I can measure. No waiting for 3-5 (or 10, or 20) seconds for the view to build.

 

[1] A mash-up of “chill” and “relax”. This is my new favorite word.

GetSharp lives!

If you’ve only interacted with 3Sharp through me, Paul, Missy, and Tim, then you’ve missed a whole key aspect of the talent we’ve got here at 3Sharp. Our group (Infrastructure Solution Group or ISG, formerly known as the Platform Group) is just a small part of what goes on around here.

GetSharp is 3Sharp’s own implementation of PKS, the Podcasting Kit for SharePoint, which was the brainchild of a fair chunk of the rest of the company. Quite simply, it’s podcasting for SharePoint: think something like YouTube mixed into SharePoint, with a whole lot of awesome (like the ability to use Live IDs). When I saw the first demo of what we were doing with GetSharp, I was blown away. I’m happy to have uploaded the videocast series on Exchange 2007 we did for Windows IT Pro, and I’ve got a series on virtualization I’ll be working on when I get back to work next week.

What happens in Vegas gets blogged

Update (11/15/08 1240PST): Fixed the URLs in the links to point to the actual decks. Sorry!

Time this year has flown! Hard to believe that I’ve just finished up my last conference for the year — Exchange Connections Fall at the fabulous Mandalay Bay resort and conference center in Las Vegas. This was my second trip to Vegas this year (the first was in May for the Exchange/DPM session at MMS), and I really prefer the city in November: far fewer people, much more pleasant temperatures.

I gave the following three sessions yesterday:

  • (EXC16) The Collaboration Blender — This session is adapted from the Outlook and SharePoint: Playing Well Together article I wrote for Windows IT Pro magazine (subscription required). Exchange and SharePoint are both touted as collaboration solutions and have some overlapping functionality, so this session explores some of the overlaps and compares and contrasts what each is good for. (In other words, we spend a lot of time talking about Exchange public folders.) And where does Outlook fit into this mess? There’s even a handy summary table!
  • (EXC17) Exchange Virtualization — As I confessed to my attendees, this session was a gamble that paid off. Back when I proposed the topic, there was no official statement of Microsoft support for Exchange virtualization (no, “Don’t!” doesn’t really count). I guessed that by the time November rolled around, Hyper-V would have finally shipped and they’d have shifted that stance — and I was right. Because I focus more on the Hyper-V side of things, I invited VMware to send a representative to the session to present their take on the subject. The resulting session was very good, and I learned a bunch of things too.
  • (EXC18) Exchange Protection using Data Protection Manager — Although a lot of the content here was the same material that I’ve already presented this year (what, 4-5 times now?), I did have to make some changes thanks to the brilliant curve ball that Jason Buffington and his crew in the DPM team threw me. You see, Connections now has all Microsoft speakers speak on one day (imaginatively named “Microsoft Day” for some reason), and that day was Tuesday. While Jason couldn’t be here, Karandeep Anand (who is the DPM bomb!) was — and I’ve been trading decks and VMs and material back and forth with Jason and Karandeep for over a year now. Rather than give a less brilliant copy of the session Karandeep had already done, I added in some new material focusing on the internals of the Exchange store and how that affects Exchange protection, removed the demo, and really attacked the topic from the Exchange side of things. I think it worked. Either that or it was people staying to get free copies of the DPM book that my publisher thoughtfully provided.

A lot of my fellow speakers dread speaking on the last day, but I’ve found that I’ve come to enjoy it. Sure, you have smaller attendance numbers — but the people who are there (especially if you get lucky enough to do the last session on the last day) are the people who really want to be there. I also encourage questions from the audience during the presentation, with the caveat that if they’re too detailed or going to be answered later I’ll defer them; I like the interactivity. I usually learn something from my attendees, which makes it a good time for everyone.

Back to the grind. I know I’ve been way too quiet on the blogfront lately, and I promise, I’ve got some fresh new content in the works. First, though, I have to catch up on the paying work. For some reason, my corporate overlords seem to expect me to do billable work too, not just speak and blog. Ah, well. At least I didn’t get RickRolled on my birthday!

Masters update: short form

I have gotten a lot of email from people who wished me well and wanted to find out the status of my recent Masters rotation. I’m working on a bigger write-up, but here’s the short form:

  1. It was intense. I had a ton of fun, I learned more than I thought I could, and I met a lot of great people who are scary smart. I was also exhausted after it was all said and done.
  2. It was worth the money. Paul breaks it down for you here, and I agree with every data point. I think it’s fair to ignore the cost of travel, because no matter where you go for training, you’d have to pay it.
  3. I’m not yet a Master. There are four tests you have to pass, and I only nailed three of them. I’m now patiently awaiting word on retests, as are several of my classmates, and then we’ll knock ‘em dead.

Thank you, everyone, for your well-wishes and questions. As I said, I’m working on a longer post or series of posts, but those will be a bit delayed in coming because I want to run them by the folks at the MCM/MCA program to make sure that I’m not talking about stuff I shouldn’t be.

…does this mean I’ll get an apprentice?

For the next three weeks, I’ll be squirreled away in a hidden location, having my brains surgically removed and replaced with a quantum-computing device filled with Exchange knowledge. Good times!

Seriously, though, I’ll be off to the October rotation of the three-week Microsoft Certified Master: Microsoft Exchange Server 2007 program. The Master certification is a new certification that Microsoft is rolling out, placed between the MCITP and MCA certifications. It’s so new, in fact, that it doesn’t yet appear on the Find a Microsoft Certification by Technology page.

So, newness established, what does this Master certification entail? First, it’s not your typical Microsoft certification.

To ensure that people going through this experience are ready for it, they’re actually screening candidates. For the Exchange Master program, the published criteria are:

  • 5+ years Exchange 2003
  • 1+ years Exchange 2007
  • Thorough understanding of Exchange design/architecture, AD, DNS, and core network services
  • Certification as an MCITP: Enterprise Messaging (Exchange 2007 exams 70-236, 70-237, and 70-238)
  • Certification as an MCSE on Windows Server 2003 or MCTS: Windows Server 2008 Active Directory Configuration (exam 70-640)

Scrape all that together, and what do you get?

  • Three weeks of “highly intensive classroom training” — and by all reports, they’re not kidding when they say that. I’ve been through plenty of Microsoft classes, and for this one, my corporate lords have completely cleared the decks for me.
  • Three computerized written tests (I assume one per week). I have no idea what these are going to be like, but after having done three exams in the past month, I really hope they’re a notch above the standard Microsoft certification exam.
  • One lab-based exam (administered at the end). Now, I really like the thought of hands-on tests; one of the best job interviews I ever went through included a hands-on test. However, they’re a lot more stressful precisely because you can’t fake things or puzzle out the right answer through careful elimination. You have to know your stuff.

Assuming I survive and my head doesn’t asplode, in a month I’ll get to call myself an Exchange Master. This, of course, leads to the obvious question: do I get an apprentice? If so, I have a suggestion:

The determined apprentice

I really want an apprentice. I think I deserve one. You listening, 3Sharp?

Some nifty Windows Mobile tools

One of the projects I’ve been working on recently involves managing Windows Mobile devices; Tim and I have gotten to spend a bit of time playing with some very cool software. However, we both noticed that Windows Mobile makes some tasks unnecessarily complicated, such as verifying basic network connectivity. For example, can you tell me how to do any of the following under WM 6.0:

  1. Determine which network interfaces you have running at any given moment
  2. Determine the actual IP address configuration a network interface has
  3. Run basic connectivity tools such as ping and traceroute to validate that your device can talk to other network devices

Thanks to a tip from someone at Microsoft, I was introduced to the lovely free tools provided by Enterprise Mobile, including the spiffy Windows Mobile IP Utility. This handy tool gives you a great view of what’s going on network-wise with your device…including seeing the pseudo-devices that are created when you cradle your device (and the funky networking that goes on there).

They also make the GUI CAB Signing Utility, which is especially useful if you’re pushing software applications out to your Windows Mobile devices and want them signed. It’s basically a GUI wrapper around the .NET Framework SDK’s signtool.exe binary, allowing you to easily select one or more .CAB files, pick an appropriate certificate from your Personal certificate store (it must have the Code Signing capability), select the output directory, and let it rip. I’ve got a screenshot of it in action in this separate picture over here. For some reason, my computer keeps giving me a signtool error, but the folks at Enterprise Mobile have contacted me and are going to help me troubleshoot the issue over the next few days. Very cool of them!

A little GPO study aid

I’ve been in study mode a lot lately, as I’ve been preparing for an upcoming class I’ll be going to. In the process, I’ve had to loop around and pick up several MCP exams I’d not gotten. Today, I’m studying Active Directory.

I knew that you could push applications out to computers via GPO, and I knew there were two different ways of doing it: publishing and assigning. What I could never keep straight, until now, was what the differences were. One choice offers the program in Add/Remove Programs and the user must go in and click Install; the other adds it to the Start Menu (and performs the installation the first time the user starts the application). As an added wrinkle, one option is available to both user policies and computer policies, while the other is available only to user policies.

Well, I finally came up with a mnemonic to help me keep ‘em straight:

PUblishing Permits the User to install. That is, you can only publish to User policies, and it offers the choice to the user to install it (via Add/Remove Programs).

ASsignment Automatically Sets up the program. That is, you can assign a program and know it will be added to the Start menu, and (by elimination) can be done both to a user and to a computer.
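
Or, if tables work better for you than mnemonics, here’s the same information in summary form (everything here is just the rules above, restated):

    Method     Policy types        What happens
    Publish    User only           Offered in Add/Remove Programs; user clicks Install
    Assign     User or Computer    Added to the Start Menu; installs on first launch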

Hope this helps!

OCS follows Exchange into 64-bit-only land

You may have missed this interesting blog post this morning amidst all the political kerfuffle, so let me sum up: the next version of OCS will only support x64 platforms.

This isn’t the big deal it would have been for OCS 2007. A lot of the initial FUD around the 64-bit-only move in Exchange 2007 turned out to be mere steam. While there were some initial challenges involved in managing the new 64-bit Exchange deployment from 32-bit machines, Microsoft got a lot of the licensing figured out and released the appropriate sets of tools to allow management of Exchange 2007 from both 32-bit and 64-bit environments. I fully expect that the OCS group has been paying close attention to all of this and taken good notes.

There’s no denying that Exchange 2007 benefits from the “64-bit only in production” stance — and with the release of Windows Server 2008 and Hyper-V, not to mention Microsoft’s updated support statement for virtualization environments, the need for 32-bit environments is going away. My biggest reason for wanting 32-bit Exchange environments was so I could run demos under Virtual Server; now that I have Hyper-V, I’m probably not in any rush to go back to Virtual Server and the 32-bit limitation. 64-bit hardware is the norm today, and the x64 Windows variants are solid and mainstream enough for my dedicated application servers. (Maybe not so for the desktop quite yet, but still getting there rapidly.)

The one thing I’m skeptical about, though, is whether the move to 64-bits is really going to reduce the total number of servers in the deployment. In Exchange 2007, I only saw the server reductions in very large environments; the mailbox-per-server gains we got from 64-bits were offset by the explicit breakout of roles and the business needs that drove redundant configurations like CCR (which meant no co-locating roles with the Mailbox role) and multiple HT/CAS servers. I’m wondering how this is going to play out with the next version of OCS, which already has so many distinct roles in play.

What I *hope* to see is that the maximum capacity of each server role (such as the number of users per pool or the number of streams per mediation server) can be driven upwards; this makes the large datacenter configuration options much more attractive, because it does translate to a reduced number of servers. However, for organizations that still have relatively low bandwidth separating their various locations, 64 bits won’t do much to help; OCS deployment planning is very dependent on bandwidth, which often caps scalability long before the limits of the 32-bit Windows environment come into play.

The OCS Edge Server: how many NICs do I need?

There are a lot of people out there who want to try to get around Microsoft’s recommended configuration for the OCS Edge Server roles. For whatever reason, they don’t like the thought of having two network interfaces, one on a publicly routable IP network, the other on the private network. I’ve talked in the past about some of the reasons why this configuration is not only recommended, but actually a good idea; let’s just say it took a lot of talking and thinking before I accepted that notion.

MVP Jeff Schertz has done a fantastic job of walking through the various permutations people have come up with, separating what will work from what won’t, and explaining the pros and cons of each variant. I highly recommend this post.

I also want to amplify a point he makes: having multiple interfaces (whether physical or virtual) on the same subnet will cause interesting and otherwise inexplicable weirdness on a Windows machine. I’ll write up the situation I’m seeing in a bit (not OCS!), but let me be clear: it’s caused me all sorts of problems. Run, do not walk, away from any “solution” that requires this.

First Look at Microsoft Online Services: the Sign-In tool

Continuing from my previous post on MOS

I didn’t really mention this in the previous post, but MOS is designed to provide a hosted alternative to the server-side applications. One of the goals is to continue working with existing native clients and client access methods, so (for example) you can access your Exchange Online mailbox through OWA (running from MOS), through Outlook, or even through EAS/Windows Mobile. In order to do this, though, your client applications need to know how to talk to MOS and provide the proper credentials.

You can do this the hard way or the easy way. The hard way is running around and reconfiguring each application by hand and teaching your users how to use a separate set of credentials. The easy way is to use the MOS Sign-In tool, a little .NET 3.0 application that runs on the client desktop. It interacts with Outlook 2007 RTM/SP1, LiveMeeting 8, and IE7+.

When this application is run, it invites the user to log on to MOS. The first time they do so, they’re required to change their password. It then detects the appropriate applications, offers to configure them to work with MOS, and then just sits quietly on the desktop, providing a seamless SSO experience.

To be continued…

First look at Microsoft Online Services: adding domains

I’m at an airlift here in Redmond for the new Microsoft Online Services (MOS), Microsoft’s hosted services platform. Right now, MOS offers a combination of hosted Exchange (OWA, Outlook, and even EAS!), hosted SharePoint, and Live Meeting. We’ve just gone through an overview of the service, and it looks cool — enough so that I’m now seriously considering switching my personal domains over to it (especially since they offer the ability to synchronize with your Active Directory deployment).

MOS is currently in beta and you can go sign up for a time-limited trial. There’s only a certain number of trial accounts active at any given time, so your trial request may not be provisioned immediately; however, you can go to https://mocp.microsoftonline.com and sign up for one. You’ll need a Windows Live account.

As you might imagine, MOS allows you to associate one or more DNS domains with your online account. When you register for your account, you’re asked for a domain. This domain is not verified and, in fact, seems to be used simply as an internal administrative tag — once your account and service is set up, you have to specifically add DNS domains. Adding them is a fairly simple process:

  1. Register your domain name with a registrar.
  2. Provision your domain with a DNS provider (often combined with step 1).
  3. Add the domain name to your MOS Admin Center.
  4. Run the verification wizard and add the auto-generated CNAME to your domain’s DNS zone.
  5. Validate the domain in the MOS Admin Center.
  6. Start provisioning users with this domain, enable inbound e-mail on this domain, etc.

The verification step is an important piece, because this helps MOS make sure that you’re using a domain you’re actually in control of. Otherwise, malicious people could sign up and hijack your domain, which would suck. The way Microsoft does this is actually simple and elegant: they generate a unique CNAME record (that looks very much like a GUID), and ask you to add this CNAME record, pointing back to a server under their control, to your zone (there’s a concrete sketch after the list below). This has lots of advantages:

  • It’s pragmatic. If you can add a CNAME record to a zone file, you effectively control the domain.
  • It avoids the nastiness that can result from WHOIS-based verification, and it allows people who register domains to continue using proxy companies, hiding their personal info from WHOIS spammers.
  • It’s relatively easy. You simply have to add a simple record to your DNS; if you can’t do this (or your DNS host can’t do it for you), then you have much bigger DNS management problems, and verifying your DNS domain under MOS is the least of them.
  • It’s low-impact. The generated CNAME is highly unlikely to be queried during normal operations by your users; only MOS is likely to be looking for it. It doesn’t require you to repoint your MX records or otherwise make major modifications to your infrastructure if all you want to do is start using online SharePoint and Live Meeting.
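
To make that concrete, here’s a rough sketch of what the verification record might look like in a BIND-style zone file, plus a quick way to check that it resolves before you run the validation step. Both the GUID-style label and the target host are invented for illustration; MOS generates the real values for you.

    ; hypothetical verification record for example.com (label and target made up)
    d1f8a2b4-3c5e-4f6a-9b7c-8d9e0f1a2b3c  IN  CNAME  verify.microsoftonline.com.

    # confirm the record is visible before clicking Validate in the Admin Center
    nslookup -type=CNAME d1f8a2b4-3c5e-4f6a-9b7c-8d9e0f1a2b3c.example.com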

Note that just because you add a domain to MOS doesn’t mean you have to use it for email! That’s a separate operation, which is a two-step process of enabling inbound email for that domain and then updating your MX records appropriately.

More on other MOS functionality coming later…big thanks to the event staff for their kind permission for me to blog!

DPM 2007 Rollup packages now available

While I was away on vacation last week, Microsoft finally released the DPM 2007 Rollup packages to Microsoft Downloads. (I blame Jason Buffington; I’m sure he waited until I was out of office.) There are both x86 and x64 packages; both require you to download three separate files.

In addition to various bug fixes, this rollup (also known as a “feature pack”) provides the following new functionality:

  • Official support for protecting Windows Server 2008 servers (and supported applications, such as Exchange Server 2007, running on Windows 2008), including protecting the system state.
  • Support for backing up clustered Virtual Server 2005 R2 SP1 environments. Before, the cluster itself was not seen as a cluster by DPM, and depending on your configuration you may have needed to do some funky scripting.
  • Better tape handling. You can now share tape libraries between multiple DPM servers, reducing the cost of long-term tape retention and allowing better utilization of high-end tape libraries. You can also put multiple protection groups on a single tape; DPM 2007 RTM would start a new tape as it began writing each protection group, even if the previous tape was not fully used. This could get expensive.

I haven’t yet been able to confirm whether the cleaning tape bug Tim noted has been fixed in this update, but I suspect not.

Applying this update is a four-step process:

  1. Install the main DPM update (DataProtectionManager2007-KB949779.exe) on your DPM servers.
  2. Install the SQL Server update (SqlPrep-KB949779.msp) on the machine hosting the SQL Server database for DPM. In a default install, this is the same machine that is your DPM server.
  3. Update the agents on your protected servers to version 2.0.8107.0. You can push them out through the console or manually run the .msp update package on your protected machines (using any supported push mechanism). You will need to restart the protected machines for the new agent version to take effect.
  4. Install the DPM Management Shell update (DPMManagementShell2007-KB949779.msp) on all of your DPM management stations (including the DPM servers themselves).

Although the official instructions give the update steps in the previous order, I have run all three updates on my lab DPM servers before updating the agents on my protected servers, and as long as Microsoft doesn’t say that’s not supported, that’s the way I’d recommend doing it — that way, all of your PowerShell tasks are using the updates even if you don’t have all the protection agents pushed out yet.
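
If you’re scripting the server-side pieces, the two .msp packages can probably be applied quietly with standard msiexec patch switches. Treat this as a sketch and check the KB 949779 instructions first; the main DPM executable has its own installer and may not honor the same flags.

    # apply the SQL Prep and Management Shell patches quietly (standard
    # msiexec patch syntax; verify against the official instructions first)
    msiexec /p SqlPrep-KB949779.msp /qn /norestart
    msiexec /p DPMManagementShell2007-KB949779.msp /qn /norestart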

Hyper-V in the hizzouse!

Everyone’s being so coy in the Windows blogosphere today. “As you may have heard…” Heck with that; this is wicked cool. Hyper-V has Released To Manufacturing … and is already available for download. As the link explains, it’ll start coming down the Windows Update pipe July 8th. If you don’t want your Windows Server 2008 machine to be updated yet, don’t be blindly accepting updates.

Why wouldn’t you want to get it first thing?

  • You’re running a previous version of Hyper-V. If so, be aware that upgrading your VMs is not automatic. It’s not a horrible process, but it will take some time. You have to manually export each VM, remove the VMs from the server, upgrade the server, re-import the VMs, then update the Integration Services. The more VMs you have, the more time this will take.
  • You’re running some software that is not yet compatible with Hyper-V RTM but works with an earlier build. In this case, you want to wait until that software has a patch available.

I fit into both categories. I think I’m going to wait until I’m back from vacation to do it.

Oh, yes, just because Hyper-V is now RTM doesn’t mean that you can go running to install Exchange 2007 on it in production. See Scott Schnoll’s post for more info.

These are not the solutions you’re looking for

As IT professionals, we are all too often prone to fall prey to the perils of magical thinking. (I’m sure this is a side-effect of being human, which is a pesky and bothersome condition I will have to do something about one of these days.) Magical thinking in this context is when we have not internalized the intricacies of a problem and instead rely on formulas rather than true understanding to come up with solutions.

At one ISP I used to work at, we had a glorious reclaimed piece of technology, an Auspex NS-5500 file server. Every now and then on reboot, this old beast of a machine would fail to boot up; the cure was to open the cover over the drive cage and give it a good swift whack. We all assumed that this was because one of the drive connectors was a bit loose, but when our “magic” fix failed to work one night I discovered that it was in fact because one of the screws holding things in place was missing, allowing the drive bay to sag just a tiny bit. It was this tiny bit of sag that put just enough stress on the connector for drive 0. Had we actually opened the case up earlier, we’d have been able to solve the problem — and prevent a year of whacking the server.

All too often, I see magical thinking in the field of security. Case in point: I recently heard about a gentleman who has a client that is requesting ETRN support be added back to Exchange 2007, either natively or through an add-on. They want to deploy the Edge role in their DMZ, have it queue up mail for the internal organization, and then have their Hub Transports (in the internal protected network) initiate a connection out to de-queue the messages using the ETRN SMTP extension. The reason they want this is that they’ve done due diligence and read some very thorough documents about computer network zones and have come to the conclusion that all network connections must be initiated from the most secure network. This, they say, removes the threat of malware taking over the Edge server in the DMZ and allowing an attacker to use it as a launching point to the protected network.

Now, the recommendation for connections to be initiated from a more secure network to a less secure network is a good general baseline to follow when it makes sense. However, it is not realistic in all cases (if we followed this to the letter, nobody would be able to receive e-mail from external senders except through random polling of Internet SMTP hosts, which is not at all scalable). This is doubly true if you don’t understand how the underlying protocols work. Case in point: ETRN, defined by RFC 1985, “SMTP Service Extension for Remote Message Queue Starting”. Quoting from section 3, “The Remote Queue Processing Declaration service extension” (emphasis added):

To save money, many small companies want to only maintain transient connections to their service providers.  In addition, there are some situations where the client sites depend on their mail arriving quickly, so forcing the queues on the server belonging to their service provider may be more desirable than waiting for the retry timeout to occur.

Both of these situations could currently be fixed using the TURN command defined in [1], if it were not for a large security loophole in the TURN command.  As it stands, the TURN command will reverse the direction of the SMTP connection and assume that the remote host is being honest about what its name is.  The security loophole is that there is no documented stipulation for checking the authenticity of the remote host name, as given in the HELO or EHLO command.  As such, most SMTP and ESMTP implementations do not implement the TURN command to avoid this security loophole.

This has been addressed in the design of the ETRN command.  This extended turn command was written with the points in the first paragraph in mind, yet paying attention to the problems that currently exist with the TURN command.  The security loophole is avoided by asking the server to start a new connection aimed at the specified client.

See the problem? ETRN was not designed to solve a security problem; it was designed to solve a financial problem back in days when always-on bandwidth was a lot more expensive and most ISPs metered traffic. It masquerades as solving a security problem only because it’s designed to avoid a loophole in an insecure and exploitable feature. As a result, ETRN won’t solve the problem these people want it to solve; all it does is tell the system in the DMZ to initiate a new connection to the Hub Transport servers. It doesn’t reuse the existing connection initiated by the Hub Transport servers. They can’t use a firewall rule to block outgoing access from the Edge to the Hub Transport and be safe, because they’ll cut off all incoming traffic.
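
To see why, sketch out the conversation (host names invented). Notice that not a single queued message flows down this session; the ETRN verb just triggers a brand-new connection in the other direction:

    S: 220 edge.example.com ESMTP
    C: EHLO hub.example.com
    S: 250-edge.example.com Hello
    S: 250 ETRN
    C: ETRN example.com
    S: 250 OK, queue processing started
    C: QUIT
    S: 221 edge.example.com closing connection

At that point the Edge opens a brand-new connection back toward the Hub Transport and delivers the queued mail over that inbound session, which is exactly the connection direction the proposed firewall rule was supposed to prevent.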

However, let us for a moment assume that it did work the way they wanted it to: my Hub Transport initiates an outbound SMTP session to the Edge. In this session, HT is the SMTP client, ET is the SMTP server. As soon as HT issues the ETRN command, they still have to swap roles — HT is now using the SMTP server code paths, while the ET is using the SMTP client code paths. Any theoretical vulnerabilities that are in the HT SMTP implementation are still going to be there, still exposed to the message traffic about to be sent down the connection, still open to exploitation.

This is the magical thinking: firewalls and a DMZ will protect my traffic. This is not true; firewalls and networks zones are two components of a complete security plan. Neither firewalls nor network zones can protect legitimate traffic, nor are they designed to; they are designed to allow you to designate which traffic is legitimate. If you want to secure that traffic, you need to turn to other measures.

Tech-Talk: Making Backups Cool with DPM

While I was at the Tech-Ed NA IT Pro conference last week, Jason Buffington and I took the chance to invade the Tech-Ed Online fishbowl studio and record a quick Tech-Talk on using DPM. You can now view it online on the Tech-Ed IT Pro page and the Library page, or stream it directly. Now that Tech-Ed’s over, maybe we’ll both find the time to be on Xbox Live at the same time so we can continue our discussion in Call of Duty 4…

Welcome, Mike Rand!

Just a quick shout-out to fellow 3Sharpie Mike Rand, who just posted his first post to the 3Sharp blog site last week. Mike’s a super-smart developer here with mad SharePoint skills; I can’t imagine why he hasn’t blogged sooner than this, but I hope to see him posting more frequently! He’s also pretty good at foosball.

Updated Exchange Developer Roadmap

To reinforce yesterday’s post about Exchange Web Services (EWS), I wanted to draw your attention to the Exchange Developer Roadmap posted on May 22 2008 on the Exchange API-spotting blog.

There shouldn’t really be any surprises here, but there were a couple of items I wanted to highlight. First:

Given this commitment to Web services and our goal of making Exchange Web Services the richest developer interface for Exchange (emphasis added)

Next:

Here’s a preview of some of the functionality that we plan to add to the next release of Exchange Web Services:

  • Access to Folder Associated Items (FAI) and read/write access to user settings (Devin: this page in the MAPI reference indicates that FAIs are things like views and forms. I believe that this also fixes a known quirk of EWS that keeps you from creating Outlook-visible search folders that use certain property paths. I believe this also gives access to server-side rules, if they’re not already accessible through a separate part of the API.)
  • Management of Personal Distribution Lists (Devin: very cool.)
  • Throttling capabilities that give Exchange administrators control over system resource consumption (Devin: this will be very nice for helping keep poorly written applications from taking down the Exchange servers.)
  • A powerful and easy-to-use server-to-server authentication model to enable building portals and enterprise mash-ups (Devin: let’s hope this can ease some of the pain of building Exchange-aware SharePoint sites, at least those that don’t require direct access to private mailbox content.)
  • An easy-to-use Microsoft .NET API that fully wraps the Web service calls, which makes Web service development even easier (Devin: I’ll be interested in seeing how this stacks up against third-party offerings like the Independentsoft EWS client offering.)

Then they go on to list the APIs that will get removed (Exchange WebDAV, Store Events, CDO 3.0/CDOEx, and ExOLEDB) and moved to “extended support” (Exchange Server MAPI Client, CDO 1.2.1). Don’t get too excited by the MAPI client — it’s not what you think:

Provides server applications a MAPI runtime for accessing Exchange. 

Note: This is not the Outlook MAPI Client library that is included with Outlook.

and

Outlook’s Exchange MAPI Store provider, available in the Outlook MAPI Client library can also be used to access an Exchange mailbox or public folder.

If you’re going to start writing Exchange-aware applications, you should probably start looking at EWS first for future compatibility. If you’re trying to support Exchange 2003 at the same time…good luck.

A .NET add-on for working with Exchange Web Services

I just got word that Independentsoft has come out with a beta version of an EWS client API for the .NET Framework and .NET Compact Framework. I’ve not looked at it yet, but I’m particularly hopeful about having a good way to work with EWS from Windows Mobile devices.

Exchange Web Services (EWS), introduced in Exchange 2007 and enhanced in Exchange 2007 SP1, is Microsoft’s preferred interface for all future programmatic reach into the Exchange store. While EWS is a Web service, it can be pretty complicated to work with. Luckily, we’ve done some work with EWS here at 3Sharp; Paul’s been presenting some developer training sessions on EWS in partnership with Microsoft. We’ve found that Inside Microsoft Exchange Server 2007 Web Services has been a valuable reference on EWS.

One of the challenges for EWS development is that the schema and object model are pretty complex compared with the typical Web service, enough so that you need to use special Visual Studio proxy classes when you use .NET to work with EWS. This, by the way, is very likely the cause of the compatibility issue I found between EWS and SharePoint Designer — Designer’s proxy classes aren’t the EWS-aware ones.
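
To give you a feel for that complexity, here’s roughly what a minimal raw EWS request looks like on the wire: a GetFolder call for the inbox, POSTed to the server’s /EWS/Exchange.asmx endpoint. The proxy classes exist precisely to generate and parse this kind of XML for you.

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types">
      <soap:Body>
        <GetFolder xmlns="http://schemas.microsoft.com/exchange/services/2006/messages">
          <!-- Default returns the common folder properties; IdOnly and
               AllProperties are the other base shapes -->
          <FolderShape>
            <t:BaseShape>Default</t:BaseShape>
          </FolderShape>
          <FolderIds>
            <t:DistinguishedFolderId Id="inbox"/>
          </FolderIds>
        </GetFolder>
      </soap:Body>
    </soap:Envelope>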

3Sharp, Podcasting, and You

The talented people at 3Sharp are one of the best reasons to work here. Our Platforms Group is just one piece of the pie here; we’ve got some top-tier development talent who can make SharePoint stand up and dance. Those guys down the hall have been working hard on a little surprise they like to call the Podcasting Kit for SharePoint, which Microsoft has just released on Codeplex as indicated in their press release. 3Sharpies John Peltonen, David Gerhardt, and Paul Robichaux are also blogging about it, so if you’re interested, check them out.

I’ve been hearing bits and pieces, but last week I got to sit down and take a good look at what they’re doing. Wow. This is some cool stuff that is going to make sharing podcasts, video talks, and other knowledge sharing content a lot easier. I can’t wait until I can start using it; I’ve already lined up some content that I can put up and I’m already thinking of some more I can do.

All purchases should be this easy

If you haven’t seen me in person recently, you may not realize I’m a heretic. Yes, that’s right — I use an Apple 15″ MacBook Pro with Vista as my laptop. It took some jiggling to get it all working — an upgrade to Leopard (OS X 10.5) for the final release of BootCamp, an upgrade to Vista SP1, and finding a stable version of the Atheros wireless drivers — but it’s now reliable and fast.

There are some downsides to this particular laptop. It only gives me 2GB of RAM, which means that I can’t run a typical VM configuration (DC, DPM, Exchange) and still have enough power to run PowerPoint like I could under XP. The battery life is okay but not great; I run out on long flights.

I’m off to Tech-Ed this week, so I stopped by the Apple store in Bellevue Square Sunday to pick up a spare battery for the flight. I’ve had bad experiences at this store in the past; I don’t give off the right vibe (or maybe I just look like a tightwad) and can’t seem to get the attention of the staff. I took a chance, though, and walked in the store.

This time, my customer service experience was great. I caught the eye of Associate 1; although he was busy with another customer, he called for help; I didn’t even see him do it. A minute later, Associate 2 walks up to me. “I understand you’re looking for a 15″ MacBook Pro battery.” Pleasantly shocked, I followed him over to the appropriate shelf and soon had the battery in hand. “Is there anything else I can help you with, or are you ready to check out?”

If you’ve not been into an Apple store recently, they’re doing something absolutely sweet. Each customer service associate has a hip-mounted scanner/cardreader. They scan your merchandise on the spot, take and run your credit card, and ask you for an email address to send the receipt to. Boom — it’s all done, your card is charged, and you don’t have to stand in line at the counter unless you’re doing cash or check. This is a great concept I’d love to see other stores use. My receipt hit my Exchange account (and thus my Windows Mobile phone) as I was walking out of the store.

I love living in the future.

Revised guidance on protecting Exchange with DPM 2007

Just a quick note to let you all know that the Protecting Exchange Server with DPM 2007 white paper is available for download from Microsoft. This is the same white paper I worked on for them last year, but freshly revised to include more guidance around mailbox-level recovery.

I’ll be giving a talk around this topic next week at Tech-Ed (IT Pro) in Orlando, session number MGT369. Hope to see you there! (Yes, this is the same talk I did at Exchange Connections in Orlando and in MMS in Vegas a month ago; it seems to be a popular session!)

Hyper-V RC1 available

This is pretty cool — I didn’t even notice this at first! Hyper-V RC1 is now available for download through the Microsoft Download center or through Windows Update as an optional update. One of the nice changes here is that you now install the Hyper-V Integration Services on Windows 2008 guest machines the same way as on any other operating system (before, you’d have to install the Hyper-V patch itself as a separate action).

That would be why my Windows Server 2008 machine wanted an extra reboot this afternoon…

Three random links make a post

…so I’ll throw in a fourth for good measure. Rather than try to write a full-length post about each of these, I’m just going to give you a quick bullet list:

One last quick tidbit: Exchange 2007 and Outlook Anywhere scalability whitepaper

A lot of you may have missed this: Microsoft just released a new white paper for Exchange, Outlook Anywhere Scalability with Outlook 2007, Outlook 2003, and Exchange 2007. This paper should give you some detailed guidance goodness on scaling your CAS servers, and also talks about the port exhaustion issues that lead to upper scalability limits.

A certificate roundup

Certificates are one of the biggest issues I keep hearing about with Exchange and OCS, and apparently I’m not the only one. Fellow MVP Michael B. Smith has recently posted two blog articles on certs: how to use SAN certificates with ISA 2006 and other certificate limitations. However, he’s got a couple of points on the second article that I’m confused about:

  • According to this announcement on the Windows Mobile team blog, Windows Mobile 6.0 and up do in fact support wildcard certificates.
  • The first point he makes is also a head-scratcher: I’ve heard this was an issue too, but I’d recently heard of a workaround for it:
    1. In Outlook, go to the properties for your Exchange account (Tools, Account Settings, select your Exchange account and click Change) and click More Settings.
    2. On the Connection tab, click Exchange Proxy Settings.
    3. Look for the field Only connect to proxy servers that have this principal name in their certificate and make sure it’s checked (you may need to check the Connect using SSL only checkbox first).
    4. The value in this field should normally be set to msstd:server.external.fqdn, the FQDN the server is known as from the outside and that is the subject name of the certificate. So if my certificate was issued for 3Sharp, it would be msstd:mail.3sharp.com. To use this with a wildcard certificate issued to *.3sharp.com, this value would need to be set to msstd:*.3sharp.com.

      Let’s try a diagram to make the point:
       [diagram: mapping the certificate subject name to the msstd: value]

I’m doing more checking, trying to figure out what the deal is here; in the meantime, if you’ve got operational experience with either of these issues, please let me know.

At any rate, there are some more interesting factoids on certificates I’ve picked up:

  • If you want to use a certificate with the Exchange 2007 UM role, you need to have a certificate on the machine whose subject name matches the server’s AD/DNS FQDN. It seems that you can’t enable a certificate for the UM service using the Enable-ExchangeCertificate cmdlet if this does not match. Note that you can do this for other services, such as those hosted by the CAS role; the cmdlet performs different name checks on the certificate based on the services (SMTP, POP3, IMAP, HTTP, and UM) that you are enabling.
  • I’ve said it before, but it needs to be repeated: if you’re not using the default self-signed certificate, simply use the Enable-ExchangeCertificate cmdlet to move all services to one or more additional certificates (see the quick sketch after this list). Do not delete the default certificate; although in most cases Exchange will simply recreate it when the appropriate service is restarted, you can cause subtle errors that will take a while to figure out.
  • Learn more about certificate usage in Exchange in Creating a Certificate or Certificate Request for TLS.
  • And learn more about the Enable-ExchangeCertificate cmdlet.
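
Here’s the quick sketch I promised, run from the Exchange Management Shell; the thumbprint is a placeholder, and you’d pick the services appropriate to the roles installed on that box:

    # list what's installed so you can grab the right thumbprint
    Get-ExchangeCertificate | Format-List Thumbprint, Subject, Services

    # bind your CA-issued certificate to the services you care about; the
    # self-signed default stays in place, just unused
    Enable-ExchangeCertificate -Thumbprint 0123456789ABCDEF0123456789ABCDEF01234567 -Services "SMTP,IIS,UM"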

More later!

Doing UC in the Pacific Northwest

I’ve been sitting on a cool announcement for several days now, and I’m happy that it’s now time to announce it.

I’ve been working with a group of people to get a new user group for Unified Communications (UC) put together here in the Pacific Northwest. While all of us are here in the Puget Sound area, our goal is to put in place a framework to empower a variety of events and meetings all throughout the region, not just based here in Seattle. Rather than be a typical boring user group with a jawbreaking acronym (PNWUCUG, which we do use), we’re defining ourselves as people who do UC. This gives us a simpler name — We do UC, hosted at ucdoers.org.

From our website:

We are the Pacific Northwest Unified Communications User Group (PNWUCUG) and we have a passion for UC. If you are one of the following, you could be one of us:

  • IT professionals in the Pacific Northwest who design, deploy, or manage Exchange Server, Live Communications Server, and Office Communications Server systems.
  • Developers who write or maintain solutions that integrate, extend, or provide UC capabilities to Exchange Server, Live Communications Server, and Office Communications Server and clients.
  • Industry experts with a recognized expertise in UC.
  • Hobbyists who are exploring Microsoft-based UC solutions.

One thing that’s important for me to clarify — my vision of this user group (which is echoed by the other folks who are getting it off the ground) is that it exists to support all Exchange, LCS, and OCS users, not just people running 2007 and doing the VoIP stuff. We may have a focus on UC, but that’s mainly to align ourselves with the direction Microsoft is taking these products. If you’re using Exchange, we want you to participate; we want to make sure we have content for you.

So, if this sounds like goodness to you, head on over to the blog for the announcement of our May 28th kick-off meeting at The Parlor Billiards & Spirits in Bellevue, WA. For those of you who can’t be there in person, we’re even going to have a Live Meeting feed for you — how cool is that?

Post-Conference report

As I typically do, I’m posting links to my slide decks for the presentations I just finished giving. I apologize to the Connections folks; I was supposed to get this done Monday afternoon or Tuesday and got ambushed by a travel-induced migraine.

Orlando was nice this time of year; not too hot, so the humidity slipped under the radar. It was nice to see a bunch of familiar faces and meet some new ones, and I was very pleased with the attendance at all of my sessions. Doing all three sessions back-to-back is definitely a drain, but the conference organizers helped out a lot by keeping me in the same room for all of them; had my sessions been spread out over a couple of days, I’d definitely have had the fun of shuttling back and forth. And I have apparently finally beaten my notorious string of demo failures; my demo DPM environment (provided by Jason Buffington of Microsoft, thank you Jason) worked quite nicely.

For the MMS folks, I can’t put my deck up directly; you’ll need to get it from the MMS CommNet or wait for your attendee DVD to show up. Las Vegas is still completely over the top; the Venetian was opulent and provided a nice venue. For some reason, the casino didn’t seem nearly as intrusive as it could have been (and is in other venues). I am, however, glad I had new shoes — my feet didn’t hurt from all the walking. For the flight home, I picked up 21: Bringing Down the House – Movie Tie-In: The Inside Story of Six M.I.T. Students Who Took Vegas for Millions at the airport and read it cover-to-cover; a great story told well.

A DPM roundup

This was a big travel week for me; I got the privilege of speaking about protecting Exchange with DPM 2007 at both Exchange Connections (in Orlando) and Microsoft Management Summit (in Las Vegas). The session had a good response at both shows, and there’s clearly a lot of buzz going around about DPM. I’ve gotten some good questions which I’ll list here and update as I get answers.

  1. Q: Does DPM protect message tracking logs on an Exchange mailbox server?
    A: Very good question. My gut instinct is “No” but I need to confirm that. I’ll post the confirmation in a separate blog article when I get an answer back.
  2. Q: Is there any good guidance on sizing a DPM installation?
    A: Yes. First see the Data Protection Manager 2007 Storage Calculator (currently only supports the Exchange workload), then see this third-party deconstruction. Note that the second post was written against an earlier release of the calculator, so is in need of some updating, but it’s still a good read.
  3. Q: What kind of overhead does DPM incur?
    A: I have to admit that I don’t remember the specifics of this question (this is why I strongly encourage folks to email their questions to me, as is the case with the following question — thanks!); all I have is a cryptic note “CPU overhead” on my notepad. So, I’m going to assume that we’re talking about the overhead of the protection agent on a protected server. And my answer to that is: Very good question; I need to get some specifics.
  4. Q: From e-mail: “Yesterday during MMS at the Advanced Exchange protection session you mentioned that you had created a white paper on getting DPM working with IBM’s TSM product. If you have a link to this I would be very grateful as I have not been able to find it currently and I am wanting to ensure that they way I have it set up and kind of working is the same way that someone else has been able to get it working.”
    A: Unfortunately, I must have been unclear, for which I apologize. 3Sharp did work with Microsoft during the DPM 2006 timeframe to create several white papers on how to integrate DPM with several backup products: Commvault QiNetix, Symantec Backup Exec, Yosemite Backup, and Windows Backup. Unfortunately, Tivoli wasn’t one of them, and I’m not aware of any current guidance that gives a complete end-to-end picture of integrating TSM with DPM 2007. However, the Backup of DPM Servers section in the DPM Operations Guide should be a good starting place.
  5. Q: Why can’t I use DPM 2007 to recover to the Recovery Storage Group on Exchange 2003 servers, only on Exchange 2007 servers?
    A: Another great question, which I’m querying to find the answer to.
  6. Q: If I can use DPM 2007 to do document-level recovery in SharePoint, why can’t I recover mailboxes or even messages in Exchange without having to use the RSG (for Exchange 2007) or ExMerge (for Exchange 2003)?
    A: There are two parts of this answer, but they both are based on the same premise: DPM does not use “privileged” information on the internals of other Microsoft applications it protects. When recovering documents from a SharePoint replica, DPM doesn’t directly reach into the replica database and extract the information. Instead, it recovers the relevant databases to a temporary recovery SharePoint installation (which can be a single-server WSS 3.0 install on a virtual machine, even if you’re recovering data from MOSS 2007) and then finds the relevant documents using SharePoint’s HTTP interfaces. With Exchange, the principle is the same; we recover the mailbox database to a parallel location (the RSG in Exchange 2007; a network folder in Exchange 2003) and then use the Exchange native tools to extract and import the relevant information. Trying to do direct restores of mailboxes or messages into a production database would involve going beyond the existing Exchange APIs. Personally, as an Exchange MVP I hope that Microsoft works on expanding those interfaces to make this sort of thing easier for all third-party vendors, but until they do, DPM plays by Exchange’s rules.
  7. Q: You mentioned coming updates to DPM. Where can I find more info on that?
    A: Jason Buffington of Microsoft has you covered with this webcast.

That’s a good start for now; catch you all later!

Greetings from Orlando!

I’m posting from a break between sessions at Exchange Connections in Orlando, FL. I just had a good session on protecting Exchange with DPM — thanks to everyone who attended and gave lots of good feedback.

Next up — a session on DCAR with Exchange, and then Exchange 2007 update best practices.

The weather is actually the best I’ve ever seen here — not too hot, with a nice breeze, so the humidity isn’t overwhelming. However, the A/C is up full in the room where I’m presenting, so I’m glad the speaker shirts are long-sleeved.

More later!

Setting Exchange 2007 Unified Messaging codecs on a per-user basis? Genius!

I was completely floored to discover, via Paul, that you can control which codec the UM role uses to record voicemails on a per-user basis. This is seriously cool stuff, and if you can’t see why quite yet, let me offer the following scenarios for you:

  1. Most common: you have multiple users who have non-Windows Mobile devices that don’t support the WMA codec, but still want to be able to listen to their voicemail on their devices. The GSM and G.711 PCM Linear codecs may be more widely supported. For example, on an EAS-aware iPhone, will Apple also roll in support for recognizing UM voicemails? If they do, will they support the WMA codec? Now, in theory, they don’t have to.
  2. Also common: you have multiple users who use a non-Windows based client. (Paul already calls out one example, those of us who use Entourage.) This would be just as valuable, though, for people who are using some IMAP or POP3 client on a Linux/BSD/Solaris box.
  3. Not so common, but possible: you have a specific need to automatically process voicemails in an automated fashion and need to use either the GSM or G.711 PCM linear codecs instead of being able to support WMA. Switching one or two mailboxes over keeps the entire Exchange storage system from suffering the increase in voicemail file size that would result.

Okay, so these are slightly lame scenarios, but I’m sure there’s more out there that I can’t see.
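
Go read Paul’s post for the actual per-user steps. For reference, though, the dial-plan-wide version of the knob looks like this in the Exchange Management Shell (the dial plan name is made up, and I’m assuming the per-user setting takes the same codec values):

    # show the codec currently in force for a dial plan
    Get-UMDialPlan -Identity "Contoso Dial Plan" | Format-List Name, AudioCodec

    # switch the whole dial plan to GSM; the valid values are G711, Gsm, and Wma
    Set-UMDialPlan -Identity "Contoso Dial Plan" -AudioCodec Gsm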

Security and the OCS 2007 A/V Edge role

When people start digging into the specifics of the A/V Edge role in OCS 2007, they usually have a strong and immediate knee-jerk reaction something along the lines of, “No way!” (Mine was, “Oh, heck no!”) This reaction is usually caused by learning one or more of the following deployment requirements:

  • Public IP address. The A/V Edge server needs to have a publicly routable IP address. This address must be publicly routable; you can’t fudge it by giving it an IP address in a private range (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) and doing any sort of NAT to it. 1:1 NAT or static NAT mapping won’t do the trick here. You can and should have a firewall between it and the Internet, but it can’t be doing any address translation.
  • Dual-homed. The A/V Edge server cannot be separated from the internal OCS servers by NAT. Therefore, if you’re using a private address range and NAT in your internal network, you have to give the A/V Edge server a second network interface and IP address on a routable, non-NAT address range. (Note, however, that it doesn’t have to be the same address range as the internal network, simply an address range that is directly routable without NAT.)
  • 20,002 external ports. The external (publicly routable) interface needs to have the following ports opened to the Internet: UDP 3478, TCP 443, UDP 50,000-59,999, and TCP 50,000-59,999. Security people immediately look at the need to have 10,000 dynamic TCP ports and 10,000 dynamic UDP ports and have their head asplode in sheer instinctive security reaction.

I’ve personally reacted to all three of these requirements; I’ve yet to talk to a security-conscious IT professional new to OCS who hasn’t. So what on Earth is Microsoft doing putting these requirements in place? Have they completely lost it about security?

In a word, no.

There are good reasons why these requirements are in place. Rather than go over them myself, however, let me simply direct you to this excellent post on the OCS team blog. If you have any questions, post them there and tell ‘em I sent you. Note that to post questions on their blog, you need to first join their Community Server site. This is painless and easy; simply click the Join link in the upper right-hand corner, pick a username and password, provide your email address, and you’re ready to go.

Exchange protocol documentation now available

Per the announcement on Tuesday (08 Apr), Microsoft has released a lot of new documentation for various Exchange and Outlook-Exchange protocols. This is some cool stuff — just check out the list of what’s available. However, as the web site warns, it’s preliminary documentation. If you don’t believe them, when you download the files (available in PDF format) the big fat “PRELIMINARY” watermark (in very bold font) will help remind you.[1]

I can already hear some of you out there: “So Microsoft released documentation on obscure or unimportant Exchange protocols. Big deal. I bet they’ve saved all the good stuff for licensing!” Well, I’m not going to claim that this is a complete set of documentation for every Exchange protocol you might ever want to know about — after all, Microsoft is a company that believes in the value of intellectual property. They’ve kinda built a business plan around it, and it’s both foolish and naive to somehow assume that they’re just going to toss all of that overboard overnight. It’s not even reasonable to expect them to completely abandon that position; it’s an arguable proposition that Open Source principles work best in conjunction with an IP scheme that permits open licensing when the developers feel invested in doing so, alongside more restrictive licensing schemes. But that’s a religious argument for another day.

This will be a long post. I’m going to split it into three sections: Appetizers, Main Course, and What’s Missing.

Section 1: Appetizers

First, we have some housekeeping and overview documents and protocols:

  • MS-CAB: Cabinet File Format
  • MS-MCI: MCI Compression and Decompression
  • MS-OXDOCO: Outlook-Exchange Protocol Document Roadmap
  • MS-OXGLOS: Office Exchange Protocols Master Glossary
  • MS-OXPROTO: Office Exchange Protocols Overview
  • MS-OXREF: Office Exchange Protocols Master Reference
  • MS-PATCH: LZX DELTA Compression and Decompression

You may have noticed that these documents include a few things that aren’t strictly Exchange or Outlook-specific, such as the CAB file format and various compression protocols. Just remember that the Exchange protocol documentation is part of a wider set of Interoperability Principles, and so it depends on technologies that are part of the more generic set of Windows technologies.

Section 2: Main Course

Okay, with roadmaps and preliminaries out of the way, let’s take a look at the meat:

  • MS-NSPI: Name Service Provider Interface (NSPI) Protocol Specification
  • MS-OXABREF: Address Book Name Service Provider Interface (NSPI) Referral Protocol Specification
  • MS-OXBBODY: Best Body Retrieval Protocol Specification
  • MS-OXCDATA: Data Structures Protocol Specification
  • MS-OXCETF: Enriched Text Format (ETF) Message Body Conversion Protocol Specification
  • MS-OXCFOLD: Folder Object Protocol Specification
  • MS-OXCFXICS: Bulk Data Transfer Protocol Specification
  • MS-OXCICAL: iCalendar to Appointment Object Conversion Protocol Specification
  • MS-OXCMAIL: RFC2822 and MIME to E-mail Object Conversion Protocol Specification
  • MS-OXCMSG: Message and Attachment Object Protocol Specification
  • MS-OXCNOTIF: Core Notifications Protocol Specification
  • MS-OXCPERM: Exchange Access and Operation Permissions Specification
  • MS-OXCPRPT: Property and Stream Object Protocol Specification
  • MS-OXCROPS: Remote Operations (ROP) List and Encoding Protocol Specification
  • MS-OXCRPC: Wire Format Protocol Specification
  • MS-OXCSPAM: Spam Confidence Level, Allow and Block Lists Protocol Specification
  • MS-OXCSTOR: Store Object Protocol Specification
  • MS-OXCSYNC: Mailbox Synchronization Protocol Specification
  • MS-OXCTABL: Table Object Protocol Specification
  • MS-OXDISCO: Autodiscover HTTP Service Protocol Specification
  • MS-OXDSCLI: Autodiscover Publishing and Lookup Protocol Specification
  • MS-OXIMAP4: Internet Message Access Protocol Version 4 (IMAP4) Extensions Specification
  • MS-OXLDAP: Lightweight Directory Access Protocol (LDAP) Version 3 Extensions Specification
  • MS-OXMSG: .MSG File Format Specification
  • MS-OXMVMBX: Mailbox Migration Protocol Specification
  • MS-OXOAB: Offline Address Book (OAB) Format and Schema Protocol Specification
  • MS-OXOABK: Address Book Object Protocol Specification
  • MS-OXOABKT: Address Book User Interface Templates Protocol Specification
  • MS-OXOCAL: Appointment and Meeting Object Protocol Specification
  • MS-OXOCFG: Configuration Information Protocol Specification
  • MS-OXOCNTC: Contact Object Protocol Specification
  • MS-OXODLGT: Delegate Access Configuration Protocol Specification
  • MS-OXODOC: Document Object Protocol Specification
  • MS-OXOFLAG: Informational Flagging Protocol Specification
  • MS-OXOJRNL: Journal Object Protocol Specification
  • MS-OXOMSG: E-mail Object Protocol Specification
  • MS-OXONOTE: Note Object Protocol Specification
  • MS-OXOPFFB: Public Folder Based Free/Busy Protocol Specification
  • MS-OXOPOST: Post Object Protocol Specification
  • MS-OXORMDR: Reminder Settings Protocol Specification
  • MS-OXORMMS: Rights-Managed E-mail Object Protocol Specification
  • MS-OXORSS: RSS Object Protocol Specification
  • MS-OXORULE: E-mail Rules Protocol Specification
  • MS-OXOSFLD: Special Folders Protocol Specification
  • MS-OXOSMIME: S/MIME E-mail Object Protocol Specification
  • MS-OXOSMMS: SMS and MMS Object Protocol Specification
  • MS-OXOSRCH: Search Folder List Configuration Protocol Specification
  • MS-OXOTASK: Task-Related Objects Protocol Specification
  • MS-OXOUM: Voice Mail and Fax Objects Protocol Specification
  • MS-OXPFOAB: Offline Address Book (OAB) Public Folder Retrieval Protocol Specification
  • MS-OXPHISH: Phishing Warning Protocol Specification
  • MS-OXPOP3: Post Office Protocol Version 3 (POP3) Extensions Specification
  • MS-OXPROPS: Office Exchange Protocols Master Property List Specification
  • MS-OXPSVAL: E-mail Postmark Validation Protocol Specification
  • MS-OXRTFCP: Rich Text Format (RTF) Compression Protocol Specification
  • MS-OXRTFEX: Rich Text Format (RTF) Extensions Specification
  • MS-OXSHARE: Sharing Message Object Protocol Specification
  • MS-OXSMTP: Simple Mail Transfer Protocol (SMTP) Mail Submission Extensions Specification
  • MS-OXTNEF: Transport Neutral Encapsulation Format (TNEF) Protocol Specification
  • MS-OXWAVLS: Availability Web Service Protocol Specification
  • MS-OXWOAB: Offline Address Book (OAB) Retrieval Protocol Specification
  • MS-OXWOOF: Out of Office (OOF) Web Service Protocol Specification
  • MS-OXWUMS: Voice Mail Settings Web Service Protocol Specification
  • MS-XJRNL: Journal Record Message Format Protocol Specification
  • MS-XLOGIN: SMTP Protocol AUTH LOGIN Extension Specification
  • MS-XWDVSEC: Web Distributed Authoring and Versioning (WebDAV) Protocol Security Descriptor Extensions Specification

At first glance, that’s an impressive list. NSPI, S/MIME, SMTP and POP3 extensions, RTF extensions, TNEF; the list goes on. There’s a lot of seriously crunchy material here. The question of the moment, though, is “just how detailed is all this documentation?”

Good question.

I haven’t had time to look through it all in detail. To be honest, a lot of it covers areas where I wouldn’t be able to catch any glaring omissions or discrepancies anyway (sorry, readers, I’m just not up on the latest specs for RTF). However, I did take a quick look through MS-XLOGIN, the “SMTP Protocol AUTH LOGIN Extension Specification”[2], since I’m reasonably familiar with SMTP.

Let me cut to the chase: yup, this is preliminary work. On the whole, it does a good job of documenting the flow of the LOGIN extension (whose workings people had already mostly figured out through years of careful protocol analysis). The most complicated part of the whole thing is Base64-encoding the credentials as they’re passed; hardly rocket science. However, there are some gaps in this otherwise straightforward documentation:

  • Nowhere did I find any guidance on what the username and password challenges and responses are supposed to be computed from (only that they are Base64 encoded). That makes it harder to code a proper LOGIN implementation.
  • The samples they gave look like valid Base64, but according to my quick conversion tests in PowerShell, they aren’t; I can’t get any of the sample values to decode to what they should be. Again, that means I can’t work backwards to recover the missing data. (The sketch after this list shows what the round-trip ought to look like.)
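
For reference, here’s roughly what a well-formed AUTH LOGIN round-trip looks like when you test it in PowerShell. The challenge strings are the well-known Base64 prompts; the credentials (“alice”/“hunter2”) are made up for illustration:

    # Typical AUTH LOGIN exchange (C: = client, S: = server; credentials invented):
    #   C: AUTH LOGIN
    #   S: 334 VXNlcm5hbWU6
    #   C: YWxpY2U=
    #   S: 334 UGFzc3dvcmQ6
    #   C: aHVudGVyMg==
    #   S: 235 Authentication successful

    # The server's 334 challenges are just Base64-encoded prompts...
    [Text.Encoding]::ASCII.GetString([Convert]::FromBase64String("VXNlcm5hbWU6"))  # Username:
    [Text.Encoding]::ASCII.GetString([Convert]::FromBase64String("UGFzc3dvcmQ6"))  # Password:

    # ...and the client replies with Base64-encoded credentials.
    [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("alice"))    # YWxpY2U=
    [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("hunter2"))  # aHVudGVyMg==

If the published samples were internally consistent, you could run the decode lines against them and recover the inputs; that’s exactly the step that fails.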

I really hope this is the kind of thing they’re going to fix between now and the final release, because without those details, this documentation isn’t nearly as useful as it could be. Some would even accuse it of being provided merely to give the appearance of interoperability while keeping enough implementation details close to the chest to keep interoperability from really happening. I, however, subscribe to the philosophy that one should never initially ascribe to malice what can be explained through other possibilities, and I’ve done enough work on these sorts of projects to know that getting the right level of detail into a document like this is far from a no-brainer, especially if you’re dealing with contractors or having to generate the documentation after the fact. (I don’t know that either of those possibilities is involved here; I’m just guessing.)

Section 3: What’s Missing

There are three obvious protocols missing from the list above: MAPI, MAPI over RPC over HTTP, and Exchange ActiveSync. I can hear the screams now… but this is where I go back to the point that Microsoft still makes money from intellectual property. Microsoft’s Web site offers a searchable IP Catalog that shows you exactly which protocols they offer for a licensing fee, and both MAPI (a.k.a. the Outlook Exchange Transport Protocol) and Exchange ActiveSync are on it, as are several other important protocols for Unified Communications. Microsoft is under no obligation to make every single protocol available for free; the fact that they’re finding value in doing so for the protocols above is pretty cool and interesting. [3]

[1] If the watermark bugs you, try seeing if your PDF client will allow you to view or print the document without annotations. Using Foxit Reader, I was able to make the watermark go away and actually read some of the text it obscured.

[2] SMTP Protocol? Seriously? Is that like PIN number or ATM machine? Attention, Microsoft technical writers: P stands for Protocol.

[3] You’re free to speculate on what value they get, but not here, please. That’s another religious discussion.

There’s no service like Web service

One of the cool things about Exchange 2007 is the new Web service interface into the store. In theory, having mailboxes and contents exposed via Web services makes it a lot easier for developers and casual dabblers to use Web service-aware tools to interact with Exchange content.

Two weeks ago, I wanted to run a quick experiment: could I use Exchange Web Services (EWS) from a SharePoint page to build an always up-to-date extension list for our office? Now, I know this information is stored in Active Directory as attributes on the User objects, but I didn’t see a quick, easy way to configure a SharePoint web part to perform an LDAP or AD query (the sketch below shows what that query would look like). Instead, I opened up SharePoint Designer, pointed it toward our EWS instance, and what I found surprised me.
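
For the record, the directory side of the problem is trivial; here’s a minimal sketch of the sort of AD query I had in mind, runnable from PowerShell on a domain-joined box (the attribute names are standard AD ones; the output formatting is just for illustration):

    # Query the default domain for users who have a telephone number set.
    $searcher = New-Object System.DirectoryServices.DirectorySearcher
    $searcher.Filter = "(&(objectCategory=person)(objectClass=user)(telephoneNumber=*))"
    [void]$searcher.PropertiesToLoad.Add("displayName")
    [void]$searcher.PropertiesToLoad.Add("telephoneNumber")
    # Emit a simple name/extension listing.
    $searcher.FindAll() | ForEach-Object {
        "{0}`t{1}" -f $_.Properties["displayname"][0], $_.Properties["telephonenumber"][0]
    }

The problem was never getting the data out of AD; it was getting a stock SharePoint web part to run the query and render the results.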

Does anyone out there in reader land have any clue why SharePoint Designer insists that an EWS instance isn’t “a valid description of an XML Web service”?

https://exchange.server.fqdn/ews/Services.wsdl

I can browse to it manually, enter my credentials, and get back a bunch of XML that sure looks like valid WSDL, but SharePoint Designer’s integrated WSDL parser can’t seem to make heads or tails of it. I can consume other types of Web services easily enough; looking at their WSDL, they use far fewer XML namespaces, and their structure is quite a bit simpler than what Exchange generates.
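
In case anyone wants to poke at this themselves, here’s roughly how I’m pulling the WSDL down for inspection (the server name is the placeholder from above; swap in your own server’s FQDN):

    # Fetch the EWS WSDL using the current user's Windows credentials.
    $client = New-Object System.Net.WebClient
    $client.UseDefaultCredentials = $true
    $wsdl = $client.DownloadString("https://exchange.server.fqdn/ews/Services.wsdl")
    # Quick look at how many namespace declarations hang off the root element.
    ([xml]$wsdl).definitions.Attributes | Where-Object { $_.Name -like "xmlns*" }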

I tried contacting the official SharePoint team blog and was basically told, “Go away, kid. Call support.” I haven’t had a lot of spare time recently to chase this down, but I’m exploring some other avenues to see if I can’t get to the bottom of it. Stay tuned!