Comparing PowerShell Switch Parameters with Boolean Parameters

If you’ve ever taken a look at the help output (or TechNet documentation) for PowerShell cmdlets, you’ll see that it lists several pieces of information about each of the various parameters the cmdlet can use:

  • The parameter name
  • Whether it is a required or optional parameter
  • The .NET variable type the parameter expects
  • A description of the behavior the parameter controls

Let’s focus on two particular types of parameters: the Switch (System.Management.Automation.SwitchParameter) and the Boolean (System.Boolean). While I never really thought much about it before reading a discussion on an email list earlier today, these two parameter types seem to be two ways of doing the same thing. Let me give you a practical example from the Exchange 2007 Management Shell: the New-ExchangeCertificate cmdlet. Table 1 lists an excerpt of its parameter list from the current TechNet article:

Table 1: Selected parameters of the New-ExchangeCertificate cmdlet

Parameter Description

GenerateRequest
(SwitchParameter)

Use this parameter to specify the type of certificate object to create.

By default, this parameter will create a self-signed certificate in the local computer certificate store.

To create a certificate request for a PKI certificate (PKCS #10) in the local request store, set this parameter to $True.

PrivateKeyExportable
(Boolean)

Use this parameter to specify whether the resulting certificate will have an exportable private key.

By default, all certificate requests and certificates created by this cmdlet will not allow the private key to be exported.

You must understand that if you cannot export the private key, the certificate itself cannot be exported and imported.

Set this parameter to $true to allow private key exporting from the resulting certificate.

On quick examination, both parameters control either/or behavior. So why the two different types? The mailing list discussion I referenced earlier pointed out the difference:

Boolean parameters control properties on the objects manipulated by the cmdlets. Switch parameters control behavior of the cmdlets themselves.

So in our example, a digital certificate has a property as part of the certificate that marks whether the associated private key can be exported in the future. That property goes along with the certificate, independent of the management interface or tool used. For that property, then, PowerShell uses the Boolean type for the -PrivateKeyExportable parameter.

On the other hand, the -GenerateRequest parameter controls the behavior of the cmdlet. With this parameter specified, the cmdlet creates a certificate request with all of the specified properties. If this parameter isn’t present, the cmdlet creates a self-signed certificate with all of the specified properties. The resulting object (CSR or certificate) bears no sign of which option was chosen – you could just as easily submit that CSR to another tool on the same machine to create a self-signed certificate.
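
To make the mechanics concrete, here’s a minimal sketch — New-Widget is a made-up function, not a real cmdlet — of how the two parameter types are declared and consumed:

```powershell
# Hypothetical function showing both parameter styles. A [switch] is simply
# present or absent; a [bool] takes an explicit $true/$false value that
# would typically be stamped onto the resulting object as a property.
function New-Widget {
    param(
        [switch] $GenerateRequest,     # behavior: what the function does
        [bool] $Exportable = $false    # property: travels with the result
    )
    if ($GenerateRequest) { $action = 'request' } else { $action = 'create' }
    "$action (Exportable=$Exportable)"
}

New-Widget                                      # create (Exportable=False)
New-Widget -GenerateRequest -Exportable $true   # request (Exportable=True)
```

Note that a switch can still be handed an explicit value with the colon syntax (-GenerateRequest:$true), which is handy when you’re forwarding a value stored in a variable.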

I hope this helps draw the distinction. Granted, it’s one I hadn’t thought much about before today, but now that I have, it’s nice to know that there’s yet another sign of intelligence and forethought in the PowerShell architecture.

A certificate roundup

Certificates are one of the biggest issues I keep hearing about with Exchange and OCS, and apparently I’m not the only one. Fellow MVP Michael B. Smith has recently posted two blog articles on certs: how to use SAN certificates with ISA 2006 and other certificate limitations. However, he’s got a couple of points on the second article that I’m confused about:

  • According to this announcement on the Windows Mobile team blog, Windows Mobile 6.0 and up do in fact support wildcard certificates.
  • The first point he makes is also a head-scratcher, because I’ve also heard this was an issue, but I’d recently heard of a workaround for it:
    1. In Outlook, go to the properties for your Exchange account (Tools, Account Settings, select your Exchange account and click Change) and click More Settings.
    2. On the Connection tab, click Exchange Proxy Settings.
    3. Look for the field Only connect to proxy servers that have this principal name in their certificate and make sure it’s checked (you may need to check the Connect using SSL only checkbox first).
    4. The value in this field should normally be set to msstd:server.external.fqdn, the FQDN the server is known as from the outside and that is the subject name of the certificate. So if my certificate was issued for 3Sharp, it would be msstd:mail.3sharp.com. To use this with a wildcard certificate issued to *.3sharp.com, this value would need to be set to msstd:*.3sharp.com.

      Let’s try a diagram to make the point:
       image

I’m doing more checking, trying to figure out what the deal is here; in the meantime, if you’ve got operational experience with either of these issues, please let me know.

At any rate, here are some more interesting factoids on certificates I’ve picked up:

  • If you want to use a certificate with the Exchange 2007 UM role, you need to have a certificate on the machine whose subject name matches the server’s AD/DNS FQDN. It seems that you can’t enable a certificate for the UM service using the Enable-ExchangeCertificate cmdlet if this does not match. Note that you can do this for other services, such as those hosted by the CAS role; the cmdlet performs different name checks on the certificate based on the services (SMTP, POP3, IMAP, HTTP, and UM) that you are enabling.
  • I’ve said it before, but it needs to be repeated: if you’re not using the default self-signed certificate, simply use the Enable-ExchangeCertificate cmdlet to move all services to one or more additional certificates. Do not delete the default certificate; although in most cases Exchange will simply recreate it when the appropriate service is restarted, you can cause subtle errors that will take a while to figure out.
  • Learn more about certificate usage in Exchange in Creating a Certificate or Certificate Request for TLS.
  • And learn more about the Enable-ExchangeCertificate cmdlet.
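
As a sketch of what that move looks like in EMS — the thumbprint below is a made-up placeholder, and which -Services values pass the certificate name checks depends on your certificate’s subject names:

```powershell
# List the certificates on the server so you can find the new one's thumbprint
Get-ExchangeCertificate | fl Thumbprint, Services, Subject

# Enable the new certificate for the CAS/HT-hosted services (substitute your
# own thumbprint). The default self-signed certificate stays where it is.
Enable-ExchangeCertificate -Thumbprint '0123456789ABCDEF0123456789ABCDEF01234567' `
    -Services 'SMTP,POP,IMAP,IIS'
```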

More later!

Sweet PowerShell lovin’…for free!

And yes, that’s “free as in beer,” not “free as in what some people think all information wants to be.”[1]

Frank Koch and Marcel Trümpy of Microsoft (in Switzerland) have created not one, but two Windows PowerShell ebooks, and you can get them both for free:

  • A Windows PowerShell course book with associated demo files and examples.
  • A Windows PowerShell server administration book with associated demo files and examples.

Get them both in one easy download either in English or German. The downloads are from Microsoft and no registration is required, according to the blog posting.

[1] If you believe all information wants to be free, I challenge you to put your money where your mouth is and post your Social Security number (if you live in the USA; equivalent if you don’t), birthdate, address, personal phone number, and bank account information here in my comments. After all, that’s all information — and it wants to be free!

Liveblogging the Unified Communications Voice Ignite conference, day 3

Good morning! Back for day 3. (You can see my day 2 notes here.)

09:13: Back when I first started doing OCS, the vision included “hybrid” gateway devices which included the Mediation Server role functionality in the gateway. Well, they exist now — partners have been busy! (source http://technet.microsoft.com/en-us/office/bb735838.aspx)

10:25: User provisioning can be fun. When provisioning users, you need to populate the msRTCSIP-line attribute with their phone number in E.164 format. OCS doesn’t look at the regular Active Directory phone attributes. You can populate the msRTCSIP-line attribute from the AD attribute, but you need to make sure that you normalize the numbers to E.164 format first. Best case: normalize your AD phone numbers! (source http://technet.microsoft.com/en-us/library/bb870372.aspx)
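
As an illustration of that normalization step, here’s a quick sketch — ConvertTo-E164 is my own throwaway function, and the 10-digit assumption is just for the example; a real deployment needs locale-aware rules:

```powershell
# Throwaway sketch: turn a formatted North American phone number into the
# tel: URI form used in msRTCSIP-line. Assumes 10-digit NANP numbers.
function ConvertTo-E164 {
    param([string] $Number)
    $digits = $Number -replace '[^\d]', ''        # strip spaces, dashes, parens
    if ($digits.Length -eq 10) { $digits = '1' + $digits }
    "tel:+$digits"
}

ConvertTo-E164 '(425) 555-0100'    # tel:+14255550100
ConvertTo-E164 '+1 425 555 0100'   # tel:+14255550100
```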

10:47: WMI is the preferred interface for writing user provisioning scripts — this allows you to do it in the language of your choice, including (yay!) PowerShell (via PowerShell’s WMI provider). The Resource Kit gives you lots of useful scripts (yes, including PowerShell) and samples as a starting point. (source http://www.microsoft.com/downloads/details.aspx?FamilyID=b9bf4f71-fb0b-4de9-962f-c56b70a8aecd&displaylang=en and http://blogs.technet.com/jamesone/archive/2007/08/19/powershell-and-paradigms-of-vb.aspx)

10:51: Mmm. These brownie-walnut-tart thingies are TASTY.

11:04: Kevin’s all hooked up for pictures, so you can see the brownie-walnut-tart thingies for yourself.

12:22: About to jump into more tasty crunchy labs, but before I do, one word of advice — bone up on regular expressions. (source http://technet.microsoft.com/en-us/library/bb803637.aspx, http://www.microsoft.com/downloads/details.aspx?FamilyID=b9bf4f71-fb0b-4de9-962f-c56b70a8aecd&displaylang=en, and http://www.microsoft.com/technet/technetmag/issues/2008/02/OCSTelephony/default.aspx)

14:28: RTP (Real-time Transport Protocol, not Realtime Protocol as many people think) is cool! There’s some clever engineering going on here, although the comparative size of the header and the payload is pretty skewed, especially once you get all the UDP, IP, and physical link overhead in there – remember the overhead from 09:38 in the day 2 notes? That’s where it comes from. (source http://tools.ietf.org/html/rfc3550, http://forums.microsoft.com/unifiedcommunications/ShowPost.aspx?PostID=2697675&SiteID=57)

15:35: Even though a lot of the OCS conceptual diagrams show the three Edge server roles on separate machines, it is not supported to split all three roles onto their own separate machines. You can deploy all three roles on a single machine OR you can have A/V Edge on one server and Access Edge + Web Conferencing Edge on another server. Each of these servers can also be a load-balanced server configuration. You can’t load balance a consolidated single-server (all three Edge roles) configuration. I’m guilty of getting this one wrong, so if you saw me speak at one of the UC roadshows last fall, make note! (source http://technet.microsoft.com/en-us/library/bb663789.aspx)

15:44: Note that while a reverse proxy (such as ISA) is not a required part of the whole remote access deployment, by not using it you will lose functionality from external clients that aren’t using a VPN connection: you won’t be able to expand AD groups and get their memberships, you won’t be able to download the address book information (which contains all of that lovely normalized phone number information you went to such pains to configure), and you won’t be able to download meeting content in Live Meeting conferences. By reverse proxy, think something like ISA 2006 (which is recommended) or other equivalent applications or appliances. (source http://technet.microsoft.com/en-us/library/bb803627.aspx)

15:51: Contrary to popular belief, the Access Edge server does not perform authentication of incoming remote connections. It does provide validation of incoming SIP requests (filtering out requests to invalid SIP URIs, etc.), but it doesn’t authenticate. Authentication happens either at the OCS Standard Edition server, the OCS Enterprise Edition Front-End pool, or the optional (but highly recommended) Director role. Director roles can be load balanced for greater reliability. (source http://technet.microsoft.com/en-us/library/bb663752.aspx)

17:36: Byron Spurlock has a fantastic blog on OCS at http://blogs.msdn.com/byrons/default.aspx — the only flaw is that Byron needs to update more frequently! Great stuff!

17:42: Want to find the latest and greatest list of UC-compatible certificates? Look no further than KB 929395. However, be aware that this KB doesn’t seem to have been updated recently, and it doesn’t help you figure out which certificates will automatically be trusted by Windows Mobile devices or Office Communicator Phone Edition devices. The key sentence is: “If the OCS 2007 servers use public certificates they will most like be automatically trusted by the device, since it contains the same list of trusted CA’s as Windows CE.”

Continue on to my Day 4 notes.

Post-Connections report

Vegas was great again this year; the hotel was as lovely as ever, but the overlap with the Latin Grammys sure did some interesting things to the elevators. Mandalay Bay felt full! On the other hand, the beach remodel was excellent; the wave pool and the Lazy River pool were both hits with my family. As is my wont, I’m making my session slide decks available for download.

  • EXC16: Advanced Exchange Protection using Data Protection Manager
    Backing up and restoring Exchange servers is an essential part of keeping your messaging infrastructure up and running, even when you’re running an advanced clustering configuration. Why should you consider using the new version of Microsoft System Center Data Protection Manager (“v2”) to protect your Exchange server clusters? Is it any harder than backing up standalone servers? This session covers protecting clustered Exchange 2003 and 2007 server configurations, including the new Exchange 2007 replication options.

    I thought this session went pretty well; there was a Microsoft session on Tuesday morning that looked like it was going to cover the exact same material, but the overlap was both smaller and shallower than I expected. I got a lot of good questions from this session which I’ll be answering in the next couple of days; I really hope that I was able to convey my own excitement about DPM and how it will make a great partner for protecting Exchange.

  • EXC17: Exchange Management Shell Annoyances
    The Exchange 2007 Management Shell makes full use of the exciting new Windows PowerShell technology. It’s a great command-line management experience, but it’s still not perfect. You may have already been tripped up by annoyances and complications in what seem to be obvious tasks or you may just want to know what dangers lurk beneath the surface. This session will show you some common pitfalls and problems and give you the knowledge to successfully navigate them.

    This session suffered from the inevitable technical glitches; my Exchange virtual environment died an hour or two before the session, so I ended up having to run it from a stock Windows PowerShell session. Luckily, I was able to cover most of the territory from there and even add a thing or two. Not having the Get-Help and cmdlet completion information for EMS, though, just sucked; my apologies.

  • EXC18: Getting Run Over by Exchange 2007
    Common knowledge says that upgrading to Exchange 2007 isn’t nearly as hard as the upgrade from Exchange 5.5. That’s not to say that it doesn’t present its own set of challenges—and if you’re caught by them, it will still feel like getting run over by a truck. This session will present some of the common gotchas and how to avoid them. Be at the head of the upgrade parade, not caught in the wheels.

    Wow. This was a great session; standing room only and a lot of good feedback and questions. This is clearly a topic of concern to people — if you have any other upgrade gotchas, let me know!

It’s a release!

For those of you who have been waiting for that sweet, sweet DPM 2007 goodness…wait no more. It’s gone RTM. DPM 2007 is an amazing product, so amazing that somebody and his cow-orker are writing a book about it. (Yes, Ryan and I have been pretty busy on this thing; in fact, I get to work on edits to a couple more chapters tonight after I leave work and Ryan is busy rocking the house with some phat lab tracks for your testing and learning pleasure.)

Of course, my absolutely favorite features of DPM 2007 are:

  • DPM makes protecting and restoring Exchange data easy. Imagine being able to restore an individual mailbox without having to do brick-level MAPI backups! You can do it with DPM.
  • In fact, DPM synchronizes your databases and transaction logs, so you can restore your data to any specific recovery point.
  • DPM 2007 takes a page from the Exchange 2007 playbook with the DPM Management Shell, based on Windows PowerShell. It’s not quite as pervasive as the EMS, but it’s a damned good start.

But don’t take my word for it — download the evaluation software and start playing with it.

Testing your new Exchange 2007 Send connector

Updated 1401 PDT: added the diagram.

A recent post on a mailing list I frequent gives me today’s blog post.

So you’ve got an Exchange 2000/2003 organization and you’ve decided that you want to upgrade to Exchange 2007. You’ve done all the research and planning and you’ve gotten as far as installing the first HT server (CF-EX01) into your organization:

  • We’ll assume that your organization already has an SMTP connector named Legacy SMTP to handle all outbound mail for the SMTP:* address space.
  • Since this is the first Exchange 2007 server, Exchange 2007 Setup has created a new Exchange 2007-only administrative group and a new Exchange 2007-only routing group.
  • It’s also created a bi-directional Routing Group Connector between the HT server and the Exchange 2003 bridgehead (CF-LE01) you specified as your LegacyRoutingServer.

Let’s take a look at things from EMS:

As we expect, we see the pair of RGCs, each with a cost of 1, and our existing SMTP connector, also with a cost of 1. Right now, outbound message flow is easy: anything in the org only has one outbound gateway.

[PS] C:\>Get-RoutingGroupConnector | ft Identity,Cost,SourceTransportServers,TargetTransportServers
Identity                           Cost SourceTransportServ TargetTransportServ
                                        ers                 ers
--------                           ---- ------------------- -------------------
CF-EX01-CF-LE01                       1 {CF-EX01}           {CF-LE01}
CF-LE01-CF-EX01                       1 {CF-LE01}           {CF-EX01}

[PS] C:\>Get-SendConnector
Identity    AddressSpaces Enabled
--------    ------------- -------
Legacy SMTP {SMTP:*;1}    True

 
One of the first things you might want to do is get all inbound and outbound mail flowing through your new Exchange 2007 HT server. Inbound is easy: we simply change the configuration on our gateway mail machine or firewall server, or change our MX records, appropriately. For outbound, though, we want to create a new Exchange 2007 Send connector and test it before we actually entrust live email to it. Those of you with large Exchange organizations already know how to do this: manipulate your connector costs. If you’re in a smaller organization that only had one routing group, though, this may be a new concept. Don’t worry, it’s pretty easy.

The goal is to create two outbound routes for the SMTP:* address space, one for the Exchange 2003 side of the organization and one for the Exchange 2007 side. The Legacy SMTP connector already gives us the former and we’ll create the latter in a moment. We need to ensure that the costs of all related connectors are set so that:

  • The combined cost of the Legacy SMTP connector, the RGC, and any other connectors in between is greater than the cost of the new Exchange 2007 Send connector from the Exchange 2007 routing group.
  • Likewise, the combined cost of the new Exchange 2007 Send connector and the RGC is greater than the cost of the Legacy SMTP connector from the Exchange 2003 routing group(s).

To meet these goals, depending on how your organization is configured, you MAY need to mess with the default costs. In our sample organization where we have just two routing groups and we’ve used the default costs for all connectors, this is precisely how it all works out by default. First, though, let’s go ahead and create the new Exchange 2007 Send connector:

[PS] C:\>New-SendConnector -Name 'New SMTP' -Usage 'Internet' -AddressSpaces 'SMTP:*;1' `
    -IsScopedConnector $false -DNSRoutingEnabled $true -UseExternalDNSServerEnabled $false `
    -SourceTransportServers 'CF-EX01'
Identity AddressSpaces Enabled
-------- ------------- -------
New SMTP {SMTP:*;1}    True

[PS] C:\>Get-SendConnector
Identity    AddressSpaces Enabled
--------    ------------- -------
Legacy SMTP {SMTP:*;1}    True
New SMTP    {SMTP:*;1}    True

Yup — two SMTP connectors, each with the SMTP:* address space and a cost of 1. Here’s a quick diagram:

image

Now, let me show you how the routing currently works:

  1. From the Exchange 2007 routing group, we look for our lowest cost to the SMTP:* address space. We see two connectors that match.
    • Our total cost to Legacy SMTP is 2, since its bridgehead is homed in the Exchange 2003 routing group. Cost 1 to navigate the RGC plus cost 1 for the connector.
    • Our total cost to New SMTP is 1, since its bridgehead is homed in the same routing group we’re in. This is our choice.
  2. From the Exchange 2003 routing group, we look for our lowest cost to the SMTP:* address space. Again, we see the same two matching connectors.
    • Our total cost to New SMTP is 2, since its bridgehead is homed in the Exchange 2007 routing group. Cost 1 to navigate the RGC plus cost 1 for the connector.
    • Our total cost to Legacy SMTP is 1, since its bridgehead is homed in the same routing group we’re in. This is our choice.

Now, we can begin our testing. We’ve got several ways to do this:

  • Add an Exchange 2007 mailbox server, create a test account, create an Outlook profile, and go to town.
  • Add a test mailbox to an existing Exchange 2003 mailbox server and set it up for IMAP/POP3 access. Use Outlook Express or Outlook to set up the mail account, and specify our Exchange 2007 Hub Transport server as the SMTP server.
  • Telnet to port 25 of the Hub Transport server and submit messages manually. You might need to allow anonymous connections on the Default Receive connector if you do this, unless you can do NTLM and Base64 encoding in your head. (If you can, you scare me.)

Still with me? Whew! One last piece: we need to change the route costs when we’re all done with our testing and are ready to flip the switch. Sure, you can do it from the GUI, but where’s the fun in that? Simply use EMS to modify the address space on the Legacy SMTP connector to set its cost higher than the combined total of the RGC + New SMTP connector:

[PS] C:\>Set-SendConnector -Identity 'Legacy SMTP' -AddressSpaces 'SMTP:*;10'
[PS] C:\>Get-SendConnector
Identity    AddressSpaces Enabled
--------    ------------- -------
Legacy SMTP {SMTP:*;10}   True
New SMTP    {SMTP:*;1}    True

That, by the way, is how you update a cost: modify the AddressSpaces parameter on the connector. If you have multiple address spaces, this gets a little bit more complicated; you have to supply all the values instead of just one. We’ll talk about techniques to do this later…perhaps in one of my upcoming sessions at Exchange Connections Fall 2007 in Las Vegas!
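
For instance, on a hypothetical connector carrying two address spaces (the partner domain below is made up), raising the cost on just the * space means restating both values:

```powershell
# Sketch: the connector handles both SMTP:* and a specific partner domain.
# To change the cost on the * space alone, every address space must be
# supplied again, including the ones that aren't changing.
Set-SendConnector -Identity 'Legacy SMTP' `
    -AddressSpaces 'SMTP:*;10','SMTP:partner.example.com;1'
```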

Using PowerShell to admin Exchange 2000/2003

Via Evan’s blog, I found out about this pair of postings on using PowerShell’s WMI provider to manage Exchange 2000/2003 servers. I’m still working up my notes on my Exchange Connections session on this topic; once I get past the first draft deadline for the DPM book later this week, I should have spare time to finish the notes and get my postings up online.

Bouncing email to Exchange 2007 distribution groups

A few days ago, I was asked to track down what was, on the surface, a troubling problem with Exchange 2003: bounced messages to two addresses I knew for a fact existed in the organization. Both were generating a 5.1.1 User unknown SMTP error.

As you can imagine, we have several distribution and security groups in our Active Directory deployment for various purposes. Mysteriously, two of these mail-enabled groups — we’ll call them Foo and Bar — had started bouncing messages sent to them externally. Both Foo and Bar were created years ago and had given me very few problems under both Exchange 2003 and Exchange 2007. I could send messages to Foo internally and have them work, but Bar was completely toast.

Since Bar wasn’t working at all, I tackled that one first. Long story short: the group had somehow gotten purged from Active Directory. Whoops! Easy fix, though; create the group, give it the right email address, add the correct members, and voila! it’s fixed.

Except it’s not. Now Bar is doing the same thing Foo is doing: messages sent internally are fine, but messages coming from the Internet are still getting a lovely 5.1.1 error. Time to pull up the objects in Exchange Management Shell and see what I can find:

[PS] C:\>Get-DistributionGroup Foo | fl

...
RequireSenderAuthenticationEnabled : True
...

Oh, geez. Somehow, the “Require sender to authenticate” checkbox got turned on for this group; anonymous incoming connections aren’t authenticated, therefore Exchange won’t accept this group as a recipient. This setting is set to True by default when you create new distribution groups in EMS or EMC, BTW, so don’t forget to turn it off if you need to:

[PS] C:\>Set-DistributionGroup Foo -RequireSenderAuthenticationEnabled $False

Hope this helps!

Post-Connections post

Another Exchange Connections event has come and gone. As nice as the venues are, I really wish the spring Connections events weren’t in Orlando — the town itself is spread out, limiting how easy it is to get out of the event venue for a couple of hours and go do anything on your down time. I was lucky enough to get direct flights to and from Orlando, thus limiting the amount of horsing around I had to do in airports, but the flights are correspondingly longer. Consequently, even though I managed to snag an aisle seat in an exit row on my flight back (with no one in the center seat), I awoke today with a killer migraine, a lovely parting gift from seat 15C.

The other thing I lucked out on this year was getting to do all of my sessions on the same day. That may sound like a lot of work — and it is — but there’s a large amount of mental energy I have to invest in getting ready to go up on stage and present, so only having to do that workup once (and sustain it for the day) actually ends up being easier on me. Here are the slide decks for my presentations, along with comments:

  • EXC16: DCAR with Exchange
    I received an interesting comment from several of the attendees at this session, which was that they were not originally going to come to this session because the acronym DCAR meant nothing to them. I know that few people in this industry use it other than Paul and me, so I need to see what I can do about that.
    Key take-away from this session: although Exchange 2007 comes with out-of-the-box functionality aimed at discovery, compliance, archival, and retention, you still need third-party software to do a proper job of it — and you need to consider these activities all as facets of the single larger task of messaging data management. Thing I learned from this session: my job gives me the luxury of examining these types of tasks and looking for the bigger picture, but the people who work to keep production environments running don’t often have the time. I need to not be afraid of talking about things I think are “obvious,” because they may be coming from a new perspective some of my attendees don’t get the chance to share. In return, they share their experience and perspective with me, which helps me better fine-tune my message for others.

  • EXC17: 10 Tips to Make Your Exchange Server a Good Net Neighbor
    This was a nice small session, the perfect wrap-up for the day, although I wonder if attendance was hurt slightly by the fact that it was the last session of the day. Nevertheless, I think there were some good questions and discussions, and I’ve definitely got some ideas for future blog posts (and possibly magazine articles).
    Key take-away from this session: you can significantly enhance the reputation your domains gather by thinking about how your Exchange organization interacts with the rest of the Internet and making some appropriate changes.
    Thing I learned from this session: so much of our understanding of email best practices in the end comes back to a fundamental understanding of proper DNS theory and operation — a subject that far too many admins do not have adequate grounding in. Especially in the Windows community, DNS tends to get treated as a black box, and someone who learns how Active Directory integrates with DNS may not learn that some of the assumptions AD makes about DNS are only valid in the context of an AD domain.

  • EXC18: Iron Chef: Using PowerShell with Exchange 2003
    Definitely the one that took the vast majority of my mental prep time; I hadn’t realized when I proposed this session what a challenge it would be. On the other hand, I’m glad I did it, and I’ll be breaking it down into a series of detailed blog posts in the coming weeks.
    Key take-away from this session: cmdlets make things so much simpler, but once you’ve got your data in a PowerShell object you can still do some amazing stuff in a very small amount of script.
    Thing I learned from this session: logistics are everything in the success of a presentation; the problems I had with my demos came not from the scripts, which I was re-writing until an hour before the session, but from my last-minute decision to run the slides off the provided machine and run the demo scripting off my laptop. As a result, my practice run (which involved switching in and out of an RDP session) was useless and the attendees kept having to remind me to switch the screen to the right machine. I’m just glad they seemed to take it in good humor.

Need some PowerShell help

Normally, when I write a blog post, I’m trying to help other people out. I forget that it can work both ways. So, today’s post is a plea for help: if you know a lot about PowerShell, I could use an answer to the following three questions. If you’ve got any insights, please drop me an email using the Contact link.


Question the First:

I’ve got a script that manipulates a user’s delivcontlength property in Exchange 2003. This helps me manage the situation where I’ve got a few users who need to be able to receive 20MB messages while most everyone else only needs 10MB. My script grabs all of the user objects in the directory, iterates through the collection, checks to see if the user is one of the special users, and if not it sets the per-user limit to be 10MB.

$ds=New-Object DirectoryServices.DirectorySearcher
$ds.Filter="(&(objectcategory=person)(objectclass=user))"
$AllUsers=$ds.FindAll()
Foreach ($User in $AllUsers) {
  $oUser=$User.GetDirectoryEntry()
  if ($oUser.sAMAccountName -ne "deving") {
    $oUser.Put("delivcontlength", "10240")
    $oUser.SetInfo()
    $oUser.psbase.RefreshCache()
  }
  $oUser | select displayname,delivcontlength
}

This script does what I want to do, but what I don’t know how to do is reset the delivcontlength attribute. If there’s no per-user limit set, this attribute doesn’t exist on the user object — so how do I remove an attribute through PowerShell? Setting it to 0 doesn’t work.

Edit: The correct answer is to use the PutEx method, which allows you to handle a collection of attributes, as well as delete existing attributes. I’ll post an updated snippet of code next week that shows how to use PutEx in live code. Thanks to Andy Webb for the answer.
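
In the meantime, here’s a rough sketch of what the clear operation should look like — I haven’t verified this exact snippet, the DN is a made-up placeholder, and control code 1 is ADS_PROPERTY_CLEAR from the ADSI documentation:

```powershell
# Sketch: clear a per-user limit by removing the attribute with PutEx.
# Control code 1 is ADS_PROPERTY_CLEAR; the value argument is ignored
# for a clear operation.
$oUser = [ADSI]"LDAP://CN=Some User,OU=Users,DC=example,DC=com"
$oUser.PutEx(1, "delivcontlength", $null)
$oUser.SetInfo()
```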


Question the Second:

Continuing with the delivcontlength attribute, when I go to check its value on a single user:

$oUser.Get("delivcontlength")

This throws an exception if the attribute isn’t set. How do I trap that exception inside of a script so I know that the property doesn’t exist, and can do something else based on that information?

Edit: The answer, again from Andy Webb, is to use the GetEx method and specify delivcontlength as one of the attributes in the collection. Then check to see whether a value is returned in the array.
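
Another approach that works in PowerShell v1 — plainly not the answer from the list, just an alternative sketch — is to wrap the Get call in a trap so the exception gets swallowed:

```powershell
# Alternative sketch using PowerShell v1's trap statement: if Get throws
# because the attribute isn't set, the trap suppresses the exception and
# execution resumes, so the function falls through to return $null.
function Get-DelivContLength($oUser) {
    $result = $null
    trap { continue }                          # swallow the COM exception
    $result = $oUser.Get("delivcontlength")
    return $result
}
```

A $null return then means "no per-user limit set," which the rest of the script can branch on.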


Question the Third:

Given a variable with a DN, what is the easiest way to open the corresponding object using ADSI? I’ve tried the following with no success:

$DN = "LDAP://red-dc01:389/CN=Devin Ganger,OU=Users,OU=3Sharp Accounts,DC=redmond,DC=3sharp,DC=com"
$oUser = [ADSI]$DN

The point, of course, is to be able to retrieve the DNs from some other list such as a CSV and perform some operation on the listed objects by iterating through with a loop.

Edit: I’m told that this should in fact work. When I go paste it into a clean instance, it does, in fact, work. Weird!

Getting to know the legacy Routing Group Connector in Exchange 2007

If you install Exchange 2007 into a legacy Exchange organization (by legacy, I mean Exchange 2000/2003), the first time you install the Hub Transport (HT) role into the organization you are asked to designate one of the legacy Exchange servers as a LegacyRoutingServer. You may already know that Exchange uses this server, along with your new HT role, as the bridgeheads for a new bi-directional Routing Group Connector (RGC). This RGC connects the Exchange 2007 routing group with the routing group your LegacyRoutingServer is in, thus giving your legacy Exchange servers a valid route to the new Exchange 2007 servers.

However, once you go poking around inside the new Exchange Management Console, you’ll quickly find that this RGC doesn’t show up. It does show up if you fire up the legacy Exchange System Manager — actually, you see the expected pair of connector objects, one in each routing group — but if you go to look at their properties, ESM will politely tell you that the RGC objects were created in a newer version of Exchange, so keep your mitts off already. For small organizations, having a single legacy Exchange server connecting the legacy portion of the org to Exchange 2007 probably isn’t that horrible, but in a larger org you may need to specify additional bridgehead servers. The answer is found, of course, in the Exchange Management Shell.

As an example, let’s say we have an Exchange 2003 organization with a single routing group. We set up our first Exchange 2007 HT role on machine EX27-HT01 and specified EX23-BH01 as the LegacyRoutingServer. Now, we need to rehome the legacy interoperability RGC to our permanent Exchange 2007 HT, EX27-HT02. I can use the Set-RoutingGroupConnector to modify the existing RGC (which, BTW, is named “Interop RGC”). Here’s how I’d do it:

Set-RoutingGroupConnector `
  -Identity "Exchange Routing Group (DWBGZMFD01QNBJR)\Interop RGC" `
  -SourceTransportServers "EX27-HT02"
Set-RoutingGroupConnector `
  -Identity "First Routing Group\Interop RGC" `
  -TargetTransportServers "EX27-HT02"

Conversely, if we’re retiring EX23-BH01 and moving the RGC to a second Exchange 2003 server, EX23-BH02, we’d do it like this:

Set-RoutingGroupConnector `
  -Identity "Exchange Routing Group (DWBGZMFD01QNBJR)\Interop RGC" `
  -TargetTransportServers "EX23-BH02"
Set-RoutingGroupConnector `
  -Identity "First Routing Group\Interop RGC" `
  -SourceTransportServers "EX23-BH02"

Note that I’ve enclosed the server names in quotes. You don’t have to do this, but I’ve gotten into the habit of quoting server names for a reason: it allows me to specify multiple servers easily if I need to, without any chance of confusing PowerShell. Here’s what it would look like if I wanted to designate all four of our servers in this example as bridgeheads:

Set-RoutingGroupConnector `
  -Identity "Exchange Routing Group (DWBGZMFD01QNBJR)\Interop RGC" `
  -SourceTransportServers "EX27-HT01","EX27-HT02" `
  -TargetTransportServers "EX23-BH01","EX23-BH02"
Set-RoutingGroupConnector `
  -Identity "First Routing Group\Interop RGC" `
  -SourceTransportServers "EX23-BH01","EX23-BH02" `
  -TargetTransportServers "EX27-HT01","EX27-HT02"

For those of you running bigger organizations, you may need to have multiple legacy RGCs. There are several things to consider before you do this, and as usual, the Exchange team blog tells you all you need to know. Before you go run off and read that link, though, here’s a summary of the highlights:

  • All Exchange 2007 servers in the forest are members of the same routing group, regardless of which AD domain and site they’re in. As a result, Exchange 2007 message routing only takes routing groups into consideration when routing into legacy servers.
  • Exchange 2007 will always route messages through Exchange 2007 servers as long as possible, even if the legacy routing topology would be shorter. It will never try to use a shortcut through legacy RGCs to transfer a message from one Exchange 2007 server to another. Likewise, legacy Exchange servers try to route through their legacy topology as long as possible, even if the AD site topology is shorter.
  • Having multiple RGCs from the Exchange 2007 routing group to the legacy routing groups requires you to change the way linkstate updates work in Exchange 2000/2003. This prevents the possibility of message loops that would otherwise be caused by a mix of linkstate-aware routing combined with Exchange 2007’s complete lack of linkstate. However, this also has implications on your legacy mailflow and is intended as a transition state only.
  • It’s easier to create additional legacy interop RGCs using the New-RoutingGroupConnector cmdlet in EMS. While you can use the legacy ESM to create additional legacy interop RGCs, you’ll have to make sure to do three things:
    1. Home the new RGC in the legacy routing group.
    2. Create both directions of the RGC at the same time.
    3. Add the legacy bridgehead servers to the “ExchangeLegacyInterop” group so they have the proper permissions to authenticate to the Exchange 2007 HT servers.
  • As long as you have servers in legacy routing groups, you should always have a legacy routing path for all routing groups and RGCs to talk to each other. If you force them to communicate solely through the Exchange 2007 routing group, you will break the flow of linkstate information, and your legacy routing groups will become islands.
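As a sketch of that EMS approach, creating an additional bidirectional interop RGC looks something like the following. I'm borrowing the server names from my earlier example; check the cmdlet help for the full parameter list before running this in production:

```powershell
# Create a second interop RGC in both directions at once.
# -Bidirectional takes care of creating the matching reverse connector.
New-RoutingGroupConnector `
  -Name "Second Interop RGC" `
  -SourceTransportServers "EX27-HT02" `
  -TargetTransportServers "EX23-BH02" `
  -Bidirectional $true
```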

9/6/2007: Edited to fix the typo in LegacyRoutingServer. Thanks to fellow Exchange MVP William Lefkovics for the catch.

Gaps in PowerShell cmdlet coverage

As I’ve been working more and more with PowerShell, I’ve discovered a few gaps in coverage in the cmdlets it offers. Most of my PowerShell noodling has been with Exchange 2007, so most of these represent holes in the Exchange Management Shell extensions, but the one I just ran across is in basic PowerShell itself.

PowerShell provides a series of cmdlets for querying and manipulating Windows Services, among them New-Service, Set-Service, and Get-Service:

  • New-Service, of course, allows me to create a new service instance, including setting dependencies and credentials.
  • With Set-Service I can change various parameters associated with a service: its display name, description, and startup type (Automatic, Manual, or Disabled). However, I can’t change its credentials or dependencies.
  • Get-Service is the most disappointing. It doesn’t return the startup type parameter, so even though I can return an arbitrary list of services (using built-in rich wildcard support for service names and our friend the where cmdlet to sift through other parameters) I can’t tell whether my returned services are set to automatically run or not!

This makes it pretty difficult to use native PowerShell to (say) write a quick script to capture the status of a set of services for later retrieval, then shut them down and set them to Disabled for a time. If I were able to capture that status, I could use a second quick script that would use the saved settings and reset the services to their prior configuration. This script would then be generic and could be used in all sorts of situations. Instead, I get to hardwire stuff in or do it by hand.
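One workaround worth trying, though I haven't tested this end to end, is to bypass Get-Service and pull the start mode from WMI's Win32_Service class, which does expose it:

```powershell
# Capture name, state, and start mode via WMI, since Get-Service omits StartMode
Get-WmiObject Win32_Service |
  Select-Object Name,State,StartMode |
  Export-Csv saved-services.csv -NoTypeInformation

# Later, restore the saved configuration. Note that WMI reports "Auto"
# where Set-Service expects "Automatic", so we map between the two.
Import-Csv saved-services.csv | ForEach-Object {
  $mode = $_.StartMode
  if ($mode -eq "Auto") { $mode = "Automatic" }
  Set-Service -Name $_.Name -StartupType $mode
  if ($_.State -eq "Running") { Start-Service -Name $_.Name }
}
```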

The second major hole I’ve found so far is in the Exchange Management Shell extensions, in particular the Set-PublicFolder cmdlet. I’ve finally figured out the right syntax for specifying multiple servers for the -Replicas parameter, but it would have been really nice if the cmdlet had distinct -AddReplicaTo and -RemoveReplicaFrom parameters. As it is now, if I’m making changes, I have to capture the existing list of replica databases, add or remove the desired entries, then feed the entire list back in to Set-PublicFolder. If I try to pass in a single database, I overwrite the existing value of -Replicas. This may be acceptable for large-scale public folder operations, but it makes it a pain to do quick one-off operations.
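The capture-modify-write dance I'm describing looks roughly like this. The folder path and server name here are hypothetical, and I'd test this on a throwaway folder first:

```powershell
# Grab the current replica list, append one more database, and write
# the whole list back (since -Replicas overwrites rather than appends)
$pf = Get-PublicFolder "\Marketing"
$replicas = $pf.Replicas
$replicas += (Get-PublicFolderDatabase -Server "RED-MSG02")
Set-PublicFolder "\Marketing" -Replicas $replicas
```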

This is matched by a minor gripe with Get-PublicFolder: when it displays the replicas in a table or list format, it displays only the database names. If I’ve named multiple public folder databases the same name (like, say, “Public Folders”), then I can’t tell by inspecting this list which servers the replicas are on. I need to indulge in further script-foo to figure it out. Again, this makes casual administration from EMS much more of a chore than it should be.

Finally, I’d like to thank both the Exchange team and the PowerShell team for all the hard work they’ve done, both on the code and on the docs. My only complaint about the docs is that often there are not enough examples, especially when you have parameters whose use isn’t immediately obvious (such as the -Replicas parameter for the Set-PublicFolder cmdlet).

Have you found a puzzling gap in cmdlet coverage? If so, let me know, and I’ll start compiling a list.

Managing public folders in Exchange 2007

In general, I love Exchange 2007. I love Exchange 2007 with a love that is brighter than the aggregate IQ of a science fiction convention and stronger than the body odor of the RPG section of that same convention[1].

However, one area of Exchange 2007 that I don’t love is public folder management. It is a downright shame that the best interface for managing public folders in Exchange 2007 remains the Exchange 2003 System Manager. Creating a simple public folder in the Exchange Management Shell isn’t a problem, but the docs are very weak on practical, usable examples of everyday tasks like adding and removing replicas to existing public folders, or working with the system public folders.

There is a great need for an “Exchange 2007 Administrator’s Guide to Public Folders.” Unfortunately, I don’t see it happening any time soon.

[1]I used to be a freelancer in the RPG industry and I’ve spent my share of time at conventions. I’ve even run games at conventions, so I get to make this joke.

Using PowerShell to sort mailboxes by size

This weekend I was playing with the Exchange Management Shell while performing some troubleshooting on our Exchange 2007 system and discovered some fun combinations of cmdlets I thought I’d share.

First, let’s start with a command that lists all of the mailboxes in a given server, sorted by size. There are two variants for ascending and descending, differing only in the inclusion of the -Descending parameter in the Sort-Object cmdlet in the latter:

Get-MailboxStatistics -Server RED-MSG01 | Sort-Object -Property TotalItemSize | `
  Format-Table DisplayName,TotalItemSize

Get-MailboxStatistics -Server RED-MSG01 | Sort-Object -Property TotalItemSize -Descending | `
  Format-Table DisplayName,TotalItemSize

Pretty simple stuff, really; your basic pipeline exercise. You can also use Get-Mailbox -Server SERVER | Get-MailboxStatistics like I originally was, but since the Get-MailboxStatistics cmdlet already supports the -Server parameter it would be redundant. If you have some other way you need to define your collection of mailboxes, once you figure out how to define it, pipe the collection into Get-MailboxStatistics and away you go.

Next up, a variant of the same, only this time I want to only show those mailboxes larger than, say, 1GB:

Get-MailboxStatistics -Server RED-MSG01 | Where {$_.TotalItemSize -gt 1GB} | `
  Sort-Object -Property TotalItemSize -Descending | Format-Table DisplayName,TotalItemSize

The Where cmdlet gives me access to comparisons against any property on the collection of objects passed into it, so be careful where you use it in the pipeline. The collection of objects given by Get-Mailbox is not the same collection of objects given by Get-MailboxStatistics, and they will have different sets of properties. And yes, you can put multiple Where cmdlets into different places in your pipeline to fine-tune your selections, as the following search for the Kelly brothers illustrates:

Get-Mailbox -Server RED-MSG01 | Where {$_.DisplayName -imatch "kelly"} | `
  Get-MailboxStatistics | Where {$_.TotalItemSize -gt 1GB} | `
  Sort-Object -Property TotalItemSize -Descending | Format-Table DisplayName,TotalItemSize

This would show me all mailboxes whose display name contains the string “kelly” (without considering the case of the characters) and that are larger than 1GB in size. The -imatch comparison is a case-insensitive regular expression match, so I could have gotten fairly fancy in my comparison.

Last one: let’s take the list of mailboxes, in ascending order, and feed it to the Move-Mailbox cmdlet. This way, we’ll be moving the smaller mailboxes first, allowing us to get through more moves in the beginning and saving the real packrats for last:

Get-MailboxStatistics -Server RED-MSG01 | Sort-Object -Property TotalItemSize | `
  Move-Mailbox -TargetDatabase "RED-MSG02\Mailbox Database"

Let me know if you have questions or comments!

Using PowerShell to find the average size in a group of files

Today I wanted to find out the average size of a group of files. To make it more challenging, I wanted to find the average size of a group of files that existed co-mingled in a directory with several other files I didn’t care about. PowerShell to the rescue!

In my examples, we’ll say I wanted to average the size of all .CSV files in the C:\temp directory. I define the $target variable to contain a string that defines which files I want to work with. There’s no magic here; it’s the same string I’d type on a command line, but you can do much more complicated expressions using wildcards and such. Here’s how I defined it:

$target = "c:\temp\*.csv"

Here’s the script I came up with:

$myfile = $(dir $target | select-object Name,Length)
for ($($myfilesize = 0; $i=0); $i -lt $myfile.count; $i++) {$myfilesize += $myfile[$i].Length}
$myfileavg = "{0:N2}" -f $($myfilesize / $myfile.count / 1024)
Write-Output ("Average size of selected files: " + $myfileavg + "KB")

Here’s how the script works.

  • The script performs the dir command against the target (really an alias for the Get-ChildItem cmdlet) and pipes the output through the Select-Object cmdlet so we can specify which attributes we care about — in this case only Name and Length. It then stores the results in the $myfile variable.
  • Now we can use a for loop to iterate through all the elements of $myfile and add up the total number of bytes.
  • Simple arithmetic — total number of bytes divided by the number of files (which is the Count property of the $myfile variable) — gives us our answer in bytes. We divide it by 1,024 to convert to kilobytes, and use the .NET string formatting functions (the {0:N2} bit) to tell PowerShell to round the result to two decimal places.
  • We then print the output out. Done!

I’m sure there are easier and cleaner ways to do this under PowerShell, but my point is to show how a task that would take a lot more work under VBScript or conventional Windows command-line scripting can be done quickly in PowerShell.
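In fact, one cleaner approach occurred to me after writing this: the Measure-Object cmdlet can do the averaging itself. Something like this should produce the same result, though I'll leave verifying that as an exercise:

```powershell
# Let Measure-Object compute the average file size, then convert to KB
$target = "c:\temp\*.csv"
$avg = (dir $target | Measure-Object -Property Length -Average).Average
Write-Output ("Average size of selected files: " + ("{0:N2}" -f ($avg / 1024)) + "KB")
```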

Extra credit: say you just wanted to import the files and sizes into Excel? This one-liner creates the CSV file:

dir $target | Select-Object Name,Length | export-csv csv.csv -NoTypeInformation

Note that you need to use Select-Object here to preserve the object information of the original objects you collected. If you use Format-Table as I first did, it won’t work; your CSV file will contain the object information of the output lines of the table, not the objects used to create the table. The -NoTypeInformation switch suppresses the inclusion of the #TYPE header, an additional line that documents what object type Export-CSV detected.

Another PowerShell one-liner

Today I needed to evenly split 40,000 files between 5 directories. The math is simple — 8,000 files in each directory — but the way I’d have normally handled a one-off task like this is by using Windows Explorer to go in, manually select the first 8,000 files, drag them into the first target directory, lather, rinse, repeat. However, Explorer decided to be unstable, and I decided to play with PowerShell for a few minutes and see how easy it would be to do it from the shell.

Here’s the one-liner I came up with:

for ($($filelist = dir; $i=0); $i -lt 8000; $i++) {Move-Item $filelist[$i] targetdir}

It’s a simple for loop that shows off a couple of the nice features of PowerShell:

  • I used the $filelist variable to hold the return value of the dir cmdlet (which is really just an alias for the Get-ChildItem cmdlet). This cmdlet passes back the child items found in the specified location (or the current directory if none is specified) as a collection, which is stored in the variable.
  • I used the for loop to access each individual item in the $filelist collection.

Of course, I used the -whatif switch with the Move-Item cmdlet to verify it was going to do what I wanted before actually trying to move the files.
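To show the whole idea end to end, here's a self-contained sketch that generalizes the one-liner: it creates 20 scratch files in a temp directory and splits them evenly among 4 target directories (the real task was 40,000 files across 5, but the logic is the same; the "targetdir" names are made up for the demo):

```powershell
# Create a scratch directory with 20 dummy files to split
$root = Join-Path ([IO.Path]::GetTempPath()) ("split-demo-" + [guid]::NewGuid())
New-Item -ItemType Directory -Path $root | Out-Null
1..20 | ForEach-Object { Set-Content -Path (Join-Path $root "file$_.txt") -Value "x" }

# Capture the file list once, then move each chunk into its own directory
$files = Get-ChildItem $root | Where-Object { -not $_.PSIsContainer }
$dirs = 4
$chunk = [math]::Ceiling($files.Count / $dirs)
for ($i = 0; $i -lt $files.Count; $i++) {
    $dest = Join-Path $root ("targetdir" + ([math]::Floor($i / $chunk) + 1))
    if (-not (Test-Path $dest)) { New-Item -ItemType Directory -Path $dest | Out-Null }
    Move-Item -Path $files[$i].FullName -Destination $dest
}
```

As with the original one-liner, tacking -WhatIf onto Move-Item is a cheap way to dry-run this before letting it loose on real files.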

Bulk creating Exchange mailboxes from a CSV file

The current project I’m working on required me to create a large number of mailbox-enabled user accounts in an Exchange 2007 organization. When you’re using Exchange 2007, you need to provision your accounts with Exchange 2007’s tools, and the Exchange Management Shell (EMS) makes creating mailboxes quick and easy. Here’s the script I used:

$pw = Read-Host "Enter password:" -AsSecureString

get-content mailboxes.txt | foreach {
	$umb=@{}
	$umb.acc, $umb.fn, $umb.ln = $_.split()
	$upn = $umb.acc + '@contoso.com'
	$wn = $umb.fn + ' ' + $umb.ln
	New-Mailbox `
            -Alias $umb.acc `
            -SamAccountName $umb.acc `
            -UserPrincipalName $upn `
            -Name $wn -FirstName $umb.fn `
            -LastName $umb.ln `
            -OrganizationalUnit 'contoso.com/Users' `
            -Password $pw `
            -ResetPasswordOnNextLogon $false `
            -Database 'MBX01\First Storage Group\Mailbox Database'
}

This approach uses the Get-Content cmdlet and a foreach loop to break out each field into a separate variable using the split operator.

It turns out, though, there’s an easier way to do it: use the Import-CSV cmdlet. Bharat Suneja shows you how to use it over on his blog. Man, I wish I’d known about that.
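For the curious, here's roughly what my script shrinks to with Import-CSV. I'm assuming a mailboxes.csv whose header row is Account,FirstName,LastName; see Bharat's post for the full treatment:

```powershell
# Assumes mailboxes.csv has a header row of: Account,FirstName,LastName
$pw = Read-Host "Enter password:" -AsSecureString

Import-Csv mailboxes.csv | foreach {
	New-Mailbox `
            -Alias $_.Account `
            -SamAccountName $_.Account `
            -UserPrincipalName ($_.Account + '@contoso.com') `
            -Name ($_.FirstName + ' ' + $_.LastName) `
            -FirstName $_.FirstName `
            -LastName $_.LastName `
            -OrganizationalUnit 'contoso.com/Users' `
            -Password $pw `
            -ResetPasswordOnNextLogon $false `
            -Database 'MBX01\First Storage Group\Mailbox Database'
}
```

Each column in the header row becomes a property on the objects Import-Csv emits, which is what lets the split-and-assign bookkeeping disappear entirely.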

PowerShell (despite abandoning the cool name of Monad) rocks.

Database and disk spindle management in Exchange 2007

When you’re optimizing Exchange 2003/2000 servers for performance, one of the first places you look is at your disk configuration. As a rule of thumb, you want to get as many of the following components as possible reading and writing to separate sets of disk spindles (thus reducing contention):

  • The operating system
  • The Exchange binaries
  • Logs (protocol logs, etc.)
  • Mailbox/PF databases
  • Mailbox/PF database transaction logs (by storage group)
  • SMTP queue directories

Under Exchange 2003/2000, the SMTP queues are a set of directories that are by default located on the same disk you installed Exchange to.

Under Exchange 2007, things are a bit different. You still want to minimize contention for disk spindles by moving as many of the above components to separate disks. In most cases, moving these components is quite a bit easier under Exchange 2007. If you follow Microsoft’s recommendations and only create one mailbox database per storage group, you end up with a 1:1 relationship between databases and transaction logs. I was able to move the database and transaction logs on a new Exchange 2007 mailbox server with the following two PowerShell commands:

Move-StorageGroupPath `
  -Identity "SERVER\First Storage Group" `
  -LogFolderPath:"E:\First Storage Group" `
  -SystemFolderPath:"E:\First Storage Group" `
  -Force
Move-DatabasePath `
  -Identity "SERVER\First Storage Group\Mailbox Database" `
  -EdbFilePath:"F:\First Storage Group\Mailbox Database.edb" `
  -Force

Note: PowerShell uses the backtick as its line continuation character. I’m not sure why they couldn’t use the underscore and stay consistent with .NET, but now you know.

The -force parameter tells Exchange that yes, I really mean to do this and stop prompting me to verify that I want to move these files, that yes I know I’ll be interrupting service to users of this storage group/mailbox, etc. Exchange will automatically dismount the affected databases, move them to their new location, then remount them. Very easy — much easier than in previous versions of Exchange.

Queues, on the other hand, get a bit trickier. On the brilliant side, queues are now ESE (the database formerly known as Jet) databases instead of directory structures on the filesystem. I’d imagine that this makes the dynamic creation/removal of queues happen a lot more quickly, since you’re now serializing all of your I/O, reusing file handles, and avoiding the hassle of having to deal with file/directory metadata. If I could move the queue database as quickly and easily as I can move mailbox databases, my blog post would come to a rapid, joyous conclusion.

Alas. It’s a bit more complicated than that.

There is no PowerShell cmdlet to move the queue database; instead, you have to follow these steps:

  • Create the target directory and give it the proper permissions. If the parent directory already has the proper permissions, then Exchange will create the actual directory for you.
  • Open the <Exchange install directory>\Bin\EdgeTransport.exe.config file (Notepad will work).
  • Modify the QueueDatabasePath and/or QueueDatabaseLoggingPath parameters to point to the new location.
  • Save and close the file.
  • Restart the Microsoft Exchange Transport service.
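The relevant section of EdgeTransport.exe.config ends up looking something like this. The key names are the ones the documentation describes; the E: paths are just examples:

```xml
<appSettings>
  <!-- Point the queue database and its transaction logs at the new volume -->
  <add key="QueueDatabasePath" value="E:\TransportQueue" />
  <add key="QueueDatabaseLoggingPath" value="E:\TransportQueue" />
</appSettings>
```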

Don’t believe me? You can verify for yourself; the process is outlined in the Exchange 2007 Beta 2 documentation topic How to Change the Location of the Queue Database.

Full kudos to the Exchange team for moving the queues into an ESE database — but I’m confused why we aren’t able to manage that database using the same, simple PowerShell methodology. Can anyone shed light on this seemingly incomprehensible design decision?

A nifty PowerShell example

Let us say you have a directory full of files written to it over a period of time, and you need to quickly move all of the files written on a given day (any time during that day) to a different folder. This is something that is difficult to do using just the standard DOS command line — but it becomes quite easy in PowerShell (aka Monad).

Here’s an example file listing. It may look a little strange — I created it in PowerShell — but DOS users can quickly figure out what’s going on here:

  Mode   LastWriteTime        Length  Name
  ----   -------------        ------  ----
  -a---   6/1/2006   7:08 PM    3365  000eb851-4c72-4a75-a29c-f73a94ebced2.eml
  -a---  5/28/2006   3:44 AM    2251  000ef0e8-c1c2-4f21-9a30-790a3577475f.eml
  -a---  5/23/2006  10:07 PM   12236  000f2034-b207-414c-abc5-dab4f5db9c03.eml
  -a---   6/1/2006   8:02 AM    1945  000f3c7b-1aa7-48cd-929e-23bef6e19219.eml
  -a---  5/30/2006  11:56 PM    3909  000f4276-947e-4f45-8672-1d465ef0e4e1.eml
  -a---   6/1/2006   9:04 AM    1944  000f8674-3ca1-4e53-bca0-f94e82881400.eml
  -a---  5/29/2006   5:24 PM    3986  000f96ed-6785-4fe9-8511-dcb8dfe14936.eml
  -a---   6/9/2006   6:19 AM    1787  000fe9c9-9b62-4e87-80c1-65018faa7c7d.eml
  -a---  5/30/2006   8:20 AM   12254  000fefb6-9974-4e7b-9939-06bc0e26af82.eml

So, how am I to pull out just the files written on June 1st? Let’s fire up PowerShell and try the following command (note that it should be entered on a single line; I’ve line-wrapped it for readability):

dir | where-object { $_.LastWriteTime -like "6/1/2006 *" } |
    move-item -destination c:\DestFolder

Let’s take a closer look.

I pipe the output of the dir cmdlet into where-object, which allows me to set up a single condition to compare against all of the objects in the pipeline. Here’s the key difference between PowerShell and traditional shell scripting: in PowerShell, the objects passed through the pipeline are real objects, not strings of text. They have various properties — in this case, we see the properties of Mode, Name, Length, and most importantly for what we want, LastWriteTime.

Note that in my condition, I use the -like comparison so I can specify the date portion of the date/time string, along with a wildcard indicating that any time will match. I could just as easily specify a string of “6/*/2006” for any day in June or “* PM” to show any file written between 12:00PM and 11:59PM regardless of the day.

The move-item cmdlet normally takes a source parameter as well, but since it’s getting objects from the pipeline, it automatically knows to use those instead.
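One caveat: the -like match depends on how your system happens to format dates as strings. If you want a version that ignores regional settings, you can compare against the Date portion of LastWriteTime directly. An untested variant, using the same hypothetical destination folder:

```powershell
# Compare the date itself rather than its string representation,
# so the match works regardless of the system's date format
dir | where-object { $_.LastWriteTime.Date -eq [datetime]"2006-06-01" } |
    move-item -destination c:\DestFolder
```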

Now, getting used to PowerShell takes time, because it’s a bit wordier than you’re used to (especially if, like me, you’re used to UNIX shell programming, which seems to hate vowels). It is absolutely worth the effort, though, because you really can start doing some amazing (and useful) stuff with just a little effort.

Note: dir is an alias for the get-childitem cmdlet. PowerShell provides many of these aliases. I used the alias here so that my example is slightly more readable to those of you who haven’t seen PowerShell code before, but if you’re going to use PowerShell I urge you to not rely on the aliases. Use the full cmdlet names so that you more fully understand what is going on in your scripts.

What’s in a name?

Microsoft has made two big product name announcements in the last couple of days:

  • Monad (formerly the Microsoft Shell or Management Shell) has been renamed to Windows PowerShell. I, personally, am disappointed; Monad is a great name (grade-school sound substitutions aside) and had a decently geeky pedigree to interest folks who aren’t normally willing to look at innovations coming out of Redmond. I’ve personally been able to get at least two of my friends to look more closely at it (and ultimately pronounce it a “Cool Idea!”) just because they couldn’t believe that a Microsoft product would be named for something that esoteric. At the same time, it was sufficiently unique that it could be easily turned into a visible and valuable brand by a group with as much marketing muscle as Microsoft. Ah, well.
  • Perhaps less surprising given the recent Office 2007 announcement, Exchange 12 is now officially Exchange Server 2007. Personally, I was hoping for something a bit snazzier, like Windows PowerMessaging Server 2007, built on Windows PowerShell technology.