The Daily Parker

Politics, Weather, Photography, and the Dog

The IDTIDC is no more

Right before Christmas I removed the four dormant servers from the Inner Drive Technology International Data Center (IDTIDC), vowing to complete the job posthaste. Well, haste was Wednesday, so now, post that, I've finally finished.

There are no more servers in my apartment. The only computers running right now are my laptop and the new NAS. (The old switch, hidden under a chair, still has a whirring fan. I may replace it with a smaller, fanless switch at some point.)

Here's before:

And here's the after:

Notice that Parker doesn't seem too freaked out by the change, though he did seem uncomfortable while things were actually changing. He's resilient, though.

Fully 18 months ago I started moving all my stuff to Microsoft Azure. Today, the project is completely done. My apartment is oddly quiet, and seems larger somehow. I can get used to this.

Too much to do

With only a few hours to go before I jet out of Chicago, I'm squeezing in client work and organizing my apartment while on conference calls. Also, I'm sending these to my Kindle:

Back to debugging...

Lunchtime link list

Once again, here's a list of things I'm sending straight to Kindle (on my Android tablet) to read after work:

Back to work. All of you.

Managing multiple CRM connections with one Azure application

We've published Part 3 of my series of blog posts about integrating Holden International's Azure-based sales training app with multiple customer-relationship management (CRM) applications. The combined parts 1 and 2 went up mid-August. Part 4 should come out within 10 days.

Here's the complete post:

In my last blog post, I outlined the code architecture that lets Holden International’s Azure-based efox 5.0 application connect with multiple, arbitrary CRM applications. In this post, I’ll show you what it looks like to the end user, using our SalesForce connector.

Users whose accounts are integrated with SalesForce can enter efox in one of two ways: either by going directly to efox, or by clicking a “launch” button from a SalesForce Opportunity record.

Going through the front door is simple enough. The user can go directly to efox, or the user can navigate to a saved URL that takes advantage of efox’s RESTful ASP.NET MVC interface. In either case the user may not have already logged into SalesForce.

Some customers haven’t set up SSO with SalesForce yet, because the SalesForce Professional edition doesn’t support it. We’re trying to get everyone using efox to upgrade, because the user experience without SSO is sub-optimal: it’s DSO (double sign-on), requiring users to keep track of two different sets of credentials (one for efox, one for SalesForce). I’ll cover that mess next time.

For now, let’s look just at the SSO user experience.

Because efox has to support non-SSO users, going to efox without being signed in to SalesForce takes you to the efox login page:

efox SignIn page

That’s not helpful to SSO users, because (a) they don’t have an efox login and (b) efox doesn’t know which SSO provider they’re using. To solve this problem, we use a RESTful URL of the form …/{CustomerCode}, where {CustomerCode} is a unique tag identifying the customer.

Note how this URL lets the SSO controller not care what CRM system the customer is using. When the user navigates to their company’s efox SSO URL, efox looks up the customer and sends the user to the correct login page. So a SalesForce-integrated user gets passed to the SalesForce login page, with a SAML request under the hood telling SalesForce to send the user back to efox:
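The lookup that paragraph describes can be sketched in a few lines. This is a hypothetical Python sketch of the idea, not efox's actual C# code; the customer table, codes, and URLs are all invented for illustration:

```python
# Hypothetical registry mapping a customer code (from the .../{CustomerCode}
# URL segment) to that customer's SSO configuration. In efox this lookup
# happens in the SSO controller against the customer database; everything
# here is invented for illustration.
CUSTOMERS = {
    "acme": {"crm": "SalesForce", "sso_login_url": "https://login.salesforce.com/"},
    "globex": {"crm": "Siebel", "sso_login_url": "https://sso.globex.example/login"},
}

def resolve_sso_redirect(customer_code: str) -> str:
    """Return the SSO login URL for a customer code.

    The controller never needs to know which CRM the customer uses;
    it only needs the identity-provider URL configured for that customer.
    """
    customer = CUSTOMERS.get(customer_code.lower())
    if customer is None:
        raise KeyError(f"unknown customer code: {customer_code}")
    return customer["sso_login_url"]
```

A SalesForce-integrated customer's users land on the SalesForce login page; a Siebel customer's users land wherever their provider lives, with no change to the controller.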

SalesForce login page

When the user logs in to SalesForce, the SAML magic happens, and they get back to their efox landing page:

Winning Sales Plan landing page

More commonly, though, users launch efox from their CRM systems. When we set up a customer to use efox, they add a “launch” button to their CRM’s Opportunity page:

SalesForce opportunity page with efox launch button

In SalesForce, the button is actually just a VisualForce link button; other systems use a basic hyperlink. That works because the button only needs to send users to a link of the form …/{CRM Opportunity ID}/{User Name}. Navigating to that link takes the user to the SSOController class in efox, which starts the login process (if required) or sends the user to the next step (if she’s already logged in).

Again, through the miracle of RESTful MVC URLs, the link works not only with SalesForce but with any other CRM provider. The SSO controller uses the user name to look up the customer, uses the customer record to determine the SSO provider, and sends the user’s login request to that provider. The return path of the login request connects to the SalesPlanController class, which actually performs the import. (If the user is already logged in, the SSOController sends the user directly to the SalesPlanController.)
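That flow is easier to see as pseudocode. This Python sketch only illustrates the branching described above; apart from SSOController and SalesPlanController, which the post names, the helper functions and the way a user name maps to a customer are assumptions:

```python
# Sketch of the launch-button flow: .../{OpportunityId}/{UserName}.
# The user name identifies the customer, the customer identifies the SSO
# provider, and a user who is already logged in goes straight to the
# sales-plan import. All helper logic is invented for illustration.

def customer_for_user(user_name: str) -> str:
    # Hypothetical: derive the customer from the user name's domain part.
    return user_name.split("@")[-1]

def sso_provider_for(customer: str) -> str:
    # Hypothetical provider lookup.
    return f"https://sso.{customer}/login"

def handle_launch(opportunity_id: str, user_name: str, logged_in: bool) -> str:
    """Mimic SSOController: log in if needed, then hand off to SalesPlanController."""
    customer = customer_for_user(user_name)
    if logged_in:
        return f"SalesPlanController.Import({customer}, {opportunity_id})"
    provider = sso_provider_for(customer)
    return f"redirect to {provider}, return path -> SalesPlanController.Import"
```

Nothing in the dispatch depends on which CRM is on the other end; that's the point of routing everything through the user-name lookup.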

I won’t go over the specifics of how the controller pulls information from the CRM system, except to refer back to last time’s discussion of the ICrmIntegration interface.

I’ve used SalesForce for the artwork in this post, but the flow and user experience would be the same for any other CRM provider. Users get to move seamlessly between efox and their CRM system, and efox doesn’t care whether it’s connecting to SalesForce, Siebel, or a squirrel.

In my next post, I’ll talk about some of the problems we had to solve in order to let people use efox without SSO.

Hanselman's Azure Glossary for the Confused

Microsoft's Scott Hanselman has published one:


Infrastructure as a Service. This means, I want the computers in my closet to go away. All that infrastructure, boxes, network switches, even software licenses are a headache. I want to put them somewhere where I can't see them (we'll call it, The Cloud) and I'll pay pennies an hour. Worst case, it costs me about the same but it's less trouble. Best case, it can scale (get bigger) if some company gets popular and it will cost less than it does now.

IaaS is Virtual Machines, Networking and Storage in the cloud. Software you wrote that runs locally now will run the same up there. If you want to scale it, you'll usually scale up.


Platform as a Service. This means Web Servers in the cloud, SQL Servers in the cloud, and more. If you like Ruby on Rails, for example, you might write software against Engine Yard's platform and run it on Azure. Or you might write iOS apps and have them talk to back end Mobile Services. Those services are your platform and will scale as you grow. Platform as a service usually hides the underlying OS from you. Lower level infrastructure and networking, load balancing and some aspects of security are abstracted away.

If you're interested in Cloud or Azure development, or you want to understand more about what I do for a living, take a look.

Configuring FTP on a moved Azure VM

A couple weeks back I moved an Azure Virtual Machine from one subscription to another. Since then, I haven't been able to connect to the FTP sites running on it. I finally spent some time today figuring out why.

First, I forgot to change the FTP firewall support in IIS. The IP address of the VM changed, so I needed to update the VM's external IP address here:

Then, I had to change the FTP firewall support for the FTP site itself. (It looks the same, just on the FTP site instead of on the IIS root node.)

Ronald Door has a good walk-through of how to set this up for the first time on a new VM. The problem is, I'd already set it up on the VM, so I thought that I only had to make the configuration changes I've just described.

An hour and a lot of swearing later, I realized one more problem. See, I set up the VM endpoints through the Azure portal when I launched the VM in the new subscription. However, it looks like I configured them incorrectly. And Microsoft updated the portal last week.

I finally decided simply to delete all four FTP endpoints (ports 20 and 21 plus my two passive data return ports) and rebuild them. Endpoint setup is on the Endpoints tab of the Virtual Machine cloud service item:

That worked. The FTP spice flows fine now.

I'm troubled that I don't know exactly why it worked, though. The only difference between the current and previous setup is that before, I inadvertently created load-balanced sets for the ports. Since I only have one VM, that may have been my error.

Unexpectedly productive weekend

Yes, I know the weather's beautiful in Chicago this weekend, but sometimes you just have to run with things. So that's what I did the last day and a half.

A few things collided in my head yesterday morning, and this afternoon my computing landscape looks completely different.

First, for a couple of weeks I've led my company's efforts to consolidate and upgrade our tools. That means I've seen a few head-to-head comparisons between FogBugz, Atlassian tools, and a couple other products.

Second, in the process of moving this blog to Orchard, I've had some, ah, challenges getting Mercurial and Git to play nicely together. Orchard just switched to Git, and promptly broke Hg-Git, forcing contributors to enlist in Git directly.

Third, my remote Mercurial repositories are sitting out on an Azure VM with no automation around them. Every time I want to add a remote repository I have to remote into the VM and add it to the file system. Or just use my last remaining server, which, still, requires cloning and copying.

Fourth, even though the VM was doing a lot more when I created it a year ago, right now it's got just a few things running on it: The Daily Parker, Hired Wrist, my FogBugz instance, and two extinct sites that I keep up because I'm a good Internet citizen: the Inner Drive blog and a party site I did ten years ago.

Fifth, that damn VM costs me about $65 a month, because I built a small instance so I'd have adequate space and power. Well, serving 10,000 page views per day takes about as much computational power as the average phone has these days, so its CPU never ticks over 5%. Microsoft has an "extra small" size that costs 83% less than "small" and is only 50% less powerful.

Finally, on Friday my company's MSDN benefits renewed for another year, one benefit being $200 of Azure credits every month.

I put all this together and thought to myself, "Self, why am I spending $65 a month on a virtual machine that has nothing on it but a few personal websites and makes me maintain my own source repository and issue tracker?"

Then yesterday morning came along, and these things happened:

  1. I signed up for Atlassian's tools, Bitbucket (which supports both Git and Mercurial) and JIRA. The first month is free; after that, the combination costs $20 a month for up to 10 users.
  2. I learned how to use JIRA. I don't mean I added a couple of cases and poked around with the default workflow; I mean I figured out how to set up projects, permissions, notifications, email routing, and on and on, almost to the extent I know FogBugz, which I've used for six years.
  3. I wrote a utility in C# to export my FogBugz data to JIRA, and then exported all of my active projects with their archives (about 2,000 cases).
  4. I moved the VM to my MSDN subscription. This means I copied the virtual hard disk (VHD) underpinning my VM to the other subscription and set up a new VM using the same disk over there. This isn't trivial; it took over two hours.
  5. I changed all the DNS entries pointing to the old VM so they'd point to the new VM.
  6. Somewhere during all that time, I took Parker on a couple of long walks for about 2½ hours.

At each point in the process, I only planned to do a small proof-of-concept, but each one somehow became a completed task. In fact, yesterday I'd intended to pick up my dry cleaning, but somehow I went from 10am to 5pm without knowing how much time had gone by. I haven't experienced flow in a while, so I didn't recognize it at the time. Parker, good dog that he is, let me go until about 5:30 before insisting he had to go outside.

I guess the last day and a half was an apotheosis of sorts. Fourteen months ago, I had a data center in my living room; today I've not only got everything in the Cloud, but I'm no longer wasting valuable hours messing around configuring things.

Oh, and I also just bought a 2 TB portable drive for $130, making my 512 GB NAS completely redundant. One fewer thing using electricity in my house...

Update: I forgot to include the code I whipped up to create .csv export files from FogBugz.
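The utility itself still isn't reproduced here. As a stand-in, here's a minimal Python sketch of the general approach (the author's actual utility was C#); the case records and field names are invented, since JIRA's CSV importer maps whatever columns it finds to issue fields at import time:

```python
import csv

# Hypothetical case records standing in for rows pulled out of FogBugz;
# the field names are invented for illustration.
cases = [
    {"id": 1, "title": "Fix login redirect", "status": "Active", "project": "efox"},
    {"id": 2, "title": "Update DNS entries", "status": "Closed", "project": "Ops"},
]

def export_cases(cases, path):
    """Write cases to a CSV file that an issue-tracker importer can map."""
    fieldnames = ["id", "title", "status", "project"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(cases)

export_cases(cases, "fogbugz_export.csv")
```

The real work in a migration like this is flattening each tracker's schema (statuses, priorities, users) into columns the target importer understands; the CSV writing itself is the easy part.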

Articles I've sent to my Kindle

...because I didn't have time to read them today:

I will now go home and read these things on the way.

Re-evaluating tools. Again.

At 10th Magnitude, we have used Beanstalk as our central code repository. We transitioned to Mercurial about a year ago, which Beanstalk supported.

Today they sent around an email saying they're ceasing Mercurial support—including existing repositories—on September 30th, and would we care to switch to Git?

No. No, no, no. No Git. I'm not asking people to learn another damn version control system. (Plus Git doesn't quite suit us.)

But fortuitously, this forced re-evaluation of Beanstalk coincides with a general self-reflective re-evaluation we have underway. That doesn't mean we're going to Git, or (angels and ministers of grace, defend us!) back to Subversion, but as long as we have to move off Beanstalk, why not take a look at our issue tracking, external bug reporting, project management, and document sharing?

I'll have more about this as we get closer to the September 30th date, along with some awesome stuff about how we have developed an Azure application that does single sign-on with...just about any identity provider.

Inner Drive Azure benefits

As I promised four weeks ago, I have the final data on moving all my stuff to Windows Azure. I delayed posting this data because Azure pricing recently changed, as a number of services went from Preview to Production and stopped offering 25% discounts.

The concrete results are mixed at the moment, though they should improve within the next couple of months. The intangible results are much, much improved.

First, electricity use. Looking at comparable quarters (March through May), my electricity consumption is down two thirds—even before air-conditioning season:

Consumption from March through May 2013 was 1028 kWh, compared with 3098 kWh over the same period in 2012. But this explains why the concrete benefits will improve: during June through August 2011, when all of the servers were running and so was the air conditioner, I used 4115 kWh. I'm expecting to use less than 1800 kWh this summer, just a little more than the one-month consumption in June 2011.
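The arithmetic behind "down two thirds" checks out against those figures:

```python
# March-May electricity consumption (kWh), from the figures above.
kwh_2012 = 3098
kwh_2013 = 1028

reduction = 1 - kwh_2013 / kwh_2012
print(f"{reduction:.0%}")  # prints 67% -- almost exactly two thirds
```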

Costs, alas, have not fallen as much as hoped, unless you add in the replacement costs of the servers. I'm currently running 3 SQL Database servers (consuming 2 GB), 3 extra-small cloud service instances, 1 small virtual machine, and 55 GB of storage. Total cost: $150 a month.

Electricity in June 2013 was $55, compared with $165 in June 2012.

Add the Office 365 subscription that replaced my Exchange server, at $26 a month.

Finally, DSL and phone service went down from $115 to $60, because I dropped the phone service. Temporarily I'm supplementing the DSL with a FiOS service for $30. In a few months, when AT&T bumps the FiOS from 1.5 Mbps to 30 Mbps (they promise!), FiOS will go up to $50 and the DSL will go away.

So, cash flow for June 2012 was $279, and for June 2013 was $289. Factoring in the variability of electricity costs, Azure costs essentially the same as running my own rack.

What about the intangible costs? Well, let's see...I no longer have 8U of rack-mounted servers spinning their cooling fans 24/7 in my Chicago apartment. When I shut them off, the place got so much quieter I could hardly believe it. And I no longer worry about the power going out and losing email while I'm out of town.

In other words, I'm literally sleeping better.

Also, moving to Azure forced me to refactor my demo site Weather Now so extensively that I can now add a ton of really cool features that the old design couldn't support. (Once I have free time again. Someday.)

When you also consider the cost of replacing the three end-of-life servers ($6,000 worth of hardware), the dollars change considerably. Using 60-month depreciation, that's $100 per month of savings on the Azure side of the ledger. I'm not counting it, though, because I might have limped along for a couple more years without replacing them, so it's hard to tell.
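The depreciation number is straightforward:

```python
# $6,000 of replacement hardware, spread over a 60-month (5-year)
# depreciation schedule.
hardware_cost = 6000
months = 60

monthly_savings = hardware_cost / months
print(monthly_savings)  # prints 100.0 -- i.e., $100 per month
```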

So: dollars, same; sleeping, better. A clear win for Azure.