Politics, Weather, Photography, and the Dog
Tuesday 15 July 2014

Microsoft veteran Raymond Chen explains why Ctrl-F doesn't "find" in Outlook, like it does in every other modern application across the universe:

It's a widespread convention that the Ctrl+F keyboard shortcut initiates a Find operation. Word does it, Excel does it, Wordpad does it, Notepad does it, Internet Explorer does it. But Outlook doesn't. Why doesn't Outlook get with the program?

Rewind to 1995.

The mail team was hard at work on their mail client, known as Exchange (code name Capone, in keeping with all the Chicago-related code names from that era). Back in those days, the Ctrl+F keyboard shortcut did indeed call up the Find dialog, in accordance with convention.

And then a bug report came in from a beta tester who wanted Ctrl+F to forward rather than find, because he had become accustomed to that keyboard shortcut from the email program he used before Exchange.

I'll let Chen give you the punchline.

Tuesday 15 July 2014 17:13:10 EDT (UTC-04:00)  |  | Software | Business#
Friday 11 July 2014

I have to dash off to a meeting in a few minutes, then to Wrigley. So this is more of a note to myself.

Lucene.NET will be coming to Weather Now, I hope in a few weeks. This will massively improve its piss-poor searching, and allow me to do a few other things as well given Lucene's amazing search capabilities.

Unfortunately, Weather Now ranks third in development priorities behind my employer and my long-suffering freelance client. At least it kind of runs itself these days.

Friday 11 July 2014 12:44:00 CDT (UTC-05:00)  |  | Software | Business | Cloud | Windows Azure#
Thursday 10 July 2014

Imagine you're sitting on your front stoop, strumming your guitar, and Eric Clapton comes out of nowhere to give you some pointers.

That's about what happened to me today. Earlier this week, Jon Skeet (described by Scott Hanselman as "the world's greatest living programmer") noticed something I posted on the IANA Time Zone list, and asked me about the Inner Drive Time Zone library.

So I sent him the package.

And this afternoon, he sent me a benchmark that he wrote for it. Just like that.

Of course, the benchmark showed that my stuff was about 10% as fast as the Noda Time library, an open-source project he curates. So, having a couple of hours for "personal development time" available, I optimized it.

The specific details of the optimization are not that interesting, but I managed to more than double the library's performance by changing about ten lines of code. (It's now 20% as fast as Noda.) Along the way I exchanged about 10 emails with Skeet, because he kept making really crack suggestions and giving me valuable feedback about both my design and his.

That was cool.

Thursday 10 July 2014 17:52:18 CDT (UTC-05:00)  |  | Software | Business | Astronomy#
Thursday 26 June 2014

A Comcast installer showed up this morning within the appointed time frame, and in about an hour had taken my apartment (the Inner Drive Technology World Headquarters) from this:

To this:

I almost want to dance around singing "A Whole New World," but that would be very disturbing to my self-image.

Instead I'll head into the office, getting in a little earlier than I expected, and come home to real Internet speeds. In fact, I think right now I'll watch something on YouTube just because I can.

Goodbye, AT&T. Hello Comcast, you gorgeous thing.

Thursday 26 June 2014 11:36:51 CDT (UTC-05:00)  |  | Software | Cloud | Work#
Sunday 18 May 2014

I have a totally-not-boring software deployment today. This is not optimal.

Sunday 18 May 2014 09:57:04 CDT (UTC-05:00)  |  | Software | Business | Cloud | Windows Azure#
Saturday 17 May 2014

Short answer: You can't. So don't try.

If you want to find out how I solved the problem (and what that problem actually was), click through.

Saturday 17 May 2014 14:27:42 CDT (UTC-05:00)  |  | Software | Cloud | Windows Azure#
Monday 12 May 2014

I'm uploading a couple of fixes to Inner-Drive.com right now, so I have a few minutes to read things people have sent me. It takes a while to deploy the site fully, because the Inner Drive Extensible Architecture™ documentation (reg.req.) is quite large—about 3,000 HTML pages. I'd like to web-deploy the changes, but the way Azure cloud services work, any changes deployed that way get overwritten as soon as the instance reboots.

All of the changes to Inner-Drive.com are under the hood. In fact, I didn't change anything at all in the website. But I made a bunch of changes to the Azure support classes, including a much better approach to logging inspired by a conversation I had with my colleague Igor Popirov a couple of weeks ago. I'll go into more details later, but suffice it to say, there are some people who can give you more ideas in one sentence than you can get in a year of reading blogs, and he's one of them.

So, while sitting here at my remote office waiting for bits to upload, I encountered these things:

  • The bartender's iPod played "Bette Davis Eyes" which immediately sent me back to this.
  • Andrew Sullivan pointed me (and everyone else who reads his blog) towards the ultimate Boomer fantasy, the live-foreverists. (At some point in the near future I'm going to write about how much X-ers hate picking up after both Boomers and Millennials, and how this fits right in. Just, not right now.)
  • Slate's Jamelle Bouie believes Wisconsin's voter rights decision is a win for our cause. ("Our" in this case includes those who believe retail voter fraud is so rare as to be a laughable excuse for denying a sizable portion of the population their voting rights, especially when the people denied voting rights tend to be the exact people who Republicans would prefer not to vote.)

OK, the software is deployed, and I need to walk Parker now. Maybe I'll read all these things after Game of Thrones.

Sunday 11 May 2014 21:15:29 CDT (UTC-05:00)  |  | Kitchen Sink | US | Software | Business | Windows Azure#
Sunday 4 May 2014

Apparently OCR software sometimes still has trouble interpreting older books:

[A]s Sarah Wendell, editor of the Romance blog  Smart Bitches, Trashy Books noticed recently, something has gone awry. Because, in many old texts the scanner is reading the word ‘arms’ as ‘anus’ and replacing it as such in the digital edition. As you can imagine, you don’t want to be getting those two things mixed up.

The resulting sentences are hilarious, turning tender scenes of passionate embrace into something much darker, and in some cases, nearly physically impossible. The Guardian’s Alison Flood quotes some of the best:

From the title Matisse on the Loose: “When she spotted me, she flung her anus high in the air and kept them up until she reached me. ‘Matisse. Oh boy!’ she said. She grabbed my anus and positioned my body in the direction of the east gallery and we started walking.”

And ‘”Bertie, dear Bertie, will you not say good night to me” pleaded the sweet, voice of Minnie Hamilton, as she wound her anus affectionately around her brother’s neck. “No,” he replied angrily, pushing her away from him.”‘ Well, wouldn’t you?

As Flood notes, a quick search in Google Books reveals that the problem is widespread. Parents should keep their children away from the ebook edition of the 1882 children’s book Sunday Reading From the Young. It all seems perfectly innocent until… “Little Milly wound her anus lovingly around Mrs Green’s neck and begged her to make her home with them. At first Mrs Green hesitated.” And who can really blame her?


Sunday 4 May 2014 08:02:11 CDT (UTC-05:00)  |  | Software | Writing#
Friday 2 May 2014

The Inner Drive Extensible Architecture™ is about to get wider distribution.

After 11 years of development, I think it's finally ready. And, who knows, maybe I'll make a couple of bucks.

I've updated the pricing structure and the license agreement, and in the next week or so (after some additional testing), I'm going to release it to NuGet.

That doesn't make it free; that makes it available. (Actually, I am making it free for development and testing, but I'm charging for commercial production use.)

I'll have more to say on this once it's released.

Thursday 1 May 2014 19:16:24 CDT (UTC-05:00)  |  | Software | Business#
Tuesday 29 April 2014


More later.

Tuesday 29 April 2014 17:08:11 CDT (UTC-05:00)  |  | Chicago | US | Software#
Monday 21 April 2014

I don't know how this little snippet of code got into a project at work (despite the temptation to look at the file's history):

Search = Search.Replace("Search…", "");

// Perform a search that depends on the Search member being an empty string
// Display the list of things you find

First, I can't fathom why the original developer made the search method dependent on a hard-coded string.

Second, as soon as the first user hit this code after switching her user profile to display Japanese, the search failed.

The fix was crushingly simple:

var localizedSearchPrompt = GetLocalizedString("SearchPrompt");
Search = Search.Replace(localizedSearchPrompt, string.Empty);

(The GetLocalizedString method in this sample is actually a stub for the code that gets the string in the current user's language.)

The moral of the story is: avoid hard-coded strings, just like you've been taught since you started programming.

Monday 21 April 2014 15:31:25 CDT (UTC-05:00)  |  | Software | Work#
Saturday 22 March 2014

I had planned to post some photos tonight showing the evolution of digital cameras, using a local landmark, but there's a snag. The CF card reader I brought along isn't showing up on my computer, even though the computer acknowledges that something is attached through a USB port.

As I'm visiting one of the most sophisticated and technological cities in the world, I have no doubt I can fix this tomorrow. Still, it's always irritating when technology that worked a few days ago simply stops working.

For those doubting my troubleshooting skills, I have confirmed that the CF card has all the photos I shot today; that the computer can see the CF card reader; and that the computer can connect effectively to other USB attachments. The problem is therefore either in the OS or in the card reader, and I'm inclined to suspect the card reader.

Saturday 22 March 2014 02:58:25 GMT (UTC+00:00)  |  | London | Software#
Wednesday 19 March 2014

The repercussions from Monday's data-recovery debacle continued through yesterday.

By the time business started Tuesday morning, I had restored the client's application and database to the state it had at the moment of the upgrade, and I'd entered most of their appointments, including all of them through tomorrow (Thursday). When the client started their day, everything seemed to be all right, except for one thing I also didn't know about their business: some of their customers pay them based on the appointment ID, which is nothing more than a SQL IDENTITY column in the database.

If you know how databases work, you know that IDENTITY columns make no promises about the specific values they generate. In this case the column increments by one every time the table adds a row, but I didn't re-enter the data in the same order it was originally entered, since I prioritized the earlier appointments. The re-entered rows therefore got different IDs than the originals.
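The failure mode is easy to demonstrate with any auto-numbering column. Here's a sketch using SQLite's AUTOINCREMENT as a stand-in for SQL Server's IDENTITY (the table and names are hypothetical, not my client's schema):

```python
import sqlite3

# Re-entering the same rows in a different order hands out different surrogate
# keys, which breaks anyone who treated those keys as business identifiers.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE appointment (id INTEGER PRIMARY KEY AUTOINCREMENT, customer TEXT)")

con.executemany("INSERT INTO appointment (customer) VALUES (?)",
                [("Alice",), ("Bob",)])
original = dict(con.execute("SELECT customer, id FROM appointment"))

# Simulate the roll-back and manual re-entry, this time in a different order.
con.execute("DELETE FROM appointment")
con.executemany("INSERT INTO appointment (customer) VALUES (?)",
                [("Bob",), ("Alice",)])
restored = dict(con.execute("SELECT customer, id FROM appointment"))

print(original)   # {'Alice': 1, 'Bob': 2}
print(restored)   # {'Bob': 3, 'Alice': 4} -- every "appointment ID" has changed
```

Same customers, same appointments, entirely different IDs: exactly what my client's customers ran into.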

We've gotten through the problem now, and the client no longer wants to put my head on a spike, so I will now take a moment for an after-action review that might help other software developers in the future.

First, the things I did right:

  • When I deployed the upgrade Saturday, I preserved the state of the database and application at exactly that moment.
  • All of the data in the system, every field of it, was audited. It was trivially easy to produce a report of every change made to the system from roll-out Saturday afternoon through roll-back Monday night.
  • When I rolled back the upgrade Monday night, I preserved the state of the upgraded database and application at exactly that moment.
  • When the client first noticed the problem, I dropped everything else and worked out a plan with them. The plan centered around getting their business back up first, and then dealing with the technology.
  • Their customers were completely back to normal at the start of business Tuesday.
  • The application runs on Windows Azure, which made preserving the old application state not only easy, but possible.

So what should I have done better?

  • My biggest error was overconfidence in my ability to roll back the upgrade. No matter what other errors I made, this was the root of all of them.
  • The second major error was not testing the UI on Internet Explorer 8. Mitigating this was the fact that neither I nor my client knew that the bulk of their customers used IE8. However, given that people using IE8 were totally unable to use the application, the severity of the impact should have put IE8 near the top of my regression-test checklist, even if the number of affected customers had been small.
  • Instead of spending a couple of hours re-entering data, I should have written a script to do it.
  • I have always regretted (though never more than today) publicizing the appointments IDENTITY column to the end user, because it's normal they'd use this ID for business purposes. This illustrates the danger—not just the sloppy design—of using a single database field for two purposes. Any future version of the application will have an OrderID field that is not a database plumbing field.

All in all, the good things outweighed the bad, and I may get back in my client's good graces when I roll out the next update. You know, the one that works on IE8, but still solves the looming problem of the platform's age.

Wednesday 19 March 2014 09:59:00 CDT (UTC-05:00)  |  | Software | Business | Cloud | Windows Azure#
Saturday 15 March 2014

I've just picked up Bob Martin's The Clean Coder: A Code of Conduct for Professional Programmers.

I'm just a couple of chapters in, and I find myself agreeing with him vehemently. So far, he's reinforcing the professional aspects of the profession. The next chapters cover TDD, testing, estimation, and a bunch of other parts of the job that are more difficult than the actual coding.

More later.

Saturday 15 March 2014 14:06:41 CDT (UTC-05:00)  |  | Software | Business#
Tuesday 11 March 2014

I just did a dumb thing in Mercurial, but Mercurial saved me. Allow me to show, vividly, how using a DVCS can prevent disaster when you do something entirely too human.

In the process of upgrading to a new database package in an old project, I realized that we still need to support the old database version. What I should have done, of course, was come to this realization before making a bucket-load of changes. But never mind that for now.

I figured I just needed to create a branch for the old code. Before taking this action, my repository looked like this:

Thinking I was doing the right thing, I right-clicked the last commit and added a branch:


Well, now I have a problem. I wanted the uncommitted changes on the default branch, and the old code on the 1.0 branch. Now I have the opposite condition.

Fortunately this is Mercurial, so nothing has left my own computer yet. So here's what I did to fix it:

  1. Committed the changes to the 1.0 branch of this repository. The commit is in the wrong branch, but it's atomic and stable.
  2. Created a patch from the commit.
  3. Cloned the remote (which, remember, doesn't have the changes) back to my local computer.
  4. Created the branch on the new clone.
  5. Committed the new branch.
  6. Switched branches on the new clone back to default.
  7. Applied the patch containing the 2.0 changes.
  8. Deleted the old, broken repository.

Now it looks like this:

Now all is good in the world, and no one in my company needs to know that I screwed up, because the screw-up only affected my local copy of the team's repository.

It's a legitimate question why I didn't create a 2.0 branch instead. In this case, the likelihood of an application depending on the 1.0 version is small enough that the 1.0 branch is simply insurance against not being able to support old code. By creating a branch for the old code, we can continue advancing the default branch, and basically forget the 1.0 branch is there unless calamity (or a zombie application) strikes.
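The same rescue works outside the GUI. Here's a disposable reenactment of the steps above (a sketch under assumptions: repository names and file contents are invented), driven from Python so it skips cleanly when hg isn't installed:

```python
import shutil, subprocess, tempfile
from pathlib import Path

def hg(*args, cwd):
    """Run an hg command in the given directory, raising on failure."""
    subprocess.run(["hg", *args], cwd=str(cwd), check=True, capture_output=True)

status = None
if shutil.which("hg"):
    work = Path(tempfile.mkdtemp())

    # A stand-in for the remote, which never saw the bad commit.
    remote = work / "remote"
    remote.mkdir()
    hg("init", cwd=remote)
    (remote / "app.txt").write_text("1.0 code\n")
    hg("add", "app.txt", cwd=remote)
    hg("commit", "-u", "setup", "-m", "initial commit", cwd=remote)

    # The broken clone: 2.0 work accidentally committed to the 1.0 branch
    # (step 1), then exported as a patch (step 2).
    broken = work / "broken"
    hg("clone", str(remote), str(broken), cwd=work)
    hg("branch", "1.0", cwd=broken)
    (broken / "app.txt").write_text("1.0 code\n2.0 feature work\n")
    hg("commit", "-u", "me", "-m", "2.0 changes, wrong branch", cwd=broken)
    hg("export", "-o", str(work / "changes.patch"), "tip", cwd=broken)

    # Steps 3-7: fresh clone, recreate the 1.0 branch (a branch change alone
    # is committable in hg), switch back to default, and re-apply the 2.0
    # work as uncommitted changes.
    fixed = work / "fixed"
    hg("clone", str(remote), str(fixed), cwd=work)
    hg("branch", "1.0", cwd=fixed)
    hg("commit", "-u", "me", "-m", "open the 1.0 branch", cwd=fixed)
    hg("update", "default", cwd=fixed)
    hg("import", "--no-commit", str(work / "changes.patch"), cwd=fixed)

    status = subprocess.run(["hg", "status"], cwd=str(fixed),
                            capture_output=True, text=True).stdout.strip()
    print(status)  # the 2.0 changes are back, uncommitted, on default
    shutil.rmtree(work)
```

The `--no-commit` flag matters: an exported patch records its source branch, so a plain `hg import` would land the changes right back on 1.0.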

Tuesday 11 March 2014 14:38:49 CDT (UTC-05:00)  |  | Software#
Friday 7 March 2014

The test configuration earlier today wasn't the problem. It turned out that MSBuild simply didn't know it had to pull in the System.Web.Providers assembly. Fortunately, this guy suggested a way to do it. I created a new file called AssemblyInit that looks like this:

using System.Diagnostics;
using System.Web.Providers;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyApp
{
   [TestClass]
   public class AssemblyInit
   {
      [AssemblyInitialize]
      public static void Initialize(TestContext context)
      {
         Trace.WriteLine("Initializing System.Web.Providers");
         var dummy = new DefaultMembershipProvider();
         Trace.WriteLine(string.Format("Instantiated {0}", dummy));
      }
   }
}

That does nothing more than create a hard reference to System.Web.Providers, causing MSBuild to affirmatively import it.

Now all my CI build works, the unit tests work, and I can go have a weekend.

Friday 7 March 2014 17:41:56 CST (UTC-06:00)  |  | Software#

One of my tasks at my day job today is to get continuous integration running on a Jenkins server. It didn't take too long to wrestle MSBuild to the ground and get the build working properly, but when I added an MSTest task, a bunch of unit tests failed with this error:

System.IO.FileNotFoundException: Could not load file or assembly 'System.Web.Providers, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

The System.Web.Providers assembly is properly referenced in the unit test project (it's part of a NuGet package), and the assembly's Copy Local property is set to True.

When the unit tests run from inside Visual Studio 2013, they all work. When ReSharper runs them, they all work. But when I execute the command line:

MSTest.exe /resultsfile:MSTestResults.trx /testcontainer:My.Stupid.Test\bin\My.Stupid.Test.dll /test:MyFailingTest

...it fails with the error I noted above.

I'll spare you the detective work, because I have to get back to work, but I did find the solution. I marked the failing test with a DeploymentItemAttribute:

[TestMethod]
[DeploymentItem("System.Web.Providers.dll")]
public void MyFailingTest()

Now, suddenly, everything works.

And people wonder why I hate command line crap.

Friday 7 March 2014 14:15:00 CST (UTC-06:00)  |  | Software#
Saturday 1 March 2014

(Photo: Parker, 14 weeks)

I'm David Braverman, this is my blog, and Parker is my 7½-year-old mutt. I last updated this About... page in September 2011, more than 1,300 posts back, so it's time for a refresh.

The Daily Parker is about:

  • Parker, my dog, whom I adopted on 1 September 2006.
  • Politics. I'm a moderate-lefty by international standards, which makes me a radical left-winger in today's United States.
  • The weather. I've operated a weather website for more than 13 years. That site deals with raw data and objective observations. Many weather posts also touch politics, given the political implications of addressing climate change, though happily we no longer have to do so under a president beholden to the oil industry.
  • Chicago (the greatest city in North America), and sometimes London, San Francisco, and the rest of the world.
  • Photography. I took tens of thousands of photos as a kid, then drifted away from making art until early 2011 when I finally got the first digital camera I've ever had whose photos were as good as film. That got me reading more, practicing more, and throwing more photos on the blog. In my initial burst of enthusiasm I posted a photo every day. I've pulled back from that a bit—it takes about 30 minutes to prep and post one of those puppies—but I'm still shooting and still learning.

I also write a lot of software, and will occasionally post about technology as well. I work for 10th Magnitude, a startup software consultancy in Chicago, I've got more than 20 years experience writing the stuff, and I continue to own a micro-sized software company. (I have an online resume, if you're curious.) I see a lot of code, and since I often get called in to projects in crisis, I see a lot of bad code, some of which may appear here.

I strive to write about these and other things with fluency and concision. "Fast, good, cheap: pick two" applies to writing as much as to any other creative process (cf: software). I hope to find an appropriate balance among the three, as stream-of-consciousness and literacy have struggled against each other since the first blog twenty years ago.

If you like what you see here, you'll probably also like Andrew Sullivan, James Fallows, Josh Marshall, and Bruce Schneier. Even if you don't like my politics, you probably agree that everyone ought to read Strunk and White, and you probably have an opinion about the Oxford comma—punctuation de rigueur in my opinion.

Thanks for reading, and I hope you continue to enjoy The Daily Parker.

Saturday 1 March 2014 14:27:44 CST (UTC-06:00)  |  | Aviation | Baseball | Biking | Cubs | Geography | Kitchen Sink | London | Parker | Daily | Photography | Politics | US | World | Religion | Software | Blogs | Business | Cloud | Travel | Weather | Windows Azure | Work | Writing#
Thursday 27 February 2014

Security guru Bruce Schneier wonders if the iOS security flaw recently reported was deliberate:

Last October, I speculated on the best ways to go about designing and implementing a software backdoor. I suggested three characteristics of a good backdoor: low chance of discovery, high deniability if discovered, and minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a second "goto fail;" statement. Since that statement isn't a conditional, it causes the whole procedure to terminate.

If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what's going on inside Apple?

Schneier has argued previously that the NSA's biggest mistake was dishonesty. Because we don't know what they're up to, and because they've lied so often about it, people start to believe the worst about technology flaws. This Apple error could have been a stupid programmer error, merge conflict, or something in that category. But we no longer trust Apple to work in our best interests.

This is a sad state of affairs.

Thursday 27 February 2014 08:27:46 CST (UTC-06:00)  |  | US | Software | Business | Security#
Sunday 16 February 2014

I remember, back in .NET prehistory (2001), that one of .NET's biggest benefits was to be the end of DLL hell. Yet I spent half an hour this afternoon trying to get a common package (Entity Framework 6) to install in a project that never had that package in the first place—because of a version conflict with .NET itself.

When I tried to install EF6, the NuGet package installer failed the installation with the message "This operation would create an incorrectly structured document". A quick check of StackOverflow suggested a couple of possible causes:

  • The Entity Framework installer creates an invalid web.config file because it gets confused about the older project's XML namespaces.
  • The EF installer chokes on .NET 4.5 and .NET 4.5.1 because it's broken.

Anyone who's spent time with Microsoft products should immediately suspect that hypothesis #2 is unlikely. No, seriously: Microsoft releases things that have bad usability, rude behavior, and incomplete features all the time, but they have some incredible QA people. This fits that pattern: the installer script works fine. It just has pretty dismal error reporting.

So after removing every trace of EF from the relevant files, and downgrading the app to .NET 4.0 from .NET 4.5.1, EF still wouldn't install. Only at this point did I start thinking about the problem.

Let's review: I had an error message about an incorrectly-structured document. The document in question was almost certainly web.config, which I could tell because the EF6 installation kept changing it. The web.config file is an XML document. XML allows you to specify a namespace. This particular XML document had a namespace defined. A Stack Overflow commenter had mentioned namespaces. Um...

At this point I changed the web.config header element from this:

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">

to this:

<configuration>
That fixed it.
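The diagnosis generalizes: a default namespace silently changes the name of every element in the document, so any tool that looks elements up by their bare names finds nothing. A quick sketch with Python's standard XML parser (hypothetical element names, as a plausible stand-in for what the installer script hit):

```python
import xml.etree.ElementTree as ET

plain = "<configuration><appSettings/></configuration>"
namespaced = ('<configuration '
              'xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">'
              '<appSettings/></configuration>')

# Without a namespace, a bare-name lookup works...
assert ET.fromstring(plain).find("appSettings") is not None

# ...but a default namespace renames every element in the document,
# so the same bare-name lookup finds nothing.
assert ET.fromstring(namespaced).find("appSettings") is None

print(ET.fromstring(namespaced)[0].tag)
# -> {http://schemas.microsoft.com/.NetConfiguration/v2.0}appSettings
```

Remove the xmlns attribute and the "incorrectly structured document" goes back to being a perfectly ordinary one.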

The moral of this story: read error messages carefully, form hypotheses based on the data you have available, and even before that, stop and think. And even if you're not a Microsoft developer working on NuGet package installer scripts, always give as much detail as possible in error messages, so that developers who read them can spend less time trying to understand why the operation they thought was simple took so long to accomplish.

Regular readers of this blog know how irritated I get when error messages don't actually explain the error. I'm on developers for this all the time. It's rude; it's lazy; it costs people irrecoverable time. This is one of those times.

Sunday 16 February 2014 17:35:55 CST (UTC-06:00)  |  | Software | Cloud | Windows Azure#
Monday 27 January 2014

I'm torn.

Or I'm a dinosaur. Or I'm a Perceiver. Or I'm a senior software development manager who's sick of changing technologies.

My current drama is between continuing to use Mercurial on one hand, and switching to Git on the other. Both are distributed version control systems, so both enable a load of flexibility in single- or multi-developer workflows. I know that sounds like jargon, so let me explain.

No, there is too much; let me sum up: If you don't have to share every little change you make to a software project, everyone is better off.

The cold war between these two products has created two problems that appear to have nothing to do with each other:

  • We have multiple software projects that we have to continue to support in production while we build entirely new hunks of them (which will take months); and
  • All of the cool tools for integration and deployment work with Git, while not all of them work with Mercurial.

I suppose I shouldn't be surprised. I chose Betamax and Laserdisc as well. I have a real weakness for the best technical solution, even while the popular solution takes the lead. (Both Betamax and LaserDisc had superior audio and slightly better video than the products that defeated them. At least I held off choosing between HD DVD and Blu-Ray until one of them was cold and dead in the ground.)

I digress.

I'm annoyed that Git is moving so far ahead of Mercurial that the gap itself has become an argument against Mercurial. I assert this is an argument from popularity, not logic. But I also get really tired of swimming upstream, and if Microsoft, Bitbucket, and a bunch of other companies are pushing a technology, who am I to blow against the wind?

A conclusion, to the extent possible, will follow shortly.

Sunday 26 January 2014 21:37:45 CST (UTC-06:00)  |  | Software#
Monday 30 December 2013

I just saved myself hours of pain by creating a unit test around a simple method that turned out to have a subtle bug.

The method in question calculates the price difference between two subscriptions for a product. If you're using the product, and you use more of it, the cost goes up. Every day, the application looks to make sure you're only using your allotted amount. If you go over, you get automatically bumped to the next subscription level and charged the difference, pro-rated by how much of the subscription term is left.

Here's the basic code:

var delta = subscription.IsAnnual ? 
   newTier.AnnualPrice - currentTier.AnnualPrice : 
   newTier.MonthlyPrice - currentTier.MonthlyPrice;

All well and good, except MonthlyPrice, for reasons known only to the previous developer, is nullable. So in order to prevent an ugly error message, I made sure it could never be null using the ?? operator:

var delta = subscription.IsAnnual ? 
   newTier.AnnualPrice - currentTier.AnnualPrice : 
   newTier.MonthlyPrice ?? 0m - currentTier.MonthlyPrice ?? 0m;

Do you see the problem? I didn't. But I learned today that - takes precedence over ??. So here's the correction:

var delta = subscription.IsAnnual ? 
   newTier.AnnualPrice - currentTier.AnnualPrice : 
   (newTier.MonthlyPrice ?? 0m) - (currentTier.MonthlyPrice ?? 0m);

I discovered that when the unit test I wrote kept insisting that 6 - 6 = 6. This is because without the parentheses where I put them, the compiler thinks I meant this:

newTier.MonthlyPrice ?? ((0m - currentTier.MonthlyPrice) ?? 0m)

In English, the compiler thought my first attempt meant, "Take the new monthly price, except if it's null, in which case take zero minus the old monthly price, unless that's null too, in which case take zero." What I meant, and what my correction means, is, "Take the new monthly price, or zero if it's null, and subtract the old monthly price (or zero if that's null)."
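The same trap exists outside C#. Python's `or` (a loose stand-in for `??` here, coalescing falsy rather than null) sits below `-` in the precedence table just as `??` does, so the bug reproduces in a few lines:

```python
new_price, old_price = 6, 6

# What I wrote, transliterated: parsed as new_price or ((0 - old_price) or 0)
wrong = new_price or 0 - old_price or 0

# What I meant: coalesce each operand first, then subtract.
right = (new_price or 0) - (old_price or 0)

print(wrong)  # 6 -- the unit test's insistence that 6 - 6 = 6
print(right)  # 0
```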

I'm glad I use NUnit.

Monday 30 December 2013 14:02:01 CST (UTC-06:00)  |  | Software#
Monday 2 December 2013

I have an HTC Windows 8X phone. I work for a Microsoft Partner, so this seemed like a good idea at the time. After nearly a year, I can report that I am tired of this phone and want to go back to Android.

The one thing my phone does well is manage two Microsoft Exchange accounts. And it does Skydrive all right too. Those are Microsoft products, so Windows should handle them.

I find the touch-screen waaay too sensitive. It can't determine what letter I want more than half the time, and its auto-correct suggestions hardly ever make sense.

Bing, however, sucks ass compared with Google. And there's no way to change the hyper-sensitive search button on the phone, which fires up Bing every time my thumb goes near the search icon, sometimes while I'm trying to take a photo or do something else that involves the phone not switching applications.

Bing Maps is even worse. I won't spend too much time on a rant when I could just show you.

Let me preface this by saying Seoul's WiFi situation is amazing. I have free WiFi nearly everywhere I go. Which is how I was able to run the following comparison.

Exhibit A, where the Bing Maps application thought I was this afternoon:

(Click for full-size image.)

Exhibit B, where Google Maps thought I was at the same moment:

Google wins.

Note that the Bing Maps application on my phone failed to produce a usable map; Bing Maps itself has the data. Here's what the Bing Maps website shows inside a browser window:

Attention, Microsoft: Having a nicely detailed map on my laptop does not help me when I'm in the middle of Gangnam. That's really exactly the moment that I want a good map.

Oh, and to add insult, Google Maps doesn't really work that well on the IE11 mobile browser. As in, it won't search unless you really make sure you touch exactly the right pixel on the screen.

My next phone? I'm going back to Android.

Monday 2 December 2013 15:15:29 KST (UTC+09:00)  |  | Geography | Software#
Friday 22 November 2013

Tomorrow afternoon is the Day of the Doctor already, and then in a little more than four days I'm off to faraway lands. Meanwhile, I'm watching a performance test that we'll repeat on Monday after we release a software upgrade.

So while riveted to this Live Meeting session, I am pointedly not reading these articles:

Perhaps more to the point, I'm not finishing up the release that will obviate the very performance test I'm watching right now. That is another story.

Friday 22 November 2013 09:46:04 CST (UTC-06:00)  |  | Kitchen Sink | US | Software#
Wednesday 6 November 2013

The person most directly responsible for the HealthCare.gov debacle is "retiring":

The chief information officer at the Centers for Medicare and Medicaid Services, whose office supervised creation of the troubled federal website for health insurance, is retiring, the Obama administration said Wednesday.

The official, Tony Trenkle, will step down on Nov. 15 “to take a position in the private sector,” said an email message circulated among agency employees.

As the agency’s top information officer, Mr. Trenkle supervised the spending of $2 billion a year on information technology products and services, including development of HealthCare.gov, the website for the new health insurance marketplace.

Sebelius, though. Why's she still around?

Wednesday 6 November 2013 11:17:31 CST (UTC-06:00)  |  | US | Software#
Monday 28 October 2013

Jakob Nielsen's company has written a detailed analysis of how the Federal Health Exchange screwed up usability:

The HealthCare.gov team has suffered what most web professionals fear most: launching a broken web application. This is particularly harrowing given the visibility of the website in question. The serious technical and data issues have been covered extensively in the media, so we won’t rehash those. Instead, in this article we focus on how to improve the account setup process. This is a user experience issue, but fixing it will also alleviate the site's capacity problems.

Account Set-up Usability is Mission Critical

Account setup is users’ first taste of a service. A suboptimal account setup can spawn 3 problems:

  • Increased service cost: When people can’t self-service online and you have no competitors, they call you. Call-center interaction is more expensive than web self-service. In 2008, Forrester estimated call-center calls to cost $5.50 per call versus 10 cents for a user who self-services online.
  • Increased cognitive strain: The instructions for creating usernames and password in this flow (which we address further along in this article) require a great deal of concentration, and if users don’t understand the instructions, they will need to keep creating usernames and passwords until they are accepted.
  • Halo Effect: Account setup is the first in a series of web-based interactions that users will need to conduct on HealthCare.gov. A poor experience with this first step will impact how people feel not only about subsequent interactions with the site, but how they feel about the service in general and the Affordable Care Act as a whole.

The discussion around our office hinges on two things other than usability: first, give us $2 million (of the $400 million they actually spent) and we'll build a much better site. Second, the biggest problems come from the insurance companies on the back end. Users don't care about that; they just want to get health insurance. As Krugman says, though, there really wasn't a way to get the insurance companies out of the equation, and that, more than anything, is the foundation of all these other problems.

Monday 28 October 2013 08:55:57 CDT (UTC-05:00)  |  | US | Software | Business#
Sunday 27 October 2013

Programming languages have come a long way since I banged out my first BASIC "Hello, World" in 1977. We have great compilers, wonderful editors, and strong typing.

In the past few years, jQuery and JSON, both based on JavaScript, have become ubiquitous. I use them all the time now.

jQuery and JSON are weakly-typed and late-bound. The practical effect of these characteristics is that you can introduce subtle, maddening bugs merely by changing the letter case of a single variable (e.g., from "ID" to "Id").
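
The failure mode is easy to reproduce. The code in question was C# and JavaScript, but the same late-bound lookup bites in any language that parses JSON into a dynamic structure; here's a minimal sketch in Python:

```python
import json

# The server sends a property named "Id"...
payload = json.loads('{"Id": 42}')

# ...but the client code asks for "ID". No compiler catches this; the
# lookup just quietly comes back empty.
print(payload.get("ID"))  # None
print(payload.get("Id"))  # 42
```

In JavaScript the equivalent `payload.ID` evaluates to `undefined` rather than raising an error, which is exactly how a one-letter case difference turns into hours of debugging.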

I've just spent three hours of a perfectly decent Sunday trying to find exactly that problem in some client code. And I want to punch someone.

Two things from this:

1. Use conventions consistently. I'm going to go through all the code we have and make sure that ID is always ID, not Id or id.

2. When debugging JSON, search backwards. I'll have more to say about that later, but my day would have involved much more walking Parker had I gone from the error symptom backwards to the code rather than trying to step through the code into the error.

OK, walkies now.

Sunday 27 October 2013 13:25:49 CDT (UTC-05:00)  |  | Software | Cloud#
Monday 21 October 2013

I am agog at a bald impossibility in the New York Times' article today about the ACA exchange:

According to one specialist, the Web site contains about 500 million lines of software code. By comparison, a large bank’s computer system is typically about one-fifth that size.

There were three reporters in the byline, they have the entire Times infrastructure at their disposal, and still they have an unattributed "expert" opinion that the healthcare.gov codebase is 33 times larger than Linux. 500 MLOC? Why not just say "500 gazillion?" It's a total Dr. Evil moment.

Put in other terms: it's like someone describing a large construction project—a 20-story office building, say—as having 500 million rivets in it. A moment's thought would tell you that the mass of 500 million rivets would approach the steel output of South Korea for last month.

The second sentence is nonsense also. "A large bank's computer system?" Large banks have thousands of computer systems; which one did you mean? Back to my example: it's like comparing the 500-million-rivet office building to "a large bank's headquarters."

I wouldn't be so out of my head about this if it weren't the Times. But if they can't get this right, what hope does any non-technical person have of understanding the problem?

One last thing. We, the people of the United States, paid for this software. HHS needs to disclose the source code of this monster. Maybe if they open-sourced the thing, they could fix it faster.

Monday 21 October 2013 14:41:54 CDT (UTC-05:00)  |  | US | Software | Business#
Thursday 17 October 2013

The Chicago technology scene is tight. I just had a meeting with a guy I worked with from 2003-2004. Back then, we were both consultants on a project with a local financial services company. Today he's CTO of the company that bought it—so, really, the same company. Apparently, they're still using software I wrote back then, too.

I love when these things happen.

This guy was also witness to my biggest-ever screw-up. (By "biggest" I mean "costliest.") I won't go into details, except to say that whenever I write a SQL delete statement today, I do this first:

SELECT *
FROM MissionCriticalDataWorthMillionsOfDollars
WHERE ID = 12345

That way, I get to see exactly what rows will be deleted before committing to the delete. Also, even if I accidentally hit <F5> before verifying the WHERE clause, all it will do is select more rows than I expect.

You can fill in the rest of the story on your own.

Thursday 17 October 2013 11:19:27 CDT (UTC-05:00)  |  | Kitchen Sink | Software | Business | Security | Work#
Friday 30 August 2013

Fortunately, I'm in an airport with lots of power outlets, because my laptop just warned me that it was down to its last few milliamp-hours, even though the 90 Wh battery I lug around ordinarily lasts about 8 hours. What happened? Windows Search decided that consuming 50% of my CPU (i.e., two entire cores) was a good idea while running on battery.

So since I have an hour before boarding, and since I'm now plugged in (which means I don't have any worries about driving my portable HDD), here is a lovely picture of Montréal from earlier today:

Thursday 29 August 2013 20:34:48 EDT (UTC-04:00)  |  | Software | Travel#
Sunday 25 August 2013

I'm pulling the public repository for Orchard again, because I made a mistake with Git that I can't seem to undo. I've set up my environment to have a copy of the public repository, and then a working repository cloned from it. This allows me to try things out on my own machine, in private branches, while still pulling the public bits without the need to merge them into my working copy.

Orchard, which will soon (I hope) replace dasBlog as this blog's platform, recently switched from Mercurial to Git, which led to this problem.

I may simply not have grasped all the nuances of Git. Git is extremely powerful, in the sense that it will do almost anything you tell it to do, without regard for the consequences. It reflects the ethos of the C++ programming language, which gave everyday programmers ways to screw up previously only available to experts.

My specific screw-up was that I accidentally attempted to push my local changes back to my copy of the Public repository. I had added about six changesets, which I couldn't extract from my copy of public no matter what I tried.

So, while writing this, I just pulled a clean copy of public, checked out the two branches I wanted (1.1 and fw45, for those keeping score at home), and merged with my existing changes.

Now I get to debug that mess...and I may toss it and start over.

Sunday 25 August 2013 10:44:05 CDT (UTC-05:00)  |  | Software | Blogs#
Sunday 24 March 2013

Just a quick note about debugging. I just spent about 30 minutes tracking down a bug that caused a client to get invoiced for -18 hours of premium time and 1.12 days of regular time.

The basic problem is that an appointment can begin and end at any time, but from 6pm to 8am, an appointment costs more per hour than during business hours. This particular appointment started at 5pm and went until midnight, which should be 6 hours of premium and 1 hour of regular.

The bottom line: I had unit tests, which automatically tested a variety of start and end times across all time zones (to ensure that local time always prevailed over UTC), including:

  • Starts before close, finishes after close before midnight
  • Starts before close, finishes after midnight before opening
  • Starts before close, finishes after next opening
  • Starts after close, finishes before midnight
  • Starts after close, finishes after midnight before opening
  • Starts after close, finishes after next opening
  • ...

Notice that I never tested what happened when the appointment ended at midnight.

The fix was a single equals sign, as in:

- if (localEnd > midnight & local <= localOpenAtEnd)
+ if (localEnd >= midnight & local <= localOpenAtEnd)
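
A toy model of the billing split (invented names, not the post's actual C#) shows the case the unit tests missed, an appointment ending exactly at midnight:

```python
from datetime import datetime, timedelta

PREMIUM_START, PREMIUM_END = 18, 8  # premium rate runs 6pm to 8am

def split_hours(start, end):
    """Count (premium, regular) whole hours in the half-open range [start, end)."""
    premium = regular = 0
    t = start
    while t < end:
        if t.hour >= PREMIUM_START or t.hour < PREMIUM_END:
            premium += 1
        else:
            regular += 1
        t += timedelta(hours=1)
    return premium, regular

# 5pm until exactly midnight: 1 regular hour (5-6pm), 6 premium (6pm-midnight).
start = datetime(2013, 3, 23, 17, 0)
end = datetime(2013, 3, 24, 0, 0)
print(split_hours(start, end))  # (6, 1)
```

Comparing full datetimes sidesteps the wrap-around; the original bug came from comparing clock times, where midnight is both 24:00 of one day and 00:00 of the next, which is why the `>` needed to be `>=`.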

Nicely done, Braverman. Nicely done.

Sunday 24 March 2013 12:36:48 CDT (UTC-05:00)  |  | Software#
Sunday 10 February 2013

Last night I made the mistake of testing a deployment to Azure right before going to bed. Everything had worked beautifully in development, I'd fixed all the bugs, and I had a virgin Windows Azure affinity group complete with a pre-populated test database ready for the Weather Now worker role's first trip up to the Big Time.

People interested in those sorts of things can continue to read some helpful Azure debugging tips. Otherwise, stay tuned for a whinge about trying to do work on an airplane.

Sunday 10 February 2013 10:27:27 CST (UTC-06:00)  |  | Software | Cloud | Windows Azure#
Wednesday 16 January 2013

We're just 45 minutes from releasing a software project to our client for user acceptance testing (UAT), and we're ready. (Of course, there are those 38 "known issues..." But that's what the UAT period is for!)

When I get back from the launch meeting, I'll want to check these out:

Off to the client. Then...bug fixes!

Wednesday 16 January 2013 12:28:36 CST (UTC-06:00)  |  | Kitchen Sink | US | World | Software | Work#
Sunday 6 January 2013

This is about C# development. If you're interested in how I got a 60-fold improvement in code execution speed by adding a one-line Entity Framework configuration change, read on. If you want a photo of Parker, I'll post one later today.

Sunday 6 January 2013 12:17:13 CST (UTC-06:00)  |  | Software | Cloud | Weather | Windows Azure#
Sunday 18 November 2012

I found Joe and Ben Albahari's library of LINQ extensions, which enabled me to finish a really complicated piece of code quickly and elegantly.
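
The Albahari library is C# (PredicateBuilder-style LINQ extensions), but the underlying idea, composing many small predicates into one, can be sketched in Python (the names here are my own invention, not the library's API):

```python
def any_of(*preds):
    """Combine predicates with OR, the way PredicateBuilder chains LINQ expressions."""
    return lambda x: any(p(x) for p in preds)

# Build one predicate from a runtime-supplied list of keywords.
keywords = ["rain", "snow", "fog"]
matches = any_of(*[lambda s, k=k: k in s for k in keywords])

reports = ["light rain", "clear skies", "dense fog"]
print([r for r in reports if matches(r)])  # ['light rain', 'dense fog']
```

The win in both languages is the same: the query logic stays declarative even when the number of conditions isn't known until runtime.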

Programmers keep reading. Everyone else: I'll have more stuff about the weather tomorrow.

Sunday 18 November 2012 12:34:04 CST (UTC-06:00)  |  | Software | Cool links#
Friday 16 November 2012

Oh, Azure Storage team, why did you break everything?

Software people will want to continue for some specific tips on how to do the upgrade.

Friday 16 November 2012 10:55:17 CST (UTC-06:00)  |  | Software | Cloud#
Saturday 3 November 2012

I mentioned a few weeks ago that I've had some difficulty moving the last remaining web application in the Inner Drive Technology Worldwide Data Center, Weather Now, into Microsoft Windows Azure. Actually, I have two principal difficulties: first, I need to re-write almost all of it, to end its dependency on a Database of Unusual Size; and second, I need the time to do this.

Right now, the databases hold about 2 GB of geographic information and another 20 GB of archival weather data. Since these databases run on my own hardware, I don't have to pay for them beyond the server's electricity costs. In Azure, that amount of database space costs more than $70 per month, well above the $25 or so my database server costs me.

I've finally figured out the architecture changes needed to get the geographic and weather information into cheaper (or free) repositories. Some of the strategy involves not storing the information at all, and some will use the orders-of-magnitude-less-expensive Azure table storage. (In Azure storage, 25 GB costs $3 per month.)

Unfortunately for me, the data layer is about 80% of the application, including the automated processes that go out and get weather data. So, to solve this problem, I need a ground-up re-write.

The other problem: time. Last month, I worked 224 hours, which doesn't include commuting (24 hours), traveling (34 hours), or even walking Parker (14 hours). About my only downtime was during that 34 hours of traveling and while sitting in pubs in London and Cardiff.

I have to start doing this, though, because I'm spending way too much money running two servers that do very little. And I've been looking forward to it—it's not a chore, it's fun.

Not to mention, it means I get to start working on the oldest item on my to-do list, Case 46 ("Create new Gazetteer database design"), opened 30 August 2006, two days before I adopted Parker.

And so it begins.

Saturday 3 November 2012 14:48:40 CDT (UTC-05:00)  |  | Software | Business | Cloud#
Sunday 9 September 2012

Despite my enthusiasm for Microsoft Windows Azure, in some ways it suffers from the same problem all Microsoft version 1 products have: incomplete debugging tools.

I've spent the last three hours trying to add an SSL certificate to an existing Azure Web application. In previous attempts with different applications, this has taken me about 30 minutes, start to finish.

Right now, however, the site won't launch at all in my Azure emulator, presenting a generic "Internal server error - 500" when I try to start the application. The emulator isn't hitting any of my code, however, nor is it logging anything to the Windows System or Application logs. So I have no idea why it's failing.

I've checked the code into source control and built it on another machine, where it had exactly the same problem. So I know it's something under source control. I just don't know what.

I hate very little in this world, but lazy developers who fail to provide debugging information bring me near to violence. A simple error stack would probably lead me to the answer in seconds.

Update: The problem was in the web.config file.

Earlier, I copied a connection string element from a transformation file into the master web.config file, but I forgot to remove the transformation attributes xdt:Transform="Replace" and xdt:Locator="Match(name)". This prevented the IIS emulator from parsing the configuration file, which caused the 500 error.
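
For illustration, the leftover markup looked something like this (the connection string element is invented; only the two xdt: attributes matter):

```xml
<connectionStrings>
  <!-- These xdt: attributes belong in a transform file such as
       Web.Release.config; left in the master web.config, they make
       the file unparseable to the IIS emulator. -->
  <add name="Example"
       connectionString="..."
       xdt:Transform="Replace"
       xdt:Locator="Match(name)" />
</connectionStrings>
```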

I must reiterate, however, that some lazy developer neglected to provide this simple piece of debugging information, and my afternoon was wasted as a result.

It reminds me of a scene in Terry Pratchett and Neil Gaiman's Good Omens (one of the funniest books ever written). Three demons are comparing notes on how they have worked corruption on the souls of men. The first two have each spent years tempting a priest and corrupting a politician. Crowley's turn:

"I tied up every portable telephone system in Central London for forty-five minutes at lunchtime," he said.

"Yes?" said Hastur. "And then what?"

"Look, it wasn't easy," said Crowley.

"That's all?" said Ligur.

"Look, people—"

"And exactly what has that done to secure souls for our master?" said Hastur.

Crowley pulled himself together.

What could he tell them? That twenty thousand people got bloody furious? That you could hear the arteries clanging shut all around the city? And that then they went back and took it out on their secretaries or traffic wardens or whatever, and they took it out on other people? In all kinds of vindictive little ways which, and here was the good bit, they thought up themselves. The pass-along effects were incalculable. Thousands and thousands of souls all got a faint patina of tarnish, and you hardly have to lift a finger.

Somehow, debugging the Azure emulator made me think of Crowley, who no doubt helped Microsoft write the thing.

Sunday 9 September 2012 18:12:33 CDT (UTC-05:00)  |  | Software#
Monday 27 August 2012

After installing Windows 8 yesterday, I discovered some interaction problems with my main tool, Visual Studio 2012. Debugging Azure has suddenly become difficult. So after installing the OS upgrade, I spent about five hours re-installing or repairing a whole bunch of other apps, and I'm not even sure I found the causes of the problems.

The next step is to install new WiFi drivers. But seriously, I'm only a few troubleshooting steps from rebuilding the computer from scratch back on Windows 7.

Cue the cursing...

Monday 27 August 2012 16:10:31 CDT (UTC-05:00)  |  | Software#
Sunday 26 August 2012

This morning I installed Microsoft Windows 8 on my laptop. As a professional geek, getting software after it's released to manufacturing but before the general public is a favorite part of my job.

It took almost no effort to set up, and I figured out the interface in just a few minutes. I like the new look, especially the active content on the Start screen. It definitely has a more mobile-computing look than previous Windows versions, with larger click targets (optimized for touch screens) and tons of integration with Windows Accounts. I haven't linked much to my LiveID yet, as I don't really want to share that much with Microsoft, but I'll need it to use SkyDrive and to rate and review the new features.

I also did laundry, vacuumed, cleaned out all my old programming books (anyone want a copy of Inside C# 2 from 2002?), and will now go shopping. And I promise never to share that level of picayune personal detail again on this blog.

Sunday 26 August 2012 12:13:08 CDT (UTC-05:00)  |  | Kitchen Sink | Software#
Monday 6 August 2012

If one of the developers on one of my teams had done this, I would have (a) told him to get some sleep and (b) mocked him for at least a week afterwards.

Saturday night I spent four hours trying to figure out why something that worked perfectly in my local Azure emulator failed with a cryptic "One of the request inputs is out of range" message in the Cloud. I even posted to StackOverflow for help.

This morning, I spent about 90 minutes building a sample Cloud application up from scratch, adding one component at a time until, eventually, I got to the same failure. Then I stepped through the code to figure out what was happening.

And I immediately saw why.

The problem turned out to be this: I have two settings:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceDefinition name="Cloud" ...>
      <WebRole name="WebRole" vmsize="Small">
          <Setting name="MessagesConfigurationBlobName" />
          <Setting name="MessagesConfigurationBlobContainerName" />
      </WebRole>
    </ServiceDefinition>

Here's the local (emulator) configuration file:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration ...>
      <Role name="WebRole">
          <Setting name="MessagesConfigurationBlobName" value="LocalMessageConfig.xml"/>
          <Setting name="MessagesConfigurationBlobContainerName" value="containername"/>
      </Role>
    </ServiceConfiguration>

Here's the Cloud file:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration ...>
      <Role name="WebRole">
          <Setting name="MessagesConfigurationBlobName" value="containername" />
          <Setting name="MessagesConfigurationBlobContainerName" value="CloudMessageConfig.xml"/>
      </Role>
    </ServiceConfiguration>

The two values are swapped: the Cloud file assigns the blob name to the container setting and vice versa. I will now have a good cry and adjust my time tracking (at 3am Saturday) from "Emergency client work" to "Developer PEBCAK".

The moral of the story is, when identical code fails in one environment and succeeds in another, don't just compare the environments, compare *everything that could be different in your own code* between the environments.
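
That moral lends itself to automation. Here's a sketch (Python; the helper and the trimmed-down XML are my own, and real .cscfg files carry an XML namespace this omits) that diffs the settings from two configuration files:

```python
import xml.etree.ElementTree as ET

def settings(xml_text):
    """Map setting names to values from a ServiceConfiguration-style file."""
    root = ET.fromstring(xml_text)
    return {s.get("name"): s.get("value") for s in root.iter("Setting")}

local = settings("""<ServiceConfiguration>
  <Role name="WebRole">
    <Setting name="MessagesConfigurationBlobName" value="LocalMessageConfig.xml"/>
    <Setting name="MessagesConfigurationBlobContainerName" value="containername"/>
  </Role>
</ServiceConfiguration>""")

cloud = settings("""<ServiceConfiguration>
  <Role name="WebRole">
    <Setting name="MessagesConfigurationBlobName" value="containername"/>
    <Setting name="MessagesConfigurationBlobContainerName" value="CloudMessageConfig.xml"/>
  </Role>
</ServiceConfiguration>""")

# Print every setting whose value differs between environments; seeing the
# two values side by side makes a swap obvious.
for name in sorted(local):
    if local[name] != cloud[name]:
        print(f"{name}: local={local[name]!r} cloud={cloud[name]!r}")
```

Settings legitimately differ between environments, so a diff can't flag the bug automatically, but it puts the suspect values next to each other instead of ten files apart.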

Oh, and don't try to deploy software at 3am. Ever.

Monday 6 August 2012 10:15:13 PDT (UTC-07:00)  |  | Software | Cloud#
Sunday 29 July 2012

In every developer's life, there comes a time when he has to take all the software he's written on his laptop and put it into a testing environment. Microsoft Azure Tools make this really, really easy—every time after the first.

Today I did one of those first-time deployments, sending a client's Version 2 up into the cloud for the first time. And I discovered, as predicted, a flurry of minor differences between my development environment (on my own computer) and the testing environment (in an Azure web site). I found five bugs, all of them minor, and almost all of them requiring me to wipe out the test database and start over.

It's kind of like when you go to your strict Aunt Bertha's house—you know, the super-religious aunt who has no sense of humor and who smacks your hands with a ruler every time you say something harsher than "oops."

End of complaint. Back to the Story of D'Oh.

Sunday 29 July 2012 17:35:07 CDT (UTC-05:00)  |  | Software | Cloud#
Saturday 30 June 2012

When working with Microsoft Windows Azure, I sometimes feel like I'm back in the 1980s. They've rushed their development tools to market so that they can get us developers working on Azure projects, but they haven't yet added the kinds of error messages that one would hope to see.

I've spent most of today trying to get the simplest website in my server rack up into Azure. The last hour and a half has been spent trying to figure out two related error messages:

  • Failed to debug the Windows Azure Cloud Service project. The output directory ' { path }\csx\Debug' does not exist.
  • Windows Azure Tools: Can't locate service descriptions.

If you're interested in these error messages, click through. For non-technical readers, I'll put up a photo of Parker tomorrow, I promise.

Saturday 30 June 2012 18:17:08 CDT (UTC-05:00)  |  | Software | Cloud#
Sunday 24 June 2012

I have just spent an hour of my life—one that I will never get back—trying to figure out why I couldn't install any software from .msi files on one of my Windows 7 machines. Every time I tried, I would get a message that the installer "could not find the file specified."

If you're interested in this, or you want to see a stupid rage comic face, click through:

Sunday 24 June 2012 15:14:36 CDT (UTC-05:00)  |  | Software | Security#
Friday 9 March 2012

In general, people using words they don't understand, presumably to sound smart, drives me up a tree. In specific, I wish against reason that more people knew how time zones worked. Microsoft's Raymond Chen agrees:

One way of sounding official is to give the times during which the outage will take place in a very formal manner. "The servers will be unavailable on Saturday, March 17, 2012 from 1:00 AM to 9:00 AM Pacific Standard Time."

Did you notice something funny about that announcement?

On March 17, 2012, most of the United States will not be on Standard Time. They will be on Daylight Time. (The switchover takes place this weekend.)

Now, I'm one of the few people in the world who has implemented a complete time zone package for Windows systems, and regular readers will already know about my vocal defense of the Olson/IANA time zone database. So I don't expect most people to know the ins and outs of time zone abbreviations. But this is the point Chen makes, and I would like to make: if you don't know what you're writing, don't write it. Say "Central time" or "local Chicago time" instead of "Central Standard Time," if for no other reason than you'll be wrong about the latter 8 months out of the year.
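
To see how reliably the abbreviation flips, here's a quick check against the IANA database using Python's zoneinfo module (3.9+), with the date from Chen's example:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")

# 17 March 2012 is after the 11 March switch to daylight time:
print(datetime(2012, 3, 17, 1, 0, tzinfo=pacific).tzname())  # PDT
# Two months earlier, still on standard time:
print(datetime(2012, 1, 17, 1, 0, tzinfo=pacific).tzname())  # PST
```

The abbreviation is a function of the date, not the place, which is the whole point: "Pacific time" is always right, "Pacific Standard Time" only about a third of the year.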

Friday 9 March 2012 13:33:51 CST (UTC-06:00)  |  | Software | Astronomy#
Thursday 19 January 2012

A couple of things have happened on two issues I mentioned earlier this week:

That is all for now. We in Chicago are bracing for 15 cm of snow tomorrow, so there may be Parker videos soon.

Oh, and: Kodak actually did file for bankruptcy protection today.

Thursday 19 January 2012 13:57:22 CST (UTC-06:00)  |  | US | Software | Business | Astronomy#
Thursday 24 November 2011

The new feature I mentioned this morning is done. Now, in addition to the "where was this posted" button on the footer, you will notice the entry's time zone. Each entry can have its own time zone—in addition to the site-wide default.

I still have to fix a couple of things related to this change, like the fact that the date headers ("Thursday 24 November 2011," just above this entry) are on UTC rather than local time. But going forward (and going backward if I ever get supremely bored), you can now see the local time wherever I was when I posted.

Incidentally, if you want to bring the tzinfo database to your .NET application, I have licensing terms.

Thursday 24 November 2011 14:22:50 CST (UTC-06:00)  |  | Software | Blogs#
David Braverman and Parker
David Braverman is a software developer in Chicago, and the creator of Weather Now. Parker is the most adorable dog on the planet, 80% of the time.
All content Copyright ©2014 David Braverman.
Creative Commons License
The Daily Parker by David Braverman is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License, excluding photographs, which may not be republished unless otherwise noted.