I should have posted day 25 of the Blogging A-to-Z challenge yesterday, but life happened, as it has a lot this month. I'm looking forward to June, when I might not have the over-scheduling I've experienced since mid-March. We'll see.
So it's appropriate that today's topic involves one of the things most programmers get wrong: dates and times. And we can start 20 years ago when the world was young...
A serious problem loomed in the software world in the late 1990s: programmers, starting as far back as the 1950s, had used 2-digit fields to represent the year portion of dates. As I mentioned Friday, it's important to remember that memory, communications, and storage cost a lot more than programmer time until the last 15 years or so. A 2-digit year field makes a lot of sense in 1960, or even 1980, because it saves lots of money, and why on earth would people still use this software 20 or 30 years from now?
You can see (or remember) what happened: the year 2000. If today is 991231 and tomorrow is 000101, what does that do to your date math?
It turns out, not a lot, because programmers generally planned for it way more effectively than non-technical folks realized. On the night of 31 December 1999, I was in a data center at a brokerage in New York, not doing anything. Because we had fixed all the potential problems already.
But as I said, dates and times are hard. Start with times: 24 hours, 60 minutes, 60 seconds...that's not fun. And then there's the calendar: 12 months, 52 weeks, 365 (or 366) days...also not fun.
It becomes pretty obvious even to novice programmers who think about the problem that days are the best unit to represent time in most human-scale cases. (Scientists, however, prefer seconds.) I mentioned on day 8 that I used Julian day numbers very, very early in my programming life. Microsoft (and the .NET platform) also uses the day as the base unit for all of its date classes, and relegates the display of date information to a different set of classes.
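If you've never played with day numbers, here's a minimal sketch of the idea in C#. The reference epoch and the sample dates are purely illustrative; a real implementation would use a published algorithm or a library rather than this shortcut.

```csharp
using System;

class JulianDayDemo
{
    // Anchor on a known pair: 1 January 2000 has Julian day number 2451545.
    // (This epoch choice is just for illustration.)
    static readonly DateTime Reference = new DateTime(2000, 1, 1);
    const int ReferenceJdn = 2451545;

    // Convert a calendar date to a Julian day number by counting whole days
    // from the reference date.
    static int ToJulianDayNumber(DateTime date) =>
        ReferenceJdn + (int)(date.Date - Reference).TotalDays;

    static void Main()
    {
        Console.WriteLine(ToJulianDayNumber(new DateTime(2000, 1, 1)));  // 2451545
        Console.WriteLine(ToJulianDayNumber(new DateTime(2018, 4, 28))); // 2458237

        // The payoff: date math becomes plain integer arithmetic.
        int daysBetween = ToJulianDayNumber(new DateTime(2018, 4, 28))
                        - ToJulianDayNumber(new DateTime(2000, 1, 1));
        Console.WriteLine(daysBetween);                                  // 6692
    }
}
```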
I'm going to skip the DateTime structure because it's basically useless. It will give you no end of debugging problems with its asinine DateTime.Kind member. This past week I had to fix exactly this kind of thing at work.
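To make that concrete, here's a minimal sketch of the trap. The timestamp is made up; the behavior is simply what the framework does when Kind comes back Unspecified.

```csharp
using System;

class KindTrap
{
    static void Main()
    {
        // Parse a timestamp with no zone information: Kind comes back Unspecified.
        var parsed = DateTime.Parse("2018-04-27 13:00:00");
        Console.WriteLine(parsed.Kind);              // Unspecified

        // ToUniversalTime() assumes an Unspecified value means *local* time,
        // so the result depends on whatever time zone the machine happens to use.
        Console.WriteLine(parsed.ToUniversalTime());

        // ToLocalTime() assumes an Unspecified value means *UTC*, so round-tripping
        // the same value through both calls does not get you back where you started.
        Console.WriteLine(parsed.ToLocalTime());
    }
}
```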
Instead, use the DateTimeOffset structure. It represents an unambiguous point in time, with a DateTime value for the date and time and a TimeSpan value for the offset from UTC. As Microsoft explains:
The DateTimeOffset structure includes a DateTime value, together with an Offset property that defines the difference between the current DateTimeOffset instance's date and time and Coordinated Universal Time (UTC). Because it exactly defines a date and time relative to UTC, the DateTimeOffset structure does not include a Kind member, as the DateTime structure does. It represents dates and times with values whose UTC ranges from 12:00:00 midnight, January 1, 0001 Anno Domini (Common Era), to 11:59:59 P.M., December 31, 9999 A.D. (C.E.).
The time component of a DateTimeOffset value is measured in 100-nanosecond units called ticks, and a particular date is the number of ticks since 12:00 midnight, January 1, 0001 A.D. (C.E.) in the GregorianCalendar calendar. A DateTimeOffset value is always expressed in the context of an explicit or default calendar. Ticks that are attributable to leap seconds are not included in the total number of ticks.
Yes. This is the way to do it. Except...well, you know what? Let's skip how the calendar has changed over time. (Short answer: the year 1 was not the year 1.)
In any event, DateTimeOffset gives you methods to calculate times and dates accurately across a nearly 10,000-year range.
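As a rough sketch of what that looks like in practice (the zones and offsets below are hard-coded for illustration; real code would get them from TimeZoneInfo rather than guessing):

```csharp
using System;

class OffsetDemo
{
    static void Main()
    {
        // The same wall-clock time in two different zones, each with an explicit offset.
        var chicago = new DateTimeOffset(2018, 4, 27, 13, 0, 0, TimeSpan.FromHours(-5));
        var london  = new DateTimeOffset(2018, 4, 27, 13, 0, 0, TimeSpan.FromHours(1));

        // Subtraction works on the underlying UTC instants, so the answer is unambiguous.
        Console.WriteLine(london - chicago);  // -06:00:00 (London's 1 pm came six hours earlier)

        // Date math stays correct across the structure's full range.
        var earliest = DateTimeOffset.MinValue;  // 0001-01-01T00:00:00+00:00
        var latest   = DateTimeOffset.MaxValue;  // 9999-12-31T23:59:59.9999999+00:00
        Console.WriteLine((latest - earliest).TotalDays);  // about 3.65 million days
    }
}
```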
Which is to say nothing of time zones...