Happy 51st Earth Day! In honor of that, today's first story has nothing to do with Earth:
Finally, it looks like I'll have some really cool news to share about my own software in just a couple of weeks. Stay tuned!
The United States Postal Service has a surveillance program that tracks social media posts for law enforcement, and no one can say why:
The details of the surveillance effort, known as iCOP, or Internet Covert Operations Program, have not previously been made public. The work involves having analysts trawl through social media sites to look for what the document describes as “inflammatory” postings and then sharing that information across government agencies.
“Analysts with the United States Postal Inspection Service (USPIS) Internet Covert Operations Program (iCOP) monitored significant activity regarding planned protests occurring internationally and domestically on March 20, 2021,” says the March 16 government bulletin, marked as “law enforcement sensitive” and distributed through the Department of Homeland Security’s fusion centers. “Locations and times have been identified for these protests, which are being distributed online across multiple social media platforms, to include right-wing leaning Parler and Telegram accounts.”
When contacted by Yahoo News, civil liberties experts expressed alarm at the post office’s surveillance program. “It’s a mystery,” said University of Chicago law professor Geoffrey Stone, whom President Barack Obama appointed to review the National Security Agency’s bulk data collection in the wake of the Edward Snowden leaks. “I don’t understand why the government would go to the Postal Service for examining the internet for security issues.”
I mean, scraping social media takes only a modicum of technical skill. In the last year I've written software that can scan Twitter and run detailed sentiment analysis on keyword-based searches. But I'm not a government agency with arrest powers. Or, you know, a constitutional mandate to deliver the mail.
Stack Overflow recently had a good blog entry on Git branching:
When trying to imagine how branches work, it’s tempting to use the concept of “folders.” After all, creating a new branch feels very much like copying the project’s current state and pasting it into a new, separate folder.
But part of the genius behind Git is that it doesn’t just “copy all contents” of your project, which would make things slow and use up lots of disk space. It’s much smarter than that!
[C]ommits in Git are identified by their SHA-1 hash, those 40-character long, cryptic strings. These commit hashes are static and immutable. Branches, on the other hand, are highly flexible, always changing to the commit hash of the latest commit anytime you create a new commit in that branch.
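You can see the pointer model for yourself from a shell. A minimal sketch (the repo name `demo` and the branch name `feature` are just examples; `git init -b` needs Git 2.28 or newer):

```shell
# A branch is just a tiny file containing a commit's 40-character SHA-1 hash.
git init -b main demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "first commit"

cat .git/refs/heads/main    # the 40-character hash of that commit

# Creating a branch copies no project files; it just writes one more ref:
git branch feature
git rev-parse main feature  # identical hashes: one commit, two pointers

# A new commit on main moves only main's pointer; feature stays put.
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "second commit"
git rev-parse main feature  # the hashes now differ
```

That's why branching in Git is nearly instantaneous no matter how large the project is: nothing gets copied.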
Speaking of Git, I'm moving to the next phase of my big at-home project. The end, while not in sight exactly, has gotten much more predictable. (In software, predictability is a good thing.)
Just a quick note: I'm halfway to the "20 years from now" I mentioned in this post from 13 April 2011. And as I'm engaged in two software projects right now—one for work, one for me—that have me re-thinking all of the application design skills I learned in the 10 years leading up to that 2011 post, I can only hope that I'm not walking down a technological cul-de-sac the way Data General did in 1978.
Bit of a frustrating day today. I spent 2½ hours trying to deploy an Azure Function using the Az package in PowerShell before giving up and going back to the Azure CLI. All of this to confirm a massive performance issue that I suspected, but needed to see in a setting that eliminated network throughput as a possible factor. Yep: running everything completely within Azure sped it up by only 11%, meaning an architecture choice I made a long time ago is definitely the problem. I factored the code well enough that I can replace the offending structure with a faster one in a couple of hours, but it's a springtime Sunday, so I don't feel motivated to do so right now.
Lest you worry I have neglected other responsibilities, Cassie has already gotten over an hour of walks and dog-park time today, bringing her up to 10½ hours for the week. I plan to take her on another 45-minute walk in an hour or so. Last week, however, she got almost 14 hours of walks; I blame the mid-week rain for the difference.
I also have a 30-minute task that will involve 15 minutes of setup, 10 minutes of tear-down, and 5 minutes of video recording. I will be so relieved next fall when all of our chorus work happens in person again.
Before I do that, however, I'm going to go hug my dog.
I've been coding most of the day because it's been raining since 1pm. I'm getting very close to a series of posts on what I've been working on the past few months, so stay tuned.
I've spent my off-hours over the last few weeks beavering away at a major software project, which I hope to launch this spring. Meanwhile, I continue to plug away at my paying job, with only one exciting deployment in the last six sprints, so things are good there. I also hope to talk more about that cool software before too long.
Meanwhile, things I need to read keep stacking up:
Finally, check out the World Photography Organisation's 2021 photo contest results.
Even though my life for the past week has revolved around a happy, energetic ball of fur, the rest of the world has continued as if Cassie doesn't matter:
And if you still haven't seen our spring concert, you still can. Don't miss it!
I'm shaking my head at email service provider Postmark, which announced four weeks ago that it would be phasing out support for TLS 1.0 (an aging network security protocol). I understood this when they announced it in February, 60 days ahead of their cutover to TLS 1.2, but didn't think it applied to anything of mine. This morning they sent a more focused email saying, "you're getting this email because we can see that this applies to you." Panic ensues.
Why panic? Because almost everything I've developed in the last 12 years depends on Postmark for email messaging, and the way they worded their notice, it seemed like all of those apps would fail on April 20th. And the only documentation they supplied that was relevant to me (and anyone else in the Microsoft universe) was a set of instructions on how to test TLS 1.2 support, not whether this would be a breaking change.
I immediately contacted their support group and said, as nicely as I could, "WTF dudes?" To which they replied, "oh yeah, bummer, dude." So I sent a lengthier reply just now and started digging into their source code. It turns out they're using an out-of-the-box Microsoft component that should transparently switch from TLS 1.0 to TLS 1.2 if asked to do so. I believe, therefore, that the affected applications will be fine. In fact, fixing the problem may only require a simple, non-invasive change to the Microsoft Azure settings for the affected applications. But I don't know that for sure. And I'm hoping their actual development team will respond with "yeah, no probs, dude, you're cool."
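For anyone in the same boat, one quick sanity check is to force a TLS 1.2-only handshake against the endpoint and see whether it succeeds. A sketch, assuming `api.postmarkapp.com` is the relevant endpoint for your integration (check Postmark's docs for the host your code actually talks to):

```shell
# Attempt a handshake restricted to TLS 1.2. A "CONNECTED" line followed
# by a certificate chain means the server accepts TLS 1.2; swapping in
# -tls1 instead previews what the post-cutoff failure looks like.
openssl s_client -connect api.postmarkapp.com:443 -tls1_2 < /dev/null
```

This only proves the server side, of course; whether your app *offers* TLS 1.2 still depends on its framework and OS settings.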
My other headache is literal, from staring at too many screens. So I'll do something else in a moment.
The Daily WTF today takes us back to one of the worst software bugs in history, in terms of human lives ruined or lost:
The ETCC incident was not the first, and sadly was not the last malfunction of the Therac-25 system. Between June 1985 and July 1987, there were six accidents involving the Therac-25, manufactured by Atomic Energy Canada Limited (AECL). Each was a severe radiation overdose, which resulted in serious injuries, maimings, and deaths.
As the first incidents started to appear, no one was entirely certain what was happening. Radiation poisoning is hard to diagnose, especially if you don't expect it. As with the ETCC incident, the machine reported an underdose despite overdosing the patient. Hospital physicists even contacted AECL when they suspected an overdose, only to be told such a thing was impossible.
With AECL's continued failure to explain how to test their device, it should be clear that the problem was a systemic one. It doesn't matter how good your software developer is; software quality doesn't appear because you have good developers. It's the end result of a process, and that process informs not only your software development practices, but also your testing. Your management. Even your sales and servicing.
While the incidents at the ETCC finally drove changes, they weren't the first incidents. Hospital physicists had already reported problems to AECL. At least one patient had already initiated a lawsuit. But that information didn't propagate through the organization; no one put those pieces together to recognize that the device was faulty.
On this site, we joke a lot at the expense of the Paula Beans and Roys of this world. But no matter how incompetent, no matter how reckless, no matter how ignorant the antagonist of a TDWTF article may be, they're part of a system, and that system put them in that position.
TDWTF's write-up includes a link to a far more thorough report. It's horrifying.