It's a serious handicap to trust people you can't have a dialogue with. And yet, we're basically forced to. While it's nice to have sources you can trust, it's a good start to just learn from a lot of sources with different viewpoints. Better still, sources that directly contradict one another. No, reading two news sites about the same event doesn't count. At the very least, read up on the history of everything. Nothing suddenly happens. We're all subject to a very long history that heavily influences where we're at and where we're going. Go read materials over two hundred years old. They'll help you escape the current context and examine the world through fresh eyes.
I mentioned in my post on How to Practice that finding good tests is honestly the hardest part. Well, here's such a test. For every point you don't immediately understand, you've got studying to do.
This is another piece to add to the sadly understudied practice of software rewrites. I think rewrites are both too common and not common enough. They're far too common when a new batch of hires who don't understand the system come into contact with it and the institution doesn't have enough pressure to stop them from blindly rewriting it without first understanding the inherent complexity of the thing.
At the other end of the spectrum, they're done too infrequently by developers who built a system and then repeatedly patched it in response to real customer needs, until it's full of things that only make sense in the context of some future ideal state and two other legacy designs they still haven't moved off of.
The two key variables in play seem to me to be the developers' experience with the codebase and the size of the change being asked of it. Sadly, I only know of a few fairly old and limited studies offering empirical evidence on rewrites. Don't get me wrong, there's plenty on refactoring, but rewriting has become a taboo word we should reclaim. There are absolutely times when both the fastest and the highest quality outcome lies in rewriting the software based on an understanding of what the customer actually needs now that they've used the existing version.
Please stop trying to make self-documenting code a thing. It's never going to be a thing. Naming your functions after the opening to War and Peace doesn't make them any easier to understand. Just write documentation.
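To make the contrast concrete, here's a tiny made-up example (the scenario and names are mine, not from the post). The novel-length name still can't tell you why the code does what it does; a short name plus a comment can:

```c
#include <stdio.h>

/* The "self-documenting" approach: the name tries to do the comment's job
 * and still can't explain where the retry budget comes from. */
int retry_request_if_server_returned_transient_error_and_we_have_budget(int status, int retries);

/* The documented approach: retry only on 5xx responses, and give up after
 * 3 attempts because anything longer just piles up queued requests behind
 * an upstream that's already struggling. */
int should_retry(int status, int retries)
{
    return status >= 500 && status < 600 && retries < 3;
}

int main(void)
{
    printf("%d\n", should_retry(503, 1)); /* 1: transient error, budget left */
    printf("%d\n", should_retry(404, 0)); /* 0: not a server error */
    return 0;
}
```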
I still remember when someone told me they honestly couldn't understand how you could create a web app without React. Sometimes I wonder how anyone manages to build anything while contentedly undermining the foundation. Maybe they're secretly engaged in a pursuit to generate perpetual employment?
The key to scripting is having a fairly extensive library of routine operations, easy-to-use lists and hashmaps, and a runtime that can ideally fit in a shebang. There's no reason you can't do this with C. You just have to learn the first hard truth of C: abandon the standard library. To that end, stb, Sean Barrett's collection of single-file libraries, is kind of that.
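Here's a minimal sketch of the shebang idea, assuming tcc is installed at /usr/bin/tcc (its -run mode compiles and runs a file in one step) and that stb_ds.h from the stb collection sits next to the script; the details are mine, not from the linked post:

```c
#!/usr/bin/tcc -run
/* A C "script": mark it executable and run it directly, no build step.
 * Assumes /usr/bin/tcc and a copy of stb_ds.h beside this file. */
#include <stdio.h>
#include <string.h>

#define STB_DS_IMPLEMENTATION
#include "stb_ds.h"

int main(int argc, char **argv)
{
    /* Stretchy buffer: a growable array with no manual realloc bookkeeping. */
    int *lengths = NULL;
    for (int i = 1; i < argc; i++)
        arrput(lengths, (int)strlen(argv[i]));

    /* String-keyed hashmap: count how many times each argument appears. */
    struct { char *key; int value; } *seen = NULL;
    for (int i = 1; i < argc; i++) {
        int prev = shget(seen, argv[i]);   /* 0 if the key isn't there yet */
        shput(seen, argv[i], prev + 1);
    }

    for (int i = 0; i < arrlen(lengths); i++)
        printf("%s: %d chars, seen %d time(s)\n",
               argv[i + 1], lengths[i], shget(seen, argv[i + 1]));

    arrfree(lengths);
    shfree(seen);
    return 0;
}
```

Chmod it, call it like any shell script, and you get real lists and hashmaps without ever touching a Makefile.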
Always look ahead to the extraction phase of the capital cycle. How are they going to get their money out? The two levers they'll likely reach for, given the last few decades of software business models, are selling your inputs and selling space in the outputs. It'll be interesting to watch how this capability evolves over the next few years and who will pay the most for each.
New attack pathology. There have been a lot of physical device vulnerabilities over the last decade, things we've mostly been discounting because they required wireless proximity to exploit. We've just figured out how to weaponize this. Imagine first collecting a router-based botnet, then using it to spray exploits at routers picked by IP geolocation to compromise a bunch of people in a rough geographic area.
Fairly timely piece given the current lecture series. I'm actually kind of excited by what lies beyond our digital tech obsession. I wonder what will hold our collective imagination once we see tech for the self-licking ice cream cone it tends to be.
From my understanding, when this goes into effect it means that little clause in every software license that reads something like THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND stops working legally. You sell it, you're legally responsible for it.
I've been saying for so long we need something like this. How it'll ideally play out is that companies that sell software will be legally on the hook for that software. Those companies will need liability insurance. The insurance industry will gather lots of data on what leads to payouts. That in turn will form lists of practices you need to follow to lower your premiums. Software will be forced, kicking and screaming, to improve.
There's an absolutely huge number of what ifs and uncertainties around this. That it's possibly happening at all is fantastic to hear. Half a century late, but better now than never.
Starting a series of Friday lectures about education. It's always important to understand what the purpose of a thing is. You only understand that by seeing what it does. Interrogating the name of a thing is completely counterproductive. Thinking about what it should do from first principles doesn't help either.
People I talk to are always discussing education and the education industry as though they're interchangeable. Like schools are where the learning is. Seeing past the institution is important if you want to understand that it has interests of its own, and those interests are only marginally, if at all, aligned with education.
You must constantly be on guard for sophistry. It's so easy to uncritically accept things that sound completely rational and logical when you already agree with them. Oh, how we love viewing novices as incurable dunces. If you think novices overestimate their abilities, you'd be wrong. They just don't know what they don't know. Sure, there are liars out there. Seemingly a lot of them. But when people are asked to evaluate how skilled they are, it's not that novices overestimate and experts underestimate. It's that novices have low precision while experts improve their precision.
This doesn't help explain why there are so many people engaged in bullshitting. Seek truth elsewhere for that one. But hopefully it helps people stop citing the Dunning-Kruger effect, along with the many other studies being debunked in the replication crisis.
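To make that precision point concrete, here's a toy simulation (entirely my own construction, not from the linked piece): every simulated person's self-estimate is unbiased, but the noise shrinks as skill grows. The per-quartile average error stays roughly flat while the spread collapses, which is the "novices are imprecise, experts are precise" story in miniature.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Toy model: actual skill is uniform on [0,100]; self-estimates are
 * unbiased but noisier for the less skilled. Compile with: cc sim.c -lm */

static double gauss(void)
{
    /* Box-Muller transform: two uniform draws -> one standard normal. */
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979323846 * u2);
}

int main(void)
{
    enum { N = 100000 };
    double sum[4] = {0}, sumsq[4] = {0};
    int count[4] = {0};

    srand(42);
    for (int i = 0; i < N; i++) {
        double skill = 100.0 * rand() / RAND_MAX;
        double sd = 25.0 - 0.20 * skill;   /* novices ~25, experts ~5 */
        double err = sd * gauss();         /* self-estimate minus skill */
        int q = skill < 25 ? 0 : skill < 50 ? 1 : skill < 75 ? 2 : 3;
        sum[q] += err;
        sumsq[q] += err * err;
        count[q]++;
    }

    for (int q = 0; q < 4; q++) {
        double mean = sum[q] / count[q];
        double sdev = sqrt(sumsq[q] / count[q] - mean * mean);
        printf("skill %3d-%3d: mean error %+6.2f, spread %6.2f\n",
               q * 25, (q + 1) * 25, mean, sdev);
    }
    return 0;
}
```

Clamp the estimates to a bounded 0 to 100 scale and the familiar overestimate-at-the-bottom, underestimate-at-the-top curve reappears from the noise alone, which is roughly the statistical critique behind the debunking.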
I don't use DragonflyBSD, but keeping up to date on the latest developments in modern filesystems is always a joy.
I've never used the patches, but it looks like someone's maintaining an OpenBSD patchset for it. I'm not sure if OpenBSD has interest in mainlining support for it. It's not really built in the spirit of increasing security, so I have doubts. Time will tell.
Not sure what the future has in store, but I'd really like to do more systems programming. Sadly, making money doing this seems particularly difficult in Toronto. On the other hand, there's always building your own business based on Teller's wisdom that "Sometimes magic is just someone spending more time on something than anyone else might reasonably expect."
My career ambitions aside, this is a fantastic talk about building web application SaaS type things like a systems programmer would. Namely, that fewer dependencies mean fewer problems in the long term.
Why is it that almost nobody writing backend software seems to understand isolation levels? I think you should basically be required to present an oral argument about all the levels before being allowed to even connect to the database.
It was funny watching people freak out about Timeless Timing Attacks, which now let attackers exploit timing attacks against HTTP/2 services with real precision. It's only a problem because developers have been blissfully unaware of what Read Committed even means and why shards aren't the secret ingredient in the web-scale sauce.
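If you'd rather see the difference than argue it orally, here's a rough sketch of the classic non-repeatable read you can get under Read Committed, using PostgreSQL's libpq. The accounts table and connection strings are placeholders, and error handling is trimmed to almost nothing:

```c
/* Sketch of a non-repeatable read under READ COMMITTED (PostgreSQL/libpq).
 * Placeholders: the accounts table and the connection strings are made up.
 * Build with something like: cc demo.c -I$(pg_config --includedir) -lpq */
#include <stdio.h>
#include <libpq-fe.h>

static const char *one(PGconn *c, const char *sql)
{
    /* Run a query and return the first column of the first row.
     * Real code must check PQresultStatus before touching the result. */
    PGresult *r = PQexec(c, sql);
    static char buf[64];
    snprintf(buf, sizeof buf, "%s", PQntuples(r) > 0 ? PQgetvalue(r, 0, 0) : "");
    PQclear(r);
    return buf;
}

int main(void)
{
    PGconn *a = PQconnectdb("dbname=test");   /* reader */
    PGconn *b = PQconnectdb("dbname=test");   /* concurrent writer */
    if (PQstatus(a) != CONNECTION_OK || PQstatus(b) != CONNECTION_OK)
        return 1;

    PQclear(PQexec(a, "BEGIN ISOLATION LEVEL READ COMMITTED"));
    printf("first read:  %s\n", one(a, "SELECT balance FROM accounts WHERE id = 1"));

    /* Another session commits an update in the middle of our transaction. */
    PQclear(PQexec(b, "UPDATE accounts SET balance = balance + 100 WHERE id = 1"));

    /* Under READ COMMITTED this second read can see the new value: a
     * non-repeatable read. */
    printf("second read: %s\n", one(a, "SELECT balance FROM accounts WHERE id = 1"));

    PQclear(PQexec(a, "COMMIT"));
    PQfinish(a);
    PQfinish(b);
    return 0;
}
```

Run the same two reads under REPEATABLE READ and the second one keeps showing the original snapshot, which is exactly the kind of distinction I want people to be able to argue out loud.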
Not a lecture Friday. Just praise for the Jepsen testing framework. If you're not familiar with it and you work on a system that isn't just a monolith, you really need to read up on it. Ideally, start using it.
I recently brought up this model in a conversation about product adoption and market fit. I think it's a great way to understand how people adopt technology. It's super important to understand that your late market fundamentally thinks differently about your product than your early market does. The late market wants something different, and if you don't understand the right time to pivot, you'll either get passed by a competitor or alienate your early adopters too soon and sink.
There's a great lecture by Grace Hopper I watched years ago. Well, the NSA has released a much improved version in two parts.
It's well worth the time. She's a pioneer whose understanding of systems, both digital and physical, covers things we still get wrong. So many of the issues she raises are only now reemerging as problems we're beginning to talk about. I come away with something new just about every time I rewatch these.
That's so cool! I've obviously not reproduced it, but if it's true, that's such an amazing insight. I wonder what's clicking and what impact those clicks have on things in their vicinity.
If there's one field that works hard at understanding computer performance but also has insane opsec to keep a near-ironclad grip on the things it learns, it's high-frequency trading. I'm not competent enough to understand how being faster makes you more money, but some people are apparently doing it. Here's at least one paper offering a glimpse into what they're doing. Some interesting insights in there too.