There are so many lessons you learn from making software from scratch. One is that the biggest reason things are so slow is that there are too many layers of indirection. Libraries and frameworks are all so generic, trying to remain flexible, that they end up doing a lot of extra, unnecessary work. It gets worse when developers then make generic things on top of these generic tools to try and leave the real problem solving to someone else or, worse, to users. Lots of code spent doing work the problem doesn't require. That, or code endlessly copying memory back and forth for no reason. Deserialize to an object, copy all the data to a different object, pass that object to a function that allocates a different object which copies some or all of the data over, take that object and serialize it to pass it to a different system that deserializes it, and so on.
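As a contrived C sketch of that copy chain (the struct names and layering are hypothetical, just to show the shape), here it is next to the direct version:

```c
/* Hypothetical copy-chain: each layer allocates and copies instead of
 * using the data where it already lives. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char name[64]; int quantity; } WireRecord; /* "deserialized" input */
typedef struct { char name[64]; int quantity; } DomainItem; /* internal model */
typedef struct { char name[64]; int quantity; } ReportRow;  /* outgoing DTO */

static DomainItem *to_domain(const WireRecord *w) {
    DomainItem *d = malloc(sizeof *d);  /* copy #1 */
    memcpy(d, w, sizeof *d);
    return d;
}

static ReportRow *to_report(const DomainItem *d) {
    ReportRow *r = malloc(sizeof *r);   /* copy #2 */
    memcpy(r, d, sizeof *r);
    return r;
}

int main(void) {
    WireRecord w = { "widgets", 3 };

    /* The layered version: two allocations and two copies to print two fields. */
    DomainItem *d = to_domain(&w);
    ReportRow *r = to_report(d);
    printf("%s x%d\n", r->name, r->quantity);
    free(d);
    free(r);

    /* The direct version needs none of it. */
    printf("%s x%d\n", w.name, w.quantity);
    return 0;
}
```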
It's reasonable to assume that the amount of work a computer does is equal to the number of lines of code you write, but that's wildly inaccurate. Writing your own code from scratch gives you a better feel for how much work a single library or language invocation is doing for you. Things like rounded corners, drop shadows, async/anonymous functions, format parsing like JSON or SVG, or using HTTPS. So much overhead. How much of it is really useful to the actual problem you're solving? Is that utility worth the cost to performance?
He's also right about the gulf between web and desktop documentation for novices. Doubly so if you try and program for Linux. Same with tooling. Lots of great developer tools and documentation have been created for web development, but desktop development really hasn't improved that much over the last thirty years. If anything, the rent extraction being done by code signing has made it significantly worse. Doing any of the common handmade projects like making an image viewer or text editor will make it painfully obvious how much room there is here for better solutions. "It was hard for them so it should be hard for you," or whatever.
The reality is more banal. Great documentation and tooling, like any great product, requires a lot of skill and time to produce. Who's paying for it? I maintain this site for free because I'm doing it for me and I'm a bit of a freak who spent six hours after work writing and rewriting a couple paragraphs about a conference talk I watched to help collect and organize my thoughts. Most people expect to get paid for their hard work, especially when they're good at what they do. A lot of open source is driven by new developers looking to get a job in the industry. That's why every few weeks a new text-to-speech system comes out and a year or two later it's unsupported and dead. Every few weeks a new JavaScript framework comes out and a year or two later it's unsupported and dead. Nobody's writing developer books because developers aren't paying for books. Blogs can only document whatever fits in a 500-2000 word essay by a hobbyist or freelancer, because web advertising is awful for everyone involved and LLMs are killing what's left of it. Nobody's figured out how to make programming tutorials work on YouTube, so that path to paying your bills by documenting things doesn't work either. The same reality plays out for tools. What's the last software development tool you bought?
Take all that together and that's why docs tend to consist of whatever limited documentation gets added when a new API is created (sometimes down to just some banter on a mailing list), and why tools are more a byproduct than a product.
Even Microsoft now treats their desktop operating system as nothing but a cash cow. That's why Valve was able to steal desktop gaming from a company with enough resources to succeed in the console wars. Microsoft has fundamentally given up on its desktop operating system, along with the developers who build for it. People keep saying Linux. What about it? Have you ever created a normal desktop application for Linux? Not a command line utility or backend server. A proper application you could sell to end users. Something with graphics and audio like a video editor or game. There's a reason Valve is investing in Proton to reimplement the entire Win32 API on Linux. The Linux kernel understands API compatibility, but the rest of the ecosystem treats constant breaking changes as a feature, not a bug. "Everything should be open source," they say. Get a job doing something else all day to pay for rent and food, then volunteer to create me world class software for free. Oh, and every few years you should be forced to completely redesign that software because the ecosystem decided to rewrite how IPC works or whatever. Or be one of the handful of open source celebrities living bohemian on nothing but a thousand or so a month in donations, if they're lucky.
To understand your outcomes, look at your incentive structure.
This is more of what I was talking about when discussing "If You're Going to Vibe Code, Why Not Do It in C?" Though this piece has more data points and doesn't waffle on about anecdotes and open questions. Good piece to continue the thought.
Great rundown on a lot of the things going on in hardware fabs and how they could be leveraged for a supply chain attack. Should you be worried? Sure, but at this point, in most contexts, I'd be more worried about the rampant fraud in supply chain management than about targeted implants.
From CVE-2023-38606 it became pretty clear that modern high-end chips are huge amalgams of in-house, open, and closed design components that no one person really understands. Essentially hard software. There's a good chance that chip designs are becoming subject to all the same attacks complex software faces, but without the ability to inspect the final artifact.
In terms of exploits, component swaps don't even have to be subtle on most devices. While data centers seem like promising targets, their components also get far more security review than automotive, energy, manufacturing, healthcare, and consumer devices, where you could do plenty of damage with much less scrutiny. Besides, data centers are highly concentrated and highly reliant on infrastructure to and from the outside like fiber lines, electricity, and water. There are many simpler ways to lay siege to a fortress than subterfuge.
It's a kind of macabre comedy watching the concepts of zero trust go mainstream from technology into society. The real world has been built on trust for thousands of years, and watching people deliberately undermine that is breeding a fairly unhealthy ecosystem they get to live in. Grifting's path from niche hack, to corruption, to metagame, to the core of society has been darkly interesting to watch.
Not a fan of using "stupid" to describe any of this. There's no point insulting anyone, since even he admits (and I'd echo) that we all go through a phase of thinking about software architectures like this. As you get better and better at software development, you can conceptualize more and more of a system in one go. At one point, breaking a string on commas was a hard problem for me. Now I can conceptualize complex stateful distributed systems.
The problem is that breaking them down by thinking in terms of serial actors is becoming an ever more inefficient way of having the hardware do the work these days. Computers are basically only getting wider and wider, not much faster. They started doing this around 2005.
By wider, I mean in parallel. Doing more of the same thing all in one go. Performing dozens or even hundreds of the same operation in just one execution. In this case he's talking about memory, but it applies to all shared resource contention: compute, memory, disk, and network. Doing one thing at a time just has incredibly high overhead when you see it play out in real contexts, where the code is usually tasked with operating on hundreds or thousands of that thing.
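Here's a minimal C sketch of what wider means in practice: the same addition done one element at a time versus four lanes per instruction with SSE intrinsics (x86-only, and everything here is my own illustration rather than something from the piece):

```c
/* One add at a time versus four adds in a single instruction. */
#include <immintrin.h>
#include <stdio.h>

#define N 8

int main(void) {
    float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[N] = {10, 20, 30, 40, 50, 60, 70, 80};
    float out[N];

    /* Narrow: N separate add operations. */
    for (int i = 0; i < N; i++)
        out[i] = a[i] + b[i];

    /* Wide: N/4 add instructions, each handling four lanes at once. */
    for (int i = 0; i < N; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
    }

    for (int i = 0; i < N; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}
```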
Think of it like this. You're tasked with taking groceries from your vehicle to the house. Are you going to do it one condiment at a time? Well, we often program like this to make the tasks easier to conceptualize. We design and build a system to pick out a single remaining item and carry it to the house. Then we make a system that knows where everything goes and delegates items as they arrive to systems that each know how to put an item on a specific shelf. We then connect all those systems together and ship. This would be for loops to the top, conditionals to the bottom.
If you instead loaded bags as you shopped by destination shelf, you could carry many bags in together. Your dispatcher can hand a bag to each stocker as you get the next load, and each stocker can basically dump the contents on the shelf and neaten up. If your bandwidth gets wider from going to the gym, your program gets even faster until it's a single fetch and load. This would be conditionals to the top, for loops to the bottom.
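A rough C sketch of the two shapes, using the grocery analogy with made-up item and shelf names:

```c
/* "For loops to the top, conditionals to the bottom" versus the inverse. */
#include <stdio.h>

typedef enum { SHELF_FRIDGE, SHELF_PANTRY } Shelf;
typedef struct { const char *name; Shelf shelf; } Item;

/* Shape 1: loop over items, dispatch each one individually. */
static void put_away_item(const Item *it) {
    switch (it->shelf) {
    case SHELF_FRIDGE: printf("fridge: %s\n", it->name); break;
    case SHELF_PANTRY: printf("pantry: %s\n", it->name); break;
    }
}

static void put_away_one_at_a_time(const Item *items, int n) {
    for (int i = 0; i < n; i++)
        put_away_item(&items[i]);
}

/* Shape 2: items arrive already grouped by destination, so each group
 * is one tight loop with no per-item branching. */
static void put_away_by_group(const Item *fridge, int nf,
                              const Item *pantry, int np) {
    for (int i = 0; i < nf; i++) printf("fridge: %s\n", fridge[i].name);
    for (int i = 0; i < np; i++) printf("pantry: %s\n", pantry[i].name);
}

int main(void) {
    Item mixed[] = { {"milk", SHELF_FRIDGE}, {"rice", SHELF_PANTRY},
                     {"eggs", SHELF_FRIDGE}, {"salt", SHELF_PANTRY} };
    put_away_one_at_a_time(mixed, 4);

    Item fridge[] = { {"milk", SHELF_FRIDGE}, {"eggs", SHELF_FRIDGE} };
    Item pantry[] = { {"rice", SHELF_PANTRY}, {"salt", SHELF_PANTRY} };
    put_away_by_group(fridge, 2, pantry, 2);
    return 0;
}
```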
You may have never thought of sorting at the store, instead sorting on a counter at home. That's because you're thinking about the store system and the home system discretely. If you think about them together, a more optimal solution is possible. Instead of randomly loading your basket, you can load it by destination, making the unloading much faster later.
When he's talking about arena allocators, he's talking about the overhead of deallocating hundreds of objects one at a time versus setting next_alloc_index = 0. It's significantly faster if you have a data boundary like a rendered video frame or a backend API endpoint handler. Any time you've got a distinct work boundary, you really shouldn't be cleaning as you go; instead do all the setup, all the work, and then all the teardown.
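A minimal bump-style arena sketch in C, built around that next_alloc_index = 0 reset; the capacity, alignment, and per-frame loop are my own assumptions:

```c
/* Allocate by bumping an index; free everything with a single assignment. */
#include <stddef.h>
#include <stdio.h>

#define ARENA_CAPACITY (1 << 20)  /* 1 MiB backing buffer */

static unsigned char arena[ARENA_CAPACITY];
static size_t next_alloc_index = 0;

static void *arena_alloc(size_t size) {
    size = (size + 15) & ~(size_t)15;  /* keep allocations 16-byte aligned */
    if (next_alloc_index + size > ARENA_CAPACITY)
        return NULL;                   /* out of space for this frame/request */
    void *p = &arena[next_alloc_index];
    next_alloc_index += size;
    return p;
}

static void arena_reset(void) {
    next_alloc_index = 0;  /* "deallocate" every object at once */
}

int main(void) {
    /* Once per rendered frame or per handled request: */
    for (int frame = 0; frame < 3; frame++) {
        char *scratch = arena_alloc(256);
        snprintf(scratch, 256, "frame %d scratch data", frame);
        puts(scratch);
        arena_reset();  /* teardown is a single assignment, not N frees */
    }
    return 0;
}
```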
This is why it makes sense not to explicitly close files or sockets, or even deallocate memory, if your program is a short-lived application like a command line utility. The operating system will clean up everything for you all at once when you exit, much faster than you can request each resource be cleaned up one at a time. In effect, your program's memory is running in an arena allocator and your file descriptors are being managed in a pool. (That's why you usually get the same ID back if you close one and open a new one.)
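A tiny POSIX-only C sketch of that descriptor pool behavior; open() hands back the lowest unused descriptor number, which is why a close followed by an open usually returns the same ID:

```c
/* Close-and-reopen reuses the descriptor number; exit cleans up the rest. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd1 = open("/dev/null", O_RDONLY);
    printf("first open:  fd %d\n", fd1);

    close(fd1);
    int fd2 = open("/dev/null", O_RDONLY);
    printf("after close: fd %d\n", fd2);  /* usually the same number as fd1 */

    /* Deliberately not closing fd2: for a short-lived utility the kernel
     * reclaims every descriptor and all memory at exit anyway. */
    return 0;
}
```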
A great successor talk roughly 15 years in the making. I've linked before to his talk, The Coming War On General Computation. You don't need to watch that talk before this one. It's just there in case you want to go back and understand the history.
The thing that I think has really helped the explanation over all those years is being able to list decades of concrete examples, as he does now. Everyone complains about having too many texting apps. When you explain that there's no good technical reason the Discord app can't send messages to WhatsApp or WeChat users (we used to text SMS back and forth between dozens of phone manufacturers just fine), I've found some people start to get it. People live and die by stories. Explaining concepts leaves too much to the imagination. If you had explained any of these now-real things to people as hypotheticals, you'd be labeled an alarmist since it doesn't align with their existing worldview. Surely such a thing could never happen because…
Well it did. And it continues to. Worse, it's accelerating.