Quick refresher on queueing theory. Really take some time and do the math he's doing yourself by hand; that's the big skill from this talk. If you just let him do it for you, you won't learn it. Working through it builds the ability to use Little's Law in your own programming and design work. The simple summary: always limit request concurrency. Specifically, he shows how to do this by leveraging TCP's congestion control.
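To make that concrete, here's the kind of back-of-the-envelope arithmetic I mean, as a tiny Python sketch. Little's Law says L = λW: items in the system equal arrival rate times time in the system. The numbers are made up for illustration, not from the talk.

```python
# Little's Law: L = lambda * W
# (average requests in flight = arrival rate * average time each spends in the system)

arrival_rate = 200.0   # requests per second hitting the service (made-up number)
service_time = 0.05    # average seconds a request spends in the system (made-up)

in_flight = arrival_rate * service_time
print(f"~{in_flight:.0f} requests in flight on average")      # ~10

# Flip it around to size a concurrency limit: capping in-flight requests at 8
# bounds sustainable throughput at 50 ms per request.
concurrency_limit = 8
max_throughput = concurrency_limit / service_time
print(f"cap of {concurrency_limit} -> at most {max_throughput:.0f} req/s")   # 160
```

Do a couple of these by hand and the relationship between latency, throughput, and concurrency stops being abstract.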
Can't agree more. The failure mode I see again and again in asynchronous services is not limiting queue sizes, including the request queue. You never want an infinite queue. Honestly, you usually don't want queues at all, you want stacks, because stacks prioritize liveness, not fairness. If someone shows up with a thousand things to do, a stack ensures that everyone who shows up after them with the odd request still gets served first.
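Here's a minimal sketch of what I mean by a bounded LIFO buffer for requests; the class and numbers are mine, not from the talk, and a real service would want locking and smarter load shedding.

```python
from collections import deque

class BoundedLifo:
    """Bounded last-in-first-out buffer: newest work is served first,
    and arrivals beyond the cap are rejected instead of queueing forever."""

    def __init__(self, max_size: int):
        self.max_size = max_size
        self._items = deque()

    def push(self, item) -> bool:
        if len(self._items) >= self.max_size:
            return False              # shed load: tell the caller to back off
        self._items.append(item)
        return True

    def pop(self):
        return self._items.pop()      # newest item first (LIFO)

buf = BoundedLifo(max_size=100)
if not buf.push("request-42"):
    print("overloaded, rejecting")    # far better than an unbounded queue
```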
The saga continues. I wasn't aware of RRLP and LPP. Why would you add these to the spec? At least Apple, by designing all their phone silicon in-house these days, is moving in a direction that curbs this. It's gotten to the point where I really can't suggest leaving connections to the outside world running when you're not using them. This constant abuse and dehumanization at the hands of software has really led us to a dark place as a species. You can't readily see it, so why don't I stick a knife in your back for fun and profit?
Pretty frustrating that I have to stay connected when I'm on-call for work, though I have been looking at what sort of paging system I can set up for myself when out and about using either APRS or Meshtastic. Frustrating that all our infrastructure is being co-opted like this to build a combination surveillance state and pig-butchering farm.
It's a sales pitch, but it's also a pretty great talk about some of the design philosophy that's been going on inside TigerBeetle DB and the results they've found by designing explicitly around simulator-based testing and eschewing dynamic heap allocations. Almost too much wisdom here to summarize. A lot of it has been said elsewhere, like pointing out that maintenance costs significantly exceed the cost of building the software in the first place. "If it compiles, you've just begun." One interesting thing was thinking of developers as being either like painters or sculptors, additive or subtractive. Very cool idea I'd never thought of before.
One of the great things he talks about is using assertions the way they do. I've felt for a long time that this is really the direction better type checking should take. Not more type theory, but better first-class support for input validation and output assertions. Most of the reason I use types is to enforce what my interface accepts, but it's so limited: I accept int, int, string. That could be so many things. Type aliasing doesn't really help. Ideally I'd like to be able to make more compile-time assertions about what my function will and won't allow. Lock down that string, for example, to ensure it must be valid Base64 without having to copy all the data into a Base64 object or something. I also don't want to have to pointlessly typecast things that are already valid. Assertions used like this might just be the right solution, with great locality: be explicit at the top and bottom of your function about what you expect of the caller and about the data you think you're about to return. It's more verbose, but you can compile with assertions turned off if you really care about performance more than correctness (i.e. overwriting all my data with garbage is better than being slow).
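Roughly what I have in mind, sketched in Python since that's easy to show (the talk itself is about Zig-style assertions): be explicit at the top of the function about what you accept and at the bottom about what you hand back. The function and its parameters are made-up examples.

```python
import base64
import binascii

def decode_payload(payload: str, expected_len: int) -> bytes:
    # Preconditions: say exactly what this "string" and "int" must be.
    assert payload, "payload must be non-empty"
    assert expected_len > 0, "expected_len must be positive"

    try:
        raw = base64.b64decode(payload, validate=True)
    except binascii.Error as exc:
        raise ValueError("payload is not valid Base64") from exc

    # Postcondition: state what we believe we're about to return.
    assert len(raw) == expected_len, "decoded payload has unexpected length"
    return raw
```

Verbose, yes, but the contract lives right next to the code instead of in a type I'd have to copy all my data into.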
Tangentially, he's almost restating Joe Armstrong's motto, "Let it crash." Both he and the Go devs are right that a reliable system doesn't crash. That's true, but the key they're both pointing at is that you need to properly handle errors to get there. Developers routinely don't. We have a lot of data on this: the biggest source of faults in Java programs, for example, is the exception handlers. Why? Because we don't test our error-handling code. Why? Because we usually only test happy paths. Why? Because we don't usually write robust tests of all the contrapositives. This is why fuzz testing is so effective. Why? Because building and breaking are different skill sets.
That's what he's getting at when he starts talking about hackers. Fuzz tests (or the simulator in TigerBeetle's case) are inherently adversarial. That's how you test something: you don't test for compliance, you probe for deviance. But doing that requires you to be systematic, not just adding a unit test whenever you add a function. The key Joe Armstrong found was that you need supervisory processes. A process that hits an error should just crash. Any error that can be handled by the calling code is really just control flow pretending to be an error.
No, when you run out of memory, or the host is offline, or the disk is full, or whatever, the code doing the thing is almost always not the code that should recover. Instead you want a supervision tree: one or more programs responsible for that program, whose job is figuring out what to do when their child process crashes. How to properly recover, like blocking new connections, retrying the process, or halting further action and alerting a human. You can actually see us arriving at this haphazardly in container fleets that crash the instance and launch a new one during a fault. That's probably the most naive policy, but the concept is right: an explicit supervisor (the container orchestrator) handles the failure, not some half-baked try-catch block, because we know devs can't routinely get error handling right (or processes would never crash). That's one of Joe's greatest contributions from his thesis and his language, Erlang.
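A toy version of the shape in Python's multiprocessing, just to make it concrete: the worker doesn't try to recover, and the supervisor owns the policy for restarting, backing off, or giving up and alerting a human. Everything here (names, restart counts, the simulated fault) is mine, not Erlang's actual supervisor behaviour.

```python
import multiprocessing as mp
import time

def worker():
    """Do the actual work. On an unrecoverable error, just crash."""
    raise RuntimeError("disk full")   # simulate a fault this code can't fix

def supervise(max_restarts: int = 3):
    """The supervisor owns the recovery policy, not the worker."""
    for attempt in range(1, max_restarts + 1):
        child = mp.Process(target=worker)
        child.start()
        child.join()
        if child.exitcode == 0:
            return                    # worker finished cleanly
        print(f"child died (exit {child.exitcode}), restart {attempt}/{max_restarts}")
        time.sleep(2 ** attempt)      # back off before retrying
    print("giving up: stop accepting work and alert a human")

if __name__ == "__main__":
    supervise()
```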
There are so many lessons you learn from making software from scratch. One is that one of the biggest reasons things are so slow is too many layers of indirection. Libraries and frameworks are so generic, trying to remain flexible, that they end up doing a lot of extra unnecessary work. It gets worse when developers then build generic things on top of these generic tools to try to leave the real problem solving to someone else, or worse, to users. Lots of code spent doing work the problem doesn't require. That, or code endlessly copying memory back and forth for no reason: deserialize into an object, copy all the data into a different object, pass that object to a function that allocates yet another object and copies some or all of the data over, take that object and serialize it to pass it to a different system that deserializes it, and so on.
It's reasonable to assume that the amount of work a computer does is equal to the number of lines of code you write, but that's wildly inaccurate. Writing your own code from scratch gives you a better feel for how much work a single library or language invocation is doing for you. Things like rounded corners, drop shadows, async/anonymous functions, format parsing like JSON or SVG, or using HTTPS. So much overhead. How much of it is really useful to the actual problem you're solving? Is that utility worth the cost to performance?
He's also right about the gulf between web and desktop documentation for novices. Doubly so if you try and program for Linux. Same with tooling. Lots of great developer tools and documentation have been created for web development, but desktop development really hasn't improved that much over the last thirty years. If anything, the rent extraction being done by code signing has made it significantly worse. Doing any of the common handmade projects like making an image viewer or text editor will make it painfully obvious how much room there is here for better solutions. "It was hard for them so it should be hard for you," or whatever.
The reality is more banal. Great documentation and tooling, like any great product, require a lot of skill and time to produce. Who's paying for it? I maintain this site for free because I'm doing it for me and I'm a bit of a freak who spent six hours after work writing and rewriting a couple of paragraphs about a conference talk I watched, just to collect and organize my thoughts. Most people expect to get paid for their hard work, especially when they're good at what they do. A lot of open source is driven by new developers looking to get a job in the industry. That's why every few weeks a new text-to-speech system comes out and a year or two later it's unsupported and dead. Every few weeks a new JavaScript framework comes out and a year or two later it's unsupported and dead. Nobody's writing developer books because developers aren't paying for books. Blogs only document whatever fits in a 500 to 2,000 word essay by a hobbyist or freelancer, because web advertising is awful for everyone involved and LLMs are killing what's left of it. Nobody's figured out how to make programming tutorials work on YouTube, so that path to paying your bills by documenting things doesn't work either. The same reality plays out for tools. What's the last software development tool you bought?
Take all that together and that's why docs tend to consist of whatever limited documentation gets added when a new API is created (sometimes down to just some banter on a mailing list), and why tools are more a byproduct than a product.
Even Microsoft now treats their desktop operating system as nothing but a cash cow. That's why Valve was able to steal desktop gaming from a company with enough resources to succeed in the console wars. They have fundamentally given up on their desktop operating system, along with the developers who build for it. People keep saying Linux. What about it? Have you ever created a normal desktop application for Linux? Not a command line utility or a backend server. A proper application you could sell to end users, something with graphics and audio like a video editor or a game. There's a reason Valve is investing in Proton to bring the entire Win32 API to Linux. The Linux kernel understands API compatibility, but the rest of the ecosystem treats constant breaking changes as a feature, not a bug. "Everything should be open source," they say. Get a job doing something else all day to pay for rent and food, then volunteer to create me world-class software for free. Oh, and every few years you should be forced to completely redesign that software because the ecosystem decided to rewrite how IPC works or whatever. Or be one of the handful of open source celebrities living bohemian on a thousand or so a month in donations, if you're lucky.
To understand your outcomes, look at your incentive structure.
This is more of what I was talking about when discussing If You're Going to Vibe Code, Why Not Do It in C?. Though this piece has more data points and doesn't waffle on about anecdotes and open questions. Good piece to continue the thought.
Great rundown on a lot of what goes on in hardware fabs and how it could be leveraged for a supply chain attack. Should you be worried? Sure, but at this point, in most contexts, I'd be more worried about rampant fraud in supply chain management than about targeted implants.
From CVE-2023-38606 it became pretty clear that modern high-end chips are huge amalgams of in-house, open, and closed design components that no one person really understands. Essentially hard software. There's a good chance that chip designs are becoming subject to all the same attacks complex software has but without the ability to inspect the final artifact.
In terms of exploits, component swaps don't even have to be subtle on most devices. Data centers seem like promising targets, but their components also get far more scrutiny; you could do comparable damage in automotive, energy, manufacturing, healthcare, and consumer devices that get much less. Besides, data centers are highly concentrated and highly reliant on infrastructure running to and from the outside, like fiber lines, electricity, and water. There are many simpler ways to take a fortress than subterfuge.
It's a kind of macabre comedy watching the concept of zero trust go mainstream, from technology into society at large. The real world has been built on trust for thousands of years, and people deliberately undermining that are festering a fairly unhealthy ecosystem that they themselves have to live in. Watching grifting go from niche hack, to corruption, to metagame, to the core of society has been darkly interesting.
Not a fan of using "stupid" to describe any of this. There's no point insulting anyone, since even he admits (and I'd echo) that we all have to go through thinking about software architectures like this. As you get better and better at software development, you can conceptualize more and more of a system in one go. At one point, splitting a string on commas was a hard problem for me. Now I can conceptualize complex stateful distributed systems.
The problem is that breaking systems down by thinking in terms of serial actors is becoming an ever more inefficient way of getting the hardware to do the work. Computers are basically only getting wider and wider, not much faster, and they've been on that trajectory since around 2005.
By wider, I mean in parallel: doing more of the same thing all in one go, performing dozens or even hundreds of the same operation in a single execution step. In this case he's talking about memory, but it applies to all shared resource contention: compute, memory, disk, and network. Doing one thing at a time just has incredibly high overhead when you see it play out in real contexts, where the code is usually tasked with operating on hundreds or thousands of that thing.
Think of it like this. You're tasked with taking groceries from your vehicle into the house. Are you going to do it one condiment at a time? Well, we often program like this because it makes the tasks easier to conceptualize. We design and build a system to pick out a single remaining item and carry it to the house. Then we make a system that knows where everything goes and delegates items as they arrive to systems that each know how to put an item on a specific shelf. Then we connect all those systems and ship. This is for loops at the top, conditionals at the bottom.
If you instead loaded bags as you shopped, grouped by destination shelf, you could carry many bags in together. Your dispatcher can hand a bag to each stocker while you fetch the next load, and each stocker can basically dump the contents on the shelf and neaten up. If your carrying capacity gets wider from going to the gym, your program gets even faster, until it's a single fetch and load. This is conditionals at the top, for loops at the bottom.
You may never have thought of sorting at the store, instead sorting on a counter at home. That's because you're thinking about the store system and the home system discretely. If you think about them together, a more optimal solution is possible: instead of randomly loading your basket, you can load it by destination, which makes everything much faster later.
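The whole analogy as a toy Python sketch, with made-up names: the first version carries one item at a time (for loop on top, conditionals underneath), the second loads by destination up front so each shelf gets handled in one pass (the grouping plays the role of the conditionals on top, with the for loops underneath).

```python
from collections import defaultdict

def put_away_one_at_a_time(items):
    """For loop on top, conditionals underneath: one walk to a shelf per item."""
    trips = []
    for name, kind in items:
        if kind == "cold":
            trips.append(("fridge", [name]))
        elif kind == "dry":
            trips.append(("pantry", [name]))
        else:
            trips.append(("cupboard", [name]))
    return trips

def put_away_in_batches(items):
    """Group by destination first, then one pass per shelf: one trip per shelf."""
    bags = defaultdict(list)
    for name, kind in items:
        bags[kind].append(name)       # "load the bags by destination"
    return list(bags.items())

groceries = [("milk", "cold"), ("rice", "dry"), ("eggs", "cold"),
             ("flour", "dry"), ("mugs", "other")]
print(len(put_away_one_at_a_time(groceries)))   # 5 trips
print(len(put_away_in_batches(groceries)))      # 3 trips
```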
When he's talking about arena allocators, he's talking about the overhead of deallocating hundreds of objects one at a time versus setting next_alloc_index = 0. It's significantly faster if you have a natural data boundary, like a rendered video frame or a backend API endpoint handler. Any time you've got a distinct work boundary, you really shouldn't be cleaning as you go; do all the setup, then all the work, then all the teardown.
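A toy bump allocator in Python, just to show the reset-the-index shape; a real arena lives in a systems language and hands out raw memory, so treat this as a sketch of the idea rather than an implementation.

```python
class Arena:
    """Toy bump allocator over one preallocated buffer. Allocation bumps an
    index; freeing everything from a frame/request is resetting that index."""

    def __init__(self, capacity: int):
        self.buffer = bytearray(capacity)    # one allocation up front
        self.next_alloc_index = 0

    def alloc(self, size: int) -> memoryview:
        start = self.next_alloc_index
        if start + size > len(self.buffer):
            raise MemoryError("arena exhausted for this frame")
        self.next_alloc_index = start + size
        return memoryview(self.buffer)[start:start + size]

    def reset(self):
        self.next_alloc_index = 0            # "free" everything at once

arena = Arena(capacity=64 * 1024)
for _ in range(3):                # e.g. one iteration per frame or per request
    scratch = arena.alloc(1024)
    scratch[:5] = b"hello"
    arena.reset()                 # teardown is one assignment, not N frees
```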
This is why it makes sense not to explicitly close files or sockets, or even deallocate memory, if your program is a short-lived application like a command line utility. The operating system will clean up everything for you all at once when you exit, much faster than you can request each resource be cleaned up one at a time. In effect, your program's memory is running in an arena allocator and your file descriptors are being managed in a pool. (That's why you usually get the same ID back if you close one and open a new one.)
A great successor talk roughly 15 years in the making. I've linked before to his talk, The Coming War On General Computation. You don't need to watch that talk before this one. It's just there in case you want to go back and understand the history.
The thing I think has really helped the argument over all those years is being able to list decades of concrete examples, as he now does. Everyone complains about having too many texting apps. When you explain that there's no good technical reason the Discord app can't send messages to WhatsApp or WeChat users (we used to text SMS back and forth between dozens of phone manufacturers just fine), I've found some people start to get it. People live and die by stories; explaining concepts leaves too much to the imagination. If you'd explained any of these now-real things to people as hypotheticals, you'd have been labeled an alarmist, since it didn't align with their existing worldview. Surely such a thing could never happen because…
Well it did. And it continues to. Worse, it's accelerating.
Lending to firms and individuals engaged in the production of goods and services – which most people would imagine was the principal business of a bank – amounts to about 3 per cent of that total.
The article calls out Gary's Economics. I would second that suggestion. Excellent channel! The linked video is his most popular and a great place to start.
I also really don't like many of the reconstructions because they're garishly oversaturated. That part is actually pretty simple to understand if you just explain it as a skill issue; I imagine it like the Ecce Homo restoration. Combine that with how, as with dinosaur or Neanderthal reconstructions, you have to be either too literal or too artistic. Both are wrong, but having no nuanced context, people take the first one they see as absolute truth, and then all future explanations are pitted against that first exposure.
I like that we attempt to synthesise what we know, but I wonder whether reconstructions are more helpful or harmful to understanding. For example, the skin-on-bone reconstructions of dinosaurs depicted in Jurassic Park form much of what people think they know about dinosaurs. Yet those are really unlikely to be what any of them looked like. For one thing, we have evidence for feathers. For another, if you go in reverse and draw modern animals in the same style based only on their skeletons, you get similarly wild reptilian results. Do these reconstructions help?
The title of the video, the premise, the cover image… it's definitely not the best sales pitch, and I know it's not normally something I'd post here. Why post it then?
A bunch of my friends struggle with dating. What Adam's put together here is actually really great, especially since he wrote an entire book and then just decided to give it away for free. It's unfortunately a Google Doc, and I don't feel comfortable sharing a PDF version without his permission. Seriously though, check out How to survive dating in the 21st century. It's genuinely good dating advice. Maybe not perfect, but a hell of a lot better than a lot of the advice grifters will try to sell you. I'm posting this here so I can point my friends at it. Check it out!
This just sounds like rapid application development (RAD) again. (Everything old is new again.) Nothing against it, but it's been kind of funny watching the vibe coders essentially reinvent Visual Basic or Dreamweaver. That's not a knock against either of those. It's just that as you hear the argument play out, you get back to, "Why am I using a language at all?" Yeah, that's what RAD was all about. Then you have to ask, "Why isn't that the dominant way we program?" It could just be that those older tools weren't as good as vibe coding is. From my experience, though, it was because they suffered at scale, in stability, and from the labour shortage.
On the scale side, as the thing gets bigger and bigger, you need to actually understand and control what the computer's doing. I got my first job building and maintaining a Microsoft Access database. It's very easy to click and drag in a button or dropdown, pick the table it loads from, or add a bit of VBA for an element's event handlers. The problem is doing naive things like loading every record in the system into a dropdown, or doing a ten-way join between tables to run reports. Not using transactions and rollbacks properly in batch operations. Trying to use a shared network drive as a client-server protocol and then trying to manage serial write isolation by shouting down the hall. Add to that inexpertly and constantly trying to wrangle incompatible date parsing between machines by guessing at and testing OS setting changes. You get the idea. All of these were solvable, but I didn't really understand everything I was doing, and looking back, now that I do, there's really no way to fix it all without a significant rewrite. Systems like this work great when you first build them and then fall apart a few years later once they have thousands or millions of records.
The stability part stems from that. You've got someone building without really understanding what they're doing, and the system's stability reflects it. I'd routinely leave work with the system not properly working. I shipped a broken update a couple of times while working on it. Lots of regressions. The business put up with it because I was making minimum wage and the sunk cost fallacy makes cutting your losses hard. Now imagine a company like SAP or Oracle shows up with a tool that does roughly what your in-house solution does but with real robustness. It doesn't break every other day. It's expensive, but it's a lot more reliable for managing all the data your company has to deal with. You don't need an expert on the tool; you can just train staff who already deliver more direct business value to work with it instead of your in-house thing. It's not as seamless or streamlined, but it's close enough, and it doesn't depend on magic incantations from the one person in the whole company able to fix things when they go wrong.
The labour shortage is an interesting one. We went through about a twenty-year period where software developers were hard to come by. Businesses wanted more of them than the economy had. That meant learning to code could get you a really nice salary. Right now it looks like we're past that; there are now tens of thousands of unemployed programmers just in the United States of America. I left that job for a number of reasons, but I'm not currently eager to go back to managing an Access database at minimum wage. What does this mean? Not sure yet. It might even be unrelated. I left and then, years later, was brought back to quickly fix a report. I fixed it in about thirty minutes and billed an hour. Nobody's touched the system since I left. Nobody besides me understands how the thing works; they've all just learned to use it to do their jobs. If it had been vibe coded, going back, I wouldn't have understood it either. Food for thought.
Both of the installations mentioned are actually pretty cool.
Still not a fan of how high-end physical art is valuable largely insofar as it facilitates tax evasion and money laundering. Still upset at all the assholes who used and continue to use the technology to destroy the lives of working-class people. Still think fanatic techbros are something we as a society really need to address at some point. Still think buying a receipt for a link to a JPEG is absurd. Though I guess "certificates of authenticity" and the similar "deeds to real estate on the moon" have been with us for a while.
On the other hand, this extremely convoluted way of reopening those financial loopholes (as it would have to be) has provided a genuine window for high-end physical artists to make a lot of expensive art installations feasible for a little while, without as many middlemen taking a cut. That's pretty cool. All the also-rans with ugly random character generators have finally gone home, and now there's some space to appreciate the real work that's been going on.
I enjoy Python and JavaScript precisely because their most popular runtimes include a REPL and, by extension, a built-in debugger. That alone provides many of the live coding features he's talking about. They're not perfect, and TypeScript is running the other direction in the name of boxing people into VSCode, but those examples can help if you're unfamiliar with the languages and runtimes in the Clojure, Erlang, and Smalltalk lineage.
But it's slow... The refrain I can hear many people saying is that these languages are slow. No, the language is an abstract concept; you're confusing the runtime, or the build artifacts of a given compiler, with the language. To be fair, the talk also blurs the two as shorthand. CPython is currently really slow; no debate from me there. PyPy is much faster. Nuitka is faster still. And Numba is likely faster than what you'd write in Rust or C++ while taking a fraction of the time to code. Idiomatic language use can also lead to slow code, but that's a skill issue.
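For anyone who hasn't seen it, this is roughly what reaching for Numba looks like: the same Python loop, JIT-compiled to machine code on first call. The function is a made-up toy; measure your own workload before believing anyone's speed claims, including mine.

```python
import numpy as np
from numba import njit

@njit
def sum_of_squares(xs):
    # A plain Python loop, but compiled to machine code the first time it's called.
    total = 0.0
    for x in xs:
        total += x * x
    return total

data = np.random.rand(10_000_000)
print(sum_of_squares(data))   # first call pays the compile cost;
                              # subsequent calls run at native speed
```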
No, the language runtime being slow isn't generally the issue. There are billion-dollar companies running Python at scale in production. It just drives up the hardware costs and limits what you can do on a single machine for a reasonable price in a reasonable amount of time. A reasonably priced computer can still do an insane amount of compute, though. Erlang programs run on a large share of the computers in the global telephone network. Plenty of games grossing a million dollars or more have been built in Python, Lua, and GML. That said, why waste the compute unnecessarily? I'm constantly drawn to Jai, Zig, and Odin precisely because you shouldn't need to pay a runtime penalty in a production environment.
What I'd really like is the ability to split production from development further when it comes to the runtime. Build systems often already have debug and release builds; Zig even has the concepts of ReleaseSafe and ReleaseFast. Now extend that: imagine attaching to a running production system with your development runtime, or the ability to both interpret and compile your language as truly first-class primitives. Really design the language around iterative, highly introspective systems with injectable runtime binding.
An ideal would be a zero-cost development runtime: you only pay the runtime cost when you attach to the program. You might object that production builds do a bunch of optimizations. Sure, but what's to say you can't hijack the control flow and feed it to an interpreter? Then you can run the source code that went into those optimized routines in a manner that provides full reflection, introspection, and manipulation. Even if a dozen functions have been inlined and fused, you could run an interpreted version of them by patching the landing sites. The environment would look exactly like you'd dropped in an import pdb;pdb.set_trace(), but it would be dynamically patching the optimized code paths over to the interpreter. I'm sure it's way more complicated to get the optimization scoping right, but there's likely a reasonable tradeoff, since optimizations usually need to bound their search space anyway to prevent combinatorial explosions.
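You can fake a very crude version of this in Python today by patching one call site at runtime so a single path runs through the debugger while everything else stays on the fast path. The names here are hypothetical and this is nothing like the optimizer-aware version I'm describing, but it's the same gesture.

```python
import functools
import pdb

def hot_path(x: int) -> int:
    # Stand-in for an optimized, compiled routine.
    return x * x + 1

def attach_interpreter(fn):
    """Wrap one routine so calls to it drop into the debugger first."""
    @functools.wraps(fn)
    def landed(*args, **kwargs):
        pdb.set_trace()               # step through the "slow path" version
        return fn(*args, **kwargs)
    return landed

# "Patch the landing site": only this one code path pays the cost.
hot_path = attach_interpreter(hot_path)
```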
Now go a step further: you can iteratively program some of that code. Like incremental builds, but without having to restart the process. Bring in some of that Bret Victor hot-dispatch looping. Add automatic data structure visualizations by giving more standard library data types first-class debugging visualizations. That's kind of what Python's __repr__() class methods were all about, but I've more often seen them abused in ways that make debugging harder, hiding things for aesthetics' sake, rather than easier, gathering and presenting the rich inner workings of a given data structure.
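What I mean by pointing __repr__() at debugging rather than aesthetics, with a made-up class: surface the state you'd actually want at a debugger prompt instead of prettifying it away.

```python
class RingBuffer:
    """Tiny fixed-capacity buffer, kept simple for the sake of the example."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []
        self.dropped = 0

    def push(self, item):
        if len(self.items) >= self.capacity:
            self.items.pop(0)         # evict the oldest item
            self.dropped += 1
        self.items.append(item)

    def __repr__(self):
        # Show the inner workings, not a tidied-up summary that hides them.
        newest = self.items[-1] if self.items else None
        return (f"RingBuffer(capacity={self.capacity}, len={len(self.items)}, "
                f"dropped={self.dropped}, newest={newest!r})")
```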
The ability to essentially connect to your application like you're SSHing into it. To run individual functions through the live interpreter, to build them up in an editor and debug them live. The ability to address what Erlang would term processes, but which could be called threads, coroutines, or workers: monitor their call graphs, interrupt them with breakpoints, kill them, spawn new ones, or blue-green upgrade them in place, all through the attached runtime harness. Almost like independent processes, but with robust IPC.
Zig, Jai, and Go took a small step forward with build systems that can quickly compile and run the program, but that's still not great at scale. Builds in very large projects can still take a while, especially as the use of generics expands. In these languages I haven't yet hit full Rust or C++ hour-long build cycles, but they're still not instant. Not being able to attach to the compiled program, introspect it, and modify it is really limiting. I shouldn't be tempted to add print statements and restart the program. I should be able to start the program once when I sit down to work and then edit, debug, and evolve it while it's running. Not hot reloading, but proper multiprocess software development. That probably also means state snapshots and time-travel debugging. I'm usually only working on a handful of code paths at a time; the rest of the program should run as a fully optimized build, with only those sections on the interpreted slow path.
If any of this sounds cool, check out the talk. It's not a talk about what languages you should be using; it's a talk for language developers, asking them to really look back at the cool things that already exist. The case it makes is that if your language doesn't have these sorts of features, you're spending a lot of extra, unnecessary time fiddling with the work around the actual work.
This is exactly what I've been talking about when I say you need to practice more. This! It isn't just about drawing. It's about practicing your craft. It's about how you become great at anything: programming, music, art, speaking Spanish, playing basketball, whatever. You just need to do it. The more you practice, the better you'll become.
The internet has made measuring yourself, getting bogged down in the meta, and endlessly distracting yourself with theory and critique far too easy and far too fun. Stop sharpening your axe and just start cutting shit. Sure, it'll be rough and ugly. That's how you get better. You cannot just think yourself better. You have to practice.
He's right on the money complaining that technology has held many people back. Too many people who decide they really want to learn a skill get distracted until they give up, because the internet keeps telling them about all the technology they supposedly need. It creates an artificial impediment to actually doing the thing. They think they need a stylus hooked up to Photoshop with the right set of brushes, or a MIDI keyboard hooked up to a DAW with the right set of VSTs, instead of things as cheap and easily obtainable as a pencil and some paper, or a used guitar or keyboard. Heck, a stick and some mud, or any drummable surface, is all you need.
For programmers, it's getting all wrapped up in trying to download and use VSCode, or trying to install Linux, or whatever, instead of opening a simple text editor and just hacking out a script to automate something or make a little webpage or game.
The best part about a pencil or an instrument is that the moment you start interacting with it, you start to create something. Something ugly and raw, because your first time doing anything is going to be awful, but you quickly see the possibilities. It's real. You aren't faffing, you're failing, and failing is the first step to improving.
Sure, once you're practicing all the time, you can start to improve the process and use resources to learn new things to try. But I often tell people considering a gym membership to start by going for a walk every day. If you can't commit to a walk, be honest: you're not really going to commit to the gym. The same goes for practice. If you just start by watching lots of videos or buying fancy gear or whatever, you're not likely to stick with it. You'll have spent a bunch of time and money and still be right where you started. You'll know how to talk the talk, maybe even look the part, but you still can't walk the walk. You won't be able to create anything of your own worth celebrating until you're regularly practicing.
So please, just start practicing. It's a much better use of your limited time on earth than spending it consuming things other people make. The things you've made will outlive you; the things you've consumed or only thought about die with you. Change only happens when we physically do things, when we create and build. Not for others, but for ourselves. Create things you enjoy. Share them if you like. But do it first and foremost for yourself, because making things makes us happy.
I'm some goober typing words in a text editor and you're now reading them. That's the power of creation. And very rarely, someone like you might really enjoy what someone like me made, but none of us can enjoy anything if nobody creates it. Draw the pictures you want to see. Play the notes you want to hear. Make the videos you want to watch. Build the furniture you want to use. Write the works you want to read. Fabricate the clothes and accessories you want to wear. But start small, make it a habit, and create things, because nobody can stop you.
I have no practical application for this knowledge, but it's one of those really cool obscure bits of Windows internals. Like knowing about alternate data streams in NTFS, or how you can't normally name files things like con, aux, or prn, because path resolution treats devices like these as globally available. Why? Because Windows has its roots in MS-DOS, which borrowed from CP/M, and it takes backwards compatibility very seriously.