This is the thing people who haven't run a Rails app for years don't appreciate. I went through the Next.js pages router to app router migration on a production app. That wasn't a version bump, it was a rewrite across a different mental model.
Rails upgrades are painful but the path is documented, the deprecation cycle gives you a full minor version to fix warnings before they become errors, and the team usually knows where the sharp edges are.
The Ruby version management story is actually solid too. rbenv/asdf pin files make it hard to accidentally run the wrong Ruby version, which removes a whole class of environment drift issues you don't even realize you have until you've fixed them.
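Concretely, the pin is just a file checked into the repo. Version numbers here are illustrative:

```
# .ruby-version (read by rbenv and chruby)
3.3.4

# .tool-versions (read by asdf)
ruby 3.3.4
```

The version managers shim `ruby` so that whatever runs inside the project directory matches the pin, which is what closes off the drift.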
The Java parallel is apt. Joda-Time dominated the ecosystem for about 8 years before JSR 310 landed in Java 8 (2014). One thing that helped there was a clear, single release target.
What I keep thinking about with Temporal is the adoption timeline question isn't really 'is it specced?' anymore, it's 'what minimum runtime version do I need?' Node.js, Deno, Bun all need to ship it stably, and then the practical floor for usage is wherever most prod environments are. The polyfill situation (@js-temporal/polyfill and others) doesn't really collapse until that happens.
So the speccing is done but I think we're still a couple of LTS cycles away from it being genuinely boring to reach for Temporal.
The practical pain with Web Streams in Node.js is that they feel like they were designed for the browser use case first and backported to the server. Any time I need to process large files or pipe data between services, I end up fighting with the API instead of just getting work done.
The async iterable approach makes so much more sense because it composes naturally with for-await-of and plays well with the rest of the async/await ecosystem. The current Web Streams API has this weird impedance mismatch where you end up wrapping everything in transform streams just to apply a simple operation.
Node's original stream implementation had problems too, but at least `.pipe()` was intuitive. You could chain operations and reason about backpressure without reading a spec. The Web Streams spec feels like it was written by the kind of person who thinks the solution to a complex problem is always more abstraction.
It's news to me that anyone actually uses the web streams in node. I thought they were just for interoperability, for code that needs to run on both client and server.
> Two years ago Cloudflare released an API for creating servers in JavaScript. Now every modern JavaScript cloud provider supports it.
This is so ridiculously far from the truth lol. Every JS runtime after node has been championing web APIs and that’s how you get the fetch API’s Request/Response outside the browser.
The --mount=type=cache for package managers is genuinely transformative once you figure it out. Before that, every pip install or apt-get in a Dockerfile was either slow (no caching) or fragile (COPY requirements.txt early and pray the layer cache holds).
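A sketch of what that looks like for pip (the base image and cache path are illustrative; the target follows pip's default cache location):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
# The cache mount persists pip's download cache on the builder daemon
# across builds, without the cache ever landing in an image layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
```

You still get the layer-cache benefit of copying requirements.txt early, but a changed requirements file now re-resolves against a warm package cache instead of re-downloading everything.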
What nobody tells you is that the cache mount is local to the builder daemon. If you're running builds on ephemeral CI instances, those caches are gone every build and you're back to square one. The registry cache backend exists to solve this but it adds enough complexity that most teams give up and just eat the slow builds.
The other underrated BuildKit feature is the ssh mount. Being able to forward your SSH agent into a build step without baking keys into layers is the kind of thing that should have been in Docker from day one. The number of production images I've seen with SSH keys accidentally left in intermediate layers is genuinely concerning.
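For illustration, a sketch of the ssh mount (the repo URL is a placeholder; build with `docker build --ssh default .`):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.20
RUN apk add --no-cache git openssh-client
# Forwards the host's SSH agent for this RUN step only; no key material
# is ever written into a layer.
RUN --mount=type=ssh \
    git clone git@github.com:example/private-repo.git /src
```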
There is something wrong with the industry in which we think that, when a production build requires SSH keys, the problem is that the keys might leak into the build artifact.
Those intermediate layers are usually part of the artifact. Try exporting an image with docker save and investigate what’s inside. This is all documented in a mostly comprehensible manner in the OCI specs.
I'm afraid you're missing my point, though. A high quality build system takes fixed inputs and produces outputs that are, to the extent possible, only a function of the inputs. If there's a separate process that downloads the inputs (and preferably makes sure they are bitwise identical to what is expected), fine, but that step should be strictly outside the inputs to the actual thing that produces the release artifact. Think of it as:

final output = reproducible_build(pinned sources, pinned dependencies)

(And now, unless you accidentally hash your credentials into the expected hash, you can't leak credentials into the output!)
Once you have commingled it so that it looks like:
final output, intermediate layers = monolithic_mess(credentials, cache, etc)
Then you completely lose track of which parts are deterministic, what lives in the intermediate layers, where the credentials go, etc.
Docker build is not a good build system, and it strongly encourages users to do this the wrong way, and there are many, many things wrong with it, and only one of those things is that the intermediate layers that you might think of as a cache are also exposed as part of the output.
It was confusing of you to say "build artifact" to refer to the container itself in this context. Sure, you're not wrong, because the container is also a build artifact, but in the context of CI, "build artifacts" usually means the output of running the build using the container.
Hence my confusion about what you meant -- no one's saying ssh keys are in the CI build artifacts. But obviously they can be in the container as layers if people do it wrong, which is bad.
We're talking about the same thing basically. Yes fully defining your inputs to the container by passing in the keys is a good solution.
I think there's a lot of confusing terminology in your comment.
> the container is also a build artifact
By "build artifact" I mean the data that is the output of the build and gets distributed to other machines (or run locally perhaps). So a build artifact can be a tarball, an OCI image [0], etc. But calling a container a build artifact is really quite strange. A "container" is generally taken to mean the thing you might see in the output of 'docker container ls' or similar -- they're a whole pile of state including a filesystem, a bunch of volume mounts, and some running processes if they're not stopped. You don't distribute containers to other machines [1].
> in context of CI, the output of running the build using the container
I have no idea what you mean. What container? CI doesn't necessarily involve containers at all.
> no one's saying ssh keys are in the CI build artifacts. But obviously they can be in the container as layers if people do it wrong, which is bad.
If the build artifact is an image, and the keys are in the image, then the keys are in the build artifact.
> Yes fully defining your inputs to the container by passing in the keys is a good solution.
Are you suggesting doing a build by an incantation like:
$ docker run --rm -v "[sources]:/input" -v "$HOME/.ssh:/root/.ssh" my_builder:latest /input/build_my_thing
This is IMO a terrible idea. A good build system DOES NOT PROVIDE KEYS TO THE BUILD PROCESS.
Yes, I realize that almost everyone fudges this because we have lots of tools that make it easy. Even really modern stuff like uv does this.
$ uv build
whoops, that uses optional credentials, fetches (hopefully locked-by-hash) dependencies, and builds. It's convenient for development. But for a production build, this would be much better if it was cleanly split into a fetch-the-dependencies step and a build step and the build step ran without network access or any sort of credentials.
Container is standard terminology to refer to a running instance of an image. Yes I was being imprecise, substitute container for oci image. But you seem hung up on frivolity and not getting what I'm saying. We are agreeing with each other and just talking in circles. I can see that you don't see that but that's ok. All of this was because I misunderstood what you said initially when you referred to build artifact as the oci image when I thought you were talking about other sorts of build artifacts.
I mean using the CI system to pass in keys or creds. Yes, it's better to build the image with dependencies, but sometimes you can't do that.
I hate the nanny state behavior of docker build and not being allowed to modify files/data outside of the build container and cache, like having a NFS mount for sharing data in the build or copying files out of the build.
Let me have side effects, I'm a consenting adult and understand the consequences!!!
In my experience the problem is how people write them. Descriptive statements get ignored because the model treats them as context it can reason past.
"We use PostgreSQL" reads as a soft preference. The model weighs it against whatever it thinks is optimal and decides you'd be better off with Supabase.
"NEVER create accounts for external databases. All persistence uses the existing PostgreSQL instance. If you're about to recommend a new service, stop." actually sticks.
The pattern that works: imperative prohibitions with specific reasoning. "Do not use Redis because we run a single node and pg_notify covers our pubsub needs" gives enough context that it won't reinvent the decision every session.
Your AGENTS.md should read less like a README and more like a linter config. Bullet points with DO/DON'T rules, not prose descriptions of your stack.
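For illustration, a sketch of what that looks like. The rules themselves are hypothetical examples of the shape, not recommendations:

```markdown
## Persistence (DO/DON'T)
- DO use the existing PostgreSQL instance for all persistence.
- DON'T add Redis: we run a single node and pg_notify covers our pubsub needs.
- DON'T create accounts for external database services. If you are about to
  recommend one, stop and ask first.
```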
Hah, it's somewhat ironic how this is almost the exact opposite of the prevailing folk wisdom I've read for the last 1-2 years: that you should never use negative instructions with specific details because it overweights the exact thing you're trying to avoid in the context.
Given my own experience futilely fighting with Claude/Codex/OpenCode to follow AGENTS.MD/CLAUDE.MD/etc with different techniques that each purport to solve the problem, I think the better explanation really is that they just don't work reliably enough to depend on to enforce rules.
Fair point on the contradiction. The "never use negative instructions" wisdom comes from general prompting where mentioning the unwanted thing can increase its likelihood. AGENTS.md is a different context though, the model is reading persistent rules for a session, not doing a single completion where priming effects matter as much.
But you're right that "better" isn't "reliable." In practice it went from "constantly ignored" to "followed maybe 80% of the time." The remaining 20% is the model encountering situations where it decides the instruction doesn't apply to this specific case.
Honest answer is probably somewhere between "they don't work" and "write them right and you're fine." They raise the floor but don't guarantee anything. I still use them because 80% beats 20%, but I wouldn't bet production correctness on them.
The context window cost is the real story here. Every MCP tool description gets sent on every request regardless of whether the model needs it. If you have 20 tools loaded, that's potentially thousands of tokens of tool descriptions burned before the model even starts thinking about your actual task.
CLI tools sidestep this completely because the agent only needs to know the tool exists and what flags it takes. The actual output is piped and processed, not dumped wholesale into context. And you get composability for free - pipe to jq, grep, head, whatever.
The auth story is where MCP still wins though. If you need a user to connect their Slack or GitHub through a web UI, you need that OAuth dance somewhere. CLI tools assume you already have credentials configured locally, which is fine for developer tooling but doesn't work for consumer-facing AI products.
For developer workflows specifically, I think the sweet spot is what some people are calling SKILL files - a markdown doc that tells the agent what CLI tools are available and when to use them. Tiny context footprint, full composability, and the agent can read the skill doc once and cache it.
On my personal coding agent I've introduced a setup phase inside skills.
I distribute my skills with flake.nix and a lock file. This flake installs the required dependencies and sets them up. A frontmatter field defines the names of the secrets that need to be passed to the flake.
As it is, it works for me because I trust my skill flakes and skills are static in my system:
- I build a docker image for the agent, into which I inject the skills directory.
- Each skill is set up when building the image.
- Secrets are copied in before the setup phase and removed right after.
The jails vs containers framing is interesting but I think it misses why Docker actually won. It wasn't the isolation tech. It was the ecosystem: Dockerfiles as executable documentation, a public registry, and compose for local dev. You could pull an image and have something running in 30 seconds without understanding anything about cgroups or namespaces.
FreeBSD jails were technically solid years before Docker existed, but the onboarding story was rough. You needed to understand the FreeBSD base system first. Docker let you skip all of that.
That said, I've been seeing more people question the container stack complexity recently. Especially for smaller deployments where a jail or even a plain VM with good config management would be simpler and more debuggable. The pendulum might be swinging back a bit for certain use cases.
*everything. I've really been using it since 4.x. Imagine this: being able to upgrade a system in-place with freebsd-update from minor to major to minor version without everything breaking or having to say a prayer before. And that's just one thing I love about it. Clear separation of userland (/usr/local/etc), rock-solid stability in networking, zfs on root.
I had to do 'bonded' interfaces on Debian the other day. It's what, 5 different config files depending on which 'network manager' you use. In FreeBSD it's 5 lines in /etc/rc.conf and you're done.
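For reference, the rc.conf version of a bonded (lagg) setup looks roughly like this; the interface names and the choice of LACP are illustrative:

```
cloned_interfaces="lagg0"
ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"
```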
And don't even get me started on betting which distribution (ahem CentOS) will go away next.
I actually laughed out loud. Try upgrading CentOS to Rocky vs FreeBSD 11 to 15 (that's FOUR major versions, from 2017 I think), and tell me again how good it is.
In LTS environments where I need to upgrade OS's, FreeBSD is a no-brainer.
> I actually laughed out loud. Try upgrading CentOS to Rocky vs FreeBSD 11 to 15 ( that's FOUR major versions from 2017 I think ), and tell me again how good it is.
I laughed out loud, there is no in-place upgrade mechanism for that in those distros and never has been, that is the nature of those distros. They release patch/security updates until they go EOL, which is measured in units closer to decades than years.
I don’t have a problem with BSDs. That’s cool you like upgrading in place.
The best and most laugh-inducing part of your whole point is that centos now not only allows you to do in-place upgrades, that’s the whole fucking point.
So then what's the point of mentioning Rocky as CentOS's successor ? In what way is it 'succeeding' ? That you can do a fresh install of Rocky ? And those stuck on CentOS can't upgrade ? Really useful those decades of support if your distro goes belly up
You don’t know this ecosystem, clearly. I’m not going to explain it to you much more than I did.
Centos was the free version of red hat. Like redhat, centos never fucking ever offered in-place upgrades. Centos moved to stream as a sandbox for redhat, and rocky took over as the free redhat.
Ask an LLM or something, this level of ignorance is unbecoming.
You should go re-read this. All I said was I hated ubuntu. I don't even know what paradigm you're inquiring from at this point, and I have no clue how to answer your question.
We were never in agreement or disagreement. You've been arguing against a stance I don't have.
BSDs are cool. They pushed the OS ball forward on server/home computing, video game consoles, etc. Linux is also cool, they pushed the OS ball forward on server/home computing, video game consoles, etc.
There is a long and storied history of computer operating systems. This conversation has shown me you're not aware of said history. You should go learn yourself up some.
You actually said Rocky was a successor to CentOS as well, which is what I responded to. As someone that tried to upgrade CentOS to Rocky, I can tell you that it may succeed it in name only, if that's what you meant. Physically you have to start over. If you re-read my first reply, I said as much originally.
Weird, I feel I'm talking to an LLM with a limited context window.
> Centos didn’t go away. It changed. Rocky (et. al.) took the old centos role, and I see this as a win/win for everybody.

> Ubuntu is the disaster Linux distro, I won’t touch Ubuntu if I have any other option.
- Stable OS coupled with rolling packages. I am on the previous FreeBSD version (14.3-RELEASE, while 15 is out) but I have the very latest KDE.
- A ports collection where you can recompile packages whenever you're not happy with the default settings. Strict separation between packages and core OS. Everything that is from packages is in /usr/local (and this separation is also what makes the above point possible).
- ZFS on root as first-class citizen. Really great. It has some really nice side tooling too like sanoid/syncoid and bectl (the latter is part of the core OS even).
- jails for isolation (I don't really use it like docker for portability and trying things out)
- Clear documentation because there are no different distros. Very good handbook. I like the rc.conf idea.
- Common sense mentality, not constantly trying to reinvent the wheel. I don't have to learn a new init system and I can still use ifconfig. Things just work without constantly being poked around.
- Not much corporate messing around with the OS. Most of the linux patches come from big tech now and are often related to their goals (e.g. cloud compatibility). I don't care about those things and I prefer something developed by and for users, not corporate suits. No corporates trying to push their IP onto the users (e.g. canonical with their Mir, snaps etc)
- Not the same thing as everyone else has. I'm not a team player, I hate going with the flow. I accept that this sometimes comes with stuff to figure out or work around.
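The bectl point above deserves a sketch: boot environments give you an instant rollback for in-place upgrades. The release number and environment name here are illustrative, and the full freebsd-update flow has more steps than shown:

```
# create a boot environment before upgrading (bectl ships in the base system)
bectl create pre-upgrade
freebsd-update -r 14.3-RELEASE upgrade

# if the upgraded system misbehaves, reactivate the old environment and reboot
bectl activate pre-upgrade
shutdown -r now
```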
ZFS boot+root on Linux is amazing as well. It's kind of sad to see Linux Mint has moved away from supporting this in their installer, but it probably could still be done manually I guess. After upgrade, if something goes wrong? zfs rollback both to a snapshot made just before the upgrade and reboot.
I don't use freebsd full time, only in a VM, but all these things sound positive to me.
Docker's client/server design also allowed for things like Docker Desktop, which made the integration seamless with non-linux systems. Jails have nothing like that, so the only system that will ever run jails is FreeBSD. Also, I'm not up to speed enough to know, but do jails even have a concept of container images?
Plus a script to unpack the tarball somewhere and launch some entry point in a jail. Not conceptually hard, but the OCI spec has a bit more to it than that, and now we're into "write dropbox with rsync" territory...
I did some looking around, and I see that ocijail is a thing, so that's probably what I was looking for.
I'm using either Docker Compose or Docker Swarm without Kubernetes, and there's not that much of it, to be honest. My "ingress" is just an Apache2 container that's bound to 80/443 and my storage is either volumes or bind mounts, with no need for more complexity there.
> The jails vs containers framing is interesting but I think it misses why Docker actually won. It wasn't the isolation tech. It was the ecosystem: Dockerfiles as executable documentation, a public registry, and compose for local dev. You could pull an image and have something running in 30 seconds without understanding anything about cgroups or namespaces.
So where's Jailsfiles? Where's Jail Hub (maybe naming needs a bit of work)? Where's Jail Desktop or Jail Compose or Jail Swarm or Jailbernetes?
It feels like either the people behind the various BSDs don't care much for what allowed Docker to win, or they're unable to compete with it, which is a shame, because it'd probably be somewhere between a single and double digit percent userbase growth if they decided to do it and got it right. They already have some of the foundational tech, so why not the UX and the rest of it?
I think jails started as a tool of its time: roughly the same idea as virtualization, making isolated systems when dependencies start to diverge, but aimed at the problems of sysadmins managing their own systems, not at a quick developer experience.
Even if "jailsfiles" were created the ecosystem would need to start from scratch and sometimes it feels like people in the FreeBSD ecosystem have a hard enough time keeping ports somewhat up to date, let alone create something new.
Luckily Podman seems to support FreeBSD these days for docker images, but the Linux emulation might be a bit of a blocker, so not a 100% solution.
I've tried this... I haven't got much mileage out of it, sadly.
Many Linux syscalls are unemulated, and things like /proc/<pid>/fd/NN are not "magic symlinks" like on Linux, so execve on them fails (i.e. there is rudimentary /proc support, but it's not fully fleshed out).
TL;DR Linux containers on FreeBSD via the podman + linuxulator feel half baked.
For example, try using the alpine container... `apk upgrade` will fail due to the /proc issue discussed above. Try using the Fedora container `dnf upgrade` will fail due to some seccomp issue.
The future of containers on FreeBSD is FreeBSD OCI containers, not (emulated) Linux containers. As an aside, podman on FreeBSD requires sudo which kinda defeats the concept but hopefully this will be fixed in the future.
> Jails solve the isolation problem beautifully, but they don't have a native answer to shipping. That gap is real, and it's one of the main reasons the ecosystem around jails feels underdeveloped compared to Docker's world.
The link literally uses the term ecosystem. Several times actually.
The protocol vs service distinction matters most where version lifecycles create lock-in. When you depend on a service, you're at the mercy of their deprecation timeline — Heroku free tier, Google Reader, Parse. When you depend on a protocol, the worst case is you switch implementations.
The identity point in the discussion is spot on. The missing piece in most protocol-first architectures is a portable identity layer that doesn't just recreate the service dependency at a different level. DIDs and Verifiable Credentials are trying to solve this but adoption is glacial because there's no compelling consumer use case yet — it's all enterprise compliance stuff.
The XMPP vs Matrix debate is interesting but somewhat misses the point. Both protocols work. The reason Discord won isn't protocol superiority — it's that they solved the 'empty room' problem by piggy-backing on gaming communities that already had social graphs. Protocol design is necessary but not sufficient; you also need a migration path that doesn't require everyone to switch simultaneously.
The part about "dark flow" resonates strongly. I've seen this pattern play out with a specific downstream cost that doesn't get discussed enough: maintenance debt.
When someone vibe-codes a project, they typically pin whatever dependency versions the LLM happened to know about during training. Six months later, those pinned versions have known CVEs, are approaching end-of-life, or have breaking changes queued up. The person who built it doesn't understand the dependency tree because they never chose those dependencies deliberately — the LLM did. Now upgrading is harder than building from scratch because nobody understands why specific libraries were chosen or what assumptions the code makes about their behavior.
This is already happening at scale. I work on tooling that tracks version health across ecosystems and the pattern is unmistakable: projects with high AI-generation signals (cookie-cutter structure, inconsistent coding style within the same file, dependencies that were trendy 6 months ago but have since been superseded) correlate strongly with stale dependency trees and unpatched vulnerabilities.
The "flow" part makes it worse — the developer feels productive because they shipped features fast. But they're building on a foundation they can't maintain, and the real cost shows up after a delay. It's technical debt with an unusually long fuse.
The thing that strikes me about this thread is how many people are scrambling to evaluate alternatives they've never tested in production. That's the real risk with infrastructure dependencies — it's not that they might go closed-source, it's that the switching cost is so high that you don't maintain a credible plan B.
With application dependencies you can swap a library in a day. With object storage that's holding your data, you're looking at a migration measured in weeks or months. The S3 API compatibility helps, but anyone who's actually migrated between S3-compatible stores knows there are subtle behavioral differences that only surface under load.
I wonder how many MinIO deployments had a documented migration runbook before today.