It's already been beaten into acceptance that I have to use the Ticketmaster app (shockingly awful) or Dice app (not quite as bad but still sucks) to get into a lot of music venues in Boston.
But at one club they wanted me to install another app just to check my coat. I elected to hide it under some furniture instead lol
Looking at the commit dates (which seem to be derived from the original publication dates), the history seems quite sparse/incomplete(?). I mean, there have only been 26 commits since 2000.
It's related to commits actually having a parent-child structure (forming a graph) and timestamps (commit/author) being metadata. So commits 1->2->3->4 could be modified to have timestamps 1->3->2->4. I know GitHub prefers sorting with author over commit date, but don't know how topology is handled.
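To see that concretely, a minimal sketch in a throwaway repo (git reads the `GIT_AUTHOR_DATE`/`GIT_COMMITTER_DATE` environment variables, so a child commit can carry an earlier timestamp than its parent):

```shell
# Sketch: give a child commit a timestamp that predates its parent's,
# so topological order and timestamp order disagree.
git init demo && cd demo
git commit --allow-empty -m "parent"
GIT_AUTHOR_DATE="2020-01-01T00:00:00" \
GIT_COMMITTER_DATE="2020-01-01T00:00:00" \
git commit --allow-empty -m "child (backdated to 2020)"

# Walks the graph child-before-parent regardless of dates:
git log --topo-order --format="%h %ad %s" --date=short
```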
> It's related to commits actually having a parent-child structure (forming a graph) and timestamps (commit/author) being metadata.
Yeah, I think everyone is aware. It's just that the last couple dozen commits, to me, looked like they had been created in chronological order, so that topological order == chronological order.
> I know GitHub prefers sorting with author over commit date, but don't know how topology is handled.
The amendment 34 (by Markéta Gregorová, Greens/EFA) to the Sippel Report A10-0040/2026 significantly restricts the ePrivacy derogation (chat control extension until 2027).
It replaces Art. 3 para. 1 lit. a of Regulation (EU) 2021/1232: Processing (scanning) may only be
- strictly necessary for technologies to detect/remove known CSAM (hash matching, no unknown content),
- proportionate,
- limited to necessary technologies and content data.
The sims are really well done, the dynasty simulator especially. You can actually stress-test the argument instead of just nodding along. Appreciate the craft.
I have issues with the economics though.
The income model is calibrated from three separate literatures that were never estimated together. Different samples, different decades, different identification strategies. Then the big move (βIQ drops to 0.10, βW jumps to 0.65) gets asserted as a scenario and fed into the simulator like it’s an empirical result. The interactivity makes it feel rigorous, but you’re mostly just exploring the author’s priors.
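To make that concrete, here's a toy log-linear version (the 0.10/0.65 values are the essay's asserted scenario as I read it; the functional form, intercept, and names are my placeholders, not the essay's actual model):

```python
def log_income(iq_z, wealth_z, beta_iq, beta_w, intercept=11.0):
    """Toy log-linear earnings on standardized IQ and parental wealth.
    Illustrative only -- not the essay's estimated model."""
    return intercept + beta_iq * iq_z + beta_w * wealth_z

# The asserted scenario: IQ loading collapses, wealth loading dominates.
iq_edge     = log_income(1.0, 0.0, beta_iq=0.10, beta_w=0.65)  # +1 sd IQ
wealth_edge = log_income(0.0, 1.0, beta_iq=0.10, beta_w=0.65)  # +1 sd wealth
# With these coefficients a wealth advantage is mechanically worth
# 6.5x an IQ advantage -- the "finding" is baked into the inputs.
```

Whatever sliders the simulator exposes, a model this linear can only echo the coefficients you hand it.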
The skill premium has survived every automation wave we’ve thrown at it, including ones that felt just as terminal. ATMs didn’t kill bank tellers. US teller count went from ~300k to ~500k between 1970 and 2010 (see Bessen paper), because cheaper branches meant more branches.
The essay waves off Jevons with “human attention is fixed” but US legal spend is ~$400B/yr against ~$100B in estimated unmet need (LSC data). That’s 25% latent demand just sitting there at current prices. I wouldn’t call that saturated.
The “27.5% programmer decline” is doing a lot of work. BLS SOC 15-1251 (“computer programmers”) is a narrow legacy bucket that excludes software devs, DevOps, ML engineers, all of which grew. Total software dev employment (15-1252) was up in 2024 vs 2022. Classification artifact, not a labor market signal.
And the historical base rate on “this time the bridge closes for good” is… zero. Power looms, ag mechanization, manufacturing to services, analog to digital, etc.: each killed the old skill-to-capital channel and built a new one within a generation. You can’t just assert AI is different from all prior GPTs; you have to show the mechanism that prevents a new channel from forming. The essay doesn’t really do that for me.
The assortative mating argument cuts against itself imo. If credentials lose signal value, the institutions where sorting happens (elite unis, professional firms) lose sorting power too. The essay predicts mating shifts to “wealth directly” but… how exactly? Credentials were legible because institutions verified them. Strip the institution and you’d expect noisier matching, not tighter. The Fagereng et al. paper it cites is Norwegian data, which has among the lowest wealth inequality in the OECD. Not obvious that translates.
Again, I generally like the writeup, and I think the essay is right that capital returns are pulling away from labor income and that AI accelerates it. But “the bridge narrows and the crossing gets harder” is the defensible version. “Closes permanently within a decade” requires believing something unprecedented will happen on a specific timeline.
Thanks for the detailed reply, really appreciate it!
Re post-AGI world and coefficients: yup, totally agree. This isn't proper modelling. I just wanted something I could play around with to test my intuitions.
Re Jevons: OK, let's say that latent demand is freed up. It's still bounded by human purchasing power and the rate at which humans can actually consume the output.
I bounced off of this article because I didn't like the conclusion, then provided myself a rationalization that it was probably mostly AI generated. What inspired you to engage with the article more deeply? You agree with the conclusion, but not with any of the supporting arguments.
I'm also fascinated by your compliment of the dynasty simulator, which I found completely inscrutable. What kind of background knowledge would help in understanding it? Economics training?
The margin death spiral she describes (accepting 10-15% fees to cover payroll, then needing more clients at worse rates) is the same thing that killed mid-tier law firms and architecture studios.
The collective model is how freelance dev shops have worked for a decade already tbh.
Agent breaks sandbox, accesses the wrong repo, nobody watching.
This is basically why AI coding productivity stays flat at ~10% despite 93% adoption: you speed up one narrow thing and create new problems in review and oversight that eat the gains.
This is quite niche, but using d²C/dt² > 0 as the trigger is clever. In bond math, convexity tells you when small rate changes produce disproportionately large price moves; the same signature shows up when an agent hits a retry loop: costs accelerate way faster than the work justifies. The zero-variance check is the nice complement: a healthy agent has noisy cost-per-call, while a stuck one looks mechanically identical every time.
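A minimal sketch of pairing the two signals (names and thresholds are mine, not from the post; spend is modeled as (timestamp, cost) events, with a rising spend rate between calls as a discrete stand-in for d²C/dt² > 0):

```python
import statistics

def retry_loop_signature(events, var_eps=1e-6):
    """Flag a suspected retry loop when BOTH signals fire.
    `events` is a list of (timestamp_seconds, call_cost) tuples.
    Signal 1: spend rate keeps rising (cumulative cost is convex).
    Signal 2: near-zero variance in per-call cost (mechanically
    identical calls). All names/thresholds are illustrative."""
    if len(events) < 4:
        return False
    costs = [c for _, c in events]
    flat_costs = statistics.pvariance(costs) < var_eps  # signal 2

    # Signal 1: cost per unit time between consecutive calls
    # strictly increases, i.e. C(t) is convex.
    rates = [c / max(t1 - t0, 1e-9)
             for (t0, _), (t1, c) in zip(events, events[1:])]
    accelerating = all(r1 > r0 for r0, r1 in zip(rates, rates[1:]))

    return flat_costs and accelerating

# A retry loop: identical $0.02 calls fired faster and faster.
loop = [(0.0, 0.02), (4.0, 0.02), (6.0, 0.02), (7.0, 0.02), (7.5, 0.02)]
# Healthy traffic: varied costs at a steady pace.
steady = [(0.0, 0.01), (5.0, 0.05), (10.0, 0.02), (15.0, 0.08)]
```

Requiring both signals keeps a bursty-but-productive agent (noisy costs) from tripping the convexity check alone.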
Monolithic agent platforms that try to own everything will lose to composable stacks where you can swap each layer independently. The "Boss" orchestrator spawning specialized agents is the right call imo.
Open-source matters here because the defensible part isn't the orchestration UI, it's the accumulated process knowledge each org builds on top. Tried to map out why monolithic agent platforms are a trap: https://philippdubach.com/posts/dont-go-monolithic-the-agent...