
I recently saw a preserved letterpress printing press in person and couldn't help but think of the parallels to the current shift in software engineering. The letterpress allowed for the mass production of printed copies, exchanging the intensive human labor of manual copying for the work of setting type on the press.

Yet one thing did not change in this process: the press only made the production of text more efficient. The act of writing, constructing a compelling narrative, and telling a story were not changed by this revolution.

Bad writers are still bad writers; good writers still have a superior understanding of how to construct a plot. The technological ability to produce text faster never really changed what we consider "good" and "bad" in written literature; it just allowed more people to produce it.

It is hard to tell if large language models can ever reach a state where they have "good taste" (I suspect not). They will always reflect the taste and skill of the operator to some extent. Just because they let you produce more code faster does not mean they let you create a better product or better code. You still need good taste to create the structure of the product or codebase; you still have to understand the limitations of one architectural decision over another when the output is operationalized and run in production.

The AI industry is running on hype right now because it needs you to believe that this is no longer relevant. That Garry Tan producing 37,000 LoC/day somehow equates to producing value. That a swarm of agents can produce a useful browser or kernel compiler.

Yet if you just peek behind the curtain at the Claude Code repo and see the pile of unresolved issues, regressions, missing features, half-baked features, and so on, the limitations seem plainly obvious: Anthropic, with functionally unlimited tokens on frontier models, cannot use them to triage and fix its own product.

AI and coding agents are like the printing press in some ways. Yes, it takes some costs out of a labor intensive production process, but that doesn't mean that what is produced is of any value if the creator on the other end doesn't understand the structure of the plot and the underlying mechanics (be it of storytelling or system architecture).


Not sure how the US comes back from this.

Who will trust US treaties going forward?


I don't think we do. I think this is our Teutoburg Forest moment [1].

Part of the issue is there's no real opposition in the US to what's going on. The Democrats, as the controlled opposition party, aren't in opposition to the war (eg [2][3][4]); they just oppose the way it was initiated. In other words, they have a process objection, not a policy objection.

I've seen lamenting over Harris losing the election (as well as more than a few doing "stolen election") about how the world could be different. But US foreign policy is uniparty.

[1]: https://en.wikipedia.org/wiki/Battle_of_the_Teutoburg_Forest

[2]: https://www.aljazeera.com/news/2024/10/8/kamala-harris-says-...

[3]: https://www.democrats.senate.gov/newsroom/press-releases/lea...

[4]: https://www.nbcnews.com/politics/congress/hakeem-jeffries-wo...


Your “sources” are just mindless whataboutism that do not in any way provide evidence Harris/Democrats would have started this same idiotic war with Iran.

Democrats in Congress are currently almost universally opposed to the war in Iran. As the minority party, they are unable to stop it unilaterally. Budget obstruction is the single lever available to them, and given other issues like ICE, healthcare cuts, and federal layoffs, it can't be used for every issue, every time, without diluting that very limited power into irrelevance.

Talk about “controlled opposition”, given the blatantly obvious differences between the last two administrations, is a signal either of being uninformed or of a deliberate demotivational strategy.

Here are recent quotes from Schumer/Jeffries/Harris that for some reason you selectively chose not to include:

  "Trump’s actions in Iran will be considered one of the greatest policy blunders in the history of our country," - Chuck Schumer

  “The American people are sick and tired of the chaos, high costs and extreme Republican agenda. Donald Trump must end his reckless war of choice in the Middle East. Now.” - Hakeem Jeffries

  “In the last 48 hours Donald Trump has dragged America into a war that we don’t want” - Kamala Harris

[1] https://www.yahoo.com/news/articles/chuck-schumer-hakeem-jeffries-more-024256513.html?guccounter=1
[2] https://www.wpr.org/news/harris-iran-trump-dragged-america-w...

> Part of the issue is there's no real opposition in the US to what's going on. The Democrats being the controlled opposition party aren't in opposition to the war

Most emphatically yes. We've seen occasional bursts of spirited dissent but that's about it. As far as sustained opposition, it still seems that they're hoping to just wait out the clock for things to go back to "normal".

> But US foreign policy is uniparty

No, I'd say even with this senseless "war" the "uniparty" model has still become invalid with Trump. While the US fear industry ("news media") has been beating the drums against Iran for quite some time, the US military/intelligence community has resisted attacking. If we had a President Harris, I would bet that we would not be attacking Iran, especially in this manner - not because of Harris herself, but rather because she wouldn't have gutted the domain experts who come up with reality-based plans, and who have presumably been saying "If we overtly attack Iran they close the Strait and actually end up stronger".

I like to refer to that system as bureaucratic authoritarianism - no meaningful checks on government power itself, but there are checks on how it's exercised. The critical difference is that Trumpism is autocratic authoritarianism (especially the second round after he broke so many laws the first time without consequence) - the experts and other group-project stakeholders (eg Inspectors General) were all fired (or at the very least sidelined), and replaced with glaringly incompetent yes-men who execute any simplistic "plan" regardless how bad it is.


It'll partly depend on what internal housecleaning—or perhaps fumigation—and reform happens in the US.

While it is unlikely to occur, imagine the international effect if the US resoundingly impeached and removed a lawless president, and Congress formalized a lot of international agreements into statute rather than delegating so much to the executive branch.


Nah, this problem is systemic, and much older than the current administration. Or has everyone forgotten the "anthrax" in a test tube? The invisible WMDs? The fake news about soldiers tossing babies out of incubators? Setting up a web of lies and attacking is a foundational value of the United States.

I think this was the nail in the coffin. Not only has the US exsanguinated its military capability at the behest of Israel, everyone with half a brain watched closely as they took air defense (AD) systems out of the Gulf states and moved them into Israel. Taiwan, Japan, and South Korea are not morons; they will see the writing on the wall and move to make diplomatic peace with their neighbours (China) now that the US has keeled over with self-inflicted wounds.

It doesn't really matter what happens internally in the US now, everyone realizes that every four years the world will roll the dice.


That is not going to happen. Even if MAGA doesn't rig the midterms and the Democrats actually win something, they will just "reach across the aisle" and "work on healing our divided nation". Nobody will see any consequences for the suffering they caused.

What we've learned is that laws only matter if Congress chooses to enforce them.

> Not sure how the US comes back from this.

It shouldn't. The responsible course going forward is a constitutional convention and the dissolution of the United States.


A Constitutional Convention, by definition, would almost certainly not cause or require dissolution of the US. You could only effectively call a convention of people who explicitly do not want dissolution.

> Who will trust US treaties going forward?

Who trusted them before?


Comments are the ultimate agent coding hack. If you're not using comments, you're doing agent coding wrong.

Why? Agents may or may not read docs. They may or may not use skills or tools. But they will always read comments "in the line of sight" of the task.

You get free long term agent memory with zero infrastructure.
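A minimal sketch of what that "line of sight" memory can look like. The incident, dates, and numbers below are invented for illustration, not from any real codebase:

```c
#include <stdint.h>

/* NOTE (agent memory, 2025-06-01): keep the exponential backoff below.
 * A tight retry loop here previously amplified upstream 429s into a
 * cascading failure; the attempt cap keeps a stuck session from
 * hammering the API indefinitely. (Scenario invented for illustration.) */
enum { BASE_DELAY_MS = 100, MAX_ATTEMPTS = 5 };

/* Delay before retry `attempt` (0-based): 100, 200, 400, 800, 1600 ms;
 * -1 signals "give up". */
static int32_t backoff_delay_ms(int attempt)
{
    if (attempt >= MAX_ATTEMPTS)
        return -1;
    return (int32_t)(BASE_DELAY_MS << attempt);
}
```

Any agent (or human) touching this function sees the "why" without opening a tracker, wiki, or skills file.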


Agents and I apparently have a whole lot in common.

Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.

This only gets worse when the LLM captures all that information better than certain human colleagues somehow, rewarding the additional effort.


Right? It's infuriating. Nearly all of the agentic coding best practices are things that we should have just been doing all along, because it turns out humans function better too when given the proper context for their work. The only silver lining is that this is a colossal karmic retribution for the orgs that never gave a shit about this stuff until LLMs.

You are seeing very similar trends in GTM (go-to-market): suddenly everyone cares about data hygiene. But it's not like this shouldn't have always been a priority.


> It's infuriating. Nearly all of the agentic coding best practices are things that we should have just been doing all along

There's a good reason why we didn't though: because we didn't see any obvious value in it. So it felt like a waste of time. Now it feels like time well spent.


> Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.

"Self-descriptive code doesn't need comments!" always gets an eye-roll from me


Helping the AI is helping themselves. You're doing your job, the AI is helping with their job.

This isn't just great advice ⸻ it's terrific advice. I'd love to delve a little deeper.

Would you like me to draft a list of recommendations for how best to use comments?

I didn't even know there was a "three em dash". Bravo.

Huh. It's displayed taking up three cells in my terminal, but laid out as if its width were one cell. Irritating. I wonder if there are any other grapheme clusters that don't properly fit in two cells?


CJK text is typically rendered as 2 columns per character, but in general this is dependent on the terminal emulator

We figured out how to remove that crap in our ICLR 2026 paper: https://arxiv.org/pdf/2510.15061

This.

It's also annoying to have to go through this stack:

code -> blame -> commit message -> Jira ticket -> issue in Salesforce...

Or the even better "fixes bug NNNNN" where the bug tracking system referenced no longer exists.

Digging through other systems (if they exist) to find the nugget in an artifact is a problem for humans too.


Comments are great for developers. I like having as much design in the repo directly. If not in the code, then in a markdown in the repo.

Meanwhile, some colleagues: "Code should have as few comments as possible; the code should explain itself." (Conceptually not wholly wrong, but code can only explain HOW, not WHY, and even then often insufficiently.) All while having barebones/empty README.md files more often than not. Fun times.

Comments are great until they diverge from the code. The "no comments, just self-explanatory code" reaction comes from the trauma of having to read hundreds of lines of comments only to discover they have nothing to do with how the code actually works, because over time the code has received updates but the comments haven't. In that case it's better to just have no comments or documentation of any kind--less cognitive overhead. This is a symptom of broken culture, but the breakage is the same kind that has managers salivating over LLM vibeslop. So I totally get where your colleagues might be coming from. Working within the confines of how things actually are it could be totally reasonable.

So don’t do that.

I’m not saying comments are magic or anything. It takes work to keep them in sync with the code.

It’s a useful goal, not a rule that lands you in jail if you fail.

It doesn’t mean you can blindly trust comments. I treat all code, and comments, with skepticism until I can understand and run it.


This is honestly such a bad argument against comments.

I'm gonna note down my reasons for doing things and other information I deem useful, and if some other dipshit 5 years from now when I've moved on comes along and starts changing everything up without keeping the comments up to date that's their problem not mine. There was never anything wrong with my comments, the only thing that's wrong is the dipshit messing things up.

Doesn't matter what I do, the dipshit is going to mess everything up anyway. Those outdated comments will be the least of their worries.


> that's their problem not mine

IME unfortunately that's not actually the case. It very much is your problem, as the architect of the original system, unless you can get yourself transferred to a department far, far away. I've never managed that except by leaving the company.

To be clear, I don't believe it should be this way, but sadly unless you work in an uncommonly well run company it usually is.


I really can't imagine this ever becoming a real problem. Not once have I ever worked in a place where any kind of leadership would ever give a shit about comments nor anything else in the code itself. The lowest level leadership has ever gone is click a button to see if it works.

And if anyone has a problem with comments existing it's trivial to find/replace them out of existence. Literally a one minute job, if you actually think the codebase would be better without them.

This is such a humongous non-issue it's crazy man.


Leadership doesn't need to give a shit about the code to cause the cultural defect that leads to comments not being maintained. All they need to do is set the conditions which prevent code owners from having the agency to reject shoddy work. In my experience this always happens. It can manifest as either:

(1) "flat" organization where everyone owns everything and therefore nobody has the authority to reject a PR

or (2) "rubber stamp" culture where people who reject shoddy work are "not a team player" and therefore performance defective.

So far every company I've worked at has one or both of these symptoms. Working in the confines of those systems, it's not an irrational choice to decide that comments and other forms of documentation aren't worth trying to maintain, and are therefore detrimental.


Reject a PR?

I provide feedback on PRs. Then the owner of the PR adjusts it to accommodate my feedback and once I'm happy with it I approve it and we merge. If you're working in a place so cancerous that you can't just leave a comment on a PR reminding someone to update the comment they forgot to update I don't know why you're still there. This is called code review and it's common practice. If all you ever do is approve PRs then you're not doing code review and you might as well skip the whole PR step and let people merge into main as they please.

In any case your argument still just boils down to "I work with a bunch of stupid lazy dipshits" so why bother doing anything at all then? Write comments, don't write comments, write tests, don't write tests, do whatever the fuck you want because you're surrounded by useless dipshits and nothing you do matters anyway. Might as well write some comments for your own sake, everything's a ball of mud anyway it doesn't matter.

I'm gonna keep doing what I think is right in my sane corner of the world. And honestly I don't believe you. I think these excuses are just that. Excuses. I've been around quite a bit and haven't seen anything like you describe. Sure there's plenty of lazy dipshits but you don't have to sink to their level.


You may be a bit overconfident about how clear you will be with your comments.

The “dipshit” doesn’t mess everything up for fun. They don’t understand the comments written by the previous “dipshit” and thus are unable to update the comments.


Oh really? I'm overconfident in my ability to write and read simple clear text notes?

Here's what I think. I think you guys heard the "self-documenting code" BS and ate it up, and now you're grasping at straws to defend your cargo cult position, inventing these "problems" to justify it.

If you're looking at some code and there's a comment saying something that doesn't make sense to you, maybe that's a clue that you're missing a puzzle piece and should take a step back maybe talk to some people to make sure you're not messing things up? Maybe, for a non-dipshit, that comment they don't understand could actually be helpful if they put some effort into it?

Also just to be clear I don't think this is a likely occurrence unless someone doesn't know squat about the codebase at all - my comments generally assume very little knowledge. That's their whole purpose - to inform someone (possibly me) coming there without the necessary background knowledge.

It just isn't feasible to include the why of everything in the code itself. And it sure as hell is better to include some info as comments than none at all. Otherwise a bug will often be indistinguishable from a feature.

And I don't think dipshits mess things up for fun. I think they just suck. They're lazy and stupid, as most developers are. If I'm there I can use reviews etc to help them suck less, if I'm not they're free to wreck my codebase with reckless abandon and nothing I do will make any difference. I cannot safeguard my codebase against that so there's no point in trying and the fact that this is your argument should make you stop and reconsider your position because it's far fetched as fuck.


I agree with and appreciate your comment.

I’ll also note that I’ve worked with developers who didn't like git blame because someone might misinterpret the results. I think some people want excuses for poor work, rather than just working as correctly as possible.


And then you find out the dipshit that didn't keep the comments up to date was you all along

It wasn't.

Actually good naming does plenty to explain the why. And because it’s part of the code it might actually be updated when it stops being true.

How would you use good naming to explain this https://en.wikipedia.org/wiki/Fast_inverse_square_root#Overv...
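For reference, the routine behind that link, rendered here as a close C sketch of the well-known Quake III code: no amount of renaming explains the magic constant; that is precisely the job of a comment (or the linked article).

```c
#include <stdint.h>
#include <string.h>

/* Approximates 1/sqrt(x). The constant 0x5f3759df yields a good initial
 * guess by exploiting the IEEE 754 bit layout; one Newton-Raphson step
 * then refines it. Nothing in the identifiers can convey WHY this works. */
static float fast_inverse_sqrt(float x)
{
    uint32_t i;
    float y;
    memcpy(&i, &x, sizeof i);   /* reinterpret the float's bits as an int */
    i = 0x5f3759dfu - (i >> 1); /* the famous "magic" initial guess */
    memcpy(&y, &i, sizeof y);
    return y * (1.5f - 0.5f * x * y * y); /* one Newton-Raphson iteration */
}
```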

Or how would you name methods and variables to explain why some payment reconciliation process skips matching for transactions under 0.50 EUR and just auto-approves them, because the external payment processor rounds differently than the internal ledger at sub-euro amounts, creating mismatches that were flooding the finance team's exception queue in 2013, as explained under Jira issue ZXSV-12456, with more details known by j.doe@myorg.com? The threshold was chosen after analyzing six months of false positives; any higher and someone being undercharged doesn't get caught. I don't think autoApproveThreshold = 0.50 or anything like that would get the full context across, even if the rules themselves are all code.

I think surely you can have both! Code should explain itself as often as possible, but when you hit a wall due to some counter-intuitive workarounds being needed, or some business rules or external considerations that you need to keep track of, then comments also make sense. Better than just putting everything in a Jira issue somewhere, given that it often won't be read by you or others and almost certainly will not be read by any AI agents (unless you have an MCP or something, but probably uncommon), or spending hours trying to get the code to explain something it will never explain well. I've had people ask me about things that are covered in the README.md instead of reading it.
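Sticking with that invented reconciliation scenario (the threshold, ticket number, and contact are all hypothetical), a sketch of how the context could live next to the code, with the identifier carrying the "what" and the comment the "why":

```c
/* Transactions under 0.50 EUR are auto-approved rather than matched:
 * the external payment processor rounds sub-euro amounts differently
 * than the internal ledger, and the resulting mismatches flooded the
 * finance team's exception queue in 2013. Threshold chosen after six
 * months of false-positive analysis; any higher and real undercharges
 * slip through. Details: ZXSV-12456, j.doe@myorg.com (hypothetical). */
#define AUTO_APPROVE_THRESHOLD_CENTS 50

/* Returns nonzero when a transaction must go through reconciliation. */
static int requires_matching(int amount_cents)
{
    return amount_cents >= AUTO_APPROVE_THRESHOLD_CENTS;
}
```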


You’ve correctly identified that naming isn’t sufficient for all communication. Name the things that stay constant in the code and explain the things that vary with a particular implementation in version control messages. Version control as a medium communicates what context the message was written for, which is far more appropriate than comments.

> Name the things that stay constant in the code and explain the things that vary with a particular implementation in version control messages.

Then the question becomes how often we look in the version control history for the files that we want to touch.

Which of these is more likely:

A) someone digging into the full history of autoApproveThreshold and finding out that they need to contact j.doe@myorg.com or reference ZXSV-12456

B) or them just messing the implementation with changes due to not reviewing the history of every file they touch

If someone is doing a refactor of 20 files, they probably won't review the histories of all of those, especially if the implementation is spread around a bunch of years, doubly so if there are a bunch of "fixes" commit messages in the middle, merge commits and so on. I've seen people missing out on details that are in the commit log many, many times, to the point where I pretty much always reach for comments. Same goes for various AI tools and agents.

Furthermore, if you want to publish a bit of code somewhere (e.g. Teams/Slack channel, or a blog), you'd need to go out of your way to pull in the relevant history as well and then awkwardly copy it in as well, since you won't always be giving other people a Git repo to play around with.

It's not that I don't see your point, it's just that from where I stand with those assumptions a lot of people are using version control as a tool wrong and this approach neither works now, nor will work well for them in the future.

It's more or less the same issue as with docs in some Wiki site or even a separate Markdown file (which is better than nothing, definitely closer than a Wiki, especially if the audience is someone who wants an overview of a particular part of the codebase, or some instructions for processes that don't just concern a few files; but it's still far removed from where any actual code changes would be made, also a downside of ADRs sometimes).


> the code should explain itself.

This is a good goal. You should strive to make the code explain itself. To write code that does not need comments.

You will fail to reach that goal most of the time.

And when you fail to reach that goal, write the dang comments explaining why the code is the way that it is.


But you will also fail to keep the comments and code synchronized, and the comment will at some point no longer describe why the code is doing whatever it does

Which is why you're reviewing changes. I haven't memorized what every line of code does; if it was worth commenting, then it was confusing enough that it needed the comment, and so I'll read the comment to make sense of the code being changed. If I don't read the comment, that means the comment was too far from the confusing code.

Alternately, you can say the same about informative variable names or informative function names. "If I change the function then the name is no longer accurate". You don't say that because function names and variable names are short and clear and are close to the problem at hand. Do the same with comments.

Which is why the copilot hyper-verbosity is harmful. Comments need to be terse so your eyes don't filter them out as noise.


Yeah, my point has basically nothing to do with AI and is the argument against comment blocks in general. It's bad to store information in two places.

But copilot code review agent is pretty good at catching when code and comments diverge (even in unrelated documentation files).

Comments are mostly useful when they explain the why, not the what.

HOW vs WHY is a great distinction between design and documentation.

Gonna try and use that throughout my life. Thanks!


This is also a great way to ensure the documentation is up to date. It’s easier to fix the comment while you’re in the code just below it than to remember “ah yes I have to update docs/something.md because I modified src/foo/bar.ts”.

People moving docs out of code are absolutely foolish, because no one is going to remember to update them consistently, but the agent always updates comments in the line of sight consistently.

An agent is not going to know to look for a separate file to update unless instructed, and then your file is out of sync. Code comments keep everything in the line of sight, which makes it easy and foolproof.


Experience doesn’t leave me with any confidence that the long term memory will be useful for long. Our agentic code bases are a few months old, wait a few years for those comments to get out of date and then see how much it helps.

The great thing about agentic coding is you can define one whose entire role is to read a diff, look in contextual files for comments, and verify whether they’re still accurate.

You don’t have to rely on humans doing it. The agent’s entire existence is built around doing this one mundane task that is annoying but super useful.


Idk if u are serious.

Yes, let's blow another 5-10k a project/month on tokens to keep the comments up to date. The fact that AI still cannot consistently refactor without leaving dead code around, even after a self-review, does not give me confidence in comments…

Comments in code are often a code smell. This is an industry standard for a reason, and it isn't because of staleness. If you are writing a comment, it means the code is bad or there is irreducible complexity. It is about good design. Comments everywhere are almost always a flag.

Note, language conventions are not the same.


> “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”

That's revealing waaaay more than the agent needs to know.


Doesn't look like privileged information to me.

Seems to me like everyone's just grasping at straws to nitpick every insignificant little thing.


Hmm, I'm not sure if you're getting the parent's comment.

I think a big question is whether you want your agent to know the reasons for the guidelines you issue, or whether you want it to just follow them. In particular, giving an agent the arguments behind your orders might make it think it can question them, and so not follow them.


> If you're not using comments, you're doing agent coding wrong.

Comments are ultimately there so you can understand stuff without having to read all the code. LLMs are great when you force them to read all the code, and comments only serve to confuse. I'd say the opposite has been true in my experience: if you're not forcing LLMs to leave out comments entirely (and verifying they actually skip them; looking at you, Gemini), you're doing agent coding wrong.


You're wasting context doing that when a 3 line comment that the agent itself leaves can prevent the agent from searching and reading 30 files.

You're wasting context re-specifying what the code should already say, defining an implementation once should be enough, otherwise try another model that can correctly handle programming.

    > No, this is a bad solution.
This is a great solution. See: the EU and standardization on USB-C for power delivery, and the wider market effect. Yes, the market was heading in this direction, but EU legislation brought it over the line.

I wish all of the tooling vendors would support MCP prompts because that would solve this problem and provide a very good way of delivering aggregate feeds of skills -- dynamically, even.

Codex, for example, currently does not support this[0].

Then we can just point to an MCP server and have the MCP server dynamically compose the set of skills without needing to do any syncs, git sub-modules, etc.

[0] https://github.com/openai/codex/issues/5059


Yes, I agree that MCP-based prompt/skill delivery would be a very interesting direction.

If tooling vendors broadly supported MCP prompts, an MCP server could become a dynamic distribution layer for team-managed skills, which would remove a lot of sync-oriented workflow.

My current assumption is that we still need something Git-native today because:

- skills are mostly authored and reviewed in Git

- teams need provenance and governance around them

- tool support for MCP prompt delivery is still incomplete

So I see Harbor more as a practical system for the current ecosystem, not necessarily the final shape.


Also worth pushing for a more standardized skills command for CLIs, similar to --help, but for (agent/human) workflows: https://cliwatch.com/blog/designing-a-cli-skills-protocol (if you ship these with your CLI, you also get versioning out of the box, so to say)


I think TanStack Intent is quite close to that direction.

Packaging skills with libraries/CLIs and letting agents discover them from installed packages makes a lot of sense. I see Harbor as addressing a different layer on top of that: organizational collection, cataloging, provenance, governance, and safety.


Why stop at skills though? If you are trying to solve the provisioning problem for agent tools, shouldn't that also include MCP, commands, hooks, rules, etc in addition to skills?


Skills are not (only) just prompts; the more advanced skills include scripts or other assets as well.


Scripts are just text; skills are just text. Scripts can be inlined with the skill. Agent can create its own temp dir and extract the script to run.

Don't overthink it; it's all just text. I want to serve the text from HTTP instead of having to deploy via `git` and sync. I want to be able to dynamically generate that text on the server based on the identity of the user, their role, what team they're in, what repo they're working on.

I don't want static skills that users have to remember to sync and keep up to date.


I could be wrong, but I don't think there is anything in the skills spec that prohibits a skill from having binaries? A skill may even choose to embed them to be able to run in sandboxed environments.

Agreed on skills not being static. Of course, with the way the internet works, I don't want them to be too dynamic either :)


C# is the other direction, IMO.

I've been using C# since the first release, somewhere in the 2003/4 timeframe?

Aside from a few high-profile language features like LINQ, generics, and `async/await`, the syntax has grown, but the key additions have made the language simpler to use and more terse. Tuples and destructuring, for example. Spread operators for collections. Switch expressions and pattern matching. These are mostly syntactic affordances.

You don't have to use any of them; you can write C# exactly as you wrote it in 2003...if you want to. But I'm not sure why one would forgo the improved terseness of modern C#.

Next big language addition will be discriminated unions and even that is really "opt-in" if you want to use it.


> Next big language addition will be discriminated unions and even that is really "opt-in" if you want to use it.

I was excited for DU until I saw the most recent implementation reveal.

https://github.com/dotnet/csharplang/blob/main/proposals/uni...

Compared to the beauty of Swift:

https://docs.swift.org/swift-book/documentation/the-swift-pr...


The C# implementation is still early, and I think a lot of the boilerplate will end up being owned by source generators in the long term. The C# team has a habit of "make it work, then make it better": whatever v1 ships is a base capability that v2+ will make more terse. I'm OK with that; I'd rather have ugly unions than no unions (and yes, I already use OneOf).
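
For context, OneOf is a community NuGet library that approximates discriminated unions today; a minimal sketch of the pattern (assuming the `OneOf` package; the `ParsePort` example is illustrative):

```csharp
using OneOf;

// A result that is either a parsed value or an error message.
static OneOf<int, string> ParsePort(string raw) =>
    int.TryParse(raw, out var port) && port is > 0 and <= 65535
        ? port
        : $"'{raw}' is not a valid port";

// Exhaustive handling via Match; you cannot forget a case.
var message = ParsePort("8080").Match(
    port  => $"listening on {port}",
    error => $"config error: {error}");
```

A language-level DU would give the same exhaustiveness without the generic-arity ceremony, which is why the shape of the eventual C# proposal matters so much.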

Ah, Source Generators: after all these years they are still badly documented; when searching, you will most likely find the original implementation (meanwhile deprecated); the tooling is poor (string concatenation); and there are only a few great blog posts from .NET MVPs to rely on.

:shrug: we're using them very effectively and there are plenty of resources at this point.

Very useful for reducing boilerplate and we can do some interesting things with it. One use case: we generate strongly typed "LLM command" classes from prompt strings.


There are plenty of resources, outside Microsoft Learn that is, and the content is mostly understandable by those of us who have watched conference talks or listened to podcasts on the matter.

Having someone dive into incremental code generators today, with the best practices needed to avoid slowing down Visual Studio during editing, is a different matter.

They are certainly useful as a user; as the provider, Microsoft could improve the experience.


    > If US wanted to onshore routers, we could make it happen
It will take months if not years to get a product to market.

    > So, let’s ask again, why? Why is this jump concentrated in software about AI?...Money and hype
The AI field right now is drowning in hype and jumping from one fad to another.

Don't get me wrong: there are real productivity gains to be had, but the reality is that building small one-offs and personal tools is not the same thing as building, operationalizing, and maintaining a large system used by paying customers and performing critical business transactions.

A lot of devs are surrendering their critical-thinking faculties to coding agents now. This is part of why the hype has to exist: to convince devs, teams, and leaders that they are "falling behind". Hand over more of your attention (and $$$) to the model providers, create the dependency, shut off your critical thinking, and the loop manifests itself.

The providers are no different from the doctors who pushed OxyContin in this sense: make teams dependent on the product, and the more they use it, the deeper the dependency becomes. Junior and mid-career devs have their growth curves stunted and become entirely reliant on the LLM to perform even basic functions. Leaders believe the hype, lay off teams, and replace them with agents, mistaking speed for velocity. The more slop a team ships with AI, the more reliant they become on AI to maintain the codebase, because now no one understands it. What do you do then? Double down; more AI! Of course, the answer is an AI code reviewer! Nothing that more tokens can't solve.

I work with a team that is heavily, heavily using AI, and I'm building much of the supporting infrastructure to make this work. What's clear is that while there are productivity gains to be had, a lot of it is also just hype to keep the $$$ flowing.


People will dismiss this critical-thinking shutoff loop as a doomer conspiracy, but it's literally the strategy that AI founders describe in interviews. People also can't (or don't) remember that Uber was almost free when it launched and the press ran endless articles about the "end of car ownership", yet replacing your car with Uber today would be 10x more expensive. AI companies are in a mad dash to kill the software industry so that they can "commoditize intelligence". There will be thousands of dead software startups that pile slop on slop until they run out of VC funny-money.

Agent Framework + middleware + source generation is the way to go.

Agent Framework made middleware much easier to work with.

Source generation makes it possible to build "strongly typed prompts"[0].

Middleware makes it possible to substitute those at runtime if necessary.

[0] https://github.com/CharlieDigital/SKPromptGenerator/tree/mai...


I work with the author; the author is definitely not AI-generated.
