These personal blogs are starting to feel like LinkedIn Lunatic posts, kinda similar to the optimised floor-sweeping blog: “I am excited to provide shareholder value, at minimum wage”
What does it tell you that programmers with the credibility of antirez - and who do not have an AI product to sell you - are writing things like this even when they know a lot of people aren't going to like reading them?
What it tells me is that humans are fallible, and that being a competent programmer has no correlation with having strong mental defenses against the brainrot that typifies the modern terminally-online internet user.
I leverage LLMs where it makes sense for me to do so, but let's dispense with this FOMO silliness. People who choose not to aren't missing out on anything, any more than people who choose to use stock Vim rather than VSCode are missing out on anything.
It's not Vim vs VSCode though - the analogy might be writing in assembler vs writing in your high level language of choice.
Using AI, you're increasing the level of abstraction you can work at and reducing the amount of detail you have to worry about. You tell the AI what you want done, not how to do it, beyond providing context about the things you actually care about (as much or as little as you choose, though generally the more context, the better the odds of a specific outcome).
> the analogy might be writing in assembler vs writing in your high level language of choice.
If it were deterministic, yes, but it's not. When I write in a high level language, I never have to check the compiled code, so this comparison makes no sense.
If we see new kinds of languages, or compile targets, that would be different.
It's a new type of development for sure, but with an agentic system like Claude Code that is able to compile, run and test the code it is generating, you can have it iterate until the code meets whatever test or other criteria you have set. There's no reason code reviews can't be automated too, customized to your own coding standards.
Effort that might go into feeling you need to manually review all generated code might be better spent on automating quality checks (e.g. code review, adherence to guidelines), ensuring that testing is comprehensive, and managing the overall design and process into modular, testable parts, the same way as if you'd done it manually.
While AI is a tool, the process of AI-centric software development is better regarded as a pair-design and pair-coding process, treating the AI more like a person than a tool. A human teammate isn't deterministic either, yet if they produce working artifacts that meet interface requirements and pass unit tests, you probably aren't going to insist on reviewing all of their code.
> the process of AI-centric software development is better regarded as a pair-design and pair-coding process, treating the AI more like a person than a tool.
This is the part that makes me throw up in my mouth a bit; I'd rather pair with a human. But whatever, I'm old. You'll have to excuse me, as there are a lot of nefarious-looking clouds out there.
Sure, but the AI is faster and cheaper than a human, or even than a team of humans. So, if you are a solo developer and can't afford to hire a team of humans to help accelerate your project, you now have the option of using AI instead.
It seems the capability and utility of these models/products is increasing very fast. Agentic tools like Claude Code that run locally in your terminal, and therefore have access to all your dev/test tools and environment, are a huge advance, since now the output isn't just code, it's fully tested, debugged code that passes whatever tests and quality gates you tell it are necessary.
At the same time that the tooling has improved, so have the models, much of it only very recently (the last 6 months or so). People swear by Opus 4.5, and I've also been impressed by Gemini 3.0. A year ago I was much more skeptical of the utility of AI for serious use, but the models have improved a lot.
This is so stupid. You still have to review that code, you still have to know what the solution is, ergo you still need to know how to do it, and you still have to deal with the cognitive load of reviewing someone else's code. I don't understand how you can write as if the implementation, fairly trivial and mechanical, is somehow more taxing than reading someone else's code.
This is not the supporting argument you think it is; it just further alludes to the fact that people raving about AI just generate slop and either don't review their code or send it to their coworkers to review.
I guess AI bros are just the equivalent of script kiddies: running shit without knowing how it works, and claiming credit for it.
It depends on what you are using it for, and how you are using it. If you are using AI to write short functions that you could code yourself in close to the same time as reviewing the AI generated code, then obviously there is no benefit.
There are, however, various cases where using AI can speed development considerably. One case is a larger, complex project (thousands of LOC) where weeks of upfront design would have been followed by weeks or months of implementation and testing.
You are still going to do the upfront design work (no vibe coding!) and play the role of lead developer, breaking the work into manageable pieces/modules, but now there is value in having the AI write, test and debug the code, including generating unit tests, since this would otherwise have been a lengthy process.
This assumes you are using a very recent capable frontier model in an agentic way (e.g. Claude Code, or perhaps Claude web's Code Interpreter for Python development) so that the output is debugged and tested code. We're not talking about just having the AI generate code that you then need to fix and test.
This also assumes that this is a controlled, managed process. You are not vibe coding, but rather using the AI as a pair programmer working on one module at a time. You don't need to separately review the code line by line, but you do need to be aware of what is being generated and what tests are being run, so that you have similar confidence in the output to what you'd have if you'd pair-programmed it with a human, or delegated it to someone else with sufficient specifications that "tested code meeting specs" means you don't have to review the code in detail unless you choose to.
I haven't tried it myself, but note that you can also use AI to do code reviews, based on a growing base of code standards and guidelines that you provide. This can then be used as part of the code development process so that the agent writing the code iterates until it passes code review as well as unit tests.
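One way to picture that review gate alongside unit tests, as a toy sketch: generated code is accepted only when it passes both gates. The `review` function here is a trivial, hypothetical rule checker standing in for an LLM prompted with your standards document; everything in it (the rules, the `add` example) is made up for illustration:

```python
def run_tests(code_ns: dict) -> bool:
    """Toy unit-test gate: does the generated add() behave correctly?"""
    return code_ns["add"](2, 3) == 5

def review(source: str) -> list[str]:
    """Toy review gate: flag violations of two made-up 'house rules'.
    A real gate would prompt an LLM with your coding guidelines."""
    problems = []
    if '"""' not in source:
        problems.append("missing docstring")
    if "eval(" in source:
        problems.append("eval() is banned")
    return problems

def accept(source: str) -> bool:
    """Accept generated code only if BOTH gates pass."""
    ns: dict = {}
    exec(source, ns)  # load the candidate code into a fresh namespace
    return run_tests(ns) and not review(source)

draft = 'def add(a, b):\n    return a + b\n'                       # works, no docstring
final = 'def add(a, b):\n    """Add two numbers."""\n    return a + b\n'
print(accept(draft), accept(final))  # -> False True
```

In the agentic setup described above, a rejection would be fed back to the code-writing agent as feedback, so it iterates until both gates pass.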
> We have to abandon the appeal to authority and take the argument on its merits, which honestly, we should be doing regardless.
I don't really agree. In virtually any field, when those who have achieved mastery speak, others, even other masters, tend to listen. That does not mean blindly trust them. It means adjust your priors and reevaluate your beliefs.
Software development is not special. When people like antirez (redis) and simonw (django) and DHH (rails) are speaking highly of AI, and when Linus Torvalds is saying he's using AI now, suggesting they may be on to something is not an appeal to authority. And frankly, claiming that they might be saying nice things about AI because of some financial motive is crazy.
> And frankly, claiming that they might be saying nice things about AI because of some financial motive is crazy.
I'm actually taken aback by the vehemence of the anti-AI brigade on HN. It seems objectively crazy to me to suggest that someone like antirez, with a long visible history, now has an agenda to push AI products and writes blog posts to do so.
This is just genuinely going into the wilfully blind territory now, and your post is the one downvoted for pointing it out.
I think we are properly into holy war territory and people on either side are losing their minds, and their objectivity.
That is an appeal to authority. There is a large enough segment of folks who like to be confirmed in either direction. That doesn't make the argument itself correct or incorrect. Time will tell, though.
People higher up the ladder aren't selling anything, but they also don't have to worry about losing their jobs. We are worried that execs are going to see the advances and quickly clear the benches; it might not be true, but every programmer believing they have become a 10x programmer pushes us further into that reality.