The next two years of software engineering (addyosmani.com)
323 points by napolux 2 days ago | hide | past | favorite | 382 comments




TBH, it all feels like a huge gamble at this point. Neither skills, education, institutional ties, nor current employment can guarantee a stable foundation for life.

This hits harder depending on how much money, social capital, or debt you accumulated before this volatility began. If you’ve paid off your debts, bought a house, and stabilized your family life, you’re gambling with how comfortable the coming years will be. If you’re a fresh grad with student debt, no house, and no social network, you’re more or less gambling with your life.


I felt a lot safer when I was a young grad than now that I have kids to support and I can't just up and move to wherever the best job opportunity is or live off lentils to save money or whatever.

Yeah, kids change the landscape a lot. On the other hand, if you don't have any personal ties, it's easier to grab opportunities, but you are unlikely to build any kind of social network when chasing jobs all over the country/world.

Either way, there is little to no path toward the "family + place to live + stable job" model.


This. At 43 I have friends that are all over the country now.

Where I am I’m alone. Don’t underestimate the value of community.


When I was single with no kids, I felt pretty comfortable leaving a good job to join a startup. I took a 50% pay cut to join when the risk seemed high, but the reward also seemed high.

It paid off for me, but who knows if I would have taken that leap later in life.


There must be "dozens of us" with this fear right now. I'm kinda surprised there isn't a rapidly growing place for us to discuss this... (Youtube, X account, Discord place..)

There is, it happens locally in political organizations. Mostly in services and hospital work.

[flagged]


May I suggest that it may be your attitude about that piece of paper that is holding you back rather than the paper itself?

I don't have a college degree either. I am about 50. I have never been unemployed and have had high paying software dev jobs my entire adult life. Your claim that the lack of degree is the only thing holding you back is very much incorrect.

I suspect the problem is elsewhere and you are unwilling or uncomfortable to discuss it.


I'm confused as to why someone who freely admits they have been broke & unemployed for 15 years feels they are qualified to provide "advice", make critical judgement calls about others and brag about their awesomeness.

>> My actual accomplishments in the world of computing ... are the stuff of legends

We agree on the legends part


> "going back to school" to learn what I already know pretty damn well already, given that I've been programming since I was 8

It's small consolation if sitting in a classroom is something you truly hate, but the guys who are programming pros before they go into a CS program are very often the ones who do really well and get the most out of it.


Of course. When you're already an expert on day one, it's "easy" to "get the most" out of it. "The most" in this case being wasting four more years of your life that you'll never get back, only to end up enslaved to some corporation or government, assuming you can even be employed at all in the smoking remains of the economy/country that still exist when you graduate. Pass.

Did you already understand complexity theory (the most obvious example of this phenomenon) by the time you went to an actual school?

Tinkering is great, but (good) school teaches you all the things, not just the things you'd obviously seek out on your own, and then you don't have any knowledge gaps.

Any fool can probably weld metal but how do you learn to do it properly if you don't learn properly?


So you’re freeloading dumpster food from a corporation. You show them buddy!

"Freeloading"? What a bizarre thing you just said.

Enjoy your commute.


programming since you were 8 and created a HN account 16 days ago... seems legit

I created my first Linux from scratch when I was a freshman in college in a third world country (not India). Fast forward a few years, and I now write Linux kernel code for a living. Not sure what you did wrong, bud, to end up miserable like this.

[flagged]


This account can't be real

Funny. I was just thinking the same thing about your account.

Protip: When you consistently present yourself as somebody with a massively inflated ego who will be a constant pain to interact with, nobody's going to hire you, skills or not.

I left high school with average results and immediately got a job as a junior web developer, and I’m nothing special. I feel there must be more to this story… You don’t come off very well in your post, I imagine it could be the same in person and perhaps therein lies the issue?

> I left high school with average results and immediately got a job as a junior web developer

Sure, I could have done something like that too! If I had only zigged instead of zagged.

The problem happens when you leave high school and become [$XYZ] instead, do that for a while, realize it's a dead end career, then decide to return to your roots and become a programmer. At the same time you had also taken a year off to relax, only to later realize that "one year of unemployment" in 2010 (just after 2008) translates as "unemployable" to HR.

In general, if you don't make it your #1 top priority to get back into the rat race and stay there permanently, you are treated as an outcast and scorned by all the slaves. You also might, in your frustration, begin doing some research and start to learn what a racket it all really is. You could very easily find yourself slowly drifting apart from the rest of the herd, unable to ever be employed again without an exceptional desire to rejoin the herd. (Which I lack.)

Want to be "self employed" for a while? You know, exercise some self direction in life, which should be considered a good thing by anyone with a brain? Good luck ever getting a job again, lol. You're unemployable.

There is MUCH you still have to learn about life. But fortunately for you, a big learning opportunity is coming up. The current IT malaise is only the beginning of a very big economic crash. Enjoy your commute while it lasts.


> There is MUCH you still have to learn about life.

This response, along with your OP, is so pretentious and condescending. It seems you feel that you’re superior to everyone intellectually. I assume that you hold the same attitude in person and this is not helping your situation.

The irony is that I’ve done exactly this. I tried to start a business in my early 20’s and failed dramatically. I stopped developing altogether for a decade while I did minimum wage jobs and struggled to find a career. I started developing again in my early 30’s and half a decade later I’m running a software business.

You may well be intelligent but severely lacking in other necessary areas. It seems it is you who has much to learn.


I'm on the flip side of this - not exactly young, but no dependants, which is making me a little bit less nervous. Seems like the next 20 years will be a wild ride & it doesn't seem optional, so let's go I guess

True. This is one of the best arguments for not having kids. I could never imagine putting myself in that uncertain situation. Much better to reduce those risks, and focus on yourself.

Having kids is a personal choice. The stress of having to support them is real and it might mean, at times, you sacrifice more than you would have without kids.

It's been entirely worth it for me and I cannot imagine my life without kids. But it's a deeply personal choice and I am not buying or selling the idea. I would just say nobody is ever ready and the fears around having them probably are more irrational than rational. But not wanting them because of how it might change your own life is a completely valid reason to not have kids.


I'm happy for you that you are in a situation where you can afford it. Many can't.

I agree with everything you said except for

> the fears around having them probably are more irrational than rational

My $0.02 is that if anything, the fears people have about how much their lives would be transformed are significantly lacking, and a lot of the "it's not so bad" advice is post-hoc rationalization. I mean, it's evolutionarily excellent that we humans choose to have kids, but it's very rational to be afraid and to postpone or even fully reject this on an individual basis. And as an industry and as a society, we should probably do a lot more to support parents of young children.

I found this smbc comic about a "happiness spigot" to be the most poignant metaphor - https://www.smbc-comics.com/comic/happiness-spigot?utm_sourc...


Ya, this is a fair callout. I more so meant fears around being a bad parent. If anything, people experiencing those fears will be fine parents because they've got the consideration to already be thinking about doing a good job for their newly born.

Amen. It's hard to live with hope right now at all. Programmer or otherwise we're constantly told we're all going to be replaced and the economy is a mess (US). Definitely a depressing time to be alive.

If you are looking for any sort of hope, even a cursed one: there is the perspective that LLM-generated code is legacy code as a service. LLMs were trained on a lot of other people's legacy code. A lot of "vibe coding" produces what are de facto "day one legacy code apps". If my career has taught me anything, companies will always throw new money (sunk cost fallacy) at "fixing"/"expanding" legacy apps, or at the endless "rewrite cycle" of always trying to rewrite legacy apps but never quite succeeding.

Skills like Legacy Code Anthropology and Reverse Engineering will grow into higher demand. Like the worst legacy apps built by junior developers and non-developers (Access/Excel VBA and VB6 alone had a lot of "low code" legacy by non-developers), LLMs are great at "documenting" What was built, but almost never Why or How, so skills like "Past Developer Mind Reading" and "Code Seances" will also be in high demand.

There will be plenty of work still to do "when" everything is vibe coded. It's going to resemble a lot more the dark matter work a lot of software engineering is in big enterprise: fixing other people's mistakes and trying to figure out the best way you can why they made those mistakes so you can in theory prevent the next mistake.

It's a very dark, cursed hope to believe that the future of software engineering is the darkest parts of its present/past. As a software developer who has spent too large a share of my career in the VB6 IDE, and who often joked that my "retirement plan" was probably going to be falling into an overly-highly-paid COBOL consultancy somewhere down the line, I'm more depressed that there will be a lot more legacy work than ever, not that there won't be enough work to go around, and that it will be some of the ugliest, most boring, least fun parts of my career, forever, with even less "cushiness" to make up for it. (That "dream" of a highly paid COBOL consultancy disappears when good legacy-code work becomes too common and thus the commodity job. Hard to demand slick, higher salaries when the supply of such work is plentiful.)


I think what we're missing is certainty, not hope. You used to have more certainty that if you checked all the correct boxes your financial future would be guaranteed. Hope for the future is sort of separate and the most optimistic person could hold on to hope even now, and the most pessimistic person could lack hope even graduating with a CS degree in 2015.

You can have hope even if a positive outcome isn't guaranteed. In fact that is when hope is the most valuable (and maybe also difficult to find).


> Amen. It's hard to live with hope right now at all. Programmer or otherwise we're constantly told we're all going to be replaced and the economy is a mess (US). Definitely a depressing time to be alive.

Unless you're a plumber.


BTW the whole plumber/electrician/whatever thing is ridiculous. I studied industrial automation before I joined tech. I checked the salaries for manufacturing maintenance engineer last month. The wages are a sad joke compared to the costs of living.

Also don’t forget the part where we’re told housing will never be affordable again in many areas, and don’t expect to be able to remote work to move to LCOL areas any more.

If the economy is profoundly affected by employment conditions going forward, how can housing costs not drop?

Don't forget large scale purchasers using property for tax evasion, money laundering or other such uses. They largely don't even care if the property is maintained.

(eventually properties collapse, but if they keep the values inflated this way, that won't matter to them)

If you want to know more, look into RCMP reports on high property prices in Vancouver BC/Canada circa 2010s+, for example.


If the majority of housing is owned for profit by REITs or landlords, they have such leverage over the ordinary person that they can indefinitely hold prices/rents at a level where they extract the maximum of available resources from owning land, while making sure people have enough processed food and cheap internet-provided entertainment that they don't rebel.

The prices will adapt, but the equilibrium will always be elite-oriented economy where accommodation of the masses is a second-tier goal.


Not sure that is warranted. AI will create exciting changes to society for the better. These times are uncertain but certainly not depressing.

I don't doubt this; however, the question is whether AI will do this in our lifetime. Industrialization led to prosperity in the long term, but initially it led primarily to the proletarianization of the people. Are you willing to accept a devaluation of your skills and a decline in your prosperity so that in 50 to 100 years there is a chance that AI will lead to a better future?

No one is going to ask if you're willing to accept this - it's simply going to happen whether we like it or not.

Some people will answer without being asked. The most we will get out of that is that the word "saboteur" will get a more modern synonym (not sure what it will be, but the inventor of cheap EMP grenades will have the biggest say in that). The future will, of course, steamroll over such answers, as it always did, but we'll all feel the bumps on the way.

I don't think with any confidence we can say it will be for the better. Or at least, not on balance for the better.

AI companies are not even pretending they will improve society.

They are promising CEOs they can eliminate their workforce to increase profits. For people working for a wage it’s all downside, no upside.


Uncertainty is frequently a contributor to depression. Uncertainty is one of the most reliable stress triggers, which, over prolonged periods of time, especially when paired with low perceived control, is a direct path to increased depression. So if something is uncertain, it is often depressing as well.

I think we can assume it will create disruption, but by definition this is both positive and negative for different individuals & dimensions, and it is small solace if society improves while your life languishes or declines - this is just what's happened to a generation of young males in the US and is having huge repercussions. I think you're right to suggest the goal is to avoid letting the uncertainty make you depressed, but that does not automatically make it so of everyone.

It’s positive if you are already wealthy, negative if you have to work for a living.

Is that AI generated by any chance? Seems like an AI crystal ball that you're looking into.

It's fine to have that opinion, but please frame as an opinion or else give me the lotto numbers for next week if you can predict the future that accurately.


Prove your assertion.

it is depressing to me for exactly the same reason

"AI will create exciting changes to society for the better"

Why are you certain of this?


It's been a cause of mild background anxiety for me for the past 3 years. One part is financial and the other is a potential loss of a comfortable and relatively high status job that I can get even with below average social and physical skills.

I need about 4.5 years until basic financial independence; I wonder how it feels to be at that point.


I'm more worried about the global impact.

Will people still buy and sell houses?

Will house prices go down because no one can afford them?

Will house prices go up because so few will sell their assets?

I would like to buy a small farm today, without debt and with cheap energy (upfront investment in solar and storage), but I need a few more years.

Can the world really change that fast? I don't know, but the progress in AI is fast, very fast.


Fresh grads will be fine regardless. You're okay to start over from scratch at 25. 42 on the other hand is tough

I feel for the mid-career people with families to support. Can't imagine how stressful that would be


This is a fresh perspective for me. I'm around 25 and have been struggling with finding some kind of path towards making my career into something sustainable long-term, but never really considered the other side. I think the issue many have on my end is that they don't really have much of anything to stand on while they rebuild yet, whereas they might think that someone more experienced could pivot to business and people-oriented roles by leveraging what they have now. I know many people personally struggling to find work as it is right out of school, and many have student loans which exacerbate the situation. For a lot of people, starting from scratch is not realistically feasible in the near future unless they're content with being homeless for a while.

Of course labor jobs will always exist, and a 25 year old would (on average) be much more physically able for that than someone older, so it goes both ways.


Consider people in their 40s have..

A mortgage: if you were assuming a strong income that would continue, you very likely could be forced to sell your house and take a huge loss

A family, kids: people relying on you

Time: at this point you have retirement plans and financial deadlines you need to hit if it's to ever become a reality

God forbid you have any health issues that cost $$$ which tend to come as you age. Can you afford to lose health insurance?

If you think about re-skilling and starting off at entry level.. people don't really want to hire older beginners.

Of course that's absolute worst case scenario, but I guarantee there are a lot of people there.

I'd 100% choose living out of my car for a while. In your 20s you can upend everything and completely reinvent yourself. Time, minimal responsibilities and energy are priceless

> could pivot to business and people-oriented roles by leveraging what they have now

There's a reason that's really vague, right? Because who knows if it'll be available

I don't think AI is gunna reach this point but who knows. It's not off the table


If enough people have nothing left to lose, the French Revolution will most likely be the outcome. Or a working UBI. If programmers aren't safe, I can't imagine most other professions won't be on the chopping block as well.

There's a lot of this forum in exactly that position. The fear is real; there is a real risk this AI destroys families and people's lives in the disruption.

I understand this perspective, but it's like... I would like to have a house and kids and all those things you mentioned, even if it was hard. That's not an option, financially, for a lot of young people

Work on becoming Financially Independent. The best time to start was when you started your career, the second best time to start is now.

Yeah really seems like the only way to win (or rather not lose) is simply not to play.

At this point I’ve realized I need to cast all other ambitions aside and work on getting some out-of-the-way land that I own.


I'm older, aware, decently resourced, and really trying NOT to play, but it is still hard to accomplish. I'm married with 3 kids, and even though I sit out much of the nonsense, your friends, family and community will keep pulling you back in. It's hard to do "not playing" without "not participating", and I don't think anybody should do that.

And not just SWE. If that falls then we're pretty close to societal upheaval because the difference vs other jobs is largely just better training data (github)

> TBH, it all feels like a huge gamble at this point.

Honestly? It does and I feel completely hopeless. I'm very, very angry with the world/life at this point to put it mildly.


I think we all need to respond by being very, very flexible and open minded about how to contribute to society going forward. I'm on the back end of my career but I imagine it's terrifying for newcomers. Stay agile! We're all in this together.

I say this without hyperbole: we are (IMHO) on the verge of total systemic collapse.

We've had 50+ years of deteriorating worker conditions and a massive concentration of wealth to like 10,000 people. The 1980s crushed the labor movement, to all of our detriment.

The GFC destroyed the career prospects of many millennials, who discovered their entry-level positions no longer existed, so we created a generation that was loaded with student debt, working as baristas.

A lot of people on HN ignored this because the 2010s were good for tech people but many of us didn't realize this post-GFC wave would eventually come for us. And that's what's happening now.

So on top of the millennials we now have Gen Z, who have correctly realized they'll never have security, never buy a house and will never retire. They'll live paycheck to paycheck, barely surviving until they die. Why? All so Jeff Bezos can have $205 billion instead of $200 billion.

I'm reminded of the quote "only nine meals separates mankind from anarchy".

I believe we've passed the point where we can solve this problem with electoral politics. Western democracies are being overtaken by fascists because of increasing desperation and the total destruction of any kind of leftism since WW2. At this point, it ends violently and sooner than many think.


> we are (IMHO) on the verge of total systemic collapse. (...) At this point, it ends violently and sooner than many think.

This is, in a twisted way, an expression of hope. The expectation of a grand collapse is one that's shared by many, but can you explain what gives you that complete certainty that it's near or that it's coming at all?

The far bleaker possibility that I think is totally realistic is that things continue getting worse, but they never cross over the final line. Things are mismanaged, everyone is worse off, but that nine-meal gap is never allowed to happen, and any real threats are squashed at the roots. There's no singular collapse, instead of one definitive societal stab wound that's followed by long hospital recovery, we're inflicted with a thousand minor cuts to near-death.

The people who benefit from all this have been refining their knowledge and growing their power and influence. They're near-gods at this point. They may make a mistake, but what if they don't and the current situation is maintained for decades to come?


You’re focusing on the US. In Europe, worker protections are great. However, Europe has been lagging behind the US for a couple of decades and the gap has been growing. All tech giants are based in the US or China. Maybe things will collapse in the US now, but they are very unlikely to do so in Europe, unless the impact from the US is so great that it propagates to Europe, despite Europe itself not having done the things that caused such a collapse in the US, which seems to be the result of prioritizing profit and greed above everything else, including people’s wellbeing.

Worker protection doesn’t do you any good if companies refuse to hire people and the government can’t afford the social safety net.

Oh I couldn't disagree more. Europe is on the verge of full-blown fascism. Europe has Reform (UK), AfD (Germany) and National Front (France) as well as Hungary.

Europe created the Russia-Ukraine problem by giving their energy security to Russia. Interestingly, this is a (super) rare win for the first Trump administration: forcing Europe to build an LNG port in 2018 [1] and warning against the dangers of dependence on Russian natural gas. This warning has been completely vindicated.

Europe has stagnant wages, a declining social safety net (eg raising the retirement age in France), a housing affordability crisis in most places (notably excluding Vienna, and there needs to be more attention on why this is), inflation problems and skyrocketing energy costs. It's the same 1930s economic conditions that gave rise to fascism last time.

Europe has the same rising anti-immigrant sentiment in response to declining material conditions that the US has. In Europe's case it's against Syrians and North Africans. In the UK this also included Polish people.

France is really a perfect example here. Despite all the economic problems, you have Macron siding with Le Pen to keep Mélenchon and the left out of power.

All of this is neoliberalism run amok and it comes from decisions in WW1, WW2 and post-WW2, most notably that Europe (and the US) decided the biggest threat was socialism and communism. And who's really good at killing communists? Nazis. Just look at the resume of Adolf Heusinger, an early NATO chair [2].

Europe has also outsourced their security to the US via NATO. And NATO is on the verge of collapse. There's a lot of thinking that Congress won't allow Trump to withdraw from NATO, as many in his administration want to do, but NATO could well splinter if Trump takes Greenland.

What happens to Europe with an expansionist Russia and no US security guarantee?

Oh and speaking of worker protections, what happens when the price of bailing out European energy or security issues is the privatizing of your otherwise universal healthcare? It was rumored that parts of the administration wanted the UK to privatize the NHS as part of a post-Brexit trade deal. 15 years of austerity has primed the population to accept this kind of thing.

Many Europeans (rightly) look down on the insanity that's currently going on in the US but at the same time they don't realize just how dire the situation is in Europe.

[1]: https://www.reuters.com/article/business/germany-to-build-ln...

[2]: https://en.wikipedia.org/wiki/Adolf_Heusinger


> What happens to Europe with an expansionist Russia and no US security guarantee?

While European military strength isn't in its prime right now, their capabilities without the US are often way underestimated. Not that most of the other issues aren't applicable - everyone appears to be more or less fucked in multiple ways - but losing a conventional war to Russia isn't on the table, barring unthinkable mismanagement or a world-changing event (preemptive use of nukes, etc). Russia has stalemated a war against a singular country that has a fraction of Russia's wealth, loads of antiquated equipment and a small sample of Western tech. The Russian economy has a massive hole in it largely thanks to said war, and is only propped up by existing savings - they're not in danger right now, they're rapidly approaching that point with no way of stopping. Even if the war never happened, they'd still be far weaker than the whole of Europe and likely some individual European countries.


> Europe created the Russia-Ukraine problem by giving their energy security to Russia.

It wasn't "given", Russia did it on purpose. There are SO MANY cases of politicians advocating for Russian natural gas or oil as an energy source who were later revealed to be 100% paid for with Russian money.


This is so depressing to read, but I can't help feeling you are right. The feeling is quite surreal because if I turn off my computer I can't notice the difference locally in my county. It is like lunatics from "the internet" run a lot of things now irl.

> There's a lot of thinking that Congress won't allow Trump to withdraw from NATO

I wonder how that is supposed to work when the Executive branch has proven they can do whatever they want regardless of the other two branches. The rules are worthless if there are no consequences for breaking them.


I was following you until you implied everyone who’s not a communist is a Nazi.

Yeah, they don't really teach this part in history [1]:

> At the height of the Cold War in the 1950s, law enforcement and intelligence leaders like J. Edgar Hoover at the F.B.I. and Allen Dulles at the C.I.A. aggressively recruited onetime Nazis of all ranks as secret, anti-Soviet “assets,” declassified records show. They believed the ex-Nazis’ intelligence value against the Russians outweighed what one official called “moral lapses” in their service to the Third Reich.

And NATO [2]:

> The most senior officers of the latter group were Hans Speidel and Adolf Heusinger, who on Oct. 10 and Nov. 12, 1955, respectively, were sworn in as the Bundeswehr’s first two lieutenant generals... Heusinger, a POW until 1948, ...

> That spring Heusinger succeeded Speidel as chief of Combined Forces when the latter was appointed commander in chief of Allied Land Forces in Central Europe becoming the first German officer to hold a NATO commander in chief position

And it goes on.

Nazi links are well-established to Operation Paperclip [3] under Wernher von Braun.

And there are many others [4].

I didn't say all the non-communists were Nazi. I said the neoliberal and imperialist projects of the US and Western Europe post-WW2 sided with and gave haven to Nazis to fight communism, which is true.

Fascism in the US didn't begin with the Nazis however. You can trace back the roots to the white supremacy the US was founded on, the slave trade, the Civil War, Reconstruction and even the Business Plot [5] that sought to overthrow FDR in 1933, probably labelling him a communist.

But the Nazis were very popular in the US, culminating with the German American Bund rally in Madison Square Gardens in 1939 [6].

Oh and let's not forget Henry Ford's contribution to all this, notably The International Jew [7], so much so that Hitler praised him in Mein Kampf.

Personally, I'm of the view that a lot of this can be traced back to simply not stringing up all the former slave owners after the Civil War.

[1]: https://archive.ph/A8HHC

[2]: https://www.historynet.com/these-nato-generals-had-unusual-b...

[3]: https://en.wikipedia.org/wiki/Operation_Paperclip

[4]: https://www.npr.org/2014/11/05/361427276/how-thousands-of-na...

[5]: https://en.wikipedia.org/wiki/Business_Plot

[6]: https://en.wikipedia.org/wiki/1939_Nazi_rally_at_Madison_Squ...

[7]: https://www.thehenryford.org/collections-and-research/digita...


> Europe created the Russia-Ukraine problem by giving their energy security to Russia.

> This warning has been completely vindicated.

That's funny. The US warned Europe of dependence on Russia all the while promoting policies that antagonized Russia in Europe (e.g. NATO expansion). It's almost like the US wanted to push Europe and Russia against each other, so that it could sell its way more expensive natural gas in Europe!? Perhaps they did not anticipate the Russians would be bold enough to go to war on that, but they were certainly willing to accept the risk.

> It's the same 1930s economic conditions that gave rise to fascism last time.

Please. Europe may have some issues, but it's not nearly as bleak as you try to make it... I live here, I go around a lot. Europe is as affluent as ever. People are having a good time, in general. In the 1930s some countries had hyperinflation... you're comparing that to 5% yearly inflation these days?

> Europe has also outsourced their security to the US via NATO.

On that we agree. It was a really bad decision, but understandable given how much US soft power Europeans absorbed after WWII. Some Europeans act like European countries are US states. They take to the streets to join movements that are 100% American, like BLM. It's bizarre.

> What happens to Europe with an expansionist Russia and no US security guarantee?

It shocks me that people like you think Russia is a serious threat to all of Europe, rather than just Ukraine (and maybe Moldova and Georgia if you push it). How can you justify that view? Russia has not drawn any red lines about the rest of Europe the way it did with Ukraine and Georgia (red lines that Europe thoroughly ignored, with the strong support, and should I say it, advice, of the USA). It has not said anything as threatening as Trump saying Greenland will be part of America the nice way or the hard way. Yet you believe the US is not a threat, but Russia is. There's some serious dissonance in this line of thought.

> Oh and speaking of worker protections, what happens when the price of bailing out European energy or security issues is the privatizing of your otherwise universal healthcare?

Americans have been saying this for 50 years... they just can't accept that the system has been working well in Europe for workers all this time (though not as much for companies; as you can clearly see, it's much harder to build behemoths like FAANG in Europe, no doubt because you can't really do that without exploiting workers).


> The US warned Europe of dependence on Russia all the while promoting policies that antagonized Russia in Europe (e.g. NATO expansion)

I think there's a certain amount of historical revisionism going on with this. It is complicated however.

You can point to George W. Bush opening the door to NATO membership for Ukraine and Georgia in 2008 [1] as a turning point, but NATO had been gobbling up former Communist bloc countries for more than a decade.

Another noteworthy event is the 2014 revolution that ousted Russian puppet Viktor Yanukovych as president of Ukraine, followed by the Minsk Agreement (and Minsk II) meant to settle disputes in the Donbas and elsewhere.

Russia does have legitimate security concerns in the region, such as access to the Black Sea and not having NATO on its border. And by "legitimate" here I simply mean that the US and the EU do the exact same thing, most notably when the US almost started World War 3 over Soviet influence in Cuba (which itself was a response to the US installing nuclear MRBMs in Turkey). Also, in terms of the threat of a conventional land war, Ukraine is basically a massive highway into Russia, previously used by both Hitler and Napoleon. Not that it worked out well for either.

Whatever the case, having another Belarus in Ukraine was ideal for Russia, and I think their designs on this long predated any talk of Ukraine joining NATO, which was DOA anyway. Germany, in particular, was always going to veto expanding NATO to Ukraine.

My point here is I'm not convinced that any promises of neutrality by Ukraine would've saved Ukraine from Russian designs.

> Europe is as affluent as ever

Based on what? Personal anecdotes? The EU acknowledges a housing crisis [2].

> It shocks me that people like you think Russia is a serious threat to all of Europe,

It is a serious threat. Not in the conventional land-war a la WW2 sense, but we're dealing with the world's other nuclear superpower (China doesn't have the nuclear arsenal Russia does, by choice). And Putin's playbook is oddly reminiscent of Hitler's playbook leading up to the war. That is, Hitler argued he was unifying Germans in Austria, the Sudetenland, etc. Putin is using ethnically Russian populations the same way: as an excuse to intervene and take territory.

There is a significant Russian population in Latvia who are stateless. IIRC it's estimated there are more than 200,000 of them.

American security and energy guarantees are really the only things holding Europe together right now. If NATO splinters, what's to stop Russia from seizing parts of Latvia?

This situation is precarious.

> they just can't accept that the system has been working well in Europe for workers for all this time

No, they don't care that it works. In fact, they've been doing everything they can to make it not work. We now have a generation of people in many European countries (and I include the UK here) who have never not known austerity and constant government cutbacks. Satisfaction with the NHS has deteriorated as it's been deliberately starved for 15+ years.

This is a well-worn and successful playbook called starving the beast [3]. It's laying the groundwork for a push for privatization. It'll be partial privatization to start with and just creep from there.

I'm not sure you truly appreciate just how much US foreign policy is designed to advance the interests of American corporations.

[1]: https://www.theguardian.com/world/2008/apr/01/nato.georgia

[2]: https://www.consilium.europa.eu/en/policies/housing-crisis/

[3]: https://en.wikipedia.org/wiki/Starve_the_beast


> You can point to George W. Bush opening the door to NATO membership in 2006 [1] as a turning point but NATO had been gobbling up former Communist bloc countries for more than a decade.

In the 1990s, Russia was a hugely struggling nation that could barely feed its population, but even then it strongly opposed NATO expansion!

> The decision for the U.S. and its allies to expand NATO into the east was decisively made in 1993. I called this a big mistake from the very beginning. It was definitely a violation of the spirit of the statements and assurances made to us in 1990.

Source: (Gorbachev in interview from 2014) https://www.rbth.com/international/2014/10/16/mikhail_gorbac...

> Based on what? Personal anecdotes? The EU acknowledges a housing crisis [2].

The housing crisis is mostly limited to inflated prices in large cities and is itself evidence that people have a good purchasing power, since it's not being driven by foreign capital (at least where I live, in the Nordics).

Which statistics show the EU is NOT affluent?? If we look at GDP (+1.35% yearly in the last 10 years [1], not too bad for developed economies) and unemployment (currently around 6% for the whole EU [2]), it's not bad, especially if you consider the huge number of recent immigrants (unemployment among the native population is much lower than the total figures suggest; in Sweden, for example, native Swedes have near-full employment).

But yeah, I think personal anecdotes are also helpful to establish whether a country looks like it's going down... and everywhere I go, I see only good signs: shops expanding, lots of new buildings, full bars and restaurants, people driving the latest electric cars... what I don't see is things like businesses closing down, struggling local shops etc., which are normally very visible (I know, I've seen that) in economies that are in dire straits.

> There is a significant Russian population in Latvia who are stateless. IIRC it's estimated there are more than 200,000 of them.

Yes, I've been to Latvia, and Russian is clearly spoken by a large percentage of the population (to my surprise, including the young generation). As long as they are not suppressed from speaking their language (as is happening in Ukraine right now, and even before the war, and in some areas of the Baltic countries) and they're not made second-class citizens (as is happening in Estonia, where they can no longer vote [3]), Putin will not have any excuse to intervene, and those countries would be wise not to provide such excuses! Anyway, I think that regardless of that, NATO will survive even without the USA (as something else, perhaps, but the union between European states is extremely important to maintain), and I really believe Article 5 will exist even if NATO evolves into a Europe-only alliance.

> I'm not sure you truly appreciate just how much US foreign policy is designed to advance the interests of American corporations.

Not sure what you're referring to... I think I do appreciate it. The interview [4] Trump had with the American oil companies after the partial "annexation" of Venezuela couldn't be a better example of that.

[1] https://en.wikipedia.org/wiki/Economy_of_the_European_Union#...

[2] https://en.wikipedia.org/wiki/Economy_of_the_European_Union#...

[3] https://www.lemonde.fr/en/russia/article/2025/03/26/estonia-...

[4] https://www.youtube.com/watch?v=W_7VhFaRqKE


At some point there will be enough people with no job and no hope who will do the math of "there are 100 million of us and 3,000 billionaires", and who hopefully have read enough history to know about the French Revolution and the tools it used.

The only way to prevent this is to guarantee that people without jobs will still have a roof over their heads and enough calories and micronutrients every day to survive - and some entertainment.


I do wonder what will come next, it seems very unlikely that modern states can effectively be toppled and replaced by revolutions but maybe the nature of revolutions will change themselves. After all, it's not like the neoliberal paradigm was always so, it was systematically planned by elites in the 1970s and we're now proudly living in the society they envisioned (elites have wealth while everyone else struggles). The neoliberal establishment was definitely a revolution that impacted and destroyed many lives but it wasn't treated as such.

I guess the next turning of the wheel will be similar too.

Or maybe we all just have poor imaginations.


at the dissolution and decentralization of empires, feudalism in its many forms historically seems to be the most common outcome.

i would say that we firmly live in the American Empire with techno-feudalistic tendencies, but a historical event of such magnitude as the complete dissolution of the American state will probably see a reversal to a more traditional feudal system. Think Jeff Bezos and Bill Gates buying up and becoming the Dukes of the PNW.

personally though i don't think we are at this stage yet or even close to it. until the federal government becomes COMPLETELY inept and the average citizen cannot buy food, this won't happen. yes market conditions are currently not the best but we are nowhere near starvation.


And yet the current administration, like every other administration since the mid 90s, still sets labor immigration policy on the testimony of the tech industry that there is still a critical shortage of tech labor, so the doors must remain open for the 30th year of the temporary program that's only going to be in place until the tech companies have time to train domestic talent. If you have a problem with this, you're a racist Nazi who should be excluded from society. Left, right, up, down, they all agree on this, as does the vast majority of posters here. Their defense for this is that little down arrow, since they have no other legitimate defense for the 30-somethingth year of the temporary program to give them time to train the talent they claim doesn't exist in the United States.

My experience hasn't been LLMs automate coding, just speeds it up. It's like I know what I want the solution to be and I'll describe it to the LLM, usually for specific code blocks at a time, and then build it up block-by-block. When I read hacker news people are talking like it's doing much more than that. It doesn't feel like an automation tool to me at all. It just helps me do what I was gonna do anyways, but without having to look up library function calls and language specific syntax

> My experience hasn't been LLMs automate coding, just speeds it up.

This is how basically everyone I know actually uses LLMs.

The whole story about vibecoding and LLMs replacing engineers has become a huge distraction from the really useful discussions to be had. It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.


As a professional programmer, I think both are useful in different scenarios.

You're maintaining a large, professional codebase? You definitely shouldn't be vibe coding. The fact that some people are is a genuine problem. You want a simple app that you and your friends will use for a few weeks and throw away? Sure, you can probably vibe code something in 2 hours instead of paying for a SaaS. Both have their place.


I’m seeing vibe coding redefine what the product manager is doing. Specifically, adding solution execution to its existing strategy and decision making responsibilities. The PM puts solutions in front of a customer and sees what sticks, then hands over the concept to engineering to bake into the larger code base. The primary change here is no longer relying on interviews and research to make product decisions that engineering spends months building only to have flop when it hits market. The PM is being required to build and test dozens of solutions before anything makes its way to engineering resources. How engineering builds the overall solution is still under their control but the fit is validated before it hits their desk.

Yes!

I think the next step is to realize that this kind of product manager role is one that more "engineers" should be willing to take on themselves. It's pretty clear why user interviews and research and product requirement docs are not obviously within the wheelhouse of technical people, but building lots of prototypes and getting feedback is a much better fit!


I think the problem starts with the name. I've been coding with LLMs for the past few months but most of it is far from "vibed", I am constantly reviewing the output and guiding it in the right direction, it's more like a turbo charged code editor than a "junior developer", imo.

> The whole story about vibecoding and LLMs replacing engineers has become a huge distraction

Because the first thing that comes from an individual speed-up is not engineers making more money but there being fewer engineers. How many fewer is the question: would they be satisfied with 10%, 50%, or maybe 99%?


Generally the demand for software engineers has increased as their productivity has increased, looking back over the past few decades. There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.

If we doubled agricultural productivity globally we'd need to have fewer farmers because there's no way we can all eat twice as much food. But we can absolutely consume twice as much CSS, try to play call of duty on our smart fridge or use a new SaaS to pay our taxes.


Oh but we can absolutely let all that food go to waste! In many places unbelievable amounts of food go to waste.

Actually, most software either is garbage or goes to waste at some point too. Maybe that's too negative. Maybe one could call it rot or becoming obsolete or obscure.


I have been around for “the past few decades”. Then you saw the rapid growth of the internet, mobile and BigTech. Just from the law of large numbers, BigTech isn’t going to grow exponentially like it did post 2010.

It’s copium to think that, with the combination of AI and an oversupply of “good enough” developers, it won’t be harder for developers to get jobs. We are seeing it now.

It wasn’t this bad after the dot com bust. Then if you were just an ordinary enterprise developer working “in the enterprise” in a 2nd tier city (raises hand), jobs were plentiful.


I think the better way to think of this is whether it will be harder for people who are good at using AI tools to accomplish things with computers to get jobs. Maybe, but I don't think so. I think this skill set will be useful in every line of work.

That doesn’t solve the problem. It’s easy enough to be “good enough” at AI tools just like it’s easy enough to be a decent enterprise CRUD full stack/back end/mobile developer. It will still be hard to stand out from the crowd.

I saw this coming on the enterprise dev side where most people work back in 2015. Not AI of course, but the commoditization of development.

I started moving closer to the “business”, got experience in leading projects, soft skills, requirements gathering, AWS architecture etc.

I’m not saying the answer is to “learn cloud”. I am saying that it’s important to learn people skills and be the person trusted with strategy and don’t just be a code monkey pulling well defined tickets off the board.


My point is: I don't think there will be way more jobs for "AI developers", I think there will be plenty of jobs for people who are employed in an industry and adept with using AI tools to be effective at their job. These people would not be differentiating themselves from other "AI developers", but from other people who do their role in whatever industry they are in, but who aren't as adept with these tools.

> Generally the demand for software engineers has increased as their productivity has increased, looking back over the past few decades. There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.

I see this fallacy all the time but I don't know if there is a name for it.

I mean, we used to make fun of MBAs for saying the same thing, but now we should be more receptive to the "Line Always Goes Up" argument?


Jevons paradox and it’s not a fallacy. It’s an observable behavior. The problem is it’s not predictive.

> Jevons paradox and it’s not a fallacy. It’s an observable behavior. The problem is it’s not predictive.

I was referring specifically to this point, which, IMHO, is a fallacy:

>>> There seems to be effectively infinite demand for software from consumers and enterprises so the cheaper it gets the more they buy.

There is no way to use the word "infinite" in this context, even if qualified, that is representative of reality.


As counter-anecdata, I have family members who are growing businesses from scratch, and they constantly talk to me about problems they want to solve with software. Administrative problems, product problems, market research problems, you name it. I'm sure they have other problems they don't talk to me about where they're not looking for software solutions, but the list of places they want software to automate things is never-ending.

The consumer internet is mostly propped up by white-collar people buying stuff online and clicking on ads. Once the cutting starts, the whole internet economy just becomes a money-swapping machine between 7 VC groups.

The demand for paid software is decreasing because these AI companies are saying "Oh, don't buy that SaaS product because you can build it yourself now."


SaaS is not just software though, it’s operationalized software and data management. The value has increasingly been in the latter well before AI. How many open source packages have killed their SaaS competitors (or wrappers)?

As much as I appreciate the difference between literal infinity and consumers' demand for software, there's just so much bad software out there waiting to be improved that I can't see us hitting saturation soon.

This reasoning is flawed in my opinion, because at the end of the day, the software still has to be paid for (for the people who want/need to make a living out of it), and customers' wallets are finite.

Our attention is also a finite resource (24h a day max). We already see how this has driven the enshittification of large swathes of software like social media, where grabbing attention for a few more seconds drives the main innovation...


Most software is paid for by businesses, not consumers.

The demand for software has increased. The demand for software engineers has increased proportionally, because we were the only source of software. This correlation might no longer hold.

Depending on how the future shapes up, we may have gone from artisans to middlemen, at which point we're only in the business of added value and a lot of coding is over.

Not the Google kind of coding, but the "I need a website for my restaurant" kind, or the "I need to aggregate data from these excel files in a certain way" kind. Anything where you'd accept cheap and disposable. Perhaps even the traditional startup, if POCs are vibecoded and engineers are only introduced later.

Those are huge businesses, even if they are not present in the HN bubble.


> "I need a website for my restaurant" kind, or the "I need to aggregate data from these excel files in a certain way" kind

I am afraid those kinds of jobs were already over by 2015. There are no-code website makers available since then, and if you can't do it yourself you can just pay someone on Fiverr and get it done for less than $5-50 at this point; it's so efficient even AI won't be more cost-effective than that. If you have $10k saved you can hire a competitive agency to maintain and build your website. This business is completely taken over by low-cost Fiverr automators and agencies for high-budget projects. Agencies have become so good now that they manage websites from Adidas to Lando Norris to your average mom & pop store.


Just to add to the point: no-code website makers have already incorporated AI to simplify marketing tasks like drafting copy, blog posts, and emails.

I wonder exactly what you do, because almost none of your comment jibes with my knowledge and experience.

Note that I own an agency that does a lot of what you say is “solved”, and I assure you that it’s not (at least in terms of being an efficient market).

SMBs with ARR up to $100m (or even many times more than that in ag) struggle to find anyone good to do technical work for them, either internally or externally, on a consistent basis.

> I am afraid that kind of jobs were already over by 2015.

Conceptually, maybe. In practice, definitely not.

> There are no-code website makers available since then

… that mostly make shit websites.

> and if you can't do it yourself you can just pay someone on Fiverr and get it done for less than $5-50 at this point,

Also almost certainly a shit website at that price point, probably using the no-code tools mentioned above.

These websites have so many things wrong with them that demonstrably decrease engagement or lose revenue.

> it's so efficient even AI won't be more cost-effective than that.

AI will be better very soon, as the best derivative AI tools will be trained on well-developed websites.

That said, AI will never have taste, and it will never have empathy for the end user. These things can only be emulated (at least for the time being).

> If you have $10k saved you can hire a competitive agency to maintain and build your website

You can get an ok "brochure" website built for that. Maintaining it, if you have an agency that actually stays in business, will be about $100 minimum for the lowest-effort touch, $200 for an actual one-line change (like business hours), and up from there for anything substantial.

If you work with a decent, reputable agency, a $10k customer is the lowest on the totem pole amongst the agency’s customer list. The work is usually delegated to the least experienced devs, and these clients are usually merely tolerated rather than embraced.

It sucks to be the smallest customer of an agency, but it’s a common phenomenon amongst certain classes of SMBs.

> This business is completely taken over by low-cost Fiverr automators and agencies for high-budget projects.

This is actually true. Mainly because any decent small agency either turns into one that does larger contracts, or it gets absorbed by one.

That said, there is a growing market for mid-sized agencies (“lifestyle agencies”?).

> Agencies have become so good now that they manage websites from Adidas to Lando Norris to your average mom & pop store

As mentioned above, you absolutely do not want to be a mom and pop store working with a web agency that works with any large, international brand like Adidas.

I appreciate your points from a conceptual level, but the human element of tech, software, and websites will continue to be a huge business for many decades, imho.


Anecdotal at best, but I have directly heard CTOs - and hear noise beyond my immediate bubble - talk about 10x improvements with a straight face. Seems ridiculous to me, and even if the coding gets 10x easier, the act of defining and solving problems doesn't. #nosilverbullet

It doesn't even have to work; it just needs to show execs that it can be used to cut costs by firing employees.

I perform software engineering at a research oriented institution and there are some projects I can now prototype without writing a line of code. The productivity benefits are massive

Prototypes are always meant to be thrown away though, someone's going to have to redo it to comply with coding standards, scaling requirements, and existing patterns in the code base.

If the prototype can be just dropped in and clear a PR and comply with all the standards, you're just doing software engineering for less money!


the reality is people will be shipping "prototype code" all the time, outpacing those that don't and winning.

> It’s almost impossible to discuss LLMs on HN because everyone is busy attacking the vibecoding strawman all the time.

What’s “the vibecoding strawman”? There are plenty of people on HN (and elsewhere) repeatedly saying they use LLMs by asking them to “produce full apps in hours instead of weeks” and confirming they don’t read the code.

Just because everyone you personally know does it one way, it doesn’t mean everyone else does it like that.


I'd assume the straw-man isn't that vibe-coding (vbc) doesn't exist, but that all/most ai-dev is vbc, or that it's ok to derail any discussion on ai-assisted dev with complaints applicable only/mainly to vbc.

Neither of those would be a strawman, though. One would be a faulty generalization and the other is airing a grievance (could maybe be a bad faith argument?).

https://en.wikipedia.org/wiki/Faulty_generalization

Though I get that these days people tend to use “strawman” for anything they see as a bad argument, so you could be right in your assessment. Would be nice to have clarification on what they mean.


Hmm, if the purpose of either is so an "easier" target can be made, I think it could still qualify as a straw-man; I think an accusation of straw-manning is in part an accusation of another's intent (or bad faith - not engaging with the argument).

> Hmm, if the purpose of either is so an "easier" target can be made, I think it could still qualify as a straw-man

Good point.

> I think an accusation of straw-manning is in part a accusation of another's intent (or bad faith - not engaging with the argument).

There I partially disagree. Straw-manning is not engaging with the argument but it can be done accidentally. As in, one may genuinely misunderstand the nuance in an argument and respond to a straw man by mistake. Bad faith does require bad intent.


Half strawman -- a mudman, perhaps. Because we're seeing proper experts with credentials jump on the 'shit, AI can do all of this for me' realization blog post train.

Which experts?

Well, I have a lot of respect for antirez (Redis), and at the time of my writing this comment he had a front page blog post in which we find:

"Writing code is no longer needed for the most part."

It was a great post and I don't disagree with him. But it's an example of why it isn't necessarily a strawman anymore, because it is being claimed/realized by more than just vibecoders and hobbyists.


Does Linus Torvalds count?

When has he stated that he uses AI like that? The last I heard about him a month ago, he specifically stated that he was not interested in AI to write code: https://www.zdnet.com/article/linus-torvalds-ai-tool-maintai...

3 days ago: https://github.com/torvalds/AudioNoise/blob/main/README.md

> Also note that the python visualizer tool has been basically written by vibe-coding. I know more about analog filters -- and that's not saying much -- than I do about python. It started out as my typical "google and do the monkey-see-monkey-do" kind of programming, but then I cut out the middle-man -- me -- and just used Google Antigravity to do the audio sample visualizer.


For me there are two noteworthy things in that repo:

* the README was clearly neither written by an LLM nor AI-assisted

* he still uses GPLv2 (not 3) as the license for his works


the author of this post and Steve Yegge come to mind

So another strawman?

Yeah I also sense this disconnect between the reality and hype.

In part, I think what people are responding to is the trajectory of the tools. I would agree that they seem to be on an asymptote toward being able to do a lot more things on their own, with a lot less direction. But I also feel like the improvements in that direction are incremental at this point, and it's hard to predict when or if there will be a step change.

But yeah, I'm really not sure I buy this whole thing about orchestrating a symphony of agents or whatever. That isn't what my usage of AI is like, and I'm struggling to see how it would become like that.

But what I am starting to see, is "non-programmers" beginning to realize that they can use these tools to do things for their own work and interests, which they would have previously hired a programmer to do for them, or more likely, just decided it wasn't worth the effort. I think for those people, it does feel like a novel automation tool. It's just that we all already knew how to do this, by writing code. But most people didn't know how to do that. And now they can do a lot more.

And I think this is a genuine step change that will have a big effect on our industry. Personally, I think this is ultimately a very good thing! This is how computers should work, that anybody can use them to automate stuff they want to do. It is not a given that "automating tasks" is something that must be its own distinct (and high paying) career. But like any disruption, it is very reasonable to feel concerned and uncertain about the future when you're right in the thick of it.


The best advice to juniors is "do not use AI!"

Dunno why the author thinks an AI-enhanced junior can match the "output" of a whole team unless he means in generating lines of code, which is to say tech debt.

Being able to put a lot of words on screen is not the accomplishment in programming. It usually means you've gone completely out of your depth.


> Dunno why the author thinks an AI-enhanced junior can match the "output" of a whole team

Because the author has a vested interest in peddling this bullshit given he works on Gemini at Google.


I think it does both: you can have an LLM automate bad coding (that's the vibe coding part), and you can have an LLM speed up good coding.

Many times, bad code is sufficient. Actually too many times: IMHO that is the reason why the software industry produces lower-quality software every year. Bad products are often more profitable than good products. But it's not always about making bad products: sometimes it's totally fine to vibe code a proof of concept or prototype, I would say.

Other times, we really need stable and maintainable code. I don't think we can or want to vibe code that.

LLMs make low-quality coding more accessible, but I don't think they remove the need for high-quality coding. Before LLMs, the fraction of low-quality code was growing already, just because it was already profitable.

An analogy could be buildings: everybody can build a bench that "does the job". Maybe that bench will be broken in 2 months, but right now it works; people can sit on it. But not everybody can build a dam. And if you risk going to jail if your dam collapses, that's a good incentive for not vibe coding it.


I notice a huge difference between working on large systems with lots of microservices and building small apps or tools for myself. The large-system work is what you describe, but for small apps or tools I resonate with the automate-coding crowd.

I've built a few things end to end where I can verify the tool or app does what I want and I haven't seen a single line of the code the LLM wrote. It was a creepy feeling the first time it happened but it's not a workflow I can really use in a lot of my day to day work.


All I know is that firing half my employees and never hiring entry level people again nets me a bonus next quarter.

Not really sure why this article is talking about what happens 2 years from now since that’s 8 times longer than anything anyone with money or power cares about.


Hmm, I know this isn't true, because if management only thought quarterly, no one would ever hire anyone. Hiring someone takes 6+ months to pay off as they get up to productivity.

But the management immediately gets street cred for increasing headcount and managing more resources.

I can't tell if we're doing like a sarcastic joking thing where we're making fun of management, or if you really believe this. If we're joking around, then haha. If you really believe this to be true, then you have a warped view of reality.

The street cred doesn't come from managing more resources, the street cred comes from delivering more.


What a benevolent bossman here, keeping 50% of the jockeys around this quarter. He is probably sacrificing one of his yachts for this.

He’s keeping some around so he can fire half again next quarter for another bonus. That’s the sort of forward-thinking strategic direction that made him the boss man.

So log(N) times the bonus. Very smart boss here.

I'm somewhere in between myself. Before LLMs, I used to block a few sites that distracted me by adding entries in /etc/hosts file to mapping them to 127.0.0.1 on my work machine. I also made the file immutable so that it would take a few steps for me to unblock the sites.

The next step was for me to write a cron job that would reapply the chattr +i and rewrite the file every 5 minutes. Sort of an enforcer. I used Claude (web) to write this and cut/pasted it just because I didn't want to bother with bash syntax that I had learned and forgotten several times.
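The enforcer described here can be sketched in a few lines of POSIX shell. This is a hypothetical reconstruction, not the commenter's actual script: the function name `reblock`, the site list, and the cron schedule are all illustrative.

```shell
# Hypothetical sketch of the /etc/hosts enforcer described above.
# The function name, site list, and paths are illustrative.
reblock() {
    hosts_file="$1"
    blocked="news.ycombinator.com reddit.com twitter.com"

    # Lift immutability so the file can be rewritten (chattr needs root;
    # failures are silenced so the sketch also runs unprivileged).
    chattr -i "$hosts_file" 2>/dev/null || true

    for site in $blocked; do
        # Add a localhost mapping only if it isn't already present.
        grep -q "127.0.0.1 $site" "$hosts_file" 2>/dev/null || \
            printf '127.0.0.1 %s\n' "$site" >> "$hosts_file"
    done

    # Re-apply immutability so unblocking takes deliberate effort.
    chattr +i "$hosts_file" 2>/dev/null || true
}

# A cron entry would then re-enforce the block every 5 minutes, e.g.:
# */5 * * * * /usr/local/bin/reblock.sh /etc/hosts
```

Running this from cron rather than once is the point: even after a manual `chattr -i` and an edit, the block comes back within five minutes.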

I then wanted something stronger and looked at publicly available things like pluckeye but they didn't really work the way I wanted. So I tried to write a quick version using Claude (web) and started running it (October 2025). It solved my problem for me.

I wanted a program to use aider on and I started with this. Every time I needed a feature (e.g. temporary unblocks, preventing tampering and uninstalling, blocking in the browser, violation tracking, etc.), I wrote out what I wanted and had the agent do it. Over the months, it grew to around 4k lines (single file).

Around December, I moved from aider to Claude Code and continued doing this. The big task I gave it was to refactor the code into smaller files so that I could manage context better. It did this well and added tests too (late December 2025).

I added a helper script to update URLs to block from various sources. Vibe-coded too. Worked fine.

Then I found it hogging memory because of some crude mistakes I had vibe-coded early on, and fixed that. Cost me around $2 to do so. (Jan 2026)

Then I added support to lock the screen when I crossed a violation threshold. This required some Xlib code to be written. I'm sure I could have written it but it's not really worth it. I know what to do and doing it by hand wouldn't really teach me anything except the innards of a few libraries. I added that.

So, in short, this is something that's 98% AI-coded, but it genuinely solves a problem for me and has helped me change my behaviour in front of a computer. My research didn't turn up any companies offering this as a service for Linux. I know what to do but don't have the time to write and debug it. With AI, my problem was solved and I have something which is quite valuable to me.

So, while I agree with you that it isn't an "automation tool", the speed and depth which it brings to the environment has opened up possibilities that didn't previously exist. That's the real value and the window through which I'm exploring the whole thing.


It seems alright, but I wonder if it crashes the economy for the vast majority of internet businesses. I personally run some tool websites, like ones to convert images or cut videos, and the traffic for now seems stable, but my tools don't target devs. Most likely you didn't actually need it, but who am I to judge; I just find myself doing random projects because it "takes less time".

Interesting, I did something similar but with browser policies on my PC, although I just got the LLM to write the JSON.

I’m doing both. For production code that I care about, I’m reading every line the LLM writes, correcting it a lot, chatting with an observer LLM who’s checking the work the first LLM and I are writing. It’s speeding stuff up, it also reduces the friction on starting on things. Definitely a time saver.

Then I have some non-trivial side projects where I don’t really care about the code quality, and I’m just letting it run. If I dare look at the code, there’s a bunch of repetition. It rarely gets stuff right the first time, but that’s fine, because it’ll correct it when I tell it it doesn’t work right. Probably full of security holes, code is nasty, but it doesn’t matter for the use-cases I want. I have produced pieces of software here that are actively making my life better, and it’s been mostly unsupervised.


You can think of LLMs as a higher level language for whatever programming language you are using, but informal with ambiguous grammar.

I don't think that works. The fact that it can produce different output for the same input, usage of tools etc. don't really fit into the analogy or mental model.

What has worked for me is treating it like an enthusiastic intern with his foot always on the accelerator pedal. I need to steer and manage the brakes otherwise, it'll code itself off a cliff and take my software with it. The most workable thing is a pair programmer. For trivial changes and repeatedly "trying stuff out", you don't need to babysit. For larger pieces, it's good to make each change small and review what it's trying.


I feel like some of the frontier models are approaching run-of-the-mill engineer who does dumb stuff frequently. That said, with appropriate harnessing, it’s more like go-karts on a track; you can’t keep them out of the wall, but you can reset them and get them back on a path (when needed). Not every kart ends up in the wall, but all of them want to go fast, so the better defined the track is the more likely the karts will find a finish line. Certainly more likely than if you just stuck them in a field with no finish line and said “go!”.

I don't really know if I agree with you but the analogy is really good. :)

On the foolishness of "natural language programming". - prof.dr.Edsger W.Dijkstra

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...


>We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.

It seems it only took until about 2023 or so


And a buggy (non-deterministic) compiler that will occasionally do random things that break your security model, leak sessions, and the like.

My experience (with minfx.ai) has been that it is very important to build a system which imposes lots of constraints on the code. The more constrained you can make it, the better. Rust helps a lot here. Thanks to this, for the first time in my career, I feel like the bigger the system gets, /the easier/ it is to develop, because the AI can discover and reuse common components, while a human would struggle to find them and learn how to use them in a large codebase. Very counter-intuitive!

It’s a better Google for me. Instead of searching AWS or StackOverflow, it hallucinates a good enough output that I can refactor into what I need.

The reason it is better is that with search you have to narrow your query down to one specific part of what you are trying to do. For example, if you need a unique-id-generating function as part of a task, you first search for that; then, if you need to make sure the output is three responsive columns, you search for that; and then you write code to glue the pieces together into what you need. With AI you can ask for all of this together, get something about as good as the searched-for results would have been, and then do your glue code and fixes as you normally would.

So if a bit of functionality would have taken 4 searches, it trims the time requirement down by roughly 3 of those searches.

It does however remove the benefit of having done the search which might be that you see the various results, and find that a secondary result is better. You no longer get that benefit. Tradeoffs.


I resonate with the phrase: "You never learn to ask good questions"

If by block by block you mean you stop using an IDE and spend most of your time looking at diffs, sure. Because in a well structured project, that's all you need to do now: maintain a quality bar and ensure Claude doesn't drop the ball.

This is how I use it for work-production code.

>> The skillset is shifting from implementing algorithms to knowing how to ask the AI the right questions and verify its output.

The question is, how much faster is verification only vs writing the code by hand? You gain a lot of understanding when you write the code yourself, and understanding is a prerequisite for verification. The idea seems to be a quick review is all that should be needed "LGTM". That's fine as long as you understand the tradeoffs you are making.

With today's AI you either trade speed for correctness or you have to accept a more modest (and highly project specific) productivity boost.


And there's a ton of human incentive here to take shortcuts in the review part. The process almost pushes you to drop your guard: you spend less physical time observing the code as it's written, you get huge chunks of code dropped on you, iterations change so much that it's hard to keep a mental model, there's FOMO involved about the speed gain you're supposed to get... We're going to see worse review quality just as a matter of the UX and friction of the tool.

Yes! It depends on the company, of course, but I think plenty of people are going to fall for the perverse incentives while reviewing AI output for tech debt.

The perverse incentives being that tech debt is non-obvious & therefore really easy to avoid responsibility for.

Meanwhile, velocity is highly obvious & usually tied directly to personal & team performance metrics.

The only way I see to resolve this is strict enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle.

But when even people working at Anthropic are talking about running multiple agents in parallel, I get the idea that CTO's are not taking this seriously.


  > enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle
and a new bottleneck appears...

(i don't disagree with this take though, qa should be done from start to finish and integral every step of the way)


> The bottom line: Junior developer hiring could collapse as AI automates entry-level tasks

If AI automates today's entry-level tasks, that just means "entry-level" means something different now. It doesn't mean entry-level ceases to exist. Entry-level as we know it ends, but not entry-level in general.


It's not just juniors. One of my partners holds a PhD in epidemiology and biomolecular science; they've been job searching for eight months with no bites, just silence. A friend of mine is a chemical engineering PhD; she's been searching for a year and just had her first interview.

I have eight years of software engineering experience but am only one rung up from the bottom of our SWE ladder, and we don't even hire the bottom rung anymore at my org. Seems like there's crushing pressure from above to limit hiring at every stage.


Entry level jobs have been in decline for years though.

I used to work on teams which were 50% entry level. Then just one. Then all senior teams became the norm.

This all happened after I became senior but before AI came along.


Definitely. Our hiring pipeline seized up in 2022 and we've been disproportionately hiring senior+ roles since, and many teams are senior heavy.

I think AI is a convenient scapegoat for other macro trends.


Yeah this is truly my biggest concern. I think it's really bad.

One indicator is intern hiring. I have seen the intern budgets getting slashed by 60-70% (4 interns in 2025 vs 14 interns in 2024).

Sometimes I wonder if I made the wrong choice with software development. Even after getting to a senior role, according to this article, you're still expected to get more education and work on side projects outside of work. Am I supposed to want to code all the time? When can I pursue hobbies, a social life, etc.?

To put it very directly - if you are OK with being good but not exceptional at your job, this is totally fine. If you want to be exceptional you will probably need to put in the extra work. Not everyone is OK with this tradeoff and it's totally fine to "just" be good and care more about having outside hobbies and a social life and etc.

I had a period of time where I really wanted to be exceptional. I spent many hours studying and working on side projects but it just never really clicked. I think I'm decent at what I do for work but more complicated topics (graphics programming, low level memory management, etc.) just seem to not stick, no matter how many hours I put into studying. Sometimes it feels like I'm forcing this career but after this many years it's hard to give it up. I do still enjoy it but I don't think I'll ever really get it.

Your story sounds similar to mine. There are some parts of programming at which I know I will never excel. I also don't have time in my life to spends lots of hours outside of work developing my skills. I think it's important to realize that the median software engineer is probably not doing these things either. Maybe the top 10% are? Something like that would be my guess. It's okay to not be in the top 10%!

This is refreshing to read. Sometimes when I come here and look at the posts and comments it seems like lots of people are doing lots of things that are confusing to me. I'm recently coming to terms with being okay about not being able to learn everything that I don't understand and outside of work I've started pursuing non-programming related hobbies which led me to make the comment I did.

Since you're getting into a senior role, learn the mantra: "it depends" :D

The usual trade-off of a well paid software development job is lack of job security and always learning - the skill set is always changing in contrast with other jobs.

My suggestion: stop chasing trends and start listening to mature software developers to get a better perspective on what's best to invest in.

And why is the mantra always true?

You can find a stable job (slow-moving company) doing basic software development, just learn something new every 4 years, and then change companies.

Or never change companies and be the default expert, because everyone else is changing jobs; get job security, work fewer hours, and have time within your job to uplift your skills.

Or keep chasing the latest high-paid jobs/trends by sacrificing your off time.

What's the best option for you? Only you know; it depends on your own goals.


No. As a junior you feel the pressure to make senior. You can't be a junior for too long.

As a senior, if you choose, you can coast. By coast I mean you do justice to your job and the salary you are paid. It's a perfectly acceptable choice for someone to stay senior for as long as they want.

The biggest bottleneck is going to be what other seniors and higher think of you.


This also dovetails with his other point:

Given how quickly models, tools and frameworks rise and fall, betting your career on a single technology stack is risky.

This was something I dealt with a lot when JS frameworks became the newest shiny thing and suddenly the entire industry shifted in a few years from being a front-end developer to being a full stack developer.

This happened to a lot of my friends who went all in on Angular. Then everybody switched to React.

The issue then became, "What should I learn?" because at my company (a large fortune 200 company) they were all in on Angular, and weren't looking for React developers, but I knew companies were moving away from Angular. So do I work to get better and more indispensable with Angular, and risk not knowing React? Or do I learn the new shiny framework betting at some point my company will adopt it or I will be laid off and need to know it?

It feels like half my life as a dev was spent being a degenerate gambler, always trying to hedge my bets in one way or another, constantly thinking about where everything was going. It was the same thing with dozens of other tools as well. It just became so exhausting trying to figure out where to put your effort into to make sure you always knew enough to get that next job.


Frameworks are irrelevant. If someone can work in Angular they can pick up React as they go and vice versa, especially if assisted by AI. The problem is hiring practices where resumes are discarded if the keywords don't match...

>I made the wrong choice with software development.

If you didn't like working with computers, then you (and another gazillion people who chose it for the $$$) probably made the wrong choice.

But totally depends on what you wanted to get out of it. If you wanted to make $$$ and you are making it, what is the problem? That is assuming you have fun outside of work.

But if you wanted to be the best at what you do, then you gotta love what you are doing. Maybe there are people who have superhuman discipline, but for normal people, loving what they do goes a long way towards that end.


> If you didn't like working with computers, then you probably made the wrong choice.

This doesn't match what I have seen in other industries. Many auto mechanics I know drive old Buicks or Fords with the 4.6L V8 because the cars are reliable, and the last thing they want to do on a day off is have to work on their own car. I know a few people in other trades, like plumbers, electricians, and chefs, and the pattern holds pretty well for them as well.

You can enjoy working with computers and also enjoy not working in your personal time.


Exactly this. I love writing code and solving problems. In my 20s and very early 30s I worked a lot of long hours and tried my best to always be learning new things and upskilling but it's never ending. It's hard sometimes to not look back and think about the hours I spent on code instead of building stronger friendships and relationships.

Every career path presents you with some version of this opportunity cost dilemma. The good news is you are not stuck - you can recalibrate to allow more of what you now know you want, while still maintaining a grip on the part of the job/career/enterprise that you actually excel at, and jettisoning the rest.

> If you didn't like working with computers, then you (and another gazillion people who choose it for the $$$) probably made the wrong choice.

The problem is the field is changing, fast. I love writing code... I'm not so sure I love prompting Claude, coordinating agents and reviewing +30k vibe-coded PRs.


> If you didn't like working with <insert anything>, then you ...

This type of argument can hold for any profession and yet we aren't seeing this pattern much in other white-collar professions. Professors, doctors, economists, mechanical engineers, ... it seems like pretty much everybody made the wrong choice then?

I think this is the wrong way to look at it. OP says that he invested a lot of time into becoming proficient in something that today appears to be very close to extinction, at least in part.

I think that the question is legit, and he's likely not the only person asking oneself this question.

My take on the question is that it comes down to the ability to adapt and learn new skills. Some will succeed and some will fail, but staying in the status quo will more likely lead to failure than to success.


Your first point hits the nail on the head. We are expected to have side projects and to keep up with new things (outside of work) but most other jobs don't have that. I would be okay with my work sending me off for additional training, on company time, but I don't want it to consume the time I have left after work.

I don't know why, but our profession for some reason is different from others in this respect, and people often like to think that this is the norm and that if you're not doing it you're not worthwhile. I think it has to do with some interesting psychological traits of the people who are generally attracted to this profession, but also with the companies who exploited those traits as a means to attract people who are 100% in. Leetcode-style interviews, where you virtually have to spend months preparing even as a senior, are one example of that; I also remember the age, not too long ago, when your resume wouldn't even get a look if you didn't have several open-source repositories/contributions to show. This is to some extent still valid today.

There are plenty of such examples, but both of these imply that you're ready to devote a lot of your extra time, before or after the job, just so you can show you're relevant in the eyes of the decision makers. This normally means that you're single, that you have no kids, no family, no hobbies but programming, etc. This works when you're in your 20s and only up to a certain point, unless you become a weirdo in your 30s and 40s without any of these.

However, in this age of uncertainty, it may become the new normal to devote extra effort just to remain not competitive but a mere candidate for a job. Some will find the incentive for this extra pain, some will not, but I think it won't be easy. Perhaps in 5 years' time we will only have "AI applied" engineers developing or specializing their own models for given domains. Writing code as we do it today, I think, is already a thing of the past.


> for some reason

I think the reason is quite simple. Software is endlessly configurable. And thus a lot higher chance to get the configuration wrong.

This is what makes it attractive, and makes it hard to get right.

You cannot get good at it without making a ton of mistakes. When companies look for people with a lot of side projects, they are looking for people who have already made such mistakes and learned from them, preferably on their own time and not on paid company time.


That argument would be sound if no other profession existed that is at least comparably complex.

It is not about complexity.

I'll list some attributes of software development that make it unique.

* No hard rules or textbooks to follow; the industry as a whole still makes costly mistakes and goes through recovery cycles.

* No easy way to gauge the requirement-fit of the thing you made. Only time will tell.

* Cheap (financially) to practice, make mistakes and learn.


You're making some strong assumptions about other industries which are incorrect. All of that exists elsewhere and is not as unique to software as you may think. Things are never that simple. Your argument reads more as a justification of the status quo and gatekeeping than an objective assessment. I'm sure doctors would say something similar about their profession too, but that doesn't necessarily make it true. Software engineering is a demanding profession, but it is not as special as we like to think it is.

>doctors would have said something similar for their profession

Actually that applies to doctors. A doctor who is not curious and is not willing to learn/research on their own initiative is only a marketing hand of pharma.

But it is quite hard for doctors to do any real research independently. They can't really do experiments on real people...

Software is really special.


So a software engineer who is not curious enough to invest 15+ hours daily over the course of years is just a marketing hand of ... what ... programming language of their choice or company they work for?

Don't get me wrong. I am that guy, who probably over-invested into the development of his skills but I don't think it's a normal thing to expect.


>So a software engineer who is not curious enough to invest 15+ hours daily over the course of years is just a marketing hand of ..

That does not apply here. Because more often than not, we don't prescribe products/services that our clients must go out and buy, without exception.

>it's a normal thing to expect.

It is not a normal thing to expect because in other fields there are few people who can afford to do that. So an employer cannot really pick someone from that pool.

But in software, it is possible if one chooses to do it. So the pool is a lot bigger, and it becomes feasible for an employer to pick someone from there instead of from the I-am-only-as-good-as-I-am-paid-to-be pool.


> That does not apply here. Because more often than not, we don't prescribe products/services that our clients must go out and buy, without exception.

You know that treating patients is not only about picking the right medication and writing prescriptions, right? It's about diagnosing, testing hypotheses, optimizing for the particular patient's case, learning about all the specific factors of their environment including genetics; and then we have surgeons, etc.

And yet I don't quite see doctors on a time-spending spree to become exquisite in all of those things. Nor do I see hospitals or clinics putting their potential employees through such knowledge and ability tests. The stakes are much higher in medicine than in software, so it makes no sense to argue that doctors cannot "afford" it. They can; they have books and practice the same way we do. I don't get to modify the production system every day, and yet I am constantly learning how not to make those same production systems go down when I do.

> It is not a normal thing to expect because in other fields there are few people who can afford to do that.

It's not a normal thing in software either, you know? Let's please stop normalizing things which are not normal. If there is one thing that makes me happy in this new era of AI-assisted development, it's that all this bs is coming to its end.


I am not normalizing anything!

I am just describing the logical behavior of an employer who wants to get the best person for the job.

About the other thing, I think I will let you have the last word since I feel that we are speaking past each other.


Software development as a career was born, reached maturity and died in less than 100 years (being generous).

It never had time to develop into a truly professional field like medicine, law or engineering.


I don't know where you take the idea that it's dead or dying as a discipline. The need for software solutions is clearly bigger than ever and growing. And what I see, even as and especially with LLM coding becoming more prevalent, is a breakneck rapid decline in the quality of delivered software and a downright explosion of security issues and incidents.

AI is making it so that “working with computers” is no longer a viable career path. At least that’s the goal.

As AI allows more and more people to accomplish tasks without a deep understanding of computers, “working with computers“ will be as much of a marketable job skill as “working with pencils” 50 or 100 years ago.


>I made the wrong choice with software development.

I wonder what the best decision would have been. What job is AI immune and has a stable 40 hour week, no overtime, with decent pay. Teacher? Nursing?


Neither teachers nor nurses only work 40 hours and no overtime. :')

Definitely something that requires social/interpersonal skills though will be the thing that winds up being AI immune. Humans are social creatures so I assume there will always be some need for it.


humans have stopped being social creatures when social media took off

I wasn't thinking AI immune in my comment but it's fair to include it. I wouldn't even mind overtime because that implies pay (in my mind). I'm more so talking about the unpaid time we are expected to put into further education or into side projects.

SWE jobs have always required education. Few made a career knowing one thing (e.g. COBOL), but those days are long gone. I have been a SWE for 3 decades and have always had to further my education (including now).

> Am I supposed to want to code all the time? When can I pursue hobbies, a social life, etc.

I feel you. It's a societal question you're posing. Your employer (most employers) deals in dollars. A business is evaluated by its ability to generate revenue. That is the purpose of a business and the fiduciary duty of the CEOs in charge.


I have been programming professionally since 1996. I have never once spent one minute on “side projects”.

It's funny you should ask this. When I started out, 30 years ago, here were the answers you'd get from most people:

> Am I supposed to want to code all the time?

Yes.

> When can I pursue hobbies,

Your hobby should be coding fun apps for yourself

> a social life, etc.

You social life should be hanging out with other engineers talking about engineering things.

And the most successful people I know basically did exactly that.

I'm not saying y'all should be doing that now, I'm just saying, that is in fact how it used to be.


Not my experience at all. The very notable engineers I know didn't do their most notable work because of engineering or coding skills. Instead it was finding interesting problems and making a start or thinking a bit differently about something and doing something about it and being approachable and available all along that made a difference.

If all they did was code all the time, write code for fun and interacted mostly with other similar people, they probably wouldn't be the first choice for these projects.


That's not true at all.

The ones who ace their careers are, for the most part, people that are fun, driven, or psychos: all social traits that make you good in the political game.

Spending lots of time with other socially awkward types talking about hard math problems or whatever will get you nowhere outside of some SF fantasy startup movie.

I'd say it's especially important for the more nerdy (myself included) to be more outgoing and do other stuff like sales, presentations, design/marketing, or workshops; that will make you exceptional, because you then have the "whole package" and understand the process and other people.


> And the most successful people I know basically did exactly that.

Well, that depends heavily on how you define successful. Successful in life? I would tend to disagree, unless you believe that career is the only thing that counts. But even where career is concerned: the most successful people I know went on from being developers to high-end management roles. The skills that brought them there definitely did not come from hanging out with other engineers talking about engineering things.


> You social life should be hanging out with other engineers talking about engineering things.

Fuck. That.

I worked at a faang, successful people weren't people that did engineering, it was people who did politics.

The most successful people were the ones that joined at the same time as the current VP.

Your hobbies need to be fun, to you, not there to support your career. If they're just there to support your career, that's unpaid career development, not a hobby. Should people not code in their free time? That's not for me to decide. If they enjoy it, and it's not hurting anyone, then be my guest.

Engineers are generally useless at understanding what's going on in the real world, and they are also quite bad at communicating.

do. fun. things.


I love your last point. I asked this question because I used to be the person that would spend 4+ hours after work every day trying to keep up with new tech and working on side projects. But now, I've gotten into art and it's really changed my perspective on things like this. I've spent many hours doing, as you call it, unpaid career development instead of pursuing hobbies, building up my friendships, and in general just having fun. It feels like I've taken life so seriously and I don't have much to show for it.

You just sound very angry your career isn't fun to you. I'm sorry.

That's an odd assertion.

My career has been fun; that's why I still do the thing I'm doing. I've worked with the very best in their respective fields for ~20 years.

I have done many and varied fun things through work, and continue to do so.

But.

Work stops at contracted time. After that it's me time.


I never worked “for fun”. My job for 30 years is just a means to support my addiction to food and shelter. I don’t hate my job especially my last 3 since 2020 when I started working remotely. But it is just something I do.

Yes, that's the answer that sounds utterly miserable. Spending 30 years toiling for a third of your day at something you don't like.

I'm sorry for you as well.


Out of all the things I consider “toiling” waking up, rolling out of bed, and walking over to the room next door is not one of them.

I’m more concerned that it is the highlight of someone’s life being in front of a computer all day.


I also started 30 years ago - mid 1996.

I did not do side projects. I really enjoyed most of my 20s as a single person. I was a part time fitness instructor, I dated, hung out with friends, did some traveling.

The other developers at my job also had plenty of outside hobbies.


Funny that he mentions people not pivoting away from COBOL. My neighbors work for a bank, programming in COBOL every day. When I moved in and met them 14 years ago, I wondered how much longer they would be able to keep that up.

They're still doing it.


The market can stay irrational longer than you can stay solvent

it sounds like these people are staying solvent as long as the market stays irrational.

to be fair, that COBOL program has been working for probably 30 years (maybe even longer than that) - that's unusually reliable and long-lived for a software project.

the only real contender in this regard is the win32 api, and actually that did get used in enterprise for a long time too before the major shift to cloud and linux in the mid 2010s.

ultimately the proof is in the real-world use, even if it's ugly to look at... i'd say, even as someone who is a big fan of linux, if i were given a 30 year old obscure software stack that did nothing but work, i would be very hesitant to touch it too!


It still needs continual software maintenance though. The developers still making their money in COBOL make it because it doesn't just keep working untouched. (Just about no software does.)

> the only real contender in this regard

I would like to add the business core functions of SAP R/3 (1992). Much of the code created for it in the early 90s still lives in the current SAP S/4HANA software.


There is a great deal of death in a language.

> junior developer employment drops by about 9-10% within six quarters, while senior employment barely budges. Big tech hired 50% fewer fresh graduates over the past three years.

This study showing a 9-10% drop is odd[1] and I'm not sure about their identification criteria.

> We identify GenAI adoption by detecting job postings that explicitly seek workers to implement or integrate GenAI technologies into firm workflows.

Based on that MIT study it seems like 90+% of these projects fail. So we could easily be seeing an effect where firms posting these GenAI roles are burning money on the projects in a way that displaces investment in headcount.

The point about "BigTech" hiring 50% fewer grads is almost orthogonal. All of these companies are shifting hiring towards things where new grads are unlikely to add value, building data centers and frontier work.

Moreover, the TCJA of 2017 caused software developers to not count for R&D tax write-offs (I'm oversimplifying) starting in 2022. This surely has more of an effect than whatever "GenAI integrator roles" postings correlate to.

[1] https://download.ssrn.com/2025/11/6/5425555.pdf


AI became very popular suddenly. This is something that wasn't in anyone's budget. I believe cost savings from hiring freezes and layoffs are to pay for AI projects and infrastructure.

Right, so you shift budget away from other things. The “study” looked at AI integration job listings. You have to budget those.

Hiring was booming until about 2020 though.

The TCJA change (of 2017) went into effect in 2022, I should have been more clear.

I didn't know that but that makes perfect sense. A lot of layoffs and outsourcing coincided with that. Are there any signs it'll be reintroduced?

It was late last year.

From my experience AI is only good with mainstream coding tasks: JavaScript, Node, React, CRUD. Whatever it has seen in overabundance, it is good with. Already with TypeScript it is less strong than with JavaScript. It is just a clever token generator without intelligence. Often it resembles intelligence, but so can a human by learning quotes of famous people.

It is a new and exciting tool but immediately limited with medium complex tasks. Also we will see a lot more code with tricky bugs coming out of AI assistants and all of that needs to be maintained. If software development gets cheaper per line of code then there will be more demand. And someone has to clean up the mess created by people who have no clue whatsoever of SWE.

Once upon a time people developed software with punch hole cards. Even without AI a developer today is orders of magnitude more proficient than that.

The only thing I hope I am not going to see in my lifetime is real artificial intelligence.


>The flip scenario: AI unlocks massive demand for developers across every industry, not just tech. Healthcare, agriculture, manufacturing, and finance all start embedding software and automation.

I find this one hard to believe. Software is already massively present in all these industries and has already replaced jobs. The last step is complete automation (i.e. drone tractors that can load up at a hub, go to the field, and spray all by themselves), but the bottleneck for this isn't "we need more code", it's real-world issues that I don't see AI helping to solve (political, notably)


I tend to agree with your assessment. The increase in demand cannot possibly equal the loss from AI.

Given projections of AI abilities over time AI necessarily creates downward pressure on new job creation. AI is for reducing and/or eliminating jobs (by way of increasing efficiency).

AI isn't creating 'new' things, it's reducing the time needed to do what was already being done. Unlike the automobile revolution new job categories aren't being created with AI.


I think there's a good chance demand goes up in Europe.

We are going to need to de-risk our software dependencies, and Germany is going to need to use computers.

Germany is going to be crazy, I think.


Germans were so quick to revert back to paper after COVID that it felt like one of the only reasons they came out of lockdown eventually was to get paper back.

The Gewerkschaft tactics to resist AI are what I’m really interested in seeing.


Agree, people were already worried about the excessive focus on software over physical technology well before LLMs significantly reduced the barrier to entry

> Demonstrate that one junior plus AI can match a small team’s output.

I don't understand the take that a junior with AI is able to replace a small team. Maybe a horribly performing small team? Even then, wouldn't it just be logical to outfit the small team with AI and then have a small team of small teams?

The alleged increased AI output of developers has yet to be realized. Individuals perceive themselves as having greatly increased output, but the market has not yet demonstrated that with more products (or competitors to existing products) and/or improved products.


Something very odd about the tone of this article. Is it mostly AI written? There are a lot of references and info. But I feel far more disconnected from it.

For the record, I was genuinely trying to read it properly. But it is becoming unbearable by mid article.


Yes, lots of AI style/writing in this article. I wouldn't necessarily discredit an article just based on stylization if the content was worth engaging with ... but like you mentioned, when the AI is given too much creative control it goes off the rails by the middle and turns into what the kids call "AI slop".

It resembles an article, it has the right ingredients (words), but they aren't combined and cooked into any kind of recognizable food.


Thanks a lot for taking the time to confirm. Not hating on AI slop or something. But I do genuinely feel that if he/she/they had invested time in writing it, people would consume and enjoy it better.

It's hard to put my finger on it. But it lacks soul, "it" factor or whatever you want to call it. Feels empty in a way.

I mean, this is not the first AI-assisted article I'm reading. But usually, it's to a negligible level. Maybe it's just me. :)


I suppose that eventually enough people will have grown up reading mostly AI slop that that way of speaking will become the norm.

100% has that AI slop smell.

intro... Problem... (The Bottom line... What to do about it...) Looped over and over. and then Finally...

I want to read it, but I can't get myself to.


Understandable. I usually only recognise AI assist cos someone in the comment section points it out. But the off putting tone of this was blatantly obvious. This is by far the most AI influenced article I have read yet.

imo, the OP has bad AI-assisted takes on almost every single "critical question". This makes me doubt whether he has breadth of experience in the craft. For example:

> Narrow specialists risk finding their niche automated or obsolete

Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.

> Universities may struggle to keep up with an industry that changes every few months

Those who know the theory of the craft will oversee the machine. Those who dont will take orders from it. Universities will continue to teach the theory of the discipline.


I think this is a fair take (despite the characteristic HN negativity/contrarianism), and succinctly summarizes a point that I was finding hard to articulate while reading the article.

My similar (verbose) take is that seniors will often be able to wield LLMs productively, where good-faith LLM attempts will be the first step, but will frequently be discarded when they fail to produce the intended results (personally I find myself swearing at the LLMs when they produce trite garbage; output that gets `gco .`-ed immediately, or LLM MR/PRs that get closed in favor of manually accomplishing the prompted task).

Conversely, juniors will often wield LLMs counterproductively, accepting (unbeknownst to them) tech debt that neither the junior nor the LLM will be able to correct past a given complexity.


All well-documented knowledge fields will be gone if software goes. Then the undocumented ones will become documented, and they too will go. The best advice to junior devs is get a hands on job before robotic articulating sausages are perfected and humans become irrelevant blobs of watery meat.

I think the GPT3 or 4 minute mile moment for robotics will be when we see a robotic hand with the dexterity of a 6 year old. Once that happens it will quickly be over.

I'm looking forward to the antics of first-generation robotic plumbers and electricians, myself.

Will be amusing watching them fail at common tasks for a couple years then they will suddenly be better than any human plumber or electrician.

I have been telling people that, titles aside, senior developers were the people not afraid to write original code. I don’t see LLMs changing this. I only envision people wishing LLMs would change this.

I almost think what a lot of people are coming to grips with is how much code is unoriginal. The ones who've adjusted the fastest were humble to begin with. I don't want to claim the title, but I can certainly claim the imposter syndrome!

If anything, LLMs validated something I always suspected: the amount of truly unique, relevant-to-success code in a given project is often very small. More often than not, it's not grouped together either. Most of the time it's tailored to a given functionality. For example, a perfectly accurate Haversine distance is slower than an optimized one with tradeoffs.

LLMs have not yet become adept at identifying the need for those tradeoffs in context well or consistently, so you end up with generic code that works but not great. Can the LLM adjust if you explicitly instruct it to? Sure, sometimes! Sometimes it gets caught in a thought loop too. Other times you have to roll up your sleeves and do the work like you said, which often still involves traditional research or thinking.
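To make the Haversine tradeoff concrete, here's a small sketch (function names and the choice of the equirectangular shortcut are my illustration, not anything from the comment above): the exact great-circle formula spends several trig calls per pair of points, while a flat-earth approximation is cheaper and accurate enough over short distances.

```python
import math

R = 6371.0  # mean Earth radius in km

def haversine_km(lat1, lon1, lat2, lon2):
    """Exact great-circle distance via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

def equirect_km(lat1, lon1, lat2, lon2):
    """Cheaper equirectangular approximation; fine for short distances."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return R * math.hypot(x, y)
```

For city-scale distances (e.g. Paris to London) the two agree closely; over thousands of kilometers the approximation drifts. Which one is "correct" for a project is exactly the kind of context-dependent judgment call the comment describes.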

I disagree.

1) Senior developers are more likely to know how to approach a variety of tasks, including complex ones, in ways that work, and are more likely to (maybe almost subconsciously) stick to these proven design patterns rather than reinvent the wheel in some novel way. Even if the task itself is somewhat novel, they will break it down in familiar ways into familiar subtasks/patterns. For sure if a task does require some thinking outside the box, or a novel approach, then the senior developer might have better intuition on what to consider.

The major caveat to this is that I'm an old school developer, who started professionally in the early 80's, a time when you basically had to invent everything from scratch, so certainly there is no mental block to having to do so. I'm aware there is at least a generation of developers that grew up with Stack Overflow and have much more of a mindset of building stuff using cut and paste, and less of having to sit down and write much complex/novel code themselves.

2) I think the real distinction of senior vs junior programmers, that will carry over into the AI era, is that senior developers have had enough experience, at increasing levels of complexity, that they know how to architect and work on large complex projects where a more junior developer will flounder. In the AI coding world, at least for time being, until something closer to AGI is achieved (could be 10-20 years away), you still need to be able to plan and architect the project if you want to achieve a result where the outcome isn't just some random "I let the AI choose everything" experiment.


I completely agree with your second point. For your first point my experience tells me the people least afraid to write original code are the people least oppositional to reinventing wheels.

The distinguishing behavior is not about the quantity of effort involved but the total cost after consideration of dependency management, maintenance time, and execution time. The people that reinvent wheels do so because they want to learn and they also want to do less work on the same effort in the future.


> in the early 80's, a time when you basically had to invent everything from scratch, so certainly there is no mental block to having to do so, and I'm aware there is at least a generation of developers that grew up with stack overflow and have much more of a mindset of building stuff using cut an paste, and less having to sit down and write much complex/novel code themselves.

I think this is really underappreciated and was big in driving how a lot of people felt about LLMs. I found it even more notable on a site named Hacker News. There is an older generation for whom computing was new, the 80s through 90s probably being the prime of that era (for people still in the industry). There was constantly a new platform, language, technology, or concept to learn. And nobody knew any best practices, nobody knew how anything "should work". Nobody knew what anything was capable of. It was all trying things, figuring them out. It was way more trailblazing / exploring new territory, the birth of the internet being one of the last examples of this from that era.

The past 10-15 years of software development have been the opposite. Just about everything was evolutionary, rarely revolutionary. Optimizing things for scale, improving libraries, or porting successful ideas from one domain to another. A lot of shifting around deck chairs on things that were fundamentally the same. Just about every new "advance" in front-end technology was this. Something hailed as ground breaking really took little exploration, mostly solution space optimization. There was almost always a clear path. Someone always had an answer on stack overflow - you never were "on your own". A generation+ grew up in that environment and it felt normal to them.

LLMs came about and completely broke that. And people who remembered when tech was new and had potential and nobody knew how to use it loved that. Here is a new alien technology and I get to figure out what makes it tick, how it works, how to use it. And on the flip side, people who were used to there being a happy path, or a manual to tell you when you were doing it wrong, got really frustrated at there being no direction - feeling perpetually lost and it not working the way they wanted.

I found it especially ironic being on hacker news how few people seemed to have a hacker mindset when it came to LLMs. So much was, "I tried something it didn't work so I gave up". Or "I just kept telling it to work and it didn't so I gave up". Explore, pretend you're in a sci-fi movie. Does it work better on Wednesdays? Does it work better if you stand on your head? Does it work differently if you speak pig-latin? Think side-ways. What behavior can you find about it that makes you go "hmm, that's interesting...".

Now I think there has been a shift very recently of people getting more comfortable with the tech - but I was still surprised by how little of a hacker mindset I saw on Hacker News when it came to LLMs.

LLMs have reset the playing field from well manicured lawn, to an unexplored wilderness. Figure out the new territory.


To me, the "hacker" distinction is not about novelty, but understanding.

Bashing kludgy things together until they work was always part of the job, but that wasn't the motivational payoff. Even if the result was crappy, knowing why it was crappy and how it could've been better was key.

LLMs promise an unremitting drudgery of the "mess around until it works" part, facing problems that don't really have a cause (except in a stochastic sense) and which can't be reliably fixed and prevented going forward.

The social/managerial stuff that may emerge around "good enough" and velocity is a whole 'nother layer.


No, the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers. Case in point, you can’t really cleverly “hack” LLMs. It’s more a roll of the dice that you try to affect using hit-or-miss incantations.

>the negative feelings about LLMs are not because they are new territory, it’s because they lack the predictability and determinism that draw many people to computers

Louder for those turned deaf by LLM hype. Vibe coders want to turn a field of applied math into dice casting.
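The "dice casting" point has a simple mechanical basis. As a toy sketch (the standard softmax-with-temperature idea, not how any particular vendor's inference stack works): greedy decoding is a pure function of the logits, while sampled decoding literally rolls dice on every token.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_pick(logits):
    """Temperature-0 limit: always the argmax token. Fully deterministic."""
    return max(range(len(logits)), key=lambda i: logits[i])

def sample_pick(logits, temperature, rng):
    """Stochastic decoding: draw a token index from the tempered distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]
```

For the same logits, `greedy_pick` returns the same token every call, while `sample_pick` at temperature 1.0 varies run to run unless you pin the RNG seed - which is roughly why "hacking" an LLM prompt feels more like reshaping a distribution than flipping a deterministic switch.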


an unexplored wilderness that you pour casino chips into (unless you're doing local model stuff yea yea)

>I found it especially ironic being on hacker news how few people seemed to have a hacker mindset when it came to LLMs

You keep using the word "LLMs" as if Opus 4.x came out in 2022. The first iterations of transformers were awful. GPT-2 was more of a toy and GPT-3 was an eyebrow-raising chatbot. It has taken years of innovations to reach the point of usable stuff without constant hallucinations. So don't fault devs for the flaws of early LLMs


> Senior developers: Fewer juniors means more grunt work landing on your plate

I'm not sure I agree with that. Right now as a senior my task involves reviewing code from juniors; replace juniors with AI and it means reviewing code from AI. More or less the same thing.


> More or less the same thing.

Worse. The AI doesn't share any responsibility.


And can’t be mentored by the senior except in some ersatz flat text instruction files.

And the mistakes AI makes don't carry the same code smells juniors' mistakes do. There are markers in human code that signal how well they understood the problem; AI code more often looks correct at a glance even if it's horribly problematic.

Yeah, this is a big thing. AIs (at the moment) don't learn. You wait for a new model to come out and hope it is better than the last one. But that isn't the same thing as a person learning over time.

> AIs (at the moment) don't learn.

Yes, and even when it learns (because there's a new version of the AI model) it doesn't learn according to your company/team's values. Those values might be very specific to your business model.

Currently, AI (LLM) is just a tool. It's a novel and apparently powerful tool. But it's still just a tool.


The juniors get better and closer to the ideal that my team requires via this process. Current AIs don’t, not the same way.

And then, because of salary compression and inversion - where new employees at the same level as your former junior developers get paid more based on market trends while HR won't give your former junior developers raises - they end up leaving when they get "good enough".

So why hire juniors at all instead of poaching a mid level ticket taker from another company?

If you are a line level manager, even if you want to retain your former junior now mid level developer, your hands are probably tied.


Humans resemble AGI more than they do LLMs

currently my job as a junior is to review vibe code that was "written" by seniors. It's just such bullshit, and they make mistakes I wouldn't even dare to make in my first year of school

Except that the AI doesn't get tired, and your superiors know it. The volume of code you'll have to review will increase.

This article suggests it is specialists who are "at risk", but as much more of a generalist I was thinking the opposite and starting to regret not specialising more.

My value so far in my career has been my very broad knowledge of basically the entire of computer science, IT, engineering, science, mathematics, and even beyond. Basically, I read a lot, at least 10x more than most people it seems. I was starting to wonder how relevant that now is, given that LLMs have read everything.

But maybe I'm wrong about what my skill actually is. Everyone has had LLMs for years now and yet I still seem better at finding info, contextualising it and assimilating it than a lot of people. I'm now using LLMs too but so far I haven't seen anyone use an LLM to become like me.

So I remain slightly confused about what exactly it is about me and people like me that makes us valuable.


LLMs have read EVERYTHING, yes. That includes a lot of suboptimal solutions, repeated mantras about past best practices that are no longer relevant, thousands of blog posts about how to draw an owl by drawing two circles and leaving the rest as an exercise to the reader, etc.

The value of a good engineer is their current-context judgment. Something that LLMs cannot do well.

Second point, something that is mentioned occasionally but not discussed seriously enough, is that the Dead Internet Theory is becoming a reality. The amount of good, professionally written training material has by now been exhausted, and LLMs will start to feed on their own slop. See how little LLMs' core competency increased in the last year even with the big expansion of their parameters.

Babysitting LLM's output will be the big thing in the next two years.


I mean, there is no strategy that saves you 100% from it. The layoffs are kind of random, based on teams they don't see any vision for, or engineers who don't perform. Generalising is better imo.

The most useful thing juniors can do now is use AI to rapidly get up to speed with the new skill floor. Learn like crazy. Self learning is empowered by AI.

Engineers > developers > coders.


AI has a lot of potential as a personal, always on teaching assistant.

It's also a 'skip intro' button for the friction that comes with learning.

You're getting a bug? Just ask the thing rather than spending time figuring it out. You don't know how to start a project from scratch? Ask for scaffolding. Your first boss asks for a ticket? Better not to screw up; hand it to the machine just to be safe.

If those temptations are avoided you can progress, but I'm not sure that lots of people will succeed. Furthermore, will people be afforded that space to be slow, when their colleagues are going at 5x?

Modern life offers little hope. We're all using uber eats to avoid the friction of cooking, tinder to avoid the awkwardness of a club, and so on. Frictionless tends to win.


That is quite some wishful thinking there. Most juniors won't care, just vibe-code their way through.

It takes extra discipline and willpower to force yourself to do the painful thing, if there is a less painful way to do it.


Scientists > engineers > developers > coders

Mathematicians > scientists > engineers > developers > coders


Because employers famously hire based on skill and not credentials or experience

> The flip scenario: AI unlocks massive demand for developers across every industry, not just tech. Healthcare, agriculture, manufacturing, and finance all start embedding software and automation. Rather than replacing developers, AI becomes a force multiplier that spreads development work into domains that never employed coders. We’d see more entry-level roles, just different ones: “AI-native” developers who quickly build automations and integrations for specific niches.

This is what I expect to happen, but why would these entry-level roles be "developers". I think it's more likely that they will be roles that already exist in those industries, where the responsibilities include (or at least benefit from) effective use of AI tools.

I think the upshot is that more people should probably be learning how to work in some specific domain while learning how to effectively use AI to automate tasks in that domain. (But I also thought this is how the previous iteration of "learn to code" should be directed, so maybe I just have a hammer and everything looks like a nail.)

I think dedicated "pure tech" software where the domain is software rather than some other thing, is more likely to be concentrated in companies building all the infrastructure that is still being built to make this all work. That is, the models themselves and all the surrounding tools, and all the services and databases etc. that are used to orchestrate everything.


An AI-enabled developer is still a full-time job that requires SWE expertise. I think the quoted portion is correct, but it will be a gradual change as CTO/CIOs realize the arbitrage opportunity in replacing most of their crappy SaaS subscriptions with high-velocity in-house solutions. The savvy ones, at least.

This is true if you want to build professional software. But what I foresee is a lot more tasks being accomplished with task-specific tools created by the people responsible for doing those tasks. Like how people use spreadsheets to get their jobs done, but with a much broader set of use cases.

Once it is easier to just make almost anything yourself than it is to go through a process of expressing your requirements to a professional software development group and iterating on the results, that will be a popular choice.


That works until it’s important enough to secure, monitor, operate, debug, and have people on call in case it breaks.

At that point it gets handed over to the engineers.


Yes, but I think the reason things become that important is when they are used by a lot of people. People (or small teams) building purpose-specific tools for themselves don't require any of that.

Is there a Jeopardy for guessing prompts? Give an executive summary of GenAI trends where GenAI is the destiny and everything reacts to it. Touch on all “problems”. Don’t be divisive by making hard proclamations. Summarize in a safe way by appealing to the trope of the enthusiastic programmer who dutifully adapts to the world around them in order to stay “up to date”; the passive drone that accepts whatever environment they are placed in and never tries to change it. But add insult to injury by paradoxically concluding that the only safe future is the one you (individual) “actively engineer”.

I’m not saying that this was prompted. I’m just summarizing it in my own way.


The next two years of software engineering will be the last two years of software engineering (probably).

I don't see the market flooded yet with software that was "so easy to build using LLMs".

Last year was, as it seems, just a normal year in terms of global software output.


If anything, looking at for example what Microsoft has been releasing, it's been a year below average (in terms of quality).

The software being written isn't coming to "the market". It's all internal development.

Look at smaller SaaS offerings and people selling tiny utility apps, those will go away slowly.

Why would I pay for something when I can make it for my own (or company internal) use in an afternoon?


You are not looking at the right places. GitHub repo counts have been high since 2020 because there are companies & individuals who run fork scripts, so AI can't match those numbers.

But on Product Hunt, the number of projects in the first week of this Jan was 5000+, versus approx. 4000 for the entire Jan 2018.


That doesn’t mean industry output is high, it means people are starting new products.

Has the output of existing companies/products increased substantially?

Have more products proven successful and started companies?

hard to say but maybe a little


>Has the output of existing companies/products increased substantially?

Would be impossible to tell.


No, it would be pretty easy if it looked like new features were shipping significantly faster. But they’re not.

This is such a stupid argument. A very significant amount of code never makes it into the public sphere. None of the code I've written professionally in the last 26 years is publicly accessible, and if someone uses a product I've written they likely don't care if it was written with the aid of an LLM or not.

Not to mention agent capabilities at the end of last year were vastly different to those at the start of the year.


Even if a portion of software is not released to the general public, you'd still expect an increase in the amount of software released to the general public.

Even if LLMs became better during the year, you'd still expect an increase in releases.


What are you willing to bet on that prediction? Your car? Your home?

Talk is cheap, let's see the money :D


Please don’t get my hopes up. Adaptable people like me will outcompete hard in the post-engineering world. Alas, I don’t believe it’s coming. The tech just doesn’t seem to have what it takes to do the job.

Some related fields will be gone too. And the jobs which will remain will be impossible to get.

> And the jobs which will remain will be impossible to get.

Exactly my thoughts lately ... Even by yesterday's standards it was already very difficult to land a job and, by tomorrow's standards, it appears as if only the very best of the best, and those in positions of decision-making power, will be able to keep their jobs.


I've been saying for a decade that one of the fundamental issues with SWE in the average company is that management does not seem to understand that SWE is a management-level job. It's not assembly-line work. It requires reorganizing, streamlining, stakeholders, etc. - in code and data - which directly affect people in much the same way any other management role does. There are just fewer issues with HR and more with CDNs or CVEs.

> A CEO of a low-code platform articulated this vision: in an “agentic” development environment, engineers become “composers,”

I see we'll be twisting words around to keep avoiding the comparison.


The bottom up and top down don’t seem to match.

Where is all the new and improved software output we’d expect to see?


The most important question is who will get paid the most? I don't think the future of software engineering will be attractive if all you do is more work for same or even less pay. A second danger is too much reliance on AI tools will centralise knowledge and THAT is the scariest thing. Software systems will need to perform for a long time, having juniors on board and people who understand software architecture will be massively important. Or will all software crash when this generation retires?

The people who don't lose their jobs won't be in a great spot either: there won't be any guarantee that they keep them, they will continue to live on a wobbly and uncertain foundation, and they'll get fired for the first "no" they say to management. If software engineering falls, all the related industries will fall too, creating a domino effect that none of the execs can imagine right now.

I really do wonder what sort of economic change is coming, because companies will hypothetically need to hire fewer people to sustain the same output as today. They can basically do that already, so it's not even hypothetical anymore; it just needs some time to take off.

The question IMO is, who will be creating the demand on the other side for all of these goods produced if so many people will be left without the jobs? UBI, redistribution of wealth through taxes? I'm not so convinced about that ...


> The question IMO is, who will be creating the demand on the other side for all of these goods produced if so many people will be left without the jobs?

There is no reason why people will be left without jobs. Ultimately, a "job" is simply a superstructure for satisfying people's needs. As long as people have needs and the ability to satisfy them, there will be jobs in the market. AI changes nothing in those aspects.


I think it very much does. Those exact needs have so far been fulfilled by N people's jobs. Tomorrow those same needs will be fulfilled by N-M people's jobs. For your hypothesis to work, human (or shall I say market) needs have to scale such that the M people left redundant are needed to cover the new gap. The thing is, I am not so sure about the "scaling" part. Not to mention that people's skills also need to scale so that they can deliver the value that scales the market. The skills we had until yesterday are slowly becoming a thing of the past, so I wonder what type of skills people will need in order to get those "new" jobs. I would genuinely like to hear an opinion on this, because I am not really confident the market will self-adjust such that the economy remains the same.

UBI is just a pipe dream. The rich are clutching their pearls even harder.

I think so too.

> there wont be a guarantee that they will never lose their jobs, they will continue to live on the wobbly and uncertain foundation

The people who lose their jobs prove this was always the case. No job comes with a guarantee, even ones that say or imply they do. Folks who believe their job is guaranteed to be there tomorrow are deceiving themselves.


the people who start successful new companies will get paid the most.

The outlook on CS credentials is wrong. You'll never be worse off than someone without those credentials, all other things equal. Buried in this text is some assumption that the relatively studious people who get degrees are going to fall behind the non-degreed, because the ones who didn't go to school will out-study them. What is really going to happen generally is that the non-degreed will continue to not study, and they will lean on AI to avoid studying even the few things that they might have otherwise needed to study to squeak by in industry.

The fundamentals of CS don't change and are more valuable to learn for the long term. Vibe coders think they can bypass everything because they can ask a machine to write them a todo list.

I think you're right but it's more like the theory and other thinking skills are harder to pick up on your own than particular technologies. You definitely still ought to learn both theory and particular tech skills, as they are not interchangeable. A person who only knows pure CS is difficult to employ as an engineer because programming entails particular technological skills.

On the junior developer question:

A humble way for devs to look at this, is that in the new LLM era we are all juniors now.

A new entrant with a good attitude, curiosity and interest in learning the traditional "meta" of coding (version control, specs, testing etc) and a cutting-edge, first-rate grasp of using LLMs to assist their craft (as recommended in the article) will likely be more useful in a couple of years than a "senior" dragging their heels or dismissing LLMs as hype.

We aren't in coding Kansas anymore, junior and senior will not be so easily mapped to legacy development roles.


Sorry but no. Software engineering is too high dimensional such that there is no rulebook for doing it the way there is for building a bridge. You need to develop taste, much like high level Go players do. This is even more critical as LLMs start to spit out code at an ever higher rate allowing entropy to accumulate much faster and letting unskilled people paint themselves into corners.

I think of it a bit like ebike speed limits. Previously to go above 25mph on a 2-wheeled transport you needed a lot of time training on a bicycle, which gave you the skills, or you needed your motorcycle licence, which required you to pass a test. Now people can jump straight on a Surron and hare off at 40mph with no handling skills and no license. Of course this leads to more accidents.

Not to say LLMs can't solve this eventually, RL approaches look very strong and maybe some kind of self-play can be introduced like AlphaZero. But we aren't there yet, that's for sure.


I don't think that conflicts with what I said but perhaps counters with something I didn't; your ebike analogy implies a recklessness that the junior with the attributes I mentioned will be averse to. Conversely the senior with the full grasp of LLMs and the "taste" and judgement will naturally be ahead.

But the comparison I made was between the junior with a good attitude and expert grasp on LLMs, and the stick-in-the-mud/disinterested "senior". Those are where the senior and junior roles will be more ambiguous in demarcation as time moves forward.


My question: those people who were building crappy, brittle software, full of bugs and other suboptimal behavior, which were the main reasons the evolution of that software slowed down, will they now begin writing better software because of AI? Answering yes implies that the main reason for those problems was that those developers didn't have enough time to spend analyzing them or building protective harnesses. I would strongly argue that was not the case; the main reason is intellectual and personal in nature: an inability to build abstractions, to follow up on root causes (thus not acquiring the necessary knowledge), or to avoid being distracted by some new toy. In 2-5 years I expect the industry to go into panic mode, as there will be a shortage of people who can maintain the drivel that is now being created en masse. The future is bright for those with the brains; we just need to wait this out.

Please, dear developers, be as lazy as possible and use LLMs. The amount of bugs that get shipped affords me a comfortable life in opsec.

Love the article, I had a struggle with my new identity and thus had to write https://edtw.in/high-agency-engineering/ for myself, but also came to the realisation that the industry is shifting too especially for junior engineers.

Curious about how the Specialist vs Generalist theme plays out, who is going to feel it more *first* when AI gets better over time?


Do companies hire less because of AI or try to rely more on AI because they can’t afford to hire?

Tech layoffs have been happening even before LLMs.


Ah, time to drop out and shoot myself in the woods

One of the better analyses of this question, I think.

On the optimistic side: I suspect it might end up being true that software gets infused into more niches, but I'm not sure it follows that this helps on the jobs-market side. Put differently, demand for software and demand for SWEs might decouple somewhat for much of that additional software demand.


I'm mostly convinced at this point that the jobs market will only be affected temporarily.

This is really just another form of automation, speeding things up. We can now make more customized software more quickly and cheaply. The market is already realizing that fact, and demand for more performant, bespoke software at lower costs/prices is increasing.

Those who are good at understanding the primary areas of concern in software design generally, and who can communicate well, will continue to be very much in demand.


I‘m leaning more towards significant job impact.

It’s hard to tell though not just because it’s inherently uncertain where this goes but also because those closest to it are also the least likely to view it objectively.

So near impossible to find someone clued up but also not invested in a specific outcome


For some reason the article misses two important points:

1) The AI code maintenance question: who will maintain the AI-generated code?

2) The true cost of AI: once the VC/PE money runs out and companies charge full cost, what happens to vibe coding?


I think this post is a great example of a different point made in this thread. People confuse vibe-coding with llm-assisted coding all the time (no shade for you, OP). There is an implied bias that all LLM code is bad, unmaintainable, incomprehensible. That's not necessarily the case.

1) Either you, the person owning the code, or you + LLms, or just the LLMs in the future. All of them can work. And they can work better with a bit of prep work.

The latest models are very good at following instructions. So instead of "write a service that does X" you can use the tools to ask for specifics (i.e. write a modular service, that uses concept A and concept B to do Y. It should use x y z tech stack. It should use this ruleset, these conventions. Before testing run these linters and these formatters. Fix every env error before testing. etc).

That's the main difference between vibe-coding and llm-assisted coding. You get to decide what you ask for. And you get to set the acceptance criteria. The key point that non-practitioners always miss is that once a capability becomes available to these models, you can layer it on top of previous capabilities and get a better end result. Higher instruction adherence -> better specs -> longer context -> better results -> better testing -> better overall loop.
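That prep work can be as simple as a project rules file the agent reads before touching code. The filename and every rule below are illustrative, not a standard:

```markdown
# AGENTS.md (hypothetical example)

## Stack and conventions
- TypeScript, strict mode; no `any` without a justifying comment.
- Services follow the existing module layout under `src/services/`.

## Before declaring a task done
1. Run the linter and formatter; fix every reported issue.
2. Run the test suite; do not skip or delete failing tests.
3. Fix environment errors before re-running tests, never work around them.

## Acceptance criteria
- New code paths have tests.
- Public functions have doc comments.
```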

2) You are confusing the fact that some labs subsidise inference costs (in exchange for data access, usage metrics, etc.) with the true cost of inference for a given model size. You can already get a good indication of that cost today: third-party inference shops exist, and they are not subsidising costs (they have no reason to). You can do the math yourself and figure out an average cost per token for a given capability. And those open models are out; they're not going to change, so you can get the same capability tomorrow or in 10 years (and likely at lower cost, since hardware and the inference stack keep improving).
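A back-of-the-envelope version of that math, with all prices and token counts as illustrative assumptions rather than any provider's actual quotes:

```python
# Hypothetical numbers: substitute real per-token prices and measured
# token usage for your model and workload.
price_per_million_input = 0.50   # USD per 1M input tokens (assumed)
price_per_million_output = 2.00  # USD per 1M output tokens (assumed)

# A typical agent coding task (illustrative figures, not measurements).
input_tokens = 400_000   # context re-reads, file contents, tool results
output_tokens = 60_000   # generated code, reasoning, tool calls

cost = (input_tokens / 1_000_000) * price_per_million_input \
     + (output_tokens / 1_000_000) * price_per_million_output
print(f"${cost:.2f} per task")  # -> $0.32 per task
```

Whether that figure is cheap or ruinous depends entirely on how many such tasks replace how much engineer time, which is the part nobody can read off a price sheet.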


Perhaps thinking about AI generated code in terms of machine code generated by a compiler helps. Who maintains the compiled program? Nobody. If you want to make changes to it, you recompile the source.

In a similar fashion, AI generated code will be fed to another AI round and regenerated or refactored. What this also means is that in most cases nobody will care about producing code with high quality. Why bother, if the AI can refactor ("recompile") it in a few minutes?


AI assists the maintenance. A lot of posts seem to assume that once the code is committed the AIs just, what, go away? If you can write a test for a bug, it can likely be fully or partially fixed by an AI even today.
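A minimal sketch of that loop (the function and its bug are invented for illustration): pin the intended behaviour with a test that fails against the buggy version, then let the agent, or a human, iterate until it passes.

```python
def window_totals(xs, size):
    # Fixed version: the range must include the final window,
    # hence len(xs) - size + 1. The original (buggy) version used
    # range(len(xs) - size) and silently dropped the last window.
    return [sum(xs[i:i + size]) for i in range(len(xs) - size + 1)]

def test_window_totals_includes_final_window():
    # This test failed against the off-by-one version; once it is in the
    # repo, "make this test pass without breaking the others" is a task an
    # agent can grind on.
    assert window_totals([1, 2, 3, 4], 2) == [3, 5, 7]
```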

I don't know, we hired a junior developer and are about to hire another. Not sure it collapsed. I just think it's really hard to get a job across the board right now.

One thing that fucks with juniors is the expectation of paying for AI model subscriptions. If you need to know how the AI tools work, you have to learn them with your own money.

Not everyone can afford it, and then we are at the point of changing the field that was so proud about just needing a computer and access to internet to teach oneself into a subscription service.


You can get by pretty well with the ~$20/month plans for either Claude or Gemini. You don't need to be doing the $200/month ones just to get a sense of how they work.

Again, not everyone can afford it, and it becomes a hurdle. Computers are acquirable, but an extra $20 a month might not be.

And yes, that plan can get you started, but when I tested it, I managed to get 1 task done, before having to wait 4 hours.


This is why opencode is giving free access to one or two models, unlimited access.

If the AI gets so good then they shouldn’t need to pre-learn.

> Addy Osmani is a Software Engineer at Google working on Google Cloud and Gemini

Ah, there it is.


Yep, it never fails. Here's another prediction for "The next two years of software engineering": AI vendors will start using their senior devs' personal domains to publish advertising pieces, to mitigate scrutiny when such things are posted to social media.

Ahhhh, this is like that guy who works at Claude Code and runs 100 agents at the same time to replace 100 juniors. Everyone is convinced he will be the last software engineer on earth.

> Senior developers: Position yourself as the guardian of quality and complexity.

Agreed but it's not an easy charge to fulfill.


I recently started as a developer advocate - I have similar opinions to the author - junior devs have a hard time getting hired and flipping to something like devrel makes a lot of sense.

The points mentioned in the article, regarding the things to focus on, are spot on.

I honestly think middle management will be where AI causes job cuts.

In today's corporate environment, 70% of the costs are in management and admin muddlement, do we really think these "people skills" translate into anything useful in an AI economy?

The junior devs have far more hope.


I agree. The lack of discussion about replacing middle-management with AI betrays the real politics of business.

Middle-managers output exactly what LLMs do: chats, documents, summaries. Particularly working remotely. They don't even generate tickets/requirements – that's pushed to engineers and product people.


> Junior developers: Make yourself AI-proficient and versatile. Demonstrate that one junior plus AI can match a small team’s output. Use AI coding agents (Cursor/Antigravity/Claude Code/Gemini CLI) to build bigger features, but understand and explain every line if not most. Focus on skills AI can’t easily replace: communication, problem decomposition, domain knowledge. Look at adjacent roles (QA, DevRel, data analytics) as entry points. Build a portfolio, especially projects integrating AI APIs. Consider apprenticeships, internships, contracting, or open source. Don’t be “just another new grad who needs training”; be an immediately useful engineer who learns quickly.

If I were starting out today, this is basically the only advice I would listen to. There will indeed be a vacuum in the next few years because of the drastic drop in junior hiring today.


What. That’s written in a way that’s like “men writing women”. Not putting themselves in the shoes of a junior who has no context or almost no opportunities.

> be an immediately useful engineer who learns quickly

Then also nothing has really changed. This was, verbatim, the advice everybody was giving when I was a grad student almost 20 years ago.

Back then, the conclusion was to learn the frameworks du jour, even if it was unfulfilling plumbing and the knowledge had a half-life of a few weeks. You needed it to get hired, but you made your career because of all the solid theory you learned and the adaptability that knowing it gave you.

Now, the conclusion is to learn how to tickle the models du jour in the right way, even though it's intellectually braindead, unaspiring work and knowledge with a half-life of a few days. It's still the theoretical foundation that will actually make the junior become a valuable engineer.

The more I read between the lines of AI evangelists' posts like this, the more I'm convinced that expectations will return to grounded reality soon. They are new tools to help the engineer. They enable new workflows and maybe can even allow a two-digit percentage increase in speed while upholding quality. But they're in no way a revolution that will make possible "10× engineers" or considerably replace engineering positions beyond the "it doesn't really matter" area of PoCs, prototypes, one-offs, cookie-cutter solutions, etc.


And you think juniors aren't doing this? At this point, everyone in the market does more vibe coding than those outside it. The market is saturated mostly because execs are cutting jobs, not because juniors are not good.

I would like to see how things will be when using AI would require half of a devs current paycheck.

This was an amazing read! The entire thought of being so versatile as the article mentions is similar to the book Homo Deus by Harari.

In my opinion we always needed to be versatile to stand any chance of being comfortable in these insanely rapid changing times.


I mean it's pretty simple: management will take bad quality (because they don't understand the field) over having and paying more employees any day. Software engineer positions will shrink and be unrecognizable: one person expected to be doing the work of multiple departments to stay employed. People may leave the field or won't bother learning it. When the critical mass is reached, AI will be paywalled and rug pulled. Then the field evens itself out again over a long, expensive period of time for every company that fell for it, lowering the expectations back to reality.

That's another one for concluding that there's nothing new under the sun. This is the exact dynamic that happened during the offshoring hype.

Now, it's expecting senior engineers to "orchestrate" 10 coding agents, then it was expecting them to orchestrate 10 cheap developers on the other side of the world. Then, the reckoning came when those offshore developers realised that if they produced code as good as that of a "1st world" engineer, they can ask a similar salary, too, and those offshoring clients who didn't want to pay up were left with those contractors who weren't good enough to do that. This time, it will be agent pricing approaching the true costs. Both times, the breaking point is when managers realise that writing code was never the bottleneck in the first place.


This is truly the problem: you either get fired or you get to work 10x more to survive. The only question is how many of us end up in the first group and how many in the second; it's a lose-lose situation.

Exactly. Some jobs moved from database, backend, frontend, and devops to "fullstack", which means four jobs for the pay of one. People do that job, but with only 8-10 hours in a day the quality is as expected. I think overall people will try to move out of the field, no matter how much of a force multiplier AI might be. It's simply a worse trade to carry so much responsibility and burden when you can work in IT, or outside of it, in a less cognitively demanding field with set hours and expectations for the same pay (in the EU; a very hyperbolic statement, tbh). Especially when the profit you bring in dwarfs the compensation, with all the frustrations that come with knowing that and being kept down the corporate ladder.

This article reads like it was written by an AI.

Change is a constant for software engineers. It always has been. If your job is doing stuff that should be automated, either you are automating it or you are not a very good software engineer.

A few key fallacies at play here.

- Assuming a closed world: we'll do the same amount of work but with fewer people. This has never been true. As soon as you meaningfully drop the price of a unit of software (pick your favorite), demand goes up and we'll need more of them. It also opens the door to building software that previously would have been too expensive. That's why the number of software engineers has consistently increased over the years, despite a lot of stuff getting much easier over time.

- Assuming the type of work always stays the same. This too has never been true. Stuff changes over time. New tools, new frameworks, new types of software, new jobs to do. And the old ones fade away. Being a software engineer is a life of learning. Very few of us get to do the same things for decades on end.

- Assuming people know what to ask for. AIs do as you ask, which isn't necessarily what you want. The quality of what you get correlates very much with your ability to ask for it. The notion that you get a coherent bit of software in response to poorly articulated, incoherent prompts is about as realistic as getting a customer to produce coherent requirements. That never happened either. Converting customer wishes into maintainable, valuable software is still a bit of a dark art.

The bottom line: many companies don't have a lot of in house software development capacity or competence. AI doesn't really help these companies to fix that in exactly the same way that Visual Basic didn't magically turn them into software driven companies either. They'll use third party companies to get the software they need because they lack the in house competence to even ask for the right things.

Lowering the cost just means they'll raise the ambition level and ask for more/better software. The type of companies that will deliver that will be staffed with people working with AI tools to build this stuff for them. You might call these people software engineers. Demand for senior SEs will go through the roof because they deliver the best AI generated software because they know what good software looks like and what to ask for. That creates a lot of room for enterprising juniors to skill up and join the club because, as ever, there simply aren't enough seniors around. And thanks to AI, skilling up is easier than ever.

The distinction between junior and senior was always fairly shallow. I know people that were in their twenties that got labeled as senior barely out of college. Maybe on their second or third job. It was always a bit of a vanity title that because of the high demand for any kind of SEs got awarded early. AI changes nothing here. It just creates more opportunities for people to use tools to work themselves up to senior level quicker. And of course there are lots of examples of smart young people that managed to code pretty significant things and create successful startups. If you are ambitious, now is a good time to be alive.


Maybe a harsh criticism. The article seemed to be all over the place, maybe because the subject is also all over the place. I agree with everything, its just that it seemed like the same story we've been in for awhile.

Wasn't the main take away generally "study everything even more than you were, and talk/network to everybody even more than you were, and hold on. Work more more more"


The author has a bizarre idea of what a computer science degree is about. Why would it teach cloud computing or dev ops? The idea is you learn those on your own.

If that's "the idea", then clearly we need a more holistic, useful degree to replace CS as "the" software degree.

Despite what completely uninformed people may think, the field "computer science" is not about software development. It's a branch of mathematics. If you want an education in software development, those are offered by trade schools.

I didn't say it taught you software development (though it does, non-primarily), I said it was '"the" software degree'. I.e. the degree you get if you want to get into software in general - which, in reality, is what people believe is true, and reality/pragmatism is all that matters, even if you feel so superior to those people as to resort to insults.

To be clear, I did not go into CS. But I do live in this world


What I want is for universities to offer a degree in Software Engineering. That's a different field from Computer Science.

You say that belongs in a trade school? I might agree, if you think trade schools and not universities should teach electrical engineering, mechanical engineering, and chemical engineering.

But if chemical engineering belongs at a university, so does software engineering.


Many do. Though, the one I'm familiar with is basically a CS-lite degree with software specific project design and management courses.

Glad I did CS, since SE looked like it consisted of mostly group projects writing 40 pages of UML charts before implementing a CRUD app.


Plenty of schools offer software engineering degrees alongside computer science, including mine ~20 years ago.

The bigger problem when I was there was undergrads (me very much included) not understanding the difference at all when signing up.


Saying this as a software engineer that has a degree in electrical engineering - software "engineering" is definitely not the same as other engineering disciplines and definitely belongs in a trade school.

Right, because the guy sitting next to me designing a PCB for the next Raspberry Pi revision is so much more of an engineer than the other guy designing a distributed computing algorithm? It shows that you've only dealt with the trivial parts of SE. There are very complex areas in both disciplines, and as easily as I can find trivial things in SE, I can do the same for EE. Let's just not pretend it's science fiction when it's not.

Developing a distributed computing algorithm I think would squarely fall into CS. Engineering is the application of stuff like that.

Last I checked ASU does, and I’m certain many other universities do too.

My university had Electrical Engineering, Computer Engineering, Software Engineering and Computer Science degrees (in additional to all the other standard ones.)

This widely circulated claim ignores the fact that math is not science.

The degree is (should be) about CS fundamentals and not today's hotness. Maybe a "trades" diploma in CS could teach today's hotness.

Cloud computing is not some new fundamental area of computer science. It’s just virtual CPUs with networks and storage. My CS degree from 1987 is still working just fine in the cloud, because we learned about CPUs, virtualization, networks, and storage. They’re all a lot bigger and faster, with different APIs, but so what?

Devops isn’t even a thing, it’s just a philosophy for doing ops. Ops is mostly state management, observability, and designing resilient systems, and we learned about those too in 1987. Admittedly there has been a lot of progress in distributed systems theory since then, but a CS degree is still where you’ll find it.

School is typically the only time in your life that you’ll have the luxury of focusing on learning the fundamentals full time. After that, it’s a lot slower and has to be fit into the gaps.


There has to be a balance of practical skills and theory in a useful degree, and most CS curricula are built that way. It should not be all about random hot tech because that always changes. You can easily learn tech from tutorials, because the tech is simple compared to theory. Theory is also important to be able to judge the merits of different technology and software designs.

I am not sure about devops, but cloud computing likely has a lot of science behind it when done properly. Cloud platforms are no less complex to reason about than code alone. And I mean understanding and designing cloud platforms, not deploying code to them.

Why is this necessarily true?

A CS degree is there to teach you concepts and fundamentals that are the foundation of everything computing related. It doesn't generally chase after the latest fads.

Sure, but we need to update our definitions of concepts/fundamentals. A lot of this stuff has its own established theory and has been a core primitive for software engineering for many years.

For example, the primitives of cloud computing are largely explained by papers published by Amazon, Google, and others in the '00s (Dynamo, Bigtable, etc.). If you want to explore massively parallel computation, container orchestration, etc., it would be natural to do that on a public cloud, although many of the platform-specific details are of course incidental.

Part of the story here is that the scale of computing has expanded enormously. The DB class I took in grad school was missing lots of interesting puzzle pieces around replication, consistency, storage formats, etc. There was a heavy focus on relational algebra and normalization forms, which is just... far from a complete treatment of the necessary topics.

We need to extend our curricula beyond the theory required to execute binaries on individual desktops.


This is mostly software engineering not computer science though. That is but a small sub-section of computer science.

I just don't see the distinction. Looking at it from the other direction: most CS degrees will have you spend a lot of time looking at assembly language, computer architecture, and *nix tools. But none of these are mathematical inevitabilities - they're just a core part of the foundations of software engineering.

However, in the decades since these curricula were established, it's clear that the foundation has expanded. Understanding how containerization works, how k8s and friends work, etc. is just as important today.


Containerization would be covered in a lecture on OS Concepts. A CS degree isn't to teach you about using containerization. Take a course specific to that.

I do agree that the scale has expanded a lot. But this is true with any other fields. Does that mean that you need to learn everything? Well at some point it becomes unfeasible.

See doctors for example, you learn a bit of everything. But then if you want to specialise, you choose one.


One option that didn't seem to be discussed in TFA is turning away from AI.

There's an implicit assumption in the article that the coding models are here to stay in development. It's possible that assumption is incorrect for multiple reasons.

Maybe (as some research indicates) the models are as good as they are going to get. They're always going to be a cross between a chipper stochastic parrot and that ego inflated junior dev that refuses to admit a mistake. Maybe when the real (non-subsidized) economics present themselves, the benefit isn't there.

Perhaps the industry segments itself to a degree. There's a big difference in tolerance for errors in a cat fart app and a nuclear cooling system. I can see a role for certified 100% AI free development. Maybe vibe coders go in one direction, with lower quality output but rapid TTM, but a segment of more highly skilled developers focus on AI free development.

I also think it's possible that over time the AI hyper-productivity stuff is revealed to be mostly a mirage. My personal experience and a few studies seem to indicate this. The purported productivity boost is a result of confirmation bias and ridiculous metrics (like LOC generated) that have little to do with actual value creation. When the mirage fades, companies realize they are stuck with heaps of AI slop and no technical talent able to deal with it. A bitter lesson indeed.

Since we're reading tea leaves, I think the most likely outcome is that the massive central models for code generation fade due to enormous costs and increased endpoint device capabilities. The past 50 years have shown us clearly that computing will always distribute, and centralized mainframe style compute gets pushed down to powerful local devices.

I think it settles at an improved intellisense running locally. The real value of the "better search engine" that LLMs hold today reduces as hard economics drive up subscription fees and content is manipulated by sponsors (same thing that happened to the Google search results).

For end users, I think the models get shoved into a box to do things they're really good at, like giving a much more intuitive human-computer interface, but structured data from that is handed off to a human developer to reason about; MCP will expand and become the glue.

I think that over time market forces will balance between AI and human created content, with a premium placed on the latter. McDonalds vs a 5 star steakhouse.


Assuming AI is at all useful, it's likely to be used for safety-critical software development. Safety-critical processes aren't likely to care much about LLM involvement, much as they already don't generally care about the competence of those doing the work.

>Maybe (as some research indicates) the models are as good as they are going to get. They're always going to be a cross between a chipper stochastic parrot and that ego inflated junior dev that refuses to admit a mistake. Maybe when the real (non-subsidized) economics present themselves, the benefit isn't there.

I'd put my money on this. From my understanding of LLMs, they are basically mashing words together via Markov chains and have added a little bit of subject classification with attention, a little bit of short-term memory, and enough grammar to lay things out correctly. They don't understand anything they are saying; they are not learning facts and trying to build connections between them; they are not learning from their conversations with people. They aren't even running the equivalent of a game loop where they could think things over. I would expect something we're trying to call an AI to call you up sometimes and ask you questions. Trillions of dollars have gotten us this far; how much further can they actually take us?
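For what it's worth, the "mashing words together via Markov chains" picture can be made concrete with a toy bigram generator. This is a sketch of the mental model the parent describes, not how transformers actually work (they condition on much longer context via attention), but the "pick the next token based on what came before" shape is the same:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=10):
    """Walk the chain: each next word depends only on the current word."""
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word the model repeats the pattern"
model = build_bigram_model(corpus)
print(generate(model, "the", n=5))
```

The output is locally plausible and globally meaningless, which is roughly the failure mode being claimed; how much attention and scale change that picture is the whole debate.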

I want my actual AI personal assistant that I have to coerce somehow into doing something for me like an emo teen.



