MontyCarloHall's comments

Would reduced heating demand from warmer winters offset this? Global carbon emissions from heating are approximately 4 times those from cooling [0].

(Of course, the ideal scenario is not that rising carbon emissions from increased cooling get offset by lower emissions from decreased heating, but rather that we transition to abundant carbon-free energy from solar, wind, nuclear, etc. and are able to keep our houses as cool as we want in the summer and as warm as we want in the winter without any environmental consequences.)

[0] https://ourworldindata.org/grapher/co2-heating-cooling


I ran into a similar issue years ago, where base infrastructure occupied the lion's share of the container size, with numbers much like those in the article:

   Ubuntu base      ~29 MB compressed
   PyTorch + CUDA   7–13 GB
   NVIDIA NGC       4.5+ GB compressed
The easy solution that worked for us was to bake all of these into a single base container and require all production containers built within the company to use that base. We then preloaded this base container onto our cloud VM disk images, so that pulling a model container only needed to download comparatively tiny layers for model code/weights/etc. As a side benefit, this forced all production containers to stay up-to-date, since we regularly updated the base container, which triggered automatic rebuilds of all derived containers.
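
Concretely, the pattern looked something like this (a minimal sketch; the NGC tag, internal registry name, and file paths are hypothetical placeholders, not our actual setup):

    # base.Dockerfile -- rebuilt regularly by the infra team
    FROM nvcr.io/nvidia/pytorch:24.01-py3     # NGC image already bundling CUDA + PyTorch
    RUN pip install --no-cache-dir -r common-requirements.txt
    # pushed as e.g. registry.internal/ml-base:2024-01,
    # then preloaded onto the cloud VM disk images

    # app.Dockerfile -- what each team builds; only these thin
    # layers need to be pulled at deploy time
    FROM registry.internal/ml-base:2024-01
    COPY model_code/ /app/
    COPY weights/ /app/weights/
    CMD ["python", "/app/serve.py"]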


That approach works really well when you have a stable shared base image.

Where it starts to get harder is when you have multiple base stacks (different CUDA versions, frameworks, etc.) or when you need to update them frequently. You end up with lots of slightly different multi-GB bases.

Chunked images keep the benefit you mentioned (we still cache heavily on the nodes) but the caching happens at a finer granularity. That makes it much more tolerant to small differences between images and to frequent updates, since unchanged chunks can still be reused.
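
To make the finer granularity concrete, here's a toy sketch of chunk-level caching (illustrative only; the fixed chunk size and helper names are my own simplification of how chunked image formats generally work, many of which use content-defined boundaries instead):

    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed-size chunks for illustration

    def chunk_ids(blob: bytes) -> list[str]:
        """Content-address each chunk of an image layer by its SHA-256."""
        return [hashlib.sha256(blob[o:o + CHUNK_SIZE]).hexdigest()
                for o in range(0, len(blob), CHUNK_SIZE)]

    def chunks_to_fetch(blob: bytes, node_cache: set[str]) -> list[str]:
        """Two images differing by a few files share most chunks, so only
        chunks missing from the node's cache go over the network."""
        return [cid for cid in chunk_ids(blob) if cid not in node_cache]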


I'm willing to bet you don't full-on YOLO vibecode like the lead Claude Code developer, who runs 10 Claude Code sessions in parallel to push 259 pull requests modifying >40k lines of code in a month [0]. There is zero chance any of that code was rigorously reviewed.

I use Claude Code almost every day [1], and when used properly (i.e. with manual oversight), it's an amazing productivity booster. The issue is when it's used to produce far more code than can be rigorously reviewed.

[0] https://www.reddit.com/r/ClaudeAI/comments/1px44q0/claude_co...

[1] https://news.ycombinator.com/item?id=45511128


>Waymo benefits from Google's unparalleled geospatial data.

How much of Waymo's training data is based on LIDAR mapping versus satellite/aerial/Street View imagery? Before Waymo deploys in a new city, it sends a huge fleet of cars that spends months driving, completely supervised, presumably to construct a detailed LIDAR map of the city. The fact that this needs to happen suggests Google's geospatial data moat is not as wide as it seems.

If LIDAR becomes cheap, you could imagine other car manufacturers adding it to cars, initially and ostensibly to help with L2 driver aids, but with the ulterior motive of building a continuously updated map of the roads. If LIDAR were cheap enough to be added to every new Toyota or Ford as an afterthought, it would generate a hell of a lot more continuous mapping data than Waymo will ever have.


> Before Waymo deploys in a new city, it sends a huge fleet of cars that spends months driving, completely supervised, presumably to construct a detailed LIDAR map of the city.

Not entirely true. From their recent "road trips" last year, the trend is that they deploy fewer than 10 cars in a city for a few weeks (3-4 weeks, from what I recall) for mapping and validation. Then they come back after a few months to set up infrastructure for ride hailing (depot, charging, maintenance, etc.) and start service.


I am curious how much Claude Code is used to develop Anthropic's backend infrastructure, since that's a true feat of engineering where the real magic happens.


Isn't the main problem with pensions today the dramatic increase in life expectancy post-retirement? They were never intended to support decades of retirement: in the old days, you retired at 65 and there was a good chance you'd be dead by 70, at most 75. (When Social Security was established in the 1930s, life expectancy was only 61.) Nowadays, there's a good chance you'll live beyond 90, with expenses increasing disproportionately towards end-of-life. Combine that with a shrinking birthrate, and no feasible increase in ROI/worker contributions can sustain pension-funded retirements of 25+ years.


Using overall life expectancy here is misleading, as it includes the risk of childhood mortality. You have to look at life expectancy at a given age. According to the SSA's life tables [0], life expectancy for men at 65 in 1930 and 1940 was about 12 years. In 2020, it was about 17. A significant increase, but not nearly as extreme as you're saying.

[0] https://www.ssa.gov/oact/NOTES/pdf_studies/study120.pdf


In 1930, a person who started paying into the pension at 30 had at that point a remaining life expectancy of 37 years, i.e. they would benefit from the pension for 2 years. By 2020, remaining life expectancy at age 30 had risen to 48 years, which gives them 13 years after retirement, 6.5 times as long. Assuming linearity, the average life expectancy after retirement across the years you pay into your pension between 30 and 65 would be about 7 years in 1930 and 15 in 2020.
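
Spelled out as back-of-the-envelope arithmetic (a sketch; the helper is purely illustrative, using the SSA at-65 figures quoted in the parent comment):

    RETIRE = 65

    def years_collecting(age: int, remaining_years: float) -> float:
        """Expected years of pension, given remaining life expectancy at `age`."""
        return age + remaining_years - RETIRE

    y1930 = years_collecting(30, 37)   # 30 + 37 - 65 = 2 years
    y2020 = years_collecting(30, 48)   # 30 + 48 - 65 = 13 years
    # Assuming expectancy varies linearly between ages 30 and 65
    # (12 and 17 remaining years at 65, per the SSA tables above),
    # the average over the contribution years is the mean of the endpoints:
    avg1930 = (y1930 + 12) / 2         # (2 + 12) / 2 = 7 years
    avg2020 = (y2020 + 17) / 2         # (13 + 17) / 2 = 15 years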


No, the problem was that increased contributions [which would've ensured solvency with lower market returns and/or extended life expectancy] would've cost more sooner, and no one wants to pay more. So we "extend and pretend" using generous return assumptions, and when those assumptions are not borne out in reality, we simply shrug. It's similar to the $39T in US Treasuries that someone will need to pay back: only the top 40% of Americans have enough income to carry a federal income tax liability, so who is going to pay this debt back? Different sides of the same coin. What is a debt, after all, besides a promise to pay?


At age 65, life expectancy hasn't gone up as much: only around 6 years more today vs. 1930. So it's still a factor, but not as dramatic as you make it seem.


What was the value in 1930, though? An increase of only 6 years could in fact mean an over-100% increase in life expectancy at age 65. There's a reason full Social Security now starts at 67 rather than 65, and why you are incentivized to take Social Security at 70 if possible.


Ultimately, the way we will get out of this is inflation, except that so many pensions have COLAs.

Years spent in retirement have roughly doubled, while the worker-to-retiree pyramid has shrunk from 16:1 in 1950 to 3:1 today.

Means testing and retirement-age increases also provoke voter and worker revolt.

Going to be real hard to keep this on the rails.


>I'd expect to see movements to force government regulation of AI.

I agree. It will be an interesting debate to watch play out, because a) lots of end users love using AI and will be loath to give it up, and b) advances in compute will almost certainly allow us to run current frontier models (or better) locally on our laptops and phones, which means profits would no longer accrue to a few massive AI labs. It would also make regulating the technology a lot trickier, since kneecapping the AI labs would no longer effectively regulate it.


That would be an interesting scenario.


>Life and business is not about profit. It’s about bettering the lives of people.

This mentality results in the grass at the Taj Mahal being cut with hand tools [0], or in Japan having a whole category of "useless jobs" like elevator operators [1, 2] that exist simply to provide employment. Taken to an extreme, this is the make-work version of the broken window fallacy: if I smash a lot of windows, the local glazier gets paid handsomely, at the expense of everyone who had to pay for window replacements.

[0] https://www.youtube.com/shorts/wAH8jj9cm_o

[1] https://www.taipeitimes.com/News/editorials/archives/2015/06...

[2] http://www.ageekinjapan.com/elevator-operator/


Anything taken to an extreme is extreme; that includes capitalism.

We know that turning everyone and everything into a product has its own set of negative outcomes. Trying to play this off as a binary situation is a form of extremism in itself.

There is already a term, "bullshit jobs" [1], for the phenomenon in service economies like the US where huge numbers of people are employed as part of company bureaucracy rather than as part of the most efficient outcome.

Simply put, trying to run a society like a business is going to leave such a large number of people unhappy that you get a revolution that tries to burn everything down and leads to a lot of death.

[1] https://en.wikipedia.org/wiki/Bullshit_Jobs


Are those people cutting the grass/operating the elevators happier/unhappier than they would be otherwise? (I don't know, but perhaps you do). You seem to be strongly implying that this is in some way "wrong" rather than a subjectively different view of the purpose of human existence - for what reason? (I'll ignore the glazier example as it seems quite extreme, and also comes with more obvious/specific "victims").


>Are those people cutting the grass/operating the elevators happier/unhappier than they would be otherwise?

There are numerous studies showing that menial labor leads to poor mental health. Perhaps these people employed as make-work automatons are happier than they would be if they had no employment whatsoever and were destitute on the street, but those are not the only two alternatives.

>I'll ignore the glazier example as it seems quite extreme, and also comes with more obvious/specific "victims"

The "victims" at the Taj Mahal/department store are the visitors/customers who have to pay slightly higher prices as a result. While not as extreme as the glazier in the broken window fallacy, the grass cutters/elevator operators exist on the exact same spectrum.


I think what leads to poor mental health is varied - poverty is definitely one cause, presumably one which is lessened in this case. I completely agree with you that there are more than two alternatives, but society seems unwilling/unable to consider any of the more radical.

You could frame those visitors to the Taj Mahal as victims, but that takes quite a narrow and short-term view of value to them. Would the Taj Mahal be as pleasant a place to visit if it were in an even more unequal and precarious society than it is? We all pay for things that don't directly benefit us through taxation (usually). The childless pay for schools, the car-less pay for roads, but we benefit from the society that having them creates. It seems hard to say that those visitors to the Taj Mahal would not benefit from being in a more prosperous and sustainable society.


>>Brynjolfsson found a 13% relative decline in employment for early-career workers (ages 22-25) in AI-exposed occupations since late 2022. For young software developers specifically, employment fell almost 20% from its 2022 peak

>This is confounding AI-exposed white collar occupations with occupations that were overrepresented with extended remote work.

Yup. If you look at Brynjolfsson's actual publication [0], you'll see that the precipitous decline in hiring juniors in "AI-exposed occupations" starts in late 2022. That is when ChatGPT first came out, far too early to see any effects of AI on the job market.

You know what else happened in late 2022? The end of ZIRP and the Section 174 amortization changes, which immediately put a stop to the frantic post-COVID overhiring of bootcamp juniors just to pad headcount and signal growth. The problem with Brynjolfsson's paper is that it doesn't effectively deconvolve "AI-exposed occupations" from "ZIRP/Section 174-exposed occupations," which overlap significantly.

[0] https://digitaleconomy.stanford.edu/app/uploads/2025/11/Cana...


Addendum to the counterpoint: why haven't those SotA gen-AI companies become the most productive software companies on earth and released better, cheaper competitors to all currently popular software?

People always gripe about the poor quality of software staples like Microsoft Office or GitHub or Slack. Why hasn't OpenAI or Anthropic released a superior office suite or code hosting platform or enterprise chat? It would be both a huge cash cow and the best possible advertising that AI-facilitated software development is truly the real-deal 10x force multiplier they claim.

If someone invents a special shovel that can magically identify ore deposits 100% of the time, they aren't going to sell it with the rest of the shovelmongers. They're going to keep it to themselves and actually mine gold.


Generating code isn't the bottleneck for selling software.

Those apps aren't that bad; it's mostly internet people complaining about things like React.

IMO, "higher quality" isn't a way to sell software.


Because it's not their business to sell a chat app? "Our company is the frontier lab for AI models, oh and btw we also offer SlackClone, sign up for enterprise please." Their job is selling shovels: really good, increasingly expensive shovels that keep getting better. Let others waste their time looking for gold.


But Google sells the productivity apps and also does the exact same things OpenAI does.

If their work on Gemini is this leading world-class stuff, why aren't Google's software products suddenly becoming better?

Was the most recent release of Android demonstrative of a significant uptick in product iteration? Shouldn’t we suddenly be seeing Android pulling far ahead of iOS in an unusually rapid fashion because Apple doesn’t have access to the same quality of shovels?

What about Microsoft Windows 11? Isn’t Microsoft a major OpenAI investor with full access to their latest and greatest?

Why aren’t we seeing release schedules accelerate or feature lists growing at a faster rate?

Supposedly we are selling a lot of shovels here but I don’t see a lot of holes being dug.


Android is a poor example here, especially given how more and more features have moved from the OS into Play Services. Google ships plenty of features without even an OS update; that's how Android has always worked. Even for the OS itself, Pixel feature drops happen every quarter. And AOSP is only a base for others to build on; have you seen how fast Samsung and others push updates and countless features? It's not comparable to iOS at all.


Okay, I agree with your premise, but can you point to some tangible acceleration in innovation?

Are these Google Play features coming out faster than they used to in a way that coincides with AI adoption?


Not really, no. It's pretty much the same pace as before. I wanted to point out that Android is not playing catch-up to iOS in any way in features or quality; it's the opposite. Your comment asked why Google isn't catching up to Apple with AI's help. iOS, meanwhile, has been regressing since 18 and is a mess now on 26.


Yes, to clarify, I'm not making any claim on Android versus Apple, which one is better, or who is catching up to whom. Which operating system is ahead or better is essentially irrelevant to the point I'm making.

My main claim revolves around your second sentence: Google is a primary source of AI research and has access to frontier models before all of its customers, especially competitors like Apple, who are clearly behind in the AI race and/or not participating in the same way.

In theory, if AI is transformational to developer velocity, Android and all other products under Google's umbrella should be moving faster than competitors that lack early access and preferentially priced AI infrastructure, and they should be iterating clearly faster and better than they did prior to ~2022-2024.

To me, the biggest argument for an AI bubble burst is that companies like Meta and Google won’t actually be able to show their prospective customers that their own workflows have benefitted. Google can’t say “we now ship major [Google Product] features n% faster/better” because there’s no evidence of it. They might make the claim but nobody will believe them.

Major corporations will try the products, start spending $20-200 per engineer per month extra, they’ll see productivity gains of <5% and maybe even see code quality drop, then they’ll decide that the experiment was a bust.

Essentially, this experience will be the most common one: https://www.reddit.com/r/ExperiencedDevs/comments/1r6olcv/an...


This I do agree with. All I've seen is reduced headcounts and people forced to take on additional roles.


But they are marketing their AI as replacing all software engineers. Their CEO can’t stop saying it. According to them the cost of producing software is now just the cost of tokens to generate it.

They have the special knowledge needed to leverage AI to clone (and even improve on) huge-revenue, high-margin businesses. If their claims about the abilities of LLMs are accurate, it would be foolish to just leave that on the table.

It would also prove the power of their LLM product as truly disruptive. It would be amazing marketing!


They care about money, and they are making tons of investor money doing what they are doing; there's no incentive to pivot if it would just turn investor money into consumer money.


Their business is making money. If they can build money printing machines, they're not going to refuse to use them because that's "not their business".

Do you really think they would be out donating trillions of dollars to other companies out of the goodness of their hearts, instead of just bankrupting everyone in the software industry if they could?


Huh? What kind of question is that? Who would waste the opportunity to win the AI race to become another Jira vendor? Everything has an opportunity cost. Didn't you already learn that?


Isn't that point kind of the counterpoint to the AI-first narrative? With standard, human-driven operations, what you say about opportunity costs is true. But we are told that AI will replace humans, which essentially means the opportunity cost reduces to cash. Then the question of why an AI lab doesn't start a SaaS fully managed by AI becomes ever more interesting. Maybe because it's not that simple. Hence it's not that easy in other companies, either, to just replace devs, engineers, and so on with AI.


They could always help with some OSS software’s list of bugs and issues.


Waste? They could become both an AI race winner AND a disruptive Jira vendor. Yet they don't. Why? Being a successful Jira vendor would prove their point that software engineers are now obsolete. Why don't they do that already?


Then why are they letting their models write browsers and compilers?


>Why hasn't OpenAI or Anthropic released a superior office suite or code hosting platform or enterprise chat?

My guess is two-fold. One, they are specialized in AI. Two, building another Anthropic is a big moat, and they like to keep it big versus what you could build with it.


Why aren't we in the year of the Linux desktop? It's free and arguably as good as, or better than, Windows.

I think in the modern world people would absolutely sell the special shovel, because even if you have a better product, that doesn't mean people will use it. You need to have a much better product for a long time for that to happen, and being much better than the competition is hard.


Anthropic appears to have realized before OpenAI that code gen was an important enough market to specialize in.

For now, though, building smarter models and general integration tooling is a better use of model companies' capital.

Once/if performance gains plateau, expect they'll pivot right quick to in-house software factories for easily cloneable, large-TAM product spaces, then spin off those products if they're successful.


100% agreed. When/if that pivot happens, it will be the sign that gen-AI is truly disrupting the software market in a profound way. "You're using the model wrong/you're not using the latest model" is an oft-repeated argument against AI skeptics; nobody knows how to use the latest models better than their developers.


For the time being, though, they're going to build their in-house software with Electron, because building native apps is hard.


They're still training up using all of our extensive feedback to improve software architecture. Maybe later this year.

