
There's no limit to the algorithms. People don't understand yet. They can learn the whole universe with a big enough compute cluster. We built a generalizable learning machine.

the question is will we experience resource constraints before we get there? what if the step up to post-scarcity is gated by a compute level just out of our reach?

human ingenuity will solve this

Or we'll have ecological collapse.

NFTs will solve this

There are limits to algorithms. AI won't solve the halting problem nor will it solve EXPTIME problems in polynomial time.
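
To make that concrete, here is the classic diagonalization argument as a minimal Python sketch. The `halts` oracle here is hypothetical; the whole point is that no implementation of it, AI or otherwise, can exist in general.

    # Hypothetical oracle: returns True iff program(arg) eventually halts.
    # Assume, for contradiction, that some implementation exists.
    def halts(program, arg) -> bool:
        raise NotImplementedError("no general algorithm (or AI) can implement this")

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:      # loop forever
                pass
        else:
            return           # halt immediately

    # diagonal(diagonal) is a contradiction either way: if halts says it halts,
    # it loops forever; if halts says it loops, it halts. So no correct `halts`
    # can exist, regardless of how much compute you throw at it.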

Not sure if this is satire.

Edit: What we have built is a natural language interface to existing, textually recorded, information. Transformers cannot learn the whole universe because the universe has not yet been recorded into text.


Transformers operate on images and a variety of sensor data. They can also operate completely on non-textual inputs and outputs. I don't know what the ceiling on their capabilities is, but the complaint that they only operate on text seems just obviously wrong. There are numerous examples but one is meteorological forecasting which takes in a variety of time series sensor inputs and outputs e.g. time-series temperature maps. https://www.nature.com/articles/s41598-025-07897-4
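
As a concrete illustration that nothing in the architecture requires text, here is a minimal PyTorch sketch with made-up shapes (not the model from the linked paper): project the sensor channels into the model dimension and let attention operate on those embeddings directly.

    # Minimal sketch, not the paper's model: a transformer over raw sensor
    # readings, with no text anywhere in the pipeline. Shapes are made up.
    import torch
    import torch.nn as nn

    class SensorForecaster(nn.Module):
        def __init__(self, n_channels=8, d_model=64):
            super().__init__()
            self.embed = nn.Linear(n_channels, d_model)   # sensor channels -> "tokens"
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, n_channels)    # predict the next readings

        def forward(self, x):                             # x: (batch, time, channels)
            h = self.encoder(self.embed(x))
            return self.head(h[:, -1, :])                 # forecast from the last step

    model = SensorForecaster()
    past = torch.randn(2, 48, 8)      # 2 stations, 48 hourly readings, 8 sensors
    print(model(past).shape)          # torch.Size([2, 8])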

Based on a glance at their other comments: not satire.

100% agreed. Sadly, lots of people out there with the "trust me bro, just need more compute" mindset. Hopefully we don't consume all the planet's resources trying.

I reevaluated my priors long ago when I saw that scaling laws show no sign of stopping, no sign of plateau.

Strangely some people on HN seem to desperately cling to the notion that it's all going to come to a halt. This is unscientific. What evidence do you have - any evidence - that the scaling laws are due to come to an end?


All the curves have been levelling off as expected. Not really sure what you're talking about.

They have not, every successful pre-train as of late has had performance increases greater than what the scaling laws predict.

Those gains are architecture-based, data-quality-based, etc. Scaling laws only relate to data volume and compute, holding other factors constant.
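
For reference, the "scaling law" being held constant against is a fitted curve over parameters and tokens only. A sketch of the Chinchilla-style parametric form is below; the constants are roughly the published fits, so treat them as illustrative rather than authoritative.

    # Chinchilla-style parametric loss: L(N, D) = E + A / N**alpha + B / D**beta
    # N = parameter count, D = training tokens. Constants are approximately the
    # published Chinchilla fits; treat them as illustrative, not authoritative.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        return E + A / n_params**alpha + B / n_tokens**beta

    # Holding architecture and data quality fixed, more compute only moves you
    # along this curve; better architectures or data shift the curve itself.
    print(predicted_loss(70e9, 1.4e12))   # roughly Chinchilla-scale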

I’d like to see something that indicates models are getting better without the need for more training data. I would expect most gains are coming from more and better labeled data. We’re racing towards a complete encyclopedia of human knowledge. If we get there that’s only a drop in the bucket of all knowable things.

Bro the planet is literally experiencing a climate disaster and you think the solution is to create more systems that are misaligned with the planet's ecosystem for humans?

I guess the great filter is a real thing and not just a thought experiment.


I assure you that voluntary meat consumption because "taste buds go brr" is a much bigger problem than AI that results in actual productivity gains (and potentially solves the very climate crisis you complain about).

Completely agree. Meat should be priced to include externalities. People can get used to beans. Beans are great!

I suspect it's not that people don't see the progress; it's that they don't fully trust laws that aren't truly backed by physics, the way transistor scaling was. Empirically, we see that scaling works and continues to work.


Why should we have strong priors in either direction? Maybe it will keep scaling for decades like Moore's law. Maybe not.

The issue people have isn't some interpretation of scaling laws, it's whether the planet's ecology is going to be able to sustain this endeavour.

I shouldn’t have to say this out loud, but if the environment collapses, we will die, and no amount of “just a bit more scaling bro, just think of the gains” will matter.


People's voluntary dietary choices cause far more suffering and ecological damage than AI, and for much less return or economic output. But you tell people to switch to plant based foods and they lose their shit.

Yes. There's more than one thing that needs to change if we're going to make it through this.

AFAIK the data does not need to be text.

Well, diffusion models are trained unsupervised on raw images. I don't know how they train multi-modal LLMs on images, but yes, obviously they are consuming other media than just text. I don't think, though I would be happy to be corrected, that models glean much of their "knowledge" from non-textual training data.

you couldn't be more wrong

Please tell me more. When I ask an LLM a question, and get a text response, can that response incorporate non-textual information from visual training data?

It’s more than likely not.

Poe's (c)law?

Poe’s (C)law: The more absurd AI-generated content becomes, the more likely people are to believe it is real.

funny thing about this tweet is the founder still couldn't stop herself from name-dropping MIT

Look at this series of tweets from her:

> One interesting observation I’ve noticed is a lot of top founders did oddly strong at math from a young age.

https://x.com/kocalars/status/2027076198002553159

Nauseating.


Much like their SOC audits, her time at MIT was also incomplete. Still doesn't stop them from cosplaying as a grad though!

Where did my standard of living go? Couldn't possibly have anything to do with imported labor working around the clock under the threat of being kicked out of the country.

For tech jobs specifically? Compensation has been increasing since the turn of the millennium, what standard of living do you mean? If you mean housing, that's due mainly to NIMBYism from native labor buying and owning houses, especially before the tech boom, not imported labor.

[flagged]


Supply and demand is fake when it suggests something I don't like, what's so hard to understand?

That's not really what they said though, they said their quality of life was going down due to visa holders, which I have not seen any proof of.

Cheap labour producing goods for the native population at low costs should increase your standard of living, no? It makes the products you buy cheaper.

By your logic, if you were the only person in the country, you'd live like a king.


Companies are importing labor so they can avoid paying competitive wages to native workers. If you need to hire people from other countries, they should have the same pay and protections as everyone else.

Cheaper goods in exchange for losing your well-paying job is an awful deal for the one who lost the job.

That's way too naive: prices never go down, the owner pockets the difference, you pay the same, and once they come to your industry you have more competition.

Tbf, if I were the only person in the country, no one would stop me from being the actual king.

By your logic, slavery was one of the finest economic policies. Cheap labour, how about free labour? Have we thought of that? Everything would just be free.

In the real world, the evidence is obvious: average productivity/wages drop, incentive to invest in labour-saving technology disappears, and you get multiple decades of stagnation. Every country which had unlimited, unfree labour has had decades of slow growth as a result.

Income growth in the working-age population in the US since 1990 has been about the same as in Japan, a country widely regarded as being on the verge of economic collapse. US per capita income is probably 20-30% lower than it would otherwise be from first-order effects of immigration alone, and likely much more with second-order effects. Under other circumstances, with the economic policy seen elsewhere, the US economy would be growing 7%/year now (and of course, the answer for Japan's ills is apparently, you guessed it, lots of immigration).

China is seeing secular reductions in production costs because of capital investment, not low wages. The peculiarly statist notion of American capitalists that the route to economic supremacy was large numbers of illiterate Guatemalans should go down as not only an economic failure but a moral one (equally of H1B).


Basically YC + MIT background is a license to raise infinite capital. So they just needed to check some revenue boxes etc.

I'm willing to bet an intelligent LLM with a dataset and a pandas stats package could outperform this model by running its own experiments and making predictions
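
Something as simple as the sketch below, where everything is a placeholder: the dataset, the columns, and the loop in which the LLM decides which experiment to try next.

    # Hypothetical sketch: a plain pandas + scikit-learn baseline of the sort an
    # LLM could run and iterate on. "data.csv" and the "target" column are
    # placeholders; the LLM's role would be choosing the next experiment.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("data.csv")                         # placeholder dataset
    X, y = df.drop(columns=["target"]), df["target"]     # placeholder target column

    baseline = LogisticRegression(max_iter=1000)
    print(cross_val_score(baseline, X, y, cv=5).mean())  # 5-fold accuracy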

Instead of being willing to bet, you can do it yourself and prove it. It's not like there is a ceiling on doing what you are proposing. I am willing to bet that you are wrong.

This isn't the problem; the problem is that open source software became a status marker/way to build a company.


Similar to Delve, this guy has almost no work experience. You have to wonder if YC and the cult of extremely young founders are causing instability issues in society at large.


It's interesting to see how the landscape changes when the folks upstream won't let you offload responsibility. Litellm's client list includes people who know better.


Welcome to the new era, where programming is neither a skill nor a trade, but a task to be automated away by anyone with a paid subscription.


a lot of software isn't that important so it's fine, but some actually is important, especially with a brand name slapped on it that people will trust


The industry needs to step up and plant a flag for professional certification of proper software engineering. Really hard exams, etc.


I can't even imagine what these exams would look like. The entire profession seems to boil down to making the appropriate tradeoffs for your specific application in your specific domain using your specific tech stack. There's almost nothing that you always should or shouldn't do.


All engineering professions are like that. NCEES has been licensing Professional Engineers for over a hundred years. The only thing stopping CS/SE is an unwillingness to submit to anything resembling oversight.


All software runs on somebody's hardware. Ultimately even an utterly benign program like `cowsay` could be backdoored to upload your ssh keys somewhere.


https://xkcd.com/2347/ , but with `fortune -a` and `cowsay` instead of imagemagick


It’s a flex now. But there are still many people doing it for the love of the game.


Wow this is in a lot of software


Yep, DSPy and CrewAI have direct dependencies on it. DSPy uses it as its primary library for calling upstream LLM providers and CrewAI falls back to it I believe if the OpenAI, Anthropic, etc. SDKs aren't available.
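
For context on why it shows up everywhere: LiteLLM exposes one OpenAI-style completion call that routes to many providers based on the model string, so libraries lean on it instead of binding to each vendor SDK. A minimal sketch (the model names are examples, and the relevant provider API keys are assumed to be set in the environment):

    # Minimal sketch of LiteLLM's unified interface: the same call routes to
    # different providers based on the model string. Keys come from env vars.
    import litellm

    for model in ["gpt-4o-mini", "anthropic/claude-3-haiku-20240307"]:
        resp = litellm.completion(
            model=model,
            messages=[{"role": "user", "content": "Say hi in five words."}],
        )
        print(model, "->", resp.choices[0].message.content)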


People underestimate how specific the genetic pools are that relate to high intelligence


meaning what exactly?


Just know that a lot of startups with all-star founders are closer to Delve than not.

It's mostly marketing: "look at this MIT genius who noticed something about legacy xyz industry that no one else did"

Truth is, venture funds are allocating a limited pie of what is really society's capital to people who don't deserve it.


It's almost a cycle.

Something bad happens because of lack of regulation -> People strive for regulation -> Government actually regulates and sets some norms/procedures -> system works for a while -> Then someone takes the same idea and molds it into something else to bypass the regulation -> they get promoted because they are "clever" and get rewarded -> Then something bad happens as the tool is used by the public.

From Prediction markets to Buy now, pay later to Delve to so many other things.

Is there a name for this particular phenomenon? It just keeps on repeating in multiple industries.


> Truth is venture funds are allocating a limited pie

This pie does not seem that limited recently.


there are only a certain number of Series A rounds per year

