There's no limit to the algorithms. People don't understand yet. They can learn the whole universe with a big enough compute cluster. We built a generalizable learning machine.
The question is: will we experience resource constraints before we get there? What if the step up to post-scarcity is gated by a compute level just out of our reach?
Edit: What we have built is a natural language interface to existing, textually recorded, information. Transformers cannot learn the whole universe because the universe has not yet been recorded into text.
Transformers operate on images and a variety of sensor data. They can also operate completely on non-textual inputs and outputs. I don't know what the ceiling on their capabilities is, but the complaint that they only operate on text seems just obviously wrong. There are numerous examples but one is meteorological forecasting which takes in a variety of time series sensor inputs and outputs e.g. time-series temperature maps. https://www.nature.com/articles/s41598-025-07897-4
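To make the point concrete, here's a minimal sketch (illustrative only; all numbers are invented) of the core transformer operation, self-attention, applied directly to raw sensor vectors with no text or tokenizer anywhere in the loop:

```python
import numpy as np

# Hedged sketch: one self-attention layer over raw sensor readings.
# The "tokens" are hourly measurement vectors, not words.
rng = np.random.default_rng(0)

T, d = 24, 8                        # 24 hourly readings, 8 sensor features each
x = rng.normal(size=(T, d))         # synthetic sensor data

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv

scores = q @ k.T / np.sqrt(d)       # pairwise attention between time steps
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v                   # contextualized sensor representations

print(out.shape)  # (24, 8): sensor vectors in, sensor vectors out
```

Nothing in the architecture cares whether the input vectors came from a text embedding table or a weather station; that's why the same machinery works for forecasting models like the one linked above.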
100% agreed. Sadly, lots of people out there with the "trust me bro, just need more compute". Hopefully we don't consume all the planet's resources trying.
I reevaluated my priors long ago when I saw that scaling laws show no sign of stopping, no sign of plateau.
Strangely some people on HN seem to desperately cling to the notion that it's all going to come to a halt. This is unscientific. What evidence do you have - any evidence - that the scaling laws are due to come to an end?
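For anyone unfamiliar with what "scaling laws" means operationally: the claim is that loss follows a power law in parameter count, L(N) = a·N^(−b) + c, which is a straight line on log-log axes. A toy sketch with synthetic coefficients (the real ones come from large training runs, e.g. the Chinchilla paper; these numbers are made up for illustration):

```python
import numpy as np

# Synthetic scaling-law data: L(N) = c + a * N**(-b) with invented a, b, c.
N = np.array([1e7, 1e8, 1e9, 1e10])      # parameter counts
L = 1.69 + 406.4 / N**0.34               # losses generated from the known form

# Recover the exponent b as the negative log-log slope of (L - c):
b = -np.polyfit(np.log(N), np.log(L - 1.69), 1)[0]
print(round(b, 2))  # ≈ 0.34: the fitted line has no bend, i.e. no plateau
```

The empirical debate is precisely whether real curves keep following this form at the next order of magnitude, which is something a fit to past data cannot settle by itself.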
I’d like to see something that indicates models are getting better without the need for more training data. I would expect most gains are coming from more and better labeled data. We’re racing towards a complete encyclopedia of human knowledge. If we get there that’s only a drop in the bucket of all knowable things.
Bro the planet is literally experiencing a climate disaster and you think the solution is to create more systems that are misaligned with the planet's ecosystem for humans?
I guess the great filter is a real thing and not just a thought experiment.
I assure you that voluntary meat consumption because "taste buds go brr" is a much bigger problem than AI that results in actual productivity gains (and potentially solves the very climate crisis you complain about).
I suspect it's not that people don't see the progress; it's that they don't fully trust laws that aren't backed by physics the way transistor scaling was. We empirically see that scaling works and continues to work.
The issue people have isn't some interpretation of scaling laws, it's whether the planet's ecology is going to be able to sustain this endeavour.
I shouldn’t have to say this out loud, but if the environment collapses, we will die, and no amount of “just a bit more scaling bro, just think of the gains” will matter.
People's voluntary dietary choices cause far more suffering and ecological damage than AI, and for much less return or economic output. But you tell people to switch to plant based foods and they lose their shit.
Well diffusers are trained unsupervised on raw pictures. I don't know how they train multi-modal LLMs on images, but yes obviously they are consuming other media than just text. I don't think, but would be happy to be corrected, that models glean much of their "knowledge" from non-textual training data.
Please tell me more. When I ask an LLM a question, and get a text response, can that response incorporate non-textual information from visual training data?
Where did my standard of living go? Couldn't possibly have anything to do with imported labor working around the clock under the threat of being kicked out of the country.
For tech jobs specifically? Compensation has been increasing since the turn of the millennium, what standard of living do you mean? If you mean housing, that's due mainly to NIMBYism from native labor buying and owning houses, especially before the tech boom, not imported labor.
Cheap labour producing goods for the native population at low costs should increase your standard of living, no? It makes the products you buy cheaper.
By your logic, if you were the only person in the country, you'd live like a king.
Companies are importing labor so they can avoid paying competitive wages to native workers. If you need to hire people from other countries, they should have the same pay and protections as everyone else.
That's way too naive: prices never go down, the owner pockets the difference, you pay the same, and once they come for your industry you face more competition too.
By your logic, slavery was one of the finest economic policies. Cheap labour, how about free labour? Have we thought of that? Everything would just be free.
In the real world, the evidence is obvious: average productivity/wages drop, incentive to invest in labour-saving technology disappears, and you get multiple decades of stagnation. Every country which had unlimited, unfree labour has had decades of slow growth as a result.
Income growth in the working-age population in the US since 1990 has been about the same as Japan's, a country widely regarded as on the verge of economic collapse. US per capita income is probably 20-30% lower than it would be from first-order effects of immigration alone, likely much more with second-order effects. Under any other circumstances, with economic policy elsewhere, the US economy would be growing 7%/year now (and of course, the prescribed answer for Japan's ills is apparently, you guessed it, lots of immigration).
China is seeing secular reductions in production costs because of capital investment, not low wages. The peculiarly statist notion of American capitalists that the route to economic supremacy was large numbers of illiterate Guatemalans should go down as not only an economic failure but a moral one (equally of H1B).
I'm willing to bet an intelligent LLM with a dataset and a pandas stats package could outperform this model by running its own experiments and making predictions
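For the skeptical, here's the flavor of "experiment" being proposed, sketched with an invented toy dataset (the column names and numbers are hypothetical, not from any real benchmark):

```python
import pandas as pd

# Hedged sketch: the simplest experiment an LLM could run with pandas —
# compute a persistence baseline on a (made-up) temperature dataset.
df = pd.DataFrame({
    "temp_today":    [20.0, 22.0, 19.0, 25.0, 21.0, 23.0],
    "temp_tomorrow": [21.0, 22.5, 19.5, 24.0, 21.5, 23.5],
})

# Persistence baseline: predict tomorrow = today, score by mean absolute error.
mae = (df["temp_tomorrow"] - df["temp_today"]).abs().mean()
print(round(mae, 2))
```

Whether an LLM iterating on analyses like this actually beats a purpose-built forecasting model is exactly the bet in question; the tooling itself is trivial.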
Instead of being willing to bet, you can do it yourself and prove it. It's not like there's a ceiling on doing what you're proposing.
I am willing to bet that you are wrong.
Similar to Delve, this guy has almost no work experience. You have to wonder if YC and the cult of extremely young founders is causing instability in society at large?
It's interesting to see how the landscape changes when the folks upstream won't let you offload responsibility. Litellm's client list includes people who know better.
I can't even imagine what these exams would look like. The entire profession seems to boil down to making the appropriate tradeoffs for your specific application in your specific domain using your specific tech stack. There's almost nothing that you always should or shouldn't do.
All engineering professions are like that. NCEES has been licensing Professional Engineers for over a hundred years. The only thing stopping CS/SE is an unwillingness to submit to anything resembling oversight.
All software runs on somebody's hardware. Ultimately even an utterly benign program like `cowsay` could be backdoored to upload your ssh keys somewhere.
Yep, DSPy and CrewAI have direct dependencies on it. DSPy uses it as its primary library for calling upstream LLM providers and CrewAI falls back to it I believe if the OpenAI, Anthropic, etc. SDKs aren't available.
Something bad happens because of lack of regulation -> people push for regulation -> government actually regulates and sets norms/procedures -> the system works for a while -> then someone takes the same idea and molds it into something else to bypass the regulation -> they get promoted and rewarded because they are "clever" -> then something bad happens once the tool reaches the public.
From Prediction markets to Buy now, pay later to Delve to so many other things.
Is there a name for this particular phenomenon? It just keeps repeating across multiple industries.