Police officers have on many occasions used these cameras to track people outside the scope of their jobs. Usually it's a man tracking a woman (an ex-wife, ex-girlfriend, or just a random stalking target).
> SF and Seattle, cities with some of the gnarliest public safety problems in the country
As someone who lives in SF and has spent a decent amount of time in Seattle... this isn't accurate. I lived for a few years in Philadelphia and would actually hear gunshots frequently. Friends of friends got shot. Many a friend got mugged. Thankfully I never got attacked.
Don't fall for the bait that because we have homelessness we're a hellscape (and the homeless population is nothing like what is in Los Angeles from what I've seen with my own eyes).
I can see this being one of many AI tools for video editors. Combined with a handful of other tricks, an SFX shop should see tremendously higher productivity.
There's at least a decent argument to be made that the limiting factor is actually the physical silicon itself (at least at cutting-edge nodes) not really the power. This actually gives AI labs an incentive to run those specific chips somewhat cooler, because high device temperatures and high input voltages (which you need to push frequencies higher) might severely impact a modern chip's reliability over time.
Power is the limiting layer above physical chips. You can add more chips and run them at lower clocks, or add more efficient chips later on, but you can't really change the power capacity of a data center easily.
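The "more chips at lower clock" tradeoff can be sketched with the classic CMOS dynamic-power relation, P ≈ C·V²·f. This is a toy model with made-up numbers, assuming supply voltage scales roughly linearly with frequency; it just illustrates why spreading work across more, slower silicon wins on power per unit of throughput:

```python
# Toy model: CMOS dynamic (switching) power P ≈ C * V^2 * f.
# All numbers are arbitrary units, not real chip specs.

def dynamic_power(c_eff, voltage, freq):
    """Switching power of one chip."""
    return c_eff * voltage**2 * freq

C_EFF = 1.0               # effective switched capacitance (arbitrary)
V_NOM, F_NOM = 1.0, 1.0   # nominal voltage and frequency

# Option A: 1 chip at full clock.
power_a = 1 * dynamic_power(C_EFF, V_NOM, F_NOM)

# Option B: 2 chips at half clock (and, assuming V tracks f, half voltage).
# Total throughput is the same as Option A.
power_b = 2 * dynamic_power(C_EFF, V_NOM / 2, F_NOM / 2)

print(power_a, power_b)  # 1.0 vs 0.25: ~4x less power for the same work
```

The flip side, of course, is that Option B needs twice the silicon, which is the other commenter's point about fab capacity being the real constraint.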
It will nonetheless be vastly cheaper to build a new datacenter and arrange for powering it than to fab the amount of leading-edge chips and compute systems that are going to ultimately eat that power. So the chips themselves are still the meaningful constraint.
Surely there should be some more critical questions posed about whether just buying a bunch of GPUs is a good idea? It just feels like a cheap way to show that growth is happening. It feels a bit like FOMO. It feels like nobody with the capital is questioning whether this is actually a good idea, a desirable way to improve AI models, or even money well spent. 1 GW is a lot of power. My understanding is that it's roughly the instantaneous demand of a city like Seattle. This is absurd.
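A quick back-of-envelope check on the Seattle comparison. Both inputs here are rough assumptions (approximate city population, US-average per-capita electric demand), not official utility figures:

```python
# Sanity check: is 1 GW really "a city like Seattle"?
# Assumptions (rough, not official figures):
population = 750_000   # approximate Seattle population
per_capita_kw = 1.4    # rough US-average electric demand per person in kW
                       # (~12,000 kWh/year per capita / 8760 h/year)

city_demand_gw = population * per_capita_kw / 1_000_000
print(f"{city_demand_gw:.2f} GW")  # ~1.05 GW, so the comparison is plausible
```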
It feels like there is some awareness that asking for gigawatts, if not terawatts, of compute probably needs more justification than has been proffered, and the big banks are already trying to cover themselves by publishing reports saying AI has not contributed meaningfully to the economy, like Goldman Sachs recently did.
Kinda complicated, though, when you consider it fully. Power consumption really only measures the environmental impact; we come up with cleverer ways to use the same amount of power daily.
It's kind of like an electric motor built before a strong understanding of the Lorentz force and Ohm's law. We don't really know how inefficient the thing is, because we don't really know where the ceiling is, aside from some loose theoretical computational-efficiency concepts that don't strongly apply to practical LLMs.
to be clear, I don't disagree that it's the limiting factor, just that 'limits' is nuanced here between effort/ability and raw power use.
"Do you realize that the human brain has been likened to an electronic brain? Someone said, and I don't know whether he is right or not, but he said, if the human brain were put together on the basis of an IBM electronic brain, it would take 7 buildings the size of the Empire State Building to house it, it would take all the water of the Niagara River to cool it, and all of the power generated by the Niagara River to operate it." (Sermon by Paris Reidhead, circa 1950s.[1])
We're there on size and power.
Is there some more efficient way to do this?
Edit: What we have built is a natural language interface to existing, textually recorded, information. Transformers cannot learn the whole universe because the universe has not yet been recorded into text.
Transformers operate on images and a variety of sensor data. They can also operate completely on non-textual inputs and outputs. I don't know what the ceiling on their capabilities is, but the complaint that they only operate on text seems just obviously wrong. There are numerous examples but one is meteorological forecasting which takes in a variety of time series sensor inputs and outputs e.g. time-series temperature maps. https://www.nature.com/articles/s41598-025-07897-4
100% agreed. Sadly, lots of people out there with the "trust me bro, just need more compute". Hopefully we don't consume all the planet's resources trying.
I reevaluated my priors long ago when I saw that scaling laws show no sign of stopping, no sign of plateau.
Strangely some people on HN seem to desperately cling to the notion that it's all going to come to a halt. This is unscientific. What evidence do you have - any evidence - that the scaling laws are due to come to an end?
I’d like to see something that indicates models are getting better without the need for more training data. I would expect most gains are coming from more and better labeled data. We’re racing towards a complete encyclopedia of human knowledge. If we get there that’s only a drop in the bucket of all knowable things.
Bro, the planet is literally experiencing a climate disaster, and you think the solution is to create more systems that are misaligned with the planet's ecosystem for humans?
I guess the great filter is a real thing and not just a thought experiment.
I assure you that voluntary meat consumption because "taste buds go brr" is a much bigger problem than AI that results in actual productivity gains (and potentially solves the very climate crisis you complain about).
I suspect it's not that people don't see the progress; it's that they don't fully trust laws that, unlike transistor physics, aren't truly backed by anything fundamental. We empirically see that scaling works and continues to work.
The issue people have isn't some interpretation of scaling laws, it's whether the planet's ecology is going to be able to sustain this endeavour.
I shouldn’t have to say this out loud, but if the environment collapses, we will die, and no amount of “just a bit more scaling bro, just think of the gains” will matter.
People's voluntary dietary choices cause far more suffering and ecological damage than AI, and for much less return or economic output. But you tell people to switch to plant based foods and they lose their shit.
Well, diffusion models are trained unsupervised on raw pictures. I don't know how multi-modal LLMs are trained on images, but yes, obviously they are consuming media other than just text. I don't think, though I'd be happy to be corrected, that models glean much of their "knowledge" from non-textual training data.
Please tell me more. When I ask an LLM a question, and get a text response, can that response incorporate non-textual information from visual training data?
I think if you’re a professional and you’re actually coding for >4 hours per day it makes sense. Also if you’re one of those weirdos that likes to command an army of agents.
Well I’m a software engineer and code more than 4 hours per day.
But I do check the generated code and make sure it doesn't go bananas. I wouldn't do multiple features at the same time, as I have no idea how people check the output after that.
I like AI coding and it has accelerated my work, but I wouldn't trust the output blindly.
It speeds up so much of what I do, simple tasks are amazing to delegate to AI and it leaves me with more time in the day to tackle the big tasks.
My general rule of thumb now is that I never get AI to build the bones of something I believe I'll need to build on top of later. But if it's a throwaway, dead-end feature that won't require me to build more tooling on top of it in future, I'll happily spin up a cloud agent and use the result.
They're really amazing for making one-off tools where you just need some clean output and can throw away the code afterwards. I had Claude Opus put together a data labeling web app with very little effort. Less work than creating an account on some SaaS and learning their system.