jagged-chisel's comments

Collars for monitoring and herding. As long as they don’t require The Cloud, they’ll be quite helpful.

>As long as they don’t require The Cloud

Given that you hear frequently (even on the front page of HN today)

- people getting locked out of their cloud accounts and then facing a Kafkaesque faceless bureaucracy

- physical products turning into bricks because the cloud account disappeared with the company's failure

I would certainly hope that a cloud account is optional.


Given the trends elsewhere that seems inevitable. Or are farmers/ranchers particularly averse to those features?

> Halter has accumulated what is likely the world’s largest dataset of cattle behavior.

Something tells me you don't acquire the largest dataset of cattle behavior with opt-in cloud analytics.


“Sophistication and literacy” are orthogonal to the peculiarities of a black box search engine.

Those literate sophisticates would still be noobs at getting something useful from Google.


Especially given the complexity of how prices actually increased. Did prices change solely due to tariffs? No, there were other factors.

This whole thing is just lawyering at its core. I find the outrage “on behalf of customers” to be disingenuous.


One man’s “useful” is another man’s “trolling”

fair

That’s no guarantee that a Power implementation isn’t compromised.

The Hawthorne effect is real. And I don’t think we will ever get a 100% solid grip on what’s happening in others’ minds. Well, until we can actually read, understand, and interpret brain activity at the cellular level.

Looks like the dashboard was down. Payments continued to work. And I guess the dashboard is back up now?

Yeah, that's the interesting part.

In big systems, you usually find out what's mission-critical by seeing what still works when something goes sideways.


Can someone in the know give a little summary of what we’re looking at here? What’s the purpose? How effective is the code/system at accomplishing its purpose? Etc…

Automated Mathematician is a historically significant step in the evolution of classic AI based on evaluating symbols and rules. This branch of AI seems to have hit a dead end although one can never be certain of such things.

Obviously stuff like LLMs produces much more impressive results as of now, that's a given. OTOH who knows - neural networks also had a long-ish period when OCR seemed to be the pinnacle of what they could deliver, before they exploded via Deep Learning/Transformers/LLMs and whatnot.


Indeed, AFAIK neural networks have caused at least two AI winters before finally breaking through thanks to a few good new ideas and the fact that the needs of computer games incidentally led to the development of a big industry of specialized, programmable, high-performance dot product calculators.

Speaking of winters; there's a good article about Cyc, a successor to Automated Mathematician. Cyc was the last big project in symbolic AI: https://yuxi.ml/cyc

Automated Mathematician was what led to Eurisko: https://en.wikipedia.org/wiki/Eurisko

Eurisko demonstrated superhuman ability at playing strategy games in the early 1980s, and even reused strategies from the VLSI place-and-route task when planning fleet placement in games. That is knowledge transfer between tasks.


Pioneering and seminal early Lisp AI work, ~50 years before LLMs. From the mind behind the Cyc project.

https://en.wikipedia.org/wiki/Automated_Mathematician


That’s obviously because they’re not being “evil”

Sounds like we’re bringing back the PGP key signing parties

The sooner we do the better.

I wonder what the PGP signing concept does to thwart people who want to profit and don't care about the public good. It seems like anyone who attends a signing party can sell their key to the highest bidder, leading to bots and spammers all over again.

In the flat trust model we currently use most places, it's on each person to block each spammer, bot, etc. The cost of creating a new bot account is low so it's cheap to make them come back.

On a web of trust, if you have a negative interaction with a bot, you revoke trust in one of the humans in the chain of trust that caused you to come in contact with that bot. You've now effectively blocked all bots they've ever made or ever will make... At least until they recycle their identity and come to another key signing party.

Once you have the web in place, though, as a series of "this key belongs to a human" attestations, you can layer metadata on top of it like "this human is a skilled biologist" or "this human is a security expert". If you use those attestations to determine what content you're exposed to, then a malicious human doesn't merely need to show up at a key signing party to bootstrap a new identity; they also have to rebuild their reputation to the point where you or somebody you trust becomes interested in their content again.

Nothing can be done to prevent bad people from burning their identities for profit, but we can collectively make it not economical to do so by practicing some trust hygiene.

Key signing establishes a graph upon which more effective trust management becomes possible. It on its own is likely insufficient.
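The trust-graph idea above can be sketched in a few lines. This is a hypothetical toy model (names like `WebOfTrust`, `sign`, and `revoke` are invented for illustration, not any real PGP API): trust is a directed graph of "signer vouches for key" edges, an identity is trusted if it's reachable from you, and revoking one signer cuts off every identity they ever vouched for.

```python
# Toy web-of-trust model (hypothetical, not a real PGP implementation).
from collections import defaultdict

class WebOfTrust:
    def __init__(self):
        self.signed = defaultdict(set)   # signer -> keys they vouched for
        self.revoked = set()             # signers we no longer trust

    def sign(self, signer, key):
        self.signed[signer].add(key)

    def revoke(self, signer):
        # Revoking one human cuts off every identity they vouched for,
        # past and future, unless it's reachable via another trusted chain.
        self.revoked.add(signer)

    def trusted(self, me, key):
        # Breadth-first search from `me`, never traversing a revoked signer.
        frontier, seen = [me], {me}
        while frontier:
            node = frontier.pop()
            if node in self.revoked:
                continue
            for nxt in self.signed[node]:
                if nxt == key:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

wot = WebOfTrust()
wot.sign("me", "alice")
wot.sign("alice", "mallory")
wot.sign("mallory", "bot1")
wot.sign("mallory", "bot2")

print(wot.trusted("me", "bot1"))   # True: reachable via alice -> mallory
wot.revoke("mallory")              # one bad interaction with any of mallory's bots
print(wot.trusted("me", "bot2"))   # False: all of mallory's identities are cut off
```

The point the comment makes falls out of the graph shape: one `revoke` call blocks the whole subtree a bad actor ever signs, whereas in a flat model you'd block `bot1` and `bot2` one at a time.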


You can never prevent things like this, but you can make it expensive enough to effectively solve the problem for almost all use cases.

Definitely miss those!
