The Hawthorne effect is real. And I don’t think we will ever get a 100% solid grip on what’s happening in others’ minds. Well, until we can actually read, understand, and interpret brain activity at the cellular level.
Can someone in the know give a little summary of what we’re looking at here? What’s the purpose? How effective is the code/system at accomplishing its purpose? Etc…
Automated Mathematician is a historically significant step in the evolution of classic AI based on evaluating symbols and rules. This branch of AI seems to have hit a dead end, although one can never be certain of such things.
Obviously stuff like LLMs produces much more impressive results as of now, that's a given. OTOH who knows - neural networks have also had a long-ish period when OCR seemed to be the pinnacle of what they can deliver before they exploded via Deep Learning/Transformers/LLMs and what not.
Indeed, AFAIK neural networks have caused at least two AI winters before finally breaking through thanks to a few good new ideas and the fact that the needs of computer games incidentally led to the development of a big industry of specialized, programmable, high-performance dot product calculators.
Speaking of winters: there's a good article about Cyc, a successor to Automated Mathematician. Cyc was the last big project in symbolic AI: https://yuxi.ml/cyc
Eurisko demonstrated superhuman ability at strategy games in the early 1980s, and even applied strategies from the VLSI place-and-route task to fleet placement in games, an early example of knowledge transfer between tasks.
I wonder what the PGP signing concept does to thwart people who want to profit and don't care about the public good. It seems like anyone who attends a signing party can sell their key to the highest bidder, leading to bots and spammers all over again.
In the flat trust model we currently use most places, it's on each person to block each spammer, bot, etc. The cost of creating a new bot account is low, so blocked bots cheaply come back under new accounts.
On a web of trust, if you have a negative interaction with a bot, you revoke trust in one of the humans in the chain of trust that caused you to come in contact with that bot. You've now effectively blocked all bots they've ever made or ever will make... At least until they recycle their identity and come to another key signing party.
Once you have the web in place, though — a series of "this key belongs to a human" attestations — you can layer metadata on top of it, like "this human is a skilled biologist" or "this human is a security expert". If you use those attestations to determine what content you're exposed to, then a malicious human can't merely show up at a key signing party to bootstrap a new identity; they also have to rebuild their reputation to the point where you, or somebody you trust, becomes interested in their content again.
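To make the revocation idea concrete, here's a minimal sketch of transitive trust as a directed graph. This is a toy model I'm making up for illustration, not the actual PGP web-of-trust algorithm (real PGP has signature levels, trust degrees, and path limits); the point is just that cutting one edge to an intermediary removes everything reachable only through them.

```python
# Toy transitive-trust model: keys are trusted if reachable from you
# via attestation edges. Revoking one edge cuts off everything the
# revoked party vouched for (unless reachable another way).
class TrustGraph:
    def __init__(self):
        self.edges = {}  # signer key -> set of keys they attest

    def attest(self, signer, subject):
        self.edges.setdefault(signer, set()).add(subject)

    def revoke(self, signer, subject):
        self.edges.get(signer, set()).discard(subject)

    def trusted(self, root):
        # Depth-first search: everything reachable from `root`.
        seen, stack = set(), [root]
        while stack:
            key = stack.pop()
            if key in seen:
                continue
            seen.add(key)
            stack.extend(self.edges.get(key, ()))
        return seen

g = TrustGraph()
g.attest("me", "mallory")        # signed mallory's key at a party
g.attest("mallory", "bot1")      # mallory vouches for their bots
g.attest("mallory", "bot2")
assert "bot1" in g.trusted("me")

g.revoke("me", "mallory")        # one negative interaction, one revocation
assert "bot1" not in g.trusted("me")  # all of mallory's bots are gone too
```

One revocation handles every identity the bad actor has attested, which is the leverage the flat-trust model lacks.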
Nothing can be done to prevent bad people from burning their identities for profit, but we can collectively make it not economical to do so by practicing some trust hygiene.
Key signing establishes a graph upon which more effective trust management becomes possible. It on its own is likely insufficient.