
At the risk of reigniting the perpetual war about how to characterize machine intelligence, and by extension the risks it poses: Yann has been (and still is, AFAIK) in the "existential AI risk is a long-term problem" camp. In a 2016 interview LeCun said [1]:

> We’re very far from having machines that can learn the most basic things about the world in the way humans and animals can do. Like, yes, in particular areas machines have superhuman performance, but in terms of general intelligence we’re not even close to a rat. This makes a lot of questions people are asking themselves premature. That’s not to say we shouldn’t think about them, but there’s no danger in the immediate or even medium term. There are real dangers in the department of AI, real risks, but they’re not Terminator scenarios.

That's pretty measured overall, but he doesn't actually know that there's no existential AI risk in the medium term. No one does, and that's the problem. Experts simply suspect that it's unlikely. He and Stuart Russell have debated similar topics [2].

To tie back to your point: I keep seeing LeCun brush past tricky questions like yours, and the ones at [2], with an arrogant confidence. I wish he would be more careful, and I hope I have a skewed view of him.

[1] https://www.theverge.com/2017/10/26/16552056/a-intelligence-...

[2] https://www.lesswrong.com/posts/WxW6Gc6f2z3mzmqKs/debate-on-...



He's not wrong; we are very far. And judging by past "progress," it seems we'll get there very slowly. So it looks like a long-term problem.

Except people are bad at reasoning about exponential processes, yet when economic incentives drive us, we're suddenly good at making them happen. That combination seems to be exactly what produces these existential risks (like climate change, or other manifestations of the coordination problem).



