High-altitude planes going to the moon is a beautiful analogy. I think this is what I'll use to explain to less technical friends why I think we're still many years from self-driving cars.
The question isn't whether high-altitude planes can go to the moon, it's whether human intelligence is closer to the clouds or to the moon. For all the talk about how language models "just" learn correlations, there's a remarkable dearth of evidence that humans do something qualitatively different.
> For all the talk about how language models "just" learn correlations, there's a remarkable dearth of evidence that humans do something qualitatively different.
GPT-3 doesn't know the difference between a given set of characters and the idea/object the characters represent. It can associate "river" and "stream" and "water", but has no understanding beyond the fact that they appear in patterns together. It couldn't possibly make the connection that rivers and streams are bodies of water, because there is no association with reality.
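To make "appear in patterns together" concrete, here's a toy sketch of distributional association (nothing to do with GPT-3's actual internals; the tiny corpus is made up). Words get vectors from their neighbors alone, and "river" comes out closer to "stream" than to "cat" without the code ever touching anything outside the text:

    # Toy distributional association: words that share contexts get similar
    # vectors. No grounding anywhere -- only co-occurrence counts.
    from collections import Counter
    from math import sqrt

    corpus = ("the river flows into the sea . the stream flows into the river . "
              "water flows in the river and in the stream . the cat sat on the mat .").split()

    def context_vector(word, window=2):
        counts = Counter()
        for i, w in enumerate(corpus):
            if w == word:
                for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                    if j != i:
                        counts[corpus[j]] += 1
        return counts

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a)
        na = sqrt(sum(v * v for v in a.values()))
        nb = sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    river, stream, cat = map(context_vector, ["river", "stream", "cat"])
    print(cosine(river, stream))  # higher: shared contexts ("flows", "into", ...)
    print(cosine(river, cat))     # lower: different contexts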
GPT-3 wouldn't even know the difference between human language and characters derived from some random data source.
The only thing it does is identify deeply complex patterns, as long as there are humans around to tell it when it's doing a good job. It's going to be very useful for autocomplete, and for jumping in to help users finish repetitive tasks, along with the other stuff ML is good at, but it's simply a GIGO pattern-recognition system.
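To sketch that last point: the machinery of a next-token model is identical whether its tokens are English words or arbitrary symbols; it never knows which it has. This is a toy bigram model, not GPT-3, but the indifference to meaning is the same in kind:

    # A minimal next-token model: train and generate the same way on English
    # words or on meaningless symbols. The model has no notion of which is which.
    import random
    from collections import defaultdict, Counter

    def train_bigram(tokens):
        table = defaultdict(Counter)
        for a, b in zip(tokens, tokens[1:]):
            table[a][b] += 1
        return table

    def generate(table, start, n=8):
        out = [start]
        for _ in range(n):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choices(list(followers), weights=followers.values())[0])
        return " ".join(out)

    english = "the river flows to the sea and the stream flows to the river".split()
    symbols = [random.choice("abxyz") for _ in range(200)]  # arbitrary data source

    print(generate(train_bigram(english), "the"))
    print(generate(train_bigram(symbols), symbols[0]))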
So I think you have it exactly backwards -- there is a dearth of evidence that AGI is even remotely possible. We have known the full anatomy of the C. elegans roundworm since the mid-1980s -- it's 1mm long and has 302 neurons. There is a foundation dedicated to replicating its behavior [1], and all they have achieved is complex animation.
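For a sense of why having the wiring diagram hasn't been enough: the connectome says which neurons connect, not how strongly or with what dynamics. A hypothetical toy (not OpenWorm's actual model, which is far more detailed) shows the gap; the `weights` array below is pure invention, and that invention is exactly the missing piece:

    # Hypothetical toy, not OpenWorm: a complete wiring diagram (adjacency
    # matrix) still leaves synaptic strengths and dynamics unspecified.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 302                                      # neurons in the adult hermaphrodite
    wiring = rng.random((N, N)) < 0.05           # stand-in for the known connectome
    weights = rng.normal(0, 1, (N, N)) * wiring  # unknown in reality; must be fit

    state = rng.random(N)
    for _ in range(100):                         # crude rate-model update
        state = np.tanh(weights @ state)

    print(state[:5])  # the "behavior" hinges entirely on the invented weights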
> It couldn't possibly make the connection that rivers and streams are bodies of water, because there is no association with reality.
Yet if you asked it whether rivers and streams are bodies of water, it would probably say that they are.
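This is easy enough to check. A sketch against the GPT-3 completion API as it shipped (the `davinci` engine), assuming you have an API key; the exact wording of the reply will vary:

    # Query GPT-3 via the original completion endpoint. Requires an API key;
    # "davinci" was the base GPT-3 engine name at release.
    import openai

    openai.api_key = "sk-..."  # your key here

    resp = openai.Completion.create(
        engine="davinci",
        prompt="Q: Are rivers and streams both bodies of water?\nA:",
        max_tokens=10,
        temperature=0,
    )
    print(resp.choices[0].text.strip())  # typically an affirmative answer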
Likewise, if I asked you whether black holes and neutron stars are both celestial bodies, you would say yes... but you've presumably never seen them, only read about them.
Now I think you could argue that you could ultimately tie your knowledge back to some reality (the way "star" ties to the sun, and you've seen the sun), but I haven't been convinced that we know enough about how the mind works to be sure the distinction between form and meaning is real.
Assuming there is no "soul" which makes meat life special, I don't see any fundamental problems with building simulated intelligence.
Are you just dumbing down humans to match the model? LeCun's post is very sparse on detail, but the point is that humans can easily reason about a vast number of things that any form of sequential language model cannot. That alone is evidence that humans are doing something qualitatively different.
It isn't conclusive evidence, however, and larger models may produce significantly more human-like results. But from what we know about how GPT-3 works, all the evidence is on the side of it not resembling human intelligence.
Exactly - the analogy fails if a few assumptions we have about ourselves or what GPT-3 is actually “doing” are wrong. Until we hit some asymptotic limit on training these kinds of language models, I’m withholding judgement on what such a model will be capable of representing if/when that limit arrives.
> there's a remarkable dearth of evidence that humans do something qualitatively different.
Perhaps so for now, but if history is any indication, whenever we (as humans) have thought we had a good grip on how something really works (like human intellect in this example), we've been wrong.
We model the world around us from observation and testing, find our errors, remodel, and improve over time.
Then at some point we find some piece of information that shows us our model was a decent approximation, but fundamentally wrong, and that we need to start from scratch.
If we find that we want to go beyond the moon (and we eventually will), or that the moon is further than we think, we'll again need a different approach.
I always feel like there's a certain beauty and cosmic humor to it.
I have been using that analogy to explain why Tesla is many years away from a self-driving car. Several others are building something fundamentally different.
Several things. Mostly that it has to be rolled out slowly because people's lives are at stake, so testing can run for quite a bit longer than you'd expect. Also, everyone wants to be the one who makes the breakthrough, so companies will repeatedly claim it's right around the corner (like fusion); e.g., Tesla saying it'd be here in 2018. We're just at the point where these things can handle parking lots, so I wouldn't expect a complete rollout for several years, as systems are built on top of other systems that have been widely tested and confirmed to work.