Deep learning is pattern finding, not pattern matching. That is the difference. There is a business narrative hyping up ML/AI, and there is plenty of retrospective tracing of modern ML's roots back to something older, but none of that changes the fact that modern approaches have opened up a big explosion of demonstrated advances over the past 10 years.
Edit: just to expand a bit, the "intelligence" in modern AI is in the training, not the inference. I think people see inference in a neural network as pattern matching or some kind of filter, and that's basically true, and "dumb" if you like. But learning the weights is a whole different thing. Stochastic Gradient Descent et al. often use only comparatively few examples to learn a pattern and embed it in a set of weights in a way that generalizes despite being highly underdetermined. It's not general intelligence, but it's a much different thing than the casual dismissals people like to post, which are usually directed at the inference part as if the weights just magically appeared.
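As a rough illustration of that training/inference split (a minimal numpy sketch on made-up data, not any particular system): SGD does the pattern finding, and what is left at inference time is a fixed matching step.

    import numpy as np

    rng = np.random.default_rng(0)

    # A comparatively small set of labeled examples; the problem is
    # noisy and would be underdetermined with fewer points.
    X = rng.normal(size=(20, 5))
    true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = X @ true_w + rng.normal(scale=0.1, size=20)

    w = np.zeros(5)   # the weights start out "dumb"
    lr = 0.01

    # Training: stochastic gradient descent on squared error,
    # one example at a time. This is where the "finding" happens.
    for epoch in range(200):
        for i in rng.permutation(len(X)):
            err = X[i] @ w - y[i]
            w -= lr * err * X[i]   # gradient step on a single example

    # Inference: no learning left, just applying the found pattern.
    x_new = rng.normal(size=5)
    prediction = x_new @ w

The loop is where all the interesting work happens; the last two lines are the "filter" that gets dismissed.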
> Deep learning is pattern finding, not pattern matching.
"Pattern matching" obviously includes "pattern finding" already, in common parlance.
By "deep learning" one means, in effect, "hierarchical pattern finding" (which is basically what "feature" or "representation" learning means to a lay person).
But still, at the end of the day ... pattern finding.
>> "Pattern matching" obviously includes "pattern finding" already, in common parlance.
So, the terminology I recognise is "pattern matching" when referring to matching an object against a pattern, as in matching strings to regular expressions or in unification (where arbitrary programs can be matched); and "pattern recognition" when referring specifically to machine vision tasks, as an older term that as far as I can tell has fallen out of fashion. The latter term has a long history that I don't know very well and that goes back to the 1950s. You can also find it as "statistical pattern recognition".
To be honest, I haven't heard "pattern finding" before. Can you say whose "common parlance" that is? What I'm asking is: common to whom, in what circles, etc.? Note that a quick google for "pattern finding" gives me only "pattern recognition" results.
To clarify, deep learning is not "pattern matching", or in any case you will not find anyone saying that it is.
In addition to what others are saying, I want to highlight that there's decent evidence that "all" we're doing is pattern finding and pattern matching. ML definitely isn't at a human level yet, but I see no reason to think that we're missing some secret sauce that's needed to make a true AI. When someday we do achieve human-level intelligence in computers, it will be at least in large part with ML.
We're getting into the territory of some external debates -- that is to say, into areas that have already been debated by others, so there isn't much new I could tell you here.
But basically I'm in the camp of Gary Marcus and others, who would probably respond by saying something along the lines of the following:
"No, there's not particularly good evidence that all we're doing is pattern matching, and a lot of evidence to the contrary. For one thing, a lot of these ML algorithms (touted as near- or superhuman) are easily fooled, especially when you jiggle the environmental context by even just a little bit."
"For another, and on a higher level, what algorithms lack -- but which sentient mammals have in spades -- is an ability to 'catch' themselves, an ability to look at the whole situation and say 'Woah, something just isn't right here'. (This is referred to as the 'elephant in the room' problem with modern AI)."
"And then there's the lack of a genuine survival drive -- not to mention the fact that we don't see any inkling of evidence of some capacity for self-awareness in any currently existing system."
Just as a starting point (a toy sketch of the "easily fooled" point follows at the end of this comment). But these are some huge differences between currently existing AI systems (however you want to define "AI" here) and actual sentient cognition.
Huge, huge differences... such that I don't see how one can get the idea that "all" we're doing is pattern recognition. Even if it may cover roughly 90 percent of what our neural tissue does, that other 10 percent is absolutely crucial, and completely elusive to any current, working technology as such.
> When someday we do achieve human-level intelligence in computers, it will be at least in large part with ML.
No doubt it will, but still there's that ... remaining 10 percent. Which you won't get to with accelerated pattern finding any more than a faster airplane will get you to the moon.
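Here's that toy sketch of the "easily fooled" point: the fast gradient sign method (Goodfellow et al.) applied to a plain logistic-regression classifier. The weights and inputs are made up for illustration; real attacks target image models, where the per-pixel nudge can be small enough to be invisible.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Pretend these weights came out of training.
    w = np.array([2.0, -1.5, 0.8])
    b = 0.1

    x = np.array([1.2, 0.4, -0.3])   # input classified confidently
    p = sigmoid(w @ x + b)           # ~0.84, i.e. class 1

    # For a linear model the gradient of the logit w.r.t. the input
    # is just w, so nudging every dimension against it by a small
    # epsilon is the worst-case "jiggle" of the input.
    epsilon = 0.5
    x_adv = x - epsilon * np.sign(w)
    p_adv = sigmoid(w @ x_adv + b)   # ~0.38, the prediction flips

    print(p, p_adv)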
> In addition to what others are saying I want to highlight that there's decent evidence that "all" we're doing is pattern finding and pattern matching.
Is there? Link? It is possible, I just don't think there is enough evidence to say anything about it. What evidence would show that humans are just pattern finding and matching machines?
Some combination (sequential, parallel, meta, whatever convenient shape) of single tasks executed at superhuman ability (computer vision + path learning are there already) would inevitably lead to some sort of singularity when coupled with just state-of-the-art motion, handling and decision making / optimization. Generalizing from occasional singularities, though, still seems a long way off.
We're quite far from human-level intelligence, and current systems require drastically more power and data storage. It wouldn't be surprising if the ML powering general AI ends up looking to today's AI the way today's AI looks to 90s ML: still useful, but a qualitatively, meaningfully different approach.
I'm pretty sure they meant "pattern finding" as "discovering new patterns that can be used for matching", not "finding a pattern that matches an existing known one".
But currently humans do that "pattern finding". If you want it to learn to recognize animals, you give it thousands of images of different animals in all sorts of poses, angles and environments and tell it what those animals are. Basically the "pattern" is created by the humans who compile the dataset with enough examples to cover every situation, and then the program uses that pattern to find matches in other images. However, if you want a human to recognize animals, it is enough to show them this picture and then they will be able to recognize most of these animals in other photos and poses; humans create the pattern from the image and don't just rely on having tons of data.
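As a minimal sketch of that division of labor (made-up feature vectors standing in for images, and a deliberately simple nearest-centroid "model" standing in for a real network): the humans supply the labels, the program distills them into a pattern and then just matches against it.

    import numpy as np

    rng = np.random.default_rng(1)

    # The human-compiled dataset: feature vectors plus human-chosen labels.
    features = rng.normal(size=(300, 8))
    labels = rng.integers(0, 3, size=300)   # 0=cat, 1=dog, 2=horse
    features += labels[:, None]             # make the classes separable

    # "Pattern finding": one centroid per label, learned from the data.
    centroids = np.stack([features[labels == k].mean(axis=0) for k in range(3)])

    # "Pattern matching": assign a new image's features to the nearest centroid.
    def classify(x):
        return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

    print(classify(rng.normal(size=8) + 2))  # most likely class 2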
Edit: In some cases you can write a program to explore a space. For example, you can write a program that plays board games randomly and notes the outcome; that is a data generator. Then you hook that up to a pattern recognizer powered by a data centre, and you now have a state-of-the-art gameplay AI. It isn't that simple, since writing the driver and feedback loop is extremely hard work that an AI can't do; humans have to do it.
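A hedged sketch of that loop, with a trivial made-up game standing in for a real board game: the hand-written driver generates (state, outcome) data by random play, and the "pattern recognizer" is just a win-rate table fitted to that data.

    import random
    from collections import defaultdict

    def random_playout():
        """Toy 'game': random walk from 0; reaching +3 wins, -3 loses."""
        state, visited = 0, []
        while abs(state) < 3:
            visited.append(state)
            state += random.choice((-1, 1))
        return visited, (1.0 if state == 3 else 0.0)

    # Data generation: the driver humans had to hand-build.
    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(10_000):
        states, outcome = random_playout()
        for s in states:
            totals[s] += outcome
            counts[s] += 1

    # "Pattern recognizer": estimated win probability per state.
    value = {s: totals[s] / counts[s] for s in sorted(counts)}
    print(value)   # higher states should score higher

Swap the win-rate table for a network and bias the playouts toward the current best moves and you get, roughly, the shape of modern gameplay systems.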
Not sure how well acquainted you are with neuroscience, but that is the basis of human cognition. I really don't understand the attempt to belittle "pattern finding". State-of-the-art neuroscience tells us that we have thousands upon thousands of mental systems that have developed useful models to interpret the world. Human intelligence is by and large pattern finding.
I truly think all the criticism of deep learning is really a lack of understanding of neuroscience. It is not much different. We just currently do it much less efficiently than a brain does.
> But still, at the end of the day ... pattern finding.
And programming, at the end of the day ... if's and for's. See how ridiculous that is? A fuzzy, high-level concept explaining away the complexity of getting there.
I think most people would take "pattern matching" to mean matching against fixed patterns or engineered features, as opposed to learned features which you might call "pattern finding".
Although the distinctions you're drawing are valid -- from the bigger-picture point of view, this is basically hair-splitting.
The more basic and important point is: technology that works on the basis of "pattern finding" (whatever you wish to call it) -- even if it performs exponentially better and faster than humans in certain applications -- is still far different from, and falls far short of, technology that actually mimics full-scale sentient (never mind whether it needs to be human) cognition.
Or, that is to say, far short of any technology that can (meaningfully) be called AI.
I think the AI effect is reversed. It should be "any time a program can compete with humans at a new task it will get marketed as AI".
I don't really get the original argument. I never thought "today's AI isn't smart, but if it could play Go, that would be an intelligent agent!" So "the AI effect" is just a strawman; I have seen no evidence that anyone actually made such a change of heart. AI research is important, but nothing so far has been anywhere remotely intelligent, and I never thought "if it could do X it would be intelligent" for any of the X that AI can do today. When an AI can pretty reliably hold a remote job and get paid without getting fired for, say, a year, that is roughly the point where I'd agree we have an intelligent agent.
Edit: That wouldn't require "much". If the AI can read and understand text, then it can read about backend programming and what makes a good server and build a mental model for stuff like code quality. Based on what it learned it can also devise a strategy for how to get a job, so it creates a GitHub account, writes some example projects and puts them on GitHub, writes a CV based on those, and then goes looking for jobs.
Of course, this would only be simple if the agent were intelligent, unlike today's agents, which are coded for extremely specific scenarios. Even the general gameplay AI they talked about is for very specific scenarios compared to what a human deals with. In order to move towards creating an intelligent agent we need to make progress in this direction, but nothing that has come out of AI research recently really makes any progress in that direction.
> I don't really get the original argument. I never thought "today's AI isn't smart, but if it could play Go, that would be an intelligent agent!"
I think the "AI effect" primarily refers to how practical successes in the field of AI get taken for granted and no longer considered AI - leaving AI with the unsolved problems and bleeding-edge research.
If you ordered something for Christmas recently, 'AI' may have been used to understand your search query, rank relevant pages, allow the websites to determine you're not a DDoS attack, allow your bank to determine that your transaction is legitimate, let the confirmation email through your spam filter, then work out routes and other logistics to get it delivered. Before you started your order, 'AI' may have also been involved in the product's design (like optimization of circuits), used to detect defects during manufacturing, or for monitoring and maintenance of rail/road surfaces used for delivery. But all this has been working for a while, so fades into the background rather than being what people think of as AI.
There is also that more philosophical element (rather than just about perception of AI as a field) involving our boundary for what "intelligence" is.
People won't usually phrase it as "once AI can do X I'll be happy calling it intelligent", but sometimes "doing X requires intelligence" or "an unintelligent machine could never do X", with similar implications. Like, going way back, Descartes' claims that machines will be incapable of responding appropriately in conversation.
With AI becoming increasingly capable, even though the timeframes are often far longer than AI-optimists predict, a prevalent view seems to be that AI could behave identically to humans and still not be intelligent. Possibly even be physically identical to humans and still not be intelligent, if you buy into the P-zombie argument (and additionally their nomological possibility, which most people don't).
Holding a remote job (including the interview and portfolio process) is an interesting benchmark, but I can't help but feel your response when that gets achieved may be "oh, but that's basically just a patchwork of language models - clearly in retrospect my criteria must not have been strict enough".
That is not true though. State-of-the-art neuroscience says, basically, that human cognition is a tower of abstractions built from smaller systems of pattern finding optimizing a reward.