See [0] for a critical review of (the popularization of) this paper. As usual, there’s something real here but it’s way overhyped because everyone involved benefits from over-playing the role of deep learning here.
There seems to be a lot of animosity towards deep learning, to the point that it comes off as jealousy - I mean, read the other long thread in this discussion. I don't mean to say the criticisms in the article you linked are not valid, just that there are a lot of people ready to jump on any failure or exaggeration related to ML, giving it far more scrutiny than most research disciplines. It gets more money than most as well, so maybe that's OK. And I know it gets overhyped in business contexts and in the media. But these are research preprints.
That said, my observation about this story is that, on the one hand, everyone talks about how AI/ML will change everything and has lots of applications. But this article strikes me as clearly a story of a solution in search of a problem, one that had to go deep into a branch of theoretical math to find a place where ML could make a demonstrable impact. I know this application is not the only thing ML is good at (for context, I own a machine learning consulting company and am bullish on the business applications of ML). But when you look at DeepMind and the money that's gone into it, between the often-cited "data centre cooling optimization" and this, they are not exactly having a massive impact, even as they do a lot of cool stuff.
> "they are not exactly having a massive impact, even as they do a lot of cool stuff"
AlphaFold has changed the course of structural biology research and some descendant of that technology will absolutely be used to design drugs that many people take.
The academic research community would eventually have built models with the same performance as AlphaFold2, probably within the decade. Nonetheless, DeepMind was first, and way ahead of everyone else.
I don't understand how the goalposts have moved so far that solving protein folding only counts as a cool trick or novelty.
> I don't understand how the goalposts have moved so far that solving protein folding only counts as a cool trick or novelty.
It's the AI paradox. Before you solve a problem, like playing superhuman chess, people say "that's hard, you'd need real Artificial Intelligence to do that!"
After you solve the problem, they see the solution and say "wait, that's just engineering, not real AI".
There are two interpretations: in one, you set a metric and, when you find people are cheating, you move the metric. In the other, you really don't have any idea what you want, and when people give you concrete results you change your mind (this rings so close to any job...)
That's true of every NP-complete (or similar class) problem: it's easy to verify that a solution works, but hard to find it in the space of possible solutions.
Reportedly, quantum computers can help us with some problems of that kind.
My world timeline basically split into before and after when AlphaGo won game 3. What we imagined was mechanically possible and not possible was greatly mistaken. I remember walking around in a light-headed daze, watching people go about their daily business not realizing that the world had changed. Everything since then is still great to read about and I do get surprised from time to time, especially about the diversity of applications rather than more strength in a given area. At the same time, I'm not shocked by any of it. I don't even think discovering basic material/cells or microorganisms on a nearby planet or moon would surprise me as much today as AlphaGo did. We don't have confirmation, but it's not entirely unexpected -- the 'how' is the mystery now, rather than whether it's possible.
Discovering new areas of application, however obscure or deep in any field, is what I find more amazing than any immediate practical application of ML - after all, the G in AGI is general, so we should be hitting all the bases.
> I don't even think discovering basic material/cells or microorganisms on a nearby planet or moon would surprise me as much today as AlphaGo did.
AlphaGo was a given. If we don't nuke ourselves, we'll eventually develop AGI (or boosted BCI cybernetics). It's priced into the timeline.
Discovering alien biology tells us about our place in the universe. It gives us a sense of the distribution of life and what else could be out there.
If what we find is similar to us, it points to some kind of panspermia or global convergent evolution. Maybe other life will be shockingly different with wildly new means of representing information. Nothing like our nucleic acids. Who knows?
I put my bets on us developing AGI before meeting our cosmic neighbors, sadly. The Fermi paradox is strong.
Before AlphaGo's achievement, it was still expected to occur, but the expectation was about 20 years out, with much more developed tech. The shock and surprise was that a larger-scale combination of then-current tech could achieve the feat. The qualitative change in worldview is that 'dumb' machines can do intelligent things. AlphaZero was a tour de force of how dumbness can excel. In hindsight we can say that these things happening in our lifetime with the tech of the day isn't surprising, but if you were following closely as it happened, it was a big surprise to everyone, including the AlphaGo team. I don't think anyone knew whether the efficiency/effectiveness of the systems being built would top out at some limit in the complexity they could tackle.
> There seems to be a lot of animosity towards deep learning, to the point that it comes off as jealousy
This happens every time people post research here. Any article about new battery tech will have people saying "What about all the other new battery techs that never went anywhere? This is just a prototype, it won't revolutionize the industry", and they are right. The difference is that when people say those things in a thread about ML, the AI optimists who think these things are evidence that the singularity is near start to argue that no, this time it is truly different! And they do that every single time; AI optimists have believed that computers will take over any day now for 50 years. If those AI optimists didn't post so much, then those threads wouldn't grow.
> [This article] had to go deep into a branch of theoretical math to find a place where ML could make a demonstrable impact
I have no opinion on this paper, but that is generally how cutting-edge research works. First, choose an impactful unsolved problem you have the highest chance of solving. Then, use the new learnings to ascend higher mountains.
I think you miss my point. This is not the first thing AI/ML has been demonstrated on. There are lots of powerful theoretical techniques that have done really well on research problems. But ML has been challenged to actually break into large-scale practical utility. I think there is an argument that a lot of the mainstream work is solutions looking for problems, and my point is they finally found a problem, and it's a pretty obscure one. I see this more as a victory for finding something ML is good at, rather than finding that it can help with a common or even a specific real-world problem.
I don't like to repeat myself, but I am an AI bull; I believe there is lots of real value that we haven't gotten out of it yet. But problems like this don't help build a case for how useful it's supposed to be, given the investment DeepMind has received and the fact that they and others are searching hard for practical applications. I don't mean to criticize the research in any way; I'm talking about commercial utility.
As far as assisting with pure math problems, it is in its infancy.
Yet ML in general has had numerous practical successes that most people use daily (speech recognition, reverse image search, recommendation systems, etc.).
Yesterday, Yann LeCun wrote on his Facebook page: "I didn't know that Facebook, and now Meta, would be built around AI. If you extirpated deep learning out of Meta today, the company would literally crumble." [Source: https://www.facebook.com/yann.lecun/posts/10158029031237143?...]
My perspective is that ML/DL is just another tool in the shed, with its own advantages and disadvantages. It excels when you have massive amounts of data to begin with, but fails catastrophically otherwise. And encoding domain specific knowledge about your data is the most important trick of creating good models and loss functions, so deep learning won't "automatically solve your problem" unless you're doing something cookie-cutter like image classification.
The verdict is that people in every scientific/engineering field should learn a bit of ML/DL (the underlying theory in it is not that hard), think a bit about the benefits and costs of it, tuck it in their toolbelt as one of their various tools, and move on. There's no harm in learning about the new trendy stuff, but keep in mind that there's no point in banging even the most high-quality hammer at a screw.
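To make the "domain knowledge in the loss function" point above concrete, here is a minimal sketch (assuming PyTorch; the toy non-negativity constraint, the penalty term, and its weight are all illustrative choices of mine, not anything from the article):

```python
# A small sketch of "encoding domain knowledge in the loss", assuming a toy
# regression task where the predicted quantity is known to be non-negative.
import torch
import torch.nn as nn

def domain_informed_loss(pred, target, penalty_weight=10.0):
    mse = nn.functional.mse_loss(pred, target)
    # Domain knowledge: negative predictions are physically impossible here,
    # so penalize them explicitly rather than hoping the data rules them out.
    negativity_penalty = torch.relu(-pred).mean()
    return mse + penalty_weight * negativity_penalty

# Usage with any model producing `pred` for targets `target`:
pred = torch.tensor([[0.5], [-0.2], [1.3]])
target = torch.tensor([[0.4], [0.1], [1.2]])
print(domain_informed_loss(pred, target))
```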
> It excels when you have massive amounts of data to begin with, but fails catastrophically otherwise.
Then what is zero-shot learning, and does it not represent a major advance? Sure, you need lots of data, but then it learns things not even in the distribution. That's pretty cool.
> And encoding domain specific knowledge about your data is the most important trick of creating good models and loss functions
This is true of all machine learning in general, but I don't think it's more true of DL specifically.
> so deep learning won't "automatically solve your problem" unless you're doing something cookie-cutter like image classification.
Remember a few years ago when image classification was hard? Let's take a moment to appreciate how many things deep learning has made easier in the last decade. Now it automatically solves it, with more and more pretrained models for various tasks released on the Hugging Face hub every day.
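As a concrete illustration of how routine this has become, a minimal sketch using a pretrained model (assumes the Hugging Face transformers library and PyTorch are installed; the checkpoint name is just one example of many):

```python
# Off-the-shelf image classification with a pretrained model.
# Assumes `pip install transformers torch pillow`; the checkpoint below is one
# of many publicly available ViT classifiers, used here only as an illustration.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Replace with a path or URL to any image you have on hand.
predictions = classifier("cat.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```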
> tuck it in their toolbelt as one of their various tools, and move on.
Disagree, if I'm training any classifier, I'm starting that work in pytorch (after data wrangling of course). Deep learning changed the entire field. Learn it. Use it.
That "other thread" wasn't throwing hate at deep learning -- not by any stretch. Orr even talking about the failures (or even exaggerations) of ML -- compared to what ML as such claims to be able to do. ML/DL are both perfectly fine for what they do, and have of course made great strides in recent years.
The critique was the tendency to continually pass these strides off as "AI" or something asymptotically close to it. Either on the part of sleazy companies pushing self-driving cars like they're just around the corner. Or supposedly reputable science journals that really ought to know better by now.
There is an element of jealousy/envy, I think. Many here want to be a Data Scientist, and many here want to work at Google (well, Deepmind), and seeing them make progress faster than they ever could (working on a big team with access to google-level resources) is really upsetting. The most upsetting thing is the idea that their names will probably be cemented into history, even if in a small way, and in the future people might discover them in textbooks or whatever. For a lot of people this makes the envy even worse.
So the natural desire is to downplay the discovery, because this does two things: one, it makes you appear smarter than them, and two, it might just prevent them from actually achieving something great, which would take away the source of your envy.
I can get where these people are coming from, but it's so silly to say "look at these idiots, they didn't even consider that their research has limitations!" and think it makes you look like you're genuinely concerned about the limits of the research and not just an envious person picking away at small mistakes because you can't do the hard work.
There's a pretty good Numberphile Podcast episode[0] about this. It doesn't go deep into mathematical details of the topic, it's more of an overview of the current state of things and an explanation of ways in which ML can help coming up with new theories/conjectures.
There has been work of a similar spirit going on since at least 2017 in mathematical/theoretical physics, with similarly limited importance. The ideas have been floating around for a long time already. But when DeepMind gives it a go, they make a big fuss about it, and of course they don't even need to cite previous works ...
Probably entropy. If you randomized fishing line positions, almost all of them would be tangled. Now, putting the fishing line in a box isn't the same thing as randomizing it, but the point of entropy is that small random changes will move the state towards higher entropy, meaning that the random changes ought to move the line towards being tangled.
Of course, this assumes that a tangled line has higher entropy for some definition of entropy. Proving/formalizing that would be too much work, though, but if solved it could make for a nice paper.
15 years ago on slashdot I read an article about this, basically as you say, it is entropy driven and tangled is a more natural state. I'll try and find it.
I can't find it - there is a 2007 Science article, but it's not what I remember. Anyway, this link has a short summary and a couple of other good links:
Yes, I remember it from then too! Oddly, I also recall that part of the result was that 3 dimensions is a kind of sweet spot for cords being prone to tangling: in 2 dimensions they can't tangle, and in 4+ the state space is too large for it to happen without really trying at it.
As others said, it's entropy. Basically, think about the ways that you can arrange your lines in a box. There are relatively few ways that you can arrange the lines so that they're neat and not all tangled up. There are a lot of ways that they can be tangled up. So if the lines get jostled and move a bit, it's more likely that they end up in a slightly more tangled arrangement than in a less tangled arrangement. Do this a bunch of times and your lines end up all tangled up.
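A toy way to see the "many more tangled states" intuition numerically - not a physics simulation, just a sketch where the cord is modelled as a random walk in a box and the number of self-crossings of its 2D projection stands in as a crude proxy for tangledness (box size, cord length, and trial count are arbitrary choices):

```python
# Toy illustration of the counting argument above, not a physics simulation.
import random

def random_cord(n_segments, box=5.0):
    """Random walk with steps of up to unit length, clamped inside a cube of side 2*box."""
    pts = [(0.0, 0.0, 0.0)]
    for _ in range(n_segments):
        x, y, z = pts[-1]
        dx, dy, dz = (random.uniform(-1, 1) for _ in range(3))
        new = tuple(min(box, max(-box, c + d)) for c, d in zip((x, y, z), (dx, dy, dz)))
        pts.append(new)
    return pts

def crossings_2d(pts):
    """Count pairs of non-adjacent segments whose XY projections intersect."""
    def ccw(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    def seg_intersect(p, q, r, s):
        return (ccw(p, q, r) * ccw(p, q, s) < 0) and (ccw(r, s, p) * ccw(r, s, q) < 0)
    proj = [(x, y) for x, y, _ in pts]
    count = 0
    for i in range(len(proj) - 1):
        for j in range(i + 2, len(proj) - 1):
            if seg_intersect(proj[i], proj[i + 1], proj[j], proj[j + 1]):
                count += 1
    return count

for length in (10, 40, 160):
    avg = sum(crossings_2d(random_cord(length)) for _ in range(200)) / 200
    print(f"cord of {length} segments: ~{avg:.1f} projected crossings on average")
```

Longer cords sampled at random pile up crossings very quickly, which is the "overwhelmingly many tangled arrangements" point in miniature.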
I'm definitely doing that. I'm using an amazon rod and reel combo deal by maxcatch, and it came with a very fine fishing line at the tip of the leader. So the wind picks it up, and loops it around the tip of the rod sometimes. But I also had snags inside the reel. And snags and tangles around bushes, and tree limbs.
One technique in particular, called saliency maps, turned out to be especially helpful. It is often used in computer vision to identify which parts of an image carry the most-relevant information. Saliency maps pointed to knot properties that were likely to be linked to each other, and generated a formula that seemed to be correct in all cases that could be tested. Lackenby and Juhász then provided a rigorous proof that the formula applied to a very large class of knots [2].
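For anyone unfamiliar with the technique, here is a rough sketch of gradient-based saliency on tabular inputs - synthetic data and a generic MLP of my own choosing, not the paper's actual setup - showing how average input gradients can flag which features (stand-ins for knot invariants here) the model actually relies on:

```python
# Gradient-based saliency on tabular features; everything here is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_features = 1024, 8

# Synthetic "invariants": the target secretly depends only on features 0 and 3.
X = torch.randn(n_samples, n_features)
y = (2.0 * X[:, 0] - 3.0 * X[:, 3]).unsqueeze(1)

model = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(300):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Saliency: average absolute gradient of the output with respect to each input feature.
X_req = X.clone().requires_grad_(True)
model(X_req).sum().backward()
saliency = X_req.grad.abs().mean(dim=0)
print(saliency)  # features 0 and 3 should dominate, flagging them for closer study
```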
Kewl, and not to discount the result but sounds like ... pattern matching, right?
So why do the editors of all these journals keep feeding off the ML == AI meme?
Your comment is a good example of AI meaning whatever we haven't solved yet.
Yes, I'm aware of the Rodney Brooks quote. I think it's kind of glib and misses the mark.
Take chess, for example. When algorithms were developed to beat humans at chess -- it wasn't, like the quote implies, out of some sense of envy or schadenfreude that people said, "Oh, but that's not real AI".
But rather -- from the intrinsic features of the system that first met this challenge. Which, at the end of the day, were basically about generalized tree search. Okay, there was more to it than that -- but in three words or less, that's the gist of the AlphaZero algorithm, as described in the original paper.
So people said "Oh -- it turns out you don't need human-like cognition to beat humans at chess/go/shogi. You just need good enough tree search, and some decent hardware."
Pretty soon self-driving cars will also not count as AI, because we'll have solved self-driving and therefore it's not hard enough anymore.
That's one of the weird things about the current moment in AI: people keep talking about self-driving cars as if they're a straight-line extrapolation of the breakthroughs we've seen so far (give or take a few details).
They're not. If you just think about it for a minute: nearly all of the challenges met thus far -- taking the two you just mentioned as particularly shining examples -- as significant as they have been, have been pretty darn one-dimensional. "Here's a function; optimize it" (basically).
The reason self-driving cars are floundering at the moment is that the intrinsic problem they're dealing with is an entirely different beast from "solve this symbolic game faster than a human" or "label these photos better than a roomful of monkeys". It is vastly more complex. And the emerging consensus (whatever some companies may say to the contrary) is that the solution is not lurking just around the corner, and not coming our way anytime soon -- certainly not "pretty soon".
Regardless, you probably won’t need human level, general purpose AI to drive a car.
It’s just a dumb definitional game that some people insist on playing. AI doesn’t just mean human level intelligence. The field is bigger than that. When people talk about the AI in their video game, they’re not demanding that bots be as smart as humans.
There are more specific terms like hard AI or AGI for when that’s the topic of discussion, and there’s no reason to assume that whenever someone says AI they mean one of those other things.
> Regardless, you probably won’t need human level, general purpose AI to drive a car.
Right (most likely), but my point was -- we're not going to get there by extrapolating linearly from human-beating technologies such as AlphaZero (impressive though they are). Or by stitching a few of these on top of each other.
> When people talk about the AI in their video game, they’re not demanding that bots be as smart as humans.
Of course the ML/AI distinction is completely muddled in public discourse, and the marketing fluff of many startups hasn't helped either. But the point from the very beginning was: from practitioners (and journals like Nature) we should expect better.
Uh, no. We've had decades of AI being more than "how long til they think like humans?" That's not what AI is. You're talking about AGI or hard AI.
Hell, even Wikipedia makes it clear:
> Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals.[a] Some popular accounts use the term "artificial intelligence" to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving", however, this definition is rejected by major AI researchers.[b]
> AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go).[2][citation needed] As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[3] For instance, optical character recognition is frequently excluded from things considered to be AI,[4] having become a routine technology.[5]
Insisting that others use your non-standard definition is silly.
Because pattern matching is the only significant new thing in AI research the past few decades. Every time they apply pattern matching to something new we will get articles like this and AI optimists will come and say that soon AI will take our jobs. Pattern matched images, pattern matched texts, reverse pattern matching to generate uncanny images and articles, pattern matched gameplay states etc.
Deep learning is pattern finding, not pattern matching. That is the difference. There is a business narrative that's hyping up ML/AI, there's lots of retrospective tracing of modern ML's roots back to something older, but that doesn't change the fact that there has been a big explosion of demonstrated advances opened up by modern approaches over the past 10 years.
Edit: just to expand a bit, the "intelligence" in modern AI is in the training, not inference. I think people see inference in a neural network as pattern matching or some kind of filter, and that's basically true, and "dumb" if you like. But learning the weights is a whole different thing. Stochastic gradient descent et al. often use only comparatively few examples to learn a pattern and embed it in a set of weights in a way that can generalize despite being highly underdetermined. It's not general intelligence, but it's a much different thing than the casual dismissals people like to post, usually directed at the inference part as if the weights just magically appeared.
> Deep learning is pattern finding, not pattern matching.
"Pattern matching" obviously includes "pattern finding" already, in common parlance.
By "deep learning" one means, in effect, "hierarchical pattern finding" (which is basically what "feature" or "representation" learning means to a lay person).
But still, at the end of the day ... pattern finding.
>> "Pattern matching" obviously includes "pattern finding" already, in common parlance.
So, the terminology I recognise is "pattern matching" when referring to matching an object against a pattern, as in matching strings to regular expressions or in unification (where arbitrary programs can be matched); and "pattern recognition" when referring specifically to machine vision tasks, as an older term that as far as I can tell has fallen out of fashion. The latter term has a long history that I don't know very well and that goes back to the 1950's. You can also find it as "statistical pattern recognition".
To be honest, I haven't heard of "pattern finding" before. Can you say what "common parlance" is that? What I'm asking is, whom is it common to, in what circles, etc? Note that a quick google for "pattern finding" gives me only "pattern recognition" results.
To clarify, deep learning is not "pattern matching", or in any case you will not find anyone saying that it is.
In addition to what others are saying, I want to highlight that there's decent evidence that "all" we're doing is pattern finding and pattern matching. Definitely ML isn't at a human level yet, but I see no reason to think that we're missing some secret sauce that's needed to make a true AI. When someday we do achieve human-level intelligence in computers, it will be at least in large part with ML.
We're getting into the territory of some external debates -- that is to say, into areas that have already been debated by others, so there isn't much new I could tell you here.
But basically I'm in the camp of Gary Marcus and others, who would probably respond by saying something along the lines of the following:
"No, there's not particularly good evidence that all we're doing is pattern matching, and a lot of evidence to the contrary. For one thing, a lot of these ML algorithms (touted as near- or superhuman) are easily fooled, especially when you jiggle the environmental context by even just a little bit."
"For another, and on a higher level, what algorithms lack -- but which sentient mammals have in spades -- is an ability to 'catch' themselves, an ability to look at the whole situation and say 'Woah, something just isn't right here'. (This is referred to as the 'elephant in the room' problem with modern AI)."
"And then there's the lack of a genuine survival drive -- not to mention the fact that we don't see any inkling of evidence of some capacity for self-awareness in any currently existing system."
Just as a starting point. But these are some huge differences between currently existing AI systems (however you want to define "AI" here) and actual sentient cognition.
Huge, huge differences... such that I don't see how one can get the idea that "all" we're doing is pattern recognition. Even if it may cover roughly 90 percent of what our neural tissues do - that other 10 percent is absolutely crucial, and completely elusive to any current, working technology as such.
> When someday we do achieve human level intelligences in computers, it will be at least in large part with ML.
No doubt it will, but still there's that ... remaining 10 percent. Which you won't get to with accelerated pattern finding any more than a faster airplane will get you to the moon.
> In addition to what others are saying I want to highlight that there's decent evidence that "all" we're doing is pattern finding and pattern matching.
Is there? Link? It is possible, I just don't think there is enough evidence to say anything about it. What evidence would show that humans are just pattern finding and matching machines?
Some combination (sequential, parallel, meta, whatever convenient shape) of single tasks executed at superhuman ability (computer vision + path learning are there already) would inevitably lead to some sort of singularity when coupled with just state-of-the-art motion, handling, and decision making / optimization. Generalizing occasional singularities, though, still seems a long way off.
We're quite far from human-level intelligence and require drastically more power and data storage. It wouldn't be surprising if the ML powering general AI relates to today's AI the way today's AI relates to 90s ML: still useful, but a qualitatively, meaningfully different approach.
I'm pretty sure they meant "pattern finding" as "discovering new patterns that can be used for matching", not "finding a pattern that matches an existing known one".
But currently humans do that "pattern finding". If you want it to learn to recognize animals, you give it thousands of images of different animals in all sorts of poses, angles and environments and tell it what those animals are; basically the "pattern" is created by the humans who compile the dataset with enough examples to catch every situation, and then the program uses that pattern to find matches in other images. However, if you want a human to recognize animals, it is enough to show this picture and then they will be able to recognize most of these animals in other photos and poses; humans create the pattern from the image and don't just rely on having tons of data:
Edit: In some cases you can write a program to explore a space. For example, you can write a program that plays board games randomly and notes the outcome; that is a data generator. Then you hook that up to a pattern recognizer powered by a data centre, and you now have a state-of-the-art gameplay AI. It isn't that simple, since writing the driver and feedback loop is extremely hard work that an AI can't do; humans have to do it.
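A minimal sketch of that "data generator" idea, using tic-tac-toe and purely random play (everything here is illustrative; a real system would need a far better driver and feedback loop, as noted above):

```python
# Random self-play on a toy game, producing (state, outcome) pairs that a
# separate pattern recognizer could then be trained on.
import random

def winner(b):
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for i, j, k in lines:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

def random_game():
    board, player, history = [0] * 9, 1, []
    while winner(board) == 0 and 0 in board:
        move = random.choice([i for i, c in enumerate(board) if c == 0])
        board[move] = player
        history.append(tuple(board))
        player = -player
    return history, winner(board)

# Label every intermediate position with the game's final outcome.
dataset = []
for _ in range(10_000):
    states, outcome = random_game()
    dataset.extend((s, outcome) for s in states)

print(len(dataset), "labelled positions, e.g.", dataset[0])
```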
Not sure how well acquainted you are with neuroscience, but that is the basis of human cognition. I really don't understand the attempt to belittle "pattern finding". State-of-the-art neuroscience tells us that we have thousands upon thousands of mental systems that have developed useful models to interpret the world. Human intelligence is by and large pattern finding.
I truly think all the criticism of deep learning comes from a lack of understanding of neuroscience. It is not much different. We just currently do it much more inefficiently than a brain does.
> But still, at the end of the day ... pattern finding.
And programming at the end of the day ... if's and for's. See how ridiculous it is? A fuzzy high level concept explaining away the complexity of getting there.
I think most people would take "pattern matching" to mean matching against fixed patterns or engineered features, as opposed to learned features which you might call "pattern finding".
Although the distinctions you're drawing are valid -- from the bigger picture point of view, this is basically hair splitting.
The more basic and important point is: technology that works on the basis of "pattern finding" (however you wish to call it) -- even if it performs exponentially better and faster than humans in certain applications -- is still far different from, and falls far short of, technology that actually mimics full-scale sentient (never mind if it needs to be human) cognition.
Or that is to say; of any technology that can (meaningfully) be called AI.
I think the AI effect is reversed. It should be "any time a program can compete with humans at a new task it will get marketed as AI".
I don't really get the original argument. I never thought "today's AI isn't smart, but if it could play Go, that would be an intelligent agent!". So "the AI effect" is just a strawman; I have seen no evidence that anyone actually made such a change of heart. AI research is important, but nothing so far has been even remotely intelligent, and I never thought "if it could do X it would be intelligent" for any of the X that AI can do today. When an AI can pretty reliably hold a remote job and get paid without getting fired for, say, a year, that is roughly the point at which I'd agree we have an intelligent agent.
Edit: That wouldn't require "much". If the AI can read and understand text, then it can read about backend programming and what makes a good server and build a mental model for things like code quality. Based on what it learned, it can also devise a strategy for how to get a job, so it creates a GitHub account, writes some example projects, puts them on GitHub, writes a CV based on those, and then goes looking for jobs.
Of course, this would only be simple if the agent were intelligent. Not like today's agents, which are coded for extremely specific scenarios. Even the general gameplay AI they talked about is for very specific scenarios compared to what a human deals with. In order to move towards creating an intelligent agent we need to make progress in this direction. But nothing that has come out of AI research recently really makes any progress in that direction.
> I don't really get the original argument. I never thought "todays AI isn't smart, but if it could play Go, that would be an intelligent agent!"
I think the "AI effect" primarily refers to how practical successes in the field of AI get taken for granted and no longer considered AI - leaving AI with the unsolved problems and bleeding-edge research.
If you ordered something for Christmas recently, 'AI' may have been used to understand your search query, rank relevant pages, allow the websites to determine you're not a DDoS attack, allow your bank to determine that your transaction is legitimate, let the confirmation email through your spam filter, then work out routes and other logistics to get it delivered. Before you started your order, 'AI' may have also been involved in the product's design (like optimization of circuits), used to detect defects during manufacturing, or for monitoring and maintenance of rail/road surfaces used for delivery. But all this has been working for a while, so fades into the background rather than being what people think of as AI.
There is also that more philosophical element (rather than just about perception of AI as a field) involving our boundary for what "intelligence" is.
People won't usually phrase it as "once AI can do X I'll be happy calling it intelligent", but sometimes "doing X requires intelligence" or "an unintelligent machine could never do X", with similar implications. Like, going way back, Descartes' claims that machines will be incapable of responding appropriately in conversation.
With AI becoming increasingly capable, even though the timeframes are often far longer than AI-optimists predict, a prevalent view seems to be that AI could behave identically to humans and still not be intelligent. Possibly even be physically identical to humans and still not be intelligent, if you buy into the P-zombie argument (and additionally their nomological possibility, which most people don't).
Holding a remote job (including the interview and portfolio process) is an interesting benchmark, but I can't help but feel your response when that gets achieved may be "oh, but that's basically just a patchwork of language models - clearly in retrospect my criteria must not have been strict enough".
That is not true, though. State-of-the-art neuroscience says, basically, that human cognition is a tower of abstractions built from smaller pattern-finding systems optimizing a reward.
I get that when you boil it down to pattern matching it sounds less impressive, but we are starting to see superhuman pattern matching algorithms, and algorithms that display logic through advanced “pattern matching”. And it’s quite obvious now that pattern matching is far more useful than pure rule based systems for any problem of sufficient complexity.
And AI is a very broad field, including pattern matching (supervised ML or otherwise), logic/rule-based systems, etc. Is your complaint that they use AI/ML interchangeably? Because in this case, while AI is a more general term for the technology than ML, it is not a strictly incorrect title.
> Is your complaint that they use AI/ML interchangeably?
Yes, that's all I'm saying. And it seems like such a known point that they're not the same and not interchangeable, or so I thought (that I'm kind of astonished at the downvotes at my original comment).
> Because in this case, while AI is a more general term for the technology than ML, it is not a strictly incorrect title.
I get what you're saying at the product level -- and the fact that the vast bulk of the public subjected to these technologies couldn't tell you the difference, nor could they begin to care.
But to practitioners, the basic facts remain: ML ≠ AI; it's a proper subset, and we're doing the public a genuine disservice (and arguably causing substantial harm) by pretending to tell them that we're making good progress developing AI as distinct from hypercharged ML, that any day now they'll have self-driving cars ... and all the rest of the crap the industry has basically been telling people.
[0] https://arxiv.org/abs/2112.04324
Edit: the critical review isn’t about the technical content of the paper per se, but rather the hyped news around it.