
I am not accusing you of doing this in a bad way, but it kinda begs the question to argue that all intelligence is curve fitting. If we look at it that way, we don't know the characteristics of the hyperspace over which it would be defined as "just" curve fitting, and that matters if we want to carefully say it is "just" curve fitting. There almost certainly is such a hyperspace, simply because we're leaving ourselves so many degrees of freedom in these unspecified speculations that it can't help but cover everything we ever do. But we can't be confident that, even if we knew everything involved, it would be anywhere near the best representation, and that is the only standard we can have for whether something is "really" curve fitting. It isn't that hard to imagine that while an n-dimensional space can be defined on which our intelligence is "curve fitting", there is some better computational paradigm that would both describe it with fewer free parameters (or, in this case, probably a big-O-different number of free parameters) and be easier to work with computationally. In that case we would have solid ground to stand on to say that, no, it isn't really just "curve fitting".

Or to bring it down to Earth in another way, consider just the act of writing a program in the modern world. If you work really, really hard, you can define a space in which our act of programming is just "curve fitting"... but it's far from obvious that this is even remotely a sensible way to look at the world. (See "differentiable programming" for the best counterpoint I know to that: https://en.wikipedia.org/wiki/Differentiable_programming but it's a very small niche right now.) When I'm debugging a program there is almost never any utility at all in trying to think of it as a "curve" that I'm nudging closer to the "correct" curve. A Turing-complete space can be described in terms of curves, but those curves are just awfully complicated and I don't see how that view would help.
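To make "curve fitting" concrete for readers who haven't seen it spelled out: this is a minimal illustrative sketch (mine, not the commenter's) of the paradigm the "intelligence is just curve fitting" view generalizes. Here plain gradient descent fits the two free parameters of a line, y = a*x + b, to data; a neural net is the same idea with far more parameters.

```python
# Minimal gradient-descent curve fit, the core loop that the
# "everything is curve fitting" framing generalizes.
# Data generated by a=2, b=1; the fit should recover those values.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

a, b = 0.0, 0.0   # free parameters
lr = 0.02         # learning rate

for _ in range(5000):
    # Gradients of mean squared error with respect to a and b.
    grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))  # converges toward 2.0, 1.0
```

The whole process is "move the parameters so the curve passes closer to the data" — exactly the operation that is hard to map onto, say, stepping through a debugger.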

My personal suspicion is that while our cognition involves rather less of this "Turing complete" thinking than we'd like to fancy ourselves using, we do irreducibly use elements of it [1], and as long as our best AI models are incapable of representing Turing-complete computations there is simply no chance of them being the answer to true human-scale cognition. (We do have models that can do it, e.g., evolutionary computation, but we lack any sensible idea of how to "update" such models like a neural net. Neural nets themselves in the simplest case aren't Turing complete, and none of the hybrid models seem to get there to me either, though I welcome correction on that point.)
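As a deliberately simple illustration of the distinction above (my sketch, not the commenter's): a plain feedforward net applies a fixed number of function compositions chosen before any input is seen, whereas a Turing-style computation can loop an unbounded, input-dependent number of times — something no fixed-depth composition expresses. The Collatz iteration count serves as the unbounded example here.

```python
# A plain feedforward net: a *fixed* number of function applications,
# determined by the architecture, not by the input.
def feedforward(x, layers):
    for w, b in layers:          # depth is fixed in advance
        x = max(0.0, w * x + b)  # ReLU(w*x + b), scalar for simplicity
    return x

# A Turing-style computation: the number of steps depends on the
# input, with no a-priori bound (here, Collatz iteration count).
def collatz_steps(n):
    steps = 0
    while n != 1:                # unbounded loop
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(feedforward(1.0, [(2.0, 0.0), (1.0, -1.0)]))  # 1.0
print(collatz_steps(27))  # 111
```

(Recurrent nets with unbounded precision and runtime can in principle close this gap, but the simple feedforward case above is the one the comment is pointing at.)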

[1]: Evidence: I don't think we could program Turing-complete machines if we were incapable of thinking that way ourselves. We aren't necessarily great at it; our engineering techniques are deeply shaped by the fact that we can't manipulate very many things at once in this manner, so we have no choice but to break systems into very small modules and combine them in a way that leaves us only a very small number of things to keep track of locally at any given time. But we are still doing non-trivially more than zero of the Turing-complete style of thinking. From there it isn't a hard guess that, even if we aren't all that great at a full mathematical manifestation of this style of thinking, we may indeed be doing something somewhere between what our current neural nets do and this full TC-style thinking at a larger scale, and the inability to capture this in our neural nets is a currently fatal flaw.


