The Man Who Would Teach Machines to Think


Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we've lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.

“It depends on what you mean by artificial intelligence.” Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”

Hofstadter says this with an easy deliberateness, and he says it that way because for him, it is an uncontroversial conviction that the most-exciting projects in modern artificial intelligence, the stuff the public maybe sees as stepping stones on the way to science fiction—like Watson, IBM’s Jeopardy-playing supercomputer, or Siri, Apple’s iPhone assistant—in fact have very little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think.

Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.

Written By: James Somers
continue to source article at theatlantic.com

17 COMMENTS

  1. Inventors got stuck for centuries trying to fly by duplicating birds. The automobile in no way tries to duplicate human or equine anatomy. I see no reason why artificial intelligence has to work the same way as the human brain. Learning its tricks is on the agenda, but that is not the goal. We want something much faster and more reliable.

  2. >
    Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself.

    I have some serious doubts about this. My own thought is, if we understand it, it ain’t intelligent. I suspect that the nearest thing to AI at the moment is Google’s “Deep Learning” system, precisely because the experts who wrote it don’t understand what’s going on. I’m not suggesting the Google system is intelligent, only that it is possibly a step in the evolution of artificial intelligence.

    • In reply to #2 by SomersetJohn:

      AI at the moment is Google’s “Deep Learning” system, precisely because the experts who wrote it don’t understand what’s going on. I’m not suggesting the Google system is intelligent, only it is possibly a step in the evolution of artificial intelligence.

      Of course Google’s engineers understand “deep learning”. The point is the algorithms learn features. What they don’t know is what the features are a priori, but they know how and why the learning works.
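      A minimal sketch of that distinction, assuming a PyTorch-style setup (my own illustration, not anything from the comment): the engineer fully specifies the architecture, the loss, and the update rule; only the filter weights, i.e. the learned “features”, are left to emerge from the data.

```python
# Hypothetical tiny image classifier: everything below is designed by hand
# except the values of the weights, which are learned from data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5),   # 8 filters; their contents are *learned*
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 24 * 24, 10),       # assumes 28x28 grayscale inputs, 10 classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One fully understood update step; what the filters end up detecting is
    specified nowhere in this code: it emerges from the training data."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```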

    • In reply to #2 by SomersetJohn:

      Hi John,

      My own thought is, if we understand it, it ain’t intelligent.

      I don’t understand that test?

      If my Psychologist can understand me I’m not intelligent?

      Turing’s famous test would also appear to be exactly what you deny: If a Computer can appear to be human in a conversation then it is intelligent (at least, that is, as intelligent as the Human chatting to it).

      Flipping your test on its head: if a Computer talks gobbledegook, it’s automatically intelligent. Nah, that doesn’t work.

      The philosophical approach is, of course, define what you mean by intelligence.

      The OP is saying one way to avoid arguments over definition is to say: We’ll make a machine that thinks like a human, then we side-step that issue. That doesn’t really work for me. By what definition are humans intelligent? Looking at the average politician, I’m not convinced that intelligence is a human universal.

      Also, there’s a clear error being made in the OP – equating designed software with evolved brains.

      If I were in charge of funding these guys wouldn’t have got a penny. They just don’t have their ducks in a row.

      Peace.

  3. “Consider that computers today still have trouble recognizing a handwritten A.”

    Oh, for fuck’s sake, computers have been able to recognise handwritten “A”s since the 1970s. That your average nerd can’t write the software says more about the nerd’s lack of ability than about the difficulty of the task. See this @21:30.
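    For what it’s worth, a minimal sketch of how routine this kind of task is today, using scikit-learn’s bundled handwritten-digit images as a stand-in for letters (the dataset and classifier choice are mine, not the commenter’s or the linked video’s):

```python
# Classify small handwritten digit images with an off-the-shelf nearest-neighbour
# classifier; accuracy is typically around 98% on the held-out set.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()                       # 1,797 labelled 8x8 grayscale digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```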

    The journalist is an AI ignoramus who has no hope of conveying any information about AI, just blather.

    • In reply to #3 by God fearing Atheist:

      The journalist is an AI ignoramus who has no hope of conveying any information about AI, just blather.

      I went off on a tangent as soon as I read the bit about chess-playing programs, because I thought it was so clearly wrong, and never finished the actual article until now, and I agree completely. The more I read, the more I realized there were just some basic errors in understanding AI, although I think the fault is more Hofstadter’s than the author’s. Hofstadter is supposed to be an expert in the field, but after reading the complete article (well, more of it at least, I don’t care about what kind of car he drove) I think Hofstadter is just a dilettante who doesn’t really understand some basic cognitive science issues.

      “Consider that computers today still have trouble recognizing a handwritten A.”

      Yes! Exactly, this whole thing with the “A” shows he may not even know what he’s talking about.

      I think contrasting the two tasks, 1) recognize letters and 2) play grand-master chess, highlights how complex human cognition is, and also highlights that the idea of finding one algorithm that qualifies as “intelligent” is doomed to failure. In fact the way you recognize letters has little to do with how you play chess. My guess is that most primates would be very good at recognizing letters; not putting them together to make words, but, say, recognizing an “A” from a “B” in order to get some reward. My guess is primates would do almost as well on that as humans, whereas grand-master chess, not so much.

      A significant amount of the hard work that goes into recognizing letters doesn’t even go on in the brain. There are bundles of neurons that make up the vision system and that pre-process data before it even gets to the brain to recognize things like edges, faces, etc. And this gets very specialized in different animals, for example scientists have identified “bug detector” neurons in frogs (for obvious reasons).

      • In reply to #14 by Red Dog:

        In reply to #3 by God fearing Atheist:

        A significant amount of the hard work that goes into recognizing letters doesn’t even go on in the brain. There are bundles of neurons that make up the vision system and that pre-process data before it even gets to the brain to recognize things like edges, faces, etc. And this gets very specialized in different animals, for example scientists have identified “bug detector” neurons in frogs (for obvious reasons).

        Depends on what you mean by “significant”. The retina does some processing, a bit may be done in the LGN, edges come out by V1. By V4 I expect simple shapes like letters have a good representation, but it is probably put together with word recognition in IT. I’ll guess 95% of the glucose consumed in recognising an “A” is V1 or after. The anatomy is not my strong point. I’m more interested in the layers of computation.

        The frog bug detector is fascinating. I did a primary school visit once where I got the kids doing simple convolution to detect a “ball”. One teacher started taking the piss in the common room after – “what’s that got to do with vision” – so I ripped him a new one with the frog example.

        • In reply to #15 by God fearing Atheist:

          In reply to #14 by Red Dog:
          Depends on what you mean by “significant”.

          This is an example of why AI can be valuable actually. I think a good working definition of significant in this case would be: the percentage of code that would be required in an AI system to solve the same problem.

          So in that sense I think it is accurate to say that a significant amount of what goes on in recognizing letters happens before the brain. Things like edge detection and rotation are very complex and at least some of them are done before the brain. Although I don’t know much of the details about vision so it could be that most of what I’m thinking about is done in the “primitive” areas of the brain, the parts that are least differentiated between us and primates. My main point is that the kind of processing that is required to recognize letters is radically different than playing chess and probably done by sub-systems of the mind that are independent of each other in many ways.

          • In reply to #16 by Red Dog:

            In reply to #15 by God fearing Atheist:

            In reply to #14 by Red Dog:
            Depends on what you mean by “significant”.

            This is an example of why AI can be valuable actually. I think a good working definition of significant in this case would be: the percentage of code that would be required in an AI syste…

            As far as a computer is concerned, a number of convolution filters are run over the pixels of an image to give a 2-D array of stacks of oriented edges. It can be done by hand using Gabor filters, or learned by an ANN, or even learned by PCA or ICA. All techniques basically get the same stuff: oriented bars. Hubel & Wiesel showed the same response from visual area 1 (V1) of cats and monkeys. So machine learning gets the same result as biological evolution, which is reassuring. Hubel & Wiesel’s book used to be on the web, but I can’t find it now. The book discusses all the processing that starts in the retina. It is a fascinating read.
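            A minimal sketch of the “convolution filters over the pixels give stacks of oriented edges” step, using two hand-written derivative kernels as a stand-in for a proper Gabor filter bank (the simplification is mine):

```python
# Convolve an image with oriented filters to get one 2-D response map per
# orientation: a toy version of the V1-style processing described above.
import numpy as np
from scipy.signal import convolve2d

vertical_edge = np.array([[-1, 0, 1],
                          [-2, 0, 2],
                          [-1, 0, 1]], dtype=float)   # Sobel-style kernel
horizontal_edge = vertical_edge.T

def oriented_edge_maps(image):
    """Return a stack of response maps, one per orientation."""
    return np.stack([
        convolve2d(image, vertical_edge, mode="same", boundary="symm"),
        convolve2d(image, horizontal_edge, mode="same", boundary="symm"),
    ])

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                 # a bright square on a dark background
print(oriented_edge_maps(img).shape)  # (2, 32, 32)
```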

            You are right, V1 is a very specialised area that just does this one task in mammals. Interestingly, the number of neurons in each layer gets smaller the more the information is processed. V1 is doing data processing on huge volumes; by the time it gets to IT the brain is doing information processing on lower volumes.

            Chess isn’t using the same modules. The interesting thing is it is still using the same layered neural architecture in a different part of the cortex. So how is chess the same as vision, and how is it different?

  4. Douglas Hofstadter is one of my favorite thinkers and authors. He is thinking about human thinking. That is not the same as thinking about how to make machines smarter, although some things may cross over. If you want a machine to do something that involves rule-following (such as game playing), you can just write that (with enough time and resources). If you want a machine to do something that takes a long time for a human to learn, and where we can’t explain how that learning happens, you need machine-learning algorithms that will train up to instantiate software you could not have written, and that you probably would not be able to explain by reverse engineering. IBM learned that lesson in the development of Watson.

  5. I had the honor to work with some of the people from the Stanford AI program back in the 80’s. At the time Hofstadter’s book was all the rage, and everyone in the industrial lab I was a part of had read it and talked about it. When I mentioned it to some people from Stanford (expecting to impress them that an industry guy would also delve into such weighty issues) I got polite laughter. I talked to one of the more honest students afterwards, and she told me that people don’t take Hofstadter seriously. Now I realize that is just anecdotal, and probably at least partly jealousy that Hofstadter’s name was well known whereas people who actually invented AI, like Feigenbaum, weren’t, but I also think there was something to it.

    Hofstadter never did any significant AI research that I’m aware of. He’s been employed as a professor in AI or cognitive science at some decent schools, but as far as I know none of his research has had any impact or made any discoveries of note. And his writing reminds me of postmodernism: it sounds profound, but when you strip away the high-flown verbiage there isn’t much left.

    As for the “playing chess isn’t intelligent” complaint, anyone who works in AI has been hearing that since the beginning, although it didn’t use to be said about chess playing. When I first studied AI in the 70’s the common refrain was “sure, computers can place blocks on top of each other, but they will never do anything really intelligent like play grand-master-level chess”. Another common theme in AI is “once we figure out how to do it, it’s not AI anymore”. You can see that in the IT world quite a bit. Most of the large product suites like SAP have a rules engine for defining and executing business logic, but it’s not touted as AI anymore because it’s just so accepted as one more tool that IT people need to know about.

    The way I look at it, this comes down to something Daniel Dennett said in a recent panel with Krauss about Alan Turing (someone I have a great deal of respect for). What Turing tried to do was redefine nebulous concepts such as intelligence into concepts that could be measured and tested. That is what a good scientist, including a good AI researcher, does, even though it leaves them open to the inevitable quips from the Chopras and Hofstadters that in doing so they have made the problem “not interesting” anymore.

  6. Red Dog…this kind of reminds me of a conversation I had some time ago about the A-Star algorithm commonly used for “pathfinding” purposes (btw, disclaimer: by mentioning “a-star” I think it is pretty clear I’m really, really far from an…expert in the field of AI; I have a CS degree but that just means I’ve had some introduction to some basic stuff, so feel free to correct what follows), in which some people were stating that it’s not actually a “real” AI algorithm, just a graph-traversal algorithm. Uhmmm, ok, no shit? Of course it’s an algorithm that operates on a data structure. So is an ANN, genetic programming, and…well, every kind of programming. That’s what programming is. What would a “real” AI be made of, magic fairy dust?

    Naturally, what matters in algorithms like A-Star is also the data the programmer feeds them…in the case of pathfinding the environment needs to be transformed into a format that can be processed by the algorithm: the way you divide up the 3D (or 2D, if it’s sufficient for your needs) space, the heuristic function, the weights you assign to every cell/region, etc. And of course a machine that could construct that representation itself from raw image information taken from a camera and then navigate would be a significantly more advanced AI, but it would still be “just” algorithms operating on data structures, merely more complex and involved ones. Now, starting to argue again about whether that machine really “understands” what it’s doing, or is “merely” executing instructions and moving around data…yeah, no, let’s not do that…
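    To make the point concrete, here is a minimal A-Star sketch on a toy 2D grid (the grid, the unit step costs, and the Manhattan heuristic are my own assumptions for illustration): the whole “intelligence” is a priority queue ordered by cost-so-far plus a heuristic estimate of the remaining cost.

```python
# A* on a 4-connected grid: '#' cells are walls, every step costs 1,
# and the Manhattan distance to the goal is the (admissible) heuristic.
import heapq

def astar(grid, start, goal):
    """Return the length of the shortest path from start to goal, or None."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]          # entries are (g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] != "#":
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None

grid = ["....#...",
        ".##.#.#.",
        ".#..#.#.",
        ".#.##.#.",
        "........"]
print(astar(grid, (0, 0), (0, 7)))   # 15 steps: the search routes around the walls
```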

    As for Deep Blue…I can’t be sure whether what they mainly used really was the “brute force” approach of evaluating every possible move and counter-response seven levels deep; if so, it’s somewhat obvious that Kasparov’s brain didn’t use that method, but was able to perform a fast cull of a large number of potential moves by recognizing patterns and by having memorised a large number of previous games played by other grandmasters (if I’m not mistaken, Deep Blue utilized such a database too). Now, Kasparov is most probably not aware of all the calculations happening in his brain when he recognizes these patterns; he just does it. So, clearly, the goal here is to figure out the algorithms that would result in chess-pattern recognition similar to what Kasparov’s brain does, and then, voila, it’s not “brute force” anymore.

    • In reply to #8 by JoxerTheMighty:

      Red Dog…this kind of reminds me a conversation I had some time ago about the commonly used for “pathfinding” purposes A-Star algorithm(btw, disclaimer: by mentioning “a-star” I think it is pretty clear I’m really, really far from an…expert on the field of AI, I have a CS degree but that just mea…

      Exactly. In the article Hofstadter says:

      “Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?”

      What a ridiculous thing to say. Of course Deep Blue says something interesting about how humans play chess. It provides us with an existence proof of what it takes to play grand master level chess. Now you can argue that the way Deep Blue does it isn’t the way Kasparov does it. Deep Blue uses powerful parallel search. So if Kasparov doesn’t do that he must have some other algorithms he uses to prune the search space and remove many of the branches that Deep Blue would need to search. Even if what Hofstadter claims about Deep Blue and Kasparov working in different ways is true it’s wrong to say that Deep Blue doesn’t say anything interesting for AI. The program gives us a starting point for understanding human performance and if humans do it differently there must be some understandable way to define what that alternative method is.
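      As an illustration of what “pruning the search space” means in code, here is a generic alpha-beta sketch (standard textbook pruning over a caller-supplied game tree; it is not a claim about Deep Blue’s actual implementation):

```python
# Minimax with alpha-beta pruning: whole subtrees are skipped once it is clear
# the opponent would never let the game reach them.
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """moves(state) -> legal moves, apply_move(state, m) -> next state,
    evaluate(state) -> score from the maximizing player's point of view."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:          # opponent already has a better option: prune
                break
        return best
    best = float("inf")
    for m in legal:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Toy game tree: the root (a max node) has three children (min nodes) with leaf scores.
tree = ((3, 5), (6, 9), (1, 2))
print(alphabeta(tree, 2, float("-inf"), float("inf"), True,
                moves=lambda s: range(len(s)) if isinstance(s, tuple) else [],
                apply_move=lambda s, m: s[m],
                evaluate=lambda s: s))   # prints 6; the (1, 2) branch is cut short
```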

      As a side note, I’m not convinced that Hofstadter is even correct about the deep search. From what I recall there is evidence that Deep Blue and Kasparov are pretty similar, that chess grand masters consider huge numbers of alternative moves, far more and to a greater level of depth than normal good chess players. But either way it’s interesting, even if they aren’t using the same techniques that in itself says something potentially meaningful about how humans solve complex problems.

  7. What I find sort of “funny” is that we automatically assume that “intelligence” means whatever method the human brain uses to solve problems, and most times that involves some “fuzzy” thinking, even if it’s entirely inefficient. Also, let’s consider this: we give some laymen the task of sorting a sequence of 100 integers (or books based on their titles, or rocks based on their weight, you get the point). Now, sorting really doesn’t fall under the definition of “AI” no matter how you look at it, but I use it as an example of a problem that needs solving. Most probably, the “path” and the number of operations each individual performs in order to sort those 100 integers will be completely different; some will struggle more, some less; they probably won’t just shuffle the numbers randomly until they get them in the correct order, so there will be thinking involved; but when the task is finally complete, most of them wouldn’t be able to say exactly what they did or why.

    And now we give the same problem to a person who knows something about efficient sorting, and he decides to use, say, heapsort: he does it faster than everyone else, and knows exactly what he did and why. But…heapsort is just a fairly rigid algorithm, a simple sequence of steps any dumb machine could perform; in this case the steps were just performed by a human. Does that mean the first group was using “real intelligence” to solve the problem, while the last individual was just mimicking a “program’s behavior that passes as intelligence when it has nothing to do with intelligence”? Why? Because he used the most efficient method, and knew exactly what he was doing, and why?
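    The “rigid sequence of steps” reads like this in code (using Python’s heapq module is my own choice; any textbook heapsort follows the same two phases):

```python
# Heapsort: build a heap in O(n), then repeatedly pop the smallest element.
import heapq

def heapsort(items):
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([42, 7, 19, 3, 88, 7]))   # [3, 7, 7, 19, 42, 88]
```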

  8. Best of luck to him. Programmers have been coming up with efficient problem-solving algorithms for ages. The work of Simon and Newell in the 70’s is seminal. The issue is that our cognitive systems aren’t rational and don’t follow efficient rules or mathematical niceties. We have evolved in a peculiar way in which we detect patterns in small data sets, which drives exploration of knowledge and makes best use of our small short-term memory (Kareev 2012). It leaves us susceptible to seeing patterns which don’t exist, and to unpredictability, which is a major factor in the development of our cognitive biases. Here’s the catch: despite these ‘problems’ we’re remarkably successful as a species, and the fact that evolution hasn’t eliminated the effects of small short-term memory suggests it is an evolutionary advantage. Over to the mathematicians with their predictable rules.

    • In reply to #11 by Vorlund:

      Programmers have been coming up with efficient problem-solving algorithms for ages. The work of Simon and Newell in the 70’s is seminal.

      To categorize what Simon and Newell did (or AI in general) as just being about “efficient problem solving algorithms” is incorrect. Simon and Newell used efficient search algorithms but to my knowledge they didn’t create any of them. It’s true that any decent computer science curriculum will include a course on efficient search (here I mean search as in traversing a graph not necessarily search as in what Google or other search engines do) but those things are usually covered in a class specifically on data structures and algorithms. And for the most part fast search algorithms such as A-Star that Joxer was talking about pre-date AI and were first documented by people like Dijkstra.

      In fact you could almost make the opposite claim, that what Newell and Simon were doing was finding inefficient problem-solving techniques. Of course that wasn’t their goal, but what they studied were very generic problem-solving techniques, e.g. take the problem, decompose it into smaller problems, and solve those sub-problems. They spent a lot of effort developing general problem-solving techniques, hoping to find an algorithm (or algorithms) that could replicate human thinking. The consensus of most cognitive science people these days is that their approach was too general: humans don’t have one huge problem-solving loop that we use for all problems, but rather a collection of many different problem-solving modules that we use depending on the specific problem.
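      A toy illustration of that “decompose it into smaller problems and solve those sub-problems” style, using the Tower of Hanoi as a stand-in example of mine (it is not one of Newell and Simon’s actual programs):

```python
# Moving n disks decomposes into two (n-1)-disk subproblems plus one primitive move.
def hanoi(n, source, target, spare):
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)     # subgoal: clear the top n-1 disks
            + [(source, target)]                    # primitive action: move disk n
            + hanoi(n - 1, spare, target, source))  # subgoal: restack the n-1 disks

print(hanoi(3, "A", "C", "B"))   # the 7 moves that solve the 3-disk puzzle
```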

    • In reply to #11 by Vorlund:

      The issue is that our cognitive systems aren’t rational or go along efficient rules or mathematical niceties.

      It’s certainly true that human cognition isn’t completely rational, but it’s totally wrong to say that cognitive systems aren’t rational or don’t follow mathematical “niceties”. That was actually one of my points earlier: by giving us an existence proof of a system that can play grand-master-level chess, AI provides us with an additional artifact (besides the human brain) that can do an activity we would normally consider intelligent. We can deduce the mathematical properties of the AI system, and we can then say that however the humans do it, their approach must handle the same computational complexity (a mathematical “nicety”).

      So to get specific if we know that on average Deep Blue’s search graphs process X number of nodes with Y number of edges and a depth of N we can then say that however the human masters do it, to reach the same level of efficiency they must be somehow either dealing with graphs of comparable size or invoking additional search pruning heuristics that prune out some of the nodes.

      That’s just one example; this is actually a general point that Chomsky made about the human mind a long time ago in relation to language. He demonstrated that any system that processes natural language has certain mathematical properties. That is what his first major book, Syntactic Structures, is all about: documenting the various kinds of languages and their mathematical properties. He then said that any model of human language had to explain how the mind could deal with these properties. In fact it’s one of the ways Chomsky showed that Skinner’s approach to language had to be wrong: there was no way the statistical approach Skinner advocated could handle the mathematical properties (e.g. recursion) that Chomsky demonstrated any language-understanding system must have.
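      A textbook stand-in for that last point (the example language is my choice, not Chomsky’s actual linguistic data): recognising strings of the form a^n b^n requires keeping an unbounded count of the opening symbols, which is exactly what a fixed-order statistical (n-gram/Markov) model cannot do in general.

```python
# Accept strings of the form "a"*n + "b"*n with n >= 1, using a counter that
# plays the role of the stack a finite-order statistical model lacks.
def accepts_anbn(s):
    depth = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:           # an 'a' after a 'b' breaks the nesting
                return False
            depth += 1
        elif ch == "b":
            seen_b = True
            depth -= 1
            if depth < 0:        # more b's than a's so far
                return False
        else:
            return False
    return depth == 0 and seen_b

print(accepts_anbn("aaabbb"))    # True
print(accepts_anbn("aabbb"))     # False
```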
