IBM Builds A Scalable Computer Chip Inspired By The Human Brain


By Alex Knapp

“I’m holding in my hand a chip with one million neurons, 256 million synapses, and 4096 cores. With 5.4 billion transistors, it’s the largest chip IBM has built.”

Dr. Dharmendra S. Modha sounds positively giddy as he talks to me on the phone. This is the third time I’ve talked to him about his long-term project, SyNAPSE – an IBM effort with the goal of creating an entirely new type of computer chip whose architecture is inspired by the human brain. This new chip is a major success in that project.

“Inspired” is the key word, though. The chip’s architecture is based on the structure of our brains, but very simplified. Still, within that architecture lies some amazing advantages over computers today. For one thing, despite this being IBM’s largest chip, it draws only a tiny amount of electricity – about 63 mW – a fraction of the power being drawn by the chip in your laptop.

What’s more, the new chip is also scalable, making possible larger neural networks of several chips connected together. The details of the research were published today in Science.

“In 2011, we had a chip with one core,” Modha told me. “We have now scaled that to 4096 cores, while shrinking each core 15x by area and 100x by power.”

Each core of the chip is modeled on a simplified version of the brain’s neural architecture. The core contains 256 “neurons” (processors), 256 “axons” (memory) and 65,536 “synapses” (communications between neurons and axons). This structure is a radical departure from the von Neumann architecture that’s the basis of virtually every computer today (including the one you’re reading this on).
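The crossbar organization described above can be sketched in a few lines of code. This is a loose illustration of the idea – binary synapses wiring axons to integrate-and-fire neurons – not IBM’s actual design; the threshold, leak, and random wiring here are all invented for illustration.

```python
import random

# A sketch of one SyNAPSE-style core: 256 "axons" (inputs) feed 256
# "neurons" through a crossbar of binary "synapses". The integrate-and-fire
# rule and every parameter below are illustrative guesses.
AXONS, NEURONS = 256, 256
random.seed(0)

# 65,536 binary synapses: synapses[a][n] == 1 wires axon a to neuron n.
synapses = [[random.randint(0, 1) for _ in range(NEURONS)] for _ in range(AXONS)]
potential = [0.0] * NEURONS   # membrane potential of each neuron
THRESHOLD = 8.0
LEAK = 1.0

def tick(axon_spikes):
    """Advance the core one time step given a 256-element 0/1 spike list."""
    out = []
    for n in range(NEURONS):
        # Integrate input from every axon that spiked and is wired to neuron n.
        potential[n] += sum(s for a, s in enumerate(axon_spikes) if synapses[a][n])
        potential[n] = max(potential[n] - LEAK, 0.0)   # constant leak
        if potential[n] >= THRESHOLD:
            out.append(1)                # neuron fires...
            potential[n] = 0.0           # ...and resets
        else:
            out.append(0)
    return out                           # output spikes, routed on to other cores

spikes = tick([random.randint(0, 1) for _ in range(AXONS)])
```

Because memory (synapses) sits next to computation (neurons), there is no von Neumann-style shuttling of data to and from a separate memory bank – which is part of why such designs can draw so little power.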

Work on this project began in 2008 as a collaboration between IBM and several universities. The project has received $53 million in funding from the Defense Advanced Research Projects Agency (DARPA). The first prototype chip was developed in 2011, and a programming language and development kit were released in 2013.

“This new chip will provide a powerful tool to researchers who are studying algorithms that use spiking neurons,” Dr. Terrence J. Sejnowski told me. Sejnowski heads the Computational Neurobiology Laboratory at the Salk Institute. He’s unaffiliated with IBM’s project but is familiar with the technology. “We know that such algorithms exist because the brain uses spiking neurons and can outperform all existing approaches, with a power budget of 20 watts, less than your laptop.”

It’s important to note, though, that the SyNAPSE system won’t replace the computers of today – rather, it’s intended to supplement them. Modha likened it to the co-processors used in high-performance computers to help them crunch data faster. Or, in a more poetic turn as he continued talking to me, he called SyNAPSE a “right-brained” computer compared to the “left-brained” architecture used in computers today.

“Current von Neumann machines are fast, symbolic, number-crunchers,” he said. “SyNAPSE is slow, multi-sensory, and better at recognizing sensor data in real-time.”

So to crunch big numbers and do heavy computational lifting, we’ll still need conventional computers. Where these “cognitive” computers come in is in analyzing and discerning patterns in that data. Key applications include visual recognition of patterns – something that Dr. Modha notes would be very useful for applications such as driverless cars.

As Sejnowski told me, “The future is finding a path to low power computing that solves problems in sensing and moving — what we do so well and digital computers do so awkwardly.”

13 COMMENTS

  1. Heavy Breathing

    I just had a nerdgasm!

    it draws only a tiny amount of electricity – about 63 mW

    With such a small power draw, the TDP on this chip must be minuscule, to the point that it probably needs no heat sink at all. For comparison, current 4th Generation Intel Core i7 desktop CPUs have a TDP of 130W and need a heat sink with at least 0.5 sq.m of heat-emitting surface area, coupled with a 140mm fan, to remain at a safe operating temperature.

    I wonder if we’ll eventually see chips like this in desktop and laptop PCs, because this would open new horizons in terms of high-end, ultra-fast, small-form-factor machines. Even if this does happen, it will probably be a decade or more before we see them on store shelves, as I imagine it cost an absolutely astronomical amount of money to manufacture just this prototype.

    • I wonder if we’ll eventually see chips like this in desktop and laptop PCs,

      We won’t. This kind of chip would be useless for anything your current devices can do. It’s a whole different paradigm (that’s what they mean by non-von Neumann), which means that nothing that runs on a traditional computer can run on this chip. And it’s not just a question of recompiling the code the way it would be with other kinds of new advanced chips; you would have to rewrite the code completely from scratch (that’s why they said they needed a new programming language – it’s that different from conventional computing), and for most applications it wouldn’t make sense to do that.

      What this chip enables are new kinds of applications, things that neural networks are good at: signal recognition, pattern matching, things like that.

      • So, in essence, each chip will have to be custom built/configured for its intended task, which it will be able to do fantastically fast, but be utterly useless for any other task?

        What this chip enables are new kinds of applications, things that neural networks are good at: signal recognition, pattern matching, things like that.

        Doesn’t that mean that chips like this could power intelligent (possibly even sentient) robots that will be able to learn in a similar way to humans, instead of having everything they need to know/perform programmed from the start?

        • So, in essence, each chip will have to be custom built/configured for its intended task, which it will be able to do fantastically fast, but be utterly useless for any other task?

          That’s an interesting point. I can’t say for sure, because I don’t know the specifics of this chip and how the IBM people plan to use it, but I think you are correct. In general, that is a significant difference between a neural net approach (what this chip uses) and the conventional programming approach. With a normal program, it’s more or less complete when the user gets it; with a neural net, it has to be trained on specific examples, and hence each neural net becomes specialized based on the specific examples it has encountered.

          Doesn’t that mean that chips like this could power intelligent (possibly even sentient) robots that will be able to learn in a similar way to humans, instead of having everything they need to know/perform programmed from the start?

          IMO, that question can’t really be definitively answered either way yet, because we don’t know enough about what human sentience is. But certainly there are many AI and cognitive-psych people who would say yes to your question. The people who are “connectionist” in their outlook would say definitely yes: the only way we can get real sentience is with a neural net approach. Other people think that neural nets are great for some things, but that we need a more symbolic-logic approach as part of true sentience.

          I think, though, that most AI researchers don’t speculate much about what is or isn’t sentience and instead focus on solving specific problems, and that any robot you encounter in the real world will be a mixture of both paradigms. It will probably use a neural net approach to find its way around, to process speech, and to process vision (e.g. find edges, shapes, faces), but some amount of symbolic logic to do things like planning, speech generation, etc.
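          The training-specialization point above can be made concrete with a toy sketch. This is a classic perceptron – a decades-old algorithm, not anything specific to IBM’s chip, with made-up parameters: trained only on AND examples, the resulting network ends up specialized to AND and nothing else until retrained.

```python
# A network is shaped entirely by the examples it is trained on.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Nudge the weights only when the prediction was wrong.
            w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
            b += lr * (y - pred)
    return w, b

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
w, b = train_perceptron(X, [0, 0, 0, 1])   # trained only on AND examples
and_preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
# and_preds now reproduces AND; the same weights know nothing about any
# other task, which is the specialization described above.
```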

          • I would like to offer my deepest apologies for the absolutely terrible grammar and composition in my above post, which I only noticed with the benefit of sleep.

    • True, but biological brains have a 3.5-billion-year history of evolution by natural selection. This new life form (which, admittedly, has been driven by artificial selection) has only been around for about 100 years. At that rate, guess where artificial intelligence/life will be in another 100 years? Or 50 years? Or even in our lifetimes?

      I am simply saying it is an interesting comparison.

      • This new life form

        By most of the standard definitions of life that I’m familiar with, this is not a “new life form”. It’s a machine for processing information, which has some similarities with how we define life, but they aren’t the same thing. Keep in mind that ultimately these things are still computer chips, nothing more. If you don’t load programs onto them, they aren’t going to do anything.

    • Possibly a great leap for classical computing but unfortunately it’s still far away from non-boolean quantum computing that probably is performed in our brains.

      There is absolutely no evidence that the human brain represents information at the quantum level. And even quantum computers are still “boolean”: they still represent information as bits (ones and zeroes); they just use quantum technology to represent and compute over much larger numbers of bits.
