
Abstract:

The Blue Brain Project began in July 2005 as a collaboration between Professor
Henry Markram from the Brain Mind Institute at the EPFL (Ecole Polytechnique Fédérale de Lausanne)
and IBM (International Business Machines), aimed at modelling the neocortical column. The neocortical
column represents the basic functional unit of the cerebral cortex in mammals that underlies nearly all
sensory and cognitive processing. These units are repeated millions of times across the cortex, with the
basic structure remaining fundamentally the same from mouse to man. From its origins – IBM's
BlueGene/L supercomputer and more than 10 years of experimental data from Professor Markram's
laboratory – the project has grown to include an international multidisciplinary team of over 35
experimentalists, modellers and computer scientists. The goal of Phase I was to build a cellular-level
model of the somatosensory cortex of a 2-week-old rat corresponding to the dimensions of a
neocortical column as defined by the dendritic arborizations of the layer 5 pyramidal neurons. We have
achieved this goal by developing an entirely new data-driven process for creating, validating and
researching the neocortical column. Reverse-engineering a portion of the neocortex involves capturing
many levels of detail about microscopic cells and fibers – living, dynamic entities that are invisible to
the naked eye. Modelling efforts must examine the experimental design and weigh the potential
inconsistencies and relevance of the resulting data to the construction and refinement of the model. A
simulation-based research process requires that this consistency check occur in an ongoing fashion. The
simulation itself now serves as an essential tool for integrating experimental data and defining new
experiments that can precisely gather the information necessary to capture the complete biological
detail.


The Blue Brain shows gamma oscillations


Cortical oscillations and synchrony have long been touted as candidate mechanisms to solve the
‘binding problem’ in theoretical neuroscience: when we examine the world around us, how do
our brains group multiple parts of the same object together into a coherent whole? A simple
example is the cat standing behind a fence. Even though whole segments of the kitty might be
blocked off from our view, we still perceive it as a single object. This happens even when the
segments of the visual scene are too far apart to be seen by overlapping cells in the retina – so the
information must be ‘bound’ somewhere else in the brain.

Experimental evidence implicating oscillations in this process was first found by Wolf Singer’s
lab in Germany in the late 1980s (Gray et al, 1989). They reported that spatially separated
neurons in cat visual cortex mostly fired at the same time when the cat was presented with
moving bars of light, as long as the neurons both preferred bars of the same orientation and were
aligned in the same direction as the moving bar stimulus. This might seem like a banal result, but
it hinted for the first time that neurons in the neocortex might encode information in the exact
timing of their spikes (relative to some external oscillation), rather than just through their firing
rate over longer time periods. In this way, spatially separated neurons might somehow
coordinate their firing patterns to become part of the same neuronal ensemble, and maybe represent
specific features of the outside world.
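
To make the rate-versus-timing distinction concrete, here's a toy Python sketch (the spike trains, jitter and 5 ms coincidence window are invented for illustration; nothing here is from the Gray et al study). Two trains locked to the same 40 Hz rhythm share far more near-coincident spikes than a rate-matched random train, even though all three trains fire at the same average rate:

    import numpy as np

    rng = np.random.default_rng(0)
    duration = 1.0                                  # seconds
    cycle_times = np.arange(0, duration, 1 / 40)    # one spike per 40 Hz cycle

    # Two trains phase-locked to the same 40 Hz oscillation (2 ms jitter).
    train_a = cycle_times + rng.normal(0, 0.002, cycle_times.size)
    train_b = cycle_times + rng.normal(0, 0.002, cycle_times.size)

    # A third train with the same mean rate but random (Poisson-like) timing.
    train_c = np.sort(rng.uniform(0, duration, cycle_times.size))

    def coincidences(t1, t2, window=0.005):
        # Count spikes in t1 with a partner in t2 within +/- window seconds.
        return sum(np.any(np.abs(t2 - t) <= window) for t in t1)

    print("locked pair :", coincidences(train_a, train_b))
    print("random pair :", coincidences(train_a, train_c))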

Following this discovery, sustained experimental and theoretical interest has produced a huge
library of data exploring the theory, even implicating oscillations in attention and
consciousness. Many, many debates and arguments have ensued over the origins of these
oscillations and whether or not they are really used by the brain to code information. I am not
going to attempt to dip my toe into this ocean here. If you’re interested, I suggest looking up the
experimental work both of Wolf Singer and his former student Pascal Fries, who now runs his
own lab in Nijmegen, Netherlands. BU’s Nancy Kopell has been a driving force on the more
theoretical aspects of oscillations.

Despite the mountain of work on this topic, a real mechanistic description of these oscillations
has yet to be demonstrated in a realistic computational model of the brain. The Blue Brain
project – that other big science experiment in Switzerland – might finally make the link. Earlier
this week at the inaugural INCF conference on Neuroinformatics, Henry Markram reported that a
recent modification to their detailed simulation of a rat cortical column produced persistent
oscillatory activity in the gamma frequency band (roughly 40–80 Hz).

This is significant, because the model wasn’t designed in any way to produce this behaviour. It
simply emerged after setting up the cortical column of 10,000 cells with realistic connectivity
patterns and electrophysiological properties. As far as I understood, they simply stimulated layer
IV and watched a wave of activity build up, propagate throughout the column via layer II/III and
initiate gamma oscillatory activity in layer V. This behaviour only emerged following one of their
weekly updates to the simulation. Markram wouldn’t say exactly what changes they made,
unsurprisingly enough. Expect a publication forthcoming.
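
For an operational feel for what "gamma-band activity" means, here's a generic signal-processing sketch; this is not the Blue Brain team's analysis, and the synthetic trace with its 55 Hz component is made up. It builds a noisy LFP-like signal and looks for a spectral peak between 40 and 80 Hz:

    import numpy as np

    fs = 1000.0                              # sampling rate, Hz
    t = np.arange(0, 2.0, 1 / fs)            # two seconds of signal

    # Toy "LFP": white noise plus a 55 Hz gamma-band component.
    rng = np.random.default_rng(1)
    lfp = rng.normal(size=t.size) + 2.0 * np.sin(2 * np.pi * 55 * t)

    # Power spectrum via the FFT.
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    power = np.abs(np.fft.rfft(lfp)) ** 2

    band = (freqs >= 40) & (freqs <= 80)
    peak = freqs[band][np.argmax(power[band])]
    print(f"dominant gamma-band frequency: {peak:.1f} Hz")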

We could argue all day about the Blue Brain project and its significance. Many people
(especially other experts) do. I had seen several presentations from the BB team before, and I
suppose that I had kind of made up my mind that it was probably going to be a useful logistical
exercise which would generate new tools for neural data sharing and analysis, but ultimately
hopeless in helping us to understand the brain. As Markram himself admitted, the whole model is
“half-baked”, and the experiment was done without a hypothesis. There are so many gaps in our
knowledge (e.g. plasticity rules, dendritic ion channel distributions, neuromodulation) that the
whole endeavour seemed to me to be a waste of time. But after seeing Markram’s talk last
Monday, I am sold. This is mainly for two reasons:

- Of course the current model isn't perfect. But it is a first step on the long road to a
biologically realistic large-scale model of the brain. This will be a slow, iterative process.
- Despite the astronomical number of parameters, the model is actually fairly well
constrained by biology. It's in the right ballpark. There are many phenomena you can
reproduce in any abstract network model which wouldn't work in the Blue Brain (or a
real brain for that matter). As Markram says, why explore all of theoretical parameter
space when we can focus on the biologically relevant subregion within it?


Blue Gene/L is built using system-on-a-chip technology in which all functions of a node (except for main
memory) are integrated onto a single application-specific integrated circuit (ASIC). This ASIC includes 2
PowerPC 440 cores running at 700 MHz. Associated with each core is a 64-bit 'double' floating point unit
(FPU) that can operate in single instruction, multiple data (SIMD) mode. Each (single) FPU can execute
up to 2 'multiply-adds' per cycle, which means that the peak performance of the chip is 8 floating point
operations per cycle (4 under normal conditions, with no use of SIMD mode). This leads to a peak
performance of 5.6 billion floating point operations per second (gigaFLOPS or GFLOPS) per chip or node,
or 2.8 GFLOPS in non-SIMD mode. The two CPUs (central processing units) can be used in 'co-processor'
mode (resulting in one CPU and 512 MB RAM (random access memory) for computation, the other CPU
being used for processing the I/O (input/output) of the main CPU) or in 'virtual node' mode (in which
both CPUs with 256 MB each are used for computation). So, the aggregate performance of a processor
card in virtual node mode is: 2 nodes = 2 × 2.8 GFLOPS = 5.6 GFLOPS, and its peak performance (optimal
use of the double FPU) is: 2 × 5.6 GFLOPS = 11.2 GFLOPS. A rack (1,024 nodes = 2,048 CPUs) therefore has 2.8
teraFLOPS or TFLOPS, and a peak of 5.6 TFLOPS. The Blue Brain Project's Blue Gene is a 4-rack system
that has 4,096 nodes, equal to 8,192 CPUs, with a peak performance of 22.4 TFLOPS. A 64-rack machine
should provide 180 TFLOPS, or 360 TFLOPS at peak performance. BGL, Blue Gene/L; torus, torus-like
connectivity between processors. Modified with permission from IBM (International Business Machines)
© (2005) IBM Corporation.
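
The figures above follow directly from the clock rate and core counts, and a few lines of Python recompute them (note that the per-rack numbers quoted above implicitly round 1,024 nodes to 1,000):

    # Recompute the Blue Gene/L figures quoted above.
    clock_hz = 700e6        # PowerPC 440 core clock: 700 MHz
    flops_simd = 8          # per chip per cycle: 2 cores x 2 multiply-adds x 2 ops
    flops_plain = 4         # per chip per cycle without the double FPU's SIMD mode

    node_peak = clock_hz * flops_simd     # 5.6 GFLOPS per node (peak)
    node_base = clock_hz * flops_plain    # 2.8 GFLOPS per node

    nodes_per_rack = 1024
    rack_peak = nodes_per_rack * node_peak
    rack_base = nodes_per_rack * node_base

    # Note: 1,024 x 5.6 GFLOPS is 5.73 TFLOPS; the article's 5.6 TFLOPS per rack
    # (and 22.4 TFLOPS for 4 racks) comes from rounding to 1,000 nodes per rack.
    racks = 4               # the Blue Brain Project's system
    print(f"per node : {node_base / 1e9:.1f} GFLOPS ({node_peak / 1e9:.1f} peak)")
    print(f"per rack : {rack_base / 1e12:.2f} TFLOPS ({rack_peak / 1e12:.2f} peak)")
    print(f"4 racks  : {racks * rack_base / 1e12:.1f} TFLOPS "
          f"({racks * rack_peak / 1e12:.1f} peak)")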
Can a Supercomputer Think Like a Brain?
Computers have long been thought of as "electronic brains", but most scientists of course
smirked at that term because the machines were very crude representations of our brains for the
most part. A significant number of scientists have been convinced that it could
take many more generations, if ever, before we come up with machines that can think like
humans. For one, it might require far more computing power than even the highest-end
computers of today have on offer; for another, the human brain is far too complex, and it is not
clear that we have understood its functioning even remotely. Such thoughts, however,
do not deter some determined folks.

In the basement of a university in Switzerland sit four black boxes, each about the size of a
refrigerator, and filled with 2,000 IBM microchips stacked in repeating rows. Together they form
the processing core of a machine that can handle over 20 trillion operations per second. This is
Blue Brain. As their web site explains, "The Blue Brain project is the first comprehensive
attempt to reverse-engineer the mammalian brain, in order to understand brain function and
dysfunction through detailed simulations." This is done using a computer that has phenomenal
computing power - a supercomputer, in layman's terms.

The name of the supercomputer is literal: Each of its microchips has been programmed to act just
like a real neuron in a real brain. The Blue Brain team started with a neuron and a nanoscale pipette,
added some bold thinking and advanced electronic design, and ended
up with something truly commendable. The behavior of the computer replicates, with surprising
precision, the cellular events unfolding inside a mind. "This is the first model of the brain that
has been built from the bottom-up," says Henry Markram, a neuroscientist at Ecole
Polytechnique Fédérale de Lausanne (EPFL) and the director of the Blue Brain project.

This is hardly the first time scientists have made efforts to make computers mimic the brain. All
of us saw how Deep Blue, the IBM supercomputer, beat the then world champion Garry
Kasparov in the famous chess match. But most of these efforts were aimed at computers
trying to replicate human thought processes in a very narrow domain, and these domains were
often dominated by quantitative logic rather than qualitative reasoning.

Blue Brain can certainly be thought of as only the next effort in this continuum, but the
difference is that this one is biologically much closer. In previous work, scientists have
been able to unveil physical details, molecules, chemical pathways, enzymes and genes that
power the brain. These efforts and experiments offered insights that helped scientists
understand what the brain does, but not how it does it. This simulation, however, emulates
chemical signaling and functions like a real brain. The current simulation uses around 400
compartments for each neuron, with individual ion channels and biological functions
carefully researched so that the simulation is precise.
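
As an illustration of what compartmental modelling looks like in practice, here is a minimal sketch using the Python interface of the NEURON simulator, the simulation engine the Blue Brain Project builds on. The two-section morphology and all parameter values are illustrative placeholders, not the project's actual models:

    # Minimal compartmental neuron in NEURON's Python interface.
    from neuron import h
    h.load_file("stdrun.hoc")          # standard run system (provides h.continuerun)

    soma = h.Section(name="soma")
    dend = h.Section(name="dend")
    dend.connect(soma(1))              # attach the dendrite to the soma

    soma.L = soma.diam = 20            # microns (illustrative values)
    dend.L, dend.diam = 500, 2
    dend.nseg = 25                     # discretize the dendrite into 25 compartments

    soma.insert("hh")                  # Hodgkin-Huxley channels in the soma
    dend.insert("pas")                 # passive membrane in the dendrite

    stim = h.IClamp(soma(0.5))         # current injection at the mid-soma
    stim.delay, stim.dur, stim.amp = 5, 50, 0.3    # ms, ms, nA

    v = h.Vector().record(soma(0.5)._ref_v)        # record membrane potential
    h.finitialize(-65)                 # mV resting potential
    h.continuerun(100)                 # simulate 100 ms
    print(f"peak somatic voltage: {max(v):.1f} mV")

The column model applies the same idea at scale: roughly 400 compartments per neuron, with channel densities constrained by experimental data, across 10,000 cells.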

What has been most difficult even for supercomputers so far is to understand "experience". Blue
Brain, if it is to simulate our brains, needs to somehow figure out what "experiencing something"
means. (A nice quote from the philosopher David Chalmers, “Experience is information from
the inside; physics is information from the outside.” - Thank you, Clusterflock.) The Blue
Brain team intends to succeed in this by deciphering the connection between the sensations
entering the machine and the flickering voltages of its brain cells. Once the team has been able to
get this correlation right (and I'm not sure this will be easy!), reversing this process should be
relatively easy. If they are able to complete this cycle, the supercomputer should be in a position
to generate "experiences". Fascinating!

Analogous in scope to the Genome Project, the Blue Brain will provide a huge leap in our
understanding of brain function and dysfunction and help us explore solutions to intractable
problems in mental health and neurological disease.

By the end of 2006, the Blue Brain project had created a model of the basic functional unit of the
brain, the neocortical column. At the push of a button, the model could reconstruct biologically
accurate neurons based on detailed experimental data, and automatically connect them in a
biological manner, a task that involves positioning around 30 million synapses in precise 3D
locations.
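
To give a feel for the geometric problem behind those 30 million placements, here is a purely illustrative sketch; the random sample points and the 2-micron apposition threshold are assumptions, and this brute-force search is not the project's actual touch-detection algorithm:

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy morphologies: points sampled along one axon and one dendrite
    # inside a 100-micron cube (not real reconstructions).
    axon = rng.uniform(0, 100, size=(500, 3))
    dendrite = rng.uniform(0, 100, size=(500, 3))

    touch_radius = 2.0   # microns; assumed apposition threshold

    # Brute-force touch detection over all point pairs.
    dists = np.linalg.norm(axon[:, None, :] - dendrite[None, :, :], axis=-1)
    pairs = np.argwhere(dists < touch_radius)

    print(f"candidate synapse sites: {len(pairs)}")
    for a, d in pairs[:3]:
        print("  near", np.round((axon[a] + dendrite[d]) / 2, 1))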

In November, 2007, the Blue Brain project reached an important milestone and the conclusion of
its first Phase, with the announcement of an entirely new data-driven process for creating,
validating, and researching the neocortical column. Blue Brain has so far simulated a single
column of rat neocortex with 10,000 neurons and 30 million synapses; a human neocortical
column has 60,000 neurons.

Impressive!

Read a nice story on Blue Brain here, more updates from the Blue Brain Project web site, and the
Blue Brain IBM/EPFL page @ IBM

Other Related Web Resources

Blue Brain @ Wikipedia


Blue Brain - success?
The Blue Brain Breakthrough
Blue Brain Status and the Future of Whole Brain Simulation
A 2005 article from The Speculist
A 2005 BusinessWeek article

There's a whole range of fascinating resources on the topic of computers and human brains. We
try to list some that we found most useful and interesting:

Why People Think Computers Can't - by Marvin Minsky, the renowned AI pioneer. "Today,
surrounded by so many automatic machines, industrial robots, and the R2-D2's of Star Wars
movies, most people think AI is much more advanced than it is. But still, many 'computer
experts' don't believe that machines will ever 'really think.' I think those specialists are too used
to explaining that there's nothing inside computers but little electric currents. This leads them to
believe that there can't be room left for anything else – like minds or selves. And there are many
other reasons why so many experts still maintain that machines can never be creative, intuitive,
or emotional, and will never really think, believe, or understand anything. This essay explains
why they are wrong." (see this article in PDF format)

Brain vs. Computers @ Neuroscience for Kids - well, this has been written for kids, but precisely
for that reason, the language is so simple and easy to understand that all of us can learn
something from it!

Brains Don't Learn Using 0s and 1s, but They Learn Through Shades of Grey - The processors
in our brain, or in a cluster of computers, are supposed to act sequentially. Not so fast! According
to a new study from Cornell University, this is not true, and our mental processing is continuous.
By tracking mouse movements of students working with their computers, the researchers found
that our learning process is similar to that of other biological organisms: we're not learning through a
series of 0's and 1's. Instead, our brain is cascading through shades of grey.

Computer Intelligence in the Extra-ordinary Future - One requirement for the extraordinary
future is that computers will be as smart as humans. Actually, the authors who present the
extraordinary future clearly think that within the next century computers will far surpass humans
in intelligence. In this chapter the writer describes their reasons for making this claim and
considers whether it is plausible. In order to do this the writer considers related issues such as the
nature of human intelligence, how the brain works, how computers work, realistic projections of
increases in computer processing speed, and different understandings of the concept of thought.

The Chinese Room - A person inside a room gets input in the form of Chinese characters on
cards, and produces output in the form of Chinese characters by looking up the input Chinese
characters in a rule book (written in English) that shows him what Chinese characters to give
back. It turns out that the input Chinese characters are meaningful questions and the output
Chinese characters are appropriate answers to the questions, so to an outside observer, it looks as
if whatever's inside the room understands Chinese. But he doesn't: he's just following rules.
MORAL: computers are like the rule-follower. They don't understand anything, even if they
appear to do so. Brief but interesting stuff discussed here titled Can Computers Think?...see a
related article by John Searle, Is The Brain a Digital Computer?

Most neuroscientists adhere to the pixel view of neurons, arguing that individual cells can't
possibly be clever enough to make sense of subtle concepts; after all, the world's fastest
supercomputers have difficulty performing that pattern-recognition feat. But Itzhak Fried, a
neurosurgeon who leads this UCLA research program, believes he has found "thinking cells" in
the brains of his subjects. If he's right, neuroscientists may be forced to overhaul their view of
how the human brain works, says this 2005 article from MIT titled "Can A Single Brain Cell
Think?"

In a new MIT study (2007), a computer model designed to mimic the way the brain itself
processes visual information performs as well as humans do on rapid categorization tasks. The
model even tends to make errors similar to those of humans, possibly because it so closely follows the
organization of the brain's visual system. More from here

Human Brain Region Functions Like Digital Computer - ScienceDaily Oct., 2006 - A region of
the human brain that scientists believe is critical to human intellectual abilities surprisingly
functions much like a digital computer, according to psychology Professor Randall O'Reilly of
the University of Colorado at Boulder. In a review of biological computer models of the brain
that appeared in the Oct. 6 (2006) edition of the journal Science, O'Reilly contends that the
prefrontal cortex and basal ganglia operate much like a digital computer system. More from here

10 Important Differences Between Brains and Computers - this is a phenomenally useful and
entirely readable article. Please make sure you read it sometime; you will understand why we
should take any claims of mimicking the brain with a huge tablespoon of salt.

An interview with John McCarthy, an AI pioneer and the person credited with coining the term
Artificial Intelligence

Researchers at the MIT McGovern Institute for Brain Research have used a biological model to
train a computer model to recognize objects, such as cars or people, in busy street scenes. Their
innovative approach, which combines neuroscience and artificial intelligence with computer
science, mimics how the brain functions to recognize objects in the real world.

When will computer hardware match the human brain? (a 1997 paper) - This paper describes
how the performance of AI machines tends to improve at the same pace that AI researchers get
access to faster hardware. The processing power and memory capacity necessary to match
general intellectual performance of the human brain are estimated. Based on extrapolation of past
trends and on examination of technologies under development, it is predicted that the required
hardware will be available in cheap machines in the 2020s.
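
The paper's extrapolation logic is easy to replay. The sketch below assumes a brain-equivalent figure of 10^14 instructions per second and compute per dollar doubling every 18 months; both numbers are in the spirit of the paper rather than quoted from it:

    import math

    # Illustrative assumptions (not quoted from the paper):
    brain_ops = 1e14                     # instructions/second, brain-equivalent
    start_year, start_ops = 1997, 1e9    # ~1,000 MIPS in a cheap 1997 machine
    doubling_years = 1.5                 # compute per dollar doubles every 18 months

    doublings = math.log2(brain_ops / start_ops)
    year = start_year + doubling_years * doublings
    print(f"brain-equivalent cheap hardware around {year:.0f}")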

Jeff Hawkins and his colleagues have been focused on researching the brain's neocortex, and
have made significant progress in understanding how it works. Using their theory, called
Hierarchical Temporal Memory, or HTM, they have created a software platform that allows
anyone to build HTMs for experimentation and deployment. You don't program an HTM as you
would a computer; rather you configure it with software tools, then train it by exposing it to
sensory data. HTMs thus learn in much the same way that children do. HTM is a rich theoretical
framework and this article provides a high level overview of the theory and technology. Details
of HTM are available at Numenta. An interview with Jeff here
A Working Brain Model

A computer simulation could eventually allow neuroscience to be carried out in silico.

Wednesday, November 28, 2007


By Duncan Graham-Rowe


An ambitious project to create an accurate computer model of the brain has reached an
impressive milestone. Scientists in Switzerland working with IBM researchers have shown that
their computer simulation of the neocortical column, arguably the most complex part of a
mammal's brain, appears to behave like its biological counterpart. By demonstrating that their
simulation is realistic, the researchers say, these results suggest that an entire mammal brain
could be completely modeled within three years, and a human brain within the next decade.

"What we're doing is reverse-engineering the brain," says Henry Markram, codirector of the
Brain Mind Institute at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, who led
the work, called the Blue Brain project, which began in 2005. (See "IBM: The Computer Brain.")
By mimicking the behavior of the brain down to the individual neuron, the researchers aim to
create a modeling tool that can be used by neuroscientists to run experiments, test hypotheses,
and analyze the effects of drugs more efficiently than they could using real brain tissue.

The model of part of the brain was completed last year, says Markram. But now, after extensive
testing comparing its behavior with results from biological experiments, he is satisfied that the
simulation is accurate enough that the researchers can proceed with the rest of the brain.

"It's amazing work," says Thomas Serre, a computational-neuroscience researcher at MIT. "This
is likely to have a tremendous impact on neuroscience."

The project began with the initial goal of modeling the 10,000 neurons and 30 million synaptic
connections that make up a rat's neocortical column, the main building block of a mammal's
cortex. The neocortical column was chosen as a starting point because it is widely recognized as
being particularly complex, with a heterogeneous structure consisting of many different types of
synapses and ion channels. "There's no point in dreaming about modeling the brain if you can't
model a small part of it," says Markram.

The model itself is based on 15 years' worth of experimental data on neuronal morphology, gene
expression, ion channels, synaptic connectivity, and electrophysiological recordings of the
neocortical columns of rats. Software tools were then developed to process this information and
automatically reconstruct physiologically accurate 3-D models of neurons and their
interconnections.
Connect the dots: A representation of a mammalian neocortical column, the basic building
block of the cortex. The representation shows the complexity of this part of the brain, which has
now been modeled using a supercomputer.
Credit: BBP/EPFL

IBM Computers to Create Virtual Brain


By David Worthington | Published June 6, 2005, 8:35 AM


The question has been posed: When will computer hardware rival the human brain? In the
1980s, futurist Vernor Vinge popularized the notion of a technological singularity, where
artificial intelligence will one day overtake the human brain and even foil any attempt to
comprehend its complexity. That may yet happen, but for the time being, imitation is the
sincerest form of flattery.

IBM, in partnership with scientists at Switzerland's Ecole Polytechnique Fédérale de Lausanne (EPFL) …


Brain power: This representation shows the connectivity of the 10,000 neurons and 30 million
connections that make up a single neocortical column. (The different colors correspond to different
levels of electrical activity.) Having created a biologically accurate computer model of a neocortical
column, scientists are now planning to model the entire human brain within just 10 years.
Credit: BBP/EPFL
