
Spiking neuronal network simulation technology for contemporary supercomputers

Moritz Helias (Inst of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Juelich), Susanne Kunkel (Functional Neural Circuits Group, Albert-Ludwig University of Freiburg), Jochen Eppler (Inst of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Juelich), Gen Masumoto (High-Performance Computing Team, Computational Science Research Program, RIKEN, Kobe), Jun Igarashi (Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako), Shin Ishii (Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University), Tomoki Fukai (Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako), Abigail Morrison (Functional Neural Circuits Group, Albert-Ludwig University of Freiburg), Markus Diesmann (Inst of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Juelich)

Functional neuronal networks, such as the visual cortex of primates,
comprise on the order of 100 million neurons, with individual areas
exceeding 10 million neurons and 100 billion synapses. The memory
demands of simulations at this scale are met only by distributed
simulation software running on supercomputers, such as the JUGENE
BG/P supercomputer in Juelich and the K computer in Kobe.
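
To make these numbers concrete, a back-of-the-envelope estimate of the
synapse memory alone is sketched below; the bytes-per-synapse figure is
an assumed value for illustration, not a measured NEST quantity.

```python
# Rough memory estimate for one cortical area at the scale quoted above.
NEURONS = 1e7              # neurons in a single area
SYNAPSES_PER_NEURON = 1e4  # ~10^4 synapses per neuron
BYTES_PER_SYNAPSE = 48     # assumed: weight, delay, target, bookkeeping

total_synapses = NEURONS * SYNAPSES_PER_NEURON            # 10^11 synapses
memory_tb = total_synapses * BYTES_PER_SYNAPSE / 1024**4
print(f"~{memory_tb:.1f} TB for synapse objects alone")   # ~4.4 TB
```

Even under these optimistic assumptions, the synapses of a single area
far exceed the memory of any single compute node, which is what forces
the distributed approach.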

While connectivity between brain areas is sparse, connectivity within
an area is far less constrained. A general simulation tool therefore
needs to be able to simulate networks of 10 million neurons with
arbitrary connectivity, often assumed to be random. This presents the
worst-case scenario: firstly, there is no redundancy that would allow
the representation of synaptic connectivity to be compressed;
secondly, communication between the compute nodes is potentially
all-to-all.
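
The all-to-all character of the communication can be seen in a minimal
sketch (not NEST's actual implementation): assuming a simple
round-robin placement of neurons onto MPI ranks, random connectivity
makes every rank a potential communication partner of every other rank.

```python
import random

# Minimal sketch: round-robin neuron placement plus random connectivity.
# All parameters are illustrative assumptions.
RANKS = 8
NEURONS = 1000
TARGETS_PER_NEURON = 100

def rank_of(gid):
    """Round-robin placement of neuron `gid` onto a rank."""
    return gid % RANKS

peers = {r: set() for r in range(RANKS)}
for source in range(NEURONS):
    for _ in range(TARGETS_PER_NEURON):
        target = random.randrange(NEURONS)
        peers[rank_of(source)].add(rank_of(target))

# Every rank ends up with all 8 ranks as communication partners.
print([len(peers[r]) for r in sorted(peers)])
```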

Here we quantitatively demonstrate the recent advances in neural
simulation technology [2] using the example of the simulator NEST [1],
which have led to a readily usable tool for the neuroscientist. As
memory rather than run time limits the maximal size of a neuronal
network, we explain the systematic improvements of the distributed
data structures, adapted to sparse and random connectivity. High
performance and good scaling of network setup and simulation are
achieved with a hybrid code combining OpenMP threads and MPI,
exploiting the multi-core architectures of K and JUGENE. We
parameterize and employ a model of memory consumption to estimate the
machine size needed for a given neuroscientific question; a crucial
tool not only for planning simulations, but also for computation-time
grant applications. Simulations of networks exceeding 10 million
neurons on K and JUGENE are presented to determine the limits of the
current technology and computer architectures.
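
A memory-consumption model of the kind referred to above can be
sketched as a simple function of the network parameters; the per-object
costs and the per-node memory below are illustrative assumptions, not
the values fitted for NEST in [2].

```python
import math

MEM_PER_NODE_GB = 16     # assumed RAM of one compute node
OVERHEAD_GB = 0.5        # assumed fixed cost per node (code, buffers)
BYTES_PER_NEURON = 1000  # assumed per-neuron state
BYTES_PER_SYNAPSE = 48   # assumed per-synapse object

def nodes_required(n_neurons, synapses_per_neuron):
    """Smallest machine size whose aggregate memory holds the network."""
    net_bytes = n_neurons * (BYTES_PER_NEURON
                             + synapses_per_neuron * BYTES_PER_SYNAPSE)
    budget = (MEM_PER_NODE_GB - OVERHEAD_GB) * 1024**3  # usable bytes/node
    return math.ceil(net_bytes / budget)

print(nodes_required(1e7, 1e4))  # ~290 nodes under these assumptions
```

Inverted, i.e. asking which network still fits on a given machine, such
a model is exactly the planning tool needed for computation-time grant
applications.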

Partially supported by the Helmholtz Alliance on Systems Biology, the
Next-Generation Supercomputer Project of MEXT, EU Grant 269921
(BrainScaleS), the VSR computation time grant JINB33 on the JUGENE
supercomputer, and by early access to the K computer at the
RIKEN Advanced Institute for Computational Science.

[1] Gewaltig M-O and Diesmann M (2007) NEST (NEural Simulation Tool). Scholarpedia 2(4):1430.
[2] Kunkel S, Potjans TC, Eppler JM, Plesser HE, Morrison A and Diesmann M (2012) Meeting the memory challenges of brain-scale network simulation. Front. Neuroinform. 5:35. doi: 10.3389/fninf.2011.00035
Preferred presentation format: Poster
Topic: Large scale modeling

Andrew Davison says:
May 11, 2012 02:44 PM
Important work in pushing the limits of neuronal network simulation.