CNS 2015 Prague: Tutorials

The tutorials will be held on July 18th at the University of Economics, Prague. All tutorials run from 9:00 until 16:30.

Information about the meeting venue and rooms can be found here.


T1: Neural mass and neural field models (room RB 212)
Axel Hutt, Jeremie Lefebvre, Alistair Steyn-Ross, Nicolas Rougier

T2: Modeling and analysis of extracellular potentials (room RB 209)
Gaute T Einevoll, Szymon Łęski, Espen Hagen

T3: Modelling of spiking neural networks with the Brian simulator (room RB 211)
Dan Goodman, Pierre Yger, Romain Brette, Marcel Stimberg

T4: Theory of correlation transfer and correlation structure in recurrent networks (room RB 210)
Moritz Helias, Farzad Farkhoo

T5: Modeling of calcium dynamics and signaling pathways in neurons (room RB 213)
Kim "Avrama" Blackwell

T6: Interfaces in Computational Neuroscience Software: Combined use of the tools NEST, CSA and MUSIC (room RB 113)
Martin Jochen Eppler, Jan Morén, Mikael Djurfeldt

T1: Neural mass and neural field models (room RB 212)

Axel Hutt (INRIA Nancy, France)
Jeremie Lefebvre (University of Lausanne, Switzerland)
Alistair Steyn-Ross (University of Waikato, New Zealand)
Nicolas Rougier (INRIA Bordeaux, France)

The brain exhibits dynamical processes on different spatial and temporal scales. Single neurons have a size of tens of micrometers and fire within a few milliseconds, whereas macroscopic brain activity, such as encephalographic data or the BOLD response in functional Magnetic Resonance Imaging, evolves on a millimeter or centimeter scale over tens of milliseconds. To understand the relation between these two dynamical scales, the intermediate, mesoscopic scale of neural populations is helpful. Moreover, it has been found experimentally that neural populations encode and decode cognitive functions. The tutorial presents a specific type of rate-coding model that is both mathematically tractable and experimentally verifiable. It starts with a physiological motivation of the model, followed by mathematical analysis techniques for neural mass models in the presence of noise, and applications to general anaesthesia and cognitive functions.
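As a concrete illustration of a rate-coding population model of this kind, the following sketch integrates a two-population Wilson-Cowan neural mass model with a forward-Euler scheme. All parameter values are illustrative, not taken from the tutorial material:

```python
import numpy as np

def sigmoid(x):
    """Sigmoidal population firing-rate function, bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate_wilson_cowan(P=1.25, Q=0.0, dt=0.001, T=1.0):
    """Forward-Euler integration of a two-population (excitatory E,
    inhibitory I) Wilson-Cowan neural mass model with external drives
    P and Q. Coupling weights and time constants are illustrative."""
    wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0
    tauE, tauI = 0.01, 0.02  # population time constants (s)
    E, I = 0.1, 0.1
    for _ in range(int(T / dt)):
        dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tauE
        dI = (-I + sigmoid(wIE * E - wII * I + Q)) / tauI
        E += dt * dE
        I += dt * dI
    return E, I
```

Because the rate function is bounded, the population activities stay within (0, 1); varying the drive P moves the system between low- and high-activity regimes, the kind of transition analyzed in the anaesthesia applications above.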


1. Paul C. Bressloff. Spatiotemporal Dynamics of Continuum Neural Fields. J. Phys. A 45 (2012) 033001.
2. P. C. Bressloff and S. Coombes. Physics of the extended neuron. Int. J. Mod. Phys. B11:2343-2393, 1997.
3. A. Hutt and L. Buhry, Study of GABAergic extra-synaptic tonic inhibition in single neurons and neural populations by traversing neural scales: application to propofol-induced anaesthesia. J Comput Neurosci 37(3), 417-437, 2014.
4. J. Lefebvre, A. Hutt, J.-F. Knebel, K. Whittingstall and M. Murray. Stimulus statistics shape oscillations in nonlinear recurrent neural networks, J Neurosci 35(7):2895-2903, 2015.
5. M. Steyn-Ross, D. Steyn-Ross and J. Sleigh. Interacting Turing-Hopf instabilities drive symmetry-breaking transitions in a mean-field model of the cortex: a mechanism for the slow oscillation, Physical Review X 3:021005, 2013.
6. M. Steyn-Ross, D.A. Steyn-Ross, M.T. Wilson and J.W. Sleigh. Modelling brain activation patterns for the default and cognitive states, NeuroImage 45(2):298-311, 2009.
7. G. Is. Detorakis and N. P. Rougier. Structure of Receptive Fields in a Computational Model of Area 3b of Primary Sensory Cortex, Frontiers in Computational Neuroscience 8:76, 2014.
8. Rougier N.P. and J. Vitay. Emergence of Attention within a Neural Population, Neural Networks 19.5, 2006.
9. Hindriks R. and M. van Putten. Meanfield modeling of propofol-induced changes in spontaneous EEG rhythms Neuroimage 60 (4) pp. 2323-2334, 2012.
10. Wang K., M.L. Steyn-Ross, D.A. Steyn-Ross, M.T. Wilson, J.W. Sleigh. EEG slow-wave coherence changes in propofol-induced general anesthesia: experiment and theory Front Syst Neurosci 8 p.215, 2014.
11. Steyn-Ross ML, DA Steyn-Ross, JW Sleigh. Modelling general anaesthesia as a first-order phase transition in the cortex. Prog Biophys Mol Biol 85 (2-3) pp. 369-385, 2004.

T2: Modeling and analysis of extracellular potentials (room RB 209)

Gaute T Einevoll (Norwegian University of Life Sciences, Aas, Norway)
Szymon Łęski (Nencki Institute of Experimental Biology, Warsaw, Poland)
Espen Hagen (Jülich Research Centre and JARA, Jülich, Germany)

While extracellular electrical recordings have been the main workhorse of electrophysiology, the interpretation of such recordings is not trivial [1,2,3]. The recorded extracellular potentials in general stem from a complicated sum of contributions from all transmembrane currents of the neurons in the vicinity of the electrode contact. The duration of spikes, the extracellular signatures of neuronal action potentials, is so short that the high-frequency part of the recorded signal, the multi-unit activity (MUA), can often be sorted into spiking contributions from the individual neurons surrounding the electrode [4]. No such simplifying feature aids us in the interpretation of the low-frequency part, the local field potential (LFP). To take full advantage of the new generation of silicon-based multielectrodes recording from tens, hundreds or thousands of positions simultaneously, we thus need to develop new data analysis methods grounded in the underlying biophysics [1,3,4]. This is the topic of the present tutorial.
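The summation of transmembrane-current contributions just described has a compact mathematical form: in an infinite homogeneous medium of conductivity σ, each point current I_n at position r_n contributes I_n / (4πσ|r − r_n|) to the potential at the electrode. A minimal sketch of this point-source forward model (parameter values are illustrative):

```python
import numpy as np

def extracellular_potential(currents, positions, electrode, sigma=0.3):
    """Point-source forward model for the extracellular potential (V).

    currents  : (N,) transmembrane currents (A); sums to ~0 for a whole neuron
    positions : (N, 3) positions of the current sources (m)
    electrode : (3,) electrode position (m)
    sigma     : extracellular conductivity (S/m); 0.3 is a common estimate
    """
    r = np.linalg.norm(positions - electrode, axis=1)
    return np.sum(currents / (4.0 * np.pi * sigma * r))
```

Note that a balanced source/sink pair at equal distances from the electrode cancels exactly, which is one reason the mapping from neural activity to recorded potentials is non-trivial to invert.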
In the first part of this tutorial we will go through the biophysics of extracellular recordings in the brain, a scheme for biophysically detailed modeling of extracellular potentials, and its application to modeling single spikes [5-7], MUAs [8] and LFPs, both from single neurons [9] and populations of neurons [8,10,11]. We will also cover methods for estimation of current source density (CSD) from LFP data, such as the iCSD [12-14] and kCSD [15] methods, and the decomposition of recorded signals in cortex into contributions from various laminar populations, i.e., (i) laminar population analysis (LPA) [16,17], based on joint modeling of LFP and MUA, and (ii) a scheme using LFP and known constraints on the synaptic connections [18].
In the second part, the participants will get demonstrations and, if wanted, hands-on experience with LFPy [19], a versatile tool based on Python and the simulation program NEURON [20] for calculation of extracellular potentials around neurons, as well as tools for iCSD analysis, in particular CSDplotter (for linear multielectrodes [8]) and iCSD 2D (for 2D multishank electrodes [14]). Further, new results from applying the biophysical forward-modelling scheme to predict LFPs from comprehensive structured network models, in particular the Traub model of thalamocortical activity [21] and the Potjans-Diesmann model of the early sensory cortex microcircuit using hybridLFPy [22,23], will be presented.
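The simplest CSD estimator, the standard double-derivative method that the iCSD and kCSD methods refine, approximates the CSD as −σ ∂²φ/∂z² along a linear electrode. A minimal sketch (illustrative values; the methods covered in the tutorial relax this estimator's assumptions of constant conductivity and laterally homogeneous activity):

```python
import numpy as np

def standard_csd(phi, h, sigma=0.3):
    """Standard CSD estimate from LFPs at equidistant depths.

    phi   : (N,) potentials (V) at contacts spaced h (m) apart along depth
    sigma : extracellular conductivity (S/m); 0.3 is a common assumption
    Returns the CSD (A/m^3) at the N-2 interior contacts, using a
    central-difference second derivative: CSD = -sigma * d2(phi)/dz2.
    """
    d2phi = (phi[:-2] - 2.0 * phi[1:-1] + phi[2:]) / h**2
    return -sigma * d2phi
```

A quadratic potential profile has constant curvature, so this estimator returns a constant CSD for it, a handy sanity check when implementing the method.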


1. KH Pettersen et al, “Extracellular spikes and CSD” in Handbook of Neural Activity Measurement, Cambridge (2012)
2. G Buzsaki et al, Nature Reviews Neuroscience 13:407 (2012)
3. GT Einevoll et al, Nature Reviews Neuroscience 14:770 (2013)
4. GT Einevoll et al, Current Opin Neurobiol 22:11 (2012)
5. G Holt, C Koch, J Comp Neurosci 6:169 (1999)
6. J Gold et al, J Neurophysiol 95:3113 (2006)
7. KH Pettersen and GT Einevoll, Biophys J 94:784 (2008)
8. KH Pettersen et al, J Comp Neurosci 24:291 (2008)
9. H Lindén et al, J Comp Neurosci 29: 423 (2010)
10. H Lindén et al, Neuron 72:859 (2011)
11. S Łęski et al, PLoS Comp Biol 9:e1003137 (2013)
12. KH Pettersen et al, J Neurosci Meth 154:116 (2006)
13. S Łęski et al, Neuroinform 5:207 (2007)
14. S Łęski et al, Neuroinform 9:401 (2011)
15. J Potworowski et al, Neural Comp 24:541 (2012)
16. GT Einevoll et al, J Neurophysiol 97:2174 (2007)
17. P Blomquist et al, PLoS Comp Biol 5:e1000328 (2009)
18. SL Gratiy et al, Front Neuroinf  5:32 (2011)
19. H Lindén et al, Front Neuroinf 7:41 (2014)
20. ML Hines et al, Front Neuroinf 3:1 (2009)
21. H Glabska et al, PLoS ONE 9:e105071 (2014)
22. TC Potjans and M Diesmann, Cereb Cort 24:785 (2014)
23. E Hagen et al, BMC Neuroscience 14(Suppl 1):P119 (2013)

T3: Modelling of spiking neural networks with the Brian simulator (room RB 211)

Dan Goodman (Imperial College London, UK)
Pierre Yger (Institut de la Vision, Paris, France)
Romain Brette (Institut de la Vision, Paris, France)
Marcel Stimberg (Institut de la Vision, Paris, France)

Brian [1,2] is a simulator for spiking neural networks, written in the Python programming language. It focuses on making the writing of simulation code as quick as possible and on flexibility: new and non-standard models can be readily defined using mathematical notation [3]. This tutorial will be based on Brian 2, the current Brian version under development.
We will start by giving a general introduction to Brian 2 and discussing differences between Brian 1 and Brian 2, with specific recommendations on how to convert scripts between the two Brian versions. We will then focus on the specification of neuronal and synaptic models, discussing the various ways Brian offers to implement non-standard models. We will finish by demonstrating Brian's code generation facilities, including the newly introduced "standalone" mode, giving recommendations for improving the simulation performance.
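To make concrete what such an equation-oriented specification computes, here is a leaky integrate-and-fire neuron written out by hand with forward Euler in plain Python; in Brian one states only the membrane equation and the threshold/reset conditions, and the simulator generates comparable low-level code. Parameters here are illustrative:

```python
def simulate_lif(I=1.5, tau=0.010, v_th=1.0, v_reset=0.0, dt=1e-4, T=0.1):
    """Forward-Euler simulation of the LIF equation dv/dt = (I - v)/tau
    with threshold v_th and reset v_reset; returns the spike count
    during T seconds."""
    v = 0.0
    spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (I - v) / tau
        if v >= v_th:
            v = v_reset
            spikes += 1
    return spikes
```

With suprathreshold drive (I > v_th) the neuron fires regularly; with subthreshold drive it stays silent. Brian's code generation (including the standalone mode mentioned above) exists precisely to produce fast versions of loops like this from the high-level model description.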

More details of the agenda for the tutorial along with teaching material will be posted here.


[1] Goodman DFM and Brette R (2008). Brian: a simulator for spiking neural networks in Python. Front Neuroinform 2:5.
[2] Goodman DFM and Brette R (2009). The Brian simulator. Front Neurosci doi:10.3389/neuro.01.026.2009.
[3] Stimberg M, Goodman DFM, Benichoux V, and Brette R (2014). Equation-oriented specification of neural models for simulations. Frontiers in Neuroinformatics 8. doi:10.3389/fninf.2014.00006

T4: Theory of correlation transfer and correlation structure in recurrent networks (room RB 210)

Moritz Helias (Jülich Research Centre, Jülich, Germany)
Farzad Farkhoo (Freie Universität Berlin, Berlin, Germany)

In the first part of this tutorial, we introduce the mathematical tools to determine the firing statistics of neurons receiving fluctuating input from a network. We show how one can apply an efficient Fokker-Planck method to derive the neurons' output statistics whenever the input can be assumed to be Gaussian white (iid) noise. We further study more realistic cases, where the input fluctuations depart from the iid assumption. Using the integrate-and-fire neuron model, we will demonstrate how to compute the firing rate, auto-correlation and cross-correlation functions of the output spike trains. The transfer function of the output correlations given the time scale of the input correlations will be discussed [Moreno-Bote and Parga, 2006; Brunel et al., 2001]. In particular, we will show that the output correlations are generally weaker than the input correlations and how the working regime of the neuron shapes the cross-correlation functions [Ostojic et al., 2009; Helias et al., 2013]. We conclude the first part by investigating the relation between neurons' pairwise correlations due to common fluctuations and their firing rates [de la Rocha et al., 2007].
In the second part, we will consider correlations in recurrent random networks. Using a binary neuron model [Ginzburg & Sompolinsky, 1994], we explain how mean-field theory determines the stationary state and how the network-generated noise linearizes the single neuron response. The resulting linear equation for the fluctuations in recurrent networks is then solved to obtain the correlation structure in balanced random networks. We discuss two different points of view on the recently reported active suppression of correlations in balanced networks: by fast tracking [Renart et al., 2010] and by negative feedback [Tetzlaff et al., 2012]. Finally, we consider extensions of the theory of correlations of linear Poisson spiking models [Hawkes, 1971; Pernice et al., 2011] to the leaky integrate-and-fire model [Trousdale et al., 2012; Pernice et al., 2012] and present a unifying view of linear response theory of weak correlations [Grytskyy et al., 2013].
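A quick numerical illustration of correlation transfer: two integrate-and-fire neurons driven by partially shared Gaussian noise develop positively correlated spike counts, which can then be compared against the input correlation c. This is a hedged simulation sketch with illustrative parameters, not the tutorial's analytical Fokker-Planck treatment:

```python
import numpy as np

def count_correlation(c=0.5, mu=1.1, sigma=0.5, tau=10.0, dt=0.1,
                      steps=5000, trials=200, seed=0):
    """Spike-count correlation of two LIF neurons (threshold 1, reset 0)
    whose inputs share a common Gaussian noise component so that the
    input correlation is c. Times in ms; counts taken over whole trials."""
    rng = np.random.default_rng(seed)
    v = np.zeros((2, trials))        # membrane potentials
    counts = np.zeros((2, trials))   # spike counts per trial
    for _ in range(steps):
        common = rng.standard_normal(trials)
        for i in range(2):
            private = rng.standard_normal(trials)
            noise = np.sqrt(c) * common + np.sqrt(1.0 - c) * private
            v[i] += dt * (mu - v[i]) / tau + sigma * np.sqrt(dt / tau) * noise
            fired = v[i] >= 1.0
            counts[i, fired] += 1
            v[i, fired] = 0.0        # reset after a spike
    return np.corrcoef(counts)[0, 1]
```

Running this with c = 0.5 yields a clearly positive output count correlation; comparing it to c for different working regimes (drive mu, noise sigma) numerically illustrates the correlation-transfer results of de la Rocha et al. (2007) discussed above.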


1. Ginzburg & Sompolinsky (1994), Theory of correlations in stochastic neural networks, PRE 50:3171-3190
2. Hawkes (1971), Point Spectra of Some Mutually Exciting Point Processes, Journal of the Royal Statistical Society Series B 33(3):438-443
3. Helias et al. (2013), Echoes in correlated neural systems, New Journal of Physics 15(2):023002
4. Moreno-Bote & Parga (2006), Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses, PRL 96:02810
5. Moreno-Bote et al. (2014) Information-limiting correlations. Nature Neuroscience, 17(10):1410-7.
6. Ostojic et al. (2009), How Connectivity, Background Activity, and Synaptic Properties Shape the Cross-Correlation between Spike Trains, J Neurosci 29(33):10234-10253
7. Renart et al. (2010), The Asynchronous State in Cortical Circuits, Science 327(5965):587-590
8. de la Rocha et al. (2007), Correlation between neural spike trains increases with firing rate, Nature 448:802-6
9. Tetzlaff et al. (2012), Decorrelation of neural-network activity by inhibitory feedback, PLoS Comp Biol 8(8):e1002596, doi:10.1371/journal.pcbi.1002596
10. Brunel et al. (2001), “Effects of Synaptic Noise and Filtering on the Frequency Response of Spiking Neurons.” Physical Review Letters 86, no. 10: 2186.
11. Grytskyy D, Tetzlaff T, Diesmann M and Helias M (2013) A unified view on weakly correlated recurrent networks. Front. Comput. Neurosci. 7:131. doi: 10.3389/fncom.2013.00131
12. Pernice V, Staude B, Cardanobile S, Rotter S (2012) Recurrent interactions in spiking networks with arbitrary topology Phys. Rev. E 85, 031916
13. J. Trousdale, Y. Hu, E. Shea-Brown, and K. Josic (2012) Impact of network structure and cellular response on spike time correlations. PLoS Computational Biology 8(3):e1002408.

T5: Modeling of calcium dynamics and signaling pathways in neurons (room RB 213)

Kim "Avrama" Blackwell (Krasnow Institute for Advanced Study, George Mason University, Fairfax VA, USA)

Modeling signaling pathways in neurons is of increasing importance for understanding brain function.  Biochemical and molecular mechanisms are crucial for the synaptic and intrinsic plasticity underlying learning and information processing, as well as for neuronal development and pathological degeneration.  Novel biosensors, live cell imaging and other techniques are increasing the quantity of data and revealing the complexity of the molecular processes generating these phenomena.
The purpose of this tutorial is to introduce techniques for modeling calcium dynamics and signaling pathways in neurons.  The first part presents the biological mechanisms (channels, diffusible second messengers, enzymes, kinases) that comprise signaling pathways and control calcium dynamics.  The second part presents the mathematical equations used to model the components of these pathways.  The third part of the tutorial provides an overview of some of the software packages available for such modeling, and explains how to develop deterministic and stochastic models using several of these software tools, including XPPAUT, GENESIS/MOOSE, Smoldyn, and NeuroRD.
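As an example of the kind of mass-action equations presented in the second part, the following sketch integrates a single calcium-buffer binding reaction, Ca + B <-> CaB, with forward Euler. Rate constants and concentrations are illustrative, chosen roughly in the physiological range:

```python
def calcium_buffering(ca_total=1e-6, b_total=1e-5, kon=1e7, koff=10.0,
                      dt=1e-5, T=0.05):
    """Mass-action kinetics of a single buffering reaction Ca + B <-> CaB.

    kon  : forward (binding) rate constant, 1/(M*s)
    koff : backward (unbinding) rate constant, 1/s
    Returns free [Ca] and bound [CaB] in M after T seconds,
    starting from all calcium free and all buffer unbound.
    """
    ca, cab = ca_total, 0.0
    for _ in range(int(T / dt)):
        b = b_total - cab                  # free buffer by conservation
        flux = kon * ca * b - koff * cab   # net binding rate (M/s)
        ca -= dt * flux
        cab += dt * flux
    return ca, cab
```

With these illustrative values the buffer's dissociation constant is Kd = koff/kon = 1 uM, and since the buffer is in excess, most of the calcium ends up bound at equilibrium; full pathway models chain many such reactions (and add influx, extrusion and diffusion terms), which is where the dedicated simulators listed above come in.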


1. Andrews SS, Addy NJ, Brent R, Arkin AP. Detailed simulations of cell biology with Smoldyn 2.1. PLoS Comput Biol. 2010 Mar 12;6(3):e1000705. doi: 10.1371/journal.pcbi.1000705.
2. Blackwell KT. Approaches and tools for modeling signaling pathways and calcium dynamics in neurons. J Neurosci Methods. 2013 Nov 15;220(2):131-40. doi: 10.1016/j.jneumeth.2013.05.008. Epub 2013 Jun
3. Baudry M, Zhu G, Liu Y, Wang Y, Briz V, Bi X. Multiple cellular cascades participate in long-term potentiation and in hippocampus-dependent learning. Brain Res. 2014 Dec 4. pii: S0006-8993(14)01617-5. doi: 10.1016/j.brainres.2014.11.033
4. Holtmaat A, Randall J, Cane M. Optical imaging of structural and functional synaptic plasticity in vivo. Eur J Pharmacol. 2013 Nov 5;719(1-3):128-36. doi: 10.1016/j.ejphar.2013.07.020. Epub 2013 Jul 18.
5. Gorshkov K, Zhang J. Visualization of cyclic nucleotide dynamics in neurons. Front Cell Neurosci. 2014 Dec 4;8:395. doi: 10.3389/fncel.2014.00395.
6. Kotaleski JH, Blackwell KT. Modelling the molecular mechanisms of synaptic plasticity using systems biology approaches. Nat Rev Neurosci. 2010 Apr;11(4):239-51. doi: 10.1038/nrn2807.
7. Pascoli V, Cahill E, Bellivier F, Caboche J, Vanhoutte P. Extracellular Signal-Regulated Protein Kinases 1 and 2 Activation by Addictive Drugs: A Signal Toward Pathological Adaptation. Biol Psychiatry. 2014 Dec 15;76(12):917-926. doi: 10.1016/j.biopsych.2014.04.005. Epub 2014 Apr 18.

T6: Interfaces in Computational Neuroscience Software: Combined use of the tools NEST, CSA and MUSIC (room RB 113)

Martin Jochen Eppler (Simulation Lab Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Forschungszentrum Jülich, Jülich, Germany)
Jan Morén (Neural Computation Unit, Okinawa Institute of Science and Technology, Okinawa, Japan)
Mikael Djurfeldt (PDC center for high performance computing, KTH and INCF Stockholm, Sweden)

In this workshop we demonstrate how the MUSIC and ConnectionGenerator interfaces allow the NEST simulator to work as a module in a larger simulation and use external libraries for generation of connectivity.

Current simulation environments in computational neuroscience, such as NEURON, NEST or GENESIS, each provide many of the tools a user needs to carry out high-quality simulation studies.  However, since models are described differently in each environment, and may even depend on specific features of the environment, it is hard to move models between environments, and the modeler is stuck with the tools of the environment for which the model was developed.  This also makes it difficult to build larger simulations which re-use existing models as components.  As systems grow more complex and encompass more subsystems, they rapidly become unwieldy to develop. Monolithic systems make it infeasible to reuse separate model implementations for parts of the system.

Furthermore, in other fields of numerical computation, the modeler often has the freedom to assemble the tools of choice out of a set of mesh generators, solvers, etc.  Again, the monolithic structure of software in computational neuroscience prevents this: we are not free to choose among wiring routines, solvers or neuronal spike communication frameworks. Standard model description languages, such as PyNN, NeuroML and NineML, provide a partial solution by unifying the description of models, thereby improving reproducibility and making it easier to move models between environments. Environments structured as frameworks, such as Genesis3 or MOOSE, also address the problems described above. Our aim with this workshop is to promote the use of generic interfaces in computational neuroscience software.

Interfaces allow for the use of alternative implementations of software components.  In this tutorial, we demonstrate and teach the tools NEST (a network simulator), CSA (a connectivity description language) and MUSIC (a tool for simulations across multiple environments) and show how they interact through generic interfaces. MUSIC is an interface and library which enables connecting separate models in real time, even when they are implemented in separate simulator systems. The connections defined by MUSIC ports effectively implement an API for other models to use. This enables dividing the development of complex systems across areas and team members, and interfacing the model with outside data sources and sinks.  The ConnectionGenerator interface allows the use of different connection-generating libraries in simulators that support the interface. This lets you plug in the library of your choice for more freedom in describing your models.
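The spirit of such a generic interface can be sketched in plain Python: any object that iterates over (source, target) pairs can drive a simulator's wiring routine, so connectivity libraries and simulators only need to agree on the iteration protocol. This is a hypothetical minimal sketch to convey the idea, not the actual ConnectionGenerator API:

```python
import random

class RandomConnections:
    """Sketch of a connection generator: yields (source, target) pairs
    of a Bernoulli random graph with connection probability p.
    Illustrative stand-in for a connectivity library such as CSA."""
    def __init__(self, n_sources, n_targets, p, seed=42):
        self.n_sources = n_sources
        self.n_targets = n_targets
        self.p = p
        self.seed = seed

    def __iter__(self):
        rng = random.Random(self.seed)
        for s in range(self.n_sources):
            for t in range(self.n_targets):
                if rng.random() < self.p:
                    yield s, t

def wire(connect, generator):
    """The simulator side needs only the iteration protocol to consume
    any generator, decoupling wiring libraries from the simulator core."""
    for s, t in generator:
        connect(s, t)
```

A simulator-side callback could then be wired with `wire(my_connect, RandomConnections(100, 100, 0.1))`, where `my_connect` is a hypothetical stand-in for the simulator's internal connection routine; swapping in a different generator changes the connectivity without touching the simulator.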

Hands-on sessions will allow participants to work on coupling their own code to either the ConnectionGenerator interface or MUSIC. Support will be provided by the authors and experienced users of the interfaces.