CNS 2014 Québec City: Tutorials

The tutorials will be held on July 26th at the Québec City Conference Center.

 

Tutorial    Morning (9am to 12 noon)    Afternoon (1:30pm to 4:30pm)
T1          Room 207                    Room 207
T2          ---                         Room 2102B
T3          Room 2104A                  Room 2104A
T4          Room 2104B                  Room 2104B
T5          Room 2103                   ---
T6          Room 2102B                  ---
T7          Room 2101                   Room 2101
T8          ---                         Room 2103
T9          Room 2105                   Room 2105

List of Tutorials

The following list will be updated as more tutorials are confirmed and as abstracts become available.

T1: The Neural Engineering Framework (NEF): A General Purpose Method for Building Spiking Neuron Models

Chris Eliasmith and Terrence Stewart, University of Waterloo, CA

T2: Themes in Computational Neuroendocrinology

Joel Tabak, Florida State University, USA

T3: Theory of correlation transfer and correlation structure in recurrent networks

Ruben Moreno-Bote, Foundation Sant Joan de Deu, Barcelona, Spain

T4: Modeling and analysis of extracellular potentials

Gaute Einevoll (Norwegian University of Life Sciences, Ås, Norway) and others

T5: NEURON Simulation Software

Bill Lytton (SUNY Downstate Medical Center, US) and others

T6: Constructing biologically realistic neuron and network models with GENESIS

Hugo Cornelis, University of Texas Health Science Center, San Antonio, USA

T7: Modeling of Spiking Neural Networks with BRIAN

Romain Brette (Institut de la Vision, Paris, France) and others

T8: Simulating large-scale spiking neuronal networks with NEST

Jochen M. Eppler & Jannis Schücker (Research Center Jülich, Germany)

T9: Neuronal Model Parameter Search Techniques

Cengiz Günay (Emory University, USA) and others


Tutorial Abstracts

T1: The Neural Engineering Framework (NEF): A General Purpose Method for Building Spiking Neuron Models

Chris Eliasmith and Terrence Stewart, University of Waterloo, CA

We have recently created the world's largest biologically plausible brain model that is capable of performing several perceptual, motor, and cognitive tasks (Eliasmith et al., 2012). This model uses 2.5 million spiking neurons, takes visual input from a 28x28 pixel visual field, and controls a physically modelled arm. It has been shown to match a wide variety of neurophysiological and behavioral measures from animals and humans performing the same tasks. This tutorial is meant to introduce the software toolkit (Nengo) and theoretical background (NEF) to allow other researchers to use the same methods for exploring a wide variety of brain functions. We will focus on the underlying theory of the Neural Engineering Framework (NEF; Eliasmith and Anderson, 2003), a general method for implementing large-scale, nonlinear dynamics using spiking neurons.

Our emphasis will be on building such models using a GUI and scripting in our open-source toolkit Nengo. We will help participants construct networks that perform linear and nonlinear computations in high-dimensional state spaces, including arbitrary attractor networks (point, line, cyclic, chaotic), controlled oscillators and filters, and winner-take-all networks. We will discuss how such networks can be learned online with a spike-based learning rule, as well as how they can be constructed more efficiently offline. If time permits, the tutorial will introduce our Semantic Pointer Architecture (Eliasmith, 2013), encapsulated in a Python module for Nengo which can be used to rapidly implement large-scale cognitive models that include (basic) visual processing, motor control, working memory, associative memory, and cognitive control.
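
Since the hands-on part revolves around scripting, here is a minimal sketch in the style of the Nengo 2 Python API; the network, the squaring function, and all parameter values are illustrative assumptions, not material from the tutorial:

    import numpy as np
    import nengo

    model = nengo.Network(label='NEF demo')
    with model:
        # scalar input signal
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        # two populations of spiking LIF neurons, each representing a scalar
        a = nengo.Ensemble(n_neurons=100, dimensions=1)
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        # connection weights are solved for so that b represents a squared
        nengo.Connection(a, b, function=lambda x: x ** 2)
        probe = nengo.Probe(b, synapse=0.01)

    sim = nengo.Simulator(model)
    sim.run(1.0)
    # sim.data[probe] holds the decoded estimate of the squared signal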

Audience

All participants are encouraged to bring a laptop for installing and running Nengo (Linux, OS X, and Windows versions are provided), allowing for hands-on interactions with the models discussed.

References

  1. Eliasmith, C. (2013). How to build a brain: A neural architecture for biological cognition. New York, NY: Oxford University Press.
  2. Eliasmith, C., & Anderson, C. (2003). Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge: MIT Press.
  3. Eliasmith, C., Stewart T.C., Choo X., Bekolay T., DeWolf T., Tang Y., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202-1205.


T2: Themes in Computational Neuroendocrinology

Joel Tabak, Florida State University, USA

Computational neuroendocrinology brings together the various efforts, at different levels of organization, to better understand neuroendocrine regulation using computational models. Neuroendocrine systems are organized in “endocrine axes”. Each axis includes neuronal populations in the hypothalamus, cells in the pituitary gland that release one or more hormones, and the target organ of this particular set of hormones.

Computational models describe the activity of hypothalamic neurons and how these neuroendocrine cells regulate the activity of pituitary cells that secrete hormones such as growth hormone, prolactin, and luteinizing hormone. They may also describe how hormones released by target organs in response to pituitary hormones, such as steroids, feed back onto and affect hypothalamo-pituitary regulation. One recurring theme is to understand how these regulations can produce pulsatile patterns of hormone secretion, and how target cells interpret these pulsatile patterns.

In this tutorial we will present examples of models that illustrate important themes in computational neuroendocrinology. These models range from the single-cell level to the network level and, further, to the multi-organ level. They will emphasize some key features of neuroendocrine systems: endocrine cells have broad action potentials and bursts that rely more on voltage-dependent Ca2+ than Na+ channels; the main transmitters of neuroendocrine regulation bind not to receptor channels but to G-protein-coupled receptors that trigger second-messenger cascades, leading to protein phosphorylation or gene expression; as a result, neuroendocrine regulation operates not on the millisecond time scale but on much slower time scales, from seconds to days.
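
As a deliberately generic illustration of the single-cell level (our own notation, not any specific published model), pituitary bursting is often written in Hodgkin-Huxley form with fast Ca2+-dependent spiking coupled to a slow intracellular Ca2+ variable:

$$C_m \frac{dV}{dt} = -\left( I_{Ca}(V) + I_K(V,n) + I_{K(Ca)}(V,c) + I_{leak}(V) \right), \qquad \frac{dc}{dt} = -f_c \left( \alpha I_{Ca}(V) + k_c c \right),$$

where $c$ is the cytosolic Ca2+ concentration, $f_c$ the fraction of free Ca2+, and the slowly activating $K(Ca)$ current terminates each burst, producing the slow time scales mentioned above.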


T3: Theory of correlation transfer and correlation structure in recurrent networks

Ruben Moreno-Bote, Foundation Sant Joan de Deu, Barcelona, Spain

In the first part, we will study correlations arising from pairs of neurons sharing common fluctuations and/or inputs. Using integrate-and-fire neurons, we will show how to compute the firing rate, auto-correlation, and cross-correlation functions of the output spike trains. The transfer function from input correlations to output correlations will be discussed. We will show that the output correlations are generally weaker than the input correlations [Moreno-Bote and Parga, 2006], that the shape of the cross-correlation functions depends on the working regime of the neuron [Ostojic et al., 2009; Helias et al., 2013], and that the output correlations strongly depend on the output firing rate of the neurons [de la Rocha et al., 2007]. We will study generalizations of these results to the case where the pair of neurons is reciprocally connected.
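
For reference, the central quantities can be written as follows (standard definitions, in our notation): for spike trains $s_i(t)$ with mean rates $\nu_i$, the cross-correlation function and the spike-count correlation coefficient are

$$C_{12}(\tau) = \langle s_1(t)\, s_2(t+\tau) \rangle - \nu_1 \nu_2, \qquad \rho_T = \frac{\mathrm{Cov}(n_1, n_2)}{\sqrt{\mathrm{Var}(n_1)\,\mathrm{Var}(n_2)}},$$

where $n_i$ counts the spikes of neuron $i$ in a window of length $T$; correlation transfer then asks how $\rho_T$ at the output relates to the correlation imposed at the input.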

In the second part, we will consider correlations in recurrent random networks. Using a binary neuron model [Ginzburg & Sompolinsky, 1994], we explain how mean-field theory determines the stationary state and how network-generated noise linearizes the single-neuron response. The resulting linear equation for the fluctuations in recurrent networks is then solved to obtain the correlation structure of balanced random networks. We discuss two different points of view on the recently reported active suppression of correlations in balanced networks: by fast tracking [Renart et al., 2010] and by negative feedback [Tetzlaff et al., 2012]. Finally, we consider extensions of the theory of correlations of linear Poisson spiking models [Hawkes, 1971] to the leaky integrate-and-fire model and present a unifying view of linearized theories of correlations [Helias et al., 2011].
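
A compact, schematic way to summarize such linearized theories (our notation, not necessarily that of the cited papers): once network noise has linearized the single-neuron response, fluctuations $\delta\mathbf{m}$ around the stationary state obey

$$\tau \frac{d\,\delta\mathbf{m}}{dt} = (W - \mathbb{1})\,\delta\mathbf{m} + \boldsymbol{\xi}(t), \qquad \langle \boldsymbol{\xi}(t)\,\boldsymbol{\xi}(t')^{\mathsf T} \rangle = D\,\delta(t-t'),$$

whose stationary covariance $C$ solves the Lyapunov equation $AC + CA^{\mathsf T} + D = 0$ with $A = (W - \mathbb{1})/\tau$; the decorrelation mechanisms discussed above correspond to properties of the effective connectivity $W$.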

Finally, we will revisit the important question of how correlations affect information and vice versa [Zohary et al., 1994] in neuronal circuits, showing novel results on the information content of recurrent networks of integrate-and-fire neurons [Moreno-Bote and Pouget, Cosyne abstracts, 2011].

References

  1. de la Rocha et al. (2007), Correlation between neural spike trains increases with firing rate, Nature 448:802-6

  2. Ginzburg & Sompolinsky (1994), Theory of correlations in stochastic neural networks, PRE 50:3171-3190

  3. Hawkes (1971), Point Spectra of Some Mutually Exciting Point Processes, Journal of the Royal Statistical Society Series B 33(3):438-443

  4. Helias et al. (2011), Towards a unified theory of correlations in recurrent neural networks, BMC Neuroscience 12(Suppl 1):P73

  5. Helias et al. (2013), Echoes in correlated neural systems, New Journal of Physics 15(2):023002

  6. Moreno-Bote & Parga (2006), Auto- and crosscorrelograms for the spike response of leaky integrate-and-fire neurons with slow synapses, PRL 96:028101

  7. Ostojic et al. (2009), How Connectivity, Background Activity, and Synaptic Properties Shape the Cross-Correlation between Spike Trains, J Neurosci 29(33):10234-10253

  8. Renart et al. (2010), The Asynchronous State in Cortical Circuits, Science 327(5965):587-590

  9. Shadlen & Newsome (1998), The variable discharge of cortical neurons: implications for connectivity, computation, and information coding, J Neurosci 18:3870-96

  10. Tetzlaff et al. (2012), Decorrelation of neural-network activity by inhibitory feedback, PLoS Comp Biol 8(8):e1002596, doi:10.1371/journal.pcbi.1002596   

  11. Zohary et al. (1994), Correlated Neuronal Discharge Rate and Its Implications for Psychophysical Performance, Nature 370:140-143


T4: Modeling and analysis of extracellular potentials

Gaute Einevoll (Norwegian University of Life Sciences, Ås, Norway)

Szymon Łęski (Nencki Institute of Experimental Biology, Warsaw, Poland)

Espen Hagen (Norwegian University of Life Sciences, Ås, Norway)

While extracellular electrical recordings have been the main workhorse of electrophysiology, the interpretation of such recordings is not trivial [1,2,3]. The recorded extracellular potentials in general stem from a complicated sum of contributions from all transmembrane currents of the neurons in the vicinity of the electrode contact. The duration of spikes, the extracellular signatures of neuronal action potentials, is so short that the high-frequency part of the recorded signal, the multi-unit activity (MUA), can often be sorted into spiking contributions from the individual neurons surrounding the electrode [4]. No such simplifying feature aids us in the interpretation of the low-frequency part, the local field potential (LFP). To take full advantage of the new generation of silicon-based multielectrodes recording from tens, hundreds, or thousands of positions simultaneously, we thus need to develop new data analysis methods grounded in the underlying biophysics [1,3,4]. This is the topic of the present tutorial.
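
As background, the standard volume-conductor forward model underlying all of the above (quasi-static approximation, homogeneous extracellular medium; our notation) gives the potential at position $\mathbf{r}$ as

$$\phi(\mathbf{r}, t) = \frac{1}{4\pi\sigma} \sum_{n=1}^{N} \frac{I_n(t)}{|\mathbf{r} - \mathbf{r}_n|},$$

where $I_n(t)$ are the transmembrane currents of the $N$ neural compartments at positions $\mathbf{r}_n$ and $\sigma$ is the extracellular conductivity.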

In the first part of this tutorial we will go through

  • the biophysics of extracellular recordings in the brain,
  • a scheme for biophysically detailed modeling of extracellular potentials and the application to modeling single spikes [5-7], MUAs [8] and LFPs, both from single neurons [9] and populations of neurons [8,10,11], and
  • methods for
    • estimation of current source density (CSD) from LFP data, such as the iCSD [12-14] and kCSD [15] methods (the classical estimator they refine is sketched after this list), and
    • decomposition of recorded signals in cortex into contributions from various laminar populations, i.e., (i) laminar population analysis (LPA) [16,17] based on joint modeling of LFP and MUA, and (ii) a scheme using LFP and known constraints on the synaptic connections [18].
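
For the CSD item above, the traditional estimator is the discrete second spatial derivative of the LFP (schematic): with laminar contacts spaced $h$ apart along depth $z$,

$$\mathrm{CSD}(z) \approx -\sigma\, \frac{\phi(z+h) - 2\phi(z) + \phi(z-h)}{h^2},$$

which the iCSD and kCSD methods generalize by making the assumptions about source geometry explicit.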

In the second part, the participants will get demonstrations and, if wanted, hands-on experience with LFPy [19], a Python tool built on the NEURON simulator [20] for forward modeling of extracellular potentials from detailed neuron models.

Further, new results from applying the biophysical forward-modelling scheme to predict LFPs from comprehensive structured network models, in particular

  • the Traub-model for thalamocortical activity [21], and
  • the Potjans-Diesmann microcircuit model for a visual cortical column [22,23],

will be presented.
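
For readers who want a head start on the demonstrations, a minimal sketch of the LFPy workflow (the morphology file name and all parameter values are placeholders, not tutorial material):

    import numpy as np
    import LFPy

    # load a morphology (placeholder file name) and simulate for 100 ms
    cell = LFPy.Cell(morphology='morphology.swc', tstop=100.)

    # one excitatory synapse on the compartment closest to z = 600 um
    synapse = LFPy.Synapse(cell, idx=cell.get_closest_idx(z=600.),
                           syntype='ExpSyn', tau=2., weight=0.002)
    synapse.set_spike_times(np.array([20., 60.]))

    # a 16-contact laminar electrode along the depth axis
    electrode = LFPy.RecExtElectrode(cell, sigma=0.3,
                                     x=np.zeros(16), y=np.zeros(16),
                                     z=np.linspace(-200., 800., 16))

    cell.simulate(rec_imem=True)  # transmembrane currents are needed for the LFP
    electrode.calc_lfp()          # electrode.LFP: contacts x time samples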

[1] KH Pettersen et al, “Extracellular spikes and CSD” in Handbook of Neural Activity Measurement, Cambridge (2012)

[2] G Buzsaki et al, Nature Reviews Neuroscience 13:407 (2012)

[3] GT Einevoll et al, Nature Reviews Neuroscience 14:770 (2013)

[4] GT Einevoll et al, Current Opin Neurobiol 22:11 (2012)

[5] G Holt, C Koch, J Comp Neurosci 6:169 (1999)

[6] J Gold et al, J Neurophysiol 95:3113 (2006)

[7] KH Pettersen and GT Einevoll, Biophys J 94:784 (2008)

[8] KH Pettersen et al, J Comp Neurosci 24:291 (2008)

[9] H Lindén et al, J Comp Neurosci 29: 423 (2010)

[10] H Lindén et al, Neuron 72:859 (2011)

[11] S Łęski et al, PLoS Comp Biol 9:e1003137 (2013)

[12] KH Pettersen et al, J Neurosci Meth 154:116 (2006)

[13] S Łęski et al, Neuroinform 5:207 (2007)

[14] S Łęski et al, Neuroinform 9:401 (2011)

[15] J Potworowski et al, Neural Comp 24:541 (2012)

[16] GT Einevoll et al, J Neurophysiol 97:2174 (2007)

[17] P Blomquist et al, PLoS Comp Biol 5:e1000328 (2009)

[18] SL Gratiy et al, Front Neuroinf  5:32 (2011)

[19] H Lindén et al, Front Neuroinf 7:41 (2014)

[20] ML Hines et al, Front Neuroinf 3:1 (2009)

[21] R Traub et al, J Neurophysiol 93:2194 (2005)

[22] TC Potjans and M Diesmann, Cereb Cort 24:785 (2014)

[23] E Hagen et al, BMC Neuroscience 14(Suppl 1):P119 (2013)



T5: NEURON Simulation Software

Bill Lytton (SUNY Downstate Medical Center, USA) and others (half day)

This half-day tutorial will focus on several new features that have recently been added to the NEURON simulation environment, as well as highlighting older features that have had recent upgrades. Questions are encouraged during each talk and during time set aside at the end of each talk.

Presentations will include the following:

  1. Use of NEURON for multiscale modeling (Bill Lytton)
  2. Use of the Python interpreter to work with hoc/nrniv objects (Sam Neymotin); a minimal example follows this list
  3. Reaction-diffusion (RxD) modeling techniques in NEURON (Robert McDougal)
  4. === Coffee Break ===
  5. Cell-level modeling for synaptic distribution and current source density (Bill Lytton)
  6. Design of large networks (Cliff Kerr)
  7. NEURON interfacing: robots, sensory inputs, mean-field models (Salvador Dura-Bernal)
  8. ModelView to evaluate models and ModelDB to build your own simulation (Robert McDougal)
  9. Discussion, questions, further examples
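
As a taste of item 2 above, a minimal sketch of driving NEURON from Python (all values are illustrative assumptions, not tutorial material):

    from neuron import h
    h.load_file('stdrun.hoc')  # brings in the standard run system

    # a single-compartment soma with Hodgkin-Huxley channels
    soma = h.Section(name='soma')
    soma.L = soma.diam = 20.0
    soma.insert('hh')

    # current-clamp stimulus at the middle of the section
    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 5.0, 50.0, 0.1

    # record membrane potential and time into Vectors
    v, t = h.Vector(), h.Vector()
    v.record(soma(0.5)._ref_v)
    t.record(h._ref_t)

    h.finitialize(-65.0)
    h.continuerun(80.0)
    print(max(v))  # peak membrane potential; spikes reach roughly +40 mV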


T6: Constructing biologically realistic neuron and network models with GENESIS

Hugo Cornelis, University of Texas Health Science Center, San Antonio, USA

This tutorial is aimed at people who are new to, or have only elementary knowledge of, the GENESIS-2 simulator, as well as those who have used GENESIS in the past and would like to learn about new developments in cortical network modeling with GENESIS. After a quick overview of the GENESIS project [1], the tutorial demonstrates methods for single-neuron modeling. It then continues with the use of the GENESIS neural simulator for the efficient modeling of large networks of biologically realistic neurons. The tutorial ends with a summary of recently developed functionality for modeling spike-timing-dependent plasticity in network models that include realistic neuronal morphology and axonal conduction delays for the delivery of action potentials.

The tutorial is a guide to the use of the CNS 2014 release of the Ultimate GENESIS Tutorial Distribution [2]. This is a newly updated version of a self-paced course on biologically realistic modeling in general, and creating simulations with GENESIS in particular. This package contains the full GENESIS 2.3 distribution, as well as recent patches that will be incorporated into the GENESIS 2.4 release later this year. It includes materials used by several recent international courses on neural modeling as well as new cortical network simulation examples with tutorial documentation. It comes with suggested exercises for independent study.

This tutorial should give you everything that you need to get started modeling with GENESIS, and to develop your own simulations starting from these examples.

References
  [1] Bower JM, Beeman D (1998, 2003) The Book of GENESIS: Exploring Realistic Neural Models with the General NEural SImulation System, second edn. Springer-Verlag, New York. (Free internet edition available at http://www.genesis-sim.org/GENESIS/bog/bog.html)
  [2] The "Ultimate GENESIS Tutorial Distribution" (http://genesis-sim.org/GENESIS/UGTD.html).



T7: Modeling of Spiking Neural Networks with BRIAN

Romain Brette, Marcel Stimberg, Pierre Yger (Institut de la Vision, Paris, France), Dan Goodman (Harvard Medical School, Boston, USA), and Bertrand Fontaine (KU Leuven, Belgium)

Brian [1,2] is a simulator for spiking neural networks, written in the Python programming language. It focuses on making the writing of simulation code as quick as possible and on flexibility: new and non-standard models can be readily defined using mathematical notation [3]. This tutorial will be based on Brian 2, the current Brian version under development.
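
To give a flavor of the equation-oriented approach, a minimal Brian 2 sketch (the model and its parameters are illustrative assumptions, not tutorial material):

    from brian2 import NeuronGroup, SpikeMonitor, run, ms

    # a leaky integrate-and-fire model defined directly by its equations
    eqs = '''
    dv/dt = (I - v) / tau : 1
    I : 1
    tau : second
    '''
    group = NeuronGroup(10, eqs, threshold='v > 1', reset='v = 0', method='euler')
    group.I = 2.0
    group.tau = 10*ms

    spikes = SpikeMonitor(group)
    run(100*ms)
    print(spikes.count)  # number of spikes emitted by each neuron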

In the morning, we will give an introduction to Brian and an overview of the existing Brian extensions (Brian Hears [4], the model fitting toolbox [5], compartmental modelling). In the afternoon, more advanced topics (extending Brian; code generation [6], including the generation of "standalone code"; contributing to Brian) will be covered.

More details of the agenda for the tutorial along with teaching material will be posted here: http://briansimulator.org/brian-tutorial-at-cns-2014/.

References:

     [1] http://briansimulator.org

     [2] Goodman DFM and Brette R (2009). The Brian simulator. Front Neurosci doi:10.3389/neuro.01.026.2009.

     [3] Stimberg M, Goodman DFM, Benichoux V, and Brette R (2014). Equation-oriented specification of neural models for simulations. Frontiers in Neuroinformatics 8. doi:10.3389/fninf.2014.00006

     [4] Fontaine B, Goodman DFM, Benichoux V, Brette R (2011). Brian Hears: online auditory processing using vectorisation over channels. Frontiers in Neuroinformatics 5:9. doi:10.3389/fninf.2011.00009

     [5] Rossant C, Goodman DFM, Platkiewicz J and Brette, R. (2010).  Automatic fitting of spiking neuron models to electrophysiological recordings. Frontiers in Neuroinformatics. doi:10.3389/neuro.11.002.2010

     [6] Goodman, DFM (2010). Code generation: a strategy for neural network simulators. Neuroinformatics. doi:10.1007/s12021-010-9082-x



T8: Simulating large-scale spiking neuronal networks with NEST

Jochen M. Eppler & Jannis Schücker (Research Center Jülich, Germany)

The neural simulation tool NEST [1, www.nest-simulator.org] is a simulator for heterogeneous networks of point neurons, or neurons with a small number of electrical compartments, aimed at simulations of large neural systems. It is implemented in C++ and runs on a wide range of architectures, from single-processor desktop computers to large clusters and supercomputers with thousands of processor cores.

Using the example of the microcircuit model published by Potjans and Diesmann [2], we explain the basic modeling paradigm and features of the recently released version 2.4 of NEST. The tutorial includes an introduction to the most important neuron and synapse models as well as the routines to set up and configure the network.
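
For orientation, a minimal PyNEST sketch in the style of the NEST 2.4 connection interface (the model names are standard NEST models; all parameter values are illustrative assumptions):

    import nest

    nest.ResetKernel()

    # two integrate-and-fire neurons; the first is driven by Poisson noise
    neurons = nest.Create('iaf_psc_alpha', 2)
    noise = nest.Create('poisson_generator', params={'rate': 8000.0})
    detector = nest.Create('spike_detector')

    nest.Connect(noise, [neurons[0]], syn_spec={'weight': 20.0})
    nest.Connect([neurons[0]], [neurons[1]], syn_spec={'weight': 50.0})
    nest.Connect(neurons, detector)

    nest.Simulate(1000.0)  # simulate for 1000 ms
    print(nest.GetStatus(detector, 'n_events'))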

Prior experience with NEST or another simulator for spiking neuronal networks, and basic knowledge of neuronal modeling in general, are helpful for the tutorial but not required.

[1] Marc-Oliver Gewaltig and Markus Diesmann (2007) NEST (Neural Simulation Tool), Scholarpedia 2 (4), p. 1430.
[2] Tobias C. Potjans and Markus Diesmann (2014) The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model, Cerebral Cortex, 24:785-806, doi:10.1093/cercor/bhs358.


T9: Neuronal Model Parameter Search Techniques

Cengiz Günay, Anca Doloc-Mihu (Emory University, USA), Vladislav Sekulić (University of Toronto, Canada), Tomasz G. Smolinski (Delaware State University, USA)

Parameter tuning of model neurons to mimic biologically realistic activity is a non‐trivial task. Multiple models may exhibit similar dynamics that match experimental data – i.e., there is no single “correct” model. To address this issue, the ensemble modeling technique proposes to represent the properties of living neurons with a set of neuronal models. Several approaches to ensemble modeling have been proposed over the years, but the two most prevalent parameter tuning methods are systematic “brute‐force” searches [1, 2] and various techniques based on evolutionary algorithms [3, 4, 5, 6]. Both approaches rely on traversing a very large parameter space (with thousands to millions of model instances) but take diametrically different routes to do so. In both cases, however, entire collections of biologically realistic models are generated, whose neural activity characteristics can then be cataloged and studied using a database [1, 2].
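
To make the brute-force approach concrete, here is a toy sketch of a grid search that catalogs every model instance in a small database; the simulate function and its "rate" feature are stand-ins for a real neuron simulation, not PANDORA's API:

    import itertools
    import numpy as np

    def simulate(g_na, g_k):
        """Stand-in for a full neuron simulation; returns measured features."""
        rate = max(0.0, 100.0 * g_na - 60.0 * g_k)  # toy 'firing rate' surface
        return {'rate': rate}

    # exhaustive sweep over a 2-D conductance grid (real searches use more axes)
    database = []
    for g_na, g_k in itertools.product(np.linspace(0.0, 1.0, 11), repeat=2):
        entry = {'g_na': g_na, 'g_k': g_k}
        entry.update(simulate(g_na, g_k))
        database.append(entry)

    # query the database for models matching an experimental target rate
    target, tol = 20.0, 2.0
    matches = [m for m in database if abs(m['rate'] - target) <= tol]
    print(len(matches), 'of', len(database), 'models match')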

The tutorial covers “tips and tricks,” as well as various pitfalls, in all stages of model construction, large‐scale simulations on high-performance computing clusters [S2], database construction, and the analysis of neural data, along with a discussion of the strengths and weaknesses of the two parameter search techniques. We will review software implementations for each technique: the PANDORA Matlab Toolbox [7][S1] for the brute-force method and NeRvolver (i.e., evolver of nerve cells) for evolutionary algorithms. PANDORA was used in recent projects for tuning models of rat globus pallidus neurons [2][M1], lobster pyloric network calcium sensors [8][M2], leech heart interneurons [9][M3,S3], and hippocampal O‐LM interneurons (Skinner Lab, TWRI/UHN and Univ. Toronto). NeRvolver is a prototype of a computational intelligence‐based system for automated construction, tuning, and analysis of neuronal models, currently under development in the Computational Intelligence and Bio(logical)informatics Laboratory at Delaware State University [10]. Through the use of computational intelligence methods (i.e., multi‐objective evolutionary algorithms and fuzzy logic), the NeRvolver system generates classification rules describing biological phenomena discovered during the process of model creation or tuning. Thus, in addition to producing neuronal models, NeRvolver provides, via such rules, insights into the functioning of the biological neurons being modeled. In the tutorial, we will present the basic functionality of the system and demonstrate how to analyze the results returned by the software.
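
For contrast, a bare-bones evolutionary loop of the kind such techniques build on; this generic sketch makes no attempt to reproduce NeRvolver's multi-objective or fuzzy-logic machinery, and the fitness function is the same toy stand-in as above:

    import random

    def fitness(params):
        """Stand-in error measure: distance from a hypothetical target rate."""
        g_na, g_k = params
        return abs(100.0 * g_na - 60.0 * g_k - 20.0)

    def mutate(params, scale=0.05):
        """Gaussian perturbation, clipped to the [0, 1] parameter range."""
        return tuple(min(1.0, max(0.0, p + random.gauss(0.0, scale))) for p in params)

    population = [(random.random(), random.random()) for _ in range(50)]
    for generation in range(100):
        population.sort(key=fitness)
        parents = population[:10]  # truncation selection
        population = parents + [mutate(random.choice(parents)) for _ in range(40)]

    population.sort(key=fitness)
    print('best parameters:', population[0], 'error:', fitness(population[0]))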

We will allocate ample time for Q&A, and participants who bring a laptop pre‐loaded with Matlab will be able to follow some of our examples.

References
[1]   Astrid A. Prinz, Cyrus P. Billimoria, and Eve Marder. Alternative to hand‐tuning conductance‐based models: Construction and analysis of databases of model neurons. J Neurophysiol, 90:3998–4015, 2003.

[2]   Cengiz Günay, Jeremy R. Edgerton, and Dieter Jaeger. Channel density distributions explain spiking variability in the globus pallidus: A combined physiology and computer simulation database approach. J. Neurosci., 28(30):7476–91, July 2008.

[3]   Pablo Achard and Erik De Schutter. Complex parameter landscape for a complex neuron model. PLoS Comput Biol, 2(7):794–804, Jul 2006.

[4]   Tomasz G. Smolinski and Astrid A. Prinz. Computational intelligence in modeling of biological neurons: A case study of an invertebrate pacemaker neuron. In Proceedings of the International Joint Conference on Neural Networks, pages 2964–2970, Atlanta, GA, 2009. 

[5]   Tomasz G. Smolinski and Astrid A. Prinz. Multi‐objective evolutionary algorithms for model neuron parameter value selection matching biological behavior under different simulation scenarios. BMC Neuroscience, 10(Suppl 1):P260, 2009. 

[6]   Damon G. Lamb and Ronald L. Calabrese. Correlated conductance parameters in leech heart motor neurons contribute to motor pattern formation. PLoS One, 8(11):e79267, 2013. 

[7]   Cengiz Günay, Jeremy R. Edgerton, Su Li, Thomas Sangrey, Astrid A. Prinz, and Dieter Jaeger. Database analysis of simulated and recorded electrophysiological datasets with PANDORA’s Toolbox. Neuroinformatics, 7(2):93–111, 2009. 

[8]   Cengiz Günay and Astrid A. Prinz. Model calcium sensors for network homeostasis: Sensor and readout parameter analysis from a database of model neuronal networks. J Neurosci, 30:1686–1698, Feb 2010.

[9]   Anca Doloc‐Mihu and Ronald L. Calabrese. A database of computational models of a half‐center oscillator for analyzing how neuronal parameters influence network activity. J Biol Phys, 37(3):263–283, Jun 2011. 

[10]   Emlyne Forren, Myles Johnson‐Gray, Parth Patel, and Tomasz G. Smolinski. NeRvolver: a computational intelligence‐based system for automated construction, tuning, and analysis of neuronal models. BMC Neuroscience, 13(Suppl 1):P36, 2012.

Model and Software Links
[M1] Rat globus pallidus neuron model (https://senselab.med.yale.edu/modeldb/ShowModel.asp?model=114639)

[M2] Lobster stomatogastric ganglion pyloric network model (http://senselab.med.yale.edu/ModelDB/showmodel.asp?model=144387)

[M3] Half‐center oscillator database of leech heart interneuron model (http://senselab.med.yale.edu/ModelDB/ShowModel.asp?model=144518)

[S1] PANDORA Matlab Toolbox (http://software.incf.org/software/pandora)

[S2] Parallel parameter search scripts for simulating neuron models (https://github.com/cengique/param-search-neuro)

[S3] Half-Center Oscillator model database (HCO-db) (http://www.biology.emory.edu/research/Calabrese/hco-db/hcoDB_Main.html)