CNS*2020 Online: Tutorials 

The detailed schedule that includes links to the video streams is hosted on our Sched instance. The instance is password protected to prevent these links from falling into the hands of trolls. Please register for CNS*2020 to receive the password.

Whole-day tutorials

Title | Lecturers | Reference | Web
New interfaces for teaching with NEST: hands-on with the NEST Desktop GUI and NESTML code generation | Charl Linssen (JARA-Institute, Jülich, Germany), Sebastian Spreizer (University of Trier, Germany), and Renato Duarte | T1 | Link
Building mechanistic multiscale models, from molecules to networks, using NEURON and NetPyNE | Dr. Salvador Dura-Bernal (SUNY Downstate, USA), Dr. Robert A McDougal (Yale University, USA), and Dr. William W Lytton (SUNY Downstate, USA) | T2 | Link


Half-day tutorials

Title | Lecturers | Reference | Web
Tools and techniques to bridge the gap between models and closed-loop neuroscience experiments | Dr. Pablo Varona (Universidad Autónoma de Madrid, Spain), Manuel Reyes Sanchez (UAM, Spain), and Rodrigo Amaducci (UAM, Spain) | T3 | Link
Neuromorphic VLSI realization of the Hippocampal formation | Dr. Anu Aggarwal (University of Illinois Urbana-Champaign, USA) | T4 | Link
The use of Keras with TensorFlow applied to neural models and data analysis | Dr. Cecilia Jarne (University of Quilmes and CONICET Bernal, Argentina) | T5 | Link
Methods from Data Science for Model Simulation, Analysis, and Visualization | Dr. Cengiz Gunay (SST, Georgia Gwinnett College, USA) and Dr. Anca Doloc-Mihu (SST, Georgia Gwinnett College, USA) | T6 | Link
Characterizing neural dynamics using highly comparative time-series analysis | Dr. Ben D. Fulcher (The University of Sydney, Australia) | T7 | Link

Showcases

Title | Lecturers | Reference | Web
Information theory and directed network inference (using JIDT and IDTxl) | Leonardo Novelli (The University of Sydney, Australia) and Dr. Joseph T. Lizier (The University of Sydney, Australia) | S1 | Link
Introduction to the Brain Dynamics Toolbox | Dr. Stewart Heitmann (Victor Chang Cardiac Research Institute, Australia) | S2 | TBA
Advances in the PANDORA Matlab Toolbox for intracellular electrophysiology data | Dr. Cengiz Gunay (SST, Georgia Gwinnett College, USA) | S3 | Link

 

Descriptions

T1: New interfaces for teaching with NEST: hands-on with the NEST Desktop GUI and NESTML code generation.

  • Charl Linssen (Jülich Research Centre and JARA, Germany).
  • Sebastian Spreizer (Jülich Research Centre and JARA/ Human-Computer Interaction, University of Trier, Germany).
  • Renato Duarte (JARA, Germany).

Description of the tutorial

NEST is established community software for the simulation of spiking neuronal network models capturing the full detail of biological network structures [1]. The simulator runs efficiently on a range of architectures from laptops to supercomputers [2]. Many peer-reviewed neuroscientific studies have used NEST as a simulation tool over the past 20 years. More recently, it has become a reference code for research on neuromorphic hardware systems [3].

This tutorial provides hands-on experience with recent improvements to NEST. In the past, starting out with NEST could be challenging for computational neuroscientists, as models and simulations had to be programmed in SLI, C++ or Python. NEST Desktop changes this: it is an entirely graphical approach to the construction and simulation of neuronal network models. It runs installation-free in the browser and has proven its value in several university courses, opening NEST up to the teaching of neuroscience to students with little programming experience.
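For orientation, the kind of point-neuron network that students assemble graphically in NEST Desktop corresponds to only a few lines of PyNEST. A minimal sketch, with illustrative parameter values (device names follow NEST 2.x; the spike_detector is called spike_recorder in NEST 3.x):

```python
import nest

nest.ResetKernel()

# 100 leaky integrate-and-fire neurons with alpha-shaped synaptic currents
neurons = nest.Create("iaf_psc_alpha", 100)

# Poisson background drive and a spike-recording device
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
spikes = nest.Create("spike_detector")  # "spike_recorder" in NEST 3.x

# drive every neuron, add sparse random recurrent excitation, record all spikes
nest.Connect(noise, neurons, syn_spec={"weight": 10.0, "delay": 1.5})
nest.Connect(neurons, neurons,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": 2.0, "delay": 1.5})
nest.Connect(neurons, spikes)

nest.Simulate(1000.0)
print(nest.GetStatus(spikes, "n_events"))
```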

NESTML complements this new interface by streamlining the development of neuron and synapse models. Advanced researchers often want to study specific features not provided by the models already available in NEST. Instead of having to turn to C++, they can use NESTML to write down the differential equations and the necessary state transitions in the mathematical notation they are used to; these descriptions are then automatically processed to generate machine-optimised code.

After a quick overview of the current status of NEST and upcoming new functionality, the tutorial works through a concrete example [4] to show how the combination of NEST Desktop and NESTML can be used in the modern workflow of a computational neuroscientist.

 Background reading and software tools

  • [1] Gewaltig M-O & Diesmann M (2007) NEST (Neural Simulation Tool) Scholarpedia 2(4):1430.
  • [2] Jordan J., Ippen T., Helias M., Kitayama I., Sato M., Igarashi J., Diesmann M., Kunkel S. (2018) Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Frontiers in Neuroinformatics 12: 2.
  • [3] Gutzen R., von Papen, M., Trensch G., Quaglio P. Grün S., Denker M. (2018) Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Frontiers in Neuroinformatics 12 (90).
  • [4] Duarte R. & Morrison A. (2014). “Dynamic stability of sequential stimulus representations in adapting neuronal networks”, Front. Comput. Neurosci.

 

T2: Building mechanistic multiscale models, from molecules to networks, using NEURON and NetPyNE.

  • Salvador Dura-Bernal (State University of New York Downstate, USA).
  • Robert A McDougal (Yale University, USA).
  • William W Lytton (State University of New York Downstate, USA).

Description of the tutorial

Understanding brain function requires characterizing the interactions occurring across many temporal and spatial scales. Mechanistic multiscale modeling aims to organize and explore these interactions. In this way, multiscale models provide insights into how changes at molecular and cellular levels, caused by development, learning, brain disease, drugs, or other factors, affect the dynamics of local networks and of brain areas. Large neuroscience data-gathering projects throughout the world (e.g. US BRAIN, EU HBP, Allen Institute) are making use of multiscale modeling, including the NEURON ecosystem, to better understand the vast amounts of information being gathered using many different techniques at different scales.

This tutorial will introduce multiscale modeling using two NIH-funded tools: the NEURON simulator [1], including the Reaction-Diffusion (RxD) module [2,3], and the NetPyNE tool [4]. The tutorial will include background, examples and hands-on exercises covering the implementation of models at four key scales: (1) intracellular dynamics (e.g. calcium buffering, protein interactions), (2) single neuron electrophysiology (e.g. action potential propagation), (3) neurons in extracellular space (e.g. spreading depression), and (4) networks of neurons. For network simulations, we will use NetPyNE, a high-level interface to NEURON supporting both programmatic and GUI specification that facilitates the development, parallel simulation, and analysis of biophysically detailed neuronal circuits. We conclude with an example combining all three tools that links intracellular molecular dynamics with network spiking activity and local field potentials.
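To give a flavour of NetPyNE's declarative, high-level specification, here is a minimal sketch of a small recurrent network of single-compartment Hodgkin–Huxley cells with Poisson background drive; all labels and parameter values are illustrative and not taken from the tutorial material:

```python
from netpyne import specs, sim

netParams = specs.NetParams()

# one population of 20 single-compartment cells (labels are illustrative)
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 20}

# cell rule: a soma with Hodgkin-Huxley mechanisms
netParams.cellParams['PYRrule'] = {
    'conds': {'cellType': 'PYR'},
    'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
                      'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036,
                                       'gl': 0.003, 'el': -70}}}}}

# excitatory synapse model and background Poisson drive
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 5.0, 'e': 0}
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {'source': 'bkg', 'conds': {'pop': 'E'},
                                        'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

# sparse recurrent excitatory connectivity
netParams.connParams['E->E'] = {'preConds': {'pop': 'E'}, 'postConds': {'pop': 'E'},
                                'probability': 0.1, 'weight': 0.005,
                                'delay': 5, 'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000                       # ms
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
simConfig.analysis = {'plotRaster': True, 'plotTraces': {'include': [0]}}

# create the network, run the simulation, and produce the requested plots
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```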

Basic familiarity with Python is recommended. No prior knowledge of NEURON or NetPyNE is required; however, participants are encouraged to download and install both packages prior to the tutorial.

Software tools

  • NEURON: neuron.yale.edu/
  • RxD: neuron.yale.edu/neuron/docs/reaction-diffusion (a minimal usage sketch follows this list)
  • NetPyNE: netpyne.org
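For a flavour of the RxD interface listed above, a minimal sketch that declares intracellular calcium in a single section and lets it relax back to a resting level (the species, region and kinetics below are illustrative only):

```python
from neuron import h, rxd

h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.L = soma.diam = 10  # microns

# intracellular region and a calcium species living in it
cyt = rxd.Region([soma], name="cyt", nrn_region="i")
ca = rxd.Species(cyt, name="ca", charge=2, d=0.1, initial=1e-3)

# first-order relaxation of [Ca2+] toward 100 nM (illustrative kinetics)
ca_decay = rxd.Rate(ca, (1e-4 - ca) / 50.0)

h.finitialize(-65)
h.continuerun(100)
print(ca.nodes.concentration)  # concentration (mM) in each RxD node
```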

Background reading

  • [1] Lytton WW, Seidenstein AH, Dura-Bernal S, McDougal RA, Schürmann F, Hines ML. Simulation Neurotechnologies for Advancing Brain Research: Parallelizing Large Networks in NEURON. Neural Comput. 28, 2063–2090, 2016.
  • [2] McDougal R, Hines M, Lytton W. (2013) Reaction-diffusion in the NEURON simulator. Front. Neuroinform. 7, 28. 10.3389/fninf.2013.00028.
  • [3] Newton AJH, McDougal RA, Hines ML and Lytton WW (2018) Using NEURON for Reaction-Diffusion Modeling of Extracellular Dynamics. Front. Neuroinform. 12, 41. 10.3389/fninf.2018.00041.
  • [4] Dura-Bernal S, Suter B, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, Kedziora DJ, Chadderdon GL, Kerr CC, Neymotin SA, McDougal R, Hines M, Shepherd GMG, Lytton WW. (2019) NetPyNE: a tool for data-driven multiscale modeling of brain circuits. eLife 2019;8:e44494.

 

T3: Tools and techniques to bridge the gap between models and closed-loop neuroscience experiments.

  • Pablo Varona (Grupo de Neurocomputación Biológica. Escuela Politécnica Superior. Universidad Autónoma de Madrid, Spain).
  • Manuel Reyes Sanchez (Grupo de Neurocomputación Biológica. Escuela Politécnica Superior. Universidad Autónoma de Madrid, Spain).
  • Rodrigo Amaducci (Grupo de Neurocomputación Biológica. Escuela Politécnica Superior. Universidad Autónoma de Madrid, Spain).

Description of the tutorial

Models in computational neuroscience are typically used to reproduce and explain experimental findings, to draw new hypotheses from their predictive power, to address the low observability of the brain, etc. Computational models can also be employed to interact directly with living nervous systems, which is a powerful way of unveiling key neural dynamics by combining experimental and theoretical efforts. However, protocols that simultaneously combine recordings from living neurons with the inputs and outputs of computational models are not easy to design or implement. In this tutorial, we will describe several tools and techniques to build such open- and closed-loop interactions: from basic dynamic-clamp approaches for building hybrid circuits to more complex configurations that can include several interacting living and artificial elements. We will emphasize the need for open-source real-time software technology for some of these interactions.
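To make the closed-loop idea concrete, the sketch below mimics a dynamic-clamp cycle in plain, non-real-time Python: read a membrane potential, update an artificial conductance-based synapse, and write the resulting current back. It deliberately does not use the RTXI or RTHybrid APIs; all names and values are illustrative, and a real implementation must meet hard per-cycle deadlines under a real-time kernel.

```python
import numpy as np

dt = 0.1          # ms, loop period
E_syn = -80.0     # mV, synaptic reversal potential
g_max = 2.0       # nS, maximal conductance
tau = 5.0         # ms, synaptic decay time constant
s = 0.0           # gating variable of the artificial synapse

def read_membrane_potential(t):
    """Stand-in for the data-acquisition read; returns mV."""
    return -65.0 + 10.0 * np.sin(2 * np.pi * t / 100.0)

def inject_current(i_syn):
    """Stand-in for the analog-output write (current in pA/nA as configured)."""
    pass

for step in range(10000):
    t = step * dt
    v = read_membrane_potential(t)           # acquire from the living neuron
    s += dt * (-s / tau + (v > -50.0))       # crude event-driven gate (illustrative)
    i_syn = g_max * s * (E_syn - v)          # conductance-based synaptic current
    inject_current(i_syn)                    # deliver within the same cycle
```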

In particular, we will focus on two software packages that can implement closed-loop interactions between living neurons and computational neuroscience models. The first one, RTHybrid, is a solution for building hybrid circuits between living neurons and models. This program, developed by the organizers, includes a library of neuron and synapse models and different algorithms for the automatic calibration and adaptation of hybrid configurations. The second tool, RTXI, allows users to program specific modules implementing a wide variety of closed-loop configurations, and includes many handy modularization and visualization tools. Both programs can be used in a wide range of hybrid experiment designs and deal with real-time constraints. During the tutorial, we will show how to install and use these programs on standard computer platforms, and we will give attendees the opportunity to build and test their first designs.

Software tools

  • RTHybrid
  • RTXI


T4: Neuromorphic VLSI realization of the Hippocampal formation.

  • Anu Aggarwal (University of Illinois Urbana Champaign, USA).

Description of the tutorial

Neuromorphic circuits are inspired by the organizing principles of biological neural circuits. These designs implement computational neuroscience models of different parts of the brain in silicon, and the resulting devices can perform actual work, unlike computer models. One of the main reasons for interest in this field is that electrical and computer engineers wish to harness the superior processing capabilities of the brain: for similar processing power, the brain consumes much less energy than a computer, so scientists are interested in building power-efficient machines based on brain algorithms. Neuromorphic architectures often rely on collective computation in parallel networks. Adaptation, learning and memory are implemented locally within the individual computational elements, as opposed to the separation between memory and computation in conventional computers. As Moore's law hits its limits, there is growing interest in brain-inspired computing as a route to small, power-efficient computing machines. Application domains of neuromorphic circuits include silicon retinas and cochleas for machine vision and audition, real-time emulations of networks of biological neurons, the lateral superior olive and hippocampal formation for the development of autonomous robotic systems, and even the replacement of brain neuronal functions with silicon neurons. This tutorial provides an introduction to neuromorphic silicon design, using a silicon implementation of the hippocampal formation as an example.

Presentations/Lectures

  1. Brief background of Neuromorphic VLSI design, anatomy and physiology (including lab experimental data) of the Hippocampal formation
  2. Computational Neuroscience Models of the Hippocampal formation 
  3. VLSI design or silicon realization of the Hippocampal formation

Background readings (not required)

  1. Analog VLSI and Neural Systems by Carver Mead, 1989.
  2. J. O’Keefe, 1976, “Place units in the hippocampus of the freely moving rat”, Exp. Neurol. 51, 78-109.
  3. J. S. Taube, R. U. Muller, J. B. Ranck, Jr., 1990a, “Head direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis”, J. Neurosci., 10, 420-435.
  4. J. S. Taube, R. U. Muller, J. B. Ranck, Jr., 1990b, “Head direction cells recorded from the post-subiculum in freely moving rats. II. Effects of environmental manipulations”, J. Neurosci., 10, 436-447.
  5. T. Hafting, M. Fyhn, S. Molden, M. B. Moser, E. I. Moser, August 2005, “Microstructure of a spatial map in the entorhinal cortex”, Nature, 436, 801-806.
  6. B. L. McNaughton, F. P. Battaglia, O. Jensen, E. I. Moser & M. B. Moser, 2006, “Path integration and the neural basis of the 'cognitive map'”, Nature Reviews Neuroscience, 7, 663-678.
  7. H. Mhatre, A. Gorchetchnikov, and S. Grossberg, 2012, “Grid Cell Hexagonal Patterns Formed by Fast Self-Organized Learning within Entorhinal Cortex”, Hippocampus, 22:320–334.
  8. T. Madl, S. Franklin, K. Chen, D. Montaldi, R. Trappl, 2014, “Bayesian integration of information in hippocampal place cells”, PLOS ONE, 9(3), e89762.
  9. A. Aggarwal, 2015, “Neuromorphic VLSI Bayesian integration synapse”, Electronics Letters, 51(3):207-209.
  10. A. Aggarwal, T. K. Horiuchi, 2015, “Neuromorphic VLSI second order synapse”, Electronics Letters, 51(4):319-321.
  11. A. Aggarwal, 2015, “VLSI realization of neural velocity integrator and central pattern generator”, Electronics Letters, 51(18), DOI: 10.1049/el.2015.0544.
  12. A. Aggarwal, 2016, “Neuromorphic VLSI realization of the Hippocampal Formation”, Neural Networks, 77:29-40. doi: 10.1016/j.neunet.2016.01.011. Epub 2016 Feb 4.

 

T5: The use of Keras with TensorFlow applied to neural models and data analysis.

  • Cecilia Jarne (Department of Science and Technology from the University of Quilmes and CONICET Bernal, Buenos Aires, Argentina).

Description of the tutorial

This tutorial will help participants implement and explore simple neural network models using Keras [1], and apply deep learning tools to data analysis. It will include an introduction to modeling and hands-on exercises. The tutorial will focus on Keras, an open-source framework for rapid prototyping and simulation of neural networks, with TensorFlow [2] as the backend. The tutorial will show how models can be built and explored using Python, and the hands-on exercises will demonstrate how Keras can be used to rapidly explore the dynamics of a network.

Keras is a framework that greatly simplifies the design and implementation of many kinds of neural networks (regular classifiers, convolutional neural networks, and LSTMs, among others). This mini-course is split into two parts: first, we will introduce the main features of Keras and showcase some examples; then, we will run two guided online hands-on sessions with exercises to consolidate the material.
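As a taste of what the hands-on exercises will look like, a minimal sketch of a small fully connected classifier using the tf.keras API (the toy data and layer sizes are illustrative):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# toy binary-classification data (purely illustrative)
x = np.random.rand(1000, 20)
y = (x.sum(axis=1) > 10).astype("float32")

# small fully connected classifier
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(20,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(x, y, epochs=10, batch_size=32, validation_split=0.2)
model.summary()
```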

For this tutorial, you will need basic knowledge of NumPy, SciPy, and Matplotlib. To carry out the exercises, students need a laptop running Linux with the following libraries installed:

Software tools

  • Python
  • NumPy
  • SciPy
  • Matplotlib
  • scikit-learn
  • TensorFlow
  • Keras

I recommend the following sites, which explain how to install distributions and packages that include these libraries along with some additional tools:

  • https://www.anaconda.com/distribution/ 
  • https://www.tensorflow.org/install/
  • https://keras.io/

 Background readings

[1] Francois Chollet et al. Keras. https://keras.io, 2015.
[2] Martín Abadi, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

 

T6: Methods from Data Science for Model Simulation, Analysis, and Visualization.

  • Cengiz Gunay  (School of Science and Technology, Georgia Gwinnett College, Georgia, USA).
  • Anca Doloc-Mihu  (School of Science and Technology, Georgia Gwinnett College, Georgia, USA).

Description of the tutorial

Computational neuroscience projects often involve large numbers of simulations for parameter searches of computer models, which generate large amounts of data. With advances in computer hardware, software methods, and cloud computing making this task easier, the amount of collected data has exploded, as it has in many other fields. High-performance computing (HPC) methods have been used in computational neuroscience for a while; however, the use of novel data science and big data methods is less frequent. In this tutorial, we will review established HPC methods and introduce novel data science tools to be used in computational neuroscience workflows, ranging from the industry standard Apache Hadoop (https://hadoop.apache.org/) to newer tools such as Apache Spark (https://spark.apache.org/). These tools can be used either for model simulation or for post-processing and analysis of the generated data. To visualize the data, we will review novel web-based interactive dashboard technologies, mostly based on JavaScript and Python.
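As a small example of the kind of post-processing we have in mind, a PySpark sketch that aggregates the output of a hypothetical parameter search; the file path and the column names g_na, g_k and firing_rate are made up for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("param-search-summary").getOrCreate()

# hypothetical CSV output of a parameter search: one row per simulation,
# with columns for parameter values and measured characteristics
df = spark.read.csv("results/param_search/*.csv", header=True, inferSchema=True)

# mean and spread of the firing rate for each parameter combination
summary = (df
           .groupBy("g_na", "g_k")
           .agg(F.avg("firing_rate").alias("mean_rate"),
                F.stddev("firing_rate").alias("sd_rate"),
                F.count("*").alias("n_sims")))

summary.orderBy(F.desc("mean_rate")).show(20)
summary.write.mode("overwrite").parquet("results/summary.parquet")
```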

 

T7: Characterizing neural dynamics using highly comparative time-series analysis.

  • Ben D. Fulcher (The University of Sydney, Australia).

Description of the tutorial

Massive open datasets of neural dynamics, from microscale neuronal circuits to macroscale population-level recordings, are becoming increasingly available to the computational neuroscience community. There are myriad ways to quantify different types of structure in the univariate dynamics of any individual component of a neural system, including methods from statistical time-series modeling, the physical nonlinear time-series analysis literature, and methods derived from information theory. Across this interdisciplinary literature of thousands of time-series analysis methods, each method gives unique information about the measured dynamics. However, the choice of analysis methods in any given study is typically subjective, leaving open the possibility that alternative methods might yield better understanding or performance for a given task.

In this tutorial, I will introduce highly comparative time-series analysis, implemented as the software package hctsa, which partially automates the selection of useful time-series analysis methods from an interdisciplinary library of over 7000 time-series features. I will demonstrate how hctsa can be used to extract useful information from various neural time-series datasets. We will work through a range of applications using fMRI (mouse and human) and EEG (human) time-series datasets, including how to: (i) determine the relationship between structural connectivity and fMRI dynamics in mouse and human; (ii) understand the effects of targeted brain stimulation using DREADDs using mouse fMRI; and (iii) classify seizure dynamics and extract sleep-stage information from EEG.

Software tools

  1. If you want to play along at home, you can read the README and install the hctsa software package (Matlab): https://github.com/benfulcher/hctsa
  2. hctsa documentation: https://hctsa-users.gitbook.io/hctsa-manual/

Background reading

  1. B.D. Fulcher, N.S. Jones. hctsa: A computational framework for automated time-series phenotyping using massive feature extraction. Cell Systems 5(5): 527 (2017). https://doi.org/10.1016/j.cels.2017.10.001
  2. B.D. Fulcher, M.A. Little, N.S. Jones. Highly comparative time-series analysis: the empirical structure of time series and their methods. J. Roy. Soc. Interface 10, 20130048 (2013). https://doi.org/10.1098/rsif.2013.0048

 

S1: Information theory and directed network inference (using JIDT and IDTxl).

  • Leonardo Novelli (The University of Sydney, Australia).
  • Joseph T. Lizier (The University of Sydney, Australia).

Description of the showcase

Information theoretic measures including transfer entropy are widely used to analyse neuroimaging time series and to infer directed connectivity [1]. The JIDT [2] and IDTxl [3] software toolkits provide efficient measures and algorithms for these applications:

  • JIDT (https://github.com/jlizier/jidt) provides a fundamental computation engine for efficient estimation of information theoretic measures for a variety of applications. It can be easily used in Matlab, Python, and Java, and provides a GUI interface for push-button analysis and code template generation.
  • IDTxl (https://github.com/pwollstadt/IDTxl) is a specific Python toolkit for directed network inference in neuroscience. It employs multivariate transfer entropy and hierarchical statistical tests to control false positives and has been validated at realistic scales for neural data sets [4]. The inference can be run in parallel using GPUs or a high-performance computing cluster.

This tutorial session will help you get started with software analyses via brief overviews of the toolkits and demonstrations.
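A minimal sketch of multivariate transfer entropy network inference with IDTxl on random toy data (the estimator and lag settings are only an example; consult the IDTxl documentation for choices appropriate to your data):

```python
import numpy as np
from idtxl.data import Data
from idtxl.multivariate_te import MultivariateTE

# toy data: 5 processes x 1000 samples (dimension order: process, sample)
data = Data(np.random.randn(5, 1000), dim_order="ps")

settings = {
    "cmi_estimator": "JidtGaussianCMI",  # Gaussian estimator provided via JIDT
    "max_lag_sources": 5,
    "min_lag_sources": 1,
}

# infer the directed network with multivariate transfer entropy
network_analysis = MultivariateTE()
results = network_analysis.analyse_network(settings=settings, data=data)

# inferred directed links, corrected for multiple comparisons
results.print_edge_list(weights="max_te_lag", fdr=True)
```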

Software tools:

  • JIDT: https://github.com/jlizier/jidt
  • IDTxl: https://github.com/pwollstadt/IDTxl

Background reading:

  1. Wibral, M., Vicente, R., & Lizier, J. T. (2014). Directed Information Measures in Neuroscience. Springer, Berlin. https://doi.org/10.1007/978-3-642-54474-3
  2. Lizier, J. T. (2014). JIDT: An Information-Theoretic Toolkit for Studying the Dynamics of Complex Systems. Frontiers in Robotics and AI, 1, 11. https://doi.org/10.3389/frobt.2014.00011
  3. Wollstadt, P., Lizier, J. T., Vicente, R., Finn, C., Martinez-Zarzuela, M., Mediano, P., Novelli, L., and Wibral, M. (2019). IDTxl: The Information Dynamics Toolkit xl: a Python package for the efficient analysis of multivariate information dynamics in networks. Journal of Open Source Software, 4(34), 1081. https://doi.org/10.21105/joss.01081
  4. Novelli, L., Wollstadt, P., Mediano, P., Wibral, M., & Lizier, J. T. (2019). Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing. Network Neuroscience, 3(3), 827–847. https://doi.org/10.1162/netn_a_00092

  

S2: Introduction to the Brain Dynamics Toolbox.

  • Stewart Heitmann (Victor Chang Cardiac Research Institute, Australia).

Description of the showcase

The Brain Dynamics Toolbox (https://bdtoolbox.org) is an open-source toolbox for simulating dynamical systems in neuroscience using Matlab. It specifically solves initial-value problems in user-defined systems of Ordinary Differential Equations (ODEs), Delay Differential Equations (DDEs), Stochastic Differential Equations (SDEs) and Partial Differential Equations (PDEs). New models can typically be written in less than 100 lines of code and then applied at all stages of the research lifecycle. Rapid prototyping is done via the graphical interface, where the dynamics can be explored interactively without the need for graphical programming. Interactive parameter surveys can then be semi-automated using the Matlab command window. Large-scale simulations can be fully automated in user-defined scripts.

Once a model is written, the toolbox’s hub-and-spoke architecture allows unlimited combinations of plotting tools (display panels) and solver algorithms to be applied to that model with no additional programming effort. The toolbox currently supports a dozen solvers and display panels. It also ships with approximately 30 example models that can be used for teaching or as starting points for building new models. Online training courses are available from the bdtoolbox.org website. Extensive documentation is provided in the Handbook for the Brain Dynamics Toolbox. This software showcase aims to introduce the toolbox to a wider audience through a series of real-time demonstrations. The audience will learn how to get started with the toolbox, how to run existing models, and how to semi-automate the controls to generate a bifurcation diagram.

Software tools:

  • https://bdtoolbox.org 

Background reading:

  1. Heitmann S, Breakspear M (2017-2019) Handbook for the Brain Dynamics Toolbox. QIMR Berghofer Medical Research Institute. 1st Edition: Version 2017c, ISBN 978-1-5497-2070-3. 2nd Edition: Version 2018a, ISBN 978-1-9805-7250-3. 3rd Edition: Version 2018b, ISBN 978-1-7287-8188-4. 4th Edition: Version 2019a, ISBN 978-1-0861-1705-9. 

  2. Heitmann S, Aburn M, Breakspear M (2017) The Brain Dynamics Toolbox for Matlab. Neurocomputing. Vol 315. p82-88. doi:10.1016/j.neucom.2018.06.026. 

 

S3: Advances in the PANDORA Matlab Toolbox for intracellular electrophysiology data.

  • Cengiz Gunay (School of Science and Technology, Georgia Gwinnett College, Georgia, USA).

Description of the showcase

PANDORA is an open-source toolbox for Matlab (MathWorks, Natick, MA) for the analysis and visualization of single-unit intracellular electrophysiology data (RRID: SCR_001831; Günay et al. 2009 Neuroinformatics, 7(2):93-111. doi: 10.1007/s12021-009-9048-z). Even though there are more modern and popular environments, such as the Python and Anaconda ecosystem, Matlab still offers an advantage in its simplicity, especially for those who are less computationally inclined, for instance when collaborating with experimentalists. PANDORA was originally intended for managing and analyzing brute-force neuronal parameter search databases (Günay et al. 2008 J Neurosci. 28(30): 7476-7491; Günay et al. 2010 J Neurosci. 30: 1686–98). However, it has proven useful for other types of simulation and experimental data analysis (Doloc-Mihu et al. 2011 Journal of Biological Physics, 37(3), 263–283. doi:10.1007/s10867-011-9215-y; Lin et al. 2012 J Neurosci 32(21): 7267–77; Wolfram et al. 2014 J Neurosci, 34(7): 2538–2543. doi: 10.1523/JNEUROSCI.4511-13.2014; Günay et al. 2015 PLoS Comp Bio. doi: 10.1371/journal.pcbi.1004189; Wenning et al. 2018 eLife 2018;7:e31123. doi: 10.7554/eLife.31123; Günay et al. 2019 eNeuro, 6(4), ENEURO.0417-18.2019. doi:10.1523/ENEURO.0417-18.2019).

PANDORA’s original motivation was to offer object-oriented analysis specific to neuronal data inside the Matlab environment, in particular a database table-like object, similar to the “dataframe” objects of R and the Python pandas library, together with a new syntax for a powerful database querying system. A typical workflow consists of generating parameter sets for simulations; finding spikes and other characteristics in the resulting output data to construct databases; and finally analyzing and visualizing the database contents. PANDORA provides objects for loading datasets, controlling simulations, importing/exporting data, and visualization. Since its inception, it has grown with added functionality. In this showcase, we review the toolbox’s standard features and show how to customize them for a given project, and then introduce some of the new and experimental features, such as ion channel fitting and evolutionary/genetic algorithms. Furthermore, we will give a developers’ perspective for those who may be interested in adding modules to this toolbox.