Methods and Models in Neurophysics: Lecture Notes of the Les Houches Summer School 2003
About this ebook

Neuroscience is an interdisciplinary field that strives to understand the functioning of neural systems at levels ranging from biomolecules and cells to behaviour and higher brain functions (perception, memory, cognition). Neurophysics has flourished over the past three decades, becoming an indelible part of neuroscience, and has arguably entered its maturity. It encompasses a vast array of approaches stemming from theoretical physics, computer science, and applied mathematics. This book provides a detailed review of this field from basic concepts to its most recent developments.
Language: English
Release date: Dec 11, 2004
ISBN: 9780080536385



    Methods and Models in Neurophysics

    Les Houches

    C.C. Chow

    B. Gutkin

    D. Hansel

    C. Meunier

    J. Dalibard

    ISSN  0924-8099

    Volume 80 • Suppl. (C) • 2005

    Table of Contents

    Cover image

    Title page

    Contributors to this volume

    Copyright page

    École de Physique des Houches

    Previous sessions

    Lecturers

    Teaching assistants

    Speakers at the workshops

    Participants

    Preface

    Course 1: Experimenting with Theory

    1. Overcoming communication barriers

    2. Modeling with biological neurons: the dynamic clamp

    3. The traps inherent in building conductance-based models

    4. Theory can drive new experiments

    5. Conclusions

    Course 2: Understanding Neuronal Dynamics by Geometrical Dissection of Minimal Models

    1. Introduction

    2. Revisiting the Hodgkin–Huxley equations

    3. Morris-Lecar model

    4. Bursting, cellular level

    5. Bursting, network generated. Episodic rhythms in the developing spinal cord

    6. Chapter summary

    Course 3: Geometric Singular Perturbation Analysis of Neuronal Dynamics

    1. Introduction

    2. Introduction to dynamical systems

    3. Properties of a single neuron

    4. Two mutually coupled cells

    5. Excitatory-inhibitory networks

    6. Activity patterns in the basal ganglia

    Course 4: Theory of Neural Synchrony

    1. Introduction

    2. Weakly coupled oscillators

    3. Strongly coupled oscillators: mechanisms of synchrony

    4. Conclusion

    Acknowledgements

    Appendix A. Hodgkin–Huxley and Wang–Buzsáki models

    Appendix B. Measure of synchrony and variability in numerical simulations

    Appendix C. Reduction of a conductance-based model to the QIF model

    Course 5: Some Useful Numerical Techniques for Simulating Integrate-and-Fire Networks

    1. Introduction

    2. The conductance-based I&F model

    3. Modified time-stepping schemes

    4. Synaptic interactions

    5. Simulating a V1 model

    Acknowledgments

    Course 6: Propagation of Pulses in Cortical Networks: The Single-Spike Approximation

    Abstract

    1. Introduction

    2. Propagating pulses in networks of excitatory neurons

    3. Propagating pulses in networks of excitatory and inhibitory neurons

    4. Discussion

    Acknowledgements

    Appendix A. Stability of the lower branch

    Course 7: Activity-Dependent Transmission in Neocortical Synapses

    Abstract

    1. Introduction

    2. Phenomenological model of synaptic depression and facilitation

    3. Dynamic synaptic transmission on the population level

    4. Recurrent networks with synaptic depression

    5. Conclusion

    Acknowledgements

    Course 8: Theory of Large Recurrent Networks: From Spikes to Behavior

    1. Introduction

    2. From spikes to rates I: rates in asynchronous states

    3. From spikes to rates II: dynamics and conductances

    4. Persistent activity and neural integration in the brain

    5. Feature selectivity in recurrent networks: the ring model

    Conclusion

    6. Models of associative memory

    Discussion

    7. Concluding remarks

    Acknowledgements

    Course 9: Irregular Activity in Large Networks of Neurons

    1. Introduction

    2. A simple binary model

    3. A memory model

    4. A model of visual cortex hypercolumn

    5. Adding realism: integrate-and-fire network

    6. Discussion

    Acknowledgements

    Course 10: Network Models of Memory

    1. Introduction

    2. Persistent neuronal activity during delayed response experiments

    3. Scenarios for multistability in neural systems

    4. Networks of binary neurons with discrete attractors

    5. Learning

    6. Networks of spiking neurons with discrete attractors

    7. Plasticity of persistent activity

    8. Models with continuous attractors

    9. Conclusions

    Acknowledgements

    Course 11: Pattern Formation in Visual Cortex

    Course 12: Symmetry Breaking and Pattern Selection in Visual Cortical Development

    1. Introduction

    2. The pattern of orientation preference columns

    3. Symmetries in the development of orientation columns

    4. From learning to dynamics

    5. Generation and motion of pinwheels

    6. The problem of pinwheel stability

    7. Weakly nonlinear analysis of pattern selection

    8. A Swift–Hohenberg model with stable pinwheel patterns

    9. Discussion

    Course 13: Of the Evolution of the Brain

    1. Introduction and summary

    2. The phase transition that made us mammals

    3. Maps and patterns of threshold-linear units

    4. Validation of the lamination hypothesis

    5. What do we need DG and CA1 for?

    6. Infinite recursion and the origin of cognition

    7. Reducing local networks to Potts units

    Acknowledgments

    Course 14: Theory of Point Processes for Neural Systems

    1. Neural spike trains as point processes

    2. Integrate and fire models and interspike interval distributions

    3. The conditional intensity function and interevent time probability density

    4. Joint probability density of a point process [5]

    5. Special point process models

    6. The time-rescaling theorem

    7. Simulation of point processes

    8. Poisson limit theorems

    9. Problems

    Acknowledgments

    Course 15: Technique(s) for Spike-Sorting

    1. Introduction

    2. The problem to solve

    3. Two features of single neuron data we would like to include in the spike-sorting procedure

    4. Noise properties

    5. Probabilistic data generation model

    6. Markov chains

    7. The Metropolis–Hastings algorithm and its relatives

    The Most Important Comment of these Lecture Notes

    8. Priors choice

    9. The good use of the ergodic theorem. A warning

    10. Slow relaxation and the replica exchange method

    11. An Example from a simulated data set

    12. Conclusions

    13. Exercises solutions

    Course 16: The Emergence of Relevant Data Representations: An Information Theoretic Approach

    Abstract

    1. Part I: the fundamental dilemma

    2. Part II: Shannon’s information theory: a new perspective

    3. Part III: relevant data representation

    4. Part IV: applications and extensions

    Acknowledgments

    Contributors to this volume

    P. Bressloff, E. Brown, N. Brunel, D. Golomb, E. Marder, G. Mato, C. Pouzat, J. Rinzel, M. Shelley, H. Sompolinsky, D. Terman, N. Tishby, A. Treves, M. Tsodyks, C. van Vreeswijk and F. Wolf

    Copyright page

    ELSEVIER B.V.

    Sara Burgerhartstraat 25

    P.O. Box 211, 1000 AE

    Amsterdam,

    The Netherlands

    ELSEVIER Inc.

    525 B Street, Suite 1900

    San Diego, CA 92101-4495

    USA

    ELSEVIER Ltd

    The Boulevard, Langford Lane

    Kidlington, Oxford OX5 1GB

    UK

    ELSEVIER Ltd

    84 Theobalds Road

    London WC1X 8RR

    UK

    © 2005 Elsevier B.V. All rights reserved

    This work is protected under copyright by Elsevier B.V., and the following terms and conditions apply to its use:

    Photocopying

    Single photocopies of single chapters may be made for personal use as allowed by national copyright laws. Permission of the Publisher and payment of a fee is required for all other photocopying, including multiple or systematic copying, copying for advertising or promotional purposes, resale, and all forms of document delivery. Special rates are available for educational institutions that wish to make photocopies for non-profit educational classroom use.

    Permissions may be sought directly from Elsevier’s Rights Department in Oxford, UK: phone (+44) 1865 843830, fax (+44) 1865 853333, e-mail: permissions@elsevier.com. Requests may also be completed on-line via the Elsevier homepage (http://www.elsevier.com/locate/permissions).

    In the USA, users may clear permissions and make payments through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA; phone: (+1) (978) 7508400, fax: (+1) (978) 7504744, and in the UK through the Copyright Licensing Agency Rapid Clearance Service (CLARCS), 90 Tottenham Court Road, London W1P 0LP, UK; phone: (+44) 20 7631 5555, fax: (+44) 20 7631 5500. Other countries may have a local reprographic rights agency for payments.

    Derivative Works

    Tables of contents may be reproduced for internal circulation, but permission of the Publisher is required for external resale or distribution of such material. Permission of the Publisher is required for all other derivative works, including compilations and translations.

    Electronic Storage or Usage

    Permission of the Publisher is required to store or use electronically any material contained in this work, including any chapter or part of a chapter.

    Except as outlined above, no part of this work may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior written permission of the Publisher.

    Address permissions requests to: Elsevier’s Rights Department, at the fax and e-mail addresses noted above.

    Notice

    No responsibility is assumed by the Publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

    First edition 2005

    Library of Congress Cataloging in Publication Data

    A catalog record is available from the Library of Congress.

    British Library Cataloguing in Publication Data

    A catalogue record is available from the British Library.

    ISBN: 0-444-51792-8

    ISSN: 0924-8099

    The paper used in this publication meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).

    Printed in The Netherlands.

    École de Physique des Houches

    A joint inter-university service of the Université Joseph Fourier de Grenoble

    and the Institut National Polytechnique de Grenoble

    Subsidized by the Ministère de l’Éducation Nationale,

    de l’Enseignement Supérieur et de la Recherche,

    the Centre National de la Recherche Scientifique,

    and the Commissariat à l’Énergie Atomique

    Members of the board of directors:

    Yannick Vallée (président), Paul Jacquet (vice-président), Cécile DeWitt, Mauricette Dupois, Thérèze Encrenaz, Bertrand Fourcade, Luc Frappat, Jean-François Joanny, Michèle Leduc, Jean-Yves Marzin, Giorgio Parisi, Eva Pebay-Peyroula, Michel Peyrard, Luc Poggioli, Jean-Paul Poirier, Michel Schlenker, François Weiss, Philippe Wisler, Jean Zinn-Justin

    Director:

    Jean Dalibard, Laboratoire Kastler Brossel, Paris, France

    Scientific directors of Session LXXX:

    Carson C. Chow, University of Pittsburgh, USA

    Boris Gutkin, University College, London, UK

    David Hansel, CNRS and Université René Descartes, Paris, France

    Claude Meunier, CNRS and Université René Descartes, Paris, France

    Idan Segev, Institute of Life Sciences, Jerusalem, Israel

    Previous sessions

    Publishers:

    – Session VIII: Dunod, Wiley, Methuen

    – Sessions IX and X: Herman, Wiley

    – Session XI: Gordon and Breach, Presses Universitaires

    – Sessions XII–XXV: Gordon and Breach

    – Sessions XXVI–LXVIII: North-Holland

    – Sessions LXIX–LXXVIII: EDP Sciences, Springer

    – Sessions LXXIX–LXXX: Elsevier

    Lecturers

    Larry Abbott,     Volen Center for Complex Systems and Department of Biology, Brandeis University, Waltham, MA 02454-9110, USA

    Philippe Ascher,     Laboratoire de Physiologie Cérébrale, UMR 8118, Université René Descartes, 45 Rue des Saints-Pères, 75270, Paris, France

    Paul Bressloff,     Department of Mathematics, 155 South 1400 East 233 JWB, University of Utah, Salt Lake City, UT 84112, USA

    Emery N. Brown,     Neuroscience Statistics Research Laboratory, Massachusetts General Hospital, 55 Fruit Street, Clinics 3, Boston, MA 02114, USA

    Nicolas Brunel,     Laboratoire de Neurophysique et Physiologie du Système Moteur, CNRS-UMR8119, Université René Descartes, 45 rue des Saints Pères, 75270 Paris Cedex 06, France

    Wulfram Gerstner,     Laboratory of Computational Neuroscience, EPFL – Batiment AA-B, CH-1015 Lausanne, Switzerland

    David Golomb,     Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0319, USA

    Eve Marder,     Volen Center for Complex Systems and Department of Biology, Brandeis University, Waltham, MA 02454-9110, USA

    German Mato,     Centro Atómico Bariloche e Instituto Balseiro, 8400 S.C. de Bariloche, Río Negro, Argentina

    Christophe Pouzat,     Laboratoire de Physiologie Cérébrale, CNRS UMR 8118, UFR biomédical de l’Université Paris V, 45 rue des Saints Pères 75006, Paris, France

    John Rinzel,     Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA

    Mike Shelley,     Courant Institute, 251 Mercer St. New York, NY 10012, USA

    Haim Sompolinsky,     Racah Institute of Physics, The Hebrew University, E. Safra Campus, Givat Ram 91904, Jerusalem Israel

    David Terman,     Department of Mathematics, 231 W. 18th Ave., The Ohio State University, Columbus, Ohio 43210-1174, USA

    Naftali Tishby,     Institute of Computer Science, The Hebrew University, The E. Safra Campus, 91904 Jerusalem, Israel

    Alessandro Treves,     SISSA – Cognitive Neuroscience, Via Beirut 2-4, I-34014, Italy

    Misha Tsodyks,     Brain Research Building Room 133, Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel

    Carl van Vreeswijk,     Neurophysique et Physiologie du Système Moteur, CNRS-UMR8119, Université René Descartes, 45 rue des Saints Pères, 75270 Paris Cedex 06, France

    Fred Wolf,     MPI für Strömungsforschung, Dept. of Nonlinear Dynamics, Postfach 2853 Building 2, Room 02207, Germany

    Teaching assistants

    Mickey London,     Department of Neurobiology, Inst. Life Science, The Hebrew University of Jerusalem, Jerusalem 91904, Israel

    Oren Shriki,     Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel

    Organizers

    Carson C. Chow,     Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA

    Jean Dalibard,     Laboratoire Kastler Brossel, Ecole normale supérieure, 24 rue Lhomond, 75231 Paris cedex 05, France

    Boris Gutkin,     Gatsby Computational Neuroscience Unit, Alexandra House, 17 Queen Square, London WC1N 3AN, UK

    David Hansel,     Laboratoire de Neurophysique et Physiologie du Système Moteur, UMR-8119, 45 Rue des Saints-Pères, 75270 Paris, France

    Claude Meunier,     Laboratoire de Neurophysique et Physiologie du Système Moteur, UMR-8119, 45 Rue des Saints-Pères, 75270, Paris, France

    Idan Segev,     Institute of Life Sciences, Department of Neurobiology, The Hebrew University, Edmond Safra Campus, Givat Ram, Jerusalem, 91904, Israel.

    Speakers at the workshops

    Workshop 1: The neuron in the network

    Alexander Borst,     Department of Systems and Computational Neurobiology, Max-Planck-Institute of Neurobiology, Am Klopferspitz 18a, D-82152 Martinsried, Germany

    Lyle Graham,     Laboratoire de Neurophysique et Physiologie du Système Moteur, CNRS UMR 8119, UFR Biomédicale de l’Université René Descartes, 45 rue des Saint-Pères 75006 Paris, France

    Peter Jonas,     Physiologisches Institut, Universität Freiburg, Hermann-Herder-Strasse 7, D-79104, Freiburg, Germany

    Matthew Larkum,     Max Planck Institute for Medical Research, Jahnstr. 29, D-69120 Heidelberg, Germany

    Workshop 2: Dynamics in the somatosensory system

    Ehud Ahissar,     Department of Neurobiology, The Weizmann Institute, Rehovot 76100, Israel

    Michael Brecht,     Max-Planck Institut für medizinische Forschung, Abteilung Zellphysiologie, Jahnstraße 29, D-69120 Heidelberg, Germany

    Rasmus Petersen,     SISSA – Cognitive Neuroscience, Via Beirut 2-4, I-34014, Italy

    Workshop 3: Learning and Memory

    Merav Ahissar,     Department of Psychology, The Hebrew University, Mount Scopus, Jerusalem 91905, Israel

    Guo-Qiang Bi,     Department of Neurobiology, School of Medicine, E1451 Biomedical Science Tower, University of Pittsburgh, 3500 Terrace Street, Pittsburgh, PA 15261, USA

    Yadin Dudai,     Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel

    Mayank Mehta,     E18-366, M.I.T., 50 Ames St., Cambridge, MA 02139, USA

    Participants

    Tanya Baker,     Department of Physics, University of Chicago, 5720 South Ellis Avenue, IL 60637, USA

    Alberto Bernacchia,     INFM, Gruppo NAL, Dipartimento di Fisica E. Fermi, Università La Sapienza, P.le Aldo Moro 5, 00185 Roma, Italy

    Dmitry Bibichkov,     Department of Neurobiology, Weizmann Institute of Science, 76100 Rehovot, Israel

    Alla Borishyuk,     Mathematical Biosciences Institute, Ohio State University, Columbus, OH, USA

    Georg Bruun,     Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark

    Tuan Bui,     Department of Physiology, Botterell Hall, Queen’s University, Kingston, Ontario, Canada

    Simona Cocco,     Laboratoire de dynamique des fluides complexes, ULP-CNRS, 3 rue de l’Université, 67000 Strasbourg, France

    Rava da Silveira,     6 ch. Beau-Soleil, 1206 Genève, Switzerland

    Matthieu Delescluse,     Laboratoire de Physiologie Cérébrale, CNRS UMR 8118, 45 rue des Saints Pères, 75006 Paris, France

    Eliyahu Dremencov,     Life Sciences Faculty, Bar Ilan University, Ramat-Gan 52900, Israel

    Alexander Fedanov,     Institute of Mathematical Problems in Biology, Russian Academy of Sciences, Puschino, Moscow region 142 290, Russia

    Virginia Flanagin,     MPI of Neurobiology, Systems and Computational Neurobiology, Am Klopferspitz 18a, 82152 Martinsried, Germany

    Stefanos Folias,     Department of Mathematics, University of Utah, 155 South, 1400 East, JWB 233, Salt Lake City, Utah 84112-0090, USA

    Tim Gollisch,     Institute for Theoretical Biology, Humboldt University Berlin, Invalidenstrasse 43, 10115 Berlin, Germany

    Robert Gütig,     Institute for Theoretical Biology, Humboldt University Berlin, Invalidenstraße 43, 10115 Berlin, Germany

    Mark Histed,     MIT E25-236, 77 Massachusetts Avenue, Cambridge MA 02139, USA

    Anne Hsu,     Tolman Hall, University of California, Berkeley CA 94720, USA

    Kukjin Kang,     Center for Neural Science, New York University, 4 Washington place, Room 809, New York, NY 10003, USA

    Matthias Kaschube,     MPI Strömungsforschung, Bunsenstr.10, 37073 Göttingen, Germany

    Mikhail Katkov,     Weizmann Institute of Science, Neurophysiology Department, Rehovot, 76100 Israel

    Kilian Koepsell,     Redwood Neuroscience Institute, 1010 El Camino Real, Suite 380, Menlo Park, CA 94025, USA

    Arvind Kumar,     Laboratory for Neurobiology and Biophysics, Albert Ludwigs University of Freiburg, 79104 Freiburg, Germany

    Arthur Leblois,     Laboratoire de Neurophysique et Physiologie du Système Moteur, Université René Descartes, 45 rue des Saints Pères, 75270 Paris cedex 06, France

    Eunjeong Lee,     PFC Workgroup, Netherlands Institute for Brain Research, Meibergdreef 33, 1105 AZ, The Netherlands

    Alexander Lerchner,     NORDITA, Blegdamsvej 17, 2100 Copenhagen, Denmark

    Dmitry Lesnik,     Heinrich Heine Universität, Institut für Theoretische Physik, Universitätsstr. 1, Gebäude 25.32, 40225 Düsseldorf, Germany

    Alex Loebel,     The Weizmann Institute of Science, Neurobiology Department, Rehovot, 76100, Israel

    Keiji Miura,     Kyoto University, Department of Physics, Kyoto 606-8502, Japan

    Samat Moldakarimov,     Department of Mathematics, 301 Thackeray Hall, University of Pittsburgh, Pittsburgh, PA 15260, USA

    Rémi Monasson,     LPT/ENS, 24 rue Lhomond, 75231 Paris cedex, France

    Erez Persi,     Laboratoire de Neurophysique et Physiologie du Système Moteur, Université René Descartes, 45 rue des Saints Pères, 75270 Paris, France

    Benjamin Pfeuty,     Laboratoire de Neurophysique et Neurophysiologie du Système Moteur, Université René Descartes, 45 rue des Saints Pères, 75270 Paris cedex 06, France

    Jean-Pascal Pfister,     Laboratory of Computational Neuroscience, Swiss Federal Institute of Technology Lausanne EPFL, 1015 Lausanne, Switzerland

    Xaq Pitkow,     Harvard University, 16 Divinity Avenue, Biological Laboratories room 4033, Cambridge MA 02138, USA

    Dariusz Plewczynski,     Plac Przymierza 3ml, 03-944, Warsaw, Poland

    Irina Popova,     Institute of Theoretical and Experimental Biophysics, Russian Academy of Sciences, Puschino, Moscow district, 142290, Russia

    Son Preminger,     Weizmann Institute of Science, Department of Neurobiology, Rehovot 76100, Israel

    Yasser Roudi Rashtabadi,     Cognitive Neurosciences, SISSA, via Beirut 2-4, 34014 Trieste, Italy

    Noah Russell,     Department of Neurophysiology, National Institute for Medical Research, The Ridgeway, Mill Hill, London NW7 1AA, U.K.

    Michael Schnabel,     MPI Strömungsforschung, Bunsenstr. 10, 37073 Göttingen, Germany

    Erich Schulzke,     Institute of Theoretical Neurophysics, P.O. Box 330440, 28334 Bremen, Germany

    Vahid Shahrezaei,     Physics Department, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada

    Greg Stephens,     Biophysics Group, MS-D454, Los Alamos National Laboratory, Los Alamos, New Mexico, 87545, USA

    Anton Tchijov,     Computational Physics Laboratory, A.F. Ioffe Physico-Technical Institute of RAS, Polytekhnicheskaya str. 26, 194021 St Petersburg, Russia

    Jennifer Wang,     MIT, Department of Brain and Cognitive Sciences, Seung lab., 77 Massachusetts Avenue, room E25-425, Cambridge, MA 02139, USA

    Olivia White,     Harvard University, Department of Physics, Jefferson Laboratories, 17 Oxford Str., Cambridge, MA 02140 USA

    Valentin Zhigulin,     California Institute of Technology, Physics Department, MC 103-33, Pasadena, CA 91125, USA

    Preface

    Carson C. Chow, Boris Gutkin, David Hansel, Claude Meunier and Jean Dalibard

    Neuroscience is an inherently interdisciplinary field that strives to understand the functioning of neural systems at levels ranging from molecules and cells to behaviour and cognition. Theoretical neuroscience has flourished over the past three decades, becoming an indelible part of neuroscience, and has arguably entered its maturity. It encompasses a vast array of approaches stemming from theoretical physics, computer science, and applied mathematics.

    Historically, the application of theoretical techniques to unravelling the operating principles of neurons started with the seminal work of Hodgkin and Huxley, who explained the generation of action potentials and their propagation. A decade later, Rall introduced an appropriate formal framework for investigating synaptic integration. Theoretical physics inspired new theories of network behaviour and organization, following the early work by Wilson and Cowan. The seminal work of Hopfield and of Amit, Gutfreund, and Sompolinsky demonstrated that statistical mechanics provided a powerful framework to analyse memory and learning. Progressively, mathematicians and physicists have elaborated a theoretical framework for analysing neural systems, creating a subfield one may term neurophysics. At the same time, the emphasis has moved beyond models loosely inspired by biology to ones that directly reflect organisational and biological principles of neural systems.
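    For reference, the membrane equation that Hodgkin and Huxley introduced (and that Course 2 revisits) takes the standard form

    $$C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - g_L (V - E_L) + I_{\mathrm{ext}},$$

    where each gating variable $x \in \{m, h, n\}$ relaxes according to $dx/dt = \alpha_x(V)(1 - x) - \beta_x(V)\,x$.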

    Methods from statistical mechanics, dynamical systems theory, singular perturbation theory, theory of stochastic processes, theory of signal processing and information theory have enriched neurophysics over the years. For example, geometrical methods for nonlinear differential equations considerably improved our understanding of the dynamics of individual neurons and small circuits. Concepts from oscillation theory, such as averaging, phase reductions, and discrete maps, have elucidated how cell and synaptic properties determine the dynamical states of systems of interacting neurons. Mean field theories have been successfully used to describe and analyse collective dynamics of large-scale networks. Recently, non-trivial extensions of mean field theory have arisen together with the use of symmetric bifurcation theory and group theoretic approaches to study pattern formation in cortical areas. Information theory is now commonly employed to analyse spike trains, to uncover neural coding strategies and to examine their optimality.
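    As a concrete instance of the phase-reduction methods mentioned above (developed in Course 4), a network of weakly coupled limit-cycle neurons reduces, after averaging, to equations for the phases alone,

    $$\frac{d\theta_i}{dt} = \omega_i + \varepsilon \sum_{j} \Gamma_{ij}(\theta_j - \theta_i),$$

    where the interaction function $\Gamma_{ij}$ is the average over one period of the product of neuron $i$'s phase response curve and the coupling current it receives from neuron $j$.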

    This volume grew out of the five-week course of the Les Houches Summer School of Physics held in July–August 2003. The course consisted of a series of theoretical lectures on neurophysics, and those lectures make up this volume. To provide an experimental counterpart to some of the theoretical issues addressed, three workshops were dedicated to selected topics in experimental neuroscience: 1) The neuron in the network, 2) Dynamics in the somatosensory system, and 3) Learning and memory. There were also special seminars by Philippe Ascher and Idan Segev.

    A number of excellent textbooks in computational and theoretical neuroscience are now available. However, many of these books lack detailed coverage of the approaches originating in mathematics and physics. This book fills that gap by providing complete derivations of the theoretical techniques now used in neurophysics. The lecturers have striven to present coherent frameworks in their contributions, to include the important theorems and axioms upon which the methods are based, and to delineate the limitations of each formalism. We hope this volume will provide the reader with the tools and background needed to master theoretical neuroscience and to explore new directions.

    We address this volume to advanced graduate students and post-doctoral fellows in mathematics and physics, to practicing researchers moving into theoretical neuroscience, and to neuroscientists who would like to gain a deeper understanding of theory. We expect the book to serve both as a desktop reference and as a textbook for advanced courses and seminars on neurophysics.

    We would like to thank the faculty for their invaluable contributions to the summer school and for the contents of this volume. We express our gratitude to all the experimentalists who invigorated the topical workshops through their inspired talks. We are indebted to Professor Philippe Ascher for his public lecture intended for a general audience. We would like to acknowledge CNRS, IBRO, NATO, and NSF for their generous support as well as the Les Houches Physics School, and its administrative team. Last but not least we would like to thank the attendees of the course who were instrumental in enlivening the school for five weeks.

    Course 1

    Experimenting with Theory

    Eve Marder,     Volen Center MS 013 Brandeis University Waltham, MA 02454-9110 U.S.A.

    Contents

    1. Overcoming communication barriers

    2. Modeling with biological neurons: the dynamic clamp

    3. The traps inherent in building conductance-based models

    4. Theory can drive new experiments

    5. Conclusions

    References

    The summer course at Les Houches was designed to provide additional training in methods of computational neuroscience to physicists and mathematicians interested in working on problems fundamental to our understanding of the brain. Although a biologist by training and an experimentalist by practice, I have been privileged for almost 20 years to work and interact with a number of theorists. I was initially drawn to collaboration with theorists because it was already clear to me that explaining the dynamics of the rhythmic motor patterns produced by central pattern-generating networks in terms of the underlying cellular mechanisms would require more than the reductionist methods we were then using.

    Starting about 15 years ago, the number of young people initially trained in physics, computer science, and mathematics who have been drawn to neuroscience increased dramatically. The advent of gene microarray technology in the post-genomic era has sensitized many biologists to the need for new quantitative methods of all kinds in the analysis of complex biological systems in general, and neuroscience in particular. Thus, both the need for formal models to capture the dynamics of networks of molecules and neurons and the need for sophisticated statistical measures to extract meaning from multi-dimensional data sets have made experimental biologists more interested than ever before in the tools that theory of all kinds can provide. That said, there are significant cultural differences between the training of physicists and mathematicians and that of experimental biologists; these differences foster different assumptions about how knowledge is created and transmitted, which can become barriers to communication between theorists and experimentalists. In the first section of this essay I will make some general comments that may help those moving from physics and math into biology understand some of the culture of experimental biology that may at first mystify them. In the second section I will discuss some traps, or confounds, inherent in trying to build useful models of neuronal and network function, as well as some modeling methods that I have been involved in developing. Finally, I will describe a specific case in which theory has been extremely instructive, and perhaps essential, in illuminating a fundamental biological problem.

    1. Overcoming communication barriers

    Like many of my colleagues who did not benefit from advanced training in mathematics or physics, I was initially intimidated by many theory papers and talks. Nonetheless, over the years I have developed a strategy that helps me in approaching many theoretical studies. I ask the following questions: 1) What is the problem that the investigator is posing? Is it a problem that I as an experimental neuroscientist think important or interesting? 2) What are the assumptions and simplifications that are made in the model construction? Do any of these fly in the face of something that I consider a fundamental biological principle relevant to the problem at hand? 3) Does the final result make sense to me? If not, is it because the model was inappropriately chosen or is it because the model has revealed an essential inadequacy of my previous knowledge and set of assumptions? Obviously, these questions are not very different from those that I ask of experimental papers.

    Good theory, like good experimental work, has a single definition: at the end of the work the investigator has created new knowledge. New knowledge can sometimes be created with a very simple calculation or a very simple piece of equipment. For me, some of the most beautiful or useful papers, even in today’s world of high technology, come from experiments done with a single tension transducer (Morris and Hooper, 1997; Morris et al., 2000) or a trivial simulation (Kepler et al., 1990), if they result in new insights. Nonetheless one of the problems endemic to working at the juncture between two fields is that a given piece of work may provide new insight for one part of the community but fail to do so for another. For example, if you had asked the experimental neuroscience community in 1989 what would result if you depolarized an oscillatory neuron, almost everyone would have said the period would invariably decrease. I and many of my colleagues were very surprised that merely changing the relative conductance densities of the currents in the oscillator could alter this result. Indeed, depending on the specifics of the conductances found in a neuronal oscillator, it may either increase or decrease in period when depolarized (Kepler et al., 1990; Skinner et al., 1993). This result was hardly surprising to anyone with any mathematical sophistication or experience, required no special computational skills to demonstrate, but was illuminating to many experimentalists trying to understand what the consequences of altering a specific set of currents in a specific cell are likely to be.
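    The numerical experiment behind this observation is easy to reproduce. The sketch below measures the period of a model oscillator before and after "depolarization" by additional injected current. The Morris–Lecar model and all parameter values are illustrative stand-ins, not the models of Kepler et al. (1990); whether the period lengthens or shortens depends on the conductance densities one chooses, which is precisely the point.

```python
# Measure how depolarizing current changes the period of a model oscillator.
# Morris-Lecar parameters below are a standard illustrative set, not the
# conductance-based models discussed in the text.
import numpy as np

def ml_period(I, g_ca=4.4, g_k=8.0, T=4000.0, dt=0.05):
    """Mean interspike period (ms) of a Morris-Lecar oscillator, or None if silent."""
    C, g_l, E_l, E_ca, E_k = 20.0, 2.0, -60.0, 120.0, -84.0
    v1, v2, v3, v4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    V, w, spikes = -60.0, 0.0, []
    for step in range(int(T / dt)):
        m_inf = 0.5 * (1.0 + np.tanh((V - v1) / v2))   # fast Ca2+ activation
        w_inf = 0.5 * (1.0 + np.tanh((V - v3) / v4))   # K+ activation
        tau_w = 1.0 / np.cosh((V - v3) / (2.0 * v4))   # K+ time scale
        dV = (I - g_l * (V - E_l) - g_ca * m_inf * (V - E_ca)
              - g_k * w * (V - E_k)) / C
        V_new = V + dt * dV                            # forward Euler step
        w += dt * phi * (w_inf - w) / tau_w
        if V < 0.0 <= V_new and step * dt > 1000.0:    # spike upstroke, past transient
            spikes.append(step * dt)
        V = V_new
    return float(np.mean(np.diff(spikes))) if len(spikes) > 2 else None

# "Depolarize" by increasing the injected current; rerun with other g_ca, g_k
# combinations to see the direction of the period change vary.
for I in (100.0, 120.0):
    print(I, ml_period(I))
```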

    The converse also occurs: there are times when elegant mathematics that was stimulated by a problem in neuroscience may fail to bring insight to the experimental community. Some work may be perceived by the experimental community as nothing more than the translation of knowledge from the language of biophysics to mathematics. For example, most experimental neuroscientists believe that they understand the squid axon action potential, and most would not find much additional new knowledge in the analysis of many simplified models of the Hodgkin-Huxley equations. That said, there are many mathematicians who will not feel they understand the squid axon action potential until it has been translated into a mathematical model that describes its threshold and firing properties in geometric or analytical terms. So are mathematical models that reduce the Hodgkin-Huxley equations to forms that are more suited to mathematics useful? The answer to this is certainly yes for the mathematicians who will not find understanding in their absence. Less so for many experimentalists who will find it difficult to extract any new understanding from these translations from the language of biology to the language of mathematics.
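    A canonical example of such a reduction is the FitzHugh–Nagumo model, which collapses the four Hodgkin–Huxley variables onto a fast voltage-like variable $v$ and a slow recovery variable $w$,

    $$\dot{v} = v - \frac{v^3}{3} - w + I, \qquad \dot{w} = \epsilon\,(v + a - b\,w),$$

    whose nullclines display the threshold and firing properties geometrically in the $(v, w)$ phase plane.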

    Thus there is a constant and unresolved tension that results from the fact that some of the most useful theory for the neurobiological community may not challenge the cleverness of the theorist, while large components of the neuroscience community may not be able to appreciate all of the theory inspired by neuroscience. Nevertheless, there continue to be numerous examples of theory papers that address important neuroscience problems and that speak to both the experimental and theoretical communities. Many of these studies are successful because they explore the computational consequences of biological phenomena that have been well-characterized but whose implications are incompletely understood (Abbott et al., 1997).

    Not infrequently, theorists are mystified that many of the experimental papers they read do not contain the data they need to adequately constrain their models. Theorists are often very frustrated by incomplete data sets in experimental papers, and must often say to themselves, “Why didn’t they measure x while they were measuring y?” I am never surprised by these complaints, as I think it would be remarkable if the data needed to constrain a formal model existed in the literature unless they had been collected explicitly for that purpose. The key to understanding this is to appreciate that experimentalists will publish papers when they feel that they have a first-pass understanding of a new finding. Such papers may often conclude with a word model or a cartoon that captures the experimentalist’s intuition of what the findings may mean. Papers that lead to new insights are easy to publish and satisfying to read. Papers that go over old ground and merely provide more quantitative data or more rigorous analyses are difficult for an experimentalist to publish, as they lack novelty, unless the more quantitatively collected and analyzed data are explicitly coupled to the testing of a formal model that requires rigorous data collection and analysis. Moreover, motivating an experimentalist to spend months or even years going over what may be perceived as old ground will only be possible when the experimentalist himself or herself is actively involved in the implicit or explicit development and testing of more rigorous and formal models. Theorists who ask experimentalists to collect data for them are unlikely to meet with much success. Theorists who ask experimentalists which problems intrigue and puzzle them are likely to find collaborators willing to do difficult experiments to test models.

    Many failures of communication between theorists and experimentalists surround the issue of details. Biologists are trained to look for and exploit the differences among cells, synapses, channels, animals, and behaviors as clues to what factors may be important in a physiological or behavioral process. Therefore biologists will naturally assume that the details of synaptic properties, neuronal structure, channel subtypes, neurotransmitter content, etc. will eventually be critical to understanding how the networks in which these neurons and synapses function operate. In the past, some theorists have been known to be dismissive of the differences among cells and synapses and to look for global simplifications, suggesting that the large numbers of neurons in the brain make the diversity among cells and synapses irrelevant for understanding the system properties by which the brain computes. Most biologists understand full well the necessity for simplification in the search for understanding, and have a highly developed intuition for which details are likely to be important for which questions and which can be ignored in a first-approximation explanation of a phenomenon. Successful collaboration between experimental biologists and theorists requires achieving consensus about how to decide on the importance of specific details when constructing models of system properties.

    2. Modeling with biological neurons: the dynamic clamp

    All model neurons by definition are incomplete representations of the biological neurons whose properties they are intended to capture. Even when attempts are made to construct models that are highly realistic representations of the conductances and geometry of biological neurons, and even when attempts are made to incorporate some pieces of intracellular signaling pathways, models will at best be missing some important features that contribute to the dynamics of neurons, and at worst, will be in a parameter regime that drastically fails to capture something important or essential. The dynamic clamp is a method that allows simulations to be done with biological neurons (Prinz et al., 2004a), thus serving as an intermediary between theoretical studies and experimental analyses. In fact, it is likely that many experimentalists who use the dynamic clamp will become more comfortable with theory in general as they experience the kinds of intuitions about system dynamics that can be obtained when one can systematically alter properties of currents and synapses in biological neurons and see their impact on cell and network behavior in living tissue.

    The dynamic clamp allows the investigator to ask a number of questions about the role of a current or a synapse, or a neuron in controlling the excitability of a single neuron or a network (Sharp et al., 1993a, b). In a dynamic clamp experiment intracellular electrodes are used to record membrane potential and inject current into one or several neurons (Sharp et al., 1993a, b). The injected current is calculated from a modeled conductance, which allows the investigator to set the maximal conductance and dynamic variables of the conductance. Thus the injected current varies as a function of the neuron’s membrane potential, essentially creating an artificial conductance in parallel with the other conductances in the neuron’s membrane. The dynamic clamp can be used in a number of different configurations: 1) The dynamic clamp allows the investigator to add or subtract conductances from a neuron, and to ask how the cell’s excitability is altered (Ma and Koester, 1996; Turrigiano et al., 1996). Unlike the case in a conventional simulation, it is not necessary to specify the densities and properties of all of the other conductances in the neuron, because the neuron itself provides these to the simulation. 2) The dynamic clamp can be used to create artificial synaptic connections among neurons, as the membrane potential of one neuron can be used to control an artificial synaptic conductance of a second neuron (Sharp et al., 1996). 3) The dynamic clamp can be used to construct hybrid circuits, in which model neurons are coupled to biological neurons (Bal et al., 2000; Le Masson et al., 2002). 4) The dynamic clamp can be used to apply realistic patterns of synaptic inputs to neurons in slice preparations to evaluate their dynamics under natural synaptic drive conditions (Chance et al., 2002; Reyes, 2003). All of these configurations have become invaluable for understanding how varying the parameters of one or more membrane or synaptic conductances influences system properties (Prinz et al., 2004a).
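    In outline, the dynamic clamp is a fast feedback loop: read the membrane potential, advance the model conductance, inject the corresponding current. The sketch below shows that loop structure for simple first-order gating; read_membrane_potential() and inject_current() are hypothetical placeholders for the amplifier and data-acquisition interface, and all kinetic parameters are illustrative.

```python
# Minimal sketch of a dynamic-clamp update loop, assuming a hypothetical
# hardware interface; the I/O functions below are stand-ins, not a real API.
import math

def read_membrane_potential():
    return -55.0   # placeholder: a real setup reads the electrode voltage (mV)

def inject_current(i):
    pass           # placeholder: a real setup drives the current output

g_max, E_rev = 10.0, -80.0           # artificial conductance (nS), reversal (mV)
V_half, k, tau_m = -40.0, 5.0, 5.0   # illustrative first-order gating kinetics
dt = 0.1                             # ms; real implementations run at >= 10 kHz

m = 0.0
for _ in range(100000):              # the real loop runs for the whole experiment
    V = read_membrane_potential()                      # record V (mV)
    m_inf = 1.0 / (1.0 + math.exp(-(V - V_half) / k))  # steady-state activation
    m += dt * (m_inf - m) / tau_m                      # advance the gating variable
    I = g_max * m * (V - E_rev)                        # conductance-based current
    inject_current(-I)   # sign convention depends on the amplifier wiring
```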

    Today the dynamic clamp is widely used in laboratories around the world (Prinz et al., 2004a). A number of different implementations have been developed, and these are continuing to evolve to take advantage of improved computer speed and board performance. Because the dynamic clamp allows simulations with biological neurons, it provides a mechanism to easily test many predictions of both word models and formal models without requiring the investigator to make too many assumptions or oversimplifications. As such, it is likely to see more and more use as a modeling tool at the cellular level.

    3. The traps inherent in building conductance-based models

    For some, the holy grail of modeling neurons, and ultimately networks, is to build a realistic neuron model that incorporates known measurements from biological neurons into a detailed compartmental model that captures the anatomical structure of the neuron. While this may appear to be more realistic, some fundamental assumptions that go into the construction of such models are often neglected, and they should be taken into account as one evaluates which questions are best suited to this kind of approach.

    1) How much does the modeler have to make up? The premise underlying building more and more detailed models is that as one incorporates more currents, signal transduction pathways, and anatomical structure, one comes closer to the real thing. However, the unfortunate reality is that, because of the nature of the data that we obtain on biological neurons, adding complexity to a model often means increasing the number of parameters that have not been measured, or cannot be measured properly. For example, there are often benefits to building multi-compartment models that segregate different conductances into regions of the neuron (Traub et al., 1991). However, adding compartments to a model requires that the modeler decide which conductances to put where, and in what amounts. This is particularly difficult because there are relatively few cases in which conductance densities have been measured in different regions of the neuron (Magee, 1999; Magee and Carruth, 1999; Magee and Cook, 2000; Frick et al., 2004). While it would certainly be interesting and important to model explicitly and correctly the intracellular Ca2+ concentrations and dynamics in all portions of the neuron, these are often unknown, and only rarely correlated with direct biophysical measurements (Frick et al., 2004). Likewise, the first models of intracellular signal transduction are being developed (Bhalla and Iyengar, 2001; Bhalla, 2002, 2003a, b; Yu et al., 2004), but again, it is rare that the data on signal transduction pathways are collected on the same neuron type that is modeled, although the hippocampal CA1 neuron is a fairly notable exception.

    2) Lack of uniqueness in models: is this a problem or a solution? The construction of conductance-based models with many different currents immediately brings up the issue of uniqueness. To what extent is some combination of maximal conductances a unique solution to modeling a specific neuron’s intrinsic electrical excitability? To what extent is it the job of the modeler to tune the model to a specific set of conductances that adequately captures the firing properties of the neuron? And what criteria does one use to decide how good a fit to expect? These are difficult questions, and obviously must be answered on a case-by-case basis. Nonetheless, several things are clear: a) In models with more than 5 or so different currents, it is almost certainly the case that there will be many different combinations of conductance densities that give quite similar behavior (Goldman et al., 2001; Prinz et al., 2003). b) In some models, minor changes in a single parameter will produce qualitative changes in behavior (Guckenheimer et al., 1993; Guckenheimer et al., 1997), and most importantly, it is the correlated values of many conductances that determine the neuron’s activity, not the density of a single conductance (Goldman et al., 2001; Golowasch et al., 2002; Prinz et al., 2003). This raises an important experimental difficulty: it is often very difficult to measure more than one conductance properly in a single neuron, and if averaged values are not ideally suited to constrain a model (Golowasch et al., 2002), then it becomes extremely difficult to collect the data necessary to build a detailed, conductance-based model. But the most important difficulty is that the range of variation of the properties of the individual neurons of a class is often not measured or known. Thus, most models are tuned to reflect mean behavior, although it is not clear that many neurons of a given type actually conform to the modeled mean.

    As single neuron and network models become larger, the practical difficulties of hand-tuning models become more onerous. A new solution is now possible with the advent of cheap Beowulf clusters that allow the construction of data bases of model neurons (Prinz et al., 2003) in which multiple forms of models are simulated by brute force and then searched for individual models with certain desired properties. This approach avoids some of the potential pitfalls of hand-tuning (Prinz et al., 2003), as it is more likely to find all of the likely configurations of a given model by sampling over a large region of parameter space.
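    The logic of such a database is simple, even if the scale (millions of models) is not. In the sketch below, a leaky integrate-and-fire cell, whose interspike interval has a closed form, stands in for the conductance-based models actually used by Prinz et al. (2003); the parameter grids and target ranges are illustrative. The query at the end illustrates the central finding: quite different parameter combinations can produce nearly identical output.

```python
# Minimal sketch of the brute-force database approach: enumerate parameter
# combinations, compute a summary measure for each model, store the results,
# then query the table for models with a desired property.
import itertools
import math

def lif_period(g_l, C, I, E_l=-70.0, V_th=-50.0, V_r=-65.0):
    """Interspike interval (ms) of a leaky integrate-and-fire cell, None if silent.
    Units: g_l in uS, C in nF, I in nA, voltages in mV."""
    V_inf = E_l + I / g_l               # steady-state voltage under constant drive
    if V_inf <= V_th:
        return None                      # subthreshold: the cell never fires
    tau = C / g_l                        # membrane time constant (ms)
    return tau * math.log((V_inf - V_r) / (V_inf - V_th))

# Build the "database": every combination on a coarse parameter grid.
database = [
    {"g_l": g_l, "C": C, "I": I, "period": lif_period(g_l, C, I)}
    for g_l, C, I in itertools.product(
        [0.05, 0.1, 0.2], [0.5, 1.0, 2.0], [1.0, 2.0, 4.0])
]

# Query: very different (g_l, C, I) triples yield nearly the same period.
for model in database:
    if model["period"] and 10.0 <= model["period"] <= 20.0:
        print(model)
```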

    The crustacean stomatogastric nervous system is a central pattern generating circuit that produces several rhythmic motor patterns, including the fast pyloric rhythm and the slower gastric mill rhythm (Harris-Warrick et al., 1992). The stomatogastric ganglion (STG) has only 30 neurons, all of which are individually identifiable. This has facilitated the study of the intrinsic membrane properties of the neurons of the STG and the synaptic connections among them. Consequently, this preparation allows a whole series of investigations into the mechanisms by which its system properties emerge from the interactions among its component parts. Towards this end we have used the data base approach to ask how variations in synaptic strength and intrinsic properties together cooperate to produce the triphasic pyloric rhythm. We first constructed a data base of 1.7 million single neuron models (Prinz et al., 2003). We then selected from this data base candidate neuron models to represent three different cell types in the pyloric rhythm of the lobster stomatogastric ganglion. Using these, we constructed more than 20,000,000 versions of the pyloric network of the crustacean stomatogastric ganglion (Prinz et al., 2004b), thus allowing us to ask how the strengths of the synaptic connections and the properties of the individual neurons lead to the production of a well-constrained pyloric rhythm. We see that some synaptic strengths are inconsistent with the production of a pyloric rhythm, while other synapses can vary dramatically in strength, as long as these changes are accompanied by other changes in network parameters. These findings lead to the conclusion that there are multiple solutions consistent with the production of very similar network output patterns (Prinz et al., 2004a).

    4. Theory can drive new experiments

    One of the most important uses of theory is to allow the investigator to step back from a puzzling problem, imagine a possible solution, and make specific predictions that can stimulate new lines of investigation. It is extremely gratifying when the underlying premise of the model turns out to have experimental validity. An example of this is our work on the control of intrinsic neuronal excitability. In the early 1990s we were attempting to construct a realistic model of one of the neurons of the crab stomatogastric ganglion, the LP neuron, based on our voltage-clamp measurements of the currents in that neuron (Buchholtz et al., 1992; Golowasch et al., 1992; Golowasch and Marder, 1992). While we were able to hand-tune this model to approximate much of the behavior of the LP neuron, this procedure was inherently unsatisfying, because, like others before us, we were unable to measure all of the currents in the LP neuron. More critically, in the process of tuning this model, it became clear that the model was very sensitive to some parameters, including some that we were not able to measure, and others that showed considerable variance in their measurements. This caused us to pose the question of how individual neurons regulate their conductance densities to maintain their intrinsic membrane properties over the lifetime of the animal, while the individual membrane channels turn over in the membrane in hours, days, or weeks (LeMasson et al., 1993; Marder et al., 1996; Liu et al., 1998; Marder and Prinz, 2002).

    We have constructed a class of self-tuning models in which activity sensors are used as a feedback signal to slowly alter the density of the membrane channels so that constant neuronal and network activities are maintained (LeMasson et al., 1993; Siegel et al., 1994; Marder et al., 1996; Liu et al., 1998; Golowasch et al., 1999b; Marder and Prinz, 2002). In conventional conductance-based models the maximal conductance of each current is a fixed parameter and a neuron’s activity is a consequence of the number and distribution of its ion channels. This assumption presumes that the number of each kind of membrane channel is independently controlled. Alternatively, in these self-tuning models we assume that early in development a neuron’s target activity levels are specified, and then homeostatically regulated. The paradigm shift comes from the assumption that it is the neuron’s output that is regulated rather than the number of ion channels in the membrane. In the first generation models of this sort the activity sensor was a measure of the bulk intracellular Ca2+ concentration (LeMasson et al., 1993). In response to elevated activity, the inward currents would be decreased and the outward currents increased to make the neuron less excitable. In later models, multiple sensors were used: a fast, slow and DC filter of the Ca2+ current (Liu et al., 1998). In this class of models, each membrane current was individually controlled to a varying degree by all three sensors. In all of these models the change in conductance density must occur slowly relative to the firing dynamics of the neuron. Specifically, the change in channel density must be orders of magnitude slower than the firing of the neuron or the activation time constants of its currents.
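    The structure of these self-tuning models can be conveyed in a few lines. In the sketch below, the fast membrane dynamics are abstracted into a crude stand-in activity() function, and the slow feedback loop pushes the inward and outward conductances in opposite directions until a calcium-like sensor reaches its target. Everything here is an illustrative caricature of the scheme of LeMasson et al. (1993), not the published model.

```python
# Caricature of a self-tuning model: a slow activity sensor homeostatically
# regulates maximal conductances. All names and values are illustrative.

def activity(g_in, g_out):
    """Crude stand-in for the fast dynamics: more inward and less outward
    conductance means more spiking and hence a larger Ca2+-like signal."""
    return max(0.0, g_in - 0.5 * g_out)

ca_target = 2.0          # target reading of the activity sensor
tau_g = 1000.0           # regulation time constant: orders of magnitude slower
dt = 1.0                 # than the firing dynamics (arbitrary time units)
g_in, g_out = 4.0, 1.0   # initial inward / outward maximal conductances

for _ in range(20000):
    err = ca_target - activity(g_in, g_out)   # slow sensor feedback signal
    # Elevated activity (err < 0) decreases the inward and increases the
    # outward conductance, making the cell less excitable, and vice versa.
    g_in = max(0.0, g_in + dt / tau_g * err)
    g_out = max(0.0, g_out - dt / tau_g * err)

print(g_in, g_out, activity(g_in, g_out))   # conductances settle at the target
```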

    These models make several predictions: A) Individual neurons of the same class might have similar activity profiles but different current densities. B) The same neuron might have different current densities at different times in its life. C) Genetic knock-outs of some channels may be compensated for by alterations in the densities of other channels. D) Strong perturbations of activity might result in substantial alterations in channel densities. Experimental data obtained over the years, much of it motivated by these models, are consistent with all of these predictions (Turrigiano et al., 1994; Turrigiano et al., 1995; Thoby-Brisson and Simmers, 1998; Desai et al., 1999; Golowasch et al., 1999a; Thoby-Brisson and Simmers, 2000; Goldman et al., 2001; Thoby-Brisson and Simmers, 2002; Luther et al., 2003; MacLean et al., 2003). However, much more work will be needed to understand how neurons and networks wire up in development as a consequence of genetic programming and activity. Much more experimental work is needed to discover whether some conductances are obligatorily coregulated (MacLean et al., 2003). At the same time, a great deal of theoretical work remains before we understand how local and global tuning signals interact in the construction and maintenance of complex networks.

    5. Conclusions

    Neuroscience is today ideally poised to profit optimally from the influx of talented theorists. Today more than ever it is clear that theory is necessary to catalyze paradigm shifts in the way we pose problems about the nervous system. These paradigm shifts will occur when smart experimentalists and smart theorists find common language and common ground to reveal how the glorious richness and detailed idiosyncrasies of neurobiological systems contribute to their ability to be at the same time plastic and stable. After all, it is not yet obvious how networks can learn and develop without losing their ability to function as they are changed.

    References

    [1] Abbott, LF, Sen, K, Varela, J, Nelson, SB. Synaptic depression and cortical gain control. Science. 1997;275:220–224.

    [2] Bal, T, Debay, D, Destexhe, A. Cortical feedback controls the frequency and synchrony of oscillations in the visual thalamus. J Neurosci. 2000;20:7478–7488.

    [3] Bhalla, US. Biochemical signaling networks decode temporal patterns of synaptic input. J Comput Neurosci. 2002;13:49–62.

    [4] Bhalla, US. Temporal computation by synaptic signaling pathways. J Chem Neuroanat. 2003;26:81–86.

    [5] Bhalla, US. Understanding complex signaling networks through models and metaphors. Prog Biophys Mol Biol. 2003;81:45–65.

    [6] Bhalla, US, Iyengar, R. Robustness of the bistable behavior of a biological signaling feed-back loop. Chaos. 2001;11:221–226.

    [7] Buchholtz, F, Golowasch, J, Epstein, IR, Marder, E. Mathematical model of an identified stomatogastric ganglion neuron. J Neurophysiol. 1992;67:332–340.

    [8] Chance, FS, Abbott, LF, Reyes, AD. Gain modulation from background synaptic input. Neuron. 2002;35:773–782.

    [9] Desai, NS, Rutherford, LC, Turrigiano, GG. Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nature Neuroscience. 1999;2:515–520.

    [10] Frick, A, Magee, J, Johnston, D. LTP is accompanied by an enhanced local excitability of pyramidal neuron dendrites. Nat Neurosci. 2004;7:126–135.

    [11] Goldman, MS, Golowasch, J, Marder, E, Abbott, LF. Global structure, robustness, and modulation of neuronal models. J Neurosci. 2001;21:5229–5238.

    [12] Golowasch, J, Marder, E. Ionic currents of the lateral pyloric neuron of the stomatogastric ganglion of the crab. J Neurophysiol. 1992;67:318–331.

    [13] Golowasch, J, Abbott, LF, Marder, E. Activity-dependent regulation of potassium currents in an identified neuron of the stomatogastric ganglion of the crab Cancer borealis. J Neurosci. 1999;19:RC33.

    [14] Golowasch, J, Buchholtz, F, Epstein, IR, Marder, E. Contribution of individual ionic currents to activity of a model stomatogastric ganglion neuron. J Neurophysiol. 1992;67:341–349.

    [15] Golowasch, J, Casey, M, Abbott, LF, Marder, E. Network stability from activity-dependent regulation of neuronal conductances. Neural Comput. 1999;11:1079–1096.

    [16] Golowasch, J, Goldman, MS, Abbott, LF, Marder, E. Failure of averaging in the construction of a conductance-based neuron model. J Neurophysiol. 2002;87:1129–1131.

    [17] Guckenheimer, J, Gueron, S, Harris-Warrick, RM. Mapping the dynamics of a bursting neuron. Philos Trans R Soc Lond B. 1993;341:345–359.

[18] Guckenheimer, J, Harris-Warrick, R, Peck, J, Willms, A. Bifurcation, bursting, and spike frequency adaptation. J Comput Neurosci. 1997;4:257–277.

[19] Harris-Warrick, RM, Marder, E, Selverston, AI, Moulins, M. Dynamic Biological Networks: The Stomatogastric Nervous System. Cambridge, MA: MIT Press; 1992.

    [20] Kepler, TB, Marder, E, Abbott, LF. The effect of electrical coupling on the frequency of model neuronal oscillators. Science. 1990;248:83–85.

    [21] Le Masson, G, Renaud-Le Masson, S, Debay, D, Bal, T. Feedback inhibition controls spike transfer in hybrid thalamic circuits. Nature. 2002;417:854–858.

    [22] LeMasson, G, Marder, E, Abbott, LF. Activity-dependent regulation of conductances in model neurons. Science. 1993;259:1915–1917.

    [23] Liu, Z, Golowasch, J, Marder, E, Abbott, LF. A model neuron with activity-dependent conductances regulated by multiple calcium sensors. J Neurosci. 1998;18:2309–2320.

[24] Luther, JA, Robie, AA, Yarotsky, J, Reina, C, Marder, E, Golowasch, J. Episodic bouts of activity accompany recovery of rhythmic output by a neuromodulator- and activity-deprived adult neural network. J Neurophysiol. 2003;90:2720–2730.

    [25] Ma, M, Koester, J. The role of potassium currents in frequency-dependent spike broadening in Aplysia R20 neurons: a dynamic clamp analysis. J Neurosci. 1996;16:4089–4101.

    [26] MacLean, JN, Zhang, Y, Johnson, BR, Harris-Warrick, RM. Activity-independent homeostasis in rhythmically active neurons. Neuron. 2003;37:109–120.

    [27] Magee, JC. Dendritic Ih normalizes temporal summation in hippocampal CA1 neurons. Nat Neurosci. 1999;2:848.

    [28] Magee, JC, Carruth, M. Dendritic voltage-gated ion channels regulate the action potential firing mode of hippocampal CA1 pyramidal neurons. J Neurophysiol. 1999;82:1895–1901.

    [29] Magee, JC, Cook, EP. Somatic EPSP amplitude is independent of synapse location in hippocampal pyramidal neurons. Nat Neurosci. 2000;3:895–903.

    [30] Marder, E, Prinz, AA. Modeling stability in neuron and network function: the role of activity in homeostasis. Bioessays. 2002;24:1145–1154.

[31] Marder, E, Abbott, LF, Turrigiano, GG, Liu, Z, Golowasch, J. Memory from the dynamics of intrinsic membrane currents. Proc Natl Acad Sci USA. 1996;93:13481–13486.

[32] Morris, LG, Hooper, SL. Muscle response to changing neuronal input in the lobster (Panulirus interruptus) stomatogastric system: spike number- versus spike frequency-dependent domains. J Neurosci. 1997;17:5956–5971.

    [33] Morris, LG, Thuma, JB, Hooper, SL. Muscles express motor patterns of non-innervating neural networks by filtering broad-band input. Nat Neurosci. 2000;3:245–250.

    [34] Prinz, AA, Billimoria, CP, Marder, E. Alternative to hand-tuning conductance-based models: construction and analysis of databases of model neurons. J Neurophysiol. 2003;90:3998–4015.

    [35] Prinz, AA, Abbott, LF, Marder, E. The dynamic clamp comes of age. Trends Neurosci. 2004;27:218–224.

[36] Prinz, AA, Bucher, D, Marder, E. Multiple combinations of intrinsic properties and synaptic strengths produce similar network activity. 2004, submitted.

    [37] Reyes, AD. Synchrony-dependent propagation of firing rate in iteratively constructed networks in vitro. Nat Neurosci. 2003;6:593–599.

    [38] Sharp, AA, Skinner, FK, Marder, E. Mechanisms of oscillation in dynamic clamp constructed two-cell half-center circuits. J Neurophysiol. 1996;76:867–883.

    [39] Sharp, AA, O’Neil, MB, Abbott, LF, Marder, E. The dynamic clamp: artificial conductances in biological neurons. Trends Neurosci. 1993;16:389–394.

    [40] Sharp, AA, O’Neil, MB, Abbott, LF, Marder, E. Dynamic clamp: computer-generated conductances in real neurons. J Neurophysiol. 1993;69:992–995.

    [41] Siegel, M, Marder, E, Abbott, LF. Activity-dependent current distributions in model neurons. Proc Natl Acad Sci USA. 1994;91:11308–11312.

    [42] Skinner, FK, Turrigiano, GG, Marder, E. Frequency and burst duration in oscillating neurons and two-cell networks. Biol Cybern. 1993;69:375–383.

[43] Thoby-Brisson, M, Simmers, J. Neuromodulatory inputs maintain expression of a lobster motor pattern-generating network in a modulation-dependent state: evidence from long-term decentralization in vitro. J Neurosci. 1998;18:2212–2225.

    [44] Thoby-Brisson, M, Simmers, J. Transition to endogenous bursting after long-term decentralization requires de novo transcription in a critical time window. J Neurophysiol. 2000;84:596–599.

    [45] Thoby-Brisson, M, Simmers, J. Long-term neuromodulatory regulation of a motor pattern-generating network: maintenance of synaptic efficacy and oscillatory properties. J Neurophysiol. 2002;88:2942–2953.

    [46] Traub, RD, Wong, RK, Miles, R, Michelson, H. A model of a CA3 hippocampal pyramidal neuron incorporating voltage-clamp data on intrinsic conductances. J Neurophysiol. 1991;66:635–650.

    [47] Turrigiano, G, Abbott, LF, Marder, E. Activity-dependent changes in the intrinsic properties of cultured neurons. Science. 1994;264:974–977.

    [48] Turrigiano, GG, LeMasson, G, Marder, E. Selective regulation of current densities underlies spontaneous changes in the activity of cultured neurons. J Neurosci. 1995;15:3640–3652.

    [49] Turrigiano, GG, Marder, E, Abbott, LF. Cellular short-term memory from a slow potassium conductance. J Neurophysiol. 1996;75:963–966.

[50] Yu, X, Byrne, JH, Baxter, DA. Modeling interactions between electrical activity and second-messenger cascades in Aplysia neuron R15. J Neurophysiol. 2004;91:2297–2311.

    Course 2

    Understanding Neuronal Dynamics by Geometrical Dissection of Minimal Models

A. Borisyuk¹ and J. Rinzel², ¹Mathematical Biosciences Institute, Ohio State University, Columbus, OH, USA; ²Center for Neural Science, New York University, New York, NY, USA

    Contents

    1. Introduction

    1.1. Nonlinear behaviors, time scales, our approach

    1.2. Electrical activity of cells

    2. Revisiting the Hodgkin–Huxley equations

    2.1. Background and formulation

    2.2. Hodgkin–Huxley gating equations as idealized kinetic models

    2.3. Dissection of the action potential

    2.3.1. Current-voltage relations

    2.3.2. Qualitative view of fast-slow dissection

    2.3.3. Stability of the fast subsystem’s steady states

    2.4. Repetitive firing

    2.4.1. Stability of the four-variable model’s steady state

    2.4.2. Stability of periodic solutions

    2.4.3. Bistability

    3. Morris-Lecar model

    3.1. Excitable regime

    3.2. Post-inhibitory rebound

3.3. Single steady state. Onset of repetitive firing, Type II

    3.4. Three steady states

    3.4.1. Large ϕ. Bistability of steady states

    3.4.2. Small ϕ. Onset of repetitive firing, Type I

    3.4.3. Intermediate ϕ. Bistability of rest state and a depolarized oscillation

    3.5. Similar phenomena in the Hodgkin–Huxley model

    3.6. Summary: onset of repetitive firing, Types I and II

    4. Bursting, cellular level

    4.1. Geometrical analysis and fast-slow dissection of bursting dynamics

    4.2. Examples of bursting behavior

    4.2.1. Square wave bursting

    4.2.2. Parabolic bursting

    4.2.3. Elliptic bursting

    4.2.4. Other types of bursting

    5. Bursting, network generated. Episodic rhythms in the developing spinal cord

    5.1. Experimental background

    5.2. Firing rate model

    5.2.1. Basic recurrent network

    5.2.2. Full model

    5.3. Predictions of the model

    6. Chapter summary

    6.1. Appendix A. Mathematical formulation of fast-slow dissection

    6.2. Appendix B. Stability of periodic solutions

    References

    1. Introduction

    1.1. Nonlinear behaviors, time scales, our approach

It has been said that the currency of the nervous system is spikes. Indeed, at some level it is important to understand how neurons generate spikes and patterns of spikes. What is their language, and how do they convert stimuli into spike patterns? These are really two different questions. The first is about processing and storing information, and a neuron's role in a neural computation. The second is more mechanistic: how are inputs converted into spike output? With regard to the first, it is rare that we know what neural computation(s) a given neuron carries out, especially since computations more typically involve the collective interaction of many cells. However, we can, as do many cellular neurophysiologists, approach the second question, asking from a more reductionist viewpoint what biophysical mechanisms underlie spike generation and transmission. How do the properties of different ionic channels, and their distributions over the cell's dendritic, somatic, and axonal membranes, determine the neuron's firing modes? How might the various mechanisms be modulated or recruited when there are changes in the cell, in the circuitry in which it is embedded, in the brain state, or in the read-out targets? We usually imagine that the typical time scale for action potential generation is milliseconds (msecs), but there are examples where even a brief (msecs) stimulus can evoke a long-lasting transient spike pattern, or where pre-conditioning can delay a spike's onset by hundreds of msecs. Some neurons fire repetitively (tonically) for steady or slowly changing stimuli, some fire with complex temporal patterns (e.g., bursts of spikes), and some respond only (phasically) to the rapidly changing features of a stimulus. These behaviors reflect a neuron's biophysical makeup.

In these lectures we attempt to describe how different response properties and firing patterns arise. We seek especially to provide insight into the underlying mathematical structure that might be common to classes of firing behaviors. Indeed, the mathematical structure is more general, and the physiological implementation could involve different biophysical components. Our approach will be to use concepts from nonlinear dynamics, especially geometrical methods such as phase planes and phase-space projections from higher-dimensional systems. A key feature of our viewpoint is to exploit time-scale differences to reduce dimensionality by dissecting the dynamics with fast-slow analysis, i.e., to understand the behaviors on the different time scales separately and then patch them together. We will begin by dissecting the classical Hodgkin–Huxley model in this way, distinguishing the rapid upstroke and downstroke of the spike from the slower behavior during the action potential's depolarized plateau phase and hyperpolarized recovery phase. Analogously, we will segregate a burst pattern's active and silent phases from the transitions between these phases. Geometrically, the trajectories during the slow phases are confined to lower-dimensional manifolds, and the fast dynamics carry the trajectory between these manifolds during the rapid transitions.
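To fix ideas, the generic form of such a fast-slow system can be written (the notation here is ours; the chapter's Appendix A gives the careful formulation) as

\[
\frac{dx}{dt} = f(x, y), \qquad \frac{dy}{dt} = \varepsilon\, g(x, y), \qquad 0 < \varepsilon \ll 1,
\]

where x collects the fast variables (V and m in the Hodgkin–Huxley setting) and y the slow ones (h and n). On the fast time scale, y is effectively frozen and x obeys the fast subsystem with y treated as a fixed parameter; on the slow time scale, the trajectory tracks a stable branch of the manifold f(x, y) = 0 while y drifts according to the slow equation. Dissection means analyzing these two limits separately and then matching them across the fast jumps.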
