Fundamentals of Computational Neuroscience (計算神経科学の基礎)

  • Binding: Hardcover / 338 pp.
  • Language: English (ENG)
  • Product code: 9780198515821
  • DDC classification: 153

Basic Description

An introductory text. Includes a review of relevant biophysical processes in single neurons, models of spiking neurons, detailed coverage of basic network architectures, and more.

Full Description


Computational neuroscience is the theoretical study of the brain, aimed at uncovering the principles and mechanisms that guide the development, organization, information processing, and mental functions of the nervous system. Although long recognized, the field remained ill-defined, and only in the late twentieth century did enough knowledge accumulate to establish computational neuroscience as a scientific discipline in its own right. Given the complexity of the subject and its growing importance for understanding how the brain works, there is a clear need for an introductory text on this complex and often seemingly impenetrable topic. This work is one of the first such introductions. It presents the theoretical foundations of neuroscience with a focus on the nature of information processing in the brain, and it introduces and motivates simplified models of neurons that are suitable for exploring information processing in large brain-like networks.
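
As a rough illustration of the kind of simplified spiking-neuron model the book builds on, here is a minimal leaky integrate-and-fire sketch in Python (the book's own exercises use MATLAB); the function name and all parameter values below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) sketch: a simplified spiking
# neuron of the sort the book introduces.  All parameter values are
# illustrative assumptions, not values from the book.
def simulate_lif(i_ext=1.5, t_max=100.0, dt=0.1,
                 tau_m=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate tau_m * dv/dt = -(v - v_rest) + i_ext."""
    steps = int(t_max / dt)
    v = v_rest
    trace = np.empty(steps)
    spike_times = []
    for k in range(steps):
        v += dt * (-(v - v_rest) + i_ext) / tau_m   # membrane update
        if v >= v_thresh:                           # threshold crossing
            spike_times.append(k * dt)              # record spike time (ms)
            v = v_reset                             # reset after the spike
        trace[k] = v
    return trace, spike_times

if __name__ == "__main__":
    _, spikes = simulate_lif()
    print(f"{len(spikes)} spikes" +
          (f", first at t = {spikes[0]:.1f} ms" if spikes else ""))
```

With the default constant input the model fires regularly (about every 11 ms here); setting i_ext below v_thresh keeps the membrane potential under threshold, so no spikes occur.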

Contents

1.1 WHAT IS COMPUTATIONAL NEUROSCIENCE?; 1.2 Domains in Computational Neuroscience; 1.3 What is a model?; 1.4 Emergence and adaptation; 1.5 From exploration to a theory of the brain; 1.6 Some notes on the book
2.1 MODELLING BIOLOGICAL NEURONS; 2.2 Neurons are specialized cells; 2.3 Basic synaptic mechanisms; 2.4 The generation of action potentials: Hodgkin-Huxley equations; 2.5 Dendritic trees, the propagation of action potentials, and compartmental models; 2.6 Above and beyond the Hodgkin-Huxley neuron: fatigue, bursting and simplifications
3.1 INTEGRATE-AND-FIRE NEURONS; 3.2 The spike-response model; 3.3 Spike time variability; 3.4 Noise models for IF neurons
4.1 ORGANIZATIONS OF NEURONAL NETWORKS; 4.2 Information transmission in networks; 4.3 Population dynamics: modelling the average behaviour of neurons; 4.4 The sigma node; 4.5 Networks with non-classical synapses: the sigma-pi node
5.1 HOW NEURONS TALK; 5.2 Information theory; 5.3 Information in spike trains; 5.4 Population coding and decoding; 5.5 Distributed representation
6.1 PERCEPTION, FUNCTION REPRESENTATION, AND LOOK-UP TABLES; 6.2 The sigma node as perceptron; 6.3 Multi-layer mapping networks; 6.4 Learning, generalization and biological interpretations; 6.5 Self-organizing network architectures and genetic algorithms; 6.6 Mapping networks with context units; 6.7 Probabilistic mapping networks
7.1 ASSOCIATIVE MEMORY AND HEBBIAN LEARNING; 7.2 An example of learning association; 7.3 The biochemical basis of synaptic plasticity; 7.4 The temporal structure of Hebbian plasticity: LTP and LTD; 7.5 Mathematical formulation of Hebbian plasticity; 7.6 Weight distributions; 7.7 Neuronal response variability, gain control, and scaling; 7.8 Features of associators and Hebbian learning
8.1 SHORT-TERM MEMORY AND REVERBERATING NETWORK ACTIVITY; 8.2 Long-term memory and auto-associators; 8.3 Point attractor networks: the Grossberg-Hopfield model; 8.4 The phase diagram and the Grossberg-Hopfield model; 8.5 Sparse attractor neural networks; 8.6 Chaotic networks: a dynamical systems view; 8.7 Biologically more realistic variations of attractor networks
9.1 SPATIAL REPRESENTATIONS AND THE SENSE OF DIRECTION; 9.2 Learning with continuous pattern representations; 9.3 Asymptotic states and the dynamics of neural fields; 9.4 Path-integration, Hebbian trace rule, and sequence learning; 9.5 Competitive networks and self-organizing maps
10.1 MOTOR LEARNING AND CONTROL; 10.2 The delta rule; 10.3 Generalized delta rules; 10.4 Reward learning
11.1 SYSTEM LEVEL ANATOMY OF THE BRAIN; 11.2 Modular mapping networks; 11.3 Coupled attractor networks; 11.4 Working memory; 11.5 Attentive vision; 11.6 An interconnecting workspace hypothesis
12.1 INTRODUCTION TO THE MATLAB PROGRAMMING ENVIRONMENT; 12.2 Spiking neurons and numerical integration in MATLAB; 12.3 Associators and Hebbian learning; 12.4 Recurrent networks and network dynamics; 12.5 Continuous attractor neural networks; 12.6 Error-backpropagation network
SOME USEFUL MATHEMATICS; BASIC PROBABILITY THEORY; NUMERICAL INTEGRATION; INDEX
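
For a sense of what the associative-memory and Hebbian-learning material above (Chapter 7 and Section 12.3) amounts to computationally, the following is a tiny, hypothetical sketch of outer-product Hebbian learning in a one-layer associator, written in Python rather than the book's MATLAB; the patterns and the recall threshold are invented for illustration only.

```python
import numpy as np

# Toy one-layer associator trained with a Hebbian outer-product rule.
# The input/output patterns and the recall threshold are made up for
# this illustration; they are not examples from the book.
x = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1])  # presynaptic pattern
y = np.array([1, 0, 0, 1, 1, 0])                    # postsynaptic target

# Hebbian rule: each weight grows with the product of pre- and
# postsynaptic activity, here written as a single outer product.
W = np.outer(y, x)

# Recall: drive the network with x and threshold the weighted input.
theta = 0.5 * x.sum()                    # simple fixed firing threshold
y_recalled = (W @ x > theta).astype(int)

print("stored  :", y)
print("recalled:", y_recalled)  # exact recall for one stored pair;
                                # storing many pairs introduces cross-talk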