Connectionism and the Mind: Parallel Processing, Dynamics, and Evolution in Networks (2nd Edition)

  • Binding: Paperback / 356 pp.
  • Language: English
  • ISBN: 9780631207139
  • DDC classification: 149

Brief Description

Provides a clear and balanced introduction to connectionist networks and explores their theoretical and philosophical implications.

Full Description

Connectionism and the Mind provides a clear and balanced introduction to connectionist networks and explores their theoretical and philosophical implications. Much of the discussion from the first edition has been updated, and three new chapters have been added on the relation of connectionism to recent work on dynamical systems theory, artificial life, and cognitive neuroscience.
Read two of the sample chapters online:


Connectionism and the Dynamical Approach to Cognition:
http://www.blackwellpublishing.com/pdf/bechtel.pdf


Networks, Robots, and Artificial Life:
http://www.blackwellpublishing.com/pdf/bechtel2.pdf

Contents

Preface xiii

1 Networks Versus Symbol Systems: Two Approaches to Modeling Cognition 1

1.1 A Revolution in the Making? 1

1.2 Forerunners of Connectionism: Pandemonium and Perceptrons 2

1.3 The Allure of Symbol Manipulation 7

1.3.1 From logic to artificial intelligence 7

1.3.2 From linguistics to information processing 10

1.3.3 Using artificial intelligence to simulate human information processing 11

1.4 The Decline and Re-emergence of Network Models 12

1.4.1 Problems with perceptrons 12

1.4.2 Re-emergence: The new connectionism 13

1.5 New Alliances and Unfinished Business 15

Notes 17

Sources and Suggested Readings 17

2 Connectionist Architectures 19

2.1 The Flavor of Connectionist Processing: A Simulation of Memory Retrieval 19

2.1.1 Components of the model 20

2.1.2 Dynamics of the model 22
2.1.2.1 Memory retrieval in the Jets and Sharks network 22
2.1.2.2 The equations 23

2.1.3 Illustrations of the dynamics of the model 24
2.1.3.1 Retrieving properties from a name 24
2.1.3.2 Retrieving a name from other properties 26
2.1.3.3 Categorization and prototype formation 26
2.1.3.4 Utilizing regularities 28

2.2 The Design Features of a Connectionist Architecture 29

2.2.1 Patterns of connectivity 29
2.2.1.1 Feedforward networks 29
2.2.1.2 Interactive networks 31

2.2.2 Activation rules for units 32
2.2.2.1 Feedforward networks 32
2.2.2.2 Interactive networks: Hopfield networks and Boltzmann machines 34
2.2.2.3 Spreading activation vs. interactive connectionist models 37

2.2.3 Learning principles 38

2.2.4 Semantic interpretation of connectionist systems 40
2.2.4.1 Localist networks 41
2.2.4.2 Distributed networks 41

2.3 The Allure of the Connectionist Approach 45

2.3.1 Neural plausibility 45

2.3.2 Satisfaction of soft constraints 46

2.3.3 Graceful degradation 48

2.3.4 Content-addressable memory 49

2.3.5 Capacity to learn from experience and generalize 51

2.4 Challenges Facing Connectionist Networks 51

2.5 Summary 52

Notes 52

Sources and Recommended Readings 53
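
As a rough illustration of the activation rules surveyed in section 2.2.2, the following sketch (in Python; not taken from the book) shows how a single unit in a feedforward network computes its activation as a weighted sum of inputs passed through a logistic squashing function. The particular weights, bias, and inputs are arbitrary assumptions:

```python
import math

def logistic(x):
    # Standard logistic (sigmoid) activation, squashing net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def feedforward_unit(inputs, weights, bias):
    # Net input: weighted sum of input activations plus a bias term
    net = sum(w * a for w, a in zip(weights, inputs)) + bias
    return logistic(net)

# Example unit with two input connections (illustrative values)
activation = feedforward_unit([1.0, 0.0], [0.5, -0.5], -0.25)
```

Interactive networks such as Hopfield networks and Boltzmann machines (section 2.2.2.2) instead update their units repeatedly until the activations settle into a stable pattern.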

3 Learning 54

3.1 Traditional and Contemporary Approaches to Learning 54

3.1.1 Empiricism 54

3.1.2 Rationalism 55

3.1.3 Contemporary cognitive science 56

3.2 Connectionist Models of Learning 57

3.2.1 Learning procedures for two-layer feedforward networks 58
3.2.1.1 Training and testing a network 58
3.2.1.2 The Hebbian rule 58
3.2.1.3 The delta rule 60
3.2.1.4 Comparing the Hebbian and delta rules 67
3.2.1.5 Limitations of the delta rule: The XOR problem 67

3.2.2 The backpropagation learning procedure for multi-layered networks 69
3.2.2.1 Introducing hidden units and backpropagation learning 69
3.2.2.2 Using backpropagation to solve the XOR problem 74
3.2.2.3 Using backpropagation to train a network to pronounce words 77
3.2.2.4 Some drawbacks of using backpropagation 78

3.2.3 Boltzmann learning procedures for non-layered networks 79

3.2.4 Competitive learning 80

3.2.5 Reinforcement learning 81

3.3 Some Issues Regarding Learning 82

3.3.1 Are connectionist systems associationist? 82

3.3.2 Possible roles for innate knowledge 84
3.3.2.1 Networks and the rationalist-empiricist continuum 84
3.3.2.2 Rethinking innateness: Connectionism and emergence 85

Notes 87

Sources and Suggested Readings 88
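
The two-layer learning rules listed in section 3.2.1 can be sketched minimally as follows; this is an illustration under assumed linear output units and made-up training patterns, not one of the book's simulations. The delta rule converges on the linearly separable OR mapping, whereas no two-layer solution exists for XOR (section 3.2.1.5):

```python
def hebbian_update(w, a_in, a_out, lr=0.1):
    # Hebbian rule: strengthen a weight when input and output are co-active
    return w + lr * a_in * a_out

def train_delta(patterns, epochs=200, lr=0.2):
    # Delta rule on a single linear output unit: change each weight in
    # proportion to the error (target - output) times the input activation.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in patterns:
            out = sum(wi * xi for wi, xi in zip(w, inputs)) + b
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err
    return w, b

# OR is linearly separable, so the delta rule finds weights that solve it
OR = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_delta(OR)
```

Backpropagation (section 3.2.2) extends this error-correction idea to hidden units, which is what makes problems like XOR solvable.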

4 Pattern Recognition and Cognition 89

4.1 Networks as Pattern Recognition Devices 90

4.1.1 Pattern recognition in two-layer networks 90

4.1.2 Pattern recognition in multi-layered networks 93
4.1.2.1 McClelland and Rumelhart's interactive activation model of word recognition 93
4.1.2.2 Evaluating the interactive activation model of word recognition 100

4.1.3 Generalization and similarity 101

4.2 Extending Pattern Recognition to Higher Cognition 102

4.2.1 Smolensky's proposal: Reasoning in harmony networks 103

4.2.2 Margolis's proposal: Cognition as sequential pattern recognition 103

4.3 Logical Inference as Pattern Recognition 106

4.3.1 What is it to learn logic? 106

4.3.2 A network for evaluating validity of arguments 109

4.3.3 Analyzing how a network evaluates arguments 112

4.3.4 A network for constructing derivations 115

4.4 Beyond Pattern Recognition 117

Notes 118

Sources and Suggested Readings 119

5 Are Rules Required to Process Representations? 120

5.1 Is Language Use Governed by Rules? 120

5.2 Rumelhart and McClelland's Model of Past-tense Acquisition 122

5.2.1 A pattern associator with Wickelfeature encodings 122

5.2.2 Activation function and learning procedure 126

5.2.3 Overregularization in a simpler network: The rule of 78 127

5.2.4 Modeling U-shaped learning 130

5.2.5 Modeling differences between different verb classes 133

5.3 Pinker and Prince's Arguments for Rules 135

5.3.1 Overview of the critique of Rumelhart and McClelland's model 135

5.3.2 Putative linguistic inadequacies 136

5.3.3 Putative behavioral inadequacies 139

5.3.4 Do the inadequacies reflect inherent limitations of PDP networks? 140

5.4 Accounting for the U-shaped Learning Function 141

5.4.1 The role of input for children 142

5.4.2 The role of input for networks: The rule of 78 revisited 146

5.4.3 Plunkett and Marchman's simulations of past-tense acquisition 148

5.5 Conclusion 152

Notes 153

Sources and Suggested Readings 155

6 Are Syntactically Structured Representations Needed? 156

6.1 Fodor and Pylyshyn's Critique: The Need for Symbolic Representations with Constituent Structure 156

6.1.1 The need for compositional syntax and semantics 156

6.1.2 Connectionist representations lack compositionality 158

6.1.3 Connectionism as providing mere implementation 160

6.2 First Connectionist Response: Explicitly Implementing Rules and Representations 163

6.2.1 Implementing a production system in a network 163

6.2.2 The variable binding problem 165

6.2.3 Shastri and Ajjanagadde's connectionist model of variable binding 166

6.3 Second Connectionist Response: Implementing Functionally Compositional Representations 170

6.3.1 Functional vs. concatenative compositionality 170

6.3.2 Developing compressed representations using Pollack's RAAM networks 171

6.3.3 Functional compositionality of compressed representations 175

6.3.4 Performing operations on compressed representations 177

6.4 Third Connectionist Response: Employing Procedural Knowledge with External Symbols 178

6.4.1 Temporal dependencies in processing language 179

6.4.2 Achieving short-term memory with simple recurrent networks 180

6.4.3 Elman's first study: Learning grammatical categories 181

6.4.4 Elman's second study: Respecting dependency relations 184

6.4.5 Christiansen's extension: Pushing the limits of SRNs 187

6.5 Using External Symbols to Provide Exact Symbol Processing 190

6.6 Clarifying the Standard: Systematicity and Degree of Generalizability 194

6.7 Conclusion 197

Notes 198

Sources and Suggested Readings 199

7 Simulating Higher Cognition: A Modular Architecture for Processing Scripts 200

7.1 Overview of Scripts 200

7.2 Overview of Miikkulainen's DISCERN System 201

7.3 Modular Connectionist Architectures 203

7.4 FGREP: An Architecture that Allows the System to Devise Its Own Representations 206

7.4.1 Why FGREP? 206

7.4.2 Exploring FGREP in a simple sentence parser 208

7.4.3 Exploring representations for words in categories 210

7.4.4 Moving to multiple modules: The DISCERN system 212

7.5 A Self-organizing Lexicon Using Kohonen Feature Maps 212

7.5.1 Innovations in lexical design 212

7.5.2 Using Kohonen feature maps in DISCERN's lexicon 213
7.5.2.1 Orthography: From high-dimensional vector representations to map units 213
7.5.2.2 Associative connections: From the orthographic map to the semantic map 216
7.5.2.3 Semantics: From map unit to high-dimensional vector representations 216
7.5.2.4 Reversing direction: From semantic to orthographic representations 216

7.5.3 Advantages of Kohonen feature maps 216

7.6 Encoding and Decoding Stories as Scripts 217

7.6.1 Using recurrent FGREP modules in DISCERN 217

7.6.2 Using the Sentence Parser and Story Parser to encode stories 218

7.6.3 Using the Story Generator and Sentence Generator to paraphrase stories 221

7.6.4 Using the Cue Former and Answer Producer to answer questions 223

7.7 A Connectionist Episodic Memory 223

7.7.1 Making Kohonen feature maps hierarchical 223

7.7.2 How role-binding maps become self-organized 225

7.7.3 How role-binding maps become trace feature maps 225

7.8 Performance: Paraphrasing Stories and Answering Questions 228

7.8.1 Training and testing DISCERN 228

7.8.2 Watching DISCERN paraphrase a story 229

7.8.3 Watching DISCERN answer questions 229

7.9 Evaluating DISCERN 231

7.10 Paths Beyond the First Decade of Connectionism 233

Notes 234

Sources and Suggested Readings 234

8 Connectionism and the Dynamical Approach to Cognition 235

8.1 Are We on the Road to a Dynamical Revolution? 235

8.2 Basic Concepts of DST: The Geometry of Change 237

8.2.1 Trajectories in state space: Predators and prey 237

8.2.2 Bifurcation diagrams and chaos 240

8.2.3 Embodied networks as coupled dynamical systems 242

8.3 Using Dynamical Systems Tools to Analyze Networks 243

8.3.1 Discovering limit cycles in network controllers for robotic insects 244

8.3.2 Discovering multiple attractors in network models of reading 246
8.3.2.1 Modeling the semantic pathway 248
8.3.2.2 Modeling the phonological pathway 249

8.3.3 Discovering trajectories in SRNs for sentence processing 253

8.3.4 Dynamical analyses of learning in networks 256

8.4 Putting Chaos to Work in Networks 257

8.4.1 Skarda and Freeman's model of the olfactory bulb 257

8.4.2 Shifting interpretations of ambiguous displays 260

8.5 Is Dynamicism a Competitor to Connectionism? 264

8.5.1 Van Gelder and Port's critique of classic connectionism 264

8.5.2 Two styles of modeling 265

8.5.3 Mechanistic versus covering-law explanations 266

8.5.4 Representations: Who needs them? 270

8.6 Is Dynamicism Complementary to Connectionism? 276

8.7 Conclusion 280

Notes 280

Sources and Suggested Readings 281
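
The dynamical-systems vocabulary of chapter 8 (attractors, bifurcations, chaos) can be made concrete with the logistic map, a standard one-dimensional example; this sketch is an illustrative assumption, not a model from the book:

```python
def logistic_map_orbit(r, x0=0.5, transient=500, samples=4):
    # Iterate x -> r*x*(1-x), discard a long transient, then sample the orbit
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(samples):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

# r = 2.5: the orbit settles onto a point attractor at x = 1 - 1/r = 0.6
point_attractor = logistic_map_orbit(2.5)
# r = 3.2: after a period-doubling bifurcation, a two-point limit cycle
limit_cycle = logistic_map_orbit(3.2)
```

A bifurcation diagram (section 8.2.2) plots such sampled orbits across a range of the control parameter r, showing the cascade from point attractors through limit cycles into chaos.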

9 Networks, Robots, and Artificial Life 282

9.1 Robots and the Genetic Algorithm 282

9.1.1 The robot as an artificial lifeform 282

9.1.2 The genetic algorithm for simulated evolution 283

9.2 Cellular Automata and the Synthetic Strategy 284

9.2.1 Langton's vision: The synthetic strategy 284

9.2.2 Emergent structures from simple beings: Cellular automata 286

9.2.3 Wolfram's four classes of cellular automata 288

9.2.4 Langton and λ at the edge of chaos 289

9.3 Evolution and Learning in Food-seekers 291

9.3.1 Overview and study 1: Evolution without learning 291

9.3.2 The Baldwin effect and study 2: Evolution with learning 293

9.4 Evolution and Development in Khepera 295

9.4.1 Introducing Khepera 295

9.4.2 The development of phenotypes from genotypes 296

9.4.3 The evolution of genotypes 298

9.4.4 Embodied networks: Controlling real robots 298

9.5 The Computational Neuroethology of Robots 300

9.6 When Philosophers Encounter Robots 301

9.6.1 No Cartesian split in embodied agents? 301

9.6.2 No representations in subsumption architectures? 302

9.6.3 No intentionality in robots and Chinese rooms? 303

9.6.4 No armchair when Dennett does philosophy? 304

9.7 Conclusion 305

Sources and Suggested Readings 305
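
The genetic algorithm of section 9.1.2 can be sketched in a few lines; in this toy version the fitness function, selection scheme, and mutation parameters are all illustrative assumptions. It evolves a small genome toward a target vector using truncation selection with elitism and Gaussian mutation:

```python
import random

def fitness(genome, target):
    # Fitness: negative squared distance between genome and target vector
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(target, pop_size=20, generations=50, mut_sd=0.1):
    random.seed(0)  # deterministic run for illustration
    n = len(target)
    pop = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [[g + random.gauss(0, mut_sd) for g in p] for p in parents]
        pop = parents + children
    return max(pop, key=lambda g: fitness(g, target))

best = evolve([0.5, -0.5, 0.25, 0.0])
```

In the studies described in sections 9.3 and 9.4, the genome instead encodes connection weights (or a developmental program) for a network controlling a simulated creature or a Khepera robot, and fitness is behavioral success rather than distance to a fixed target.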

10 Connectionism and the Brain 306

10.1 Connectionism Meets Cognitive Neuroscience 306

10.2 Four Connectionist Models of Brain Processes 309

10.2.1 What/Where streams in visual processing 309

10.2.2 The role of the hippocampus in memory 313
10.2.2.1 The basic design and functions of the hippocampal system 313
10.2.2.2 Spatial navigation in rats 315
10.2.2.3 Spatial versus declarative memory accounts 316
10.2.2.4 Declarative memory in humans and monkeys 318

10.2.3 Simulating dyslexia in network models of reading 323
10.2.3.1 Double dissociations in dyslexia 323
10.2.3.2 Modeling deep dyslexia 327
10.2.3.3 Modeling surface dyslexia 331
10.2.3.4 Two pathways versus dual routes 335

10.2.4 The computational power of modular structure in neocortex 338

10.3 The Neural Implausibility of Many Connectionist Models 341

10.3.1 Biologically implausible aspects of connectionist networks 342

10.3.2 How important is neurophysiological plausibility? 343

10.4 Whither Connectionism? 346

Notes 347

Sources and Suggested Readings 348
Appendix A: Notation 349
Appendix B: Glossary 350
Bibliography 363
Name Index 384
Subject Index 395