Introduction to Connectionist Modelling of Cognitive Processes

by Peter McLeod; Kim Plunkett; Edmund T. Rolls
Format: Hardcover
Pub. Date: 1998-04-23
Publisher(s): Oxford University Press
Availability: This title is currently not available.

List Price: $230.39


New Textbook: Sold Out
Used Textbook: Sold Out
eTextbook: Not Available

Summary

Connectionism is a way of modelling how the brain uses streams of sensory inputs to understand the world and produce behaviour, based on the cognitive processes that actually occur in the brain. This book describes the principles of connectionist modelling and their application to explaining how the brain produces speech, forms memories and recognises faces, how intellect develops, and how it deteriorates after brain damage.

Part I explores the basic concepts, the architecture and properties of the most common connectionist models, and how connectionist learning rules work. Part II describes and evaluates connectionist models of a variety of cognitive processes, including the learning and production of speech, the formation of episodic memories and visual representations, the development of cognitive processes in infancy, and their breakdown in brain-damaged patients. The models range from well-known classics to others at the frontiers of current research. Each chapter ends with a list of recommended further reading.

Also included is a disk with the software for running tlearn, a user-friendly simulator for connectionist modelling of cognitive processes, which runs on either PCs or Macs. The software includes exercises that introduce the simulator and working copies of some of the models described in the text. A reference handbook for tlearn is included to enable readers to build their own models.

The authors, as well as being leading researchers in their field, have extensive experience of teaching connectionism to undergraduates. They have written the first comprehensive, up-to-date textbook on connectionist modelling, designed specifically for advanced undergraduates and accessible to those with only limited knowledge of mathematics. It will be an essential introductory text for all students in psychology or cognitive science taking a course on connectionism.
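
To give a feel for the style of model the book builds up, here is a minimal sketch of the Hebb-rule pattern associator treated in Chapter 3. It is illustrative only: the code is not taken from the book or from the bundled tlearn simulator, and the patterns, names and learning rate are hypothetical choices; plain Python with NumPy stands in for tlearn.

    # Minimal Hebb-rule pattern associator (illustrative sketch, not from
    # the book or tlearn; patterns and names are hypothetical).
    import numpy as np

    # Two input patterns (cues) and the output patterns to associate with them.
    cues = np.array([[1.0, 0.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0, 1.0]])
    targets = np.array([[1.0, 0.0],
                        [0.0, 1.0]])

    # Hebb rule: increment each weight by the product of the activities of
    # the two units it connects (delta_w = rate * output * input).
    rate = 0.5
    weights = np.zeros((targets.shape[1], cues.shape[1]))
    for cue, target in zip(cues, targets):
        weights += rate * np.outer(target, cue)

    # Recall: output activity is the weighted sum of the input activities.
    for cue, target in zip(cues, targets):
        print(cue, "->", weights @ cue, "(trained target:", target, ")")

Because the two cues here are orthogonal, each recalls exactly the output it was trained with; overlapping cues interfere, which is the property the book develops into generalisation, prototype extraction and noise removal.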

Table of Contents

Prologue 1(7)
  What is the problem? 1(1)
  What connectionist models can do 2(6)
Part I Principles 8(147)
1 The basics of connectionist information processing 8(22)
  Neurally inspired information processing 8(3)
  Five assumptions about computation in the brain on which connectionist models are based 11(4)
  Symbols and elementary equations 15(5)
  Connectionism in a nutshell 20(1)
  Exercises with tlearn 21(9)
2 The attraction of parallel distributed processing for modelling cognition 30(21)
  The representation of knowledge in connectionist networks is distributed 31(1)
  Distributed representations are damage resistant and fault tolerant 32(2)
  Connectionist networks allow memory access by content 34(1)
  Retrieving information from a distributed database 35(10)
  Constraint satisfaction in connectionist networks 45(3)
  There is no distinction between 'memory' and 'processing' in connectionist models 48(1)
  Problems for distributed representations 49(2)
3 Pattern association 51(21)
  The architecture and operation of a pattern associator 51(10)
  A pattern association network 52(2)
  The Hebb rule 54(1)
  Learning with the Hebb rule 54(2)
  Recall from a Hebb-trained matrix 56(1)
  Learning different associations on the same weight matrix 56(3)
  Recall reflects the similarity of retrieval pattern and stored patterns 59(2)
  Properties of pattern associators 61(4)
  Generalisation 61(1)
  Fault tolerance 61(1)
  The importance of distributed representations for pattern associators 62(1)
  Prototype extraction and noise removal 63(1)
  Speed 63(1)
  Interference is not necessarily a bad thing 64(1)
  Further reading 65(1)
  Training a pattern associator with tlearn 65(7)
4 Autoassociation 72(24)
  The architecture and operation of an autoassociator 72(3)
  Architecture 72(2)
  Learning with the Delta rule 74(1)
  Properties of autoassociator memories 75(8)
  What an autoassociator learns 76(2)
  Storage of different memories on the same connections 78(2)
  Pattern completion 80(1)
  Noise resistance 81(2)
  Forming categories and prototypes from individual experiences 83(5)
  Discovering a prototype from exemplars with an autoassociator 84(3)
  Learning different prototypes on the same matrix 87(1)
  Further reading 88(1)
  Autoassociation exercises with tlearn 88(8)
5 Training a multi-layer network with an error signal: hidden units and backpropagation 96(31)
  The perceptron convergence rule 97(2)
  Gradient descent 99(6)
  Gradient descent with a sigmoid activation function 103(2)
  Linear separability 105(3)
  Solving the XOR problem with hidden units 107(1)
  Hidden units and internal representation 108(4)
  Hinton's family tree problem 108(2)
  What the hidden units represent in the family tree task 110(2)
  Backpropagation 112(5)
  The problem 113(1)
  An informal account 114(1)
  Local minima 114(2)
  Backpropagation and biological plausibility 116(1)
  Exercise: learning Exclusive OR with tlearn 117(10)
6 Competitive networks 127(12)
  The architecture and operation of a competitive network 128(8)
  Excitation 128(1)
  Competition 129(1)
  Weight adjustment 129(3)
  Limiting weight growth 132(1)
  Competitive learning in the brain 133(3)
  Pattern classification 136(2)
  Correlated teaching 137(1)
  Further reading 138(1)
7 Recurrent networks 139(16)
  Controlling sequences with an associative chain 139(1)
  Controlling sequences with a recurrent net 140(2)
  State units and plan units 141(1)
  Simple recurrent networks (SRNs) 142(3)
  Learning to predict the next sound in a sequence 144(1)
  Attractors 145(3)
  Learning sequences with tlearn 148(7)
Part II Applications 155(176)
8 Reading aloud 155(23)
  The traditional '2-route' model of reading aloud 156(1)
  The connectionist approach 157(1)
  The Seidenberg and McClelland model of reading aloud 158(9)
  Replicating the results of word naming experiments 161(4)
  What has the model learnt? 165(1)
  Limitations of the model 166(1)
  The Plaut, McClelland, Seidenberg and Patterson model 167(4)
  Input coding 167(1)
  Pronunciation 168(1)
  Reading with an attractor network 169(2)
  Componential attractors 171(1)
  What have these models achieved? 171(1)
  Further reading 172(1)
  Reading aloud with tlearn 172(6)
9 Language acquisition 178(32)
  Learning the English past tense 179(8)
  A symbolic account of past tense learning 180(3)
  A connectionist account of past tense learning 183(4)
  Early lexical development 187(7)
  A connectionist model of early lexical development 188(4)
  Evaluation of the model 192(2)
  The acquisition of syntax 194(8)
  Further reading 202(1)
  Learning the English past tense with tlearn 202(8)
10 Connectionism and cognitive development 210(33)
  Stages in development: a challenge for connectionism? 210(2)
  The development of object permanence 212(7)
  Modelling the development of representations which could produce object permanence 212(6)
  Evaluating the model 218(1)
  The balance beam problem 219(15)
  Modelling the balance beam problem 222(2)
  Running the model 224(7)
  Evaluating the model 231(3)
  Stage-like behaviour from continuous change 234(1)
  Variability in learning 234(6)
  Individual differences 235(3)
  Critical periods 238(2)
  Further reading 240(1)
  Modelling the balance beam problem with tlearn 240(3)
11 Connectionist neuropsychology: lesioning networks 243(25)
  The simulation of deep dyslexia 245(10)
  Hinton and Shallice (1991) 246(2)
  Attractors 248(1)
  Attractor basins 249(1)
  Lesioning an attractor 250(1)
  Is the result dependent on fine details of the model? 251(3)
  The interpretation of double dissociation 254(1)
  Modelling a deficit in semantic memory 255(5)
  Modality and category specificity in semantic memory 255(3)
  Modelling an asymmetrical double dissociation 258(2)
  Modelling an information processing deficit in schizophrenia 260(8)
  Selective attention 260(2)
  Modelling the Stroop task 262(1)
  Lesioning the model 263(2)
  Further reading 265(1)
  Exercises with tlearn 265(3)
12 Mental representation: rules, symbols and connectionist networks 268(10)
  Learning minority default rules 268(5)
  Default mapping 269(1)
  Minority defaults 270(3)
  Symbols and distributed representations 273(3)
  Representing mental types 274(1)
  Symbolic attractors 275(1)
  Levels of explanation 276(1)
  Further reading 277(1)
13 Network models of brain function 278(25)
  Memory formation in the hippocampus 278(14)
  The role of the hippocampus in memory formation 279(1)
  Information flow to and from the hippocampus 280(1)
  The internal structure of the hippocampus 281(2)
  A computational theory of hippocampal operation 283(3)
  A neural network simulation of hippocampal operation 286(1)
  The performance measure 287(1)
  Running the model 288(1)
  Performance of the network 288(4)
  Invariant visual pattern recognition in the inferior temporal cortex 292(10)
  How not to achieve position-invariant object recognition 292(2)
  The flow of visual information from retina to temporal lobe 294(1)
  VisNet: an approach to biologically plausible visual object identification 294(3)
  Testing the network 297(2)
  The importance of the trace rule for forming invariant representations 299(1)
  Brains, networks and biological plausibility 300(2)
  Further reading 302(1)
14 Evolutionary connectionism 303(11)
  The evolution of goal-directed behaviour 304(4)
  The evolutionary advantage of the capacity to learn 306(2)
  Innately guided learning in speech perception 308(6)
  The Nakisa and Plunkett model 309(1)
  Network training and evolution 310(1)
  Speed 311(1)
  Cross-linguistic performance 311(1)
  Categorical perception 312(1)
  Nativism or constructivism? 313(1)
15 A selective history of connectionism before 1986 314(17)
  McCulloch and Pitts (1943) 314(4)
  Logical operations with neuron-like computational units 314(1)
  Computing AND, OR and NOT 315(1)
  Producing the sensations of hot and cold 316(2)
  Hebb (1949) 318(2)
  Neuronal inspiration in psychological modelling 318(1)
  The Hebb synapse 319(1)
  Rosenblatt (1958): the perceptron 320(3)
  Minsky and Papert (1969): a critique of perceptrons 323(2)
  The XOR problem and the perception of connectedness 323(2)
  Hinton and Anderson (1981) 325(1)
  Hopfield (1982) 326(5)
  Content-addressable memory in networks with attractor states 326(1)
  Input patterns and energy 327(2)
  Novel inputs produce higher values of E than memories 329(1)
  Changes in state lead to a reduction in E 329(2)
Appendix 1 Installation procedures for tlearn 331(2)
Appendix 2 An introduction to linear algebra for neural networks 333(7)
Appendix 3 User manual for tlearn 340(36)
Bibliography 376(7)
Index 383
