Prologue  1 (7)
  What is the problem?  1 (1)
  What connectionist models can do  2 (6)

Part I Principles  8 (147)

1 The basics of connectionist information processing  8 (22)
  Neurally inspired information processing  8 (3)
  Five assumptions about computation in the brain on which connectionist models are based  11 (4)
  Symbols and elementary equations  15 (5)
  Connectionism in a nutshell  20 (1)
  21 (9)

2 The attraction of parallel distributed processing for modelling cognition  30 (21)
  The representation of knowledge in connectionist networks is distributed  31 (1)
  Distributed representations are damage resistant and fault tolerant  32 (2)
  Connectionist networks allow memory access by content  34 (1)
  Retrieving information from a distributed database  35 (10)
  Constraint satisfaction in connectionist networks  45 (3)
  There is no distinction between 'memory' and 'processing' in connectionist models  48 (1)
  Problems for distributed representations  49 (2)

3  51 (21)
  The architecture and operation of a pattern associator  51 (10)
  A pattern association network  52 (2)
  54 (1)
  Learning with the Hebb rule  54 (2)
  Recall from a Hebb trained matrix  56 (1)
  Learning different associations on the same weight matrix  56 (3)
  Recall reflects the similarity of retrieval pattern and stored patterns  59 (2)
  Properties of pattern associators  61 (4)
  61 (1)
  61 (1)
  The importance of distributed representations for pattern associators  62 (1)
  Prototype extraction and noise removal  63 (1)
  63 (1)
  Interference is not necessarily a bad thing  64 (1)
  65 (1)
  Training a pattern associator with tlearn  65 (7)

4  72 (24)
  The architecture and operation of an autoassociator  72 (3)
  72 (2)
  Learning with the Delta rule  74 (1)
  Properties of autoassociator memories  75 (8)
  What an autoassociator learns  76 (2)
  Storage of different memories on the same connections  78 (2)
  80 (1)
  81 (2)
  Forming categories and prototypes from individual experiences  83 (5)
  Discovering a prototype from exemplars with an autoassociator  84 (3)
  Learning different prototypes on the same matrix  87 (1)
  88 (1)
  Autoassociation exercises with tlearn  88 (8)

5 Training a multi-layer network with an error signal: hidden units and backpropagation  96 (31)
  The perceptron convergence rule  97 (2)
  99 (6)
  Gradient descent with a sigmoid activation function  103 (2)
  105 (3)
  Solving the XOR problem with hidden units  107 (1)
  Hidden units and internal representation  108 (4)
  Hinton's family tree problem  108 (2)
  What the hidden units represent in the family tree task  110 (2)
  112 (5)
  113 (1)
  114 (1)
  114 (2)
  Backpropagation and biological plausibility  116 (1)
  Exercise: learning Exclusive OR with tlearn  117 (10)

6  127 (12)
  The architecture and operation of a competitive network  128 (8)
  128 (1)
  129 (1)
  129 (3)
  132 (1)
  Competitive learning in the brain  133 (3)
  136 (2)
  137 (1)
  138 (1)

7  139 (16)
  Controlling sequences with an associative chain  139 (1)
  Controlling sequences with a recurrent net  140 (2)
  State units and plan units  141 (1)
  Simple recurrent networks (SRNs)  142 (3)
  Learning to predict the next sound in a sequence  144 (1)
  145 (3)
  Learning sequences with tlearn  148 (7)

Part II Applications  155 (176)

8  155 (23)
  The traditional '2-route' model of reading aloud  156 (1)
  The connectionist approach  157 (1)
  The Seidenberg and McClelland model of reading aloud  158 (9)
  Replicating the results of word naming experiments  161 (4)
  What has the model learnt?  165 (1)
  166 (1)
  The Plaut, McClelland, Seidenberg and Patterson model  167 (4)
  167 (1)
  168 (1)
  Reading with an attractor network  169 (2)
  171 (1)
  What have these models achieved?  171 (1)
  172 (1)
  Reading aloud with tlearn  172 (6)

9  178 (32)
  Learning the English past tense  179 (8)
  A symbolic account of past tense learning  180 (3)
  A connectionist account of past tense learning  183 (4)
  Early lexical development  187 (7)
  A connectionist model of early lexical development  188 (4)
  192 (2)
  The acquisition of syntax  194 (8)
  202 (1)
  Learning the English past tense with tlearn  202 (8)

10 Connectionism and cognitive development  210 (33)
  Stages in development – a challenge for connectionism?  210 (2)
  The development of object permanence  212 (7)
  Modelling the development of representations which could produce object permanence  212 (6)
  218 (1)
  219 (15)
  Modelling the balance beam problem  222 (2)
  224 (7)
  231 (3)
  Stage-like behaviour from continuous change  234 (1)
  234 (6)
  235 (3)
  238 (2)
  240 (1)
  Modelling the balance beam problem with tlearn  240 (3)

11 Connectionist neuropsychology – lesioning networks  243 (25)
  The simulation of deep dyslexia  245 (10)
  Hinton and Shallice (1991)  246 (2)
  248 (1)
  249 (1)
  250 (1)
  Is the result dependent on fine details of the model?  251 (3)
  The interpretation of double dissociation  254 (1)
  Modelling a deficit in semantic memory  255 (5)
  Modality and category specificity in semantic memory  255 (3)
  Modelling an asymmetrical double dissociation  258 (2)
  Modelling an information processing deficit in schizophrenia  260 (8)
  260 (2)
  Modelling the Stroop task  262 (1)
  263 (2)
  265 (1)
  265 (3)

12 Mental representation: rules, symbols and connectionist networks  268 (10)
  Learning minority default rules  268 (5)
  269 (1)
  270 (3)
  Symbols and distributed representations  273 (3)
  Representing mental types  274 (1)
  275 (1)
  276 (1)
  277 (1)

13 Network models of brain function  278 (25)
  Memory formation in the hippocampus  278 (14)
  The role of the hippocampus in memory formation  279 (1)
  Information flow to and from the hippocampus  280 (1)
  The internal structure of the hippocampus  281 (2)
  A computational theory of hippocampal operation  283 (3)
  A neural network simulation of hippocampal operation  286 (1)
  287 (1)
  288 (1)
  Performance of the network  288 (4)
  Invariant visual pattern recognition in the inferior temporal cortex  292 (10)
  How not to achieve position invariant object recognition  292 (2)
  The flow of visual information from retina to temporal lobe  294 (1)
  VisNet – an approach to biologically plausible visual object identification  294 (3)
  297 (2)
  The importance of the trace rule for forming invariant representations  299 (1)
  Brains, networks and biological plausibility  300 (2)
  302 (1)

14 Evolutionary connectionism  303 (11)
  The evolution of goal directed behaviour  304 (4)
  The evolutionary advantage of the capacity to learn  306 (2)
  Innately guided learning in speech perception  308 (6)
  The Nakisa and Plunkett model  309 (1)
  Network training and evolution  310 (1)
  311 (1)
  Cross-linguistic performance  311 (1)
  312 (1)
  Nativism or constructivism?  313 (1)

15 A selective history of connectionism before 1986  314 (17)
  McCulloch and Pitts (1943)  314 (4)
  Logical operations with neuron-like computational units  314 (1)
  Computing AND, OR and NOT  315 (1)
  Producing the sensations of hot and cold  316 (2)
  318 (2)
  Neuronal inspiration in psychological modelling  318 (1)
  319 (1)
  Rosenblatt (1958) – the perceptron  320 (3)
  Minsky and Papert (1969) – a critique of perceptrons  323 (2)
  The XOR problem and the perception of connectedness  323 (2)
  Hinton and Anderson (1981)  325 (1)
  326 (5)
  Content-addressable memory in networks with attractor states  326 (1)
  Input patterns and energy  327 (2)
  Novel inputs produce higher values of E than memories  329 (1)
  Changes in state lead to a reduction in E  329 (2)

Appendix 1 Installation procedures for tlearn  331 (2)
Appendix 2 An introduction to linear algebra for neural networks  333 (7)
Appendix 3 User manual for tlearn  340 (36)
Bibliography  376 (7)
Index  383