Project Details
Description
Can we use Deep Neural Networks to understand how the mind works? The 5-year ERC grant entitled “Generalisation in Mind and Machine” compares how humans and artificial neural networks generalise across a range of domains, including visual perception, memory, language, reasoning, and game playing.
Why focus on generalisation? Generalisation provides a critical test-bed for contrasting two fundamentally different theories of mind, namely, symbolic and non-symbolic theories. Symbolic representations are compositional (e.g., Fodor and Pylyshyn, 1988) and are claimed to be necessary to generalise “outside the training space” (Marcus, 1998, 2017). By contrast, non-symbolic models, including PDP models and most Deep Neural Networks, reject the claim that symbolic representations are required to support human-like intelligence. So can non-symbolic neural networks generalise as broadly as humans? If so, this would seriously challenge a core motivation for symbolic theories of mind and brain. For recent discussion of this issue, see Bowers (2017) in Trends in Cognitive Sciences.
Our research team is carrying out a series of empirical and modelling investigations that explore the generalisation capacities of humans and machines across a wide range of domains. These studies are designed to:
(1) Focus on tasks in which symbols are claimed to be necessary for generalisation.
(2) Focus on generalisation across a range of domains in which human performance is well characterised, including vision, memory, and reasoning.
(3) Develop new learning algorithms designed to make symbolic systems biologically plausible.
| Alternative title | Generalization in Mind and Machine |
|---|---|
| Status | Finished |
| Effective start/end date | 1/09/17 → 31/08/22 |
| Links | https://cordis.europa.eu/project/rcn/210239_en.html https://mindandmachine.blogs.bristol.ac.uk/ |
Research Groups and Themes
- Brain Imaging
- Cognitive Neuroscience
- Language
- Brain and Behaviour
- Cognitive Science
- Visual Perception
Research output
- Evans, B. D., Malhotra, G. & Bowers, J. S. (1 Apr 2022). Biological convolutions improve DNN robustness to noise and generalisation. Neural Networks, 148, 96–110. Peer-reviewed journal article.
- Bowers, J. S., Malhotra, G., Dujmovic, M., Llera Montero, M., Tsvetkov, C. I., Biscione, V., Puebla, G., Gonzalez Adolfi, F., Hummel, J., Heaton, R., Evans, B. D., Mitchell, J. & Blything, R. (1 Dec 2022). Deep Problems with Neural Network Models of Human Vision. Behavioral and Brain Sciences. Peer-reviewed journal article. Open Access.
- Biscione, V. & Bowers, J. S. (1 Jun 2022). Learning online visual invariances for novel objects via supervised and self-supervised training. Neural Networks, 150, 222–236. Peer-reviewed journal article. Open Access.