Novelty detection allows robots to recognise unexpected data in their sensory field and can thus be utilised in applications such as reconnaissance, surveillance and self-monitoring. We assess the suitability of Grow When Required Neural Networks (GWRNNs) for detecting novel features in a robot's visual input in the context of randomised physics-based simulation environments. We compare, for the first time, several GWRNN architectures, including new Plastic architectures in which the number of activated input connections for individual neurons is adjusted dynamically as the robot senses a varying number of salient environmental features. The networks are studied in both one-shot and continuous novelty reporting tasks and we demonstrate that there is a trade-off, not unique to this type of novelty detector, between robustness and fidelity. Robustness is achieved through generalisation over the input space, which minimises the impact of network parameters on performance, whereas high fidelity results from learning detailed models of the input space and is especially important when a robot encounters multiple novelties consecutively or must detect that previously encountered objects have disappeared from the environment. We propose a number of improvements that could mitigate the robustness-fidelity trade-off and demonstrate one of them, in which localisation information is added to the input data stream being monitored.
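To illustrate the mechanism underlying this kind of detector: in a Grow When Required network, each input is matched against the existing nodes, and a new node is inserted (a novelty signal) when the best-matching node responds too weakly despite being well habituated. The sketch below is a minimal, illustrative implementation of that idea; the class name, parameter values and the simplified habituation rule are assumptions for demonstration, not the architecture evaluated in the paper (edge pruning and neighbour adaptation are omitted for brevity).

```python
import numpy as np

class GWR:
    """Minimal Grow-When-Required network used as a novelty detector.

    A sketch only: thresholds and learning rates are illustrative
    assumptions, not the parameters used in the study.
    """

    def __init__(self, dim, activity_threshold=0.8, firing_threshold=0.3,
                 eps_b=0.1, tau=3.0):
        self.a_T = activity_threshold   # below this, input is a poor match
        self.h_T = firing_threshold     # below this, winner counts as trained
        self.eps_b = eps_b              # winner learning rate
        self.tau = tau                  # habituation time constant
        # start with two deterministic nodes so behaviour is reproducible
        self.w = [np.zeros(dim), np.ones(dim)]
        self.h = [1.0, 1.0]             # habituation counters (1 = fresh node)
        self.edges = {(0, 1)}           # topological connections

    def step(self, x):
        """Present one input; return True if it triggered node insertion,
        i.e. the network judged the input novel."""
        d = [np.linalg.norm(x - w) for w in self.w]
        b, s = np.argsort(d)[:2]        # best and second-best matching units
        self.edges.add((min(b, s), max(b, s)))
        activity = np.exp(-d[b])        # response of the winning node
        # novel if the match is poor AND the winner has fired often enough
        novel = bool(activity < self.a_T and self.h[b] < self.h_T)
        if novel:
            # grow: insert a node halfway between input and best match
            r = len(self.w)
            self.w.append((self.w[b] + x) / 2.0)
            self.h.append(1.0)
            self.edges |= {(min(b, r), max(b, r)), (min(s, r), max(s, r))}
        else:
            # adapt: move the winner towards the input, scaled by habituation
            self.w[b] = self.w[b] + self.eps_b * self.h[b] * (x - self.w[b])
        self.h[b] += (0.0 - self.h[b]) / self.tau   # habituate the winner
        return novel
```

Presenting a familiar input repeatedly habituates the winning node without growth; once that node is habituated, a sufficiently distant input drives its activity below the threshold and a new node is inserted, which is the event reported as novelty.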
- novelty detection
- self-organised neural networks
- unsupervised learning
Wilson, R. E., Johnson, A., Bullock, S., Lawry, J., Richards, A. G., Noyes, J. M., Hauert, S., Bode, N. W. F., Pitonakova, L., Kent, T., Crosscombe, M., Zanatto, D., Alkan, B., Drury, K. L., Hogg, E., Bonnell, W. D., Bennett, C. D., Clarke, C. E. M., Wisetjindawat, W., Potts, M. W., Ellinas, C., Sartor, P. N., Harvey, D., Rayneau-Kirkhope, B., Galvin, K., Lam, J., Barden, E. & Chattington, M.
1/10/17 → 30/09/22