Abstract

The task of visually identifying similar objects often necessitates multiple, varied observations. As humans, we actively manipulate conditions to obtain differing viewpoints, object configurations and more. Within this process, an agent integrates past and present information to build understanding and to inform future observations. In the realm of visual animal biometrics, current approaches typically operate within a paradigm of evaluating a single iteration or a single image. When faced with especially fine-grained object categories, however, single images may provide insufficient evidence alone. This thesis capitalises on these intuitions to demonstrate that animal identification benefits from an iterative and integrative paradigm, and proposes several visual animal biometric processes accordingly.
In the context of this work, the task of individual animal identification is investigated to demonstrate the advantages of enacting such a paradigm. Specifically, the work considers the automated and minimally intrusive identification of all individuals in a herd of Holstein Friesian dairy cattle, a breed exhibiting highly variant coat patterns in their visual features, structures and alignments. The idea is to treat such patterns as an individually unique biometric entity and to perform online herd identification via an active robotic agent in an agriculturally relevant setting. Natural challenges arise from these intentions by virtue of intra-species visual similarities and alignments, surface deformability and occlusion, target position discovery and more.
This thesis first demonstrates that the evaluation of single dorsal coat-pattern still images for identification purposes, via classical local features and representation learning, provides an identification baseline. In scenarios with partial (self-)occlusion of discriminative features, however, identification performance is improved by temporally-integrating architectures operating on image sequences of tracked individuals over time in a passive setting. Whilst this form of approach is sufficient across herds exhibiting little intra-population similarity, an active identity recovery framework is proposed next. In a realistic simulation environment it is shown that actively navigating to viewpoint positions that reveal disambiguating features can improve upon purely passive scenarios, concluding the treatment of individual cattle identification.
Next, inter-individual navigation is considered, where an agent is tasked with locating individuals in dynamically moving herds. This culminates in the finding that artificial neural networks can effectively learn herd-like spatio-temporal distributions by example. Finally, preliminary real-world experiments provide a proof of concept that an Unmanned Aerial Vehicle (UAV) agent can robustly discover and passively identify individual members of a small herd, combining the tasks of exploration and identification.
Altogether, this work suggests that contemporary approaches founded in deep learning, in conjunction with a UAV agent utilising existing technologies, can play a viable role in improving livestock welfare within the growing future of robotic and automated agriculture.
Date of Award: 7 May 2019
Supervisors: Tilo Burghardt, Colin Greatwood & Arthur G Richards