Operational Adaptation of Deep Neural Networks

  • Abanoub Ghobrial

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)

Abstract

Neural networks base their decisions on heavily nested calculations, which makes their behaviour for any given input difficult to comprehend. As such, neural networks are often treated as black boxes. This black-box treatment, combined with the vast range of possible inputs in their expected operational environments, means that relying solely on verification and validation at design time is insufficient to ensure their reliability and trustworthiness. As a result, researchers are exploring the potential for neural networks to continuously learn, adapt and validate their outputs during operation to increase their reliability and trustworthiness. This is especially important for neural networks in autonomous systems and machine learning systems that are likely to encounter a domain shift during operation.

Predeployment development, testing, and evaluation of machine learning systems are important; however, they are limited by problems such as parameter-space explosion and a lack of transparency in the decision-making behind output predictions. This makes it difficult to identify the operational domains in which machine learning systems may fail. To overcome this problem, systems employing machine learning functionalities need to adapt during runtime to fit their operational environments, whilst also continuously validating their outputs. This is crucial for enhancing reliability and trustworthiness across the operational domains that the system may encounter.

To achieve the goal of runtime adaptation, this thesis introduces a high-level approach for continuous adaptation of autonomous systems during runtime, divided into two sub-processes: 1) monitoring and detection and 2) retraining of neural networks. Based on these two sub-processes, this thesis contributes several pieces of work that benefit the process of runtime adaptation.
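To illustrate how the two sub-processes could fit together, the following sketch shows one possible organisation of such a runtime adaptation loop; the function names, interfaces, and buffer size are hypothetical and are not taken from the thesis.

```python
# Hypothetical runtime adaptation loop: monitoring/detection followed by retraining.
# The monitor, retrain_model and data-stream interfaces are illustrative assumptions,
# not the thesis's actual API.

def runtime_adaptation_loop(model, monitor, data_stream, retrain_model, buffer_size=100):
    """Continuously adapt `model` during operation.

    monitor(model, x, y_pred) -> True if the prediction is considered trustworthy.
    retrain_model(model, samples) -> adapted model (e.g. few-shot transfer learning).
    """
    flagged = []  # operational samples whose predictions were deemed untrustworthy
    for x in data_stream:
        y_pred = model(x)
        if monitor(model, x, y_pred):        # sub-process 1: monitoring and detection
            yield y_pred                      # prediction accepted as trustworthy
        else:
            flagged.append(x)                 # withhold or defer the prediction
        if len(flagged) >= buffer_size:       # sub-process 2: retraining
            model = retrain_model(model, flagged)
            flagged.clear()
```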

For the monitoring and detection sub-process, a novel method and a trustworthiness score are presented that enable autonomous systems to produce reliable predictions supported by transparent reasoning, and thereby to detect untrustworthy predictions. The thesis also explores the avenue of deriving runtime assertion monitors from codes of practice, building on an existing methodology for codifying them. Such codes of practice are written in natural language and need to be expressed as logical predicates to enable continuous runtime behavioural monitoring of autonomous systems.
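As an illustration of how a natural-language clause from a code of practice might be expressed as a logical predicate for runtime monitoring, the sketch below codifies an invented rule ("do not exceed 0.5 m/s within 2 m of a person") as a Python assertion; the rule, thresholds, and state fields are assumptions made for illustration and do not come from the thesis.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float          # current speed in metres per second
    nearest_person_m: float   # distance to the nearest detected person in metres

# Hypothetical code-of-practice clause, codified as a logical predicate:
# "If a person is within 2 m, speed must not exceed 0.5 m/s."
def safe_speed_near_people(state: VehicleState) -> bool:
    return state.nearest_person_m >= 2.0 or state.speed_mps <= 0.5

def runtime_monitor(state: VehicleState) -> None:
    # Evaluated continuously during operation; a failed assertion signals a
    # behavioural violation that can be logged or used to trigger a fallback.
    assert safe_speed_near_people(state), "Violation: speed too high near a person"
```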

For the retraining of neural networks sub-process, two novel methods for runtime domain adaptation are proposed: one for supervised retraining and one for unsupervised retraining at runtime. The methods use a small number of samples (between 1 and 100) and regularisation techniques to retrain via transfer learning without requiring access to the initial training dataset. Both approaches achieve state-of-the-art domain adaptation accuracy, improving prediction accuracy during runtime by an average of 15%, and by more than 40% for certain domains.
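A minimal sketch of this kind of few-shot, regularised retraining is given below, assuming PyTorch and an L2 penalty that keeps the adapted weights close to the pre-deployment weights; the thesis's actual methods, loss terms, and hyperparameters may differ.

```python
import copy
import torch
import torch.nn.functional as F

def few_shot_adapt(model, samples, labels, epochs=20, lr=1e-3, reg_strength=1e-2):
    """Adapt a pretrained model with 1-100 operational samples, without the
    original training data, regularising towards the pre-deployment weights."""
    source = copy.deepcopy(model)            # frozen copy of the deployed weights
    for p in source.parameters():
        p.requires_grad_(False)

    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimiser.zero_grad()
        # Supervised case; an unsupervised variant would replace this loss term.
        loss = F.cross_entropy(model(samples), labels)
        # Regularisation: penalise drift from the original weights to limit
        # catastrophic forgetting when only a handful of samples are available.
        for p_new, p_old in zip(model.parameters(), source.parameters()):
            loss = loss + reg_strength * (p_new - p_old).pow(2).sum()
        loss.backward()
        optimiser.step()
    return model
```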

Through the work carried out in this thesis, an improved understanding is generated of how autonomous systems may be deployed with evolving functionalities. The methods introduced here may equip innovators and regulators with more tools for the runtime adaptation and evaluation of autonomous systems, bringing such systems closer to deployment.
Date of Award: 18 Jun 2024
Original language: English
Awarding Institution:
  • University of Bristol
Supervisor: Kerstin I Eder (Supervisor)
