Abstract
Many collaborative efforts, from policy initiatives to cutting-edge research, have been dedicated to developing a trustworthy Human-Machine Teaming ecosystem that supports machines in ethical decision-making and aids humans in making better decisions. However, the practical implementation of a reliable interdependence between machine learning and humans faces significant challenges.
Firstly, existing explainable machine learning methods struggle to leverage their interpretations to improve performance while balancing computational intensity, prediction accuracy, and explainability. This limitation reduces human decision-makers' trust in interpretable models.
Secondly, while current machine learning methods are designed to adhere to human ethical values, they fall short in addressing the incommensurability of human moral values and in managing complex, conflicting objectives. Lastly, decision-making scenarios are often uncertain and unpredictable, demanding substantial computational resources and time; yet current machine decision-making lacks the computational feasibility and generalizability to handle unseen scenarios, leading to inefficient and inadequate outcomes.
Therefore, in this thesis, we explore the reliable interdependence between machine learning and humans from three aspects and develop novel computational frameworks to address the above challenges. First, we present a framework for enhancing decision-making efficiency that enables machine learning models to be trusted by humans. Specifically, we propose an energy-efficient and trustworthy unsupervised anomaly detection framework, which achieves a 10% improvement in anomaly detection accuracy along with enhanced energy efficiency and trustworthiness. To address the second challenge, we introduce an ethical decision-making system designed to enable human value-compliant machine learning for safer decision-making. Our system is applied to moral dilemmas in autonomous buses and demonstrates a 27% improvement in recovering optimal solutions within the convex coverage set, as well as a 76.8% policy matching rate with human participants' decisions. Lastly, targeting the third challenge of enabling a symbiotic cycle between machine learning and humans for decision-making generalizability, we propose a Digital Twin-assisted ethical dilemma decision-making framework, which achieves a 38% increase in the convergence rate for finding optimal solutions and demonstrates adaptability to unseen scenarios.
Date of Award | 4 Feb 2025
---|---
Original language | English
Awarding Institution |
Supervisor | Yulei Wu