Abstract
The rise of machine learning and deep learning has revolutionized artificial intelligence (AI) across diverse domains. However, most AI research focuses on optimizing detection accuracy or decision-making precision for specific input data, often overlooking the integration of the ethical considerations needed to address the complexities of real-world scenarios. Applications such as autonomous driving require not only reliable data-processing performance but also strict adherence to ethical principles that align with societal values. This paper introduces an Ethically Responsible Decision-Making (ER-DM) model, in which ethical principles are mathematically formulated and integrated into the reinforcement learning (RL) framework. To address the challenges of operationalizing abstract ethical principles, we introduce a dual ethical paradigm based on Deontology and Consequentialism: Deontology is embedded as regulatory constraints on state transitions and policy networks, while Consequentialism is embedded as outcome evaluation in reward functions. Additionally, we propose a novel task, Ethically Responsible Anomaly Detection (ER-AD), which leverages enriched ethical scenario information to classify obstacles into four risk levels according to their ethical abnormality. The ER-DM model is systematically validated in complex driving scenarios through experiments, demonstrating at least a 6% improvement in decision-making accuracy over baseline models. Furthermore, by integrating the ER-DM model with deep learning segmentation models, we establish an end-to-end detection system that achieves significant improvements in image-based anomaly detection tasks.
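The abstract's dual ethical paradigm can be illustrated with a minimal sketch: deontological rules act as hard constraints that mask impermissible actions before selection, while a consequentialist term scores outcomes inside the reward. All names below (`FORBIDDEN`, the example states and actions, the harm weight) are hypothetical illustrations, not the paper's actual formulation.

```python
# Minimal sketch of the dual ethical paradigm described in the abstract.
# Deontology: hard rules restrict which actions are permissible in a state.
# Consequentialism: outcomes are scored by a reward trading progress against harm.
# The rule set and state/action names here are invented for illustration.
FORBIDDEN = {("pedestrian_ahead", "accelerate")}  # hypothetical hard rule

def deontological_mask(state, actions):
    """Return only the actions that violate no hard rule in the current state."""
    return [a for a in actions if (state, a) not in FORBIDDEN]

def consequentialist_reward(progress, harm_risk, w_harm=10.0):
    """Outcome-based reward: task progress minus a weighted expected-harm term."""
    return progress - w_harm * harm_risk

def choose_action(state, actions, q_values):
    """Greedy choice restricted to the ethically permissible action set."""
    allowed = deontological_mask(state, actions)
    return max(allowed, key=lambda a: q_values[(state, a)])
```

In this toy setup, even if the learned Q-value favors `accelerate`, the deontological mask removes it when a pedestrian is ahead, and the reward shaping discourages risky outcomes during learning.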
| Original language | English |
|---|---|
| Article number | 103226 |
| Number of pages | 15 |
| Journal | Information Fusion |
| Volume | 122 |
| DOIs | |
| Publication status | Published - 24 Apr 2025 |
Bibliographical note
Publisher Copyright: © 2025 Elsevier B.V.