A Decision-Making Process to Implement the ‘Right to be Forgotten’ in Machine Learning

Research output: Contribution to conference › Conference Paper



The unprecedented scale at which personal data is used to train machine learning (ML) models is a motivation to examine the ways in which it can be erased when implementing the GDPR’s ‘right to be forgotten’. The existing literature investigating this right focuses on a purely technical or legal approach, lacking the collaboration required in this interdisciplinary space. Recent work has identified that there is no single solution to erasure in ML and that it must therefore be decided on a case-by-case basis. However, there is an absence of guidance for controllers to follow when personal data must be erased in ML. In this paper we develop a novel decision-making flow that encompasses the necessary considerations for a controller, addressing, in particular, the interdisciplinary considerations relevant to the EU GDPR and data protection scholarship, as well as concepts from computer science and their application in industry. This results in several optimal solutions for the controller and data subject, differing in levels of erasure. To validate the proposed decision-making flow, a real case study is discussed throughout the paper. The paper highlights the need for a clearer framework when personal data must be erased in ML, empowering the regulator, controller and data subject.
Original language: English
Publication status: Published - 2 Jun 2023
Event: Annual Privacy Forum 2023 - Université Lumière Lyon 2, Lyon, France
Duration: 1 Jun 2023 - 2 Jun 2023


Conference: Annual Privacy Forum 2023
Abbreviated title: APF


