Abstract
Powerful generative tools are becoming popular amongst the general public as question-answering systems, and are being used by vulnerable groups such as children. With children increasingly interacting with these tools, it is imperative for researchers to scrutinise the safety of LLMs, especially for applications that could lead to serious outcomes, such as online child safety queries. In this paper, the efficacy of LLMs for online grooming prevention is explored, both for identifying grooming and for helping users avoid it through generated advice, and the impact of prompt design on model performance is investigated by varying the provided context and prompt specificity. In results spanning over 6,000 LLM interactions, we find that no model was clearly appropriate for online grooming prevention, observing inconsistent behaviour and the potential for harmful answer generation, especially from open-source models. We outline where and how models fall short, provide suggestions for improvement, and identify prompt designs that heavily altered model performance in troubling ways, with findings that can inform best-practice usage guides.
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings of the 2024 European Interdisciplinary Cybersecurity Conference (EICC) |
| Editors | Shujun Li, Kovila Coopamootoo, Michael Sirivianos |
| Publisher | Association for Computing Machinery (ACM) |
| Pages | 1-10 |
| Number of pages | 10 |
| ISBN (Electronic) | 9798400716515 |
| ISBN (Print) | 9798400716515 |
| Publication status | Published - 5 Jun 2024 |
Publication series

| Field | Value |
|---|---|
| Name | ACM International Conference Proceeding Series |
Bibliographical note
Publisher Copyright: © 2024 Owner/Author.
Research Groups and Themes
- Cyber Security