Not everything is online grooming: False risk finding in large language model assessments of human conversations

Abstract
Large Language Models (LLMs) have rapidly been adopted by the general public, and as usage of these models becomes commonplace, they will naturally be used for increasingly human-centric tasks, including security advice and risk identification in personal situations. It is imperative that systems used in such a manner are well-calibrated. In this paper, six popular LLMs were evaluated for their propensity towards false or over-cautious risk finding in online interactions between real people, with a focus on the risk of online grooming, the advice generated for such contexts, and the impact of prompt specificity. Through an analysis of 3840 generated answers, it was found that models could find online grooming in even the most harmless of interactions, and that the generated advice could be harmful, judgemental, and controlling. We describe these shortcomings and identify areas for improvement, including suggestions for future research directions.
| Original language | English |
|---|---|
| Title of host publication | The First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security (NLPAICS’2024) Proceedings |
| Editors | Ruslan Mitkov, Saad Ezzini, Cengiz Acarturk, Tharindu Ranasinghe, Paul Rayson, Mo El-Haj, Ignatius Ezeani, Matthew Bradbury, Nouran Khallaf |
| Publisher | ACL Anthology |
| Pages | 219-288 |
| Number of pages | 10 |
| Publication status | Published - 30 Jul 2024 |
| Event | First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security, Lancaster University, Lancaster, United Kingdom, 29 Jul 2024 → 30 Jul 2024, https://nlpaics.com/ |
Conference
| Conference | First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security |
|---|---|
| Abbreviated title | NLPAICS'2024 |
| Country/Territory | United Kingdom |
| City | Lancaster |
| Period | 29/07/24 → 30/07/24 |
| Internet address | https://nlpaics.com/ |
Research Groups and Themes
- Cyber Security