Abstract
Can artificial intelligence accurately label open-text survey responses? We compare the accuracy of six large language models (LLMs) using a few-shot approach, three supervised learning algorithms (SVM, DistilRoBERTa, and a neural network trained on BERT embeddings), and a second human coder on the task of categorizing “most important issue” responses from the British Election Study Internet Panel into 50 categories. For the scenario where a researcher lacks existing training data, the accuracy of the highest-performing LLM (Claude-1.3: 93.9%) neared human performance (94.7%) and exceeded the highest-performing supervised approach trained on 1000 randomly sampled cases (neural network: 93.5%). In a scenario where previous data has been labeled but a researcher wants to label novel text, the best LLM’s (Claude-1.3: 80.9%) few-shot performance is only slightly behind the human (88.6%) and exceeds the best supervised model trained on 576,000 cases (DistilRoBERTa: 77.8%). PaLM-2, Llama-2, and the SVM all performed substantially worse than the best LLMs and supervised models across all metrics and scenarios. Our results suggest that LLMs may allow for greater use of open-ended survey questions in the future.
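The few-shot labelling workflow the abstract compares against supervised baselines can be illustrated with a minimal sketch: build a prompt containing the codebook categories and a handful of labelled examples, send a new response to an LLM, and keep the reply only if it matches a known category. The category names, example answers, and the `call_llm` helper below are illustrative placeholders assumed for this sketch, not the paper's actual prompt, codebook, or code.

```python
# Minimal sketch of few-shot LLM labelling of open-text survey responses.
# Assumes a generic text-in/text-out LLM client supplied by the caller;
# categories and examples here are hypothetical (the paper uses ~50 categories).

CATEGORIES = ["economy-general", "immigration", "health", "environment", "none"]

FEW_SHOT_EXAMPLES = [
    ("the cost of living and rising prices", "economy-general"),
    ("too many people coming into the country", "immigration"),
    ("NHS waiting lists", "health"),
]

def build_prompt(response_text: str) -> str:
    """Assemble a few-shot prompt: instructions, labelled examples, then the new response."""
    lines = [
        "Assign each survey answer to exactly one category from this list:",
        ", ".join(CATEGORIES),
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f'Answer: "{text}"\nCategory: {label}\n')
    lines.append(f'Answer: "{response_text}"\nCategory:')
    return "\n".join(lines)

def classify(response_text: str, call_llm) -> str:
    """call_llm is any function that sends a prompt string to an LLM and returns its text reply."""
    reply = call_llm(build_prompt(response_text)).strip().lower()
    # Accept only exact matches to the codebook; anything else is flagged for human review.
    return reply if reply in CATEGORIES else "unmatched"
```

In this setup, accuracy against a human-coded gold standard would be computed by running `classify` over held-out responses, which mirrors the comparison reported in the abstract.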
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-7 |
| Number of pages | 7 |
| Journal | Research & Politics |
| Volume | 11 |
| Issue number | 1 |
| Early online date | 12 Feb 2024 |
| DOIs | |
| Publication status | Published - 1 Apr 2024 |
Bibliographical note
Funding Information: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study is supported by the Economic and Social Research Council (ES/S015671/1 and ES/S012435/1).
Publisher Copyright:
© The Author(s) 2024.