A Systematic Comparison of Large Language Models for Data Annotation in NER Tasks
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
High-quality annotated data is essential for training effective machine learning models, especially for fine-grained tasks such as Named Entity Recognition (NER), where every token in a sentence must be assigned a gold-standard label. While Large Language Models (LLMs) show strong potential for automating data annotation, the existing literature lacks extensive evaluations that systematically compare different models, embedding strategies, and context selection methods, particularly on complex, real-world datasets. This paper addresses this gap with a comprehensive study of LLMs for NER annotation across four diverse datasets. It benchmarks both proprietary and open-source LLMs in the 7B to 70B parameter range, including a 32B reasoning-optimized model, and explores multiple context selection strategies. Two evaluations are performed: (i) an assessment of the practical utility of LLM-generated annotations, obtained by fine-tuning a RoBERTa model on them and measuring downstream performance; and (ii) a direct assessment of the annotations themselves using token-level metrics such as Precision, Recall, F1, and agreement with human annotations (Cohen’s κ). Empirical results, supported by statistical tests, highlight the importance of choosing suitable LLMs and embedding models and reveal key trade-offs between model scale and annotation quality. Challenging datasets such as SKILLSPAN further expose the limitations of current LLM-based annotation pipelines, underscoring the need for benchmarking on difficult, real-world tasks.
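For reference, the agreement metric named in evaluation (ii) is Cohen’s κ; assuming the standard formulation (the abstract does not spell it out), it is computed at the token level as

\kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of tokens for which the LLM and the human annotator assign the same label, and p_e is the agreement expected by chance given each annotator’s label distribution.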