Reference-free Evaluation at Inference for NER/NEL over OCRed Historical Texts
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
Named Entity Recognition (NER) and Named Entity Linking (NEL) are core tasks in entity extraction, yet their robustness degrades on noisy inputs, such as texts produced by Optical Character Recognition (OCR) from historical documents. Although large language models (LLMs) have shown strong zero-shot and few-shot performance on NER and NEL, prior work has largely used LLMs as direct predictors rather than as evaluators of extraction quality. In this study, we explore the feasibility of using LLMs as learned evaluators that estimate the quality of NER/NEL outputs, especially in settings where human-annotated references are unavailable at inference time. We propose supervised approaches that fine-tune LLMs to predict quality scores from training data with gold annotations, enabling reference-free quality estimation once trained. Experiments on the HIPE-2020 benchmark across English, French, and German demonstrate that fine-tuned LLMs provide reliable estimates of output quality. Our findings suggest that LLM-based evaluation can support quality control and enable evaluation in noisy settings.