Privacy-Preserving Information Extraction with Local LLMs: A Comparative Study on Dutch Debt Collection Letters
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
For individuals in financial distress, understanding debt collection letters is critical. These documents are often unstructured, use complex legal language, and contain highly sensitive personal data. Automating information extraction is essential for assisting caseworkers, who currently perform this task manually, a slow and error-prone process. The sensitive nature of the data requires efficient, privacy-preserving, locally deployed solutions. This paper compares the feasibility of various local NLP models for this task. We evaluated a feature-engineered Conditional Random Field (CRF), a fine-tuned spaCy NER model, and several Large Language Models (LLMs) ranging from 1.1B to 14B parameters on a new synthetic dataset of 1,000 Dutch debt collection letters. Models were compared on accuracy (F1-score) and deployment metrics (CPU runtime and memory usage). Our results show a clear performance-resource trade-off. The lightweight CRF and spaCy models extracted structured data efficiently but failed on many critical unstructured fields. In contrast, LLM performance scaled directly with model size: the 14B DeepSeek model achieved the highest accuracy (95.2% average F1) and handled all field types successfully. We conclude that larger local LLMs are the most viable solution for accurate, private document processing, although a hybrid approach, using lightweight models for structured data and LLMs only for complex unstructured fields, would also be adequate.