CrisisCL: A Domain Incremental Learning Benchmark for Crisis Management
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
This paper proposes CrisisCL, a domain incremental learning benchmark for crisis management. Building on previous crisis management protocols, it improves consistency by enabling continual learning (CL) of newly emerging crises. A set of experiments has been conducted on multilingual datasets, combining continual learning methods with transformer models to improve performance and assess model generalization. Results reveal that regularization methods are more effective on large, coherent domains, whereas replay strategies struggle under constrained memory budgets. Additional experimental protocols further expose the limitations of current CL methods in generalizing to unforeseen crisis events.