Dynamic Model Switching to Mitigate Outdated Knowledge in Large Language Models
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
Generating timely and accurate content is a significant challenge for Large Language Models (LLMs): obsolete information reduces their reliability and erodes user trust. To overcome the limitations of single models in adapting to evolving information, we propose a dynamic model-switching framework. A switch model, trained with a multitask objective, adaptively selects between a large model lacking recent information and a smaller model fine-tuned on recent information, guided by contextual and temporal indicators. The framework incorporates semantic update detection and temporal switching, predicting text obsolescence through the aggregation of reward signals. For evaluation, we curate the Temporally-aware Dynamic Dataset (TaDD) from frequently updated Wikipedia and Guardian articles. Our framework achieves a balanced precision-recall trade-off across five datasets without continuous retraining, demonstrating efficiency and adaptability compared to static pretrained models.
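The switching decision described in the abstract can be sketched as a simple heuristic router. Everything here is an illustrative assumption (the cue list, the threshold, and the function names are invented for exposition); the paper's actual switch model is learned with a multitask objective over contextual and temporal indicators, not a keyword heuristic.

```python
# Minimal sketch of the routing idea: score how time-sensitive a query looks,
# then route to the small recently fine-tuned model or the large static model.
# TEMPORAL_CUES, temporal_score, route_query, and the 0.15 threshold are
# hypothetical stand-ins, not the paper's implementation.

TEMPORAL_CUES = {"latest", "current", "today", "recent", "now", "2026"}

def temporal_score(query: str) -> float:
    """Fraction of tokens matching temporal cues (a crude proxy for the
    learned switch model's contextual/temporal indicators)."""
    tokens = [t.strip("?,.!") for t in query.lower().split()]
    if not tokens:
        return 0.0
    return sum(t in TEMPORAL_CUES for t in tokens) / len(tokens)

def route_query(query: str, threshold: float = 0.15) -> str:
    """Return the identifier of the model that should answer the query."""
    if temporal_score(query) >= threshold:
        return "recent_small_model"
    return "large_static_model"
```

A learned switch model would replace `temporal_score` with a classifier trained on signals such as semantic update detection and aggregated reward signals, but the control flow (score, threshold, dispatch) stays the same.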