Aligned Parallel Corpus of the Vedic Saṁhitās for Machine Translation
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
We introduce a verse-/paragraph-aligned parallel corpus for three Vedic Saṁhitās – the R̥gveda (R̥V), the Atharvaveda Śaunaka (AVŚ), and the Taittirīya Saṁhitā (TS) – paired with authoritative public-domain translations (Geldner for the R̥V, Whitney for the AVŚ, and Keith for the TS). The source texts are drawn from established digital editions (e.g., TITUS and VedaWeb) and normalized to ISO 15919 transliteration. Each Sanskrit segment is aligned to exactly one translated unit (a verse, or a paragraph for TS prose), yielding a unified, model-ready format. Using this resource, we fine-tune and evaluate three large language models – GPT-4.1 nano, Gemini 2.5 Flash, and Mitra – on Vedic→German/English translation. Evaluation combines surface and semantic metrics (case-insensitive sacreBLEU and COMET), enabling a balanced assessment of form and meaning. Results show consistent in-domain gains after supervised fine-tuning, but substantial cross-domain degradation when models are tested on unseen Saṁhitās, indicating pronounced stylistic and lexical divergence among the R̥V, AVŚ, and TS. These findings motivate domain-aware training and reporting practices for Vedic machine translation. We release the corpus with standardized splits and preprocessing to support reproducibility and future research on historical language modeling, alignment, and translation for low-resource ancient languages.
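The abstract describes a unified, model-ready format in which each Sanskrit segment maps to exactly one translated unit. The release format is not specified here, so the following is a minimal sketch assuming a JSONL layout with hypothetical field names (`id`, `samhita`, `src`, `tgt`) to show how such 1:1 alignments could be turned into fine-tuning examples:

```python
import json

def load_pairs(path):
    """Read verse-/paragraph-aligned segments from a JSONL file.

    Assumes one alignment per line with hypothetical fields:
    'id' (e.g., a verse reference), 'samhita' (R̥V/AVŚ/TS),
    'src' (ISO 15919 Sanskrit), 'tgt' (the paired translation).
    """
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def to_examples(pairs):
    """Convert aligned pairs into prompt/completion records for
    supervised fine-tuning of a translation model."""
    return [
        {
            "prompt": f"Translate this Vedic passage ({p['samhita']}): {p['src']}",
            "completion": p["tgt"],
        }
        for p in pairs
    ]
```

Keeping the Saṁhitā label in each record would also make it easy to build the domain-aware (in-domain vs. cross-domain) splits the paper's evaluation calls for.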