
CodeClarity: A Framework and Benchmark for Evaluating Multilingual Code Summarization

Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)

DOI: 10.63317/3cmt3ycig8a7

Abstract

Large Language Models (LLMs) are increasingly used to summarize and document code, yet most research and training data remain limited to English. This creates barriers for developers working in other languages and leaves the multilingual capabilities of LLMs largely unexplored. We present CodeClarity, a framework for evaluating multilingual code summarization across six programming and six natural languages. It combines reference-based metrics, LLM-judge ratings, and faithfulness checks (identifiers and script) to capture surface similarity, semantic adequacy, and code-aware fidelity. Our experiments reveal that lexical metrics penalize morphologically rich languages, while judge-based evaluations provide more stable, semantically aligned assessments. This work establishes the first reproducible foundation for studying multilingual code summarization and points toward fairer, more inclusive evaluation of code intelligence systems. CodeClarity-Bench and the full evaluation pipeline are publicly available at huggingface.co/CodeClarity and github.com/MadhuNimmo/CodeClarity, enabling community-scale human validation and follow-up studies.
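As a concrete illustration of the faithfulness checks mentioned in the abstract (identifiers and script), the following Python sketch shows one way such checks could be computed: identifier faithfulness as the fraction of code identifiers mentioned verbatim in the summary, and script consistency as the fraction of the summary's letters drawn from the expected writing system. The function names and heuristics here are illustrative assumptions for exposition, not the CodeClarity API.

    import re
    import unicodedata

    def extract_identifiers(code: str) -> set[str]:
        """Collect identifier-like tokens from source code, dropping a few common keywords."""
        keywords = {"def", "return", "if", "else", "for", "while", "class", "import", "in", "not"}
        tokens = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code))
        return tokens - keywords

    def identifier_faithfulness(code: str, summary: str) -> float:
        """Fraction of code identifiers that appear verbatim in the summary."""
        identifiers = extract_identifiers(code)
        if not identifiers:
            return 1.0
        mentioned = {name for name in identifiers if name in summary}
        return len(mentioned) / len(identifiers)

    def script_consistency(summary: str, expected_script: str) -> float:
        """Fraction of alphabetic characters whose Unicode name contains the expected script
        (e.g. 'DEVANAGARI'); a rough proxy for the summary being written in the target script."""
        letters = [ch for ch in summary if ch.isalpha()]
        if not letters:
            return 0.0
        in_script = [ch for ch in letters if expected_script.upper() in unicodedata.name(ch, "")]
        return len(in_script) / len(letters)

    # Example: a Hindi summary of a small Python function.
    code = "def add_totals(prices, tax_rate):\n    return sum(prices) * (1 + tax_rate)"
    summary = "add_totals prices की कुल राशि पर tax_rate लागू करके योग लौटाता है।"
    print(identifier_faithfulness(code, summary))    # 0.75 (3 of 4 identifiers mentioned)
    print(script_consistency(summary, "DEVANAGARI"))

In this sketch, code identifiers are exempt from the script check: keeping names like add_totals in Latin script is expected even in a non-Latin-script summary, which is why the two checks are reported separately rather than combined.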

Details

Paper ID
lrec2026-main-511
Pages
pp. 6439-6451
BibKey
chakraborty-etal-2026-codeclarity
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
978-2-493814-49-4
Conference
The Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Location
Palma, Mallorca, Spain
Date
11–16 May 2026

Authors

  • Madhurima Chakraborty
  • Drishti Sharma
  • Maryam Sikander
  • Eman Nisar
