
HuLU: Hungarian Language Understanding Benchmark Kit

Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

DOI:10.63317/2tv86be3yd8b

Abstract

The paper introduces the Hungarian Language Understanding (HuLU) benchmark, a comprehensive assessment framework designed to evaluate the performance of neural language models on Hungarian language tasks. Inspired by the renowned GLUE and SuperGLUE benchmarks, HuLU aims to address the challenges specific to Hungarian language processing. The benchmark consists of various datasets, each representing different linguistic phenomena and task complexities. Moreover, the paper presents a web service developed for HuLU, offering a user-friendly interface for model evaluation. This platform not only ensures consistent assessment but also fosters transparency by maintaining a leaderboard showcasing model performances. Preliminary evaluations of various language models on HuLU datasets indicate that while Hungarian models show promise, there is room for improvement to match the proficiency of English-centric models in their native language.

Details

Paper ID
lrec2024-main-0733
Pages
pp. 8360-8371
BibKey
ligeti-nagy-etal-2024-hulu
Editor
N/A
Publisher
European Language Resources Association (ELRA) and ICCL
ISSN
2522-2686
ISBN
979-10-95546-34-4
Conference
Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Location
Turin, Italy
Date
20–25 May 2024

Authors

  • Noémi Ligeti-Nagy
  • Gergő Ferenczi
  • Enikő Héja
  • László János Laki
  • Noémi Vadász
  • Zijian Győző Yang
  • Tamás Váradi