LREC 2022 (Main Conference)

JGLUE: Japanese General Language Understanding Evaluation

Proceedings of the Thirteenth International Conference on Language Resources and Evaluation (LREC 2022)

DOI:10.63317/4t8apzxjf2bw

Abstract

To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.

Details

Paper ID
lrec2022-main-317
Pages
pp. 2957–2966
BibKey
kurihara-etal-2022-jglue
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
979-10-95546-38-2
Conference
Thirteenth Language Resources and Evaluation Conference
Location
Marseille, France
Date
20–25 June 2022

Authors

  • Kentaro Kurihara
  • Daisuke Kawahara
  • Tomohide Shibata
