
Attention-Focused Adversarial Training for Robust Temporal Reasoning

Proceedings of the Thirteenth International Conference on Language Resources and Evaluation (LREC 2022)

DOI:10.63317/3u534cognzra

Abstract

We propose an enhanced adversarial training algorithm for fine-tuning transformer-based language models (e.g., RoBERTa) and apply it to the temporal reasoning task. Current adversarial training approaches for NLP add the adversarial perturbation only to the embedding layer, ignoring the other layers of the model, which might limit the generalization power of adversarial training. Instead, our algorithm searches for the best combination of layers to which the adversarial perturbation is added. We add the adversarial perturbation to multiple hidden states or attention representations of the model layers; adding the perturbation to the attention representations performed best in our experiments. Our model improves performance on several temporal reasoning benchmarks and establishes new state-of-the-art results.
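The core idea above, perturbing an attention representation with a gradient-based adversarial direction instead of only the input embeddings, can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: the single-head attention, the FGM-style normalized-gradient perturbation, and the choice to inject the perturbation into the attention scores are all simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, delta=None):
    # Scaled dot-product attention for one head.
    # `delta` is an optional adversarial perturbation added to the
    # attention scores -- a toy stand-in for perturbing the model's
    # attention representations rather than its input embeddings.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if delta is not None:
        scores = scores + delta
    return softmax(scores) @ v

def adversarial_delta(scores_grad, epsilon=0.1):
    # FGM-style perturbation: step of size epsilon along the normalized
    # gradient of the loss w.r.t. the attention scores. The paper's
    # actual inner optimization may differ (e.g., multiple ascent steps).
    norm = np.linalg.norm(scores_grad)
    return epsilon * scores_grad / (norm + 1e-12)

# Toy usage: compare clean vs. adversarially perturbed attention output.
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
clean = attention(q, k, v)
fake_grad = rng.normal(size=(4, 4))        # stands in for a real backprop gradient
perturbed = attention(q, k, v, delta=adversarial_delta(fake_grad))
```

In full adversarial training, `fake_grad` would come from backpropagating the task loss to the chosen layer's attention scores, and the model would then be trained to be consistent under the perturbation; a layer-search procedure, as in the paper, would decide which layers receive `delta`.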

Details

Paper ID
lrec2022-main-800
Pages
pp. 7352-7359
BibKey
kanashiro-pereira-etal-2022-attention
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
979-10-95546-38-2
Conference
Thirteenth Language Resources and Evaluation Conference
Location
Marseille, France
Date
20–25 June 2022

Authors

  • Lis Kanashiro Pereira
  • Kevin Duh
  • Fei Cheng
  • Masayuki Asahara
  • Ichiro Kobayashi
