Generating Sign Language Poses from HamNoSys and Natural Language Descriptions
Proceedings of the Fifteenth Language Resources and Evaluation Conference (LREC 2026)
Abstract
Generating a sequence of poses that renders each sign is one step in the sign language generation pipeline. This paper presents a method that uses textual information to improve the translation of signs in HamNoSys notation into pose sequences. The method comprises a description generator that translates HamNoSys into a textual description, an LLM fine-tuned to predict a pose sequence from that description, and a VQ-VAE network that encodes and decodes pose sequences as lists of discrete symbols. Our experiments show that even simple dictionary descriptions of HamNoSys make it possible to improve pose-sequence predictions by leveraging the knowledge in a pretrained LLM.
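To make the VQ-VAE component concrete, the following is a minimal sketch (not the paper's implementation) of the quantization step: a continuous pose embedding for each frame is mapped to the index of its nearest codebook vector, so a pose sequence becomes a list of discrete symbols an LLM can predict. The codebook size, embedding dimension, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: 512 discrete symbols, 64-dim embeddings.
codebook = rng.normal(size=(512, 64))

def quantize(pose_embeddings: np.ndarray) -> np.ndarray:
    """Map each frame embedding, shape (T, 64), to its nearest codebook index, shape (T,)."""
    # Squared Euclidean distance from every frame to every codebook entry.
    d = ((pose_embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def decode(indices: np.ndarray) -> np.ndarray:
    """Look up codebook vectors; in a full VQ-VAE a decoder network maps these back to poses."""
    return codebook[indices]

frames = rng.normal(size=(10, 64))  # a 10-frame pose sequence (continuous embeddings)
symbols = quantize(frames)          # discrete token sequence, one symbol per frame
recon = decode(symbols)             # quantized embeddings fed to the decoder
```

In this scheme, the fine-tuned LLM never predicts raw joint coordinates; it predicts the discrete symbol sequence, and the VQ-VAE decoder turns those symbols back into poses.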