Dialogue Annotation for Language Systems Evaluation
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC 2000)
Abstract
The evaluation of Natural Language Processing (NLP) systems remains an open problem, and further work from the research community is needed to establish general evaluation frameworks. In this paper we present an experimental multilevel annotation process to be followed during the testing phase of Spoken Language Dialogue Systems (SLDSs). Based on this process, we address issues related to an annotation scheme for evaluation dialogue corpora, as well as particular annotation tools and processes.