
Automated Summarization Evaluation with Basic Elements

Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC 2006)

DOI: 10.63317/56mw4kwuy2rp

Abstract

As part of evaluating a summary automatically, it is usual to determine how much of the content of one or more human-produced “ideal” summaries it contains. Past automated methods such as ROUGE compare summaries using fixed word n-grams, which are not ideal for a variety of reasons. In this paper we describe a framework in which summary evaluation measures can be instantiated and compared, and we implement a specific evaluation method using very small units of content, called Basic Elements, that address some of the shortcomings of n-grams. This method is tested on DUC 2003, 2004, and 2005 systems and produces very good correlations with human judgments.
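To make the fixed n-gram comparison concrete, below is a minimal ROUGE-n-style recall sketch in Python. It is illustrative only: the whitespace tokenization, lowercasing, and per-n-gram clipping are assumptions about a typical setup, and this is the baseline the abstract critiques, not the Basic Elements method the paper proposes.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of word n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_recall(candidate, references, n=2):
    """ROUGE-n-style recall: fraction of reference n-grams also found in
    the candidate, with counts clipped per n-gram."""
    cand = ngrams(candidate.lower().split(), n)
    matched = total = 0
    for ref in references:
        ref_counts = ngrams(ref.lower().split(), n)
        total += sum(ref_counts.values())
        matched += sum(min(count, cand[gram]) for gram, count in ref_counts.items())
    return matched / total if total else 0.0

# Illustrative data: a paraphrase that conveys the same content scores
# poorly under fixed n-grams -- one shortcoming Basic Elements target.
refs = ["the court ruled against the company"]
print(ngram_recall("the company lost the case in court", refs, n=2))  # 0.2
```

In this toy example only one of the five reference bigrams is matched, even though the candidate expresses essentially the same fact, which is the kind of surface-form brittleness that motivates content units smaller and more flexible than fixed n-grams.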

Details

Paper ID
lrec2006-main-256
Pages
N/A
BibKey
hovy-etal-2006-automated
Editor
N/A
Publisher
European Language Resources Association (ELRA)
ISSN
2522-2686
ISBN
2-9517408-2-4
Conference
Fifth International Conference on Language Resources and Evaluation
Location
Genoa, Italy
Date
24–26 May 2006

Authors

  • Eduard Hovy
  • Chin-Yew Lin
  • Liang Zhou
  • Junichi Fukumoto
