Output details

11 - Computer Science and Informatics

University of Brighton

Output 7 of 32 in the submission
Article title

An investigation into the validity of some metrics for automatically evaluating natural language generation systems

Type
D - Journal article
Title of journal
Computational Linguistics
Article number
-
Volume number
35
Issue number
4
First page of article
529
ISSN of journal
0891-2017
Year of publication
2009
Number of additional authors
1
Additional information

This paper presents an empirical investigation into the validity of corpus-based evaluation metrics such as BLEU for evaluating Natural Language Generation (NLG) systems. It has helped shape the NLG community’s perspective on the use of corpus-based evaluation metrics. Its experimental design for human ratings-based evaluations of NLG systems has since been adapted and used by other NLG researchers, for example in the Generation Challenges series of NLG system competitions. Computational Linguistics is a top journal in the field and ranks highly in international journal rankings, e.g. A* on the Australian ERA/CORE list.

Interdisciplinary
-
Cross-referral requested
-
Research group
None
Citation count
16
Proposed double-weighted
No
Double-weighted statement
-
Reserve for a double-weighted output
No
Non-English
No
English abstract
-