Background
Type: Article

Evaluation metrics on text summarization: comprehensive survey

Journal: Knowledge and Information Systems (ISSN 0219-1377)
Year: December 2024
Volume: 66
Issue:
Pages: 7717 - 7738
Authors: Davoodijam E., Alambardar M.
DOI: 10.1007/s10115-024-02217-0
Language: English

Abstract

Automatic text summarization is the process of shortening a large document into a summary text that preserves the main concepts and key points of the original document. Due to the wide range of applications of text summarization, many studies have been conducted on it, but evaluating the quality of generated summaries poses significant challenges. Selecting appropriate evaluation metrics that capture the various aspects of summarization quality, including content, structure, coherence, readability, novelty, and semantic relevance, plays a crucial role in text summarization applications. To address this challenge, the main focus of this study is gathering and investigating a comprehensive set of evaluation metrics. Analyzing these metrics enhances the understanding of evaluation methods and supports the selection of appropriate metrics for evaluating text summarization systems in the future. After a short review of various automatic text summarization methods, we thoroughly analyze 42 prominent metrics, categorizing them into six distinct categories to provide insights into their strengths, limitations, and applicability. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
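The abstract names only metric categories, not individual metrics. As a concrete illustration of the content-overlap family of metrics such a survey typically covers, here is a minimal sketch of ROUGE-1 (unigram overlap); the function name, whitespace tokenization, and lowercasing are illustrative assumptions, not details from the paper:

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> dict:
    """ROUGE-1: clipped unigram overlap between a candidate summary
    and a reference summary, reported as recall/precision/F1.
    Tokenization here is simple lowercased whitespace splitting."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Each candidate unigram counts at most as often as it appears
    # in the reference (the "clipped" match count).
    overlap = sum((cand & ref).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return {"recall": recall, "precision": precision, "f1": f1}

scores = rouge_1("the cat sat on the mat", "the cat lay on the mat")
```

Content-overlap metrics like this are easy to compute but, as the survey's categorization suggests, they capture only one facet of quality; coherence, readability, and semantic relevance require different metric families.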