ASSESSING THE EFFICACY OF LSTM, TRANSFORMER, AND RNN ARCHITECTURES IN TEXT SUMMARIZATION



Authors

  • Seda BAYAT Igdir University
  • Gultekin ISIK Igdir University

DOI:

https://doi.org/10.59287/icaens.1099

Keywords:

Automatic Text Summarization, LSTM, GRU, RNN, Transformer

Abstract

The need for efficient and effective techniques for automatic text summarization has become increasingly critical with the exponential growth of textual data across domains. Condensing long texts into short summaries enables a quick grasp of the key information contained in the documents. In this paper, we evaluate several architectures for automatic text summarization using the TEDx dataset, a valuable resource consisting of a large collection of TED talks with rich and informative speech transcripts. Our research focuses on evaluating the performance of Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), and Transformer architectures for automatic text summarization. We measure the accuracy of each model by comparing the generated summaries with human-written summaries. The findings show that the Transformer model achieves the highest accuracy, followed closely by the GRU model, while the LSTM and RNN models exhibit relatively lower accuracies. We also investigate the trade-off between accuracy and conciseness in summarization. Our study reveals that the Transformer model produces accurate and concise summaries, albeit at a higher computational cost. The GRU model, on the other hand, strikes a desirable balance between accuracy and conciseness, making it a suitable choice. Overall, this research provides valuable insights into the effectiveness of different architectures for automatic text summarization and highlights the superiority of the Transformer and GRU models in this area.
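The evaluation described above, comparing generated summaries against human-written references, is commonly scored with an n-gram overlap metric such as ROUGE. The snippet below is an illustrative sketch of a ROUGE-1 style F1 score, not the authors' actual evaluation code; function and variable names are hypothetical.

```python
# Illustrative sketch (assumption: not the paper's exact metric code):
# a minimal ROUGE-1 style F1 score, i.e. unigram overlap between a
# generated summary and a human-written reference summary.
from collections import Counter

def rouge1_f1(generated: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated and a reference summary."""
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # matched unigram counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: score a short generated summary against a reference.
score = rouge1_f1(
    "the talk explains neural summarization",
    "this talk explains neural text summarization",
)
```

In practice a library implementation (e.g. a full ROUGE toolkit with ROUGE-2 and ROUGE-L variants) would be used, but the unigram case above conveys the idea: higher overlap with the human summary means higher measured accuracy.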

Author Biographies

Seda BAYAT, Igdir University

Mechatronic Engineering, Turkey

Gultekin ISIK, Igdir University

Computer Engineering, Turkey

Published

2023-07-21

How to Cite

BAYAT, S., & ISIK, G. (2023). ASSESSING THE EFFICACY OF LSTM, TRANSFORMER, AND RNN ARCHITECTURES IN TEXT SUMMARIZATION. International Conference on Applied Engineering and Natural Sciences, 1(1), 813–820. https://doi.org/10.59287/icaens.1099