Document Summarization Model Based on General Context in RNN


Heechan Kim, Soowon Lee, Journal of Information Processing Systems Vol. 15, No. 6, pp. 1378-1391, Dec. 2019  

10.3745/JIPS.02.0123
Keywords: Document Summarization, General Context, Natural Language Processing, Sequence-to-Sequence Model

Abstract

In recent years, automatic document summarization has been widely studied in the field of natural language processing thanks to remarkable developments in deep learning models. To decode a word, existing models for abstractive summarization usually represent the context of a document as a weighted sum of the hidden states of the input words. Because the weights change at each decoding step, they reflect only the local context of the document, making it difficult to generate a summary that reflects its overall context. To solve this problem, we introduce the notion of a general context and propose a summarization model based on it. The general context reflects the overall context of the document and is independent of each decoding step. Experimental results on the CNN/Daily Mail dataset show that the proposed model outperforms existing models.
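The distinction the abstract draws can be illustrated with a minimal sketch. This is not the authors' exact formulation: the attention scoring function, dimensions, and the choice of a mean-pooled vector as the "general context" are all illustrative assumptions. The key point is that the local context depends on the decoder state at step t and therefore changes every step, while a general context is computed once from the encoder states alone.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical encoder hidden states for a 5-word document (hidden dim 4).
rng = np.random.default_rng(0)
h = rng.standard_normal((5, 4))

def local_context(h, s_t, v):
    """Standard attention: the weights depend on the decoder state s_t,
    so this context vector changes at every decoding step (local context).
    The additive scoring here is a simplified stand-in."""
    scores = np.tanh(h + s_t) @ v      # one score per input word
    a = softmax(scores)                # attention weights sum to 1
    return a @ h                       # weighted sum of hidden states

def general_context(h):
    """A step-independent summary of the whole document, here taken as
    the mean of all encoder states; computed once before decoding and
    reused at every step (an assumption for illustration)."""
    return h.mean(axis=0)

s_t = rng.standard_normal(4)           # a hypothetical decoder state
v = rng.standard_normal(4)             # attention parameter vector
c_local = local_context(h, s_t, v)     # differs as s_t changes per step
c_general = general_context(h)         # fixed across all decoding steps
```

In a full model, both vectors would typically be fed into the decoder together, so each generated word is conditioned on the step-specific local context as well as the fixed document-level one.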


Cite this article
[APA Style]
Heechan Kim and Soowon Lee (2019). Document Summarization Model Based on General Context in RNN. Journal of Information Processing Systems, 15(6), 1378-1391. DOI: 10.3745/JIPS.02.0123.

[IEEE Style]
H. Kim and S. Lee, "Document Summarization Model Based on General Context in RNN," Journal of Information Processing Systems, vol. 15, no. 6, pp. 1378-1391, 2019. DOI: 10.3745/JIPS.02.0123.

[ACM Style]
Heechan Kim and Soowon Lee. 2019. Document Summarization Model Based on General Context in RNN. Journal of Information Processing Systems, 15, 6, (2019), 1378-1391. DOI: 10.3745/JIPS.02.0123.