Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words
Tae-Seok Lee, Hyun-Young Lee, Seung-Shik Kang, Journal of Information Processing Systems Vol. 18, No. 3, pp. 344-358, Jun. 2022
Keywords: BERT, Deep Learning, Generative Summarization, Selective OOV Copy Model, Unknown Words
Cite this article
[APA Style]
Lee, T.-S., Lee, H.-Y., & Kang, S.-S. (2022). Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words. Journal of Information Processing Systems, 18(3), 344-358. DOI: 10.3745/JIPS.02.0172.
[IEEE Style]
T.-S. Lee, H.-Y. Lee, and S.-S. Kang, "Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words," Journal of Information Processing Systems, vol. 18, no. 3, pp. 344-358, 2022. DOI: 10.3745/JIPS.02.0172.
[ACM Style]
Tae-Seok Lee, Hyun-Young Lee, and Seung-Shik Kang. 2022. Improving Abstractive Summarization by Training Masked Out-of-Vocabulary Words. Journal of Information Processing Systems 18, 3 (2022), 344-358. DOI: 10.3745/JIPS.02.0172.