[1] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8): 9.
[2] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[3] LIU P J, SALEH M, POT E, et al. Generating Wikipedia by Summarizing Long Sequences[C]// International Conference on Learning Representations. 2018.
[4] LI J P, ZHANG C, CHEN X J, et al. Survey on automatic text summarization[J]. Journal of Computer Research and Development, 2021, 58(1): 1-21. (in Chinese)
[5] ZHANG L K, WANG H F. Sentence extraction methods for text summarization[J]. Journal of Chinese Information Processing, 2012, 26(2): 97-101. (in Chinese)
[6] ZHAO W, WANG W J, REN Y N, et al. An abstractive text summarization model based on improved Transformer[J]. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2023, 35(1): 185-192. (in Chinese)
[7] FAN M, SUN F, MA Y. A survey of text summarization techniques[J]. Journal of Artificial Intelligence Research, 2017, 59: 185-212.
[8] ZHANG Y, CHEN J, CHEN X. Fine-grained definition generation for scientific terms via a hierarchical and compositional approach[J]. Journal of the Association for Information Science and Technology, 2021, 72(9): 1051-1065.
[9] JIANG T, LIU S, ZHOU Z, et al. A method for scientific term definition expansion based on machine learning[J]. Computer Applications in Engineering Education, 2021, 29(5): 1097-1112.
[10] WU X, CHEN H, CHEN Y. A Method of Acronym Expansions with Attention Mechanism[J]. Journal of Information Science Theory and Practice, 2021, 9(1): 73-90.
[11] CUI Q, LIU Z, WANG J, et al. Knowledge-based text summarization: Enriching summarization by incorporating knowledge graphs[C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2018: 579-584.
[12] CHEN X, ZHOU Y, YANG H, et al. An Improved Text Summarization Method Based on Entity-Relation Graph[J]. Future Internet, 2020, 12(2): 27.
[13] CHEN S L, GAO Z Y, LIU Y F. A survey of Chinese automatic text summarization based on deep learning[J]. Data Analysis and Knowledge Discovery, 2021, 5(7): 1-18. (in Chinese)
[14] CHEN J, LIU Y, JIN J. Neural summarization: A survey of supervised and unsupervised approaches[J]. ACM Transactions on Intelligent Systems and Technology (TIST), 2018, 10(3): 1-28.
[15] WANG W, PAN Y, DAHLMEIER D. A survey of deep learning-based text summarization models[J]. arXiv preprint arXiv:1912.01722, 2019.
[16] Electronic computer[DB/OL]. Termonline (术语在线). (2023-02-13) [2023-03-05]. https://www.termonline.cn/search?k=%E7%94%B5%E5%AD%90%E8%AE%A1%E7%AE%97%E6%9C%BA&r=1679891581708
[17] Editorial Board of Xuexi Qiangguo Daily Sci-Tech Terms. Xuexi Qiangguo Daily Sci-Tech Terms (2021)[M]. Changchun: Jilin Publishing Group Co., Ltd., 2022. (in Chinese)
[18] SU J. T5 PEGASUS[EB/OL]. ZhuiyiAI. (2022-04-03) [2023-03-05]. https://github.com/ZhuiyiTechnology/t5-pegasus
[19] XUE L, CONSTANT N, ROBERTS A, et al. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer[C]// Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021: 483-498.
[20] ZHANG J, ZHAO Y, SALEH M, et al. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization[C]// Proceedings of the 37th International Conference on Machine Learning. PMLR, 2020: 11328-11339.