
All posts (536)

(Paper translation) FLM-101B: An Open LLM and How to Train It with $100K Budget Original: https://arxiv.org/abs/2309.03852 All rights to this post belong to the original authors. ### FLM-101B: An Open LLM and How to Train It with $100K Budget Xiang Li1†, Yiqun Yao1†, Xin Jiang1†, Xuezhi Fang1†, Xuying Meng2, Siqi Fan3, Peng Han3, Jing Li4, Li Du1, Bowen Qin1, Zheng Zhang1, Aixin Sun5, Yequan Wang1∗ 1Beijing Academy of Artificial Intelligence, Beijing, China 2Institute of Computing Technology, Chinese Academ.. 2023. 9. 27.
(Paper translation) Chain-of-Verification Reduces Hallucination in Large Language Models Original: https://arxiv.org/pdf/2309.11495.pdf All rights to this post belong to the original authors. (Author names omitted; there are too many.) ### CHAIN-OF-VERIFICATION REDUCES HALLUCINATION IN LARGE LANGUAGE MODELS Shehzaad Dhuliawala Meta AI & ETH Zürich Mojtaba Komeili Meta AI Jing Xu Meta AI Roberta Raileanu Meta AI Xian Li Meta AI Asli Celikyilmaz Meta AI Jason Weston Meta AI ABSTRACT Generation of plausible yet incorrect factual information, t.. 2023. 9. 27.
(Paper translation) Simple Conversational Data Augmentation for Semi-supervised Abstractive Conversation Summarization Original: https://aclanthology.org/2021.emnlp-main.530.pdf All rights to this post belong to the original authors, Jiaao Chen and Diyi Yang. ### Simple Conversational Data Augmentation for Semi-supervised Abstractive Conversation Summarization Jiaao Chen School of Interactive Computing, Georgia Institute of Technology jiaaochen@gatech.edu Diyi Yang School of Interactive Computing, Georgia Institute of Technology dyang888@gatech.edu.. 2023. 9. 27.
(Paper translation) Narrate Dialogues for Better Summarization Original: https://aclanthology.org/2022.findings-emnlp.261/ All rights to this post belong to the original authors, Ruochen Xu, Chenguang Zhu, and Michael Zeng. ### Narrate Dialogues for Better Summarization Ruochen Xu, Chenguang Zhu, Michael Zeng Azure Cognitive Services Research, Microsoft {ruox, chezhu, nzeng}@microsoft.com Abstract Dialogue summarization models aim to generate a concise and accurate summary for multi- p.. 2023. 9. 26.