Research and Application of Large Model-Based Intelligent Customer Service System


  • Yuan Xi, Beijing Ziwen Technology Co., Ltd.



Intelligent customer service, RAG large model, dialogue management, knowledge base, modular design


With the rapid development of artificial intelligence technology, intelligent customer service systems have been widely adopted. This paper addresses the limitations of traditional intelligent customer service systems, such as limited language understanding, narrow knowledge coverage, and insufficient personalization, and proposes a design scheme for an intelligent customer service system based on retrieval-augmented generation (RAG). The scheme leverages the language understanding and generation capabilities of large models, combined with dialogue management and knowledge base retrieval, to build an efficient and intelligent customer service system. The paper introduces the overall architecture of the system and the design and implementation of each module, and evaluates the system comprehensively through experiments. The results show that the system provides accurate and fluent customer service and significantly improves customer satisfaction. This research offers new ideas and a reference point for the development of intelligent customer service systems.
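The RAG flow the abstract describes can be sketched as: embed the customer query, retrieve the most similar knowledge-base entries, and assemble them into a prompt for the large model. The sketch below is illustrative only and not the paper's implementation; all names (`KNOWLEDGE_BASE`, `retrieve`, `build_prompt`) are hypothetical, and a toy word-count vector stands in for a real embedding model.

```python
# Minimal RAG sketch: retrieve knowledge-base entries similar to a query,
# then assemble them into a context-grounded prompt for a language model.
import math
from collections import Counter

# Hypothetical knowledge base of customer-service policy snippets.
KNOWLEDGE_BASE = [
    "Orders ship within 3 business days of payment confirmation.",
    "Refunds are processed to the original payment method within 7 days.",
    "Customer support is available 9:00-18:00 on weekdays.",
]

def bag_of_words(text: str) -> Counter:
    """Toy stand-in for an embedding model: a word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base entries most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: cosine(q, bag_of_words(doc)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Combine retrieved context with the query into an LLM prompt."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nCustomer: {query}"

print(build_prompt("Are refunds processed quickly?"))
```

In a production system the word-count vectors would be replaced by dense embeddings and a vector index, and the assembled prompt would be sent to the large model, with the dialogue manager supplying conversation history.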












How to Cite

Y. Xi, “Research and Application of Large Model-Based Intelligent Customer Service System”, ijetaa, vol. 1, no. 3, pp. 12–16, Apr. 2024, doi: 10.62677/IJETAA.2403114.