
Inference Performance Optimization For Large Language Models On CPUs

Pujiang He, Shan Zhou, Wenhuan Huang, Changqing Li, Duyi Wang, Bin Guo, Chen Meng, Sheng Gui, Weifei Yu, Yi Xie. No Venue 2024

[Code] [Paper]   Search on Google Scholar   Search on Semantic Scholar
Efficiency · Has Code · Memory & Context · Tools

Large language models (LLMs) have shown exceptional performance and vast potential across diverse tasks. However, deploying LLMs with high performance in low-resource environments has garnered significant attention in the industry. When GPU hardware resources are limited, we can explore alternative options on CPUs. To mitigate the financial burden and alleviate the constraints imposed by hardware resources, optimizing inference performance is necessary. In this paper, we introduce an easily deployable inference performance optimization solution aimed at accelerating LLMs on CPUs. In this solution, we implement an effective way to reduce the KV cache size while ensuring precision. We propose a distributed inference optimization approach and implement it based on the oneAPI Collective Communications Library. Furthermore, we propose optimization approaches for LLMs on CPUs and conduct tailored optimizations for the most commonly used models. The code is open-sourced at https://github.com/intel/xFasterTransformer.
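The abstract's two main levers are a reduced-size KV cache and oneCCL-based distributed inference. The sketch below is purely illustrative and is not taken from the xFasterTransformer code: it shows one common way to shrink a KV cache, per-token symmetric INT8 quantization with a stored scale, which cuts memory roughly 2-4x versus FP16/FP32 at a small precision cost. All function names and shapes here are hypothetical.

```python
import numpy as np

def quantize_kv_int8(kv: np.ndarray):
    """Per-token symmetric INT8 quantization of a KV-cache tensor.

    kv: float array of shape (num_tokens, num_heads, head_dim).
    Returns (int8 values, per-token float32 scales).
    """
    # One scale per token: max |value| over heads and head_dim.
    scale = np.abs(kv).max(axis=(1, 2), keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-8)  # avoid division by zero
    q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_kv_int8(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float KV cache before attention."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kv = rng.standard_normal((16, 32, 128)).astype(np.float32)  # toy cache
    q, s = quantize_kv_int8(kv)
    err = np.abs(dequantize_kv_int8(q, s) - kv).max()
    print(f"max abs error: {err:.4f}, bytes: {q.nbytes + s.nbytes} vs {kv.nbytes}")
```

The paper's actual scheme and the oneCCL-based distributed path may differ in details; this snippet only illustrates the general idea of trading a small, bounded quantization error for a much smaller KV-cache footprint on CPU memory.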

https://huggingface.co/discussions/paper/668f48825156d55f729b4778

Similar Work