
Hardware Acceleration Of Fully Quantized BERT For Efficient Natural Language Processing

Zejian Liu, Gang Li, Jian Cheng. Design, Automation & Test in Europe Conference & Exhibition (DATE), 2021 – 50 citations

[Paper]
Tags: Interactive Environments, Model Architecture, Neural Machine Translation, Productivity Enhancement

BERT is a recent Transformer-based model that achieves state-of-the-art performance on various NLP tasks. In this paper, we investigate the hardware acceleration of BERT on FPGAs for edge computing. To tackle its huge computational complexity and memory footprint, we propose to fully quantize BERT (FQ-BERT), including weights, activations, softmax, layer normalization, and all intermediate results. Experiments demonstrate that FQ-BERT achieves 7.94x weight compression with negligible performance loss. We then propose an accelerator tailored to FQ-BERT and evaluate it on Xilinx ZCU102 and ZCU111 FPGAs. It achieves a performance-per-watt of 3.18 fps/W, which is 28.91x and 12.72x higher than an Intel Core i7-8700 CPU and an NVIDIA K80 GPU, respectively.
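The core idea, quantizing weights, activations, and intermediate results so that the matrix multiplications run on low-bit integers, can be illustrated with a minimal sketch. The snippet below uses a generic symmetric per-tensor uniform quantizer in NumPy; the function name `quantize`, the 4-bit weight / 8-bit activation split, and the omission of the paper's quantized softmax and layer normalization are simplifying assumptions for illustration, not FQ-BERT's exact scheme.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric per-tensor uniform quantization to signed integers.

    Returns the integer tensor and the scale needed to map it back to floats.
    Illustrative only; FQ-BERT's actual bit-widths and quantizers differ.
    """
    qmax = 2 ** (num_bits - 1) - 1                      # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(x)) / qmax + 1e-12            # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int8 if num_bits <= 8 else np.int32), scale

rng = np.random.default_rng(0)
w = rng.standard_normal((768, 768)).astype(np.float32) * 0.05  # BERT-base hidden size
a = rng.standard_normal((128, 768)).astype(np.float32)         # activations for 128 tokens

qw, sw = quantize(w, num_bits=4)   # low-bit weights (assumed config)
qa, sa = quantize(a, num_bits=8)   # 8-bit activations (assumed config)

# Integer matmul accumulated in int32, rescaled once at the end.
out_int = qa.astype(np.int32) @ qw.astype(np.int32)
out = out_int.astype(np.float32) * (sa * sw)

err = np.abs(out - a @ w).mean()
print(f"mean absolute error after quantized matmul: {err:.4f}")
```

Keeping the accumulation in int32 and applying the two scales only once per output matrix is what makes such a scheme attractive for FPGA datapaths: the inner loops touch only narrow integers, and the single floating-point rescale can be folded into the following layer.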

Similar Work