The Transformer architecture revolutionized the field of natural language processing (NLP).
Transformer-based models (e.g., BERT) power many important Web services, such as search, translation, and question answering.
While enormous research attention is paid to training these models, relatively little effort has been made to improve their inference performance.
This paper addresses this gap by presenting an empirical analysis of the scalability and performance of inference for a Transformer-based model on CPUs.
Focusing on the highly popular BERT model, we identify key components of the Transformer architecture where the bulk of the computation happens,
and propose an Adaptive Linear Module Optimization (ALMO) to speed them up.
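The abstract does not spell out how ALMO works internally. As a purely illustrative sketch of the general idea behind an adaptive linear-module optimization (the class name AdaptiveLinear, the kernel candidates, and the shape-keyed selection policy below are our assumptions, not necessarily the paper's method), one could wrap each nn.Linear so that, per input shape, the fastest matrix-multiplication kernel is timed once and cached:

    # Hypothetical sketch: adaptively pick a matmul kernel per input shape.
    import time
    import torch
    import torch.nn as nn

    class AdaptiveLinear(nn.Module):
        """Drop-in replacement for nn.Linear (with bias, as in BERT):
        times candidate kernels once per input shape, caches the winner."""

        def __init__(self, linear: nn.Linear):
            super().__init__()
            self.weight = linear.weight   # trained parameters are reused
            self.bias = linear.bias       # unchanged, so accuracy is unaffected
            self._best = {}               # input shape -> fastest kernel name

        def _candidates(self):
            wt = self.weight.t()
            return {
                "addmm": lambda x: torch.addmm(self.bias, x, wt),
                "matmul": lambda x: x @ wt + self.bias,
            }

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            flat = x.reshape(-1, x.shape[-1])
            key = flat.shape
            if key not in self._best:
                # One-off calibration cost for each new shape.
                timings = {}
                for name, fn in self._candidates().items():
                    start = time.perf_counter()
                    fn(flat)
                    timings[name] = time.perf_counter() - start
                self._best[key] = min(timings, key=timings.get)
            out = self._candidates()[self._best[key]](flat)
            return out.reshape(*x.shape[:-1], self.weight.shape[0])

    def swap_linears(module: nn.Module):
        """Recursively replace every nn.Linear in a loaded model."""
        for name, child in module.named_children():
            if isinstance(child, nn.Linear):
                setattr(module, name, AdaptiveLinear(child))
            else:
                swap_linears(child)

Because the wrapper reuses the trained weights and only changes how the multiplication is dispatched, such an approach would leave model code and accuracy untouched, consistent with the claim below.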
The optimization is evaluated using the HuggingFace inference benchmark and is shown to achieve a speedup of up to 1.71x.
Notably, ALMO requires no changes to the implementation of the models and does not affect their accuracy.