Transformers In Time-series Analysis: A Tutorial | Awesome LLM Papers

Transformers In Time-series Analysis: A Tutorial

Sabeen Ahmed, Ian E. Nielsen, Aakash Tripathi, Shamoon Siddiqui, Ghulam Rasool, Ravi P. Ramachandran. Circuits, Systems, and Signal Processing 2022 – 192 citations

[Paper]
Tags: Applications, Model Architecture

The Transformer architecture has widespread applications, particularly in natural language processing and computer vision. Recently, Transformers have been employed in various aspects of time-series analysis. This tutorial provides an overview of the Transformer architecture, its applications, and a collection of examples from recent research papers in time-series analysis. We delve into an explanation of the core components of the Transformer, including the self-attention mechanism, positional encoding, multi-head attention, and the encoder/decoder structure. Several enhancements to the original Transformer architecture are highlighted to tackle time-series tasks. The tutorial also provides best practices and techniques to overcome the challenge of effectively training Transformers for time-series analysis.
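To make two of the components named above concrete, here is a minimal NumPy sketch of sinusoidal positional encoding and single-head scaled dot-product self-attention applied to a toy time series. The toy embedding, weight shapes, and variable names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding (sin on even dims, cos on odd dims)."""
    pos = np.arange(seq_len)[:, None]   # (seq_len, 1)
    i = np.arange(d_model)[None, :]     # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a sequence x."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    # Row-wise softmax: each time step attends over all time steps.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Toy univariate series, linearly embedded into d_model dims (assumption).
seq_len, d_model = 16, 8
rng = np.random.default_rng(0)
series = rng.standard_normal((seq_len, 1))
x = series @ rng.standard_normal((1, d_model)) + positional_encoding(seq_len, d_model)
w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
```

Without the positional-encoding term, self-attention is permutation-invariant, so shuffling the time steps would not change the (permuted) output; the additive encoding is what lets the model use temporal order.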

Similar Work