---
tags: paper-analysis
---
# Paper Analysis: Temporal Fusion Transformers for interpretable multi-horizon time series forecasting
# Abstract
Multi-horizon forecasting often contains a complex mix of inputs – including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed in the past – without any prior information on how they interact with the target. Several deep learning methods have been proposed, but they are typically ‘black-box’ models that do not shed light on how they use the full range of inputs present in practical scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) – a novel attention-based architecture that combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, TFT uses recurrent layers for local processing and interpretable self-attention layers for long-term dependencies. TFT utilizes specialized components to select relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of scenarios. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and highlight three practical interpretability use cases of TFT.
# Problem Addressed
The paper tackles multi-horizon forecasting with a complex mix of inputs, including static (time-invariant) covariates, known future inputs, and exogenous time series observed only in the past, without prior knowledge of how these inputs interact with the target. Existing deep learning approaches are typically black-box models that give no insight into how they use the full range of inputs, so the paper aims to deliver interpretability alongside forecasting accuracy. It proposes the Temporal Fusion Transformer (TFT), a novel attention-based architecture, and demonstrates significant performance gains and interpretable insights on multiple real-world datasets.
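As a rough illustration of these three input types, the sketch below lays out one batch of static covariates, past-observed series, and known future inputs as separate tensors. All names, shapes, and feature counts here are illustrative assumptions, not the paper's reference implementation.

```python
import torch

batch, past_len, horizon = 32, 168, 24  # hypothetical sizes

# Time-invariant metadata, e.g. a store or sensor identifier embedding.
static_covariates = torch.randn(batch, 3)
# Exogenous series observed only up to the forecast start.
past_observed = torch.randn(batch, past_len, 5)
# Inputs known over the whole window, e.g. calendar features or planned promotions.
known_future = torch.randn(batch, past_len + horizon, 2)

# A TFT-style model consumes all three and emits multi-horizon (quantile) forecasts:
# forecasts = tft(static_covariates, past_observed, known_future)  # (batch, horizon, n_quantiles)
```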
# 使用的方法
該論文提出了一種名為Temporal Fusion Transformer (TFT)的新型注意力模型,用於多時間預測。TFT使用循環層進行局部處理,使用可解釋的自注意層進行長期依賴關係的學習,並利用專門的組件選擇相關特徵和一系列閘控層抑制不必要的組件,以實現在各種實際情況下的高性能。該論文在多個真實世界數據集上展示了TFT相對於現有基準的顯著性能提升,並突出了TFT的三種實用的可解釋性應用案例。
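The gating layers are built from Gated Linear Units wrapped in Gated Residual Networks (GRNs). Below is a minimal PyTorch sketch of a GRN following the paper's equations, GRN(a, c) = LayerNorm(a + GLU(W1 ELU(W2 a + W3 c))); the class name, layer sizes, and the optional context argument are assumptions for illustration rather than the authors' code.

```python
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualNetwork(nn.Module):
    """Minimal GRN sketch. A near-zero gate lets the block collapse
    toward an identity mapping, which is how TFT suppresses components
    that a given dataset does not need."""

    def __init__(self, d_model: int):
        super().__init__()
        self.fc_input = nn.Linear(d_model, d_model)                # W2
        self.fc_context = nn.Linear(d_model, d_model, bias=False)  # W3, optional static context
        self.fc_hidden = nn.Linear(d_model, d_model)                # W1
        self.glu = nn.Linear(d_model, 2 * d_model)                  # W4/W5: gate and candidate jointly
        self.norm = nn.LayerNorm(d_model)

    def forward(self, a: torch.Tensor, c: Optional[torch.Tensor] = None) -> torch.Tensor:
        hidden = self.fc_input(a)
        if c is not None:
            hidden = hidden + self.fc_context(c)
        value, gate = self.glu(self.fc_hidden(F.elu(hidden))).chunk(2, dim=-1)
        return self.norm(a + torch.sigmoid(gate) * value)  # gated skip connection

# grn = GatedResidualNetwork(64)
# out = grn(torch.randn(32, 24, 64))  # same shape in and out
```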
# Results
Experiments on multiple real-world datasets show significant performance gains for TFT over existing benchmarks. The paper also highlights three practical interpretability use cases: analyzing globally important variables, visualizing persistent temporal patterns, and identifying significant regimes or events, which make the forecasts easier to audit and trust. The attention component behind the temporal-pattern analysis is sketched below.
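The temporal-pattern use case relies on TFT's interpretable multi-head attention, which shares the value projection across heads and averages the per-head attention weights, so a single attention matrix can be read off as importance over past time steps. The PyTorch sketch below illustrates that construction; names and dimensions are assumptions, and details such as decoder masking are omitted.

```python
import math
import torch
import torch.nn as nn

class InterpretableMultiHeadAttention(nn.Module):
    """Sketch of TFT-style attention: per-head queries/keys, a value
    projection shared across heads, and head-averaged attention weights
    that can be inspected directly as temporal importances."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        self.q_proj = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_heads))
        self.k_proj = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_heads))
        self.v_proj = nn.Linear(d_model, d_model)   # shared across heads
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, q, k, v):
        value = self.v_proj(v)
        scale = math.sqrt(q.size(-1))
        # One attention matrix per head, averaged so the weights stay interpretable.
        weights = torch.stack(
            [torch.softmax(qp(q) @ kp(k).transpose(-2, -1) / scale, dim=-1)
             for qp, kp in zip(self.q_proj, self.k_proj)]
        ).mean(dim=0)
        # Returning the weights lets a user plot attention over past time steps.
        return self.out_proj(weights @ value), weights  # weights: (batch, tgt_len, src_len)
```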
# Keywords
multi-horizon forecasting, interpretability, attention-based models, recurrent layers, self-attention layers