# [A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction](https://www.ijcai.org/Proceedings/2017/0366.pdf)

*Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence*

###### tags: `references`

Implementation: http://chandlerzuo.github.io/blog/2017/11/darnn

![](https://i.imgur.com/feU0W6u.png)

### **a. Encoder:**

Adaptively extracts the relevant driving series at each time step with an input attention mechanism that refers to the previous encoder hidden state.

![](https://i.imgur.com/tzCsJwe.png)
![](https://i.imgur.com/id1MPSj.png)

### **b. Decoder:**

Uses a temporal attention mechanism to select relevant encoder hidden states across all time steps.

![](https://i.imgur.com/UQyjv4W.png)
![](https://i.imgur.com/hBuF9Ks.png)
![](https://i.imgur.com/Yz80Ssc.png)

* Evaluation metrics: RMSE, MAPE
* On the NASDAQ 100 dataset, the best performance is obtained with window length T = 10 and encoder hidden size = decoder hidden size = 128.
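
The two attention stages above can be sketched in NumPy for a single time step. This is a minimal illustration, not the paper's implementation: all sizes, parameter names (`W_e`, `U_e`, `v_e`, `W_d`, `U_d`, `v_d`), and random values are assumptions, and the surrounding LSTM updates are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes (not from the paper): n driving series, window T,
# encoder hidden size m, decoder hidden size p.
n, T, m, p = 5, 10, 8, 6

# --- Stage 1: input attention in the encoder -------------------------
# Score each driving series k against the previous encoder states:
#   e_t^k = v_e^T tanh(W_e [h_{t-1}; s_{t-1}] + U_e x^k)
W_e = 0.1 * rng.standard_normal((T, 2 * m))
U_e = 0.1 * rng.standard_normal((T, T))
v_e = 0.1 * rng.standard_normal(T)

X = rng.standard_normal((n, T))   # n driving series over the window
h_prev = np.zeros(m)              # previous encoder hidden state
s_prev = np.zeros(m)              # previous encoder cell state

e = np.array([v_e @ np.tanh(W_e @ np.r_[h_prev, s_prev] + U_e @ X[k])
              for k in range(n)])
alpha = softmax(e)                # weights over driving series, sum to 1
x_tilde = alpha * X[:, 0]         # reweighted input fed to the encoder LSTM

# --- Stage 2: temporal attention in the decoder ----------------------
# Score each encoder hidden state h_i against the previous decoder states:
#   l_t^i = v_d^T tanh(W_d [d_{t-1}; s'_{t-1}] + U_d h_i)
H = rng.standard_normal((T, m))   # encoder hidden states h_1 .. h_T
W_d = 0.1 * rng.standard_normal((m, 2 * p))
U_d = 0.1 * rng.standard_normal((m, m))
v_d = 0.1 * rng.standard_normal(m)

d_prev = np.zeros(p)              # previous decoder hidden state
sp_prev = np.zeros(p)             # previous decoder cell state

l = np.array([v_d @ np.tanh(W_d @ np.r_[d_prev, sp_prev] + U_d @ H[i])
              for i in range(T)])
beta = softmax(l)                 # weights over the T time steps
c_t = beta @ H                    # context vector for the decoder, shape (m,)
```

The key structural point the sketch preserves: the first softmax runs over the n driving series (spatial attention on the inputs), while the second runs over the T time steps (temporal attention on the encoder states).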