# [A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction](https://www.ijcai.org/Proceedings/2017/0366.pdf)
*Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence*
###### tags: `references`
implementation: http://chandlerzuo.github.io/blog/2017/11/darnn

### **a. Encoder:**
Input attention mechanism: adaptively extracts the relevant driving (exogenous) series at each time step by referring to the previous encoder hidden state.
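A minimal NumPy sketch of this stage-one input attention, following the paper's formulation \(e_t^k = v_e^\top \tanh(W_e[h_{t-1}; s_{t-1}] + U_e x^k)\) with a softmax over the \(n\) driving series. Variable names and shapes here are illustrative assumptions, not taken from a reference implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def input_attention(x_t, X_hist, h_prev, s_prev, W_e, U_e, v_e):
    """Stage-1 (input) attention: weight each driving series at time t
    by referring to the previous encoder hidden/cell state.
    x_t:    (n,)    current values of the n driving series
    X_hist: (n, T)  history of each driving series (one row per series)
    h_prev, s_prev: (m,) previous encoder hidden and cell states
    W_e: (T, 2m), U_e: (T, T), v_e: (T,)  -- learnable parameters
    """
    hs = np.concatenate([h_prev, s_prev])          # [h_{t-1}; s_{t-1}]
    # e_t^k = v_e^T tanh(W_e [h; s] + U_e x^k), one score per series k
    scores = np.array([v_e @ np.tanh(W_e @ hs + U_e @ X_hist[k])
                       for k in range(X_hist.shape[0])])
    alpha = softmax(scores)                        # attention weights over series
    return alpha * x_t, alpha                      # attended input x~_t
```

The attended input `x~_t` then replaces `x_t` as the encoder LSTM's input at step `t`.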


### **b. Decoder:**
Temporal attention mechanism: selects relevant encoder hidden states across all time steps by referring to the previous decoder hidden state.
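A matching NumPy sketch of the stage-two temporal attention: each encoder hidden state \(h_i\) is scored against the previous decoder state, and the softmax-weighted sum gives the context vector \(c_t\). Again, names and shapes are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(H, d_prev, sp_prev, W_d, U_d, v_d):
    """Stage-2 (temporal) attention: weight the encoder hidden states
    across all T time steps by referring to the previous decoder state.
    H: (T, m)  encoder hidden states h_1..h_T
    d_prev, sp_prev: (p,) previous decoder hidden and cell states
    W_d: (m, 2p), U_d: (m, m), v_d: (m,)  -- learnable parameters
    """
    ds = np.concatenate([d_prev, sp_prev])         # [d_{t-1}; s'_{t-1}]
    # l_t^i = v_d^T tanh(W_d [d; s'] + U_d h_i), one score per time step i
    scores = np.array([v_d @ np.tanh(W_d @ ds + U_d @ h_i) for h_i in H])
    beta = softmax(scores)                         # weights over time steps
    context = beta @ H                             # c_t = sum_i beta_i h_i
    return context, beta
```

The context vector `c_t` is then combined with the previous target value to drive the decoder LSTM update and the final prediction.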



* Evaluation metrics: RMSE, MAPE
* On the NASDAQ 100 dataset, the best performance is achieved with window length T = 10 and encoder hidden size = decoder hidden size = 128.