Model Algorithms / AutoRegression
===

###### tags: `ML / Time Series`
###### tags: `ML`, `Time Series`, `Model Algorithms`, `AutoRegression`

<br>

[TOC]

<br>

## Sources
- ### [Time series forecasting with FEDOT. Guide](https://github.com/ITMO-NSS-team/fedot-examples/blob/main/notebooks/latest/3_intro_ts_forecasting.ipynb)

<br>

## The Algorithm
- Time series data: `[5,7,9,7,5,5,3,4,6,14,6,3,5]`
![](https://i.imgur.com/8HMKf6Y.png)
- Animation: from generating the training data to forecasting future values
![](https://github.com/ITMO-NSS-team/fedot-examples/raw/d72d57b20cc51a94079c4d0c744f81e4bc48e843/notebooks/jupyter_media/time_series/animation_forecast.gif)

<br>

## Implementation
### Code
```python=
import pandas
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor  # optional alternative regressor


class simple_autoregression:
    def __init__(self, window_size, model=LinearRegression):
        self.window_size = window_size
        self.model = model()
        self.lagged_df = None

    def train(self, df):
        ws = self.window_size

        # The first `window_size` rows of the lagged dataframe contain NaN
        # and will be dropped, so its shape is
        # (len(df) - window_size, window_size + 1).
        # At least `window_size` rows must remain so that one complete
        # lagged row is available for forecasting.
        if len(df) - ws < ws:
            raise ValueError('the size of the dataset is not enough for forecasting')

        # generate the lagged dataframe
        lagged_df = []
        for s in range(ws, -1, -1):
            lagged_df.append(df.shift(s))
        lagged_df = pandas.concat(lagged_df, axis=1)

        # fill in the column names
        X_columns = []
        for s in range(ws, 0, -1):
            X_columns.append('shift-' + str(s))
        y_column = ['target']
        lagged_df.columns = X_columns + y_column

        # drop the NaN rows (the first `window_size` rows)
        lagged_df.dropna(inplace=True)

        self.model.fit(lagged_df[X_columns], lagged_df[y_column])
        self.lagged_df = lagged_df
        return self

    def predict(self, next_steps=1):
        rows, cols = self.lagged_df.shape
        future_df = pandas.DataFrame(columns=self.lagged_df.columns)

        # predict the first step from the last `window_size` observed values
        X = self.lagged_df['target'][rows - cols + 1 : rows + 1].to_list()
        y = [self.model.predict([X]).ravel()[0]]
        future_df.loc[0] = X + y

        # predict the remaining steps recursively,
        # feeding each prediction back into the window
        for step in range(next_steps - 1):
            X = future_df.loc[step][1:].to_list()
            y = [self.model.predict([X]).ravel()[0]]
            future_df.loc[step + 1] = X + y
        return future_df

    def get_fitted_target(self):
        '''
        get the fitted y_train
        (used to measure the in-sample performance of the model)
        '''
        if self.lagged_df is None:
            raise Exception('Please call the train() API first.')

        feature_columns = list(self.lagged_df.columns)
        feature_columns.remove('target')

        fitted_target = self.model.predict(self.lagged_df[feature_columns])
        fitted_target = fitted_target.ravel()
        return fitted_target
```

<br>

<hr>

<br>

## Test: Monthly Air Passenger Forecasting
### Code
```python=
window_size = 24
ar = simple_autoregression(window_size, LinearRegression)

df_train = pandas.read_csv('air-passengers-train.csv', index_col=[0])
ar.train(df_train)
#display(ar.lagged_df)

next_steps = 28
df_forecast = ar.predict(next_steps)
#display(df_forecast)

print(f'Next {next_steps}-steps:',
      [round(y, 2) for y in df_forecast['target']])
```
- Output
```=
Next 28-steps: [436.01, 369.54, 326.94, 353.03, 362.01, 339.86, 385.33, 374.89, 399.95, 485.79, 553.09, 571.71, 496.52, 420.95, 376.18, 398.17, 403.3, 376.82, 415.63, 405.38, 434.47, 534.64, 612.65, 635.82, 551.5, 463.61, 414.1, 431.07]
```

<br>

### Performance Evaluation
```python=
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error

fitted_target = ar.get_fitted_target()
print('train: r2:', r2_score(ar.lagged_df.target, fitted_target))

df_test = pandas.read_csv('air-passengers-test.csv', index_col=[0])
print('test: r2:', r2_score(df_test['#Passengers'], df_forecast.target))
print('test: rmse:', mean_squared_error(df_test['#Passengers'], df_forecast.target,
                                        squared=False))
```
- Output
```=
train: r2: 0.9915987330708442
test: r2: 0.923178511195158
test: rmse: 21.76724589725269
```
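
<br>

The heart of `train()` is turning a 1-D series into a lagged feature matrix with `DataFrame.shift`: each row holds `window_size` past values plus the `target` that follows them. A minimal sketch of just that step, using a made-up toy series (the data below is illustrative, not from the datasets in this note):

```python
import pandas

# toy series (illustrative only)
df = pandas.DataFrame({'value': [5, 7, 9, 7, 5, 5, 3, 4, 6, 14]})
ws = 3  # window_size

# stack shifted copies side by side: shift-3, shift-2, shift-1, target
lagged = pandas.concat([df.shift(s) for s in range(ws, -1, -1)], axis=1)
lagged.columns = [f'shift-{s}' for s in range(ws, 0, -1)] + ['target']

# the first `ws` rows contain NaN and are dropped
lagged.dropna(inplace=True)
print(lagged)
```

Each remaining row is one supervised training example, so any scikit-learn regressor can be fitted on `lagged[['shift-3', 'shift-2', 'shift-1']]` against `lagged['target']`.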
<br>

<hr>

<br>

## Test: Synthetic Time Series
### Data Source
- [Simulated time series data](/7h4bK5EqRvSANplJSoq2Mg#模擬時間序列資料)

### Code
```python=
from sklearn.metrics import r2_score, mean_squared_error

df_train = pandas.DataFrame(train_data, columns=['value'])
df_metrics = pandas.DataFrame(columns=['window-size',
                                       'train-r2', 'train-rmse',
                                       'test-r2', 'test-rmse'])
metrics = None
for window_size in range(25, 600, 25):
    ar_model = simple_autoregression(window_size)
    ar_model.train(df_train)

    metrics = [window_size]

    # in-sample (training) metrics
    y_train = ar_model.get_fitted_target()
    y_true = train_data[-len(y_train):]
    score = r2_score(y_true, y_train)
    metrics.append(score)
    score = mean_squared_error(y_true, y_train, squared=False)
    metrics.append(score)

    # out-of-sample (test) metrics
    y_pred = ar_model.predict(next_steps=100).target
    y_true = test_data
    score = r2_score(y_true, y_pred)
    metrics.append(score)
    score = mean_squared_error(y_true, y_pred, squared=False)
    metrics.append(score)

    df_metrics.loc[len(df_metrics)] = metrics

display(df_metrics)
```
- Output
![](https://i.imgur.com/WDwAo7C.png)

<br>

### Performance Evaluation
- window-size = 525 gives the best in-sample fit (train r2 = 0.992840)
- window-size = 350 gives the best test performance

### Forecast with window-size = 525:
```
ar_525 = [-0.564139808565589, -0.7608801559257944, -0.902611737869864, -0.8517697458236834,
-0.8641404510752494, -0.949944249817342, -0.962676419709535, -1.0055782908898792,
-1.007211805860501, -1.0235850300419835, -1.15494902398216, -1.1719211760978732,
-1.2677455421939963, -1.1637006281447246, -1.2222538088899886, -1.2584202499668822,
-1.2791403854598398, -1.2500774975518767, -1.3386431563448322, -1.3098274485392927,
-1.450853920938774, -1.4874908474002833, -1.31688223129555, -1.5547006674911121,
-1.3518209071539857, -1.6113636200184156, -1.5897941011346925, -1.7366983062527823,
-1.5383322030187665, -1.7384735718215418, -1.681672456325635, -1.7226156674293054,
-1.7286592076487695, -1.7667902405901248, -1.7422905696978315, -1.8236094532985907,
-1.8994433604345766, -1.6929714771963698, -1.844734435509336, -1.7968920854304586,
-1.9844369806418898, -1.7645220171519946, -1.9248730911602745, -1.997277686327893,
-2.0167777522197405, -1.8341822586739278, -1.9255398739275598, -1.993045464666058,
-2.1253004965010573, -1.9438405986225646, -1.9217881532885057, -2.0752130138327822,
-2.032685440538597, -1.9430766154960317, -2.1251592055088864, -1.9744134445199455,
-1.9496752634398198, -2.0938102777224623, -1.950453953646385, -2.0764059362627885,
-1.9059163842755413, -1.8510767053599189, -2.0054288266072726, -1.965201040376339,
-1.9067382612947046, -1.9711594225169942, -1.8709629869232083, -2.002133705624956,
-1.9383917180575736, -1.9176184470735262, -1.8628419884980618, -1.930576852897873,
-2.001045132922131, -1.7624076532930046, -1.7137470865379307, -1.7936226012077614,
-1.8320102296032441, -1.8314569360843385, -1.6577393448432043, -1.754497513482834,
-1.7480097524855804, -1.7514712772063101, -1.5379927578709418, -1.5018360653374108,
-1.6465206026780292, -1.5289554840872117, -1.5955697307979415, -1.5232572257411472,
-1.515250000600795, -1.4649251183156364, -1.5247251562350748, -1.6551058271764194,
-1.4265602449989714, -1.3567340208384968, -1.3639097614878093, -1.3130231897983349,
-1.2711513885072205, -1.3294816581572535, -1.2876800722317545, -1.2405035662130746]
```
- test
    - R2: 0.817341
    - MSE: 0.024158
    - RMSE: 0.155429

### Forecast with window-size = 350:
```
ar_350 = [-0.6647629238296976, -0.7787093954496396, -0.9100762543017857, -0.8713925785133143,
-0.8930092664799648, -0.8610066214704813, -0.9063982989524706, -0.9631980619375495,
-1.02558770457494, -1.017193389161309, -1.0569346700529454, -1.1220204506554883,
-1.2193818748130743, -1.1821102667438652, -1.1914114565490894, -1.2361287933736287,
-1.2169237402405253, -1.2480491045732702, -1.2810638722221153, -1.2823283726027255,
-1.4013764452348059, -1.4777683923178697, -1.3403831950186291, -1.4613882185539606,
-1.4141760343219891, -1.5916118928242797, -1.554169515338598, -1.5495097867260923,
-1.5187713155233353, -1.6502123414310859, -1.6443364197220867, -1.6274560934238043,
-1.7443222193872512, -1.6923474562158105, -1.6796639656844732, -1.7195574063498809,
-1.8112570129660541, -1.787476394271645, -1.7666599106439045, -1.7855168976439268,
-1.9521326033430548, -1.7541114279551566, -1.8664488788020264, -1.93290511791524,
-1.873711386351126, -1.8007350643229254, -1.8371905456220734, -1.891415987994455,
-1.9770612989330376, -1.8724271527224838, -1.9211664827330601, -1.85733632841089,
-1.8999888831913634, -1.8904750559777468, -2.0448699814281692, -1.8656819044008095,
-1.8823906930794085, -1.9451046811744819, -1.985220649162245, -1.9694680974632897,
-1.8238428657979742, -1.8343104933948875, -1.8728387098639945, -1.8810678647309447,
-1.8122000508566563, -1.915022288859247, -1.7932816377659424, -1.8190289580851609,
-1.8201168970870143, -1.8390660646221462, -1.7711148825260672, -1.8301633237002197,
-1.759938053819957, -1.6882714276969741, -1.6291662262389528, -1.6840910188851455,
-1.6780149853481812, -1.6759049845641905, -1.582920335862519, -1.6243845035803264,
-1.6020395600363226, -1.6183302860296986, -1.5106296596485238, -1.4472571367947882,
-1.5354031337677583, -1.388649922398054, -1.4926138125460768, -1.3902600036837605,
-1.3746260700020871, -1.3339910100790886, -1.4492400041411726, -1.4379645773112555,
-1.253945419040938, -1.244122305134214, -1.2741531654919493, -1.1921061788652916,
-1.1243133808044699, -1.1745650982802056, -1.136389543001667, -1.0894773323660474]
```
- test
    - R2: 0.911244
    - MSE: 0.011739
    - RMSE: 0.108345

### Code for the window-size = XXX forecasts:
```python=
window_size = 525
ar_model = simple_autoregression(window_size)
ar_model.train(df_train)

y_pred = ar_model.predict(next_steps=100).target
y_true = test_data

print('test:')
print(' - R2:', round(r2_score(y_true, y_pred), 6))
print(' - MSE:', round(mean_squared_error(y_true, y_pred, squared=True), 6))
print(' - RMSE:', round(mean_squared_error(y_true, y_pred, squared=False), 6))

print(list(y_pred))
```
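
<br>

The multi-step forecasts above are produced recursively: each predicted value is appended to the input window and fed back in for the next step, so errors can compound as the horizon grows. A minimal standalone sketch of the same recursion on a toy linear series (all names and data here are illustrative, not from the datasets above):

```python
import numpy
from sklearn.linear_model import LinearRegression

# toy linear series, which an AR model can extrapolate exactly
series = numpy.arange(20, dtype=float)
ws = 4  # window size

# build the lagged feature matrix and target vector
X = numpy.array([series[i:i + ws] for i in range(len(series) - ws)])
y = series[ws:]
model = LinearRegression().fit(X, y)

# recursive multi-step forecast: feed each prediction back into the window
window = list(series[-ws:])
forecast = []
for _ in range(5):
    y_hat = model.predict([window])[0]
    forecast.append(y_hat)
    window = window[1:] + [y_hat]  # slide the window forward by one step
print([round(v, 2) for v in forecast])
```

On this noiseless linear series the recursion extrapolates the trend, but on real data each fed-back prediction carries its own error, which is one reason the test metrics above degrade as the window size and horizon interact.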