# Transfer Learning

Transfer learning is a well-known method that first trains a base knowledge model, then trains the desired model using the base model as its starting point. In this project, the method is applied to a relatively large database to learn general chess knowledge first. The base model is trained on 10k chess games played by players with Elo ratings above 2300.

* **Performance**

```
Pick Prediction :
    Top 1 Accuracy : 50%
    Top 3 Accuracy : 81%
    Top 5 Accuracy : 91%

Move Prediction :
    Top 1 Accuracy : 36%
    Top 3 Accuracy : 58%
    Top 5 Accuracy : 67%

Combined Matching Accuracy :
    Top 1 Accuracy : 32%
    Top 3 Accuracy : 49%
    Top 5 Accuracy : 59%
```

The base model has low accuracy at each step and on the final combination. This is expected, because the database contains many different players, each of whom may make a different move in the same position.

After preparing the base model, we train the model on the dataset of a player who is not included in the base database (a sketch of this fine-tuning step appears at the end of this section). The detailed accuracies are not listed here, but across experiments on 6 individual players, applying transfer learning improves performance by about 15% ~ 20% on the two prediction steps and on the final combination. The scatter plot of our experiment is shown below.

![](https://i.imgur.com/8vqOisq.png)

*Each shape represents an individual player; red represents Pick, blue the final combination, and green Move.*

There are some interesting phenomena in the experimental results. First, the size of the dataset is a positive factor only for Pick prediction. This is because the number of possible choices for Move and the final combination is much larger than for Pick, so training on more data also introduces more noise. Second, when the size of the dataset is fixed, there is an obvious individual difference. Imitation performance depends heavily on the individual player: if a chess player likes to try various strategies and openings, their decisions are harder to capture.
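
The fine-tuning step referenced above boils down to: load the weights of the base model trained on the 2300+ Elo games, optionally freeze the shared layers, and continue training on the target player's games with a small learning rate. The sketch below is a minimal illustration of that idea in PyTorch; the network class `ChessImitationNet`, its layer names, the board encoding size, and the checkpoint file are hypothetical placeholders, not the project's actual code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the project's network: a shared board encoder
# followed by a Pick head and a Move head (64 squares each).
class ChessImitationNet(nn.Module):
    def __init__(self, board_features=768, hidden=512, n_squares=64):
        super().__init__()
        self.board_encoder = nn.Sequential(
            nn.Linear(board_features, hidden),
            nn.ReLU(),
        )
        self.pick_head = nn.Linear(hidden, n_squares)  # which piece to pick up
        self.move_head = nn.Linear(hidden, n_squares)  # where to put it down

    def forward(self, boards):
        h = self.board_encoder(boards)
        return self.pick_head(h), self.move_head(h)


def fine_tune(model, player_loader, epochs=10, lr=1e-4, freeze_encoder=True):
    """Continue training a pretrained base model on one player's games."""
    # Optionally freeze the shared encoder so that only the prediction
    # heads adapt to the individual player's style.
    if freeze_encoder:
        for p in model.board_encoder.parameters():
            p.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for boards, pick_targets, move_targets in player_loader:
            optimizer.zero_grad()
            pick_logits, move_logits = model(boards)
            loss = criterion(pick_logits, pick_targets) + \
                   criterion(move_logits, move_targets)
            loss.backward()
            optimizer.step()
    return model


# Usage sketch (hypothetical file name and dataset):
# base = ChessImitationNet()
# base.load_state_dict(torch.load("base_model_10k_games.pt"))
# player_loader = torch.utils.data.DataLoader(player_dataset,
#                                             batch_size=64, shuffle=True)
# fine_tune(base, player_loader)
```

Freezing the encoder is only one option; keeping all weights trainable with a lower learning rate is the other common choice when the individual player's dataset is small.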
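
For reference, the Top-1/3/5 numbers reported above are top-k accuracies: a prediction counts as a hit when the true class is among the model's k highest-scoring outputs. The sketch below shows one way to compute them with NumPy; treating the combined figure as "both the true pick and the true move fall inside their respective top-k lists" is an assumption about the matching rule, not something stated in this section.

```python
import numpy as np

def topk_hit(scores: np.ndarray, target: int, k: int) -> bool:
    """True if `target` is among the k highest-scoring classes."""
    return target in np.argsort(scores)[::-1][:k]

def evaluate(pick_scores, pick_targets, move_scores, move_targets, ks=(1, 3, 5)):
    """Print Pick, Move, and combined top-k accuracy over a test set."""
    for k in ks:
        pick_acc = np.mean([topk_hit(s, t, k)
                            for s, t in zip(pick_scores, pick_targets)])
        move_acc = np.mean([topk_hit(s, t, k)
                            for s, t in zip(move_scores, move_targets)])
        # Assumed combined rule: pick AND move are both inside their top-k lists.
        both_acc = np.mean([topk_hit(ps, pt, k) and topk_hit(ms, mt, k)
                            for ps, pt, ms, mt in zip(pick_scores, pick_targets,
                                                      move_scores, move_targets)])
        print(f"Top {k}:  Pick {pick_acc:.0%}  Move {move_acc:.0%}  "
              f"Combined {both_acc:.0%}")
```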