# Parallel Monte-Carlo Tree Search Final Report
[TOC]
## Links
[slides](https://docs.google.com/presentation/d/11DV9MijsDh2fgR8W5I8kdCT4u630Bibq6ZfMPpffD6U/edit#slide=id.p)
[source code](https://github.com/CyCTW/PP-Final_Project)
## The participant(s)
<pre style="background-color: lightblue;">0516097 Chieh Ming Jiang 蔣傑名</pre>
<pre style="background-color: lightyellow;">0616225 Cheng Yuan Chang 張承遠</pre>
<pre style="background-color: lightgreen;">309555025 Wen Sheng Lo 羅文笙</pre>
## Abstract
> summarize your contribution in 100 words or less. An informed reader should be able to stop at the abstract and know roughly what you are doing.
Monte-Carlo Tree Search (MCTS) is a search algorithm for decision processes. It is widely used in many kinds of games, especially board games such as Go, Gomoku, and Othello. Parallelizing MCTS is a straightforward way to increase the strength of a program; however, MCTS is hard to parallelize perfectly because it consists of a series of sequential functions. After surveying existing approaches, we decided to implement leaf, root, and tree parallelization of the Monte-Carlo Tree Search algorithm. We analyze the pros and cons of the three methods and show the results.
## Introduction:
> background on the current state-of-the-art, why your topic is important, and what is the motivation for your work
Monte Carlo Tree Search (MCTS) is an important algorithm for Reinforcement Learning (RL). The popular [Alpha-Zero](https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go) algorithm builds on it.
We had already implemented a serial version of MCTS in a college project, and we wanted to exploit the parallelism hidden in it. In short, our goal is to parallelize the MCTS algorithm so that board-game programs can reach greater strength in the same amount of time.
## Proposed Solution:
>detailed description, but not code!
![](https://i.imgur.com/5HzhSks.png)
### Block diagram of root parallelization
![](https://i.imgur.com/hx06lX5.png)
There are five components in our expected block diagram. First, the current board indicates the input board state; we want to know which next step is the best one in this state. Second, select, expand, simulate, and backpropagate are the four main functions that comprise the MCTS algorithm. There are several ways to exploit parallelism in the MCTS algorithm, which we explain below.
Here are three methods we use to parallelize MCTS.
1. [Leaf parallelization (Fig.A)](#Block-diagram-of-leaf-parallelization)
Leaf parallelization is the most basic way to parallelize MCTS. The parallelism lies in the simulation step: each thread runs one simulation from the same leaf, and the simulation results are combined before being passed to the backpropagation function.
2. [Root Parallelization (Fig.B)](#Block-diagram-of-root-parallelization)
Root parallelization exploits parallelism across the whole MCTS algorithm. Each thread runs a complete MCTS search on its own tree, and the search results of all threads are combined to make the final decision.
3. [Tree Parallelization (Fig.C)](#Block-diagram-of-tree-parallelization)
Tree parallelization uses a shared tree in which several threads run MCTS simultaneously. As a consequence, race conditions may occur during backpropagation; we use atomic operations to avoid this problem. Moreover, so that different threads do not traverse the same nodes and waste computing resources, we apply a virtual loss: a negative value is added to a node when a thread visits it and removed again during backpropagation.
Here is a brief summary of the pros and cons of the three methods.
| |Pros |Cons |
|--------------------|-----------------------------------|--------|
|Leaf parallelization|Easy to implement |Faster threads must wait for the slowest simulation|
|Root parallelization|Greatest parallelism |Each tree searches similar patterns|
|Tree parallelization|Every thread searches the same tree|Race conditions|
## Experimental Methodology:
> tests, input sets, environment, etc.
- Environments
- CPU : [Intel® Xeon® Gold 5118 CPU @ 2.30GHz](https://ark.intel.com/content/www/us/en/ark/products/120473/intel-xeon-gold-5118-processor-16-5m-cache-2-30-ghz.html)
- OS: Ubuntu 20.04.1 LTS
- Memory: 96 GB
- Applied game: [Surakarta](https://en.wikipedia.org/wiki/Surakarta_(game))
- Testing target
- Speedup
- Games Per Second (GPS): total simulation count over 100 games / total time
- Speedup = parallel GPS / serial GPS
- Winrate
- The gap between our parallel MCTS methods and a perfectly parallelized MCTS.
- e.g., parallel method (N threads, 1 second) vs. serial method (1 thread, N seconds)
- The optimal winrate is 0.5, i.e., the parallel method is perfectly parallelized.
## Experimental Results:
> Quantitative data and analysis!
We implemented both Pthread and OpenMP versions of root and tree parallelization, so we first compare the speedup of the two versions. In the following figure, there is not much difference between the Pthread and OpenMP versions, so we use only the OpenMP versions of root and tree parallelization in the following analysis.
![Imgur](https://i.imgur.com/ItIRy2u.jpg)
- Speedup
- ![Imgur](https://i.imgur.com/LheIki8.png)
- Root parallelization: Root parallelization has the best speedup and scalability because it creates one tree per thread to parallelize the MCTS algorithm. Moreover, this method needs no communication between threads and has no race conditions. For these reasons, root parallelization performs best in speedup.
- Leaf parallelization: Leaf parallelization has a fatal defect in speedup. It uses all of the threads in the simulation stage, so it must wait for the slowest thread to finish its simulation before backpropagating the values to the root of the tree. For this reason its speedup is poor, and its scalability is bad as well.
- Tree parallelization: Tree parallelization sits in the middle of the three methods. Although it may encounter race conditions, its speedup is still better than leaf parallelization's. However, the lock overhead gives it poor scalability.
- Winrate
- Reason:
**The GPS speedup measure might be misleading**, since a faster program is not always a stronger one. For example, in root parallelization each tree may search similar patterns, so many of the extra simulations are redundant: GPS rises, but the quality of the final decision does not improve proportionally.
- Experiment Result:
We propose a method that can evaluate the **difference between perfect parallelization and real parallelization:**
Parallel method (N threads, 1 second) vs. serial method (1 thread, N seconds)
### 4-thread and 16-thread winrate
![Imgur](https://i.imgur.com/zrECmUl.jpg)
From the figure above, we find that as the number of threads increases, the gap between each parallel method and perfect parallelization grows. With more threads, leaf parallelization must wait longer for the slowest simulation, root parallelization is more likely to traverse the same nodes across trees, and tree parallelization suffers higher lock overhead.
In the 4-thread figure, we can observe that there is not much difference between [the three methods](#Proposed-Solution), so we ran the experiment again with 16 threads.
Finally, both figures show that root and tree parallelization perform well. Root parallelization creates one tree per thread, so it performs well in both speedup and winrate. As for tree parallelization, although its locks and atomic operations result in poorer speedup, the virtual loss prevents it from traversing similar nodes. Both tree and root parallelization are therefore good ways to parallelize MCTS.
## Related work
> Relate your work to research by others. Any time you mention some other work, compare or contrast it to your own.
The paper[1] applies parallel MCTS to Go and uses Elo rating to denote the strength of a program, measuring winrate against a fixed baseline opponent. However, no such baseline program is available for Surakarta, so we compare only the speedup.
![Imgur](https://i.imgur.com/aQLVigY.jpg)
From the figures above, the paper's[1] results are a little better than ours for leaf and tree parallelization, and much better than ours for root parallelization. We suggest three reasons.
### Analysis
1. Environment
- The paper's[1] experiments were performed on the Huygens supercomputer, which has 120 nodes, each with 16 POWER5 cores running at 1.9 GHz and 64 GB of memory per node. The hardware difference probably contributes most of the gap.
2. The games are different
- The paper[1] applies parallel MCTS to Go, while we apply it to Surakarta. The game rules differ, and the average game length of the two games differs as well. For these reasons, the efficiency of parallelization can differ.
3. MCTS efficiency
- We did not optimize our MCTS implementation, so its baseline speed may be slower than the paper's.
## Conclusions
> Highlight the important points of your analysis and contribution. Also, give prospects for future research on this or related topics.
In conclusion, we implemented three parallelization methods *(leaf, root, and tree parallelization)* and applied them to our Surakarta game. We gained better performance, and our experimental results are similar to the paper's[1]. **Both root and tree parallelization are good choices for MCTS parallelization.**
In the future, we want to explore more parallelization methods and try to develop a new one.
## References
1. [G. Chaslot, M. Winands, and J. van den Herik, "Parallel Monte-Carlo Tree Search," in Proceedings of the 6th International Conference on Computers and Games, vol. 5131. Springer Berlin Heidelberg, 2008, pp. 60–71.](https://dke.maastrichtuniversity.nl/m.winands/documents/multithreadedMCTS2.pdf)
2. [S. A. Mirsoleimani, A. Plaat, J. van den Herik, and J. Vermaseren, "A New Method for Parallel Monte Carlo Tree Search."](https://arxiv.org/abs/1605.04447)
3. [C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton, "A Survey of Monte Carlo Tree Search Methods."](https://ieeexplore.ieee.org/document/6145622)
4. [T.-F. Liao and I-C. Wu, "Software Framework for Parallel Monte Carlo Tree Search."](https://ir.nctu.edu.tw/bitstream/11536/73308/1/600101.pdf)
5. [S. A. Mirsoleimani, J. van den Herik, A. Plaat, and J. Vermaseren, "A Lock-free Algorithm for Parallel MCTS."](http://liacs.leidenuniv.nl/~plaata1/papers/paper_ICAART18.pdf)
6. [S. A. Mirsoleimani, A. Plaat, J. van den Herik, and J. Vermaseren, "Structured Parallel Programming for Monte Carlo Tree Search."](https://arxiv.org/abs/1704.00325)
7. [MCTS Introduction (Wikipedia)](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search)
7. [MCTS Introduction](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search)