Parallel Monte-Carlo Tree Search Final Report
Links
slides
source code
The participant(s)
Abstract
Monte-Carlo Tree Search (MCTS) is a search algorithm for decision processes. It is widely used in many kinds of games, especially board games such as Go, Gomoku, and Othello. Parallelizing MCTS is a straightforward way to increase the strength of a program; however, perfect parallelization is hard because the algorithm consists of a series of sequential steps. After some research, we decided to implement Leaf, Root, and Tree parallelization of the Monte-Carlo Tree Search algorithm. We analyze the pros and cons of the three methods and show the results.
Introduction:
Monte Carlo Tree Search (MCTS) is an important algorithm for Reinforcement Learning (RL). The popular AlphaZero algorithm builds on it.
We implemented MCTS in a college project, but only as a serial program, and we now want to exploit the parallelism in MCTS. In short, we want to parallelize the MCTS algorithm so that board game programs become stronger in less time.
Proposed Solution:
Block diagram of root parallelization
There are five components in our block diagram. First, the Current board is the input board state for which we want to find the best next move. Second, select, expand, simulate, and backpropagate are the four main functions that comprise the MCTS algorithm. There are several ways to exploit parallelism in MCTS, which we explain below.
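As an illustration, the selection phase is usually driven by the standard UCT rule, which balances a node's win rate against an exploration bonus. The sketch below is a hypothetical minimal version, not our actual implementation; the `Node` struct and the exploration constant `c` are illustrative assumptions.

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Hypothetical node statistics; the names are illustrative only.
struct Node {
    int visits = 0;
    double wins = 0.0;
    std::vector<Node*> children;
};

// UCT score used in the selection phase: exploitation (win rate)
// plus an exploration term that shrinks as the child is visited more.
double uct(const Node& child, int parent_visits, double c = 1.414) {
    if (child.visits == 0)  // unvisited children are explored first
        return std::numeric_limits<double>::infinity();
    return child.wins / child.visits
         + c * std::sqrt(std::log(double(parent_visits)) / child.visits);
}

// Selection: descend to the child maximizing the UCT score.
Node* select_child(Node& parent) {
    Node* best = nullptr;
    double best_score = 0.0;
    for (Node* ch : parent.children) {
        double s = uct(*ch, parent.visits);
        if (best == nullptr || s > best_score) { best = ch; best_score = s; }
    }
    return best;
}
```

Expansion, simulation, and backpropagation then run from the selected leaf; the parallelization methods below differ in which of these steps they distribute across threads.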
Here are three methods we use to parallelize MCTS.
Leaf parallelization (Fig.A)
Leaf parallelization is the most fundamental way to implement parallel MCTS. Parallelism is applied only in the simulation function: each thread runs one simulation from the selected leaf, and the simulation results are combined before being passed to the backpropagation function.
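A minimal sketch of this idea, assuming a stub `random_playout` (a real playout would play random moves from the leaf position to the end of the game):

```cpp
#include <numeric>
#include <random>
#include <thread>
#include <vector>

// Hypothetical playout: returns 1 for a win, 0 for a loss.
int random_playout(unsigned seed) {
    std::mt19937 rng(seed);
    return std::uniform_int_distribution<int>(0, 1)(rng);
}

// Leaf parallelization: one thread per simulation from the same leaf;
// the results are summed and backpropagated once.
int parallel_simulations(int n_threads, unsigned base_seed) {
    std::vector<int> results(n_threads);
    std::vector<std::thread> workers;
    for (int i = 0; i < n_threads; ++i)
        workers.emplace_back([&, i] { results[i] = random_playout(base_seed + i); });
    for (auto& t : workers) t.join();
    // Total number of wins out of n_threads simulations.
    return std::accumulate(results.begin(), results.end(), 0);
}
```

Note that the combining step must wait for every worker, so the whole batch is only as fast as its slowest simulation.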
Root Parallelization (Fig.B)
Root parallelization exploits parallelism across the whole MCTS algorithm: each thread runs a complete, independent MCTS search, and the search results of all threads are combined at the root to make the final decision.
Tree Parallelization (Fig.C)
Tree parallelization uses a single shared tree that several threads search simultaneously. As a consequence, race conditions can occur during backpropagation; we use atomic operations to avoid this problem. Moreover, so that different threads do not all traverse the same nodes, we apply a virtual loss to avoid wasting computing resources: a node is given a temporary negative value when it is visited, and the penalty is removed during backpropagation.
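The virtual-loss bookkeeping can be sketched with atomic counters; the `SharedNode` struct and the penalty weight `kVirtualLoss` below are illustrative assumptions, not our actual code.

```cpp
#include <atomic>

// Shared-tree node statistics; atomics let many threads update
// the same node without taking a lock.
struct SharedNode {
    std::atomic<int> visits{0};
    std::atomic<int> wins{0};
    std::atomic<int> virtual_loss{0};
};

constexpr int kVirtualLoss = 3;  // assumed penalty weight

// On the way down: count the visit and apply a virtual loss so that
// concurrent threads see this path as temporarily less attractive.
void on_select(SharedNode& n) {
    n.visits.fetch_add(1, std::memory_order_relaxed);
    n.virtual_loss.fetch_add(kVirtualLoss, std::memory_order_relaxed);
}

// On the way back up: record the result and remove the virtual loss.
void on_backpropagate(SharedNode& n, int win) {
    n.wins.fetch_add(win, std::memory_order_relaxed);
    n.virtual_loss.fetch_sub(kVirtualLoss, std::memory_order_relaxed);
}

// Value seen during selection, with the temporary penalty applied:
// pending visits inflate the denominator and lower the win rate.
double effective_value(const SharedNode& n) {
    int v = n.visits.load(), w = n.wins.load(), vl = n.virtual_loss.load();
    if (v + vl == 0) return 0.0;
    return double(w) / double(v + vl);
}
```

Once backpropagation removes the penalty, the node's statistics return to their true values, so the virtual loss only diverts threads that arrive while a simulation is still in flight.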
Here is a simple comparison of the pros and cons of the three methods.
Experimental Methodology:
Experimental Results:
We implemented both Pthread and OpenMP versions of Root and Tree parallelization, so let us first compare the speedups of the two versions. In the following figures, there is not much difference between the Pthread and OpenMP versions, so we use only the OpenMP versions of Root and Tree parallelization in the remaining analysis.
Speedup
Winrate
The GPS speedup measure might be misleading, since a faster program is not always a stronger one. For example… TODO
We propose a method to evaluate the gap between perfect parallelization and real parallelization:
Parallel method (N threads, 1 second) vs. Serial method (1 thread, N seconds)
Win rate with 4 threads and 16 threads
From the figure above, we find that as the number of threads increases, the gap between each parallelized method and perfect parallelization grows. With more threads, leaf parallelization has to wait for the longest simulation, root parallelization is more likely to traverse the same nodes, and tree parallelization incurs higher lock overhead.
In the 4-thread figure, there is not much difference between the three methods, so we ran the experiment again with 16 threads.
Finally, in both figures root and tree parallelization perform well. Root parallelization builds # THREADS independent trees simultaneously, so it performs well in both speedup and win rate. Tree parallelization, although its use of locks and atomic operations results in poorer speedup, avoids traversing similar nodes thanks to the virtual loss. So both tree and root parallelization are good methods to parallelize MCTS.
Related work
The paper [1] applies parallel MCTS methods to Go and uses Elo rating to denote the strength of a program, measuring win rate against a fixed baseline opponent. However, no such baseline program is available for the Surakarta game, so we compare only the speedup.

From the figures above, the paper's [1] results are a little better than ours for leaf and tree parallelization, and much better for root parallelization. We come up with three possible reasons.
Analysis
Conclusions
In conclusion, we implemented three parallelization methods (leaf, root, and tree parallelization) and applied them to our Surakarta game program. We gained better performance, and our experimental results are similar to the paper's [1]. Both root and tree parallelization work well for parallelizing MCTS.
In the future, we want to explore more parallelization methods and try to develop a new one.
References