# NYCU-course Game Theory HW1
### Experiment Setting
- Each question is run for 10000 fictitious plays
- Each play has 10000 iterations
- The initial strategy for each player is randomly initialized
- The graphs show the result of each play
#### A1.
- All plays converge to $(r_2,c_2)$.


#### A2.
- It converges to either $(r_2,c_2)$ or $(r_1,c_1)$, depending on the initial belief.

#### A3.
- It will always converge to $(r_1,c_1)$.

#### A4.
- It will always converge to $P(r_1)=\frac{4}{5}$, $P(r_2)=\frac{1}{5}$ for Player $1$ and $P(c_1)=\frac{1}{2}$, $P(c_2)=\frac{1}{2}$ for Player $2$.


#### A5.
- It will always converge to $P(r_1)=\frac{1}{2}$, $P(r_2)=\frac{1}{2}$ for Player $1$ and $P(c_1)=\frac{1}{2}$, $P(c_2)=\frac{1}{2}$ for Player $2$.

#### A6.
- It converges to either $(r_2,c_2)$ or $(r_1,c_1)$, depending on the initial belief.

#### A7.
- It converges to either $(r_1,c_2)$ or $(r_2,c_1)$, depending on the initial belief.


#### A8.
- It converges to either $(r_1,c_1)$ or $(r_2,c_2)$, depending on the initial belief.

#### A9.
- It converges to either $(r_1,c_1)$ or $(r_2,c_2)$, depending on the initial belief.

#### A10.
According to the above observations, fictitious play is a useful tool in many situations, but it is not guaranteed to converge to a Nash equilibrium for every game matrix; its reliability depends on the specific structure of the game being analyzed. For example, in the Matching Pennies game below, neither player has a dominant strategy and the only Nash equilibrium is in mixed strategies. Under fictitious play the pure actions never settle: each player keeps switching to counter the opponent's history, so the joint play cycles instead of converging to a pure-strategy profile.
| | $H$ | $T$ |
|:---------:|:-----:|:-----:|
| **$H$** | (1,-1) | (-1,1) |
| **$T$** | (-1,1) | (1,-1) |
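The cycling behavior on Matching Pennies can be reproduced with a minimal sketch of fictitious play (this is an illustrative reimplementation, not the submitted source code; the random 0/1 initial belief follows the setup described in the Source code section, resampled only to avoid a zero normalizing sum):

```python
import numpy as np

# Row player's payoff matrix for Matching Pennies; the column player's
# payoffs are the negation, so the game is zero-sum.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

rng = np.random.default_rng(0)

def random_belief():
    """Random 0/1 count vector, resampled so at least one entry is nonzero."""
    b = rng.integers(0, 2, size=2).astype(float)
    while b.sum() == 0:
        b = rng.integers(0, 2, size=2).astype(float)
    return b

row_belief = random_belief()   # row player's counts of observed column actions
col_belief = random_belief()   # column player's counts of observed row actions

for _ in range(10_000):
    # Expected payoff of each action against the opponent's empirical mix.
    row_ev = A @ (row_belief / row_belief.sum())
    col_ev = -A.T @ (col_belief / col_belief.sum())
    # Best respond, breaking ties uniformly at random.
    r = rng.choice(np.flatnonzero(np.isclose(row_ev, row_ev.max())))
    c = rng.choice(np.flatnonzero(np.isclose(col_ev, col_ev.max())))
    row_belief[c] += 1
    col_belief[r] += 1

# The pure actions keep cycling, while the empirical frequencies drift
# toward the mixed equilibrium (1/2, 1/2).
col_freq = row_belief / row_belief.sum()   # column player's empirical mix
row_freq = col_belief / col_belief.sum()   # row player's empirical mix
print(row_freq, col_freq)
```

Note the zero-sum subtlety: although no single iteration settles on a pure profile, the empirical action frequencies still approach $(\frac{1}{2},\frac{1}{2})$ for both players.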
#### Source code
- Setting of the initial belief
  - Randomly generate an array whose elements are either 0 or 1.
- Play algorithm
  - Compute each player's expected value for every action according to the opponent's strategy history.
  - Select the strategy with the higher expected value; if the values are tied, randomly pick one of the best responses.
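The steps above can be sketched as a single reusable routine (a hedged reconstruction of the described procedure, not the submitted code; the coordination-game payoff matrices at the bottom are illustrative stand-ins, since the homework's actual matrices are not reproduced here):

```python
import numpy as np

def fictitious_play(A, B, iters=10_000, rng=None):
    """One fictitious-play run for a bimatrix game.

    A[r, c] is the row player's payoff, B[r, c] the column player's.
    Returns each player's empirical strategy after `iters` iterations.
    """
    if rng is None:
        rng = np.random.default_rng()

    def init_belief(n):
        # Initial belief: a random 0/1 count array, resampled so the
        # normalizing sum is never zero.
        b = rng.integers(0, 2, size=n).astype(float)
        while b.sum() == 0:
            b = rng.integers(0, 2, size=n).astype(float)
        return b

    row_belief = init_belief(A.shape[1])  # counts of column actions seen
    col_belief = init_belief(A.shape[0])  # counts of row actions seen

    for _ in range(iters):
        # Expected value of each action against the opponent's history.
        row_ev = A @ (row_belief / row_belief.sum())
        col_ev = B.T @ (col_belief / col_belief.sum())
        # Best response; ties are broken uniformly at random.
        r = rng.choice(np.flatnonzero(np.isclose(row_ev, row_ev.max())))
        c = rng.choice(np.flatnonzero(np.isclose(col_ev, col_ev.max())))
        row_belief[c] += 1
        col_belief[r] += 1

    # Empirical strategies: row player's from the column player's counts,
    # and vice versa.
    return col_belief / col_belief.sum(), row_belief / row_belief.sum()

# Illustrative coordination game: both (r1, c1) and (r2, c2) are pure
# Nash equilibria, so different initial beliefs can select either one.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
row_strat, col_strat = fictitious_play(A, B, rng=np.random.default_rng(1))
print(row_strat, col_strat)
```

On games like this one, a single run locks onto one of the pure equilibria, matching the "depends on the initial belief" behavior reported in A2 and A6.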

