---
tags: micro
---
# Game Theory
This note is based primarily on MWG (Mas-Colell, Whinston, and Green, *Microeconomic Theory*).
## Basic Definitions
### Extensive Form
**Definition** A game in extensive form is specified by the collection $$\Gamma_E=\{\mathcal{X},\mathcal{A},I,p(.),\alpha(.),\mathcal{H},H(.),i(.),\rho(.),u\}$$
$\mathcal{X}$: finite set of nodes.
$\mathcal{A}$: finite set of possible actions.
$I$: the number of players.
$p:\mathcal{X}\to \mathcal{X} \cup \{\emptyset\}$: the function that assigns to each node its immediate predecessor (the initial node $x_0$ has no predecessor).
$\alpha: \mathcal{X}\setminus\{x_0\} \to \mathcal{A}$: the function that gives the action leading to each non-initial node from its predecessor.
$\mathcal{H}$: the collection of information sets.
$H:\mathcal{X} \to \mathcal{H}$: the function that assigns each node to an information set. Hence, $\mathcal{H}$ is a partition of $\mathcal{X}$.
$i:\mathcal{H} \to \{0,1,...,I\}$: the function that assigns each information set to a player. Nature is player $0$.
$\rho:\mathcal{H}_0 \times \mathcal{A}\to [0, 1]$: the function that assigns probabilities to actions at Nature's information sets.
$u=\{u_1(.),...,u_I(.)\}$: the set of all players' utility functions, where $T$ is the set of terminal nodes and $u_i:T \to \mathbb{R}$.
### Strategy
**Definition** Let $\mathcal{H}_i$ denote the collection of player $i$'s information sets, $\mathcal{A}$ the set of possible actions in the game, and $C(H) \subset \mathcal{A}$ the set of actions available at information set $H$. A strategy for player $i$ is a function $s_i: \mathcal{H}_i \to \mathcal{A}$ such that $s_i(H) \in C(H)$ for all $H \in \mathcal{H}_i$.
A strategy assigns a feasible action at every one of player $i$'s information sets, including information sets that may never be reached during the actual play of the game.
### Normal Form
**Definition** For a game with $I$ players, the normal form representation is
$$\Gamma_N =\{I, \{S_i\}, \{u_i(.)\}\}$$
$S_i$: the set of strategies ($s_i \in S_i$) of player $i$.
$\prod_{i=1}^I S_i = S_1 \times ... \times S_I$: the set of all strategy profiles, i.e., combinations of all players' strategies. $(s_1,...,s_I) \in \prod_{i=1}^I S_i$ if $s_1 \in S_1,..., s_I \in S_I$.
$u_i:\prod_{i=1}^I S_i \to \mathbb{R}$: the utility function of player $i$, which assigns a real-valued utility to each strategy profile.
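To make these objects concrete, here is a minimal Python sketch (not from MWG; the strategy names and payoff numbers are the standard, illustrative Prisoner's Dilemma) representing a two-player normal form game as a payoff dictionary:

```python
# Pure strategy sets of the two players (illustrative names).
S1 = ["Cooperate", "Defect"]
S2 = ["Cooperate", "Defect"]

# u[(s1, s2)] = (payoff of player 1, payoff of player 2); hypothetical numbers.
u = {
    ("Cooperate", "Cooperate"): (-1, -1),
    ("Cooperate", "Defect"):    (-10, 0),
    ("Defect",    "Cooperate"): (0, -10),
    ("Defect",    "Defect"):    (-5, -5),
}

def u_i(i, profile):
    """Utility of player i (indexed 0 or 1) at a pure strategy profile."""
    return u[profile][i]

print(u_i(0, ("Defect", "Cooperate")))  # 0
```

Later sketches in this note reuse these `S1`, `S2`, `u` objects unless they define their own.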
### Mixed Strategy
**Definition** Given player $i$'s pure strategy set $S_i$, a mixed strategy of player $i$ is
$$\sigma_i:S_i \to [0,1] $$ such that for each $s \in S_i$, $\sigma_i(s) \ge 0$ and $\sum_{s \in S_i} \sigma_i(s) = 1$.
Suppose that player $i$ has $M$ pure strategies in set $S_i = \{s_{1i},...,s_{Mi}\}$; the set of all possible mixed strategies is
$$\Delta (S_i) = \{(\sigma_{1i},...,\sigma_{Mi}) \in \mathbb{R}^M|\sigma_{mi} \ge 0 , m = 1,...,M \text{ and } \sum_{m=1}^M \sigma_{mi} = 1 \}$$
Clearly, a pure strategy is also a special element of the set of mixed strategies.
### Normal Form with Mixed Strategies
**Definition** For a game with $I$ players, the normal form representation with mixed strategies is
$$\Gamma_N =\{I, \{\Delta (S_i)\}, \{u_i(.)\}\}$$
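Under a mixed strategy profile, player $i$'s payoff is her expected utility over pure strategy profiles. A minimal sketch (reusing `S1`, `S2`, `u` from the Prisoner's Dilemma sketch above) of how $u_i(\sigma)$ can be computed:

```python
from itertools import product

def expected_utility(i, sigma, strategy_sets, u):
    """u_i(sigma): sum over pure profiles of prod_k sigma_k(s_k) times u_i(profile)."""
    total = 0.0
    for profile in product(*strategy_sets):
        prob = 1.0
        for k, s_k in enumerate(profile):
            prob *= sigma[k].get(s_k, 0.0)
        total += prob * u[profile][i]
    return total

# Player 1 defects for sure; player 2 mixes 50-50 (illustrative numbers).
sigma = [{"Defect": 1.0}, {"Cooperate": 0.5, "Defect": 0.5}]
print(expected_utility(0, sigma, [S1, S2], u))  # 0.5*0 + 0.5*(-5) = -2.5
```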
### Behavior Strategy
**Definition** Given an extensive form game $\Gamma_E$, a behavior strategy for player $i$ specifies, for every information set $H \in \mathcal{H}_i$ and every available action $a \in C(H)$, a probability
$$\lambda_i(a, H) \ge 0 \text{ for all } H \in \mathcal{H}_i \text{ and } a \in C(H) $$
such that $$\sum_{a \in C(H)}\lambda_i(a, H) = 1 \text{ for all } H \in \mathcal{H}_i$$
**Note** A mixed strategy randomizes over complete pure strategies; a behavior strategy randomizes over the available actions at each information set. These two types of randomization are equivalent in games of perfect recall, but not in general.
### Others' strategies
**Notation** We use $s_{-i}=(s_1,..., s_{i-1}, s_{i+1},...,s_I)$ to denote the strategies chosen by everyone except player $i$. Hence $S_{-i} = S_1 \times ... \times S_{i-1} \times S_{i+1} \times ... \times S_I$ is the set of all combinations of strategies chosen by the players other than $i$.
## Categories of Strategies
### Strictly Dominant Strategy
**Definition** A strategy $s_i \in S_i$ is a strictly dominant strategy for player $i$ in game $\Gamma_N = \{I, \{S_i\}, \{u_i(.)\}\}$ if for all $s_i^* \neq s_i$, we have
$$u_i(s_i, s_{-i})> u_i(s_i^*, s_{-i})$$
for all $s_{-i} \in S_{-i}$
### Strictly Dominated Strategy
**Definition** A strategy $s_i \in S_i$ is a strictly dominated strategy for player $i$ in game $\Gamma_N = \{I, \{S_i\}, \{u_i(.)\}\}$ if there exists another strategy $s_i^* \in S_i$ such that
$$u_i(s_i^*, s_{-i})> u_i(s_i, s_{-i})$$
for all $s_{-i} \in S_{-i}$.
We say that the strategy $s_i^*$ strictly dominates the strategy $s_i$.
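Checking strict dominance between two pure strategies amounts to comparing payoffs against every profile of the opponents' pure strategies. A minimal sketch (reusing the Prisoner's Dilemma objects defined above):

```python
from itertools import product

def strictly_dominates(i, s_star, s, strategy_sets, u):
    """True if pure strategy s_star strictly dominates s for player i."""
    others = [S_j for j, S_j in enumerate(strategy_sets) if j != i]
    for s_minus_i in product(*others):
        # Rebuild full profiles by slotting player i's strategy into position i.
        prof_star = s_minus_i[:i] + (s_star,) + s_minus_i[i:]
        prof      = s_minus_i[:i] + (s,) + s_minus_i[i:]
        if u[prof_star][i] <= u[prof][i]:
            return False
    return True

# In the Prisoner's Dilemma above, "Defect" strictly dominates "Cooperate":
print(strictly_dominates(0, "Defect", "Cooperate", [S1, S2], u))  # True
```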
### Weakly Dominated Strategy
**Definition** A strategy $s_i \in S_i$ is a weakly dominated strategy for player $i$ in game $\Gamma_N = \{I, \{S_i\}, \{u_i(.)\}\}$ if there exists another strategy $s_i^* \in S_i$ such that
$$u_i(s_i^*, s_{-i}) \ge u_i(s_i, s_{-i})$$
for all $s_{-i} \in S_{-i}$, and
$$u_i(s_i^*, s_{-i}) > u_i(s_i, s_{-i})$$
for some $s_{-i} \in S_{-i}$.
### Weakly Dominant Strategy
**Definition** A strategy $s_i \in S_i$ is a weakly dominant strategy for player $i$ in game $\Gamma_N = \{I, \{S_i\}, \{u_i(.)\}\}$ if it weakly dominates every other strategy in $S_i$.
### Strictly Dominated Strategy with Mixed Strategies
**Definition** A strategy $\sigma_i \in \Delta (S_i)$ is a strictly dominated strategy for player $i$ in game $\Gamma_N = \{I, \{\Delta (S_i)\}, \{u_i(.)\}\}$ if there exists another strategy $\sigma_i^* \in \Delta(S_i)$ such that
$$u_i(\sigma_i^*, \sigma_{-i}) > u_i(\sigma_i, \sigma_{-i})$$
for all $\sigma_{-i} \in \prod_{j \neq i}\Delta (S_j)$.
### Strictly Dominated Strategy in Pure and Mixed Strategies
**Proposition** In a game $\Gamma_N = \{I, \{\Delta (S_i)\}, \{u_i(.)\}\}$,
$$u_i(\sigma_i^*,\sigma_{-i}) > u_i(\sigma_i,\sigma_{-i}), \forall \sigma_{-i} \in \prod_{j \neq i}\Delta(S_j)$$
if and only if
$$u_i(\sigma_i^*, s_{-i}) > u_i(\sigma_i, s_{-i}), \forall s_{-i} \in S_{-i}$$
**Proof**
$$u_i(\sigma_i^*,\sigma_{-i}) - u_i(\sigma_i,\sigma_{-i}) = \sum_{s_{-i} \in S_{-i}} [\prod_{k \neq i} \sigma_k(s_k)][u_i(\sigma_i^*, s_{-i}) - u_i(\sigma_i, s_{-i})].$$
$$u_i(\sigma_i^*,\sigma_{-i}) - u_i(\sigma_i,\sigma_{-i}) > 0, \forall \sigma_{-i} \iff [u_i(\sigma_i^*, s_{-i}) - u_i(\sigma_i, s_{-i})]>0, \forall s_{-i}$$
The "if" direction holds because the left-hand side is a weighted average of the bracketed differences, with nonnegative weights summing to one; the "only if" direction follows by taking $\sigma_{-i}$ degenerate at each $s_{-i} \in S_{-i}$.
This proposition suggests that to check whether $\sigma_i^*$ strictly dominates $\sigma_i$, we only need to compare their payoffs against the other players' pure strategy profiles, not against the infinitely many mixed strategy profiles.
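A minimal sketch of this shortcut: to test whether a mixed strategy strictly dominates another, compare expected payoffs only against the opponents' pure strategy profiles. The $3 \times 2$ game below is hypothetical; it is chosen so that the 50-50 mix of $U$ and $D$ strictly dominates $M$ even though neither $U$ nor $D$ does so on its own.

```python
from itertools import product

def mixed_strictly_dominates(i, sigma_star, sigma, strategy_sets, u):
    """Strict dominance test using only the opponents' pure strategy profiles."""
    others = [S_j for j, S_j in enumerate(strategy_sets) if j != i]
    for s_minus_i in product(*others):
        def payoff(mix):
            # Expected payoff of player i when she plays `mix` against pure s_{-i}.
            return sum(p * u[s_minus_i[:i] + (s_i,) + s_minus_i[i:]][i]
                       for s_i, p in mix.items())
        if payoff(sigma_star) <= payoff(sigma):
            return False
    return True

# Hypothetical 3x2 game (player 2's payoffs are irrelevant here and set to 0).
S1, S2 = ["U", "M", "D"], ["L", "R"]
u = {("U", "L"): (3, 0), ("U", "R"): (0, 0),
     ("M", "L"): (1, 0), ("M", "R"): (1, 0),
     ("D", "L"): (0, 0), ("D", "R"): (3, 0)}
print(mixed_strictly_dominates(0, {"U": 0.5, "D": 0.5}, {"M": 1.0}, [S1, S2], u))  # True
```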
### Best Response
**Definition** In a game $\Gamma_N = \{I, \{\Delta (S_i)\}, \{u_i(.)\}\}$, strategy $\sigma_i^*$ is a best response for player $i$ to his rivals' strategies $\sigma_{-i}$ if
$$u_i(\sigma_i^*, \sigma_{-i}) \ge u_i(\sigma_i, \sigma_{-i}), \forall \sigma_i \in \Delta(S_i)$$
Strategy $\sigma_i^*$ is never a best response if there is no $\sigma_{-i}$ for which $\sigma_i^*$ is a best response.
### Rationalizable Strategies
**Definition** In a game $\Gamma_N = \{I, \{\Delta (S_i)\}, \{u_i(.)\}\}$, the strategies in $\Delta(S_i)$ that survive the iterated removal of strategies that are never a best response are known as player $i$'s rationalizable strategies.
## Nash Equilibrium
### Nash Equilibrium
**Definition** A strategy profile $s=(s_1,...,s_I)$ is a Nash equilibrium of game $\Gamma_N = \{I, \{S_i\}, \{u_i(.)\}\}$ if for every $i=1,...,I$,
$$u_i(s_i,s_{-i}) \ge u_i(s_i',s_{-i}), \forall s_i' \in S_i$$
**Note** In a Nash equilibrium, each player's strategy is a best response to the strategies actually played by her rivals.
### Best Response Correspondence
**Definition** In a game $\Gamma_N = \{I, \{S_i\}, \{u_i(.)\}\}$, player $i$'s best response correspondence is $b_i: S_{-i} \to S_i$ such that
$$b_i(s_{-i}) = \{s_i \in S_i \mid u_i(s_i,s_{-i}) \ge u_i(s_i',s_{-i}), \forall s_i' \in S_i\}$$
Note that $b_i(s_{-i})$ is a subset of $S_i$ and may contain more than one element.
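A minimal sketch computing $b_i(s_{-i})$ for a finite game; the example reuses the hypothetical $3 \times 2$ game from the previous sketch.

```python
def best_responses(i, s_minus_i, strategy_sets, u):
    """b_i(s_{-i}): player i's pure strategies maximizing u_i against s_{-i}."""
    def payoff(s_i):
        return u[s_minus_i[:i] + (s_i,) + s_minus_i[i:]][i]
    best = max(payoff(s_i) for s_i in strategy_sets[i])
    return {s_i for s_i in strategy_sets[i] if payoff(s_i) == best}

# Against player 2 playing "L", player 1's unique best response is "U".
print(best_responses(0, ("L",), [S1, S2], u))  # {'U'}
```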
### Nash Equilibrium with Mixed Strategy
**Definition** A strategy profile $\sigma=(\sigma_1,...,\sigma_I)$ is a Nash equilibrium of game $\Gamma_N = \{I, \{\Delta (S_i)\}, \{u_i(.)\}\}$ if for every $i=1,...,I$,
$$u_i(\sigma_i,\sigma_{-i}) \ge u_i(\sigma_i',\sigma_{-i}), \forall \sigma_i' \in \Delta(S_i)$$
### Indifference of Pure Strategies in NE
**Proposition** Let $S_i^+ \subset S_i$ denote the set of pure strategies that player $i$ plays with positive probability in the mixed strategy profile $\sigma=(\sigma_1,...,\sigma_I)$. Then $\sigma$ is a Nash equilibrium if and only if for all $i = 1,...,I$,
1. $$u_i(s_i,\sigma_{-i}) = u_i(s_i',\sigma_{-i}), \forall s_i,s_i' \in S_i^+ $$
2. $$u_i(s_i,\sigma_{-i}) \ge u_i(s_i',\sigma_{-i}), \forall s_i \in S_i^+ \text{ and } \forall s_i' \notin S_i^+ $$
**Proof**
**Only if part**
If condition 1 does not hold,
$$\exists s_i,s_i' \in S_i^+ \text{ such that } u_i(s_i',\sigma_{-i}) >u_i(s_i,\sigma_{-i}) $$
If condition 2 does not hold,
$$\exists s_i \in S_i^+ \text{ and } s_i' \notin S_i^+ \text{ such that } u_i(s_i',\sigma_{-i})> u_i(s_i,\sigma_{-i}) $$
In either case, player $i$ can strictly increase her payoff by playing strategy $s_i'$ whenever she would have played strategy $s_i$, so $\sigma$ is not a Nash equilibrium.
**If part**
Condition 1 implies $u_i(s_i, \sigma_{-i}) = u_i(\sigma_i, \sigma_{-i}), \forall s_i \in S_i^+$.
Suppose that the two conditions hold but that $\sigma$ is not a Nash equilibrium. Then there is some player $i$ with a strategy $\sigma_i'$ such that
$$u_i(\sigma_i', \sigma_{-i})> u_i(\sigma_i, \sigma_{-i})$$ Then there must be some pure strategy $s_i'$ that is played with positive probability under $\sigma_i'$ such that
$$u_i(s_i', \sigma_{-i}) > u_i(\sigma_i, \sigma_{-i})$$
If $s_i' \in S_i^+$, it contradicts condition 1; if $s_i' \notin S_i^+$, it contradicts condition 2.
This proposition provides a shortcut for computing mixed strategy equilibria: every pure strategy played with positive probability in equilibrium must yield the same expected payoff, and this indifference condition pins down the mixing probabilities.
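For example, in Matching Pennies (each player chooses Heads or Tails; player 1 wins $1$ if the choices match and loses $1$ otherwise, and player 2 gets the negative of player 1's payoff), let player 2 play $H$ with probability $q$. Player 1 is willing to mix only if $H$ and $T$ give her the same expected payoff:
$$q(1) + (1-q)(-1) = q(-1) + (1-q)(1) \implies q = \tfrac{1}{2}$$
and the symmetric indifference condition for player 2 gives $p = \tfrac{1}{2}$. Hence the unique Nash equilibrium has both players randomizing $50$-$50$.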
## Existence of Nash Equilibria
### Existence of NE with Mixed Strategies
**Proposition** Every game $\Gamma_N = \{I, \{\Delta (S_i)\}, \{u_i(.)\}\}$ in which the pure strategy sets $S_1,...,S_I$ are finite has a mixed strategy Nash equilibrium.
### Existence of NE with Infinite Pure Strategies
**Proposition** A game $\Gamma_N = \{I, \{S_i\}, \{u_i(.)\}\}$ has a Nash equilibrium if for all $i=1,...,I$,
1. $S_i$ is a nonempty, convex, and compact subset of $\mathbb{R}^N$
2. $u_i(s_1,...,s_I)$ is continuous in $(s_1,...,s_I)$ and quasiconcave in $s_i$.
An auction in which bids are chosen from a continuous range is one setting covered by this proposition.
### Kakutani fixed-point theorem
**Statement** Let $S$ be a non-empty, compact, and convex subset of $\mathbb{R}^n$, and let $\phi: S \to 2^S$ be a set-valued function (correspondence) on $S$. Then $\phi$ has a fixed point if
1. The graph of $\phi$, i.e. $\{(x,y)|x \in S , y \in \phi(x)\}$ is closed.
2. $\phi(x)$ is non-empty and convex for all $x \in S$.
There is an instructive [proof](https://en.wikipedia.org/wiki/Kakutani_fixed-point_theorem#Proof_outline) on Wikipedia.
**Note:** If we define the best response correspondence $b: S \to 2^S$ as the product of the players' best response correspondences, $b(s) = b_1(s_{-1}) \times ... \times b_I(s_{-I})$, then the existence of a Nash equilibrium is equivalent to the existence of a fixed point of $b$, i.e.,
$$ s \in S \text{ such that } s \in b(s)$$
in other words,
$$ s=(s_1, ..., s_I) \in S \text{ such that } s_1 \in b_1(s_{-1}) , ..., s_I \in b_I(s_{-I})$$
Hence, the two existence results above are direct consequences of the Kakutani fixed-point theorem.
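A brute-force sketch of this fixed point view for finite games with pure strategies: a profile $s$ is a Nash equilibrium exactly when $s_i \in b_i(s_{-i})$ for every $i$, i.e. $s \in b(s)$. The sketch reuses the `best_responses` function defined above; the coordination game (a Battle of the Sexes with hypothetical payoffs) has two such fixed points.

```python
from itertools import product

def pure_nash(strategy_sets, u):
    """Pure strategy Nash equilibria: fixed points s in b(s) of the best response correspondence."""
    equilibria = []
    for profile in product(*strategy_sets):
        if all(profile[i] in best_responses(i, profile[:i] + profile[i + 1:], strategy_sets, u)
               for i in range(len(strategy_sets))):
            equilibria.append(profile)
    return equilibria

# Hypothetical coordination game with two pure equilibria:
S1, S2 = ["Ballet", "Boxing"], ["Ballet", "Boxing"]
u = {("Ballet", "Ballet"): (2, 1), ("Ballet", "Boxing"): (0, 0),
     ("Boxing", "Ballet"): (0, 0), ("Boxing", "Boxing"): (1, 2)}
print(pure_nash([S1, S2], u))  # [('Ballet', 'Ballet'), ('Boxing', 'Boxing')]
```

Note that this search only covers pure strategy profiles; Kakutani's theorem is what guarantees a fixed point exists once mixed strategies (a convex, compact strategy set) are allowed.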
## Incomplete Information
### Bayesian Game
**Definition** A Bayesian game is a set
$$\{I, \{S_i\}, \{u_i(\cdot)\}, \Theta, F(\cdot)\}$$
$\Theta = \Theta_1 \times ... \times \Theta_I$ is the product of the players' type spaces, with $\theta_i \in \Theta_i$.
$F(\theta_1, ...,\theta_I)$ is the joint probability distribution of the players' types.
$u_i: S \times \Theta_i \to \mathbb{R}$ is player $i$'s utility function, which depends on her realized type.
A pure strategy of player $i$ in a Bayesian game is a function (decision rule) $s_i(\theta_i): \Theta_i \to S_i$, and the set of all such functions is $\mathcal{F_i}$.
The payoff of player $i$ is the expected utility of $u_i$, that is
$$\tilde{u_i}(s_1(\cdot),...,s_I(\cdot)) = E_{\Theta}[u_i(s_1(\theta_1),...,s_I(\theta_I),\theta_i)]$$
*Note:* In MWG's definition, $u_i$ depends only on player $i$'s own type $\theta_i$; in other textbooks, $u_i$ may also depend on the other players' types $\theta_{-i}$.
### Pure Strategy Bayesian NE
**Definition** A pure strategy Bayesian Nash equilibrium for the Bayesian game $\{I, \{S_i\}, \{u_i(\cdot)\}, \Theta, F(\cdot)\}$ is a profile of decision rules $(s_1(\cdot),...,s_I(\cdot))$ that constitutes a Nash equilibrium of the game $\Gamma_N = \{ I, \{\mathcal{F_i}\}, \{\tilde{u_i}(\cdot)\}\}$. That is, for every $i$,
$$\tilde{u_i}(s_i(\cdot), s_{-i}(\cdot)) \ge \tilde{u_i}(s_i'(\cdot), s_{-i}(\cdot)) \text{ for all } s_i'(\cdot) \in \mathcal{F_i}$$
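A minimal sketch of the induced payoff $\tilde{u_i}$ for a tiny Bayesian game (everything here is hypothetical: two players, two independent and equally likely types each, two actions, and an arbitrary type-dependent payoff). A pure strategy Bayesian Nash equilibrium could then be checked by verifying that no player can raise her expected utility by switching to any of her other type-contingent decision rules.

```python
from itertools import product

# Hypothetical type spaces and (independent, uniform) joint type distribution.
types = [("t1a", "t1b"), ("t2a", "t2b")]
prob = {theta: 0.25 for theta in product(*types)}
actions = ["A", "B"]

def u(i, s, theta_i):
    """Hypothetical payoff: player i likes matching actions only under her first type."""
    match = 1 if s[0] == s[1] else 0
    return match if theta_i == types[i][0] else 1 - match

def expected_u(i, rules):
    """tilde u_i: expectation over type profiles of u_i(s_1(theta_1), s_2(theta_2), theta_i)."""
    total = 0.0
    for theta, p in prob.items():
        s = tuple(rules[j][theta[j]] for j in range(2))
        total += p * u(i, s, theta[i])
    return total

# A decision rule maps a player's type to an action, e.g.:
rule1 = {"t1a": "A", "t1b": "B"}
rule2 = {"t2a": "A", "t2b": "A"}
print(expected_u(0, [rule1, rule2]))  # 1.0 for these particular rules
```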
## Sequential Games
### Perfect Information
A game is one of perfect information if each information set contains only one decision node. Otherwise, it is a game of imperfect information.
### Zermelo's Theorem
**Proposition** Every finite extensive game of perfect information $\Gamma_E$ has a pure strategy Nash equilibrium that can be derived through backward induction. Moreover, if no player has the same payoffs at any two terminal nodes, then there is a unique Nash equilibrium that can be derived in this manner.
**Proof**
Since $\Gamma_E$ is a finite game, we can construct a strategy profile $\sigma = (\sigma_1, ..., \sigma_I)$ by backward induction. If no player has the same payoffs at any two terminal nodes, backward induction induces exactly one such strategy profile.
We need to show that $\sigma$ is a Nash equilibrium. Suppose not, i.e., there exists another strategy $\sigma_i'$ for player $i$ such that $u_i(\sigma_i', \sigma_{-i})> u_i(\sigma_i, \sigma_{-i})$.
For player $i$, define the distance of her last decision node along any given path to be $0$, and define the distance of a decision node $x$ of hers as the length of the longest path from $x$ down to one of her last decision nodes. Because the game is finite, the distance of $i$'s first decision node is a finite number $N$. We then construct a series of strategies $\sigma_i'(n)$, $n=0,1,...,N$, where $\sigma_i'(n)$ makes the same decisions as $\sigma_i$ at decision nodes at distance $0, 1, ..., n$ and the same decisions as $\sigma_i'$ at decision nodes at distance $n+1, ..., N$.
Since the only difference between $\sigma_i'(0)$ and $\sigma_i'$ is the decision at the last decision nodes, the rule of backward induction ensures that $u_i(\sigma_i'(0), \sigma_{-i}) \ge u_i(\sigma_i', \sigma_{-i})$. Moreover, if $u_i(\sigma_i'(n-1), \sigma_{-i}) \ge u_i(\sigma_i', \sigma_{-i})$, then $u_i(\sigma_i'(n), \sigma_{-i}) \ge u_i(\sigma_i', \sigma_{-i})$, again by the rule of backward induction. By mathematical induction, $u_i(\sigma_i'(N), \sigma_{-i}) \ge u_i(\sigma_i', \sigma_{-i})$. However, $\sigma_i'(N)=\sigma_i$, which is a contradiction. Hence, $\sigma$ is a Nash equilibrium.
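A minimal backward induction sketch for a (hypothetical) finite game of perfect information: a node is either a terminal payoff tuple or a dictionary recording the moving player and the subtrees reached by each action, and the procedure returns the backward induction payoff vector together with the action chosen at the node it is called on.

```python
def backward_induction(node):
    """Return (payoff_vector, chosen_action); chosen_action is None at terminal nodes."""
    if isinstance(node, tuple):               # terminal node: a payoff vector
        return node, None
    player = node["player"]                   # decision node of `player`
    best_action, best_payoffs = None, None
    for action, child in node["moves"].items():
        payoffs, _ = backward_induction(child)
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs = action, payoffs
    return best_payoffs, best_action

# Hypothetical entry game: player 0 chooses Out/In; after In, player 1 chooses Fight/Accommodate.
game = {"player": 0, "moves": {
    "Out": (0, 2),
    "In": {"player": 1, "moves": {"Fight": (-1, -1), "Accommodate": (1, 1)}},
}}
print(backward_induction(game))  # ((1, 1), 'In')
```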
### Subgame
A subgame of an extensive form game $\Gamma_E$ is a subset of the game having the following properties:
1. It begins with an information set containing a single decision node, contains all the decision nodes that are successors (both immediate and later) of this node, and contains only these nodes.
2. If one decision node $x$ is in the subgame, then every node in the same information set as $x$ must also be in the subgame.
### SPNE
A profile of strategies $\sigma = (\sigma_1, ..., \sigma_I)$ in an $I$ player extensive form game $\Gamma_E$ is a **subgame perfect Nash equilibrium (SPNE)** if it induces a Nash equilibrium in every subgame of $\Gamma_E$.