
Rationalizable Behavior

tags: Microeconomics

Before we enter the world of game theory, we must first look at the differences between individual choice and game theory.

  • Individual choice:
    • A single individual’s decision affects some a priori unknown outcome.
    • Under suitable conditions over preferences, we can represent preferences over outcomes with a utility function.
    • A rational individual maximizes their expected utility.
  • Game theory:
    • The outcome also depends on the choices made by others, who each rationally maximize their own expected utilities.
    • Rational individuals take those choices into account and maximize their expected utility, given the anticipated decisions of others.

Static Game

A static game (靜態賽局) is similar to the very simple decision problems in which a player makes a once-and-for-all decision, after which outcomes are realized. In a static game, a set of players independently choose once-and-for-all actions, which in turn cause the realization of an outcome. Thus a static game can be thought of as having two distinct steps:

  • Step 1: Each player simultaneously and independently chooses an action.
  • Step 2: Conditional on the players’ choices of actions, payoffs are distributed to each player.

A static game $G = (I, (A_i)_{i \in I}, (u_i)_{i \in I})$ consists of:

  1. A finite set of players $I = \{1, \dots, n\}$,
  2. A set $A_i$ of pure actions available to player $i$, for each $i \in I$,
  3. A payoff function $u_i : A \to \mathbb{R}$ for each player $i \in I$, where $A := A_1 \times \dots \times A_n$ is the set of pure action profiles $a = (a_1, \dots, a_n)$.

A few remarks are worth noting:

  • Crucially, each player's payoff can depend on the actions of others.
  • The map $u : A \to \mathbb{R}^n$ defined by $u = (u_1, \dots, u_n)$ is the players' payoff vector.
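These objects translate directly into code. The sketch below represents a static game in Python with hypothetical players, action sets, and payoff numbers (none of them come from the text above); it is an illustration of the definition, not a required implementation.

```python
from itertools import product

# A sketch of a static game G = (I, (A_i), (u_i)) with hypothetical
# players, actions, and payoff numbers (purely illustrative).
players = (1, 2)
A = {1: ("a", "b"), 2: ("x", "y")}  # pure action sets A_1, A_2

# u_i : A -> R, given as tables over pure action profiles a = (a_1, a_2)
u = {
    1: {("a", "x"): 3, ("a", "y"): 0, ("b", "x"): 1, ("b", "y"): 2},
    2: {("a", "x"): 1, ("a", "y"): 2, ("b", "x"): 0, ("b", "y"): 3},
}

# A := A_1 x A_2, the set of pure action profiles
profiles = list(product(*(A[i] for i in players)))

def payoff_vector(a):
    """The players' payoff vector u(a) = (u_1(a), u_2(a))."""
    return tuple(u[i][a] for i in players)
```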

Assumptions About Players

We impose:

  1. Rationality: each player maximizes his/her expected utility.
  2. Awareness: each player knows $G$, that is, who is playing, the available actions of each player, and each player's preferences over outcomes.
  3. It is common knowledge[1] that players are rational and aware.

Some Examples of Static Games

To Dope or Not to Dope?

The players $I$ are two athletes competing in a race. Each athlete chooses whether to use a performance-enhancing (D)rug at the risk of being found out or (N)ot to use it, i.e., $A_i = \{D, N\}$. If they both dope, the benefits cancel out and only the risks remain. We can describe the payoffs for each outcome in a payoff matrix:
| 1 \ 2 | $N$       | $D$        |
|:-----:|:---------:|:----------:|
| $N$   | $(0,0)$   | $(-2,1)$   |
| $D$   | $(1,-2)$  | $(-1,-1)$  |

(Athlete 1 chooses the row; athlete 2 chooses the column.)

In this case, we have the following information:

  • Players: $I = \{1, 2\}$.
  • Strategy sets: $S_i = \{D, N\}$ for $i \in \{1, 2\}$.
  • Payoffs: Let $v_i(s_1, s_2)$ be the payoff to player $i$ if athlete 1 chooses $s_1$ and athlete 2 chooses $s_2$. We can then write the payoffs as
    • $v_1(N,N) = v_2(N,N) = 0$
    • $v_1(N,D) = v_2(D,N) = -2$
    • $v_1(D,N) = v_2(N,D) = 1$
    • $v_1(D,D) = v_2(D,D) = -1$
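The symmetry of the doping game, $v_2(s_1, s_2) = v_1(s_2, s_1)$, can be checked mechanically. This minimal sketch hard-codes athlete 1's payoff table and derives athlete 2's from it:

```python
# Athlete 1's payoffs in the doping game; athlete 2's follow by
# symmetry: v2(s1, s2) = v1(s2, s1).
v1 = {("N", "N"): 0, ("N", "D"): -2, ("D", "N"): 1, ("D", "D"): -1}
v2 = {(s1, s2): v1[(s2, s1)] for (s1, s2) in v1}
```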

Cournot Competition

A static game can be used to model the scenario in which firms decide their output levels simultaneously, assuming rivals' output levels to be fixed. Each firm chooses its optimal output given the output of its competitors. The Nash equilibrium of the game provides the output levels and resulting profits for each firm in the market.

When analyzing the fluctuations of oil prices over time, the Organisation for Economic Co-operation and Development (OECD) provides a useful case study. As a grouping of 38 high-income countries, the OECD's collective demand for oil influences the global market. By examining the factors driving changes in OECD oil consumption, such as economic growth or policy changes, we can gain insight into the broader dynamics of the oil market. We can therefore model how oil suppliers choose their output as follows:

  • Players: $I$ denotes the few global oil suppliers.
  • Strategy sets: $A_i = [0, \infty)$ for each $i \in I$, and firms choose quantities $a_i \in A_i$.
  • Payoffs: Buyers are agnostic about the oil's origin, so the price is a decreasing function $p(a_1 + \dots + a_n)$ of total supply. If $c_i(a_i)$ is firm $i$'s cost of producing $a_i$, then firm $i$'s utility is
    $$u_i(a) = p(a_1 + \dots + a_n) \, a_i - c_i(a_i).$$
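As a concrete instance of this payoff function, the sketch below assumes a linear inverse demand $p(Q) = \max(0, 100 - Q)$ and a constant marginal cost of $10$; both functional forms and all numbers are illustrative assumptions, not part of the model above.

```python
# A hypothetical Cournot specification: linear inverse demand
# p(Q) = max(0, 100 - Q) and constant marginal cost 10 (illustrative numbers).
def price(total_supply, intercept=100.0, slope=1.0):
    """Inverse demand p(a_1 + ... + a_n), decreasing in total supply."""
    return max(0.0, intercept - slope * total_supply)

def cournot_payoff(i, quantities, marginal_cost=10.0):
    """Firm i's utility u_i(a) = p(a_1 + ... + a_n) * a_i - c_i(a_i)."""
    total = sum(quantities)
    return price(total) * quantities[i] - marginal_cost * quantities[i]
```

For example, with two firms each producing 30 units, the price is $100 - 60 = 40$ and each firm earns $(40 - 10) \cdot 30 = 900$.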

Rock-Paper-Scissors

Take the popular game of rock-paper-scissors as an example. In this game, rock ($R$) triumphs over scissors ($S$), scissors beats paper ($P$), and paper beats rock. Assigning a payoff of $1$ to the winner and $-1$ to the loser, with a tie resulting in a payoff of $0$ for both players, we can construct a simple framework to analyze the strategic choices of the players involved.

  • Players: $I = \{1, 2\}$.
  • Strategy sets: $S_i = \{R, P, S\}$ for $i \in \{1, 2\}$.

Given the previous information, we can write the matrix representation of this game as follows:

| 1 \ 2 | $R$       | $P$       | $S$       |
|:-----:|:---------:|:---------:|:---------:|
| $R$   | $(0,0)$   | $(-1,1)$  | $(1,-1)$  |
| $P$   | $(1,-1)$  | $(0,0)$   | $(-1,1)$  |
| $S$   | $(-1,1)$  | $(1,-1)$  | $(0,0)$   |
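The cyclic structure of this matrix can be captured compactly with a "beats" relation. This is one possible encoding, not anything the text requires:

```python
# The cyclic "beats" relation behind the matrix: each action defeats exactly one other.
beats = {"R": "S", "S": "P", "P": "R"}

def u1(s1, s2):
    """Player 1's payoff: +1 for a win, -1 for a loss, 0 for a tie."""
    if s1 == s2:
        return 0
    return 1 if beats[s1] == s2 else -1
```

Since the game is zero-sum, player 2's payoff is simply $-u_1(s_1, s_2)$.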

Mixed Actions

Mixed actions in game theory refer to a strategy where a player randomly selects between two or more pure strategies with a certain probability. This differs from a pure strategy, where a player chooses a single action with 100% certainty. By using mixed actions, a player can introduce uncertainty into the game, making it more difficult for their opponent to predict their actions and potentially creating a strategic advantage. Mixed actions are often used in games with no dominant strategy or where the payoffs are dependent on the actions of other players.

A mixed action of player $i$ is a distribution $\alpha_i \in \Delta(A_i)$. If $A_i$ is finite, a distribution over $A_i$ is an $|A_i|$-dimensional vector with
$$\alpha_i(a_i) \in [0,1] \quad \forall a_i \in A_i, \qquad \sum_{a_i \in A_i} \alpha_i(a_i) = 1,$$
where $\alpha_i(a_i)$ indicates the probability with which $a_i \in A_i$ is selected.[2]
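The two conditions above (entries in $[0,1]$ summing to $1$) are easy to validate in code. A small sketch, using a dictionary to represent a mixed action; the tolerance parameter is an implementation detail I am assuming:

```python
# A mixed action over a finite A_i is a point in the simplex: entries in [0, 1]
# that sum to 1 (see footnote [2]).
def is_mixed_action(alpha, tol=1e-9):
    probs = list(alpha.values())
    return all(0.0 <= p <= 1.0 for p in probs) and abs(sum(probs) - 1.0) < tol
```

For instance, the 50/50 mix over rock and scissors used in the next example, `{"R": 0.5, "P": 0.0, "S": 0.5}`, passes this check.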


Back to the rock-paper-scissors example, suppose player 1 chooses $P$ and player 2 chooses $R$ and $S$ with $50\%$ probability each. The outcome of the game is $(P, R)$ or $(P, S)$ with $50\%$ each. The outcome, denoted $A = (A_1, \dots, A_n)$, is a random variable such that:

  1. each player $i$'s realized action $A_i$ is distributed according to $\alpha_i$,
  2. the players' actions are realized independently of each other.

Players observe only the outcome $A$, not its distribution $\alpha$. Hence, while the mixed action profile $\alpha$ is a profile of distributions, the realized action profile $A$ is a vector of random variables.

Ex-post Payoff

In the action profile $(P, \tfrac{1}{2}R + \tfrac{1}{2}S)$, the players receive
$$u_1(A) = \begin{cases} 1 & \text{if } A_2 = R, \\ -1 & \text{if } A_2 = S, \end{cases} \qquad u_2(A) = \begin{cases} -1 & \text{if } A_2 = R, \\ 1 & \text{if } A_2 = S. \end{cases}$$

Each player $i$'s realized payoff $u_i(A)$ is a random variable as well. Hence we can take the expected value of the ex-post payoff to analyze how the players choose.
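One way to see that $u_1(A)$ is a random variable is to simulate the profile $(P, \tfrac{1}{2}R + \tfrac{1}{2}S)$ directly. The seed and number of draws below are arbitrary choices for the sketch:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def realize():
    """One realization of (P, 1/2 R + 1/2 S) and player 1's ex-post payoff."""
    a2 = random.choice(["R", "S"])  # player 2's realized action A_2
    payoff1 = 1 if a2 == "R" else -1  # P beats R; S beats P
    return a2, payoff1

draws = [realize() for _ in range(10_000)]
mean_u1 = sum(p for _, p in draws) / len(draws)
```

The sample mean of the realized payoffs should be close to the ex-ante expectation, which here is $0$.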

Ex-ante Expected Payoff

A mixed action profile $\alpha = (\alpha_1, \dots, \alpha_n)$ induces a probability measure $P_\alpha$ over outcomes, defined by $P_\alpha(A = a) = \alpha_1(a_1) \cdots \alpha_n(a_n)$. The expected payoff of the mixed action profile $\alpha$ is
$$u_i(\alpha) := E_\alpha[u_i(A)] = \sum_{a \in A} u_i(a) \, P_\alpha(A = a) = \sum_{a \in A} u_i(a) \prod_{j=1}^{n} \alpha_j(a_j).$$
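The sum above can be evaluated directly for the running example, player 1 playing $P$ while player 2 mixes 50/50 between $R$ and $S$. A sketch:

```python
from itertools import product

# Ex-ante expected payoff u_1(alpha) = sum_a u_1(a) * prod_j alpha_j(a_j),
# for rock-paper-scissors with alpha = (P, 1/2 R + 1/2 S).
beats = {"R": "S", "S": "P", "P": "R"}

def u1(s1, s2):
    return 0 if s1 == s2 else (1 if beats[s1] == s2 else -1)

alpha = ({"P": 1.0}, {"R": 0.5, "S": 0.5})  # only actions with positive probability

expected = sum(
    u1(a1, a2) * alpha[0][a1] * alpha[1][a2]
    for a1, a2 in product(alpha[0], alpha[1])
)
```

Here the expected payoff is $\tfrac{1}{2}(+1) + \tfrac{1}{2}(-1) = 0$.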

Strategies

In game theory, there is a distinction between actions and strategies:

  • Actions are the fundamental units of the game, defined by the game's rules.
  • Strategies, on the other hand, are contingent plans of actions that are formulated for the entire game, indicating which actions to take in different scenarios.
  • Pure strategies, denoted by $s_i \in S_i$, involve a player choosing a specific action with $100\%$ probability, while mixed strategies, denoted by $\sigma_i \in \Delta(S_i)$, involve a player randomly selecting between multiple pure strategies with certain probabilities.
  • In static games with complete information, the set of actions $A_i$ is equivalent to the set of strategies $S_i$, but this may not be the case for games with incomplete information or dynamic games.

Games are defined by their actions, but their outcomes are determined by the strategies employed by the players. To capture this relationship, we use $P_\sigma$ to represent the distribution of outcomes that results from a particular strategy profile $\sigma$. In the case of static games, $P_\sigma$ is equivalent to $P_\alpha$, underscoring the relevance of the results for games encountered in the future.[3]

Solution Concept

Describing situations abstractly is only useful if we can use the model to analyze what will/should happen. A solution concept is a method of analyzing the game to restrict all possible outcomes to a set of reasonable outcomes.

Evaluation Criteria of a Game

We evaluate a solution concept by the following criteria:

  • Existence: the solution concept should be broadly applicable.
  • Predictive power: the set of solutions should be significantly smaller than the set of all outcomes.
  • Robustness: is the solution concept robust to small modeling errors?

Robustness

Consider a game $G = (I, (A_i), (u_i))$. A game $\tilde{G} = (I, (A_i), (\tilde{u}_i))$ is an $\varepsilon$-perturbation of $G$ if for every action profile $a \in A$ and every player $i$,
$$|\tilde{u}_i(a) - u_i(a)| < \varepsilon.$$

A solution concept is robust if any solution $s$ of $G$ remains a solution of any $\varepsilon$-perturbation $G_\varepsilon$ of $G$ as $\varepsilon \to 0$.
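The perturbation condition is a simple uniform bound on payoff differences, which a short check can verify. The payoff tables below are hypothetical:

```python
# G~ is an eps-perturbation of G if |u~_i(a) - u_i(a)| < eps for every
# player i and every action profile a.
def is_eps_perturbation(u, u_tilde, eps):
    return all(
        abs(u_tilde[i][a] - u[i][a]) < eps
        for i in u for a in u[i]
    )

# Hypothetical one-player, two-profile example:
u = {1: {("a",): 1.0, ("b",): 2.0}}
u_tilde = {1: {("a",): 1.005, ("b",): 1.998}}
```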

Iterated Strict Dominance

The dilemma of whether or not to use performance-enhancing drugs in sports can be analyzed as a Prisoner's Dilemma. Each athlete faces the choice of whether to dope or not, with the payoff depending on both their own action and their opponent's action. If both choose not to dope, the result is a "cooperative" outcome with a moderate payoff for each. However, if one chooses to dope while the other doesn't, the "defector" receives a higher payoff while the "cooperator" gets the lowest payoff. In this game, doping yields a strictly higher payoff regardless of the opponent's action, so each athlete's rational choice does not depend on what the opponent does.

Finding ourselves in a situation where our best action is independent of our opponent's action is a rare occurrence. Therefore, we start with a less stringent concept that aligns with rationality.

Strict Dominance

A strategy $\sigma_i \in \Delta(S_i)$ is strictly dominated by $\sigma_i' \in \Delta(S_i)$ if
$$u_i(\sigma_i, s_{-i}) < u_i(\sigma_i', s_{-i}) \quad \forall s_{-i} \in S_{-i},$$
where we use $-i$ to refer to the strategy profile of all of player $i$'s opponents. A rational player will never choose a strictly dominated strategy: regardless of the opponents' strategies, there exists a better reply.

A pure strategy $s_i \in S_i$ is strictly dominant if every other pure strategy $s_i' \in S_i$ is strictly dominated by $s_i$. A rational player must choose a strictly dominant strategy: regardless of the opponents' strategies, it is the unique best reply. We will write $s_i \prec_i s_i'$ to denote that $s_i$ is strictly dominated by $s_i'$.
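Checking strict dominance between two pure strategies amounts to a comparison against every opponent strategy. A sketch, using the doping game, where $N$ is strictly dominated by $D$:

```python
# Test whether player 1's pure strategy s is strictly dominated by s_prime:
# u_1(s, s_2) < u_1(s_prime, s_2) for every opponent strategy s_2.
def strictly_dominated(u1, s, s_prime, opponent_strategies):
    return all(u1[(s, t)] < u1[(s_prime, t)] for t in opponent_strategies)

# Athlete 1's payoffs in the doping game.
v1 = {("N", "N"): 0, ("N", "D"): -2, ("D", "N"): 1, ("D", "D"): -1}
```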

Domination by a Mixed Strategy

In some games, no strategy of either player is strictly dominated by another pure strategy, yet a mixed strategy may still strictly dominate. For any $s_1 \in S_1$, the $|S_2|$-dimensional vector $v_1(s_1) := (u_1(s_1, s_2))_{s_2 \in S_2}$ corresponds to player 1's payoff row from playing $s_1$. Any mixed strategy $\sigma_1$ attains a convex combination of payoff rows:
$$v_1(\sigma_1) = \sum_{s_1 \in S_1} \sigma_1(s_1) \, v_1(s_1).$$

Thus, $v_1(\Delta(S_1))$ coincides with the convex hull $V_1 := \operatorname{conv} v_1(S_1)$.
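A hypothetical $3 \times 2$ example (my own illustrative numbers, not the example referenced in the text) shows how a mixture can dominate a pure strategy that no pure strategy dominates: with rows $T = (3,0)$, $M = (1,1)$, $B = (0,3)$, the mixture $\tfrac{1}{2}T + \tfrac{1}{2}B$ attains $(1.5, 1.5)$ and strictly dominates $M$.

```python
# Payoff rows v1(s1) over the opponent's two actions; hypothetical numbers.
rows = {"T": (3, 0), "M": (1, 1), "B": (0, 3)}

def mixed_row(sigma):
    """v1(sigma) = sum_s sigma(s) * v1(s), a convex combination of payoff rows."""
    n = len(next(iter(rows.values())))
    return tuple(sum(sigma.get(s, 0) * rows[s][k] for s in rows) for k in range(n))

mix = mixed_row({"T": 0.5, "B": 0.5})
```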

Dominance and Pareto efficiency

A strategy $\sigma_1$ is strictly dominated by $\sigma_1'$ if
$$u_1(\sigma_1, s_2) < u_1(\sigma_1', s_2) \quad \forall s_2 \in S_2.$$

In terms of payoff rows, $v_1(\sigma_1')$ lies "above and to the right" of $v_1(\sigma_1)$. Undominated payoff rows are on the efficient frontier of $V_1$. In this example, only mixtures of $M$ and $B$ are undominated. Similarly, for player 2, we can obtain the result by looking at the space of payoff columns with the same approach.

Strict Dominant-Strategy Equilibrium

A strategy profile $s$ is a strict dominant-strategy equilibrium if $s_i$ is a strictly dominant strategy for every player $i$.

Some remarks on the properties of this solution concept:

  • Existence: it does not exist in all games, for example in rock-paper-scissors.
  • Predictive power: excellent, since the equilibrium is unique whenever it exists.
  • Robustness: strict dominant-strategy equilibria are robust if $S$ is finite.

Regarding the assumptions on players:

  • Rationality and awareness are sufficient.
  • No knowledge about the rationality of others is required.
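As a closing sketch, a strict dominant-strategy equilibrium can be found by checking, for each player, whether one strategy strictly beats every other against all opponent choices. In the doping game this yields $(D, D)$:

```python
# Find each player's strictly dominant strategy (if any) in a two-player game
# given by payoff tables; in the doping game the result is (D, D).
v1 = {("N", "N"): 0, ("N", "D"): -2, ("D", "N"): 1, ("D", "D"): -1}
v2 = {(s1, s2): v1[(s2, s1)] for (s1, s2) in v1}  # symmetric game
S = ("N", "D")

def dominant(u, player):
    """Return the strictly dominant strategy of `player` (1 or 2), or None."""
    for s in S:
        others = [t for t in S if t != s]
        if player == 1:
            if all(u[(s, o)] > u[(t, o)] for t in others for o in S):
                return s
        else:
            if all(u[(o, s)] > u[(o, t)] for t in others for o in S):
                return s
    return None
```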

  1. An event $E$ is common knowledge if (1) everyone knows $E$, (2) everyone knows that everyone knows $E$, and so on ad infinitum. ↩︎

  2. The set of $n$-dimensional vectors with non-negative entries that sum up to $1$ is called the $(n-1)$-simplex. ↩︎

  3. Formally, with any game $G$ among players $I$ with pure strategy sets $(S_i)_i$ and payoff functions $(u_i)_i$, we can associate a static game $G' = (I, (S_i)_i, (\tilde{u}_i)_i)$ with $\tilde{u}_i(s) = E_s[u_i(A)]$, called the strategic-form game of $G$, to which the results apply. ↩︎