# Task 1
a) Let's recall the definition of strategic dominance:
> Strategic dominance occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play
There are no dominant strategies for either player, since no strategy is better than every alternative against all possible strategies the opponent can play.
b) However, there are strictly dominated strategies in this example. Let's recall what it means for a strategy to be strictly dominated:
> Strategy B is strictly dominated if some other strategy exists that strictly dominates B
So, strictly dominated strategies **for player one**:
1. C (strictly dominated by A)
2. B (strictly dominated by D)
**For player two** there are neither strictly nor weakly dominated strategies.
c) The equilibrium of the game is (7, 4).
*Steps to reach it:*


# Task 2
a)


b)

# Task 3
**Case with known limit of played steps**
If the Prisoner's Dilemma is played exactly N times (or with a known upper bound after which the game ends), and all players know this, then defecting in every round is the only subgame-perfect Nash equilibrium. The argument is by backward induction: on the last round, a player may as well defect, because the opponent has no later chance to retaliate, so both players defect in the last round. Given that, on the second-to-last round a player may as well defect, because the opponent will defect in the last round regardless of what is done now, and so on back to the first round.
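The backward-induction argument can be sketched numerically. This is a minimal sketch, assuming the standard illustrative payoffs (T=5 > R=3 > P=1 > S=0), which are not taken from the task itself:

```python
# Backward induction on an N-round Prisoner's Dilemma.
# Payoff values are an illustrative assumption (T=5 > R=3 > P=1 > S=0).
PAYOFF = {  # (my move, opponent's move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def round_dominant_move(future_value):
    """Return the move that is best against *every* opponent move this round.
    The continuation value is the same whichever move we pick now, so it
    cancels out of the comparison and defection dominates in every round."""
    def total(me, opp):
        return PAYOFF[(me, opp)] + future_value
    if all(total("D", opp) > total("C", opp) for opp in "CD"):
        return "D"
    return None  # no dominant move (cannot happen with these payoffs)

def backward_induction(n_rounds):
    plan, future = [], 0
    for _ in range(n_rounds):            # reason from the last round backwards
        move = round_dominant_move(future)
        plan.append(move)
        future += PAYOFF[(move, move)]   # both players defect from here on
    return plan[::-1]

print(backward_induction(5))  # ['D', 'D', 'D', 'D', 'D']
```

The key step is that `future_value` is identical for both candidate moves, so only the one-round payoff matters, and in a single round defection strictly dominates.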
**Case with no known limit of played steps**
In this case the idea of defecting on the last step does not work, since it is not clear which round will be the last one.
We can divide this case into two smaller ones:
1. **Iterated prisoner's dilemma in population (>2)**
In this case, it is more profitable to play the *Tit for Tat* strategy. The reason this strategy is more robust than others is that it satisfies the criteria Axelrod introduced for a successful strategy: it is nice, retaliating, and non-envious[1]. A player using *Tit for Tat* thus profits from cooperating players while not being exploited by always-defect players, i.e. without being a blind optimist. Thereby it is an evolutionarily stable strategy.
An even better strategy could be *forgiving Tit for Tat*, as Axelrod stated[1]: after the opponent defects, it still cooperates with some small probability (1-5%). This makes it easier to break out of a loop of mutual defections when occasional errors happen.
*Remark:* The above statements hold only if there are at least some (>=2) players in the population ready to cooperate. With such diversity, a Tit for Tat player can survive natural selection, since it gains more through cooperation than always-defecting players do. But in a population consisting only of defectors, the evolutionarily stable strategy is *always defect*. That is, it always pays to have at least one person in your population you can trust.
2. **Iterated Prisoner's Dilemma with 2 players**
As mentioned above, when always-defecting players prevail, the Tit for Tat strategy fails, because Tit for Tat's first move is an attempt to cooperate with someone who always defects. Since a player cannot know which type of opponent they will face, the best strategy for survival (2 players being a population of size 2) is to always defect and not lose points in futile attempts to cooperate.
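Both cases above can be illustrated with a toy round-robin tournament, a sketch in the spirit of Axelrod's experiments (the strategy set, payoff values, and 5% forgiveness rate are illustrative assumptions, not the exact entries from Axelrod's book):

```python
import random

random.seed(0)  # make the forgiveness coin flips reproducible

# Illustrative payoffs: (row player's score, column player's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then mirror the opponent's last move.
    return opp_hist[-1] if opp_hist else "C"

def forgiving_tft(my_hist, opp_hist, p_forgive=0.05):
    # Like Tit for Tat, but after a defection still cooperates 5% of the time.
    if opp_hist and opp_hist[-1] == "D":
        return "C" if random.random() < p_forgive else "D"
    return "C"

def always_defect(my_hist, opp_hist):
    return "D"

def play_match(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# A second Tit for Tat entrant plays the role of the ">=2 cooperators" remark.
players = {"TFT": tit_for_tat, "TFT2": tit_for_tat,
           "ForgivingTFT": forgiving_tft, "AllD": always_defect}
totals = {name: 0 for name in players}
for a in players:
    for b in players:
        if a < b:  # each unordered pair plays exactly once, no self-play
            sa, sb = play_match(players[a], players[b])
            totals[a] += sa; totals[b] += sb
print(totals)  # TFT-style players outscore AllD once cooperators are present
```

Removing the second cooperator from `players` reverses the ranking, matching the remark above: with no one to cooperate with, Tit for Tat only loses its opening move to the defector, and always-defect becomes the best one can do.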
# Task 4
2
# References:
1. Axelrod, R. (2006). *The Evolution of Cooperation*. New York, NY: Basic Books. https://ee.stanford.edu/~hellman/Breakthrough/book/pdfs/axelrod.pdf