---
title : Public Communication with Externalities TODO
---
## New structure of the argument
0. Model exposition
1. Continuation game:
a. With public signal
\begin{align}
\mathbb{E} (\theta| \hat x, y) = \frac{\alpha y + \beta \hat x + \gamma \mu}{\alpha + \beta + \gamma} &= 0\\
\alpha y + \beta \hat x + \gamma \mu & = 0\\
\hat x &= -\frac{1}{\beta} (\alpha y + \gamma \mu),
\end{align}
b. Without public signal,
$$(1-p)\bar\theta(x_\varnothing) + p \int_{\mathcal{Y}^N} \frac{\int_\Theta \theta f(y|\theta) f(\theta|x_\varnothing) \mathrm{d}\theta}{\int_{\mathcal{Y}^N} f(y|x_\varnothing)\mathrm{d}y}\mathrm{d}y = 0$$
2. Communication stage (informed sender)
a. Conflict of interests
$x^\star(y) = - \frac{ \alpha + \gamma}{\beta}\bar\theta(y) - \frac{\alpha + \beta + \gamma}{\beta}(r+b)$
$\Delta(y) \triangleq \hat x(y) - x^\star(y) = \frac{\alpha + \beta + \gamma}{\beta}(r+b)$
b. Concealment region
\begin{align}
0 & \leq \Pi(x_\varnothing, y) - \Pi(\hat x(y),y)\\
0 & \leq (r+b + \bar \theta(y)) (F(\hat x|y) - F(x_\varnothing|y)) + \frac{1}{\alpha + \gamma} (f(x_\varnothing|y) - f(\hat x |y))\\
0 & \leq r+b + \bar \theta(y) - \frac{1}{\alpha + \gamma} \frac{f(x_\varnothing|y) - f(\hat x |y)}{ F(x_\varnothing|y) - F(\hat x|y)}
\end{align}
\begin{align}
\frac{f(x_\varnothing|y) - f(\hat x |y)}{(\alpha+\gamma)(F(x_\varnothing|y) - F(\hat x|y))(r+b + \bar \theta(y))}=1
\end{align}
Truncated normal r.v. mean:
$$\mathbb{E}(x|x_1<x<x_2) = \bar \theta(y) - \frac{1}{h_{x|y}} \frac{f(x_1|y) - f(x_2|y)}{F(x_1|y) - F(x_2|y)} \Rightarrow \\ \frac{f(x_1|y) - f(x_2|y)}{F(x_1|y) - F(x_2|y)} = h_{x|y} \left(\bar \theta(y) - \mathbb{E}(x|x_1<x<x_2)\right)\\ \in \left(h_{x|y}\left(\bar \theta(y) - x_2\right);\ h_{x|y}\left(\bar \theta(y) - x_1\right) \right)$$
c. Theorem 1
$\mathcal{Y}^N = [y_1, y_2]$
\begin{equation}
\bar{\theta}(x_\varnothing) - p \frac{1}{\beta + \gamma} \frac{f(y_2|x_\varnothing) - f(y_1|x_\varnothing)}{F(y_2|x_\varnothing) - F(y_1|x_\varnothing)} = 0
\end{equation}
\begin{equation}
x_\varnothing = \hat x (y_2) = - \frac{h_{\theta|y}}{\beta} \bar \theta(y_2) = -\frac{1}{\beta}(\alpha y_2 + \gamma \mu) \end{equation}
Proof:
\begin{equation}
\begin{cases}
\left(r+b + \bar \theta(y_1)\right) \left( F(x_\varnothing|y_1) - F(\hat x(y_1)|y_1)\right) &- \frac{1}{\alpha + \gamma}\left(f(x_\varnothing|y_1) - f(\hat x(y_1) |y_1)\right) = 0\\
\bar \theta(x_\varnothing)\left(F(y_2|x_\varnothing) - F(y_1|x_\varnothing)\right) &- \frac{p}{\beta + \gamma}\left(f(y_2|x_\varnothing) - f(y_1 |x_\varnothing)\right) = 0\\
x_\varnothing - \hat x(y_2)&=0
\end{cases}
\end{equation}
## Things to do
- [ ] Insert figures: Sender's and overall welfare gain (left panel) and equilibrium construction.
- [ ] Do comparative statics in $p, r$ and relative precision (effects on $y_1, y_2, x_\varnothing$)
- [ ] Show that welfare is not maximal at $b=b^T$
- [ ] Polish the Introduction
- [ ] Resubmit the paper
- [ ] define `ass:distribution`
- [x] get rid of $F, H, \Psi$
- [ ] Study the function in footnote 16 on p. 18 (plot it)
- [ ] Rewrite Sections 3.2-3.4 to be consistent with Section 2.
- [ ] Rework Section 4
- [ ] Drop Section 5
- [ ] Rewrite Intro & Conclusion
- [ ] Comparative statics wrt $\alpha, \beta$
- [x] $\Pi(x,y)$ in closed form and diagram
\begin{align}
\Pi(x,y) &= (r+b + \bar \theta(y)) (1 - F(x|y)) + \frac{1}{\alpha + \gamma} f(x|y)\\
\partial_x\Pi(x,y) &= - (r+b + \bar \theta(x,y)) f(x|y)\\
\partial_y\Pi(x,y) &=\frac{\alpha}{\alpha+\gamma}(1 - F(x|y) - \partial_x\Pi(x,y))
\end{align}
<iframe src="https://www.desmos.com/calculator/wtkfbblcec?embed" width="500" height="500" style="border: 1px solid #ccc" frameborder=0></iframe>
We show that, for a fixed $x$, the difference $\Pi(\hat x(y),y) - \Pi(x,y)$ has a unique minimum in $y$, that this minimum is negative, and that the difference is positive on the boundary of the signal domain.
\begin{align}
\frac{\mathrm{d}}{\mathrm{d} y}(\Pi(\hat x(y),y) - \Pi(x,y)) &= \partial_y\Pi(\hat x(y),y) + \partial_x\Pi(\hat x(y),y) \frac{\mathrm{d} \hat x (y)}{\mathrm{d} y} - \partial_y\Pi(x,y)\\
&= \partial_y\Pi(\hat x(y),y) -\frac{\alpha}{\beta} \partial_x\Pi(\hat x(y),y) - \partial_y\Pi(x,y)\\
&=\partial_y\Pi(\hat x(y),y) - \partial_y\Pi(x,y) + \frac{\alpha}{\beta}(r+b)f(\hat x|y)
\end{align}
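As a quick numerical sanity check (not part of the formal argument), the sketch below evaluates $\Pi$ under the Gaussian structure implied by the formulas above ($\theta \sim \mathcal N(\mu, 1/\gamma)$, public signal precision $\alpha$, private signal precision $\beta$), verifies $\partial_x\Pi = -(r+b+\bar\theta(x,y))f(x|y)$ by finite differences, and traces $\Pi(\hat x(y),y) - \Pi(x,y)$ over $y$ for a fixed $x$. All parameter values are illustrative placeholders.

```python
# Numerical sanity check for Pi(x, y): illustrative parameters, assumed Gaussian structure
# theta ~ N(mu, 1/gamma), y | theta ~ N(theta, 1/alpha), x_i | theta ~ N(theta, 1/beta).
import numpy as np
from scipy.stats import norm

alpha, beta, gamma, mu = 1.0, 1.5, 0.8, 0.0   # precisions and prior mean (placeholders)
r, b = 0.3, 0.2                               # Sender's payoff parameters (placeholders)

h_xy = beta * (alpha + gamma) / (alpha + beta + gamma)   # precision of x given y

def theta_bar_y(y):                   # E(theta | y)
    return (alpha * y + gamma * mu) / (alpha + gamma)

def f_xy(x, y):                       # density of a receiver's signal given y
    return norm.pdf(x, theta_bar_y(y), h_xy ** -0.5)

def F_xy(x, y):
    return norm.cdf(x, theta_bar_y(y), h_xy ** -0.5)

def Pi(x, y):                         # Sender's interim payoff at receiver threshold x
    return (r + b + theta_bar_y(y)) * (1 - F_xy(x, y)) + f_xy(x, y) / (alpha + gamma)

def x_hat(y):                         # receivers' threshold under disclosure
    return -(alpha * y + gamma * mu) / beta

# 1. Finite-difference check of d Pi / dx = -(r + b + theta_bar(x, y)) f(x|y)
x0, y0, eps = 0.4, 0.1, 1e-6
fd = (Pi(x0 + eps, y0) - Pi(x0 - eps, y0)) / (2 * eps)
theta_bar_xy = (alpha * y0 + beta * x0 + gamma * mu) / (alpha + beta + gamma)
print(fd, -(r + b + theta_bar_xy) * f_xy(x0, y0))        # should agree to ~1e-6

# 2. Trace Pi(x_hat(y), y) - Pi(x, y) over y for a fixed x: inspect the interior
#    minimum and the values near the edges of the plotted range.
x_fixed = x_hat(0.5)                  # an arbitrary fixed threshold
ys = np.linspace(-2, 2, 201)
diff = np.array([Pi(x_hat(y), y) - Pi(x_fixed, y) for y in ys])
print(diff.min(), diff[0], diff[-1])
```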
---
### Continuation game equilibrium
Let all the receivers play the threshold strategy $\hat x (y)$. Consider receiver $i$, who observes public message $m$ and his private signal $x_i$.
Receiver $i$'s expected payoff is
$$ rA + \mathbb{E}(\theta| x_i,m)a_i.$$
It is obvious that $i$'s best response to $\hat x$ is playing
$$a_i(x_i) = \begin{cases} 1, \quad & \text{if } \mathbb{E}(\theta| x_i,m) \geq 0, \\
0, \quad & \text{if } \mathbb{E}(\theta| x_i,m) < 0.
\end{cases}$$
Therefore, for receiver $i$ to be indifferent between taking the action and staying put, his posterior expectation of $\theta$ must be zero:
$$\mathbb{E}(\theta| x_i,m) = 0.$$
The equilibrium threshold strategy of each individual receiver only depends on the public message $m$ and does not depend on the play of other receivers.
If the public message is the signal received by the Sender, $m=y$, it is straightforward, given the normality of the signals, to show that the equilibrium play is a linear function of the public signal:
Since the posterior of $\theta$ given $\hat{x}$ and $y$ is normal, we have
\begin{align}
\mathbb{E} (\theta| \hat x, y) = \frac{\alpha y + \beta \hat x + \gamma \mu}{\alpha + \beta + \gamma} &= 0\\
\alpha y + \beta \hat x + \gamma \mu & = 0\\
\hat x &= -\frac{1}{\beta} (\alpha y + \gamma \mu),
\end{align}
$$\hat x(y) = - \frac{ \alpha + \gamma}{\beta} \bar\theta(y).$$
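A small consistency check on this linear rule (illustrative precisions, assuming the Gaussian structure above): the projection coefficients of $\theta$ on $(x,y)$, computed from the joint covariance matrix, should equal $\frac{\beta}{\alpha+\beta+\gamma}$ and $\frac{\alpha}{\alpha+\beta+\gamma}$.

```python
# Check the linear posterior-mean rule via projection coefficients (placeholder precisions).
import numpy as np

alpha, beta, gamma = 1.0, 1.5, 0.8
var_theta = 1 / gamma                                   # prior variance of theta

# Joint covariance of the signals (x, y) and their covariance with theta,
# under x = theta + eps_beta, y = theta + eps_alpha.
Sigma_signals = np.array([[var_theta + 1 / beta, var_theta],
                          [var_theta, var_theta + 1 / alpha]])
cov_theta_signals = np.array([var_theta, var_theta])

coeffs = cov_theta_signals @ np.linalg.inv(Sigma_signals)   # weights on (x, y) in E(theta|x,y)
print(coeffs)
print(beta / (alpha + beta + gamma), alpha / (alpha + beta + gamma))   # expected weights
```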
If the Sender's message is empty, $m=\varnothing$, a receiver understands that either the Sender is concealing public information or she is not informed herself. Using the posterior density for this case, defined in \eqref{bayesupdate}, the indifference condition \eqref{eq:marginal_receiver_indifference_no_info} becomes
\begin{align}
&(1-p)\bar\theta(x) + p \int_{y_1}^{y_2} \frac{\int_\Theta \theta f(y|\theta) f(\theta|x) \mathrm{d}\theta}{\int_{y_1}^{y_2} f(y|x)\mathrm{d}y}\mathrm{d}y &= 0\\
\text{(Bayes' theorem)}\quad &(1-p)\bar\theta(x) + p \int_{y_1}^{y_2} \frac{\int_\Theta \theta f(\theta|y,x) f(y|x) \mathrm{d}\theta}{\int_{y_1}^{y_2} f(y|x)\mathrm{d}y}\mathrm{d}y &= 0\\
\text{(integrate over $\theta$)}\quad &(1-p)\bar\theta(x) + p \int_{y_1}^{y_2} \frac{\bar\theta(y,x) f(y|x)}{\int_{y_1}^{y_2} f(y|x)\mathrm{d}y}\mathrm{d}y &= 0\\
\text{(linearity of $\bar\theta$)}\quad&(1-p)\bar\theta(x) + p\left( \frac{\alpha}{\alpha+\beta+\gamma}\int_{y_1}^{y_2} y\frac{f(y|x)}{\int_{y_1}^{y_2} f(y|x)\mathrm{d}y}\mathrm{d}y + \frac{\beta x + \gamma \mu}{\alpha+\beta+\gamma}\right) &= 0\\
\text{(truncated normal mean)}\quad &(1-p)\bar\theta(x) + p\left(\bar\theta(x) - \frac{\alpha}{\alpha + \beta + \gamma}\frac{1}{h_{y|x}} \frac{f(y_2|x) - f(y_1|x)}{F(y_2|x) - F(y_1|x)} \right) &= 0.
\end{align}
Finally, expanding the brackets and simplifying, we obtain the following characterization of the receivers' no-information threshold, $\hat x (\varnothing)$:
\begin{equation}
\bar{\theta}(\hat{x}(\varnothing)) - p \frac{1}{\beta + \gamma} \frac{f(y_2|\hat{x}(\varnothing)) - f(y_1|\hat{x}(\varnothing))}{F(y_2|\hat{x}(\varnothing)) - F(y_1|\hat{x}(\varnothing))} = 0
\end{equation}
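The sketch below solves this condition for $\hat x(\varnothing)$ numerically and cross-checks the solution against the integral form of the indifference condition above. The values of $p$, $(y_1,y_2)$ and the precisions are illustrative assumptions, and the concealment bounds are taken as given rather than determined in equilibrium.

```python
# Solve the no-information indifference condition for x_hat(no info) and cross-check it
# against the integral form. p, (y1, y2) and the precisions are illustrative assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import norm

alpha, beta, gamma, mu, p = 1.0, 1.5, 0.8, 0.0, 0.6
y1, y2 = -0.5, 0.7                                  # concealment region, taken as given

h_yx = alpha * (beta + gamma) / (alpha + beta + gamma)   # precision of y given x

def theta_bar_x(x):                                 # E(theta | x)
    return (beta * x + gamma * mu) / (beta + gamma)

def f_yx(y, x):
    return norm.pdf(y, theta_bar_x(x), h_yx ** -0.5)

def F_yx(y, x):
    return norm.cdf(y, theta_bar_x(x), h_yx ** -0.5)

def G(x):                                           # reduced indifference condition
    ratio = (f_yx(y2, x) - f_yx(y1, x)) / (F_yx(y2, x) - F_yx(y1, x))
    return theta_bar_x(x) - p * ratio / (beta + gamma)

x_no_info = brentq(G, -10, 10)
print("x_hat(no info) =", x_no_info)

# Cross-check with the integral form:
# (1-p) theta_bar(x) + p E(theta_bar(x,y) | y in [y1,y2], x) = 0
def theta_bar_xy(x, y):
    return (alpha * y + beta * x + gamma * mu) / (alpha + beta + gamma)

mass = F_yx(y2, x_no_info) - F_yx(y1, x_no_info)
trunc_mean = quad(lambda y: theta_bar_xy(x_no_info, y) * f_yx(y, x_no_info), y1, y2)[0] / mass
print((1 - p) * theta_bar_x(x_no_info) + p * trunc_mean)    # should be ~0
```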
---
- [x] characterise Sender optimal threshold $x^\star(y)$
$$x^\star(y) = - \frac{ \alpha + \gamma}{\beta}\bar\theta(y) - \frac{\alpha + \beta + \gamma}{\beta}(r+b)$$
$$\Delta(y) \triangleq \hat x(y) - x^\star(y) = \frac{\alpha + \beta + \gamma}{\beta}(r+b)$$
> Proposition 3 shows that the endogenous conflict of interests is (i) increasing in $|r+b|$, (ii) increasing in the precision of the Sender's signal, $\alpha$, and (iii) decreasing in the precision of the receivers' signals, $\beta$.
> Intuitively, the Sender finds aggregate participation insufficient whenever $r+b>0$, and thus will choose to conceal the public signal as long as doing so increases participation.
> Likewise, the conflict of interest is smaller the greater the share of information, $\frac{\beta}{\alpha + \beta + \gamma}$, "endowed" to the receivers.
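A short symbolic restatement of this algebra (sympy is used only as a bookkeeping device; the first-order condition $r+b+\bar\theta(x,y)=0$ comes from the expression for $\partial_x\Pi$ above):

```python
# Symbolic bookkeeping: recover x*(y) from r + b + theta_bar(x, y) = 0 and Delta(y).
import sympy as sp

alpha, beta, gamma, mu, r, b, x, y = sp.symbols('alpha beta gamma mu r b x y')

theta_bar_xy = (alpha * y + beta * x + gamma * mu) / (alpha + beta + gamma)
x_star = sp.solve(sp.Eq(r + b + theta_bar_xy, 0), x)[0]      # Sender's preferred threshold

x_hat = -(alpha * y + gamma * mu) / beta                     # receivers' disclosure threshold
Delta = sp.simplify(x_hat - x_star)
print(sp.simplify(Delta - (alpha + beta + gamma) * (r + b) / beta))   # expect 0
```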
$\psi(\theta|\varnothing,x_i) = (1 - p + p\frac{\int_{y_1}^{y_2} f(y|\theta)\mathrm{d}y}{\int_{y_1}^{y_2} f(y|x_i)\mathrm{d}y})f(\theta|x_i)$
- [ ] Redo the proof of Theorem 1
- [ ] appendix
- [ ] main text
- [ ] characterize $\tilde x_\varnothing$
For $y \in [y_1,y_2]$ the informed Sender prefers to conceal the signal, inducing receivers to play according to the no-information threshold $x_\varnothing$ rather than $\hat x(y)$:
$$\Pi(x_\varnothing, y) \geq \Pi(\hat x(y),y)$$
\begin{align}
0 & \leq \Pi(x_\varnothing, y) - \Pi(\hat x(y),y)\\
0 & \leq (r+b + \bar \theta(y)) (F(\hat x|y) - F(x_\varnothing|y)) + \frac{1}{\alpha + \gamma} (f(x_\varnothing|y) - f(\hat x |y))\\
0 & \leq r+b + \bar \theta(y) - \frac{1}{\alpha + \gamma} \frac{f(x_\varnothing|y) - f(\hat x |y)}{ F(x_\varnothing|y) - F(\hat x|y)}
\end{align}
\begin{equation}
x_\varnothing = \hat x (y_2) = - \frac{h_{\theta|y}}{\beta} \bar \theta(y_2) = -\frac{1}{\beta}(\alpha y_2 + \gamma \mu) \end{equation}
System that characterizes the equilibrium $(y_1,x_\varnothing)$:
\begin{equation}
\begin{cases}
\left(r+b + \bar \theta(y_1)\right) \left( F(x_\varnothing|y_1) - F(\hat x(y_1)|y_1)\right) &- \frac{1}{\alpha + \gamma}\left(f(x_\varnothing|y_1) - f(\hat x(y_1) |y_1)\right) = 0\\
\bar \theta(x_\varnothing)\left(F(y_2|x_\varnothing) - F(y_1|x_\varnothing)\right) &- \frac{p}{\beta + \gamma}\left(f(y_2|x_\varnothing) - f(y_1 |x_\varnothing)\right) = 0\\
x_\varnothing - \hat x(y_2)&=0
\end{cases}
\end{equation}
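A numerical sketch of the equilibrium construction: eliminate $y_2$ through $x_\varnothing = \hat x(y_2)$ and solve the remaining two equations for $(y_1, x_\varnothing)$ with a root finder. Parameter values and the initial guess are ad hoc; existence and uniqueness of a solution are not checked here.

```python
# Solve the equilibrium system for (y1, x_nd), eliminating y2 via x_nd = x_hat(y2).
# Illustrative parameters and ad hoc initial guess; check the printed residual.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

alpha, beta, gamma, mu, r, b, p = 1.0, 1.5, 0.8, 0.0, 0.3, 0.2, 0.6

h_xy = beta * (alpha + gamma) / (alpha + beta + gamma)
h_yx = alpha * (beta + gamma) / (alpha + beta + gamma)
theta_bar_y = lambda y: (alpha * y + gamma * mu) / (alpha + gamma)
theta_bar_x = lambda x: (beta * x + gamma * mu) / (beta + gamma)
x_hat = lambda y: -(alpha * y + gamma * mu) / beta
y_from_x = lambda x: -(beta * x + gamma * mu) / alpha        # y2 such that x_nd = x_hat(y2)

f_xy = lambda x, y: norm.pdf(x, theta_bar_y(y), h_xy ** -0.5)
F_xy = lambda x, y: norm.cdf(x, theta_bar_y(y), h_xy ** -0.5)
f_yx = lambda y, x: norm.pdf(y, theta_bar_x(x), h_yx ** -0.5)
F_yx = lambda y, x: norm.cdf(y, theta_bar_x(x), h_yx ** -0.5)

def system(z):
    y1, x_nd = z
    y2 = y_from_x(x_nd)
    sender = ((r + b + theta_bar_y(y1)) * (F_xy(x_nd, y1) - F_xy(x_hat(y1), y1))
              - (f_xy(x_nd, y1) - f_xy(x_hat(y1), y1)) / (alpha + gamma))
    receiver = (theta_bar_x(x_nd) * (F_yx(y2, x_nd) - F_yx(y1, x_nd))
                - p * (f_yx(y2, x_nd) - f_yx(y1, x_nd)) / (beta + gamma))
    return [sender, receiver]

y1_sol, x_nd_sol = fsolve(system, [-0.5, 0.0])
print("y1 =", y1_sol, " x_nd =", x_nd_sol, " y2 =", y_from_x(x_nd_sol))
print("residual:", system([y1_sol, x_nd_sol]))               # should be ~(0, 0) if converged
```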
There is a certain symmetry in the above system of equations:
\begin{align}
\text{Sender's indifference:} \quad &A^1\cdot (\Phi(P^1) - \Phi(Q^1)) + a^1 (\phi(P^1) - \phi(Q^1)) &&= 0\\
\text{Receiver's indifference:} \quad &A^2\cdot (\Phi(P^2) - \Phi(Q^2)) + a^2 (\phi(P^2) - \phi(Q^2)) &&= 0,
\end{align}
where $A^i,P^i,Q^i$ are linear polynomials in $x_\varnothing, y_1$ and $a^i$ are constants, all specified in the table below:
|Poly|Sender's indifference|Receiver's indifference|
|---|---|---|
|$A$|$r+b +\frac{\gamma \mu}{\alpha+\gamma} + \frac{\alpha}{\alpha+\gamma} y$ | $\frac{\gamma \mu}{\beta+\gamma} + \frac{\beta}{\beta+\gamma} x_\varnothing$|
|- $A_x$|$0$ | $\frac{\beta}{\beta+\gamma}$|
|- $A_y$|$\frac{\alpha}{\alpha+\gamma}$ | $0$|
|$P$|$\sqrt{h_{x\mid y}} \left(x_\varnothing - \frac{\alpha}{\alpha+\gamma}y - \frac{\gamma \mu}{\alpha+\gamma}\right)$|$-\frac{1}{\sqrt{h_{y\mid x}}}(\beta x_\varnothing + \gamma \mu)$|
|- $P_x$|$\sqrt{h_{x\mid y}}$|$-\frac{\beta}{\sqrt{h_{y\mid x}}}$|
|- $P_y$|$-\sqrt{h_{x\mid y}} \frac{\alpha}{\alpha+\gamma}$|$0$|
|- $A - a P$|$r+b + \bar \theta(x,y)$|$(1-p) \bar \theta(x)$|
|$Q$|$-\frac{1}{\sqrt{h_{x\mid y}}}(\alpha y + \gamma \mu)$|$\sqrt{h_{y\mid x}} \left(y- \frac{\beta}{\beta+\gamma}x_\varnothing - \frac{\gamma \mu}{\beta+\gamma}\right)$|
|- $Q_x$|$0$|$-\sqrt{h_{y\mid x}} \frac{\beta}{\beta+\gamma}$|
|- $Q_y$|$-\frac{\alpha}{\sqrt{h_{x\mid y}}}$|$\sqrt{h_{y\mid x}}$|
|- $A - aQ$|$r+b$|$(1-p)\bar\theta(x) + p \bar \theta(x,y)$|
|$a$|$-\frac{\sqrt{h_{x\mid y}}}{h_{\theta\mid y}}$|$-p\frac{\sqrt{h_{y\mid x}}}{h_{\theta\mid x}}$|
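A spot check (illustrative parameters, arbitrary evaluation point) that the $(A, P, Q, a)$ parameterization in the table reproduces the original form of the Sender's indifference expression:

```python
# Spot check: the (A, P, Q, a) parameterization reproduces the Sender's indifference
# expression at an arbitrary point (illustrative parameters).
import numpy as np
from scipy.stats import norm

alpha, beta, gamma, mu, r, b = 1.0, 1.5, 0.8, 0.0, 0.3, 0.2
h_xy = beta * (alpha + gamma) / (alpha + beta + gamma)

x_nd, y1 = -0.4, -0.6                            # arbitrary evaluation point
theta_bar_y1 = (alpha * y1 + gamma * mu) / (alpha + gamma)
x_hat_y1 = -(alpha * y1 + gamma * mu) / beta
sd = h_xy ** -0.5

# Original form of the Sender's indifference expression
orig = ((r + b + theta_bar_y1)
        * (norm.cdf(x_nd, theta_bar_y1, sd) - norm.cdf(x_hat_y1, theta_bar_y1, sd))
        - (norm.pdf(x_nd, theta_bar_y1, sd) - norm.pdf(x_hat_y1, theta_bar_y1, sd)) / (alpha + gamma))

# Table parameterization: A (Phi(P) - Phi(Q)) + a (phi(P) - phi(Q)) with standard normal Phi, phi
A = r + b + gamma * mu / (alpha + gamma) + alpha * y1 / (alpha + gamma)
P = np.sqrt(h_xy) * (x_nd - alpha * y1 / (alpha + gamma) - gamma * mu / (alpha + gamma))
Q = -(alpha * y1 + gamma * mu) / np.sqrt(h_xy)
a = -np.sqrt(h_xy) / (alpha + gamma)
param = A * (norm.cdf(P) - norm.cdf(Q)) + a * (norm.pdf(P) - norm.pdf(Q))

print(orig, param)                               # the two numbers should coincide
```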
The Jacobian matrix of the system is
$$\begin{bmatrix}
A^1_x(\Phi(P^1) - \Phi(Q^1)) + (A^1 - a^1 P^1)\phi(P^1)P^1_x - (A^1 - a^1 Q^1)\phi(Q^1)Q^1_x & A^1_y(\Phi(P^1) - \Phi(Q^1)) + (A^1 - a^1 P^1)\phi(P^1)P^1_y - (A^1 - a^1 Q^1)\phi(Q^1)Q^1_y\\
A^2_x(\Phi(P^2) - \Phi(Q^2)) + (A^2 - a^2 P^2)\phi(P^2)P^2_x - (A^2 - a^2 Q^2)\phi(Q^2)Q^2_x & A^2_y(\Phi(P^2) - \Phi(Q^2)) + (A^2 - a^2 P^2)\phi(P^2)P^2_y - (A^2 - a^2 Q^2)\phi(Q^2)Q^2_y
\end{bmatrix}$$
Substituting the entries from the table (in particular $A^1_x = Q^1_x = A^2_y = P^2_y = 0$ and the $A - aP$, $A - aQ$ rows), this simplifies to
$$\begin{bmatrix}
(r + b + \bar \theta(x,y))\phi(P^1)P^1_x & A^1_y(\Phi(P^1) - \Phi(Q^1)) + (r + b + \bar \theta(x,y))\phi(P^1)P^1_y - (r+b)\phi(Q^1)Q^1_y\\
A^2_x(\Phi(P^2) - \Phi(Q^2)) + (1-p)\bar \theta(x)\phi(P^2)P^2_x - ((1-p)\bar \theta(x) + p\bar \theta(x,y))\phi(Q^2)Q^2_x & - ((1-p)\bar \theta(x) + p\bar \theta(x,y))\phi(Q^2)Q^2_y
\end{bmatrix}$$


- [ ] Graphs for $y_1, y_2$, bounds of the non-disclosure region
## Welfare
- [ ] Welfare-optimal disclosure policy
\begin{align}
\max_{y_1,y_2,x_\varnothing} V &=\int_\Theta \Bigg(\left(1-p + p (F(y_2|\theta) - F(y_1|\theta))\right)\cdot (r+\theta)(1-F(x_\varnothing|\theta))\\
&+ p\int_{y_1}^{y_2}(r+\theta)(1 - F(\hat x(y)|\theta))f(y|\theta)\mathrm{d}y \Bigg) \mathrm{d}F(\theta)\\
&= V_\text{Full Disclosure} + \underbrace{p\int_\Theta \int_{y_1}^{y_2} (r+\theta)\cdot (F(x_\varnothing\mid\theta) - F(\hat x(y)\mid\theta))\mathrm{d}F(\theta,y)}_{\text{Welfare gain}}\\
\text{s.t.} & \quad \bar \theta(x_\varnothing) = p \frac{1}{\beta + \gamma} \frac{f(y_2|x_\varnothing) -f(y_1 |x_\varnothing)}{F(y_2|x_\varnothing) -F(y_1 |x_\varnothing)}
\end{align}
Changing the order of integration and using the expression for the Sender's interim payoff, the welfare gain can be further decomposed into the Sender's "concealment gain" and the aggregate "action gain":
$$\Delta V = p \underbrace{\int_{y_1}^{y_2}\left(\Pi(x_\varnothing,y) - \Pi(\hat x (y), y) \right) \mathrm{d} F(y)}_\text{Sender's Concealment Gain} - b\underbrace{\int_{y_1}^{y_2} \left(F(\hat x(y) \mid y) - F(x_\varnothing\mid y)\right) \mathrm{d} F(y)}_\text{Aggregate Action Gain} $$
The Sender's concealment gain is always positive by the definition of the interval $[y_1,y_2]$. The sign of the aggregate action gain depends on the sign of $r+b$. If $r+b >0$, the endogenous conflict of interest is positive and concealing the public signal is the Sender's means of inducing more aggregate action (a lower equilibrium threshold, $x_\varnothing < \hat x(y)$ for all $y\in (y_1,y_2)$). If $r+b<0$, the conflict of interest is negative: the Sender prefers less action than under full disclosure, and concealing the signal is her means of achieving that.
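The decomposition can be evaluated numerically along these lines (placeholder parameters; $(y_1,y_2,x_\varnothing)$ are treated as given rather than equilibrium values, and the marginal density of $y$ is taken to be $\mathcal N(\mu, 1/\gamma + 1/\alpha)$ as implied by the assumed Gaussian structure):

```python
# Evaluate the welfare-gain decomposition for given (y1, y2, x_nd); placeholder parameters,
# marginal density of y assumed N(mu, 1/gamma + 1/alpha).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

alpha, beta, gamma, mu, r, b, p = 1.0, 1.5, 0.8, 0.0, 0.3, 0.2, 0.6
y1, y2, x_nd = -0.5, 0.7, -0.3                  # taken as given, not equilibrium values

h_xy = beta * (alpha + gamma) / (alpha + beta + gamma)
theta_bar_y = lambda y: (alpha * y + gamma * mu) / (alpha + gamma)
x_hat = lambda y: -(alpha * y + gamma * mu) / beta
f_xy = lambda x, y: norm.pdf(x, theta_bar_y(y), h_xy ** -0.5)
F_xy = lambda x, y: norm.cdf(x, theta_bar_y(y), h_xy ** -0.5)

def Pi(x, y):                                    # Sender's interim payoff
    return (r + b + theta_bar_y(y)) * (1 - F_xy(x, y)) + f_xy(x, y) / (alpha + gamma)

f_y = lambda y: norm.pdf(y, mu, np.sqrt(1 / gamma + 1 / alpha))   # marginal density of y

concealment_gain = quad(lambda y: (Pi(x_nd, y) - Pi(x_hat(y), y)) * f_y(y), y1, y2)[0]
action_gain = quad(lambda y: (F_xy(x_hat(y), y) - F_xy(x_nd, y)) * f_y(y), y1, y2)[0]
print(concealment_gain, action_gain, p * concealment_gain - b * action_gain)
```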


We apply the implicit function theorem, taking the total differential of the equilibrium system with respect to $b$.
Denote by $J$ the matrix of first partial derivatives of the system; the total differential then takes the matrix form:
\begin{equation}
\begin{bmatrix}
\Delta F(x\mid y)\\0
\end{bmatrix}
+ J \cdot
\begin{bmatrix}
\frac{\partial x}{\partial b}\\
\frac{\partial y}{\partial b}
\end{bmatrix} = 0
\end{equation}
Naturally,
\begin{equation}
\begin{bmatrix}
\frac{\partial x}{\partial b}\\
\frac{\partial y}{\partial b}
\end{bmatrix}
= - J^{-1} \cdot
\begin{bmatrix}
\Delta F(x\mid y)\\0
\end{bmatrix}
\end{equation}
The Jacobian is given by
\begin{equation}
\begin{bmatrix}
(r + b + \bar \theta(x,y)) f(x\mid y) & \frac{\alpha}{\alpha + \gamma}\Delta F(x\mid y) - \frac{\alpha}{\alpha+\gamma} (r + b + \bar \theta(x,y))f(x\mid y) + (r+b)\frac{\alpha}{h_{x\mid y}} f(\hat x(y)\mid y)\\
\frac{\beta}{\beta + \gamma}\Delta F(y\mid x) -\frac{\beta}{h_{y\mid x}} (1-p)\bar \theta(x)f(y_2\mid x) + ((1-p)\bar \theta(x) + p\bar \theta(x,y)) \frac{\beta}{\beta + \gamma} f(y\mid x) & - ((1-p)\bar \theta(x) + p\bar \theta(x,y))f(y\mid x)
\end{bmatrix}.
\end{equation}
In equilibrium,
\begin{equation}
\begin{cases}
\Delta F(x\mid y) = \frac{1}{h_{\theta\mid y}\left(r+b + \bar \theta(y)\right)}\Delta f(x\mid y)\\
\Delta F(y\mid x) = \frac{p}{h_{\theta\mid x}\bar \theta(x)}\Delta f(y\mid x),
\end{cases}
\end{equation}
so the Jacobian evaluated at the equilibrium point is given by
\begin{equation}
\begin{bmatrix}
(r + b + \bar \theta(x,y)) f(x| y) & \left(\frac{\alpha}{h^2_{\theta| y}\left(r+b + \bar \theta(y)\right)} - \frac{\alpha}{h_{\theta| y}} (r + b + \bar \theta(x,y)) \right) f(x| y) - \left(\frac{\alpha}{h^2_{\theta| y}\left(r+b + \bar \theta(y)\right)} - (r+b)\frac{\alpha}{h_{x| y}}\right) f(\hat x| y)\\
\left( \frac{\beta p}{h^2_{\theta| x}\bar \theta(x)} -\frac{\beta}{h_{y| x}} (1-p)\bar \theta(x) \right) f(y_2| x) - \left( \frac{\beta p}{h^2_{\theta| x}\bar \theta(x)} - ((1-p)\bar \theta(x) + p\bar \theta(x,y)) \frac{\beta}{h_{\theta| x}} \right) f(y| x) & - ((1-p)\bar \theta(x) + p\bar \theta(x,y))f(y| x)
\end{bmatrix}.
\end{equation}
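The comparative statics in $b$ can be computed numerically along the lines of the implicit-function-theorem formula above, using a finite-difference Jacobian instead of the closed-form entries (illustrative parameters; the equilibrium point is obtained with the same ad hoc root-finding as in the earlier sketch):

```python
# Comparative statics d(y1, x_nd)/db via the implicit function theorem, with a
# finite-difference Jacobian instead of the closed-form entries (illustrative parameters).
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

alpha, beta, gamma, mu, r, p = 1.0, 1.5, 0.8, 0.0, 0.3, 0.6
h_xy = beta * (alpha + gamma) / (alpha + beta + gamma)
h_yx = alpha * (beta + gamma) / (alpha + beta + gamma)

theta_bar_y = lambda y: (alpha * y + gamma * mu) / (alpha + gamma)
theta_bar_x = lambda x: (beta * x + gamma * mu) / (beta + gamma)
x_hat = lambda y: -(alpha * y + gamma * mu) / beta
y_from_x = lambda x: -(beta * x + gamma * mu) / alpha
f_xy = lambda x, y: norm.pdf(x, theta_bar_y(y), h_xy ** -0.5)
F_xy = lambda x, y: norm.cdf(x, theta_bar_y(y), h_xy ** -0.5)
f_yx = lambda y, x: norm.pdf(y, theta_bar_x(x), h_yx ** -0.5)
F_yx = lambda y, x: norm.cdf(y, theta_bar_x(x), h_yx ** -0.5)

def system(z, b):
    y1, x_nd = z
    y2 = y_from_x(x_nd)
    sender = ((r + b + theta_bar_y(y1)) * (F_xy(x_nd, y1) - F_xy(x_hat(y1), y1))
              - (f_xy(x_nd, y1) - f_xy(x_hat(y1), y1)) / (alpha + gamma))
    receiver = (theta_bar_x(x_nd) * (F_yx(y2, x_nd) - F_yx(y1, x_nd))
                - p * (f_yx(y2, x_nd) - f_yx(y1, x_nd)) / (beta + gamma))
    return np.array([sender, receiver])

b0 = 0.2
z0 = fsolve(lambda z: system(z, b0), [-0.5, 0.0])            # equilibrium point (if it converges)

eps = 1e-5
J = np.column_stack([(system(z0 + eps * e, b0) - system(z0 - eps * e, b0)) / (2 * eps)
                     for e in np.eye(2)])                    # Jacobian w.r.t. (y1, x_nd)
dG_db = (system(z0, b0 + eps) - system(z0, b0 - eps)) / (2 * eps)
print(-np.linalg.solve(J, dG_db))                            # (dy1/db, dx_nd/db)
```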
---
### Accomplished (most recent first)
- [x] change the payoff of safe action to 0
- [x] rewrite the equilibrium private posterior belief
- [x] change prior distribution to normal
- [x] add statement about the prior
- [x] edit the formulas accordingly
- [x] add intermediate line into the receiver expost payoff equation
- [x] Redraw Figure 1: the bold curve above has to be a straight line
- [x] Set up `git` tracking
- [x] Modular structure