---
tags: stat340, learning-targets
---
# Stat 340 Learning Target Quiz 2 Study Guide
Learning Target Quiz #2 will include questions on learning targets 1, 7, and 8. In addition, you can reattempt targets that appeared on Quiz 1.
#### 1. Given your prior belief, specify an appropriate prior distribution for a univariate model.
- Know what prior elicitation (tuning) is and describe what it means to do elicitation well
- Define informative prior, weak/diffuse/flat prior, conjugate prior and understand when each might be used
- Construct a discrete prior distribution to express the plausibility of parameter values before sampling
- Construct a continuous prior distribution to express the plausibility of parameter values before sampling. The book discusses two primary methods: the prior sample size method and the quantile method.
- Construct a weakly informative (vague) prior distribution to express the lack of prior information
- Construct a conjugate prior distribution to simplify calculation of the posterior distribution. The book presents a few examples (e.g., beta-binomial) but you should be able to utilize this idea more generally.
- Understand how the posterior and prior are related in sequential Bayesian analysis
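The last two bullets can be illustrated together. Below is a minimal sketch of beta-binomial conjugate updating with made-up numbers (the Beta(2, 5) prior and the poll counts are illustrative assumptions, not from the course): updating sequentially, with each posterior serving as the next prior, gives exactly the same result as one batch update.

```python
# Beta-binomial conjugate updating (hypothetical numbers): a Beta(a, b) prior
# with x successes in n trials yields a Beta(a + x, b + n - x) posterior.
a, b = 2, 5                                  # assumed illustrative prior

# Batch: observe x = 9 successes in n = 20 trials all at once.
x, n = 9, 20
a_batch, b_batch = a + x, b + n - x

# Sequential: the same data split into two polls of 10 trials each;
# yesterday's posterior becomes today's prior.
a_seq, b_seq = a, b
for x_i, n_i in [(4, 10), (5, 10)]:          # 4 + 5 = 9 successes total
    a_seq, b_seq = a_seq + x_i, b_seq + n_i - x_i

print(a_batch, b_batch)   # 11 16
print(a_seq, b_seq)       # 11 16 -- identical to the batch posterior
```

The order of the polls does not matter either, which is the key property of sequential Bayesian analysis with independent observations.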
<ins>Example question:</ins>
Suppose that you are trying to estimate the proportion, $\theta$, of Carls who prefer Little Joy coffee to Goodbye Blue Monday. You decide to use a binomial likelihood and a beta prior distribution.
1. Prior to taking a poll, you believe that the proportion of Carls preferring Little Joy will have mean 0.3, and you would be surprised if it was greater than 0.6. Set up a system of equations that you could solve to determine the parameters, $a$ and $b$, of your prior distribution.
2. Briefly describe how you can use the prior predictive distribution to help tune your prior.
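As a study aid, here is a sketch of how a system like the one in part 1 could be solved numerically, plus a quick prior predictive simulation for part 2. Treating "surprised if greater than 0.6" as a 5% upper tail is an assumption on my part, as is the hypothetical poll size of 50.

```python
# Sketch: tune Beta(a, b) so that the prior mean is 0.3 and P(theta > 0.6) = 0.05.
import numpy as np
from scipy.stats import beta
from scipy.optimize import brentq

mean, q, tail = 0.3, 0.6, 0.05

# The mean constraint a / (a + b) = 0.3 pins down b in terms of a.
def tail_gap(a):
    b = a * (1 - mean) / mean
    return beta.sf(q, a, b) - tail        # sf = P(theta > q); want this to equal tail

a = brentq(tail_gap, 0.01, 100)           # root-find over the remaining free parameter
b = a * (1 - mean) / mean
print(round(a, 2), round(b, 2))

# Prior predictive check: simulate hypothetical polls of 50 Carls under this prior
# and ask whether the implied counts look plausible.
rng = np.random.default_rng(0)
theta = rng.beta(a, b, size=10_000)
sims = rng.binomial(50, theta)
```

If the simulated poll counts clash with what you would actually expect to see, that is the signal to go back and re-tune $a$ and $b$.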
#### 7. Given your prior belief, specify an appropriate prior distribution for a multivariate model.
- Define independence prior and be able to derive one
- For each parameter, be able to utilize the ideas from Learning Target #1.
<ins>Example question:</ins>
Let $\mu$ be the average 3 p.m. temperature in Perth, Australia. Not knowing much about Australian weather, your friend's prior understanding is that the average temperature is likely around 30 degrees Celsius, though it might be anywhere between 10 and 50 degrees Celsius. Your friend isn't sure how variable 3 p.m. temperatures in Perth are. To learn about these temperatures, your friend plans to analyze 1000 days of temperature data using a $\mathcal{N}(\mu, \sigma)$ likelihood, where $\phi = 1/\sigma^2$ denotes the precision.
1. What prior distribution would you suggest your friend use for $\mu$? Be sure to specify the prior parameters, or outline how you would select them.
2. What prior distribution would you suggest your friend use for $\phi$? Be sure to specify the prior parameters, or outline how you would select them.
3. Assuming that $\mu$ and $\phi$ are independent, write an expression for the joint prior distribution, $\pi(\mu, \phi)$. There's no need to simplify this expression.
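One way to sanity-check an independence prior like the one above is to simulate from it and look at the implied prior predictive temperatures. The specific hyperparameters below (a Normal(30, 10) prior for $\mu$ and a Gamma(2, 2) prior for the precision $\phi$) are illustrative guesses, not the "right" answer to the question.

```python
# Sketch: draw (mu, phi) independently from assumed priors, then simulate
# prior-predictive 3 p.m. temperatures. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 10_000

mu = rng.normal(30, 10, size=n_sims)     # mu ~ N(30, 10): ~95% of mass in (10, 50)
phi = rng.gamma(2, 1 / 2, size=n_sims)   # phi ~ Gamma(shape=2, rate=2); numpy takes scale = 1/rate
sigma = 1 / np.sqrt(phi)                 # treating phi as a precision

temps = rng.normal(mu, sigma)            # one prior-predictive temperature per draw
print(np.round(np.quantile(temps, [0.025, 0.975]), 1))
```

If the central 95% of simulated temperatures covered, say, physically impossible values, that would be a cue to re-tune the priors before seeing any data.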
#### 8. Given the prior distribution and data, derive the posterior distribution for a multivariate model.
- Derive the likelihood function
- Identify the likelihood through the story of a distribution (i.e., choose a logical distribution to model the data)
- Derive the posterior distribution up to the normalizing constant
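In symbols, the bullets above combine into one recipe: for an i.i.d. sample and independent priors on two parameters $\theta_1$ and $\theta_2$, Bayes' theorem gives the joint posterior up to its normalizing constant,

$$\pi(\theta_1, \theta_2 \mid x_1, \ldots, x_n) \propto \underbrace{\left[\prod_{i=1}^{n} f(x_i \mid \theta_1, \theta_2)\right]}_{\text{likelihood}} \, \underbrace{\pi(\theta_1)\,\pi(\theta_2)}_{\text{independence prior}},$$

where the hidden constant is whatever makes the right-hand side integrate to 1 over $(\theta_1, \theta_2)$.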
<ins>Example question:</ins>
Suppose that you have a random sample, $x_1, \ldots, x_n$, from a Galenshore distribution with PDF
$$f(x_i | \alpha, \theta) = \frac{2}{\Gamma(\alpha)} \theta^{2\alpha} x_i^{2\alpha-1}e^{-\theta^2 x_i^2}$$
where $x_i, \alpha, \theta > 0$. Further, you assume that $\alpha$ and $\theta$ are independent and you put a Gamma$(a, b)$ prior on both $\alpha$ and $\theta$. Derive the joint posterior distribution of $\alpha$ and $\theta$. (There's no need to spend too much time simplifying your final expression or matching distributional forms.)