# SilviaTerra forecasting model
A reasonable goal would be to determine a stochastic evolution equation of the form
$$
\partial_t \rho + F[\rho] + \eta_t = 0
$$
where $\rho \in L^1(S^2)$ is the density of the forest, $F: L^1(S^2) \to TL^1(S^2)$ is a functional, and $\eta_t$ is a stationary noise term. Without doing a proper literature survey, my guess is that a reaction-diffusion type PDE would be a reasonable choice for $F$.

Basemap gives us $\rho$ over some range of time. To forecast into the future, we need to estimate $F$ and $\eta_t$. The most direct route is to choose a finitely parametrized family for $F$ and $\eta_t$ and then do standard MLE. That is to say, if $\theta$ is the parameter which determines $F$ and the distribution of the random variable $\eta_t$, then we seek to find
$$
\theta^* := \arg \max_\theta \int \log\left( \Pr(\partial_t\rho \mid \theta) \right) dt
$$
where $\Pr(\partial_t \rho \mid \theta)$ is the likelihood of observing $\partial_t\rho$ if $F$ and $\eta_t$ were determined by $\theta$. Missing data at certain time stamps is not an issue: we simply drop those time stamps from the above integral (which in practice would be a sum over observed time stamps anyway).
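The MLE step above can be sketched concretely. None of the specifics here are fixed by the note, so everything is an assumption for illustration: a 1-D periodic grid stands in for $S^2$, $F[\rho] = -(D\,\Delta\rho + r\rho(1 - \rho/K))$ is one reaction-diffusion candidate, and $\eta_t$ is taken to be iid Gaussian with standard deviation $\sigma$ at each grid point and time stamp, so the log-likelihood is a Gaussian one over the residuals $\eta_t = -(\partial_t\rho + F[\rho])$.

```python
# Hedged sketch of the MLE fit. All modeling choices (1-D grid, the specific
# reaction-diffusion F, iid Gaussian eta_t) are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def laplacian(u, dx):
    """Periodic second difference along the last (spatial) axis."""
    return (np.roll(u, 1, axis=-1) + np.roll(u, -1, axis=-1) - 2 * u) / dx**2

# Synthetic stand-in for the Basemap densities: T snapshots on an N-point grid.
rng = np.random.default_rng(0)
T, N, dt, dx = 50, 64, 0.1, 1.0
D_true, r_true, K_true, sigma_true = 0.5, 0.3, 1.0, 0.01
x = np.arange(N) * dx
rho = np.empty((T, N))
rho[0] = 0.5 + 0.1 * np.sin(2 * np.pi * 4 * x / (N * dx))
for t in range(T - 1):
    drift = D_true * laplacian(rho[t], dx) + r_true * rho[t] * (1 - rho[t] / K_true)
    rho[t + 1] = rho[t] + dt * (drift + sigma_true * rng.standard_normal(N))

# Observed time derivative; missing time stamps would simply be dropped here.
drho_dt = np.diff(rho, axis=0) / dt

def neg_log_lik(theta):
    D, r, K, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Residual eta_t = -(d rho/dt + F[rho]) under the sign convention above.
    F = -(D * laplacian(rho[:-1], dx) + r * rho[:-1] * (1 - rho[:-1] / K))
    eta = -(drho_dt + F)
    return 0.5 * np.sum(eta**2) / sigma**2 + eta.size * np.log(sigma)

res = minimize(neg_log_lik, x0=[0.1, 0.1, 0.5, np.log(0.1)], method="Nelder-Mead")
D_hat, r_hat, K_hat, sigma_hat = *res.x[:3], np.exp(res.x[3])
```

In practice $\rho$ lives on a sphere-like map grid and $F$ would come from whatever family the literature survey suggests, but the shape of the fit (residuals in, negative log-likelihood out, generic optimizer on $\theta$) stays the same.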
## Data fusion with econometric data
We may have other data sources beyond $\rho$, such as knowledge about pent-up demand for lumber. We might even have leading indicators, such as a spike in demolitions. In order to integrate these economic indicators into our forecasting model, we should probably allow for a driving term. The lightest touch would be to consider a model of the form
$$
\partial_t \rho + F[\rho, u_t] + \eta_t = 0
$$
where now $F$ takes in $\rho$ as well as a time-dependent driving term, $u_t$, which consists of time-stamped economic data. At this point, we can still proceed as before using an MLE (or MAP) approach, but with a family of functionals that incorporates $u_t$. More generally, we can consider a functional, $F$, which takes the history of $u$'s up to time $t$.
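As a minimal sketch of the driven model, the residual computation from the MLE fit only needs to accept $u_t$. How $u_t$ enters $F$ is an assumption here: for illustration it is taken as a proportional harvest term $h \, u_t \, \rho$, with $h$ joining the parameter vector $\theta$.

```python
# Hedged sketch: residuals for the driven model. The harvest form h * u_t * rho
# is an illustrative assumption, not something fixed by the note.
import numpy as np

def laplacian(u, dx):
    """Periodic second difference along the last (spatial) axis."""
    return (np.roll(u, 1, axis=-1) + np.roll(u, -1, axis=-1) - 2 * u) / dx**2

def residuals(theta, rho, u, dt, dx):
    """eta_t = -(d rho/dt + F[rho, u_t]) at each observed time stamp.

    rho: (T, N) densities; u: (T-1,) time-stamped economic indicator;
    theta = (D, r, K, h): diffusion, growth, capacity, harvest sensitivity.
    """
    D, r, K, h = theta
    drho_dt = np.diff(rho, axis=0) / dt
    growth = r * rho[:-1] * (1 - rho[:-1] / K)
    harvest = h * u[:, None] * rho[:-1]          # driving term u_t
    F = -(D * laplacian(rho[:-1], dx) + growth - harvest)
    return -(drho_dt + F)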
## Uncertainty estimation
One of the benefits of the MLE approach is that in addition to finding a "best" model, we can also estimate how much confidence we should have in it by training on bootstrapped samples of our dataset. This gives us a distribution of $\theta$'s, whose spread tells us how much confidence to place in the forecast's predictions.
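The bootstrap loop is generic: resample the observed time stamps with replacement, refit, and look at the spread of the fitted parameters. A minimal sketch, assuming some `fit` routine such as the MLE above (the name and signature are placeholders, not an existing API):

```python
# Hedged sketch: bootstrap over time stamps to obtain a distribution of
# parameter estimates. `fit` is a hypothetical refitting routine.
import numpy as np

def bootstrap_thetas(fit, time_steps, n_boot=200, rng=None):
    """Refit on resampled time stamps; the spread of the returned
    estimates quantifies confidence in the forecast parameters."""
    rng = rng or np.random.default_rng(0)
    T = len(time_steps)
    thetas = []
    for _ in range(n_boot):
        idx = rng.integers(0, T, size=T)   # sample time stamps with replacement
        thetas.append(fit([time_steps[i] for i in idx]))
    return np.array(thetas)
```

Resampling whole time stamps (rather than individual grid points) keeps the spatial structure of each observation intact, which matters if the noise $\eta_t$ is spatially correlated.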
## Bayesian updates and