# Approximate Dynamic Programming
## Introduction
* **Dynamic Programming (DP)** offers a powerful framework for solving sequential decision-making problems. However, traditional DP becomes impractical for problems with large or infinite state spaces due to the challenges of storing and computing values for every possible state.
* **Approximate Dynamic Programming (ADP)** addresses these limitations by combining the core concepts of DP with two key techniques:
* **Sampling:** Instead of exhaustively evaluating all states, ADP uses sampled transitions to update value estimates.
* **Function Approximation:** ADP employs function approximators (like linear models or neural networks) to efficiently represent value functions, avoiding the need for full tabular storage.
## Function Approximation
**Function approximation** is the process of learning a mapping between a set of input variables (predictors), $x$, and a target variable (response), $y$. The goal is to discover the underlying relationship between $x$ and $y$ to make predictions about new input values.
### Conceptual Overview
* **Modeling Relationships:** We model the relationship between $x$ and $y$ as a probability distribution. Our goal is to estimate the conditional probability $\mathbb{P}[y∣x]$ – the probability of observing a particular value of $y$ given a specific input $x$.
* **Parameterized Functions:** We use a function $f$ with adjustable parameters $w$ to represent this conditional probability. The choice of function affects the types of relationships the model can learn.
* **Learning from Data:** We have a dataset of pairs $(x_i, y_i)$. Maximum likelihood estimation (MLE) is used to find the parameters $w^*$ that maximize the likelihood of the observed data: $$w^*=\arg\max_{w}\{\prod_{i=1}^{n}f(x_i;w)(y_i)\}=\arg\max_{w}\{\sum_{i=1}^{n} \log f(x_i;w)(y_i)\}.$$
### Optimization and Prediction
* **Minimizing Discrepancy:** We measure the difference between the empirical data distribution $D$ and the model distribution $M$ (defined by our function $f$) using a loss function like cross-entropy: $\mathcal{H}(D,M)=-\mathbb{E}_D[\log M]$.
* **Gradient Descent:** We use gradient descent to iteratively update parameters $w$ and minimize the loss function, improving the model's fit.
* **Making Predictions:** Once trained, we predict the expected value of $y$ for a new input $x$:
* $\mathbb{E}_M[y|x]=\mathbb{E}_{f_{(x;w)}}[y]=\int^{+\infty}_{-\infty}y\cdot f(x;w)(y)dy$
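To make these steps concrete, here is a minimal, self-contained sketch (assuming, purely for illustration, a conditional Gaussian model $y \mid x \sim \mathcal{N}(w_0 + w_1 x, \sigma^2)$ with fixed $\sigma$, so that MLE reduces to least squares) of fitting $w$ by gradient descent on the negative log-likelihood:
```python=
import numpy as np

# Synthetic data: y = 2x + 1 + Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=200)

w = np.zeros(2)      # parameters [intercept, slope]
alpha = 0.1          # learning rate

for _ in range(1000):
    pred = w[0] + w[1] * x       # model mean E[y|x]
    # Gradient of the average negative log-likelihood w.r.t. w;
    # with fixed sigma this is exactly the least-squares gradient.
    grad = np.array([np.mean(pred - y), np.mean((pred - y) * x)])
    w -= alpha * grad

print(w)   # approximately [1.0, 2.0]
```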
### Adapting to New Information
* **Incremental Learning:** It's important for models to adapt as new data becomes available. Many frameworks support updating parameters $w$ as we collect more information, helping the model improve over time.
### Python Implementation
```python=
class FunctionApprox(ABC, Generic[X]):
'''Interface for function approximations.
An object of this class approximates some function X ↦ ℝ in a way
that can be evaluated at specific points in X and updated with
additional (X, ℝ) points.
'''
@abstractmethod
def __add__(self: F, other: F) -> F:
pass
@abstractmethod
def __mul__(self: F, scalar: float) -> F:
pass
@abstractmethod
def objective_gradient(
self: F,
xy_vals_seq: Iterable[Tuple[X, float]],
obj_deriv_out_fun: Callable[[Sequence[X], Sequence[float]], np.ndarray]
) -> Gradient[F]:
'''Computes the gradient of an objective function of the self
FunctionApprox with respect to the parameters in the internal
representation of the FunctionApprox. The gradient is output
in the form of a Gradient[FunctionApprox] whose internal parameters are
equal to the gradient values. The argument `obj_deriv_out_fun'
represents the derivative of the objective with respect to the output
(evaluate) of the FunctionApprox, when evaluated at a Sequence of
x values and a Sequence of y values (to be obtained from 'xy_vals_seq')
'''
@abstractmethod
def evaluate(self, x_values_seq: Iterable[X]) -> np.ndarray:
'''Computes expected value of y for each x in
x_values_seq (with the probability distribution
function of y|x estimated as FunctionApprox)
'''
def __call__(self, x_value: X) -> float:
return self.evaluate([x_value]).item()
@abstractmethod
def update_with_gradient(
self: F,
gradient: Gradient[F]
) -> F:
'''Update the internal parameters of self FunctionApprox using the
input gradient that is presented as a Gradient[FunctionApprox]
'''
def update(
self: F,
xy_vals_seq: Iterable[Tuple[X, float]]
) -> F:
'''Update the internal parameters of the FunctionApprox
based on incremental data provided in the form of (x,y)
pairs as a xy_vals_seq data structure
'''
def deriv_func(x: Sequence[X], y: Sequence[float]) -> np.ndarray:
return self.evaluate(x) - np.array(y)
return self.update_with_gradient(
self.objective_gradient(xy_vals_seq, deriv_func)
)
@abstractmethod
def solve(
self: F,
xy_vals_seq: Iterable[Tuple[X, float]],
error_tolerance: Optional[float] = None
) -> F:
'''Assuming the entire data set of (x,y) pairs is available
in the form of the given input xy_vals_seq data structure,
solve for the internal parameters of the FunctionApprox
such that the internal parameters are fitted to xy_vals_seq.
Since this is a best-fit, the internal parameters are fitted
to within the input error_tolerance (where applicable, since
some methods involve a direct solve for the fit that don't
require an error_tolerance)
'''
@abstractmethod
def within(self: F, other: F, tolerance: float) -> bool:
'''Is this function approximation within a given tolerance of
another function approximation of the same type?
'''
def iterate_updates(
self: F,
xy_seq_stream: Iterator[Iterable[Tuple[X, float]]]
) -> Iterator[F]:
'''Given a stream (Iterator) of data sets of (x,y) pairs,
perform a series of incremental updates to the internal
parameters (using update method), with each internal
parameter update done for each data set of (x,y) pairs in the
input stream of xy_seq_stream
'''
return iterate.accumulate(
xy_seq_stream,
lambda fa, xy: fa.update(xy),
initial=self
)
def rmse(
self,
xy_vals_seq: Iterable[Tuple[X, float]]
) -> float:
'''The Root-Mean-Squared-Error between FunctionApprox's
predictions (from evaluate) and the associated (supervisory)
y values
'''
x_seq, y_seq = zip(*xy_vals_seq)
errors: np.ndarray = self.evaluate(x_seq) - np.array(y_seq)
return np.sqrt(np.mean(errors * errors))
def argmax(self, xs: Iterable[X]) -> X:
'''Return the input X that maximizes the function being approximated.
Arguments:
xs -- list of inputs to evaluate and maximize, cannot be empty
Returns the X that maximizes the function this approximates.
'''
args: Sequence[X] = list(xs)
return args[np.argmax(self.evaluate(args))]
```
## Linear Function Approximation
### Key Concepts
* **Feature-Based Representation:** Instead of working directly with raw input variables $x$, we transform them into a set of features $\phi_j(x)$. This allows us to model more complex relationships.
* **Feature Vector:** The features are combined into a feature vector: $$\phi(x)=(\phi_1(x),\phi_2(x),...,\phi_m(x)).$$
* **Weight Vector:** Each feature $\phi_j(x)$ has an associated weight $w_j$. These weights are combined into a vector $w=(w_1,w_2,...,w_m) \in \mathbb{R}^m$.
### Modeling Assumptions
* **Linear Mean:** We assume the expected value of the target variable $y$ (conditioned on the input $x$) is a linear combination of the features: $$\mathbb{E}[y|x]= \phi(x)^T\cdot w$$
* **Gaussian Distribution:** We assume the target variable $y$, conditioned on the input $x$, follows a Gaussian distribution with the linear mean specified above and a constant variance $\sigma^2$: $$\mathbb{P}[y|x]=f(x;w)(y)=\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(y-\phi(x)^T\cdot w)^2}{2\sigma^2}}$$
### Loss Function and Optimization
* **Objective:** Find weights $w$ that minimize the difference between our model's predictions and true values of $y$.
* **Regularized Mean Squared Error (MSE):** We use the regularized MSE loss function: $$\mathcal{L}(w)=\frac{1}{2n}\cdot \sum_{i=1}^n(\phi(x_i)^T\cdot w-y_i)^2+\frac{1}{2}\cdot \lambda \cdot |w|^2.$$ The regularization term helps prevent overfitting.
#### Optimization Methods
* **Gradient Descent:** Iteratively update weights using gradient descent to minimize the loss function:
    * Gradient of the loss: $\nabla_w \mathcal{L}(w)=\frac{1}{n}\cdot \sum_{i=1}^n\phi(x_i)\cdot(\phi(x_i)^T\cdot w-y_i)+\lambda \cdot w$
    * Update rule: $w_{t+1}=w_t-\alpha_t\cdot \mathcal{G}_{(x_t,y_t)}(w_t)$, where $\mathcal{G}_{(x_t,y_t)}(w_t)$ is this gradient evaluated on the data (or mini-batch) available at step $t$ and $\alpha_t$ is the learning rate. Both this route and the direct solve below are illustrated in a short numpy sketch after this list.
* **Direct Solution (Optional):** If all data is available upfront and the number of features isn't too large, the optimal weights can be found directly using matrix operations: $$\begin{aligned} &\frac{1}{n}\cdot\Phi^T \cdot (\Phi\cdot w^*-Y)+\lambda \cdot w^*=0 \\ \Rightarrow\ & (\Phi^T\cdot\Phi+n\lambda\cdot I_m)\cdot w^*=\Phi^T\cdot Y \\ \Rightarrow\ & w^*=(\Phi^T\cdot\Phi+n\lambda\cdot I_m)^{-1}\cdot\Phi^T\cdot Y \end{aligned}$$ where
    * $\Phi$ is the $n \times m$ feature matrix defined by $\Phi_{i,j} = \phi_j(x_i)$,
    * $Y \in \mathbb{R}^n$ is defined by $Y_i = y_i$,
    * $I_m$ is the $m \times m$ identity matrix.
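A minimal numpy sketch of both optimization routes, using hypothetical features $\phi(x) = (1, x, x^2)$ and synthetic data (the gradient loop converges to the same regularized solution as the closed-form formula above):
```python=
import numpy as np

rng = np.random.default_rng(0)
n, lam = 500, 1e-3
x = rng.uniform(-2, 2, size=n)
y = 1.0 + 0.5 * x - 0.25 * x ** 2 + rng.normal(0.0, 0.1, size=n)

Phi = np.column_stack([np.ones(n), x, x ** 2])   # n x m feature matrix
m = Phi.shape[1]

# Direct solution: w* = (Phi^T Phi + n*lam*I_m)^{-1} Phi^T Y
w_direct = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(m), Phi.T @ y)

# Gradient descent on the regularized MSE loss
w = np.zeros(m)
alpha = 0.05
for _ in range(5000):
    grad = Phi.T @ (Phi @ w - y) / n + lam * w
    w -= alpha * grad

print(w_direct)   # approximately [1.0, 0.5, -0.25]
print(w)          # converges to (approximately) the same weights
```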
### Python Implementation
```python=
@dataclass(frozen=True)
class LinearFunctionApprox(FunctionApprox[X]):
feature_functions: Sequence[Callable[[X], float]]
regularization_coeff: float
weights: Weights
direct_solve: bool
@staticmethod
def create(
feature_functions: Sequence[Callable[[X], float]],
adam_gradient: AdamGradient = AdamGradient.default_settings(),
regularization_coeff: float = 0.,
weights: Optional[Weights] = None,
direct_solve: bool = True
) -> LinearFunctionApprox[X]:
return LinearFunctionApprox(
feature_functions=feature_functions,
regularization_coeff=regularization_coeff,
weights=Weights.create(
adam_gradient=adam_gradient,
weights=np.zeros(len(feature_functions))
) if weights is None else weights,
direct_solve=direct_solve
)
def get_feature_values(self, x_values_seq: Iterable[X]) -> np.ndarray:
return np.array(
[[f(x) for f in self.feature_functions] for x in x_values_seq]
)
def objective_gradient(
self,
xy_vals_seq: Iterable[Tuple[X, float]],
obj_deriv_out_fun: Callable[[Sequence[X], Sequence[float]], float]
) -> Gradient[LinearFunctionApprox[X]]:
x_vals, y_vals = zip(*xy_vals_seq)
obj_deriv_out: np.ndarray = obj_deriv_out_fun(x_vals, y_vals)
features: np.ndarray = self.get_feature_values(x_vals)
gradient: np.ndarray = \
features.T.dot(obj_deriv_out) / len(obj_deriv_out) \
+ self.regularization_coeff * self.weights.weights
return Gradient(replace(
self,
weights=replace(
self.weights,
weights=gradient
)
))
def __add__(self, other: LinearFunctionApprox[X]) -> \
LinearFunctionApprox[X]:
return replace(
self,
weights=replace(
self.weights,
weights=self.weights.weights + other.weights.weights
)
)
def __mul__(self, scalar: float) -> LinearFunctionApprox[X]:
return replace(
self,
weights=replace(
self.weights,
weights=self.weights.weights * scalar
)
)
def evaluate(self, x_values_seq: Iterable[X]) -> np.ndarray:
return np.dot(
self.get_feature_values(x_values_seq),
self.weights.weights
)
def update_with_gradient(
self,
gradient: Gradient[LinearFunctionApprox[X]]
) -> LinearFunctionApprox[X]:
return replace(
self,
weights=self.weights.update(
gradient.function_approx.weights.weights
)
)
def solve(
self,
xy_vals_seq: Iterable[Tuple[X, float]],
error_tolerance: Optional[float] = None
) -> LinearFunctionApprox[X]:
if self.direct_solve:
x_vals, y_vals = zip(*xy_vals_seq)
feature_vals: np.ndarray = self.get_feature_values(x_vals)
feature_vals_T: np.ndarray = feature_vals.T
left: np.ndarray = np.dot(feature_vals_T, feature_vals) \
+ feature_vals.shape[0] * self.regularization_coeff * \
np.eye(len(self.weights.weights))
right: np.ndarray = np.dot(feature_vals_T, y_vals)
ret = replace(
self,
weights=Weights.create(
adam_gradient=self.weights.adam_gradient,
weights=np.linalg.solve(left, right)
)
)
else:
tol: float = 1e-6 if error_tolerance is None else error_tolerance
def done(
a: LinearFunctionApprox[X],
b: LinearFunctionApprox[X],
tol: float = tol
) -> bool:
return a.within(b, tol)
ret = iterate.converged(
self.iterate_updates(itertools.repeat(list(xy_vals_seq))),
done=done
)
return ret
def within(self, other: FunctionApprox[X], tolerance: float) -> bool:
if isinstance(other, LinearFunctionApprox):
return self.weights.within(other.weights, tolerance)
return False
```
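A short usage sketch of the class above (assuming it is importable as in the RLForFinanceBook code base, e.g. `from rl.func_approx import LinearFunctionApprox`), with hypothetical quadratic features and synthetic data:
```python=
import numpy as np
from rl.func_approx import LinearFunctionApprox   # import path as in the book's repo

# Features: (1, x, x^2) on scalar inputs
ffs = [lambda x: 1.0, lambda x: x, lambda x: x * x]

lfa = LinearFunctionApprox.create(
    feature_functions=ffs,
    regularization_coeff=0.001,
    direct_solve=True
)

rng = np.random.default_rng(0)
xy_pairs = [(x, 1.0 + 0.5 * x - 0.25 * x * x + rng.normal(0.0, 0.1))
            for x in rng.uniform(-2, 2, size=300)]

fitted = lfa.solve(xy_pairs)        # closed-form regularized least squares
print(fitted.weights.weights)       # approximately [1.0, 0.5, -0.25]
print(fitted(0.5))                  # prediction E[y | x = 0.5]
```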
## Deep Neural Networks
**Deep Neural Networks (DNNs)** offer a powerful way to model complex, non-linear relationships. They build upon linear function approximation with multiple interconnected layers of "neurons," enabling them to learn intricate patterns in data.
### Network Structure
* **Layers:** A DNN consists of several layers (indexed $l=0,1,...,L$).
* **Input Layer $(l=0)$:** Receives the feature vector $I_0 = \phi(x)$.
* **Output Layer $(l=L)$:** Produces the final prediction $\mathbb{E}_M[y|x]$.
* **Connections:** Each layer's output $O_l$ becomes the input $I_{l+1}$ to the next layer, forming a feed-forward architecture.
* **Layer Parameters:** Each layer $l$ has a weight matrix $\mathbf{w}_l$ that governs how it transforms its input.
### How DNNs Process Information
1. **Linear Transformation:** A layer first applies a linear transformation to its input: $$S_l = \mathbf{w}_l \cdot I_l$$
2. **Non-linear Activation:** An activation function $g_l$ is applied element-wise to the result: $$O_l=g_l(S_l)$$
* Common activation functions include ReLU, sigmoid, and tanh.
3. **Forward Propagation:** Information flows sequentially through the layers.
### Training: Backpropagation and Gradient Descent
* **Backpropagation:** Efficiently calculates gradients $\nabla_{\mathbf{w}_l}\mathcal{L}$ of the loss function with respect to each layer's weights.
* **Gradient Descent:** Gradients guide iterative updates of weights to minimize the loss function.
#### Calculating Gradients
Calculating gradients can be simplified:
$$
\nabla_{\mathbf{w}_l}\mathcal{L}=P_l\cdot I_l^T+\lambda_l \cdot \mathbf{w}_l
$$
where:
* $P_l=\nabla_{S_l}\mathcal{L}$ is the gradient of the loss with respect to the layer's pre-activation values $S_l$ (the input to the activation function).
* $\lambda_l$ is the regularization coefficient for layer $l$.
**Backpropagation Theorem:** The gradient $P_l$ can be computed recursively:
$$
P_l=(\mathbf{w}_{l+1}^T \cdot P_{l+1}) \circ g'_l(S_l)
$$
(where $\circ$ is the element-wise product)
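A minimal numpy sketch (a hypothetical one-hidden-layer network with a squared-error loss, no bias terms and no regularization) that applies the gradient formula and the recursion above, and checks one weight gradient against a finite-difference estimate:
```python=
import numpy as np

rng = np.random.default_rng(0)
I0 = rng.normal(size=(4, 1))              # input layer: I_0 = phi(x), 4 features
y = np.array([[1.5]])                     # scalar target
w0 = rng.normal(size=(3, 4))              # hidden layer weights w_0
w1 = rng.normal(size=(1, 3))              # output layer weights w_1
g = np.tanh                               # hidden activation
g_deriv = lambda s: 1.0 - np.tanh(s) ** 2

def forward(w0, w1):
    S0 = w0 @ I0                          # S_0 = w_0 . I_0
    I1 = g(S0)                            # O_0 = g(S_0), fed to the next layer as I_1
    S1 = w1 @ I1                          # output layer with identity activation
    return S0, I1, S1

S0, I1, S1 = forward(w0, w1)
loss = 0.5 * float((S1 - y) ** 2)

# P_L = dL/dS_L at the output layer, then the recursion from the theorem above
P1 = S1 - y                               # identity output activation
P0 = (w1.T @ P1) * g_deriv(S0)            # P_0 = (w_1^T . P_1) o g'(S_0)
grad_w0 = P0 @ I0.T                       # grad_{w_0} L = P_0 . I_0^T  (lambda = 0)

# Finite-difference check of one entry of the gradient
eps = 1e-6
w0_pert = w0.copy()
w0_pert[0, 0] += eps
loss_pert = 0.5 * float((forward(w0_pert, w1)[2] - y) ** 2)
print(grad_w0[0, 0], (loss_pert - loss) / eps)   # the two numbers should agree closely
```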
#### Output Layer Considerations
* **Choosing a Distribution:** The output layer must model a probability distribution $\mathbb{P}[y|S_L]$ that aligns with the target variable $y$. The exponential family is a flexible choice: $$\mathbb{P}[y|S_L]=p(y|S_L,\tau)=h(y,\tau)\cdot e^{\frac{S_L\cdot y-A(S_L)}{d(\tau)}}$$
* **Matching the Expected Value:** To ensure our prediction $O_L$ matches $\mathbb{E}_p[y|S_L]$, we set the final layer's activation function to $g_L(S_L)=A'(S_L)$. With the cross-entropy loss, this yields (see the derivation after this list):
    * $P_L=\frac{\partial\mathcal{L}}{\partial S_L}=\frac{O_L-y}{d(\tau)}$
* **Common Examples:**
* Normal distribution $\mathcal{N}(\mu,\sigma^2)$: $S_L=\mu$, $\tau=\sigma$, $h(y,\tau)=\frac{e^{\frac{-y^2}{2\tau^2}}}{\sqrt{2\pi}\tau},A(S_L)=\frac{S_L^2}{2}$, $d(\tau)=\tau^2,g_L(S_L)=S_L$.
* Bernoulli distribution (binary-valued $y$, parameterized by $p$): $S_L=\log(\frac{p}{1-p})$, $\tau=h(y,\tau)=d(\tau)=1$, $A(S_L)=\log(1+e^{S_L})$, $g_L(S_L)=\frac{1}{1+e^{-S_L}}$.
    * Poisson distribution ($y$ parameterized by $\lambda$): $S_L=\log\lambda$, $\tau=d(\tau)=1$, $h(y,\tau)=\frac{1}{y!}$, $A(S_L)=e^{S_L}$, $g_L(S_L)=e^{S_L}$.
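The expression for $P_L$ above follows from taking the loss at a single data point to be the cross-entropy (negative log-likelihood) of this exponential-family density: $$\mathcal{L}=-\log p(y|S_L,\tau)=-\log h(y,\tau)-\frac{S_L\cdot y-A(S_L)}{d(\tau)} \quad\Rightarrow\quad \frac{\partial\mathcal{L}}{\partial S_L}=\frac{A'(S_L)-y}{d(\tau)}=\frac{O_L-y}{d(\tau)},$$ using $O_L=g_L(S_L)=A'(S_L)$.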
### Python Implementation
```python=
@dataclass(frozen=True)
class DNNSpec:
neurons: Sequence[int]
bias: bool
hidden_activation: Callable[[np.ndarray], np.ndarray]
hidden_activation_deriv: Callable[[np.ndarray], np.ndarray]
output_activation: Callable[[np.ndarray], np.ndarray]
output_activation_deriv: Callable[[np.ndarray], np.ndarray]
@dataclass(frozen=True)
class DNNApprox(FunctionApprox[X]):
feature_functions: Sequence[Callable[[X], float]]
dnn_spec: DNNSpec
regularization_coeff: float
weights: Sequence[Weights]
@staticmethod
def create(
feature_functions: Sequence[Callable[[X], float]],
dnn_spec: DNNSpec,
adam_gradient: AdamGradient = AdamGradient.default_settings(),
regularization_coeff: float = 0.,
weights: Optional[Sequence[Weights]] = None
) -> DNNApprox[X]:
if weights is None:
inputs: Sequence[int] = [len(feature_functions)] + \
[n + (1 if dnn_spec.bias else 0)
for i, n in enumerate(dnn_spec.neurons)]
outputs: Sequence[int] = list(dnn_spec.neurons) + [1]
wts = [Weights.create(
weights=np.random.randn(output, inp) / np.sqrt(inp),
adam_gradient=adam_gradient
) for inp, output in zip(inputs, outputs)]
else:
wts = weights
return DNNApprox(
feature_functions=feature_functions,
dnn_spec=dnn_spec,
regularization_coeff=regularization_coeff,
weights=wts
)
def get_feature_values(self, x_values_seq: Iterable[X]) -> np.ndarray:
return np.array(
[[f(x) for f in self.feature_functions] for x in x_values_seq]
)
def forward_propagation(
self,
x_values_seq: Iterable[X]
) -> Sequence[np.ndarray]:
"""
        :param x_values_seq: an n-length iterable of input points
:return: list of length (L+2) where the first (L+1) values
each represent the 2-D input arrays (of size n x |i_l|),
for each of the (L+1) layers (L of which are hidden layers),
and the last value represents the output of the DNN (as a
1-D array of length n)
"""
inp: np.ndarray = self.get_feature_values(x_values_seq)
ret: List[np.ndarray] = [inp]
for w in self.weights[:-1]:
out: np.ndarray = self.dnn_spec.hidden_activation(
np.dot(inp, w.weights.T)
)
if self.dnn_spec.bias:
inp = np.insert(out, 0, 1., axis=1)
else:
inp = out
ret.append(inp)
ret.append(
self.dnn_spec.output_activation(
np.dot(inp, self.weights[-1].weights.T)
)[:, 0]
)
return ret
def evaluate(self, x_values_seq: Iterable[X]) -> np.ndarray:
return self.forward_propagation(x_values_seq)[-1]
def backward_propagation(
self,
fwd_prop: Sequence[np.ndarray],
obj_deriv_out: np.ndarray
) -> Sequence[np.ndarray]:
"""
:param fwd_prop represents the result of forward propagation (without
the final output), a sequence of L 2-D np.ndarrays of the DNN.
: param obj_deriv_out represents the derivative of the objective
function with respect to the linear predictor of the final layer.
:return: list (of length L+1) of |o_l| x |i_l| 2-D arrays,
i.e., same as the type of self.weights.weights
This function computes the gradient (with respect to weights) of
the objective where the output layer activation function
is the canonical link function of the conditional distribution of y|x
"""
deriv: np.ndarray = obj_deriv_out.reshape(1, -1)
back_prop: List[np.ndarray] = [np.dot(deriv, fwd_prop[-1]) /
deriv.shape[1]]
# L is the number of hidden layers, n is the number of points
# layer l deriv represents dObj/ds_l where s_l = i_l . weights_l
# (s_l is the result of applying layer l without the activation func)
for i in reversed(range(len(self.weights) - 1)):
# deriv_l is a 2-D array of dimension |o_l| x n
# The recursive formulation of deriv is as follows:
            # deriv_{l-1} = (weights_l^T inner deriv_l) Hadamard g'(s_{l-1}),
            # which is ((|i_l| x |o_l|) inner (|o_l| x n)) Hadamard
# (|i_l| x n), which is (|i_l| x n) = (|o_{l-1}| x n)
# Note: g'(s_{l-1}) is expressed as hidden layer activation
# derivative as a function of o_{l-1} (=i_l).
deriv = np.dot(self.weights[i + 1].weights.T, deriv) * \
self.dnn_spec.hidden_activation_deriv(fwd_prop[i + 1].T)
            # If self.dnn_spec.bias is True, then i_l = o_{l-1} + 1, in which
            # case the first row of the calculated deriv is removed to yield
            # a 2-D array of dimension |o_{l-1}| x n.
if self.dnn_spec.bias:
deriv = deriv[1:]
# layer l gradient is deriv_l inner fwd_prop[l], which is
            # of dimension (|o_l| x n) inner (n x |i_l|) = |o_l| x |i_l|
back_prop.append(np.dot(deriv, fwd_prop[i]) / deriv.shape[1])
return back_prop[::-1]
def objective_gradient(
self,
xy_vals_seq: Iterable[Tuple[X, float]],
obj_deriv_out_fun: Callable[[Sequence[X], Sequence[float]], float]
) -> Gradient[DNNApprox[X]]:
x_vals, y_vals = zip(*xy_vals_seq)
obj_deriv_out: np.ndarray = obj_deriv_out_fun(x_vals, y_vals)
fwd_prop: Sequence[np.ndarray] = self.forward_propagation(x_vals)[:-1]
gradient: Sequence[np.ndarray] = \
[x + self.regularization_coeff * self.weights[i].weights
for i, x in enumerate(self.backward_propagation(
fwd_prop=fwd_prop,
obj_deriv_out=obj_deriv_out
))]
return Gradient(replace(
self,
weights=[replace(w, weights=g) for
w, g in zip(self.weights, gradient)]
))
def __add__(self, other: DNNApprox[X]) -> DNNApprox[X]:
return replace(
self,
weights=[replace(w, weights=w.weights + o.weights) for
w, o in zip(self.weights, other.weights)]
)
def __mul__(self, scalar: float) -> DNNApprox[X]:
return replace(
self,
weights=[replace(w, weights=w.weights * scalar)
for w in self.weights]
)
def update_with_gradient(
self,
gradient: Gradient[DNNApprox[X]]
) -> DNNApprox[X]:
return replace(
self,
weights=[w.update(g.weights) for w, g in
zip(self.weights, gradient.function_approx.weights)]
)
def solve(
self,
xy_vals_seq: Iterable[Tuple[X, float]],
error_tolerance: Optional[float] = None
) -> DNNApprox[X]:
tol: float = 1e-6 if error_tolerance is None else error_tolerance
def done(
a: DNNApprox[X],
b: DNNApprox[X],
tol: float = tol
) -> bool:
return a.within(b, tol)
return iterate.converged(
self.iterate_updates(itertools.repeat(list(xy_vals_seq))),
done=done
)
def within(self, other: FunctionApprox[X], tolerance: float) -> bool:
if isinstance(other, DNNApprox):
return all(w1.within(w2, tolerance)
for w1, w2 in zip(self.weights, other.weights))
else:
return False
```
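A short usage sketch of `DNNApprox` (again assuming the book's code base, e.g. `from rl.func_approx import DNNApprox, DNNSpec`), with one hidden layer of two sigmoid neurons and an identity output activation for a conditional Gaussian target; the hidden activation derivative is expressed as a function of the activation's output, as assumed in `backward_propagation`:
```python=
import numpy as np
from rl.func_approx import DNNApprox, DNNSpec   # import path as in the book's repo

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

spec = DNNSpec(
    neurons=[2],                                      # one hidden layer with 2 neurons
    bias=True,
    hidden_activation=sigmoid,
    hidden_activation_deriv=lambda y: y * (1.0 - y),  # derivative in terms of the output
    output_activation=lambda x: x,                    # identity (Gaussian y|x)
    output_activation_deriv=lambda y: np.ones_like(y)
)

ffs = [lambda x: 1.0, lambda x: x]
dnn = DNNApprox.create(
    feature_functions=ffs,
    dnn_spec=spec,
    regularization_coeff=0.0
)

rng = np.random.default_rng(0)
xy_pairs = [(x, float(np.sin(x)) + rng.normal(0.0, 0.05))
            for x in rng.uniform(-1, 1, size=200)]
for _ in range(1000):             # repeated incremental (Adam) updates
    dnn = dnn.update(xy_pairs)
print(dnn.rmse(xy_pairs))         # RMSE should shrink as updates accumulate
```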
## Tabular Representation as Function Approximation
* **Key Idea:** Tabular representations, commonly used in Dynamic Programming (DP) to store values for each possible state, can be viewed as a specialized form of linear function approximation.
* **Indicator Features:** We create a unique indicator feature for every state. A feature takes the value 1 if we're in its corresponding state, and 0 otherwise. This transforms raw states into feature vectors.
* **Weights as Averages:** With this representation, the weight associated with each feature directly represents the average of the target values $y$ observed when the system is in the state corresponding to that feature.
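With the learning rate $\alpha(n)=\frac{1}{n}$ (the default in the implementation below), the update after the $n$-th target $y_n$ observed for a given state is $$w_n = w_{n-1} + \frac{1}{n}\,(y_n - w_{n-1}) = \frac{1}{n}\sum_{i=1}^n y_i,$$ i.e. the weight equals the sample average of the targets seen for that state; other choices of $\alpha(n)$ yield recency-weighted averages instead.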
### Python Implementation
```python=
@dataclass(frozen=True)
class Tabular(FunctionApprox[X]):
'''Approximates a function with a discrete domain (`X'), without any
interpolation. The value for each `X' is maintained as a weighted
mean of observations by recency (managed by
`count_to_weight_func').
In practice, this means you can use this to approximate a function
with a learning rate α(n) specified by count_to_weight_func.
If `count_to_weight_func' always returns 1, this behaves the same
way as `Dynamic'.
Fields:
values_map -- mapping from X to its approximated value
counts_map -- how many times a given X has been updated
count_to_weight_func -- function for how much to weigh an update
to X based on the number of times that X has been updated
'''
values_map: Mapping[X, float] = field(default_factory=lambda: {})
counts_map: Mapping[X, int] = field(default_factory=lambda: {})
count_to_weight_func: Callable[[int], float] = \
field(default_factory=lambda: lambda n: 1.0 / n)
def objective_gradient(
self,
xy_vals_seq: Iterable[Tuple[X, float]],
obj_deriv_out_fun: Callable[[Sequence[X], Sequence[float]], float]
) -> Gradient[Tabular[X]]:
x_vals, y_vals = zip(*xy_vals_seq)
obj_deriv_out: np.ndarray = obj_deriv_out_fun(x_vals, y_vals)
sums_map: Dict[X, float] = defaultdict(float)
counts_map: Dict[X, int] = defaultdict(int)
for x, o in zip(x_vals, obj_deriv_out):
sums_map[x] += o
counts_map[x] += 1
return Gradient(replace(
self,
values_map={x: sums_map[x] / counts_map[x] for x in sums_map},
counts_map=counts_map
))
def __add__(self, other: Tabular[X]) -> Tabular[X]:
values_map: Dict[X, float] = {}
counts_map: Dict[X, int] = {}
for key in set.union(
set(self.values_map.keys()),
set(other.values_map.keys())
):
values_map[key] = self.values_map.get(key, 0.) + \
other.values_map.get(key, 0.)
counts_map[key] = counts_map.get(key, 0) + \
other.counts_map.get(key, 0)
return replace(
self,
values_map=values_map,
counts_map=counts_map
)
def __mul__(self, scalar: float) -> Tabular[X]:
return replace(
self,
values_map={x: scalar * y for x, y in self.values_map.items()}
)
def evaluate(self, x_values_seq: Iterable[X]) -> np.ndarray:
'''Evaluate the function approximation by looking up the value in the
mapping for each state.
if an X value has not been seen before and hence not initialized,
returns 0
'''
return np.array([self.values_map.get(x, 0.) for x in x_values_seq])
def update_with_gradient(
self,
gradient: Gradient[Tabular[X]]
) -> Tabular[X]:
'''Update the approximation with the given gradient.
Each X keeps a count n of how many times it was updated, and
each subsequent update is scaled by count_to_weight_func(n),
which defines our learning rate.
'''
values_map: Dict[X, float] = dict(self.values_map)
counts_map: Dict[X, int] = dict(self.counts_map)
for key in gradient.function_approx.values_map:
counts_map[key] = counts_map.get(key, 0) + \
gradient.function_approx.counts_map[key]
weight: float = self.count_to_weight_func(counts_map[key])
values_map[key] = values_map.get(key, 0.) - \
weight * gradient.function_approx.values_map[key]
return replace(
self,
values_map=values_map,
counts_map=counts_map
)
def solve(
self,
xy_vals_seq: Iterable[Tuple[X, float]],
error_tolerance: Optional[float] = None
) -> Tabular[X]:
values_map: Dict[X, float] = {}
counts_map: Dict[X, int] = {}
for x, y in xy_vals_seq:
counts_map[x] = counts_map.get(x, 0) + 1
weight: float = self.count_to_weight_func(counts_map[x])
values_map[x] = weight * y + (1 - weight) * values_map.get(x, 0.)
return replace(
self,
values_map=values_map,
counts_map=counts_map
)
def within(self, other: FunctionApprox[X], tolerance: float) -> bool:
if isinstance(other, Tabular):
return all(abs(self.values_map[s] - other.values_map.get(s, 0.))
<= tolerance for s in self.values_map)
return False
```
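A brief usage sketch (assuming `from rl.func_approx import Tabular`), confirming that repeated observations for the same $x$ are averaged:
```python=
from rl.func_approx import Tabular   # import path as in the book's repo

tab = Tabular()                      # default learning rate alpha(n) = 1/n
xy_pairs = [('s1', 10.0), ('s2', 4.0), ('s1', 14.0), ('s1', 9.0)]

fitted = tab.solve(xy_pairs)
print(fitted.values_map)   # {'s1': 11.0, 's2': 4.0}; 's1' holds the mean of 10, 14, 9
print(fitted('s1'))        # 11.0
```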
## Approximate Policy Evaluation & Value Iteration
* **Tackling Large State Spaces:** Approximate policy evaluation and approximate value iteration address the challenge of representing value functions in problems with large or infinite state spaces. They use function approximators (like linear models or neural networks) to compactly represent these functions.
* **Sample-Based Updates:** Instead of exhaustively calculating expectations for every possible state, these methods rely on sampled transitions. These samples are used to estimate expectations in the Bellman equation, and the function approximator is updated accordingly.
**Key Idea:** By combining function approximation with sample-based updates, approximate policy evaluation and value iteration enable the application of dynamic programming principles to complex, real-world problems.
### Python Implementation
```python=
from rl.iterate import iterate
def evaluate_mrp(
mrp: MarkovRewardProcess[S],
γ: float,
approx_0: ValueFunctionApprox[S],
non_terminal_states_distribution: NTStateDistribution[S],
num_state_samples: int
) -> Iterator[ValueFunctionApprox[S]]:
'''Iteratively calculate the value function for the given Markov Reward
Process, using the given FunctionApprox to approximate the value function
at each step for a random sample of the process' non-terminal states.
'''
def update(v: ValueFunctionApprox[S]) -> ValueFunctionApprox[S]:
nt_states: Sequence[NonTerminal[S]] = \
non_terminal_states_distribution.sample_n(num_state_samples)
def return_(s_r: Tuple[State[S], float]) -> float:
s1, r = s_r
return r + γ * extended_vf(v, s1)
return v.update(
[(s, mrp.transition_reward(s).expectation(return_))
for s in nt_states]
)
return iterate(update, approx_0)
```
```python=
def value_iteration(
mdp: MarkovDecisionProcess[S, A],
γ: float,
approx_0: ValueFunctionApprox[S],
non_terminal_states_distribution: NTStateDistribution[S],
num_state_samples: int
) -> Iterator[ValueFunctionApprox[S]]:
'''Iteratively calculate the Optimal Value function for the given
Markov Decision Process, using the given FunctionApprox to approximate the
Optimal Value function at each step for a random sample of the process'
non-terminal states.
'''
def update(v: ValueFunctionApprox[S]) -> ValueFunctionApprox[S]:
nt_states: Sequence[NonTerminal[S]] = \
non_terminal_states_distribution.sample_n(num_state_samples)
def return_(s_r: Tuple[State[S], float]) -> float:
s1, r = s_r
return r + γ * extended_vf(v, s1)
return v.update(
[(s, max(mdp.step(s, a).expectation(return_)
for a in mdp.actions(s)))
for s in nt_states]
)
return iterate(update, approx_0)
```
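A hedged usage sketch for `evaluate_mrp` (assuming an already-constructed `MarkovRewardProcess` `mrp` and a sampling distribution `nt_dist` over its non-terminal states; `value_iteration` is invoked analogously with an MDP):
```python=
import itertools
from rl.func_approx import Tabular   # import path as in the book's repo

# `mrp` and `nt_dist` are assumed to be an already-constructed
# MarkovRewardProcess[S] and NTStateDistribution[S] respectively.
value_functions = evaluate_mrp(
    mrp=mrp,
    γ=0.9,
    approx_0=Tabular(),               # start from an empty tabular approximation
    non_terminal_states_distribution=nt_dist,
    num_state_samples=100
)
# The iterator yields successively refined approximations;
# take the one obtained after 50 sweeps of sampled-state updates.
v = next(itertools.islice(value_functions, 50, None))
```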
## Finite-Horizon ADP
* **Extending Dynamic Programming:** Finite-horizon Approximate Dynamic Programming (ADP) adapts the backward induction principle from classic Dynamic Programming to handle problems with large or infinite state spaces.
* **Key Technique (Function Approximation):** Instead of storing value functions in tables, ADP uses function approximators (like neural networks) to represent them compactly. This allows for efficient updates at each time step.
### Python Implementation
```python=
MDP_FuncApproxV_Distribution = Tuple[
MarkovDecisionProcess[S, A],
ValueFunctionApprox[S],
NTStateDistribution[S]
]
def back_opt_vf_and_policy(
mdp_f0_mu_triples: Sequence[MDP_FuncApproxV_Distribution[S, A]],
γ: float,
num_state_samples: int,
error_tolerance: float
) -> Iterator[Tuple[ValueFunctionApprox[S], DeterministicPolicy[S, A]]]:
'''Use backwards induction to find the optimal value function and optimal
policy at each time step, using the given FunctionApprox for each time step
for a random sample of the time step's states.
'''
vp: List[Tuple[ValueFunctionApprox[S], DeterministicPolicy[S, A]]] = []
for i, (mdp, approx0, mu) in enumerate(reversed(mdp_f0_mu_triples)):
def return_(s_r: Tuple[State[S], float], i=i) -> float:
s1, r = s_r
return r + γ * (extended_vf(vp[i-1][0], s1) if i > 0 else 0.)
this_v = approx0.solve(
[(s, max(mdp.step(s, a).expectation(return_)
for a in mdp.actions(s)))
for s in mu.sample_n(num_state_samples)],
error_tolerance
)
def deter_policy(state: S) -> A:
return max(
((mdp.step(NonTerminal(state), a).expectation(return_), a)
for a in mdp.actions(NonTerminal(state))),
key=itemgetter(0)
)[1]
vp.append((this_v, DeterministicPolicy(deter_policy)))
return reversed(vp)
```
```python=
MDP_FuncApproxQ_Distribution = Tuple[
MarkovDecisionProcess[S, A],
QValueFunctionApprox[S, A],
NTStateDistribution[S]
]
def back_opt_qvf(
mdp_f0_mu_triples: Sequence[MDP_FuncApproxQ_Distribution[S, A]],
γ: float,
num_state_samples: int,
error_tolerance: float
) -> Iterator[QValueFunctionApprox[S, A]]:
'''Use backwards induction to find the optimal q-value function policy at
each time step, using the given FunctionApprox (for Q-Value) for each time
step for a random sample of the time step's states.
'''
horizon: int = len(mdp_f0_mu_triples)
qvf: List[QValueFunctionApprox[S, A]] = []
for i, (mdp, approx0, mu) in enumerate(reversed(mdp_f0_mu_triples)):
def return_(s_r: Tuple[State[S], float], i=i) -> float:
s1, r = s_r
next_return: float = max(
qvf[i-1]((s1, a)) for a in
mdp_f0_mu_triples[horizon - i][0].actions(s1)
) if i > 0 and isinstance(s1, NonTerminal) else 0.
return r + γ * next_return
this_qvf = approx0.solve(
[((s, a), mdp.step(s, a).expectation(return_))
for s in mu.sample_n(num_state_samples) for a in mdp.actions(s)],
error_tolerance
)
qvf.append(this_qvf)
return reversed(qvf)
```
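A hedged invocation sketch for the two routines above, assuming a list `triples` with one `(mdp, approx_0, states_dist)` entry per time step of the horizon (`back_opt_qvf` is called the same way with Q-value approximations):
```python=
# `triples` is assumed to be a Sequence[MDP_FuncApproxV_Distribution[S, A]],
# one entry per time step t = 0, ..., T-1.
vf_and_policies = list(back_opt_vf_and_policy(
    mdp_f0_mu_triples=triples,
    γ=0.95,
    num_state_samples=200,
    error_tolerance=1e-4
))
v0, pi0 = vf_and_policies[0]   # optimal value function and policy at t = 0
```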
## Constructing Non-Terminal States Distribution
* **Importance in ADP:** Understanding how often different states are visited is crucial in Approximate Dynamic Programming (ADP) for accurate reward estimation and value function updates.
* **Ideal Approach (Stationary Distribution):** The best choice is the stationary distribution of the Markov Reward Process (MRP) defined by the current policy. This reflects the long-term likelihood of encountering different states.
* **Methods:**
* **Analytical Derivation:** If the MDP structure allows, the stationary distribution (or distribution at a specific time step) can sometimes be derived analytically.
    * **Empirical Estimation:** When analytical solutions are unavailable, simulate multiple trajectories under the current policy. The relative frequency of visiting each state provides a good approximation of the distribution (a minimal sketch follows this list).
* **Fallback (Uniform Distribution):** If other methods are too complex, assume a uniform distribution over non-terminal states. However, this may be less accurate if some states are much more likely to be visited than others.
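A minimal sketch of the empirical approach, with a hypothetical `sample_trajectory` helper that is assumed to return the non-terminal states visited in one simulated episode under the current policy:
```python=
from collections import Counter
from typing import Callable, Dict, Sequence, TypeVar

S = TypeVar('S')

def empirical_state_distribution(
    sample_trajectory: Callable[[], Sequence[S]],   # hypothetical simulation helper
    num_traces: int
) -> Dict[S, float]:
    '''Estimate state-visit frequencies from simulated trajectories.'''
    counts: Counter = Counter()
    for _ in range(num_traces):
        counts.update(sample_trajectory())   # count every visited state
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}
```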
## References
- Chapter 6 of the [RLForFinanceBook](https://stanford.edu/~ashlearn/RLForFinanceBook/book.pdf)
- [Function Approximation and Approximate Dynamic Programming Algorithms](https://github.com/coverdrive/technical-documents/blob/master/finance/cme241/Tour-ADP.pdf) slides for CME 241: Foundations of Reinforcement Learning with Applications in Finance