# DNBP Loss Backpropagation

![Loss Function](https://hackmd.io/_uploads/HkqVx_9exx.png)

## Colors

- Blue - indicates Belief Weights Likelihood
- Red - indicates Message Weights Unary
- Green - indicates Message Weights Neighbourhood

# Likelihood Optimizer

- This optimizer contains two parameter groups:
    - CNN network parameters $f_s$
    - Node likelihood parameters $\phi_s$
- **a**
    - $\frac {dl(x_i|\theta_1)}{d\phi_s}=\sum^N_{j=1} 1 \times\left[ \frac 1 {\sigma \sqrt {2\pi}} \exp \left( \frac {-1} 2 \times \left(\frac {x_i-p_{i,j}}{\sigma}\right)^2 \right)\right] \times \frac {dw_{1,i,j}}{d\phi_{s}}$
- **b**
    - $\frac {dl(x_i|\theta_1)}{df_s}=\sum^N_{j=1} 1 \times\left[ \frac 1 {\sigma \sqrt {2\pi}} \exp \left( \frac {-1} 2 \times \left(\frac {x_i-p_{i,j}}{\sigma}\right)^2 \right)\right] \times \frac {dw_{1,i,j}}{dY} \times \frac{dY}{df_s}$
- This optimizer depends only on the `Belief Weights Likelihood (loss function)`.

:::info
- **c**
    - Backpropagation is stopped here because `likelihood factors training should depend only on the corresponding node, not its neighbors` (comment in the code).
:::

# Sampler Optimizer

- This optimizer contains:
    - Edge sampler parameters $\tilde\psi_{sd}$
- **d**
    - $\frac {dl(x_i|\theta_2)}{d\tilde\psi_{sd}}=\sum^N_{j=1} w_{\theta_{2,i,j}} \times\left[ \frac 1 {\sigma \sqrt {2\pi}} \exp \left( \frac {-1} 2 \times \left(\frac {x_i-p_{i,j}}{\sigma}\right)^2 \right)\right] \times -1 \times \left(\frac {x_i - p_{i,j}}{\sigma} \right) \times \frac {dx_{i}}{d\tilde\psi_{sd}}$
- This optimizer depends only on the `Message Weights Unary (loss function)`.

# Density Optimizer

- This optimizer contains:
    - Edge density parameters $\psi_{sd}$
- **e**
    - $\frac {dl(x_i|\theta_3)}{d\psi_{sd}}=\sum^N_{j=1} 1 \times\left[ \frac 1 {\sigma \sqrt {2\pi}} \exp \left( \frac {-1} 2 \times \left(\frac {x_i-p_{i,j}}{\sigma}\right)^2 \right)\right] \times \frac {dw_{3,i,j}}{d\psi_{sd}}$
- This optimizer depends only on the `Message Weights Neighbourhood (loss function)`.

# Time Optimizer

- This optimizer contains:
    - Time sampler parameters $\tilde\tau_s$
- **f**
    - $\frac {dl(x_i|\theta_1)}{d\tilde\tau_{s}}=\sum^N_{j=1} w_{\theta_{1,i,j}} \times\left[ \frac 1 {\sigma \sqrt {2\pi}} \exp \left( \frac {-1} 2 \times \left(\frac {x_i-p_{i,j}}{\sigma}\right)^2 \right)\right] \times -1 \times \left(\frac {x_i - p_{i,j}}{\sigma} \right) \times \frac {dx_{i}}{d\tilde\tau_s}$
- **g**
    - $\frac {dl(x_i|\theta_2)}{d\tilde\tau_{s}}=\sum^N_{j=1} w_{\theta_{2,i,j}} \times\left[ \frac 1 {\sigma \sqrt {2\pi}} \exp \left( \frac {-1} 2 \times \left(\frac {x_i-p_{i,j}}{\sigma}\right)^2 \right)\right] \times -1 \times \left(\frac {x_i - p_{i,j}}{\sigma} \right) \times \frac {dx_{i}}{d\tilde\tau_s}$
- **h**
    - $\frac {dl(x_i|\theta_3)}{d\tilde\tau_{s}}=\sum^N_{j=1} w_{\theta_{3,i,j}} \times\left[ \frac 1 {\sigma \sqrt {2\pi}} \exp \left( \frac {-1} 2 \times \left(\frac {x_i-p_{i,j}}{\sigma}\right)^2 \right)\right] \times -1 \times \left(\frac {x_i - p_{i,j}}{\sigma} \right) \times \frac {dx_{i}}{d\tilde\tau_s}$
- This optimizer depends on:
    - `Belief Weights Likelihood (loss function)`
    - `Message Weights Unary (loss function)`
    - `Message Weights Neighbourhood (loss function)`
- A sketch of how this selective gradient flow can be expressed in code is given below.
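
# Gradient Flow Sketch

One way to obtain the selective gradient flow described above with autograd is to detach whichever quantity a given loss term should treat as a constant: for **a**/**b**/**e** the gradient reaches only the weights, while for **d**/**f**/**g**/**h** it reaches only the kernel argument. The PyTorch sketch below is a minimal illustration under that assumption and is not taken from the DNBP code; the names `particles`, `logits`, `sigma`, and the `grad_through` switch are hypothetical.

```python
import math
import torch

def gaussian_kernel(x, particles, sigma):
    """Per-particle kernel: (1 / (sigma * sqrt(2*pi))) * exp(-0.5 * ((x - p) / sigma)^2)."""
    z = (x - particles) / sigma
    return torch.exp(-0.5 * z ** 2) / (sigma * math.sqrt(2 * math.pi))

def pseudo_likelihood_loss(x, particles, weights, sigma, grad_through):
    """Negative log of a weighted Gaussian mixture evaluated at the label x.

    grad_through="weights":   particles are detached, so gradients reach only the
                              weight networks (cases a, b, e above).
    grad_through="particles": weights are detached, so gradients reach only the
                              samplers that produced the particles (cases d, f, g, h).
    """
    if grad_through == "weights":
        particles = particles.detach()
    elif grad_through == "particles":
        weights = weights.detach()
    density = (weights * gaussian_kernel(x, particles, sigma)).sum(dim=-1)
    return -torch.log(density + 1e-12).mean()

# Hypothetical shapes: a batch of 8 labels, 20 particles per node.
x = torch.randn(8, 1)
logits = torch.randn(8, 20, requires_grad=True)     # stand-in for a weight network output
particles = torch.randn(8, 20, requires_grad=True)  # stand-in for an edge/time sampler output
weights = torch.softmax(logits, dim=-1)

loss = pseudo_likelihood_loss(x, particles, weights, sigma=0.1, grad_through="weights")
loss.backward()
assert logits.grad is not None and particles.grad is None  # gradient reaches the weights only
```

Each optimizer then only needs to be handed the parameter group it owns, and the `detach()` call ensures its loss term cannot push gradients into the other groups, which is the same effect as the stopped backpropagation noted in **c**.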