<style>
p {
text-align: justify;
}
img {
display: block;
margin: auto;
}
td {
width: auto;
background-color: #fff;
overflow: hidden;
}
td > img {
display: block;
width: 400px;
height: 300px;
}
td > img:hover {
transition-duration: 1s;
transform: scale(1.3);
}
.img-curve {
display: block;
width: 80%;
}
.img-caption {
font-size: 1.2rem;
font-weight: 500;
letter-spacing: -1px;
padding: 0 0 30px 0;
}
.img-caption:hover {
color: gray;
cursor: none;
}
</style>
<script type="text/javascript" id="MathJax-script" async
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/3.0.0/es5/latest?tex-mml-chtml.js">
</script>
<h1 style="text-align:center">VFX Project I: High Dynamic Range Imaging</h1>
<p style="text-align:right">VFX (NTU, Spring 2022)</p>
Hao-Cheng Lo 駱皓正 (D08227104)
Li-Wei Fu 傅立威 (R10942078)
## Introduction
In this High Dynamic Range (HDR) imaging project, we took static photos with different exposure/shutter times. Next, we implemented Ward's Median Threshold Bitmap (MTB) algorithm for image alignment and realized two well-known HDR methods, Paul Debevec's and Mark Robertson's. Last, we fed our HDR results into Photomatix for tone mapping. To sum up, the outline of this project is arranged as follows:
> 1. Photos and Camera Setting
> 2. Image Alignment
> 3. High Dynamic Range Imaging
> 4. Tone Mapping
> 5. Discussion and Conclusion
> 6. HDR Photo Gallery
## Photos and Camera Setting

<center class="img-caption">Sony α7 III with 16-35mm F2.8 GM lens</center>
We used a Sony α7 III camera with a 16-35mm F2.8 GM lens to take photos at different exposure times/shutter speeds while fixing other parameters such as ISO and aperture. We shot a total of 8 sets of images to verify the robustness of our algorithms: ***Huashan Prairie, NTU EE, NTU Math, NTU Library (close), NTU Library (far), Taipei 101 (inside), Taipei 101 (outside), XingYi NanShan***. Here, we demonstrate ***XingYi NanShan*** and ***NTU Library (far)*** in each implementation detail; results for the other sets are in the section [Our HDR Photo Gallery](#HDR-Photo-Gallery). For the ***XingYi NanShan*** set, we shot $7$ photos with exposure/shutter time $t \in \{8, 4, 2, 1, \dfrac{1}{2}, \dfrac{1}{4}, \dfrac{1}{8}\}$, aperture $=f/8$, and ISO $=800$. Below are its original images.

<center class="img-caption">
LDR Photos of <span style="font-weight: 800">XingYi NanShan</span>
</center>
For the ***NTU Library (far)*** set, we shot $9$ photos with exposure/shutter time $t \in \{8, 4, 2, 1, \dfrac{1}{2}, \dfrac{1}{4}, \dfrac{1}{8}, \dfrac{1}{15}, \dfrac{1}{30}\}$, aperture $=f/8$, and ISO $=800$. Below are its original images.

<center class="img-caption">
LDR Photos of <span style="font-weight: 800">NTU Library (far)</span>
</center>
## Image Alignment (bonus)
Although this step counts as a bonus, we implemented an image alignment algorithm because we took the photos by hand and manually changed the camera's exposure time between shots. In this section, we use the Median Threshold Bitmap (MTB) algorithm introduced in class to align multiple images with different exposure times.
In general, we followed these main steps:
1. Convert each RGB image into a grayscale image.
2. Choose the image with median exposure time as the reference/base image.
3. Each image is aligned according to this reference/base image.
Specifically, to compute the offset between two given images (a base image and a target image) and discover the optimal offset, we adopt the pyramid method. In each recursive call (pyramid layer $d$), we iterate over 9 candidate offsets of the target image, from $(-1,-1)$ to $(1,1)$. For each candidate, we perform three things: (1) threshold the two images by their median intensities and generate the corresponding bitmaps $b_s$ and $b_t$; (2) to prevent noise from near-median pixels, mark a range around the source image's median ($\pm 4$) as an exclusion bitmap $b_{sm}$; (3) calculate the error with the following bitwise operations:
$$
(b_s \oplus b_t) \land \neg b_{sm}
$$
Hence, we can retrieve the optimal direction in this layer. Then we downsize the original input/target images and repeat the procedure in the next pyramid recursive call. The total range of offsets we can check is related to the pyramid depth $d$:
$$
\text{Total Offsets} = \pm (2^{d+1} - 1)
$$
For example, with $d = 4$ the search covers shifts of up to $\pm 31$ pixels. Hence, we obtain the optimal total offset between the base image and the target image and complete the image alignment. For more details, please refer to [this paper](http://www.anyhere.com/gward/papers/jgtpap2.pdf).
```python=
import cv2
import numpy as np

def calculate_error(source, target, threshold):
    # Binarize both images around their medians, then count mismatched bits
    # outside the near-median exclusion mask.
    source_median = np.median(source)
    target_median = np.median(target)
    _, source_binary = cv2.threshold(source, source_median, 255, cv2.THRESH_BINARY)
    _, target_binary = cv2.threshold(target, target_median, 255, cv2.THRESH_BINARY)
    mask = cv2.inRange(source, source_median - threshold, source_median + threshold)
    return cv2.countNonZero(cv2.bitwise_and(cv2.bitwise_xor(source_binary, target_binary), cv2.bitwise_not(mask)))

def find_shift(source, target, x, y, threshold):
    # Try the 9 offsets around (x, y) and keep the one with the minimum error.
    min_error = np.inf
    h, w = target.shape
    best_dx, best_dy = 0, 0
    for dx in [-1, 0, 1]:
        for dy in [-1, 0, 1]:
            target_shift = cv2.warpAffine(target, M=np.float32([[1, 0, x + dx], [0, 1, y + dy]]), dsize=(w, h))
            error = calculate_error(source, target_shift, threshold)
            if error < min_error:
                min_error = error
                best_dx, best_dy = dx, dy
    return x + best_dx, y + best_dy

def MTB_method(source, target, depth=4, threshold=6):
    '''
    Image alignment using the Median Threshold Bitmap (MTB) algorithm.
    We want to align the target image to the source image.
    The values returned are the offset required by the target image.
    Note:
        Source image and target image must be grayscale images with the same shape.
        Depth must be smaller than log2( the length of the shorter side of the image ).
    depth:
        The number of layers of the image pyramid.
        The displacement that can be tested:
            depth=0 --------------------> ± 1
            depth=1 --> ± (1 * 2) + 1 --> ± 3
            depth=2 --> ± (3 * 2) + 1 --> ± 7
            depth=3 --> ± (7 * 2) + 1 --> ± 15
            ...
            depth=n --> ± 2 ^ (n + 1) - 1
    threshold:
        Since pixels that are too close to the median may cause errors,
        pixels within median ± threshold are masked out and skipped when comparing.
    '''
    if depth == 0:
        return find_shift(source, target, 0, 0, threshold)
    H, W = target.shape
    source_half = cv2.resize(source, (W // 2, H // 2))
    target_half = cv2.resize(target, (W // 2, H // 2))
    dx, dy = MTB_method(source_half, target_half, depth - 1, threshold)
    return find_shift(source, target, dx * 2, dy * 2, threshold)
```
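To make the alignment procedure concrete, below is a hypothetical usage sketch; it assumes `images` is a list of loaded BGR LDR frames ordered by exposure time, and aligns every frame to the median-exposure one.
```python=
import cv2
import numpy as np

# Hypothetical usage sketch: `images` is assumed to be a list of BGR LDR
# frames ordered by exposure time.
grays = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in images]
base = len(grays) // 2                       # median-exposure frame as reference
aligned = []
for img, gray in zip(images, grays):
    dx, dy = MTB_method(grays[base], gray, depth=4, threshold=6)
    h, w = gray.shape
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    aligned.append(cv2.warpAffine(img, M, (w, h)))
```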

<center class="img-caption">
Demo of <span style="font-weight: 800">XingYi NanShan</span> Image Alignment
</center>
## High Dynamic Range Imaging
In this section, we show how we implemented both Paul Debevec's and Mark Robertson's HDR algorithms to generate radiance maps and the corresponding response curves.
### Paul Debevec's method
Paul Debevec's HDR method provides an algorithm to recover the response curve/function by sampling a set of positions in the images and solving a weighted $Ax=b$ linear system.
**Sampling Points.** To prevent biased results, we adopted uniform sampling. Specifically, we sampled $(x,y) \overset{iid}{\sim} DU([0, h-1] \times [0, w-1])$, where $h$ is the height of the images and $w$ is the width. This ensures that points are sampled evenly across the images and raises the probability of capturing both lighter and darker areas.
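As a minimal sketch of this sampling step (the function name and the sample count are our own choices, not fixed by the method):
```python=
import numpy as np

def sample_points(h, w, n_samples=100, seed=42):
    # Draw (x, y) i.i.d. from a discrete uniform distribution over the image grid.
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, h, size=n_samples)  # row indices in [0, h-1]
    xs = rng.integers(0, w, size=n_samples)  # column indices in [0, w-1]
    return ys, xs
```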
**Solving the Linear System.** The algorithm below follows Debevec's formulation, constructing $Ax=b$ to represent the objective function:
$$
\mathbf{O} = \sum^N \sum^P \{w(Z_{np})[g(Z_{np}) - \ln E_n - \ln \Delta t_p]\}^2 + \lambda \sum^Z w(z)\,g''(z)^2
$$
It is worth noting that we adopted the triangle function as our weighting function $w$, set the parameter $\lambda = 10$, and placed the constraint point at the floored midrange of all $NP$ sampled pixel values (usually $127$). We construct and solve the least squares problem as follows, with each color channel computed independently.
```python=
import numpy as np

class Debevec():
    '''
    Paul Debevec's method.
    '''
    def __init__(self, Z, shutter_times, weighting_function, lamda=10):
        '''
        Input:
            Z: PxN array, where P is # of imgs and N is # of samples per img -> numpy array
            shutter_times: list containing the P images' shutter times -> numpy array
            weighting_function: receives PxN values and returns PxN weights -> func
            lamda: regularization parameter for solving Ax=b -> int
        '''
        self.Z = np.array(Z)       # all PxN pixel values
        self.P = self.Z.shape[0]   # # of imgs
        self.N = self.Z.shape[1]   # # of samples per img
        self.PN = self.P * self.N  # # of elements in Z
        self.ln_shutter_times = np.log(np.array(shutter_times))  # log shutter times
        # solving Ax = b
        self.matrix_A = None
        self.vec_x = None
        self.vec_b = None
        self.vec_w = weighting_function(self.Z.flatten())
        self.constraint = np.floor(0.5 * (np.max(self.Z) + np.min(self.Z))).astype(int)  # usually = 127
        self.lamda = lamda
        self.ln_G = None
        self.solve_Ax_b()

    def solve_Ax_b(self):
        self.matrix_A = np.zeros([self.PN + 255, 256 + self.N])  # shape (PN+255, 256+N)
        self.vec_b = np.zeros(self.PN + 255)                     # shape (PN+255)
        target_rows = np.arange(self.PN)                         # 0, 1, ..., PN-1
        target_cols = np.array(self.Z).flatten()                 # which col should hold the weight
        # [top-left block of A]
        self.matrix_A[target_rows, target_cols] = self.vec_w
        # [top-right block of A] and [b]
        for img_id in range(self.P):
            self.matrix_A[img_id*self.N:img_id*self.N+self.N, 256:] = -np.identity(self.N) * self.vec_w[img_id*self.N:img_id*self.N+self.N]
            self.vec_b[img_id*self.N:img_id*self.N+self.N] = self.ln_shutter_times[img_id]
        # [bottom-left block of A]: second-derivative smoothness term
        for i in range(254):
            self.matrix_A[self.PN+i, i:i+3] = np.array([1, -2, 1]) * np.abs(i - self.constraint) * self.lamda
        # [last row of A] and [b]: pin down g at the constraint point
        self.matrix_A[-1, self.constraint] = 1
        self.vec_b[target_rows] *= self.vec_w
        print("[Paul Debevec] A matrix and b vec completed")
        # solving Ax = b with the pseudo-inverse
        A_inv = np.linalg.pinv(self.matrix_A.astype(np.float64))
        self.vec_x = A_inv @ self.vec_b
        self.ln_G = self.vec_x[:256]
        print("[Paul Debevec] Ax = b solved")
```
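A hypothetical usage sketch follows; `samples` and `times` stand for the sampled pixel values of one channel and the matching shutter times, and the triangle weighting shown is a simple assumed variant:
```python=
import numpy as np

def triangle_weight(z):
    # An assumed triangle weighting over [0, 255], peaking near the middle.
    return np.where(z <= 127, z + 1, 256 - z) / 128.0

# `samples`: assumed (P, N) array of sampled pixel values for one channel.
solver = Debevec(Z=samples, shutter_times=times, weighting_function=triangle_weight)
ln_g = solver.ln_G  # recovered log response curve g(0), ..., g(255)
```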
**Computing Radiance and Plotting.** For both the radiance map and the response curve, each channel is processed independently. We calculate the final radiance with the formula:
$$
\ln E_n = \dfrac{\sum^P w(Z_{np})(g(Z_{np}) - \ln \Delta t_p)}{\sum^P w(Z_{np})}
$$
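A vectorized sketch of this weighted average (assuming `ln_G` comes from the solver above and `w` is a 256-entry weight table):
```python=
import numpy as np

def compute_ln_radiance(Z, ln_G, ln_shutter_times, w):
    # Z: (P, H, W) pixel values of one channel across the P exposures.
    W_Z = w[Z]                                   # per-pixel weights via table lookup
    num = np.sum(W_Z * (ln_G[Z] - ln_shutter_times[:, None, None]), axis=0)
    den = np.sum(W_Z, axis=0)
    return num / np.maximum(den, 1e-8)           # guard against all-zero weights
```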
Hence, we can generate the following radiance maps and response curves per channel.
<img class="img-curve" src="https://i.imgur.com/f0t9EgW.png" />
<center class="img-caption">Radiance Map and Response Curve of <span style="font-weight: 800">XingYi NanShan</span> (Debevec's)
</center>
<img class="img-curve" src="https://i.imgur.com/SlOeWYl.png" />
<center class="img-caption">
Radiance Map and Response Curve of <span style="font-weight: 800">NTU Library (far)</span> (Debevec's)
</center>
### Robertson's Method (bonus)
Contrary to Debevec's method, Robertson's considers all pixels in the images and optimizes a slightly different target function:
$$
\hat{g},\ \hat{E}_n = \arg \min_{g,\ E_n} \sum^N \sum^P w(Z_{np})\,(g(Z_{np}) - E_n \Delta t_p)^2
$$
where two parameters need to be estimated, namely $g$ and $E_n$ (note that $n$ here indexes all pixels of a whole image). Since optimizing both variables at the same time is difficult, Robertson's method adopts a standard alternating least squares scheme: alternately and iteratively fix one variable and find the other's optimum until convergence. Stated another way, when the $g$ function is assumed known, we can derive:
$$
E_n = \dfrac{\sum^P w(Z_{np})\,g(Z_{np})\,\Delta t_p}{\sum^P w(Z_{np})\,\Delta t^2_p}
$$
Conversely, when the irradiance is assumed known, we can derive:
$$
g(m) = \dfrac{1}{|E_m|}\sum_{np \in E_m} E_n \Delta t_p
$$
Since $g$ is determined only up to scale (it has infinitely many solutions), we normalize it in each update iteration. It is worth noting that, in our implementation:
1. Different channels are operated independently.
2. Each channel's $g$ is initialized as a linear response.
3. $g(127)$ is set to be $1$ (at the first iteration) or normalized (the other iterations).
4. Triangle function is adopted as our weighting function.
5. The number of iterations is set to $20$, since we observed that convergence is generally reached within $20$ iterations.
```python=
import numpy as np

def weighting_function(shape, c=0.5, d=64, eps=1e-3):
    '''
    c is used in gaussian: the larger the smoother, the smaller the steeper.
    d is used in trapezoid, indicating the length of both sides.
    eps is used in triangle and trapezoid, to avoid dividing by zero when optimizing E.
    '''
    assert shape in ['uniform', 'triangle', 'gaussian', 'trapezoid']
    if shape == 'uniform':
        w = np.ones(256)
    elif shape == 'triangle':
        arr = np.arange(0, 1, 1 / 128)
        w = np.concatenate((arr, np.flip(arr)))
        w[0] = w[255] = eps
    elif shape == 'gaussian':
        arr = np.arange(0, 1, 1 / 128)
        arr = np.concatenate((-np.flip(arr), arr))
        w = np.exp(-arr**2 / c)
    elif shape == 'trapezoid':
        arr = np.arange(0, 1, 1 / d)
        w = np.ones(256)
        w[:d] = arr
        w[-d:] = np.flip(arr)
        w[0] = w[255] = eps
    return w
```
```python=
import cv2
import numpy as np

def robertson_method(exposure_images, exposure_times, weight_shape='triangle', iteration=20):
    '''
    Generate an HDR image from exposure images using the Robertson algorithm.
    We assume the g curve is initially a linear response.
    RGB channels are calculated independently.
    The values returned (E, g):
        E is the irradiance image
        g is the camera response curve
    Principle of this approach:
        (a) Assuming g(Z_ij) is known, optimize for E_i
        (b) Assuming E_i is known, optimize for g(Z_ij)
        Iterate (a) and (b) until convergence
    Note:
        exposure_images and exposure_times are numpy arrays.
        exposure_images: shape (N, H, W, C)
        exposure_times: shape (N,)
        The order of exposure_times must match the order of exposure_images.
    weight_shape:
        Weighting function shape, one of four types:
            1. uniform
            2. triangle
            3. gaussian
            4. trapezoid
    '''
    N, H, W, C = exposure_images.shape
    # g has shape (C, 256); each channel is initially a linear response.
    g_init = np.arange(256) / 127
    g = np.repeat(g_init.reshape(1, 256), repeats=C, axis=0)
    E = np.zeros((C, H, W))
    weight = weighting_function(shape=weight_shape)
    # Change exposure_images shape to (C, N, H, W).
    exposure_images = exposure_images.transpose(3, 0, 1, 2)
    # Broadcast exposure_times and exposure_times_square to shape (C, N, H, W).
    broadcast_exposure_times = np.broadcast_to(exposure_times[None, ..., None, None], (C, N, H, W))
    broadcast_exposure_times_square = np.broadcast_to(np.square(exposure_times)[None, ..., None, None], (C, N, H, W))
    # To speed up, we use a look-up table (LUT) and avoid repetitive calculations.
    Wij = cv2.LUT(exposure_images, weight)
    numerator = Wij * broadcast_exposure_times
    denominator = np.sum(Wij * broadcast_exposure_times_square, axis=1)
    for c in range(C):
        print(f'channel={c}')
        # This speedup consumes extremely large amounts of memory: more than 10
        # full-size (6000x4000) images can exceed 64GB, so use it only with 64GB+
        # of memory, or simply resize the input images.
        index = np.ones((256, N, H, W), dtype=bool)
        pixel_count = np.ones(256, dtype=int)
        for pixel_value in range(256):
            index[pixel_value] = exposure_images[c] == pixel_value
            pixel_count[pixel_value] = np.count_nonzero(index[pixel_value])
        for i in range(iteration):
            print(f'iteration={i}')
            # optimize E
            Gij = cv2.LUT(exposure_images[c], g[c])
            E[c] = np.sum(numerator[c] * Gij, axis=0) / denominator[c]
            # optimize g
            for pixel_value in range(256):
                g_sum = 0
                for n in range(N):
                    g_sum += (E[c, index[pixel_value, n]] * exposure_times[n]).sum()
                g[c, pixel_value] = g_sum / pixel_count[pixel_value]
            # normalize g
            g[c] /= g[c, 127]
    return E, g
```
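A hypothetical call, assuming the aligned LDR frames have been stacked into a single `uint8` array (the shutter times shown are those of ***XingYi NanShan***):
```python=
import numpy as np

stack = np.stack(aligned_images)               # assumed shape (N, H, W, C), uint8
times = np.array([8, 4, 2, 1, 1/2, 1/4, 1/8])  # matching shutter times in seconds
E, g = robertson_method(stack, times, weight_shape='triangle', iteration=20)
```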
<img class="img-curve" src="https://i.imgur.com/wW5i8yy.png" />
<center class="img-caption">Radiance Map and Response Curve of <span style="font-weight: 800">XingYi NanShan</span> (Robertson's)
</center>
<img class="img-curve" src="https://i.imgur.com/HU3u4k9.png" />
<center class="img-caption">
Radiance Map and Response Curve of <span style="font-weight: 800">NTU Library (far)</span> (Robertson's)
</center>
## Tone Mapping (bonus)
In this section, we tone mapped the above radiance maps using Photomatix's tone-mapping utilities; that is, Debevec's and Robertson's HDR results for each image set were fed into the Photomatix software to obtain the tone-mapped results. It is worth noting that since Debevec's results are consistently better than Robertson's, we only demonstrate Debevec's tone-mapped results on the two sets. The results are shown below.

<center class="img-caption">
HDR Tone Mapping Result of <span style="font-weight: 800">XingYi NanShan</span>
</center>

<center class="img-caption">
HDR Tone Mapping Result of <span style="font-weight: 800">NTU Library (far)</span>
</center>
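Our final tone-mapped images all come from Photomatix. For a quick programmatic preview, a minimal OpenCV sketch like the one below (Reinhard operator; input file name taken from our output list, gamma chosen arbitrarily) also works:
```python=
import cv2

hdr = cv2.imread('my_debevec.hdr', flags=cv2.IMREAD_ANYDEPTH)  # float32 radiance map
tonemap = cv2.createTonemapReinhard(gamma=2.2)                 # arbitrary gamma choice
ldr = tonemap.process(hdr)                                     # values roughly in [0, 1]
cv2.imwrite('tonemap_preview.png', (ldr * 255).clip(0, 255).astype('uint8'))
```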
## Discussion and Conclusion
We experimented with several situations and learned a lot from this project. In this section, we (1) summarize the difficulties we encountered and (2) compare the results of different situations (algorithms and parameters).
**Discussion on Image Alignment.** It is worth noting that the choice of reference/base image and the threshold setting strongly affect the results and determine the success of the alignment task.
For the former, if we randomly choose an image as the base image, the alignment task is quite prone to failure. However, if we properly choose the image with the median exposure level as the reference image, the efficacy of the alignment increases.
For the latter, the threshold setting is crucial to the image alignment task. Take ***XingYi NanShan*** for example: at the beginning, we set the threshold to $0$, causing large areas of the sky to be thresholded to white (as in the following image). This means the intensity values in the sky are close to the median, even though such areas contain no pivotal feature points (i.e., we should not take them into consideration).

<center class="img-caption">
Threshold 0 Bitmap of <span style="font-weight: 800">XingYi NanShan</span>
</center>
After that, we set the threshold to $6$ and the problem was resolved. In the following picture, the red area marks the region we should neglect when aligning. In this situation, the errors in the sky are circumvented.

<center class="img-caption">
Threshold 6 Bitmap of <span style="font-weight: 800">XingYi NanShan</span>
</center>
As a result, the alignment task can be fairly successful.

<center class="img-caption">
Alignment result of <span style="font-weight: 800">XingYi NanShan</span>
</center>
**Discussion on HDR Methods and Tone-Mapping Results.** There are many commonalities and differences between Debevec's method, Robertson's method, and their corresponding tone-mapping results. Regarding their common aspects, we found that:
1. The type of weighting function matters to the results, including the irradiance map, the response curve, and the tone-mapped result. We experimented with uniform, Gaussian, triangle, and trapezoid weighting functions. It turns out that, for either HDR algorithm, the triangle and trapezoid weighting functions are the most conducive to good results.
2. We restrict the minimum value of the weighting function to at least $\epsilon = 10^{-3}$, to avoid division by zero when optimizing $E$.
Regarding the performance of *Paul Debevec's*:
1. It achieves relatively better results than Robertson's in most of our shooting situations.
2. Results are quite sensitive to the initial point sampling method; we found that discrete uniform sampling gives a good start.
3. Debevec's shows little blue shift, and its algorithm yields smoother curves thanks to its regularization term.
4. Compared to OpenCV, our implementation of Debevec's has good image properties, such as brightness distribution, contrast, and clarity of edge detail.
Regarding the performance of *Mark Robertson's*:
1. The Robertson method is much slower than the Debevec method because it uses all pixels of the images.
2. Images produced with the Robertson method have some color artifacts. Tone mapping with the OpenCV package will cause rainbows in over-exposed areas; this phenomenon shows up in both OpenCV's and our implementations.
3. We encountered many difficulties in realizing Robertson's method, especially when speeding up its computation.
4. To optimize Robertson's, we use a look-up table to avoid repetitive computations (see the sketch after this list). This yields over a 2x speedup compared to our original implementation, nearly matching the speed of OpenCV.
5. For Robertson's method, our results look better than OpenCV's Robertson results.
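As a minimal illustration of the look-up-table idea in point 4 (toy data, not our actual pipeline):
```python=
import numpy as np

g_table = np.random.rand(256)                  # stands in for the current g estimate
img = np.random.randint(0, 256, (400, 600), dtype=np.uint8)
Gij = g_table[img]                             # one 256-entry lookup replaces per-pixel g()
```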
Based on our experiments and observations, we can summarize the pros and cons of the two methods in several aspects:
| HDR Algorithm | Debevec's method | Robertson's method|
| -------- | -------- | -------- |
| Speed | Fast, since it just solves a linear system. | Slow, even with the look-up-table speedup. |
| Quality | Better: no blue shift in the tone-mapped results, and the response curve is smoother. | Worse: the tone-mapped results look flat and have some color artifacts. |
| Stability | Sensitive to the initial point sampling; uniform sampling is a good start. | Stable, since it always converges to one solution. |
| Our preference | **YES** | **NO** |
To give an in-depth picture and comparison, here we compare over-exposed images and final HDRs over the same areas in ***XingYi NanShan***. Please focus on the brighter details:
| Over-exposed | HDR |
| - | - |
|  |  |
|  |  |
|  |  |
Although we are all newbies to photography, it was great fun to shoot photos in person with a professional manual camera. Besides, despite encountering many obstacles while implementing the HDR algorithms, we finally accomplished them and even did better than some packages!
## HDR Photo Gallery
Please enjoy our HDR images, generated from different image sets, including ***Huashan Prairie, NTU EE, NTU Math, NTU Library (close), Taipei 101 (inside), Taipei 101 (outside)***.

<center class="img-caption">
HDR Result of 7 LDR Photos in <span style="font-weight: 800">Taipei 101 (outside)</span>
</center>

<center class="img-caption">
HDR Result of 8 LDR Photos in <span style="font-weight: 800">Taipei 101 (inside)</span>
</center>

<center class="img-caption">
HDR Result of 9 LDR Photos in <span style="font-weight: 800">NTU Library (close)</span>
</center>

<center class="img-caption">
HDR Result of 11 LDR Photos in <span style="font-weight: 800">NTU Math</span>
</center>

<center class="img-caption">
HDR Result of 10 LDR Photos in <span style="font-weight: 800">NTU EE</span>
</center>

<center class="img-caption">
HDR Result of 22 LDR Photos in <span style="font-weight: 800">Huashan Prairie</span>
</center>
## Program's README
These are the instructions for running our NTU VFX HW1 program, HDR Imaging.
### Dependencies
We implemented all algorithms in Python. Please make sure your Python version is at least 3.8 so the scripts execute without errors.
```
python>=3.8
```
Regarding the Python packages, we mainly adopt the three below; please use `pip` or `pip3` to install them.
```
opencv-python
matplotlib
numpy
```
### Program Usage
**Step 1: Getting an HDR Image**. One line, all results. Simply run the command below and you will get $13$ outputs. That is,
```bash
# original images array
original_images_set_name.png
# cv2 results for comparison
map_curve_cv2_debevec.png
map_curve_cv2_robertson.png
cv2_debevec.hdr
cv2_robertson.hdr
tonemap_ldr_cv2_debevec.png
tonemap_ldr_cv2_robertson.png
# our results
map_curve_my_debevec.png
map_curve_my_robertson.png
my_debevec.hdr # used for photomatix tone mapping
my_robertson.hdr # used for photomatix tone mapping
tonemap_ldr_my_debevec.png
tonemap_ldr_my_robertson.png
```
So, please run the following command.
```bash=
cd code
python3 hdr.py <img_dir> <shutter_times> <is_downsample> <set_name>
# Copy and paste this command for examplar images
python3 hdr.py ../data 8,4,2,1,0.5,0.25,0.125,1/15,1/30 downsample NTU_Library_Far
```
**Step 2: Tone Mapping**. Please feed the `.hdr` files into Photomatix Pro 6.3 (recommended by the professor) and pick one style from the `Detailed`, `Balanced`, or `Realistic` modes.
## References
* Greg Ward, Fast Robust Image Registration for Compositing High Dynamic Range Photographs from Hand-Held Exposures, Journal of Graphics Tools 2003.
* Paul E. Debevec, Jitendra Malik, Recovering High Dynamic Range Radiance Maps from Photographs, SIGGRAPH 1997.
* Mark Robertson, Sean Borman, Robert Stevenson, Estimation-Theoretic Approach to Dynamic Range Enhancement using Multiple Exposures, Journal of Electronic Imaging 2003.
## Appendix

<center class="img-caption">
Lo Shoots Fu Shooting Somewhere
</center>