# Week 3 - Stereo Vision Fundamentals

We have captured our 3D object using two cameras, forming a stereo vision system. The next step is to find matching points between the two image projections. This is where the concept of stereo matching comes into play. Stereo matching is the process of identifying corresponding pixels in a pair of images. Once the matching is complete, depth can be inferred, allowing us to translate 2D positions into 3D depth information.

There are two possible scenarios when capturing a scene. In the first scenario, the cameras may be moving, and the resulting 2D shifts are known as parallax. In the second scenario, the relative positions of the cameras remain fixed. It is generally easier to work with cameras that have a fixed baseline, because once the epipolar geometry is established, a one-time camera calibration is sufficient, and the cameras can operate with a known spatial relationship. In contrast, using moving cameras poses a greater challenge, as continuous recalibration is required to maintain accuracy.

A key element of stereo matching is the correspondence problem. This refers to the task of matching parts of a scene in one image with the corresponding parts in the other image. Traditionally, this problem was addressed using search and optimization techniques, but more recently, deep learning has played a crucial role in improving accuracy and efficiency.

As a refresher, let us now revisit the fundamentals of stereo geometry. Consider the figure below:

![stereo_coordinate_systems_rehg_dellaert](https://hackmd.io/_uploads/SkkA62gggx.jpg)

There is a 3D point in space, let's call it P, that we aim to capture using our stereo camera setup. The distance between the cameras, known as the baseline, is B, and the image plane is formed at a distance f, which is the focal length of the camera. Let us visualize this: when capturing point P with our cameras, we are essentially capturing the light rays emitted or reflected from P towards our cameras. The two cameras (assumed to be parallel to each other, with a known translation vector and no rotation, i.e., an identity rotation matrix) form a triangle with point P; this triangle lies in a plane known as the epipolar plane.

Our objective is to reverse the projection from 3D space back to the 2D image planes. If we seek to find a matching point, say (x, y), in the scene captured by camera 1, we only need to search along the x-axis in the image from camera 2. This is because the corresponding point will lie on the same y-coordinate, but will appear at a different x-coordinate, shifted due to the horizontal translation between the cameras.

Armed with the knowledge of the geometric relation between the two cameras, depth can be estimated as

$$
Z = \frac{B \cdot f}{x_l - x_r}
$$

where the difference $x_l - x_r$ is called the disparity. The main takeaway here is that the disparity between the left and right images is inversely proportional to depth. Scene points that are far away shift only slightly, or not at all, between the two views, while points close to the camera appear to shift a lot, which explains their larger disparity.

Equally, if you take two photos of a nearby scene and spend some time observing them, it is easy to pick out the extra details and the translation between them, while photos taken of a distant scene tend to appear similar and may require a keen eye to spot the peculiarities (this was just my own way of sinking in this information). The focal length and baseline are constants (these are known from camera calibration and the equipment setup).
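As a quick sanity check of this relation, here is a small worked example with made-up numbers (the baseline, focal length, and disparity below are purely illustrative, not values from any calibration in this post):

$$
Z = \frac{B \cdot f}{x_l - x_r} = \frac{0.1\ \text{m} \times 700\ \text{px}}{35\ \text{px}} = 2\ \text{m}
$$

Halving the disparity to 17.5 px would double the estimated depth to 4 m, which matches the intuition that far-away points barely shift between the two views.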
## Classical Stereo Matching Techniques

Back to our correspondence problem :) how can we tell if a patch in the first image is similar to a patch in the second image? As mentioned earlier, the correspondence problem has traditionally been posed as a search and optimization problem, where methods such as Newton's method and gradient descent have been used. The idea is that a small box, similar to a kernel, is defined, and this box is slid over the search area to perform a pixel-by-pixel comparison while also taking noise into account.

Normally, to speed up the computation of the correspondence problem and to improve accuracy, rectified stereo images are used. You might ask, what are rectified stereo images? Rectified stereo images are stereo image pairs that have been transformed so that the epipolar lines are horizontal and aligned between the two images. Effectively, the correspondence problem is reduced from a 2D problem to a 1D problem, and you can see why this improves computational performance.

Let me now spare you the talk and carry on with how we can implement the optimization problem. We need two things to succeed:

* If you said a pen and paper, you are right
* You will need to store the image patches in vector representation: think of stacking the pixel values on top of one another so that you end up with a vector, and then the comparison can be made

:::info
Assuming a good match is found, what do we expect? To obtain a good match, all we need to do is compute some simple inner products, a normalized correlation, or a sum of squared differences.
:::

Recall from Linear Algebra classes: when we want to understand the similarity between vectors, we often use the inner product (or dot product) as a tool. If two vectors point in similar directions, their inner product yields a high value. If they are orthogonal, that is, completely uncorrelated, their inner product is zero. This gives us a powerful way to quantify similarity: a high inner product implies similarity, while a low or zero value implies dissimilarity.

This reminds me of a quote by G.H. Hardy, who famously described proof by contradiction as "a far finer gambit than any chess gambit." In a similar spirit, we've just built an intuition for the inner product by examining what happens when vectors are dissimilar, almost like proving the power of the method by imagining its failure.

The other metrics that can be used are:

**SSD (L2 norm)** measures the sum of squared pixel value differences between two patches, where $d$ is the candidate disparity (horizontal shift):

$$
SSD = \sum_{i,j} \left( P_1(i,j) - P_2(i+d,j) \right)^2
$$

**SAD (L1 norm)** measures the sum of absolute pixel value differences:

$$
SAD = \sum_{i,j} \left| P_1(i,j) - P_2(i+d,j) \right|
$$

### Comparison:
- **SSD** is sensitive to large differences, since errors grow quadratically, so it is affected by outliers.
- **SAD** is more robust to noise and outliers, as it uses absolute differences, which grow linearly.
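To make this concrete, here is a minimal NumPy sketch of patch matching along a single scanline of a rectified pair (the function name, arguments, and search direction are my own illustration, not a library API; the window is assumed to fit inside both images):

```python
import numpy as np

def match_patch_along_scanline(left, right, y, x, half=7, max_disp=64, metric="ssd"):
    """Find the disparity of the patch centred at (y, x) in the left image by
    sliding a same-sized window along the same scanline of the right image."""
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    costs = []
    for d in range(max_disp):
        xr = x - d                     # in a rectified pair the match shifts to the left
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(np.float32)
        diff = ref - cand
        costs.append(np.sum(diff ** 2) if metric == "ssd" else np.sum(np.abs(diff)))
    best_d = int(np.argmin(costs))     # disparity with the lowest matching cost
    return best_d, costs
```

The list of costs over all candidate disparities is exactly the error profile discussed in the next section.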
## Error Profile

The similarity error between patches is examined over the range of candidate disparities. Patches whose error profile is close to convex, with a single well-defined minimum, are desirable; flat or highly non-convex profiles indicate ambiguous matches.

## Smoothing

Disparity maps obtained using SAD (Sum of Absolute Differences) and SSD (Sum of Squared Differences) are not always smooth. Pixels that lie next to each other on the same surface can exhibit varied disparities, resulting in noisy outputs. Ideally, neighboring pixels should show a smooth transition, especially within the same surface, though this is less applicable at object boundaries. However, this expected smoothness is often violated, necessitating the use of smoothing constraints. Some commonly used smoothing techniques include:

* Semi-Global Matching (SGM)
* Semi-Global Block Matching (SGBM)

SGM introduces a penalty term to the matching cost function, which discourages large disparity differences between neighboring pixels. This encourages smoother disparity maps by penalizing abrupt changes.

Smoothing using SGM and SGBM is not a one-size-fits-all solution for noisy data. These techniques are particularly effective in background or uniformly textured regions where pixel values are similar. In such areas, smoothing constraints improve the results without introducing drastic changes. In occluded regions, where SAD or SSD fails to find a good match, smoothing constraints help by forcing the disparity values to align more closely with those of neighboring pixels. As previously mentioned, these smoothing techniques perform best in areas with uniform textures. This leads to the early conclusion that their performance degrades in regions with fine detail, rapid pixel transitions, or high edge density, areas where depth and disparity change significantly.

* It is important to observe that SSD and SAD suffer from large disparity errors in areas where there is no texture.
* Having a small matching window might make it impossible to differentiate unique features, while having a large matching window might result in many areas sharing so much in common that the matches are not really helpful.
* Textureless image regions pose a unique challenge for matching, because without any unique identifier or feature in an image, matching is impossible. This calls for the need to integrate additional information, such as taking wall edges into consideration.
* Other factors, such as specular reflections, present a significant problem.

As a proposed solution, convolutional neural networks can be considered.

:::success
We can express stereo matching as an energy minimization problem, where match quality is one of the constraints and smoothness is the other. So our goal will be to minimize the energy, where Energy = matchCost + SmoothnessCost
:::

"problems in early vision involve assigning each pixel a label, where the labels represent some local quantity such as disparity. Such pixel-labeling problems are naturally represented in terms of energy minimization, where the energy function has two terms: one term penalizes solutions that are inconsistent with the observed data, whereas the other term enforces spatial coherence (piecewise smoothness)."

Quoted from [Energy Minimization Methods](https://vision.middlebury.edu/MRF/pdf/MRF-PAMI.pdf)
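To make Energy = matchCost + SmoothnessCost concrete, here is a minimal sketch that evaluates this energy for a candidate disparity assignment along one rectified scanline (the function name and the smoothness weight are my own illustration, not taken from any particular paper or library):

```python
import numpy as np

def scanline_energy(left_row, right_row, disparities, smooth_weight=10.0):
    """Energy = data (match) cost + weighted smoothness cost for one scanline.

    left_row, right_row : 1D arrays of pixel intensities from a rectified pair
    disparities         : candidate integer disparity assigned to every pixel
    """
    match_cost = 0.0
    for x, d in enumerate(disparities):
        xr = x - d
        if 0 <= xr < len(right_row):
            # SAD data term: how well does this disparity explain the pixels?
            match_cost += abs(float(left_row[x]) - float(right_row[xr]))
        else:
            match_cost += 255.0        # heavily penalize out-of-range correspondences
    # Smoothness term: penalize disparity jumps between neighbouring pixels
    smoothness_cost = float(np.sum(np.abs(np.diff(disparities))))
    return match_cost + smooth_weight * smoothness_cost
```

An optimizer (graph cuts, dynamic programming, or the semi-global aggregation used by SGM) would then search for the disparity assignment that minimizes this energy over the whole image.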
" Quoted from [Energy Minimization Methods](https://vision.middlebury.edu/MRF/pdf/MRF-PAMI.pdf) ### Summary: Stereo Reconstruction Pipeline * Calibrate cameras * Rectify Images * Compute disparity * Estimate depth ____ #### Sources of Errors * Camera calibration * Poor image resolution * Occlusions * Violations of brightness constancy (specular reflection) * Large motions * Low contrast images A 3D object with little or no texture poses a significant challenge when it comes to depth estimation and 3D reconstruction since there are no unique features that can be used to extract depth information. Therefore, a big question we ask oursevles is , can we still recover depth information or reconstruct these? Of course we can still recover image depth by employing the use of structured light. The disparity between the laser points when focused on the same scanline in the image make it possible to determine the 3D coordinate. ## Python Code Implementation - Disparity Maps ```python= #import needed libraries import numpy as np import matplotlib.pyplot as plt import cv2 # Read image with opencv grayscale img_1 = cv2.imread("view1.png", 0) img_2 = cv2.imread("view5.png", 0) # Convert to RGB colorspace img_rgb_1 = cv2.cvtColor(img_1, cv2.COLOR_BGR2RGB) img_rgb_2 = cv2.cvtColor(img_2, cv2.COLOR_BGR2RGB) # Display using matplotlib plt.subplot(1,2,1) plt.imshow(img_rgb_1) plt.axis("off") plt.title("left image") # Display using matplotlib plt.subplot(1,2,2) plt.imshow(img_rgb_2) plt.axis("off") plt.title("Right Image") #show both images at once plt.tight_layout() plt.show() ``` ![image](https://hackmd.io/_uploads/B14CpnNlgx.png) ```python= # Computing disparity def ComputeDisparity(img1, img2, bsize, numDisparities): #disparity - pixel shift between left and right images #initialize stereo block matching object my_stereo = cv2.StereoBM_create(numDisparities=numDisparities, blockSize = bsize) #compute disparity my_disparity = my_stereo.compute(img1, img2) #normalize images for representation min = my_disparity.min() max = my_disparity.max() disparity = np.uint(255 * (my_disparity - min)/max - min) return disparity #test case disparity_map = ComputeDisparity(img1=img_1, img2=img_2, bsize=15, numDisparities=16) #disparity map with block size = 5 and num of disparities = 32 plt.title("disparity Map NumDisparities = 16, bsize = 15") plt.axis("off") plt.imshow(disparity_map) plt.show() ``` ![image](https://hackmd.io/_uploads/H1RzRhNelx.png) ![image](https://hackmd.io/_uploads/BybO02Eegx.png) ![image](https://hackmd.io/_uploads/rk0cRh4lex.png) ### Observation The number of disparities implementation in python requires passing an argument that is a multiple of 16 (algorithmic and optimization constraint). A larger value of the disparity means that the algorithm can detect objects which are really close accurately but this increases the computation time as well as the noise. For scenes which are far, having number of disparities set to 16 is fine while for closer objects it is recommended to have a value of say 64 or higher. When it comes to the block matching part, the algorithm performs block matching what i mean here is that instead of pixel by pixel comparison being carried out, square blocks(windows) are implemented instead which have odd values and are centered around a pixel. The larger the block size the smooth the disparity maps(blurrinng of fine details), this can be disadvantageous at the boundaries where there is a sharp pixel transition. 
### StereoBM Parameters Summary

| Parameter | Description | Recommended Values | Trade-offs |
|-------------------|-----------------------------------------------------------------------------|--------------------|---------------------------------------------|
| `numDisparities` | Maximum disparity range (horizontal pixel shift). Must be divisible by 16. | 16, 32, 64, 128 | 🔹 Higher = can detect closer objects<br>🔹 Increases processing time |
| `blockSize` | Size of the block window used for matching. Must be odd and ≥ 5. | 5, 9, 15, 21 | 🔹 Larger = smoother but less detail<br>🔹 Smaller = more detail, more noise |

---

### Tips

- Start with `numDisparities = 16` and `blockSize = 5`.
- Increase `numDisparities` if objects are close to the camera.
- Decrease `blockSize` if you need sharper depth edges, but beware of noise.
- Both images must be **rectified** and **grayscale** before computing disparity.

### Implementation

```python
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left_img, right_img)
```

____

## StereoSGBM - OpenCV

This is a more sophisticated algorithm that computes disparity given a pair of rectified stereo images. The obtained disparity map is important since it gives insight into the depth of the scene.

### StereoSGBM Algorithm

At the heart of SGBM is the block window, an extension of the block window from StereoBM. The quality of a match is evaluated using either SAD or SSD, as discussed earlier, with the goal being minimization of the cost function. StereoSGBM combines block matching between the left and right images with semi-global optimization, where smoothness is enforced along multiple directions.

:::info
SGBM not only compares pixel blocks but also considers the context around them
:::

### Step-wise Approach

1. Matching cost computation - SAD/SSD.
2. Cost aggregation (semi-global) - instead of comparing only locally, the comparison is done along multiple 1D paths, which enforces smoothness while ensuring that edges are preserved.
3. The disparity with the lowest cost is selected.
4. The disparity map is cleaned up with speckle filtering, a uniqueness check, and a left-right consistency check (see the sketch after this list).
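The left-right consistency check from step 4 can be illustrated with a minimal NumPy sketch; the function name and threshold are my own (OpenCV exposes this check through the `disp12MaxDiff` parameter discussed below), and the disparities here are assumed to already be in plain pixel units:

```python
import numpy as np

def left_right_consistency(disp_left, disp_right, max_diff=1):
    """Invalidate pixels whose left-referenced and right-referenced disparities disagree.

    disp_left[y, x]  : disparity computed with the left image as reference
    disp_right[y, x] : disparity computed with the right image as reference
    """
    h, w = disp_left.shape
    xs = np.arange(w)
    consistent = np.zeros_like(disp_left, dtype=bool)
    for y in range(h):
        # Pixel (y, x) in the left image should correspond to (y, x - d) in the right image
        xr = xs - disp_left[y].astype(int)
        in_range = (xr >= 0) & (xr < w)
        diff = np.abs(disp_left[y, in_range] - disp_right[y, xr[in_range]])
        consistent[y, in_range] = diff <= max_diff
    # Mark inconsistent (typically occluded) pixels as invalid with -1
    return np.where(consistent, disp_left, -1)
```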
### Advantages of SGBM

* More accurate than StereoBM in challenging scenes
* Preserves depth edges and works on textureless regions
* Suitable for real-time applications

### Limitations

* Slower than StereoBM
* Requires rectified stereo images

### Python Code Implementation StereoSGBM

```python=
# Read images with OpenCV as grayscale
img_1 = cv2.imread("view1.png", 0)
img_2 = cv2.imread("view5.png", 0)

block_size = 11
min_disp = -128
max_disp = 128
num_disp = max_disp - min_disp
uniquenessRatio = 5
speckleWindowSize = 200
speckleRange = 2
disp12MaxDiff = 0

stereo = cv2.StereoSGBM_create(
    minDisparity=min_disp,
    numDisparities=num_disp,
    blockSize=block_size,
    uniquenessRatio=uniquenessRatio,
    speckleWindowSize=speckleWindowSize,
    speckleRange=speckleRange,
    disp12MaxDiff=disp12MaxDiff,
    P1=8 * 1 * block_size * block_size,
    P2=32 * 1 * block_size * block_size,
)
disparity_SGBM = stereo.compute(img_1, img_2)

plt.imshow(disparity_SGBM, cmap='gray')
plt.colorbar()
plt.axis("off")
plt.show()
```

#### Brief explanation

* `minDisparity` - the smallest pixel shift considered during the matching process.
* `numDisparities` - the maximum disparity range to search over; must be divisible by 16.
* `blockSize` - the size of the matching block around each pixel; larger blocks give smoother results, while smaller blocks give finer detail but can pick up more noise.
* `uniquenessRatio` - a metric used to filter weak matches; the higher the ratio, the stricter the filtering.
* `speckleWindowSize` - filters out small regions of noise in the disparity map.
* `speckleRange` - the maximum allowed disparity variation within a region, which helps eliminate small and inconsistent regions.
* `disp12MaxDiff` - checks left-to-right and right-to-left consistency; if the difference is too large, the disparity is invalidated.
* `P1` - penalty on small disparity changes between neighbouring pixels; in short, it encourages smooth transitions while allowing small jumps.
* `P2` - penalty on large disparity changes, used to discourage abrupt jumps unless there is a reason for them, e.g. an object boundary.

### Depth Map

![image](https://hackmd.io/_uploads/BJ6agNrxgx.png)

### MATLAB Implementation of Disparity Maps

```MATLAB=
leftImage = imread('im2.png');
rightImage = imread('im6.png');

% Convert to grayscale if they are color images
if size(leftImage, 3) == 3
    leftGray = rgb2gray(leftImage);
    rightGray = rgb2gray(rightImage);
else
    leftGray = leftImage;
    rightGray = rightImage;
end

disparityMap = disparitySGM(leftGray, rightGray);

imshow(disparityMap, [min(disparityMap(:)), max(disparityMap(:))]);
colormap jet;
title('Disparity Map');
```

The stereo image pair used is shown below (credits: Middlebury Dataset)

![image](https://hackmd.io/_uploads/rJtBde8xlx.png)

The generated disparity map is shown below

![disparitymap](https://hackmd.io/_uploads/r1eoFe8xxx.png)

We can observe the disparity map with an associated color map: the red color indicates objects closer to the camera, while the cooler colors indicate objects that are farther away.

## References

[John Lambert Stereo Vision](https://johnwlambert.github.io/stereo/)