# NeurIPS Rebuttal
We thank the reviewers for their careful comments and constructive suggestions. Below we list all the concerns and address them one by one in detail.
---
### Q2: A superpixel may contain two or more categories, which will obtain confusing features from two or more categories.
Strictly speaking, the phenomenon the reviewer is concerned about does exist, but it only affects a small number of individual pixels and has little negative impact on the overall performance of ES-CRF. For a more quantitative explanation, we introduce Achievable Segmentation Accuracy (ASA) [1], a common metric defined as the overlap of superpixels with the corresponding ground truth, to measure the degree of confusion within the superpixels. Detailed results are shown in the table of **Q4**: ASA reaches **$89.13\%$** under our default settings, which means that very few pixels are assigned to the wrong superpixels and most boundary pixels are correctly connected with inner pixels. Thus, from a statistical perspective, the superpixels help refine the feature representations of boundary pixels and relieve the PFB problem. Moreover, some wrongly over-segmented pixels can be corrected by the pairwise message passing strategy. The experimental results in Table 4 of the paper also support this intuition.
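For clarity on how ASA is computed, below is a minimal NumPy sketch of the metric as described above: it measures the best accuracy achievable when every superpixel is assigned the majority label of its ground-truth overlap. The helper name and the toy label maps are purely illustrative, not our actual evaluation code.
```
import numpy as np

def achievable_segmentation_accuracy(sp_labels, gt_labels):
    """ASA: fraction of pixels that would be correct if every superpixel
    were assigned the dominant ground-truth label inside it."""
    correct = 0
    for sp_id in np.unique(sp_labels):
        gt_in_sp = gt_labels[sp_labels == sp_id]   # ground-truth labels inside this superpixel
        correct += np.bincount(gt_in_sp).max()     # best achievable hits for this superpixel
    return correct / gt_labels.size

# toy example: a 4x4 image split into two vertical superpixels
sp = np.array([[0, 0, 1, 1]] * 4)
gt = np.array([[0, 0, 0, 1]] * 4)   # the object boundary cuts through superpixel 1
print(achievable_segmentation_accuracy(sp, gt))   # 0.75
```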
### Q3: Why not combine a deep superpixel module in an end-to-end framework?
In this paper, we mainly focus on how to relieve the PFB problem and propose to fuse the CRF mechanism into the CNN for more effective optimization. In addition, we argue that the superpixel algorithm has the potential to help CRF eliminate the PFB problem thanks to its local object prior. Thus, the ES-CRF proposed in the paper is the simplest approach that fuses CRF, superpixels, and the CNN into an organic whole and allows a fast evaluation of our motivation. Combining SSN with our method is theoretically feasible and could lead to a more general framework. However, it raises new problems, e.g., how to properly combine the backbone network of SSN with ours and how to balance the SSN loss against the segmentation loss. We leave further exploration of these challenges to future work.
### Q4: How does the accuracy of the superpixels affect the final performance?
Since the number of superpixels directly controls the accuracy (ASA) of the superpixel algorithm, for simplicity we generate several superpixel configurations with different accuracies by varying the number of superpixels, as follows. We take DeeplabV3+ based on ResNet-50 as the baseline model and conduct experiments on the ADE20K dataset with (and without) the pairwise message passing strategy.
|SP num |ASA(%) | mIoU w/ ${\psi}^{f}_{p}$(%) |mIoU w/o ${\psi}^{f}_{p}$(%)|
|---|---|:---:|:---:|
|No|N/A|43.91|42.72|
|50|81.23|43.92|43.21|
|100|85.86|44.02|43.43|
|**200**|**89.13**|**44.20**|**43.83**|
|300|90.76|44.13|43.56|
|400|91.72|43.96|43.22|
We can see that ASA only measures the purity of the generated superpixels, so it increases as the number of superpixels increases. However, given our motivation, we also need superpixels that cover areas as large as possible, so that boundary pixels are connected with inner pixels and CRF can relieve the PFB problem. Therefore, a proper superpixel number such as 200, which maintains a good trade-off between superpixel purity and long-range dependency, achieves the best performance. We will add this experiment to our final version.
### Q5.1: The difference in motivation between ES-CRF, Segfix, and DecoupleSegNet.
DecoupleSegNet and Segfix are two typical methods focusing on boundary segmentation. DecoupleSegNet learns edge and body in two separate branches, and Segfix learns an associated inner point for every boundary point. However, both of them rely heavily on high-quality feature representations of the boundary pixels to perform well, so unreliable boundary features limit their performance to a great extent. In contrast, ES-CRF delves into the optimization issue (the PFB problem) caused by the boundary pixels and fuses CRF, superpixels, and the CNN into an organic whole to enhance the feature representations and boost the overall segmentation performance. Moreover, ES-CRF also improves boundary segmentation thanks to the characteristics of CRF and superpixels. We will make this clearer in our final version.
### Q5.2: The performance comparison among ES-CRF, Segfix, and DecoupleSegNet.
We report the detailed performance comparison among ES-CRF, Segfix, and DecoupleSegNet as follows. It is clear that ES-CRF achieves better mIoU and F-score compared with Segfix and DecoupleSegNet by relieving the PFB problem.
|Method | Backbone | mIoU(%) | F-score(%)|
|---|---|:---:|:---:|
|DeeplabV3+| ResNet101|44.6|16.15|
|+Segfix| ResNet101|45.62|18.14|
|DecoupleSegNet| ResNet101|45.73|18.02|
|**ES-CRF**|**ResNet101**|**46.02**|**18.32**|
[1] " Superpixel sampling networks", Jampani V, Sun D, Liu M Y, et al., ECCV2018.
---
## Authors' Response
### Q1: Ablation study on how many, and which, layers ES-CRF can be inserted into.
We take DeeplabV3+ based on ResNet-50 as our baseline model and perform the ablation studies on the ADE20K dataset. The logit layer is the last classification layer of DeeplabV3+, while layer4 is the last layer of ResNet. Detailed comparisons are reported as follows:
| layer3 | layer4 | logit layer |mIoU(%)|
|:---:|:---:|:---:|:---:|
||||42.72|
| ✓ |||41.85|
||✓||43.28|
|||✓|**44.20**|
|✓|✓||41.19|
|✓|✓|✓|41.88|
As discussed in Section 1, the PFB problem is caused by the confusing features in high-level feature maps, so it is more reasonable to insert ES-CRF into a high-level feature map. The results in the table above support this motivation. Specifically, inserting ES-CRF into the logit layer achieves the best performance, as the logit layer suffers most from the PFB problem due to the continuous expansion of receptive fields. In contrast, inserting ES-CRF into a lower layer, e.g., layer3, performs even worse than the baseline model. Lower layers tend to have small receptive fields and focus on learning local features, whereas ES-CRF introduces extra features through long-range dependencies, which misleads low-level local feature learning and results in poor performance. Detailed ablation studies will be added in our final version.
### Q2: How to implement the convolution over the concatenated neighboring pixels in (7)?
We adopt **torch.nn.functional.unfold** provided by the **PyTorch** framework to concatenate neighboring pixels, followed by a conventional $1\times1$ convolutional layer with a single output channel.
Here is our pseudocode, written as a simplified PyTorch sketch:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

# feature: a high-level feature map of shape [B, C, H, W]; K: the (odd) neighborhood size
B, C, H, W = feature.shape
conv = nn.Conv2d(2 * C, 1, kernel_size=1)  # the 1x1 conv described above (learned in the model)
# im2col: gather the K*K neighbors of every pixel -> [B, C, K*K, H, W]
feature_im2col = F.unfold(feature, kernel_size=K, padding=K // 2).view(B, C, K * K, H, W)
result = []
for i in range(K * K):
    # concatenate the center feature with its i-th neighbor -> [B, 2*C, H, W]
    fused_feature = torch.cat([feature, feature_im2col[:, :, i, :, :]], dim=1)
    result.append(torch.sigmoid(conv(fused_feature)))  # score: [B, 1, H, W]
feature_compatibility = torch.cat(result, dim=1)  # [B, K*K, H, W]
```
By the way, our code will be released for reproduction soon.
### Q3: What is the difference between G in (4) and Q in (8)?
The meaning of $G$ in Eq.(4) is consistent with its definition in Eq.(1), which is described in Lines 134-138: it denotes the set of pixels associated with pixel $i$. In our implementation, it consists of the pixels around pixel $i$ within a $K\times K$ kernel window. In addition, as described in Lines 181-187, $Q$ is the set containing pixel $i$ and the other pixels belonging to the same superpixel as $i$. We will describe both of them in more detail in our final version.
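To make the two sets more concrete, here is a small illustrative sketch (the helper names and the toy superpixel label map are hypothetical, assuming superpixels are given as a per-pixel label map) of how $G$ and $Q$ can be materialized for a given pixel:
```
import numpy as np

def neighborhood_G(i, j, H, W, K):
    """G: pixels inside the K x K window centered at (i, j), clipped to the image."""
    r = K // 2
    return [(y, x) for y in range(max(0, i - r), min(H, i + r + 1))
                   for x in range(max(0, j - r), min(W, j + r + 1))]

def superpixel_Q(i, j, sp_labels):
    """Q: pixel (i, j) together with all pixels sharing its superpixel label."""
    ys, xs = np.where(sp_labels == sp_labels[i, j])
    return list(zip(ys.tolist(), xs.tolist()))

# toy usage with a 4x4 superpixel label map
sp = np.array([[0, 0, 1, 1]] * 4)
print(len(neighborhood_G(1, 1, H=4, W=4, K=3)))   # 9 pixels in the 3x3 window
print(len(superpixel_Q(1, 1, sp)))                # 8 pixels share superpixel 0
```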
### Q4: What is the effect of applying the cosine position embeddings in (6)?
We emphasize that position embedding is a necessary component in traditional CRF methods, and cosine position embeddings and absolute pixel coordinates are the two common candidates. As described in Lines 166-175, the dot product of the combined feature **[I, p]** is used as the similarity that replaces the hand-designed Gaussian kernels. Absolute pixel coordinates are therefore no longer suitable, because larger coordinates tend to produce larger similarities; we instead adopt the cosine position embedding, which is designed for the dot-product similarity metric.
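For reference, the sketch below shows one standard way to build such a cosine (sinusoidal) position embedding for dot-product similarity; the embedding dimension and frequency schedule here are illustrative assumptions, and the exact formulation used in ES-CRF may differ.
```
import torch

def cosine_position_embedding(H, W, dim):
    """Transformer-style sinusoidal embedding over (y, x) coordinates -> [H, W, 2 * dim]."""
    # illustrative frequency schedule; ES-CRF's exact choice may differ
    freqs = torch.pow(10000.0, -torch.arange(0, dim, 2).float() / dim)   # [dim / 2]
    ys = torch.arange(H).float()[:, None] * freqs                        # [H, dim / 2]
    xs = torch.arange(W).float()[:, None] * freqs                        # [W, dim / 2]
    emb_y = torch.cat([ys.sin(), ys.cos()], dim=-1)                      # [H, dim]
    emb_x = torch.cat([xs.sin(), xs.cos()], dim=-1)                      # [W, dim]
    # broadcast both axes over the full grid and concatenate them
    return torch.cat([emb_y[:, None, :].expand(H, W, -1),
                      emb_x[None, :, :].expand(H, W, -1)], dim=-1)

p = cosine_position_embedding(64, 64, dim=32)   # [64, 64, 64], concatenated with I as [I, p]
```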
Detailed comparisons between these two position embedding methods are reported below, with DeeplabV3+ based on ResNet-50 as the baseline model. Note that we remove the superpixel message passing module for a clearer comparison.
|Method| Backbone| Position Embedding |mIoU(%)|
|---|:---:|:---:|:---:|
|DeeplabV3+|ResNet50|N/A|42.72|
|ES-CRF|ResNet50|Abs|43.04|
|**ES-CRF**|**ResNet50**|**Cosine**|**43.91**|
### Q5: What is the additional training time while using ES-CRF? Is it sensitive to the over-segmentation algorithm?
We take DeeplabV3+ as the baseline model to compare the training time. The image size is set to $512\times512$, and all experiments are conducted on 8 GeForce RTX 2080Ti GPUs with two images per GPU. The FLOPs and parameter counts are also reported as follows.
|Method|Backbone|Training Time(s)|FLOPs(G)|Parameters(M)|
|---|:---:|:---:|:---:|:---:|
|DeeplabV3+|ResNet50|0.52|177.3|41.2|
|**ES-CRF**|**ResNet50**|**0.54**|**177.4**|**41.3**|
|DeeplabV3+|ResNet101|0.71|254.8|60.1|
|**ES-CRF**|**ResNet101**|**0.74**|**255.0**|**60.2**|
We can see that our proposed ES-CRF adds negligible extra cost over the baseline model. Moreover, the training time of ES-CRF is not sensitive to the over-segmentation algorithm, because the superpixels are generated offline and only loaded during segmentation training to perform the message passing.
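As an illustration of this offline step, the sketch below precomputes and caches superpixel label maps with SLIC from scikit-image. The over-segmentation algorithm, directory layout, and file format here are only assumptions for the example, not necessarily those used in our implementation.
```
import numpy as np
from pathlib import Path
from skimage.io import imread
from skimage.segmentation import slic

def precompute_superpixels(image_dir, out_dir, n_segments=200):
    """Generate superpixel label maps once, so training only loads them from disk."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(image_dir).glob("*.jpg")):
        image = imread(img_path)
        # SLIC over-segmentation; 200 segments matches our default setting
        labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
        np.save(out_dir / (img_path.stem + ".npy"), labels.astype(np.int32))

# example call with hypothetical paths
# precompute_superpixels("ADE20K/images/training", "superpixels/training")
```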
### Q6: What is the effect of the different numbers of superpixels?
We follow the standard settings in our paper and take DeeplabV3+ based on ResNet-50 as the baseline model to show the performance of ES-CRF under different numbers of superpixels. Detailed comparisons on the ADE20K dataset are reported as follows, where **SP** denotes **S**uper**P**ixel.
|SP num | mIoU w/ ${\psi}^{f}_{p}$(%) | mIoU w/o ${\psi}^{f}_{p}$(%) |
|---|:---:|:---:|
|No|43.91|42.72|
|50|43.92|43.21|
|100|44.02|43.43|
|**200**|**44.20**|**43.83**|
|300|44.13|43.56|
|400|43.96|43.22|
From the table above, we can see that the number of superpixels does affect the performance of ES-CRF. When the number of superpixels is 200, ES-CRF achieves the best performance, as this setting strikes a good trade-off between superpixel purity and long-range dependency. Moreover, when the pairwise message passing strategy is also adopted, ES-CRF becomes more robust to the number of superpixels, because the adaptive message passing mechanism (both pairwise and superpixel-based) compensates for this variation. A detailed discussion of this experiment will be added in our final version.
### Q7: The experiments should comprise some visualization results before and after using the ES-CRF.
Thank you for the suggestion. We will add intuitive visualization comparisons showing the difference before and after using ES-CRF in our final version.
### Q8: Both the results of OCRNet [51] and SETR [55] should be in Table 3 and Table 4.
Thank you for your kind reminder. The results of OCRNet and SETR will be included in both Table 3 and Table 4 in our final version.
---
## Authors' Response
### Q1: Missing results on Cityscapes test set.
Thank you for your kind advice. We have conducted experiments on the Cityscapes test set and the generated anonymous link to the result can also be found [here](<https://www.cityscapes-dataset.com/anonymous-results/?id=c5512ad36cb606dda045fe608f56cba6849b7715d640ff3ea8ddac7661b384d3>).
We also compare our ES-CRF with other SOTA methods on the Cityscapes test set, and the results are reported as follows. It is clear that our proposed ES-CRF also achieves promising performance on the Cityscapes test set. We will add this table in our final version for a more comprehensive comparison.
|Method|Backbone | mIoU(%)|
|---|---|:---:|
|CCNet|ResNet101|81.9|
|GFFNet|ResNet101|82.3|
|RecoNet|ResNet101|82.3|
|ACNet|ResNet101|82.3|
|RANet|ResNet101|82.4|
|HANet|ResNet101|82.1|
|RPCNet|ResNet101|81.8|
|Spyet|ResNet101|81.6|
|OCRNet|ResNet101|81.8|
|SETR|T-Large|81.08|
|STLNet|ResNet101|82.3|
|**ES-CRF(Ours)**|**ResNet101**|**82.5**|
### Q2: Several typo errors.
We will carefully revise our manuscript and ensure that all typos and errors are eliminated in the final version.