# WeiPer: OOD Detection using Weight Perturbations of Class Projections
Summary:
The paper introduces a post-hoc OOD detection method called WeiPer. The method perturbs each row of the last layer's weight matrix and projects the penultimate features onto these perturbed vectors. Two OOD score formulations are proposed: a density-based score and a KL-divergence-based score. Experiments on 8 benchmarks, covering 3 ID datasets and 4 backbone models, show that the proposed score functions outperform competing methods on 4 of the benchmarks.
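For clarity, here is a minimal sketch of the perturb-and-project step as I understand it; the function name, the use of $r$ as the number of perturbed copies per class vector, and $\delta$ as a relative noise scale are my assumptions, not the paper's exact implementation:

```python
import torch

def weiper_projections(features, W, r=5, delta=0.1):
    """Sketch of the perturbed class projections (my reading of the method).

    features: (N, D) penultimate-layer features
    W:        (C, D) last-layer weight matrix (one row per class)
    r:        assumed number of perturbed copies per class vector
    delta:    assumed relative noise scale
    Returns:  (N, C * r) projections onto the perturbed weight vectors.
    """
    # Repeat each class weight vector r times and add Gaussian noise
    # scaled to the norm of that vector.
    W_rep = W.repeat_interleave(r, dim=0)                         # (C*r, D)
    noise = torch.randn_like(W_rep) * delta * W_rep.norm(dim=1, keepdim=True)
    W_pert = W_rep + noise
    # Project the penultimate features onto the perturbed vectors;
    # the density / KL-divergence scores would then be computed from these.
    return features @ W_pert.T                                    # (N, C*r)
```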
Strengths:
* The method is evaluated on the OpenOOD benchmark.
Weaknesses:
* The motivation is not clear
* Why does changing the basis improve NAC [cite]?
* How is Theorem 1 related to the proposed method?
* Some claims in the paper are hard to follow.
* The proposed score functions have many hyperparameters ($r, \delta, n_{\text{bin}}, \lambda_1, \lambda_2$) compared to other methods such as MSP, KNN, and MDS. It is also not clear how these hyperparameters are chosen in the main experiments.
* Minor:
  * In Theorem 1, shouldn't it be $m_k = \mathbb{E}(|X|^k)$?
Questions
# REpresentation Shift QUantifying Estimator (ReSQuE) for Sustainable Model Reusability
Summary
The paper proposes a method, REpresentation Shift QUantifying Estimator (ReSQuE), designed to predict the cost of retraining a model to adapt to distribution shifts. Given a pretrained model, the proposed method measures the distance between two distributions of activations, computed from the original dataset and a new dataset respectively. Experimental results show that the outputs of ReSQuE are strongly correlated with cost measures such as the number of retraining epochs, gradient norm, change in parameter magnitudes, energy consumption, and carbon emissions.
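As I read it, ReSQuE reduces to a distance between two sets of activations. Below is a generic sketch using an RBF-kernel MMD as a stand-in distance; the paper's actual distance measure and any preprocessing may differ:

```python
import numpy as np

def representation_shift(acts_old, acts_new):
    """Generic sketch of a representation-shift score between two sets of
    activations (the paper's exact distance may differ; this uses a simple
    RBF-kernel MMD as a stand-in).

    acts_old: (N, D) activations of the pretrained model on the original data
    acts_new: (M, D) activations of the same model on the shifted data
    """
    def rbf(a, b, gamma):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    # Median-heuristic bandwidth over the pooled activations (an assumption).
    pooled = np.concatenate([acts_old, acts_new], axis=0)
    d2 = ((pooled[:, None, :] - pooled[None, :, :]) ** 2).sum(-1)
    gamma = 1.0 / (np.median(d2[d2 > 0]) + 1e-12)

    # Squared MMD: larger values indicate a larger representation shift.
    return (rbf(acts_old, acts_old, gamma).mean()
            + rbf(acts_new, acts_new, gamma).mean()
            - 2 * rbf(acts_old, acts_new, gamma).mean())
```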
Strong points:
* The proposed method is sound.
* The experiments consider several datasets, types of noise, and neural-network architectures. The experimental results demonstrate a strong positive correlation between ReSQuE's estimates and the cost measures of fine-tuning pretrained models.
Weak points:
* Use cases for ReSQuE in practice are not clearly presented. How do practitioners make decisions based on outputs from ReSQuE?
* The distribution shifts only include transformed distributions created by well-specified function mappings. Even though this helps to understand the method under different noise levels, it would be beneficial to know the results under other types of shifts, e.g., the natural shifts presented in the Camelyon17 [3] or FMoW [4] datasets from the WILDS [5] benchmark.
* The result in Section 5.1, which shows that retraining has a lower computational cost than training from scratch under distribution shifts, is not novel [1, 2].
* To predict the sustainability cost of retraining models, the method fits a regression predictor on data points collected by retraining models at different noise levels. Collecting these data points might be costly and outweigh the benefit of having the regression predictor (a minimal sketch of this fitting step is given after this list).
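For reference, here is a minimal sketch of the kind of regression fit described in the last weak point; the numbers are purely hypothetical placeholders, and the choice of a linear model is my assumption:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data collected by retraining at several noise levels:
# each pair is (ReSQuE score, observed number of retraining epochs).
resque_scores = np.array([[0.05], [0.12], [0.21], [0.33], [0.47]])
retrain_epochs = np.array([3, 5, 9, 14, 20])

# Fitting the predictor is cheap, but collecting the training pairs above
# already requires full retraining runs, which is the cost concern raised here.
predictor = LinearRegression().fit(resque_scores, retrain_epochs)

# Predict the retraining cost for a new shift from its ReSQuE score alone.
print(predictor.predict(np.array([[0.28]])))
```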
Questions
* I would like to know the difference between the two measures: gradient norm and parameter change. How do these measures reflect training costs?
* In line 311, could you explain why using specific hyper-parameters can impede comparison with other noise levels?
* Experiments in Section 5.5 show that the outputs of ReSQuE can be leveraged to predict the number of epochs needed in the retraining phase. However, to enable a common regression predictor across model architectures, pretrained models must be retrained with small learning rates. As shown in Figure 4 and Figure 6, this setting increases the sustainability cost (for example, with noise level 1 the energy consumption is less than 0.005 kWh in Figure 4, while the corresponding value in Figure 6 is about 0.5 kWh), so should this setting really be employed in practice? If not, how should we leverage ReSQuE in practice?
Minor:
* Should "low-learning experiment" in line 452 be "low-learning rate experiment"?
[1] Lee, Y., et al., "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts," in International Conference on Learning Representations, 2023.
[2] Kirichenko, P., Izmailov, P., and Wilson, A. G., "Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations," in International Conference on Learning Representations, 2023.
[3] Bándi, P., et al., "From Detection of Individual Metastases to Classification of Lymph Node Status at the Patient Level: The CAMELYON17 Challenge," IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 550-560, 2019.
[4] Christie, G., Fendley, N., Wilson, J., and Mukherjee, R., "Functional Map of the World," in Computer Vision and Pattern Recognition (CVPR), 2018.
[5] Koh, P. W., et al., "WILDS: A Benchmark of in-the-Wild Distribution Shifts," in International Conference on Machine Learning, 2021.