Lyttonkeepgoing
# Uncertainty related work

## Metric

**AURC, E-AURC**
*ICLR 2019, Computer Science Department, Technion – Israel Institute of Technology*
[Bias-reduced uncertainty estimation for deep neural classifiers](https://arxiv.org/pdf/1805.08206.pdf)

**FPR at TPR (95%):** FPR = FP / (FP + TN), TPR = TP / (TP + FN)
**AUROC:** area under the Receiver Operating Characteristic curve
**AUPR:** area under the Precision-Recall curve

## Post-hoc Methods

**MSP**
*ICLR 2017, UC Berkeley*
[A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks](https://arxiv.org/pdf/1610.02136.pdf)

> Suppose an image classifier recognizes three kinds of animals: cat, dog, and bird. Given a cat image, the softmax output might be cat: 0.7, dog: 0.2, bird: 0.1. "Cat" has the largest softmax probability, so the MSP is 0.7.
> Correct classification: if the model correctly labels the image as a cat, this MSP (0.7) is typically higher than the MSP of misclassified or OOD samples.
> Misclassification: if the model mislabels a cat image as a dog, the softmax output might be cat: 0.3, dog: 0.5, bird: 0.2; the MSP is then 0.5, typically lower than the 0.7 of a correct prediction.
> Out-of-distribution (OOD): for an input from none of the known classes (say, a fish image), the softmax output tends to be flatter, e.g. cat: 0.4, dog: 0.3, bird: 0.3, so the MSP (0.4) is again lower than that of a correctly classified sample.

---

**ODIN**
*ICLR 2018, University of Illinois at Urbana-Champaign*
[Enhancing the reliability of out-of-distribution image detection in neural networks](https://arxiv.org/pdf/1706.02690.pdf)

> 1. Input pre-processing 2. Temperature scaling
> ![](https://hackmd.io/_uploads/rkeGj7_ya.png)
> During training T = 1; at test time T is set manually.
> ![](https://hackmd.io/_uploads/rJAVi7_JT.png)
> For each image x, first compute its pre-processed version (i.e. add a small perturbation) according to the formula above, then feed it to the network and compute the calibrated softmax score. Compare this score against a threshold δ: if the score exceeds δ, classify x as in-distribution; otherwise, as out-of-distribution.

---

**MDS**
*NeurIPS 2018, Google Brain, KAIST*
[A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks](https://proceedings.neurips.cc/paper/2018/file/abdeb6f575ac5c6676b747bca8d09cc2-Paper.pdf)

> 1. Mahalanobis distance-based confidence score 2. Calibration techniques 3. Feature ensemble
> NB: all of the following happens after training.
> 1. Parameter estimation: using the training set, compute a mean and covariance matrix in feature space for every class (either from the penultimate layer alone, or building one feature space per layer, which corresponds to the feature ensemble).
> 2. Mahalanobis distance: for each test sample, compute its Mahalanobis distance to every class mean. (Why Mahalanobis rather than Euclidean? Mahalanobis accounts for correlations between features, i.e. the covariance of the data distribution.) The confidence score is then defined from this distance.
> 3. Input pre-processing: as in ODIN, a small perturbation is added first.
> ![](https://hackmd.io/_uploads/rk1x7Edy6.png)
> 4. Feature ensemble: besides the final-layer features, lower-layer features can also be used to compute confidence scores, combined by a weighted average.

---

**EBO**
*NeurIPS 2020, UCSD*
[Energy-based Out-of-distribution Detection](https://proceedings.neurips.cc/paper/2020/file/f5496252609c43eb8a3d147ab9b9c006-Paper.pdf)

> 1. Energy score
> Uses the energy score as the scoring mechanism for OOD detection: higher energy → higher entropy → more uncertainty → OOD.
> ![](https://hackmd.io/_uploads/SJJjUEOka.png)

---

**GRAM**
*ICML 2020* (still studying)
[Detecting out-of-distribution examples with gram matrices](https://proceedings.mlr.press/v119/sastry20a/sastry20a.pdf)

> 1. Gram matrices capture correlations between features: for a given layer, the matrix is built from its channels and pixel values.

---

**RMDS**
*arXiv, Google, Stanford, Harvard*
[A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection](https://arxiv.org/pdf/2106.09022.pdf)

> 1. MDS → RMDS
> Following **MDS**, proposes the Relative Mahalanobis Distance (RMD). In addition to the class-conditional Gaussians, fit one background Gaussian to all training data, ignoring class labels. For a test input, compute its Mahalanobis distance (MD) to each class Gaussian and to the background Gaussian; the RMD is the class-specific MD minus the background MD. Everything else is as in MDS.

---

**GradNorm**
*NeurIPS 2021, University of Wisconsin-Madison*
[On the importance of gradients for detecting distributional shifts in the wild](https://proceedings.neurips.cc/paper_files/paper/2021/file/063e26c670d07bb7c4d30e6fc69fe056-Paper.pdf)

> 1. Compute a KL divergence 2. Backpropagate and take the gradient norm
> 1. Forward the input (in- or out-of-distribution) through the pretrained network.
> 2. Compute the KL divergence between the model's softmax output and a target distribution (the paper uses the uniform distribution over classes).
> 3. Backpropagate this KL divergence to obtain gradient information.
> 4. Use the vector norm of the resulting gradients as the OOD scoring function: high gradient norm → more uncertainty → OOD.

---

**ReAct**
*NeurIPS 2021, University of Wisconsin-Madison*
[ReAct: Out-of-distribution detection with rectified activations](https://proceedings.neurips.cc/paper/2021/file/01894d6f048493d2cacde3c579c315a3-Paper.pdf)

> 1. Rectifying: truncate (rectify) the activations at prediction time to suppress extreme output values.
> ![](https://hackmd.io/_uploads/S1ia3S_J6.png)
> ![](https://hackmd.io/_uploads/BJDfaSdJp.png) (when the activation is ReLU)

---

**MLS**
*ICML 2022, University of Maryland*
[Scaling out-of-distribution detection for real-world settings](https://arxiv.org/pdf/1911.11132.pdf)

> 1. MaxLogit
> MSP performs well in small-scale settings but degrades in large-scale, real-world ones. MaxLogit instead scores with the raw logit values (the inputs to the softmax); everything else is as in MSP. It performs better on large-scale, many-class, and segmentation tasks.

---

**KL Matching**
*ICML 2022, University of Maryland*
[Scaling out-of-distribution detection for real-world settings](https://arxiv.org/pdf/1911.11132.pdf)

> From the same paper as MaxLogit.
> **The shape of predicted posterior distributions is often class dependent.**
> Build posterior templates: use validation images (dogs, cats, and birds only) to compute the mean softmax output per class, giving a "typical" posterior for each class. For the "dog" class this might be 0.8 (dog), 0.1 (cat), 0.1 (bird).
> Score a new (possibly OOD) test image: compute its softmax output with the model; for a fish image this might be 0.4 (dog), 0.3 (cat), 0.3 (bird).
> Apply KL divergence: compare the new image's softmax output with each class's "typical" posterior, e.g. KL between 0.4/0.3/0.3 and 0.8/0.1/0.1.
> Anomaly score: take the minimum KL divergence over all classes. If this score exceeds a threshold, the image is considered OOD.

---

**VIM**
*CVPR 2022, SenseTime Research* (still studying)
[ViM: Out-of-distribution with virtual-logit matching](https://arxiv.org/pdf/2203.10807.pdf)

> Combines a class-agnostic score from feature space with class-dependent logit scores from the in-distribution (ID) classes.

---

**KNN**
*ICML 2022, University of Wisconsin-Madison*
[Out-of-distribution detection with deep nearest neighbors](https://proceedings.mlr.press/v162/sun22d/sun22d.pdf)

> 1. KNN
> 1. Feature collection: for every training sample, compute its feature vector with the network (usually from the penultimate layer) and L2-normalize it.
> 2. Test features: compute the test sample's feature vector with the same network and layer, also L2-normalized.
> 3. Nearest-neighbor distance: compute the Euclidean distance between the test feature and every training feature, sort the distances, and take the k-th smallest. If it is at or above a preset threshold, the sample is judged OOD.

---

**DICE**
*ECCV 2022* (still studying)
[DICE: Leveraging sparsification for out-of-distribution detection](https://arxiv.org/pdf/2111.09805.pdf)

> Sparsification

---

**RankFeat**
*NeurIPS 2022* (still studying)
[RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection](https://arxiv.org/pdf/2209.08590.pdf)

---

**ASH**
*ICLR 2023*
[Extremely Simple Activation Shaping for Out-of-Distribution Detection](https://arxiv.org/pdf/2209.09858.pdf)

> Similar to **ReAct**: an improvement applied to the activations.
> ASH-P (pruning only): simply prune a layer's activations, i.e. set a fraction of the activation values to zero.
> Effect: relatively weak OOD detection, but the same ID (in-distribution) accuracy as ASH-S.
> ASH-B (binarize): besides pruning, set all surviving activations to a constant, turning the whole representation into a binary one. Effect: establishes new state-of-the-art (SOTA) results on nearly all OOD datasets and metrics.
> ASH-S (scaling): besides pruning, scale the surviving activations by the ratio of the activation values before and after pruning. Effect: excellent OOD detection, especially at high pruning percentages.

---

**SHE**
*ICLR 2023*

---

**GEN**
*CVPR 2023*
[GEN: Pushing the Limits of Softmax-Based Out-of-Distribution Detection](https://openaccess.thecvf.com/content/CVPR2023/papers/Liu_GEN_Pushing_the_Limits_of_Softmax-Based_Out-of-Distribution_Detection_CVPR_2023_paper.pdf)

> 1. Generalized ENtropy score
> ![](https://hackmd.io/_uploads/rkuJMD_Jp.png)
> ![](https://hackmd.io/_uploads/B11Zzwuka.png)
> For a cat/dog/bird classifier whose softmax output is 0.9, 0.05, 0.05, GEN = $\sqrt{0.9 \times 0.1} + \sqrt{0.05 \times 0.95} + \sqrt{0.05 \times 0.95}$.

## Uncertainty Methods

**MC Dropout**
*ICML 2016, Cambridge*
[Dropout as a bayesian approximation: Representing model uncertainty in deep learning](https://proceedings.mlr.press/v48/gal16.pdf)

> 1. n dropout passes
> Keep dropout enabled at test time and use statistics of repeated stochastic forward passes (e.g. the variance) to quantify the uncertainty.

---

**Mixup**
*ICLR 2018, MIT, FAIR*
[Mixup: Beyond empirical risk minimization](https://arxiv.org/pdf/1710.09412.pdf)

> 1. Mixup robustness
> ![](https://hackmd.io/_uploads/BkaNwvuJa.png)
> In essence, Mixup addresses classifier over-confidence through label smoothing, i.e. soft labels.

---

**CutMix**
*ICCV 2019, Clova AI Research*
[CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features](https://arxiv.org/pdf/1905.04899.pdf)

> To keep the classifier from over-focusing on a small image region, Cutout randomly selects a region and masks it out, improving generalization and object localization. CutMix keeps this benefit while avoiding the loss of information caused by discarding the region outright: it cuts a patch from the current image, fills it with the corresponding patch from another image, and smooths the labels in proportion to the patch's share of the whole image. CutMix thus provides both Mixup-style calibration via label smoothing (mixed image & label) and the Cutout-style regional dropout that improves generalization and localization.

---

**PixMix**
*CVPR 2021, UC Berkeley, UIUC, Harvard*
[PixMix: Dreamlike pictures comprehensively improve safety measures](https://arxiv.org/pdf/2112.05135.pdf)
![](https://hackmd.io/_uploads/SkS-Ywu16.png)

> Note that the mixing set has no explicit classes, so the label after mixing stays identical to that of x; PixMix does not use label smoothing.

---

**DeepEnsemble**
*NeurIPS 2017*
[Simple and scalable predictive uncertainty estimation using deep ensembles](https://arxiv.org/pdf/1612.01474.pdf)

> Uncertainty on a given input is estimated from the spread of the ensemble members' predictions: if all models predict similarly, uncertainty is low; otherwise, it is high.

---

**TempScale**
*ICML 2017*
[On calibration of modern neural networks](https://arxiv.org/pdf/1706.04599.pdf)

> The same temperature scaling as in ODIN.
> ![](https://hackmd.io/_uploads/SJVTqvuyp.png)

---

**Trust Score**
*NeurIPS 2018, Google & Stanford*
[To Trust Or Not To Trust A Classifier](https://proceedings.neurips.cc/paper/2018/file/7180cffd6a8e829dacfc2a31b3f72ece-Paper.pdf)
![](https://hackmd.io/_uploads/rk09AP_Ja.png)

> Suppose there are only three classes (1, 2, 3).
> 1. For the test image, find its K nearest samples within each class of the training set.
> 2. If the model predicts class 1, compute the mean distance d1 between the test image and its K nearest class-1 samples.
> 3. Compute the same mean distance for classes 2 and 3, and let d2 be the smaller of the two.
> 4. The Trust Score is d2 / d1; the larger it is, the higher the confidence.

---

**ConfidNet**
*NeurIPS 2019, TPAMI 2021, valeo.ai, Paris, France*
[Addressing failure prediction by learning model confidence](https://proceedings.neurips.cc/paper/2019/file/757f843a169cc678064d9530d12a1881-Paper.pdf)
[Confidence Estimation via Auxiliary Models](https://arxiv.org/pdf/2012.06508.pdf)
![](https://hackmd.io/_uploads/rJOApv_kp.png)

> Proposes TCP (True Class Probability) as a new criterion for model confidence: the probability the model assigns to the *true* class, rather than to the predicted class (which is MSP). When TCP > 1/2 the prediction is correct; when TCP < 1/K (with K classes) it is wrong. Since the true class is unknown at test time, a confidence network called ConfidNet (an MLP) is trained to mimic TCP (loss Lconf).

---

**RTS**
*AAAI 2023*

---

## Papers I haven't read yet

## Generative model based

*arXiv, University of Waterloo*
[Improving Reconstruction Autoencoder Out-of-distribution Detection with Mahalanobis Distance](https://arxiv.org/pdf/1812.02765.pdf)
*arXiv, University of Waterloo*
[Out-of-distribution Detection in Classifiers via Generation](https://arxiv.org/pdf/1910.04241.pdf)

## Classifier-based

*arXiv, University of Waterloo*
[Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an Early-Layer Output](https://arxiv.org/pdf/1910.10307.pdf)

## Self-supervised learning based

*NeurIPS 2020, KAIST*
[CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances](https://proceedings.neurips.cc/paper/2020/file/8965f76632d7672e7d3cf29c87ecaa0c-Paper.pdf)
*ICLR 2021, Princeton University*
[SSD: A Unified Framework for Self-Supervised Outlier Detection](https://openreview.net/pdf?id=v5gjXpmR8J)
[Delving Deep into the Generalization of Vision Transformers under Distribution Shifts](https://arxiv.org/pdf/2106.07617.pdf)
[CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No](https://arxiv.org/pdf/2308.12213v2.pdf)

## Review

[A review of uncertainty quantification in deep learning: Techniques, applications and challenges](https://www.sciencedirect.com/science/article/pii/S1566253521001081)
[A survey of uncertainty in deep neural networks](https://link.springer.com/article/10.1007/s10462-023-10562-9)
[Generalized Out-of-Distribution Detection: A Survey](https://arxiv.org/pdf/2110.11334.pdf)
[Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_Rethinking_Out-of-Distribution_OOD_Detection_Masked_Image_Modeling_Is_All_You_CVPR_2023_paper.pdf)

# Uncertainty

- OpenMix: Exploring Outlier Samples for Misclassification Detection (**CVPR** 2023) [[paper]](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_OpenMix_Exploring_Outlier_Samples_for_Misclassification_Detection_CVPR_2023_paper.pdf) [[code]](https://github.com/Impression2805/OpenMix)
- Failure Detection for Motion Prediction of Autonomous Driving: An Uncertainty Perspective (**ICRA** 2023) [[paper]](https://arxiv.org/ftp/arxiv/papers/2301/2301.04421.pdf)
- A Call to Reflect on Evaluation Practices for Failure Detection in Image Classification (**ICLR** 2023) [[paper]](https://openreview.net/pdf?id=YnkGMIh0gvX) [[code]](https://github.com/IML-DKFZ/fd-shifts)
- What can we learn from the selective prediction and uncertainty estimation performance of 523 imagenet classifiers (**ICLR** 2023) [[paper]](https://arxiv.org/pdf/2302.11874.pdf) [[code]](https://github.com/IdoGalil/benchmarking-uncertainty-estimation-performance)
- Towards Better Selective Classification (**ICLR** 2023) [[paper]](https://openreview.net/pdf?id=5gDz_yTcst) [[code]](https://github.com/BorealisAI/towards-better-sel-cls)
- AUC-based Selective Classification (**AISTATS** 2023) [[paper]](https://proceedings.mlr.press/v206/pugnana23a/pugnana23a.pdf)
- Failure Detection in Medical Image Classification: A Reality Check and Benchmarking Testbed (**TMLR** 2022) [[paper]](https://openreview.net/pdf?id=VBHuLfnOMf) [[code]](https://github.com/melanibe/failure_detection_benchmark)
- Rethinking Confidence Calibration for Failure Prediction (**ECCV** 2022) [[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850512.pdf) [[code]](https://github.com/Impression2805/FMFP)
- Improving the Reliability for Confidence Estimation (**ECCV** 2022) [[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136870385.pdf)
- Trust, but Verify: Using Self-Supervised Probing to Improve Trustworthiness (**ECCV** 2022) [[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136730362.pdf)
- Single model uncertainty estimation via stochastic data centering (**NeurIPS** 2022) [[paper]](https://proceedings.neurips.cc/paper_files/paper/2022/file/392d0d05e2f514063e6ce6f8b370834c-Paper-Conference.pdf)
- RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness (**NeurIPS** 2022) [[paper]](https://proceedings.neurips.cc/paper_files/paper/2022/file/5ddcfaad1cb72ce6f1a365e8f1ecf791-Paper-Conference.pdf) [[code]](https://github.com/FrancescoPinto/RegMixup)
- Learning to predict trustworthiness with steep slope loss (**NeurIPS** 2021) [[paper]](https://arxiv.org/pdf/2110.00054.pdf) [[code]](https://github.com/luoyan407/predict_trustworthiness)
- Confidence-Aware Learning for Deep Neural Networks (**ICML** 2020) [[paper]](http://proceedings.mlr.press/v119/moon20a/moon20a.pdf) [[code]](https://github.com/daintlab/confidence-aware-learning)
- Self-Adaptive Training: beyond Empirical Risk Minimization (**NeurIPS** 2020) [[paper]](https://arxiv.org/pdf/2002.10319.pdf) [[code]](https://github.com/LayneH/self-adaptive-training)
- Selectivenet: A deep neural network with an integrated reject option (**ICML** 2019) [[paper]](http://proceedings.mlr.press/v97/geifman19a/geifman19a.pdf) [[code]](https://github.com/geifmany/selectivenet)
- Addressing Failure Prediction by Learning Model Confidence (**NeurIPS** 2019) [[paper]](https://proceedings.neurips.cc/paper/2019/file/757f843a169cc678064d9530d12a1881-Paper.pdf) [[code]](https://github.com/valeoai/ConfidNet)
- Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift (**NeurIPS** 2019) [[paper]](https://proceedings.neurips.cc/paper_files/paper/2019/file/8558cb408c1d76621371888657d2eb1d-Paper.pdf)
- Deep Gamblers: Learning to Abstain with Portfolio Theory (**NeurIPS** 2019) [[paper]](https://arxiv.org/pdf/1907.00208.pdf) [[code]](https://github.com/Z-T-WANG/NIPS2019DeepGamblers)
- Using self-supervised learning can improve model robustness and uncertainty (**NeurIPS** 2019) [[paper]](https://proceedings.neurips.cc/paper_files/paper/2019/file/a2b15837edac15df90721968986f7f8e-Paper.pdf) [[code]](https://github.com/hendrycks/ss-ood)
- To Trust Or Not To Trust A Classifier (**NeurIPS** 2018) [[paper]](https://proceedings.neurips.cc/paper_files/paper/2018/file/7180cffd6a8e829dacfc2a31b3f72ece-Paper.pdf) [[code]](https://github.com/google/TrustScore)
- Selective classification for deep neural networks (**NeurIPS** 2017) [[paper]](https://proceedings.neurips.cc/paper/2017/file/4a8423d5e91fda00bb7e46540e2b0cf1-Paper.pdf) [[code]](https://github.com/geifmany/selective_deep_learning)
- A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks (**ICLR** 2017) [[paper]](https://arxiv.org/pdf/1610.02136.pdf)
- What uncertainties do we need in bayesian deep learning for computer vision? (**NeurIPS** 2017) [[paper]](https://proceedings.neurips.cc/paper_files/paper/2017/file/2650d6089a6d640c5e85b2b88265dc2b-Paper.pdf)
- Simple and scalable predictive uncertainty estimation using deep ensembles (**NeurIPS** 2017) [[paper]](https://arxiv.org/pdf/1612.01474.pdf)
- Dropout as a bayesian approximation: Representing model uncertainty in deep learning (**ICML** 2016) [[paper]](https://proceedings.mlr.press/v48/gal16.pdf)
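As a concrete illustration, the MSP score described in the Post-hoc Methods section amounts to taking the largest softmax probability as the confidence. A minimal numpy sketch (the toy logits are chosen to mirror the cat/dog/bird example; nothing here comes from the paper's code):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits):
    """Maximum Softmax Probability: higher score -> more confident / more ID-like."""
    return float(np.max(softmax(logits)))

# Logits chosen to reproduce the cat/dog/bird softmax outputs used above.
confident_logits = np.log(np.array([0.7, 0.2, 0.1]))  # correctly classified cat
ood_like_logits = np.log(np.array([0.4, 0.3, 0.3]))   # flat, OOD-looking output

assert msp_score(confident_logits) > msp_score(ood_like_logits)
```

Thresholding this score gives the detector: samples whose MSP falls below a chosen threshold are flagged as misclassified or OOD.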
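The EBO energy score can likewise be sketched in a few lines, assuming the standard logsumexp form of the free energy over logits with an illustrative temperature T = 1:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """EBO-style energy E(x) = -T * logsumexp(logits / T).
    Higher energy -> higher uncertainty -> more OOD-like. T = 1 is illustrative."""
    z = np.asarray(logits, dtype=float) / T
    m = z.max()  # shift for numerical stability
    return float(-T * (m + np.log(np.exp(z - m).sum())))

sharp = [10.0, 0.0, 0.0]  # confident, in-distribution-looking logits
flat = [1.0, 1.0, 1.0]    # flat, OOD-looking logits
assert energy_score(sharp) < energy_score(flat)
```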
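The ASH-P variant (pruning only) reduces to zeroing the smallest activations. A minimal sketch, assuming a flattened activation vector; the pruning percentile and the layer it would be applied to are illustrative choices, not from the paper:

```python
import numpy as np

def ash_p(activations, prune_pct=90):
    """ASH-P sketch: zero out the lowest `prune_pct` percent of activations.
    The percentile (and which layer to shape) are illustrative choices."""
    a = np.asarray(activations, dtype=float)
    threshold = np.percentile(a, prune_pct)
    return np.where(a >= threshold, a, 0.0)

acts = np.array([0.1, 0.5, 2.0, 0.05, 3.0, 0.2, 1.0, 0.0, 0.3, 4.0])
sparse = ash_p(acts, prune_pct=80)  # only the largest ~20% of values survive
```

ASH-B would then replace the surviving values with a constant, and ASH-S would rescale them by the pre-/post-pruning activation ratio.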
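The four Trust Score steps can be sketched directly with nearest-neighbor distances; the synthetic clusters and k below are made up for illustration (the original method also filters outliers, which this sketch omits):

```python
import numpy as np

def trust_score(x, train_feats, train_labels, predicted, k=3):
    """Trust Score sketch: d2 / d1, where d1 is the mean distance from x to its
    k nearest training samples of the predicted class, and d2 is the smallest
    such mean over every other class. k = 3 is an illustrative choice."""
    x = np.asarray(x, dtype=float)
    mean_dists = {}
    for c in np.unique(train_labels):
        d = np.linalg.norm(train_feats[train_labels == c] - x, axis=1)
        mean_dists[c] = np.sort(d)[:k].mean()
    d1 = mean_dists[predicted]
    d2 = min(v for c, v in mean_dists.items() if c != predicted)
    return d2 / d1

rng = np.random.default_rng(1)
train_feats = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),   # class-0 cluster
                         rng.normal(5.0, 0.1, size=(20, 2))])  # class-1 cluster
train_labels = np.array([0] * 20 + [1] * 20)

query = np.array([0.0, 0.0])  # clearly a class-0 point
trusted = trust_score(query, train_feats, train_labels, predicted=0)
doubtful = trust_score(query, train_feats, train_labels, predicted=1)
```

A score well above 1 means the predicted class's neighborhood is much closer than any other class's, i.e. the prediction looks trustworthy.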
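Finally, the MC Dropout procedure from the Uncertainty Methods section amounts to averaging stochastic forward passes. A toy numpy sketch with untrained, purely illustrative weights; a real model would reuse its trained network with the dropout layers kept active:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, untrained weights for a one-hidden-layer classifier (illustrative only).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward_with_dropout(x, p_drop=0.5):
    """One stochastic forward pass with dropout left ON at test time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout(x, n_passes=200):
    """Run n stochastic passes; mean = prediction, variance = uncertainty."""
    probs = np.stack([forward_with_dropout(x) for _ in range(n_passes)])
    return probs.mean(axis=0), probs.var(axis=0)

x = rng.normal(size=4)
mean_probs, var_probs = mc_dropout(x)
```

High per-class variance across the passes signals inputs the model is uncertain about.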
