
Mixup

Table of Contents

  • Method
    • Other notes
  • Personal summary
    • Questions
    • Interesting points
  • Related work

Method

$$
\begin{array}{l}
\tilde{x}=\lambda x_{i}+(1-\lambda) x_{j} \\
\tilde{y}=\lambda y_{i}+(1-\lambda) y_{j}
\end{array}
$$

where $(x_i, y_i)$ and $(x_j, y_j)$ are two examples drawn at random from the training data, and $\lambda \in [0,1]$ is sampled from $\operatorname{Beta}(\alpha, \alpha)$.

Mixup extends the training distribution by incorporating the prior knowledge that **linear interpolations** of feature vectors should lead to linear interpolations of the associated targets.
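
As a concrete reading of those two lines, here is a minimal NumPy sketch of batch-level mixup (the function name `mixup_batch` and the default `alpha=0.2` are my own illustrative choices; pairing each example with a shuffled partner from the same minibatch is the common implementation shortcut rather than drawing two independent batches):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return mixup-augmented inputs and targets for one minibatch.

    x: array of shape (batch, ...) -- raw input features.
    y: array of shape (batch, num_classes) -- one-hot labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)               # lambda ~ Beta(alpha, alpha), lies in [0, 1]
    perm = rng.permutation(len(x))             # pair each example with a random partner
    x_mixed = lam * x + (1.0 - lam) * x[perm]  # x~ = lam * x_i + (1 - lam) * x_j
    y_mixed = lam * y + (1.0 - lam) * y[perm]  # y~ = lam * y_i + (1 - lam) * y_j
    return x_mixed, y_mixed
```

The targets must be dense (e.g., one-hot) vectors for the label interpolation to be well defined; with integer class labels, the equivalent trick is to mix the two loss terms with weights $\lambda$ and $1-\lambda$.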

Other notes

The method in this paper is very simple, but a closer read shows there is quite a lot of content that is easy to overlook.

The stated goals are to address the following problems:

  • memorization of the training data
  • adversarial examples
  • while also improving accuracy

Empirical Risk Minimization (ERM), the empirical risk minimization principle (see Li Hang's *Statistical Learning Methods*):

  1. Minimizing the empirical risk is only possible on the training set -> this drives the network to memorize the data rather than generalize.
  2. The more training data, the larger the neural network should be -> the contradiction: for the convergence of ERM to be guaranteed, the size of the network (model) must not grow with the amount of training data.

Vicinal Risk Minimization (VRM): minimize the risk over a vicinity of each training example, which amounts to data augmentation.
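
The mixup rule above can be read as one particular vicinal distribution under VRM; my transcription of the paper's generic mixup vicinity (any slip here is mine, not the paper's) is:

$$
\mu\left(\tilde{x}, \tilde{y} \mid x_{i}, y_{i}\right)=\frac{1}{n} \sum_{j}^{n} \mathbb{E}_{\lambda}\left[\delta\left(\tilde{x}=\lambda x_{i}+(1-\lambda) x_{j},\ \tilde{y}=\lambda y_{i}+(1-\lambda) y_{j}\right)\right], \quad \lambda \sim \operatorname{Beta}(\alpha, \alpha)
$$

Sampling $(\tilde{x}, \tilde{y})$ from this vicinity and minimizing the average loss over the samples recovers exactly the training procedure sketched above.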

Personal summary

Questions

"The size of these state-of-the-art neural networks scales linearly with the number of training examples." ??? Is that really a thing?

Interesting points

Learning theory:

With the VC-complexity held fixed, the convergence of ERM is guaranteed as long as the size of the learning machine (e.g., the neural network) does not increase with the number of training data.
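
For reference, the classical Vapnik-style generalization bound behind this claim (quoted from memory, not from the mixup paper) controls the true risk $R(f)$ by the empirical risk $\hat{R}_{n}(f)$ plus a capacity term in the VC dimension $h$:

$$
R(f) \leq \hat{R}_{n}(f)+\sqrt{\frac{h(\ln (2 n / h)+1)+\ln (4 / \delta)}{n}} \quad \text{with probability at least } 1-\delta,
$$

so the gap only shrinks with the sample size $n$ if the capacity $h$ stays fixed rather than growing with $n$.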


Related work

  1. Mixup, ICLR 2018

  2. Manifold Mixup, ICML 2019

  3. AdaMixUp: MixUp as Locally Linear Out-Of-Manifold Regularization, AAAI 2019

  4. CutMix, ICCV 2019 (oral)

  5. AugMix, ICLR 2020

  6. Puzzle Mix, ICML 2020

  7. Mixup + semi-supervised learning -> MixMatch, NeurIPS 2019

  8. ReMixMatch, ICLR 2020

  9. Mixup + adversarial defense -> Mixup Inference, ICLR 2020

  10. On Mixup Training: Improved Calibration and Predictive Uncertainty, NeurIPS 2019

  11. Nonlinear Mixup: Out-Of-Manifold Data Augmentation for Text Classification, AAAI 2020

  12. Adversarial Domain Adaptation with Domain Mixup, AAAI 2020

  13. Active Mixup for Data-Efficient Knowledge Distillation, CVPR 2020

  14. Adversarial Vertex Mixup, CVPR 2020

  15. Manifold Mixup for Few-shot Learning, WACV 2020

  16. Improving Short Text Classification Through Global Augmentation Methods, CD-MAKE 2020

  17. Manifold Mixup Improves Text Recognition with CTC Loss, ICDAR 2019

  18. Spatial Mixup, IEEE Access

  19. Understanding Mixup Training Methods, IEEE Access

  20. Unifying Semi-Supervised and Robust Learning by Mixup, ICLR 2019 Workshop

  21. MetaMixup (not accepted)

  22. SuperMix (rejected, not accepted)

  23. Rethinking Image Mixture (unsupervised; not accepted)

  24. GraphMix (rejected at ICLR, not accepted)

  25. FixMatch (MixMatch -> ReMixMatch -> FixMatch (+UDA +Cutout); not accepted)

  26. MixUp as Directional Adversarial Training (rejected by both NeurIPS 2019 and ICLR 2020, not accepted)

  27. MixUp + adversarial training or VAT? I remember seeing it rejected at ICLR.

Hongyu Guo has single-handedly squeezed four papers out of this: two at AAAI and two on arXiv.


Published

May 4, 2020

Last Updated

May 4, 2020

Category

Papers

Tags

  • Papers
  • Deep Learning
