## What is small data? Why small-data AI?

Since Roger Mougalas first coined the term "big data" in 2005, related topics have stayed hot. Big data refers to the large volume of data that enterprises need to automate machine learning, commonly characterized by five V's: Volume (the amount of data is large), Variety (the data comes in many kinds), Velocity (data is produced quickly), Veracity (the data must be trustworthy), and Value (the value produced must justify the cost). As these five V's keep growing, the cost per record has dropped, but the total cost of collecting and labeling training data keeps climbing.

As a result, with small companies facing too little training data, insufficiently diverse data, and too few labels, machine learning on small data has taken off in recent years. Related keywords include small data, few-shot learning, and meta learning, all of which are technical directions for tackling the small-data problem.

Let's first look at small data's application scenarios. First, using small amounts of data to learn in a way closer to how humans learn; second, improving models built on rare data; third, saving the labor cost of data collection and annotation.

## A way of learning closer to humans
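To make the few-shot idea concrete, here is a minimal sketch (my own illustration, not from these notes) of the prototype approach popularized by methods such as Prototypical Networks: average each class's handful of labeled examples into a prototype, then classify a query by its nearest prototype. The function names and the toy 2-way 3-shot data below are invented for illustration; a real system would first map raw inputs through a learned embedding network.

```python
import numpy as np

def prototypes(support_x, support_y):
    # One prototype per class: the mean of that class's few support examples.
    classes = np.unique(support_y)
    return classes, np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes])

def classify(query_x, classes, protos):
    # Assign each query to the class of its nearest prototype (Euclidean).
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 3-shot episode: two well-separated clusters in 2-D.
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [-0.1, 0.1],
                      [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
support_y = np.array([0, 0, 0, 1, 1, 1])
classes, protos = prototypes(support_x, support_y)
print(classify(np.array([[0.1, 0.0], [5.0, 5.1]]), classes, protos))  # → [0 1]
```

Only three labeled examples per class are needed here, which is the point: the inductive bias (nearest class mean in a good embedding space) substitutes for data volume.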
2/24/2021 (updated Feb. 2021)

- (Vanilla GAN) Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial networks. arXiv preprint arXiv:1406.2661. Cited by 27497; the first GAN paper.
- (DCGAN) Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by 8447; improves the CNN architecture in GANs using strided convolutions.
- (Pix2Pix) Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134). Cited by 7899; turns noise-to-image generation into image-to-image translation.
- (CycleGAN) Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232). Cited by 7325; the best-known cycle-consistency loss, combined with a pix2pix-style discriminator.
- (WGAN) Arjovsky, M., Chintala, S., & Bottou, L. (2017, July). Wasserstein generative adversarial networks. In International conference on machine learning (pp. 214-223). PMLR. Cited by 6202; replaces the JS divergence with the Wasserstein distance.
- (SRGAN) Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., ... & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4681-4690). Cited by 5026; the best-known GAN super-resolution paper.
- (Conditional GAN) Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by 4729; the first conditional GAN.
- (WGAN-GP) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. (2017). Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028. Cited by 4187; improves WGAN training with a gradient penalty.
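The WGAN and WGAN-GP entries above describe a change of training objective. As a quick reference (these are the standard formulations, not taken from the notes themselves): the original GAN minimax objective is

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

whose optimal discriminator makes the generator minimize a Jensen–Shannon divergence between the data and generator distributions. WGAN instead trains a 1-Lipschitz critic to estimate the Wasserstein-1 distance:

$$\min_G \max_{\|D\|_L \le 1} \; \mathbb{E}_{x \sim p_{\text{data}}}[D(x)] - \mathbb{E}_{z \sim p_z}[D(G(z))]$$

WGAN-GP then enforces the Lipschitz constraint softly by adding a gradient penalty term $\lambda\,\mathbb{E}_{\hat{x}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]$, where $\hat{x}$ is sampled along straight lines between real and generated points.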