Column: VALSE
The annual VALSE (Vision and Learning Seminar) workshop aims to provide a venue for in-depth academic exchange among young Chinese scholars working in computer vision, image processing, pattern recognition, and machine learning.

VALSE Webinar 19-25: An In-Depth Look at Adversarial Machine Learning

VALSE · Official Account · 2019-09-19 19:08




Homepage:

https://cihangxie.github.io/


Talk abstract:

Adversarial examples that can fool the state-of-the-art computer vision systems present challenges to convolutional networks and opportunities for understanding them. In this talk, I will present our recent work on defending against adversarial examples. Noticing that small adversarial perturbations on images lead to significant noise in the feature space, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018: it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code is available at:

https://github.com/facebookresearch/ImageNet-Adversarial-Training.
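The core of the feature-denoising idea described above is a non-local operation: each spatial position in a feature map is replaced by a similarity-weighted average of all positions. The following is a minimal NumPy sketch of that weighted-average computation only; it is an illustration, not the paper's full denoising block, which additionally wraps the operation with a 1x1 convolution and a residual connection and runs inside an end-to-end trained network.

```python
import numpy as np

def non_local_means_denoise(features):
    """Denoise an (H, W, C) feature map with a non-local weighted mean.

    Each spatial position becomes a softmax-normalized, dot-product-
    similarity-weighted average of all positions, so isolated feature
    noise is smoothed out by globally similar positions.
    """
    H, W, C = features.shape
    x = features.reshape(H * W, C)               # flatten spatial dims
    sims = x @ x.T                               # (HW, HW) dot-product similarity
    sims -= sims.max(axis=1, keepdims=True)      # stabilize the softmax
    weights = np.exp(sims)
    weights /= weights.sum(axis=1, keepdims=True)
    denoised = weights @ x                       # weighted average over positions
    return denoised.reshape(H, W, C)
```

Note that a constant feature map is a fixed point of this operation (uniform weights average identical vectors back to themselves), which is one reason a residual connection around it is cheap to train.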


References:

[1] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He, “Feature Denoising for Improving Adversarial Robustness”, CVPR 2019.

[2] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille, “Mitigating Adversarial Effects Through Randomization”, ICLR 2018.

[3] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Yuyin Zhou, Lingxi Xie, Alan Yuille, “Adversarial Examples for Semantic Segmentation and Object Detection”, ICCV 2017.
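The PGD (projected gradient descent) attack used in the evaluation above iterates a signed-gradient ascent step on the attacker's loss, projecting back into an L-infinity ball around the clean input after each step. Below is a minimal sketch on a toy loss; the `grad_fn`, `eps`, and `alpha` values are illustrative choices, not the ImageNet settings, and a real attack would backpropagate through the target network instead.

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps=0.3, alpha=0.1, n_iter=10):
    """L-infinity PGD sketch: maximize a loss within an eps-ball of x0.

    x0      : clean input (NumPy array)
    grad_fn : returns the gradient of the attacker's loss at x
    n_iter  : the abstract's "10-iteration PGD" corresponds to n_iter=10
    """
    x = x0.copy()
    for _ in range(n_iter):
        x = x + alpha * np.sign(grad_fn(x))    # signed-gradient ascent step
        x = np.clip(x, x0 - eps, x0 + eps)     # project into the eps-ball
    return x

# Toy loss = squared distance from the origin, so the attacker pushes x
# as far from 0 as the eps-ball allows.
x0 = np.array([0.5, -0.2])
adv = pgd_attack(x0, grad_fn=lambda x: 2 * x)  # saturates at x0 +/- eps
```

The projection step is what distinguishes PGD from unconstrained gradient ascent: however many iterations run, the perturbation never exceeds eps in any coordinate.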

Speaker: Boqing Gong (Google)

Time: 21:00, Wednesday, September 25, 2019 (Beijing Time)

Title: Gaussian Attack by Learning the Distributions of Adversarial Examples


Speaker bio:

Boqing Gong is a research scientist at Google, Seattle, and a remote principal investigator at ICSI, Berkeley. His research in machine learning and computer vision focuses on modeling, algorithms, and visual recognition. Before joining Google in 2019, he worked at Tencent and was a tenure-track Assistant Professor at the University of Central Florida (UCF). He received an NSF CRII award in 2016 and an NSF BIGDATA award in 2017, both of which were the first of their kind granted to UCF. He is or was a (senior) area chair of NeurIPS 2019, ICCV 2019, ICML 2019, AISTATS 2019, AAAI 2020, and WACV 2018-2020. He received his Ph.D. in 2015 from the University of Southern California, where his work was partially supported by the Viterbi Fellowship.


Homepage:

http://boqinggong.info







