TY - STD
TI - J. Lu et al. "NO need to worry about adversarial examples in object detection in autonomous vehicles." (2017).
ID - ref1
ER -

TY - STD
TI - A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 3354–3361. IEEE (2012).
ID - ref2
ER -

TY - STD
TI - T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015).
ID - ref3
ER -

TY - STD
TI - H. Bou-Ammar, H. Voos, and W. Ertel. Controller design for quadrotor UAVs using reinforcement learning. In Control Applications (CCA), 2010 IEEE International Conference on, pages 2130–2135. IEEE (2010).
ID - ref4
ER -

TY - STD
TI - C. Mostegel, M. Rumpler, F. Fraundorfer, and H. Bischof. UAV-based autonomous image acquisition with multi-view stereo quality assurance by confidence prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1–10 (2016).
ID - ref5
ER -

TY - STD
TI - F. Zhang, J. Leitner, M. Milford, B. Upcroft, and P. Corke. Towards vision-based deep reinforcement learning for robotic motion control. arXiv preprint arXiv:1511.03791 (2015).
ID - ref6
ER -

TY - STD
TI - A. Athalye and I. Sutskever. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397 (2017).
ID - ref7
ER -

TY - STD
TI - M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 1528–1540. ACM (2016).
ID - ref8
ER -

TY - JOUR
AU - Zhou, Y.
AU - Hu, X.
AU - Wang, L.
AU - Duan, S.
AU - Chen, Y.
PY - 2018
DA - 2018//
TI - Markov chain based efficient defense against adversarial examples in computer vision
JO - IEEE Access
VL - 7
UR - https://doi.org/10.1109/ACCESS.2018.2889409
DO - 10.1109/ACCESS.2018.2889409
ID - Zhou2018
ER -

TY - JOUR
AU - Zhang, K.
AU - Liang, Y.
AU - Zhang, J.
AU - Wang, Z.
AU - Li, X.
PY - 2019
DA - 2019//
TI - No one can escape: A general approach to detect tampered and generated image
JO - IEEE Access
VL - 7
UR - https://doi.org/10.1109/ACCESS.2019.2939812
DO - 10.1109/ACCESS.2019.2939812
ID - Zhang2019
ER -

TY - STD
TI - S. Chen, D. Shi, M. Sadiq, and M. Zhu. Image denoising via generative adversarial networks with detail loss. In Proceedings of the 2019 2nd International Conference on Information Science and Systems, pages 261–265. ACM, Jeju Island (2019).
ID - ref11
ER -

TY - JOUR
AU - Li, Y.
AU - Wang, Y.
PY - 2019
DA - 2019//
TI - Defense against adversarial attacks in deep learning
JO - Appl. Sci.
VL - 9
UR - https://doi.org/10.3390/app9010076
DO - 10.3390/app9010076
ID - Li2019
ER -

TY - STD
TI - N. Das, M. Shanbhogue, S. T. Chen, F. Hohman, S. Li, L. Chen, ... and D. H. Chau. Shield: Fast, practical defense and vaccination for deep learning using JPEG compression. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 196–204. ACM, London (2018).
ID - ref13
ER -

TY - STD
TI - C. Guo et al. "Countering adversarial images using input transformations." arXiv preprint arXiv:1711.00117 (2017).
ID - ref14
ER -

TY - STD
TI - C. Xie et al. "Mitigating adversarial effects through randomization." arXiv preprint arXiv:1711.01991 (2017).
ID - ref15
ER -

TY - STD
TI - N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, Abu Dhabi (2017).
ID - ref16
ER -

TY - STD
TI - C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).
ID - ref17
ER -

TY - STD
TI - I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).
ID - ref18
ER -

TY - STD
TI - A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016).
ID - ref19
ER -

TY - STD
TI - N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, San Jose (2017).
ID - ref20
ER -

TY - STD
TI - S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582 (2016).
ID - ref21
ER -

TY - STD
TI - N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on, pages 372–387. IEEE (2016).
ID - ref22
ER -

TY - STD
TI - J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. Neural Networks (2012).
ID - ref23
ER -

TY - STD
TI - C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261 (2016).
ID - ref24
ER -

TY - STD
TI - A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533 (2016).
ID - ref25
ER -

TY - STD
TI - S. Shen et al. "APE-GAN: Adversarial perturbation elimination with GAN." arXiv preprint arXiv:1707.05474 (2017).
ID - ref26
ER -