1,263 Hits in 6.9 sec

Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [article]

Yao Deng, Tiehua Zhang, Guannan Lou, Xi Zheng, Jiong Jin, Qing-Long Han
2021 arXiv   pre-print
Furthermore, some promising research directions are suggested in order to improve deep learning-based autonomous driving safety, including model robustness training, model testing and verification, and  ...  The rapid development of artificial intelligence, especially deep learning technology, has advanced autonomous driving systems (ADSs) by providing precise control decisions for almost any driving  ...  Table II lists research works that implemented black-box evasion attacks and evaluated the effectiveness of their methods for attacking E2E driving models or object detectors in the perception layer  ... 
arXiv:2104.01789v2 fatcat:zekeddt7zzcnrphu3f4yw6vzii

Physically Realizable Adversarial Examples for LiDAR Object Detection

James Tu, Mengye Ren, Sivabalan Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, Raquel Urtasun
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In this paper, we address this issue and present a method to generate universal 3D adversarial objects to fool LiDAR detectors.  ...  This is one step closer towards safer self-driving under unseen conditions from limited training data.  ...  The attack will consistently cause target vehicles to disappear, severely impeding downstream tasks in autonomous driving systems.  ... 
doi:10.1109/cvpr42600.2020.01373 dblp:conf/cvpr/TuRMLYDCU20 fatcat:f7ycwcr3zfdgllpi6ego5pgjhe

Physically Realizable Adversarial Examples for LiDAR Object Detection [article]

James Tu, Mengye Ren, Siva Manivasagam, Ming Liang, Bin Yang, Richard Du, Frank Cheng, Raquel Urtasun
2020 arXiv   pre-print
In this paper, we address this issue and present a method to generate universal 3D adversarial objects to fool LiDAR detectors.  ...  This is one step closer towards safer self-driving under unseen conditions from limited training data.  ...  The attack will consistently cause target vehicles to disappear, severely impeding downstream tasks in autonomous driving systems.  ... 
arXiv:2004.00543v2 fatcat:ygq2dhrwuffcbbakhwttt4znia

On the Adversarial Robustness of Camera-based 3D Object Detection [article]

Shaoyuan Xie, Zichao Li, Zeyu Wang, Cihang Xie
2024 arXiv   pre-print
We systematically analyze the resilience of these models under two attack settings, white-box and black-box, focusing on two primary objectives: classification and localization.  ...  However, the robustness of these methods to adversarial attacks has not been thoroughly examined, especially when considering their deployment in safety-critical domains like autonomous driving.  ... 
arXiv:2301.10766v2 fatcat:pe2bhuigpfgbbha5ww7rtk3vte

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems [article]

Guangjing Wang, Ce Zhou, Yuanda Wang, Bocheng Chen, Hanqing Guo, Qiben Yan
2023 arXiv   pre-print
We further examine the implications of transferable attacks in practical scenarios such as autonomous driving, speech recognition, and large language models (LLMs).  ...  Although considerable efforts have been directed toward developing transferable attacks, a holistic understanding of the advancements in transferable attacks remains elusive.  ...  This object is designed to deceive the autonomous driving system, leading to its failure to detect the object.  ... 
arXiv:2311.11796v1 fatcat:prutglv6iffsxi6stdd7lmpvfm

Visually Adversarial Attacks and Defenses in the Physical World: A Survey [article]

Xingxing Wei, Bangzheng Pu, Jiefan Lu, Baoyuan Wu
2023 arXiv   pre-print
Compared with digital attacks, which generate perturbations in the digital pixels, physical attacks are more practical in the real world.  ...  To establish a taxonomy, we organize the current physical attacks by attack task, attack form, and attack method, respectively.  ...  UPC [63] (universal perturbation camouflage) is a general white-box attack framework against object detection in which the region proposal network (RPN), the classification network, and the regression network are attacked simultaneously  ... 
arXiv:2211.01671v5 fatcat:rxe74n6zinbzbbdwb4f3plprxq

Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey [article]

Hui Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu
2022 arXiv   pre-print
...  developments and recent research in deep learning security technologies in autonomous driving.  ...  The academic community has proposed deep learning countermeasures against adversarial examples and AI backdoors, and has introduced them into the autonomous driving field for verification.  ...  The scheme confuses the object tracking system when objects cross each other.  ... 
arXiv:2210.11237v1 fatcat:d6ytvahs2valjcx2aosgwg66za

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security critical applications.  ...  However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos.  ...  [202] have analyzed adversarial stickers on stop signs in the context of autonomous driving to fool YOLO [203], a popular object detector. Jia et al.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y

To make yourself invisible with Adversarial Semantic Contours

Yichi Zhang, Zijian Zhu, Hang Su, Jun Zhu, Shibao Zheng, Yuan He, Hui Xue
2023 Computer Vision and Image Understanding  
of the object area in COCO in the white-box scenario and around 10% of that in the black-box scenario.  ...  We further extend the attack to datasets for autonomous driving systems to verify its effectiveness.  ...  This further raises concerns over the application of these DNN-based object detectors in safety-critical systems, e.g., autonomous driving systems.  ... 
doi:10.1016/j.cviu.2023.103659 fatcat:isc7chrypzfizakfwohkg4jela

Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training [article]

Derek Wang, Chaoran Li, Sheng Wen, Surya Nepal, Yang Xiang
2020 arXiv   pre-print
The defence further constructs a detector to identify and reject high-confidence adversarial examples that bypass the black-box defence.  ...  In the experiments, we evaluated our defence against four state-of-the-art attacks on the MNIST and CIFAR10 datasets.  ...  We propose a collaborative multi-task training (CMT) framework that considers both black-box and grey-box attacks.  ... 
arXiv:1803.05123v4 fatcat:oxt36f255vb5bilfy37hmrwri4
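The snippet above mentions a detector that rejects high-confidence adversarial examples. As a rough illustration of the general idea only, and not the paper's exact formulation, one could flag inputs on which two collaboratively trained output branches disagree; the function below is a hypothetical PyTorch sketch with names of our own choosing.

    import torch
    import torch.nn.functional as F

    def reject_if_branches_disagree(logits_a: torch.Tensor,
                                    logits_b: torch.Tensor,
                                    threshold: float = 0.5) -> torch.Tensor:
        # Illustrative heuristic, not the paper's detector: on clean inputs
        # two collaboratively trained branches should broadly agree, so a
        # large prediction gap is treated as evidence of an attack.
        p_a = F.softmax(logits_a, dim=-1)
        p_b = F.softmax(logits_b, dim=-1)
        # Total-variation distance between the branch predictions.
        disagreement = 0.5 * (p_a - p_b).abs().sum(dim=-1)
        return disagreement > threshold  # True -> reject the input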

A Survey of Robustness and Safety of 2D and 3D Deep Learning Models against Adversarial Attacks

Yanjie Li, Bin Xie, Songtao Guo, Yuanyuan Yang, Bin Xiao
2024 ACM Computing Surveys  
We extend the concept of adversarial examples beyond imperceptive perturbations and collate over 170 papers to give an overview of deep learning model robustness against various adversarial attacks.  ...  In addition, we examine physical adversarial attacks that lead to safety violations.  ...  [126] proposed a black-box attack against the Lidar detector in the self-driving setting.  ... 
doi:10.1145/3636551 fatcat:yohd72gxgbhijbhsr2jnfldprq

Physical Adversarial Attacks for Surveillance: A Survey [article]

Kien Nguyen, Tharindu Fernando, Clinton Fookes, Sridha Sridharan
2023 arXiv   pre-print
In particular, we propose a framework to analyze physical adversarial attacks and provide a comprehensive survey of physical adversarial attacks on four key surveillance tasks: detection, identification  ...  Furthermore, we review and analyze strategies to defend against physical adversarial attacks and methods for evaluating the strengths of the defenses.  ...  Their overall loss is defined as L = L_obj + λ L_black, where L_obj denotes the object score of the object detector and L_black is the average probability of black pixels appearing in the patch.  ... 
arXiv:2305.01074v2 fatcat:fwleaw3a45dvzmvgrjn5q76eka
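The loss quoted in this snippet combines a detection term with a black-pixel term. Below is a minimal sketch of that objective, under assumptions that are ours rather than the paper's: the patch is parameterized by per-pixel probabilities of rendering a black pixel, and obj_score is the detector's objectness score on the target, so minimizing L suppresses detection while limiting black coverage.

    import torch

    def patch_loss(obj_score: torch.Tensor, p_black: torch.Tensor,
                   lam: float = 0.1) -> torch.Tensor:
        # L = L_obj + lambda * L_black, as in the quoted equation.
        l_obj = obj_score.mean()    # detector confidence on the target
        l_black = p_black.mean()    # average probability of a black pixel
        return l_obj + lam * l_black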

Adversarial Examples: Attacks and Defenses for Deep Learning

Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
2019 IEEE Transactions on Neural Networks and Learning Systems  
Adversarial perturbations are imperceptible to humans but can easily fool DNNs in the testing/deployment stage.  ...  In addition, three major challenges in adversarial examples and their potential solutions are discussed.  ...  [112] presented a method to generate universal adversarial perturbations against the semantic image segmentation task.  ... 
doi:10.1109/tnnls.2018.2886017 pmid:30640631 fatcat:enznysw3svfzdjrmubwkedr6me
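For readers unfamiliar with how the imperceptible perturbations mentioned here are generated, the classic fast gradient sign method (FGSM) is the minimal example; the sketch below is illustrative only and just one of the many attacks this survey covers.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x: torch.Tensor, y: torch.Tensor,
                     eps: float = 8 / 255) -> torch.Tensor:
        # One signed-gradient step: x_adv = x + eps * sign(dL/dx).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        # Clamp back to the valid image range so the change stays small.
        return x_adv.clamp(0.0, 1.0).detach()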

A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking [article]

Shaohui Mei, Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Lap-Pui Chau
2023 arXiv   pre-print
Surprisingly, there has been a lack of comprehensive studies on the robustness of RS tasks, prompting us to undertake a thorough survey and benchmark on the robustness of image classification and object  ...  To the best of our knowledge, this study represents the first comprehensive examination of both natural robustness and adversarial robustness in RS tasks.  ...  [242] presents a fully black-box universal attack (FBUA) framework for creating a single universal adversarial perturbation against SAR target recognition that can be used against a wide range of DNN  ... 
arXiv:2306.12111v2 fatcat:kanti3ucenfchkaqqvmhbomzpa

Adversarial Examples: Attacks and Defenses for Deep Learning [article]

Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
2018 arXiv   pre-print
With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments.  ...  Adversarial examples are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage.  ... 
arXiv:1712.07107v3 fatcat:5wcz4h4eijdsdjeqwdpzbfbjeu
Showing results 1–15 out of 1,263 results