49,317 Hits in 3.8 sec

Cross-Modal Search for Social Networks via Adversarial Learning

Nan Zhou, Junping Du, Zhe Xue, Chong Liu, Jinxuan Li
2020 Computational Intelligence and Neuroscience  
In contrast to traditional cross-modal search, social network cross-modal information search is restricted by data quality for arbitrary text and low-resolution visual features.  ...  A search module is implemented based on adversarial learning, through which the discriminator is designed to measure the distribution of generated features from intermodal and intramodal perspectives.  ...  S_d^t and S_d^v are the generation processes interacting with the discriminator to optimize parameters jointly by adversarial learning.  ...
doi:10.1155/2020/7834953 pmid:32733547 pmcid:PMC7369674 fatcat:jphqvfhc7nbdbgg5xgguqtwp4q
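The snippet above describes a discriminator that judges whether mapped features follow a shared cross-modal distribution. A minimal illustrative PyTorch sketch of such modality-adversarial feature alignment (the two-encoder setup, dimensions and optimizers are assumptions, not the paper's implementation):

import torch
import torch.nn as nn

# Toy encoders mapping text and image features into a shared 128-d space.
dim = 128
text_enc = nn.Sequential(nn.Linear(300, dim), nn.ReLU(), nn.Linear(dim, dim))
img_enc = nn.Sequential(nn.Linear(2048, dim), nn.ReLU(), nn.Linear(dim, dim))
# Discriminator tries to tell image-derived features (label 1) from text-derived ones (label 0).
disc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(text_enc.parameters()) + list(img_enc.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

text_batch, img_batch = torch.randn(32, 300), torch.randn(32, 2048)  # stand-in features
for step in range(100):
    f_t, f_v = text_enc(text_batch), img_enc(img_batch)
    # Discriminator step: distinguish the two modalities.
    d_loss = bce(disc(f_v.detach()), torch.ones(32, 1)) + bce(disc(f_t.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Encoder step: fool the discriminator so both modalities share one distribution.
    g_loss = bce(disc(f_t), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()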

Adaptive Adversarial Attack on Scene Text Recognition [article]

Xiaoyong Yuan, Pan He, Xiaolin Andy Li, Dapeng Oliver Wu
2020 arXiv   pre-print
By leveraging the uncertainty of each task, we directly learn the adaptive multi-task weightings, without manually searching hyper-parameters.  ...  A unified architecture is developed and evaluated for both non-sequential tasks and sequential ones. To validate the effectiveness, we take the scene text recognition task as a case study.  ...  From Table II, we observe that both Adaptive Attack and Basic Attack with a modified binary search can successfully generate adversarial examples on the scene text recognition model.  ...
arXiv:1807.03326v3 fatcat:5xiolddubje5fmr5ccbrbdrbti
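The "adaptive multi-task weightings" learned from task uncertainty in the snippet above are in the spirit of homoscedastic-uncertainty loss weighting; a minimal PyTorch sketch under that assumption (the per-task losses are placeholders, not the attack objectives from the paper):

import torch

# Learned log-variances, one per task; exp(-s) acts as the task weight.
log_var = torch.zeros(2, requires_grad=True)

def weighted_loss(loss_a: torch.Tensor, loss_b: torch.Tensor) -> torch.Tensor:
    # L = exp(-s_a) * L_a + s_a + exp(-s_b) * L_b + s_b
    return (torch.exp(-log_var[0]) * loss_a + log_var[0]
            + torch.exp(-log_var[1]) * loss_b + log_var[1])

opt = torch.optim.Adam([log_var], lr=1e-2)
loss_a, loss_b = torch.tensor(2.0), torch.tensor(0.5)   # placeholder per-task losses
total = weighted_loss(loss_a, loss_b)
opt.zero_grad(); total.backward(); opt.step()            # the weights adapt as the losses change
print(log_var.detach())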

A universal adversarial policy for text classifiers

Gallil Maimon, Lior Rokach
2022 Neural Networks  
We achieve this by learning a single search policy over a predefined set of semantics preserving text alterations, on many texts.  ...  Discovering the existence of universal adversarial perturbations had large theoretical and practical impacts on the field of adversarial learning.  ...  By adding a reward term relating to the validity or likelihood of the output text, such as a language model's perplexity, we could learn to generate more natural adversarial texts.  ... 
doi:10.1016/j.neunet.2022.06.018 pmid:35763880 fatcat:bu33qyvidndhtljlqa2whbbqs4
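The last sentence of the snippet suggests a language model's perplexity as a naturalness signal for candidate adversarial texts. A small sketch of such a reward term, assuming the Hugging Face transformers package and GPT-2 as the scoring model (both are assumptions, not choices made in the paper):

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def naturalness_reward(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss       # mean negative log-likelihood
    perplexity = torch.exp(loss).item()
    return -perplexity                        # lower perplexity => higher reward

print(naturalness_reward("The movie was surprisingly good."))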

SneakyPrompt: Jailbreaking Text-to-image Generative Models [article]

Yuchen Yang, Bo Hui, Haolin Yuan, Neil Gong, Yinzhi Cao
2023 arXiv   pre-print
Given a prompt that is blocked by a safety filter, SneakyPrompt repeatedly queries the text-to-image generative model and strategically perturbs tokens in the prompt based on the query results to bypass  ...  Our evaluation shows that SneakyPrompt not only successfully generates NSFW images, but also outperforms existing text adversarial attacks when extended to jailbreak text-to-image generative models, in  ...  This work was supported in part by Johns Hopkins University Institute for Assured Autonomy (IAA) with grants 80052272 and 80052273, National Science Foundation (NSF) under grants CNS-21-31859, CNS-21-12562  ... 
arXiv:2305.12082v3 fatcat:etzh4xsbpnfezliaebk23nfy2a

Image-Text Multi-Modal Representation Learning by Adversarial Backpropagation [article]

Gwangbeen Park, Woobin Im
2016 arXiv   pre-print
To our knowledge, this work is the first approach to apply the adversarial learning concept to multi-modal learning without exploiting image-text pair information to learn multi-modal features.  ...  We present a novel method for image-text multi-modal representation learning.  ...  They use an adversarial learning concept inspired by GAN (Generative Adversarial Network) (Goodfellow et al., 2014) to achieve category-discriminative and domain-invariant features.  ...
arXiv:1612.08354v1 fatcat:pvixvhdeejfyvkqdqhjdeovtwa
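One common way to implement "adversarial backpropagation" for invariant features is a gradient-reversal layer; the sketch below shows that trick in PyTorch as a plausible reading of the abstract, not the authors' code:

import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity in the forward pass, negated (scaled) gradient in the backward
        # pass: the feature extractor is pushed toward modality-invariant features
        # while the modality classifier tries to tell image from text.
        return -ctx.lam * grad_output, None

features = torch.randn(8, 64, requires_grad=True)          # stand-in shared features
modality_logits = torch.nn.Linear(64, 2)(GradReverse.apply(features, 1.0))
modality_logits.sum().backward()
print(features.grad.norm())                                  # gradient arrives reversed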

Semantic Structure Enhanced Contrastive Adversarial Hash Network for Cross-media Representation Learning

Meiyu Liang, Junping Du, Xiaowen Cao, Yang Yu, Kangkang Lu, Zhe Xue, Min Zhang
2022 Proceedings of the 30th ACM International Conference on Multimedia  
Thirdly, a cross-media and intra-media contrastive adversarial representation learning mechanism is proposed  ...  Deep cross-media hashing technology provides an efficient cross-media representation learning solution for cross-media search.  ...  The image features learned by the image network are taken as the real image features, and the features learned by the text network are used as the generated image features.  ...
doi:10.1145/3503161.3548391 fatcat:nii2rtfrozbenaa4oecvk4xqce
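For context on the "deep cross-media hashing" mentioned above, here is an illustrative sketch of the hashing and Hamming-ranking step only (the network, semantic-structure and contrastive-adversarial losses from the paper are omitted):

import torch

def to_hash_codes(features: torch.Tensor) -> torch.Tensor:
    # Binarise continuous features: {-1, 1} signs mapped to {0, 1} bits.
    return (torch.sign(features) + 1) / 2

def hamming_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return (a != b).sum(dim=-1)

img_codes = to_hash_codes(torch.randn(1000, 64))   # database of image hash codes
query = to_hash_codes(torch.randn(64))              # one text-query hash code
# Rank database items by Hamming distance and keep the 5 nearest.
nearest = torch.topk(-hamming_distance(img_codes, query), k=5).indices
print(nearest)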

TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack [article]

Zhen Yu, Xiaosen Wang, Wanxiang Che, Kun He
2022 arXiv   pre-print
In particular, we find we can learn the importance of different words via the change in the prediction label caused by word substitutions on the adversarial examples.  ...  Extensive evaluations for text classification and textual entailment show that TextHacker significantly outperforms existing hard-label attacks in terms of attack performance as well as adversary quality  ...  Acknowledgement This work is supported by the National Natural Science Foundation (62076105) and the International Cooperation Foundation of Hubei Province, China (2021EHB011).  ...
arXiv:2201.08193v2 fatcat:vyfjkuiervcfpi2w5fzlovelsa
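A toy sketch of the idea in the first sentence of the snippet: scoring word importance from whether a substitution flips the hard predicted label. The black-box classifier here is a hypothetical stand-in, not the paper's attack:

import random

def classify(text: str) -> int:
    # Hypothetical hard-label black box: returns 1 if "good" appears, else 0.
    return int("good" in text)

def word_importance(words, substitutes, n_trials=20):
    scores = [0.0] * len(words)
    base = classify(" ".join(words))
    for _ in range(n_trials):
        i = random.randrange(len(words))
        perturbed = words.copy()
        perturbed[i] = random.choice(substitutes)
        if classify(" ".join(perturbed)) != base:   # label flipped -> word i matters
            scores[i] += 1.0
    return scores

print(word_importance("the food was good".split(), ["nice", "fine", "bad"]))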

Cross-modal Search Method of Technology Video based on Adversarial Learning and Feature Fusion [article]

Xiangbin Liu, Junping Du, Meiyu Liang, Ang Li
2022 arXiv   pre-print
The generator and discriminator are trained alternately based on adversarial learning, so that the data obtained by the feature mapping network are semantically consistent with the original data and the modal  ...  To address the above problems, this paper proposes a novel Feature Fusion based Adversarial Cross-modal Retrieval method (FFACR) to achieve text-to-video matching, ranking and searching.  ...  Acknowledgements This work was supported by the National Natural Science Foundation of China (No.62192784, No.62172056, No.61877006).  ...
arXiv:2210.05243v1 fatcat:j75goo6rafgdlduvvgzmcorw7e

Universal Rules for Fooling Deep Neural Networks based Text Classification [article]

Di Li, Danilo Vasconcellos Vargas, Sakurai Kouichi
2019 arXiv   pre-print
Here, we go beyond attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and therefore could turn any text sample into an adversarial one.  ...  By proposing a coevolutionary optimization algorithm we show that it is possible to create universal rules that can automatically craft imperceptible adversarial samples (only less than five perturbations  ...  Fig. 3. Applied Convolutional Neural Network models for text classification (DNN-1 and DNN-2) [3]. Fig. 4. An adversarial text sample generated by swapping the two letters (two elements of perturbation  ...
arXiv:1901.07132v2 fatcat:nsdxdivblvcftasb5rd2lsx65a
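The Fig. 4 caption refers to perturbations built from letter swaps; a tiny illustrative helper for that kind of character-level rule (not the paper's coevolutionary search):

def swap_letters(word: str, i: int) -> str:
    """Swap characters i and i+1, e.g. swap_letters('network', 3) -> 'netowrk'."""
    if i < 0 or i + 1 >= len(word):
        return word
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(swap_letters("network", 3))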

Generating Natural Adversarial Examples [article]

Zhengli Zhao, Dheeru Dua, Sameer Singh
2018 arXiv   pre-print
In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in the semantic space of a dense and continuous data representation, utilizing  ...  We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.  ...  We would also like to thank Ishaan Gulrajani and Junbo Jake Zhao for making their code available. This work is supported in part by Adobe Research and in part by FICO.  ...
arXiv:1710.11342v2 fatcat:urv6jnxcgfetpfloq7gjdowdtm
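A minimal sketch of the search-in-latent-space idea from the abstract, with a toy generator and victim classifier standing in for the paper's models:

import torch

latent_dim = 16
generator = torch.nn.Linear(latent_dim, 32)           # z -> data representation (toy)
classifier = torch.nn.Linear(32, 2)                    # victim classifier (toy)

z0 = torch.randn(latent_dim)                           # latent code of the original input
true_label = 0
best = None
for radius in [0.1, 0.2, 0.4, 0.8]:                    # widen the search gradually
    for _ in range(200):
        z = z0 + radius * torch.randn(latent_dim)      # random neighbour in semantic space
        if classifier(generator(z)).argmax().item() != true_label:
            best = z                                    # nearby misclassified sample found
            break
    if best is not None:
        break
print("adversarial latent found:", best is not None)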

Robustness Tests of NLP Machine Learning Models: Search and Semantically Replace [article]

Rahul Singh, Karan Jindal, Yufei Yu, Hanyu Yang, Tarun Joshi, Matthew A. Campbell, Wayne B. Shoumaker
2021 arXiv   pre-print
We also investigate the effectiveness of this strategy and provide a general framework to assess a variety of machine learning models.  ...  The overall approach relies upon a Search and Semantically Replace strategy that consists of two steps: (1) Search, which identifies important parts in the text; (2) Semantically Replace, which finds replacements  ...  Acknowledgements We would like to acknowledge Harsh Singhal and Vijayan N. Nair for their continuous support and insightful suggestions for the improvement of the manuscript.  ... 
arXiv:2104.09978v1 fatcat:tlfzqnkyjfdstheusjasxz7ckq
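A compact sketch of the two-step Search and Semantically Replace strategy described above, with a hypothetical classifier and a toy synonym table standing in for the framework's components:

def confidence(text: str) -> float:
    # Hypothetical classifier confidence for the positive class.
    return 0.9 if "excellent" in text else 0.4

def search_and_replace(words, synonyms):
    base = confidence(" ".join(words))
    # Step 1 (Search): importance = confidence drop when the word is removed.
    drops = [base - confidence(" ".join(words[:i] + words[i+1:])) for i in range(len(words))]
    target = max(range(len(words)), key=lambda i: drops[i])
    # Step 2 (Semantically Replace): substitute a near-synonym for that word.
    replaced = words.copy()
    replaced[target] = synonyms.get(words[target], words[target])
    return " ".join(replaced)

print(search_and_replace("the service was excellent".split(), {"excellent": "superb"}))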

Exploiting Class Probabilities for Black-box Sentence-level Attacks [article]

Raha Moraffah, Huan Liu
2024 arXiv   pre-print
Sentence-level attacks craft adversarial sentences that are synonymous with correctly-classified sentences but are misclassified by the text classifiers.  ...  examine the question of whether it is worthwhile or practical for black-box sentence-level attacks to use class probabilities.  ...  Algorithm 1: Learning the Adversarial Sentence Distribution via S2B2-Attack. Input: original text x_orig and its label y, standard deviation σ, population size p, learning rate η, maximum number of iterations  ...
arXiv:2402.02695v2 fatcat:tkjm6yz3dneelnl6ygeigwbvlq
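The Algorithm 1 header above lists a standard deviation, population size, and learning rate, which suggests a population-based (NES-style) estimate of the attack gradient. A generic sketch of that kind of update, made under this assumption and with a toy objective in place of S2B2-Attack's:

import numpy as np

def attack_objective(theta: np.ndarray) -> float:
    # Hypothetical score: higher when the sampled sentences fool the classifier.
    return -np.sum((theta - 1.0) ** 2)

sigma, p, eta, iters = 0.1, 20, 0.05, 200
theta = np.zeros(8)                                     # parameters of the sentence distribution
rng = np.random.default_rng(0)
for _ in range(iters):
    eps = rng.standard_normal((p, theta.size))          # population of perturbations
    scores = np.array([attack_objective(theta + sigma * e) for e in eps])
    grad = (scores[:, None] * eps).mean(axis=0) / sigma  # NES gradient estimate
    theta += eta * grad                                  # gradient ascent on the objective
print(theta.round(2))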

CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation [article]

Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, Ed Chi
2020 arXiv   pre-print
Experiments on real-world NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts, compared to many existing adversarial text generation approaches.  ...  In this work, we present a Controlled Adversarial Text Generation (CAT-Gen) model that, given an input text, generates adversarial texts through controllable attributes that are known to be invariant to  ...  By utilizing a text generation model and a larger search space over the controlled attributes, our model is able to generate more diverse and fluent adversarial texts compared to existing approaches.  ... 
arXiv:2010.02338v1 fatcat:o2liixpt4ffaxhpabiqvignbvq

Automated Robustness with Adversarial Training as a Post-Processing Step [article]

Ambrish Rawat, Mathieu Sinn, Beat Buesser
2021 arXiv   pre-print
Adversarial training is a computationally expensive task and hence searching for neural network architectures with robustness as the criterion can be challenging.  ...  Specific policies are adopted for tuning the hyperparameters of the different steps, resulting in a fully automated pipeline for generating adversarially robust deep learning models.  ...  Automated Generation of Robust Models: The proposed framework for automated generation of adversarially robust deep learning models comprises two steps. First, a NAS algorithm is used to obtain an  ...
arXiv:2109.02532v1 fatcat:gbqqnabpcnbw3dg34d7mhc6ydm

Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder [article]

Yao Qiu, Jinchao Zhang, Jie Zhou
2021 arXiv   pre-print
Besides, RAR can also be used to generate text-form adversarial samples.  ...  Recent work has proposed several efficient approaches for generating gradient-based adversarial perturbations on embeddings and proved that the model's performance and robustness can be improved when they  ...  Acknowledgments We would like to thank all the reviewers for their insightful and valuable comments and suggestions.  ... 
arXiv:2109.06536v1 fatcat:z65b3bfkizer3kdzaqycg6cjfq
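A brief sketch of the kind of gradient-based embedding perturbation the abstract refers to (an FGM-style step on a toy model; the paper's contrastive-learning and auto-encoder components are not shown):

import torch
import torch.nn as nn

emb = nn.Embedding(1000, 64)
head = nn.Linear(64, 2)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (4, 12))
labels = torch.randint(0, 2, (4,))

e = emb(tokens)                       # (batch, seq, dim) embeddings
e.retain_grad()
loss = loss_fn(head(e.mean(dim=1)), labels)
loss.backward()                       # gradients w.r.t. the embeddings

epsilon = 1.0
r_adv = epsilon * e.grad / (e.grad.norm() + 1e-12)     # normalised gradient direction
adv_loss = loss_fn(head((emb(tokens) + r_adv).mean(dim=1)), labels)
adv_loss.backward()                   # in training this is combined with the clean loss
print(float(loss), float(adv_loss))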
Showing results 1 — 15 out of 49,317 results