551,514 Hits in 4.9 sec

Regularized Learning with Networks of Features

Ted Sandler, John Blitzer, Partha Pratim Talukdar, Lyle H. Ungar
2008 Neural Information Processing Systems  
For text classification, regularization using networks of word co-occurrences outperforms manifold learning and compares favorably to other recently proposed semi-supervised learning methods.  ...  Here we present a framework for regularized learning when one has prior knowledge about which features are expected to have similar and dissimilar weights.  ...  Regularized Learning with Networks of Features We assume a standard supervised learning framework in which we are given a training set of instances T = {(x_i, y_i)}_{i=1}^n with x_i ∈ R^d and associated  ... 
dblp:conf/nips/SandlerBTU08 fatcat:k2uoeiuiw5hilntbm6x2ubrcf4
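The mechanism this abstract describes is essentially a graph (Laplacian) penalty on feature weights: features connected in a co-occurrence network are pushed toward similar weights. Below is a minimal, hypothetical Python sketch of that idea for logistic regression; the function name, hyperparameters, and the plain gradient-descent loop are illustrative, not the authors' implementation.

```python
import numpy as np

def graph_regularized_logreg(X, y, A, lam=0.1, lr=0.1, epochs=200):
    """Logistic regression whose weights are smoothed over a feature graph.

    X: (n, d) data, y: (n,) labels in {0, 1},
    A: (d, d) symmetric affinity matrix over features (co-occurrence graph).
    The penalty sum_ij A_ij (w_i - w_j)^2 equals 2 * w^T L w with L = D - A.
    """
    n, d = X.shape
    L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad_w = X.T @ (p - y) / n + 2 * lam * (L @ w)
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```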

Deep Neural Network Regularization for Feature Selection in Learning-to-Rank

Ashwini Rahangdale, Shital Raut
2019 IEEE Access  
The main aim of regularization is optimizing the weights of the neural network, selecting the relevant features via active neurons at the input layer, and pruning the network by keeping only active neurons  ...  The sparsity of the network is measured by the sparsity ratio and is compared with learning-to-rank models that adopt the embedded method for feature selection.  ...  (a) number of active neurons with the ℓ2-regularized network; (b) number of active neurons with the ℓ1-regularized network; (c) number of active neurons with the group-ℓ1-regularized network; (d) number of active neurons  ... 
doi:10.1109/access.2019.2902640 fatcat:tfrgjgusdfbaxhw33zrysmmjca
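As a rough illustration of regularizing the input layer to select features (not the authors' code; the ranking loss is replaced by a simple MSE and all names are hypothetical), one might penalize the per-feature column norms of the first layer and report the fraction of features driven to zero as a sparsity ratio:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RankNet(nn.Module):
    # small scoring network; only the first layer is targeted by the penalty
    def __init__(self, d_in, d_hidden=32):
        super().__init__()
        self.input_layer = nn.Linear(d_in, d_hidden)
        self.out = nn.Linear(d_hidden, 1)

    def forward(self, x):
        return self.out(torch.relu(self.input_layer(x)))

def train_step(model, x, y, opt, lam=1e-3):
    opt.zero_grad()
    loss = F.mse_loss(model(x).squeeze(-1), y)
    # group-l1 penalty: l2 norm per input feature (columns of the weight matrix)
    loss = loss + lam * model.input_layer.weight.norm(p=2, dim=0).sum()
    loss.backward()
    opt.step()
    return loss.item()

def sparsity_ratio(model, tol=1e-3):
    # fraction of input features whose column norm has been driven to ~0
    col_norms = model.input_layer.weight.norm(p=2, dim=0)
    return (col_norms < tol).float().mean().item()
```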

Learning with Feature Network and Label Network Simultaneously

Yingming Li, Ming Yang, Zenglin Xu, Zhongfei (Mark) Zhang
2017 Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)  
a robust predictor with the feature network and the label network regularization simultaneously.  ...  To improve the generalization performance, in this paper we propose Doubly Regularized Multi-Label learning (DRML), exploiting feature network and label network regularization simultaneously.  ...  Further, we incorporate the label network regularization into the framework of learning with the feature network.  ... 
doi:10.1609/aaai.v31i1.10715 fatcat:duq2ibcnqfadza3i6qahj5ksh4
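The snippet describes regularizing with a feature network and a label network at the same time. A generic, hypothetical sketch of that "doubly regularized" idea for a linear multi-label model (not DRML's actual formulation) could look like:

```python
import numpy as np

def doubly_regularized_multilabel(X, Y, A_f, A_l, lam_f=0.1, lam_l=0.1,
                                  lr=0.01, epochs=300):
    """Learn W (d x k) for multi-label prediction Y ≈ X @ W with two graph penalties:
      feature graph:  tr(W^T L_f W)  -- similar features get similar rows
      label graph:    tr(W L_l W^T)  -- correlated labels get similar columns
    """
    n, d = X.shape
    k = Y.shape[1]
    L_f = np.diag(A_f.sum(axis=1)) - A_f
    L_l = np.diag(A_l.sum(axis=1)) - A_l
    W = np.zeros((d, k))
    for _ in range(epochs):
        grad = X.T @ (X @ W - Y) / n
        grad += lam_f * (L_f + L_f.T) @ W      # d/dW of tr(W^T L_f W)
        grad += lam_l * W @ (L_l + L_l.T)      # d/dW of tr(W L_l W^T)
        W -= lr * grad
    return W
```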

Automatic feature selection in neuroevolution

Shimon Whiteson, Peter Stone, Kenneth O. Stanley, Risto Miikkulainen, Nate Kohl
2005 Proceedings of the 2005 conference on Genetic and evolutionary computation - GECCO '05  
Feature selection is the process of finding the set of inputs to a machine learning algorithm that will yield the best performance.  ...  Initial experiments in an autonomous car racing simulation demonstrate that FS-NEAT can learn better and faster than regular NEAT.  ...  By learning appropriate feature sets, FS-NEAT learns significantly better networks and learns them faster than regular NEAT.  ... 
doi:10.1145/1068009.1068210 dblp:conf/gecco/WhitesonSSMK05 fatcat:gkcdfwjfmzgtxdn5jfaqmjcfmm

DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks [article]

Xingjian Li, Haoyi Xiong, Hanchao Wang, Yuxuan Rao, Liping Liu, Zeyu Chen, Jun Huan
2020 arXiv   pre-print
In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention.  ...  Specifically, in addition to minimizing the empirical loss, DELTA intends to align the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by  ...  with feature map regularization.  ... 
arXiv:1901.09229v4 fatcat:vsx5dc6eyraaln765j4wpow2aa
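DELTA's core idea, per the abstract, is aligning feature maps of the fine-tuned network with those of the pre-trained network, weighted by channel attention. A hedged PyTorch-style sketch (the attention weights are assumed to be given as inputs; the paper computes them by selecting useful channels):

```python
import torch
import torch.nn.functional as F

def delta_style_loss(logits, targets, target_maps, source_maps, channel_attn,
                     beta=0.01):
    """Empirical loss plus attention-weighted feature-map alignment.

    target_maps / source_maps: lists of (B, C, H, W) activations from the
    fine-tuned and the frozen pre-trained network at matching layers.
    channel_attn: list of (C,) non-negative weights per layer.
    """
    loss = F.cross_entropy(logits, targets)
    reg = 0.0
    for t, s, a in zip(target_maps, source_maps, channel_attn):
        diff = (t - s.detach()).pow(2).mean(dim=(0, 2, 3))  # per-channel MSE
        reg = reg + (a * diff).sum()
    return loss + beta * reg
```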

On Regularization Properties of Artificial Datasets for Deep Learning [article]

Karol Antczak
2019 arXiv   pre-print
The paper discusses regularization properties of artificial data for deep learning. Artificial datasets make it possible to train neural networks when real data are in short supply.  ...  One can treat this property of artificial data as a kind of "deep" regularization. It is thus possible to regularize hidden layers of the network by generating the training data in a certain way.  ...  It was shown that, by generating the input data from high-level features, it is possible to regularize hidden layers of the network by exploiting the ability of deep networks to learn hierarchical representations  ... 
arXiv:1908.07005v1 fatcat:3lqiaiidebc7zixo5csnze2hiq
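To make the "generate inputs from high-level features" idea concrete, here is a small hypothetical generator; the names and the tanh decoder are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def make_artificial_dataset(n, d_latent=4, d_obs=32, noise=0.1, seed=0):
    """Generate inputs from low-dimensional 'high-level' features.

    A fixed random decoder maps latent factors z to observations x, and the
    label depends only on z. Training a network on such data encourages its
    hidden layers to recover the latent structure, which acts as a form of
    regularization in the sense discussed above.
    """
    rng = np.random.default_rng(seed)
    decoder = rng.standard_normal((d_latent, d_obs))
    z = rng.standard_normal((n, d_latent))            # high-level features
    x = np.tanh(z @ decoder) + noise * rng.standard_normal((n, d_obs))
    y = (z[:, 0] + z[:, 1] > 0).astype(np.int64)      # label from latents only
    return x, y
```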

Combined Group and Exclusive Sparsity for Deep Neural Networks

Jaehong Yoon, Sung Ju Hwang
2017 International Conference on Machine Learning  
The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting  ...  of features.  ...  A similar regularizer was used in (Hwang et al., 2011) in a metric learning setting, with an additional ℓ1-regularization that helps learn discriminative features for each metric.  ... 
dblp:conf/icml/YoonH17 fatcat:755tf3bc3jg35komcnl46rcqqe
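A minimal sketch of combining group sparsity (remove whole neurons) with exclusive sparsity (encourage competition among weights within a neuron) on one weight matrix; the exact grouping and weighting scheme in the paper may differ:

```python
import torch

def combined_sparsity_penalty(weight, mu=0.5):
    """Combined group and exclusive sparsity penalty for one layer.

    weight: (out_features, in_features); groups are taken as input neurons
    (columns), a common choice.
      group sparsity (l2,1):     sum of column l2 norms -> prunes whole neurons
      exclusive sparsity (1,2):  sum of squared column l1 norms -> promotes
                                 competition among weights within a neuron
    mu balances the two terms.
    """
    group = weight.norm(p=2, dim=0).sum()
    exclusive = 0.5 * weight.abs().sum(dim=0).pow(2).sum()
    return mu * group + (1.0 - mu) * exclusive
```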

AL2: Progressive Activation Loss for Learning General Representations in Classification Neural Networks [article]

Majed El Helou, Frederike Dümbgen, Sabine Süsstrunk
2020 arXiv   pre-print
The large capacity of neural networks enables them to learn complex functions.  ...  A common practical approach to attenuate overfitting is the use of network regularization techniques.  ...  Our contributions are summarized as follows. 1) We present AL2, a progressive regularization method acting on the activations of the feature representation learned by neural networks before their final  ... 
arXiv:2003.03633v1 fatcat:dhm3xeefcrhq3nejk3csvkd2oy
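AL2, as described here, penalizes the activations of the feature representation before the final classifier with a weight that grows progressively during training. A toy sketch with an assumed linear ramp schedule (the paper's actual schedule may differ):

```python
import torch
import torch.nn.functional as F

def al2_style_loss(logits, targets, penultimate_act, epoch, total_epochs,
                   lam_max=1e-3):
    """Classification loss plus a progressively weighted activation penalty.

    penultimate_act: activations of the feature layer before the classifier.
    The penalty weight ramps up linearly over training (illustrative schedule).
    """
    lam = lam_max * min(1.0, epoch / max(1, total_epochs))
    act_penalty = penultimate_act.pow(2).mean()
    return F.cross_entropy(logits, targets) + lam * act_penalty
```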

Regularization by Intrinsic Plasticity and Its Synergies with Recurrence for Random Projection Methods

Klaus Neumann, Christian Emmerich, Jochen J. Steil
2012 Journal of Intelligent Learning Systems and Applications  
We provide an in-depth analysis of such networks with respect to feature selection, model complexity, and regularization.  ...  selection parameters like network size, initialization ranges, or the regularization parameter of the output learning.  ...  In this paper we distinguish two different levels of regularization: output regularization with regard to the linear output learning and input or feature regularization with regard to the feature encoding  ... 
doi:10.4236/jilsa.2012.43024 fatcat:qifzvgfp2vb5vfp3kdowvdzr6u

Deep Multi-View Learning using Neuron-Wise Correlation-Maximizing Regularizers [article]

Kui Jia, Jiehong Lin, Mingkui Tan, Dacheng Tao
2019 arXiv   pre-print
Many machine learning problems are concerned with discovering or associating common patterns in data of multiple views or modalities. Multi-view learning is one of the methods to achieve such goals.  ...  Recent methods propose deep multi-view networks via adaptation of generic Deep Neural Networks (DNNs), which concatenate features of individual views at intermediate network layers (i.e., fusion layers  ...  Under the framework of regularized function learning, this amounts to training network parameters by penalizing objectives of the main learning tasks with correlation-maximizing regularization at fusion  ... 
arXiv:1904.11151v1 fatcat:omjyjr4gabeuliq3ik3en6qrie
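The regularizer described here maximizes correlation between corresponding neurons of different views at fusion layers. A small sketch of one such neuron-wise correlation term, subtracted from the task loss so that it is maximized; gamma is an assumed weighting hyperparameter:

```python
import torch

def neuronwise_correlation(h1, h2, eps=1e-8):
    """Mean Pearson-style correlation between corresponding neurons of two views.

    h1, h2: (batch, units) activations of the two view-specific branches at a
    fusion layer.
    """
    h1 = h1 - h1.mean(dim=0, keepdim=True)
    h2 = h2 - h2.mean(dim=0, keepdim=True)
    cov = (h1 * h2).mean(dim=0)
    std = h1.pow(2).mean(dim=0).sqrt() * h2.pow(2).mean(dim=0).sqrt() + eps
    return (cov / std).mean()

# usage (illustrative): loss = task_loss - gamma * neuronwise_correlation(h1, h2)
```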

Learning Temporal Regularity in Video Sequences

Mahmudul Hasan, Jonghyun Choi, Jan Neumann, Amit K. Roy-Chowdhury, Larry S. Davis
2016 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We approach this problem by learning a generative model for regular motion patterns (termed as regularity) using multiple sources with very limited supervision.  ...  Second, we build a fully convolutional feed-forward autoencoder to learn both the local features and the classifiers as an end-to-end learning framework.  ...  Since the network is trained with regular videos, it learns the regular motion patterns.  ... 
doi:10.1109/cvpr.2016.86 dblp:conf/cvpr/0003CNRD16 fatcat:z4gq5mndhrcrbmitar2ldywfvq
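The paper learns regularity with a fully convolutional autoencoder and scores frames by reconstruction error. A toy single-frame sketch under that assumption (the actual model ingests stacked frames and also operates on hand-crafted motion features):

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    # toy fully convolutional autoencoder over grayscale frames
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def regularity_score(model, frames):
    """Per-frame regularity: low reconstruction error -> score near 1."""
    with torch.no_grad():
        err = (model(frames) - frames).pow(2).flatten(1).sum(dim=1)
        err = (err - err.min()) / (err.max() - err.min() + 1e-8)
    return 1.0 - err
```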

SMIL: Multimodal Learning with Severely Missing Modality

Mengmeng Ma, Jian Ren, Long Zhao, Sergey Tulyakov, Cathy Wu, Xi Peng
2021 Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)  
For the first time in the literature, this paper formally studies multimodal learning with missing modality in terms of flexibility (missing modalities in training, testing, or both) and efficiency (most  ...  The results prove the state-of-the-art performance of SMIL over existing methods and generative baselines including autoencoders and generative adversarial networks.  ...  Acknowledgements This work is partially supported by the Data Science Institute (DSI) at University of Delaware and Snap Research.  ... 
doi:10.1609/aaai.v35i3.16330 fatcat:owvuukxvefbojlgo2jyipu4724

Interpretable Neuron Structuring with Graph Spectral Regularization [article]

Alexander Tong, David van Dijk, Jay S. Stanley III, Matthew Amodio, Kristina Yim, Rebecca Muhle, James Noonan, Guy Wolf, Smita Krishnaswamy
2020 arXiv   pre-print
This penalty encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data via co-activations of a hidden layer of the neural network.  ...  While neural networks are powerful approximators used to classify or embed data into lower dimensional spaces, they are often regarded as black boxes with uninterpretable features.  ...  To circumvent this problem, we propose to learn a feature graph in the latent space of a neural network using feature co-activations as a measure of similarity.  ... 
arXiv:1810.00424v5 fatcat:wvsyah2uljhchpu3aywgatmjlm
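Graph spectral regularization, as summarized above, penalizes non-smooth activations over a graph on the hidden units. A minimal sketch of the Laplacian quadratic-form penalty; the graph itself is assumed to be given, either predetermined or built from co-activations as the abstract mentions:

```python
import torch

def graph_spectral_penalty(activations, laplacian):
    """Graph smoothness penalty on a hidden layer's activations.

    activations: (batch, units); laplacian: (units, units) graph Laplacian
    over the hidden units. Computes mean_b h_b^T L h_b, which penalizes
    activation differences along graph edges.
    """
    quad = torch.einsum('bi,ij,bj->', activations, laplacian, activations)
    return quad / activations.shape[0]
```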

Loss Function Entropy Regularization for Diverse Decision Boundaries [article]

Chong Sue Sin
2022 arXiv   pre-print
This paper will present a remarkably simple method to modify a single unsupervised classification pipeline to automatically generate an ensemble of neural networks with varied decision boundaries to learn  ...  output space of unsupervised learning, thereby diversifying the latent representation of decision boundaries of neural networks.  ...  neural networks with similar accuracy but different latent representation of features.  ... 
arXiv:2205.00224v1 fatcat:e52r427lrngwnou4gnts7jjpte

Uniform Priors for Data-Efficient Transfer [article]

Samarth Sinha, Karsten Roth, Anirudh Goyal, Marzyeh Ghassemi, Hugo Larochelle, Animesh Garg
2020 arXiv   pre-print
Across all experiments, we show that uniformity regularization consistently offers benefits over baseline methods and is able to achieve state-of-the-art performance in Deep Metric Learning and Meta-Learning  ...  It is therefore crucial to understand what makes for good, transfer-able features in deep networks that best allow for such adaptation.  ...  Large γ values hinder effective feature learning from training data, while values of γ too small result in weak regularization, leading to a non-uniform learned feature distribution with reduced generalization  ... 
arXiv:2006.16524v2 fatcat:w6e74imnbbhe7jb3ghuf7fjymm
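One standard way to encourage a uniform feature distribution, used here only as an illustrative stand-in since the paper's exact uniformity prior may differ, is to minimize the log of the mean Gaussian potential over pairwise distances of normalized embeddings, scaled by the γ mentioned in the snippet:

```python
import torch
import torch.nn.functional as F

def uniformity_penalty(features, t=2.0):
    """Encourage L2-normalized features to spread uniformly on the hypersphere.

    features: (batch, dim) embeddings; requires batch size >= 2.
    """
    z = F.normalize(features, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

# usage (illustrative): loss = task_loss + gamma * uniformity_penalty(feats)
```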
Showing results 1 — 15 out of 551,514 results