1,061 Hits in 4.4 sec

Sequential Alternating Proximal Method for Scalable Sparse Structural SVMs

P. Balamurugan, Shirish Shevade, T. Ravindra Babu
2012 2012 IEEE 12th International Conference on Data Mining  
We compare the proposed method with existing methods for L1-regularized Structural SVMs.  ...  Though L1-regularized structural SVMs have been studied in the past, the use of the elastic net regularizer for structural SVMs has not been explored yet.  ...  We also conducted experiments to study the contributions of the L1 and L2 regularizers to model sparsity.  ... 
doi:10.1109/icdm.2012.81 dblp:conf/icdm/PSB12 fatcat:bfd5pv4amvantgu2j7veoj7idm
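As an illustration of the elastic-net penalty mentioned in this entry (not the paper's sequential alternating proximal method), the sketch below shows the elastic-net proximal step, in which soft-thresholding from the L1 term produces sparsity and the L2 term adds an extra shrinkage factor. Function and variable names are hypothetical.

```python
import numpy as np

def prox_elastic_net(w, step, lam1, lam2):
    """Proximal step for the elastic-net penalty lam1*||w||_1 + (lam2/2)*||w||_2^2.

    Soft-thresholding handles the L1 part; the L2 part shrinks the result
    towards zero by a multiplicative factor.
    """
    soft = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)
    return soft / (1.0 + step * lam2)

# Example: a dense weight vector becomes sparse after one proximal step.
w = np.array([0.8, -0.05, 0.3, 0.01, -0.6])
print(prox_elastic_net(w, step=0.5, lam1=0.2, lam2=1.0))
```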

A conjugate subgradient algorithm with adaptive preconditioning for LASSO minimization [article]

Alessandro Mirone, Pierre Paleo
2015 arXiv   pre-print
A comparison with other state-of-the-art methods shows a significant reduction in the number of iterations, which makes this algorithm appealing for practical use.  ...  This paper describes a new efficient conjugate subgradient algorithm which minimizes a convex function containing a least-squares fidelity term and an absolute-value regularization term.  ...  Acknowledgement We thank Jerome Lesaint who, during his internship from UJF, participated in the initial phase of the investigations, studying the convergence properties of Conjugate Gradient on smoothed  ... 
arXiv:1506.07730v2 fatcat:7et3y5f44neghez75fxw3h6mlm
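For reference, the objective described in this entry is the LASSO problem, minimize 0.5*||Ax - b||_2^2 + lam*||x||_1. The sketch below is a plain subgradient iteration for that objective, not the paper's preconditioned conjugate subgradient algorithm; all names are placeholders.

```python
import numpy as np

def lasso_subgradient_step(x, A, b, lam, step):
    """One subgradient step on 0.5*||Ax - b||_2^2 + lam*||x||_1.

    A subgradient of |x_i| is sign(x_i) for x_i != 0 and any value in
    [-1, 1] at zero; here we simply use 0 at zero.
    """
    grad_fit = A.T @ (A @ x - b)   # gradient of the quadratic fidelity term
    sub_l1 = lam * np.sign(x)      # a subgradient of the L1 term
    return x - step * (grad_fit + sub_l1)

rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
x = np.zeros(5)
for _ in range(200):
    x = lasso_subgradient_step(x, A, b, lam=0.1, step=0.01)
print(x)
```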

Linearly Constrained Smoothing Group Sparsity Solvers in Off-grid Model [article]

Cheng-Yu Hung, Mostafa Kaveh
2019 arXiv   pre-print
Finally, we propose algorithms for the quadratically constrained L2-L1 mixed-norm minimization problem by using smoothed dual conic optimization (SDCO) and a continuation technique.  ...  Then, iterative algorithms for the BPDN formulation are proposed by combining the Nesterov smoothing technique with the accelerated proximal gradient method, and a convergence analysis of the method is conducted  ...  a/‖a‖_2, if ‖a‖_2 > 1; a, if ‖a‖_2 ≤ 1 (Eq. 22). Similarly, ∀u ∈ U_ℓ1, if we choose d_ℓ1(u) = (1/2)‖u‖_2^2, then u_ℓ1 can be computed as u_ℓ1 = S_1((η/µ)ν), where S_1(·) denotes the projection operator of projecting  ... 
arXiv:1903.07164v2 fatcat:yk7d4ecaevcn3l7gfe34xt2yeq
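The piecewise formula quoted in this snippet (Eq. 22) is the Euclidean projection onto the unit L2 ball: rescale the point if its norm exceeds one, otherwise leave it unchanged. A direct sketch of that operator, independent of the paper's SDCO algorithm:

```python
import numpy as np

def project_unit_l2_ball(a):
    """Projection onto {x : ||x||_2 <= 1}: return a / ||a||_2 if ||a||_2 > 1,
    otherwise return a unchanged (the case split quoted in the snippet)."""
    norm = np.linalg.norm(a)
    return a / norm if norm > 1.0 else a

print(project_unit_l2_ball(np.array([3.0, 4.0])))  # -> [0.6, 0.8]
print(project_unit_l2_ball(np.array([0.3, 0.4])))  # unchanged
```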

Extreme Learning Machines with Regularization for the Classification of Gene Expression Data

Dániel T. Várkonyi, Krisztian Buza
2019 Conference on Theory and Practice of Information Technologies  
In this paper, we compare ELMs with different regularization strategies (no regularization, L1, L2) in the context of a binary classification task related to gene expression data.  ...  As L1 regularization is known to lead to sparse structures (i.e., many of the learned weights are zero) for various models, we examine the distribution of the learned weights and the sparsity of  ...  Acknowledgement This work was supported by the project no. 20460-3/2018/FEKUTSTRAT within the Institutional Excellence Program in Higher Education of the Hungarian Ministry of Human Capacities.  ... 
dblp:conf/itat/VarkonyiB19 fatcat:2gm5f7xlarh7bdyfrkuz3azydm
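In an extreme learning machine, the hidden layer is randomly initialized and only the output weights are trained, so the L2-regularized variant compared in this entry reduces to a ridge solve on the hidden activations. The sketch below follows that standard formulation under assumed names and hyperparameters; it is not the authors' exact experimental setup.

```python
import numpy as np

def elm_fit_l2(X, y, n_hidden=100, lam=1.0, seed=0):
    """Random hidden layer + L2-regularized (ridge) solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Ridge solution: beta = (H^T H + lam*I)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```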

Improved error rates for sparse (group) learning with Lipschitz loss functions [article]

Antoine Dedieu
2021 arXiv   pre-print
For Group L1-L2 regularization, our bounds scale as (s^*/n) log(G/s^*) + m^*/n, where G is the total number of groups and m^* the number of coefficients in the s^* groups which contain β^*, and improve  ...  We propose a new theoretical framework that uses common assumptions in the literature to simultaneously derive new high-dimensional L2 estimation upper bounds for all three regularization schemes.  ...  We select α ≥ 2 so that µ(k^*) ≤ 2αM for L1 and Slope regularizations, and µ(g^* s^*) ≤ 2αM √s^* for Group L1-L2 regularization.  ... 
arXiv:1910.08880v7 fatcat:4wvxepiltfc4vmpdisj4gh54dy

Kernel Task-Driven Dictionary Learning for Hyperspectral Image Classification [article]

Soheil Bahrampour and Nasser M. Nasrabadi and Asok Ray and Kenneth W. Jenkins
2015 arXiv   pre-print
While these methods are usually developed under an ℓ_1 sparsity constraint (prior) in the input domain, recent studies have demonstrated the advantages of sparse representation using structured sparsity priors  ...  Moreover, the proposed algorithm uses a joint (ℓ_12) sparsity prior to enforce collaboration among the neighboring pixels.  ...  Kernelized sparse representation with structured sparsity prior: Kernel methods are usually used to project the data set into a higher-dimensional feature space to make different classes become linearly  ... 
arXiv:1502.03126v1 fatcat:cffqj3vv6zebjftszspyikia3i
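The joint (ℓ_12) sparsity prior mentioned in this entry penalizes the sum of row-wise L2 norms of a coefficient matrix, so neighboring pixels are pushed to share the same active dictionary atoms. The sketch below shows that norm and its proximal operator (row-wise group soft-thresholding) as a generic illustration, not the paper's kernelized task-driven dictionary learning algorithm.

```python
import numpy as np

def l12_norm(C):
    """Sum of the L2 norms of the rows of C: encourages entire rows to be zero."""
    return np.sum(np.linalg.norm(C, axis=1))

def prox_l12(C, tau):
    """Row-wise group soft-thresholding: shrink each row's L2 norm by tau."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return C * scale

C = np.array([[0.5, 0.4], [0.05, -0.02], [1.0, 0.0]])
print(l12_norm(C))
print(prox_l12(C, tau=0.1))  # the small middle row is zeroed out
```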

Sparse reconstruction by separable approximation

Stephen J. Wright, Robert D. Nowak, Mario A. T. Figueiredo
2008 Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing  
One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer.  ...  Experiments with CS problems show that our approach provides state-of-the-art speed for the standard ℓ2-ℓ1 problem, and is also efficient on problems with GS regularizers.  ...  SpaRSA is also related to the GPSR (gradient projection for sparse reconstruction) method recently presented by the authors of this manuscript [15].  ... 
doi:10.1109/icassp.2008.4518374 dblp:conf/icassp/WrightNF08 fatcat:c3f5ul6kvje6ditacaf2lrjrgu

Sparse Reconstruction by Separable Approximation

S.J. Wright, R.D. Nowak, M.A.T. Figueiredo
2009 IEEE Transactions on Signal Processing  
One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer.  ...  Experiments with CS problems show that our approach provides state-of-the-art speed for the standard ℓ2-ℓ1 problem, and is also efficient on problems with GS regularizers.  ...  SpaRSA is also related to the GPSR (gradient projection for sparse reconstruction) method recently presented by the authors of this manuscript [15].  ... 
doi:10.1109/tsp.2009.2016892 fatcat:o2bznsclfvawfftztsd4onsiai
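The ℓ2-ℓ1 problem in the two SpaRSA entries above, minimize 0.5*||Ax - b||_2^2 + lam*||x||_1, is solved by repeatedly minimizing a separable quadratic approximation, which for the ℓ1 regularizer reduces to soft-thresholding a gradient step. The sketch below is a bare-bones caricature of that iteration with a fixed curvature parameter alpha (no Barzilai-Borwein step or acceptance test); alpha should be at least the largest eigenvalue of A^T A for this simplified version to converge.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparsa_like(A, b, lam, alpha, n_iter=300):
    """Separable-approximation iteration: gradient step on the quadratic term,
    then componentwise soft-thresholding with threshold lam/alpha."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / alpha, lam / alpha)
    return x
```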

Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells

Yunfei Huang, Christoph Schell, Tobias B. Huber, Ahmet Nihat Şimşek, Nils Hersch, Rudolf Merkel, Gerhard Gompper, Benedikt Sabass
2019 Scientific Reports  
Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization.  ...  We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net.  ...  (b) In this work, we test five regularization methods for traction reconstruction: L2 regularization (L2), L1 regularization (L1), EN regularization (EN), Proximal Gradient Lasso (PGL) and Proximal Gradient  ... 
doi:10.1038/s41598-018-36896-x pmid:30679578 pmcid:PMC6345967 fatcat:25xfychrtrgsna2oxz72c3oex4
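The classical L2 (Tikhonov) scheme that this entry compares against solves a regularized least-squares problem with a closed-form solution; the paper's Bayesian variants additionally choose the regularization weight automatically, which the sketch below does not attempt. Symbol names (G for the forward operator, u for displacements, f for tractions) are placeholders.

```python
import numpy as np

def tikhonov_solve(G, u, lam):
    """L2-regularized reconstruction: minimize ||G f - u||_2^2 + lam * ||f||_2^2,
    solved in closed form via the regularized normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u)
```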

Structure learning in random fields for heart motion abnormality detection

Mark Schmidt, Kevin Murphy, Glenn Fung, Romer Rosales
2008 2008 IEEE Conference on Computer Vision and Pattern Recognition  
We consider block-L1 regularization for each set of features associated with an edge, and formalize an efficient projection method to find the globally optimal penalized maximum likelihood solution.  ...  To do this, we propose a method for jointly learning the structure and parameters of conditional random fields, formulating these tasks as a convex optimization problem.  ...  We considered block-L1 methods for α = {1, 2, ∞} for both the MB and RF variants.  ... 
doi:10.1109/cvpr.2008.4587367 dblp:conf/cvpr/SchmidtMFR08 fatcat:yp472iuphzf2tmt4ejtjmdmnkq

Atmospheric Inverse Modeling via Sparse Reconstruction

Nils Hase, Scot M. Miller, Peter Maaß, Justus Notholt, Mathias Palm, Thorsten Warneke
2016 Geoscientific Model Development Discussions  
In this study we present a new regularization approach for ill-posed inverse problems in atmospheric science.  ...  It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system.  ...  The collaboration was supported by a research scholarship from the Deutscher Akademischer Austauschdienst (DAAD).  ... 
doi:10.5194/gmd-2016-256 fatcat:qcfpwjp5ivcdpgong7mqvg377i

Atmospheric inverse modeling via sparse reconstruction

Nils Hase, Scot M. Miller, Peter Maaß, Justus Notholt, Mathias Palm, Thorsten Warneke
2017 Geoscientific Model Development  
It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system.  ...  In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science.  ...  The collaboration was supported by a research scholarship from the Deutscher Akademischer Austauschdienst (DAAD).  ... 
doi:10.5194/gmd-10-3695-2017 fatcat:cllkwapy2rgmnmqlbalc7ew3tu

2DNMR data inversion using locally adapted multi-penalty regularization [article]

Villiam Bortolotti, Germana Landi, Fabiana Zama
2020 arXiv   pre-print
The method solves an unconstrained optimization problem whose objective contains a data-fitting term, a single L1 penalty parameter and a multiple-parameter L2 penalty.  ...  This paper proposes a multi-penalty method with locally adapted regularization parameters for fast and accurate inversion of 2DNMR data.  ...  As already discussed in [6], the starting guess f^(0) is computed by applying a few iterations of the Gradient Projection (GP) method to the nonnegatively constrained least-squares problem min_{f ≥ 0}  ... 
arXiv:2007.01268v1 fatcat:tpfuf3r5xzanhgcci37cw4xvfm
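The objective described in this entry combines a data-fitting term with one L1 penalty parameter and locally adapted L2 weights. The sketch below evaluates such an objective and performs one proximal-gradient step under assumed names; the paper's rule for adapting the L2 weights is not reproduced, and the nonnegativity projection is included only because the snippet mentions a nonnegatively constrained subproblem.

```python
import numpy as np

def multi_penalty_objective(f, K, s, lam1, lam2_vec):
    """Data fit + single L1 penalty + locally weighted L2 penalty (one weight per entry)."""
    fit = 0.5 * np.sum((K @ f - s) ** 2)
    return fit + lam1 * np.sum(np.abs(f)) + 0.5 * np.sum(lam2_vec * f ** 2)

def prox_grad_step(f, K, s, lam1, lam2_vec, step):
    """Gradient step on the smooth part (fit + weighted L2), soft-thresholding for
    the L1 part, then projection onto f >= 0 (an assumption, not the paper's exact scheme)."""
    grad = K.T @ (K @ f - s) + lam2_vec * f
    f = f - step * grad
    f = np.sign(f) * np.maximum(np.abs(f) - step * lam1, 0.0)
    return np.maximum(f, 0.0)
```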

Distributed Coordinate Descent for L1-regularized Logistic Regression [chapter]

Ilya Trofimov, Alexander Genkin
2015 Communications in Computer and Information Science  
We present d-GLMNET, a new algorithm for solving logistic regression with L1-regularization in distributed settings.  ...  Solving logistic regression with L1-regularization in distributed settings is an important problem.  ...  After each pass we saved a vector β. After training, we evaluated the quality of all classifiers on the test set and counted the number of non-zero entries in β.  ... 
doi:10.1007/978-3-319-26123-2_24 fatcat:zyvciv3qgzaazcmquyrabtfrey
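Coordinate descent for L1-regularized logistic regression, which d-GLMNET distributes across machines, updates one coefficient at a time by soft-thresholding against a quadratic approximation of the logistic loss. The sketch below is a single-machine, GLMNET-style cyclic update with hypothetical names and no distributed logic, so it only illustrates the underlying per-coordinate step.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_l1_logreg(X, y, lam, n_sweeps=50):
    """Cyclic coordinate descent for L1-regularized logistic regression (y in {0, 1}).
    Each sweep builds a weighted quadratic approximation, then updates coordinates."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_sweeps):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))          # predicted probabilities
        w = np.maximum(p * (1.0 - p), 1e-6)            # curvature weights
        z = X @ beta + (y - p) / w                     # working response
        for j in range(d):
            r = z - X @ beta + X[:, j] * beta[j]       # partial residual excluding j
            num = np.sum(w * X[:, j] * r)
            den = np.sum(w * X[:, j] ** 2)
            beta[j] = soft_threshold(num, lam) / max(den, 1e-12)
    return beta
```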

Sparse Representations for Object- and Ego-Motion Estimations in Dynamic Scenes

Hirak J. Kashyap, Charless C. Fowlkes, Jeffrey L. Krichmar
2020 IEEE Transactions on Neural Networks and Learning Systems  
sparsity more effectively than L1- and L2-norm-based penalties.  ...  Disentangling the sources of visual motion in a dynamic scene during self-movement or ego motion is important for autonomous navigation and tracking.  ...  The sharp sigmoid penalty is continuous and differentiable for all input values, making it a well-suited sparsity regularizer for gradient-based optimization methods.  ... 
doi:10.1109/tnnls.2020.3006467 pmid:32687472 fatcat:vyzezhpwzbhrrjebw6so5ury2q
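The "sharp sigmoid" penalty referenced above is described only as a continuous, differentiable sparsity regularizer; its exact functional form is not given in the snippet. The block below therefore shows an assumed placeholder with the same qualitative behavior (zero at the origin, saturating towards one for nonzero inputs, differentiable everywhere), not the authors' formula.

```python
import numpy as np

def smooth_sparsity_penalty(x, k=100.0):
    """Placeholder smooth surrogate for an L0-like penalty (NOT the paper's exact
    'sharp sigmoid'): tanh(k * x^2) is 0 at x = 0, approaches 1 for nonzero x,
    and is differentiable everywhere, so it works with gradient-based optimizers."""
    return np.tanh(k * x ** 2)

x = np.array([0.0, 0.01, 0.1, 1.0])
print(smooth_sparsity_penalty(x))
```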
Showing results 1 — 15 out of 1,061 results