9,978 Hits in 5.1 sec

Low-rank features based double transformation matrices learning for image classification [article]

Yu-Hong Cai, Xiao-Jun Wu, Zhe Chen
2022 arXiv   pre-print
This paper proposes a double transformation matrices learning method based on latent low-rank feature extraction.  ...  The core idea is to use double transformation matrices for relaxation, jointly projecting the learned principal and salient features from two directions into the label space, which can share the pressure  ...  Acknowledgments This work is supported by the National Natural Science Foundation of China under Grant Nos. 61672265, U1836218, the 111 Project of Ministry  ... 
arXiv:2201.12351v2 fatcat:7lorp2b64ze2tfkhjsxpmqoewy

Fisher Discriminative Least Squares Regression for Image Classification [article]

Zhe Chen, Xiao-Jun Wu, Josef Kittler
2020 arXiv   pre-print
In order to learn a more powerful discriminative projection, as well as regression labels, we propose a Fisher regularized DLSR (FDLSR) framework by constraining the relaxed labels using the Fisher criterion  ...  FDLSR is the first attempt to integrate the Fisher discriminant criterion and ϵ-draggings technique into one unified model, because they are complementary in learning discriminative  ...  [5] proposed a low-rank discriminative least squares regression model (LRDLSR) by class-wisely imposing a low-rank constraint on the learned relaxed labels of DLSR.  ... 
arXiv:1903.07833v3 fatcat:prfe4vgnwjfc3c3ac63xrg4cam
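As an illustration of the ϵ-draggings label relaxation that the snippet above builds on, here is a minimal NumPy sketch of discriminative least squares regression with relaxed labels. The simple alternating solver and variable names are illustrative assumptions, and the Fisher-criterion regularizer that distinguishes FDLSR is omitted.

```python
import numpy as np

def dlsr_epsilon_draggings(X, Y, lam=0.1, n_iter=30):
    """Least squares regression with epsilon-draggings label relaxation.

    X: (n, d) data matrix, Y: (n, c) one-hot label matrix.
    Rough sketch only; the Fisher-criterion term of FDLSR is omitted.
    """
    n, d = X.shape
    B = np.where(Y > 0, 1.0, -1.0)      # dragging directions: +1 for true class, -1 otherwise
    M = np.zeros_like(Y)                # non-negative dragging magnitudes
    I = np.eye(d)
    for _ in range(n_iter):
        T = Y + B * M                   # relaxed (slack) label matrix
        W = np.linalg.solve(X.T @ X + lam * I, X.T @ T)  # ridge-style update of the projection
        R = X @ W - Y
        M = np.maximum(B * R, 0.0)      # closed-form non-negative update of the magnitudes
    return W

# usage: predict the class of a test sample x as argmax of x @ W over the columns
```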

Noise Robustness Low-Rank Learning Algorithm for Electroencephalogram Signal Classification

Ming Gao, Runmin Liu, Jie Mao
2021 Frontiers in Neuroscience  
To solve this problem, this paper develops a noise robustness low-rank learning (NRLRL) algorithm for EEG signal classification.  ...  NRLRL establishes a low-rank subspace to connect the original data space and label space.  ...  FUNDING This work was supported in part by the Scientific Research Program of Education Department of Hubei Province, China (D20184101), the Higher Education Reform Project of Hubei Province, China (201707  ... 
doi:10.3389/fnins.2021.797378 pmid:34899177 pmcid:PMC8652211 fatcat:qiggyijykbh7fi5y36nhnd7n6i

Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation

Yong Xu, Xiaozhao Fang, Jian Wu, Xuelong Li, David Zhang
2016 IEEE Transactions on Image Processing  
Index Terms: Source domain, target domain, low-rank and sparse constraints, knowledge transfer, subspace learning.  ...  To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative  ...  We learn a flexible linear classifier (projection) by relaxing the strict binary label matrix into a slack variable matrix, which brings the following two advantages for our model: 1) it can enlarge the  ... 
doi:10.1109/tip.2015.2510498 pmid:26701675 fatcat:fl5arjzwwjdyjb7ty3ovtb5hmm

Universal Domain Adaptation in Ordinal Regression [article]

Boris Chidlovskii, Assem Sadek, Christian Wolf
2021 arXiv   pre-print
We propose a method that complements the OR classifier with an auxiliary task of order learning, which plays the double role of discriminating between common and private instances, and expanding class  ...  labels to the private target images via ranking.  ...  We propose a deep architecture for UDA which jointly learns OR, the order, and domain-invariant features through adversarial domain discrimination.  ... 
arXiv:2106.11576v2 fatcat:hj3xmrlmifcuhk2zrcb6wtrrue

JCS: an Explainable Surface Defects Detection Method for Steel Sheet by Joint Classification and Segmentation

Shiyang Zhou, Huaiguang Liu, Ketao Cui, Zhiqiang Hao
2021 IEEE Access  
ACKNOWLEDGMENTS The authors would like to thank the editors and anonymous reviewers for their constructive suggestions to improve the manuscript.  ...  INDEX TERMS Joint classification and segmentation for image; class-specific and shared dictionary learning; double low-rank matrix decomposition; surface defects of steel sheet.  ...  It comprises a classification method based on class-specific and shared discriminative dictionary learning (CASDDL) and a segmentation method based on a double low-rank matrix decomposition  ... 
doi:10.1109/access.2021.3117736 fatcat:c7xcvzckbzhhlnz5vkjj4r3oza

Group Similarity Constraint Functional Brain Network Estimation for Mild Cognitive Impairment Classification [article]

Xin Gao, Xiaowen Xu, Weikai Li, Rui Li
2019 bioRxiv   pre-print
Unfortunately, despite its efficiency, FBN estimation still faces several challenges in accurately estimating biologically meaningful or discriminative FBNs, given the poor quality of functional magnetic resonance  ...  The experimental results illustrate that the proposed method can construct a more discriminative brain network.  ...  To alleviate this issue, in this paper, based on the tensor regularization framework, we relax the ℓ2,1-norm penalty and naturally introduce the tensor low-rank (TLR) regularizer for formulating the  ... 
doi:10.1101/734574 fatcat:sc3mpdlc7bfzpllmt6xxuoua4i

Double Descent and Other Interpolation Phenomena in GANs [article]

Lorenzo Luzi, Yehuda Dar, Richard Baraniuk
2024 arXiv   pre-print
Second, we develop a novel pseudo-supervised learning approach for GANs where the training utilizes pairs of fabricated (noise) inputs in conjunction with real output samples.  ...  First, we show that overparameterized generative models that learn distributions by minimizing a metric or f-divergence do not exhibit double descent in generalization errors; specifically, all the interpolating  ...  The research on overparameterized learning and double descent phenomena has been mostly focused on regression [2, 3, 4, 5] and classification [6, 7, 8] problems.  ... 
arXiv:2106.04003v2 fatcat:drnyblond5aqdevor5qaiglade

Ranking via Sinkhorn Propagation [article]

Ryan Prescott Adams, Richard S. Zemel
2011 arXiv   pre-print
We propose a technique for learning DSM-based ranking functions using an iterative projection operator known as Sinkhorn normalization.  ...  It is of increasing importance to develop learning methods for ranking.  ...  Acknowledgements The authors wish to thank Sreangsu Acharyya, Daniel Tarlow and Ilya Sutskever for valuable discussions. RPA is a junior fellow of the Canadian Institute for Advanced Research.  ... 
arXiv:1106.1925v2 fatcat:usmwvbtmrbderlkow5tysjtppu
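The Sinkhorn normalization used in the entry above has a very short core: alternately rescale rows and columns of a positive matrix until it is (approximately) doubly stochastic. Below is a minimal NumPy sketch under illustrative parameter choices; the paper differentiates through such iterations to learn DSM-based ranking functions, which this sketch does not attempt.

```python
import numpy as np

def sinkhorn_normalize(A, n_iter=50, eps=1e-8):
    """Push a positive square matrix toward the doubly stochastic set
    by alternating row and column normalization (Sinkhorn iterations)."""
    P = np.asarray(A, dtype=float)
    for _ in range(n_iter):
        P = P / (P.sum(axis=1, keepdims=True) + eps)   # normalize rows
        P = P / (P.sum(axis=0, keepdims=True) + eps)   # normalize columns
    return P

# example: turn raw pairwise scores into a doubly stochastic "soft permutation"
S = np.exp(np.random.randn(4, 4))
P = sinkhorn_normalize(S)
```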

A Convex Relaxation for Weakly Supervised Classifiers [article]

Armand Joulin, Francis Bach (INRIA - Ecole Normale Superieure)
2012 arXiv   pre-print
Empirically, our method compares favorably to standard ones on different datasets for multiple instance learning and semi-supervised learning as well as on clustering tasks.  ...  To avoid this problem, we propose a cost function based on a convex relaxation of the soft-max loss.  ...  This paper was partially supported by the European Research Council (SIERRA and VIDEOWORLD projects).  ... 
arXiv:1206.6413v1 fatcat:jroc2kovr5e7zdueu6ayvjitfa

Spectral Collaborative Representation based Classification for hand gestures recognition on electromyography signals

Ali Boyali, Naohisa Hashimoto
2016 Biomedical Signal Processing and Control  
The CRC-based methods do not require a large number of training samples for efficient pattern classification.  ...  The worst recognition result, which is still the best reported in the literature, is 97.3% across the four sets of experiments for each hand gesture.  ...  Acknowledgments The study is supported by the Japan Society for the Promotion of Science (JSPS) fellowship program and the KAKENHI Grant (Grant Number 15F13739).  ... 
doi:10.1016/j.bspc.2015.09.001 fatcat:v7y24detd5dl7at3xvvhivbf6a
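The collaborative representation classifier referenced above admits a closed-form coding step, which is why it works with few training samples. The following is a rough CRC sketch (the spectral-domain transform of the EMG signals described in the paper is not included, and the regularization parameter is an illustrative choice).

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative Representation based Classification (CRC) sketch.

    D: (d, n) dictionary whose columns are training samples,
    labels: (n,) class label per column, y: (d,) test sample.
    """
    labels = np.asarray(labels)
    n = D.shape[1]
    # one shared coding vector over all training samples (closed form, ridge-like)
    rho = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        # class-wise regularized reconstruction residual
        res = np.linalg.norm(y - D[:, idx] @ rho[idx]) / (np.linalg.norm(rho[idx]) + 1e-12)
        if res < best_res:
            best, best_res = c, res
    return best
```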

Novel Approach for Emotion Detection and Stabilizing Mental State by Using Machine Learning Techniques

Nisha Vishnupant Kimmatkar, B. Vijaya Babu
2021 Computers  
Band power is ranked as the most prominent feature. A multi-model classifier approach is used to classify emotions.  ...  This research study presents a novel approach for detailed analysis of brain EEG signals to detect emotions and stabilize mental state.  ...  A linear regression model is used for feature ranking. Band power and power spectral density ranked highest.  ... 
doi:10.3390/computers10030037 fatcat:vcjmyqwyffcpvkxnvxtrcjhtji
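Since band power is named as the top-ranked feature in the entry above, here is a rough sketch of how such a feature is typically computed from an EEG channel via the Welch power spectral density. The band edges, sampling rate, and window length are illustrative assumptions, not values taken from the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Average power of a single EEG channel inside a frequency band,
    estimated from the Welch power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    low, high = band
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band

# example: alpha-band (8-13 Hz) power of a 10 s signal sampled at 128 Hz
x = np.random.randn(1280)
alpha = band_power(x, fs=128, band=(8, 13))
```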

Facial Gender Recognition via Low-rank and Collaborative Representation in An Unconstrained Environment

2017 KSII Transactions on Internet and Information Systems  
In this paper, a method via low-rank and collaborative representation is proposed for facial gender recognition in the wild.  ...  The proposed method combines low-rank and collaborative representation into an organic whole to solve the task of facial gender recognition under unconstrained environments.  ...  Face images processed by low-rank decomposition are denoised and well aligned, and the dictionary generated by these low-rank matrices has more robust discriminative power.  ... 
doi:10.3837/tiis.2017.09.018 fatcat:cid2t76cvbf5zbnximjsq5hmc4
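The low-rank decomposition step mentioned above (separating a low-rank, denoised part from sparse corruption) can be sketched with a simplified alternating-thresholding scheme. This is not the exact solver used in the paper; it is a rough low-rank plus sparse decomposition with illustrative parameters.

```python
import numpy as np

def low_rank_plus_sparse(X, lam=None, mu=1.0, n_iter=100):
    """Decompose X ≈ L + S with L low-rank and S sparse by alternating
    singular-value thresholding (for L) and soft thresholding (for S)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(X.shape))       # common default weight for the sparse term
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # low-rank update: shrink the singular values of the residual
        U, sig, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: soft-threshold the remaining residual entrywise
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
    return L, S
```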

Manifold elastic net: a unified framework for sparse dimension reduction

Tianyi Zhou, Dacheng Tao, Xindong Wu
2010 Data mining and knowledge discovery  
In particular, MEN has the following advantages for subsequent classification: 1) the local geometry of samples is well preserved for low dimensional data representation, 2) both the margin maximization  ...  and the classification error minimization are considered for sparse projection calculation, 3) the projection matrix of MEN improves the parsimony in computation, 4) the elastic net penalty reduces the  ...  However, existing sparse learning algorithms are designed for linear regression problems and the intrinsic structure of the data is usually ignored.  ... 
doi:10.1007/s10618-010-0182-x fatcat:nf22afafy5cwxf6g7m2f3sbn54
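The elastic net penalty that MEN builds on combines L1 and L2 regularization to produce sparse projection directions. Below is a rough sketch of that penalty in isolation, fitting one sparse direction as a regularized regression; the manifold-preservation and margin terms of MEN itself are omitted, and the data and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# toy data: 200 samples with 50 features, and a toy regression target
X = np.random.randn(200, 50)
t = np.sign(np.random.randn(200))

# l1_ratio balances the lasso (sparsity) and ridge (grouping) parts of the penalty
enet = ElasticNet(alpha=0.1, l1_ratio=0.5)
enet.fit(X, t)

w = enet.coef_                                  # one sparse projection direction
print(np.count_nonzero(w), "nonzero loadings out of", w.size)
```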

Dimensionality reduction, regularization, and generalization in overparameterized regressions [article]

Ningyuan Huang, David W. Hogg, Soledad Villar
2021 arXiv   pre-print
This realization brought back the study of linear models for regression, including ordinary least squares (OLS), which, like deep learning, shows a "double-descent" behavior: (1) The risk (expected out-of-sample  ...  In this work, we show that for some data models it can also be avoided with a PCA-based dimensionality reduction (PCA-OLS, also known as principal component regression).  ...  Both linear regressions and more complex machine-learning methods typically show a "double-descent" phenomenon, recently identified by Belkin et al.  ... 
arXiv:2011.11477v2 fatcat:bqxhmteverdjdefvfzgtm2cqkm
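The PCA-OLS (principal component regression) estimator named in the entry above is simple to state: project onto the top-k principal components of the training data, then run ordinary least squares in that reduced space. Here is a minimal NumPy sketch; the function name and the centering/intercept handling are illustrative choices, not the paper's code.

```python
import numpy as np

def pca_ols(X, y, k):
    """Principal component regression: OLS on the top-k principal components.

    Returns coefficients expressed in the original feature space plus an intercept.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean                                  # center the design matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V_k = Vt[:k].T                                   # (d, k) top-k principal directions
    Z = Xc @ V_k                                     # reduced-dimension design matrix
    beta_k, *_ = np.linalg.lstsq(Z, y - y_mean, rcond=None)
    return V_k @ beta_k, x_mean, y_mean

# prediction: y_hat = (X_test - x_mean) @ coef + y_mean
```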
Showing results 1-15 out of 9,978 results