15,224 Hits in 5.2 sec

An Improved Conjugate Gradient Scheme to the Solution of Least Squares SVM

W. Chu, C.J. Ong, S.S. Keerthi
2005 IEEE Transactions on Neural Networks  
The Least Squares Support Vector Machine (LS-SVM) formulation corresponds to the solution of a linear system of equations.  ...  Compared with the existing algorithm for LS-SVM, our approach is about twice as efficient. Numerical results using the proposed method are provided for comparisons with other existing algorithms.  ...  Acknowledgments Wei Chu gratefully acknowledges the financial support provided by the National University of  ... 
doi:10.1109/tnn.2004.841785 pmid:15787157 fatcat:vrhcuj2qzbglnhrkdyfthayx3q
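The linear-system view above lends itself to Krylov methods. Below is a minimal sketch, not the paper's improved scheme: it solves the regularized RBF-kernel subsystem (K + I/γ)α = y of an LS-SVM with SciPy's stock conjugate gradient; the data, kernel width, and γ are illustrative.

```python
# Minimal sketch (not the paper's improved scheme): solve the regularized
# kernel subsystem (K + I/gamma) alpha = y of an LS-SVM with plain CG.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # toy inputs
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))    # toy +/-1 labels

gamma, width = 10.0, 1.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * width ** 2))                   # RBF Gram matrix
A = K + np.eye(len(X)) / gamma                       # symmetric positive definite

alpha, info = cg(A, y)                               # info == 0 -> converged
print("CG converged:", info == 0)
```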

A least squares formulation for a class of generalized eigenvalue problems in machine learning

Liang Sun, Shuiwang Ji, Jieping Ye
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
In addition, the least squares formulation leads to efficient and scalable implementations based on the iterative conjugate gradient type algorithms.  ...  In this paper, we show that under a mild condition, a class of generalized eigenvalue problems in machine learning can be formulated as a least squares problem.  ...  Acknowledgments This work was supported by NSF IIS-0612069, IIS-0812551, CCF-0811790, and NGA HM1582-08-1-0016.  ... 
doi:10.1145/1553374.1553499 dblp:conf/icml/SunJY09 fatcat:crm2jkkfwnc5zgzy4dgjqf2q24
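A minimal sketch of the computational payoff claimed above: once a problem is posed as least squares, CG-type solvers such as LSQR need only matrix-vector products with the data matrix. The random data and target vector are illustrative stand-ins, not the paper's class-indicator construction.

```python
# Sketch: a least squares problem min ||X w - t||^2 solved by LSQR, a
# CG-type iterative method needing only products with X and X^T.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))    # data matrix (could be sparse or implicit)
t = rng.normal(size=1000)          # stand-in target vector

w = lsqr(X, t)[0]                  # no dense factorization required
print("residual norm:", np.linalg.norm(X @ w - t))
```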

Day-Ahead Load Demand Forecasting in Urban Community Cluster Microgrids Using Machine Learning Methods

Sivakavi Naga Venkata Bramareswara Rao, Venkata Pavan Kumar Yellapragada, Kottala Padma, Darsy John Pradeep, Challa Pradeep Reddy, Mohammad Amir, Shady S. Refaat
2022 Energies  
Thus, to identify the best load forecasting method in cluster microgrids, this article implements a variety of machine learning algorithms, including linear regression (quadratic), support vector machines  ...  In addition, three distinct optimization techniques are used to find the optimum ANN training algorithm: Levenberg–Marquardt, Bayesian Regularization, and Scaled Conjugate Gradient.  ...  Acknowledgments: This work was supported by the Qatar National Library. Open access funding provided by the Qatar National Library.  ... 
doi:10.3390/en15176124 fatcat:x6x4lj7ttfc5tf6hwchdcccdrm
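For flavor, a toy sketch of the kind of comparison described: quadratic linear regression versus an RBF support vector machine on a synthetic day-ahead load series. Features, data, and hyperparameters are invented for illustration.

```python
# Compare two forecasting models on a made-up hourly load series; the last
# 24 hours serve as the "day-ahead" test window.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

hours = np.arange(24 * 30)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) \
       + np.random.default_rng(0).normal(0, 2, hours.size)
X = np.column_stack([hours % 24, hours % (24 * 7)])  # hour-of-day, hour-of-week
X_train, y_train = X[:-24], load[:-24]
X_test, y_test = X[-24:], load[-24:]

for name, model in [("quadratic LR", make_pipeline(PolynomialFeatures(2), LinearRegression())),
                    ("SVM (RBF)", SVR(kernel="rbf", C=100))]:
    pred = model.fit(X_train, y_train).predict(X_test)
    print(name, "MAE:", np.mean(np.abs(pred - y_test)))
```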

A hardware acceleration technique for gradient descent and conjugate gradient

David Kesler, Biplab Deka, Rakesh Kumar
2011 2011 IEEE 9th Symposium on Application Specific Processors (SASP)  
We show that the proposed accelerator can provide significant speedups for iterative versions of several applications and that for some applications such as least squares, it can substantially improve  ...  To mitigate the performance loss from robustification, we present the design of a hardware accelerator and corresponding software support that accelerate gradient descent and conjugate gradient based iterative  ...  ACKNOWLEDGEMENTS The authors would like to thank Joseph Sloan and the anonymous referees for their valuable feedback. This work was supported in part by NSF and GSRC.  ... 
doi:10.1109/sasp.2011.5941086 dblp:conf/sasp/KeslerDK11 fatcat:o4ahplwirnfoxmvnoiabza2m2a
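A minimal sketch of the iterative kernel such an accelerator targets: gradient descent for least squares, whose inner loop is dominated by the matrix-vector products that hardware can stream. Sizes and step size are illustrative.

```python
# Gradient descent for least squares min 0.5*||A x - b||^2: each iteration
# is two matrix-vector products, the workload such accelerators stream.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 100))
b = rng.normal(size=500)

x = np.zeros(100)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # below 2/L, so the iteration converges
for _ in range(1000):
    x -= step * (A.T @ (A @ x - b))      # gradient of 0.5*||Ax - b||^2
print("residual norm:", np.linalg.norm(A @ x - b))
```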

Big data analytics with small footprint

John Canny, Huasha Zhao
2013 Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '13  
By co-designing all of these elements we achieve single-machine performance levels that equal or exceed reported cluster implementations for common benchmark problems.  ...  We present several benchmark problems to show how the above elements combine to yield multiple orders-of-magnitude improvements for each problem.  ...  Solving M_i γ_i = β C_i using an iterative method (conjugate gradient here) requires only black-box evaluation of M_i γ̂_i for various query vectors γ̂_i.  ... 
doi:10.1145/2487575.2487677 dblp:conf/kdd/CannyZ13 fatcat:qplx3uzxcnglve3zxxe3q53o6i
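The black-box pattern in the last snippet is easy to demonstrate: conjugate gradient only needs a routine that applies M_i to a vector, never the matrix itself. The operator below is a stand-in (a shifted 1-D Laplacian stencil), and β and C_i are illustrative.

```python
# CG touches M only through a matvec callback; M itself never materializes.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n, beta = 1000, 0.5
C = np.random.default_rng(0).normal(size=n)

def apply_M(v):
    # stand-in operator: shifted 1-D Laplacian stencil, symmetric positive definite
    out = 2.1 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

M = LinearOperator((n, n), matvec=apply_M)
gamma_vec, info = cg(M, beta * C)
print("CG converged:", info == 0)
```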

Robust linear and support vector regression

O.L. Mangasarian, D.R. Musicant
2000 IEEE Transactions on Pattern Analysis and Machine Intelligence  
solvable simple convex quadratic program for both linear and nonlinear support vector estimators.  ...  Numerical test comparisons with these algorithms indicate the computational effectiveness of the new quadratic programming model for both linear and nonlinear support vector problems.  ...  ACKNOWLEDGMENTS The research described in this Data Mining Institute Report 99-09, November 1999, was supported by the US National Science Foundation grants CCR-9729842 and CDA-9623632, by Air Force Office  ... 
doi:10.1109/34.877518 fatcat:qsjjtq3bmbctdpytypeqe33q3a
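As a hedged illustration of robust linear regression in the Huber sense (the paper derives a single convex quadratic program; scikit-learn's HuberRegressor serves as an off-the-shelf stand-in here):

```python
# Huber-style robust regression vs. ordinary least squares under outliers.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:10] += 50 * rng.normal(size=10)     # gross outliers

print("OLS coef:  ", LinearRegression().fit(X, y).coef_)
print("Huber coef:", HuberRegressor().fit(X, y).coef_)
```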

High Performance Scientific Computing Using FPGAs with IEEE Floating Point and Logarithmic Arithmetic for Lattice QCD

Owen Callanan, David Gregg, Andy Nisbet, Mike Peardon
2006 2006 International Conference on Field Programmable Logic and Applications  
The recent development of large FPGAs along with the availability of a variety of floating point cores have made it possible to implement high-performance matrix and vector kernel operations on FPGAs.  ...  In this paper we seek to evaluate the performance of FPGAs for real scientific computations by implementing Lattice QCD, one of the classic scientific computing problems.  ...  Double precision conjugate gradient solver The double precision conjugate gradient solver requires double precision versions of the dot-product and vector scale-add operations.  ... 
doi:10.1109/fpl.2006.311191 dblp:conf/fpl/CallananGNP06 fatcat:jmrr2vbuxrelzly423zzrolnoi
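The snippet singles out dot products and vector scale-adds because a textbook conjugate gradient iteration is built almost entirely from those two kernels plus one matrix-vector product per step, as this sketch makes explicit (toy SPD matrix, fixed iteration count):

```python
# Conjugate gradient written so the dot-product and axpy kernels stand out.
import numpy as np

def conjgrad(matvec, b, iters=50):
    x = np.zeros_like(b)
    r = b.copy()                     # residual b - A x  (x starts at 0)
    p = r.copy()
    rr = r @ r                       # dot-product kernel
    for _ in range(iters):
        Ap = matvec(p)               # the one matvec per iteration
        a = rr / (p @ Ap)            # dot-product kernel
        x += a * p                   # axpy: vector scale-add
        r -= a * Ap                  # axpy
        rr_new = r @ r               # dot-product kernel
        p = r + (rr_new / rr) * p    # axpy
        rr = rr_new
    return x

A = np.diag(np.arange(1.0, 101.0))   # toy SPD matrix
x = conjgrad(lambda v: A @ v, np.ones(100))
print("residual norm:", np.linalg.norm(A @ x - np.ones(100)))
```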

GPU-based power flow analysis with Chebyshev preconditioner and conjugate gradient method

Xue Li, Fangxing Li
2014 Electric power systems research  
This work implemented a polynomial preconditioner, the Chebyshev preconditioner, on a graphics processing unit (GPU), and integrated it with a GPU-based conjugate gradient solver.  ...  Results show that the GPU-based Chebyshev preconditioner can reach around a 46× speedup for the largest test system, and conjugate gradient can gain more than a 4× speedup.  ...  Acknowledgement This work was supported in part by US NSF grant ECCS-1128381.  ... 
doi:10.1016/j.epsr.2014.05.005 fatcat:ekb7abwty5ablj4s2s4y5we7c4
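A minimal sketch of why polynomial preconditioning suits GPUs: the preconditioner approximates A⁻¹ by a low-degree polynomial in A, so applying it needs only matrix-vector products. A truncated Neumann series stands in below for the paper's Chebyshev polynomial, on a toy SPD system.

```python
# Neumann-series polynomial preconditioner (degree 2) inside SciPy's CG;
# the paper uses a Chebyshev polynomial, which plays the same role.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

n = 2000
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

D_inv = 1.0 / A.diagonal()

def precond(v):
    # z = (I + N + N^2) D^{-1} v with N = I - D^{-1} A  approximates A^{-1} v
    y = D_inv * v
    z = y.copy()
    for _ in range(2):
        z = y + z - D_inv * (A @ z)
    return z

x, info = cg(A, b, M=LinearOperator((n, n), matvec=precond))
print("PCG converged:", info == 0)
```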

Kernel Conjugate Gradient for Fast Kernel Machines

Nathan D. Ratliff, J. Andrew Bagnell
2007 International Joint Conference on Artificial Intelligence  
We propose a novel variant of the conjugate gradient algorithm, Kernel Conjugate Gradient (KCG), designed to speed up learning for kernel machines with differentiable loss functions.  ...  We establish an upper bound on the number of iterations for KCG that indicates it should require less than the square root of the number of iterations that standard conjugate gradient requires.  ...  Acknowledgements The authors gratefully acknowledge the partial support of this research by the DARPA Learning for Locomotion contract.  ... 
dblp:conf/ijcai/RatliffB07 fatcat:hpiythdstbdmfn6phde6p4fnea
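A simplified rendering of the idea, under the assumption that the loss is regularized kernel least squares: run CG on (K + λI)α = y, but measure inner products in the RKHS metric ⟨u, v⟩ = uᵀKv rather than the Euclidean one. See the paper for the general differentiable-loss algorithm; data and hyperparameters here are illustrative.

```python
# CG for (K + lam*I) alpha = y with RKHS inner products u^T K v.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.sin(X[:, 0])
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))  # RBF Gram matrix
lam = 1e-3
A = K + lam * np.eye(len(X))

inner = lambda u, v: u @ (K @ v)    # RKHS metric on coefficient vectors
alpha = np.zeros(len(X))
r = y.copy()                        # residual, since alpha starts at 0
p = r.copy()
rr = inner(r, r)
for _ in range(100):
    Ap = A @ p
    a = rr / inner(p, Ap)
    alpha += a * p
    r -= a * Ap
    rr_new = inner(r, r)
    p = r + (rr_new / rr) * p
    rr = rr_new
print("residual norm:", np.linalg.norm(A @ alpha - y))
```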

Context-Aware Energy Enhancements for Smart Mobile Devices

Brad K. Donohoo, Chris Ohlsen, Sudeep Pasricha, Yi Xiang, Charles Anderson
2014 IEEE Transactions on Mobile Computing  
learning algorithms in Section 4, including a list of pros and cons for each algorithm; (iv) Analysis and results for a new machine learning technique based on support vector machines; (v) Further description  ...  overhead for each of the algorithms in Section 6.4, in which the algorithms are run on an actual mobile device.  ...  Support Vector Machines Support Vector Machines (SVMs) have become quite popular in recent years.  ... 
doi:10.1109/tmc.2013.94 fatcat:mswk3fx3svbhzitu2z4u2jv2xi

Kernel learning at the first level of inference

Gavin C. Cawley, Nicola L.C. Talbot
2014 Neural Networks  
In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation.  ...  Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds.  ...  This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/F010508/1 -Advancing Machine Learning Methodology for New Classes of Prediction Problems.  ... 
doi:10.1016/j.neunet.2014.01.011 pmid:24561452 fatcat:iszwkmrzabg57eksbfcmyruz7m
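One concrete form of model selection via a criterion for this family: pick the RBF width by minimizing a closed-form leave-one-out statistic. The sketch below uses kernel ridge regression (an LS-SVM without the bias term), for which the LOO residuals are e_i = α_i / [H⁻¹]_ii with H = K + λI; data and λ are illustrative.

```python
# Select the RBF kernel width by the closed-form leave-one-out PRESS.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=150)
lam = 1e-2

def loo_press(width):
    K = np.exp(-((X - X.T) ** 2) / (2 * width ** 2))
    H_inv = np.linalg.inv(K + lam * np.eye(len(X)))
    alpha = H_inv @ y
    return np.sum((alpha / np.diag(H_inv)) ** 2)   # sum of squared LOO residuals

widths = np.logspace(-2, 1, 20)
best = widths[np.argmin([loo_press(w) for w in widths])]
print("selected kernel width:", best)
```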

Efficient Kernel Machines Using the Improved Fast Gauss Transform

Changjiang Yang, Ramani Duraiswami, Larry S. Davis
2004 Neural Information Processing Systems  
The computation and memory required for kernel machines with N training samples are at least O(N²).  ...  We present an approximation technique based on the improved fast Gauss transform to reduce the computation to O(N).  ...  Nail Gumerov for many discussions. We also gratefully acknowledge support of NSF awards 9987944, 0086075 and 0219681.  ... 
dblp:conf/nips/YangDD04 fatcat:ywidj35hbreexokmujoamxlsaq
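The quantity being accelerated is the discrete Gauss transform G(y_j) = Σ_i q_i exp(−‖y_j − x_i‖²/h²). The direct evaluation below costs O(NM); the improved fast Gauss transform approximates it to a requested accuracy in linear time. Sizes and bandwidth are illustrative.

```python
# Naive O(N*M) discrete Gauss transform: the baseline the IFGT replaces.
import numpy as np

rng = np.random.default_rng(0)
sources = rng.normal(size=(1000, 3))   # x_i
targets = rng.normal(size=(500, 3))    # y_j
q = rng.normal(size=1000)              # source weights
h = 1.0

d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
G = np.exp(-d2 / h ** 2) @ q           # all pairwise interactions, explicitly
print(G[:5])
```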

An Iterative Solver Benchmark

Jack Dongarra, Victor Eijkhout, Henk van der Vorst
2001 Scientific Programming  
We present a benchmark of iterative solvers for sparse matrices.  ...  The implementer then has the freedom to improve parallel efficiency by optimising the implementation for a particular ordering.  ...  The Conjugate and BiConjugate gradient methods (see Fig. 1) involve, outside the matrix-vector product and preconditioner application, only simple vector operations.  ... 
doi:10.1155/2001/527931 fatcat:uqlitne4gnfipab6efm7wrzvlm
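A toy rendition of such a harness: time two SciPy Krylov solvers on the same sparse system. Real benchmarks control orderings, preconditioners, and stopping criteria far more carefully; this only shows the shape.

```python
# Time two Krylov solvers on one sparse SPD system (toy harness only).
import time
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, bicgstab

n = 100_000
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

for name, solver in [("CG", cg), ("BiCGSTAB", bicgstab)]:
    t0 = time.perf_counter()
    x, info = solver(A, b)
    print(f"{name}: {time.perf_counter() - t0:.3f} s, converged={info == 0}")
```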

MapReduce for Parallel Reinforcement Learning [chapter]

Yuxi Li, Dale Schuurmans
2012 Lecture Notes in Computer Science  
Furthermore, we design parallel reinforcement learning algorithms to deal with large scale problems using linear function approximation, including model-based projection, least squares policy iteration, temporal difference learning, and the recent gradient temporal difference learning algorithms.  ... 
doi:10.1007/978-3-642-29946-9_30 fatcat:jyoovu2waffuzd5v3bxqyjdffm
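A sketch of one listed building block, TD(0) with linear function approximation, on a toy random-walk chain; the paper's MapReduce versions parallelize exactly these per-sample updates. Environment, features, and step size are invented.

```python
# Linear TD(0) on a 10-state random walk; one-hot features make it tabular.
import numpy as np

n_states, discount, step = 10, 0.95, 0.05
phi = np.eye(n_states)               # one-hot features (tabular special case)
w = np.zeros(n_states)
rng = np.random.default_rng(0)

s = 0
for _ in range(20_000):
    s_next = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    td_error = reward + discount * phi[s_next] @ w - phi[s] @ w
    w += step * td_error * phi[s]    # gradient-style linear TD(0) update
    s = 0 if s_next == n_states - 1 else s_next

print("estimated state values:", np.round(w, 2))
```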

Artificial Intelligence Based Methods for Asphaltenes Adsorption by Nanocomposites: Application of Group Method of Data Handling, Least Squares Support Vector Machine, and Artificial Neural Networks

Mohammad Sadegh Mazloom, Farzaneh Rezaei, Abdolhossein Hemmati-Sarapardeh, Maen M. Husein, Sohrab Zendehboudi, Amin Bemani
2020 Nanomaterials  
In this study, three efficient artificial intelligence models, including group method of data handling (GMDH), least squares support vector machine (LSSVM), and artificial neural network (ANN), are proposed  ...  The models are also optimized using nine optimization techniques, namely coupled simulated annealing (CSA), genetic algorithm (GA), Bayesian regularization (BR), scaled conjugate gradient (SCG), ant colony  ...  The modified version of the SVM approach, namely the least squares support vector machine (LSSVM), uses a least-squares principle to obtain the minimum structural risk [65] [66] [67].  ... 
doi:10.3390/nano10050890 pmid:32384755 pmcid:PMC7279394 fatcat:ymbbgy464zfmrn45a3b3ixqxti
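The LSSVM formulation the last snippet refers to reduces training to one bordered linear system, sketched here for a toy regression task (the descriptors, kernel, and γ are illustrative; the paper tunes such hyperparameters with the listed metaheuristics):

```python
# LSSVM regression: solve  [0, 1^T; 1, K + I/gamma] [b; alpha] = [0; y].
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 4))   # stand-in adsorption descriptors
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.05 * rng.normal(size=100)

gamma, width = 100.0, 0.5
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * width ** 2))     # RBF Gram matrix

n = len(X)
A = np.zeros((n + 1, n + 1))           # bordered LSSVM system matrix
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
bias, alpha = sol[0], sol[1:]
print("train RMSE:", np.sqrt(np.mean((K @ alpha + bias - y) ** 2)))
```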
« Previous Showing results 1 — 15 out of 15,224 results