5,455 Hits in 4.3 sec

Hierarchical Mixtures of Experts and the EM Algorithm

Michael I. Jordan, Robert A. Jacobs
1994 Neural Computation  
The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's).  ...  Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture.  ...  Acknowledgements: We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato, and Daniel Wolpert for helpful comments on the manuscript.  ...
doi:10.1162/neco.1994.6.2.181 fatcat:clexcziqrrdbjae5tezs5mvznm
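The snippet above describes the core computation: a softmax gating network mixes generalized-linear experts, and EM alternates between computing posterior responsibilities and refitting the gate and the experts as weighted GLIM problems. The sketch below illustrates that idea for a flat (single-level) mixture of linear-Gaussian experts; it is a minimal illustration, not the authors' algorithm for the full hierarchical case, and all function and variable names are ours.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def em_step(X, y, V, W, sigma2, gate_lr=0.1, gate_iters=10):
    """One EM pass for a flat mixture of K linear-Gaussian experts.

    X : (n, d) inputs, y : (n,) targets
    V : (d, K) gating weights (softmax gate), W : (d, K) expert weights
    sigma2 : (K,) expert noise variances
    """
    n, d = X.shape
    K = V.shape[1]

    # E-step: posterior responsibility h[i, k] of expert k for example i.
    gate = softmax(X @ V)                                   # mixing coefficients g_k(x_i)
    mu = X @ W                                              # expert means
    lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    h = gate * lik
    h /= h.sum(axis=1, keepdims=True)

    # M-step (experts): weighted least squares, one expert at a time.
    for k in range(K):
        Xw = X * h[:, k:k + 1]
        W[:, k] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(d), Xw.T @ y)
        resid = y - X @ W[:, k]
        sigma2[k] = (h[:, k] * resid ** 2).sum() / h[:, k].sum()

    # M-step (gate): a few gradient steps of responsibility-weighted
    # multinomial regression (a stand-in for an IRLS fit of the GLIM gate).
    for _ in range(gate_iters):
        gate = softmax(X @ V)
        V += (gate_lr / n) * X.T @ (h - gate)

    return V, W, sigma2
```

In the hierarchical architecture the same E-step is applied recursively, with responsibilities multiplied down the tree of gating networks before the weighted fits are performed.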

Hierarchical Mixtures of Experts and the EM Algorithm [chapter]

M. I. Jordan, R. A. Jacobs
1994 ICANN '94  
The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's).  ...  Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture.  ...  Acknowledgements: We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato, and Daniel Wolpert for helpful comments on the manuscript.  ...
doi:10.1007/978-1-4471-2097-1_113 fatcat:fmwk7s7lqbgzda5uq3ewqqnyfy

Hierarchical mixtures of experts and the EM algorithm

M.I. Jordan, R.A. Jacobs
Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan)  
The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's).  ...  Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture.  ...  Acknowledgements: We want to thank Geoffrey Hinton, Tony Robinson, Mitsuo Kawato, and Daniel Wolpert for helpful comments on the manuscript.  ...
doi:10.1109/ijcnn.1993.716791 fatcat:cn6xnhjaqrcc5dvkapc6huyh4a

Normalized Gaussian Network Based on Variational Bayes Inference and Hierarchical Model Selection

Junichiro YOSHIMOTO, Shin ISHII, Masa-aki SATO
2003 Transactions of the Society of Instrument and Control Engineers  
We introduce a hierarchical prior distribution over the model parameters, and the NGnet is trained by variational Bayes (VB) inference.  ...  The performance of our method is evaluated using function approximation and nonlinear dynamical system identification problems. Our method achieved better performance than existing methods.  ...  Thesis, Department of Engineering, University of Cambridge (1997) 18) M.I. Jordan and R.A. Jacobs: Hierarchical mixtures of experts and the EM algorithm,  ...
doi:10.9746/sicetr1965.39.503 fatcat:bwsnt6at3vdclij6xktdugvtoy
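For context on the model named in this entry: a normalized Gaussian network blends local linear regressions with Gaussian activations that are normalized to sum to one. A forward-pass sketch under that reading is shown below; the isotropic Gaussians and all names are our simplifications, and the paper's contribution, the variational-Bayes training and hierarchical model selection, is not reproduced here.

```python
import numpy as np

def ngnet_predict(X, centers, widths, W, b):
    """Normalized Gaussian network: y(x) = sum_i [G_i(x) / sum_j G_j(x)] (W_i x + b_i).

    X       : (n, d) inputs
    centers : (M, d) Gaussian centers
    widths  : (M,)   isotropic Gaussian scales (a simplification for illustration)
    W       : (M, o, d) local linear weights, b : (M, o) local biases
    """
    # Unnormalized Gaussian activations G_i(x), then normalize across units.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)     # (n, M)
    G = np.exp(-0.5 * d2 / widths ** 2)
    N = G / G.sum(axis=1, keepdims=True)
    # Local linear predictions, blended by the normalized activations.
    local = np.einsum('mod,nd->nmo', W, X) + b[None, :, :]        # (n, M, o)
    return (N[:, :, None] * local).sum(axis=1)                    # (n, o)
```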

Page 183 of Neural Computation Vol. 6, Issue 2 [page]

1994 Neural Computation  
structured approach to estimation that is reminiscent of CART, MARS, and ID3. The remainder of the paper proceeds as follows.  ...  We first introduce the hierarchical mixture-of-experts architecture and present the likelihood function for the architecture.  ...

Page 151 of Neural Computation Vol. 8, Issue 1 [page]

1996 Neural Computation  
On the convergence properties of the EM algorithm. Ann. Stat. 11, 95-103. Xu, L., and Jordan, M. I. 1993a. Unsupervised learning by EM algorithm based on finite mixture of Gaussians. Proc.  ...  Theoretical and Experimental Studies of the EM Algorithm for Unsupervised Learning Based on Finite Gaussian Mixtures. MIT Computational Cognitive Science, Tech.  ... 

Non-Normal Mixtures of Experts [article]

Faicel Chamroukhi
2015 arXiv   pre-print
We develop dedicated expectation-maximization (EM) and expectation conditional maximization (ECM) algorithms to estimate the parameters of the proposed models by monotonically maximizing the observed data  ...  Mixture of Experts (MoE) is a popular framework for modeling heterogeneity in data for regression, classification and clustering.  ...  The EM algorithms are indeed very popular and successful estimation algorithms for mixture models in general and for mixture of experts in particular.  ... 
arXiv:1506.06707v2 fatcat:5vz7u462wnbrdagdajfsaeqis4
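The ECM variant mentioned in this abstract replaces the single M-step of EM with a sequence of conditional maximizations, each updating one block of parameters with the others held fixed while still increasing the observed-data log-likelihood. A schematic of that loop, with placeholder parameter blocks rather than the paper's exact updates:

```python
def ecm_fit(params, e_step, cm_steps, n_iter=100):
    """Generic ECM loop: an E-step followed by conditional M-steps in sequence.

    params   : dict of parameter blocks, e.g. {'gate': ..., 'experts': ..., 'scales': ...}
    e_step   : callable(params) -> responsibilities / sufficient statistics
    cm_steps : list of callables(params, stats) -> params, each maximizing the
               expected complete-data log-likelihood over one block only
    """
    for _ in range(n_iter):
        stats = e_step(params)
        for cm in cm_steps:              # each CM-step holds the other blocks fixed
            params = cm(params, stats)
    return params
```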

Hierarchical Routing Mixture of Experts [article]

Wenbo Zhao, Yang Gao, Shahan Ali Memon, Bhiksha Raj, Rita Singh
2019 arXiv   pre-print
Further, we develop a probabilistic framework for the HRME model, and propose a recursive Expectation-Maximization (EM) based algorithm to learn both the tree structure and the expert models.  ...  Addressing these problems, we propose a binary tree-structured hierarchical routing mixture of experts (HRME) model that has classifiers as non-leaf node experts and simple regression models as leaf node  ...  Hierarchical Routing Mixture of Experts In this section, we present the specifications of the HRME model, formulate the optimization objective, and develop the optimization algorithm.  ... 
arXiv:1903.07756v1 fatcat:xc32dajwnfahtd55vgf767z4xq
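As described in the snippet, HRME is a binary tree whose internal nodes carry classifiers that route an input and whose leaves carry simple regressors. A minimal hard-routing prediction pass consistent with that description might look like the following; the node layout and field names are hypothetical, and the paper's recursive EM procedure for learning the tree and the experts is not shown.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Internal node: 'router' sends x left (0) or right (1); leaf node: 'expert' predicts.
    router: Optional[object] = None      # any classifier with predict(x) -> 0 or 1
    expert: Optional[object] = None      # any regressor with predict(x) -> float
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def predict(node: Node, x):
    """Hard routing: follow router decisions down the binary tree to a leaf expert."""
    while node.expert is None:
        node = node.left if node.router.predict(x) == 0 else node.right
    return node.expert.predict(x)
```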

Constructive Algorithms for Hierarchical Mixtures of Experts

Steve R. Waterhouse, Anthony J. Robinson
1995 Neural Information Processing Systems  
We present two additions to the hierarchical mixture of experts (HME) architecture.  ...  We demonstrate results for the growing and path pruning algorithms which show significant speed ups and more efficient use of parameters over the standard fixed structure in discriminating between two  ...  CLASSIFICATION USING HIERARCHICAL MIXTURES OF EXPERTS The mixture of experts, shown in Figure 1, consists of a set of "experts" which perform local function approximation.  ...
dblp:conf/nips/WaterhouseR95 fatcat:vyxkz4r2wfdofnmeqbewzwooju
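One way to read the growing idea in this entry: after training a fixed tree, select the expert that fits its share of the data worst and replace it with a small gated pair of experts, then retrain. The sketch below is only a schematic of that loop with a hypothetical tree API; the paper's actual growing and path-pruning criteria are more refined.

```python
def grow_worst_expert(tree, data, expert_loss, make_gated_pair):
    """Replace the worst-fitting leaf expert with a two-expert gated subtree.

    tree            : an HME exposing 'leaves' and 'replace_leaf' (hypothetical API)
    data            : the training set
    expert_loss     : callable(leaf, data) -> how poorly the leaf fits the data routed to it
    make_gated_pair : callable(leaf) -> a new gate node with two child experts
    """
    worst = max(tree.leaves, key=lambda leaf: expert_loss(leaf, data))
    tree.replace_leaf(worst, make_gated_pair(worst))   # then retrain with EM
    return tree
```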

A mixture of experts model for rank data with applications in election studies

Isobel Claire Gormley, Thomas Brendan Murphy
2008 Annals of Applied Statistics  
Model fitting is achieved via a hybrid of the EM and MM algorithms. An example of the methodology is illustrated by examining an Irish presidential election.  ...  A mixture of experts model is a mixture model in which the model parameters are functions of covariates.  ...  Adrian Raftery, the members of the Center for Statistics and the Social Sciences and the members of the Working Group on Model-based Clustering at the University of Washington for numerous suggestions  ... 
doi:10.1214/08-aoas178 fatcat:dlvixib4qvg77fasbjk6urihta

Robust mixture of experts modeling using the t distribution

F. Chamroukhi
2016 Neural Networks  
Mixture of Experts (MoE) is a popular framework for modeling heterogeneity in data for regression, classification, and clustering.  ...  We develop a dedicated expectation-maximization (EM) algorithm to estimate the parameters of the proposed model by monotonically maximizing the observed data log-likelihood.  ...  One interesting future direction is therefore to extend the proposed models to the hierarchical MoE framework (Jordan and Jacobs, 1994).  ...
doi:10.1016/j.neunet.2016.03.002 pmid:27093693 fatcat:x26wnvse5vf7zixakz57goa7w4
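The robustness described here comes from giving each expert a Student-t density instead of a Gaussian, so outlying points receive smaller responsibilities in the E-step. A small illustrative fragment for a flat MoE using scipy's t density (not the paper's full EM algorithm; names are ours):

```python
import numpy as np
from scipy.stats import t as student_t

def t_moe_responsibilities(y, gate, mu, scale, nu):
    """E-step responsibilities for a flat MoE whose experts are Student-t densities.

    y     : (n,)   targets
    gate  : (n, K) gating probabilities g_k(x_i)
    mu    : (n, K) expert means
    scale : (K,)   expert scale parameters
    nu    : (K,)   expert degrees of freedom
    """
    dens = student_t.pdf(y[:, None], df=nu, loc=mu, scale=scale)   # heavier tails than a Gaussian
    h = gate * dens
    return h / h.sum(axis=1, keepdims=True)
```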

A flexible probabilistic framework for large-margin mixture of experts

Archit Sharma, Siddhartha Saxena, Piyush Rai
2019 Machine Learning  
Crucially, neither of the two popular gating networks used in MoE, namely the softmax gating network and the hierarchical gating network (the latter used in the hierarchical mixture of experts), has efficient  ...  Mixture-of-Experts (MoE) enables learning highly nonlinear models by combining simple expert models.  ...  The gating networks can learn a flat or a hierarchical partitioning of the input space (the latter being the case with the hierarchical mixture of experts, Bishop and Svensén 2002).  ...
doi:10.1007/s10994-019-05811-4 fatcat:lxfqyduzvzh6nc242lm5m3q4ja

A hierarchical mixture model for software reliability prediction

Shaoming Li, Qian Yin, Ping Guo, Michael R. Lyu
2007 Applied Mathematics and Computation  
This is an application of the hierarchical mixtures of experts (HME) architecture. In HMSRM, individual software reliability models are used as experts.  ...  During the training of HMSRM, an Expectation-Maximization (EM) algorithm is employed to estimate the parameters of the model.  ...  HMSRM can be considered an application of the hierarchical mixtures of experts (HME) architecture [9].  ...
doi:10.1016/j.amc.2006.07.028 fatcat:3hpgmigs7rbwnitj7yd4ngo4x4

Hierarchical Methods for Landmine Detection with Wideband Electro-Magnetic Induction and Ground Penetrating Radar Multi-Sensor Systems

Seniha Esen Yuksel, Ganesan Ramachandran, Paul Gader, Joseph Wilson, Gyeongyong Heo, Dominic Ho
2008 IGARSS 2008 - 2008 IEEE International Geoscience and Remote Sensing Symposium  
The EM algorithm is used to estimate the parameters of the hierarchical mixture.  ...  All four features from WEMI and GPR are used in a Hierarchical Mixture of Experts model to increase the landmine detection rate.  ...  HIERARCHICAL MIXTURE OF EXPERTS Hierarchical Mixtures of Experts is a tree structure introduced by Jacobs et al.  ... 
doi:10.1109/igarss.2008.4778956 dblp:conf/igarss/YukselRGWHH08 fatcat:ygn3ie6sd5gtvcmeaa4uvdwyuu

Learning Ambiguities Using Bayesian Mixture of Experts

Atul Kanaujia, Dimitris Metaxas
2006 Proceedings - International Conference on Tools with Artificial Intelligence, TAI  
Mixture of Experts training involves learning a multi-category classifier for the gates distribution and fitting a regressor within each of the clusters.  ...  Mixture of Experts (ME) is an ensemble of function approximators that fit the clustered data set locally rather than globally.  ...  Bayesian Mixture of Experts Mixture of Experts training involves learning the experts and the gates distribution.  ...
doi:10.1109/ictai.2006.73 dblp:conf/ictai/KanaujiaM06 fatcat:vcye635jdfczzbwh475c57cyiq
Showing results 1 — 15 out of 5,455 results