Oct 13, 2015 · Abstract: We study the improper learning of multi-layer neural networks. Suppose that the neural network to be learned has k hidden layers and that the ℓ1-norm of ...
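The abstract above concerns networks whose incoming weight vectors are bounded in ℓ1-norm. As an illustrative sketch only (the specific architecture, activation, and bound here are assumptions, not taken from the paper), the forward pass of a k-hidden-layer network and the ℓ1 quantity being bounded can be written as:

```python
import math

def forward(x, layers):
    # layers: list of (W, b) pairs; W is a list of weight rows, b a list of
    # biases. Hidden layers use a 1-Lipschitz sigmoidal activation (tanh,
    # chosen here for illustration); the output layer is linear.
    h = x
    for i, (W, b) in enumerate(layers):
        z = [sum(w_j * h_j for w_j, h_j in zip(row, h)) + b_i
             for row, b_i in zip(W, b)]
        h = z if i == len(layers) - 1 else [math.tanh(v) for v in z]
    return h

def max_row_l1(layers):
    # Largest l1-norm over all incoming-weight vectors: a quantity of the
    # kind the learnability result assumes is bounded.
    return max(sum(abs(w) for w in row) for W, _ in layers for row in W)
```

For example, a network with one hidden layer of two units has `len(layers) == 2`, and `max_row_l1` scans every neuron's incoming weights across all layers.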
The main result is a kernel-based method such that, with probability at least $1 - \delta$, it learns a predictor whose generalization error is at most $\epsilon$ worse than ...
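The guarantee above is for an improper learner: it outputs a kernel predictor rather than a neural network. A minimal sketch of an algorithm with that shape, using plain kernel ridge regression with an RBF kernel as a stand-in (the paper's actual kernel construction and method are not reproduced here):

```python
import math

def kernel(x, y, gamma=1.0):
    # RBF kernel as a stand-in; the paper builds a specific kernel
    # adapted to l1-bounded networks (assumption: not that kernel).
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting: solves A @ x = b.
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_kernel_ridge(X, y, lam=0.01):
    # Solve (K + lam * I) alpha = y for the dual coefficients alpha.
    n = len(X)
    K = [[kernel(X[i], X[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, y)

def predict(X, alpha, x):
    # The learned predictor is a kernel expansion, not a network.
    return sum(a * kernel(xi, x) for a, xi in zip(alpha, X))
```

Usage: `alpha = fit_kernel_ridge(X, y)` followed by `predict(X, alpha, x_new)`; the improperness is visible in the fact that the hypothesis returned lives in the kernel's function class rather than the class of ℓ1-bounded networks.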
Y. Zhang, J. D. Lee, and M. I. Jordan. ℓ1-regularized Neural Networks are Improperly Learnable in Polynomial Time. International Conference on Machine Learning, pp. 993–1001; also CoRR, 2015.
This work gives a polynomial-time algorithm for learning neural networks with one ...
We focus on ℓ1-regularized ... Algorithm 2 outputs a neural network $\hat{f} \in \mathcal{N}_m$ ...