1,780 Hits in 7.3 sec

Multitask Additive Models With Shared Transfer Functions Based on Dictionary Learning

Alhussein Fawzi, Mathieu Sinn, Pascal Frossard
2017 IEEE Transactions on Signal Processing  
We establish a connection with sparse dictionary learning and propose a new efficient fitting algorithm which alternates between sparse coding and transfer function updates.  ...  Our key idea is to share transfer functions across models in order to reduce the model complexity and ease the exploration of the corpus.  ...  Additive models form a widely popular class of regression  ... 
doi:10.1109/tsp.2016.2634546 fatcat:fjwzivb32rh5zkrmz37xq3kavu
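A minimal sketch of the alternating scheme the abstract describes: tasks share a dictionary of transfer functions, and fitting alternates between a sparse-coding step and a transfer-function (dictionary) update. All sizes and the ISTA/least-squares updates below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, G, K = 20, 50, 5          # tasks, grid points per function, shared atoms
Y = rng.normal(size=(T, G))  # per-task function values on a common grid
D = rng.normal(size=(K, G))  # shared transfer functions (dictionary atoms)
C = np.zeros((T, K))         # per-task sparse mixing coefficients
lam, step = 0.1, 0.01

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

for it in range(200):
    # Sparse coding step: one ISTA update of C under an l1 penalty.
    grad = (C @ D - Y) @ D.T
    C = soft_threshold(C - step * grad, step * lam)
    # Transfer-function step: least-squares update of the shared dictionary.
    D = np.linalg.lstsq(C, Y, rcond=None)[0]

print("reconstruction error:", np.linalg.norm(Y - C @ D))
```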

Multitask Generalized Eigenvalue Program

Boyu Wang, Joelle Pineau, Borja Balle
2016 Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference  
We present a novel multitask learning framework called multitask generalized eigenvalue program (MTGEP), which jointly solves multiple related generalized eigenvalue problems (GEPs).  ...  Empirical evaluation with both synthetic and benchmark real world datasets validates the efficacy and efficiency of the proposed techniques, especially for grouped multitask GEPs.  ...  In this section, we briefly review the literature that relates to sparse coding based transfer and multitask learning algorithms.  ... 
doi:10.1609/aaai.v30i1.10229 fatcat:uic272ztybhincec7nf4hsrqje
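As a hedged illustration of the building block MTGEP couples across tasks: each task t is a generalized eigenvalue problem A_t v = λ B_t v. The sketch below solves each GEP independently with SciPy; the paper's contribution is to solve related GEPs jointly, which this toy example does not reproduce.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def random_spd(d):
    M = rng.normal(size=(d, d))
    return M @ M.T + d * np.eye(d)   # symmetric positive definite

d, n_tasks = 10, 4
leading = []
for t in range(n_tasks):
    A, B = random_spd(d), random_spd(d)
    w, V = eigh(A, B)                # generalized eigenpairs, ascending order
    leading.append(V[:, -1])         # top eigenvector for task t
print(np.stack(leading).shape)       # (4, 10)
```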

Sentiment Classification for Chinese Text Based on Interactive Multitask Learning

Han Zhang, Shaoqi Sun, Yongjin Hu, Junxiu Liu, Yuanbo Guo
2020 IEEE Access  
In addition, a multitask information interaction mechanism is used, and the prediction information from the autonomous subtask is fed back into the latent representation of the two tasks.  ...  INDEX TERMS Multitask learning, information interaction mechanism, emotion classification, emotional dictionary expansion, ERNIE.  ... 
doi:10.1109/access.2020.3007889 fatcat:syhi27a6qrg2dc6z4asnpnlcky
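A rough sketch of the feedback idea in the snippet: predictions from one subtask are fed back into the representation used by the main sentiment task. The module, dimensions, and GRU encoder here are assumptions for illustration; the paper's actual architecture builds on ERNIE and an emotion dictionary.

```python
import torch
import torch.nn as nn

class InteractiveMTL(nn.Module):
    def __init__(self, dim=128, n_sent=3, n_aux=6):
        super().__init__()
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.aux_head = nn.Linear(dim, n_aux)            # auxiliary subtask
        self.sent_head = nn.Linear(dim + n_aux, n_sent)  # main task sees aux preds

    def forward(self, x):
        h, _ = self.encoder(x)
        pooled = h.mean(dim=1)
        aux_logits = self.aux_head(pooled)
        # Feedback: subtask predictions augment the shared representation.
        feedback = torch.cat([pooled, aux_logits.softmax(-1)], dim=-1)
        return self.sent_head(feedback), aux_logits

model = InteractiveMTL()
sent, aux = model(torch.randn(8, 20, 128))
print(sent.shape, aux.shape)  # torch.Size([8, 3]) torch.Size([8, 6])
```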

Joint acoustic modeling of triphones and trigraphemes by multi-task learning deep neural networks for low-resource speech recognition

Dongpeng Chen, Brian Mak, Cheung-Chi Leung, Sunil Sivadas
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
As triphone modeling and trigrapheme modeling are highly related learning tasks, a better shared internal representation (the hidden layers) can be learned to improve their generalization performance.  ...  It is well known in machine learning that multitask learning (MTL) can help improve the generalization performance of individually learned tasks if the tasks being trained in parallel are related, especially  ...  readily transferred or shared across multiple tasks.  ... 
doi:10.1109/icassp.2014.6854673 dblp:conf/icassp/ChenMLS14 fatcat:wvbl7kwc6jdrhdarqxyiwcl5pm
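A minimal hard-parameter-sharing sketch of the idea in the snippet: one set of shared hidden layers with separate triphone and trigrapheme output layers, trained with a summed cross-entropy loss. Layer sizes and label counts are made up for illustration.

```python
import torch
import torch.nn as nn

class MTLAcousticModel(nn.Module):
    def __init__(self, n_feats=40, hidden=512, n_triphone=2000, n_trigrapheme=1500):
        super().__init__()
        self.shared = nn.Sequential(            # shared internal representation
            nn.Linear(n_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.triphone = nn.Linear(hidden, n_triphone)
        self.trigrapheme = nn.Linear(hidden, n_trigrapheme)

    def forward(self, x):
        h = self.shared(x)
        return self.triphone(h), self.trigrapheme(h)

model = MTLAcousticModel()
x = torch.randn(32, 40)
tri_y = torch.randint(0, 2000, (32,))
gra_y = torch.randint(0, 1500, (32,))
tri_logits, gra_logits = model(x)
loss = nn.functional.cross_entropy(tri_logits, tri_y) \
     + nn.functional.cross_entropy(gra_logits, gra_y)
loss.backward()  # gradients from both tasks update the shared layers
```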

Adversarial Multitask Learning for Joint Multi-Feature and Multi-Dialect Morphological Modeling

Nasser Zalmout, Nizar Habash
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
We use multitask learning for joint morphological modeling for the features within two dialects, and as a knowledge-transfer scheme for cross-dialectal modeling.  ...  We use adversarial training to learn dialect-invariant features that can help the knowledge-transfer scheme from the high- to low-resource variants.  ...  VECMAP uses a seed dictionary to learn a mapping function that minimizes the distances between seed dictionary unigram pairs.  ... 
doi:10.18653/v1/p19-1173 dblp:conf/acl/ZalmoutH19 fatcat:wa5uoyc6hndrje6yq6szvxsgu4
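A sketch of the adversarial ingredient the abstract mentions: a gradient reversal layer in front of a dialect discriminator pushes the shared encoder toward dialect-invariant features. This is the standard DANN-style construction, offered as an illustration rather than the paper's exact model; the toy encoder and label setup are assumptions.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # flip the gradient sign

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

encoder = torch.nn.Linear(100, 64)
dialect_head = torch.nn.Linear(64, 2)   # e.g., high- vs. low-resource variant
feats = encoder(torch.randn(16, 100))
# The discriminator learns to predict the dialect, while the reversed
# gradient trains the encoder to make that prediction hard.
dialect_logits = dialect_head(grad_reverse(feats))
loss = torch.nn.functional.cross_entropy(dialect_logits, torch.randint(0, 2, (16,)))
loss.backward()
```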

Adversarial Multitask Learning for Joint Multi-Feature and Multi-Dialect Morphological Modeling [article]

Nasser Zalmout, Nizar Habash
2019 arXiv   pre-print
We use multitask learning for joint morphological modeling for the features within two dialects, and as a knowledge-transfer scheme for cross-dialectal modeling.  ...  We use adversarial training to learn dialect-invariant features that can help the knowledge-transfer scheme from the high- to low-resource variants.  ...  VECMAP uses a seed dictionary to learn a mapping function that minimizes the distances between seed dictionary unigram pairs.  ... 
arXiv:1910.12702v1 fatcat:bbowpz4mkrfdnl6m3eptkiqhdy
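The snippet also names VECMAP's core step: learn a linear map that pulls seed-dictionary word pairs together across embedding spaces. Under a least-squares objective with an orthogonality constraint, the map has the closed-form Procrustes solution sketched below; this is a simplification of VECMAP, with random vectors standing in for real embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 50, 200
X = rng.normal(size=(n_pairs, d))   # source-space embeddings of seed pairs
Y = rng.normal(size=(n_pairs, d))   # target-space embeddings of the same pairs

# Orthogonal W minimizing ||X W - Y||_F (the Procrustes solution):
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt
print("fit:", np.linalg.norm(X @ W - Y))
```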

A Multitask Learning Model with Multiperspective Attention and Its Application in Recommendation

Yingshuai Wang, Dezheng Zhang, Aziguli Wulamu, Jin Jing
2021 Computational Intelligence and Neuroscience  
For better user satisfaction and business effectiveness, multitask learning is one of the most important methods in e-commerce, training models to predict click and order targets at the same time.  ...  Some existing research models user representations based on the historical behaviour sequence to capture user interests. It is often the case that user interests may change from their past routines.  ...  Expert systems based on frames and expert systems based on models are regarded as different experts in the multitask learning algorithm.  ... 
doi:10.1155/2021/8550270 pmid:34691173 pmcid:PMC8536436 fatcat:4fggvnrr3rcjhfjlowf5qqmi2i
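An illustrative sketch of attention over a user's behaviour sequence, the kind of component "multiperspective attention" builds on: several learned query vectors ("perspectives") each pool the sequence differently. Names and sizes here are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class MultiPerspectiveAttention(nn.Module):
    def __init__(self, dim=64, n_perspectives=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_perspectives, dim))

    def forward(self, seq):                      # seq: (batch, steps, dim)
        scores = torch.einsum("btd,pd->btp", seq, self.queries)
        attn = scores.softmax(dim=1)             # attend over time steps
        return torch.einsum("btp,btd->bpd", attn, seq)  # one summary per perspective

pooled = MultiPerspectiveAttention()(torch.randn(8, 30, 64))
print(pooled.shape)  # torch.Size([8, 4, 64])
```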

A Neural Machine Translation Model for Arabic Dialects That Utilizes Multitask Learning (MTL)

Laith H. Baniata, Seyoung Park, Seong-Bae Park
2018 Computational Intelligence and Neuroscience  
We propose a multitask learning (MTL) model which shares one decoder among language pairs, while every source language has a separate encoder.  ...  The proposed neural machine translation model is inspired by the recently proposed recurrent neural network-based encoder-decoder model, which generalizes  ...  Based on the recurrent neural network architecture (RNN), three different methods of sharing information were used to model text with task-specific and shared layers.  ... 
doi:10.1155/2018/7534712 fatcat:n5s5ukeouvcxla374c2v5q3t7e
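A structural sketch of the sharing scheme in the abstract: each source dialect gets its own encoder, and all language pairs share one decoder. The toy GRU components, dialect names, and random inputs are assumptions; a real NMT system would embed tokens and use attention.

```python
import torch
import torch.nn as nn

class SharedDecoderNMT(nn.Module):
    def __init__(self, dialects, dim=256, vocab=8000):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {d: nn.GRU(dim, dim, batch_first=True) for d in dialects}
        )
        self.decoder = nn.GRU(dim, dim, batch_first=True)  # shared across pairs
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, dialect, tgt_in):
        _, h = self.encoders[dialect](src)   # dialect-specific encoding
        dec, _ = self.decoder(tgt_in, h)     # one decoder for all dialects
        return self.out(dec)

model = SharedDecoderNMT(["levantine", "maghrebi", "msa"])
logits = model(torch.randn(4, 12, 256), "levantine", torch.randn(4, 10, 256))
print(logits.shape)  # torch.Size([4, 10, 8000])
```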

Co-Clustering for Multitask Learning [article]

Keerthiram Murugesan, Jaime Carbonell, Yiming Yang
2017 arXiv   pre-print
The paper also proposes a highly-scalable multitask learning algorithm, based on the new framework, using conjugate gradient descent and generalized Sylvester equations.  ...  This paper presents a new multitask learning framework that learns a shared representation among the tasks, incorporating both task and feature clusters.  ...  Early work on latent shared representation includes (Zhang et al., 2005) , which proposes a model based on Independent Component Analysis (ICA) for learning multiple related tasks.  ... 
arXiv:1703.00994v1 fatcat:mzprawjzprg2lagpdb3guettdy
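The abstract's optimizer reduces its updates to generalized Sylvester equations. As a hedged illustration of that building block, SciPy solves the (ordinary) Sylvester equation A X + X B = Q directly; the paper's equations carry extra structure from the task and feature clusters that this toy example omits.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
B = rng.normal(size=(3, 3))
Q = rng.normal(size=(5, 3))

X = solve_sylvester(A, B, Q)          # solves A X + X B = Q
print(np.allclose(A @ X + X @ B, Q))  # True
```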

Self-Paced Multitask Learning with Shared Knowledge [article]

Keerthiram Murugesan, Jaime Carbonell
2017 arXiv   pre-print
We develop the mathematical foundation for the approach based on iterative selection of the most appropriate task, learning the task parameters, and updating the shared knowledge, optimizing a new bi-convex  ...  This proposed method applies quite generally, including to multitask feature learning, multitask learning with alternating structure optimization, etc.  ...  The tasks are solved in a sequential manner based on this order by transferring information from the previously learned tasks to the next ones through shared task parameters.  ... 
arXiv:1703.00977v2 fatcat:town6yjghngo3ckt2gckii72cu
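A toy sketch of the self-paced loop the abstract outlines: order tasks by current difficulty, update each task's parameters in that order, then refresh the shared knowledge. The least-squares tasks, ridge-style update, and mean-based sharing are stand-ins for the paper's bi-convex objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tasks = 20, 6
tasks = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(n_tasks)]
W = np.zeros((n_tasks, d))
shared = np.zeros(d)

def loss(t):
    X, y = tasks[t]
    return np.mean((X @ W[t] - y) ** 2)

for epoch in range(5):
    for t in sorted(range(n_tasks), key=loss):     # easiest task first
        X, y = tasks[t]
        # Ridge-style update biased toward the shared parameter vector.
        W[t] = np.linalg.solve(X.T @ X + np.eye(d), X.T @ y + shared)
    shared = W.mean(axis=0)                        # update the shared knowledge

print([round(loss(t), 3) for t in range(n_tasks)])
```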

Flexible Modeling of Latent Task Structures in Multitask Learning [article]

Alexandre Passos, Jacques Wainer, Hal Daume III (University of Maryland)
2012 arXiv   pre-print
Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks.  ...  However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem.  ...  In addition to offering a general framework for multitask learning, our proposed model also addresses several shortcomings of commonly used MTL models.  ... 
arXiv:1206.6486v1 fatcat:va2tqlbefbgyldcufobmpvqiva

Self-Paced Multitask Learning with Shared Knowledge

Keerthiram Murugesan, Jaime Carbonell
2017 Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence  
We develop the mathematical foundation for the approach based on iterative selection of the most appropriate task, learning the task parameters, and updating the shared knowledge, optimizing a new bi-convex  ...  This proposed method applies quite generally, including to multitask feature learning, multitask learning with alternating structure optimization, etc.  ...  Our proposed framework based on self-paced learning for multiple tasks addresses these three key challenges: 1) it embeds task selection into the model learning; 2) it gradually learns the shared knowledge  ... 
doi:10.24963/ijcai.2017/351 dblp:conf/ijcai/MurugesanC17 fatcat:dlxifelaavcytkjhqpmdhtwi2u

Multitask Learning for Grapheme-to-Phoneme Conversion of Anglicisms in German Speech Recognition [article]

Julia Pritzen, Michael Gref, Dietlind Zühlke, Christoph Schmidt
2021 arXiv   pre-print
With this approach, the model learns to generate pronunciations differently depending on the classification result.  ...  We show that multitask learning can help solve the challenge of loanwords in German speech recognition.  ...  All other models are copies of the baseline model that include an additional Anglicism pronunciation dictionary based on the respective G2P approach.  ... 
arXiv:2105.12708v2 fatcat:fcs3xn3v2zbzxkgwd6qjrtaetq
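A hedged sketch of the multitask idea in the abstract: a shared character encoder feeds both an Anglicism classifier and a phoneme predictor, and the classifier's output conditions the pronunciation head. Treating G2P as per-character tagging, and all names and sizes, are simplifying assumptions rather than the paper's sequence model.

```python
import torch
import torch.nn as nn

class MultitaskG2P(nn.Module):
    def __init__(self, n_chars=60, n_phones=50, dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * dim, 2)           # Anglicism vs. native word
        self.phones = nn.Linear(2 * dim + 2, n_phones)  # conditioned on the class

    def forward(self, chars):
        h, _ = self.encoder(self.embed(chars))
        cls_logits = self.classify(h.mean(dim=1))
        # Broadcast the class probabilities to every character position.
        cls = cls_logits.softmax(-1).unsqueeze(1).expand(-1, h.size(1), -1)
        return self.phones(torch.cat([h, cls], dim=-1)), cls_logits

model = MultitaskG2P()
phone_logits, cls_logits = model(torch.randint(0, 60, (4, 9)))
print(phone_logits.shape, cls_logits.shape)  # (4, 9, 50) (4, 2)
```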

A Multichannel Biomedical Named Entity Recognition Model Based on Multitask Learning and Contextualized Word Representations

Hao Wei, Mingyuan Gao, Ai Zhou, Fei Chen, Wen Qu, Yijia Zhang, Mingyu Lu
2020 Wireless Communications and Mobile Computing  
Moreover, we introduce auxiliary corpora with the same entity types as the main corpora to be evaluated in a multitask learning framework, then train our model on these separate corpora and share parameters  ...  In previous studies based on deep learning, pretrained word embeddings have become an indispensable part of neural network models, effectively improving their performance.  ...  "M" denotes the multitask model. "T" denotes the model based on transfer learning.  ... 
doi:10.1155/2020/8894760 fatcat:z55w6flkovhfnpchlz2lzj2aya
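A sketch of the training regime the abstract describes: batches from the main and auxiliary corpora are interleaved, every corpus updates the shared encoder, and each corpus keeps its own output layer. The components below are toy stand-ins for the paper's multichannel BiLSTM-CRF, and the placeholder batches are random.

```python
import torch
import torch.nn as nn

shared = nn.LSTM(100, 100, batch_first=True)           # shared across corpora
heads = nn.ModuleDict({c: nn.Linear(100, 5) for c in ["main", "aux"]})
opt = torch.optim.Adam(list(shared.parameters()) + list(heads.parameters()))

def batch():  # placeholder batch: token features and per-token tag labels
    return torch.randn(8, 25, 100), torch.randint(0, 5, (8, 25))

for step in range(10):
    for corpus in ["main", "aux"]:                     # interleave the corpora
        x, y = batch()
        h, _ = shared(x)                               # shared parameters
        logits = heads[corpus](h)                      # corpus-specific head
        loss = nn.functional.cross_entropy(logits.flatten(0, 1), y.flatten())
        opt.zero_grad(); loss.backward(); opt.step()
```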

A Multitask Deep Learning Framework for DNER

Ran Jin, Tengda Hou, Tongrui Yu, Min Luo, Haoliang Hu, Baiyuan Ding
2022 Computational Intelligence and Neuroscience  
The proposed method achieved better performance on the DDI2011 and DDI2013 datasets.  ...  In this article, we propose the multi-DTR model, which can accurately recognize drug-specific names by joint modeling of DNER and DNEN.  ...  [37] used a hierarchical recursive network for cross-language transfer learning. The model proposed by Liu et al.  ... 
doi:10.1155/2022/3321296 pmid:35469206 pmcid:PMC9034928 fatcat:6pvtyesjlbh5dkqmabbtl5ovdu
Showing results 1 — 15 out of 1,780 results