275,216 Hits in 2.9 sec

Large Scale Translation Quality Estimation

Miguel Angel Ríos Gaona, Serge Sharoff
2015 Deep Machine Translation Workshop  
The transfer learning methods are: Transductive SVM, Label Propagation and Self-taught Learning.  ...  We expand existing resources for Quality Estimation across related languages by using different transfer learning methods.  ...  Transfer learning setup: We aim to apply transfer learning, where texts in related languages are treated as unlabelled out-of-domain data.  ... 
dblp:conf/acl-dmtw/GaonaS15 fatcat:yhgp4cwpyjculflbv65teu63vm
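The label propagation method named in the snippet above can be illustrated with a toy numpy sketch (synthetic data and parameters, not the authors' toolkit): labelled points spread their classes to unlabelled points over a similarity graph, which is how unlabelled out-of-domain text could inherit labels from a related language.

```python
import numpy as np

# Toy sketch of label propagation: labelled rows of y carry class ids,
# unlabelled rows carry -1, and labels diffuse over an RBF similarity graph.
def label_propagation(X, y, n_iter=50, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-d2 / (2 * sigma ** 2))
    P /= P.sum(axis=1, keepdims=True)            # row-stochastic transitions
    classes = np.unique(y[y >= 0])
    F = np.zeros((len(X), classes.size))
    labelled = y >= 0
    F[labelled, np.searchsorted(classes, y[labelled])] = 1.0
    for _ in range(n_iter):
        F = P @ F
        F[labelled] = 0.0                        # clamp the known labels
        F[labelled, np.searchsorted(classes, y[labelled])] = 1.0
    return classes[F.argmax(axis=1)]

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9],
              [0.1, 0.2], [2.9, 3.1]])
y = np.array([0, 0, 1, 1, -1, -1])               # last two points unlabelled
print(label_propagation(X, y))                   # unlabelled points join their clusters
```

The clamping step is what distinguishes propagation from plain smoothing: known labels are reset every iteration, so only the unlabelled points move.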

StRE: Self Attentive Edit Quality Prediction in Wikipedia

Soumya Sarkar, Bhanu Prakash Reddy, Sandipan Sikdar, Animesh Mukherjee
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
Our pre-trained model achieves this result after retraining on a set as small as 20% of the edits in a wikipage.  ...  More specifically, we deploy deep encoders to generate representations of the edits from their text content, which we then leverage to infer quality.  ...  Table 4: Results for intra-category, inter-category and category-agnostic predictions without and with transfer learning. The transfer learning approach is always beneficial.  ... 
doi:10.18653/v1/p19-1387 dblp:conf/acl/SarkarRSM19 fatcat:k2hk3vpy6ndl3eu4ugbafhw2ma

Review Learning: Alleviating Catastrophic Forgetting with Generative Replay without Generator [article]

Jaesung Yoo, Sunghyuk Choi, Ye Seul Yang, Suhyeon Kim, Jieun Choi, Dongkyeong Lim, Yaeji Lim, Hyung Joon Joo, Dae Jung Kim, Rae Woong Park, Hyeong-Jin Yoon, Kwangsoo Kim
2022 arXiv   pre-print
It deteriorates performance of the deep learning model on diverse datasets, which is critical in privacy-preserving deep learning (PPDL) applications based on transfer learning (TL).  ...  When a deep learning model is sequentially trained on different datasets, it forgets the knowledge acquired from previous data, a phenomenon known as catastrophic forgetting.  ...  In medical machine learning applications [9, 10] and privacy-preserving deep learning (PPDL) [11, 12] , transfer learning (TL) [13, 14] is used when data are insufficient, which leverages acute knowledge  ... 
arXiv:2210.09394v1 fatcat:dlqilzmnmnggpmhn7bn4tbwnha
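The forgetting-and-replay setting described above can be demonstrated with a minimal numpy sketch (a plain rehearsal buffer on synthetic tasks, not the paper's generator-free Review Learning method): a logistic model trained sequentially on two tasks loses the first unless stored samples are mixed back in.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(axis, n=200):
    """Binary task whose label depends only on one input coordinate."""
    X = rng.normal(size=(n, 2))
    return X, (X[:, axis] > 0).astype(float)

def train(w, X, y, lr=0.5, epochs=200):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # logistic regression
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def acc(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

XA, yA = make_task(0)
XB, yB = make_task(1)

# Sequential training: task A, then task B alone -> task A degrades.
w_seq = train(train(np.zeros(2), XA, yA), XB, yB)

# Rehearsal: mix a stored buffer of task-A samples into task-B training.
buf = 50
w_rep = train(train(np.zeros(2), XA, yA),
              np.vstack([XB, XA[:buf]]), np.concatenate([yB, yA[:buf]]))

print("task A accuracy, no replay:  ", acc(w_seq, XA, yA))
print("task A accuracy, with replay:", acc(w_rep, XA, yA))
```

Generative replay replaces the stored buffer with synthesized samples; the paper's contribution is doing this without keeping a separate generator.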

LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer [article]

Machel Reid, Victor Zhong
2021 arXiv   pre-print
Many types of text style transfer can be achieved with only small, precise edits (e.g. sentiment transfer from I had a terrible time... to I had a great time...).  ...  Our method outperforms existing generation and editing style transfer methods on sentiment (Yelp, Amazon) and politeness (Polite) transfer.  ...  Encode, tag, realize: High-precision text editing. ... learning for non-parallel text style transfer.  ... 
arXiv:2105.08206v1 fatcat:nlfiku2dxzfxdftd5yhov7ej2m
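The "small, precise edits" idea in the snippet above can be sketched with Python's standard difflib, as a rough word-level stand-in for the paper's Levenshtein edit operations (function name and example sentences are illustrative):

```python
import difflib

# Diff two sentences at the word level and keep only the non-equal
# operations: for a style-transfer pair these are the "precise edits".
def word_edits(src, tgt):
    a, b = src.split(), tgt.split()
    ops = difflib.SequenceMatcher(a=a, b=b).get_opcodes()
    return [(tag, " ".join(a[i:j]), " ".join(b[k:l]))
            for tag, i, j, k, l in ops if tag != "equal"]

print(word_edits("I had a terrible time at the restaurant",
                 "I had a great time at the restaurant"))
# -> [('replace', 'terrible', 'great')]
```

The point of the representation is that the whole sentiment flip reduces to one replace operation instead of regenerating the full sentence.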

A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning

Gonçalo M. Correia, André F. T. Martins
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
APE systems are usually trained by complementing human post-edited data with large, artificial data generated through backtranslations, a time-consuming process often no easier than training an MT system  ...  Automatic post-editing (APE) seeks to automatically refine the output of a black-box machine translation (MT) system through human post-edits.  ...  Instead of relying on edit operations, our approach mitigates the small amount of data with transfer learning through BERT.  ... 
doi:10.18653/v1/p19-1292 dblp:conf/acl/CorreiaM19 fatcat:yj73lyngirhrlcrue6vbv5n4zm

The Need for Transfer Learning in CRISPR-Cas Off-Target Scoring [article]

Pavan K Kota, Yidan Pan, Hoang-Anh Vu, Mingming Cao, Richard G Baraniuk, Gang Bao
2021 bioRxiv   pre-print
We demonstrate that model complexity can only improve performance on TrueOT if transfer learning techniques are employed.  ...  We hypothesize that such models may suboptimally transfer to the low throughput data in TrueOT due to fundamental biological differences between proxy assays and in vivo behavior.  ...  P.K.K. was supported by the NLM Training Program in Biomedical Informatics and Data Science (T15LM007093).  ... 
doi:10.1101/2021.08.28.457846 fatcat:glhrqsi4mffcffv4xygtjwaep4

Quality In, Quality Out: Learning from Actual Mistakes

Frédéric Blain, Nikolaos Aletras, Lucia Specia
2020 European Association for Machine Translation Conferences/Workshops  
For that purpose, we use transfer-learning to leverage large scale noisy annotations and small sets of high-quality human annotated translation errors to train QE models.  ...  However, QE models are often trained on noisy approximations of quality annotations derived from the proportion of post-edited words in translated sentences instead of direct human annotations of translation  ...  -(4) BiRNN-MQM model trained with transfer-learning, i.e. trained on HTER data and adapted using MQM data.  ... 
dblp:conf/eamt/BlainAS20 fatcat:4tkmpa3pejgrheomuoa7hala6i

DPE: Disentanglement of Pose and Expression for General Video Portrait Editing [article]

Youxin Pang, Yong Zhang, Weize Quan, Yanbo Fan, Xiaodong Cun, Ying Shan, Dong-ming Yan
2023 arXiv   pre-print
In this paper, we introduce a novel self-supervised disentanglement framework to decouple pose and expression without 3DMMs and paired data, which consists of a motion editing module, a pose generator,  ...  The editing module projects faces into a latent space where pose motion and expression motion can be disentangled, and the pose or expression transfer can be performed in the latent space conveniently  ...  c e represents the edited code with expression transfer.  ... 
arXiv:2301.06281v2 fatcat:gpviojlwg5h7lmculikpgvbyby

Language Adaptation for Extending Post-Editing Estimates for Closely Related Languages

Miguel Rios, Serge Sharoff
2016 Prague Bulletin of Mathematical Linguistics  
In this paper we report a toolkit for achieving language adaptation, which is based on learning new feature representations using transfer learning methods.  ...  This paper presents an open-source toolkit for predicting human post-editing efforts for closely related languages.  ...
doi:10.1515/pralin-2016-0017 fatcat:jf6jls6pfrgxnbeuits2gtln44

Page 716 of SMPTE Motion Imaging Journal Vol. 88, Issue 10 [page]

1979 SMPTE Motion Imaging Journal  
Figure 1 shows a typical single-effects control panel with its various learn modes and learn registers.  ...  Ideally, a standard data protocol should be created that would allow different editing consoles to be used with different switchers.  ... 

Editing Text in the Wild [article]

Liang Wu, Chengquan Zhang, Jiaming Liu, Junyu Han, Jingtuo Liu, Errui Ding, Xiang Bai
2019 arXiv   pre-print
In this paper, we are interested in editing text in natural images, which aims to replace or modify a word in the source image with another one while maintaining its realistic look.  ...  The background inpainting module erases the original text, and fills the text region with appropriate texture.  ...  Specifically, the text style from source image I_s is transferred to the target text with the help of a skeleton-guided learning mechanism aiming to retain text semantics (Sec. 3.1).  ... 
arXiv:1908.03047v1 fatcat:skm7yrjuibaxbnfgwhpphkoidi

U-Net Model for Brain Extraction: Trained on Humans for Transfer to Non-human Primates

Xindi Wang, Xin-Hui Li, Jae Wook Cho, Brian E. Russ, Nanditha Rajamani, Alisa Omelchenko, Lei Ai, Annachiara Korchmaros, Stephen Sawiak, R. Austin Benn, Pamela Garcia-Saldivar, Zheng Wang (+8 others)
2021 NeuroImage  
We also demonstrated the transfer-learning process enables the macaque model to be updated for use with scans from chimpanzees, marmosets, and other mammals (e.g. pig).  ...  U-Net Model), and then transferred this to NHP data using a small NHP training sample.  ...  Similarly, models with transfer-learning show higher Dice coefficients with lower variation in the validation and testing sets than models without transfer-learning.  ... 
doi:10.1016/j.neuroimage.2021.118001 pmid:33789137 pmcid:PMC8529630 fatcat:6qmoge2jwjhvzmeeypdaql2yci

Reducing Sequence Length by Predicting Edit Operations with Large Language Models [article]

Masahiro Kaneko, Naoaki Okazaki
2023 arXiv   pre-print
We apply instruction tuning for LLMs on the supervision data of edit spans.  ...  Representing an edit span with a position of the source text and corrected tokens, we can reduce the length of the target sequence and the computational cost for inference.  ...  The learning rate was set to 1e-5, with a warmup rate of 0.03, and we employed a cosine learning rate schedule.  ... 
arXiv:2305.11862v2 fatcat:5e3l3uigzjh3vkrlchvfzouj5m
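The edit-span representation described in the snippet above can be sketched in a few lines (names are illustrative, not the authors' code): a correction is a (start, end, replacement) triple over the source tokens, so the decoded target is much shorter than the fully rewritten sentence.

```python
# Apply (start, end, replacement_tokens) edit spans to a tokenised source.
# Spans are applied right to left so earlier offsets stay valid.
def apply_edit_spans(source_tokens, spans):
    out = list(source_tokens)
    for start, end, repl in sorted(spans, reverse=True):
        out[start:end] = repl
    return out

src = "he go to school yesterday".split()
spans = [(1, 2, ["went"])]            # one 3-field span instead of a 5-token target
print(" ".join(apply_edit_spans(src, spans)))
# -> "he went to school yesterday"
```

For a mostly correct source sentence, the target sequence the model must generate shrinks from the sentence length to a handful of span fields, which is where the inference savings come from.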

TEGLO: High Fidelity Canonical Texture Mapping from Single-View Images [article]

Vishal Vinod, Tanmay Shah, Dmitry Lagun
2023 arXiv   pre-print
We demonstrate that such mapping enables texture transfer and texture editing without requiring meshes with shared topology.  ...  We equip our method with editing capabilities by creating a dense correspondence mapping to a 2D canonical space.  ...  NeuMesh [55] learns mesh representations to enable texture transfer and texture editing using textured meshes.  ... 
arXiv:2303.13743v1 fatcat:xbh5rmdzqndi7e2musny2gbxce

Active semi-supervised framework with data editing

Xue Zhang, Wang-xin Xiao
2012 2012 International Conference on Systems and Informatics (ICSAI2012)  
The fusion of active learning with data editing makes ASSDE more robust to the sparsity and the distribution bias of the training data.  ...  In this paper, we propose an active semi-supervised framework with data editing (we call it ASSDE) to improve sparsely labeled text classification.  ...  In this paper, we address the problem of sparsely labeled text classification by active semi-supervised learning with data editing.  ... 
doi:10.1109/icsai.2012.6223045 fatcat:xln6ixpbfjhpblteaxcx3f5pzq
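The data-editing step that the snippet above combines with active learning can be illustrated with a classic Wilson-style k-NN editing sketch (synthetic data; not the authors' ASSDE implementation): labelled points whose class disagrees with the majority of their nearest neighbours are flagged as likely noise.

```python
import numpy as np

# Flag labelled points whose class disagrees with the majority vote of
# their k nearest neighbours (Wilson-style data editing).
def edit_labels(X, y, k=3):
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)                  # a point never votes for itself
    keep = []
    for i in range(len(X)):
        nn = np.argsort(d[i])[:k]
        votes = np.bincount(y[nn], minlength=y.max() + 1)
        keep.append(votes.argmax() == y[i])
    return np.array(keep)

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [0.05, 0.05]])
y = np.array([0, 0, 0, 1, 1, 1, 1])              # last point is mislabelled noise
print(edit_labels(X, y))                          # only the noisy point is dropped
```

In a sparsely labelled setting this kind of editing removes unreliable (pseudo-)labels before they bias the next round of semi-supervised training.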
Showing results 1 — 15 out of 275,216 results