A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Phrase-level Self-Attention Networks for Universal Sentence Encoding
2018
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
As a result, the memory consumption can be reduced because the self-attention is performed at the phrase level instead of the sentence level. ...
Attention mechanism has been an integral part in many sentence encoding models, allowing the models to capture context dependencies regardless of the distance between elements in the sequence. ...
Sentence-level Self-Attention (w/o PSA model described in subsection 4.2) is used as a baseline for our model. ...
doi:10.18653/v1/d18-1408
dblp:conf/emnlp/WuWLM18
fatcat:anathbqhl5cwje3erv6ligazlm
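The memory saving described in the first snippet can be illustrated with a toy count of attention-score entries: sentence-level self-attention scores every token pair, while phrase-level self-attention scores pairs only within each phrase. The fixed-width phrase split below is a simplifying assumption for illustration; the paper's own phrase segmentation is not shown in these snippets.

```python
def attention_memory(seq_len, phrase_len=None):
    """Count entries in the self-attention score matrix/matrices.

    Sentence-level self-attention costs seq_len**2 entries. Phrase-level
    self-attention scores pairs only inside each phrase, so memory falls
    to a sum of small phrase_len**2 blocks. The fixed-width phrase split
    is a toy assumption, not the paper's segmentation method.
    """
    if phrase_len is None:                 # sentence-level baseline
        return seq_len ** 2
    full = seq_len // phrase_len           # number of full phrases
    rem = seq_len - full * phrase_len      # tokens in the last, short phrase
    return full * phrase_len ** 2 + rem ** 2

# A 100-token sentence split into 10-token phrases:
sentence_level = attention_memory(100)      # 100 * 100 = 10000 entries
phrase_level = attention_memory(100, 10)    # 10 blocks of 10*10 = 1000 entries
```

The quadratic term now depends on the phrase length rather than the sentence length, which is where the reduction comes from.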
Combining Semantics, Context, and Statistical Evidence in Genomics Literature Search
2007
2007 IEEE 7th International Symposium on BioInformatics and BioEngineering
We present an information retrieval model for combining evidence from concept-based semantics, term statistics, and context for improving search precision of genomics literature by accurately identifying ...
level information versus using document, sentence or passage level information alone. ...
Context evaluation with dimensional data model: BM25 was used for document retrieval, and QTM was used for passage and sentence retrieval. ...
doi:10.1109/bibe.2007.4375738
dblp:conf/bibe/UrbainGF07
fatcat:axaenauynfcwxb5fkd45ljgc4q
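The BM25 document retrieval mentioned in the snippets follows the standard Okapi formula; a minimal sketch with a toy corpus is below. The `k1` and `b` values are common defaults, not parameters reported by the paper, and the IDF variant is the Lucene-style non-negative one.

```python
import math

def bm25_score(query, doc, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of `doc` for `query` over corpus `docs`.

    Standard formula; k1/b are common default values, not settings
    taken from the paper, which does not state its parameters here.
    """
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    score = 0.0
    for term in query:
        df = sum(term in d for d in docs)                # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # non-negative IDF
        tf = doc.count(term)                             # term frequency in doc
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["gene", "expression", "profile"],
        ["protein", "binding", "site"],
        ["gene", "regulation", "network"]]
# Documents containing the query term outrank those that do not:
hit = bm25_score(["gene"], docs[0], docs)
miss = bm25_score(["gene"], docs[1], docs)
```

The length-normalization term `1 - b + b * len(doc) / avgdl` is what lets BM25 trade off term frequency against document length.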
Generating Questions and Multiple-Choice Answers using Semantic Analysis of Texts
2016
International Conference on Computational Linguistics
Evaluation by human annotators indicates that our approach requires a larger number of inference steps, which necessitate deeper semantic understanding of texts than a traditional single-sentence approach ...
The system also generates correct answers and semantically-motivated phrase-level distractors as answer choices. ...
Jun Araki is partly supported by an IBM Ph.D. Fellowship and a Funai Overseas Scholarship. ...
dblp:conf/coling/ArakiRSHYM16
fatcat:k3jz7calkvf4rj7k5an5kfbsoe
Unsupervised Graph-Based Tibetan Multi-Document Summarization
2022
Computers Materials & Continua
In terms of topic division, we adopt a two-level clustering method, converting the original document into document-level and sentence-level graphs; next, we take both linguistic and deep representations into account and integrate an external corpus into the graph to obtain the sentence semantic clustering. ...
For sentence-level clusters, we use the spectral clustering method. ...
doi:10.32604/cmc.2022.027301
fatcat:fqxgkck3jnebbgxt7xi6qvkq7e
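The sentence-level spectral clustering mentioned in the last snippet can be sketched as a 2-way spectral partition of a toy sentence-similarity graph. The graph, its weights, and the Fiedler-vector split below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Toy sentence-similarity graph: two topical groups of "sentences"
# (indices 0-2 and 3-5) with strong intra-group, weak inter-group links.
W = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0, 0.0],
    [0.9, 0.0, 0.85, 0.0, 0.1, 0.0],
    [0.8, 0.85, 0.0, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 0.0, 0.85],
    [0.0, 0.0, 0.1, 0.8, 0.85, 0.0],
])

def spectral_bipartition(W):
    """2-way spectral clustering: the sign pattern of the Fiedler vector
    (eigenvector of the 2nd-smallest eigenvalue of the graph Laplacian
    L = D - W) splits the graph along its weakest cut."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, eigvecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]               # second-smallest eigenvalue's vector
    return (fiedler > 0).astype(int)      # sign gives the two clusters

labels = spectral_bipartition(W)
```

With the weak cross-group edges, the Fiedler vector takes one sign on each topical group, recovering the two sentence clusters.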
Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension
[article]
2019
arXiv
pre-print
To alleviate the second problem, we add an additional semantic similarity loss to push the generated distractors more relevant to the article. ...
In reading comprehension, generating sentence-level distractors is a significant task, which requires a deep understanding of the article and question. ...
Acknowledgments We thank Wenjie Zhou for his valuable comments and suggestions. ...
arXiv:1911.08648v1
fatcat:fv65npf56zfrdoyg4fzhnwsjku
Understand Legal Documents with Contextualized Large Language Models
[article]
2023
arXiv
pre-print
Our evaluations demonstrate that our designed models are more accurate than baselines, e.g., with an up to 15.0% better F1 score in subtask B. ...
Specifically, we first develop the Legal-BERT-HSLN model that considers the comprehensive context information in both intra- and inter-sentence levels to predict rhetorical roles (subtask A) and then train ...
context semantics in both intra- and inter-sentence levels. ...
arXiv:2303.12135v4
fatcat:7sjl5snyonfvlhxllhpreyd2xa
SRLGRN: Semantic Role Labeling Graph Reasoning Network
[article]
2020
arXiv
pre-print
The proposed graph is a heterogeneous document-level graph that contains nodes of type sentence (question, title, and other sentences), and semantic role labeling sub-graphs per sentence that contain arguments ...
We propose a graph reasoning network based on the semantic structure of the sentences to learn cross paragraph reasoning paths and find the supporting facts and the answer jointly. ...
We thank the anonymous reviewers for their thoughtful comments. ...
arXiv:2010.03604v2
fatcat:z3nw25dmn5aghfdfbluslflbl4
Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension
2020
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference
To alleviate the second problem, we add an additional semantic similarity loss to push the generated distractors more relevant to the article. ...
In reading comprehension, generating sentence-level distractors is a significant task, which requires a deep understanding of the article and question. ...
Acknowledgments We thank Wenjie Zhou for his valuable comments and suggestions. ...
doi:10.1609/aaai.v34i05.6522
fatcat:hvsztxaoljfndcouqghz65rvji
Think and Tell: Preview Network for Image Captioning
2018
British Machine Vision Conference
However, for humans to describe a scene, it's a common behaviour to first preview and organize all the observed visual contents in a semantically-meaningful order, and then form a complete description sentence. ...
Qualitative evaluations See Figure 3 for a qualitative comparison of captions generated by our method and the baseline model. ...
dblp:conf/bmvc/ZhuXY18
fatcat:cwt6nvwscvgapkbwdynsvuns3u
Multi-labeled Relation Extraction with Attentive Capsule Network
[article]
2018
arXiv
pre-print
the highly overlapped relations within an individual sentence. ...
Disclosing multiple overlapping relations within a sentence remains challenging. ...
The first one extracts low-level semantic meanings. ...
arXiv:1811.04354v1
fatcat:qkl6xyxicbetffnoxsbopgsoom
XLING: Matching Query Sentences to a Parallel Corpus using Topic Models for WSD
2013
International Workshop on Semantic Evaluation
translation for the target polysemous words. ...
The XLING system introduces a novel approach to skip the sense disambiguation step by matching query sentences to sentences in a parallel corpus using topic models; it returns the word alignments as the ...
The XLING_TnT system outputs one translation for each query sentence for the best-result evaluation, and the top 5 translations for the out-of-five evaluation. ...
dblp:conf/semeval/TanB13
fatcat:vbqptsq2ivchzcirlrguchtbeu
BioVerbNet: a large semantic-syntactic classification of verbs in biomedicine
2021
Journal of Biomedical Semantics
Results We demonstrate the utility of the new resource in boosting model performance in document- and sentence-level classification in biomedicine. ...
This new resource comprises 693 verbs assigned to 22 top-level and 117 fine-grained semantic-syntactic verb classes. ...
We evaluate each based on their document-level (Pubmed abstract) and sentence-level classifications, where zero or more predefined labels can be assigned for both of these tasks. ...
doi:10.1186/s13326-021-00247-z
pmid:34266499
pmcid:PMC8280585
fatcat:24ej77yimjdonlrzdiy4lgao6q
Rethinking Modern Communication from Semantic Coding to Semantic Communication
[article]
2022
arXiv
pre-print
After analyzing the underlying defects of existing semantics-aware techniques, we establish a confidence-based distillation mechanism for the joint semantics-noise coding (JSNC) problem and a reinforcement learning (RL)-powered semantic communication paradigm that endows a system with the ability to convey the semantics instead of pursuing bit-level accuracy. ...
But for images, the original pixel-level regression pipeline no longer holds under an RL setting. ...
arXiv:2110.08496v2
fatcat:lvuamazr7fe33cdt7ycajcvet4
Generating Sentences by Editing Prototypes
[article]
2018
arXiv
pre-print
Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies. ...
generates higher quality outputs according to human evaluation. ...
We thank the reviewers and editor for their insightful comments. This work was funded by DARPA CwC program under ARO prime contract no. W911NF-15-1-0462. ...
arXiv:1709.08878v2
fatcat:s6veavzgvfhs5lb73auynjuqh4
Measuring language lateralisation with different language tasks: a systematic review
2017
PeerJ
Measurement of laterality is of interest both to researchers investigating the neural organisation of the language system and to clinicians needing to establish an individual's hemispheric dominance for ...
of the strength, reliability and robustness of the laterality measurements they yield with fMRI, to look at variability that is both dependent and independent of aspects of study design, such as the baseline ...
Thus, there is currently insufficient data with these tasks to evaluate predictions of stronger laterality for sentence over word processing. ...
doi:10.7717/peerj.3929
pmid:29085748
pmcid:PMC5659218
fatcat:4gr77yfsinep5capt754obwupm
Showing results 1 — 15 out of 8,150 results