Contextual semantic annotations
2009
Proceedings of the fifth international conference on Knowledge capture - K-CAP '09
In this paper we propose an approach to automatically extract annotations by taking into account context in order to obtain a better representation of the document content. ...
It automatically generates a set of contextual semantic annotations represented in RDF. ...
These tools are generally based on linguistic methods such as morpho-syntactic pattern matching [1] or on statistical methods such as frequency of terms co-occurrences. ...
doi:10.1145/1597735.1597782
dblp:conf/kcap/MokhtariC09
fatcat:bozdewl5zbat3fhs2hjpdcfk6m
Challenging Knowledge Extraction to Support the Curation of Documentary Evidence in the Humanities (short paper)
2019
International Conference on Knowledge Capture
After that, we examine general knowledge extraction tasks and discuss their relation to the problem at hand. ...
In this position paper, we ponder the applicability of knowledge extraction techniques to support the data acquisition process. ...
Knowledge extraction is a branch of artificial intelligence covering a variety of tasks related to the automatic or semi-automatic derivation of formal symbolic knowledge from unstructured ...
dblp:conf/kcap/DagaM19a
fatcat:cjrkk5qy4rco3pdewmuyjzbkhq
Diamond multidimensional model and aggregation operators for document OLAP
2015
2015 IEEE 9th International Conference on Research Challenges in Information Science (RCIS)
new aggregation operators for textual data in OLAP environment. ...
This is an author-deposited version published at http://oatao.univ-toulouse.fr/ (Eprints ID: 15441). Abstract: On-Line Analytical Processing (OLAP) has generated methodologies ...
This knowledge structure can be obtained automatically and keeps the semantics of the texts. Fact with a textual dimension: [9] proposes a multidimensional IR engine, MIRE, which is based on a multidimensional ...
doi:10.1109/rcis.2015.7128897
dblp:conf/rcis/AzabouKFSV15
fatcat:omavw7hcqrfspp64vwfnyycm3m
SMART: System Model Acquisition from Requirements Text
[chapter]
2004
Lecture Notes in Computer Science
Modeling of a business system has traditionally been based on free text documents. ...
requirement documents and whose output is an OPM model, expressed both graphically, through a set of Object-Process Diagrams, and textually in equivalent Object-Process Language. ...
on textual scenarios. ...
doi:10.1007/978-3-540-25970-1_12
fatcat:gaccnu4jmnbfri57ohimx7f4py
Utilizing Graph Measure to Deduce Omitted Entities in Paragraphs
2018
International Conference on Computational Linguistics
This demo deals with the problem of capturing omitted arguments in relation extraction given a proper knowledge base for entities of interest. ...
We introduce the concept of a salient entity and use this information to deduce omitted entities in the paragraph which allows improving the relation extraction quality. ...
knowledge base and reasoning platform). ...
dblp:conf/coling/KimHKC18
fatcat:bp5azvlsjva3ldtyhg22hxrutq
Extracting Events from Wikipedia as RDF Triples Linked to Widespread Semantic Web Datasets
[chapter]
2011
Lecture Notes in Computer Science
In this paper we describe an approach to enhance the extraction of semantic contents from unstructured textual documents, in particular considering Wikipedia articles and focusing on event mining. ...
Starting from the deep parsing of a set of English Wikipedia articles, we produce a semantic annotation compliant with the Knowledge Annotation Format (KAF). ...
We have defined an event extraction methodology that takes as input KAF annotated documents: it is based on functional dependencies and on the results of disambiguation of the terms through WordNet synsets ...
doi:10.1007/978-3-642-21796-8_10
fatcat:u7hli4ngqjhrxksxq35jxpmihq
Towards Monitoring of Novel Statements in the News
[chapter]
2016
Lecture Notes in Computer Science
Relevance is defined by a semantic query of the user, while novelty is ensured by checking whether the extracted statements are related, but non-existing in a knowledge base containing the currently known ...
Our evaluation performed on English news texts and on CrunchBase as the knowledge base demonstrates the effectiveness, unique capabilities and future challenges of this novel approach to novelty. ...
Technically, the Textual Triple Extraction step is based on the tool ClausIE [3] . ...
doi:10.1007/978-3-319-34129-3_18
fatcat:2sat4rcikba7fp76e3oxykq7xi
I Do Not Understand What I Cannot Define: Automatic Question Generation With Pedagogically-Driven Content Selection
[article]
2021
arXiv
pre-print
And, how do we phrase the question automatically? We address those challenges with an automatic question generator grounded in learning theory. ...
Automatic question generators may alleviate this scarcity by generating sound pedagogical questions. However, generating questions automatically poses linguistic and pedagogical challenges. ...
In the example, the italic words are extracted because they constitute a relative clause (other relations are omitted for readability). ...
arXiv:2110.04123v1
fatcat:32j4afmtkrfyvar3dflfu4ee2i
Engineering the Production of Meta-Information: The Abstracting Concern
2003
Journal of information science
At the level of content, three significantly different types of procedure stand out, depending on the document structure in question: extracting, rhetorical summarizing and cognitive summarizing. ...
In order to improve the automatic production of metainformation in the abstracting field, an essential starting point is the exposition of the current state of the art. ...
paragraphs on a relational map. ...
doi:10.1177/01655515030295006
fatcat:li77lizccjh4hm43ammkqchgya
NLP-based metadata extraction for legal text consolidation
2009
Proceedings of the 12th International Conference on Artificial Intelligence and Law - ICAIL '09
The proposed approach to consolidation is metadata-oriented and based on Natural Language Processing (NLP) techniques: we use XML-based standards for metadata annotation of legislative acts and a flexible ...
The paper describes a system for the automatic consolidation of Italian legislative texts to be used as a support of an editorial consolidating activity and dealing with the following typology of textual ...
In Japan, an automatic consolidation system for Japanese statutes has been developed based on the formalization and experts' knowledge about consolidation [2] . ...
doi:10.1145/1568234.1568240
dblp:conf/icail/SpinosaGCMVM09
fatcat:3l4rvfjfxvf6zdnuhfao6bpspq
Digging for knowledge with information extraction
2010
Proceedings of the 19th ACM international conference on Information and knowledge management - CIKM '10
We present the information extraction system Text2SemRel. The system (semi-)automatically constructs knowledge bases from textual data consisting of facts about entities using semantic relations. ...
The second contribution in this paper is the presentation of a case study on the (semi-)automatic construction of a knowledge base consisting of gene-disease associations. ...
Contributions and Outline In this paper we present Text2SemRel, which (semi-)automatically constructs knowledge bases of entities and relations extracted from textual data. ...
doi:10.1145/1871437.1871744
dblp:conf/cikm/BundschusBTFK10
fatcat:s2fhvlhz3bfbjl6jyo7q34i2lq
Towards Enriching DBpedia from Vertical Enumerative Structures Using a Distant Learning Approach
[chapter]
2018
Lecture Notes in Computer Science
Automatic construction of semantic resources at large scale usually relies on general purpose corpora as Wikipedia. ...
Our relation extraction approach achieves an overall precision of 62%, and 99% of the extracted relations can enrich DBpedia, with respect to a reference corpus. ...
If the entities are linked in the knowledge base, that entity pair constitutes a positive example, a negative example otherwise. ...
doi:10.1007/978-3-030-03667-6_12
fatcat:e7uv32y2sngvjj2wn4z6wcudve
A Content Analysis Technique for Inconsistency Detection in Software Requirements Documents
2005
Workshop em Engenharia de Requisitos
This technique exploits the extraction, from a requirement document, of the interactions between the entities described in the document as Subject-Action-Object (SAO) triples (obtainable using a suitable ...
a knowledge base of the system. ...
The Java Requirement Analyzer (J-RAn) tool [4] is capable of automatically extracting from the requirement document each paragraph in Natural Language related to a requirement description; sentences of ...
dblp:conf/wer/FantechiS05
fatcat:xpc6n7bryjan7n7lwihth2k6ay
Towards an Automatic Text Comprehension for the Arabic Question-Answering: Semantic and Logical Representation of Texts
2018
Pacific Asia Conference on Language, Information and Computation
This approach is based on the automatic understanding of Arabic texts (question or passages of texts) to transform them into semantic and logical representations. ...
Automatic text comprehension is an arduous task of automatic natural language processing. ...
It is based on querying databases to extract the answer: it transforms the question into a query to retrieve the answer and seeks answers from structured databases based on a knowledge model. ...
dblp:conf/paclic/BakariBN18
fatcat:ccwlvhwsp5hkdgy5z34fa7ariq
Automatic Keyphrase Extractor from Arabic Documents
2016
International Journal of Advanced Computer Science and Applications
The new algorithm, Automatic Keyphrases Extraction from Arabic (AKEA), extracts keyphrases from Arabic documents automatically. ...
The keyphrase is a sentence or a part of a sentence that contains a sequence of words that expresses the meaning and the purpose of any given paragraph. ...
based on part of speech (POS) tags. ...
doi:10.14569/ijacsa.2016.070226
fatcat:nydyag6vt5ba3ho52q5pfjlbri
Showing results 1–15 of 12,048 results