Deep Multi-level Semantic Hashing for Cross-modal Retrieval
2019
IEEE Access
A deep hashing framework is designed for multi-label image-text cross-modal retrieval tasks. ...
Due to their efficiency in storage and computation, hashing-based methods are broadly used for large-scale cross-modal retrieval. ...
CONCLUSION In this paper, we propose a deep multi-level semantic hashing method for cross-modal retrieval. ...
doi:10.1109/access.2019.2899536
fatcat:xynopqlgyfhe3ef6su55zqczim
Hybrid-attention based Feature-reconstructive Adversarial Hashing Networks for Cross-modal Retrieval
2022
Maǧallaẗ al-abḥāṯ al-handasiyyaẗ
With the massive growth of data of various modal types, people no longer use single-modal retrieval methods but cross-modal retrieval methods when performing retrieval tasks. ...
In order to solve this problem, we propose a Hybrid-attention based Feature-reconstructive Adversarial Hashing (HFAH) networks for cross-modal retrieval. ...
Among the many methods for cross-modal retrieval, cross-modal retrieval based on hashing learning (Gionis, A., 1999) (Shen et al., 2015) is among the most common. ...
doi:10.36909/jer.iccsct.19467
fatcat:h56a7tjsbrhxnfbxfrh4anzwi4
Deep Multi-Semantic Fusion-Based Cross-Modal Hashing
2022
Mathematics
Due to its low storage and search costs, cross-modal hashing retrieval has received much research interest in the big data era. ...
To this end, this paper proposes deep multi-semantic fusion-based cross-modal hashing (DMSFH), which uses two deep neural networks to extract cross-modal features, and uses a multi-label semantic fusion ...
Third, our method was tested on a specific dataset, and common cross-modal hash retrieval methods use data of known categories, but in practical applications, the rapid emergence of new unlabeled things ...
doi:10.3390/math10030430
fatcat:yri6dbd53zglhc77wtpswxgsoa
Cross-Modal Hashing Retrieval Based on Deep Residual Network
2021
Computer systems science and engineering
This paper proposes a new solution to the problem of feature extraction and unified mapping of different modes: A Cross-Modal Hashing retrieval algorithm based on Deep Residual Network (CMHR-DRN). ...
In the era of big data rich in we-media content, single-modal retrieval systems have been unable to meet people's demand for information retrieval. ...
Acknowledgement: This paper would like to thank all the authors cited in the reference for their contributions to this field. ...
doi:10.32604/csse.2021.014563
fatcat:maw3votu7bdl5csccvqniby65i
MULTI-MODAL RETRIEVAL IN NEWS FEED APP USING GCDL TECHNIQUE
2017
International Journal of Recent Trends in Engineering and Research
Due to the efficiency of hashing-based methods, there also exists a rich line of work focusing on the problem of mapping multi-modal high-dimensional data to low-dimensional hash codes, such as Latent semantic ...
Hashing methods have proven to be useful for a variety of tasks and have attracted extensive attention in recent years. ...
Hashing-Based Methods. In recent years, hashing-based methods, which create compact similarity-preserving hash codes for single-modal or cross-modal retrieval on large-scale databases, have attracted ...
doi:10.23883/ijrter.2017.3365.aeikk
fatcat:6dmfmfsmtbaejale6t63ts7may
Creating Something From Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
the same space, so that it becomes efficient in cross-modal data retrieval. ...
In recent years, cross-modal hashing (CMH) has attracted increasing attention, mainly because of its potential ability to map contents from different modalities, especially in vision and language, into ...
Related Work
Cross-Modal Retrieval and Hashing. Cross-modal retrieval aims to search semantically similar instances in one modality using a query from another modality [37, 39]. ...
doi:10.1109/cvpr42600.2020.00319
dblp:conf/cvpr/HuXH020
fatcat:siwq4j4dmfbxzfps7m37razswi
Creating Something from Nothing: Unsupervised Knowledge Distillation for Cross-Modal Hashing
[article]
2020
arXiv
pre-print
the same space, so that it becomes efficient in cross-modal data retrieval. ...
In recent years, cross-modal hashing (CMH) has attracted increasing attention, mainly because of its potential ability to map contents from different modalities, especially in vision and language, into ...
Related Work
Cross-Modal Retrieval and Hashing. Cross-modal retrieval aims to search semantically similar instances in one modality using a query from another modality [37, 39]. ...
arXiv:2004.00280v1
fatcat:xrwcao6ayzepnpuj2pqsu3lv7e
Cross-Media Hashing with Neural Networks
2014
Proceedings of the ACM International Conference on Multimedia - MM '14
Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. ...
By requiring in the learning objective that a) the hash codes for relevant cross-media data are similar, and b) the hash codes are discriminative for predicting the class labels, the learned Hamming space ...
For the image modality, 500-D BoVW are extracted for each image. For the text modality, the corresponding tags of each image are represented by a 1,000-D BoW. ...
doi:10.1145/2647868.2655059
dblp:conf/mm/ZhuangYWWTS14
fatcat:lj7kidnxyjf4zjlnegwxw3nhoq
A Deep Cross-Modality Hashing Network for SAR and Optical Remote Sensing Images Retrieval
2020
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
To address this limitation, this study proposes a deep cross-modality hashing network (DCMHN). ...
Finally, the triplet loss, in combination with the hash function, helps the model extract the discriminative features of images and improve the retrieval efficiency. ...
Supervised Cross-Modality Hashing Methods. Hashing methods have attracted considerable attention due to their low storage costs and fast retrieval speed. ...
doi:10.1109/jstars.2020.3021390
fatcat:y7zmf6luejbzri5qaymoed2zii
MTFH: A Matrix Tri-Factorization Hashing Framework for Efficient Cross-Modal Retrieval
[article]
2018
arXiv
pre-print
As a result, the derived hash codes are more semantically meaningful for various challenging cross-modal retrieval tasks. ...
Most existing cross-modal hashing methods learn unified hash codes in a common Hamming space to represent all multi-modal data and make them intuitively comparable. ...
These three datasets consist of both image and text modalities, which are frequently utilized for cross-modal retrieval evaluation. ...
arXiv:1805.01963v1
fatcat:oooqywtqo5honeaccssqaekb3q
Discriminative coupled dictionary hashing for fast cross-media retrieval
2014
Proceedings of the 37th international ACM SIGIR conference on Research & development in information retrieval - SIGIR '14
We propose a discriminative coupled dictionary hashing (DCDH) method in this paper. In DCDH, the coupled dictionary for each modality is learned with side information (e.g., categories). ...
Cross-media hashing, which conducts cross-media retrieval by embedding data from different modalities into a common low-dimensional Hamming space, has attracted intensive attention in recent years. ...
Compared Methods. We perform three types of retrieval schemes in the experiments: 1) Image-query-Texts: use image queries to retrieve relevant texts. 2) Text-query-Images: use text queries to retrieve ...
doi:10.1145/2600428.2609563
dblp:conf/sigir/YuWYTLZ14
fatcat:igpcpkocsrggvmldkcboj2ofly
TDCMR: Triplet-Based Deep Cross-Modal Retrieval for Geo-Multimedia Data
2021
Applied Sciences
To combat this challenge, the paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which utilizes deep neural network ...
Besides, a novel hybrid index, called TH-Quadtree, is developed by combining cross-modal binary hash codes and quadtree to support high-performance search. ...
Our Method. To this end, this paper proposes a novel, efficient cross-modal hashing approach, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR). ...
doi:10.3390/app112210803
fatcat:isorl3ocfbbmtcfkgzr7i4mdb4
A Cross-Modal Image and Text Retrieval Method Based on Efficient Feature Extraction and Interactive Learning CAE
2022
Scientific Programming
Given the complexity of the multimodal environment and the inability of existing shallow network structures to achieve high-precision image and text retrieval, a cross-modal image and text retrieval method ...
Then, based on interactive learning CAE, cross-modal retrieval of images and text is realized. ...
analysis (ml-KCCA) method proposed in [15] and a cross-modal hashing retrieval method (DNDCMH) proposed in [22]. ...
doi:10.1155/2022/7314599
fatcat:n4h2va3upvep3fgjem6nypc2we
A Review of Hashing Methods for Multimodal Retrieval
2020
IEEE Access
INDEX TERMS Multimedia, multimodal retrieval, hashing method, deep learning, reviews. ...
Among many retrieval methods, the hashing method is widely used in multimodal data retrieval due to its low storage cost and its fast, effective search. ...
In general, single-modal methods are the basis for the transition to cross-modal and multi-modal methods. ...
doi:10.1109/access.2020.2968154
fatcat:e3vmte5hrnhu3b3lf5ws4gwnhm
Adaptive Asymmetric Label-guided Hashing for Multimedia Search
[article]
2022
arXiv
pre-print
With the rapid growth of multimodal media data on the Web in recent years, hash learning methods, as a way to achieve efficient and flexible cross-modal retrieval of massive multimedia data, have received ...
In order to address the above obstacles, we present a simple yet efficient Adaptive Asymmetric Label-guided Hashing method, named A2LH, for Multimedia Search. Specifically, A2LH is a two-step hashing method. ...
Acknowledgment The authors would also like to thank the associate editor and anonymous reviewers for their comments to improve the paper. ...
arXiv:2207.12625v1
fatcat:qgdctu237bbz7fn6gbg5s2xlk4
Showing results 1 — 15 out of 755 results