1,447 Hits in 3.9 sec

Guided Feature Selection for Deep Visual Odometry [article]

Fei Xue, Qiuyuan Wang, Xin Wang, Wei Dong, Junqiu Wang, Hongbin Zha
2018 arXiv   pre-print
We present a novel end-to-end visual odometry architecture with guided feature selection based on deep convolutional recurrent neural networks.  ...  Different from current monocular visual odometry methods, our approach is established on the intuition that features contribute discriminately to different motion patterns.  ...  Guided Feature Selection for Deep Visual Odometry: In this section, we introduce our framework (Fig. 1) in detail. First, the model encodes RGB images to high-level features in § 3.1.  ... 
arXiv:1811.09935v1 fatcat:dt2yhqilhfbkvgetg6egyarrki
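
The snippet above describes encoding RGB frames into high-level features with a convolutional network, passing them through a recurrent model, and weighting features according to how much they contribute to different motion patterns. The following is a minimal PyTorch-style sketch of that general pattern, assuming a simple sigmoid gate as the selection mechanism; the layer sizes, class name GuidedVO, and module layout are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class GuidedVO(nn.Module):
    def __init__(self, feat_dim=256, hidden=512):
        super().__init__()
        # CNN encoder over a stacked pair of consecutive RGB frames (6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Soft gate that re-weights the encoded features before the recurrent model.
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        # Recurrent model accumulates temporal context; head regresses 6-DoF pose.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.pose = nn.Linear(hidden, 6)

    def forward(self, pairs):                    # pairs: (B, T, 6, H, W)
        B, T = pairs.shape[:2]
        feats = self.encoder(pairs.flatten(0, 1)).flatten(1)   # (B*T, feat_dim)
        feats = feats * self.gate(feats)                       # guided feature selection
        out, _ = self.rnn(feats.view(B, T, -1))
        return self.pose(out)                    # (B, T, 6) relative poses

Each time step stacks two consecutive frames channel-wise, so a sequence of T pairs yields T relative pose estimates.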

Learning Selective Sensor Fusion for States Estimation [article]

Changhao Chen, Stefano Rosa, Chris Xiaoxuan Lu, Bing Wang, Niki Trigoni, Andrew Markham
2022 arXiv   pre-print
Although deep learning approaches for multimodal odometry estimation and localization have gained traction, they rarely focus on the issue of robust sensor fusion - a necessary consideration to deal with  ...  Moreover, current deep odometry models suffer from a lack of interpretability.  ...  ACKNOWLEDGMENTS: The authors would like to thank Yishu Miao from MoIntelligence, Wei Wu from Tencent and Wei Wang from University of Oxford for helpful discussions.  ... 
arXiv:1912.13077v2 fatcat:4jieqtk43zfh3hvqa3bkmevftq

Exploring Object-Aware Attention Guided Frame Association for RGB-D SLAM [article]

Ali Caglayan, Nevrez Imamoglu, Oguzhan Guclu, Ali Osman Serhatoglu, Weimin Wang, Ahmet Burak Can, Ryosuke Nakamura
2023 arXiv   pre-print
Moreover, these gradients can be integrated with CNN features for localizing more generalized task-dependent attentive (salient) objects in scenes.  ...  Deep learning models, as an emerging topic, have shown great progress in various fields.  ...  Deep Features for Frame Association / Object Attention Guided CNN Features: the deep feature extraction module in Figure 2 provides the attention-guided compact representation to be used in the loop closure  ... 
arXiv:2201.12047v2 fatcat:x35osap3wrgevhwcz4rgxigwhi
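
The snippet above mentions integrating gradients with CNN features to localize task-dependent salient objects. A gradient-weighted activation map in the spirit of Grad-CAM is one common way to realize that idea; the sketch below illustrates only the generic mechanism with a toy backbone, and the tiny network, the mean-pooled score, and all tensor sizes are placeholder assumptions rather than the paper's setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in CNN; any backbone exposing a final conv feature map would do.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
head = nn.Linear(32, 10)

img = torch.randn(1, 3, 64, 64)
feat = backbone(img)                               # (1, 32, 64, 64) activation map
feat.retain_grad()                                 # keep gradients w.r.t. activations
score = head(feat.mean(dim=(2, 3))).max()          # scalar task score
score.backward()

weights = feat.grad.mean(dim=(2, 3), keepdim=True)     # per-channel importance
cam = F.relu((weights * feat).sum(dim=1))              # (1, 64, 64) attention map
attended = feat * cam.unsqueeze(1)                     # attention-guided features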

Selective Sensor Fusion for Neural Visual-Inertial Odometry

Changhao Chen, Stefano Rosa, Yishu Miao, Chris Xiaoxuan Lu, Wei Wu, Andrew Markham, Niki Trigoni
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Deep learning approaches for Visual-Inertial Odometry (VIO) have proven successful, but they rarely focus on incorporating robust fusion strategies for dealing with imperfect input sensory data.  ...  During testing, the network is able to selectively process the features of the available sensor modalities and produce a trajectory at scale.  ...  Deep Neural Networks for Localization: recent data-driven approaches to visual odometry have gained a lot of attention.  ... 
doi:10.1109/cvpr.2019.01079 dblp:conf/cvpr/ChenRMLWMT19 fatcat:xabsp6nwbrhqdmyuvqk4embmgi

Selective Sensor Fusion for Neural Visual-Inertial Odometry [article]

Changhao Chen, Stefano Rosa, Yishu Miao, Chris Xiaoxuan Lu, Wei Wu, Andrew Markham, Niki Trigoni
2019 arXiv   pre-print
Deep learning approaches for Visual-Inertial Odometry (VIO) have proven successful, but they rarely focus on incorporating robust fusion strategies for dealing with imperfect input sensory data.  ...  During testing, the network is able to selectively process the features of the available sensor modalities and produce a trajectory at scale.  ...  Our selective fusion mechanism re-weights the concatenated inertial-visual features, guided by the concatenated features themselves.  ... 
arXiv:1903.01534v1 fatcat:bxbzvoti2zhitpf3izbtq3usma
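
Both versions of this paper describe re-weighting the concatenated visual and inertial features with a mask derived from the concatenation itself, i.e. a soft, deterministic form of selective fusion. A minimal PyTorch-style sketch of that idea follows; the feature dimensions and the single-layer mask network are assumptions for illustration, not the papers' exact configuration.

import torch
import torch.nn as nn

class SoftSelectiveFusion(nn.Module):
    def __init__(self, visual_dim=512, inertial_dim=128):
        super().__init__()
        d = visual_dim + inertial_dim
        # The mask is produced from the concatenated features themselves.
        self.mask_net = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())

    def forward(self, visual_feat, inertial_feat):
        z = torch.cat([visual_feat, inertial_feat], dim=-1)   # concatenation
        mask = self.mask_net(z)                                # per-dimension weights in [0, 1]
        return z * mask                                        # re-weighted fused feature

fusion = SoftSelectiveFusion()
fused = fusion(torch.randn(4, 512), torch.randn(4, 128))       # -> (4, 640)

A stochastic alternative would sample a near-binary mask (e.g. via Gumbel-softmax) instead of applying the sigmoid weights directly.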

TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint [article]

Shibo Zhao, Peng Wang, Hengrui Zhang, Zheng Fang, Sebastian Scherer
2020 arXiv   pre-print
Finally, taking advantage of an optimization-based visual-inertial framework, a deep feature-based thermal-inertial odometry (TP-TIO) framework is proposed and evaluated thoroughly in various visually  ...  To solve this problem, we propose ThermalPoint, a lightweight feature detection network specifically tailored for producing keypoints on thermal images, providing notable anti-noise improvements compared  ...  The reason for selecting the LOAM algorithm is that it can achieve robust pose estimation in light-smoke environments, where visual odometry fails.  ... 
arXiv:2012.03455v1 fatcat:2hx6wrmyune3zgxuwemtu5xusm

A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence [article]

Changhao Chen, Bing Wang, Chris Xiaoxuan Lu, Niki Trigoni, Andrew Markham
2020 arXiv   pre-print
It is our hope that this work can connect emerging works from robotics, computer vision and machine learning communities, and serve as a guide for future researchers to apply deep learning to tackle localization  ...  In this work, we provide a comprehensive survey, and propose a new taxonomy for localization and mapping using deep learning.  ...  TABLE 1: A summary of existing methods on deep learning for odometry estimation. Model: VO, VIO and LO represent visual odometry, visual-inertial odometry and LIDAR odometry respectively.  ... 
arXiv:2006.12567v2 fatcat:snb2byqamfcblauw5lzccb7umy

EMA-VIO: Deep Visual-Inertial Odometry with External Memory Attention [article]

Zheming Tu, Changhao Chen, Xianfei Pan, Ruochen Liu, Jiarui Cui, Jun Mao
2022 arXiv   pre-print
e.g., on overcast days and water-filled ground, which are difficult for traditional VIO algorithms to extract visual features from.  ...  We propose a novel learning based VIO framework with external memory attention that effectively and efficiently combines visual and inertial features for states estimation.  ...  Our work is mainly relevant to deep learning based visual positioning, deep learning based visual-inertial odometry and attention mechanisms. Thus, we discuss them in this section.  ... 
arXiv:2209.08490v1 fatcat:ope5djagiba3fhewmixks6wyo4
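
The entry above combines visual and inertial features for state estimation with an external memory attention mechanism. The sketch below shows the generic building block that phrase suggests: attending over a buffer of past fused features with the current feature as the query. The single-head dot-product attention and the tensor shapes are assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def attend_memory(query, memory):
    """query: (B, D) current fused feature; memory: (B, N, D) buffer of past features."""
    scores = torch.einsum('bd,bnd->bn', query, memory) / memory.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)                  # attention over memory slots
    return torch.einsum('bn,bnd->bd', weights, memory)   # context read from memory

ctx = attend_memory(torch.randn(2, 64), torch.randn(2, 10, 64))   # -> (2, 64)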

Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation

Yingcai Wan, Qiankun Zhao, Cheng Guo, Chenlong Xu, Lijing Fang
2022 Remote Sensing  
We first capture dense features and solve the pose by deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) by the extended Kalman filter (EKF) method  ...  This paper presents a new deep visual-inertial odometry and depth estimation framework for improving the accuracy of depth estimation and ego-motion from image sequences and inertial measurement unit (IMU)  ...  visual odometry (DVO) with deep inertial odometry (DIO).
doi:10.3390/rs14051228 fatcat:srqcx7oo4fhztjq4qrutqosaau
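
The entry above couples deep visual odometry (DVO) with deep inertial odometry (DIO) through an extended Kalman filter. The toy sketch below shows the basic predict/update cycle such a coupling relies on, with the inertial increment as the process model and the visual pose as the measurement; the identity measurement model, the three-element state, and the noise values are simplifying assumptions rather than the paper's filter.

import numpy as np

def predict(x, P, delta_inertial, Q):
    """Propagate the pose state with the inertial-odometry increment."""
    x = x + delta_inertial            # state prediction
    P = P + Q                         # covariance grows with process noise
    return x, P

def update(x, P, z_visual, R):
    """Correct the prediction with the visual-odometry pose measurement."""
    K = P @ np.linalg.inv(P + R)      # Kalman gain (identity measurement model)
    x = x + K @ (z_visual - x)
    P = (np.eye(len(x)) - K) @ P
    return x, P

x, P = np.zeros(3), np.eye(3) * 0.1
x, P = predict(x, P, np.array([0.10, 0.00, 0.00]), np.eye(3) * 0.01)
x, P = update(x, P, np.array([0.12, 0.01, 0.00]), np.eye(3) * 0.05)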

TOWARDS GUIDED UNDERWATER SURVEY USING LIGHT VISUAL ODOMETRY

M. M. Nawaf, P. Drap, J. P. Royer, D. Merad, M. Saccone
2017 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
A lightweight distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time.  ...  The built system opens promising avenues for further development and integration of embedded computer vision techniques.  ...  Most importantly, we propose to guide the survey based on a visual odometry approach that runs on a distributed embedded system in real time.  ... 
doi:10.5194/isprs-archives-xlii-2-w3-527-2017 fatcat:tri2vrzsnbeuzbvlaeh5embsje

Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments [article]

Dan Barnes, Will Maddern, Geoffrey Pascoe, Ingmar Posner
2018 arXiv   pre-print
At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching.  ...  We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network.  ...  Fig. 6 illustrates the predicted depth, selected sparse features and weighted dense intensity values used for a typical urban scene.  ... 
arXiv:1711.06623v2 fatcat:tg7no3np45dj7nau5cxmlisf2i
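
The entry above feeds a predicted per-pixel ephemerality (likely-to-move) mask and depth into a monocular VO pipeline. For the sparse-feature variant, one plausible reading is to simply drop matches that fall on high-ephemerality pixels before estimating the relative pose, as in the hypothetical sketch below; the threshold, the function name, and the source of the ephemerality map are assumptions, while the OpenCV calls are standard.

import cv2
import numpy as np

def pose_from_static_features(kp1, kp2, ephemerality, K, thresh=0.5):
    """kp1, kp2: (N, 2) matched pixel coords; ephemerality: (H, W) map in [0, 1]."""
    # Keep only matches whose source pixel is predicted to be static.
    keep = ephemerality[kp1[:, 1].astype(int), kp1[:, 0].astype(int)] < thresh
    p1, p2 = kp1[keep], kp2[keep]
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t   # rotation and unit-scale translation between frames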

DF-VO: What Should Be Learnt for Visual Odometry? [article]

Huangying Zhan, Chamara Saroj Weerasekera, Jia-Wang Bian, Ravi Garg, Ian Reid
2021 arXiv   pre-print
Multi-view geometry-based methods have dominated monocular Visual Odometry for the last few decades owing to their superior performance, but they are vulnerable to dynamic and low-texture scenes.  ...  ORB-SLAM (3.247%) in terms of translation error in the KITTI Odometry benchmark. Source code is publicly available at: https://github.com/Huangying-Zhan/DF-VO  ...  Acknowledgment: This work was supported by the UoA Scholarship to HZ, the ARC Laureate Fellowship FL130100102 to IR and the Australian Centre of Excellence for Robotic Vision CE140100016.  ... 
arXiv:2103.00933v1 fatcat:gs4bsysoozelfmv7mdntx3xiba

DeepLO: Geometry-Aware Deep LiDAR Odometry [article]

Younggun Cho, Giseop Kim, Ayoung Kim
2019 arXiv   pre-print
These groundbreaking works focus on unsupervised learning for odometry estimation but mostly for visual sensors.  ...  In this paper, we propose a novel approach to geometry-aware deep LiDAR odometry trainable via both supervised and unsupervised frameworks.  ...  For example, visual-LiDAR odometry and mapping (V-LOAM) ranks first on the KITTI odometry benchmark and has shown remarkable accuracy.  ... 
arXiv:1902.10562v1 fatcat:usorlpcd6zbrton7hseyvrfkta

Deep Learning for Visual Localization and Mapping: A Survey [article]

Changhao Chen, Bing Wang, Chris Xiaoxuan Lu, Niki Trigoni, Andrew Markham
2023 arXiv   pre-print
to apply deep learning to tackle the problem of visual localization and mapping.  ...  To this end, a series of localization and mapping topics are investigated, from learning-based visual odometry and global relocalization to mapping and simultaneous localization and mapping (SLAM).  ...  To tackle the deep sensor fusion problem, [251], [252] propose selective sensor fusion, which selectively learns context-dependent representations for visual-inertial pose estimation.  ... 
arXiv:2308.14039v1 fatcat:h26nrsakhbcm5ja7vhacld4t3u

Training Deep SLAM on Single Frames [article]

Igor Slinko, Anna Vorontsova, Dmitry Zhukov, Olga Barinova, Anton Konushin
2019 arXiv   pre-print
In this work, we focus on generating synthetic data for deep learning-based visual odometry and SLAM methods that take optical flow as an input.  ...  We train the visual odometry model on synthetic data and do not use ground-truth poses, hence this model can be considered unsupervised.  ...  Supervised learning-based methods: DeepVO [32] was a pioneering work using deep learning for visual odometry.  ... 
arXiv:1912.05405v1 fatcat:zgqpolbxr5fgvgjt762pvf4czu
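
The entry above trains an optical-flow-based VO model on data generated from single frames. One way to realize that, assuming a depth map and camera intrinsics are available for each frame, is to sample a random relative pose, reproject every pixel into the virtual second view, and take the pixel displacement as ground-truth flow, as sketched below; the function name and array shapes are illustrative, not the paper's exact pipeline.

import numpy as np

def synthetic_flow(depth, K, R, t):
    """depth: (H, W); K: (3, 3) intrinsics; R, t: sampled relative camera pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T        # (3, H*W)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                      # back-project
    proj = K @ (R @ pts + t.reshape(3, 1))                                   # into virtual view
    uv2 = (proj[:2] / proj[2:]).T.reshape(H, W, 2)
    return uv2 - np.stack([u, v], axis=-1)                                   # flow field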
Showing results 1-15 out of 1,447 results