117 Hits in 4.7 sec

Deep Learning for Camera Calibration and Beyond: A Survey [article]

Kang Liao, Lang Nie, Shujuan Huang, Chunyu Lin, Jing Zhang, Yao Zhao, Moncef Gabbouj, Dacheng Tao
2024 arXiv   pre-print
It comprises both synthetic and real-world data, with images and videos captured by different cameras in diverse scenes.  ...  Camera calibration involves estimating camera parameters to infer geometric features from captured sequences, which is crucial for computer vision and robotics.  ...  SSR-Net [55] presents a self-supervised deep homography estimation network, which relaxes the need for ground truth annotations and leverages the invertibility constraints of homography.  ... 
arXiv:2303.10559v2 fatcat:7cln4mukqvbqlimpt2fw5h5cu4
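The invertibility constraint mentioned in the snippet suggests a consistency term between forward and backward homographies. A minimal PyTorch sketch of such a loss, assuming the network predicts normalized 3x3 homographies in both directions (the exact SSR-Net formulation is not given here):

```python
import torch

def homography_invertibility_loss(H_ab: torch.Tensor, H_ba: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of H_ab @ H_ba from the identity.

    H_ab, H_ba: (B, 3, 3) homographies predicted in opposite directions,
    assumed normalized so that H[:, 2, 2] == 1.
    """
    eye = torch.eye(3, device=H_ab.device).expand_as(H_ab)
    return torch.mean(torch.abs(torch.bmm(H_ab, H_ba) - eye))
```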

Depth-Aware Multi-Grid Deep Homography Estimation with Contextual Correlation [article]

Lang Nie, Chunyu Lin, Kang Liao, Shuaicheng Liu, Yao Zhao
2021 arXiv   pre-print
The codes and models will be available at https://github.com/nie-lang/Multi-Grid-Deep-Homography.  ...  The learning solutions, on the contrary, try to learn robust deep features but demonstrate unsatisfactory performance in scenes with low overlap rates.  ...  of the existing single deep homography estimation solutions.  ... 
arXiv:2107.02524v2 fatcat:etpm43j4obgxhlr23u4l7gfyeu
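Single deep homography estimators of the kind referenced in this snippet typically regress displacements of the four image corners rather than the 3x3 matrix directly. An illustrative helper (names are hypothetical) that turns such corner offsets into a homography with OpenCV:

```python
import cv2
import numpy as np

def corners_to_homography(corner_offsets: np.ndarray, h: int, w: int) -> np.ndarray:
    """Convert predicted displacements of the four image corners (4, 2) into a 3x3 homography."""
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    dst = src + corner_offsets.astype(np.float32)
    return cv2.getPerspectiveTransform(src, dst)  # exact solution for four correspondences
```

A multi-grid variant, as in the paper above, would estimate one such transform per mesh cell rather than a single global one.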

A Revisit of the Normalized Eight-Point Algorithm and A Self-Supervised Deep Solution [article]

Bin Fan, Yuchao Dai, Yongduek Seo, Mingyi He
2024 arXiv   pre-print
convolutional neural network with a self-supervised learning strategy to the normalization.  ...  Our learning-based normalization module could be integrated with both traditional (e.g., RANSAC) and deep learning frameworks (affording good interpretability) with minimal effort.  ...  Second, we propose a self-supervised deep neural network to learn a robust normalization scheme for more accurate fundamental matrix estimation.  ... 
arXiv:2304.10771v2 fatcat:7gl2hfl2bvhz5bq4ngrglbpf44
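For reference, the classical Hartley normalization that this paper revisits translates the points to zero mean and scales them so the average distance to the origin is sqrt(2); a NumPy sketch (illustrative, not the paper's learned module):

```python
import numpy as np

def hartley_normalize(pts: np.ndarray):
    """Return normalized (N, 2) points and the 3x3 transform T that produced them."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.c_[pts, np.ones(len(pts))]          # homogeneous coordinates
    return (pts_h @ T.T)[:, :2], T
```

A fundamental matrix estimated from normalized correspondences is then denormalized as F = T2.T @ F_hat @ T1.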

Warp Consistency for Unsupervised Learning of Dense Correspondences [article]

Prune Truong and Martin Danelljan and Fisher Yu and Luc Van Gool
2021 arXiv   pre-print
We derive and analyze all flow-consistency constraints arising between the triplet.  ...  Our objective is effective even in settings with large appearance and view-point changes.  ...  FlowNet 2.0: Evolution of optical flow estimation with deep networks.  ... 
arXiv:2104.03308v3 fatcat:wgukjdkv3ngydgvdsstt6pq7om
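The flow-consistency constraints mentioned in the snippet compare a direct flow against flows composed through a third image. A generic PyTorch sketch of dense-flow composition on which such a constraint can be built (this is not the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def compose_flows(flow_12: torch.Tensor, flow_23: torch.Tensor) -> torch.Tensor:
    """Compose dense flows (B, 2, H, W): result(x) = flow_12(x) + flow_23(x + flow_12(x))."""
    b, _, h, w = flow_12.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=flow_12.dtype),
                            torch.arange(w, dtype=flow_12.dtype), indexing="ij")
    grid = torch.stack((xs, ys)).to(flow_12.device) + flow_12
    norm = torch.stack((2 * grid[:, 0] / (w - 1) - 1,      # grid_sample expects (B, H, W, 2)
                        2 * grid[:, 1] / (h - 1) - 1), dim=-1)
    return flow_12 + F.grid_sample(flow_23, norm, align_corners=True)
```

A consistency loss then penalizes the difference between the composed flow and the directly estimated flow between the first and last images of the triplet.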

DK-SLAM: Monocular Visual SLAM with Deep Keypoints Adaptive Learning, Tracking and Loop-Closing [article]

Hao Qu, Lilian Zhang, Jun Mao, Junbo Tie, Xiaofeng He, Xiaoping Hu, Yifei Shi, Changhao Chen
2024 arXiv   pre-print
To address these issues, we present DK-SLAM, a monocular visual SLAM system with adaptive deep local features.  ...  Initially, a direct method approximates the relative pose between consecutive frames, followed by a feature matching method for refined pose estimation.  ...  self-supervised training, employing a multi-step process.  ... 
arXiv:2401.09160v1 fatcat:ha623h2hjbhzlllo7wd6euj4zy

Semi-supervised Deep Multi-view Stereo [article]

Hongbin Xu, Weitao Chen, Yang Liu, Zhipeng Zhou, Haihong Xiao, Baigui Sun, Xuansong Xie, Wenxiong Kang
2023 arXiv   pre-print
With the same backbone network settings, our proposed SDA-MVS outperforms its fully-supervised and unsupervised baselines.  ...  The visual style of the unlabeled sample is transferred to the labeled sample to shrink the gap, and the model prediction on the generated sample is further supervised with the label of the original labeled sample.  ...  Since the self-supervision loss built on photometric consistency excavates supervision signals from all available pixels in the image, unsupervised MVS has more complete regions with valid supervision constraints  ... 
arXiv:2207.11699v4 fatcat:6gtf2k3jlzb5xhttqx5uer5sd4
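The self-supervision loss described in the last sentence is a photometric consistency term over valid pixels. A minimal masked L1 sketch, assuming a source view has already been warped into the reference frame using the predicted depth (the warping step itself is omitted):

```python
import torch

def photometric_loss(ref: torch.Tensor, warped_src: torch.Tensor,
                     valid_mask: torch.Tensor) -> torch.Tensor:
    """Masked L1 photometric consistency; ref/warped_src are (B, 3, H, W), valid_mask is (B, 1, H, W)."""
    diff = (ref - warped_src).abs() * valid_mask
    return diff.sum() / valid_mask.sum().clamp(min=1.0)
```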

Fully neuromorphic vision and control for autonomous drone flight [article]

Federico Paredes-Vallés, Jesse Hagenaars, Julien Dupeyroux, Stein Stroobants, Yingfu Xu, Guido de Croon
2023 arXiv   pre-print
The vision part of the network, consisting of five layers and 28.8k neurons, maps incoming raw events to ego-motion estimates and is trained with self-supervised learning on real event data.  ...  The control part consists of a single decoding layer and is learned with an evolutionary algorithm in a drone simulator.  ...  Neuromorphic state estimation: self-supervised learning To train our spiking architecture to estimate the displacement of the four corner pixels in a self-supervised fashion, we use the contrast maximization  ... 
arXiv:2303.08778v1 fatcat:a3di2xgkzvhwbm5h2pkgf7uw6y
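Contrast maximization, as cited in the snippet, scores a candidate motion by how sharp the motion-compensated event image becomes. A generic NumPy sketch of the objective for a constant pixel velocity (the paper applies the idea to corner-pixel displacements inside a spiking network, which is not reproduced here):

```python
import numpy as np

def event_contrast(events: np.ndarray, v: np.ndarray, h: int, w: int, t_ref: float) -> float:
    """Variance of the image of events warped to time t_ref by a constant velocity v.

    events: (N, 3) rows of (x, y, t); v: (2,) pixel velocity in pixels per unit time.
    Maximizing this value over v yields the best motion-compensating velocity."""
    x = events[:, 0] - (events[:, 2] - t_ref) * v[0]
    y = events[:, 1] - (events[:, 2] - t_ref) * v[1]
    img, _, _ = np.histogram2d(y, x, bins=(h, w), range=[[0, h], [0, w]])
    return float(img.var())
```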

Remote Sensing Novel View Synthesis with Implicit Multiplane Representations [article]

Yongchang Wu, Zhengxia Zou, Zhenwei Shi
2022 arXiv   pre-print
The 3D scene is reconstructed under a self-supervised optimization paradigm through a differentiable multiplane renderer with multi-view input constraints.  ...  Considering the overhead and far depth imaging of remote sensing images, we represent the 3D space by combining implicit multiplane images (MPI) representation and deep neural networks.  ...  In the proposed method, the 3D scene is constructed under a self-supervised optimization paradigm through a differentiable multiplane renderer with multi-view input constraints.  ... 
arXiv:2205.08908v1 fatcat:nkwlpmubkrdapep3bmyg6se5za
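A multiplane-image representation is rendered by alpha-compositing the planes from back to front. A generic sketch of that compositing step (the paper's implicit, network-predicted MPI and its differentiable renderer are more involved):

```python
import torch

def composite_mpi(rgbs: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """Over-composite D fronto-parallel planes ordered far (index 0) to near.

    rgbs: (D, 3, H, W) per-plane color; alphas: (D, 1, H, W) per-plane opacity."""
    out = torch.zeros_like(rgbs[0])
    for rgb, a in zip(rgbs, alphas):
        out = rgb * a + out * (1.0 - a)
    return out
```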

Self-supervised Light Field View Synthesis Using Cycle Consistency [article]

Yang Chen, Martin Alain, Aljosa Smolic
2020 arXiv   pre-print
To tackle this problem, we propose a self-supervised light field view synthesis framework with cycle consistency.  ...  A cycle consistency constraint is used to build bidirectional mapping enforcing the generated views to be consistent with the input views.  ...  Cycle consistency has proven capable of modeling invertible mapping when direct supervision is unavailable [13] - [15] .  ... 
arXiv:2008.05084v1 fatcat:4vyotfvkfbbp5oao5m533sv2h4
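The cycle consistency constraint in the abstract amounts to synthesizing a novel view, mapping it back to the input position, and penalizing the reconstruction error. A minimal sketch with a placeholder synthesis function (`synth` and the position arguments are hypothetical names, not the paper's API):

```python
import torch

def cycle_consistency_loss(synth, view_a: torch.Tensor, pos_a, pos_b) -> torch.Tensor:
    """Synthesize view B from view A, map it back to position A, and compare with the input."""
    view_b = synth(view_a, pos_a, pos_b)       # forward mapping A -> B
    view_a_rec = synth(view_b, pos_b, pos_a)   # backward mapping B -> A
    return (view_a - view_a_rec).abs().mean()
```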

Self-supervised Light Field View Synthesis Using Cycle Consistency

Yang Chen, Martin Alain, Aljosa Smolic
2020 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)  
To tackle this problem, we propose a self-supervised light field view synthesis framework with cycle consistency.  ...  A cycle consistency constraint is used to build bidirectional mapping enforcing the generated views to be consistent with the input views.  ...  Cycle consistency has proven capable of modeling invertible mapping when direct supervision is unavailable [13] - [15] .  ... 
doi:10.1109/mmsp48831.2020.9287105 fatcat:e7oqiqqlprellew2wcie7icz44

Neural Camera Models [article]

Igor Vasiljevic
2022 arXiv   pre-print
To enable these embodied agents to interact with real-world objects, cameras are increasingly being used as depth sensors, reconstructing the environment for a variety of downstream reasoning tasks.  ...  Machine-learning-aided depth perception, or depth estimation, predicts for each pixel in an image the distance to the imaged scene point.  ...  Self-supervised learning with geometric constraints in monocular video: Connecting flow, depth, and camera.  ... 
arXiv:2208.12903v1 fatcat:kmmanyu7brekxdbe3ynoasq3nq

Simultaneous temperature estimation and nonuniformity correction from multiple frames [article]

Navot Oz, Omri Berman, Nir Sochen, David Mendelovich, Iftach Klapp
2023 arXiv   pre-print
We leverage the camera's physical image-acquisition model and incorporate it into a deep-learning architecture termed kernel prediction network (KPN), which enables us to combine multiple frames despite  ...  We also propose a novel offset block that incorporates the ambient temperature into the model and enables us to estimate the offset of the camera, which is a key factor in temperature estimation.  ...  A homography transformation preserves collinearity between the frames. Moreover, a homography is invertible and linear by definition [31, Def. 2.9].  ... 
arXiv:2307.12297v2 fatcat:aax5tujzsnfalo3dduu23dgyle
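The cited property, that a homography is an invertible linear map on homogeneous pixel coordinates, is easy to verify numerically; a small NumPy example with an arbitrary non-singular homography:

```python
import numpy as np

H = np.array([[1.02, 0.01, 5.0],
              [0.00, 0.98, -3.0],
              [1e-4, 2e-4, 1.0]])            # example homography (non-singular)

p = np.array([120.0, 80.0, 1.0])             # pixel in homogeneous coordinates
q = H @ p
q /= q[2]                                    # project back to the image plane
p_back = np.linalg.inv(H) @ q                # invertibility: q maps back to p (up to scale)
p_back /= p_back[2]
assert np.allclose(p_back, p, atol=1e-6)
```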

There and Back Again: Self-supervised Multispectral Correspondence Estimation [article]

Celyn Walters
2021 arXiv   pre-print
We do this by introducing a novel cycle-consistency metric that allows us to self-supervise.  ...  We also show the performance of our unmodified network on the cases of RGB-NIR and RGB-RGB, where we achieve higher accuracy than similar self-supervised approaches.  ...  For distant scenes, which approximate orthographic projection, alignment can be made with a simple homography registration.  ... 
arXiv:2103.10768v2 fatcat:hk36qvm4jbfe5dcqv563bnpazq
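For the distant-scene case mentioned in the snippet, alignment by a single homography is the standard pipeline. A conventional OpenCV sketch (ORB features plus RANSAC; note that such hand-crafted matching is exactly what tends to fail across spectra, which motivates the paper's learned correspondences):

```python
import cv2
import numpy as np

def register_by_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Warp grayscale image `src` onto `dst` using a single RANSAC-estimated homography."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(src, None)
    k2, d2 = orb.detectAndCompute(dst, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(src, H, (dst.shape[1], dst.shape[0]))
```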

RAUM-VO: Rotational Adjusted Unsupervised Monocular Visual Odometry [article]

Claudio Cimarelli, Hriday Bavle, Jose Luis Sanchez-Lopez, Holger Voos
2022 arXiv   pre-print
Then, we adjust the predicted rotation with the motion estimated by F2F using the 2D matches and initializing the solver with the pose network prediction.  ...  Therefore, we present RAUM-VO, an approach based on a model-free epipolar constraint for frame-to-frame motion estimation (F2F) to adjust the rotation during training and online inference.  ...  [19] , which we name F2F, and use the rotation to guide the training with an additional self-supervised loss.  ... 
arXiv:2203.07162v1 fatcat:6p2fgindvbdthc4ecs5xk5vhsu
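Frame-to-frame motion from 2D matches under an epipolar constraint is commonly computed via the essential matrix. A standard two-view OpenCV sketch (illustrative only; RAUM-VO's model-free solver and its initialization from the pose network are not shown):

```python
import cv2
import numpy as np

def f2f_motion(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    """Rotation R and translation direction t between two frames from (N, 2) matches and intrinsics K."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # translation is recovered only up to scale
```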

Table of Contents

2021 IEEE Signal Processing Letters  
Adv-Depth: Self-Supervised Monocular Depth Estimation With an Adversarial Loss  ...  Reconciling Hand-Crafted and Self-Supervised Deep Priors for Video Directional Rain Streaks Removal  ... 
doi:10.1109/lsp.2021.3134549 fatcat:m6obtl7k7zdqvd62eo3c4tptfy
Showing results 1 — 15 out of 117 results