1,158 Hits in 5.4 sec

Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation [article]

Yoichiro Hisadome, Tianyi Wu, Jiawei Qin, Yusuke Sugano
2023 arXiv   pre-print
This work proposes a generalizable multi-view gaze estimation task and a cross-view feature fusion method to address this issue.  ...  Appearance-based gaze estimation has been actively studied in recent years. However, its generalization performance for unseen head poses remains a significant limitation of existing methods.  ...  In conclusion, we presented a novel multi-view appearance-based gaze estimation task.  ...
arXiv:2305.12704v3 fatcat:3zcnpeeysnhwhoev2umsc56euy

Robust Real-Time Multi-View Eye Tracking [article]

Nuri Murat Arar, Jean-Philippe Thiran
2018 arXiv   pre-print
The features extracted from the various appearances are then used to estimate multiple gaze outputs.  ...  Leveraging multi-view appearances enables more reliable detection of gaze features under challenging conditions, particularly when they are obstructed in a conventional single-view appearance due to large head  ...  Next, we apply cross ratio-based gaze estimation with the detected gaze features to calculate raw PoRs (a brief illustrative sketch follows this entry).  ...
arXiv:1711.05444v2 fatcat:kyb2hdh5rzhcxfmr326did3gze
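The raw PoR computation mentioned above belongs to the cross-ratio family of methods. As a rough, illustrative sketch (not the authors' pipeline), the quadrilateral formed by four corneal glints of screen-corner IR LEDs can be mapped to the screen corners by a homography, through which the pupil centre is projected; all coordinates and the display size below are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical image coordinates of the four corneal glints produced by IR LEDs
# at the screen corners (TL, TR, BR, BL) and of the detected pupil centre.
glints = np.float32([[310, 205], [390, 207], [388, 262], [312, 260]])
pupil_centre = np.float32([[352, 231]])

# Screen corner coordinates in pixels for a hypothetical 1920x1080 display.
screen = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

# Projective invariance: the glint quadrilateral maps to the screen corners by a
# homography; projecting the pupil centre through it gives a raw point of regard.
H = cv2.getPerspectiveTransform(glints, screen)
raw_por = cv2.perspectiveTransform(pupil_centre.reshape(-1, 1, 2), H).reshape(2)
print("raw PoR (px):", raw_por)
```

In practice, cross-ratio methods usually add a subject-specific bias correction on top of this raw PoR, which the paper's calibration stage would supply.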

Continuous Driver's Gaze Zone Estimation Using RGB-D Camera

Yafei Wang, Guoliang Yuan, Zetian Mi, Jinjia Peng, Xueyan Ding, Zheng Liang, Xianping Fu
2019 Sensors  
In this work, a solution for a continuous driver gaze zone estimation system in real-world driving situations is proposed, combining multi-zone ICP-based head pose tracking and appearance-based gaze estimation  ...  For the RGB information, an appearance-based gaze estimation method with two-stage neighbor selection is used (a toy sketch follows this entry), which treats gaze prediction as a combination of neighbor queries (in head pose and  ...  Acknowledgments: The authors sincerely thank the editors and anonymous reviewers for their very helpful and kind comments, which improved the presentation of the paper.  ...
doi:10.3390/s19061287 fatcat:tmc4hnhz55eynfs2xfo7udka2e
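The two-stage neighbor selection described above suggests a k-nearest-neighbor flavour of appearance-based estimation: shortlist training samples by head pose, then refine by appearance similarity and average the neighbours' gaze labels. The sketch below is a generic toy version on synthetic data, not the paper's actual selection criteria.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Synthetic training set: head-pose angles (pitch, yaw), an eye-appearance
# descriptor, and ground-truth gaze angles -- all placeholder data.
head_pose = rng.uniform(-30, 30, size=(500, 2))
appearance = rng.normal(size=(500, 32))
gaze_gt = rng.uniform(-20, 20, size=(500, 2))

pose_index = NearestNeighbors(n_neighbors=50).fit(head_pose)

def predict_gaze(query_pose, query_app, k=5):
    # Stage 1: shortlist candidates whose head pose is close to the query.
    _, idx = pose_index.kneighbors(query_pose.reshape(1, -1))
    cand = idx[0]
    # Stage 2: among the shortlist, pick the k most similar appearances and
    # average their ground-truth gaze as the prediction.
    app_index = NearestNeighbors(n_neighbors=k).fit(appearance[cand])
    _, j = app_index.kneighbors(query_app.reshape(1, -1))
    return gaze_gt[cand[j[0]]].mean(axis=0)

print(predict_gaze(np.array([5.0, -3.0]), rng.normal(size=32)))
```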

Deep Fusion for 3D Gaze Estimation from Natural Face Images using Multi-stream CNNs

Abid Ali, Yong-Guk Kim
2020 IEEE Access  
Index terms: gaze estimation, data fusion, convolutional neural networks, MPIIGaze, EYEDIAP.  ...  In this paper, we propose a 3D gaze estimation framework from a data-science perspective: first, a novel neural network architecture is designed to exploit every possible visual attribute such as  ...  Appearance-based gaze estimation: appearance-based methods aim to map raw input images directly to gaze directions (a minimal sketch follows this entry).  ...
doi:10.1109/access.2020.2986815 fatcat:v6hcaczddvaonlke4z6eogelzy
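As the excerpt notes, appearance-based methods map raw images directly to gaze directions. Below is a minimal single-stream sketch of that idea; the layer sizes are arbitrary placeholders and this is not the paper's multi-stream architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal appearance-based regressor: a small CNN maps an eye/face crop directly
# to a unit 3D gaze vector.
class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # 3D gaze direction

    def forward(self, x):
        return F.normalize(self.head(self.features(x).flatten(1)), dim=1)

model = GazeNet()
crops = torch.randn(4, 3, 64, 64)   # a batch of synthetic face crops
print(model(crops).shape)           # torch.Size([4, 3])
```

Training would typically minimize an angular or L2 loss against ground-truth gaze vectors from datasets such as MPIIGaze or EYEDIAP.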

Audio-Video detection of the active speaker in meetings

Francisco Madrigal, Frederic Lerasle, Lionel Pibre, Isabelle Ferrane
2021 2020 25th International Conference on Pattern Recognition (ICPR)  
Contextual reasoning is done with an original methodology based on the gaze of all participants. We evaluate our proposal on a public state-of-the-art benchmark, the AMI corpus.  ...  We analyse several CNN architectures with both cues: raw pixels (RGB images) and motion (estimated with optical flow).  ...  Nevertheless, the evaluations on the available datasets of increasing complexity, with one or multiple people observed within the field of view, are encouraging for further analysis and experimentation  ...
doi:10.1109/icpr48806.2021.9412681 fatcat:bbvxj2nmqfbg3egluv5364oqvq

2020 Index IEEE Transactions on Image Processing Vol. 29

2020 IEEE Transactions on Image Processing  
Index entries matched include: Unsupervised Multi-View Constrained Convolutional Network for Accurate Depth Estimation (TIP 2020).  ...
doi:10.1109/tip.2020.3046056 fatcat:24m6k2elprf2nfmucbjzhvzk3m

Author Index

2010 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition  
...  Freund, Ziv: An Eye for an Eye: A Single Camera Gaze-Replacement Method; Friedlhuber, Thomas: Demo: The OpenTL Framework for Model-based Visual Tracking; Friedman, Yuli: Workshop: Appearance-based  ...  Gait Recognition based on Local Motion Feature Selection; Multi-View Structure Computation without Explicitly Estimating Motion; Li, Hongsheng: Object Matching with a Locally Affine-Invariant Constraint  ...
doi:10.1109/cvpr.2010.5539913 fatcat:y6m5knstrzfyfin6jzusc42p54

Remote Eye Gaze Tracking Research: A Comparative Evaluation on Past and Recent Progress

Ibrahim Shehi Shehu, Yafei Wang, Athuman Mohamed Athuman, Xianping Fu
2021 Electronics  
Since the early 2000s, eye gaze tracking systems have emerged as interactive gaze-based systems that can be remotely deployed and operated, known as remote eye gaze tracking (REGT) systems.  ...  In this paper, we present a comparative evaluation of REGT systems intended for PoG and LoS estimation, covering past and recent progress.  ...  However, a feature-based or model-based method with constrained illumination (i.e., NIR) still achieves better gaze estimation accuracy than an appearance-based method under visible light [199
doi:10.3390/electronics10243165 fatcat:ju3gsvvlezcvxplda6f5aabb74

Robust validation of Visual Focus of Attention using adaptive fusion of head and eye gaze patterns

Stylianos Asteriadis, Kostas Karpouzis, Stefanos Kollias
2011 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops)  
We propose a framework for inferring a person's focus of attention, using information from both head rotation and eye gaze estimation (a toy fusion sketch follows this entry).  ...  For head pose we propose Bayesian modality fusion of both local and holistic information, while for eye gaze we propose a methodology that calculates eye gaze directionality, removing the influence of  ...  In Section 3 the method for head pose estimation based on local features and holistic information is presented and the proposed fusion scheme is analyzed, while Section 4 explains how eye gaze information  ...
doi:10.1109/iccvw.2011.6130271 dblp:conf/iccvw/AsteriadisKK11 fatcat:qh7e6g63tvaljlww5aapt7lwdq
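A toy version of combining head-pose and eye-gaze directions with per-modality confidences is sketched below; the angles, weights, and attention threshold are hypothetical, and the paper's adaptive Bayesian fusion is considerably richer than this weighted average.

```python
# Toy confidence-weighted fusion of head-pose and eye-gaze yaw (degrees).
head_yaw, head_conf = 12.0, 0.4   # hypothetical head-pose estimate and confidence
eye_yaw, eye_conf = 25.0, 0.6     # hypothetical eye-gaze estimate and confidence

fused_yaw = (head_conf * head_yaw + eye_conf * eye_yaw) / (head_conf + eye_conf)

# Declare a visual target "attended" when the fused direction falls inside an
# angular window around the target direction (threshold is illustrative only).
target_yaw, window = 22.0, 10.0
print("fused yaw:", round(fused_yaw, 1), "attended:", abs(fused_yaw - target_yaw) < window)
```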

Sensor-Assisted Face Recognition System on Smart Glass via Multi-View Sparse Representation Classification

Weitao Xu, Yiran Shen, Neil Bergmann, Wen Hu
2016 2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)  
The system is based on a novel face recognition algorithm, Multi-view Sparse Representation Classification (MVSRC), which exploits the abundant information among multi-view face images (a generic SRC sketch follows this entry).  ...  Our evaluations on public and private datasets show that the proposed method is up to 10% more accurate than state-of-the-art multi-view face recognition methods, while its computation cost is in the  ...  For image-based gaze estimation, we first perform facial feature detection and then use the method in [23] to estimate the gaze.  ...
doi:10.1109/ipsn.2016.7460721 dblp:conf/ipsn/XuSBH16 fatcat:bcxoyxg7bjh6lpdxmrspqenvju
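MVSRC builds on sparse representation classification (SRC). The sketch below shows the generic single-view SRC recipe on synthetic data (sparse-code a probe over a dictionary of training descriptors, then classify by per-class reconstruction residual); it is not the paper's multi-view formulation, and all sizes are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
# Synthetic dictionary: columns are training face descriptors grouped by identity.
n_classes, per_class, dim = 5, 10, 64
D = rng.normal(size=(dim, n_classes * per_class))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(n_classes), per_class)

# A probe from class 2, simulated as a combination of that class's columns.
y = D[:, labels == 2] @ rng.normal(size=per_class)

# Sparse coding: find a sparse x with y ~= D x (Lasso as an l1 surrogate).
x = Lasso(alpha=0.001, fit_intercept=False, max_iter=20000).fit(D, y).coef_

# Classification: smallest reconstruction residual using only each class's coefficients.
residuals = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
             for c in range(n_classes)]
print("predicted identity:", int(np.argmin(residuals)))
```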

Ensemble Learning for Fusion of Multiview Vision with Occlusion and Missing Information: Framework and Evaluations with Real-World Data and Applications in Driver Hand Activity Recognition [article]

Ross Greer, Mohan Trivedi
2023 arXiv   pre-print
on within-group subjects, and that our multi-camera framework performs best on average in cross-group validation, and that the fusion approach outperforms ensemble weighted majority and model combination  ...  Multi-sensor frameworks provide opportunities for ensemble learning and sensor fusion to make use of redundancy and supplemental information, which is helpful in real-world safety applications such as continuous  ...  For the single-view model, we make direct inferences; for the multi-view models, we pass the logits to ensemble algorithms or pass the CNN output feature maps to a neural network for late fusion (a simple late-fusion sketch follows this entry).  ...
arXiv:2301.12592v2 fatcat:siknw5hfxrh43hblhsnrhl5pra
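One simple late-fusion baseline consistent with the excerpt is a weighted average of per-view class posteriors computed from the logits; the camera names, weights, and logit values below are hypothetical, and the paper's learned fusion network goes beyond this.

```python
import numpy as np

# Hypothetical per-view classifier logits for one sample over 4 activity classes.
view_logits = {
    "cam_front": np.array([2.1, 0.3, -0.5, 0.1]),
    "cam_side":  np.array([1.5, 0.9,  0.2, -0.3]),
}
# View weights could come from per-view validation accuracy; values are illustrative.
view_weights = {"cam_front": 0.6, "cam_side": 0.4}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Weighted average of per-view posteriors, then pick the highest-scoring class.
fused = sum(view_weights[v] * softmax(l) for v, l in view_logits.items())
print("fused class:", int(np.argmax(fused)))
```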

Improving retail efficiency through sensing technologies: A survey

M. Quintana, J.M. Menendez, F. Alvarez, J.P. Lopez
2016 Pattern Recognition Letters  
The manuscript reviews relevant algorithms for inferring consumer properties such as demographics, attention, or behavior from their appearance (computer vision techniques) and from signals captured by  ...  Measuring customer reaction to new products, in order to understand their level of engagement, is necessary for the future of retail.  ...  They are used as evidence for Bayesian networks (BNs [6]) that perform multi-emotion tagging on video based on prior multi-expression detection in images.  ...
doi:10.1016/j.patrec.2016.05.027 fatcat:7lnqil5utrgtjpf26apk6n4mlq

A Deep Learning Approach for Multi-View Engagement Estimation of Children in a Child-Robot Joint Attention task [article]

Jack Hadfield, Georgia Chalvatzaki, Petros Koutras, Mehdi Khamassi, Costas S. Tzafestas, Petros Maragos
2018 arXiv   pre-print
We propose a deep learning-based multi-view solution that takes advantage of recent developments in human pose detection.  ...  In this work we tackle the problem of estimating child engagement while children freely interact with a robot in their room.  ...
arXiv:1812.00253v1 fatcat:m4bhlwqguba6fbzdokezzey22q

Learning to Detect Head Movement in Unconstrained Remote Gaze Estimation in the Wild [article]

Zhecan Wang, Jian Zhao, Cheng Lu, Han Huang, Fan Yang, Lianji Li, Yandong Guo
2020 arXiv   pre-print
In this paper, we propose novel end-to-end appearance-based gaze estimation methods that more robustly incorporate different levels of head-pose representation into gaze estimation.  ...  Prior solutions struggle to maintain reliable accuracy in unconstrained remote gaze tracking. Among them, appearance-based solutions demonstrate tremendous potential for improving gaze accuracy.  ...  They can be divided into two main categories, i.e., appearance-based and model-based methods. Model-based methods often use geometric priors to regularize models for gaze estimation.  ...
arXiv:2004.03737v1 fatcat:77tqsotbevcz5bwo6qs7dgqf7m

Image registration for foveated panoramic sensing

Fadi Dornaika, James H. Elder
2012 ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)  
The first registration method is based on matching extracted interest points using a closed-form method.  ...  Such systems may find application in surveillance and telepresence systems that require a large field of view and high resolution at selected locations.  ...  In the following we develop and compare feature-based and featureless methods for estimating this homography (a feature-based sketch follows this entry).  ...
doi:10.1145/2168996.2168997 fatcat:e7efol6wg5hv7eqmxhmb5rcx44
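A common feature-based way to estimate such a homography is to match interest points between the two views and fit H to the matches. The sketch below uses ORB features and RANSAC purely for illustration (the paper itself uses a closed-form method on matched interest points) and builds a synthetic image pair so it is self-contained; nothing here reproduces the foveated-sensor setup.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Build a synthetic pair: a texture of random rectangles and a perspectively
# warped copy, so the sketch is self-contained (real use loads two camera views).
img1 = np.zeros((480, 640), np.uint8)
for _ in range(80):
    x, y = int(rng.integers(0, 600)), int(rng.integers(0, 440))
    cv2.rectangle(img1, (x, y), (x + 30, y + 20), int(rng.integers(60, 255)), -1)
H_true = np.array([[1.0, 0.02, 15.0], [0.01, 1.0, -8.0], [1e-5, 2e-5, 1.0]])
img2 = cv2.warpPerspective(img1, H_true, (640, 480))

# Feature-based pipeline: detect/describe interest points, match them, and fit
# the homography robustly with RANSAC.
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H_est, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print(np.round(H_est, 3))  # should be close to H_true (both normalized so H[2,2] = 1)
```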
Showing results 1-15 of 1,158