8,495 Hits in 5.3 sec

Detection and Classification of COVID-19 by Radiological Imaging Modalities Using Deep Learning Techniques: A Literature Review

Albatoul S. Althenayan, Shada A. AlSalamah, Sherin Aly, Thamer Nouh, Abdulrahman A. Mirza
2022 Applied Sciences  
The existing research on the COVID-19 classification problem suffers from limitations due to the use of the binary or flat multiclass classification, and building classifiers based on only a few classes  ...  The paper concludes with a list of recommendations, which are expected to assist future researchers in improving the diagnostic process for COVID-19 in particular.  ...  Acknowledgments: The authors would like to thank the editor and reviewers for spending their valuable time reviewing and polishing this article.  ... 
doi:10.3390/app122010535 fatcat:fom62njnljar5l5fb64456etay

AIML at VQA-Med 2020: Knowledge Inference via a Skeleton-based Sentence Mapping Approach for Medical Domain Visual Question Answering

Zhibin Liao, Qi Wu, Chunhua Shen, Anton van den Hengel, Johan Verjans
2020 Conference and Labs of the Evaluation Forum  
This enabled us to apply a multi-scale and multi-architecture ensemble strategy for robust prediction.  ...  Lastly, we positioned the VQG task as a transfer learning problem using the VQA task-trained models. The VQG task was also solved using classification.  ...  Multi-scale and multi-architecture ensemble: We adopted a multi-scale learning technique, using 128, 256, 384, and 512 as candidate image resize options.  ... 
dblp:conf/clef/LiaoWSHV20 fatcat:klcz6gntyfgpzfa2467uwix77e
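The multi-scale ensemble described in the snippet above (averaging predictions over several candidate image sizes) can be sketched as follows. This is a minimal illustration, not the authors' implementation; `classify_at_scale` is a hypothetical stand-in for a trained network.

```python
SCALES = [128, 256, 384, 512]  # candidate resize options from the snippet

def classify_at_scale(image, size):
    # Stand-in: a real implementation would resize `image` to
    # (size, size) and run a trained classifier. Here we return a
    # toy two-class probability vector that depends only on the scale.
    return [0.5 + size / 10000.0, 0.5 - size / 10000.0]

def multi_scale_predict(image):
    # Average the per-class probabilities across all scales.
    preds = [classify_at_scale(image, s) for s in SCALES]
    n = len(preds)
    return [sum(p[i] for p in preds) / n for i in range(len(preds[0]))]
```

A multi-architecture ensemble extends the same averaging over different backbone networks rather than different input sizes.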

Multi-modal Wound Classification using Wound Image and Location by Deep Neural Network [article]

D. M. Anisuzzaman, Yash Patel, Behrouz Rostami, Jeffrey Niezgoda, Sandeep Gopalakrishnan, Zeyun Yu
2021 arXiv   pre-print
The multi-modal network is developed by concatenating the image-based and location-based classifiers' outputs, with some other modifications.  ...  The proposed multi-modal network also shows a significant improvement in results over previous works in the literature.  ...  Finally, the output layer is added with either a softmax or sigmoid layer for multi-class or binary-class classification.  ... 
arXiv:2109.06969v1 fatcat:d2b7qcp2d5dodokwhvmxlwuyc4
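The late-fusion scheme the snippet describes (concatenating image- and location-classifier outputs, with a softmax head for multi-class or a sigmoid head for binary classification) can be illustrated roughly as follows. The function names are hypothetical, not from the paper.

```python
import math

def softmax(logits):
    # Multi-class head: probabilities over all classes sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(x):
    # Binary head: a single independent probability.
    return 1.0 / (1.0 + math.exp(-x))

def fuse(image_outputs, location_outputs):
    # Late fusion by concatenation; a real model would feed this
    # combined vector into further dense layers before the head.
    return image_outputs + location_outputs
```
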

A Multiclass-based Classification Strategy for Rhetorical Sentence Categorization from Scientific Papers

Dwi H. Widyantoro, Masayu L. Khodra, Bambang Riyanto, E. Aminudin Aziz
2013 Journal of ICT Research and Applications  
This paper presents a multiclass-based strategy for the classification of rhetorical sentences. The idea behind this approach is based on the observation that no single classifier is the best performer for classifying every category; our approach learns which classifiers perform best, improving classification performance over multiclass baselines.  ...  The views expressed do not necessarily reflect the views of the funding organization.  ... 
doi:10.5614/itbj.ict.res.appl.2013.7.3.5 fatcat:dbdmmdz2oncpncv3tbhjxdfz3i

Diving Deep onto Discriminative Ensemble of Histological Hashing & Class-Specific Manifold Learning for Multi-class Breast Carcinoma Taxonomy [article]

Sawon Pratiher, Subhankar Chattoraj
2019 arXiv   pre-print
The proposed scheme employs deep neural network (DNN) aided discriminative ensemble of holistic class-specific manifold learning (CSML) for underlying HI sub-space embedding & HI hashing based local shallow  ...  The model achieves 95.8% accuracy pertinent to multi-classification & 2.8% overall performance improvement & 38.2% enhancement for Lobular carcinoma (LC) sub-class recognition rate as compared to the existing  ...  Experiment Design: Deep Discriminative Ensemble of Histological Hashing & Class-Specific Manifold Learning. Table II: Results for Binary Classification.  ... 
arXiv:1806.06876v3 fatcat:otuwfdfcpfcxxemx73z7zn7pj4

Integrating Audio-Visual Features for Multimodal Deepfake Detection [article]

Sneha Muppalla, Shan Jia, Siwei Lyu
2023 arXiv   pre-print
Therefore, this paper proposes an audio-visual-based method for deepfake detection, which integrates fine-grained deepfake identification with binary classification.  ...  Existing methods for audio-visual detection do not always surpass analysis based on single modalities.  ...  Multi-task Learning: As our method proposes the integration of a fine-grained deepfake identification module with the binary classification of each modality for distinct feature learning, we formulate a  ... 
arXiv:2310.03827v1 fatcat:x7dx4uge5nbcdn5cjm3ucdsjru

Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease

Daoqiang Zhang, Dinggang Shen
2012 NeuroImage  
Here, the variables include not only the clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to prediction of different  ...  In this paper, we propose a general methodology, namely Multi-Modal Multi-Task (M3T) learning, to jointly predict multiple variables from multi-modal data.  ...  Hoffman-La Roche, Schering-Plough, Synarc, Inc., as well as non-profit partners the Alzheimer's Association and Alzheimer's Drug Discovery Foundation, with participation from the U.S.  ... 
doi:10.1016/j.neuroimage.2011.09.069 pmid:21992749 pmcid:PMC3230721 fatcat:msle4wj4zrc2lonbfibchhae3e
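As a rough, hypothetical sketch of a joint objective in the spirit of M3T (multiple regression variables and a categorical variable predicted together), one could combine a squared-error term over the regression targets with a log-loss term on the class label. The paper's actual formulation also includes multi-task feature selection across modalities, which is omitted here.

```python
import math

def joint_multitask_loss(reg_preds, reg_targets, cls_probs, cls_target):
    # Squared error summed over the regression variables
    # (e.g. clinical scores), plus negative log-likelihood of
    # the true class under the predicted distribution.
    reg_term = sum((p - t) ** 2 for p, t in zip(reg_preds, reg_targets))
    cls_term = -math.log(max(cls_probs[cls_target], 1e-12))
    return reg_term + cls_term
```
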

Comprehensive approach for solving multimodal data analysis problems based on integration of evolutionary, neural and deep neural network algorithms

I Ivanov, E Sopov, I Panfilov
2018 IOP Conference Series: Materials Science and Engineering  
This approach involves multimodal data fusion techniques, multi-objective approach to feature selection and neural network ensemble optimization, as well as convolutional neural networks trained with hybrid  ...  In this work we propose the comprehensive approach for solving multimodal data analysis problems.  ...  Acknowledgements The research is performed with the financial support of the Ministry of Education and Science of the Russian Federation within the State Assignment for the Siberian State Aerospace University  ... 
doi:10.1088/1757-899x/450/5/052007 fatcat:uaqlkkxwovfh3nfxa4w24dgy6i

Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information

Po-Yu Kao, Fnu Shailja, Jiaxiang Jiang, Angela Zhang, Amil Khan, Jefferson W. Chen, B. S. Manjunath
2020 Frontiers in Neuroscience  
Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method.  ...  In this paper, we introduce a novel method to integrate location information with the state-of-the-art patch-based neural networks for brain tumor segmentation.  ...  AUTHOR CONTRIBUTIONS P-YK, SS, and JJ designed the study and algorithm with BM guiding the research and analyzed and interpreted the data. P-YK collected the data.  ... 
doi:10.3389/fnins.2019.01449 pmid:32038146 pmcid:PMC6993565 fatcat:hdkfesthlvhijgemxn3u6cdbtm

Detecting Depression with Audio/Text Sequence Modeling of Interviews

Tuka Al Hanai, Mohammad Ghassemi, James Glass
2018 Interspeech 2018  
Our results were comparable to methods that explicitly modeled the topics of the questions and answers which suggests that depression can be detected through sequential modeling of an interaction, with  ...  In this study we demonstrate an automated depression-detection algorithm that models interviews between an individual and agent and learns from sequences of questions and answers without the need to perform  ...  Conducting fusion scoring on the multi-modal model, resulted in the best multi-class classification score (MAE 4.97, RMSE 6.27).  ... 
doi:10.21437/interspeech.2018-2522 dblp:conf/interspeech/HanaiGG18 fatcat:czzjyn2ntjewxkzfznplqfunp4
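The MAE and RMSE figures quoted in the snippet are standard regression metrics; for reference, they are computed as:

```python
import math

def mae(preds, targets):
    # Mean absolute error: average magnitude of the residuals.
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def rmse(preds, targets):
    # Root mean squared error: penalizes large residuals more heavily.
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))
```
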

The YouTube-8M Kaggle Competition: Challenges and Methods [article]

Haosheng Zou, Kun Xu, Jialian Li, Jun Zhu
2017 arXiv   pre-print
It is noteworthy that the proposed strategies and a uniformly-averaged multi-crop ensemble alone were sufficient for us to reach our ranking.  ...  We hope this paper could serve, to some extent, as a review and guideline of the YouTube-8M multi-label video classification benchmark, inspiring future attempts and research.  ...  Acknowledgements This work was supported by the National Basic Research Program of China (2013CB329403), the National Natural Science Foundation of China (61620106010, 61621136008) and the grants from  ... 
arXiv:1706.09274v2 fatcat:dlhepq5yefaubpp546wl4guxnm

ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images

Aymen M. Al-Hejri, Riyadh M. Al-Tam, Muneer Fazea, Archana Harsing Sable, Soojeong Lee, Mugahed A. Al-antari
2022 Diagnostics  
Compared with the individual backbone networks, the proposed ensemble learning model improves the breast cancer prediction performance by 6.6% for binary and 4.6% for multi-class approaches.  ...  The promising evaluation results are achieved using the INbreast mammograms with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively.  ...  Meanwhile, we express our gratitude to the expert radiologist team members: Muneer Fazea, Amal Abdulrahaman Bafagih, and Rajesh Kamalkishor Agrawal who work for the Department of Radiology, Al-Ma'amon  ... 
doi:10.3390/diagnostics13010089 pmid:36611382 pmcid:PMC9818801 fatcat:v6gbroijfvgntg2riwrq62t5tq

Inferring Distributions Over Depth from a Single Image [article]

Gengshan Yang, Peiyun Hu, Deva Ramanan
2019 arXiv   pre-print
In this paper, we recast the continuous problem of depth regression as discrete binary classification, whose output is an un-normalized distribution over possible depths for each pixel.  ...  Such output allows one to reliably and efficiently capture multi-modal depth distributions in ambiguous cases, such as depth discontinuities and reflective surfaces.  ...  Acknowledgements This work was supported by the CMU Argo AI Center for Autonomous Vehicle Research.  ... 
arXiv:1912.06268v1 fatcat:kzjstpat65gbdnxsavqyqf7btu
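The recasting of depth regression as discrete binary classification, as the snippet describes, can be sketched minimally: one independent sigmoid score per candidate depth bin yields an un-normalized, possibly multi-modal distribution per pixel. The helper names below are illustrative, not from the paper.

```python
import math

def depth_distribution(bin_logits):
    # One independent binary classifier per candidate depth bin.
    # Because the sigmoid scores need not sum to 1, the output can
    # express multi-modal ambiguity (e.g. two plausible depths at a
    # discontinuity or on a reflective surface).
    return [1.0 / (1.0 + math.exp(-z)) for z in bin_logits]

def modes(scores, threshold=0.5):
    # Report every depth bin whose score clears the threshold.
    return [i for i, s in enumerate(scores) if s > threshold]
```
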

DPNET: Dynamic Poly-attention Network for Trustworthy Multi-modal Classification

Xin Zou, Chang Tang, Xiao Zheng, Zhenglai Li, Xiao He, Shan An, Xinwang Liu
2023 Proceedings of the 31st ACM International Conference on Multimedia  
However, existing multi-modal classification methods are generally weak in integrating global structural information and providing trustworthy multi-modal fusion, especially in safety-sensitive practical  ...  (ii) A transparent fusion strategy based on the modality confidence estimation strategy is induced to track information variation within different modalities for dynamical fusion.  ...  We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.  ... 
doi:10.1145/3581783.3612652 fatcat:gqqcyqc4afcmjafftul4v7635y

Multimodal Depression Severity Prediction from medical bio-markers using Machine Learning Tools and Technologies [article]

Shivani Shimpi, Shyam Thombre, Snehal Reddy, Ritik Sharma, Srijan Singh
2020 arXiv   pre-print
Our app estimates the severity of depression based on a multi-class classification model by utilizing the language, audio, and visual modalities.  ...  Further optimization reveals the intramodality and intermodality relevance through the selection of the most influential features within each modality for decision making.  ...  We also thank the anonymous reviewers for their helpful comments.  ... 
arXiv:2009.05651v2 fatcat:sjrf66l3qrb7hh5o63nvrbnc7e
Showing results 1 — 15 of 8,495 results