4,285 Hits in 3.0 sec

Multiscale High-Level Feature Fusion for Histopathological Image Classification

ZhiFei Lai, HuiFang Deng
2017 Computational and Mathematical Methods in Medicine  
The main process is to train a deep convolutional neural network to extract high-level features and to fuse the high-level features of two convolutional layers into a multiscale high-level feature.  ...  We propose a method for multiclass histopathological image classification, based on a deep convolutional neural network referred to as a coding network.  ...  All experiments are conducted on a computer with an i5-6500 3.2 GHz CPU, 32 GB of main memory, and a GTX1060 GPU.  ...
doi:10.1155/2017/7521846 pmid:29463986 pmcid:PMC5804108 fatcat:mjj367delngshbisww46qj65ua
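
The snippet above describes fusing the high-level features of two convolutional layers into one multiscale descriptor. As an illustration only (not the paper's coding-network code), the following PyTorch-style sketch pools two convolutional feature maps and concatenates them before classification; the layer widths and class count are assumptions.

import torch
import torch.nn as nn

class TwoLayerFusion(nn.Module):
    # Toy model: concatenate pooled features from an early and a late conv block.
    def __init__(self, num_classes=4):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)               # global average pooling
        self.classifier = nn.Linear(64 + 128, num_classes)

    def forward(self, x):
        f1 = self.block1(x)                               # finer-scale features
        f2 = self.block2(f1)                              # coarser-scale features
        v = torch.cat([self.pool(f1).flatten(1),          # fuse the two scales
                       self.pool(f2).flatten(1)], dim=1)
        return self.classifier(v)

logits = TwoLayerFusion()(torch.randn(2, 3, 224, 224))    # -> shape (2, 4)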

MS-LSTM: Exploring Spatiotemporal Multiscale Representations in Video Prediction Domain [article]

Zhifeng Ma, Hao Zhang, Jie Liu
2024 arXiv   pre-print
They obtain the multi-scale features of the video only by stacking layers, which is inefficient and brings unbearable training costs (such as memory, FLOPs, and training time).  ...  Concretely, we employ LSTMs with mirrored pyramid structures to construct spatial multi-scale representations and LSTMs with different convolution kernels to construct temporal multi-scale representations  ...  and the space complexity (memory) of a ConvLSTM unit is 16bchw.  ...
arXiv:2304.07724v3 fatcat:2apsmkfldfgf3ks55lkkgl7xbq
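
The last fragment quotes the space complexity of a ConvLSTM unit as 16bchw. A back-of-envelope Python check, assuming b, c, h, w denote batch size, channels, height, and width, taking the factor 16 directly from the snippet and assuming fp32 storage:

def convlstm_activation_bytes(b, c, h, w, bytes_per_float=4):
    # 16*b*c*h*w values per unit, as quoted above, times bytes per value.
    return 16 * b * c * h * w * bytes_per_float

# Example: batch 8, 64 channels, 32x32 feature maps at fp32 -> 32 MiB per unit.
print(convlstm_activation_bytes(8, 64, 32, 32) / 2**20)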

MTF-CRNN: Multiscale Time-Frequency Convolutional Recurrent Neural Network For Sound Event Detection

Keming Zhang, Yuanwen Cai, Yuan Ren, Ruida Ye, Liang He
2020 IEEE Access  
INDEX TERMS Pattern recognition, sound event detection, multiscale learning, time-frequency transform, convolutional recurrent neural network.  ...  To reduce neural network parameter counts and improve sound event detection performance, we propose a multiscale time-frequency convolutional recurrent neural network (MTF-CRNN) for sound event detection  ...  Second, we achieved the best performance compared with the methods that applied multiscale frequency-domain convolution (MF-CRNN), multiscale time-domain convolution (MT-CRNN), and multiscale simultaneous  ...
doi:10.1109/access.2020.3015047 fatcat:hzox2myax5gapo4so5wthrcd4q
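
As a rough illustration of the multiscale time-frequency idea named above (not the authors' MTF-CRNN), the sketch below runs time-oriented and frequency-oriented convolutions over a spectrogram in parallel, then a recurrent layer over time; the kernel shapes, sizes, and output head are assumptions.

import torch
import torch.nn as nn

class TinyTFCRNN(nn.Module):
    def __init__(self, n_mels=64, hidden=64, n_events=10):
        super().__init__()
        self.time_conv = nn.Conv2d(1, 16, kernel_size=(1, 5), padding=(0, 2))  # along time
        self.freq_conv = nn.Conv2d(1, 16, kernel_size=(5, 1), padding=(2, 0))  # along frequency
        self.gru = nn.GRU(32 * n_mels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_events)

    def forward(self, spec):                              # spec: (batch, 1, n_mels, frames)
        f = torch.cat([torch.relu(self.time_conv(spec)),
                       torch.relu(self.freq_conv(spec))], dim=1)   # (B, 32, n_mels, T)
        f = f.permute(0, 3, 1, 2).flatten(2)              # (B, T, 32 * n_mels)
        out, _ = self.gru(f)
        return torch.sigmoid(self.head(out))              # frame-wise event probabilities

probs = TinyTFCRNN()(torch.randn(2, 1, 64, 100))          # -> (2, 100, 10)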

Radar Signal Recognition and Localization Based on Multiscale Lightweight Attention Model

Weijian Si, Jiaji Luo, Zhian Deng, Abdellah Touhafi
2022 Journal of Sensors  
ResXNet, a novel multiscale lightweight attention model, is proposed in this paper.  ...  In addition, the convolutional block attention module (CBAM) is utilized to effectively aggregate channel and spatial information, enabling the convolutional neural network model to extract features more  ...  Frank code and P3 code, as well as P1 code and P4 code, are similar and easy to confuse. The recognition accuracy of the other four radar signals is 100%.  ...
doi:10.1155/2022/9970879 fatcat:6batnpny2rabdgvlljwiofl2he
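
The entry mentions the convolutional block attention module (CBAM). Below is a minimal sketch of CBAM's two stages, written from the published CBAM description rather than this paper's code; the reduction ratio and 7x7 spatial kernel follow the common defaults.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: 7x7 convolution over channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

y = CBAM(64)(torch.randn(2, 64, 32, 32))                  # same shape as the input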

Automatic Road Extraction from High-Resolution Remote Sensing Images using a Method Based on Densely Connected Spatial Feature-Enhanced Pyramid

Wu Qiangqiang, Luo Feng, Penghai Wu, Biao Wang, Yang Hui, Wu Yanlan
2020 IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing  
Due to the loss of multiscale spatial features, road extraction results are still incomplete or fractured.  ...  To obtain more abundant multiscale features, a dense and global spatial pyramid pooling module based on Atrous Spatial Pyramid Pooling is built to perceive and aggregate the contextual information.  ...  Improved Residual Units: The residual unit consists of a batch normalization (BN) layer [46], a rectified linear unit (ReLU) [47], a convolution layer, and a shortcut connection layer.  ...
doi:10.1109/jstars.2020.3042816 fatcat:d7cnnciehnbonkxyabcs2xnenq
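
The last fragment lists the ingredients of the improved residual unit (BN, ReLU, a convolution layer, and a shortcut connection). A minimal sketch of such a pre-activation residual unit, with the channel width chosen arbitrarily:

import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # BN -> ReLU -> convolution, as listed in the snippet above.
        self.body = nn.Sequential(nn.BatchNorm2d(channels),
                                  nn.ReLU(inplace=True),
                                  nn.Conv2d(channels, channels, kernel_size=3, padding=1))

    def forward(self, x):
        return x + self.body(x)                           # shortcut connection

out = ResidualUnit()(torch.randn(1, 64, 128, 128))        # same shape as the input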

Anti-UAV High-Performance Computing Early Warning Neural Network Based on PSO Algorithm

Yang Lei, Honglei Yao, Bo Jiang, Tian Tian, Peifei Xing, Tongguang Ni
2022 Scientific Programming  
Convolution module 3: convolution unit (1024 channels, 3×3 kernel); convolution unit (512 channels, 1×1 kernel); convolution unit (512 channels, ... 3×3 kernel); convolution unit (256 channels, 1×1 kernel); convolution unit (256 channels, 3×3 kernel); convolution unit (255 channels, ... kernel)  ...
doi:10.1155/2022/7150128 fatcat:cfhumorgpjc3rple3qufnaynzu
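
The table fragment above reads as a stack of convolution units. The following is a hypothetical PyTorch reconstruction only: the input width, the Conv+BN+LeakyReLU composition of a "convolution unit", the channel chaining, and the final (elided) kernel size are all assumptions.

import torch.nn as nn

def conv_unit(in_ch, out_ch, k):
    # Assumed composition of one "convolution unit": Conv + BN + LeakyReLU.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
                         nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1))

conv_module_3 = nn.Sequential(
    conv_unit(512, 1024, 3),   # channels: 1024, kernel: 3x3 (input width 512 assumed)
    conv_unit(1024, 512, 1),   # channels: 512,  kernel: 1x1
    conv_unit(512, 512, 3),    # channels: 512,  kernel: 3x3
    conv_unit(512, 256, 1),    # channels: 256,  kernel: 1x1
    conv_unit(256, 256, 3),    # channels: 256,  kernel: 3x3
    nn.Conv2d(256, 255, 1),    # channels: 255,  kernel size elided in the snippet; 1x1 assumed
)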

Artifacts Reduction Using Multi-Scale Feature Attention Network in Compressed Medical Images

Seonjae Kim, Dongsan Jun
2022 Computers Materials & Continua  
The multiscale feature extraction layers have four Feature Extraction (FE) blocks. Each FE block consists of five convolution layers and one CA block for a weighted skip connection.  ...  In general, image compression can introduce undesired coding artifacts, such as blocking artifacts and ringing effects.  ...  and '*' represent the Parametric Rectified Linear Unit (PReLU) activation function, the filter weights, the biases, and the convolution operation, respectively.  ...
doi:10.32604/cmc.2022.020651 fatcat:tbnd2wbm3zg2zeibahhesj5leu
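
The first fragment describes each Feature Extraction (FE) block as five convolution layers plus one channel-attention (CA) block acting on a weighted skip connection. A rough sketch under those assumptions (layer widths and the CA design are not from the paper):

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))                   # per-channel weights in (0, 1)
        return x * w.view(x.size(0), -1, 1, 1)

class FEBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        layers = []
        for _ in range(5):                                # five convolution layers
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU()]
        self.convs = nn.Sequential(*layers)
        self.ca = ChannelAttention(ch)

    def forward(self, x):
        return self.convs(x) + self.ca(x)                 # CA-weighted skip connection

y = FEBlock()(torch.randn(1, 64, 48, 48))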

Neural Video Coding using Multiscale Motion Compensation and Spatiotemporal Context Model [article]

Haojie Liu, Ming Lu, Zhan Ma, Fan Wang, Zhihuang Xie, Xun Cao, Yao Wang
2020 arXiv   pre-print
decoder in the VAE for coding motion features that generates multiscale flow fields, 2) we design a novel adaptive spatiotemporal context model for efficient entropy coding of motion information, 3) we  ...  Novel features of NVC include: 1) To estimate and compensate motion over a large range of magnitudes, we propose an unsupervised multiscale motion compensation network (MS-MCN) together with a pyramid  ...  Nonlocal attention is adopted at the bottlenecks of both the main and hyper encoders to enable saliency-based bit allocation, and a rectified linear unit (ReLU) is embedded with the convolutions to enable the  ...
arXiv:2007.04574v1 fatcat:gc5lnxixnvemdildw246wzaduq

Working memory inspired hierarchical video decomposition with transformative representations [article]

Binjie Qin, Haohao Mao, Ruipeng Zhang, Yueqi Zhu, Song Ding, Xu Chen
2022 arXiv   pre-print
Then, patch recurrent convolutional LSTM networks with a backprojection module embody the unstructured random representations of the control layer in working memory, recurrently projecting spatiotemporally  ...  To solve these problems, this study is the first to introduce a flexible visual working memory model into video decomposition tasks to provide an interpretable and high-performance hierarchical deep architecture  ...  ACKNOWLEDGMENTS The authors would like to thank all the cited authors for providing the source codes used in this work and the anonymous reviewers for their valuable comments on the manuscript.  ...
arXiv:2204.10105v3 fatcat:ifzpeay2qjfvbaznwruwc4dz5m

Learning deep autoregressive models for hierarchical data [article]

Carl R. Andersson, Niklas Wahlström, Thomas B. Schön
2021 arXiv   pre-print
We propose a model for hierarchically structured data as an extension of the stochastic temporal convolutional network.  ...  Results show that KL units on all levels are being used.  ...  For exact hyperparameter settings and additional implementation details, we refer to the appendix and the code.  ...
arXiv:2104.13853v3 fatcat:ufadhoc7ybeljfoctfqcqlmyem

Protein Secondary Structure Prediction Using Cascaded Convolutional and Recurrent Neural Networks [article]

Zhen Li, Yizhou Yu
2016 arXiv   pre-print
Our deep architecture leverages convolutional neural networks with different kernel sizes to extract multiscale local contextual features.  ...  In addition, considering the long-range dependencies existing in amino acid sequences, we set up a bidirectional neural network consisting of gated recurrent units to capture global contextual features.  ...  , multiscale convolutional neural network (CNN) layers, three stacked bidirectional gated recurrent unit (BGRU) layers, and two fully connected hidden layers.  ...
arXiv:1604.07176v1 fatcat:gbz6vi2lobgopa42kej6jsckce
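
The snippets outline a cascaded architecture: convolutions with different kernel sizes for multiscale local context, stacked bidirectional GRU layers for global context, and fully connected output layers. A toy sketch of that cascade (all sizes are illustrative, not the paper's):

import torch
import torch.nn as nn

class CNNBGRU(nn.Module):
    def __init__(self, in_dim=21, hidden=64, n_classes=8):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(in_dim, 32, k, padding=k // 2) for k in (3, 7, 11)  # multiscale kernels
        ])
        self.bgru = nn.GRU(96, hidden, num_layers=3, batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, n_classes))

    def forward(self, seq):                               # seq: (batch, length, in_dim)
        x = seq.transpose(1, 2)                           # -> (batch, in_dim, length)
        x = torch.cat([torch.relu(c(x)) for c in self.convs], dim=1)   # multiscale features
        out, _ = self.bgru(x.transpose(1, 2))             # (batch, length, 2*hidden)
        return self.fc(out)                               # per-residue class scores

scores = CNNBGRU()(torch.randn(2, 100, 21))               # -> (2, 100, 8)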

Multiscale Convolutional Neural Networks with Attention for Plant Species Recognition

Xianfeng Wang, Chuanlei Zhang, Shanwen Zhang, Navid Razmjooy
2021 Computational Intelligence and Neuroscience  
A novel multiscale convolutional neural network with attention (AMSCNN) model is constructed for plant species recognition.  ...  In AMSCNN, multiscale convolution is used to learn the low-frequency and high-frequency features of the input images, and an attention mechanism is utilized to capture rich contextual relationships for  ...  Experiments and Results: The code environment is Win10 + CUDA + VS + Anaconda + Keras with GPU support, the memory is 96 GB, and the development environment is PyCharm.  ...
doi:10.1155/2021/5529905 pmid:34285692 pmcid:PMC8275439 fatcat:r4efykvg35gfblkerfmdzmb4iq

Predictive Coding Based Multiscale Network with Encoder-Decoder LSTM for Video Prediction [article]

Chaofan Ling, Junpei Zhong, Weihua Li
2023 arXiv   pre-print
Code is available at https://github.com/Ling-CF/MSPN.  ...  We present a multi-scale predictive coding model for future video frame prediction.  ...  The 3D convolution is responsible for learning the short-term memory within each group, and the LSTM performs long-term modeling to effectively manage long-term historical memory.  ...
arXiv:2212.11642v3 fatcat:ubf624rgtjebbj2cs3b5quo3py

Multiscale, Thermomechanical Topology Optimization of Cellular Structures for Porous Injection Molds [chapter]

Tong Wu, Kim Brand, Doyle Hewitt, Andres Tovar
2017 Advances in Structural and Multidisciplinary Optimization  
The thermomechanical properties of the mesoscale cellular unit cells are estimated using homogenization theory.  ...  The objective of this research is to establish a multiscale topology optimization method for the optimal design of non-periodic cellular structures subjected to thermomechanical loads.  ...  The memory usage of the new code is also briefly discussed. Problem formulation: The MBB beam is a classical problem in topology optimization.  ...
doi:10.1007/978-3-319-67988-4_134 fatcat:jf4jxfpp45ah3iqyrpgepxwrfm

MS-RNN: A Flexible Multi-Scale Framework for Spatiotemporal Predictive Learning [article]

Zhifeng Ma, Hao Zhang, Jie Liu
2024 arXiv   pre-print
The results show that RNN models incorporating our framework have a much lower memory cost but better performance than before. Our code is released at .  ...  In order to improve performance without increasing memory consumption, we focus on scale, which is another dimension along which to improve model performance, but with a low memory requirement.  ...  In addition, stacking multiple ConvLSTM units will enhance the model's temporal memory ability and expand the spatial receptive field, which is not available in LSTM, because stacking 1 × 1 convolutions  ...
arXiv:2206.03010v7 fatcat:alqy7l4jdzf5tlgcdslpunqose
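
The last fragment contrasts stacked ConvLSTM units (growing spatial receptive field) with stacked 1 × 1 convolutions (no growth). A quick receptive-field check, assuming stride 1 and no dilation:

def receptive_field(num_layers, kernel_size):
    # Receptive field of num_layers stacked convolutions with stride 1, no dilation.
    return num_layers * (kernel_size - 1) + 1

print(receptive_field(4, 3))   # four stacked 3x3 convolutions -> 9x9 receptive field
print(receptive_field(4, 1))   # four stacked 1x1 convolutions -> still 1x1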
Showing results 1 — 15 out of 4,285 results