Our experiments show that BFlip effectively reduces model size and computation with negligible accuracy impact. The proposed accelerator achieves 2.45× speedup ...
Abstract—Future deep neural networks (DNNs) tend to grow deeper and contain more trainable weights. Although methods such as pruning and quantization are ...
BFlip clusters similar bit matrices together and finds a combination of row and column flips for each bit matrix to minimize its distance to the centroid of ...
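The snippet above describes the core of the flipping step: within a cluster, each bit matrix is transformed by row and column flips so that it lands as close as possible to the cluster centroid. A minimal sketch of that idea, assuming binary 0/1 matrices, Hamming distance as the metric, and a simple greedy alternating search (the paper's actual clustering and search procedure is not shown in these snippets):

```python
import numpy as np

def flip_to_match(bit_mat, centroid, n_iters=4):
    """Greedy sketch: flip whole rows/columns of a 0/1 matrix to
    reduce its Hamming distance to a cluster centroid.
    Alternates row and column passes; illustrative only, not the
    exact BFlip search from the paper."""
    m = bit_mat.copy()
    rows, cols = m.shape
    for _ in range(n_iters):
        changed = False
        # Flip any row whose flipped version is closer to the centroid row.
        for i in range(rows):
            if np.sum((1 - m[i]) != centroid[i]) < np.sum(m[i] != centroid[i]):
                m[i] = 1 - m[i]
                changed = True
        # Same test for each column.
        for j in range(cols):
            if np.sum((1 - m[:, j]) != centroid[:, j]) < np.sum(m[:, j] != centroid[:, j]):
                m[:, j] = 1 - m[:, j]
                changed = True
        if not changed:  # local optimum reached
            break
    return m

# Toy check: a centroid corrupted by one row flip and one column flip
# is recovered exactly by the greedy passes.
rng = np.random.default_rng(0)
c = rng.integers(0, 2, (8, 8))
x = c.copy()
x[2] = 1 - x[2]          # corrupt one row
x[:, 5] = 1 - x[:, 5]    # corrupt one column
assert np.array_equal(flip_to_match(x, c), c)
```

Because a row or column flip is invertible and cheap to record (one bit per row/column), matrices brought close to a shared centroid this way can share one stored crossbar, which is the sharing effect the surrounding snippets describe.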
We show this approach reduces average power consumption for a single crossbar convolution by up to 16× for an unsigned 8-bit input image, where each ...
Flipping Bits to Share Crossbars in ReRAM-Based DNN Accelerator. L Zhao, Y Zhang, J Yang. 2021 IEEE 39th International Conference on Computer Design (ICCD) ...
Aug 23, 2022 · In this paper, we propose SRA - a secure ReRAM-based DNN accelerator that stores DNN weights on crossbars in an encrypted format while still ...
Zhao et al., "Flipping Bits to Share Crossbars in ReRAM-Based DNN Accelerator," in ICCD, 2021. [15] A. Shafiee et al., "ISAAC: A convolutional neural ...
Our Crossbar-Level Sharing (CLS) scheme flips the bit matrices of different crossbars to reduce their distance; as a result, we only need to store the bit ...