ResNet: Enabling Deep Convolutional Neural Networks through Residual Learning
By: Xingyu Liu, Kun Ming Goh
Potential Business Impact:
Lets computers learn from many more picture layers.
Convolutional Neural Networks (CNNs) have revolutionized computer vision, but training very deep networks has been challenging due to the vanishing gradient problem. This paper explores Residual Networks (ResNet), introduced by He et al. (2015), which overcome this limitation by using skip connections. ResNet enables the training of networks with hundreds of layers by allowing gradients to flow directly through shortcut connections that bypass intermediate layers. In our implementation on the CIFAR-10 dataset, ResNet-18 achieves 89.9% accuracy compared to 84.1% for a traditional deep CNN of similar depth, while also converging faster and training more stably.
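The skip-connection idea described above can be sketched in a few lines: a residual block computes F(x) + x, so the identity path lets gradients bypass the intermediate layers. This is a minimal NumPy sketch using fully-connected layers with random weights for illustration, not the convolutional blocks from the paper or our implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # F(x): two weighted layers with a ReLU in between (simplified
    # fully-connected stand-in for the paper's conv-BN-ReLU stack)
    f = relu(x @ w1) @ w2
    # Skip connection: the output is F(x) + x, so during backprop the
    # gradient flows through the identity path unattenuated
    return relu(f + x)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # batch of 4 inputs, width 8
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # shape is preserved so the identity can be added
```

Because the block only needs to learn the residual F(x) rather than the full mapping, stacking many such blocks remains trainable; when the ideal mapping is close to the identity, the weights can simply be driven toward zero.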
Similar Papers
ResNets Are Deeper Than You Think
Machine Learning (CS)
Makes computer learning better by changing how it learns.
Step by Step Network
CV and Pattern Recognition
Builds deeper computer brains that learn better.
Research on Brain Tumor Classification Method Based on Improved ResNet34 Network
CV and Pattern Recognition
Spots brain tumors faster and more accurately.