Isaac Scientific Publishing
Frontiers in Signal Processing
FSP > Volume 4, Number 4, October 2020

CIFAR-10 Image Classification Based on Convolutional Neural Network

PP. 100-106, Pub. Date: October 7, 2020
DOI: 10.22606/fsp.2020.44004

Author(s)
Xiyun Lv
Affiliation(s)
College of Electrical & Information Engineering, Southwest Minzu University, Chengdu 610041, China
Abstract
To address the gradient dissipation and network redundancy caused by performance degradation during the training of most convolutional image classification models, this paper improves the ResNet model, expands the training data with data augmentation, and fine-tunes the deep convolutional network with SGD to avoid gradient dissipation. Experiments show that the model reaches a test accuracy of 90.85% on the CIFAR-10 test set, an improvement in image classification accuracy over other models. Manual observation and comparison of the model's classification results on the 10 object classes further show that it distinguishes each class more accurately.
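The three ingredients the abstract names — data augmentation, SGD fine-tuning, and ResNet's identity shortcuts — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the padding width, momentum, and learning rate are assumptions, and the residual block is reduced to dense layers for brevity.

```python
import numpy as np

def augment(img, pad=4, rng=None):
    # Standard CIFAR-10-style augmentation: pad, random-crop back to the
    # original size, and randomly flip horizontally. The pad=4 setting is
    # a common convention, assumed here rather than taken from the paper.
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    out = padded[top:top + h, left:left + w, :]
    if rng.random() < 0.5:
        out = out[:, ::-1, :]  # horizontal flip
    return out

def sgd_step(params, grads, velocity, lr=0.01, momentum=0.9):
    # One SGD-with-momentum update, the optimizer used for fine-tuning.
    # lr and momentum values are illustrative assumptions.
    for k in params:
        velocity[k] = momentum * velocity[k] - lr * grads[k]
        params[k] = params[k] + velocity[k]
    return params, velocity

def residual_block(x, w1, w2):
    # ResNet's identity shortcut: output = relu(F(x) + x). Adding x back in
    # gives gradients a direct path through the block, which is what counters
    # the gradient dissipation the abstract describes.
    relu = lambda z: np.maximum(z, 0.0)
    return relu(relu(x @ w1) @ w2 + x)
```

Each augmented copy of an image is a new training sample, which is the "data expansion" role augmentation plays; the identity shortcut is why very deep stacks of such blocks remain trainable.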
Keywords
image classification, ResNet, data augmentation, CIFAR-10
Copyright © 2020 Isaac Scientific Publishing Co. All rights reserved.