The purpose of this project was to explore residual neural networks (ResNets): the trends and error rates obtained when applying different weight regularizations to each hidden layer, the effect of using different numbers of layers, and whether scaling the output size of each layer affects the error rate. The study used Keras, whose functional API was employed to construct the models, and analysed the results by using the built-in weight-access functions to compute the root-mean-square (RMS) value of the ResNet's weights in each layer. In addition, the study presents an algorithm for generating multiple layers of residual units, together with the architectures of models that have different output sizes per layer. The results revealed that combined L1_L2 regularization performed worse than either L1 or L2 alone, and that the output size of the convolutional layers must be reduced to avoid poor performance. The multi-layer experiments demonstrated a characteristic feature of residual networks: accuracy is maintained even as the number of layers grows.
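As an illustrative sketch only (not the study's actual code), the two operations described above — stacking residual units and computing the per-layer RMS of the weights — could be expressed in plain NumPy as follows; all names, widths, and the number of units here are hypothetical, and the study itself built the equivalent model with the Keras functional API.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_unit(x, W1, b1, W2, b2):
    """One residual unit: two affine layers with ReLU, plus an identity
    skip connection. Output width equals input width so x + h is valid."""
    h = np.maximum(0.0, x @ W1 + b1)   # first layer + ReLU
    h = h @ W2 + b2                    # second layer (linear)
    return np.maximum(0.0, x + h)      # skip connection, then ReLU

# Hypothetical stack: 3 residual units of width 8.
width, n_units = 8, 3
params = [
    (rng.normal(scale=0.1, size=(width, width)), np.zeros(width),
     rng.normal(scale=0.1, size=(width, width)), np.zeros(width))
    for _ in range(n_units)
]

x = rng.normal(size=(4, width))        # a batch of 4 inputs
for W1, b1, W2, b2 in params:
    x = residual_unit(x, W1, b1, W2, b2)

# Per-unit RMS of the weights, the metric the study computes from
# the weights returned by Keras's get_weights().
rms_per_unit = [
    np.sqrt(np.mean(np.square(np.concatenate([W1.ravel(), W2.ravel()]))))
    for W1, b1, W2, b2 in params
]
print(x.shape)          # output batch keeps the input width
print(rms_per_unit)     # one RMS value per residual unit
```

In Keras, the same structure would use `layers.Add()` for the skip connection and `layer.get_weights()` to retrieve the kernels before computing the RMS.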