Journal of Physics: Conference Series
PAPER • OPEN ACCESS
Lightweight target shooting image analysis device
based on Raspberry Pi
To cite this article: Siyuan Lu 2022 J. Phys.: Conf. Ser. 2170 012042
Lightweight target shooting image analysis device based on
Raspberry Pi
Siyuan Lu
School of Information Engineering, Wuhan University of Technology, Wuhan, Hubei,
China
Email: 301671@whut.edu.cn
Abstract: This paper presents a lightweight target-image analysis device with a Raspberry Pi as the main controller. It covers the processing and data analysis of the whole target image, making it easier for the general public to perform shooting analysis. The system uses a Raspberry Pi 4B as the main control and Python as the core language, and the target-surface processing is divided into five parts: target ring analysis, bullet point analysis, data processing, data visualization and the user interface. The whole system is low-priced, convenient and lightweight, and can be connected to any screen to display the whole shooting process and support human-computer interaction. In testing, the system showed excellent recognition accuracy and speed; it is flexible and simple, and its multiple interfaces make the program easy to extend. In subsequent development it could be combined with deep learning to pool multi-dimensional shooting data and filter out the best shooting groups for further training.
1. Introduction
Shooting is important both as daily training for the military and as a competitive sport for the general public. In traditional shooting training, scoring is done by manually judging the bullet holes on the target surface, which is highly subjective. In recent years, mainstream automatic scoring systems for live ammunition have mainly been based on electronic target surfaces, photoelectric target surfaces, acoustic-photoelectric sensing and fiber-optic coding, but their shortcomings are also obvious: many system components, high power consumption and poor mobility [1,2]. For recreational shooting, the participation threshold of such systems is too high, which hinders the popularity of the sport, and the public urgently needs a low-threshold alternative. It is gratifying that image processing technology has advanced to an unprecedented level in recent years [3]; using it to solve the target-scoring problem is fast, accurate and fair [4,5].
In this paper, we first introduce the hardware advantages of the Raspberry Pi 4B, then the image processing of the target surface, followed by the analysis and visualization of the results, and finally the design of the human-computer interaction page.
2. Overall hardware system
The whole hardware system consists of a camera, a display and a mouse, all controlled by a Raspberry Pi 4B as the main controller. The system block diagram is shown in Figure 1. The Raspberry Pi 4B, with a 4-core Cortex-A72 CPU, was chosen as the main control hardware module because it stores and processes images very quickly. When idle, it keeps power consumption low with the clock frequency at 800 MHz. When running the whole vision
processing program, the CPU occupancy stays below 60% and the temperature rises by only about 3 °C, which makes it suitable for this application.
In the image processing module, the hardware consists of the camera and the Raspberry Pi main controller. When the target surface changes, the camera captures the target-surface image and transmits it to the Raspberry Pi for a series of image processing steps. Thanks to the efficiency of the Raspberry Pi 4B, the whole set of image analysis and data processing is completed within milliseconds and displayed to the user through the visual operation interface.
In the visual operation interface module, the hardware consists of a mouse, a monitor and the Raspberry Pi main controller. All target data are presented to the user on the screen. The user can operate the interface options with the mouse to recall earlier data, and the Raspberry Pi processes and displays the results accordingly.
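As a concrete illustration of this capture-and-process flow, the sketch below shows how frames from the attached camera might be pulled with OpenCV on the Raspberry Pi; the device index and the process_target callback are illustrative assumptions, not details from the original system.

```python
import cv2

def capture_loop(device_index=0, process_target=None):
    """Grab frames from the camera attached to the Raspberry Pi and hand
    each one to the target-analysis pipeline."""
    cap = cv2.VideoCapture(device_index)        # 0 is usually the first camera
    if not cap.isOpened():
        raise RuntimeError("camera could not be opened")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break                           # stream ended or camera unplugged
            if process_target is not None:
                process_target(frame)           # e.g. the processing steps of section 3
    finally:
        cap.release()
```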
Figure 1. Hardware resource.
3. Image analysis and processing
3.1. Image distortion correction algorithm
In order to extract the data of the target rings and bullet points later on, it is usually necessary to geometrically correct the image, eliminating possible distortions and restoring the spatial relationships of the pixels to their correct positions. Geometric correction is divided into two steps: spatial transformation and grayscale interpolation.
To perform geometric correction, the main task is to find four standard locus points, generate a
rectangle according to the principle of maximum area, and find the transformation relationship
between the four vertices of the rectangle and the four standard points to generate a transformation
matrix. Using the obtained transformation matrix, the image is spatially transformed, and finally
grayscale interpolation is performed to complete the geometric correction task and output the
corrected image.
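A minimal sketch of this four-point correction step is given below, assuming OpenCV and an already-detected set of four reference points; the point-ordering convention and the output size are assumptions of the sketch, not values from the paper.

```python
import cv2
import numpy as np

def correct_perspective(image, pts):
    """Spatial transformation: map four detected locus points to an
    axis-aligned rectangle, with bilinear (grayscale) interpolation."""
    pts = np.asarray(pts, dtype=np.float32)
    # Order the points as top-left, top-right, bottom-right, bottom-left.
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()            # y - x for each point
    src = np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                    pts[np.argmax(s)], pts[np.argmax(d)]], dtype=np.float32)
    w = int(max(np.linalg.norm(src[1] - src[0]), np.linalg.norm(src[2] - src[3])))
    h = int(max(np.linalg.norm(src[3] - src[0]), np.linalg.norm(src[2] - src[1])))
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)   # transformation matrix
    return cv2.warpPerspective(image, M, (w, h), flags=cv2.INTER_LINEAR)
```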
In the specific implementation of the algorithm, (x, y) denotes the pixel coordinates before distortion and (x', y') the coordinates after distortion; the address mapping used in this paper is given by Equations (1)-(3):
$y' = y(1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2y^2) + 2 p_2 x y$  (1)
$x' = x(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x y + p_2 (r^2 + 2x^2)$  (2)
$r^2 = x^2 + y^2$  (3)
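Equations (1)-(3) are the standard radial and tangential lens-distortion model, so in practice the correction can be delegated to OpenCV, as in the sketch below; the camera matrix and the coefficients k1, k2, p1, p2 are assumed to come from a prior calibration rather than from the paper.

```python
import cv2
import numpy as np

def undistort_target(image, fx, fy, cx, cy, k1, k2, p1, p2):
    """Undo the lens distortion described by equations (1)-(3)."""
    camera_matrix = np.array([[fx, 0, cx],
                              [0, fy, cy],
                              [0,  0,  1]], dtype=np.float64)
    dist_coeffs = np.array([k1, k2, p1, p2], dtype=np.float64)
    return cv2.undistort(image, camera_matrix, dist_coeffs)
```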
3.2 Image target ring recognition algorithm
After distortion correction, the image is processed to recognize the target rings (Figure 2). The acquired image is first converted to grayscale and then denoised; since the target surface contains little information other than the rings and bullet points, mean filtering is suitable.
The filtered image is then binarized with a threshold of 60 at an average illumination of 1080
lux. After binarization, a morphological closing operation is applied to obtain clearer contours, which are then extracted. The extracted contours are first screened by area to remove the digit labels printed on the target paper; testing showed that a screening threshold of 1000 pt removes all characters when the camera is 1 m from the target.
The remaining contours are sorted by area, and the three smallest are taken as the 8, 9 and 10 rings. Their equivalent diameters give the diameters of these three rings; minimum variance is then used to derive the corresponding diameter of each ring and to extrapolate the diameters of the 6 and 7 rings. The center of the target is taken as the mean of the centers of gravity of the three rings, obtained with the center-of-gravity algorithm.
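A condensed sketch of this ring-extraction chain is shown below, assuming OpenCV. The threshold of 60 and the area cut-off of 1000 are taken from the text; the mean-filter and structuring-element sizes, and the polarity of the binarization, are assumptions.

```python
import cv2
import numpy as np

def extract_rings(image, bin_threshold=60, min_area=1000):
    """Grayscale -> mean filter -> binarize -> close -> area screening,
    then equivalent diameters and centers of the three smallest rings."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.blur(gray, (5, 5))                          # mean filtering
    _, binary = cv2.threshold(blurred, bin_threshold, 255, cv2.THRESH_BINARY_INV)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    rings = [c for c in contours if cv2.contourArea(c) > min_area]   # drop digit labels
    rings.sort(key=cv2.contourArea)                           # smallest three = rings 10, 9, 8
    diameters = [2.0 * np.sqrt(cv2.contourArea(c) / np.pi) for c in rings[:3]]
    centers = []
    for c in rings[:3]:
        m = cv2.moments(c)                                    # center of gravity of each ring
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    center = tuple(np.mean(centers, axis=0))                  # target center = mean of the three
    return diameters, center
```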
Figure 2. Ring count image processing.
Figure 3. Bullet point image processing.
3.3 Image bullet point recognition algorithm
For the processing of bullet points, since a bullet hole may appear either in the white area of the 10 ring or in the dark-green scoring area, a double-threshold method is introduced: two different thresholds are set, and when one of them fails to detect the bullet point, the image is re-analyzed with the other.
Once the two thresholds are set, the processing first uses image differencing to locate where the image has changed, i.e. the bullet-point area, and then applies a morphological opening operation to isolate the bullet point and remove noise. At this point the bullet-point contour can be derived (Figure 3), and the center of gravity and coordinates of the bullet point are obtained.
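The sketch below illustrates this difference-plus-double-threshold idea with OpenCV; the two threshold values and the kernel size are placeholders, not the values used by the author.

```python
import cv2

def locate_bullet_point(prev_frame, cur_frame, thresholds=(40, 90)):
    """Frame differencing, double thresholding and morphological opening,
    then the centroid of the largest remaining blob."""
    prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(cur, prev)                    # where the target surface changed
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    for t in thresholds:                             # fall back to the second threshold
        _, mask = cv2.threshold(diff, t, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hole = max(contours, key=cv2.contourArea)
            m = cv2.moments(hole)
            if m["m00"] > 0:
                return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # bullet-point centroid
    return None                                      # nothing detected with either threshold
```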
4. Analysis and visualization of scores
4.1 Calculation of score values
After image processing, we have obtained the coordinates of the ring center $(x_0, y_0)$, the coordinates of each bullet point $(x_i, y_i)$ and the radii of the target rings $d_6 \sim d_{10}$. For target shooting, the meaningful data are: the score of each shot, the highest score (representing the best performance), the lowest score (representing the largest deviation), and the variance, which indicates whether the scores are stable.
$r_i = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}$  (4)
$point_i = 10 - (r_i - d_{10}) / (d_9 - d_{10})$  (5)
$average = \sum_i point_i / N$  (6)
$variance = \sum_i (point_i - average)^2 / N = \sum_i point_i^2 / N - average^2$  (7)
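Under this reading of equations (4)-(7), the statistics could be computed as in the sketch below; the continuous scoring rule in the loop is an assumption about how equation (5) maps radial distance to a ring value, with d10 and d9 denoting the radii of the 10 and 9 rings.

```python
import math

def shot_statistics(center, hits, d10, d9):
    """Per-shot scores plus average and variance from bullet-point
    coordinates, the target center and two ring radii."""
    x0, y0 = center
    spacing = d9 - d10                                   # radial width of one ring
    scores = []
    for x, y in hits:
        r = math.hypot(x - x0, y - y0)                   # equation (4)
        score = 10 - max(0.0, r - d10) / spacing         # assumed form of equation (5)
        scores.append(max(score, 0.0))
    n = len(scores)
    average = sum(scores) / n                            # equation (6)
    variance = sum(s * s for s in scores) / n - average ** 2   # equation (7)
    return scores, average, variance, max(scores), min(scores)
```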
4.2 Visualization interface
In the visualization interface, after the ring processing is completed, the interface marks all detected contours in red and each ring contour in green.
After each shot, the system circles the bullet point, records the individual score and displays every analysis index. At the end of a round, the system records the number of training sessions and compares the average score of the new round with that of the previous round, helping the user follow their technical condition.
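As an illustration, the overlay described here could be drawn with OpenCV as in the sketch below; the red and green colours follow the text, while line widths, radius and font are assumptions.

```python
import cv2

def draw_overlay(image, all_contours, ring_contours, hit=None, score=None):
    """Mark all contours in red, ring contours in green, and circle the
    latest bullet point together with its score."""
    vis = image.copy()
    cv2.drawContours(vis, all_contours, -1, (0, 0, 255), 1)    # red (BGR order)
    cv2.drawContours(vis, ring_contours, -1, (0, 255, 0), 2)   # green
    if hit is not None:
        x, y = int(hit[0]), int(hit[1])
        cv2.circle(vis, (x, y), 8, (255, 255, 255), 2)         # circle the bullet point
        if score is not None:
            cv2.putText(vis, f"{score:.1f}", (x + 10, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)
    return vis
```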
Figure 4. Ring count recognition interface.
Figure 5. Bullet point analysis interface.
5. Experiments and analysis
5.1 Experimental Environment
The whole system was tested with a 20 cm × 20 cm chest target in a room with a light intensity of 1000 lux, with the camera placed 1 m from the target paper so that the whole target was captured.
5.2 Test index
5.2.1. Target ring identification. For the calculation of the ring distance, with a true ring distance of 2 cm, the measured values are compared with the theoretical value in Table 1.
Table 1. Ring distance error rate.
Number of extractions    Average test value (cm)    Error (%)
1     2.102    5.1
5     2.042    2.1
10    2.051    2.55
5.2.2 Bullet point identification. For the extraction of bullet points, the actual number of shots landing in each ring area over multiple hits is compared with the number the system assigned to the corresponding area, as shown in Table 2.
Table 2. Bullet point recognition rate.
Rings area    Number of actual hits    Number recognized by system    Recognition rate (%)
10      22    22    100
10-9    42    41    97.62
9-8     52    51    98.08
8-7     35    37    100 (5.71)
The program has a high recognition rate for rings 10 to 8. In the 8-7 ring area, which is extrapolated by the algorithm, shots misjudged in the inner rings are incorrectly assigned to it: every shot in this area is detected, but the count is inflated by 5.71% at the expense of the other rings.
5.2.3 Program Robustness. In the robustness test, the program's behaviour under possible recognition errors was checked; its responses are listed in Table 3.
Table 3. Robustness testing.
Errors    Prompts    Test times    Successful times
No picture entered    "Image not recognized"    20    20
Stack overflow occurred    "Insufficient memory"    1    1
Image corruption    "Image cannot be processed"    2    2
Incomplete images    "Incomplete recognition"    10    10
Bullet points coincide    "No shot / bullet point overlap"    20    20
Analysis without shooting    "No shot / bullet point overlap"    20    20
Based on this, it can be concluded that the program is sufficiently robust: it prompts the user for each of these faults, with a success rate of 100%.
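One way the prompts of Table 3 could be wired into the analysis loop is sketched below; the exception types and the order of the checks are assumptions, only the prompt strings come from the table.

```python
import cv2

def analyse_with_prompts(prev_frame, frame, analyse):
    """Run one analysis step and translate failures into the user
    prompts listed in Table 3."""
    if frame is None:
        return "Image not recognized"                  # no picture entered
    try:
        result = analyse(prev_frame, frame)            # e.g. locate_bullet_point
    except MemoryError:
        return "Insufficient memory"                   # stack/memory overflow
    except (ValueError, cv2.error):
        return "Image cannot be processed"             # corrupted image
    if result is None:
        return "No shot / bullet point overlap"        # nothing detected
    return result
```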
6. Conclusion
This paper introduces a lightweight target shooting analysis device with a Raspberry Pi as the main controller. The system can accurately process target shooting images, analyze the performance indexes and display them to the user through the user interface.
In testing, the error in locating the target rings stays within 5%, the success rate of recognizing target rings is above 97%, and friendly prompt messages are shown for the various error cases, so the whole system is highly robust. Due to the cross-platform nature of Python and Qt, the program can be readily ported to non-Linux systems.
The system offers an idea for future image processing applications in daily life and can be extended to test and analyze shooting data in more dimensions. Such multi-dimensional, large-scale data could then be used for deep learning and predictive analysis of a shooter's condition. It thus provides some design ideas for the development of image processing on lightweight devices.
References
[1] Pan Nan, Jiang Xuemei, Pan Dilin, et al. Bullet fast matching based on single point laser detection [J]. Journal of Intelligent & Fuzzy Systems, 2021, 40(4).
[2] Y. W, Q. Z. A Non-Photorealistic Approach to Night Image Enhancement Using Human JND [C]. 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control.
[3] Feng Y, Zhao H Y, Li X F. A multi-scale 3D Otsu thresholding algorithm for medical image segmentation [J]. Digital Signal Processing, 2017, 60(1): 186-199.
[4] F. H, Y. I, M. H, et al. Image Denoising With Edge-Preserving and Segmentation Based on Mask NHA [J]. IEEE Transactions on Image Processing, 2015, 24(12): 6025-6033.
[5] He Chenggang, Ding Chris H Q, Chen Sibao, et al. Bullet Engraving Automated Comparison Optimization Method Based on Second Moment Invariant [J]. Journal of Physics: Conference Series, 2021, 1746(1).
[6] Kolya Kumar Anup, Mondal Debasish, Ghosh Alokesh, et al. Direction and Speed Control of DC Motor Using Raspberry Pi and Python-Based GUI [J]. International Journal of Hyperconnectivity and the Internet of Things (IJHIoT), 2021, 5(2).
[7] K. Sarat Kumar, P. Kanakaraja, K. Ch. Sri Kayva, et al. Artificial Intelligence (AI) and Personal Assistance for Disabled People using Raspberry Pi [J]. International Journal of Innovative Technology and Exploring Engineering (IJITEE), 2019, 8(7).