Journal of Physics: Conference Series
PAPER • OPEN ACCESS
Lightweight target shooting image analysis device
based on Raspberry Pi
To cite this article: Siyuan Lu 2022 J. Phys.: Conf. Ser. 2170 012042
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd
ISCME-2021 | Journal of Physics: Conference Series 2170 (2022) 012042 | IOP Publishing | doi:10.1088/1742-6596/2170/1/012042
Lightweight target shooting image analysis device based on
Raspberry Pi
Siyuan Lu
School of Information Engineering, Wuhan University of Technology, Wuhan, Hubei,
China
Email: 301671@whut.edu.cn
Abstract: This paper presents a lightweight target-image analysis device that uses a Raspberry Pi as the main controller. It covers the processing and data analysis of the whole target image, making it easier for the general public to perform shooting analysis. The system uses a Raspberry Pi 4b as the main controller and Python as the core language, and the target-surface processing is divided into five parts: target ring analysis, bullet point analysis, data processing, data visualization and the user interface. The whole system is inexpensive, convenient and lightweight, and can be connected to any screen to display the whole shooting process and support human-computer interaction. In testing, the system shows excellent recognition accuracy and speed, is flexible and simple, and its multiple interfaces make the program easy to extend. In subsequent development it can be combined with deep learning to pool multi-dimensional shooting data and thereby filter out the best shooting groups for subsequent training.
1. Introduction
Shooting is important both as daily training for the military and as a competitive sport for the general public. In traditional shooting training, scores are reported by manually judging the bullet holes on the target surface, which is highly subjective. In recent years, the mainstream automatic reporting systems for live ammunition have mainly been divided into electronic target surfaces, photoelectric target surfaces, acoustic-photoelectric systems and fibre-optic coding, but their shortcomings are also obvious: many system components, high power consumption and poor mobility [1,2]. For shooting as a popular sport, the participation threshold is too high, which is not conducive to popularising the sport, and the public urgently needs a low-threshold system. It is gratifying that in recent years image processing technology has advanced to an unprecedented level [3]. Using image processing to solve the target-reporting problem is fast, accurate and fair [4,5].
In this paper, we first introduce the hardware advantages of the Raspberry Pi 4b, then the image processing of the target surface, followed by the analysis and visualization of the results, and finally the design of the human-computer interaction interface.
2. Overall hardware system
The whole hardware system consists of a camera, a display and a mouse, all controlled by a Raspberry Pi 4b as the main controller. The system block diagram is shown in Figure 1. The Raspberry Pi 4b, whose CPU uses a 4-core Cortex-A72 architecture, was chosen as the main control hardware module because it stores and processes images very quickly. When no program is being processed, it stays in a low-power state with the clock frequency at 800 MHz. When running the whole vision
processing program, the CPU occupancy stays below 60% and the temperature rises by only 3 °C, which makes it well suited to this application.
In the image processing module, the hardware consists of the camera and the Raspberry Pi main controller. When the target surface image changes, the camera captures the target surface image and transmits it to the Raspberry Pi over their connection for a series of image-processing steps. Owing to the efficiency of the Raspberry Pi 4b, the whole image analysis and data processing chain is completed within milliseconds and the result is displayed to the user through the visual operating interface.
In the visual operating interface module, the hardware consists of a mouse, a monitor and the Raspberry Pi main controller. All target data are presented to the user on the screen. The user can manipulate the interface options with the mouse to recall data, and the Raspberry Pi processes and displays the corresponding results.
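As an illustration of the capture path, the following minimal Python sketch grabs a single frame with OpenCV; the device index, resolution and error message are assumptions rather than the system's actual configuration.

import cv2

def grab_frame(device_index=0, width=1280, height=720):
    # open the camera attached to the Raspberry Pi and request a frame size
    cap = cv2.VideoCapture(device_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    ok, frame = cap.read()          # single BGR frame handed to the processing chain
    cap.release()
    if not ok:
        raise RuntimeError("Image not recognized")   # mirrors the prompt style used by the system
    return frame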
Figure 1. Hardware resource.
3. Image analysis and processing
3.1. Image distortion correction algorithm
In order to extract the data of the subsequent target rings and bullet points, it is usually necessary to geometrically correct the image, eliminating possible distortions and restoring the spatial relationships of the pixels to their correct positions. The geometric correction is divided into two steps: spatial transformation and grey-level interpolation.
To perform the geometric correction, the main task is to find four standard reference points, generate a rectangle according to the maximum-area principle, and determine the transformation relationship between the four vertices of the rectangle and the four reference points to obtain a transformation matrix. Using this transformation matrix, the image is spatially transformed, and finally grey-level interpolation is performed to complete the geometric correction and output the corrected image.
In the specific implementation of the algorithm, (x, y) denotes the coordinates before distortion and (x', y') the coordinates after distortion; the address mapping formulas used in this paper are given below:
𝑦󰆒𝑦
󰇛1𝑘1𝑟
𝑘2𝑟
󰇜𝑝1󰇛𝑟2𝑦
󰇜2𝑝2𝑥𝑦 (1)
𝑥󰆒𝑥
󰇛1𝑘1𝑟
𝑘2𝑟
󰇜2𝑝1𝑥𝑦𝑝2󰇛𝑟2𝑥
󰇜 (2)
𝑟𝑥
𝑦
(3)
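As a concrete illustration of these two steps, a minimal OpenCV-based sketch is given below; the reference points, camera matrix and distortion coefficients (k1, k2, p1, p2) are placeholders, not the calibrated values of the actual device.

import cv2
import numpy as np

def correct_image(img, src_pts, camera_matrix, dist_coeffs, out_size=(600, 600)):
    # camera_matrix: 3x3 intrinsic matrix; dist_coeffs: [k1, k2, p1, p2] as in Eqs. (1)-(3)
    undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
    # map the four detected reference points onto an axis-aligned rectangle
    w, h = out_size
    dst_pts = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    # warpPerspective applies the spatial transform with bilinear grey-level interpolation
    return cv2.warpPerspective(undistorted, M, (w, h), flags=cv2.INTER_LINEAR)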
3.2 Image target ring recognition algorithm
After the image distortion correction, the image is processed to determine the target ring numbers (Figure 2). The acquired image is first converted to grey scale, and then the noise in the image needs to be removed; since the target surface contains little information except the ring numbers and bullet points, mean filtering is suitable. The filtered image then needs to be binarized with a threshold of 60 at an average illumination of 1080
lux. After binarization, a morphological closing operation is performed on the image to obtain clearer contours, which are then extracted. The extracted contours are first screened by area to remove the digit symbols printed on the target paper. Testing shows that a screening threshold of 1000 pt removes all characters when the distance between the camera and the target is 1 m.
The remaining contours are sorted by area, and the three smallest are taken as the 8, 9 and 10 rings. Their equivalent diameters are computed, a minimum-variance fit gives the diameter corresponding to each ring, and the diameters of the 6 and 7 rings are extrapolated from it. The mean of the centres of gravity of the three rings, obtained with the centre-of-gravity algorithm, is taken as the centre of the target rings.
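A minimal Python/OpenCV sketch of this ring-extraction chain follows; the threshold of 60 and the area limit of 1000 are the values quoted above, while the kernel sizes and the threshold mode are assumptions.

import cv2

def extract_ring_contours(corrected_img, bin_threshold=60, min_area=1000):
    gray = cv2.cvtColor(corrected_img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.blur(gray, (5, 5))                              # mean filtering
    _, binary = cv2.threshold(blurred, bin_threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)    # closing gives cleaner contours
    contours, _ = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    # area screening removes the printed digits on the target paper
    return [c for c in contours if cv2.contourArea(c) >= min_area]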
Figure 2. Ring count image processing.
Figure 3. Bullet point image processing.
3.3 Image bullet point recognition algorithm
For the processing of bullet points, since a bullet point may appear both in the white area of the 10 ring and in the dark-green effective area, a double-threshold method is introduced: two different thresholds are set, and when one of them fails to detect the bullet point, the other value is used for re-analysis.
Once the two thresholds are set, the image processing first uses image differencing to locate where the image has changed, i.e. the bullet-point area, and then a morphological opening operation is applied to localise the bullet point and remove noise. At this point the bullet-point contour can be derived (Figure 3), and the centre of gravity and coordinates of the bullet point are also obtained.
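A minimal sketch of this detection step is given below; the two threshold values and the kernel size are illustrative placeholders for the light and dark target regions mentioned above, not the tuned values of the actual system.

import cv2

def find_bullet_point(prev_img, curr_img, thresholds=(40, 90)):
    # image differencing against the previous target image highlights the new hole
    diff = cv2.absdiff(cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_img, cv2.COLOR_BGR2GRAY))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    for t in thresholds:                       # fall back to the second threshold if needed
        _, mask = cv2.threshold(diff, t, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # opening removes noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] > 0:                   # centre of gravity of the bullet hole
                return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    return None                                # no new bullet point detected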
4. Analysis and visualization of grades
4.1 Calculation of score values
After the image processing we have the coordinates of the ring centre $(x_0, y_0)$, the coordinates of each bullet point $(x_i, y_i)$, and the radii of the target rings $d_{10}$ to $d_6$ obtained in Section 3.2. For target shooting, the meaningful data are the score of each shot, the highest score (representing the best performance), the lowest score (representing the farthest deviation) and the variance, which indicates whether the scores are stable.
𝑟 󰇛𝑥𝑥
󰇜󰇛𝑦𝑦
󰇜 (4)
$\mathrm{point}_i = 10 - (r_i - d_{10})/(d_9 - d_{10})$   (5)
$\mathrm{average} = \frac{1}{N}\sum_{i=1}^{N} \mathrm{point}_i$   (6)
$\mathrm{variance} = \frac{1}{N}\sum_{i=1}^{N} (\mathrm{point}_i - \mathrm{average})^2 = \frac{1}{N}\sum_{i=1}^{N} \mathrm{point}_i^2 - \mathrm{average}^2$   (7)
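As a worked sketch of Equations (4)-(7) in Python, the snippet below computes a shot's score and the round statistics; the linear interpolation between the 9 and 10 ring radii in score() is one reading of Equation (5), and the ring-radius dictionary is an assumption.

import math

def score(hit, centre, ring_radii):
    # ring_radii: {ring number: radius}, e.g. {10: r10, 9: r9, ...}, as derived in Section 3.2
    r = math.hypot(hit[0] - centre[0], hit[1] - centre[1])            # Eq. (4)
    step = ring_radii[9] - ring_radii[10]                             # one ring width
    return min(10.0, max(0.0, 10.0 - (r - ring_radii[10]) / step))    # Eq. (5), interpolated

def statistics(points):
    n = len(points)
    average = sum(points) / n                                         # Eq. (6)
    variance = sum(p * p for p in points) / n - average ** 2          # Eq. (7)
    return average, variance, max(points), min(points)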
4.2 Visualization interface
In the visualization interface, after the ring numbers have been processed, the interface marks all contours in red and each ring contour in green.
After each shot, the system circles the bullet point, records the individual score and displays each analysis index. At the end of a round, the system records the number of training sessions and compares the average score of the new round with that of the previous round, helping the user assess their technical condition.
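A minimal sketch of this overlay drawing is shown below; the colours (BGR), radii and font settings are illustrative choices rather than the interface's actual styling.

import cv2

def draw_overlay(img, all_contours, ring_contours, bullet_xy, point):
    cv2.drawContours(img, all_contours, -1, (0, 0, 255), 1)      # all contours in red
    cv2.drawContours(img, ring_contours, -1, (0, 255, 0), 2)     # ring contours in green
    if bullet_xy is not None:
        cv2.circle(img, bullet_xy, 8, (255, 0, 0), 2)            # circle the latest bullet point
        cv2.putText(img, "%.1f" % point, (bullet_xy[0] + 10, bullet_xy[1]),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1)
    return img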
Figure 4. Ring count recognition interface.
Figure 5. Bullet point analysis interface.
5. Experiments and analysis
5.1 Experimental Environment
The whole system was tested with a 20 cm × 20 cm chest target in a room with a light intensity of 1000 lux. The camera was placed 1 m from the target paper so that it captured the entire target surface.
5.2 Test index
5.2.1. Target ring identification. For the calculation of the ring distance, with a true ring distance of 2 cm, the comparison between the measured and theoretical values is shown in Table 1.
Table 1. Ring distance error rate.
Number of extractions | Average test value (cm) | Error (%)
1                     | 2.102                   | 5.1
5                     | 2.042                   | 2.1
10                    | 2.051                   | 2.55
5.2.2 Bullet point identification. For the extraction of bullet points, the actual number of hits landing in each ring area over repeated shots is compared with the number of hits the system identified in that area, as shown in Table 2.
Table 2. Bullet point recognition rate.
Ring area | Number of actual hits | Number identified by system | Recognition rate (%)
10        | 22                    | 22                          | 100
10-9      | 42                    | 41                          | 97.62
9-8       | 52                    | 51                          | 98.08
8-7       | 35                    | 37                          | 105.71
The program has a high recognition rate in rings 10 to 8. For the 8-7 ring, whose boundary is extrapolated by the algorithm, hits from neighbouring rings were occasionally misjudged and counted into it, producing the 5.71% over-count shown for that area.
5.2.3 Program Robustness. In the robustness test, the program responds to possible recognition errors with the following actions (Table 3).
Table 3. Robustness testing.
Errors | Prompt | Test times | Successful times
No picture entered | "Image not recognized" | 20 | 20
Overflow stack occurred | "Insufficient memory" | 1 | 1
Image corruption | "Image cannot be processed" | 2 | 2
Incomplete images | "Incomplete recognition" | 10 | 10
Bullet points coincide | "No shot / bullet point overlap" | 20 | 20
Analysis without shooting | "No shot / bullet point overlap" | 20 | 20
Based on this, it can be concluded that the program is sufficiently robust: it issues a prompt for each of these fault conditions, and the prompt success rate is 100%.
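The fault handling above could be organised as a simple guard around the analysis routine, as in the sketch below; the exception types and messages are illustrative, not the program's actual implementation.

def analyse_safely(analyse, frame):
    try:
        if frame is None:
            return "Image not recognized"          # no picture entered
        return analyse(frame)                      # normal processing path
    except MemoryError:
        return "Insufficient memory"               # overflow / stack exhaustion
    except ValueError:
        return "Image cannot be processed"         # corrupted or incomplete image data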
6. Conclusion
This paper introduces a lightweight target-shooting analysis device with a Raspberry Pi as the main controller. The system can accurately process target shooting images, analyse the performance indicators, and display them to the user through the user interface.
In testing, the error in locating the target rings stays within 5%, the recognition success rate exceeds 97%, friendly prompt messages are shown for the various error conditions, and the whole system is highly robust. Owing to the cross-platform nature of Python and Qt, the program can readily be ported to non-Linux systems.
This system offers an approach for future image-processing applications in daily life and can be extended to test and analyse shooting data in more dimensions. Such multi-dimensional, large-scale data could in turn be used for deep learning and predictive analysis of a shooter's condition. It also provides design ideas for the development of image processing on lightweight devices.
References
[1] Pan Nan, Jiang Xuemei, Pan Dilin, et al. Bullet fast matching based on single point laser detection [J]. Journal of Intelligent & Fuzzy Systems, 2021, 40(4).
[2] Y. W, Q. Z. A Non-Photorealistic Approach to Night Image Enhancement Using Human JND [C]. 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 2019.
[3] Feng Y, Zhao H Y, Li X F. A multi-scale 3D Otsu thresholding algorithm for medical image segmentation [J]. Digital Signal Processing, 2017, 60(1): 186-199.
[4] F. H, Y. I, M. H, et al. Image Denoising With Edge-Preserving and Segmentation Based on Mask NHA [J]. IEEE Transactions on Image Processing, 2015, 24(12): 6025-6033.
[5] He Chenggang, Ding Chris H. Q., Chen Sibao, et al. Bullet Engraving Automated Comparison Optimization Method Based on Second Moment Invariant [J]. Journal of Physics: Conference Series, 2021, 1746(1).
[6] Kolya Kumar Anup, Mondal Debasish, Ghosh Alokesh, et al. Direction and Speed Control of DC Motor Using Raspberry PI and Python-Based GUI [J]. International Journal of Hyperconnectivity and the Internet of Things (IJHIoT), 2021, 5(2).
[7] K. Sarat Kumar, P. Kanakaraja, K. Ch. Sri Kayva, et al. Artificial Intelligence (Ai) and Personal Assistance for Disabled People using Raspberry Pi [J]. International Journal of Innovative Technology and Exploring Engineering (IJITEE), 2019, 8(7).