High-performance subpixel edge location based on FPGA for
horizon sensors
Huajian Deng1, 2, 3, Hao Wang1, 2, 3, * and Zhonghe Jin1, 2, 3
1 Micro-Satellite Research Center, Zhejiang University, Hangzhou, 310027, China
2 Huanjiang Laboratory, Zhuji, 311899, China
3 Key Laboratory of Micro-Nano Satellite Research Zhejiang Province, Hangzhou,
310027, China
* Corresponding author’s e-mail: roger@zju.edu.cn
Abstract. Horizon edge localization accuracy and speed are key factors in the performance of
horizon sensors. This paper proposes a high-performance sub-pixel edge localization algorithm
based on Field Programmable Gate Array (FPGA) for horizon sensors. The algorithm is
carefully designed and simplified according to the computational capabilities and limitations of
FPGAs. By making full use of the parallel computing capability of FPGA and carrying out
multi-stage pipeline design, the algorithm can complete image acquisition, rough edge
localization, sub-pixel edge localization, and projections from pixel points to unit vectors at the
same time, which greatly reduces the delay caused by image processing. The experimental
results show that FPGA-based image processing shows a significant superiority in terms of
speed compared to traditional embedded processors.
1. Introduction
Horizon sensors provide attitude and position information to a spacecraft by extracting the horizon of the target celestial body [1]. They are widely used in near-Earth and deep space exploration missions for their simplicity, high reliability, and strong autonomy [2-3]. Compared to conventional scanning horizon sensors, which contain moving components, imaging horizon sensors are less difficult to develop, and a number of imaging horizon sensors, each with its own characteristics, have been developed [4-6].
Higher performance is a constant pursuit in the development of sensors. According to covariance analysis, the horizon edge localization accuracy directly affects the horizon sensor's accuracy. More precise edge localization algorithms and higher-resolution detectors are both effective in improving a sensor's accuracy. Christian [7] proposed a sub-pixel edge extraction algorithm based on Zernike moments, along with a comprehensive set of image processing steps, which is certainly a good guideline for related applications. Zhang et al. [8] proposed a gradient-direction partial area effect method, and tests on simulated and real images demonstrate its superior localization accuracy and robustness. Kikuya et al. [9] employed a 3280×2464 resolution camera to achieve three-axis attitude determination using horizon extraction and terrain matching. However, the computational effort associated with complex algorithms and high-resolution detectors can significantly slow down the horizon sensor, which is unacceptable for demanding applications.
FPGA is commonly used in horizon sensor hardware, where it is often responsible for communicating with the camera chip to capture images. As the capabilities of FPGAs increase, more image processing steps can be done in the FPGA, thereby gaining significant speed advantages.
FPGA-based image processing for star trackers is well established [10]. Various edge location algorithms, including the Sobel algorithm [11], the Canny algorithm [12], and a Hessian matrix-based algorithm [13], have also been implemented. Therefore, high-performance horizon edge localization algorithms should also be developed on FPGA.
2. Algorithm design
2.1. Rough edge localization
The horizon edge is usually the region of the image where the grayscale changes most dramatically. Thus, horizon edges can be roughly extracted using the Sobel operator, image binarization, and image erosion.
The Sobel operator performs a 2D spatial gradient measurement on images, and its specific steps are as follows. Consider a particular 3×3 neighbourhood in the image:

$$\begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix} \qquad (1)$$
where $p_{11}, p_{12}, \dots, p_{33}$ are the grayscales of the pixels. Then the horizontal gradient $G_x$ and the vertical gradient $G_y$ of $p_{22}$ are:

$$G_x = (p_{13} + 2p_{23} + p_{33}) - (p_{11} + 2p_{21} + p_{31}) \qquad (2)$$

$$G_y = (p_{31} + 2p_{32} + p_{33}) - (p_{11} + 2p_{12} + p_{13}) \qquad (3)$$
From this, the Sobel gradient $G$ of $p_{22}$ is:

$$G = \sqrt{G_x^2 + G_y^2} \qquad (4)$$
The squaring and square-root operations are often very time-consuming, so the Sobel gradient $G$ can be approximated as:

$$G \approx |G_x| + |G_y| \qquad (5)$$
Since these gradients are not directly involved in subsequent computations, this approximation does not affect the final performance. For convenience, the resulting data are noted as Sobel gradient data.
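To make the data flow concrete, the following is a minimal NumPy sketch of equations (2), (3) and (5). It is an illustrative software model, not the authors' HDL; the function name and array-slicing style are our own.

```python
import numpy as np

def sobel_gradient_approx(img):
    """Approximate Sobel gradient |Gx| + |Gy| of an 8-bit grayscale image.

    Software model of equations (2), (3) and (5); the FPGA version forms
    the same sums from two row FIFOs at one pixel per clock cycle.
    """
    p = img.astype(np.int32)                      # avoid uint8 overflow
    # Horizontal gradient Gx, equation (2): right column minus left column
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]) \
       - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2])
    # Vertical gradient Gy, equation (3): bottom row minus top row
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) \
       - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
    # Equation (5): |Gx| + |Gy| replaces the squares and square root of (4)
    return np.abs(gx) + np.abs(gy)
```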
Based on the Sobel gradient $G$, edges can be initially extracted using image binarization. Specifically, for a pixel in the image, if its corresponding Sobel gradient $G$ is greater than the Sobel gradient threshold $T$, it is noted as 1; otherwise it is noted as 0. The resulting data are noted as image binarization data. The Sobel gradient distribution of an image tends to vary with the lighting conditions, and a constant threshold $T$ is often not sufficient. Therefore, the mean value of the image's Sobel gradient multiplied by an empirical factor of 4 is chosen as the dynamically varying threshold $T$.
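The binarization step can be sketched in the same style. The factor of 4 is the paper's empirical value; taking the mean from the previous frame, as done on the FPGA (see section 3), is modelled here by passing that mean as an argument.

```python
import numpy as np

def binarize(grad, prev_mean, factor=4):
    """Binarize Sobel gradient data with the dynamic threshold T.

    T = factor * mean gradient. On the FPGA the mean comes from the
    previous frame so that thresholding can run in the same pass.
    """
    T = factor * prev_mean
    return (grad > T).astype(np.uint8)
```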
In addition to the horizon edges, the portion of the binarized image that is 1 will also include surface texture. The horizon edges tend to be thick and bright compared to the surface texture. Thus, image erosion can help to roughly filter out the horizon edges in binary images. The structure element chosen for image erosion is:

$$B = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \qquad (6)$$
In other words, if the image binarization data corresponding to a pixel and its eight neighbouring pixels are all 1, the pixel is noted as 1; otherwise, it is noted as 0. The resulting data are noted as image erosion data.
At this point, the rough horizon edges of the image are localized, and they are marked as 1 in the image erosion data.
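A minimal sketch of the erosion with the all-ones structure element $B$ follows; a pixel survives only if its whole 3×3 neighbourhood is 1. The border handling (no padding) is our own simplification.

```python
import numpy as np

def erode_3x3(binary):
    """Erode a binary image with the all-ones 3x3 structure element B.

    Keeps a pixel only if its entire 3x3 neighbourhood is 1, so thick
    horizon edges survive while thin surface texture is removed.
    """
    b = binary.astype(bool)
    h, w = b.shape
    out = np.ones((h - 2, w - 2), dtype=bool)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out &= b[di:h - 2 + di, dj:w - 2 + dj]   # AND over the window
    return out.astype(np.uint8)
```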
The sub-pixel edge localization algorithm based on Zernike moments [7], [14] is chosen according to the characteristics of horizon edges. The principle and key procedures of the algorithm are briefly reviewed as follows.
A circular area of diameter $N$ is taken for analysis, with one rough horizon edge point as the origin. Define the horizontal direction of the pixel sensor as axis $u$ and the vertical direction as axis $v$. Rotate the axes $u$ and $v$ until $u$ is exactly perpendicular to the edge of the horizon. Denote the rotated $u$ as $u'$, the rotated $v$ as $v'$, and the rotation angle as $\varphi$. Then, based on the imaging principle of the camera and the properties of the horizon, the distribution of the horizon edge can be described by the following equation [7]:

$$I(u') = \begin{cases} h & u' < l - w \\ h + k\,(u' - l + w)/(2w) & l - w \le u' \le l + w \\ h + k & u' > l + w \end{cases} \qquad (7)$$
where $h$ represents the grayscale of the background region, $h + k$ represents the grayscale of the planet surface, $2w$ represents the width of the horizon edge, and $l$ represents the distance from the origin to the horizon edge. The grayscale distribution of the horizon edge described by this model is shown in figure 1.
Figure 1. Grayscale distribution of the horizon edge described by the ideal model.
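For reference, the piecewise model of equation (7) is trivial to evaluate; the short sketch below (function name our own) reproduces the ramp profile of figure 1.

```python
import numpy as np

def horizon_edge_model(u, h, k, l, w):
    """Ideal grayscale profile across the horizon edge, equation (7).

    h: background grayscale; h + k: planet-surface grayscale;
    2*w: edge width; l: distance from the origin to the edge centre;
    u: coordinate along the rotated axis u' perpendicular to the edge.
    """
    ramp = h + k * (u - l + w) / (2 * w)          # linear transition zone
    return np.where(u < l - w, h, np.where(u > l + w, h + k, ramp))
```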
The key parameters in the above model can be solved by using the rotational invariance of the Zernike moments [14], and their formulas are as follows:

$$\varphi = \operatorname{atan2}\left(A_{11\mathrm{Im}},\, A_{11\mathrm{Re}}\right) \qquad (8)$$

$$A'_{11} = A_{11\mathrm{Re}} \cos\varphi + A_{11\mathrm{Im}} \sin\varphi \qquad (9)$$

$$l = \frac{A_{20}}{A'_{11}} \qquad (10)$$

$$\begin{bmatrix} u'_i \\ v'_i \end{bmatrix} = \begin{bmatrix} u_i \\ v_i \end{bmatrix} + \frac{N l}{2} \begin{bmatrix} \cos\varphi \\ \sin\varphi \end{bmatrix} \qquad (11)$$

where $(u_i, v_i)$ represents the coordinates of the rough horizon edge point and $(u'_i, v'_i)$ represents the coordinates of the sub-pixel horizon edge point. $A_{11}$ and $A_{20}$ represent the Zernike moments evaluated at $(u_i, v_i)$; $A_{11\mathrm{Re}}$ and $A_{11\mathrm{Im}}$ represent the real and imaginary parts of $A_{11}$, respectively; and $A'_{11}$ represents $A_{11}$ after rotation by the angle $\varphi$.
When $N$ is equal to 5, the digitized Zernike moments are calculated as follows [14]:

$$A_{11\mathrm{Re}} = M_{11\mathrm{Re}} * I, \qquad A_{11\mathrm{Im}} = M_{11\mathrm{Re}}^{T} * I, \qquad A_{20} = M_{20} * I \qquad (12)$$

where $*$ represents the image correlation operator and $I$ is the image.
$$M_{11\mathrm{Re}} = \begin{bmatrix} -0.0147 & -0.0469 & 0 & 0.0469 & 0.0147 \\ -0.0933 & -0.0640 & 0 & 0.0640 & 0.0933 \\ -0.1253 & -0.0640 & 0 & 0.0640 & 0.1253 \\ -0.0933 & -0.0640 & 0 & 0.0640 & 0.0933 \\ -0.0147 & -0.0469 & 0 & 0.0469 & 0.0147 \end{bmatrix} \qquad (13)$$
$$M_{20} = \begin{bmatrix} 0.0176 & 0.0595 & 0.0506 & 0.0595 & 0.0176 \\ 0.0595 & -0.0491 & -0.1003 & -0.0491 & 0.0595 \\ 0.0506 & -0.1003 & -0.1515 & -0.1003 & 0.0506 \\ 0.0595 & -0.0491 & -0.1003 & -0.0491 & 0.0595 \\ 0.0176 & 0.0595 & 0.0506 & 0.0595 & 0.0176 \end{bmatrix} \qquad (14)$$
Considering the difficulty of implementing trigonometric functions in an FPGA, equations (8)-(11) can be simplified as follows:

$$u'_i = u_i + \frac{5\, A_{20} A_{11\mathrm{Re}}}{2\left(A_{11\mathrm{Re}}^2 + A_{11\mathrm{Im}}^2\right)}, \qquad v'_i = v_i + \frac{5\, A_{20} A_{11\mathrm{Im}}}{2\left(A_{11\mathrm{Re}}^2 + A_{11\mathrm{Im}}^2\right)} \qquad (15)$$
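The following floating-point sketch ties equations (12)-(15) together for one rough edge point. It is illustrative only: the masks reproduce equations (13) and (14), the sign pattern follows the standard masks of [14], and the actual FPGA design uses fixed-point arithmetic.

```python
import numpy as np

# Zernike moment masks for N = 5, equations (13) and (14); M11Im is the
# transpose of M11Re, equation (12). Signs follow the masks of [14].
M11RE = np.array([
    [-0.0147, -0.0469, 0, 0.0469, 0.0147],
    [-0.0933, -0.0640, 0, 0.0640, 0.0933],
    [-0.1253, -0.0640, 0, 0.0640, 0.1253],
    [-0.0933, -0.0640, 0, 0.0640, 0.0933],
    [-0.0147, -0.0469, 0, 0.0469, 0.0147]])
M11IM = M11RE.T
M20 = np.array([
    [0.0176,  0.0595,  0.0506,  0.0595, 0.0176],
    [0.0595, -0.0491, -0.1003, -0.0491, 0.0595],
    [0.0506, -0.1003, -0.1515, -0.1003, 0.0506],
    [0.0595, -0.0491, -0.1003, -0.0491, 0.0595],
    [0.0176,  0.0595,  0.0506,  0.0595, 0.0176]])

def subpixel_edge(img, ui, vi):
    """Refine rough edge point (ui, vi) to subpixel accuracy, equation (15).

    ui indexes columns (axis u) and vi indexes rows (axis v).
    """
    patch = img[vi - 2:vi + 3, ui - 2:ui + 3].astype(float)  # 5x5 window
    a11re = float(np.sum(M11RE * patch))   # correlation, equation (12)
    a11im = float(np.sum(M11IM * patch))
    a20 = float(np.sum(M20 * patch))
    d = a11re**2 + a11im**2
    # 2.5 = N/2 with N = 5, per equation (15)
    return ui + 2.5 * a20 * a11re / d, vi + 2.5 * a20 * a11im / d
```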
2.3. Projections from pixel points to unit vectors
The sub-pixel horizon edge points in the image frame need to be projected into three-dimensional space to remove the interference of camera distortion and to participate in further computation. Scaramuzza's model [15] is used to describe the projection process, and its specific steps are as follows:
$$\begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = \begin{bmatrix} u_i - u_0 \\ v_i - v_0 \\ a_0 + a_2 \rho^2 + a_3 \rho^3 + a_4 \rho^4 \end{bmatrix}, \qquad \rho = \sqrt{(u_i - u_0)^2 + (v_i - v_0)^2} \qquad (16)$$

where $[x_i \; y_i \; z_i]^T$ represents the corresponding three-dimensional vector, $(u_0, v_0)$ represents the distortion center of the camera, and $a_0, a_2, a_3$ and $a_4$ represent the distortion parameters of the camera. The calibration method for these parameters can be found in [15] or [16].
In further computations, unit vectors are often required. Preparing them in the FPGA effectively saves time, and the specific step is as follows:

$$\begin{bmatrix} \hat{x}_i \\ \hat{y}_i \\ \hat{z}_i \end{bmatrix} = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} \Big/ \sqrt{x_i^2 + y_i^2 + z_i^2} \qquad (17)$$

where $[\hat{x}_i \; \hat{y}_i \; \hat{z}_i]^T$ represents the corresponding unit vector.
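Equations (16) and (17) combine into a short projection routine. The sketch below assumes the calibration parameters $u_0, v_0, a_0, a_2, a_3, a_4$ obtained by the methods of [15] or [16] are available; the function name is our own.

```python
import numpy as np

def pixel_to_unit_vector(ui, vi, u0, v0, a0, a2, a3, a4):
    """Project a subpixel edge point to a unit vector, equations (16)-(17)."""
    x, y = ui - u0, vi - v0                   # shift to the distortion centre
    rho = np.hypot(x, y)                      # radial distance in the image
    z = a0 + a2 * rho**2 + a3 * rho**3 + a4 * rho**4   # Scaramuzza polynomial
    v = np.array([x, y, z])
    return v / np.linalg.norm(v)              # unitization, equation (17)
```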
3. Implementation based on FPGA
The significant advantage of FPGA for image processing is that different algorithms can be carried out
in parallel at the time of image data acquisition and each specific process can be pipelined. Overall
design for FPGA-based implementation is shown in figure 2. As shown, the implementation process is
mainly divided into two pipelines. The first pipeline is only used for rough edge localization, while the
sub-pixel edge localization and the projection processes are fused in the second pipeline. The data
from the two pipelines are finally aligned and processed. The specific steps in figure 2 are briefly
described as follows.
Image data stream: It is generated as the image sensor data is acquired. It contains a frame synchronization signal to mark the beginning and end of an image frame, a pixel data signal to carry the grayscale data, and a pixel valid signal to mark the validity of the grayscale data.
Calculation of Sobel gradients: It can be described using equations (2), (3) and (5). Besides, two FIFOs are used to cache two rows of pixel data for the 3×3 neighbourhood calculations (a software emulation of this row-buffering scheme is sketched after this list).
Update of Sobel threshold: It mainly contains an accumulator and a division process.
Image binarization: It requires only one comparison statement to implement. Note that the Sobel threshold computed from the previous image is used for the current image. Considering the similarity of two neighbouring images, this design is effective in reducing latency and ensuring reliability.
Image erosion: It requires only a simple logical statement. Similarly, two FIFOs are used to cache two rows of pixel data for the 3×3 neighbourhood calculations.
Calculation of Zernike moments: It can be described using equations (12)-(14). Besides, four FIFOs are used to cache four rows of pixel data for the 5×5 neighbourhood calculations.
Calculation of subpixel edge: It can be described using equation (15), and its FPGA
implementation schematic is shown in figure 3.
Three-dimensional projection: It can be described using equation (16), and its FPGA
implementation schematic is shown in figure 4.
Vector unitization: It can be described using equation (17), and its FPGA implementation
schematic is shown in figure 5.
Synchronization of data stream: It introduces a number of delay modules, thus ensuring the
synchronization of the image erosion data, the current subpixel edge data, and the unit vector data.
Storage of valid data: Identify and store valid data based on the image erosion data. That is, when
the image erosion data is 1, the current subpixel edge data and vector data are stored.
Transmission of valid data: Depending on the architecture of the system, valid data usually needs
to be transferred to an ARM or DSP for further processing. The transmission task is greatly simplified
by the fact that only valid edge point data needs to be transmitted. For example, DMA or SPI are
capable of this task.
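As mentioned for the Sobel step, each 3×3 (or 5×5) neighbourhood operation is fed by row FIFOs. The behavioural Python model below illustrates the scheme for a 3×3 window; the real design is HDL, and the generator-based interface is our own illustration.

```python
import numpy as np

def stream_3x3(img):
    """Behavioural model of the row-FIFO scheme: raster-scan the image one
    pixel per 'clock' and yield each complete 3x3 window with its centre."""
    h, w = img.shape
    rows = np.zeros((2, w), dtype=img.dtype)  # FIFO contents: rows y-2, y-1
    win = np.zeros((3, 3), dtype=img.dtype)   # 3x3 shift window
    for y in range(h):
        for x in range(w):
            win[:, :2] = win[:, 1:]           # shift the window left
            win[0, 2] = rows[0, x]            # pixel from row y-2
            win[1, 2] = rows[1, x]            # pixel from row y-1
            win[2, 2] = img[y, x]             # incoming pixel
            rows[0, x] = rows[1, x]           # push the pixel through FIFOs
            rows[1, x] = img[y, x]
            if y >= 2 and x >= 2:
                yield (y - 1, x - 1), win.copy()
```

Because a window is ready as soon as two rows and three pixels have arrived, every downstream stage (gradient, threshold, erosion, Zernike correlation) can run at the pixel clock rate, which is what bounds the total latency to roughly one hundred clock cycles after the last pixel (section 4).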
Figure 2. Overall design for FPGA-based
implementation of subpixel edge location.
Figure 3. FPGA implementation schematic of subpixel edge calculation.
Figure 4. FPGA implementation schematic of three-dimensional projection.
Figure 5. FPGA implementation schematic of vector unitization.
4. Experimental results
Earth images rendered by Blender are selected as experimental data. The Earth's surface texture is set to self-illuminate to simulate an infrared image. The Earth's bump map is set to displacement to simulate the Earth's terrain. Volume scatter and cloud mapping are used to simulate the effects of the atmosphere. The simulated horizon sensor is equipped with an infrared fisheye camera with a resolution of 512×512 pixels. The fisheye camera follows the equidistant projection, and it has a 180° circular FOV inscribed in the square sensor area. The rendered images are convolved with a Gaussian kernel with a standard deviation of 1 pixel to simulate the effect of defocusing. An example synthetic infrared Earth image is shown in figure 6.
The proposed algorithm is implemented on a ZYNQ XC7Z020 development board. The running time of the algorithm depends on the camera's pixel clock. The algorithm finishes only about one hundred clock cycles after image capture is completed. When the camera's pixel clock frequency is 100 MHz, the processing time of one image is only 2.62 ms (512 × 512 pixels at one pixel per clock cycle ≈ 2.62 ms), which is certainly satisfactory. The results of subpixel edge localization are shown in figure 7, and a localized zoom-in view is shown in figure 8. It can be seen that the horizon edges are well extracted. Although some interference points are inevitably introduced, subsequent processing algorithms, such as the RANSAC algorithm [7], can handle them well.
As a comparison, the same algorithm is implemented on a TMS320C6748 DSP development board with a system clock frequency of 456 MHz. The DSP's time consumption is greatly limited by its serial processing, even though it has a high clock frequency. The experimental results show that the time consumed for subpixel edge location is about 137 ms. In addition, extra time is required for
capturing and transferring the image data to the DSP. It can be seen that FPGA-based image processing has a significant advantage over the DSP in terms of speed.
Figure 6. Example synthetic
infrared Earth image with an
orbital altitude of 500 km.
Figure 7. Example synthetic
infrared Earth image with
subpixel edge.
Figure 8. Zoom-in view of
the rectangular area in
figure 7.
5. Conclusions
This paper proposes an FPGA-based high-performance sub-pixel edge localization algorithm for horizon sensors. According to the computational capabilities and limitations of FPGAs, the rough edge localization algorithm and the sub-pixel edge localization algorithm are modified and simplified to make them easy to implement in an FPGA. The algorithm makes full use of the parallel computing capability of the FPGA, and through a multi-stage pipeline design, it completes the pixel-level edge localization and sub-pixel edge localization of the image at the same time as the image acquisition, which greatly reduces the latency caused by image processing. In addition, the algorithm simultaneously completes the distortion-removing projection and solves the unit vectors corresponding to the sub-pixel edge points, which saves time for the subsequent computer vision algorithms. The experimental results show that FPGA-based image processing shows a significant superiority in terms of speed compared to a DSP.
References
[1] Christian, J. A. 2021 A tutorial on horizon-based optical navigation and attitude determination
with space imaging systems IEEE Access 9 19819-19853
[2] Shang, L., Chang, J., Zhang, J., and Li, G. 2018 Precision analysis of autonomous orbit
determination using star sensor for Beidou MEO satellite Adv. Space Res. 61(8) 1975-1983
[3] Teil, T., Schaub, H., and Kubitschek, D. 2021 Centroid and Apparent Diameter Optical
Navigation on Mars Orbit J. Spacecraft Rockets 58(4) 1107-1119
[4] Wang, H., Wang, Z. Y., Wang, B. D., Jin, Z. H., and Crassidis, J. L. 2021 Infrared Earth sensor
with a large field of view for low-Earth-orbiting micro-satellites Front Inform. Tech. El. 22(2)
262-271
[5] Nguyen, T., Cahoy, K., and Marinan, A. 2018 Attitude determination for small satellites with
infrared earth horizon sensors. J. Spacecraft Rockets 55(6) 1466-1475
[6] Enright, J., Jovanovic, I., Kazemi, L., Zhang, H., and Dzamba, T. 2018 Autonomous optical
navigation using nanosatellite-class instruments: a Mars approach case study Celest. Mech. Dyn.
Astr. 130 1-31
[7] Christian, J. A. 2017 Accurate planetary limb localization for image-based spacecraft navigation
J. Spacecraft Rockets 54(3) 708-730
[8] Zhang, Y., Jiang, J., Zhang, G., and Lu, Y. 2019 High-accuracy location algorithm of planetary
centers for spacecraft autonomous optical navigation Acta Astronaut. 161 542-551
[9] Kikuya, Y., Iwasaki, Y., Yatsu, Y., and Matunaga, S. 2021 Attitude determination algorithm
using Earth sensor images and image recognition Trans. Japan Soc. Aero. S Sci. 64(2) 82-90
[10] Wang, X., Wei, X., Fan, Q., Li, J., and Wang, G. 2015 Hardware implementation of fast and
robust star centroid extraction with low resource cost IEEE Sens. J. 15(9) 4857-4865
[11] Chaple, G., and Daruwala, R. D. 2014 Design of Sobel operator based image edge detection
algorithm on FPGA. International Conference on Communication and Signal Processing
(Melmaruvathur: IEEE) pp 788-792
[12] Xu, Q., Varadarajan, S., Chakrabarti, C., and Karam, L. J. 2014 A distributed canny edge
detector: algorithm and FPGA implementation IEEE Trans. Image Process. 23(7) 2944-2960
[13] Jiang, J., Liu, C., and Ling, S. 2018 An FPGA implementation for real-time edge detection J.
Real-Time Image Proc 15 787-797
[14] Ghosal, S., and Mehrotra, R. 1993 Orthogonal moment operators for subpixel edge detection Pattern Recognit. 26(2) 295-306
[15] Scaramuzza, D., Martinelli, A., and Siegwart, R. 2006 A toolbox for easily calibrating
omnidirectional cameras IEEE/RSJ International Conference on Intelligent Robots and Systems
(Beijing: IEEE) pp. 5695-5701
[16] Deng, H., Wang, H., Han, X., Liu, Y., and Jin, Z. 2023 Camera calibration method for an
infrared horizon sensor with a large field of view Front Inform. Tech. El. 24(1) 141-153