Review article
How to improve the attitude accuracy of the star sensor under dynamic
conditions: A review
Liheng Ma a,*, Dongkai Dai b, Yuanman Ni c
aCollege of Weapons Engineering, Naval University of Engineering, No. 717, Jiefang Avenue, Qiaokou District, Wuhan, 430033, Hubei Province, China
bCollege of Advanced Interdisciplinary Studies, National University of Defense Technology, No. 109, Deya Road, Kaifu District, Changsha, 410073, Hunan
Province, China
cLaboratory of Intelligent Control, Rocket Force University of Engineering, No. 2, Tongxin Road, Baqiao District, Xi’an, 710025, Shaanxi Province, China
A R T I C L E I N F O
Keywords:
Star sensor
Attitude accuracy
Dynamic conditions
Active and passive deblurring
Motion blur compensation
A B S T R A C T
Star sensors experience significant degradation in attitude measurement accuracy under dynamic conditions,
primarily caused by motion-induced star image blurring and a reduced number of effective stars. This review analyzes
the underlying mechanisms responsible for accuracy deterioration in dynamically perturbed star sensors. As the
first comprehensive survey in this field, the paper proposes a novel taxonomy classifying existing mitigation
strategies into two paradigm categories: active and passive deblurring techniques and motion blur compensa-
tion methods. We critically analyze representative approaches within each category, including optical system
optimization, multi-FOV configurations, servo stabilization platforms, and algorithm-driven solutions such as
image restoration and attitude-correlated frame processing. A comparative evaluation highlights the advantages
and limitations of these methods through quantitative performance metrics and implementation constraints.
Furthermore, the study establishes selection criteria emphasizing that optimal strategy adoption must consider
application-specific requirements. This work provides both a technical reference for star sensor designers and
a framework guiding future research directions in high-dynamics attitude determination.
1. Introduction
The star sensor, as an absolute attitude measurement device, has
gained extensive applications in spacecraft, satellites, ballistic missiles,
and marine navigation systems due to its arc-second-level precision [1–4]. Operating by acquiring images of stars (excluding the Sun) with an optical sensor, it executes sequential procedures including star
image preprocessing, star centroiding, star identification, and attitude
determination to establish spatial orientation from its body coordinate
system with respect to the inertial celestial coordinate system.
The cross-boresight (corresponding to the $x$ and $y$ axes) noise equivalent angle (NEA) error $E_{ss}$ can be used to evaluate the attitude accuracy of the star sensor, and it is calculated as follows [5]:

$$E_{ss} = \frac{FOV_{1D}}{n_{pixel}} \cdot \frac{E_{centroid}}{\sqrt{n_{star}}} \tag{1}$$

where $FOV_{1D}$ (degrees) denotes the field of view of the star sensor in one dimension, $n_{pixel}$ (pixels) the sensor resolution, $E_{centroid}$ (pixels) the average centroid accuracy, and $n_{star}$ the average number of detectable stars per frame.
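As a quick numerical illustration of Eq. (1), the minimal sketch below evaluates the cross-boresight NEA for a set of representative parameter values; the numbers are assumptions chosen for illustration, not the specifications of any particular sensor.

```python
import math

def nea_arcsec(fov_deg, n_pixel, e_centroid_px, n_star):
    """Cross-boresight NEA from Eq. (1), converted from degrees to arcseconds."""
    return (fov_deg / n_pixel) * (e_centroid_px / math.sqrt(n_star)) * 3600.0

# Assumed values: 20 deg one-dimensional FOV, 1024-pixel detector,
# 0.1-pixel average centroid accuracy, 5 effective stars per frame.
print(nea_arcsec(20.0, 1024, 0.1, 5))   # ~3.1 arcsec
```

Halving the centroid error or quadrupling the number of effective stars each halves the NEA, which is precisely the lever that the methods reviewed below try to pull under dynamic conditions.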
Corresponding author.
E-mail address: jeremiahmax@163.com (L. Ma).
For a given star sensor, $FOV_{1D}$ and $n_{pixel}$ are constant; therefore, according to Eq. (1), the attitude accuracy of the star sensor is mainly limited by the accuracy of the star centroid, and increasing the number of effective star images improves the attitude accuracy to a certain extent. However, under dynamic conditions, such
as satellite orbital adjustments, spacecraft maneuvering, ship swinging,
and other angular motion scenarios, there is angular motion around
one or more axes of the star sensor, and star images become motion-
blurred. The limited energy of a star image is dispersed over a dozen or even several tens of pixels, whereas the noise in each pixel remains almost unchanged, leading to an abrupt decrease in the signal-to-noise ratio (SNR) of the star image and a corresponding degradation in star centroiding accuracy. In addition, some dim stars are drowned in noise, and thus the number of effective star images is reduced, as shown in Fig. 1 [6]. Fig. 1 compares
star images under static (a) (star images are marked in red circles) and
dynamic (b) (star images are marked in red rectangles and numbered)
conditions. Both were obtained under the real night sky on the same night, with an exposure interval of about two minutes between them, meaning that the reduction in the number of star images is caused by the dynamic condition.
Fig. 1. Star images under static (a) and dynamic conditions (b). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
The 2D energy distribution of a star image in Fig. 1 is magnified and shown on the right side. A comparison
of the two figures shows that the number of star images obtained under dynamic conditions is significantly reduced and the energy of each star image is dispersed over a dozen pixels, which significantly reduces the SNR. This dual degradation mechanism severely compromises
both centroid precision and pattern recognition reliability, ultimately
propagating attitude errors beyond mission tolerances.
Despite advances since pioneering star tracker designs in the 1990s
[7], dynamic performance enhancement remains a persistent challenge.
In this review, depending on whether star image motion blur is gener-
ated or not, existing methods are divided into two categories. Motion
blur is caused by the relative angular motion between the star sensor
and the star to be observed. Hence, to reduce the degree of motion
blur of the star image or to suppress the formation of motion blur,
the most direct method is to reduce the relative motion between them
during the exposure time of the star sensor. A category of enhancement
methods based on this principle is called active and passive deblurring
methods (APDM) in this manuscript, which mainly includes improving
the performance of the optical imaging system, multiple field-of-view
(FOV) star sensor, servo-tracking platform techniques, dynamic binning
algorithm (DBA), and time-delayed integration (TDI) method. Another
category of methods compensates for the motion-blurred star image through software algorithms applied after the motion blur has formed, and is named the motion blur compensation
methods (MBCM) in the review. MBCM includes motion-blurred star
image restoration algorithms, star centroid with filtering method, and
attitude-correlated frame (ACF) approach.
In this article, representative research on improving the attitude ac-
curacy of the star sensor under dynamic conditions, including conceptual ideas, principles, results, advantages, and disadvantages, is reviewed
and summarized. The remainder of this paper is organized as follows.
In Section 2, the APDM is reviewed and comparisons and summaries of
different methods are presented. The MBCM is presented in Section 3.
Finally, conclusions are drawn.
2. Active and passive deblurring methods
Active and passive deblurring methods (APDM) aim to minimize or suppress motion blur formation during the star sensor’s imaging process. In this review, they are categorized as either active (requiring external actuation) or passive (inherent system optimization) based on their operational principles. Passive deblurring methods in this paper include
techniques of improving the performance of optical imaging systems
and multi-FOV star sensors, while active deblurring methods consist of
techniques such as servo-tracking platform, DBA and TDI.
2.1. Improving the performance of optical imaging systems
Under dynamic conditions, the motion-blurred length of a star
image can span tens of pixels, significantly exceeding the static point
spread function (PSF). The blurring magnitude is directly proportional
to the exposure time of the optical system of the star sensor, whereas
the energy acquired per pixel depends on optical parameters as mod-
eled by Eq. (2) [5]:
$$Photons \propto K \left( \int_{400}^{800} QE(\lambda)\, I(\lambda, T_k)\, d\lambda \right) S_{aperture}\, \frac{d}{f\,\omega}\, 2.512^{-M_v} \tag{2}$$
where $QE(\lambda)$ denotes the wavelength-dependent quantum efficiency of the image sensor, $I(\lambda, T_k)$ is the stellar energy spectral density distribution (400 nm to 800 nm in the visible band), which depends on the light wavelength $\lambda$ in m and the stellar temperature $T_k$ in Kelvin, $K$ is the effective transmittance of the system (the atmospheric transmittance should be taken into account when operating on the ground), $S_{aperture}$ is the aperture of the optical system, $d$ is the pixel size of the image sensor, $f$ is the focal length of the optical system, $\omega$ is the equivalent angular velocity of the star sensor, and $M_v$ is the star magnitude. Therefore,
exposure time can be shortened by choosing an image sensor with a
high QE, an optical lens with high transmittance, and an optical system
with a large aperture. These are systematically and comprehensively
summarized below:
A. Adaptive exposure time adjustment:
The Ball HAST sensor [7–9] employed large-aperture optics (f/1.4) with back-illuminated CCDs (QE >80% @600 nm), achieving 4°/s operation by constraining the blur length to <3 pixels through adaptive exposure time adjustment (1∼10 ms). This introduces an inherent trade-off: the shorter exposure time reduces the number of detectable stars by 22% compared to the static modes [10] (a simple blur-length sketch illustrating this trade-off is given after this list).
Lockheed Martin’s AST-301 [11] implemented similar variable exposure time control but prioritized faint-star detection (limiting magnitude 6.5) through exposure time optimization algorithms.
B. High-sensitivity sensor implementation:
Electron multiplying charge coupled device (EMCCD):
Electron multiplication gain (>1000×) allows sub-ms expo-
sures but induces multiplicative noise (excess noise factor = 1.4–2.0). Experimental results [12] demonstrated 0.8-pixel centroid accuracy at 8°/s, albeit with 15% false star rates due to noise amplification.
Intensified charge coupled device (ICCD): Microchannel plate intensifiers achieve 1 ms exposures (25°/s tolerance), but suffer from limited dynamic range (54 dB vs. the CCD’s 72 dB) and radiation-induced gain decay (3% annual loss in LEO) [13–16].
C. Optical parameter optimization: Aperture scaling ($SNR \propto D$) improves sensitivity but exacerbates distortion ($\epsilon_{dist} \propto D^{3}f$). For instance, a 120 mm aperture induces 2.8 arcsec distortion versus 1.1 arcsec for 80 mm designs [8]. Field tests reveal optimal f/# ranges (1.2–1.8) balancing irradiance and aberrations.
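To make the exposure-time/blur-length trade-off of item A concrete, the sketch below estimates the blur length in pixels from the angular rate, exposure time, FOV, and detector resolution; all numerical values are illustrative assumptions rather than HAST specifications.

```python
def blur_length_pixels(omega_deg_s, t_exp_s, fov_deg, n_pixel):
    """Approximate blur length (pixels) for angular rate omega during exposure t_exp.

    Assumes the image motion is dominated by cross-boresight rotation, so the
    blur angle is roughly omega * t_exp and one pixel subtends fov_deg / n_pixel.
    """
    pixel_ifov_deg = fov_deg / n_pixel          # angular size of one pixel
    return (omega_deg_s * t_exp_s) / pixel_ifov_deg

# Illustrative numbers: 20 deg FOV, 1024-pixel detector, 4 deg/s slew.
# A 10 ms exposure gives ~2 pixels of blur, consistent with keeping the blur
# below ~3 pixels by shortening the exposure.
print(blur_length_pixels(4.0, 0.010, 20.0, 1024))
```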
However, a comprehensive analysis and evaluation should be per-
formed to select these optical parameters. For example, the star image
distortion error increases simultaneously with the aperture. Meanwhile,
despite theoretical advantages, implementation barriers persist:
A. Material constraints:
Peak QE capped at 95% due to anti-reflection coating
limitations.
Fused silica substrates restrict D/f ratios to <0.25 for dis-
tortion control.
B. Cost considerations:
EMCCD/ICCD systems incur cost premiums of several times the price of conventional CCDs.
Large-aperture aspheric lenses increase manufacturing
costs.
C. Performance plateaus:
Optical enhancements alone yield <50% NEA improvement (HAST: 0.57 arcsec → 0.42 arcsec at 3°/s)
Diminishing returns are observed for QE >90% or f/# <1.0
Case studies confirm that standalone optical optimization provides
insufficient dynamic performance gains, necessitating hybrid
approaches combining hardware upgrades with algorithmic compen-
sation (see Section 3).
2.2. Multi-FOV star sensors
The design of star sensors with multi-FOV aims to enhance atti-
tude measurement reliability and dynamic performance through spatial
redundancy and expanded sky coverage. A multi-FOV star sensor, which combines two or more FOVs in a single instrument, offers several benefits. First, to improve the reliability of the attitude measurement of the star sensor, a redundant backup strategy is generally adopted; that is, one or two star sensors are added as backups [11,17], which is essential for space applications of the star sensor. Second, multiple FOVs are mounted at an angle to each other, and the measurement results from different FOVs are fused through accurate inter-FOV calibration to output high-precision attitude information, particularly for the measurement accuracy of the boresight angle of the system, typically the rotation around the z-axis [18]. Third, the mounting orientation of the star sensors needs to consider the influence of the Sun, the Moon, and light reflected from the Earth [5]; this influence can be mitigated by a scheme that combines multiple FOVs facing different sky zones. Finally, the number of effective star images increases with the number of FOVs. As the number of star
images increases, the exposure time can be appropriately reduced, and
the effect of the reduction in the number of star images in a single
FOV due to the motion of the carrier can be partially eliminated, which
also improves the attitude accuracy of the star sensor under dynamic
conditions.
This section analyzes four representative implementations of multi-
FOV star sensors, detailing their operational principles, advantages, and
limitations.
A. Tri-optical-head configuration: HYDRA System
Developed by SODERN (France), the HYDRA star sensor [11]
integrated three independent optical modules arranged at 120° intervals (Fig. 2). Each module employs an active pixel sensor (APS) with a 22° × 22° FOV, achieving a combined multi-FOV of 68° × 22°.
Key performance characteristics include:
Attitude accuracy improvement: Cross-boresight NEA re-
duced from 17.4 arcsecs (x/y axes) and 145 arcsecs (z-axis) in single-FOV mode to 6.7 arcsecs (all axes) under 5°/s dynamic conditions.
Dynamic tolerance: Recognizable star images maintained
at angular velocities up to 10°/s through real-time data
fusion.
Hardware trade-offs:
Mass increased to 210% of the single-FOV baseline (4.2 kg vs. 2.0 kg)
Power consumption escalated to 28 W (vs. 10 W
baseline)
Calibration complexity grew quadratically, requiring
12 degrees of freedom (DOF) alignment versus 3 DOF
for single-FOV configurations [18].
Fig. 2. Three FOVs star tracker StarNav III [11].
Fig. 3. Two FOVs star sensor.
B. Reflective multi-FOV architecture: ASI/Galileo SpA collaboration
Funded by the Italian Space Agency (ASI), this design [17] uses
dichroic mirrors to project three FOVs onto a single CCD through
a folded optical path:
Advantages:
40% cost reduction compared to multi-head systems
18% improvement in boresight (z-axis) measurement
accuracy through redundant observations
Challenges:
35% irradiance loss per reflection path due to mirror
coatings
Stray light contamination between FOVs caused 12%
SNR degradation in lab tests
Dynamic performance remains unquantified due to
limited prototype testing
C. Semi-transmissive dual-FOV design: Tsinghua University
prototype
This configuration (Fig. 3) employed beam splitters to simulta-
neously image two 14° × 14° FOVs on a single detector:
Performance benefits:
55% FOV overlap enabled cross-validation of star
patterns, improving identification success rate by
22%
Static accuracy reached 3.1 arcsecs under laboratory
conditions
Operational limitations:
50% light attenuation in each optical path degraded
SNR by 6 dB
Earth albedo interference in one FOV contaminated
the other FOV, inducing >15% false star detections
Dynamic accuracy deteriorated to 8.9 arcsecs under
3°/s motion [19]
Fig. 4. A finer small FOV within the center of a large FOV.
D. Nested FOV architecture: Huazhong University approach
Developed by Huazhong University of Science and Technol-
ogy [20], this hierarchical design (Fig. 4) combines:
Wide-FOV subsystem: 8° × 8° coverage with 512 × 512
pixels (3.5 arcsecs per pixel resolution) for rapid initial star
identification (<200 ms)
Narrow-FOV subsystem: 0.5° × 0.5° coverage with
1024 × 1024 pixels (0.17 arcsecs per pixel resolution) for
high-precision centroiding (0.02 pixel accuracy)
System constraints:
Limited to low-dynamic conditions (<0.5°/s) due to
PSF mismatch between FOVs
Data fusion latency increased by 40% compared to
single-FOV operation
No dynamic performance validation reported in ac-
cessible literature
E. Comparative analysis and implementation challenges
While multi-FOV designs theoretically enhance reliability, prac-
tical deployment faces critical hurdles:
Calibration complexity: Tri-head systems require 12-
parameter alignment matrices versus 5 parameters for
dual-FOV configurations [18].
Optical performance degradation:
1 dB SNR loss per additional optical path component
(lenses/mirrors)
15%–25% modulation transfer function (MTF) reduction in wide-FOV optics
Dynamic limitations:
Maximum validated angular rate remains 10°/s (HYDRA) versus 3°/s for semi-transmissive designs
Latency penalties of 33 ms per additional FOV due to
parallel processing
Although spatial redundancy enhances fault tolerance in these im-
plementations, the most efficient is the HYDRA star sensor [11] with
three optical heads. The improvement in attitude accuracy of other
multi-FOV star sensors [17,19,20] is neither impressive nor reported
further. Meanwhile, the associated trade-offs (increased mass, elevated power demands, and complex calibration requirements) mandate rigorous evaluation tailored to specific operational contexts.
2.3. Servo tracking platform techniques
To mitigate relative motion between the star sensor and its carrier,
some star sensor products have adopted a servo tracking platform po-
sitioned between them [18]. This servo mechanism dynamically tracks
the carrier’s motion in real time, maintaining a fixed pointing direction
for the star sensor. Consequently, the relative motion between the star
sensor and observed stars is effectively eliminated, thereby preventing
motion-blurred star image formation. Such servo tracking platforms
are predominantly integrated into navigation systems. For instance,
Northrop Corp. has implemented this technology in its NAS-27 and
LN-120G INS/CNS navigation systems [21].
Beyond Northrop Corp., the China Huazhong Institute of Optoelec-
tronic Technology has also explored servo tracking platforms [22],
however, the available information is extremely scarce. In [23], a servo mechanism was used to drive a precise azimuth axis that points the optical axis of the star sensor toward the star. After compensation, fine star centroid extraction with an error of less than 0.2 pixels (about 2 arcsec) was demonstrated under four different types of one-dimensional angular motion. Never-
theless, published literature on these efforts remains scarce, limiting
detailed technical evaluations of their implementations.
Under dynamic conditions, the servo mechanism ensures smooth
operation of the star sensor. However, inherent error factors such as
axis system misalignments and encoder inaccuracies degrade both
attitude measurement precision and system reliability. Furthermore,
the servo platform’s large size, structural complexity, and high cost
render it unsuitable for mass- and size-constrained applications like
satellites. Instead, it finds greater suitability for ship-based platforms
where these constraints are less stringent.
2.4. DBA and TDI method
Researchers at the European Space Agency proposed a conceptual
framework for the dynamic binning algorithm (DBA) tailored to APS
star sensors [24]. The DBA compensates for motion-induced distortions
by combining pixel readouts from consecutively sub-sampled images
via an oversampling technique. This process relies on three-axis motion
parameters obtained from a low-precision gyroscopic unit. However,
the DBA method is limited to compensating motion along horizontal,
vertical, and pixel diagonal directions. For other diagonal trajectories,
motion must be decomposed into combinations of these predefined
directions, as illustrated in Fig. 5. This decomposition approach renders
the method ineffective for addressing complex multi-directional motion
patterns.
Notably, the DBA requires specialized logic circuits for APS sensors
to achieve virtual charge accumulation at the pixel level. While the-
oretically promising, its practical implementation remains unverified,
and no subsequent reports or validated applications have emerged in
accessible literature.
Similar to the DBA, the time-delayed integration (TDI) method [25]
synchronizes photogenerated charge movement with star image mo-
tion through customized drive timing circuits. In CCD imaging arrays,
charges generated by starlight follow the carrier’s motion along the 𝑦-
axis, ensuring overlap in the 𝑦-direction. This synchronization enhances
the signal-to-noise ratio (SNR) when paired with specialized image
processing algorithms.
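The following sketch is a software analogue of the TDI shift-and-add principle, assuming the drift rate along the y-axis is known exactly; it is meant only to illustrate why synchronizing accumulation with the image motion improves the SNR, not to reproduce the CCD timing circuits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shift-and-add: a faint star drifts one row per sub-exposure
# along the y-axis; summing the sub-frames after shifting them back by the
# known drift keeps the signal in one pixel while the uncorrelated noise
# only grows as sqrt(N).
n_sub, size, signal, noise_sigma = 16, 32, 5.0, 2.0
stack = np.zeros((size, size))
for k in range(n_sub):
    frame = rng.normal(0.0, noise_sigma, (size, size))
    frame[8 + k, 16] += signal                 # star drifts one row per sub-frame
    stack += np.roll(frame, -k, axis=0)        # shift back before accumulating

snr_single = signal / noise_sigma
snr_shift_add = n_sub * signal / (noise_sigma * np.sqrt(n_sub))   # theoretical gain
print(f"single sub-frame SNR ~ {snr_single:.1f}, theoretical shift-and-add SNR ~ {snr_shift_add:.1f}")
```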
Fig. 5. Motion decomposition along diagonal direction with the DBA method.
The TDI technique has been implemented in Lockheed Martin
Corp.’s AST-201 [25] and AST-301 [7] star sensors. For example, the
AST-301 sensor, deployed on the Space Infrared Telescope Facility (SIRTF) at the Jet Propulsion Laboratory (Fig. 6), eliminates y-axis motion blur while retaining x-axis blur. In static conditions, this sensor achieves attitude accuracies (1σ) of 0.18, 0.18, and 5.1 arcsecs for the x-,
y-, and z-axes, respectively. Under dynamic scenarios, it maintains a
hold accuracy of 1 arcsec/s and a continuous tracking accuracy of
2.1 arcsec/s. Studies at the Technical University of Denmark further
validated the TDI method’s efficacy in spacecraft rotational motion
tracking [8].
Despite these advantages, the TDI method increases the complexity
of hardware timing circuits. Furthermore, the charge-transfer region
formed during operation reduces the effective imaging area of the CCD.
These limitations restrict its applicability to large-format frame-shift
CCD arrays and render it unsuitable for handling intricate multi-axis
motion scenarios.
3. Motion blur compensation methods
Active and passive deblurring methods tend to add extra hardware costs and increase system complexity. In recent years, there have been more studies on motion blur compensation algorithms for star images. These methods compensate for the motion-blurred star image with dedicated algorithms applied after the blur has formed, in order to improve the precision of the star centroid. Motion blur compensation methods include
motion-blurred star image restoration algorithms, star centroid with
filtering algorithms, the attitude-correlated frame approach, and the
attitude-correlated frame adding method. Before specifically introduc-
ing the related compensation algorithms, it is necessary to review
the related research work on the motion-blurred model of star im-
ages and summarize the related methods of motion-blurred star image
preprocessing.
3.1. Modeling of star image motion blur
To effectively mitigate motion blur through algorithmic compensa-
tion, establishing an accurate mathematical model of star image motion
blur under dynamic conditions is essential. Assuming the relative mo-
tion between the star sensor and the observed star is $(x_0(t), y_0(t))$, the motion-blurred image $g(x, y)$ can be expressed as:

$$g(x, y) = \int_0^T f\big(x - x_0(t),\, y - y_0(t)\big)\, dt + n(x, y), \tag{3}$$

where $T$ denotes the exposure time of the star sensor, $f(x, y)$ represents the original unblurred star image, and $n(x, y)$ accounts for additive noise. Applying the Fourier transform to Eq. (3), the relationship in the frequency domain becomes:

$$G(u, v) = F(u, v) \int_0^T e^{-j2\pi\left(u x_0(t) + v y_0(t)\right)}\, dt + N(u, v), \tag{4}$$

where $G(u, v)$ and $F(u, v)$ are the Fourier transforms of $g(x, y)$ and $f(x, y)$, respectively. The motion-blurred degradation function in the frequency domain is thus derived as:

$$H(u, v) = \int_0^T e^{-j2\pi\left(u x_0(t) + v y_0(t)\right)}\, dt. \tag{5}$$
This model clarifies that, assuming precise noise removal and accurate
estimation of motion parameters, the original unblurred star image
can theoretically be reconstructed from the degraded observations. The
degradation and restoration process is conceptually illustrated in Fig.
7.
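A discrete version of Eq. (3) can be sketched as follows, using an idealized Gaussian star spot and an assumed uniform linear motion; it is a toy simulation in the spirit of the models discussed below, not a reproduction of any specific published model.

```python
import numpy as np

def gaussian_star(size, cx, cy, sigma=1.0, amp=200.0):
    """Static star image: a 2-D Gaussian spot (an idealized PSF)."""
    y, x = np.mgrid[0:size, 0:size]
    return amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def motion_blur(static_img, x0, y0, steps=50):
    """Discrete form of Eq. (3): average the scene shifted along (x0(t), y0(t))."""
    blurred = np.zeros_like(static_img)
    for t in np.linspace(0.0, 1.0, steps):
        dx, dy = int(round(x0(t))), int(round(y0(t)))
        blurred += np.roll(np.roll(static_img, dy, axis=0), dx, axis=1)
    return blurred / steps

# Assumed uniform linear drift of ~12 pixels during the exposure.
static = gaussian_star(64, 32, 32)
blurred = motion_blur(static, x0=lambda t: 12 * t, y0=lambda t: 4 * t)
blurred += np.random.default_rng(1).normal(0, 2.0, blurred.shape)   # additive noise n(x, y)
print(static.max(), blurred.max())   # the peak drops as the energy spreads along the trail
```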
Research on motion-blurred star image modeling has been exten-
sively conducted by Beihang University. In [26], the dynamic perfor-
mance of a star sensor was simulated by incorporating motion functions
into static imaging models. This approach enabled the establishment of
a dynamic star image model and subsequent validation of restoration
algorithms through simulations. Further analysis of error sources and
system parameter optimization was also performed. The work in [27]
systematically investigated the mechanisms of star image blurring un-
der various carrier motion types (e.g., linear, rotational) and developed
corresponding blurring models. These models were validated through
simulations, demonstrating their effectiveness in guiding restoration
algorithms.
A comprehensive study in [28] integrated rotational motion around
three axes to derive equations describing star image trajectories. Sim-
ulations confirmed the accuracy of these models under complex ro-
tational dynamics. In [29], the energy distribution of motion-blurred
star images was modeled by linearly segmenting the motion-blurred
degradation function. This model incorporated critical imaging parame-
ters such as incident light intensity, exposure time, star image velocity,
and Gaussian radius. An analytical expression for centroid error was
derived, enabling precise error evaluation and parameter optimization.
Both simulation and experimental data validated the model’s accuracy.
For missile-borne star sensors, [30] established an energy distribu-
tion function for uniformly moving star images based on kinematic
equations. Two solutions were proposed to estimate star image ve-
locity, and the spatial variation of motion functions across the image
plane was addressed by including nonlinear terms. This refined model
provided a foundation for evaluating the precision of compensation
algorithms [31]. Complex scenarios involving variable acceleration
were further explored in [32], with similar studies reported in [33–35].
A simulated motion-blurred star image generated using these models
is shown in Fig. 8(a), while Fig. 8(b) demonstrates strong correlation
between simulated and real motion-blurred star images [6].
Collectively, these studies have advanced the understanding of
motion-blurred star image formation mechanisms. Models accounting
for uniform, accelerated, and variable-velocity motions provide critical
insights for subsequent processing algorithms, such as star image
restoration and centroid accuracy enhancement.
3.2. Preprocessing of motion-blurred star image
To improve the SNR and centroid accuracy of motion-blurred star
images, preprocessing plays a critical role after the formation of motion
blur. As indicated by Eqs. (1) and (4), the primary objective of prepro-
cessing is to eliminate noise components from the degraded star images
while preserving essential features for subsequent processing steps.
In [36], a highly dynamic star image extraction method was pro-
posed, where an adaptive processing window was dynamically adjusted
based on the motion characteristics of the star sensor. This adaptive
window accommodated diverse motion patterns, enabling robust star
image extraction within localized regions. For fragmented star im-
ages caused by rapid motion, mathematical morphology techniques
were employed to detect breakpoints and actively "grow" the bro-
ken segments, ensuring continuity. Additionally, wavelet transforms
were utilized to denoise motion-blurred star images [37], effectively
separating noise from structural features in the frequency domain.
Fig. 6. TDI for SIRTF star tracker. The motion in the 𝑦 direction (a) is eliminated with the TDI in (b).
Fig. 7. Diagram of star image degradation and restoration.
Fig. 8. Motion-blurred star image simulation (a) and a real motion-blurred star image (b).
Sun et al. [38] introduced a multi-stage preprocessing framework.
First, correlation filtering was applied to suppress noise and enhance
the SNR of motion-blurred star images. Second, background noise
was extracted and subtracted using morphological opening operations,
facilitating accurate segmentation of star image regions. The Sobel
operator was then employed to compute gradients, and autocorrelation
analysis of the differential images yielded estimates of motion param-
eters (e.g., direction and blur length). This approach enabled precise
localization of motion-blurred star regions for subsequent extraction.
Wan et al. [39] proposed a method named optimal directional con-
nected component (ODCC). According to the dynamic characteristics of
star sensors, the ODCC enhancement method can adaptively estimate
the directions of star spots and integrate the star image so that the SNR
of the star spots is increased. It was used in its multi-FOV star sensor
to improve the faint star spot extraction ability under extremely high
dynamic conditions [40].
Our research group has investigated various preprocessing tech-
niques for motion-blurred star images. In [41], correlation matching
was adopted to align and enhance star patterns under dynamic condi-
tions. Mathematical morphology operations, as detailed in [42], were
applied to repair fragmented star images (see Fig. 9(a)–(b)), where
broken regions (marked by dashed red ellipses) were reconstructed
through dilation and erosion processes. Furthermore, a star-seed pixel
growth algorithm combined with region restriction [43] was developed
to isolate valid star pixels from noisy backgrounds, significantly im-
proving the SNR. With these methods, the motion-blurred star images were well preprocessed prior to further restoration in [4,43,44].
Fig. 9. Broken motion-blurred star image (a) after repairing (b).
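As a generic illustration of the morphological repair idea (not the exact pipeline of [42]), the sketch below bridges the gap in a fragmented binary star streak with a closing operation.

```python
import numpy as np
from scipy import ndimage

# A thresholded, low-SNR star trail often breaks into fragments; a binary
# closing with a small structuring element bridges short gaps between them.
streak = np.zeros((9, 21), dtype=bool)
streak[4, 2:8] = True      # first fragment of the trail
streak[4, 11:19] = True    # second fragment, separated by a gap

bridged = ndimage.binary_closing(streak, structure=np.ones((1, 5), dtype=bool))
_, n_before = ndimage.label(streak)
_, n_after = ndimage.label(bridged)
print(n_before, n_after)   # 2 fragments before closing, 1 connected streak after
```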
Preprocessing of motion-blurred star images is indispensable prior to star centroiding. Given the inherently low SNR under dynamic con-
ditions, these steps are crucial for preparing high-quality inputs for
downstream tasks such as image restoration, centroid estimation, and
attitude determination.
3.3. Restoration of motion-blurred star image
The drastic reduction in SNR of motion-blurred star images, caused
by energy dispersion across multiple pixels, necessitates advanced
restoration techniques. The image restoration method attempts to
regroup the dispersed energy of the star images into localized regions
(e.g., 3 × 3 or 5 × 5 pixel windows) to improve the SNR of the star
images.
Jin et al. [36] employed Wiener filtering for motion-blurred star
image restoration. As a linear minimum mean square error estimator,
Wiener filtering effectively suppresses noise but introduces artifacts
such as the ringing effect near sharp edges. While this method improves
SNR to some extent, its limited noise robustness and residual edge
errors constrain its practical utility for high-precision star centroiding.
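A minimal frequency-domain Wiener deconvolution, assuming the blur kernel is known and approximating the noise-to-signal ratio with a constant, can be sketched as follows; the implementation in [36] may differ in its details.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution: F_hat = H* G / (|H|^2 + k).

    k approximates the noise-to-signal power ratio (assumed constant here).
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Synthetic example: a Gaussian star spot smeared by a 9-pixel horizontal streak.
y, x = np.mgrid[0:64, 0:64]
star = 100.0 * np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 2.0)
psf = np.zeros((64, 64)); psf[0, :9] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(star) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
print(blurred.max(), restored.max())   # the restored peak is much closer to the original 100
```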
In contrast, Sun et al. [34] leveraged MEMS gyroscope-derived
motion parameters to guide the Richardson–Lucy algorithm (RLA), a
nonlinear iterative restoration method. RLA’s inherent noise suppres-
sion capability and adaptability to PSF variations yielded significant
improvements in centroid accuracy. For instance, under angular velocities of 8°/s, RLA reduced centroid errors from 1.8 pixels to sub-
pixel levels. To further enhance performance, our group proposed
a region-restricted RLA [43], which confines the restoration process
to predefined regions of interest containing valid star signatures. As
illustrated in Fig. 10(a) and (b), this approach achieves concentrated
energy redistribution and markedly improved SNR. Fig. 10(a) shows
raw motion-blurred star images, while Fig. 10(b) demonstrates restored
images with concentrated features and improved SNR.
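The plain Richardson–Lucy iteration can be sketched in a few lines as below; the region-restricted variant of [43] additionally confines the update to regions of interest containing valid star signatures, which is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution (plain variant, no region restriction)."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic test: a Gaussian star spot smeared by a 9-pixel horizontal streak
# (uniform linear motion assumed for the blur kernel).
y, x = np.mgrid[0:64, 0:64]
star = 100.0 * np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 2.0)
psf = np.zeros((3, 9)); psf[1, :] = 1.0 / 9.0
blurred = fftconvolve(star, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(star.max(), blurred.max(), restored.max())   # restoration re-concentrates the energy
```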
A critical requirement for most of the restoration algorithms is
the accurate estimation of the motion-blurred degradation function
H(u,v), which depends on precise motion parameter acquisition. These
parameters can be obtained via two approaches:
Image processing algorithms: Techniques such as autocorre-
lation analysis [38] or gradient-based motion estimation [34]
derive motion parameters directly from blurred star images.
Strapdown gyroscope integration: High-precision gyroscopes
[24,34,43] provide real-time angular velocity measurements to
compute motion trajectories.
The motion-blurred star image restoration method can eliminate
the influence of motion blur to a certain extent and improve the
accuracy of the star centroids for star sensors. Despite their effec-
tiveness, motion-blurred star image restoration methods face two key
limitations:
Parameter sensitivity: Inaccurate motion parameter estimation
(e.g., due to gyroscope drift or algorithmic errors) degrades
restoration quality.
Computational complexity: Iterative algorithms like RLA in-
crease processing latency, potentially affecting the update rate
of attitude determination for the star sensor in real-time appli-
cations.
3.4. Star centroid with filtering algorithm
Motion-blurred star image restoration aims to improve the SNR
of degraded star images, thereby enhancing centroid accuracy and
ultimately boosting the attitude determination precision of star sensors
under dynamic conditions. A promising strategy involves integrating
filtering algorithms with predicted star positions derived from theoret-
ical models, enabling refined centroid estimation even in challenging
motion scenarios. Fig. 11 outlines the workflow of star centroid with
Kalman filtering.
In [45], the extended Kalman filter (EKF) was employed to fuse
angular velocity measurements from a strapdown gyroscope with star
centroids obtained via the center-of-mass method. This fusion frame-
work generated optimal estimates of star centroids under dynamic
conditions, achieving an accuracy better than 1 pixel for stars of magnitude 5 under angular velocities of 8°/s. Liu et al. [46] utilized a Kalman
filter to predict star positions; however, their work focused on static
or mildly dynamic scenarios without addressing motion-blurred star
images. Another study [47] proposed a directional integration method
to enhance SNR by accumulating star energy along motion trajectories,
where motion parameters were extracted directly from image process-
ing algorithms. In [48], a generalized maneuvering attitude error-state
Kalman filter (GMAESKF) was proposed to directly mitigate the attitude
measurement noise and fulfill the requirement of angular velocity esti-
mation for terrestrial multi-FOV star sensors when they are applied to
agile maneuvering carriers. In the numerical simulation, the proposed
method reduced the attitude measurement error by 42.3%, with an rms error of 0.0171°/s for angular velocity estimation. A flight experiment
was carried out and verified the performance of the GMAESKF.
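For reference, the center-of-mass (intensity-weighted) centroid that these filters take as their measurement can be sketched as follows; the background threshold used here is an assumption rather than a fixed rule.

```python
import numpy as np

def center_of_mass_centroid(window, threshold=0.0):
    """Intensity-weighted centroid (center-of-mass method) over a star window.

    Pixels at or below the threshold are ignored so that background noise
    does not bias the estimate.
    """
    w = np.where(window > threshold, window - threshold, 0.0)
    y, x = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    total = w.sum()
    return (x * w).sum() / total, (y * w).sum() / total

# Small synthetic star spot centered near (4.3, 3.7) in a 9x9 window.
y, x = np.mgrid[0:9, 0:9]
spot = 50.0 * np.exp(-((x - 4.3) ** 2 + (y - 3.7) ** 2) / 2.0) + 1.0
print(center_of_mass_centroid(spot, threshold=1.0))   # ~(4.3, 3.7)
```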
Fig. 10. Motion-blurred star image before (a) and after (b) employing the RLA restoration algorithm.
Fig. 11. Diagram of star centroid with Kalman filtering.
The Boeing Company further advanced this approach in a patented
method [49]. A Kalman filter was designed to bridge the measured
centroids (from the center-of-mass method) and predicted star positions
(projected from the star catalog). The gyroscope unit provided two
critical inputs:
Window localization: Delineating the region of interest (ROI) for
centroid calculation.
Initial attitude estimation: Enabling star catalog projections for
positional predictions. Simulation results demonstrated that this
method reduced centroid errors from 1.8 pixels to 1/1000 pixels
under non-uniform motion conditions. However, its performance
hinges on two stringent requirements:
High-precision calibration of the mounting misalignment
between the star sensor and gyroscope.
Ultra-accurate angular velocity measurements from the gy-
roscope unit.
Similar to star image restoration algorithms, the success of star
centroid filtering methods relies heavily on precise motion parameter
acquisition. These parameters can be sourced either from external
sensors (e.g., gyroscopes) or derived algorithmically from image data,
as illustrated in Fig. 11. While these methods significantly improve
centroid accuracy, their practical implementation demands careful con-
sideration of sensor calibration, computational efficiency, and real-time
performance.
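A minimal constant-velocity Kalman filter for a single star centroid is sketched below to make the predict/update structure of Fig. 11 concrete; the noise covariances and motion model are assumptions, and the published filters [45,48,49] use richer, gyro-aided models.

```python
import numpy as np

# State: [x, y, vx, vy] in pixels and pixels/frame.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 1e-3 * np.eye(4)          # process noise (assumed)
R = 0.25 * np.eye(2)          # centroid measurement noise, ~0.5 px std (assumed)

x = np.array([10.0, 10.0, 0.8, 0.3])   # initial state guess
P = np.eye(4)

rng = np.random.default_rng(3)
for k in range(20):
    # Predict (in practice the predicted position could come from gyro-propagated attitude).
    x = F @ x
    P = F @ P @ F.T + Q
    # Simulated noisy centroid measurement of a star drifting at (0.8, 0.3) px/frame.
    z = np.array([10.0 + 0.8 * (k + 1), 10.0 + 0.3 * (k + 1)]) + rng.normal(0, 0.5, 2)
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print(x[:2])   # filtered centroid estimate after 20 frames
```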
3.5. Attitude-correlated frame approaches
While the aforementioned motion blur compensation algorithms
improve attitude accuracy under dynamic conditions to some extent,
they remain constrained by the limitations of single-frame star image
processing. To overcome this, our research group proposed the attitude-
correlated frame (ACF) method [35,44], which breaks through the single-frame limitation of traditional processing and extends it to sequences of star image frames. By leveraging precise
attitude transformations measured by a strapdown gyroscope unit,
the ACF method correlates independent star image frames to virtu-
ally broaden the FOV and suppress random noise, thereby enhancing
dynamic performance.
Fig. 12 illustrates the two-frame correlation process in the ACF
approach. Star image frames captured at different times (e.g., Frame
1 at Time 1 and Frame 2 at Time 2) are correlated using the attitude
transformation matrix provided by the gyroscope. This correlation
effectively synthesizes a "virtual FOV" by combining star observations
across multiple frames, analogous to the multi-FOV star sensor design
described in Section 2.2. The improvement in attitude accuracy can be
approximately quantified using the modified NEA formula in Eq. (6):
$$E_{ss} = \frac{FOV_{1D}}{n_{pixel}} \cdot \frac{E_{centroid}}{\sqrt{N \cdot n_{star}}} \tag{6}$$
where N denotes the number of correlated frames. Both simulations
and experiments confirm that the ACF method significantly enhances
attitude accuracy of the star sensor under high dynamic conditions,
with improvements approximately proportional to $\sqrt{N}$. For instance,
as shown in Fig. 13(a), the attitude error decreases monotonically as N
increases. Experimental results in Fig. 13(b) further validate this trend
under angular velocities of 3.3°/s (z-axis), 1.8°/s (y-axis), and 0.17°/s
(x-axis).
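The core of the two-frame correlation can be sketched as follows: star observation vectors from a later frame are rotated into the earlier frame's body axes using the gyro-measured attitude change, so that both frames feed a single attitude solution. This is only a schematic of the idea in [35,44]; the rotation and star vectors used here are arbitrary assumptions.

```python
import numpy as np

def rot_z(angle_rad):
    """Rotation about the z-axis (a stand-in for the gyro-measured attitude change)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Star unit vectors observed in frame 1 (body axes) and in frame 2 one exposure later.
stars_frame1 = np.array([[0.0, 0.1, 0.995], [0.05, -0.02, 0.9985]])
stars_frame2 = np.array([[0.08, 0.03, 0.9963]])

C_1_2 = rot_z(np.deg2rad(3.3))                  # assumed rotation from frame 2 to frame 1
stars_frame2_in_1 = (C_1_2 @ stars_frame2.T).T  # correlate frame 2 observations into frame 1

combined = np.vstack([stars_frame1, stars_frame2_in_1])
combined /= np.linalg.norm(combined, axis=1, keepdims=True)
print(combined.shape)   # 3 correlated observation vectors feed one attitude estimate
```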
The ACF approach integrates multiple advanced techniques:
Single-frame restoration: Motion-blurred star images in individ-
ual frames are preprocessed using the Richardson–Lucy algorithm
(RLA) to improve SNR.
Centroid error compensation: Systematic errors in centroid es-
timation are mitigated through calibration methods [50,51].
Fig. 12. Schematic diagram of two-frame correlation.
Sensor calibration: Precise calibration of the installation mis-
alignment between the star sensor and gyroscope [52,53] ensures
accurate attitude transformation.
Internal parameter calibration: Star sensor parameters (e.g., fo-
cal length, distortion coefficients) are optimized to minimize
model mismatches [54,55].
A derivative approach, the attitude-correlated frame adding (ACFA)
method [56], further enhances SNR by directly co-adding star images
from correlated frames. Unlike ACF, which correlates star observation
vectors, ACFA aligns and sums pixel-level data from multiple frames
using gyroscope-measured attitude transformations. The accuracy improvement similarly follows the $\sqrt{N}$ proportionality, demonstrating
robustness under extreme dynamics.
Compared to hardware-based multi-FOV star sensors (Section 2.2),
the ACF method achieves similar performance through time-division
multiplexing rather than spatial FOV expansion. Key advantages in-
clude:
Noise suppression: Random noise is averaged out across corre-
lated frames.
Hardware simplicity: Eliminates the need for complex multi-
FOV optical systems.
Adaptability: Compatible with existing strapdown gyroscope-star
sensor architectures.
The efficacy of the ACF method has been extensively validated
through simulations and field experiments. For example, in [52,53], the
method demonstrated sub-arcsecond attitude accuracy under angular
rates exceeding 5°/s. Applications span spacecraft, ballistic missiles,
and shipborne platforms, where dynamic conditions are prevalent.
However, the ACF method requires high-precision gyroscopes and rig-
orous calibration to ensure attitude transformation accuracy. Perfor-
mance degradation may occur under rapid angular acceleration or
gyroscope drift.
Fig. 13. Simulation (a) and experiment (b) results with ACF approach.
4. Conclusion
The declining attitude accuracy of star sensors under dynamic conditions, primarily caused by motion-induced star image blurring and reduced effective star detection, remains a critical challenge for
aerospace, ballistic, and marine applications. This review systemati-
cally analyzes the mechanisms of accuracy degradation and categorizes
existing improvement methods into two paradigms: active and pas-
sive deblurring methods and motion blur compensation methods. Key
conclusions and insights are summarized as follows:
A. Active and passive deblurring methods:
Passive deblurring methods, such as optical system enhancements (e.g., EMCCD/ICCD sensors, large-aperture lenses) and multi-FOV designs (e.g., HYDRA), focus on reducing motion blur during imaging through hardware optimization. These approaches achieve detectable angular velocities up to 4°/s but face limitations in cost, size, and
adaptability to complex motion.
Active deblurring methods, including servo tracking platforms (e.g., NAS-27) and TDI/DBA techniques, actively counteract motion effects through mechanical isolation or
charge synchronization. While effective for unidirectional
motion, they struggle with multi-axis dynamics and add
hardware complexity.
Table 1
Methods to improve the attitude accuracy of the star sensor under dynamic conditions.
B. Motion blur compensation methods advancements:
Algorithmic innovation: Motion-blurred star image
restoration (e.g., RLA, Wiener filtering), centroid filter-
ing (e.g., Kalman fusion), and the ACF/ACFA approaches
demonstrate significant potential to enhance SNR and cen-
troid accuracy. These software-driven solutions offer flexi-
bility and cost efficiency compared to hardware modifica-
tions.
Multi-Frame correlation: The ACF method, leveraging
sequential frame correlation via strapdown gyroscopes,
achieves virtual FOV expansion and noise suppression, improving attitude accuracy proportionally to $\sqrt{N}$ (where $N$ is the number of correlated frames). This approach mirrors the ben-
efits of multi-FOV systems through time-division multi-
plexing, offering a scalable alternative to spatial hardware
redundancy.
C. Application-specific recommendations:
Spacecraft/satellites: Multi-FOV star sensors (e.g., HY-
DRA) and ACF methods are ideal for high-reliability, long-
duration missions.
Marine platforms: Servo tracking systems and EMCCD-
based sensors suit environments with moderate dynamics
and relaxed size constraints.
High-dynamics scenarios: ACF/ACFA methods paired
with high-precision gyroscopes provide optimal accuracy
under rapid maneuvers (e.g., missiles, agile satellites).
In summary, APDM methods address motion blur at the hardware
level but face inherent trade-offs, while MBCM approaches provide
flexible, cost-effective software solutions. The advantages and disad-
vantages of each method are described and summarized in Table 1.
Mission-specific requirements balancing precision, complexity, and
operational constraints should guide the selection of improvement
strategies.
The CMOS APS is an ideal image sensor for star sensors, and its detection sensitivity, stability, and other properties continue to improve; there is a trend for it to replace the CCD. The rolling shutter effect of CMOS star sensors under dynamic conditions is reported in [57–59], and CMOS APS star sensor dynamic performance enhancement technology is bound to be developed further. Star sensors are also widely used inside the atmosphere, where all-time performance is a necessary consideration; all-time star sensors are a current and future research hot spot [60–63]. In addition, with the improvement of star sensor hardware and the extension of application scenarios, motion blur compensation methods will become more complicated and varied [64,65].
CRediT authorship contribution statement
Liheng Ma: Writing – original draft, Supervision, Software, Resources, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Dongkai Dai: Writing – review & editing, Visualization, Validation, Software, Resources, Project administration, Methodology, Funding acquisition, Formal analysis, Conceptualization. Yuanman Ni: Writing – review & editing, Visualization, Validation, Software, Resources, Formal analysis.
Declaration of competing interest
The authors declare that they have no known competing financial
interests or personal relationships that could have appeared to
influence the work reported in this paper.
Acknowledgments
This work was supported by the Individual Science Foundation of the Naval University of Engineering, China, under Grant 2023507120, and by the National Natural Science Foundation of China (NSFC) under Grants 61573368 and 61803378.
The authors thank the reviewers and editors for their support.
Data availability
The data that support the findings of this study are available in the
article.
References
[1] C.C. Liebe, Star trackers for attitude determination, IEEE Aerosp. Electron. Syst.
Mag. 10 (6) (1995) 10–16.
[2] J.L. Joergensen, T. Denver, M. Betto, P. Van den Braembussche, The PROBA
satellite star tracker performance, Acta Astronaut. 56 (1) (2005) 153–159.
[3] C. Liu, G. Liu, X. Wang, A. Li, Star Sensor Principle and Application Strapped
on Missile, National Defense Industry Press, Beijing, 2010.
[4] L. Ma, D. Zhu, C. Sun, D. Dai, S. Qin, Three-axis attitude accuracy of better
than 5 arcseconds obtained for the star sensor in a long-term on-ship dynamic
experiment, Appl. Opt. 57 (32) (2018) 9589–9595.
[5] C.C. Liebe, Accuracy performance of star trackers - a tutorial, IEEE Trans. Aerosp. Electron. Syst. 38 (2) (2002) 587–599.
[6] L. Ma, Research on Dynamic Techniques for the Star Sensor with Attitude
Correlated Frames Approach (Ph.D. thesis), National University of Defense
Technology, Changsha, 2017.
[7] R.W.H. van Bezooijen, SIRTF autonomous star tracker, in: SPIE, 2003, pp.
108–121.
[8] T. Denver, Motion Compensation Techniques for Aerospace (Ph.D. thesis),
Technical University of Denmark, Denmark, 2004.
[9] A. Ball, High accuracy star tracker, 2013, URL https://www.ball.com/aerospace/
capabilities/technologies-components/star-trackers.
[10] X. Wei, W. Tan, J. Li, G. Zhang, Exposure time optimization for highly dynamic
star trackers, Sensors 14 (3) (2014) 4914–4931.
[11] L. Blarre, N. Perrimon, D. Piot, L. Oddos-Marcel, E. Anciant, L. Majewski,
P. Martinez, S. Dussy, HYDRA multiple heads star tracker with enhanced
performances, in: 7th International ESA Conference on Guidance, Navigation &
Control Systems, 2-5 June 2008, Tralee, County Kerry, Ireland, 2008, pp. 1–15.
[12] J. Guo, Study on Ship Attitude Measurement Based on Star Sensor (Ph.D. thesis),
Graduate University of Chinese Academy of Sciences, Changchun Institute of
Optics, Fine Mechanics and Physics, Changchun, 2013.
[13] W. Yu, J. Jiang, G. Zhang, Multiexposure imaging and parameter optimization
for intensified star trackers, Appl. Opt. 55 (36) (2016) 10187–10197.
[14] J. Yan, J. Jiang, G.J. Zhang, Modeling of intensified high dynamic star tracker,
Opt. Express 25 (2) (2017) 927.
[15] Z. Wang, J. Jiang, G. Zhang, Global field-of-view imaging model and param-
eter optimization for high dynamic star tracker, Opt. Express 26 (25) (2018)
33314–33332.
[16] J. Jiang, Y. Ma, G. Zhang, Parameter optimization of single-FOV-double-region
celestial navigation system, Opt. Express 28 (17) (2020) 25149–25166.
[17] G.D. Rogers, M.R. Schwinger, J.T. Kaidy, T.E. Strikwerda, R. Casini, A. Landi,
R. Bettarini, S. Lorenzini, Autonomous star tracker performance, Acta Astronaut.
65 (1–2) (2009) 61–74.
[18] D. Mortari, A. Romoli, D. Alenia, StarNav III: a three fields of view star tracker,
in: Aerospace Conference Proceedings, 2002. IEEE, Vol. 1, IEEE, 2002, pp. 1–57.
[19] Z. You, F. Xing, Y. Dong, A dual FOVs star sensor and its star identification
method, 2005.
[20] Z. Hua, Research on the Key Techniques of High Precise Double-FOV Star Sensor
(Ph.D. thesis), Huazhong University of Science and Technology, Wuhan, 2011.
[21] G. Northrop, LN-120g stellar-inertial-GPS navigation, 2017, URL http://www.
northropgrumman.com/Capabilities/LN120GStellarInertialNavigationSystem.
[22] J. He, Survey of overseas celestial navigation technology development, Ship Sci.
Technol. 27 (5) (2005) 91–96.
[23] H. Lei, H. Yang, B. Li, Q. Wei, W. Zhang, Y. Yue, X. Hu, Inertial information
based star detection for airborne star sensor, Opt. Laser Technol. 162 (2023)
109325.
[24] A. Pasetti, S. Habinc, R. Creasey, Dynamical binning for high angular rate star
tracking, in: Proceedings of the 4th ESA International Conference, Spacecraft
Guidance, Navigation and Control Systems, 1999, pp. 255–266.
[25] W.H. Roelof, van Bezooijen, A. Kevin, K.W. David, Performance of the AST-201
star tracker for the microwave anisotropy probe, in: AIAA Guidance, Navigation
and Control Conference. Monterey, 2002, pp. 1–11.
[26] J. Shen, G. Zhang, X. Wei, Simulation analysis of dynamic working performance
for star trackers, J. Opt. Soc. Am. A- Opt. Image Sci. Vis. 27 (12) (2010)
2638–2647.
[27] X.J. Wu, X.L. Wang, Multiple blur of star image and the restoration under
dynamic conditions, Acta Astronaut. 68 (11–12) (2011) 1903–1913.
[28] H. Wang, W. Zhou, X. Cheng, H. Lin, Image smearing modeling and verification
for strapdown star sensor, Chin. J. Aeronaut. 25 (1) (2012) 115–123.
[29] J. Yan, J. Jiang, G. Zhang, Dynamic imaging model and parameter optimization
for a star tracker, Opt. Express 24 (6) (2016) 5961–5983.
[30] C. Liu, G. Liu, B. Yang, H. Zhou, Star sensor image motion model and its
simulation analysis, Infrared Laser Eng. 42 (5) (2013) 1311–1315.
[31] C. Liu, L. Hu, G. Liu, B. Yang, A. Li, Kinematic model for the space-variant image
motion of star sensors under dynamical conditions, Opt. Eng., Bellingham 54 (6)
(2015) 063104.
[32] J. Zhang, Y.C. Hao, L. Wang, D. Liu, Studies on dynamic motion compensation
and positioning accuracy on star tracker, Appl. Opt. 54 (28) (2015) 8417–8424.
[33] H. Liu, D. Su, J. Tan, J. Yang, X. Li, An approach to star image simulation
for star sensor considering satellite orbit motion and effect of image effect, J.
Astronaut. 32 (5) (2011) 1190–1194.
[34] T. Sun, F. Xing, Z. You, X. Wang, B. Li, Smearing model and restoration of star
image under conditions of variable angular velocity and long exposure time, Opt.
Express 22 (5) (2014) 6009–6024.
[35] S. Qin, D. Zhan, J. Zheng, W. Wu, H. Jia, S. Fu, L. Ma, A dynamic attitude
measurement method of star sensor based on gyro’s precise angular correlation,
2014.
[36] Y. Jin, J. Jiang, G. Zhang, Star extraction method for high dynamic star sensor,
Infrared Laser Eng. 40 (11) (2011) 2281–2285.
[37] W. Zhang, W. Quan, L. Guo, Blurred star image processing for star sensors under
dynamic conditions, Sensors 12 (2012) 6712–6726.
[38] T. Sun, F. Xing, Z. You, M.S. Wei, Motion-blurred star acquisition method of
the star tracker under high dynamic conditions, Opt. Express 21 (17) (2013)
20096–20110.
[39] X. Wan, G. Wang, X. Wei, J. Li, G. Zhang, ODCC: a dynamic star spots extraction
method for star sensors, IEEE Trans. Instrum. Meas. 70 (2021) 5009114.
[40] J. Du, X. Wei, J. Li, G. Wang, X. Wan, Star spot extraction for multi-FOV star
sensors under extremely high dynamic conditions, IEEE Sensors J. 24 (21) (2024)
35167–35180.
[41] L. Ma, X. Wang, D. Zhan, D. Dai, Extraction method of the motion blurred
star image for the star sensor under high dynamic conditions, in: 12th IEEE
International Conference on Signal Processing, ICSP, in: International Conference
on Signal Processing, 2014, pp. 836–840.
[42] L. Ma, Z. Huang, X. Wang, S. Qin, Mathematical morphology operations applied
in star image processing for star trackers, in: OSA Imaging and Applied Optics
Congress: Imaging Systems and Applications, 2016, p. IW2F.3.
[43] L. Ma, B.-Z. Franco, G. Jiang, X. Wang, Z. Huang, S. Qin, Region-confined
restoration method for motion-blurred star image of the star sensor under
dynamic conditions, Appl. Opt. 55 (17) (2016) 4621–4631.
[44] L. Ma, D. Zhan, G. Jiang, S. Fu, H. Jia, X. Wang, Z. Huang, J. Zheng, F. Hu,
W. Wu, S. Qin, Attitude-correlated frames approach for a star sensor to improve
attitude accuracy under highly dynamic conditions, Appl. Opt. 54 (25) (2015)
7559–7566.
[45] X. Fei, C. Nan, Y. Zheng, S. Ting, A novel approach based on MEMS-Gyro’s data
deep coupling for determining the centroid of star spot, Math. Probl. Eng. 2012
(2012) 1–20.
[46] H. Liu, J. Yang, J. Wang, J. Tan, X. Li, Star spot location estimation using Kalman
filter for star tracker, Appl. Opt. 50 (12) (2011) 1735–1744.
[47] W. Hou, H. Liu, Z. Lei, Q. Yu, X. Liu, J. Dong, Smeared star spot loca-
tion estimation using directional integral method, Appl. Opt. 53 (10) (2014)
2073–2886.
[48] J. Yang, J. Jiang, L. Tian, Y. Ma, Z. Wang, Autonomous attitude filtering for
terrestrial star sensors under maneuvering conditions, IEEE Sensors J. 24 (20)
(2024) 32822–32835.
[49] R.A. Fowell, R. Li, Y.-W.A. Wu, Method for compensating star motion induced
error in a stellar inertial attitude determination system, 2009.
[50] W. Tan, S. Qin, R.M. Myers, T.J. Morris, G. Jiang, Y. Zhao, X. Wang, L. Ma,
D. Dai, Centroid error compensation method for a star tracker under complex
dynamic conditions, Opt. Express 25 (26) (2017) 33559–33574.
[51] Y. Ni, W. Tan, D. Dai, X. Wang, S. Qin, A stellar/inertial integrated navigation
method based on the observation of the star centroid prediction error, Rev. Sci.
Instrum. 92 (3) (2021) 035001.
[52] W. Tan, D. Dai, W. Wu, X. Wang, S. Qin, A comprehensive calibration method
for a star tracker and gyroscope units integrated system, Sensors 18 (9) (2018)
3106.
[53] L. Ma, S. Xiao, W. Tan, X. Luo, S. Zhang, Attitude correlated frames based
calibration method for star sensors, Sensors 24 (1) (2024) 67–83.
[54] S. Zhao, X. Wang, W. Tan, D. Dai, S. Qin, Error coupling analysis of laboratory
calibration method for star tracker, Appl. Opt. 60 (8) (2021) 2372–2379.
[55] L. Ma, Q. Li, Y. Chen, X. Luo, Installation matrix calibration between star sensor
and the laser gyro unit using the equivalent rotation vector transformation, Meas.
Sci. Technol. 35 (6) (2024) 5010–5021.
[56] Y. Ni, D. Dai, W. Tan, X. Wang, S. Qin, Attitude-correlated frames adding
approach to improve signal-to-noise ratio of star image for star tracker, Opt.
Express 27 (11) (2019) 15548–15564.
[57] Y. Li, X. Wei, J. Li, G. Wang, Imaging modeling and error analysis of the
star sensor under rolling shutter exposure mode, Opt. Express 29 (10) (2021)
15478–15496.
[58] Y. Li, X. Wei, J. Li, G. Wang, Error correction of rolling shutter effect for star
sensor based on angular distance invariance using single frame star image, IEEE
Trans. Instrum. Meas. 71 (2022) 7003213.
[59] Y. Li, X. Wei, J. Li, G. Wang, T. Wang, Star identification of high dynamic
star sensor under rolling shutter exposure mode, IEEE Sensors J. 23 (16) (2023)
18396–18412.
[60] W. Wang, X. Wei, J. Li, G. Zhang, Guide star catalog generation for short-wave
infrared (SWIR) all-time star sensor, Rev. Sci. Instrum. 89 (2018) 075003.
[61] Y. Ni, X. Wang, D. Dai, W. Tan, S. Qin, An adaptive section non-uniformity
correction method of short-wave infrared star images for star tracker, Appl. Opt.
61 (24) (2022) 6992–6999.
[62] H. Wang, B. Wang, Y. Gao, S. Wu, Near-earth space star map simulation method
of short-wave infrared star sensor, Infrared Phys. Technol. 127 (2022) 104436.
[63] Y. Ni, X. Wang, D. Dai, W. Tan, Limitations of daytime star tracker detection
based on attitude-correlated frames adding, IEEE Sensors J. 23 (22) (2023)
27450–27457.
[64] H. Liu, X. Wei, G. Wang, J. Li, A star spot extraction method based on SSA-UNet for star sensors under dynamic conditions, IEEE Sensors J. 23 (7) (2023) 7410–7419.
[65] M. Sida, W. Lingyun, G. Li, A method of star spot center-of-mass localization
algorithm for star sensor under highly dynamic conditions, IEEE Sensors J. 23
(13) (2023) 14957–14966.
Article
Full-text available
To improve the dynamic performance of the star sensor, different star image frames are correlated using the attitude-correlated frame (ACF) approach, and the attitude changes among these star image frames are measured by the strapdown gyro unit (GU). The GU consists of three orthogonally assembled gyros, while the accelerometer is excluded. Accurate calibration of the installation angles between the star sensor and the GU is essential for the ACF approach. An installation angle calibration method using the equivalent rotation vector transformation between the star sensor and the GU is proposed in this work. Through three to five maneuvers of the star sensor/GU system, the equivalent rotation vector transformation of the observed vector is performed by the installation matrix, and the optimal values of the installation angles are estimated. Simulations and experiments are designed and conducted, and both verify the proposed calibration method. The calibration results in the experiment show that the standard deviation of the installation angles in each measurement data group is less than 5 arcseconds and the difference between different data sets is less than 14 arcseconds for all three axes, which is accurate enough for the ACF approach and other applications.
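As a rough illustration of the estimation step described above, the following Python sketch aligns matched rotation vectors from the GU and the star sensor with an SVD-based (Wahba-style) fit to recover the installation rotation. It is a minimal stand-in for the paper's estimator: the function names, the synthetic maneuver data, and the noise levels are illustrative assumptions, and real use would also account for gyro drift and timing.

```python
# Minimal sketch (not the authors' exact estimator): estimate the fixed
# installation rotation that maps GU-frame rotation vectors into the
# star-sensor frame from pairs observed during a few maneuvers.
import numpy as np

def estimate_installation(phi_gu, phi_ss):
    """phi_gu, phi_ss: (N, 3) matched rotation vectors (rad).
    Returns the 3x3 rotation matrix R such that phi_ss ~ R @ phi_gu."""
    B = phi_ss.T @ phi_gu                   # Wahba-style attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))      # enforce a proper rotation (det = +1)
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Toy usage: five synthetic maneuvers seen through a hypothetical installation matrix.
rng = np.random.default_rng(0)
true_angles = np.deg2rad([0.5, -0.3, 0.8])   # illustrative installation angles
cx, cy, cz = np.cos(true_angles); sx, sy, sz = np.sin(true_angles)
R_true = (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]) @
          np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @
          np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))
phi_gu = rng.normal(scale=0.05, size=(5, 3))            # maneuver rotation vectors
phi_ss = phi_gu @ R_true.T + rng.normal(scale=1e-6, size=(5, 3))
print(estimate_installation(phi_gu, phi_ss).round(6))
```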
Article
Full-text available
Star sensors undergo laboratory calibration before they leave the factory. In addition, recalibration is necessary after they experience vibration, deformation, etc. Using the analysis of attitude-dependent and attitude-independent interstar angular invariance calibration methods (IAICMs) as a reference, an attitude-correlated frame-based calibration method (ACFCM) is proposed in this work, which combines the advantages of both methods. Using outdoor star observations, the ACFCM correlates star image frames obtained at different times via a strapdown gyro unit. As a result, the number of efficient star images for calibration increases rapidly and the distribution of star images becomes much more uniform, thus improving the calibration accuracy of the star sensor. A simulation and experimental tests were designed and carried out. Both the simulation and experimental results verify the feasibility of the proposed ACFCM method. Furthermore, by comparing our method with the IAICMs, the repeatability and reliability of the principal point obtained from the calibration with the ACFCM method proposed in this work were significantly improved.
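The core of the ACFCM idea, correlating star image frames through the gyro-measured attitude change, can be sketched as a simple rotation of earlier-frame star vectors into the current frame. The snippet below is only that skeleton: the rotation-vector sign convention, the function names, and the toy data are assumptions, and the installation matrix, gyro errors, and timing alignment handled by the actual method are omitted.

```python
# Minimal sketch of the frame-correlation idea: star unit vectors observed in a
# previous frame are rotated into the current frame using the gyro-measured
# relative attitude, so several frames can be pooled as if observed at once.
import numpy as np

def rotvec_to_matrix(phi):
    """Rodrigues formula: rotation matrix for a rotation vector phi (rad)."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3)
    k = phi / angle
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def correlate_frames(star_vectors_prev, phi_prev_to_curr):
    """Map unit star vectors from a previous frame into the current frame.
    The sign of phi_prev_to_curr depends on the chosen attitude convention."""
    C = rotvec_to_matrix(phi_prev_to_curr)   # previous-frame -> current-frame
    return star_vectors_prev @ C.T

# Usage: pool vectors from two frames for a later calibration or attitude fit.
prev = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 0.995]])
prev /= np.linalg.norm(prev, axis=1, keepdims=True)
curr = np.array([[0.05, 0.02, 0.998]])       # already in the current frame
pooled = np.vstack([correlate_frames(prev, np.deg2rad([0.0, 0.2, 0.0])), curr])
```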
Article
On some special occasions, a spacecraft may need to change attitude at a rate of 30 degrees per second in order to achieve a high level of mobility. To ensure that star sensors can stably extract a consistent number of star spots under extremely high dynamic conditions, this paper presents a method that solves the problem of accurately detecting faint star spots through low-level information fusion based on a multi-field-of-view (multi-FOV) star sensor. First, the gray threshold is reduced to a low level and the optimal directional connected component (ODCC) algorithm is used to extract the light spots, which contain both star spots and false-star noise. Next, a three-dimensional parameter space is established for Hough voting, and an estimate of the three-axis rotation angle is obtained by combining the structural characteristics, the motion characteristics, and the star vector information. Using the joint observation frames, the relative information of spot motion is obtained to preliminarily screen out false stars and to verify convergence during each iteration. Finally, the faint star spots are extracted and the false-star noise is screened out using the estimated three-axis rotation angle. Ground experiments have shown that, compared with traditional algorithms, this algorithm has better faint-star-spot extraction ability under extremely high dynamic conditions. Moreover, multi-FOV star sensors demonstrate a more robust capability than traditional single-FOV star sensors.
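A coarse, hedged sketch of the Hough-voting step is given below: each matched spot pair votes for the cells of a discretized three-axis rotation grid whose predicted image motion agrees with the measured displacement. The first-order image-motion model and its sign conventions are one common choice, not necessarily the paper's, and the grid, tolerance, and function names are illustrative.

```python
# Coarse 3-D Hough-style vote for the inter-frame rotation (dx, dy, dz).
import numpy as np

def predicted_motion(u, v, f, dx, dy, dz):
    """First-order image displacement of a point at (u, v) for a small camera
    rotation (dx, dy, dz) in radians; f is the focal length in pixels."""
    du = (u * v / f) * dx - (f + u**2 / f) * dy + v * dz
    dv = (f + v**2 / f) * dx - (u * v / f) * dy - u * dz
    return du, dv

def hough_rotation(spots, displacements, f, grid, tol=1.0):
    """spots: (N, 2) positions (u, v); displacements: (N, 2) measured (du, dv);
    grid: 1-D array of candidate angles (rad) used for all three axes."""
    votes = np.zeros((grid.size,) * 3, dtype=int)
    DX, DY, DZ = np.meshgrid(grid, grid, grid, indexing="ij")
    for (u, v), (du, dv) in zip(spots, displacements):
        pu, pv = predicted_motion(u, v, f, DX, DY, DZ)
        votes += (np.hypot(pu - du, pv - dv) < tol)   # one vote per consistent cell
    i, j, k = np.unravel_index(np.argmax(votes), votes.shape)
    return grid[i], grid[j], grid[k]

# Usage sketch: a 41^3 grid over +/-2 deg is coarse but enough to seed refinement.
grid = np.deg2rad(np.linspace(-2.0, 2.0, 41))
# estimate = hough_rotation(spots, displacements, f=3000.0, grid=grid)
```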
Article
In this paper, a generalized maneuvering attitude error-state Kalman filter (GMAESKF) is proposed to mitigate attitude measurement noise and to fulfill the requirement of angular-velocity estimation for terrestrial star sensors applied on agile maneuvering carriers. Based on the interchangeability between quaternions and rotation matrices, the method defines a generalized representation and operations for attitude, as well as attitude dynamics, and further establishes pseudo-linear maneuvering attitude kinematics based on a Jerk model with high-dimensional characteristics. The attitude measurement model of the multi-field-of-view (multi-FOV) star sensor, based on directional statistics, is refined through maximum likelihood estimation, thereby deriving a generalized measurement equation. On this basis, an error-state Kalman filter for attitude, angular velocity, angular acceleration, and angular jerk is constructed, realizing autonomous estimation of the required parameters. In numerical simulation, the proposed method reduced the attitude measurement error by 42.3%, with an RMS error of 0.0171°/s for angular velocity estimation. Both numerical simulations and flight experiments consistently show that, compared to existing related algorithms, the proposed algorithm yields more accurate results under agile maneuvering conditions.
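The full GMAESKF is considerably more elaborate than what can be shown here; the toy sketch below is only a single-axis linear Kalman filter with a jerk-driven kinematic model, meant to illustrate how filtering star-sensor angles with such a model both smooths the attitude and yields a rate estimate. All names and noise parameters are assumptions.

```python
# Toy, single-axis sketch (far simpler than the paper's GMAESKF): state
# [angle, rate, acceleration] driven by white jerk noise, measured by noisy
# star-sensor angles.
import numpy as np

def kf_angle_rate(z, dt, sigma_meas, sigma_jerk):
    """z: measured angles (rad); returns filtered (angle, rate, accel) history."""
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1, dt],
                  [0, 0, 1]])
    G = np.array([[dt**3 / 6], [dt**2 / 2], [dt]])   # jerk noise -> state
    Q = sigma_jerk**2 * (G @ G.T)
    H = np.array([[1.0, 0.0, 0.0]])
    R = np.array([[sigma_meas**2]])
    x = np.array([z[0], 0.0, 0.0])
    P = np.diag([sigma_meas**2, 1.0, 1.0])
    out = []
    for zk in z:
        x = F @ x                                    # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                          # update with star-sensor angle
        K = (P @ H.T) / S
        x = x + (K * (zk - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

# Usage sketch: kf_angle_rate(measured_angles, dt=0.05, sigma_meas=1e-4, sigma_jerk=0.5)
```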
Article
In order to expand the applications of the daytime star tracker (ST) to low-altitude platforms, it is necessary to solve the problem of daytime star detection inside the atmosphere. Currently, a short-wave infrared (SWIR) camera with an InGaAs sensor is widely used for the daytime ST to suppress the strong background radiation during daytime. The attitude-correlated frames adding (ACFA) approach can be used to further improve the sensitivity to starlight by correlating and adding successive star images. However, the InGaAs sensor exhibits both temporal and spatial noise, distinct from charge-coupled device (CCD) detectors. These two types of noise grow at different rates during the stacking process, which limits the improvement in star detection capability achievable with ACFA. To accurately estimate the limitations of daytime star detection when using ACFA, the influence of this approach on the signal-to-noise ratio (SNR) of star images is revealed. The noise model of the InGaAs sensor is first established in this article according to the imaging process of the SWIR camera. Then the changing trends of the gray-value noise and the SNR growth factor with the number of correlated frames are theoretically derived for SWIR star images. Finally, the SNR improvement based on ACFA is demonstrated by the results of stacking the star images. It successfully improves the star detection capability of the ST under dynamic conditions. The star detection limit in H-band magnitude is expected to increase from −0.21 to 1.5 with ACFA.
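The scaling argument can be reproduced numerically in a few lines: over N added frames the star signal grows as N, temporal noise as the square root of N, and fixed-pattern (spatial) noise as N, so the SNR growth factor saturates. The sketch below uses purely illustrative numbers, not values from the paper.

```python
# Minimal numerical sketch of the ACFA SNR scaling for a sensor with both
# temporal and fixed-pattern noise. All values are illustrative.
import numpy as np

def snr_growth(n, signal, sigma_temporal, sigma_spatial):
    """SNR of the sum of n frames: temporal noise is independent per frame,
    spatial (fixed-pattern) noise repeats identically in every frame."""
    total_signal = n * signal
    total_noise = np.sqrt(n * sigma_temporal**2 + (n * sigma_spatial)**2)
    return total_signal / total_noise

n = np.array([1, 4, 16, 64, 256])
single_frame_snr = snr_growth(1, 5.0, 20.0, 2.0)
print(snr_growth(n, 5.0, 20.0, 2.0) / single_frame_snr)   # SNR growth factor vs. N
# With temporal noise only the factor would be sqrt(N); the fixed-pattern term
# caps it near signal / sigma_spatial, which is why CCD-style sqrt(N) gains are
# not reached on an InGaAs sensor without non-uniformity correction.
```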
Article
At present, it is difficult to achieve accurate star identification for a star sensor in a highly maneuvering state. Under the rolling shutter exposure mode in particular, additional rolling-shutter distortion further increases the risk of star identification failure. Therefore, a new star identification algorithm for a high-dynamic star sensor under rolling shutter exposure mode is proposed. Through theoretical analysis, the star-pair position ratio is taken as a stable matching feature, and star identification is then realized by means of circular voting. Experiments are carried out under low and high angular-velocity working conditions; the results show that the proposed algorithm is more robust to position noise, variable-velocity motion, magnitude noise, and false stars than traditional algorithms. When the angular velocity is 10°/s and the standard deviation of the position noise is within 2 pixels, the identification rate is still higher than 94.3%. Moreover, the robustness test against magnitude noise and false stars shows that the identification rate is higher than 97.0% when the angular velocity is 7°/s under a magnitude noise of 0.6 Mv, and it is still higher than 73.0% at 10°/s with two false stars of magnitude 3 Mv added. Finally, the validity of the method is tested with real star maps. This algorithm provides a new idea for star identification in a high maneuvering state and improves the dynamic performance of the star sensor under rolling shutter exposure mode to a certain extent.
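A generic voting scaffold for pair-feature star identification is sketched below. Note that the paper's rolling-shutter-robust position-ratio feature is replaced here by the plain inter-star angular distance, so this shows only the voting mechanics; the catalog-pair structure, tolerance, and function names are assumptions.

```python
# Generic pair-feature voting skeleton for star identification (the feature
# used here is angular distance, a stand-in for the paper's position ratio).
import numpy as np
from collections import defaultdict
from itertools import combinations

def identify_by_voting(image_vecs, catalog_pairs, tol_rad):
    """image_vecs: (M, 3) unit vectors measured in the image;
    catalog_pairs: iterable of precomputed (i, j, angle) catalog pair features."""
    votes = defaultdict(int)                   # (image_star, catalog_star) -> votes
    for a, b in combinations(range(len(image_vecs)), 2):
        ang = np.arccos(np.clip(image_vecs[a] @ image_vecs[b], -1.0, 1.0))
        for i, j, cat_ang in catalog_pairs:
            if abs(cat_ang - ang) < tol_rad:   # this catalog pair could explain (a, b)
                for m, c in ((a, i), (a, j), (b, i), (b, j)):
                    votes[(m, c)] += 1
    # keep, for each image star, the catalog star with the most votes
    best = {}
    for (m, c), v in votes.items():
        if v > best.get(m, (None, -1))[1]:
            best[m] = (c, v)
    return {m: c for m, (c, _) in best.items()}
```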
Article
The starlight guidance system calculates the vehicle attitude by observing star positions with the star sensor. When the star sensor operates under high dynamic conditions, motion blur appears in the captured images, which seriously degrades the accuracy of the attitude solution. In this paper, based on the motion characteristics of the vehicle under high dynamic conditions, the imaging model of the blurred star spot is studied and derived. On this basis, the least squares method combined with a simulated annealing algorithm is used to fit the trajectory of the corresponding star spot. The trajectory fitting is based on the information of multiple star streaks in the star-field image. Finally, the center-of-mass coordinates of the star spot are calculated. Simulations and experiments demonstrate the feasibility and effectiveness of the method. The fitting error is less than 0.3 pixels for the angular velocity vector [10°/s, 0°/s, 10°/s] and an exposure time of 200 ms, an improvement of at least 0.1 pixel over previous studies.
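A simplified stand-in for the trajectory-fitting step is sketched below: an intensity-weighted total-least-squares (PCA) line fit to one streak's pixels, with the streak center taken as the midpoint of the extreme projections onto the fitted line. The simulated-annealing refinement and the coupling of multiple streaks used in the paper are omitted, and all names are illustrative.

```python
# Simplified streak-trajectory fit and center estimate for one star streak.
import numpy as np

def streak_center(coords, intensities):
    """coords: (N, 2) pixel coordinates of one thresholded streak;
    intensities: (N,) gray values. Returns (weighted mean, direction, center)."""
    w = intensities / intensities.sum()
    mean = (coords * w[:, None]).sum(axis=0)               # weighted centroid
    d = coords - mean
    cov = (d * w[:, None]).T @ d                            # weighted 2x2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]              # streak (trajectory) direction
    t = d @ direction                                       # projections along the streak
    center = mean + 0.5 * (t.min() + t.max()) * direction   # midpoint of the trajectory
    return mean, direction, center
```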
Article
Dynamic performance is an important index of star sensors. When the star sensor is under dynamic conditions, imaged star spots stretch into streaks and their energy is more dispersed than under static conditions, which decreases the signal-to-noise ratio (SNR) of the star spots. The threshold method is unable to detect enough stars under dynamic conditions, yet star sensors need sufficient stars to guarantee normal operation. The image enhancement method enhances the star spots by integrating along their movement directions, but it still misses many low-SNR stars because the determination of the movement directions is sensitive to noise, which limits the enhancement of low-SNR stars. To solve this problem, this study presents an improved UNet-based star spot extraction method for star sensors under dynamic conditions. The improved UNet, termed SSA-UNet, uses a star streak attention module (SSAM) to help it focus on the star spots and improve extraction performance. The presented method uses SSA-UNet for a rough extraction and refines that extraction in subsequent processing. Both simulation and ground experiments prove that the proposed method can accurately extract more low-SNR star spots than the image enhancement method, which is of great significance for further improving the dynamic performance of star sensors.
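For orientation only, the PyTorch sketch below shows a tiny encoder-decoder with a simple spatial-attention gate on the skip connection. It is not the authors' SSA-UNet or SSAM; the architecture, channel counts, and names are assumptions chosen to keep the example short (input height and width must be even).

```python
# Generic tiny attention U-Net for single-channel star-image segmentation.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                      # re-weight features toward streak pixels

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = block(1, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.attn = SpatialAttention()
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.up(e2)
        d1 = self.dec1(torch.cat([self.attn(e1), d1], dim=1))
        return torch.sigmoid(self.head(d1))   # per-pixel star-spot probability

mask = TinyUNet()(torch.randn(1, 1, 64, 64))  # usage on a toy 64x64 frame
```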
Article
To solve the problem of insufficient short-wave infrared star map data for evaluating the performance of near-earth space star sensors, and the difficulty of characterizing how atmosphere-dominated disturbance factors degrade star map quality, a star map simulation method for the near-earth space SWIR star sensor is proposed based on coordinate conversion, energy transfer, and image-quality degradation. The atmospheric disturbance factors considered include atmospheric background, refraction, turbulence, and aerosol scattering. According to the presented simulation method, combined with the designed simulation process and the set input conditions, simulation results of the star map at 0 km altitude are given. The simulation results show that the stronger the sky background, the larger the star map gray mean and the RMS noise, and the larger the gray mean and peak value of the 3×3 pixel region where the star point is located; the larger the view zenith angle, the more significant the star point centroid offset. The star map quality is quantitatively evaluated based on assessment indicators and further verified with measured data. The evaluation and verification outcomes support the inferences drawn from the star map simulation results and illustrate the rationality of the star map simulation method. This paper can provide technical support for the optimization design, capability estimation, and algorithm validation of the near-earth space SWIR star sensor.
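A minimal rendering sketch in the spirit of such a simulator is given below: catalog star vectors already expressed in the sensor frame are projected with a pinhole model, drawn with a Gaussian point spread function, and combined with a uniform sky background and Gaussian noise. Atmospheric refraction, turbulence, and aerosol scattering, which the paper models explicitly, are omitted, and the photometric scale and all names are illustrative.

```python
# Minimal star-map rendering sketch (no atmospheric effects modeled).
import numpy as np

def simulate_star_map(dirs_body, magnitudes, f_pix, size, bg, noise_std, sigma=0.7):
    """dirs_body: (N, 3) unit star vectors already in the sensor frame;
    magnitudes: (N,) instrument magnitudes; f_pix: focal length in pixels."""
    h, w = size
    img = np.full((h, w), bg, dtype=float)                # uniform sky background
    yy, xx = np.mgrid[0:h, 0:w]
    for d, m in zip(dirs_body, magnitudes):
        if d[2] <= 0:                                     # star behind the focal plane
            continue
        u = f_pix * d[0] / d[2] + w / 2                   # pinhole projection (columns)
        v = f_pix * d[1] / d[2] + h / 2                   # pinhole projection (rows)
        flux = 1000.0 * 10 ** (-0.4 * m)                  # illustrative photometric scale
        img += flux * np.exp(-((xx - u) ** 2 + (yy - v) ** 2) / (2 * sigma**2))
    img += np.random.default_rng(0).normal(0.0, noise_std, img.shape)
    return np.clip(img, 0, 65535)                         # 16-bit SWIR-like output
```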