(Preprint) AAS 16-015
LOW COST SPACECRAFT ATTITUDE DETERMINATION FOR
CUBESAT TYPE MISSIONS
Vinicius Guimaraes Goecks∗, Austin Probe†, Robyn Woollands†,
John Hurtado‡, and John L. Junkins§
We present a method for performing low cost attitude estimation for CubeSat type
missions. Our algorithm uses measurements from a custom built sun sensor, a star
camera, and inertial measurements. These measurements are supplied in
real-time to a Multiplicative Kalman Filter for the purpose of generating continuous attitude estimates. The testing and validation of this algorithm is done in the
Land, Air, and Space Robotics Laboratory at Texas A&M University, using our
custom three degrees-of-freedom attitude test-bed interfacing with a suspended
target emulation pendulum. The algorithm is implemented using low cost commercial off-the-shelf hardware and open-source libraries for the required vision
based localization. This low cost, low power, and small-scale attitude estimation
technology is ideal for use on CubeSats and ChipSats. Our algorithm is designed
to be part of a suite of tools that is currently being developed at the lab for attitude
estimation and control.
INTRODUCTION
CubeSats and ChipSats are regularly being considered for scientific missions. NASA Jet Propulsion Laboratory (JPL) has several CubeSat missions planned that will collect scientific data and
allow for further study in areas such as planetary science,1 Earth science,2 and astrophysics.3 Important science questions regarding Earth’s climate and distant galaxies can only be answered with
space-based instruments along with ancillary guidance, navigation, and control (GNC) sensors with
well optimized algorithms. But the high cost associated with launch and maintenance of traditional
satellites limits what can currently be accomplished. CubeSats, with their light weight, low power
requirements, and overall low cost, are revolutionizing space exploration, and affording more organizations the opportunity to become involved in space exploration and the development of CubeSat
technology.4 In addition, the concept of CubeSat constellations could provide further benefits not
realizable with traditional satellites.5
In this paper we develop an attitude estimation algorithm to support a CubeSat type mission.
Our algorithm requires input data from a custom built sun sensor, a star camera, and gyro measurements from an onboard micro-electro-mechanical inertial measurement unit (MEMS IMU). This
∗Graduate Research Assistant, Department of Aerospace Engineering, Texas A&M University, TAMU 3141, College Station, TX 77843-3141, USA. Scholarship CAPES - Brazil.
†Graduate Research Assistant, Department of Aerospace Engineering, Texas A&M University, TAMU 3141, College Station, TX 77843-3141, USA.
‡Professor, Department of Aerospace Engineering, Texas A&M University, TAMU 3141, College Station, TX 77843-3141, USA.
§Distinguished Professor, Department of Aerospace Engineering, Texas A&M University, TAMU 3141, College Station, TX 77843-3141, USA.
information is supplied, in real-time, to our Multiplicative Kalman Filter (MKF) for the purpose of
generating continuous attitude estimates. The testing and validation of this algorithm is done in the
Land, Air, and Space Robotics Laboratory (LASR) at Texas A&M University, using the LASR Attitude Test-bed (LASRAT). This is a custom three degrees-of-freedom (3-DoF) experimental attitude
platform that interfaces with LASR’s Suspended Target Emulation Pendulum (STEP). The algorithm is implemented using low cost commercial off-the-shelf (COTS) hardware and open-source
libraries for the required computer vision based localization. This low cost, low power, and small-scale attitude estimation technology is ideal for use on CubeSats and ChipSats. Our algorithm is
designed to be part of a suite of tools that is currently being developed at the LASR Lab for attitude
estimation and control.
ATTITUDE TEST-BED HARDWARE & SOFTWARE
In this section we present the attitude hardware components (Vicon system, sun sensor, star camera, LASRAT) and discuss the computer vision library (OpenCV) required for recording the measurements. The software for data processing and attitude estimation is presented later in the paper.
Vicon System
Vicon is a system of cameras and light-emitting diodes (LEDs) that uses retro-reflective beacons to identify bodies and provide “truth” data in the lab. At the LASR lab, six Vicon cameras are mounted on the ceiling, creating a 3D workspace in which object positions may be measured with cm accuracy. The Vicon system also provides a 6-DoF position and attitude state for an object based on measurements of four retro-reflective beacons fixed onboard. The Vicon system is used to establish
independent estimates of the “true” position and attitude of the simulated spacecraft (LASRAT)
during a maneuver, and this is compared with the output of the attitude estimator in order to quantify
the performance of our demonstration sensors and algorithms.
Vicon is also used for calibrating the sun sensor and star camera. The procedure begins by
measuring the positions of the respective light sources (artificial sun and star field) in the Vicon
workspace to generate a truth measurement. Following this the sun sensor and star camera are
used to measure the position of the respective light sources from a number of different locations
in the Vicon work space. Several hundred measurements are made and a standard deviation of
measurement errors is computed from the respective sun sensor and star camera data. These values are required for filtering in the attitude estimation algorithm.
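The calibration step above reduces to computing the standard deviation of the sensor-minus-Vicon errors. A minimal Python sketch (the paper's own processing uses MATLAB; the function name and noise values here are illustrative, not from the paper):

```python
import numpy as np

def measurement_sigma(truth_angles, measured_angles):
    """Standard deviation of sensor errors against Vicon truth (hypothetical helper)."""
    errors = np.asarray(measured_angles) - np.asarray(truth_angles)
    return float(np.std(errors))

# Example: several hundred calibration samples with ~2 deg synthetic noise
rng = np.random.default_rng(0)
truth = rng.uniform(-20.0, 25.0, 500)          # reference angles from Vicon
meas = truth + rng.normal(0.0, 2.0, 500)       # sensor readings with noise
sigma = measurement_sigma(truth, meas)         # feeds the filter's R matrix
```

The resulting sigma is what enters the measurement covariance R of the filter described later.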
Sun Sensor
The sun sensor consists of two high-sensitivity, 2048-pixel, linear image sensor charge-coupled
devices (CCD) that are positioned orthogonal to each other. This allows light sources to be recorded
and effectively measured in a 2D field of view. The basic concept of a CCD is that incoming photons
of light will hit the sensing elements and charge internal capacitors according to the photon flux.
After each integration time, the capacitors are discharged to a shift register and the different charges
are read by an analog-to-digital (A/D) converter.6 Using this approach, the x and y centroids of each cluster of pixel responses are independently measured based on one-dimensional operations on each of the two CCD array outputs.
In addition to the orthogonal CCDs, the sun sensor has a thin cover (aluminium foil) with two
slits positioned directly above the centers of the respective linear CCD arrays. The slits allow the
sun light to hit only a reduced part of the sensing elements depending on the incident angle of
the sunlight. Based on the resulting voltage pattern we can determine the position of the excited
elements and calculate the inclination angle of the incoming photons. Light from a single light
source (e.g. the Sun or artificial Sun in the lab) enters the slits and illuminates a portion of each
array (Figure 1). The magnitude and obliqueness of the illuminated sensing elements on these two
orthogonal linear CCD arrays are combined to give an estimate vector to the light source.7
To process the angle measured by the sun sensor, the centroid of the excited sensing elements on the CCD for the x- and y-axes is calculated and analysed as the image plane of a camera. Based on the designed height of the case, h, and the calculated centroid position from the center, d, the angle θ is calculated using Eq. (1). The number of excited elements is directly related to the slit width, a
(Figure 1).
Figure 1. Sun sensor schematic showing one linear CCD array and light slit (slit width a, case height h, centroid offset d).
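The centroid-to-angle computation of Eq. (1) can be sketched in a few lines of Python. The pixel pitch and case height below are assumed values for illustration only; the paper does not give the prototype's dimensions:

```python
import numpy as np

# Sketch of the sun-sensor geometry in Eq. (1): the centroid of the
# illuminated pixels, offset d from the array center, and the case height h
# give the incidence angle theta = atan(d/h).
PIXEL_PITCH = 0.0635  # mm per pixel (assumed, not from the paper)
H = 10.0              # slit-to-array height in mm (assumed)

def centroid_angle(intensity, pixel_pitch=PIXEL_PITCH, h=H):
    intensity = np.asarray(intensity, dtype=float)
    pixels = np.arange(intensity.size)
    centroid_px = np.sum(pixels * intensity) / np.sum(intensity)   # weighted centroid
    d = (centroid_px - (intensity.size - 1) / 2.0) * pixel_pitch   # offset from array center
    return np.degrees(np.arctan2(d, h))                            # Eq. (1)

# A light cluster centered 200 pixels right of center on a 2048-pixel array;
# the number of excited elements is set by the slit width a
resp = np.zeros(2048)
resp[1215:1233] = 1.0
theta = centroid_angle(resp)
```

Intensity weighting gives sub-pixel resolution on d, which is why a simple threshold on the excited elements is not used directly.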
Design and Testing. The sun sensor was chosen as an attitude sensor due to its simple construction, fast response, low cost, and low energy and computer processing requirements. The current prototype was 3D printed in the LASR lab and built with COTS components. The sensor is composed of two orthogonal CCDs and a Teensy LC microcontroller mounted on a custom printed circuit board (PCB). The next prototype, currently under development, will be drastically reduced in size (about four to six times) due to smaller sensing units and all computer processing being handled by the attitude hardware.
The sensor was characterized in the Vicon space where an LED light source was used as the
artificial Sun. The measurements were taken from 25 different positions in the lab and the Sun
position was reconstructed as seen by the sensor using the Teensy LC microcontroller and exported
in real-time to MATLAB for visualization.
The results for the sun sensor calibration are shown in Figure 2(a) with the errors plotted in
Figure 2(b). The top and bottom panels of Figure 2(a) show the measured data (red dots) in the
respective x-direction and y-directions. The blue line in each of these plots represents where the
data would lie if the sun sensor made perfect measurements. That is, the reference angle measured
by Vicon (angle between the sun sensor and the Sun) would be the same as the angle measured by
the sun sensor. No sun sensor is perfect, and it is clear in the plots that there is some deviation of the
sun sensor data from the truth. The errors shown in Figure 2(b) are as much as +/- 10 degrees for
the x-component (top panel) and about +/- 5 degrees for the y-component (bottom panel). This is more
than the published accuracy of +/- 5 degrees for sun sensors in general,7 however, we expect that
our prototype sun sensor will not perform at state-of-the-art accuracy and anticipate improvements
in future designs.
θ = atan(d/h)    (1)
Figure 2. (a) Comparison between the angle read by the Vicon system (truth angle)
and that measured by the sun sensor. (b) Sun sensor error measurements in the x and
y-axis directions.
Star Camera and Star Field
The lab's artificial star field consists of 5 red LEDs arranged in a pattern representing the Southern Cross constellation (Figure 3). The 5 LEDs are each wired in series with resistors of different values to allow some variation in the “stellar” magnitudes. The red color allows the positions of the stars to be tracked by the Vicon system. A star camera images the field of view and the incoming light is used to generate a centroid for each star using a blob-tracking algorithm. The position of the centroid is representative of the position of the star in the star field. This process is coded in C/C++ using OpenCV, an open-source computer vision library, running on a Raspberry Pi Model A. A pyramid star-identification algorithm is used to search through the stars and compare the angular distances between them with respect to the others in the field of view, thus allowing each star to be correctly identified.8 We note that attitude estimation using a star field of only 5 stars would be a challenge; thus, for this paper we use simulated star data from the catalogue mappar.mat.9 More details are given in the final section.
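The core idea behind star identification by angular distances can be illustrated with a toy Python sketch. This is only pairwise matching, far simpler than the pyramid algorithm of Reference 8 (which adds a fourth confirming star to reject false matches); all names and vectors here are illustrative:

```python
import numpy as np
from itertools import combinations

def angular_distance(u, v):
    """Angle (rad) between two unit line-of-sight vectors."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def match_pairs(observed, catalogue, tol=1e-3):
    """Toy pair matching on inter-star angles: the angle between two observed
    stars is compared against every catalogue pair angle."""
    matches = []
    for (i, j) in combinations(range(len(observed)), 2):
        d_obs = angular_distance(observed[i], observed[j])
        for (a, b) in combinations(range(len(catalogue)), 2):
            if abs(d_obs - angular_distance(catalogue[a], catalogue[b])) < tol:
                matches.append(((i, j), (a, b)))
    return matches

# A 3-star catalogue and the same stars viewed after a rotation about z;
# inter-star angles are invariant under rotation, so pairs still match.
cat = [np.array([1.0, 0.0, 0.0]),
       np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0),
       np.array([0.0, 0.0, 1.0])]
th = 0.5
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
obs = [Rz @ v for v in cat]
matches = match_pairs(obs, cat)
```

Because inter-star angles are rotation-invariant, matching them identifies stars without knowing the attitude, which is exactly why star identification can bootstrap a "lost-in-space" estimate.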
Figure 3. Experimental set-up showing the star field and attitude camera (or sun
sensor) located on the LASR Embedded Navigation System (LENS).
LASR Attitude Test-bed
The LASR Attitude Test-bed (LASRAT) is a custom 3-DoF experimental attitude platform that
houses the sun and star sensors, MEMS IMU, an Arduino UNO board and specific motor driver,
Xbee radio transmitter/receiver, a 3D printed platform, and three DC motors coupled to reaction
wheels.
The MEMS IMU is mounted on the 3D printed platform under the Arduino microcontroller and
outputs real-time acceleration, magnetometer and gyroscope data. The Xbee is located onboard the
Arduino and transmits a wireless signal to the USB Xbee receiver on the computer. This information is made directly available to MATLAB through a serial port for data processing and subsequent return commands back to the Arduino, wirelessly. The commands received by the Xbee on the Arduino carry instructions for the motor controller, which allows the reaction wheels, and consequently the system, to be controlled as desired. The sun and star sensor data is used in combination
with the IMU data for the attitude estimation algorithm.
Arduino code is used for operating the attitude platform and MATLAB code is used for processing
the data and visualization on the computer. This current hardware set-up is a prototype and will
likely be refined further for use on actual CubeSat missions.
Figure 4. LASR Attitude Test-bed (LASRAT) on which sensors, electronics, and
reaction wheels are mounted.
ATTITUDE ESTIMATION SOFTWARE
“Errors using inadequate data are much less than those using no data at all.” - Charles Babbage
In this section we present an algorithm that affords two options for estimating the attitude of a
simulated spacecraft (LASRAT) in the lab. The first uses simulated star measurements and data
from the onboard gyroscope; the second uses simulated sun sensor data and measurements from the
onboard accelerometer, magnetometer, and gyroscope. The algorithm is based on the Multiplicative
Quaternion Kalman Filter given in Chapter 7 of Crassidis & Junkins.9 Here we outline the mathematical procedure required for implementing the algorithm, and refer the reader to the reference9
for a full derivation and detailed explanation.
For this experiment the quaternion Vicon “truth” data was used to simulate measurements from
either the star tracker or sun sensor. This is done because our lab star field contains only five stars
(Figure 3) and is thus inadequate to determine attitude for complex spacecraft rotational maneuvers.
In addition, we only have one prototype sun sensor, and multiple sensors would need to be mounted onboard the LASRAT to provide full coverage. To simulate the star measurements we make use of
a MATLAB code provided by Malak,10 and to generate the simulated sun measurements the sun
vector was rotated into the body frame using the quaternion supplied from Vicon and artificial noise
was added.
The body measurements (ỹₖ) are computed from the inertial simulated measurements (rₙ) as shown in Eq. (2), where A is the attitude matrix, q is the quaternion produced by Vicon, ν is Gaussian noise, and hₖ(x̂ₖ) is the measurement model.

\tilde{\mathbf{y}}_k = \begin{bmatrix} A(q)\mathbf{r}_1 \\ A(q)\mathbf{r}_2 \\ \vdots \\ A(q)\mathbf{r}_n \end{bmatrix}_{t_k} + \begin{bmatrix} \boldsymbol{\nu}_1 \\ \boldsymbol{\nu}_2 \\ \vdots \\ \boldsymbol{\nu}_n \end{bmatrix} \equiv h_k(\hat{\mathbf{x}}_k) + \mathbf{v}_k \quad (2)
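Generating the simulated body measurements of Eq. (2) amounts to rotating each inertial reference vector by the attitude matrix and adding Gaussian noise. A Python sketch (the paper's implementation is in MATLAB; the scalar-last quaternion convention follows Crassidis & Junkins, and all names are illustrative):

```python
import numpy as np

def quat_to_attitude(q):
    """Attitude matrix A(q) for q = [q1, q2, q3, q4], scalar last."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y + z*w),     2*(x*z - y*w)],
        [2*(x*y - z*w),     1 - 2*(x*x + z*z), 2*(y*z + x*w)],
        [2*(x*z + y*w),     2*(y*z - x*w),     1 - 2*(x*x + y*y)],
    ])

def simulate_body_measurements(q_true, ref_vectors, sigma, rng):
    """Rotate inertial reference vectors into the body frame and add
    Gaussian noise, as in Eq. (2)."""
    A = quat_to_attitude(q_true)
    return np.concatenate([A @ r + rng.normal(0.0, sigma, 3) for r in ref_vectors])

# Identity attitude with zero noise returns the reference vectors unchanged
rng = np.random.default_rng(1)
q_id = np.array([0.0, 0.0, 0.0, 1.0])
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
y = simulate_body_measurements(q_id, refs, 0.0, rng)
```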
The sensitivity matrix for the measurements is computed as

H_k(\hat{\mathbf{x}}_k^-) = \begin{bmatrix} [A(\hat{q}^-)\mathbf{r}_1 \times] & 0_{3\times3} \\ [A(\hat{q}^-)\mathbf{r}_2 \times] & 0_{3\times3} \\ \vdots & \vdots \\ [A(\hat{q}^-)\mathbf{r}_n \times] & 0_{3\times3} \end{bmatrix}_{t_k} \quad (3)
With the above information the Kalman gain is computed as

K_k = P_k^- H_k^T(\hat{\mathbf{x}}_k^-) \left[ H_k(\hat{\mathbf{x}}_k^-) P_k^- H_k^T(\hat{\mathbf{x}}_k^-) + R \right]^{-1}, \quad (4)

where P is the covariance matrix and R is a block-diagonal matrix built from the star camera sensor standard deviations:

R = \mathrm{diag}\begin{bmatrix} \sigma_1^2 I_{3\times3} & \sigma_2^2 I_{3\times3} & \cdots & \sigma_n^2 I_{3\times3} \end{bmatrix} \quad (5)
The error-state is updated using Eq. (6). Δx̂ₖ⁺ has six components: the first three are the quaternion updates δα̂ₖ⁺ and the last three are the gyro bias updates Δβ̂ₖ⁺.

\Delta\hat{\mathbf{x}}_k^+ = K_k\left[\tilde{\mathbf{y}}_k - h_k(\hat{\mathbf{x}}_k^-)\right] \quad (6)
The post-update angular velocity and propagated gyro bias are given in Eqs. (7) and (8), respectively.

\hat{\boldsymbol{\omega}}_k^+ = \tilde{\boldsymbol{\omega}}_k - \hat{\boldsymbol{\beta}}_k^+ \quad (7)

\hat{\boldsymbol{\beta}}_{k+1}^- = \hat{\boldsymbol{\beta}}_k^+ \quad (8)
The quaternion is updated using Eq. (9),

\hat{\mathbf{q}}_k^+ = \hat{\mathbf{q}}_k^- + \tfrac{1}{2}\,\Xi(\hat{\mathbf{q}}_k^-)\,\delta\hat{\boldsymbol{\alpha}}_k^+, \quad (9)

where \Xi(\hat{\mathbf{q}}) \equiv \begin{bmatrix} q_4 I_{3\times3} + [\boldsymbol{\varrho}\times] \\ -\boldsymbol{\varrho}^T \end{bmatrix} and \boldsymbol{\varrho} = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix}^T.
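The update step of Eqs. (6)-(9) can be sketched in Python. The brute-force renormalization at the end is a common implementation safeguard, not something stated in the paper; scalar-last quaternions are assumed throughout:

```python
import numpy as np

def xi_matrix(q):
    """Xi(q) from Eq. (9) for scalar-last q = [rho; q4]."""
    x, y, z, w = q
    rho_cross = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return np.vstack([w * np.eye(3) + rho_cross, -np.array([x, y, z])])

def multiplicative_update(q_minus, beta_minus, K, innovation):
    """Eqs. (6)-(9): error-state, gyro bias, and quaternion updates."""
    dx = K @ innovation                       # Eq. (6): 6-component error state
    d_alpha, d_beta = dx[:3], dx[3:6]
    beta_plus = beta_minus + d_beta           # bias update
    q_plus = q_minus + 0.5 * xi_matrix(q_minus) @ d_alpha   # Eq. (9)
    return q_plus / np.linalg.norm(q_plus), beta_plus

# With zero gain the state passes through unchanged
q_minus = np.array([0.0, 0.0, 0.0, 1.0])
beta_minus = np.zeros(3)
K = np.zeros((6, 3))
q_plus, beta_plus = multiplicative_update(q_minus, beta_minus, K, np.ones(3))
```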
The next step in the MKF is to propagate the covariance matrix, Eq. (10),

P_{k+1}^- = \Phi_k P_k^+ \Phi_k^T + \Upsilon_k Q_k \Upsilon_k^T, \quad \text{where} \quad \Upsilon_k = \begin{bmatrix} -I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_{3\times3} \end{bmatrix}. \quad (10)
This requires the computation of the state transition matrix (Φ) and the discrete process noise (Q_k). The state transition matrix is computed as follows:

\Phi = \begin{bmatrix} \Phi_{11} & \Phi_{12} \\ \Phi_{21} & \Phi_{22} \end{bmatrix}, \quad (11)

where

\Phi_{11} = I_{3\times3} - [\hat{\boldsymbol{\omega}}\times]\,\frac{\sin(\|\hat{\boldsymbol{\omega}}\|\Delta t)}{\|\hat{\boldsymbol{\omega}}\|} + [\hat{\boldsymbol{\omega}}\times]^2\,\frac{1-\cos(\|\hat{\boldsymbol{\omega}}\|\Delta t)}{\|\hat{\boldsymbol{\omega}}\|^2}

\Phi_{12} = [\hat{\boldsymbol{\omega}}\times]\,\frac{1-\cos(\|\hat{\boldsymbol{\omega}}\|\Delta t)}{\|\hat{\boldsymbol{\omega}}\|^2} - I_{3\times3}\,\Delta t - [\hat{\boldsymbol{\omega}}\times]^2\,\frac{\|\hat{\boldsymbol{\omega}}\|\Delta t - \sin(\|\hat{\boldsymbol{\omega}}\|\Delta t)}{\|\hat{\boldsymbol{\omega}}\|^3}

\Phi_{21} = 0_{3\times3}

\Phi_{22} = I_{3\times3}. \quad (12)
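The closed-form blocks of Eqs. (11)-(12) assemble directly into a 6x6 matrix. A Python sketch (a small-angle series fallback would be needed near ‖ω̂‖ = 0; that guard is omitted here for brevity):

```python
import numpy as np

def cross_matrix(w):
    """Skew-symmetric cross-product matrix [w x]."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def state_transition(omega, dt):
    """Closed-form Phi of Eqs. (11)-(12); omega is the post-update rate."""
    n = np.linalg.norm(omega)
    wx = cross_matrix(omega)
    I3 = np.eye(3)
    phi11 = I3 - wx * np.sin(n * dt) / n + wx @ wx * (1.0 - np.cos(n * dt)) / n**2
    phi12 = (wx * (1.0 - np.cos(n * dt)) / n**2
             - I3 * dt
             - wx @ wx * (n * dt - np.sin(n * dt)) / n**3)
    return np.block([[phi11, phi12],
                     [np.zeros((3, 3)), I3]])

# Pure z-axis rotation: the attitude block reduces to a planar rotation
Phi = state_transition(np.array([0.0, 0.0, 1.0]), 0.1)
```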
The discrete process noise is given by

Q_k = \begin{bmatrix} \left(\sigma_\nu^2\,\Delta t + \tfrac{1}{3}\sigma_u^2\,\Delta t^3\right) I_{3\times3} & \tfrac{1}{2}\sigma_u^2\,\Delta t^2\, I_{3\times3} \\ \tfrac{1}{2}\sigma_u^2\,\Delta t^2\, I_{3\times3} & \sigma_u^2\,\Delta t\, I_{3\times3} \end{bmatrix}, \quad (13)

where σν is the gyro measurement noise standard deviation and σu is the gyro drift (bias random walk) standard deviation.
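Eq. (13) builds a symmetric, positive-definite 6x6 matrix from the two gyro noise parameters. A Python sketch with illustrative (not data-sheet) noise values:

```python
import numpy as np

def process_noise(sigma_v, sigma_u, dt):
    """Discrete process noise Q_k of Eq. (13) from the gyro noise parameters."""
    I3 = np.eye(3)
    q11 = (sigma_v**2 * dt + sigma_u**2 * dt**3 / 3.0) * I3   # attitude block
    q12 = 0.5 * sigma_u**2 * dt**2 * I3                        # cross-coupling block
    q22 = sigma_u**2 * dt * I3                                 # bias block
    return np.block([[q11, q12], [q12, q22]])

# Illustrative values at the 25 Hz gyro rate used in the paper
Q = process_noise(sigma_v=1e-3, sigma_u=1e-4, dt=0.04)
```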
Finally the quaternion state is propagated forward using

\hat{\mathbf{q}}_{k+1}^- = \bar{\Omega}(\hat{\boldsymbol{\omega}}_k^+)\,\hat{\mathbf{q}}_k^+, \quad (14)

where

\bar{\Omega}(\hat{\boldsymbol{\omega}}_k^+) = \begin{bmatrix} \cos\!\left(\tfrac{1}{2}\|\hat{\boldsymbol{\omega}}_k^+\|\Delta t\right) I_{3\times3} - [\hat{\boldsymbol{\psi}}_k^+\times] & \hat{\boldsymbol{\psi}}_k^+ \\ -\hat{\boldsymbol{\psi}}_k^{+T} & \cos\!\left(\tfrac{1}{2}\|\hat{\boldsymbol{\omega}}_k^+\|\Delta t\right) \end{bmatrix} \quad \text{and} \quad \hat{\boldsymbol{\psi}}_k^+ \equiv \frac{\sin\!\left(\tfrac{1}{2}\|\hat{\boldsymbol{\omega}}_k^+\|\Delta t\right)\hat{\boldsymbol{\omega}}_k^+}{\|\hat{\boldsymbol{\omega}}_k^+\|}.
This process is repeated for each time step and new gyro measurements are included as they
become available. Vicon data, and hence simulated star camera measurements, are available at a
frequency of 100 Hz. The gyro samples at a rate of 25 Hz. This means that for every four star camera
measurements there is one gyro measurement. In addition to synchronizing the way data is received
by the filter, it is also important that the filter is properly tuned. The sensor calibration tests that
were discussed in the previous sections allowed a standard deviation value to be computed for each
sensor. This along with calibration data from the IMU data sheet was used for tuning the filter.
This discrete-time propagation algorithm is an attractive approximation to the continuous algorithm because it does not require computationally expensive “continuous” numerical integration. It can thus be easily implemented in real-time onboard a spacecraft. However, it must be noted that since this is an approximation it is valid only if the angular rate stays within the Nyquist-type limit ‖ω̂(t)‖Δt < π/10. Since the gyro sampling rate is 25 Hz, this places an upper bound on the angular velocity of 7.85 rad/s, or 1.25 revolutions per second.
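The quoted bound follows directly from the sampling constraint. A quick Python sanity check of the arithmetic:

```python
import numpy as np

# Sampling constraint from the text: ||omega|| * dt < pi/10.
# With a 25 Hz gyro, dt = 0.04 s, so the admissible rate is (pi/10)/dt.
dt = 1.0 / 25.0
omega_max = (np.pi / 10.0) / dt           # rad/s
rev_per_sec = omega_max / (2.0 * np.pi)   # revolutions per second
```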
RESULTS & ANALYSIS
We present two test cases that each represent a different configuration of the MKF. The first uses
artificial star measurements with superimposed noise, and gyro data from the onboard IMU. The
second test case uses simulated sun sensor data with superimposed noise, and measurements from
the gyro, accelerometer and magnetometer. For both test cases the initial quaternion is taken as
unity, thus representing a “lost-in-space” attitude. The motion for the first test case was a gentle
tumble and for the second test case it was mostly spin about the z-axis.
Figures 5 and 6 show the true (Vicon) and estimated quaternions in blue and red, respectively.
The quaternions output by Vicon show some jumps/discontinuities (quaternion flipping) that are
smoothed out by the filter. It is important to note that the specific attitude of a body can be described
by quaternions with different values. In Figure 7 the quaternions are converted into Euler angles
(Roll, Pitch, Yaw). The sharp jumps in the yaw panel correspond to motion through an angle of 2π. As the LASRAT slows down, the time to turn through 2π increases and the yaw rate decreases, as seen in the figure.
The filter performance is displayed in Figure 8, with the black lines representing the quaternion
error and the red dashed lines representing the 3-sigma error bounds. The filter performs well over
the duration of the simulation and stays within the 3-sigma error bounds. The gyro bias is shown in
Figure 9. As time progresses the filter and gyro bias converge and thus allow accurate estimates of
the attitude to be made.
Figures 10 to 14 are similar to the preceding figures but display the data from the second test case. It is clear that the results from the first test case, which included star data, are more accurate than those of the second test case, which used only sun sensor and IMU data. This is expected, as determining attitude from multiple visible sources is better than using just a single source (the Sun). However,
the second test case does reveal that even with no star measurements (only sun sensor and IMU
measurements) reasonable attitude estimates can still be made, and that spacecraft can recover from
the initial “lost-in-space” configuration.
Figure 5. True (blue) and estimated (red) quaternions for the first test case, where
simulated star data and gyro measurements are made available to the filter. Motion is
a gentle tumble.
Figure 6. True (blue) and estimated (red) quaternions for the first test case, where
simulated star data and gyro measurements are made available to the filter. Motion is
a gentle tumble.
Figure 7. Euler angles for the first test case where simulated star data and gyro
measurements are made available to the filter. Motion is a gentle tumble.
CONCLUSION
We developed two attitude estimation algorithms that may be used for CubeSat type missions.
The first method uses a single point, single axis, attitude estimator that combined a custom sun
sensor and data from an onboard MEMS IMU. The second uses a star camera and combines multiple
star measurements with the data from the onboard MEMS IMU. In both cases a Multiplicative
Kalman Filter (MKF) was employed to generate continuous attitude estimates. Our algorithm was
implemented using low cost COTS hardware and open-source libraries for the required vision
based localization. The testing and validation of this algorithm was done in the LASR Lab at Texas
Figure 8. Quaternion error (black) and 3-sigma error bounds (red) for the first test
case, where simulated star data and gyro measurements are made available to the
filter. Motion is a gentle tumble.
Figure 9. Gyro bias for the first test case where simulated star data and gyro measurements are made available to the filter. Motion is a gentle tumble.
A&M University, using the LASR Attitude Test-bed. For the two test cases presented in this paper we see that our estimator performs well as it tracks the true attitude and the errors fall within the 3-sigma error bounds. The accuracy is a little better when using simulated star data and the gyro measurements compared with using simulated sun sensor data and gyro, accelerometer, and magnetometer data. This is expected as determining attitude from multiple visible sources (stars) is better than using just a single source (the Sun). However, the second test case does reveal that even with no star measurements (only sun sensor and IMU measurements) reasonable attitude estimates
Figure 10. True (blue) and estimated (red) quaternions for the second test case, where
simulated sun sensor data and gyro, accelerometer and magnetometer measurements
are made available to the filter. Motion is mostly spin about the z-axis.
Figure 11. True (blue) and estimated (red) quaternions for the second test case, where
simulated sun sensor data and gyro, accelerometer and magnetometer measurements
are made available to the filter. Motion is mostly spin about the z-axis.
can still be made, and that spacecraft can recover from the initial “lost-in-space” configuration.
ACKNOWLEDGMENT
The authors thank other members of the LASR Lab for insightful discussions and feedback,
especially Daniel Whitten for contributing pictures of the hardware.
REFERENCES
1. NASA JPL, “MarCO Mission.” http://www.jpl.nasa.gov/cubesat/missions/marco.php. Accessed: January 2016.
2. NASA JPL, “IPEX Mission.” http://www.jpl.nasa.gov/cubesat/missions/ipex.php. Accessed: January 2016.
Figure 12. Euler angles for the second test case where simulated sun sensor data and
gyro, accelerometer and magnetometer measurements are made available to the filter.
Motion is mostly spin about the z-axis.
Figure 13. Quaternion error (black) and 3-sigma error bounds (red) for the second
test case, where simulated sun sensor data and gyro, accelerometer and magnetometer
measurements are made available to the filter. Motion is mostly spin about the z-axis.
3. NASA JPL, “ASTERIA Mission.” http://www.jpl.nasa.gov/cubesat/missions/asteria.php. Accessed: January 2016.
4. K. Woellert, P. Ehrenfreund, A. J. Ricco, and H. Hertzfeld, “Cubesats: Cost-effective science and technology platforms for emerging and developing nations,” Advances in Space Research, 2010.
5. A. Marinan and K. Cahoy, “From cubesats to constellations: Systems design and performance analysis,”
Figure 14. Gyro bias for the second test case where simulated sun sensor data and
gyro, accelerometer and magnetometer measurements are made available to the filter.
Motion is mostly spin about the z-axis.
tech. rep., Massachusetts Institute of Technology, September 2013. SSL #17-13.
6. C. Peterson, “How It Works: The Charged-Coupled Device, or CCD,” Journal of Young Investigators, vol. 3, 2001.
7. M. A. Post, J. Li, and R. Lee, “A Low-Cost Photodiode Sun Sensor for CubeSat and Planetary Microrover,” International Journal of Aerospace Engineering, vol. 2013, 2013.
8. D. Mortari, J. L. Junkins, and M. A. Samaan, “Lost-in-Space Pyramid Algorithm for Robust Star Pattern Recognition (AAS 01-004),” Advances in the Astronautical Sciences, vol. 107, pp. 49-68, 2001.
9. J. Crassidis and J. Junkins, Optimal Estimation of Dynamical Systems. CRC Press, 2012.
10. M. A. Samaan, Toward Faster and More Accurate Star Sensors Using Recursive Centroiding and Star Identification. PhD thesis, Texas A&M University, 2003.