A Markov Decision Process Framework for Early Maneuver Decisions in Satellite Collision Avoidance
By: Francesca Ferrara, Lander W. Schillinger Arana, Florian Dörfler, et al.
Potential Business Impact:
Saves satellite fuel during collision avoidance maneuvers.
This work presents a Markov decision process (MDP) framework to model decision-making for collision avoidance maneuvers (CAMs) and a reinforcement learning policy gradient (RL-PG) algorithm that trains an autonomous guidance policy on historical CAM data. In addition to maintaining acceptable collision risk, the approach seeks to minimize the average propellant consumption of CAMs by making early maneuver decisions. We model a CAM as a continuous-state, discrete-action, finite-horizon MDP in which the critical decision is when to initiate the maneuver. The MDP model also incorporates analytical models for conjunction risk, propellant consumption, and transit orbit geometry. The Markov policy effectively trades off maneuver delay (which improves the reliability of conjunction risk indicators) against propellant consumption (which increases as the maneuver time approaches the time of closest approach, TCA). Using historical data from tracked conjunction events, we verify this framework and conduct an extensive ablation study on the hyperparameters used within the MDP. On synthetic conjunction events, the trained policy significantly reduces both the overall propellant consumption and the average consumption per CAM compared with a conventional cut-off policy that initiates maneuvers 24 hours before TCA. On historical conjunction events, the trained policy consumes more propellant overall but reduces the average propellant consumption per CAM. For both historical and synthetic conjunction events, the trained policy achieves equal or better overall collision risk guarantees.
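The core trade-off described in the abstract (waiting improves the risk estimate while raising the propellant cost of a late maneuver) can be sketched as a tiny REINFORCE-style policy gradient. This is a minimal illustration, not the authors' method: the risk distribution, noise model, fuel-cost function, thresholds, and the logistic policy parameterization are all toy assumptions chosen for clarity.

```python
import math
import random

random.seed(0)

HORIZON = 7            # decision epochs before TCA (toy: days)
FUEL_BASE = 1.0        # toy delta-v cost scale (assumption)
RISK_THRESHOLD = 1e-4  # toy collision-probability threshold (assumption)

def fuel_cost(t_remaining):
    # Toy model: maneuvering closer to TCA (small t_remaining) costs more.
    return FUEL_BASE / max(t_remaining, 1)

def simulate_episode(theta):
    """Roll out one synthetic conjunction event under a logistic policy.

    Action 1 = maneuver now, action 0 = wait. The risk estimate becomes
    less noisy as TCA approaches, mirroring the delay/reliability trade-off.
    Returns (score-function gradient of the log-policy, episode reward).
    """
    true_log_risk = random.uniform(-7, -3)       # log10 of true collision prob
    grads = [0.0, 0.0, 0.0]
    maneuvered_at = None
    for t in range(HORIZON, 0, -1):
        noise = random.gauss(0.0, 0.5 * t / HORIZON)  # noise shrinks near TCA
        risk_est = true_log_risk + noise
        feats = [1.0, risk_est, float(t)]
        z = sum(w * f for w, f in zip(theta, feats))
        z = max(min(z, 30.0), -30.0)             # clip for numerical safety
        p = 1.0 / (1.0 + math.exp(-z))           # P(maneuver now)
        a = 1 if random.random() < p else 0
        # Gradient of log pi(a|s) for a Bernoulli-logistic policy.
        for i in range(3):
            grads[i] += (a - p) * feats[i]
        if a == 1:
            maneuvered_at = t
            break
    # Reward: pay fuel if we maneuvered; pay a large penalty if we skipped
    # a genuinely risky conjunction (toy stand-in for a risk constraint).
    reward = 0.0
    if maneuvered_at is not None:
        reward -= fuel_cost(maneuvered_at)
    elif 10.0 ** true_log_risk > RISK_THRESHOLD:
        reward -= 100.0
    return grads, reward

def train(episodes=3000, lr=0.001):
    """REINFORCE with a running-average baseline over synthetic events."""
    theta = [0.0, 0.0, 0.0]
    baseline = 0.0
    for _ in range(episodes):
        grads, reward = simulate_episode(theta)
        baseline += 0.01 * (reward - baseline)
        for i in range(3):
            theta[i] += lr * (reward - baseline) * grads[i]
    return theta
```

A cut-off baseline like the one in the paper would instead maneuver unconditionally at a fixed epoch (e.g. 24 hours before TCA); the learned policy can wait longer when the risk estimate looks benign, which is where the average-propellant savings come from.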
Similar Papers
Markov Decision Processes for Satellite Maneuver Planning and Collision Avoidance
Robotics
Saves satellite fuel by planning smarter moves.
Convex Maneuver Planning for Spacecraft Collision Avoidance
Robotics
Helps satellites dodge space junk automatically.
Bayesian Ambiguity Contraction-based Adaptive Robust Markov Decision Processes for Adversarial Surveillance Missions
Optimization and Control
Drones learn to outsmart enemies in dangerous missions.