August 4, 2020

Space Traffic Management and Data Fusion

The acceleration of the space population is staggering. Mega-constellations are planned by various companies including SpaceX, Amazon, Planet, OneWeb, Telesat, and more. Over the next few years, an order of magnitude increase in the population of active satellites is not only imaginable but highly likely.

A recent search of space-track.org shows approximately 5,500 active satellites in orbit, about 400 of which were launched in 2019 alone. Furthermore, the vast majority of resident space objects (RSOs) are debris, bringing the total number of tracked objects in orbit to roughly 20,000. Operational acceptance of the Space Fence is projected to grow the catalog to upwards of 100,000 RSOs by significantly lowering the detection threshold. Space Domain Awareness (SDA) and Space Traffic Management (STM) are clearly of paramount importance.

In addition to the rapid RSO population growth, the number of data providers is also growing swiftly. U.S. Space Command's (USSPACECOM) Space Surveillance Network is highly specified, verified, and validated, but such rigorous specification is impractical for the influx of commercial providers. With multiple state solutions available for a single RSO at any given instant, how can a satellite operator, orbit analyst, intelligence analyst, or decision maker determine which one is "correct"? The data must be not only aggregated, but curated and fused before analytics can be meaningfully applied.

During the December 2019 Sprint Advanced Concept Training (SACT), a week-long commercial-government exercise designed in part to showcase advanced commercial SSA capabilities to the Department of Defense and Department of Commerce, Slingshot Aerospace ingested RSO state data from government (18th Space Control Squadron) and commercial (LeoLabs and Numerica) providers to understand what curation each provider's data required. The data were then curated and compared against calibration data from Spire and MITRE. For simplicity, all data were ingested from the Unified Data Library (UDL).

Results show that commercial capabilities rival, if not exceed, the performance of the existing public space catalog. Because each data source has its own strengths and weaknesses across different settings (orbit regime, global coverage, RSO type, etc.), a scheme that coalesces the various satellite state estimates into a single, most accurate solution is paramount.

[Figure: Delta Position 1]

 

[Figure: Delta Position 2]

 

To fuse these data into a best solution, we developed a supervised learning regression model that takes already-processed provider state solutions, propagates them forward in time, and compares them to calibration data. Trained over a large spread of objects in GEO, the model learns to characterize the uncertainty that develops during propagation and applies that knowledge to other GEO objects, improving their solutions relative to the standalone estimates from the various providers. This is especially valuable when accurate state predictions are required after many hours of propagation, such as in conjunction assessment.
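To make the idea concrete, here is a minimal sketch of that kind of supervised error regression. Everything here is hypothetical and synthetic — the real training data would be residuals between propagated provider states and calibration truth, and the real model is richer than a linear fit — but the workflow (fit error vs. propagation time, then predict the error for a new propagated state) is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the training set (all names and values hypothetical):
# each sample pairs a propagation interval with the position error found
# by differencing the propagated provider state against calibration data.
n = 500
prop_hours = rng.uniform(0.0, 24.0, n)
# Fake a systematic error that grows with propagation time, plus noise;
# in a real pipeline this comes from provider-vs-calibration residuals.
error_km = 0.5 * prop_hours + 2.0 + rng.normal(0.0, 0.3, n)

# Supervised regression: fit error as a function of propagation time.
A = np.column_stack([prop_hours, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, error_km, rcond=None)

# A new propagated state can then be corrected by subtracting the
# predicted error for its propagation interval (here, 12 hours).
predicted_error = coef @ [12.0, 1.0]
print(f"learned error model: {coef[0]:.2f} km/h, bias {coef[1]:.2f} km")
print(f"predicted 12 h propagation error: {predicted_error:.2f} km")
```

The same template extends to more predictors (provider, orbit regime, RSO type) and more capable regressors; the linear fit is just the smallest example of the pattern.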

Errors in the estimation of an object's position in space can be introduced by both propagation and measurements (and their associated modeling), and can be random (aleatory), systematic (epistemic), or outright blunders. These errors are very difficult to eliminate through improved modeling, especially when the source modeling is unknown. In that case, one can instead seek to learn, approximate, and account for them. The first objective of this study was to curate the data and examine the errors associated with multiple providers. The second was to determine whether an improved solution could be generated at any point in time, including in the future, by fusing state data from multiple providers.
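The random/systematic distinction matters for fusion: averaging many measurements suppresses random error, but a systematic bias survives averaging untouched, which is why it has to be learned and removed rather than averaged away. A toy illustration with hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# One "true" position component and many measurements of it that carry
# both a systematic bias and random noise (all values hypothetical).
truth = 0.0
bias_km = 1.5                              # systematic (epistemic) error
noise = rng.normal(0.0, 0.5, 10_000)       # random (aleatory) error
measurements = truth + bias_km + noise

# Averaging drives the random part toward zero, but the bias remains.
mean_est = measurements.mean()
print(f"mean of 10,000 measurements: {mean_est:.2f} km (bias survives)")
```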

The goal is to be able to use state information from any provider, including two-line element sets (TLEs), which carry no uncertainty information. Similar studies have used TLEs as pseudo-observations, but not with a variety of providers and actual calibration data. We consider a machine learning (ML) approach to determine whether we can better account for the systematic errors in the propagation, then use that knowledge to generate an improved, fused solution.

To find the best solution at any given time, including in the future, we explored a data fusion method. Using a supervised ML model, we properly accounted for the error introduced by propagating a satellite state and produced a fused result from two different providers that reduced the RMS error of the original solutions by nearly 50%. This improvement held on RSOs the model was not trained on, suggesting that these gains may apply to all RSOs in a given regime. Future work will vary the predictors used in the model to better account for propagation time, take advantage of RSO type, and include more sources of data. Additionally, methods will be explored to take provider solution uncertainties into account.
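For intuition on why combining two providers can beat either one alone, consider the simplest fusion rule: inverse-variance weighting, which is optimal for independent Gaussian errors. This is not the ML model described above — just a sketch of the underlying principle, with hypothetical noise levels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D stand-in for one position component (km) over many RSOs, with
# two providers whose estimates carry independent noise of known size.
truth = rng.uniform(-10.0, 10.0, 1000)
sigma_a, sigma_b = 1.0, 2.0                # hypothetical provider sigmas
est_a = truth + rng.normal(0.0, sigma_a, truth.size)
est_b = truth + rng.normal(0.0, sigma_b, truth.size)

# Inverse-variance weighted fusion: trust each provider in proportion
# to 1/sigma^2, so the better provider dominates but both contribute.
w_a, w_b = 1 / sigma_a**2, 1 / sigma_b**2
fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)

rms = lambda e: float(np.sqrt(np.mean((e - truth) ** 2)))
print(f"RMS A: {rms(est_a):.2f}  RMS B: {rms(est_b):.2f}  "
      f"fused: {rms(fused):.2f}")
```

The fused RMS comes out below both inputs. The ML approach generalizes this: instead of assuming known, fixed sigmas, it learns each provider's effective error behavior (including how it grows with propagation) from calibration data.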

 

[Figure: Machine learning fusion technique results]

 

With the exponentially increasing number of objects in space, and an increasing diversity of government and commercial observation providers tracking them, it is of paramount importance not only to curate these data to allow comparison and uncover insights, but to fuse them into a single, more confident picture of where RSOs are and will be. Together, the sources yield more confidence than any one alone.

If you’re interested in learning more and reading the full white paper, click the link below to download it. You can also learn more about our products, such as Slingshot Orbital, and get in touch with us by clicking here.

Thank you to our Slingshot team: Chase Brown for developing the machine learning models, providing analysis, and compiling results; and Brian Williams who provided valuable insight into the machine learning techniques. Thank you also to Moriba Jah, (newly tenured!!) Associate Professor at the University of Texas, for his advice and review of the paper. Finally, and most of all, we would like to thank Dallas Masters and Vu Nguyen at Spire Global; Vic Gardner and Nathan Griffith at LeoLabs; Todd “Q” Brost, Corrina Briggs, and Holly Borowski at Numerica Corporation; and Rob Harder and Tim McLaughlin at MITRE for providing access and insights into the data that they produce. The solution to this problem will only be achieved if we come together, and it has been humbling to collaborate with people who are bringing world-class capability to bear.

{{cta('1161e25c-3f4c-4cfd-a422-adef43a2b998','justifycenter')}}