Ishita Das & Kanishka Bhukya
Abstract
Cyber-attacks targeting space assets are on the rise owing to technological developments, and amid ongoing geopolitical tensions such threats may have serious national security ramifications for the affected countries. Further, the increasing autonomy of space objects raises critically important considerations regarding cyber-safety and risk. This piece provides an overview of the relationship between AI and cybersecurity in the context of the outer space sector, while also highlighting the legal challenges in this regard.
I. Introduction
A large share of the world's critical infrastructure, including maritime trade, air transport, communications, Earth observation, and defence, relies significantly on space, particularly space-based assets, to operate on a daily basis. This dependency presents a severe, yet often overlooked, security concern, particularly with regard to cyber-attacks. Sophisticated technological breakthroughs such as Artificial Intelligence [“AI”] and Machine Learning [“ML”] are already reshaping the space industry. AI, in particular, has presented unparalleled opportunities for space-based operations by enabling space assets to operate autonomously, without external intervention, in tasks such as relative positioning, autonomous navigation, and end-of-life management. AI could also be a game-changer for cybersecurity, as argued by Jack W. Davidson, a professor at the University of Virginia who teaches a course titled ‘Defence against the Dark Arts’.
According to Davidson, to tackle the complexities of cyber-attacks on critical infrastructures, including the space sector, it is crucial to build a strong defence system that can patch breaches at cyber speed, that is, in a matter of minutes. Cyber reasoning systems could identify threats and deal with them more effectively than human programmers, who may take much longer to address the same vulnerabilities. On the other hand, AI could also be used by perpetrators to target autonomous vehicles and autonomous space assets, potentially affecting several countries and causing serious economic damage. A report prepared by experts associated with several universities, including the University of Oxford and the University of Cambridge, highlighted the challenges posed by the malicious use of AI. The report emphasises that AI could be used to turn autonomous assets into weapons and cause crashes with other autonomous or non-autonomous assets. In the outer space sector, if perpetrators were to use AI to turn autonomous space assets into weapons and cause collisions with other autonomous or non-autonomous satellites, the impact could be grave for the affected countries. The next section of the piece explores the interface between AI, cybersecurity, and the outer space sector.
II. Outer space, Artificial Intelligence, and Cybersecurity
AI is predominantly employed in the space industry for activities such as remote sensing, data processing, autonomous navigation, and monitoring the health of space assets. Remote sensing, for instance, uses electromagnetic radiation to detect and characterise distant objects, and the large quantity of varied and sophisticated data collected in the process is difficult for a human operator to handle. Here, AI algorithms come into play: deep-learning algorithms are installed on satellites to pre-process sensory input and limit the quantity of data relayed to ground stations. In this process, if an attacker injects false training data with the intent of corrupting the learned model, the AI algorithm begins to produce erroneous output, and human operators at the ground stations may end up making decisions on the basis of that output, threatening the integrity of those decisions.
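To make the data-poisoning scenario concrete, the sketch below is a minimal, purely illustrative Python example; the sensor features, class labels, and the scikit-learn classifier are assumptions chosen for demonstration, not a description of any real on-board system. It trains the same simple classifier once on clean data and once on a training set into which an attacker has injected falsely labelled examples, and compares how reliably each model classifies fresh observations before they are relayed to a ground station.

```python
# Illustrative only: a toy example of training-data poisoning against a
# hypothetical satellite image classifier. All data and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: two classes of simulated feature vectors,
# e.g. "cloud" (label 0) vs. "ship" (label 1) in pre-processed imagery.
X_clean = np.vstack([rng.normal(0.0, 1.0, (200, 5)),
                     rng.normal(3.0, 1.0, (200, 5))])
y_clean = np.array([0] * 200 + [1] * 200)

# Poisoned training data: the attacker injects fabricated "ship-like"
# readings that are falsely labelled "cloud", swamping the genuine ones.
X_fake = rng.normal(3.0, 1.0, (400, 5))
X_poisoned = np.vstack([X_clean, X_fake])
y_poisoned = np.concatenate([y_clean, np.zeros(400, dtype=int)])

model_clean = LogisticRegression().fit(X_clean, y_clean)
model_poisoned = LogisticRegression().fit(X_poisoned, y_poisoned)

# Fresh observations the on-board model must label before downlinking.
X_new = np.vstack([rng.normal(0.0, 1.0, (50, 5)),
                   rng.normal(3.0, 1.0, (50, 5))])
y_true = np.array([0] * 50 + [1] * 50)

print("clean model accuracy:   ", model_clean.score(X_new, y_true))    # ~1.0
print("poisoned model accuracy:", model_poisoned.score(X_new, y_true))  # ~0.5
```

The point of the sketch is not the particular classifier but the failure mode: the poisoned model systematically mislabels one class, and a ground operator relying on its output has no obvious indication that anything is wrong.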
To further illustrate, AI is also employed for spacecraft health monitoring. Anomaly and fault detection are critical to guaranteeing a spacecraft's safety in hostile space conditions. In most circumstances, it is difficult to repair a spacecraft once it has begun its journey, so considerable attention must be paid to fault identification and diagnosis. Traditional approaches rely on pre-programmed checks that are executed to confirm that the system is operational. These approaches, however, cannot identify new and unknown faults that have not been anticipated in advance, and it is in this context that AI may be effective. Here, a cyber-attack on a spacecraft health monitoring system might result in the false identification of system failures and, owing to the lack of transparency and accountability in AI decision-making processes, might allow several errors to go undetected, jeopardising the space mission’s objectives.
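As a rough illustration of how learning-based health monitoring can be undermined, the following sketch uses a generic anomaly detector (an Isolation Forest from scikit-learn) trained on hypothetical telemetry channels; the channel names, values, and thresholds are invented for the example. A genuine fault far outside the nominal envelope is flagged, whereas a spoofed reading crafted by an attacker to sit inside that envelope passes unnoticed.

```python
# Illustrative only: learning-based fault detection on hypothetical
# spacecraft telemetry, and how spoofed telemetry can defeat it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Nominal telemetry: e.g. bus voltage (V), temperature (C), wheel speed (rpm).
nominal = rng.normal(loc=[28.0, 20.0, 3000.0],
                     scale=[0.5, 2.0, 50.0],
                     size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(nominal)

# A genuine fault: overheating and a slowing reaction wheel.
fault = np.array([[27.8, 45.0, 1200.0]])

# A spoofed reading injected by an attacker: crafted to sit inside the
# nominal envelope, masking whatever degradation is actually occurring.
spoofed = np.array([[28.1, 20.5, 2990.0]])

print("fault flagged:  ", detector.predict(fault)[0] == -1)    # expected True
print("spoofed flagged:", detector.predict(spoofed)[0] == -1)  # expected False
```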
To that end, a cursory glance at these cybersecurity threats to AI-based space technologies demonstrates that, at this stage of development, AI is nowhere near genuinely intelligent: it commits mistakes that no human would commit. Such mistakes might be unforeseen and difficult to correct. In certain circumstances, the consequences might seem amusing or illogical. However, when AI-based space technologies are utilised for military purposes, these concerns become far more severe, with a higher likelihood of errors or unanticipated emergent behaviours as the degree of complexity escalates and a situation surpasses the predicted parameters of an algorithm.
Military forces across the globe have been increasingly using AI and satellites to speed up attacks on potential targets. These forces hope to employ AI to detect targets in satellite data and then transmit that targeting data to the battlefield through communication satellites, allowing army personnel to strike military targets. Thanks to sensor suites and powerful machine learning and deep learning algorithms, such weapons can detect a target, turn raw data into meaningful and usable targeting data, generate engagement decisions, and drive a weapon to engage the target without human intervention or command, all in a matter of seconds. It is also anticipated that military applications of AI will be granted a larger degree of autonomy, owing to the sheer speed necessary in some operations such as air and missile defence. The deployment of these AI-enabled lethal autonomous weapon systems may therefore pose a number of operational risks, including simple malfunctions, software errors, unexpected environmental interactions and, most importantly, the threat of adversaries developing countermeasures that intentionally undermine or interfere with these autonomous systems (for example, spoofing or behavioural hacking) in an attempt to distort data or target the algorithm itself.
For instance, assume nation X initiates a malicious cyber-attack to spoof nation Y's AI-enabled automatic target recognition system, causing the weapon system to misinterpret civilian objects as military installations. Based on this incorrect information, and given the inability of human supervisors to discover the spoofed images in time to take remedial action, nation Y might cause fratricide, civilian deaths, or even an inadvertent escalation of a conflict. Such a spoofing attack on the weapon system's algorithm is usually crafted so that the manipulated image appears to the target recognition system to be indistinguishable from a legitimate military target, even though the manipulation would be unlikely to mislead the human eye.
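The following toy sketch illustrates, under heavy simplification, how such a spoofing (evasion) attack works in principle: for a linear scoring model, the gradient of the score with respect to the input is simply the weight vector, so a fast-gradient-sign-style perturbation of very small magnitude can push an input across the decision threshold while barely changing it. The "image" features, weights, and threshold are all hypothetical; real automatic target recognition systems are far more complex, but the underlying vulnerability is analogous.

```python
# Illustrative only: an FGSM-style evasion attack on a toy linear
# classifier, showing how a tiny perturbation can flip a decision.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "image" features and a pre-trained linear decision rule:
# score > 0 -> "military target", score <= 0 -> "civilian object".
w = rng.normal(size=64)
b = -0.1

def score(x):
    return float(w @ x + b)

# A benign input that the model correctly scores as civilian.
x_civilian = -0.01 * np.sign(w)

# Fast-gradient-sign-style perturbation: for a linear score the input
# gradient is w, so step a tiny amount in the direction sign(w).
epsilon = 0.02
x_spoofed = x_civilian + epsilon * np.sign(w)

print("original score:", score(x_civilian))  # below the decision threshold
print("spoofed score: ", score(x_spoofed))   # pushed above the threshold
print("max per-feature change:", np.abs(x_spoofed - x_civilian).max())  # = epsilon
```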
Moreover, the explainability problem that accompanies AI use may worsen these interactions. An inadequate understanding of how an AI algorithm arrives at a specific decision makes it harder to identify whether a failure to accurately categorise a military target was attributable to the mathematical model itself, for example, because of environmental boundary conditions, or to the dataset having been subjected to data poisoning by a malicious actor. Furthermore, unless the AI's machine-learning algorithm is halted, it may learn things that it was not designed to learn or carry out tasks that humans did not anticipate. Since so much is at stake, it is critical to address the emerging cyber threats facing AI-enabled space systems and missions, particularly in a military setting, and to define and oversee an AI system's degree of autonomy in space, as well as its interface with human operators.
Cyber-attacks targeting space assets can have a severe impact on the affected country’s space capabilities. The situation becomes more complex if AI is used to cause collisions between autonomous space assets, which could also damage non-autonomous satellites. While AI can be useful for detecting and preventing collisions, as with the automated collision avoidance system designed by the European Space Agency [“ESA”], if used maliciously it can be extremely detrimental to the physical integrity of space assets in particular and the safety of the outer space environment in general. Although current cyber-attacks on space assets generally do not involve AI, there is a strong possibility of such attacks occurring in the near future, especially with regard to deep space activities that might rely entirely on autonomous systems. The next section of the piece explores the legal challenges concerning the impact of cyber-attacks on autonomous space assets.
III. Legal challenges
The interface between AI and the outer space sector raises some serious legal questions. The most important challenge concerns the determination of liability. While the Outer Space Treaty deals with the concepts of ‘international responsibility’ and ‘international liability’ under Articles VI and VII, respectively, and the Liability Convention expands on the notion of liability through Articles II, III, and IV, it is essential to bear in mind that these instruments were drafted in the 1960s and 1970s, and contemporary technological advancements might not fit their original legal imagination. However, it is possible to reimagine the provisions of the Outer Space Treaty and the Liability Convention in light of these developments. For instance, Article VI of the Outer Space Treaty deals with international responsibility, under which the launching states are responsible for ‘national activities in outer space’, whether carried on by governmental or non-governmental entities.
Article VII deals with international liability and places liability on the launching state for damage caused to another state, including its natural or juridical persons. It is pertinent to note that international responsibility imposes international legal obligations on the launching states as regards the supervision of national space activities, whereas international liability renders the launching states liable for damage caused by their space objects. Therefore, while the Outer Space Treaty maintains a distinction between the two concepts, they are closely related: the launching state is both responsible for its space activities and liable for any damage arising from them. A ‘launching state’ is the state that launches the space object, the state that procures the launch, or the state from whose territory or facility the space object is launched [Article I(c), Liability Convention]. A ‘space object’ includes its component parts as well as its launch vehicle and parts thereof [Article I(d), Liability Convention]. The term, as it appears in the Liability Convention, is therefore wide enough to cover autonomous space assets, since AI capabilities form part of the software component of such assets.