
AI Law and Policy Navigator

When Satellites Think for Themselves: The Governance Vacuum in Low-Earth Orbit

By Jason M. Loring
March 6, 2026

Author's Note: The author participated as a panelist at the University of Alabama in Huntsville's 2026 Business of Space Conference, speaking on "Governing AI and Edge Computing in Low-Earth Orbit" alongside Charlotte Houser (Corporate Counsel, Voyager Technologies Inc.), Dan Wald (Director, Artificial Intelligence, Booz Allen Hamilton), and Meg Vernal (Chief Legal Officer & General Counsel, Voyager Technologies Inc.), and would like to thank them for their insights.


Key Takeaways

  • Orbital AI systems are already making consequential autonomous decisions (e.g., collision avoidance, data processing, communications routing) without human review and without a clear legal framework governing any of it.
  • The foundational assumptions of AI governance, including identifiable jurisdiction, traceable accountability and meaningful auditability, largely break down when the system is moving at roughly 17,000 miles per hour across every jurisdiction on Earth every 90-120 minutes.
  • The three core governance gaps are (1) Assignment (who authorized the AI to decide, and what are its limits?), (2) Attribution (who is liable when something goes wrong across a complex operational chain?), and (3) Auditability (how do you verify compliance with a data center you cannot physically or practically access?).
  • The orbital context doesn't just complicate AI governance; it exposes the degree to which our existing frameworks were always dependent on physical location as a governance proxy. Space removes that proxy entirely.
  • The window for proactive frameworks is roughly 18-24 months. After that, market realities and geopolitical competition make coordination significantly harder.

At roughly 17,000 miles per hour, a satellite in low-Earth orbit ("LEO") passes over every jurisdiction on Earth every 90-120 minutes. Increasingly, the AI systems governing these satellites are making consequential decisions without human review: deciding whether to maneuver to avoid collisions, when to process personal data captured in orbit and how to route communications through globally distributed networks.
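The speed and orbital-period figures above follow directly from orbital mechanics. As a minimal sketch (assuming a circular orbit at a representative 550 km LEO altitude; the altitude is illustrative, not drawn from any specific constellation), the numbers can be checked with Kepler's relations:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def circular_orbit(altitude_m: float) -> tuple[float, float]:
    """Return (orbital speed in m/s, orbital period in s) for a circular orbit."""
    r = R_EARTH + altitude_m
    v = math.sqrt(MU_EARTH / r)   # vis-viva relation for a circular orbit
    T = 2 * math.pi * r / v       # period = circumference / speed
    return v, T

v, T = circular_orbit(550_000)  # an illustrative LEO altitude of 550 km
print(f"{v * 2.23694:,.0f} mph, {T / 60:.1f} min per orbit")
```

The result lands near 17,000 mph with a period in the 90-120 minute band, consistent with the figures quoted throughout this piece.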

Our legal frameworks assume decisions happen in identifiable jurisdictions, under clear regulatory authority, and with accountability mechanisms tied to physical location. Orbital AI undermines all of those assumptions at once.

This is a problem facing our aerospace clients right now as they evaluate orbital computing contracts, negotiate SLAs for space-based AI infrastructure, and try to answer basic due diligence questions that existing frameworks were not built to answer.

Welcome to the Orbital Data Economy

The governance gaps are already embedded in active commercial deployments. Real-time Earth observation AI now processes satellite imagery on-orbit before downlink. Edge computing networks in LEO promise ultra-low latency for services ranging from financial trading to autonomous vehicle coordination. AI model training in space forces a direct confrontation with data sovereignty doctrine when the training infrastructure is literally orbiting the planet.

Market projections for orbital computing services are substantial and growing. But those projections assume regulatory clarity that does not yet exist. Capital markets are already pricing that uncertainty, with insurers carving out or limiting LEO coverage in light of increased collision risk (which AI autonomy amplifies rather than resolves).

The Legal Foundations

The 1967 Outer Space Treaty's framework of state responsibility, authorization, and continuing supervision, and the 1972 Liability Convention's rules on damage by space objects, were built for physical harms and state actors, not for commercial constellations with autonomous decision-making distributed across private entities. That mismatch is now commercial, not merely academic.

Where Traditional Frameworks Break Down

Every privacy and data governance framework we have uses physical location as its primary organizing principle. GDPR asks where the data subject is located, where the controller is established and where processing occurs. US sectoral rules focus on where the regulated entity operates. State laws assert jurisdiction based on where harm is felt. All of these anchors become unstable when “where processing occurs” is a moving target crossing every jurisdiction multiple times daily.

Consider the scenario where a US company operates an orbital data center processing EU citizens' personal data. The satellite transits US and EU territory approximately every 90 minutes. China and Russia assert their laws apply during transit. The launch state claims jurisdiction under the Outer Space Treaty. Which privacy law governs?

Even before you reach cross-border transfer rules, territorial scope and applicable law are contested (an operator in one country, a processor in another, processing in orbit, data subjects and ground stations in multiple regions). No existing doctrine was designed for processing that is simultaneously mobile, continuous, and non-resident. "Where is processing?" is not only unclear; the answer may be multiple places at once and none of them continuously. The compliance questions follow immediately: Does orbital processing constitute a "data transfer" under GDPR? Do Standard Contractual Clauses apply when data moves to orbit? What are the breach notification obligations when the satellite may not have ground station contact within GDPR's 72-hour window? These are legitimate questions in a compliance assessment of orbital data services.

The Three Governance Gaps

Assignment: Who Decides When Satellites Decide?

Autonomous orbital AI systems are being given decision-making authority without clear legal determination of who authorized that delegation or under what constraints it operates. Unlike terrestrial AI, orbital systems can't easily be updated, patched, or overridden once deployed — a satellite operates for years with limited capacity for human intervention. Governance frameworks must establish clear boundaries on algorithmic authority before launch, because there is no meaningful opportunity for course correction afterward.

Collision avoidance makes this acute. Mega-constellations (Starlink alone plans as many as 42,000 satellites) rely on AI for split-second maneuver decisions no human can review in real time. When two autonomous systems miscalculate and collide, creating a debris field that threatens other operators' assets and potentially triggers cascading failures, the absence of clear pre-authorization frameworks is not just a governance failure; it is a potential liability catastrophe with no obvious defendant.

Attribution: When Things Go Wrong, Who's Accountable?

A representative modern satellite deployment might involve a satellite manufactured by Company A, an AI system developed by Company B, launched by Company C from Country D, registered in Country E, operated by Company F headquartered in Country G, providing services to customers in hundreds of countries. When that satellite's AI makes a decision that harms someone (unlawfully processing personal data, interfering with another operator's system, or contributing to a collision), attribution becomes a multi-jurisdictional puzzle that existing frameworks were not designed to solve. Space law's "launching state" concept provides a starting point, but it doesn't resolve the "many hands" problem when autonomous decisions pass through many organizational layers before causing harm.

Auditability: Can You Verify Compliance With a System You Can't Access?

Every major AI governance framework assumes auditability. Orbit breaks that assumption.

Terrestrial AI governance is built on audit rights (e.g., the ability to inspect systems, review training data, test for bias and verify security measures). How do you audit a data center in LEO? You can't physically or practically access the hardware. You can't observe operations in real time. You can't verify data deletion when required. Security assessments rely entirely on remote analysis and operator attestations. This isn't a problem that better contractual language solves; rather, it's a structural gap requiring purpose-built verification mechanisms like remote attestation standards, pre-deployment certification, and incident reporting requirements calibrated to orbital mechanics rather than terrestrial communication assumptions.
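To make the remote attestation idea concrete, here is a minimal sketch of the verification pattern (challenge, measurement, keyed response). Every name in it (`SHARED_KEY`, the firmware baseline, the message format) is a hypothetical illustration of the structure, not a real attestation protocol or any operator's implementation:

```python
import hashlib
import hmac

# Hypothetical values established at a pre-deployment certification step.
SHARED_KEY = b"provisioned-at-certification"
EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-firmware-image-v1").hexdigest()

def verify_attestation(measurement: str, nonce: bytes, mac: bytes) -> bool:
    """Check that the reported software measurement matches the certified
    baseline, and that the report is fresh (bound to our nonce) and authentic
    (keyed MAC over nonce + measurement)."""
    expected_mac = hmac.new(SHARED_KEY, nonce + measurement.encode(),
                            hashlib.sha256).digest()
    return (measurement == EXPECTED_MEASUREMENT
            and hmac.compare_digest(mac, expected_mac))

# Satellite side (simulated): answer the verifier's fresh challenge.
nonce = b"fresh-challenge-123"
report_mac = hmac.new(SHARED_KEY, nonce + EXPECTED_MEASUREMENT.encode(),
                      hashlib.sha256).digest()
print(verify_attestation(EXPECTED_MEASUREMENT, nonce, report_mac))  # True
```

The point of the sketch is the governance shape, not the cryptography: the auditor never touches the hardware, so the audit right has to be drafted around evidence like this rather than around physical inspection.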

The Compliance Theater Risk

As orbital computing services become commercially available, organizations will face pressure to demonstrate governance compliance before frameworks exist to define what that means. The natural response (that is, to adopt the nearest available terrestrial framework and apply it with appropriate caveats) produces the same governance and compliance theater problems we have documented across AI governance generally: impressive-looking policies that satisfy procurement checklists while failing to address the actual governance questions the orbital environment creates.

A GDPR record of processing that lists "orbital data center" as a processing location without addressing which jurisdiction's law applies, how data subjects could exercise their rights, or how a breach would be notified is documentation that creates the appearance of compliance while leaving the underlying risks entirely unaddressed. If you can't show how a regulator or court would verify your controls remotely, you don't have a control.

What Governance Could Look Like

Several approaches could provide near-term practical clarity without waiting for the international treaty process.

Operator-centric jurisdiction would anchor regulatory authority to the operator's jurisdiction of incorporation and primary operations, mirroring how maritime and aviation law handle similar problems. That would provide baseline clarity on which regulatory regime applies, with secondary rules governing cross-border data sharing.

Mandatory pre-deployment transparency would require operators to register AI system capabilities, training data sources, collision avoidance protocols, and data processing purposes with an international body before launch, similar to aircraft certification. This creates accountability without micromanaging operations and establishes a baseline record against which post-incident attribution can proceed.

Orbital incident reporting calibrated to the operational reality of space would extend timeframes to reflect communication constraints, create international coordination mechanisms for cross-border incidents, and focus on shared learning over enforcement. This is the aviation safety reporting model, which produced dramatic safety improvements precisely because it prioritized learning over blame.

Safe harbor frameworks for operators meeting defined governance standards could provide the regulatory certainty that investors and insurers need to price risk, with a clear determination of which privacy laws apply, liability protections for good-faith compliance efforts, and recognition across participating jurisdictions.

None of these require waiting for perfect international consensus. They require industry groups, national regulators, and practitioners to start working through the details now, while the market is still early enough that frameworks can shape deployment rather than chase it.

What This Means for Practitioners

For legal and compliance professionals, the timeline is shorter than it looks. Clients in financial services, healthcare, and technology are evaluating orbital computing services today, and contract drafting, not policy writing, is where orbital AI governance begins.

Understanding the full operational chain before you draft (who manufactured the satellite, who developed the AI, who launched it, and where it's registered) is table stakes for any meaningful liability allocation. Build flexibility into compliance frameworks because the governing law question will not be resolved before your client needs to sign. And scrutinize audit rights provisions carefully; a standard contractual audit right assuming physical access to hardware is effectively unenforceable in this context and should be renegotiated for remote attestation alternatives.

The Urgency Question

Technology moving faster than governance is a familiar observation. What's different in the orbital context is the asymmetry of consequences. Terrestrial AI governance failures are generally recoverable. The Kessler Syndrome (a cascade of collisions creating debris that makes LEO unusable for decades) is not recoverable on any near-term human timeline. It is the permanent destruction of a global commons that every communication, navigation, and weather system on Earth depends on.

The window is roughly 18-24 months. After that, market realities and geopolitical competition will likely make coordination significantly harder. The decisions made in that window will be made by practitioners, policymakers, operators, and technologists willing to work through genuinely novel governance problems without the comfort of established precedent. Because at 17,000 miles per hour, satellites wait for no one (including the law).

For questions about AI governance frameworks, space-based data services, or cross-jurisdictional compliance strategy, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.

Related Professionals

  • Jason M. Loring, Partner
    D: 404.870.7531
    jloring@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence
  • Aerospace & Aviation
© 2026 Jones Walker LLP. All Rights Reserved.