Artificial intelligence is steadily transforming the maritime industry’s architecture — from predictive maintenance, safety, and ship security to navigation and port logistics, and even the manufacturing of autonomous ships. Marine insurers have already begun to recognize the industry's readiness for innovation and to collaborate toward efficiencies in pricing, risk assessment, and claims handling. And much like a ship, AI requires a level of “bridge discipline” to ensure it operates safely, remains accountable, and stays on course toward reliable and responsible outcomes in complex, real-world environments.
The National Association of Manufacturers reports that AI already provides a critical check on shipping agents’ misdeclaration of hazardous cargo, which has been linked to a troubling rise in deadly fires: the 2019 fire aboard the Yantian Express, in which “coconut charcoal” mislabeled as “coconut pellets” caught fire, and the 2025 fire aboard the Wan Hai 503, which killed four crew members off the coast of India. Of course, machines are not necessarily more reliable or straightforward than human operators.
Joanne Waters of the UK’s DAC Beachcroft explains that for her clients, accountability in a partially, and increasingly, autonomous environment is harder to assign, particularly under pressure. It is difficult to determine when, and whether, a human should course-correct machines that were designed to avoid human error, and that lack the instinct developed from lived experience. Overlapping jurisdictions can also create regulatory conflict and confusion absent contracts that properly allocate liability. AI-designed maritime routes can likewise give rise to intellectual property disputes affecting human operators and shipowners.
These trends aren’t new to technology attorneys. Innovation often outpaces oversight in a predictable cycle: exciting technological developments automate and accelerate otherwise laborious processes, expectations rise, and any call to slow down for governance sparks fears about lost market share and profitability. But AI governance isn’t just a “safety check.” It’s bridge discipline. It’s the professional rigor required to make good on the promise of visionary technology investment.
The issues of AI regulation and overlapping jurisdictions are layered. As shared in this blog’s April 8 post, “The Regulatory Tide Goes Out: What Global AI Governance Retrenchment Means for Organizations,” first-generation AI regulation is undergoing a reckoning against implementation realities, geopolitical competition, and industry resistance. The maritime industry, like tech, has seen its share of competing global economies and governance frameworks. However, the trend of regulatory retrenchment doesn’t leave industries completely adrift on AI governance. Durable global standards and governance frameworks have survived the rollback of regulation and offer a reliable baseline for organizations leveraging AI.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) is perhaps the most widely adopted and well-resourced baseline for maritime AI. At its simplest, the NIST AI RMF filters governance into four interrelated functions: Govern, Map, Measure, and Manage.
“Govern” requires organizations to come clean about their intent and boundaries. Questions to ask at the outset include: How can AI help your organization — or more conscientiously, what are “permissible uses” of AI? Monitoring for maintenance needs? Accounting for hazardous cargo? Predicting security threats? Planning more fuel- or cost-effective shipping routes? Automating container handling and scheduling? Or are you invested in autonomous shipping? Perhaps more importantly, what uses fall outside your organization’s comfort zone? Ethical frameworks and a clear understanding of your organization’s risk tolerance are critical to categorizing and triaging use cases, and to innovating responsibly.
Next, “Map” requires clarity on accountability structures. Shipping relies on a network of complex, interrelating contracts and requirements. Organizations seeking to incorporate AI should ensure that contracts — both with third-party vendors of AI tools and with beneficiaries of the organization’s own AI implementations — account for the risks of bias, data leaks, and errors associated with the underlying models, and allocate liability appropriately. Further, organizations would be wise to develop internal governance structures to approve AI uses and mitigate their associated risks. Importantly, organizations should identify who has the authority to disable or decommission misbehaving AI.
“Measure” requires an organization to ensure it can answer for AI it has developed or implemented in its operations. Regardless of the jurisdiction, insurers, regulators, and courts alike will expect organizations to understand why an AI made the decisions and took the actions that it did. Organizations should exercise due diligence by conducting continuous monitoring — which may even be automated in and of itself — to scan for things like model “drift” (shifts in model behavior over time) and data degradation.
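To make the “Measure” function concrete, the sketch below computes a Population Stability Index (PSI), a common statistic for flagging model drift by comparing the distribution of recent model outputs against a baseline captured at deployment. The data, thresholds, and function name here are illustrative assumptions for this post, not part of the NIST framework or any regulatory standard.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Population Stability Index (PSI): a simple drift statistic comparing
    the distribution of recent model outputs against a baseline.
    Higher values indicate a larger shift in model behavior."""
    # Bin edges drawn from the baseline distribution's quantiles
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip so out-of-range current values still land in the outer bins
    current = np.clip(current, edges[0], edges[-1])
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)
    # Floor the proportions to avoid log(0) on empty bins
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative escalation thresholds (an assumption, not a standard):
# < 0.1 stable, 0.1-0.25 watch, > 0.25 escalate for human review
rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, 5000)  # model scores at deployment
stable = rng.normal(0.0, 1.0, 5000)    # recent scores, no drift
drifted = rng.normal(0.8, 1.2, 5000)   # recent scores after a shift

print(f"stable PSI:  {population_stability_index(baseline, stable):.3f}")
print(f"drifted PSI: {population_stability_index(baseline, drifted):.3f}")
```

In practice, a governance program would run a check like this on a schedule, with the thresholds and escalation paths set and documented by the internal structures the “Map” function calls for.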
Finally, “Manage” is how an organization will react when — and it’s always “when,” with technology — something does go wrong. In support of this claim, UK-based maritime cybersecurity specialist Cydrome asserts that “up to 60% of all newly disclosed software vulnerabilities on ship, onshore and offshore are being weaponized within 48 hours as hackers also begin to use AI to accelerate attacks.” Outside of direct attacks, organizations should decide how anomalies caused by model drift or data degradation can be escalated internally and must be reported externally pursuant to regulation or contract commitments. As alluded to earlier, organizations should train users or monitoring personnel on appropriate interventions for misbehaving AI. Critically, organizations must develop procedures for relevant documentation to protect themselves from undue liability or reputational harm.
The International Maritime Organization’s Maritime Safety Committee (MSC) has also adopted two instruments that support autonomous vessel operations within the existing regulatory landscape. MSC.1/Circ.1638 documents the outcome of the regulatory scoping exercise for Maritime Autonomous Surface Ships, analyzing existing maritime safety regulations and identifying areas for clarification or revision. Resolution MSC.428(98), on the other hand, addresses cybersecurity by incorporating cyber risk management into the safety management systems of ships and companies.
AI holds a great deal of promise for the maritime industry, but it’s important that responsible parties reclaim the helm. Conducting a “bridge audit” of organizational AI use and governance structures can ensure important baseline standards are in place, prepare organizations for second-generation AI regulation, and protect against reputational harms that undermine the value of innovation.
For questions about AI governance program design, multi-jurisdictional compliance strategy, standards alignment, or navigating the current regulatory landscape, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. Stay tuned for continued insights from the AI Law and Policy Navigator.
