The gaming industry has always understood risk, including how to price, distribute, and profit from it. The first comprehensive baseline of AI adoption across the global gaming industry is now in and worth reading closely. According to the State of AI in Gaming 2026, AI adoption is widespread, but governance has not kept pace. The industry is embracing these tools faster than it is building the structures to manage them.
The report documents meaningful AI deployments across the industry, from spotting at-risk behaviors for intervention, to monitoring transactions for indications of fraud or money laundering, to forecasting demand and allocating resources accordingly. Absent proper governance, of course, the same technology is equally ripe for harms, including over-optimization for engagement or revenue generation, and biased or disproportionate outcomes.
Although these harms aren’t exclusive to gaming, their convergence with gaming’s well-established regulatory architecture is unique. In gaming, licensing is grounded in transparency, auditability, and clear accountability. AI systems, which operate differently than traditional rule‑based systems, don’t automatically align with those expectations and complicate regulators’ ability to understand, review, and oversee how decisions are made.
In fact, the State of AI report reflects that regulators, across enforcement, regulatory compliance, licensing, and administrative functions, lack confidence in their ability to regulate AI in gaming. Regulators emphasized that insufficient knowledge, training, and technical competence limit their agencies’ abilities to provide deliberate, effective oversight.
That uncertainty among regulators reflects a parallel gap on the industry side. The same research shows that companies are moving quickly to adopt AI but are often doing so without mature governance frameworks or clearly defined oversight roles. Although 80% of respondent companies report using generative AI, only 20% had established dedicated governance roles or mature oversight structures. This imbalance makes it harder to scale AI responsibly, and to stand behind AI‑driven decisions when they are questioned. The report's AI Maturity Index puts a number on it: governance scores 30 out of 100. This is the lowest dimension by a substantial margin, and one that few companies appear positioned to improve, with only 8.4% of respondents planning to hire AI governance or ethics specialists.
That said, this cross-cutting governance gap creates a window for action before the tools become more entrenched and harder to recalibrate, and while there is still space for industry‑led approaches to shape regulatory expectations.
Where Governance Adds Real Value
From a regulatory perspective, consistent outcomes, thresholds, and interventions are fundamental to market integrity, consumer protection, and fair competition. Globally recognized AI governance frameworks, like the NIST AI Risk Management Framework (“AI RMF”), provide a consistent, baseline approach to traits like “trustworthiness” by translating them into specific, evaluable qualities such as validity, reliability, safety, and fairness. In practice, specificity gives regulators and organizations a shared vocabulary for assessing AI systems.
Similarly, governance frameworks like the AI RMF help companies establish clear, documented approval and risk-acceptance processes that embed accountability from development or procurement through use and disposition. Documentation of an organization’s risk parameters and decision making provides regulators, auditors, and courts insight into governance at speed. Understanding these organizational realities helps oversight entities design effective, rather than simply punitive, remedies, and anticipate the practical impacts of regulatory decisions.
Given the speed of AI development and implementation, a common fear is that designing and implementing thorough AI governance necessarily hampers development. On the contrary, effective governance programs provide durable internal guardrails against which organizations can quickly and reliably evaluate new tools. They also account for operational realities of AI, such as model drift, that don’t fit frameworks designed for earlier technologies. The goal is accountability, remediation, and trust, rather than perfection.
Closing the Gap
Like trust, good governance hinges on clarity and accountability. Whether or not a dedicated AI governance role is feasible, organizations should clearly define ownership — and therefore accountability — for AI systems, use cases, benefit and risk assessments, and approval processes across functions. The State of AI report found that governance maturity is lowest where responsibilities are diffuse or informal, which is not surprising. To govern effectively, organizations should be able to comprehensively answer foreseeable questions like, “Who approved this system?”
Consistent with established frameworks across governance sectors, the report also indicates that governance works best when it is proactive rather than reactive, integrated into development and established compliance workflows, and designed to scale across platforms and jurisdictions. Rather than conducting exhaustive inventories and reactively designating governance controls, effective governance programs set risk thresholds, tiered processes, and monitoring targets that focus effort on higher-risk tools and prevent governance burnout.
Over time, these programs will adapt more easily to changes in technology and risk. Perhaps as critically, practical, adaptable programs motivate the accountability that underlies trust.
Studies like the State of AI confirm that the industry is still developing its capacity for governance. At the same time, effective AI governance is practical, rather than prescriptive. It requires translating changing regulatory expectations into decisions that fit real development, compliance, and operational environments.
As AI continues to shape the next generation of gaming products and services, competitiveness will depend not only on what systems can do, but on how confidently organizations can deploy, explain, and stand behind them. The industry’s challenge, and opportunity, is closing the gap between how quickly AI is being adopted and how maturely it is being governed. That window is open now, but not indefinitely.
For more information about AI governance, compliance, and legal considerations in the gaming industry, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team.
