This is Part A of the fourth and final installment in our AI Governance Series.
Mere weeks after Grok referred to itself as "MechaHitler," the Pentagon licensed it. Presumably, DOD's confidence in xAI's system didn't stem from inherent AI safety; it came from a deployment strategy and robust governance frameworks capable of handling systems that are necessarily imperfect while preserving institutional decision-making authority.
Their decision illustrates the reality facing every organization deploying AI: perfect AI systems do not exist, but effective governance frameworks can make imperfect AI systems workable while maintaining organizational autonomy. The question is not whether to deploy AI, but how to deploy it effectively and responsibly while maintaining competitive agility and organizational independence.
Over the past three weeks, we have examined AI governance failures, mapped risk landscapes, and built practical governance frameworks. This week, we address implementation challenges in a two-part finale to our AI Governance Series: deploying governance in environments where business pressure, technical complexity, and regulatory uncertainty can undermine even well-designed frameworks.
Most AI governance discussions focus on technical controls while ignoring the fundamental deployment strategy question and its implications for organizational autonomy. The choice between building, buying, or partnering for AI capabilities has profound governance implications that are often invisible until problems emerge, particularly regarding how much decision-making authority organizations retain versus transfer to external systems.
In-house development offers the greatest governance control: you design the safeguards, implement the monitoring, and own the entire decision-making chain. When systems behave unexpectedly, you have full access to the training data, model architecture, and deployment parameters needed for an effective response. Perhaps most importantly, you maintain direct control over how AI influences your organization's decision-making processes.
Control, however, means responsibility. When you build AI systems, you own every failure mode, bias, and unexpected behavior. The deeper risk is what might be called "builder's bias" — the tendency to trust systems you create more than you should. For instance, a bank that developed its own loan approval AI might overlook discriminatory patterns because "we built it right."
Commercial AI solutions offer proven technology backed by vendors with deep expertise. Major platforms typically invest more in safety research than any individual organization could afford on its own, spreading those development costs across many customers.
The primary governance challenge is dependency on vendor decisions. When vendors update models or change policies, your governance framework must adapt, but more importantly, your organization's decision-making capacity becomes dependent on external systems you do not control. The Grok incident illustrates this perfectly: organizations using xAI technology had no advance warning of the prompt changes that caused the problematic behavior.
This creates a subtle but serious risk: organizations may gradually lose the institutional knowledge and human expertise necessary to operate independently of vendor AI systems, making them increasingly vulnerable to vendor decisions or system failures.
Strategic partnerships combine vendor expertise with greater collaboration than standard commercial relationships. Partners often provide access to advanced capabilities while allowing input into governance approaches and maintaining some organizational decision-making autonomy.
Partnership governance requires alignment between organizations with different risk tolerances and ethical frameworks. When incidents occur, determining responsibility and coordinating response becomes complex — particularly when it's unclear which organization's judgment should prevail in ambiguous situations.
Most organizations use all three approaches simultaneously: building for competitive advantages, buying for common functions, and partnering for specialized capabilities. This creates governance complexity requiring different oversight mechanisms for each deployment model while maintaining consistent organizational standards.
Given this reality, the most resilient governance frameworks are modular and flexible: they apply different controls to different deployment models while maintaining consistent oversight and preserving institutional capacity for independent judgment across all AI implementations.
The key insight is that regardless of deployment strategy, organizations must consciously preserve their capacity to think, analyze, and decide without AI mediation.
Take the next step. The Jones Walker Privacy, Data Strategy and Artificial Intelligence team of attorneys is available to discuss your AI governance and other AI needs. And stay tuned for the conclusion of our AI Governance Series and other continued insights from the AI Law and Policy Navigator.
