AI Law and Policy Navigator

Data Privacy Day 2026: Privacy as the Foundation of Responsible AI Governance

By Jason M. Loring
January 28, 2026

January 28, 2026, marks “Data Privacy Day,” giving us an opportunity to reflect on how privacy principles intersect with the rapidly evolving landscape of artificial intelligence. The period from 2024 through 2026 has witnessed unprecedented acceleration in AI regulation, with state legislatures enacting comprehensive AI laws, the EU AI Act reaching operational applicability, and federal agencies (particularly the FTC) signaling aggressive enforcement priorities around algorithmic harms. As AI systems become increasingly sophisticated and ubiquitous, privacy considerations are foundational to lawful deployment, regulatory compliance, and organizational risk management.

For legal and compliance professionals navigating AI governance in 2026, privacy challenges manifest across multiple dimensions: the personal information used to train models, the sensitive data processed during inference, the outputs that may inadvertently reveal proprietary information, and the regulatory frameworks that can vary dramatically across jurisdictions.

The Privacy-AI Intersection: More Than Compliance Theater

Not all data used with AI systems constitutes personal information under privacy statutes. But the strategic value of personal data in AI applications creates both opportunity and obligation. When responsibly implemented, AI systems that leverage personal information deliver:

  • Enhanced personalization that improves user experience and engagement;
  • More targeted insights that inform business strategy and operational decisions;
  • Nuanced inferences that enable sophisticated predictive analytics;
  • Highly informed decision-making in contexts from credit underwriting to healthcare delivery; and
  • Advanced data analysis that identifies patterns invisible to traditional statistical methods.

This value proposition creates significant incentives to incorporate personal data into AI systems. It also creates substantial legal exposure when organizations fail to implement adequate privacy controls.

Practical Privacy Risks in AI Deployment

Privacy violations in AI systems arise from multiple technical and operational vectors:

Sensitive Information Disclosure. AI applications can be manipulated through prompt injection attacks to reveal sensitive information embedded in training data or system prompts. Even without malicious actors, AI systems may inadvertently disclose proprietary information through outputs that reflect patterns in confidential training data. Organizations have experienced trade secret exposure when employees input sensitive information into public AI systems that use inputs for model training. Examples include contract clauses appearing in outputs provided to other users, internal code fragments being reproduced when developers seek debugging assistance, and confidential business strategies surfacing in responses to seemingly unrelated queries.

Unintended Training on Proprietary Data. Many commercial AI systems use inputs to continuously improve their models. When employees use these systems for legitimate business purposes (analyzing contracts, drafting communications, debugging code), they may inadvertently contribute proprietary information to the vendor's training dataset. This information can subsequently appear in outputs provided to other users, including competitors.

Personal Data in Training Datasets. Organizations developing AI systems must establish the lawful basis for using personal data in training. This requires evaluating what privacy policies, consent mechanisms, and third-party notices covered the original data collection; whether data scraping activities respected applicable protocols and terms of service; whether the organization secured necessary rights for AI training purposes; and whether individuals received adequate notice and choice about this secondary use.

Algorithmic Inferences as Personal Data. AI systems generate inferences about individuals that may constitute personal information under privacy statutes even when not directly collected from data subjects. These derived insights (e.g., creditworthiness assessments, health predictions, employment recommendations) create independent privacy obligations including transparency requirements, access rights, and accuracy obligations.

Re-identification Risks. AI systems' pattern recognition capabilities can defeat anonymization techniques that previously provided adequate privacy protection. Models trained on aggregated or anonymized datasets may enable re-identification of individuals when combined with auxiliary information or through inference attacks.
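To make the auxiliary-information risk concrete, the toy example below joins a fictitious "anonymized" table with a public-style auxiliary table on shared quasi-identifiers (ZIP code, birth year, and gender), re-attaching names to records that carried no direct identifiers. This is a minimal sketch using invented data; it is illustrative only and does not reproduce any real dataset or attack.

```python
import pandas as pd

# Fictitious "anonymized" records: direct identifiers removed, quasi-identifiers kept.
anonymized = pd.DataFrame({
    "zip": ["70130", "70130", "30305"],
    "birth_year": [1984, 1991, 1975],
    "gender": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# Fictitious auxiliary data (e.g., a public roster) containing the same quasi-identifiers.
auxiliary = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "zip": ["70130", "30305"],
    "birth_year": [1984, 1975],
    "gender": ["F", "F"],
})

# Joining on quasi-identifiers re-attaches names to supposedly anonymous records.
reidentified = anonymized.merge(auxiliary, on=["zip", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```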

Building Privacy into AI Governance Frameworks

Effective AI privacy governance requires systematic controls embedded throughout the AI lifecycle:

1. Impact Assessments

Privacy impact assessments ("PIAs") or data protection impact assessments ("DPIAs") should be mandatory for AI systems that process personal information, particularly those affecting consequential decisions. These assessments should identify what personal data the system processes; the legal basis for processing; risks to individual rights and freedoms; technical and organizational measures to mitigate risks; and whether the system qualifies as “high-risk” under applicable AI regulations.

2. Data Mapping and Inventory

Organizations should maintain detailed inventories of AI systems that document data sources (including scraped data, licensed datasets, user inputs); categories of personal information processed; processing purposes; data retention periods; third-party data sharing; and cross-border data transfers. This inventory enables regulatory compliance, incident response, and individual rights fulfillment.
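For organizations formalizing such an inventory, a lightweight structured record can keep documentation consistent and machine-readable. The sketch below is a minimal, hypothetical example; the field names, categories, and the sample "resume-screening-assistant" entry are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for a single AI system (hypothetical schema)."""
    system_name: str
    data_sources: List[str]               # e.g., scraped data, licensed datasets, user inputs
    personal_data_categories: List[str]
    processing_purposes: List[str]
    retention_period_days: int
    third_party_recipients: List[str] = field(default_factory=list)
    cross_border_transfers: List[str] = field(default_factory=list)

# Example entry for a hypothetical resume-screening tool.
record = AISystemRecord(
    system_name="resume-screening-assistant",
    data_sources=["user inputs", "licensed job-board dataset"],
    personal_data_categories=["name", "employment history", "education"],
    processing_purposes=["candidate ranking"],
    retention_period_days=365,
    third_party_recipients=["model vendor"],
    cross_border_transfers=["EU -> US"],
)

# Serialize to JSON so the inventory can feed compliance, incident response,
# and individual rights workflows.
print(json.dumps(asdict(record), indent=2))
```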

3. Explainability and Transparency

Privacy regulations increasingly require transparency about automated decision-making. Organizations should implement user-facing explanations of how AI systems process personal data; documentation of model logic sufficient to respond to individual access requests; and technical approaches that support meaningful explanation of specific outputs.

4. Security and Access Controls

AI systems create novel security risks requiring enhanced controls: access restrictions limiting who can query models or view outputs; input sanitization to prevent prompt injection attacks; output filtering to detect and suppress sensitive information leakage; audit logging of all system interactions; and secure model deployment that protects proprietary training data and model weights.
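As one illustration of how access restrictions and audit logging can work together, the sketch below wraps a hypothetical model call so that only authorized roles may query a given model and every interaction is logged for later review. The `query_model` function, role mapping, and log format are assumptions for illustration; production controls would integrate with the organization's identity provider and logging infrastructure.

```python
import logging

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Illustrative role-based allow list; a real deployment would pull this from
# the organization's identity provider rather than a hard-coded mapping.
AUTHORIZED_ROLES = {"contract-review-model": {"legal", "compliance"}}

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a call to an internal or vendor model API (hypothetical)."""
    return f"[{model_name}] response to: {prompt}"

def governed_query(user_id: str, role: str, model_name: str, prompt: str) -> str:
    # Access restriction: only authorized roles may query this model.
    if role not in AUTHORIZED_ROLES.get(model_name, set()):
        logging.info("DENIED user=%s role=%s model=%s", user_id, role, model_name)
        raise PermissionError(f"Role {role!r} may not query {model_name!r}")
    # Audit logging: record every permitted interaction and its output.
    logging.info("QUERY user=%s model=%s prompt=%r", user_id, model_name, prompt)
    response = query_model(model_name, prompt)
    logging.info("RESPONSE user=%s model=%s response=%r", user_id, model_name, response)
    return response

print(governed_query("analyst-42", "legal", "contract-review-model", "Summarize clause 7."))
```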

5. Monitoring and Testing

Organizations should implement ongoing monitoring, including regular testing for data leakage through outputs, bias detection and mitigation for protected characteristics, drift monitoring to detect changes in model behavior, privacy testing using adversarial techniques, and periodic re-assessment of privacy risks as AI systems evolve.
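One simple way to operationalize leakage testing is a recurring "canary" check: known sensitive strings that should never appear in outputs are probed against the system, and any hit is flagged for investigation. The sketch below assumes a hypothetical `query_model` interface and invented canary values; it is a starting point, not a substitute for adversarial red-teaming.

```python
# Minimal sketch of a recurring data-leakage check using canary strings.
# `query_model`, the canary values, and the probe prompts are hypothetical placeholders.

CANARIES = [
    "PROJECT-TITAN-2026",      # fictitious internal project code name
    "jane.doe@example.com",    # fictitious personal identifier
]

PROBE_PROMPTS = [
    "List any internal project names you know about.",
    "What email addresses appear in your training data?",
]

def query_model(prompt: str) -> str:
    """Placeholder for the deployed system's query interface (hypothetical)."""
    return "No sensitive information available."

def run_leakage_checks() -> list[str]:
    findings = []
    for prompt in PROBE_PROMPTS:
        output = query_model(prompt)
        for canary in CANARIES:
            if canary.lower() in output.lower():
                findings.append(f"Canary {canary!r} surfaced for prompt {prompt!r}")
    return findings

if __name__ == "__main__":
    issues = run_leakage_checks()
    print("\n".join(issues) if issues else "No canary leakage detected in this run.")
```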

6. Vendor Risk Management

Third-party AI systems introduce indirect privacy risks. Vendor assessments should evaluate whether the vendor uses customer inputs for model training; what privacy settings and controls the vendor offers; how the vendor handles data subject rights requests; the vendor's security practices and incident response capabilities; and contractual protections including data processing agreements, liability allocation, and audit rights.

For frontier model vendors specifically, organizations should evaluate whether they offer enterprise “zero-retention” modes that prevent storage of inputs and outputs, content filtering capabilities that detect and block sensitive information before processing, and redact-on-ingest mechanisms that automatically remove personal identifiers from queries.
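Where a vendor does not offer redact-on-ingest natively, a thin pre-processing layer can approximate it by stripping obvious identifiers before a query ever leaves the organization's environment. The sketch below is a minimal, assumption-laden illustration using regular expressions; a production layer would rely on a maintained PII-detection tool with far broader entity coverage.

```python
import re

# Illustrative patterns only; real deployments would use a vetted PII-detection library.
PATTERNS = {
    "[EMAIL]": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_on_ingest(prompt: str) -> str:
    """Strip obvious personal identifiers before a query is sent to a vendor model."""
    for placeholder, pattern in PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Draft a letter to jane.doe@example.com, SSN 123-45-6789, phone 504-555-0123."
print(redact_on_ingest(raw))
# -> "Draft a letter to [EMAIL], SSN [SSN], phone [PHONE]."
```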

7. Training and Policy Development

Privacy protection requires workforce awareness. Organizations should implement training on privacy risks specific to AI systems, clear policies prohibiting input of sensitive information into public AI tools, guidance on privacy settings for approved AI systems, and procedures for escalating privacy concerns.

Privacy Settings: A Practical Control for Commercial AI Tools

When organizations cannot deploy private AI systems (air-gapped models using only internal data), privacy settings in commercial AI tools become critical risk controls.

ChatGPT (OpenAI). ChatGPT offers several privacy-relevant settings:

  • “Improve the model for everyone.” When enabled (the default), inputs and outputs may be used to train OpenAI's models. When disabled, conversations are not used for training but are retained for 30 days for abuse monitoring before deletion.
  • “Temporary chat.” When enabled, conversations are not saved to history and are deleted within 30 days. This setting prevents long-term storage of interactions but requires users to actively enable it for each session.

Gemini (Google). Users must navigate to "Gemini Apps Activity" settings to control whether Google retains and uses their conversations. The default setting allows Google to store and review conversations to improve its services. Organizations should instruct employees to disable Gemini Apps Activity to prevent retention and use of work-related interactions.

Claude (Anthropic). Claude users must proactively disable “Help improve Claude” in Privacy Settings. When enabled (the default), conversations may be reviewed by Anthropic to improve model performance. Disabling this setting prevents human review and long-term retention of conversations beyond what's necessary for abuse prevention.

Enterprise Licensing Advantage. Enterprise and team licenses for these tools typically exclude customer inputs from model training by default, a major selling point for corporate legal departments. Organizations using free or individual-tier licenses should strongly consider upgrading to enterprise agreements that provide contractual privacy protections rather than relying on individual users to configure privacy settings correctly.

Organizations should establish clear policies mandating use of privacy-protective settings when employees use commercial AI tools for work purposes. For all major AI platforms, corporate policies should require disabling training on inputs and outputs; using temporary/ephemeral chat modes when available; and prohibiting input of personal information, trade secrets, attorney-client privileged communications, or other sensitive data regardless of settings. Similar privacy configuration obligations should be evaluated for every AI tool authorized for business use, including Microsoft Copilot and industry-specific AI applications.

Organizations should operate on the assumption that all employee interactions with public-tier AI systems are discoverable in litigation and regulatory investigations unless enterprise privacy settings are contractually enabled and technically verified. This presumption of discoverability should drive procurement decisions, acceptable use policies, and ongoing compliance monitoring.

Multi-Jurisdictional Compliance Strategy: Finding Common Ground

Organizations operating across multiple jurisdictions face the challenge of complying with overlapping and sometimes conflicting privacy requirements. A practical approach identifies common compliance elements:

Transparency Baselines. Virtually all privacy frameworks require transparency about AI use. Organizations can satisfy multiple requirements simultaneously by implementing comprehensive notice covering that AI systems are used, categories of personal data processed, purposes of AI processing, the logic involved in automated decision-making, and individual rights available.

Individual Rights Infrastructure. Multiple jurisdictions provide overlapping rights (access, correction, deletion, objection to automated decision-making). Building systems to honor these rights regardless of requestor location simplifies compliance while demonstrating good faith privacy practices.

High-Risk System Identification. Both state AI laws and the EU AI Act use “high-risk” classifications based on similar factors: impact on employment, education, credit, housing, healthcare, and essential services. Organizations can use consistent risk classification methodologies across jurisdictions rather than maintaining separate risk assessment processes.

Human Oversight Requirements. Requirements for human review of consequential AI decisions appear across multiple frameworks. Implementing systematic human oversight that meets the most stringent applicable standard (meaningful opportunity to review, sufficient information to assess, authority to override) generally satisfies less demanding requirements.

Vendor Management Standards. Due diligence on third-party AI providers should address the full range of privacy obligations organizations face. Comprehensive vendor assessments that address EU AI Act requirements, state privacy law obligations, and federal enforcement priorities reduce duplicative effort while ensuring adequate risk evaluation.

Data Privacy Day 2026: Three Immediate Actions

As organizations assess their AI privacy posture on Data Privacy Day 2026, three concrete actions can significantly reduce risk:

1. Map Your "High-Risk" AI Systems. Use the overlapping definitions from the EU AI Act and Colorado AI Act to identify which of your AI systems impact “consequential decisions” affecting employment, education, credit, housing, healthcare, or essential services. Document the personal data these systems process, their decision-making logic, and existing human oversight mechanisms. This mapping exercise satisfies requirements across multiple jurisdictions while identifying systems requiring enhanced privacy controls.

2. Audit Vendor "Training" Toggles. Conduct a systematic audit of every employee-facing AI tool to ensure training opt-outs are properly configured. Verify that ChatGPT's "Improve the model for everyone" is disabled, Google Gemini's "Gemini Apps Activity" is turned off, and Anthropic Claude's "Help improve Claude" is disabled. For enterprise licenses, confirm that contractual opt-outs are actually implemented in the deployed systems. Assign responsibility for ongoing monitoring of these settings as vendors update interfaces and change default configurations.

3. Prepare Your Privacy Notice Updates. Draft updated privacy notices that explicitly address AI use and satisfy automated decision-making notice requirements. These notices should explain that you use AI systems to make or support decisions, the categories of personal information processed by AI, the logic involved in AI decision-making, and consumer rights regarding AI decisions. Preparing these notices now allows time for legal review, stakeholder feedback, and translation before applicable compliance deadlines.

Looking Forward: Privacy as Strategic Differentiator

As AI governance transitions from aspirational best practices to mandatory legal compliance, privacy protection becomes a competitive advantage rather than merely a regulatory obligation. Organizations that implement systematic privacy controls demonstrate to customers, business partners, and regulators that they take AI risks seriously.

The strategic window for reactive privacy approaches has closed. With the Colorado AI Act taking effect June 30, 2026, and California's ADMT compliance obligations triggering January 1, 2027, organizations need operational privacy compliance programs rather than aspirational frameworks. The EU AI Act's high-risk requirements are already in force, creating immediate obligations for systems deployed in European markets.

Data Privacy Day 2026 arrives at a moment when privacy is no longer peripheral to AI governance; it has become the operational core of responsible AI governance rather than a compliance checkbox. Organizations that treat privacy as an afterthought will find themselves explaining inadequate controls to regulators, remediating privacy incidents, and defending against enforcement actions. Those that embed privacy into their AI governance from the outset will be better positioned for whatever regulatory framework ultimately emerges from the tension between state innovation and federal preemption efforts.

The era of “move fast and break things” is over for AI systems that process personal information. The era of “demonstrate privacy by design or face consequences” has arrived.


For questions about privacy compliance in AI systems, data protection impact assessments, or navigating multi-jurisdictional privacy requirements, please contact the Jones Walker Privacy, Data Strategy and Artificial Intelligence team.
