
AI Law and Policy Navigator

Does Your Chatbot Have Trust Issues? How Regulatory Scrutiny is Accelerating

By Jason M. Loring, Michelle Ramsden
April 9, 2026

Every major US tech company now maintains a “trust and safety” function. But as AI chatbots become more embedded in consumer-facing services, a fundamental question remains: are they worthy of the trust that users place in them?

Where the Evidence Points

There’s no doubt that chatbots are lucrative for business. Chatbots like Macy’s shopping assistant are optimized to generate outputs that satisfy user expectations and drive further engagement, and a recent Bloomberg report illustrates the payoff: Macy’s disclosed that users who engaged with its Gemini-powered chatbot, “Ask Macy’s,” spent 400% more than other shoppers.

The underlying mechanics of AI systems are often opaque to the average user. That opacity can create an illusion of precision and objectivity, leading consumers to place greater trust in chatbot outputs than they might in human advice. But emerging global research suggests that such trust may be misplaced.

Another report, from the UK’s Centre for Long-Term Resilience (CLTR), found that AI chatbots and agents are increasingly capable of “scheming” — disregarding human instructions or programmed safeguards to achieve a goal. Scheming can range from a single policy violation to more serious forms of deception, such as intentionally misleading users or third-party systems.

Experts have compared this misbehavior to early-stage insider risk:

“The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become extremely capable senior employees scheming against you, it’s a different kind of concern.” -- Tommy Shaffer Shane, researcher quoted in The Guardian

At the same time, a study published in Science found that AI chatbot outputs are 49% more sycophantic than human responses. While this tendency is nothing new for chatbots (OpenAI CEO Sam Altman admitted in April 2025 that ChatGPT “glazes too much”), researchers still found that users preferred and trusted sycophantic AI responses. This dynamic raises important questions about manipulation, user autonomy, and informed decision-making. The research doesn’t just document a chatbot problem: it documents what happens to human judgment when it’s consistently rewarded for accepting flattery. A human advisor who flatters you may be dishonest. A system optimized to flatter you is doing exactly what it was designed to do.

That behavioral profile (systems that deceive, flatter, and optimize for engagement over accuracy) is precisely what regulators are beginning to address. And there is a growing wave of federal and state legislative activity addressing chatbot-related harms, particularly in areas impacting mental health and child safety.

For example, on March 24, Washington Governor Bob Ferguson signed HB 2225 into law, establishing a private right of action for violations of chatbot transparency and safety requirements. Notably, the law targets “manipulative engagement techniques,” including scenarios in which chatbots mimic romantic relationships with minors. Meanwhile, several other states, including California (which we covered in an earlier Navigator post), Maine, New Hampshire, New York, and Utah, have enacted similar measures.

These developments reflect a broader trend toward accountability for online harms. Courts have already begun to find platforms liable for mental health outcomes, as illustrated by recent findings against Meta and YouTube in California.

Taken together, the convergence of increasingly sophisticated and concerning AI behavior with expanding legal accountability frameworks is a trend that businesses should monitor closely. While sweeping federal regulation targeting American AI giants remains unlikely, more targeted, sector-specific enforcement is already underway. The Federal Trade Commission (FTC) launched an inquiry last September into the impacts of AI chatbot use on children and teens, and in November the Food and Drug Administration (FDA) acknowledged that “AI therapist” chatbots pose novel risks that regulation will need to address.

In this environment, chatbot deployments that fail to meet baseline transparency or safety requirements may face scrutiny under Section 5 of the FTC Act, particularly where chatbot behavior or design misleads users or omits material information about system limitations or incentives.

What Should Businesses Do Now? 

Organizations deploying AI chatbots should take proactive steps to align with this evolving regulatory landscape:

  1. Deploy flexible AI governance frameworks that prioritize transparency and accountability. This includes establishing processes for the training, testing, and auditing of AI systems that are proportionate to the risks associated with specific use cases.
  2. Develop the capability to monitor how chatbots interact with user inputs and whether they circumvent established prompting, organizational, or technical safeguards. Practically, businesses should scale interventions to foreseeable risk: the potential harm associated with a shopping assistant differs significantly from that of an AI therapist.
  3. Don’t assume that third-party vendors have conducted risk assessments appropriate for your specific deployment context. Instead, work with privacy, data strategy, and AI counsel to ensure that contractual provisions, internal controls, and oversight mechanisms appropriately allocate responsibility and mitigate risk associated with third-party chatbots.
  4. Be prepared to demonstrate meaningful, real-time oversight of chatbot performance and intervene promptly in cases of malfunction or harm. Regulators are increasingly focused not only on whether safeguards exist, but on whether they function appropriately.

In the end, it is not the chatbots that bear legal or reputational consequences — it is the businesses that deploy them. As reliance on chatbots continues to grow, so too does the obligation to deploy them responsibly.

For questions about AI governance and vendor risk management, contact the Jones Walker Privacy, Data Strategy, and Artificial Intelligence team. And stay tuned (and subscribe) for continued insights from the AI Law and Policy Navigator.

Related Professionals
  • Jason M. Loring
    Partner
    D: 404.870.7531
    jloring@joneswalker.com
  • Michelle Ramsden
    Special Counsel
    D: 404.870.7503
    mramsden@joneswalker.com

Related Practices

  • Privacy, Data Strategy, and Artificial Intelligence