Phung Touch

When Bradley Heppner faced a federal investigation into an alleged $150 million fraud, he did what many executives might do in 2025: he turned to an AI tool for help thinking through his legal situation. He used Claude, an AI assistant created by Anthropic, to analyze his circumstances and develop potential responses. Later, he shared those AI-generated documents with his defense attorneys at Quinn Emanuel.

On February 10, 2026, U.S. District Judge Jed S. Rakoff delivered a ruling that should concern every executive using AI for sensitive business decisions: those AI conversations weren't private. Even though Heppner later shared the materials with his lawyers, the court found that no attorney-client privilege protected them. The government could access every prompt, every AI response, every strategic consideration Heppner had documented.

This case isn't just about one defendant's legal misstep. It exposes a dangerous assumption many executives hold: that their AI interactions carry the same confidentiality protections as conversations with trusted advisors. They don't. And the implications extend far beyond the courtroom to every sensitive business decision where executives might reach for AI assistance.

The Three Confidentiality Failures That Executives Need to Understand

The federal court's reasoning reveals three critical ways AI tools fail to provide the confidentiality executives might expect. Understanding each failure is essential for anyone making business-critical decisions.

1. You're Not Talking to a Professional

The attorney-client privilege exists because of a carefully balanced policy decision: society accepts that some communications should remain confidential because they occur within a professional relationship governed by licensing, ethical duties, and regulatory oversight. Attorneys owe fiduciary duties to their clients. They face professional discipline for breaches of confidentiality. They are bound by rules of professional conduct.

AI tools have none of these constraints. As the court noted, the AI platform "has no law degree and is not a member of the bar. It owes no duties of loyalty and confidentiality to its users. It owes no professional duties to courts, regulatory bodies, and professional organizations."

When you input sensitive information into an AI tool, whether about M&A strategy, regulatory responses, personnel decisions, or competitive intelligence, you're not engaging a professional advisor. You're using a software product. The legal and professional protections you might expect simply don't apply.

2. The Tool Explicitly Disclaims What You Think You're Getting

The government’s motion noted that Anthropic’s Constitution—a set of principles guiding Claude’s responses—directs the AI to choose responses that avoid giving the impression of providing specific legal advice and to suggest consulting a lawyer instead. The court found this significant in determining that the defendant could not have been seeking legal advice from the AI tool.

This isn't unique to legal advice. Most enterprise AI tools include similar disclaimers for financial advice, medical guidance, and other professional services. These aren't mere legal formalities—they reflect the fundamental limitation that these tools, however sophisticated, are not substitutes for professional advisors.

If you're using AI to help navigate complex business decisions—evaluating acquisition targets, responding to regulatory inquiries, managing crisis situations, or assessing legal exposure—the tool itself is telling you that what it provides isn't the professional advice you might need. More critically, courts may view your use of the tool as acknowledgment that you understood you weren't obtaining confidential professional counsel.

3. Your Inputs and Outputs Travel Through Third-Party Infrastructure

Perhaps most concerning for executives, the court found that Heppner's AI conversations lacked confidentiality because he "chose to share his prompts with an AI tool created by a third-party company that is publicly accessible."

The court, citing the government’s motion, noted that Anthropic’s privacy policy permits the company to:

• Collect data on user prompts and AI outputs

• Use this data to train and improve its AI systems

• Disclose information to governmental regulatory authorities and third parties

When you input information about pending acquisitions, regulatory strategies, competitive intelligence, or crisis response plans into an AI tool, that information passes through the vendor's infrastructure. Depending on the vendor's terms, it may be retained, analyzed, or even used to improve the general AI model. In litigation, regulatory investigations, or discovery proceedings, these records could be accessible to opposing parties or government agencies.

While the Heppner case arose in a criminal prosecution, its implications extend to virtually any situation where executives use AI tools to work through sensitive business matters:

Strategic Planning and M&A

Using AI to analyze acquisition targets, model transaction structures, or draft negotiating strategies could create discoverable records of your thinking, valuation approaches, and strategic priorities. In later disputes over deal terms, disclosure obligations, or fiduciary duties, these AI interactions could become evidence against you.

Regulatory Response and Compliance

If you're using AI to help formulate responses to regulatory inquiries, assess compliance gaps, or develop remediation strategies, those conversations could be discoverable in subsequent enforcement actions. They might reveal your awareness of problems, your assessment of legal risks, or gaps between what you knew internally and what you disclosed to regulators.

Crisis Management and Reputation Protection

AI tools seem perfect for stress-testing messaging, analyzing stakeholder reactions, or developing response scenarios during a crisis. But these explorations could later be used to show your organization's real-time assessment of fault, awareness of damage, or strategic calculations about disclosure—potentially undermining your public positions.

Personnel Decisions and Workplace Investigations

Using AI to analyze employee performance patterns, assess termination risks, or think through investigation strategies creates records that could be discoverable in wrongful termination, discrimination, or retaliation cases. These records might reveal decision-making processes or considerations you'd prefer to keep confidential.

Competitive Intelligence and Market Analysis

AI analysis of competitor strategies, market opportunities, or pricing decisions creates a documented trail of your competitive thinking. In antitrust investigations, patent disputes, or trade secret cases, this trail could reveal strategic intent, market knowledge, or awareness of competitive dynamics that becomes relevant to the legal issues.

A Decision Framework for AI Use in Sensitive Business Contexts

The Heppner decision doesn't mean executives should avoid AI tools entirely. Rather, it demands thoughtful decision-making about when and how to use them. Consider this framework:

Before Using an AI Tool, Ask:

1. Could this information be damaging if disclosed in litigation, investigation, or public reporting?

If yes, think carefully before inputting it into any AI tool. Consider whether the analytical value justifies the disclosure risk.

2. Does the AI tool's privacy policy permit data retention, analysis, or disclosure?

Most consumer and general-purpose AI tools do. Enterprise versions with enhanced privacy commitments may not. Read the actual terms before assuming confidentiality.

3. Am I using this because it's the right tool, or because it's convenient?

AI tools excel at pattern recognition, data synthesis, and generating options. They're not substitutes for professional judgment on high-stakes decisions. If you're facing a situation that genuinely requires professional advice—legal, financial, strategic—engage actual professionals.

4. Would I be comfortable if opposing counsel or regulators saw this conversation?

This is the ultimate test. If the answer is no, don't have the conversation with an AI tool. Find a truly confidential channel—whether that's an in-person meeting with trusted advisors, a conversation with counsel, or simply working through the problem without creating digital records.

What Organizations Should Do Now

The Heppner decision should prompt immediate action by organizations in regulated industries:

1. Conduct AI Vendor Due Diligence

Review the terms of service, privacy policies, and data handling practices of every AI tool your organization uses or is considering. Understand exactly what happens to your data: Is it retained? For how long? Who can access it? Under what circumstances might it be disclosed? Can it be used to train AI models? The answers matter enormously for risk management.

2. Establish Clear Use Policies

Develop explicit policies governing when AI tools can and cannot be used for business purposes. These policies should address:

• Categories of information that should never be input into AI tools (e.g., pending M&A discussions, regulatory investigation details, attorney work product)

• Approved AI tools for different purposes and sensitivity levels

• Requirements for legal or compliance review before using AI for sensitive matters

• Procedures for vetting and approving new AI tools

3. Integrate AI Governance into Existing Information Security

Organizations already classify data by sensitivity and apply different controls accordingly. AI tool usage should fit within these existing frameworks. If certain information requires encryption, access controls, or need-to-know restrictions when stored in your systems, it likely shouldn't be freely input into third-party AI platforms.
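As a concrete illustration, here is a minimal sketch in Python of how AI tool usage might be gated by an existing data classification scheme. The classification tiers, tool names, and policy choices below are hypothetical assumptions chosen for illustration only; they are not requirements drawn from the Heppner ruling or from any vendor's terms.

```python
# A minimal sketch, not a production control: the tier names, tools, and rules below
# are hypothetical assumptions used only to illustrate tying AI usage to data classification.

from dataclasses import dataclass

# Hypothetical sensitivity tiers, ordered least to most sensitive.
SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

@dataclass
class AIToolProfile:
    name: str
    max_sensitivity: str   # highest tier this tool is approved to receive
    requires_review: bool  # whether legal/compliance sign-off is needed first

# Illustrative registry; a real one would come out of vendor due diligence.
APPROVED_TOOLS = {
    "consumer_chatbot": AIToolProfile("consumer_chatbot", "public", False),
    "enterprise_assistant": AIToolProfile("enterprise_assistant", "internal", True),
}

def may_submit(tool_key: str, data_tier: str) -> tuple[bool, str]:
    """Decide whether data at data_tier may be sent to the named tool, with a reason."""
    tool = APPROVED_TOOLS.get(tool_key)
    if tool is None:
        return False, f"'{tool_key}' is not an approved AI tool"
    if SENSITIVITY_ORDER.index(data_tier) > SENSITIVITY_ORDER.index(tool.max_sensitivity):
        return False, (f"'{data_tier}' data exceeds the approved tier for "
                       f"{tool.name} ('{tool.max_sensitivity}')")
    if tool.requires_review:
        return True, "allowed, subject to legal/compliance review before submission"
    return True, "allowed under current policy"

if __name__ == "__main__":
    # A pending regulatory response would typically be classified as restricted,
    # so it is blocked even for the enterprise tool.
    print(may_submit("enterprise_assistant", "restricted"))
    print(may_submit("consumer_chatbot", "public"))
```

The mechanics matter less than the design choice: decisions about what may flow to a third-party AI platform should inherit from the same classification scheme that already governs where sensitive data may be stored or transmitted.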

4. Train Executives and Employees

The Heppner case demonstrates that even sophisticated executives may not fully understand the confidentiality implications of AI tool use. Organizations should provide clear, practical training that helps employees understand:

• Why AI conversations aren't protected like conversations with attorneys or other professionals

• What happens to data they input into AI tools

• How to identify situations where AI tools are inappropriate

• What alternatives exist for handling sensitive matters

5. Involve Counsel in AI Strategy Discussions

One striking aspect of the Heppner case is that the defendant's use of AI wasn't directed by his attorneys. Had counsel been involved in determining how AI might appropriately assist with case preparation—and what materials should be kept within the attorney-client relationship rather than generated through third-party tools—the outcome might have been different. Organizations should involve legal counsel early when considering AI adoption for sensitive business functions. Increasingly, they're also engaging responsible AI governance specialists who can conduct vendor due diligence, translate complex AI terms of service into business risk language, and design practical policies that protect sensitive information while enabling productive AI use.

The Broader Implications

The Heppner decision arrives at a critical moment in AI adoption. Executives across industries are rapidly integrating AI tools into their decision-making processes, often without fully considering the confidentiality implications. The technology is powerful and genuinely useful for many business purposes. But it's not a substitute for human judgment, professional advice, or confidential deliberation on high-stakes matters.

For organizations in highly regulated industries—financial services, healthcare, government contracting, pharmaceuticals, and others where regulatory scrutiny and litigation risk run high—the stakes are particularly significant. These organizations already operate under heightened obligations around data protection, audit trails, and disclosure. AI tool usage creates new vectors for information exposure that must be carefully managed.

The court's ruling also signals that judges are beginning to grapple with AI's role in business and legal contexts. We should expect more such decisions as AI usage becomes ubiquitous and courts address questions of privilege, confidentiality, and evidentiary standards. Organizations that establish thoughtful AI governance frameworks now will be better positioned as legal standards continue to evolve.

Most fundamentally, the Heppner case reminds us that convenience and capability don't equal confidentiality. AI tools can help executives think through complex problems, analyze data, and generate options. But they're not confidential advisors. The sooner organizations internalize this reality and build appropriate guardrails around AI usage, the better protected they'll be when sensitive business decisions inevitably face scrutiny.

 

About This Analysis

This article analyzes United States v. Heppner, Case No. 1:25-cr-00503-JSR (S.D.N.Y.), including the government's motion filed February 6, 2026, and Judge Rakoff's February 10, 2026 bench ruling. The case represents the first significant judicial examination of whether AI-generated documents can be protected by attorney-client privilege or the work product doctrine. Organizations in regulated industries should consult with legal counsel to understand how this precedent applies to their specific AI usage and governance needs.

Frequently Asked Questions: AI and Confidentiality

Are AI conversations protected by attorney-client privilege?

No. In the February 2026 Heppner case, a federal court ruled that conversations with AI tools like Claude or ChatGPT are not protected by attorney-client privilege, even if the AI-generated materials are later shared with attorneys. The court found that AI tools are not attorneys and do not provide legal advice, and that users have no reasonable expectation of confidentiality when using third-party AI platforms.

Can executives use AI tools for confidential business strategy?

Executives should exercise extreme caution. Information entered into most AI tools passes through third-party infrastructure and may be retained, used for training, or disclosed to government authorities per vendor privacy policies. For truly confidential strategic matters—M&A planning, regulatory responses, crisis management—the risks of using general-purpose AI tools likely outweigh the benefits.

What did the Heppner case rule about AI and privilege?

The court ruled that 31 documents generated through an AI tool were neither protected by attorney-client privilege nor work product doctrine. The decision established that AI tools are not professional advisors, explicitly disclaim providing legal advice, and lack the confidentiality protections of traditional attorney-client relationships. This precedent has significant implications for executives using AI for sensitive business decisions.

What happens to my data when I use AI tools like ChatGPT or Claude?

According to vendor privacy policies cited in the Heppner case, AI platforms typically collect user prompts and AI outputs, may use this data to train and improve their systems, and may disclose information to governmental authorities and third parties. The specific terms vary by vendor and subscription tier—enterprise versions often have stronger privacy protections than free consumer versions.

How should executives use AI for sensitive business decisions?

Before using AI for sensitive matters, ask: Could this information damage us if disclosed in litigation? Does the AI vendor's privacy policy permit data retention or disclosure? Am I using this because it's right or just convenient? Would I be comfortable if opposing counsel saw this conversation? If any answer raises concerns, use traditional confidential channels instead—in-person meetings with advisors, conversations with counsel, or working through problems without creating digital records.

What AI governance policies do regulated industries need?

Organizations in regulated industries should establish policies governing: categories of information that should never be input into AI tools; approved AI vendors for different purposes and sensitivity levels; requirements for legal or compliance review before using AI for sensitive matters; procedures for vetting new AI tools; and integration with existing information security and data classification frameworks. These policies should be accompanied by executive training on confidentiality risks.

Does sharing AI outputs with my lawyer make them privileged?

No. The Heppner court ruled that you cannot retroactively create privilege by sharing pre-existing, non-privileged materials with an attorney. Just as forwarding a Google search or library research to your lawyer doesn't make those materials privileged, sending AI-generated documents to counsel after they're created does not cloak them with attorney-client privilege.

What's the difference between consumer and enterprise AI tools for confidentiality?

Enterprise AI tools often include enhanced privacy commitments, such as: no data retention beyond the session, no use of inputs for model training, stricter access controls, and potentially contractual confidentiality obligations. However, even enterprise tools may not create attorney-client privilege or work product protection. Organizations must carefully review vendor terms and understand exactly what privacy guarantees they're receiving.

Can AI tools be used during internal investigations or litigation?

This is high-risk territory. Any AI queries about investigation strategy, witness analysis, or litigation approaches could become discoverable and potentially waive privilege over related attorney work product. If legal counsel directs the use of AI as part of investigation or litigation preparation, and appropriate enterprise-grade tools are used, there may be stronger arguments for protection—but this remains legally uncertain following Heppner. Consult with counsel before using AI in these contexts.

What should organizations do now in response to the Heppner ruling?

Organizations should immediately: (1) review AI vendor terms of service and privacy policies to understand data handling practices, (2) establish clear use policies defining when AI tools can and cannot be used, (3) integrate AI governance into existing information security frameworks, (4) train executives and employees on confidentiality risks, and (5) involve legal counsel and responsible AI governance specialists early when considering AI adoption for sensitive business functions.