
When Bradley Heppner faced a federal investigation into an alleged $150 million fraud, he did what many executives might do in 2025: he turned to an AI tool for help thinking through his legal situation. He used Claude, an AI assistant created by Anthropic, to analyze his circumstances and develop potential responses. Later, he shared those AI-generated documents with his defense attorneys at Quinn Emanuel.
On February 10, 2026, U.S. District Judge Jed S. Rakoff delivered a ruling that should concern every executive using AI for sensitive business decisions: those AI conversations weren't private. Despite Heppner's subsequent sharing of the materials with his lawyers, the court found no attorney-client privilege protected them. The government could access every prompt, every AI response, every strategic consideration Heppner had documented.
This case isn't just about one defendant's legal misstep. It exposes a dangerous assumption many executives hold: that their AI interactions carry the same confidentiality protections as conversations with trusted advisors. They don't. And the implications extend far beyond the courtroom to every sensitive business decision where executives might reach for AI assistance.
The federal court's reasoning reveals three critical ways AI tools fail to provide the confidentiality executives might expect. Understanding each failure is essential for anyone making business-critical decisions.
The attorney-client privilege exists because of a carefully balanced policy decision: society accepts that some communications should remain confidential because they occur within a professional relationship governed by licensing, ethical duties, and regulatory oversight. Attorneys owe fiduciary duties to their clients. They face professional discipline for breaches of confidentiality. They are bound by rules of professional conduct.
AI tools have none of these constraints. As the court noted, the AI platform "has no law degree and is not a member of the bar. It owes no duties of loyalty and confidentiality to its users. It owes no professional duties to courts, regulatory bodies, and professional organizations."
When you input sensitive information into an AI tool—whether about M&A strategy, regulatory responses, personnel decisions, or competitive intelligence—you're not engaging a professional advisor. You're using a software product. The legal and professional protections you might expect simply don't apply.
The government’s motion noted that Anthropic’s Constitution—a set of principles guiding Claude’s responses—directs the AI to choose responses that avoid giving the impression of providing specific legal advice and to suggest consulting a lawyer instead. The court found this significant in determining that the defendant could not have been seeking legal advice from the AI tool.
This isn't unique to legal advice. Most enterprise AI tools include similar disclaimers for financial advice, medical guidance, and other professional services. These aren't mere legal formalities—they reflect the fundamental limitation that these tools, however sophisticated, are not substitutes for professional advisors.
If you're using AI to help navigate complex business decisions—evaluating acquisition targets, responding to regulatory inquiries, managing crisis situations, or assessing legal exposure—the tool itself is telling you that what it provides isn't the professional advice you might need. More critically, courts may view your use of the tool as acknowledgment that you understood you weren't obtaining confidential professional counsel.
Perhaps most concerning for executives, the court found that Heppner's AI conversations lacked confidentiality because he "chose to share his prompts with an AI tool created by a third-party company that is publicly accessible."
As the government’s motion detailed and the court noted, Anthropic’s privacy policy permits the company to:
• Collect data on user prompts and AI outputs
• Use this data to train and improve its AI systems
• Disclose information to governmental regulatory authorities and third parties
When you input information about pending acquisitions, regulatory strategies, competitive intelligence, or crisis response plans into an AI tool, that information passes through the vendor's infrastructure. Depending on the vendor's terms, it may be retained, analyzed, or even used to improve the general AI model. In litigation, regulatory investigations, or discovery proceedings, these records could be accessible to opposing parties or government agencies.
While the Heppner case arose in a criminal prosecution, its implications extend to virtually any situation where executives use AI tools to work through sensitive business matters:
Using AI to analyze acquisition targets, model transaction structures, or draft negotiating strategies could create discoverable records of your thinking, valuation approaches, and strategic priorities. In later disputes over deal terms, disclosure obligations, or fiduciary duties, these AI interactions could become evidence against you.
If you're using AI to help formulate responses to regulatory inquiries, assess compliance gaps, or develop remediation strategies, those conversations could be discoverable in subsequent enforcement actions. They might reveal your awareness of problems, your assessment of legal risks, or gaps between what you knew internally and what you disclosed to regulators.
AI tools seem perfect for stress-testing messaging, analyzing stakeholder reactions, or developing response scenarios during a crisis. But these explorations could later be used to show your organization's real-time assessment of fault, awareness of damage, or strategic calculations about disclosure—potentially undermining your public positions.
Using AI to analyze employee performance patterns, assess termination risks, or think through investigation strategies creates records that could be discoverable in wrongful termination, discrimination, or retaliation cases. These records might reveal decision-making processes or considerations you'd prefer to keep confidential.
AI analysis of competitor strategies, market opportunities, or pricing decisions creates a documented trail of your competitive thinking. In antitrust investigations, patent disputes, or trade secret cases, this trail could reveal strategic intent, market knowledge, or awareness of competitive dynamics that becomes relevant to the legal issues.
The Heppner decision doesn't mean executives should avoid AI tools entirely. Rather, it demands thoughtful decision-making about when and how to use them. Consider this framework:
1. Could this information be damaging if disclosed in litigation, investigation, or public reporting?
If yes, think carefully before inputting it into any AI tool. Consider whether the analytical value justifies the disclosure risk.
2. Does the AI tool's privacy policy permit data retention, analysis, or disclosure?
Most consumer and general-purpose AI tools do. Enterprise versions with enhanced privacy commitments may not. Read the actual terms before assuming confidentiality.
3. Am I using this because it's the right tool, or because it's convenient?
AI tools excel at pattern recognition, data synthesis, and generating options. They're not substitutes for professional judgment on high-stakes decisions. If you're facing a situation that genuinely requires professional advice—legal, financial, strategic—engage actual professionals.
4. Would I be comfortable if opposing counsel or regulators saw this conversation?
This is the ultimate test. If the answer is no, don't have the conversation with an AI tool. Find a truly confidential channel—whether that's an in-person meeting with trusted advisors, a conversation with counsel, or simply working through the problem without creating digital records.
The Heppner decision should prompt immediate action by organizations in regulated industries:
Review the terms of service, privacy policies, and data handling practices of every AI tool your organization uses or is considering. Understand exactly what happens to your data: Is it retained? For how long? Who can access it? Under what circumstances might it be disclosed? Can it be used to train AI models? The answers matter enormously for risk management.
Develop explicit policies governing when AI tools can and cannot be used for business purposes. These policies should address:
• Categories of information that should never be input into AI tools (e.g., pending M&A discussions, regulatory investigation details, attorney work product)
• Approved AI tools for different purposes and sensitivity levels
• Requirements for legal or compliance review before using AI for sensitive matters
• Procedures for vetting and approving new AI tools
Organizations already classify data by sensitivity and apply different controls accordingly. AI tool usage should fit within these existing frameworks. If certain information requires encryption, access controls, or need-to-know restrictions when stored in your systems, it likely shouldn't be freely input into third-party AI platforms.
The Heppner case demonstrates that even sophisticated executives may not fully understand the confidentiality implications of AI tool use. Organizations should provide clear, practical training that helps employees understand:
• Why AI conversations aren't protected like conversations with attorneys or other professionals
• What happens to data they input into AI tools
• How to identify situations where AI tools are inappropriate
• What alternatives exist for handling sensitive matters
One striking aspect of the Heppner case is that the defendant's use of AI wasn't directed by his attorneys. Had counsel been involved in determining how AI might appropriately assist with case preparation—and what materials should be kept within the attorney-client relationship rather than generated through third-party tools—the outcome might have been different. Organizations should involve legal counsel early when considering AI adoption for sensitive business functions. Increasingly, they're also engaging responsible AI governance specialists who can conduct vendor due diligence, translate complex AI terms of service into business risk language, and design practical policies that protect sensitive information while enabling productive AI use.
The Heppner decision arrives at a critical moment in AI adoption. Executives across industries are rapidly integrating AI tools into their decision-making processes, often without fully considering the confidentiality implications. The technology is powerful and genuinely useful for many business purposes. But it's not a substitute for human judgment, professional advice, or confidential deliberation on high-stakes matters.
For organizations in highly regulated industries—financial services, healthcare, government contracting, pharmaceuticals, and others where regulatory scrutiny and litigation risk run high—the stakes are particularly significant. These organizations already operate under heightened obligations around data protection, audit trails, and disclosure. AI tool usage creates new vectors for information exposure that must be carefully managed.
The court's ruling also signals that judges are beginning to grapple with AI's role in business and legal contexts. We should expect more such decisions as AI usage becomes ubiquitous and courts address questions of privilege, confidentiality, and evidentiary standards. Organizations that establish thoughtful AI governance frameworks now will be better positioned as legal standards continue to evolve.
Most fundamentally, the Heppner case reminds us that convenience and capability don't equal confidentiality. AI tools can help executives think through complex problems, analyze data, and generate options. But they're not confidential advisors. The sooner organizations internalize this reality and build appropriate guardrails around AI usage, the better protected they'll be when sensitive business decisions inevitably face scrutiny.
About This Analysis
This article analyzes United States v. Heppner, Case No. 1:25-cr-00503-JSR (S.D.N.Y.), including the government's motion filed February 6, 2026, and Judge Rakoff's February 10, 2026 bench ruling. The case represents the first significant judicial examination of whether AI-generated documents can be protected by attorney-client privilege or work product doctrine. Organizations in regulated industries should consult with legal counsel to understand how this precedent applies to their specific AI usage and governance needs.