AI Searches Are Now Discoverable in Criminal Proceedings
Artificial intelligence platforms like ChatGPT and Claude are changing how people research, write, and plan. Many individuals use these tools as sounding boards for complex personal or business problems. However, if you are under criminal investigation, relying on an AI chatbot to prepare your defense could become a significant liability.
A federal court in New York recently published its rationale for a groundbreaking ruling regarding digital privacy and criminal defense. The court determined that a defendant’s search history and communications on a generative artificial intelligence platform are not protected from discovery by federal prosecutors. This ruling is among the first in the nation to specifically address whether AI searches connected to a criminal case fall under attorney-client privilege or the work product doctrine.
For anyone navigating the legal system, understanding how courts view AI evidence is critical. The technology is highly accessible, but it lacks the traditional legal protections you might assume apply to your private research. The criminal defense attorneys at Jacobs & Dow, LLC explain the details of the New York ruling, why the court decided to allow the AI evidence, and what this means for individuals facing criminal charges.
The Case Study: Bradley Heppner and AI Discoverability
The ruling stems from a federal case involving corporate executive Bradley Heppner. A federal grand jury indicted Heppner on serious charges, including fraud, conspiracy, falsifying records, and lying to auditors. Prosecutors alleged that he defrauded investors out of more than $150 million.
During the investigation, federal agents arrested Heppner and seized his electronic devices and documents. Upon review, Heppner’s attorney revealed that these seized items contained detailed communications between Heppner and Claude, an AI chatbot similar to ChatGPT or Grok. Heppner had used the commercial large language model to discuss the government’s investigation into his alleged criminal activity.
Crucially, Heppner conducted these AI searches and prepared reports outlining his defense strategies completely on his own, without any direction from a licensed attorney. When the government sought to review these searches as evidence, Heppner tried to keep them sealed. His legal team argued that the materials were protected by attorney-client privilege and the work product doctrine because Heppner created them in anticipation of his indictment.
Legal Analysis: The Lack of Attorney-Client Privilege
The federal court ultimately disagreed with Heppner’s defense team. To qualify for attorney-client privilege, communications must meet strict criteria. They must occur between a client and their attorney, be intended to remain confidential, and be made specifically to obtain or provide legal advice. The court noted that Heppner’s interactions with the AI platform failed at least two, and possibly all three, of these foundational requirements.
The most glaring issue is that an AI chatbot is not a licensed attorney. Heppner’s communications could never meet the first requirement of the privilege test. While some commentators argue that using an AI platform is just like using a cloud-based word processor, the court firmly rejected this comparison.
Legal privilege requires a trusting human relationship with a licensed professional who owes a fiduciary duty to the client and is subject to professional discipline. Claude and its competitors do not qualify as legal professionals.
The Privacy Pitfall: How LLMs Handle Your Data
The court also found that Heppner’s communications were not actually confidential. When users sign up for platforms like Claude, they consent to the company’s privacy policy. These policies generally state that the platform operator collects data on user inputs to train the algorithm.
Furthermore, the company behind Claude reserves the right to share user searches with third parties, which explicitly includes the government. Because of these terms of service, the court concluded that users simply do not have a substantial privacy interest in their conversations with large language models.
Adding to the problem, Claude specifically told Heppner that it could not provide formal legal advice. When prompted, the AI responded with a standard disclaimer stating it was not a lawyer. The court even included a footnote warning that if a defendant shares genuinely privileged information with an AI chatbot, the act of sharing it with that third-party platform waives the legal privilege entirely.
Work Product Doctrine: The Need for Counsel’s Direction
Heppner’s team also tried to shield the AI searches from discovery by claiming they were protected under the work product doctrine. This legal principle generally protects materials prepared in anticipation of litigation.
The court quickly dismantled this argument. Even if Heppner prepared the inputs and reports because he knew he was going to be indicted, he did not prepare them by or at the direction of legal counsel.
Because Heppner acted entirely on his own, the inputs and reports did not reflect any actual legal strategy formulated by his attorney at the time they were created. Therefore, the unguided AI searches fell completely outside the definition of protected legal work product.
Jurisdiction Note: What This Means for Connecticut
It is important to note that this ruling comes from a federal court in New York. This precedent regarding AI evidence has not been codified by the Connecticut legislature, nor has it been adopted as binding law in Connecticut state courts.
However, the legal reasoning applied in New York provides a strong indicator of how other jurisdictions might handle similar issues. Courts across the country often look to early federal rulings when navigating new technologies.
Until the Connecticut courts establish their own clear rules on the discoverability of AI searches, defendants should operate under the assumption that their unguided chatbot inquiries are vulnerable to government subpoenas and discovery requests.
Secure Proper Legal Representation in Connecticut
Relying on artificial intelligence to build a legal strategy is incredibly risky. As the Heppner case demonstrates, unguided AI use is discoverable, and the platforms themselves offer no legal confidentiality. The only way to ensure your defense strategies remain private and protected is to work directly with a licensed, experienced criminal defense attorney.
If you are facing criminal charges or are the target of an investigation in Connecticut, do not turn to a chatbot for legal advice. Contact the experienced legal team at Jacobs & Dow, LLC. Our attorneys understand the complexities of the law and will provide the confidential, strategic guidance you need to protect your rights and your future. Reach out to Jacobs & Dow, LLC today to schedule a consultation.