
If worrying about our soon-to-be AI overlords wasn’t enough, two rulings recently dropped that illustrate the risk that an AI platform’s terms of use and privacy policy limitations may pose to confidential information. On February 17, 2026, in U.S. v. Heppner, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York ruled that AI-generated documents were not entitled to either attorney/client or attorney work product protection, citing waiver of confidentiality due to the defendant’s consent to Anthropic’s privacy policy. And on January 5, 2026, in a case that has not received the same notoriety, Trinidad v. OpenAI, Inc., 2026 U.S. Dist. LEXIS 1129, 2026 WL 21791, Judge Jon Tigar of the U.S. District Court for the Northern District of California dismissed that case because he found that the plaintiff forfeited her trade secrets claim under OpenAI’s terms of use.
These cases not only serve as a cautionary tale for trade secret owners but also as a reminder of the impact that terms of service–the fine print many of us scroll through and consent to when securing access to a website or online tool–may have in future trade secret disputes. This raises the following question: should courts reconsider applying those terms of service and privacy policies so broadly in situations of confidentiality? Read on to find out . . .
What Judge Rakoff Hath Wrought: Since Judge Rakoff announced his decision from the bench on February 10, the legal internet has been agog, presumably because many lawyers fear that their clients’ unsupervised legal research through an open AI platform could be discoverable.
For those not familiar with the underlying facts, Heppner was being investigated for securities and wire fraud. After being served with a grand jury subpoena, Heppner used Anthropic's AI platform, Claude, to answer legal questions and generate related documents. But there were two problems with Heppner's use of Claude. First, he made those queries and generated those documents neither at the direction of nor in consultation with an attorney; in other words, he did it on his own. Second, and more problematically, he did so in an open AI platform–meaning, those queries and documents were governed by Anthropic's Privacy Policy.
After Heppner’s devices were seized, his lawyer objected to the government’s review of the documents generated through Claude, citing the attorney/client privilege and the attorney work product doctrine. At a pretrial conference on February 10, which was subsequently memorialized by his February 17 opinion, Judge Rakoff dismantled those objections under a relatively straightforward legal analysis. Essentially, Judge Rakoff found that there was no privilege or protection because Heppner generated the documents on his own without direction from his attorney.
Judge Rakoff also found that the communications memorialized in the AI documents were not confidential because Heppner did not have a reasonable expectation of privacy when he used that tool. As Judge Rakoff noted, under Anthropic's written Privacy Policy, Anthropic collects data on both users' "inputs" and Claude's "outputs" to "train" Claude. In addition, Judge Rakoff emphasized that, under those terms, "Anthropic reserves the right to disclose such data to a host of 'third parties,' including 'governmental regulatory authorities.'"
But for trade secret owners, Judge Rakoff’s brief discussion of Heppner’s waiver of privilege in a footnote is particularly noteworthy. Specifically, Judge Rakoff found in Footnote 3 that “even if certain information that Heppner input into Claude was privileged, he waived the privilege by sharing that information with Claude and Anthropic, just as if he had shared it with any other third party.” (A shout out to my panel, particularly Professor Matthew D’Amore for noting this in our recent presentation to the New York City Bar Association on AI and trade secrets). While not determinative of Judge Rakoff’s opinion, this language would mean that information shared or generated with an open AI platform would lose its confidentiality under those terms of service.
A pro se plaintiff pays the price for using ChatGPT: Judge Tigar's reasoning in the Trinidad case directly addressed the impact of an AI platform's terms of service on a trade secret owner's claims (credit to Thompson Hine's quarterly trade secret update for bringing this case to my attention). In that case, the plaintiff, Rebecca Trinidad, sued OpenAI, alleging that OpenAI stole her protocols and templates, which OpenAI then allegedly adopted and commercialized with its industry partners.
After Trinidad’s flurry of motion practice (a request for TRO, two motions for a preliminary injunction, a motion for sanctions, and a motion to add OpenAI founder Sam Altman as a defendant), OpenAI moved to dismiss her claims, arguing that as to her trade secret claim, Trinidad had agreed to OpenAI’s Terms of Use which stated that any “input or Output from its Services may be used in connection with [OpenAI’s] Services.”
Judge Tigar agreed and dismissed her case. On her DTSA claim, Judge Tigar found that she had failed to allege that "she took any reasonable measures to keep her 'protocols and frameworks' secret." Because she admitted that she developed these frameworks using an OpenAI product, ChatGPT, he found that Trinidad would have been required to "voluntarily share" the information she alleged was her trade secrets with OpenAI. "Because she 'disclose[d] [her] trade secret to others who are under no obligation to protect the confidentiality of the information, . . . [her] property right is extinguished.'"
Trinidad argued that she owned the output of her conversations with ChatGPT under OpenAI’s Terms of Use. But Judge Tigar held that establishing ownership alone was not sufficient or relevant to the question of whether the information was secret. He also rejected her argument that the Terms of Use qualified as a contract of adhesion, reminding her that the relevant inquiry was whether she took reasonable measures to keep the frameworks secret and that she consented to their disclosure by accepting those terms.
Takeaways. Three come to mind. First, trade secret owners–and their employees–risk waiving confidentiality (and potentially trade secret protection) when using an "open" or consumer AI platform in connection with the development of any confidential projects or inventions. Judge Rakoff concluded that Heppner had no expectation of privacy (or confidentiality) because Anthropic's privacy policy warned that Anthropic could disclose data to "governmental regulatory authorities" and "third parties." Likewise, Judge Tigar concluded that Trinidad failed to protect her alleged trade secrets because she shared them with OpenAI, which he held provided no commitment to confidentiality under its terms of use.
These risks should come as no surprise. Since ChatGPT's introduction in November 2022, lawyers have warned of the danger posed by inputting otherwise confidential information into an AI platform's LLM. That's because the information might be used to train the program, to answer the questions of others using the program, or for any other purpose deemed appropriate by the AI's creators, creating a risk of waiver.
One would expect a different analysis if a company has a private or customized enterprise LLM. But of course, the terms of the agreement with the LLM provider will determine whether the trade secret owner has adequately safeguarded the information. If, for example, the agreement reserves the right of the AI company to provide information to the government or other third parties, confidentiality could be deemed to have been waived consistent with Judge Rakoff’s analysis. So, hypothetically at least, a robust negotiation of those terms ensuring confidentiality, deletion, limited dissemination, etc., will be necessary.
Second, it's important to remember that while terms of service may taketh away, they giveth to other trade secret owners. Take the OpenEvidence v. Doximity case pending in the U.S. District Court for the District of Massachusetts as an example. OpenEvidence, the creator of a popular generative AI tool for healthcare professionals and patients, asserted that Doximity misappropriated its trade secrets when Doximity violated OpenEvidence's terms of service. OpenEvidence alleged Doximity violated those terms by surreptitiously gaining access using the licensing credentials of at least one physician; having gained access under false pretenses, Doximity then used prompt injection attacks to learn more about the functionality and other features of OpenEvidence's LLM. Indeed, in its initial complaint, OpenEvidence presented a litany of violations of those terms of service, which forbid reverse engineering and gaining access under false pretenses.
Third, from a policy standpoint, should non-negotiable terms of service be outcome-determinative of a trade secret's status or protection, especially as AI becomes such a ubiquitous tool in everyday life? This isn't an easy question to answer.
On the one hand, the Trinidad plaintiff wasn't exactly a model trade secret owner. Judge Tigar noted that she had multiple disputes with other AI companies, including one which another court characterized as "frivolous." And her motion practice in the OpenAI case reinforced that status (seeking to add Sam Altman as a defendant, etc.). Finally, as OpenAI's motion to dismiss made clear, her complaint suffered from multiple infirmities. So it's easy to understand why Judge Tigar ruled the way he did on the issue of terms of service, which was a clean way to dispose of an otherwise messy complaint.
But on the other hand, some courts have been cognizant of overreaching when asked to enforce a website’s terms of service. For example, if a platform’s terms of service were to forbid a user from competing against it, would a court enforce it? At least one federal court has ruled that it would not. In TopstepTrader, LLC v. OneUp Trader, LLC, Case No. 17 C 4412 (N.D. Illinois June 28, 2017), the U.S. District Court for the Northern District of Illinois found that a website provider was attempting to use its terms and conditions to improperly restrict competition under Illinois non-compete law.
And here are some other questions worth considering. How far are courts willing to go to find waiver through the use of an online tool if its terms of service reserve the right to share information with other parties? And shouldn't it matter that information shared with the platform is generally intended for a machine rather than a fellow human being? (Again, excellent points made by my outstanding NYCBA panel of Matthew D'Amore, Jim Ko and Dean Pelletier that bear repeating in this policy context). And, finally, is Claude really a third party?
One of the hallmarks of trade secret law is its flexibility, particularly when deciding whether a trade secret owner's efforts to safeguard its trade secrets were reasonable under the circumstances. Perhaps a less mechanical approach, one that considers the sophistication and resources of a party and the extent of the information actually shared or generated, should be applied before an AI platform's terms of service are strictly enforced to dispose of a trade secret claim.