
NDAs in the AI Era: What Your Confidentiality Clause Misses

By Waleed Hamada


Someone on your team pasted a confidential client brief into an AI writing tool to help draft a proposal. It saved them forty minutes. It may have violated your NDA with that client. The person who did it almost certainly did not know that. The NDA almost certainly does not address it explicitly. And the AI tool almost certainly used the input to improve its model.

This is not a theoretical concern. It is a daily operational reality for businesses using AI productivity tools alongside confidentiality obligations that were written before those tools existed. The non-disclosure agreement is one of the most commonly executed legal documents in commercial life. It is also one of the most outdated in the face of how work is actually done in 2026. Understanding what a modern NDA needs to say, and why, is not a matter of legal pedantry. It is a matter of knowing whether you are actually protected.

What an NDA Is and What It Actually Protects

A non-disclosure agreement is a contract in which one or both parties agree not to disclose specified information to third parties and to use that information only for agreed purposes. The legal protection it provides depends on four things: the precision of the definition of confidential information, the clarity of the permitted use limitations, the enforceability of the remedy provisions, and the ability to prove a breach has occurred.

Each of these elements is challenged by the AI environment in a specific way. Vague definitions of confidential information that were workable in a pre-AI context may fail to cover derivative works, AI outputs, or model training data generated from the protected information. Permitted use limitations that do not address AI processing create ambiguity about whether using a tool to assist in processing confidential data constitutes a permitted use or an unauthorized disclosure. Remedy provisions designed for traditional information breaches may not map cleanly onto AI-mediated disclosure. And the ability to prove a breach is complicated by the fact that AI-mediated disclosure may be invisible, untraceable, and probabilistic rather than discrete.

The AI Tool Problem: Three Ways Confidential Information Leaves the Room

The risk that AI tools create for NDA compliance is not uniform. It manifests in three distinct mechanisms, each with different legal implications and different practical responses.

Model Training Data Ingestion

Many general-purpose AI tools, including some versions of widely used large language model interfaces, use user inputs to improve the underlying model unless the user has explicitly opted out or subscribed to an enterprise tier with contractual data use restrictions. When confidential information is entered into such a tool, it may become part of the training data that makes the model more capable. That information may, under certain prompting conditions, be reproduced in responses to entirely different users who have no connection to the original disclosure. This is not a hypothetical risk. Memorization in large language models has been documented in academic research, and the extraction of training data through adversarial prompting has been demonstrated in practice.

Third-Party Server Storage

Even AI tools that do not use inputs for training typically store them on the provider’s servers, at least transiently. Most standard NDA definitions of disclosure encompass sharing information with third parties. The AI tool provider is a third party. Transmitting confidential information to the provider’s servers, even for the purpose of processing a query and discarding the data afterward, may constitute disclosure under the NDA’s definition. Whether it does depends entirely on the NDA’s specific language and the applicable governing law’s interpretation of disclosure.

AI-Generated Derivative Works

The third mechanism is perhaps the least understood. If a receiving party uses confidential information as input to an AI tool and receives an output, and that output is then shared with others, the output may itself contain or substantially reflect the confidential information even though it does not quote it directly. A market analysis generated by an AI trained on confidential competitive intelligence. A product specification derived from a confidential technical brief. A pitch deck developed using an AI tool that processed confidential financial projections. Whether these outputs are themselves confidential information under the NDA, and whether the act of generating them constitutes a breach of the use limitation clause, depends on the NDA’s language in ways that most standard forms do not address.

Confidential information does not only leave the room when someone forwards an email. It leaves when someone types it into a tool that has not been contractually restricted from using it.

Legal Chain Editorial Team

What Standard NDA Language Says and What It Does Not

A standard NDA typically defines confidential information as any information disclosed by the disclosing party that is marked confidential or that a reasonable person would understand to be confidential given the context of the disclosure. It prohibits disclosure to third parties and limits use of the information to the purposes of the agreement. It includes carve-outs for information that is already publicly available, already known to the receiving party, or independently developed by the receiving party.

None of these standard provisions directly address AI tools. The definition does not specify whether feeding information into an AI tool constitutes disclosure to a third party. The use limitation does not specify whether AI-assisted processing is a permitted use. The carve-outs do not address whether AI-generated outputs based on confidential inputs are independently developed works or derivative disclosures. The remedy provisions do not address the specific challenge of proving and quantifying damage from a model training data breach.

This creates a legal grey zone that is expensive to litigate and easy to avoid with properly drafted agreements. The cost of updating an NDA template to address AI tools is minimal. The cost of litigating an ambiguous AI-related breach is substantial, and the outcome is uncertain given the limited case law in this specific area.

Updating an NDA template to address AI tools is a one-time investment that eliminates recurring legal uncertainty for every agreement signed thereafter.

The Legal Framework: Trade Secrets, Contract Law, and AI

The Defend Trade Secrets Act of 2016 provides federal protection for trade secrets in the United States, supplementing the Uniform Trade Secrets Act adopted by most states. Under the DTSA, a trade secret is information that derives economic value from not being generally known, and that the owner has taken reasonable measures to keep secret. The DTSA does not specifically address AI tools, but its requirement for reasonable protective measures is directly relevant. If a business routinely allows employees to enter trade secret information into AI tools without contractual restrictions or access controls, a court may find that the business has not taken reasonable measures to protect the secrecy of that information, potentially destroying its trade secret status entirely.

This is not an abstract concern. Trade secret litigation frequently turns on whether the plaintiff maintained adequate secrecy protocols. A defendant who can demonstrate that confidential information was routinely exposed to third-party AI tools without restriction has a credible argument that the information was not adequately protected and therefore does not qualify for trade secret protection at all. The NDA is the contractual mechanism that, combined with internal access controls and acceptable use policies for AI tools, demonstrates the reasonable measures required to maintain trade secret status.

In the European Union, the Trade Secrets Directive of 2016 provides similar protection with a comparable reasonable measures requirement. The GDPR’s data minimization and security principles also intersect with NDA obligations in AI contexts: confidential information that includes personal data is subject to GDPR even when processed through an AI tool, and the GDPR’s data processing requirements must be satisfied alongside the NDA’s confidentiality requirements.

What a Modern NDA Must Say About AI

Updating an NDA to address the AI environment does not require a complete redraft. It requires the addition or modification of specific provisions that address the three mechanisms described above. The following provisions represent the minimum additions a modern NDA should include.

The definition of confidential information should be expanded to explicitly include any information derived from, generated by, or based on confidential information, including AI-generated outputs that reflect or are informed by the protected information. This closes the derivative works gap.

The use limitation should be expanded to specify that the receiving party may use AI tools to process confidential information only if those tools operate under contractual terms that prohibit the use of the inputs for model training and that do not permit the tool provider to access or store the confidential information beyond the immediate processing session. This closes the model training and server storage gaps.

A new AI-specific clause should require the receiving party to maintain and enforce an acceptable use policy for AI tools that specifically addresses the handling of the disclosing party’s confidential information, to use only AI tools on an approved list provided to the disclosing party upon request, and to notify the disclosing party promptly if confidential information is inadvertently processed by a non-compliant AI tool.

The remedy provisions should address the specific challenge of proving AI-mediated disclosure. Because the harm from model training data ingestion may be probabilistic and difficult to quantify in traditional damages terms, liquidated damages provisions for AI-specific breaches provide a more practically enforceable remedy than general damages claims that require the disclosing party to prove causation and quantum in a novel legal context.

The standard provisions, their AI-era gaps, and the required additions can be summarized as follows.

Definition of confidential information
Gap: does not cover AI-generated derivatives of protected information.
Addition: include outputs derived from or informed by protected information.

Prohibition on disclosure to third parties
Gap: ambiguous as to whether AI tool providers are third parties.
Addition: treat AI tool providers as third parties; permit them only where data use restrictions apply.

Permitted use limitation
Gap: does not address AI-assisted processing of protected information.
Addition: permit AI tool use only with approved tools subject to no-training restrictions.

Security obligations
Gap: do not address AI tool acceptable use policies.
Addition: require an AI-specific acceptable use policy and an approved tool list.

Remedy provisions
Gap: general damages are difficult to prove for model training breaches.
Addition: liquidated damages for AI-specific breach categories.

The Receiving Party’s Perspective: Compliance in Practice

For the party receiving confidential information under an NDA, the AI era creates compliance obligations that go beyond the legal team. Every employee who uses AI productivity tools in their work needs to understand which categories of information they may and may not process through those tools. That understanding does not come from reading the NDA. It comes from an internal AI acceptable use policy that translates the NDA’s legal obligations into operational guidance.

The practical elements of such a policy include a categorical rule prohibiting the entry of information marked confidential under any active NDA into any AI tool that has not been reviewed and approved by the company’s legal or compliance function, a process for employees to request approval of specific AI tools for specific use cases involving confidential data, and a clear incident notification path for situations where confidential information has been inadvertently processed by a non-compliant tool.
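The three policy elements above (a categorical rule gating confidential data to approved tools, a tool approval list, and an incident notification path) can be sketched in code. This is a minimal illustrative sketch, not a real compliance system: the tool names, policy fields, and the incident-recording step are all hypothetical assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolPolicy:
    # Tools that legal or compliance has approved for confidential data,
    # each assumed to have a contractual no-training guarantee on file.
    approved_tools: set[str] = field(default_factory=set)
    incidents: list[str] = field(default_factory=list)

    def may_process(self, tool: str, is_confidential: bool) -> bool:
        """Categorical rule: NDA-covered material only through approved tools."""
        if not is_confidential:
            return True  # non-confidential data is outside this policy's scope
        return tool in self.approved_tools

    def report_incident(self, tool: str, description: str) -> None:
        """Incident notification path for inadvertent processing by a
        non-compliant tool, supporting the NDA's prompt-notice obligation."""
        self.incidents.append(f"{tool}: {description}")


# Hypothetical usage: one approved enterprise tool, one consumer tool.
policy = AIToolPolicy(approved_tools={"enterprise-llm"})
assert policy.may_process("enterprise-llm", is_confidential=True)
assert not policy.may_process("consumer-chatbot", is_confidential=True)
policy.report_incident("consumer-chatbot", "client brief pasted into free tier")
```

The value of even this trivial structure is the audit trail: an approval list and an incident log are exactly the evidence of systematic compliance effort that supports a later claim of reasonable protective measures.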

This operational infrastructure is what makes an NDA’s AI provisions enforceable from the inside. A company that has the right contractual language but no internal compliance process to support it is still exposed to breach claims, because the breach is more likely to occur and the legal claim of adequate protective measures is harder to sustain without evidence of systematic compliance effort.

An AI acceptable use policy translates NDA legal obligations into operational guidance that every employee can follow. Without it, the contractual protection exists only on paper.

Enforcement: Proving an AI-Related Breach

The enforcement challenge for AI-related NDA breaches is substantial. Traditional NDA breaches leave evidence: forwarded emails, copied documents, testimony from recipients of the disclosed information. An AI-mediated breach may produce none of these. The information was entered into a tool interface. It was processed. The processing logs, if they exist, are in the possession of the AI tool provider, not the parties to the NDA. The model training data, if the information was used for training, is distributed across a model’s weights in a form that cannot be directly extracted and is not readable as discrete information.

This enforcement difficulty makes preventive contractual drafting more important, not less. Because after-the-fact enforcement is technically difficult, the primary value of well-drafted AI provisions in an NDA is deterrence and the creation of clear operational standards that prevent the breach from occurring in the first place. Liquidated damages provisions and mandatory incident notification requirements serve this deterrence function: they make the consequences of a breach quantifiable and the knowledge of a breach discoverable without requiring the disclosing party to independently identify that model training has occurred.

For businesses that want to verify the integrity of their NDA documentation and signing records, the tamper-evident audit trail provided by Legal Chain’s Trust Layer ensures that the signed NDA itself, the version executed by the parties, is preserved in a form that can be independently verified. In an NDA dispute, the starting point is establishing what the agreement actually said and who signed it. A blockchain-anchored document eliminates that threshold dispute immediately, allowing the parties and any court to focus on the substantive question of whether the obligations were breached.

NDAs, AI, and the Startup Context

For startups, NDAs are particularly consequential because the confidential information they most need to protect (technical architecture, business model, customer data, and early financial projections) is precisely the information most likely to be processed through AI tools by employees working at speed without legal support close at hand.

A founder’s agreement, an investor NDA, a technology partnership confidentiality agreement, and an employment NDA for a key technical hire all need to reflect the AI era’s disclosure risks. The cost of drafting these agreements correctly from the start is substantially lower than the cost of discovering mid-series that a key technical secret was inadvertently disclosed through an AI tool that a team member used in good faith. The Legal Chain platform’s contract drafting capabilities support this from the first document, with AI-assisted review that surfaces missing provisions, including AI-specific gaps, before the agreement is executed.

For nonprofits handling confidential donor information, beneficiary data, or grant strategy under confidentiality agreements with funders, the same risks apply with the added dimension of charitable mission exposure. A data breach or confidentiality violation that involves a major funder’s strategic plans can damage not just a single agreement but the organization’s access to future funding. Legal Chain’s nonprofit pricing makes professional-grade NDA drafting and review accessible at rates designed for mission-driven organizations operating without dedicated legal departments.




Frequently Asked Questions

Is entering confidential information into an AI tool a breach of an NDA?

It depends on the NDA’s terms and the AI tool’s data handling practices. Most standard NDAs define disclosure as sharing information with a third party. If the AI tool’s provider uses inputs for model training or stores them on external servers, entering confidential information may constitute disclosure to a third party in breach of the NDA. If the tool operates on-premises or processes data without transmission to external servers, the analysis changes. The specific facts of each situation determine the outcome, and this is a question for qualified legal counsel.

What should a modern NDA say about AI tools?

A modern NDA should explicitly address whether the receiving party may use AI tools to process confidential information, specify which categories of AI tools are permitted or prohibited, require the receiving party to use only AI tools that do not use inputs for model training, require notification if confidential information is inadvertently processed by a non-compliant AI tool, and clarify whether AI-generated outputs derived from confidential information are themselves confidential.

Can an AI model trained on confidential information leak that information?

Yes. Research has demonstrated that large language models can reproduce training data in their outputs under certain prompting conditions, a phenomenon known as memorization. If a model has been trained on confidential information, it may be possible for an adversarial user to extract that information through carefully constructed prompts. This is a recognized risk in AI security research and is one reason why responsible AI providers offer enterprise agreements that prohibit the use of customer inputs for model training.

What is the difference between a mutual and a one-way NDA?

A one-way (unilateral) NDA protects the confidential information of one party only. The disclosing party is protected. The receiving party has no reciprocal protection. A mutual NDA protects the confidential information of both parties. In situations where both parties will share sensitive information, such as a merger discussion or a technology partnership, a mutual NDA is appropriate. In situations where only one party will share information, such as a vendor receiving a client’s trade secrets, a one-way NDA is standard.

How long should an NDA last?

NDA duration varies by context. Confidentiality obligations during the term of a business relationship are typically indefinite for as long as the information remains confidential. Post-termination confidentiality obligations are typically two to five years for general business information and indefinite for trade secrets, which have no defined duration of protection under the Defend Trade Secrets Act in the United States. Overly long confidentiality periods may be unenforceable in some jurisdictions if a court finds them unreasonable.

Can Legal Chain help draft or review an NDA?

Yes. Legal Chain’s AI-powered platform can assist with NDA drafting and review, flagging clauses that deviate from standard, identifying missing provisions, and surfacing potential risk areas. The platform is not a law firm and does not provide legal advice. For NDAs with significant commercial consequences, Legal Chain recommends using the platform as a first-pass review tool and engaging a qualified attorney for final review and advice.


Legal Chain Editorial Team
The Legal Chain Editorial Team covers AI-driven legal technology, electronic signature law, and blockchain-based document integrity. Legal Chain is not a law firm and does not provide legal advice. Always consult a qualified attorney for advice specific to your situation. Learn more about Legal Chain.

Draft NDAs That Reflect the World as It Actually Works.

Legal Chain’s AI-powered platform drafts, reviews, and anchors your confidentiality agreements to a tamper-evident blockchain record. Your NDA is only as strong as the evidence that supports it. Join the free beta today.

