It starts innocently enough: You’ve been meaning to update your will for months but haven’t had time to meet with your attorney. You’ve heard how helpful artificial intelligence (AI) can be at everyday tasks, so you ask an AI chatbot about making a basic will. Before you know it, you’re asking about family conflict and disinheriting someone. Eventually, you paste in something sensitive — a memo from your attorney — and ask the AI to explain it “in plain English.”
That convenience comes with a hidden risk: You may be creating a detailed digital record about your private intentions. Unlike a conversation with your attorney, that record may not be confidential — and in the wrong situation, it could be requested and used later in a dispute.
When AI Conversations Stop Being Private
The law often struggles to keep pace with technology. It can take years for courts to decide how old rules apply to newer tools. But once a court does weigh in, that decision can shape what happens to everyday records, including AI chats, when there’s a dispute.
In U.S. v. Heppner, Bradley Heppner used a generative AI platform to assist in his legal strategy amid a criminal investigation. He prepared detailed reports outlining his defense strategies, factual arguments, and legal theories — much like the sensitive “brainstorming” someone may engage in when planning their estate.
When federal agents seized Heppner’s electronic devices, they discovered dozens of documents detailing these AI exchanges. Heppner’s legal team fought to keep them private, arguing they were protected by attorney-client privilege.
The court disagreed. Its ruling established several critical principles that every “DIY estate planner” should note:
- AI is not an attorney. An AI platform is a third-party commercial tool, not a legal professional. Since Heppner was not communicating with a licensed lawyer, the confidentiality required for attorney-client privilege was not present.
- The privacy policy trap. The judge pointed to the AI provider’s own privacy policy, which stated that the company collects user “inputs” to train its models and reserves the right to disclose that data to third parties or government authorities. By clicking “Accept,” Heppner agreed to those terms, so he had no “reasonable expectation of confidentiality.”
- No retroactive protection. Heppner later shared his AI-generated materials with his attorneys. However, the court ruled that doing so did not retroactively make them privileged. If a document is created outside a protected relationship, involving an attorney afterward does not change its status.
- The work product limitation. Because Heppner used the AI independently rather than at his attorney’s direction, the materials were treated like ordinary records (meaning the other side could request them and use them as evidence).
Although this case wasn’t about wills, the underlying principles are equally applicable to estate planning: If you share sensitive details with a third-party AI tool, you should assume those records may not stay private.
Estate Planning Implications of Artificial Intelligence
The Heppner ruling is a game-changer for estate planning in more ways than one.
If you use a public AI to brainstorm your will or summarize your attorney’s advice, you aren’t just talking to a helpful assistant. You are voluntarily disclosing your private intentions to a third-party platform that can collect, store, and potentially share your data.
In a potential future dispute over your estate, those “private” logs may not be protected by attorney-client privilege. Disgruntled heirs or creditors may request, review, and use them to challenge your plan.
Post-Heppner, treating AI-assisted estate planning like a “DIY” experiment can turn private chats into a discoverable digital paper trail, one that could undermine even a well-intentioned estate plan in the following ways:
The “Loss of Privacy” Trap
The Heppner ruling established that AI platforms are third-party commercial entities, legally distinct from legal counsel.
- The action: Pasting an email from your estate planning lawyer into an AI tool to “translate it into plain English.”
- Why it can backfire: Sharing an attorney’s advice with a third party can waive the attorney-client privilege that normally protects those communications. In a lawsuit, someone may try to obtain both the AI transcript and the original email from your lawyer.
The “Were They Thinking Clearly?” Paper Trail
Disgruntled heirs sometimes challenge a will by claiming the person wasn’t of sound mind.
- The action: Repeatedly asking the AI the same questions, or typing prompts that show confusion about your family members, accounts, or property.
- Why it can backfire: If someone later challenges your will, those messages could be used as evidence that you were confused or not fully able to make decisions at the time.
The “Who Was in the Room?” Digital Fingerprints
Heirs may claim a sibling “pressured” a parent to change their will.
- The action: Using an AI tool to revise a will while at a specific family member’s house or on their device.
- Why it can backfire: AI logs may include timestamps and, depending on the platform, location data. That information may fuel claims that the family member pressured you to change your plan.
The “Profile” Problem
Modern AI does more than store what you say. It builds a profile of who you are.
- The action: Engaging with AI in a way that suggests emotional vulnerability, depression, or high suggestibility.
- Why it can backfire: Some services use your inputs to personalize responses. That can create additional records about your behavior and preferences that you never meant to become part of a legal dispute.
The “State Law Hallucination” Trap
- The action: Asking an AI to “write a valid will for Florida” (or any state), printing it, and then signing it.
- Why it can backfire: AI tools can “hallucinate,” confidently producing language that looks official but doesn’t satisfy your state’s requirements. Will execution rules are strict and vary by state (witness requirements, notarization, and signing formalities all differ). If the document misses a required step or uses the wrong wording, it could be challenged, and your family could pay the price in delays and legal fees.
The “Hiding Assets” Trap
Courts have little patience for “bad faith” attempts to shield assets from legal obligations, and AI prompts can turn suspicion into documented evidence of intent.
- The action: Asking an AI how to move money so an ex-spouse, creditor, or government program “can’t find it.”
- Why it can backfire: These prompts could be used as evidence of intent to dodge a legal obligation. If your asset transfers are later challenged, your own words may be used to argue you acted improperly. Deleting a chat doesn’t always mean it’s gone.
Hopefully by now, it’s clear that “casual” AI prompts could be a serious liability to your estate plan.
If you use artificial intelligence as an educational tool, treat it like a public place. Assume what you type could be saved, reviewed, or shared later. In the wake of Heppner, here is a good rule of thumb: Never type into an AI tool anything you wouldn’t want a judge to read.
Put your trust in the experienced estate planning attorneys at Kurre Schneps to prepare a comprehensive estate plan for you, including a will. Contact us today for a consultation.