For Sellers · AI & Your Listing Data

AI and Your Listing Data — What's Worth Understanding

AI has real value in real estate work. The trade-offs are also real, and most agents — and most sellers — have not had a reason to look at them closely. This is an awareness reference, not a checklist.

AI tools have become part of how real estate gets done. Drafting, summarizing, analyzing comps, extracting items from inspection reports, dictating notes after a showing — there is genuine value here, and the agents using these tools are not doing anything wrong by using them. What is worth understanding is that AI shows up in several different ways, each with its own trade-off, and the default settings on most consumer tools are not what most people would choose if they had thought about them.

This page is a reference. It explains the four modes of AI use that appear in agent workflows, what each one does with your data, and what The Cyr Team does with each mode in its own practice. It is not a script to send to your agent. It is not a list of failures to look for. It is what we would tell a friend asking how to think about this.

The shorter version. AI in real estate is not one thing — it is at least four things, and they handle your data differently. Consumer chat windows on default settings train on conversations. Desktop AI tools that read your files have a separate set of risks. Commercial API access is different again. The mode matters more than the brand name.

The Four Modes of AI Use

A vocabulary for the rest of the page

Almost every AI tool an agent might use fits into one of these four categories. The differences are easier to see once you have names for them.

1. Consumer chat, training enabled
The default state for ChatGPT Free/Plus/Pro and Claude Free/Pro/Max accounts since 2025. Conversations are used to train future models with multi-year retention. Most consumer AI users are in this state without realizing it. Trade-off: maximum convenience, maximum exposure.
2. Consumer chat, training disabled
The same chat interface with the model-training toggle turned off. Conversations are no longer used for training. Account history still exists and is visible to anyone with that login. Standard 30-day post-deletion retention applies. Trade-off: same interface, materially reduced exposure, account-level controls still matter.
3. Desktop agentic AI
Tools like Claude Cowork, ChatGPT Agent, and Atlas browser that run on the computer with permission to read local files, browse the web, and take multi-step actions. The training question is separate from the agentic risk — these tools have prompt-injection and file-exfiltration vulnerabilities documented in published CVEs. Trade-off: powerful automation, broad attack surface.
4. Commercial API access
Purpose-built tools that call Anthropic's or OpenAI's API under commercial terms. Inputs and outputs are not used for training. Retention is short. Access is scoped to that one tool's function. This is the most contained mode and the one most professional software is built on. Trade-off: requires purpose-built tooling, not something an individual agent runs on their laptop.

The rest of the page refers to these by name. When something says "consumer chat" it means modes 1 and 2. "Desktop agent" means mode 3. "API access" means mode 4. Most agents using AI are in modes 1 or 2. Most professional real estate software runs on mode 4. Mode 3 is the newest, smallest, and most discussed in security circles.

Training and Retention

Where conversations go · 4 questions
1. What is the most important AI setting on a consumer Claude or ChatGPT account?

The model-training toggle. In Claude, it is found at Settings → Privacy → Help improve Claude — off means new conversations are not used for training. In ChatGPT, it is Settings → Data Controls → Improve the model for everyone. Both default to on. Both are forward-looking only — anything already used in a training run cannot be removed retroactively, which is why turning it off early matters more than turning it off perfectly.

2. Does Anthropic use Claude conversations to train future AI models?

Verified: On consumer plans, yes, by default since September 2025. Claude Free, Pro, and Max train on conversations unless the user toggles off Help improve Claude. Opt-in data is retained for five years — a sixty-fold increase over the prior 30-day retention. Claude for Work, Enterprise, Government, and Education plans do not train on user data and are explicitly excluded under commercial terms.

3. Does ChatGPT train on conversations?

Verified: On consumer plans, yes, by default. ChatGPT Free, Plus, and Pro train on inputs unless the user disables Improve the model for everyone in Data Controls. ChatGPT Enterprise, Team, and API platform accounts do not train on data by default under commercial contracts. One quirk worth knowing: providing thumbs-up or thumbs-down feedback on a response may opt that specific conversation into training even when the global toggle is off.

4. Can deleted conversations still appear in a training run?

Yes, if the conversation was already included in a training run before deletion or opt-out. Both Anthropic and OpenAI state opt-out is forward-looking only. Practically, this means turning off training matters most when done early. Data already processed under default settings during a training cycle may already be in a model — there is no way to retrieve it, but knowing the boundary is useful.

Desktop Agentic AI

Mode 3, in more detail · 3 questions
5. What is desktop agentic AI and how is it different from chat?

Desktop agentic AI tools — Claude Cowork, ChatGPT Agent, and Atlas browser — run on the computer with permission to read local files, browse the web using the user's session cookies, and take multi-step actions. They are a different category from chat windows. The attack surface includes anything in folders they can access and anything on web pages they visit. The training question is separate from the agentic risk; both apply, in different ways.

6. What is prompt injection and why does it come up?

Verified: Prompt injection is when a document or website contains hidden instructions that hijack what the AI does. In March 2026, researchers demonstrated that Claude Cowork could be tricked into sending local files to an attacker using hidden text in a PDF. CVE-2025-59536 and CVE-2026-21852 documented the issue. The risk applies to any AI tool that reads untrusted content — not unique to Cowork, but most pronounced when the AI has filesystem access.

7. Is the Claude in Chrome extension a separate consideration?

Yes. The extension reads browser content including web pages open in tabs. If the MLS backend, a private CRM, or transaction management system is open while the extension is active, that content is within reach of the AI. Anthropic's own guidance recommends limiting extension access to trusted sites and excluding sensitive platforms. Worth knowing what is open in the browser when extension AI is in use.

Commercial API Access

Mode 4, the one purpose-built tools use · 2 questions
8. How is commercial API access different from consumer chat?

When a purpose-built tool calls Anthropic's or OpenAI's API under commercial terms, the data is not used to train models, and retention defaults to seven days at Anthropic (down from 30 in September 2025) and as little as zero on OpenAI's enterprise tier. Access is scoped to that one tool's function — there is no chat history surface accumulating conversations. It is a different category of AI use from typing into a chat window.

9. Are training-disabled consumer accounts and commercial API access equivalent?

No, though both are much better than default consumer settings. Training-disabled consumer chat still retains conversation history in the user's account, visible to anyone with that login. Commercial API access has no conversation history surface, shorter retention by default, and is bound by commercial contract terms. They mitigate different parts of the same overall exposure. Both are useful. Neither is invulnerable.

Practical Handling

What is worth thinking about · 4 questions
10. What types of information warrant the most careful handling, regardless of AI mode?

Some information warrants stricter handling in any mode:

  • Exact target prices and walk-away numbers — ranges are usually enough
  • Gate or alarm codes, security system details
  • Vacancy schedules, owner travel plans, contractor access windows
  • Mortgage payoff figures and net-to-seller calculations
  • Personal context — divorce, estate, health, relocation timing

This is the data most worth keeping out of any AI conversation, or referring to in summary terms rather than specifics.

11. Microsoft 365 Copilot — where does it fit?

M365 Copilot for Business does not train on tenant data and operates within the customer's Microsoft 365 compliance boundary, which puts it closer to mode 4. As of early 2026, Microsoft also ships Claude Cowork inside M365 Copilot Cowork — which inherits mode 3's desktop-agentic risk profile separately from the training question. Consumer Copilot Free is a different product. Knowing which version is in use matters.

12. If a buyer's agent receives my inspection report, can I prevent them from using AI on it?

Not technically. Once a PDF is in another agent's possession, no mechanism blocks them from uploading it to any AI tool. The practical levers are sending the minimum necessary information, transmitting through a workflow that records acknowledgment of confidentiality terms, and recognizing that the receiving agent's own practices are now part of the exposure. The acknowledgment matters more for documentation than for prevention.

13. What about pre-listing inspection reports shared with multiple buyer agents?

Context: A pre-listing inspection multiplies the exposure because the same report goes to many agents over the listing period. Practical approaches include tiered disclosure — a summary version for early interest, the full report only for agents on active offers — plus a confidentiality acknowledgment built into the request workflow. Neither prevents upload, but both create documentation and signal intent to careful agents.

Talking With an Agent

If it raises a question for you · 3 questions
14. Have most agents audited their AI tools?

Most have not. Real estate is not centrally regulated for AI tool use, and agents typically operate as independent contractors choosing their own software. Default settings are common. This is not a failing of any individual agent — it is the normal state of a fast-moving consumer technology meeting a regulated profession that has not caught up. The question is not whether an agent has audited every tool, but whether they have a framework for thinking about it.

15. Is it worth discussing AI use with an agent before signing a listing agreement?

It can be a useful conversation, framed as understanding rather than challenge. An agent who can describe which AI tools they use, on what account tier, and how they handle confidential client data is one who has thought about it. Their specific answer matters less than whether they have one. Any drafting of contract language around AI use should go through licensed counsel — The Cyr Team is not a law firm.

16. Is any AI setup in real estate fully secure?

No. Every mode of AI use involves trade-offs between capability, convenience, and exposure. The honest standard is not invulnerability — it is whether the agent has thought about the trade-offs, made deliberate choices, and can describe what they actually do. That standard is more useful than any checklist because it accounts for how the technology continues to change. Awareness, not perfection.

How The Cyr Team Uses AI

The honest accounting

AI is part of how we work. We use it because it has real value — drafting market analyses, extracting items from inspection reports, dictating call notes, summarizing comps, building tools that did not exist a year ago. We do not pretend it does not touch client information, because that would not be true.

What we do across the four modes:

  • Mode 1 (consumer, training on): we do not use this for any client work.
  • Mode 2 (consumer, training off): this is where our document drafting work happens — listing presentations, market analyses, content. Training is verified off. Account security is maintained. The conversation history surface is its main residual exposure.
  • Mode 3 (desktop agentic): we do not use Cowork, ChatGPT Agent, or Atlas browser on client data. The risk profile is not one we have chosen to take on.
  • Mode 4 (commercial API): all our deployed tools — offer analyzer, listing intake, inspection workflow, FUB call logger — run here. API keys are server-side, never client-visible. This is the mode that scales with the least exposure.

None of this is invulnerable. Mode 2 still has account-level exposure. Mode 4 still has the 7-day API retention window. We make these choices deliberately, we explain them when asked, and we update our practices as the technology and the policies change. The standard we hold ourselves to is awareness — not perfection — because perfection is not available in this category.

Not sure where your situation fits?

That's fine — most situations have overlap. Tell us what's going on and we'll point you in the right direction.

Or call (484) 259-7910

Primary sources verified for this page

  • Anthropic — Updates to Consumer Terms and Privacy Policy, September 2025
  • Anthropic Help Center — Use Claude Cowork Safely
  • OpenAI — Data Controls documentation, 2026
  • PromptArmor — Claude Cowork Exfiltrates Files (research disclosure)
  • MintMCP — Claude Cowork File Exfiltration Vulnerability, March 2026
  • Harmonic Security — Securing Claude Cowork: A Security Practitioner's Guide
  • TechCrunch — Anthropic users face a new choice, August 2025
  • CVE-2025-59536, CVE-2026-21852 — Claude Code prompt injection / API allowlist

Last updated May 12, 2026. The AI landscape continues to change. The Cyr Team revises this page quarterly or when material vendor policies shift.