AI and Your Listing Data — What's Worth Understanding
AI has real value in real estate work. The trade-offs are also real, and most agents — and most sellers — have not had a reason to look at them closely. An awareness reference, not a checklist.
AI tools have become part of how real estate gets done. Drafting, summarizing, analyzing comps, extracting items from inspection reports, dictating notes after a showing — there is genuine value here, and the agents using these tools are not doing anything wrong by using them. What is worth understanding is that AI shows up in several different ways, each with its own trade-off, and the default settings on most consumer tools are not what most people would choose if they had thought about them.
This page is a reference. It explains the four modes of AI use that appear in agent workflows, what each one does with your data, and what The Cyr Team does with each mode in its own practice. It is not a script to send your agent. It is not a list of failures to look for. It is what we would tell a friend asking how to think about this.
The Four Modes of AI Use
Almost every AI tool an agent might use fits into one of these four categories. The differences are easier to see once you have names for them.

- Mode 1 (consumer chat, training on): the default for free and personal accounts; conversations may be used to train future models.
- Mode 2 (consumer chat, training off): the same chat window with the training toggle turned off; conversation history still lives in the account.
- Mode 3 (desktop agentic AI): tools with permission to read local files, browse the web with the user's logins, and take multi-step actions.
- Mode 4 (commercial API access): purpose-built software calling the model under commercial terms, with no training and short, scoped retention.
The rest of the page refers to these by name. When something says "consumer chat" it means modes 1 and 2. "Desktop agent" means mode 3. "API access" means mode 4. Most agents using AI are in modes 1 or 2. Most professional real estate software runs on mode 4. Mode 3 is the newest, smallest, and most discussed in security circles.
Training and Retention
The setting that matters most is the model-training toggle. In Claude, it is at Settings → Privacy → Help improve Claude; off means new conversations are not used for training. In ChatGPT, it is at Settings → Data Controls → Improve the model for everyone. Both default to on. Both are also forward-looking only: anything already used in a training run cannot be removed retroactively, which is why turning the toggle off early matters more than turning it off perfectly.
Verified: Claude trains on consumer conversations by default, and has since September 2025. Claude Free, Pro, and Max accounts train on conversations unless the user toggles off Help improve Claude. Opted-in data is retained for five years, a sixty-fold increase over the prior 30-day retention. Claude for Work, Enterprise, Government, and Education plans do not train on user data and are explicitly excluded under the commercial terms.
Verified: ChatGPT likewise trains on consumer inputs by default. Free, Plus, and Pro accounts train on inputs unless the user disables Improve the model for everyone under Data Controls. ChatGPT Enterprise, Team, and API platform accounts do not train on data by default under commercial contracts. One quirk worth knowing: giving a thumbs-up or thumbs-down on a response may opt that specific conversation into training even when the global toggle is off.
Deleted or opted-out data can still be in a model if the conversation was already included in a training run before the deletion or opt-out. Both Anthropic and OpenAI state that opting out is forward-looking only. Practically, this means turning off training matters most when done early; data processed under default settings during a training cycle may already be in a model. There is no way to retrieve it, but knowing where the boundary sits is useful.
Desktop Agentic AI
Desktop agentic AI tools — Claude Cowork, ChatGPT Agent, and Atlas browser — run on the computer with permission to read local files, browse the web using the user's session cookies, and take multi-step actions. They are a different category from chat windows. The attack surface includes anything in folders they can access and anything on web pages they visit. The training question is separate from the agentic risk; both apply, in different ways.
Verified: prompt injection is an attack in which a document or website carries hidden instructions that hijack the AI's intent. In March 2026, researchers demonstrated that Claude Cowork could be tricked into sending local files to an attacker using hidden text in a PDF; CVE-2025-59536 and CVE-2026-21852 document the issue. The risk applies to any AI tool that reads untrusted content. It is not unique to Cowork, but it is most pronounced when the AI has filesystem access.
A browser extension in this category reads browser content, including web pages open in other tabs. If the MLS backend, a private CRM, or a transaction management system is open while the extension is active, that content is within the AI's reach. Anthropic's own guidance recommends limiting extension access to trusted sites and excluding sensitive platforms. It is worth knowing what is open in the browser whenever extension AI is in use.
Commercial API Access
When a purpose-built tool calls Anthropic's or OpenAI's API under commercial terms, the data is not used to train models, and retention defaults to seven days at Anthropic (down from 30 as of September 2025) and can be as short as zero days on OpenAI's enterprise tier. Access is scoped to that one tool's function, so there is no chat-history surface accumulating conversations. It is a different category of AI use from typing into a chat window.
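The scoping in this mode is architectural: the credential lives on the tool's server, and the browser never holds it. A minimal sketch of that pattern, assuming a Python backend; the function name and the MODEL_API_KEY variable are illustrative placeholders, not any vendor's required setup:

```python
import os


def server_side_api_key() -> str:
    """Fetch the model API key from the server's environment.

    The key never reaches the browser: clients call this backend,
    the backend calls the model provider, and only the response
    travels back to the client.
    """
    key = os.environ.get("MODEL_API_KEY")  # hypothetical variable name
    if not key:
        # Fail loudly rather than accepting a client-supplied key.
        raise RuntimeError("MODEL_API_KEY must be set on the server")
    return key
```

The design choice is the point: because the key exists only server-side, a compromised or curious client has nothing to exfiltrate, and the provider's commercial retention terms apply to every request regardless of who is using the front end.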
Training-off consumer chat and commercial API access are not equivalent, though both are much better than default consumer settings. The former still retains conversation history in the user's account, visible to anyone with that login. The latter has no conversation-history surface, shorter retention by default, and binds the provider to commercial contract terms. They mitigate different parts of the same overall exposure. Both are useful. Neither is invulnerable.
Practical Handling
Some information warrants stricter handling in any mode:
- Exact target prices and walk-away numbers — ranges are usually enough
- Gate or alarm codes, security system details
- Vacancy schedules, owner travel plans, contractor access windows
- Mortgage payoff figures and net-to-seller calculations
- Personal context — divorce, estate, health, relocation timing
This is the data most worth keeping out of any AI conversation entirely, or, when it must come up, worth referring to in summary terms rather than specifics.
M365 Copilot for Business does not train on tenant data and operates within the customer's Microsoft 365 compliance boundary, which puts it closer to mode 4. As of early 2026, Microsoft also ships Claude Cowork inside M365 Copilot Cowork — which inherits mode 3's desktop-agentic risk profile separately from the training question. Consumer Copilot Free is a different product. Knowing which version is in use matters.
Documents shared with other agents cannot be technically protected. Once a PDF is in another agent's possession, no mechanism blocks them from uploading it to any AI tool. The practical levers are sending the minimum necessary information, transmitting it through a workflow that records acknowledgment of confidentiality terms, and recognizing that the receiving agent's own practices are now part of the exposure. The acknowledgment matters more as documentation than as prevention.
Context: a pre-listing inspection multiplies the exposure because the same report goes to many agents over the listing period. Practical approaches include tiered disclosure, with a summary version for early interest and the full report only for agents on active offers, plus a confidentiality acknowledgment built into the request workflow. Neither prevents upload, but both create documentation and signal intent to careful agents.
Talking With an Agent
Most agents have not examined their AI tools closely. Real estate is not centrally regulated for AI tool use, and agents typically operate as independent contractors choosing their own software, so default settings are common. This is not a failing of any individual agent; it is the normal state of a fast-moving consumer technology meeting a regulated profession that has not caught up. The question is not whether an agent has audited every tool, but whether they have a framework for thinking about it.
Asking an agent about their AI use can be a useful conversation when it is framed as understanding rather than challenge. An agent who can describe which AI tools they use, on what account tier, and how they handle confidential client data is one who has thought about it. The specific answer matters less than whether they have one. Any drafting of contract language around AI use should go through licensed counsel; The Cyr Team is not a law firm.
No setup is completely safe. Every mode of AI use involves trade-offs between capability, convenience, and exposure. The honest standard is not invulnerability; it is whether the agent has thought about the trade-offs, made deliberate choices, and can describe what they actually do. That standard is more useful than any checklist because it accounts for how the technology keeps changing. Awareness, not perfection.
How The Cyr Team Uses AI
AI is part of how we work. We use it because it has real value — drafting market analyses, extracting items from inspection reports, dictating call notes, summarizing comps, building tools that did not exist a year ago. We do not pretend it does not touch client information, because that would not be true.
What we do across the four modes:
- Mode 1 (consumer, training on): we do not use this for any client work.
- Mode 2 (consumer, training off): this is where our document drafting work happens — listing presentations, market analyses, content. Training is verified off. Account security is maintained. The conversation history surface is its main residual exposure.
- Mode 3 (desktop agentic): we do not use Cowork, ChatGPT Agent, or Atlas browser on client data. The risk profile is not one we have chosen to take on.
- Mode 4 (commercial API): all our deployed tools — offer analyzer, listing intake, inspection workflow, FUB call logger — run here. API keys are server-side, never client-visible. This is the mode that scales with the least exposure.
None of this is invulnerable. Mode 2 still has account-level exposure. Mode 4 still has the 7-day API retention window. We make these choices deliberately, we explain them when asked, and we update our practices as the technology and the policies change. The standard we hold ourselves to is awareness — not perfection — because perfection is not available in this category.
Primary sources verified for this page
- Anthropic — Updates to Consumer Terms and Privacy Policy, September 2025
- Anthropic Help Center — Use Claude Cowork Safely
- OpenAI — Data Controls documentation, 2026
- PromptArmor — Claude Cowork Exfiltrates Files (research disclosure)
- MintMCP — Claude Cowork File Exfiltration Vulnerability, March 2026
- Harmonic Security — Securing Claude Cowork: A Security Practitioner's Guide
- TechCrunch — Anthropic users face a new choice, August 2025
- CVE-2025-59536, CVE-2026-21852 — Claude Code prompt injection / API allowlist
Last updated May 12, 2026. The AI landscape continues to change. The Cyr Team revises this page quarterly or when material vendor policies shift.