The Cyr Team · How We Use AI
Where We Draw the Line — and Why the Line Exists
The boundary isn't a limitation of AI. It's a deliberate choice about where accountability has to live. A model can synthesize data. It cannot be responsible for the outcome of your transaction. We can.
Research into how consumers use AI in real estate documents a consistent concern: people worry about not knowing when AI is being used versus when a human is making the call. That concern is rational. If AI is producing the recommendation and the agent is just presenting it, the client deserves to know — because the nature of the accountability changes entirely.
This page is our answer to that concern. It explains specifically where AI operates in our practice, where it stops, and what happens at the boundary. It also makes a promise: at any point in the process, you can ask us what came from AI and what came from us. The answer will always be clear.
The line we draw is not about AI's limitations. It's about ours — and about where responsibility has to sit for the relationship to mean anything.
What does AI never do — and why does each boundary exist?
The hub page lists what AI never does. This page explains why each boundary exists — because the reason matters as much as the rule.
AI never negotiates offers or counteroffers
Negotiation requires reading the other party, understanding what they actually need versus what they're saying they need, and making real-time judgment calls under pressure. AI has no access to the conversation happening on the other side of the table. We do — and we're accountable for the outcome of every call we make.
AI never advises on offer strategy
Offer strategy in a specific market, on a specific property, against specific competition requires knowing things AI doesn't know: how this seller is likely to respond, what the listing agent's track record suggests about their client's priorities, what the absorption data in this district means right now versus six weeks ago. That judgment is ours.
AI never determines your list or purchase price
AI can synthesize comparable sales. It cannot account for the conversation we had walking through your home, the condition issues that won't show up in MLS data, the buyer pool behavior we've observed in your specific neighborhood over the past 90 days, or the strategic timing considerations that change what the right price actually is. Pricing is a recommendation we make and stand behind.
AI never makes any judgment call that affects your outcome
Every decision point in a transaction — which offer to accept, whether to push back on an inspection item, how to respond to an appraisal gap, when to walk away — requires someone who is accountable for being right. AI is not accountable. We are. Every judgment call that affects what you net, what you pay, or whether the transaction closes is ours.
AI never replaces a conversation that needs a human in it
Some conversations require a person — not a well-structured AI output. The conversation about whether to accept the offer. The conversation about what to do when the inspection comes back difficult. The conversation about whether this is actually the right move for your situation right now. Those conversations need a human who knows your situation, has judgment, and can be held accountable. That's us.
AI never provides legal or financial advice
We are not attorneys and we are not financial advisors. AI-generated content that sounds like legal or financial advice is still not legal or financial advice — and presenting it as such would be doing you a disservice. When a question requires legal or financial expertise, we tell you that clearly and connect you with the right professional.
What happens between the AI output and the recommendation you receive?
The judgment layer
AI produces a starting point. 17+ years of transactions in four specific counties produce the recommendation.
When AI generates a district market narrative from our weekly data, we read it before it publishes. When AI extracts fields from an offer PDF, we verify them against the source document. When AI synthesizes comparable sales into a pricing range, we layer in everything the data doesn't capture — property condition, neighborhood nuance, buyer pool behavior, timing considerations, and what we know from being in this market every week.
The gap between the AI output and the recommendation you receive is where our experience lives. It's not a small gap. It's the difference between a data summary and advice you can make a decision on.
We don't put data in and accept what comes out. That's not how we work — and it's not what you're paying for.
Where specifically does human review happen in our workflow?
Every AI-generated output in our practice passes through a specific review checkpoint before it reaches a client or publishes. Here is where those checkpoints are.
| What AI produces | The review checkpoint |
|---|---|
| Weekly school district market narratives | Read and confirmed against our current market data before publishing. If the narrative misrepresents what the data shows, it doesn't go live. |
| Offer data extraction from buyer PDFs | Every extracted field verified against the source PDF before the comparison goes to the seller. Nothing gets to the client that hasn't been checked. |
| Listing intake and positioning strategy | AI synthesizes the intake data into a starting strategy. We review, revise, and apply market-specific judgment before any recommendation is presented. |
| Buyer intake and search strategy | Same process — AI organizes the inputs, we shape the strategy based on what we know about current inventory, timing, and market conditions the intake form can't capture. |
| Buyer persona development | AI identifies candidate personas from property characteristics. We review for Fair Housing compliance and apply our knowledge of who actually buys in this market before any positioning decision is made. |
| Content and market research | AI surfaces patterns and synthesizes sources. We determine what's accurate, what's relevant, and what actually applies to our market before it informs anything client-facing. |
How do you know what came from AI and what came from us?
The transparency promise
At any point in the process, you can ask us what came from AI and what came from us. The answer will always be clear.
We don't hide where AI is involved in our work — we've built an entire section of our site explaining it in detail. But we also recognize that knowing AI is used somewhere in a process is different from knowing whether the specific recommendation in front of you was shaped by AI or by human judgment.
The answer to that question is always: the recommendation is ours. AI may have produced a draft, organized data, or synthesized comparables. But the recommendation — what we're telling you to do, what we think the right price is, what we think you should offer, what we think you should accept — is a judgment call we made and are accountable for.
If you ever want to know specifically where AI touched something in your transaction, ask. We'll tell you exactly what the AI produced, what we changed, and why. That conversation is always available — not just at the beginning, but at any point.
Research into how consumers use AI in real estate found that one of the top concerns is not knowing when AI is being used versus when a human is exercising judgment. We've addressed that concern by making the answer available on demand — not just in a general policy statement, but in a specific conversation about your specific transaction whenever you want to have it.
What if AI told you something different from what we told you?
Bring it. That gap is usually where the most useful conversation happens.
AI works from general patterns. We work from this market, this county, this street, right now. When those answers diverge, the divergence is almost always where the real information lives — either because the general pattern doesn't apply to your specific situation, or because the AI answer is missing context we have, or occasionally because the AI has surfaced something worth examining more carefully.
We don't dismiss AI answers. We interrogate the gap between what AI says and what we're recommending — and we explain it. If our answer is different, we tell you why. If the AI answer raises a question we haven't fully addressed, we want to know.
An AI-informed client asking a sharp question is not a problem. It's the most useful conversation we can have.
The tools are ours. The judgment is ours. The accountability is ours.
If you have a question about where AI is involved in our practice — or where it isn't — we're here to answer it directly.