Why AI search decides differently than Google
A Google search for "dentist in Brooklyn" returns ten blue links, a map, and four ads. The patient scans, clicks, compares, and decides. An AI search for the same thing returns a paragraph: "If you are in Brooklyn, three well-reviewed practices are [Name A], [Name B], and [Name C], each offering…" The patient often does not click anything. They screenshot, call, or book.
The shift matters because the pick is narrower. Google shows ten options. ChatGPT and Perplexity usually pick three. Copilot sometimes picks one. That compression pushes all of the marketing weight onto a single question: what does the engine already believe about your practice, right now, before the patient asked?
AI engines build that belief by reading the public web and cross-referencing what multiple trusted sources say about you. They do not run live auctions like Google Ads. They do not rank by keyword match like classic SEO. They pattern-match on consistency, authority, and specificity — and they do it before the patient's prompt is ever typed.
The seven signals AI engines use to pick a dentist
1 Review volume and consistency across platforms
Our benchmark across 40+ dental clients: practices named first in ChatGPT recommendation prompts have an average of 147 Google reviews, 18 Healthgrades reviews, and 23 Yelp reviews, with a 4.6+ star average on all three. Practices below 50 Google reviews are named in fewer than 8% of prompts.
Volume alone is not enough. The engine looks for review velocity: at least 3 new reviews per month, not one burst followed by silence. And it compares platforms — a practice with 200 Google reviews and zero Healthgrades presence looks inconsistent and gets filtered down.
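For practices that export their review history, the velocity check described above can be sketched in a few lines. The dates, the three-month window, and the 3-per-month floor are illustrative, not a fixed standard:

```python
from collections import Counter
from datetime import date

def review_velocity(review_dates: list[date], months: list[str], floor: int = 3) -> dict:
    """For each 'YYYY-MM' in `months`, count new reviews and flag
    whether the month met the floor (default: 3 per month)."""
    counts = Counter(d.strftime("%Y-%m") for d in review_dates)
    return {m: {"reviews": counts.get(m, 0), "ok": counts.get(m, 0) >= floor}
            for m in months}

# A burst of ten reviews in March followed by near-silence: exactly the
# pattern the engines discount.
dates = [date(2025, 3, day) for day in range(1, 11)] + [date(2025, 4, 2)]
report = review_velocity(dates, ["2025-03", "2025-04", "2025-05"])
print(report)
# 2025-03 passes (10 reviews); 2025-04 (1 review) and 2025-05 (0) fail
```

The same three-line report, run against each platform separately, also surfaces the cross-platform imbalance mentioned above.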
2 Schema.org markup for LocalBusiness + Dentist
Your site should carry two schema blocks: LocalBusiness with full NAP (name, address, phone), opening hours, geo-coordinates, sameAs links to your Google, Healthgrades, and social profiles; and Dentist (a subtype of MedicalBusiness) with specialty, treatments offered, accepted insurance, and languages spoken.
Fewer than 25% of US dental practice sites we audit have valid Dentist schema. Adding it in a single afternoon moves some practices from "unknown to the AI" to "plausible candidate" within two to three weeks.
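A minimal sketch of the markup described above. Some sites ship LocalBusiness and Dentist as two separate blocks, but Dentist is a Schema.org subtype that inherits every LocalBusiness property, so a single block can carry both sets of fields. Every name, URL, and coordinate below is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Dentist",
  "name": "Example Dental Care",
  "telephone": "+1-718-555-0142",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave, Suite 200",
    "addressLocality": "Brooklyn",
    "addressRegion": "NY",
    "postalCode": "11201",
    "addressCountry": "US"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 40.6912,
    "longitude": -73.9867
  },
  "openingHoursSpecification": [{
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
    "opens": "08:00",
    "closes": "17:00"
  }],
  "sameAs": [
    "https://www.google.com/maps/place/example-dental-care",
    "https://www.healthgrades.com/group-directory/example-dental-care",
    "https://www.yelp.com/biz/example-dental-care"
  ],
  "medicalSpecialty": "Dentistry",
  "knowsLanguage": ["English", "Spanish"]
}
```

Embed it in a `<script type="application/ld+json">` tag and validate it with Google's Rich Results Test before deploying.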
3 Citation consistency across directories
Name, address, and phone must match exactly across Google Business Profile, Healthgrades, Yelp, Zocdoc, WebMD, 1-800-Dentist, your state dental association directory, and the ADA Find-a-Dentist tool. One suite number difference or a hyphenated phone format mismatch flags inconsistency and lowers the engine's confidence.
Audit in a spreadsheet quarterly. For practices that have moved locations or changed phone numbers in the last five years, a directory cleanup pass is the single fix that most reliably moves the needle.
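The exact-match rule is mechanical enough to script. A minimal sketch in Python, assuming you paste each directory's name, address, and phone into a dict by hand; the practice details are invented:

```python
import re

def normalize_nap(name: str, address: str, phone: str) -> tuple[str, str, str]:
    """Lowercase, strip punctuation, collapse whitespace, and reduce the
    phone number to its last 10 digits, so pure formatting differences
    ("(718) 555-0142" vs "718-555-0142") do not count as mismatches."""
    clean = lambda s: re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", s.lower())).strip()
    return clean(name), clean(address), re.sub(r"\D", "", phone)[-10:]

def audit(listings: dict[str, tuple[str, str, str]]) -> list[str]:
    """Return the directories whose normalized NAP differs from Google's."""
    reference = normalize_nap(*listings["Google Business Profile"])
    return [d for d, nap in listings.items() if normalize_nap(*nap) != reference]

listings = {
    "Google Business Profile": ("Example Dental Care", "123 Example Ave Suite 200", "(718) 555-0142"),
    "Healthgrades":            ("Example Dental Care", "123 Example Ave Suite 200", "718-555-0142"),
    "Yelp":                    ("Example Dental Care", "123 Example Ave Suite 2",   "(718) 555-0142"),
}
print(audit(listings))  # → ['Yelp']: the suite number differs
```

Note the design choice: phone formatting is normalized away, but a suite-number difference survives normalization and gets flagged, which matches how the engines treat the two cases.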
4 Depth of content that demonstrates clinical expertise
Thin sites with "Services: Cleanings, Fillings, Crowns" get no traction. Sites with treatment-specific pages — an implant page that explains zirconia vs titanium, recovery timelines, cost ranges, and what makes your case selection conservative — signal expertise to the engine. Each specific claim ("I place about 120 implants per year, with a 98% 5-year survival rate") becomes a potential pull-quote.
Aim for 800 to 1,500 words per major treatment page, with structured sub-headers that mirror the questions patients actually ask.
5 Third-party mentions on trusted sources
A single interview in Dentistry Today, a guest post on Dentaltown, a quote in a local newspaper about a cosmetic procedure, or a guest appearance on a dental podcast — each of these shows up when the engine searches for your practice name. Mentions from sources the engine already trusts carry disproportionate weight.
We aim for two new third-party mentions per quarter for each dental client. Over 12 months that is 8 external validations, which is typically the difference between a top-3 and top-10 AI recommendation position.
6 Google Business Profile completeness and activity
A complete GBP has 15+ photos of the office and team, every service populated, Q&A with seeded answers, weekly posts (updates, offers, events), messaging enabled, and attributes filled in (wheelchair accessible, online booking, accepts insurance). AI engines increasingly treat GBP as a canonical source. A practice with a half-filled GBP gets discounted even if its website is strong.
7 Site freshness and content velocity
A dental site that has not been updated in 18 months signals neglect to the engine. The fix is modest: one new patient-education article per month, updated treatment pages once per quarter, a quarterly "what's new at the practice" post. Nothing heroic — just demonstrated ongoing activity.
The 90-day GEO plan for a dental practice
Below is the exact cadence we run. It is not a maximum program; it is the minimum sequence that consistently produces measurable AI-visibility lift in the fastest-moving practices we work with.
Foundation
- Audit all 12 major directories and fix NAP inconsistencies
- Deploy LocalBusiness + Dentist schema
- Complete Google Business Profile (photos, Q&A, services, attributes)
- Baseline AI visibility audit across ChatGPT, Perplexity, Gemini, Copilot for 12 target prompts
- Set up review-ask automation (post-visit text with one-tap Google link)
Authority
- Publish 4 treatment-specific pages (800–1,500 words each)
- Place one guest post on Dentaltown or a state dental-society blog
- Secure one trade-press mention (Dentistry Today, Inside Dentistry)
- Push 20+ patient review asks per week
- Weekly GBP posts; respond to every review within 48 hours
Monitoring
- Re-run the baseline audit — track prompt coverage delta
- Identify 3 prompts where you are named and 3 where you are not — find the gap
- Publish 2 more content pieces to close the gap
- Set up monthly tracking cadence (15 min per month to log citations)
- Document wins in a case study for your own PR use
Case: Phoenix pediatric practice, 0 to top-recommended in 84 days
Representative case — details generalized to protect practice identity
Starting point (Day 1): 62 Google reviews at 4.4 stars. No Dentist schema. Healthgrades profile abandoned 2 years prior with wrong phone number. Website last updated 2023. ChatGPT recommendation prompt "pediatric dentist in [area of Phoenix]" named three competitors and did not mention this practice. Perplexity: same. Gemini: same.
What we did: Directory cleanup (9 profiles corrected or claimed). LocalBusiness + Dentist + MedicalBusiness schema deployed. GBP rebuilt with 47 photos, 18 Q&A entries, weekly posts. Six treatment pages written (sedation, fluoride alternatives, thumb-habit appliances, emergency visits, first-visit guide, insurance-acceptance FAQ). One interview placed in AAPD Pediatric Dentistry. Post-visit text sequence pushed 212 review asks in 12 weeks; 94 new Google reviews landed.
Result at Day 84: 156 Google reviews at 4.7 stars. Named in 9 of 12 monitored AI prompts (Day 1: 0). Patient intake form asks added a "how did you find us" field — 17% of new patients in weeks 10–12 said "ChatGPT" or "an AI told me." Practice revenue up 23% year-over-year in Q2, with the owner attributing the majority to digital inbound rather than referrals.
What fails more often than it succeeds
Buying reviews. Google filters them within days; the pattern (IP clusters, no profile history, templated language) is obvious. You lose the reviews and often the whole listing.
Keyword-stuffing the homepage with city names. AI engines do not care. They read across the site and across the web. A single well-structured LocalBusiness schema beats "Best Family Dentist in Houston, Texas, USA" repeated 14 times in the footer.
Ignoring the front desk. Patients who had a bad front-desk experience write bad reviews no matter how good the clinical care was. Front-desk training is a GEO investment dressed in operations clothing.
Posting generic stock-photo "tips" on social. AI engines do not read your Instagram grid. They read the metadata of the third-party articles that link back to you. Social is a channel for patient trust, not for AI signal.
How to measure whether GEO is working
Three metrics, reviewed monthly:
Prompt coverage. Pick 12 prompts that a patient in your area might actually type. Run each through ChatGPT, Perplexity, Gemini, and Copilot. Log how many name your practice, and in what position. Baseline at Day 1, re-measure at Day 30, 60, 90, then monthly.
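The logging can live in a spreadsheet, but once the results are transcribed the coverage delta is easy to compute. A sketch with hypothetical prompt results; in practice you would paste in what each engine actually returned on each run:

```python
def coverage(runs: dict[str, list[str]], practice: str) -> dict:
    """`runs` maps each monitored prompt to the ordered list of practices
    the engine named. Returns how many prompts name the practice and its
    1-based position in each."""
    positions = {prompt: names.index(practice) + 1
                 for prompt, names in runs.items() if practice in names}
    return {"covered": len(positions), "total": len(runs), "positions": positions}

baseline = {
    "pediatric dentist in brooklyn":  ["Rival A", "Rival B", "Rival C"],
    "dentist open saturday brooklyn": ["Rival A", "Example Dental", "Rival C"],
}
day_90 = {
    "pediatric dentist in brooklyn":  ["Example Dental", "Rival A", "Rival B"],
    "dentist open saturday brooklyn": ["Example Dental", "Rival C", "Rival A"],
}
print(coverage(baseline, "Example Dental"))  # covered 1 of 2, position 2
print(coverage(day_90, "Example Dental"))    # covered 2 of 2, both position 1
```

Run the same computation per engine (ChatGPT, Perplexity, Gemini, Copilot) so you can see which engine is lagging rather than averaging them into one number.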
Citation source list. When an AI engine cites your practice, which source did it pull from (your site, Healthgrades, a news article, Yelp)? Over 90 days you want the list to lengthen and diversify. A practice cited only from its own site is fragile.
New-patient attribution. Add "Where did you hear about us?" to your intake form. Add "ChatGPT / AI search" and "Perplexity / other AI tool" as explicit options. Patients who answer honestly give you the ground truth no dashboard can provide.
What this means for a typical US dental practice
If you run a 2- to 5-chair practice with $900K to $3M in annual production, the math on GEO is straightforward. A single new patient from a cosmetic or implant consult is worth $3,000 to $15,000 in lifetime value. The plan needs to deliver two to three incremental new patients per month to pay for itself at any reasonable agency fee. In practice, the dental clients we work with see 8 to 20 incremental new-patient inquiries per month attributed to AI search by month 4.
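The payback arithmetic in the paragraph above, as a worked example. The $2,500 monthly fee is a hypothetical figure for illustration, not a quote, and the patient and lifetime-value numbers are the conservative end of the ranges cited above:

```python
def geo_payback(monthly_fee: float, new_patients_per_month: float, ltv: float) -> float:
    """Incremental lifetime value generated per dollar of monthly fee.
    Values above 1.0 mean the program pays for itself."""
    return (new_patients_per_month * ltv) / monthly_fee

# Conservative case: 2 incremental patients/month at $3,000 lifetime value
# against a hypothetical $2,500/month agency fee.
print(geo_payback(monthly_fee=2500, new_patients_per_month=2, ltv=3000))  # → 2.4
```

Even at the low end of every range, the ratio clears 1.0; a single implant case at the top of the LTV range covers several months of fees on its own.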
The catch is that the window is narrowing. As competitors build their schema, populate their directories, and publish depth-of-expertise content, the work to catch up compounds. The practice that deploys the foundation layer in Q2 2026 wins a head start over the practice that deploys it in Q4 2026. A year from now, the cheap wins will be gone and only the incremental work will be left.
Frequently Asked Questions
Why does ChatGPT recommend some dentists and not others?
ChatGPT combines brand mentions on trusted sources, review volume, Schema.org coverage, and citation consistency across directories. A practice with 180 Google reviews, full LocalBusiness schema, and three trade-press interviews gets named in most recommendation prompts; a practice with 40 reviews and a bare site does not. It is neither random nor a single ranking factor; it is the sum of seven signals working together.
How many reviews does a dental practice need for AI visibility?
Our working benchmark is at least 80 Google reviews with a 4.6+ star average, plus 3 or more new reviews per month. Practices below that threshold are rarely named first. Above 150 reviews, recommendations increase sharply, especially when paired with Healthgrades or Yelp presence that matches your Google profile.
Are US dentists allowed to actively ask for patient reviews?
Yes, as long as the ask is neutral and the patient is free to write whatever they want. The ADA Code of Ethics and FTC guidelines allow a post-visit text or email asking the patient to share an honest experience. Incentives, steering patients toward 5-star reviews, and supplying template language are not permitted and can trigger Google's spam filters.
How much time does GEO take for a dental practice?
A typical 90-day engagement takes the practice itself about 2 to 4 hours per month: interviews about case outcomes, content approval, and small website edits. The technical work (schema, directories, monitoring, content, PR) is done by the agency. After 90 days it drops to 1 to 2 hours per month for maintenance.
Can a dental practice combine GEO with existing marketing spend?
Yes. Most practices already have a budget for Google Ads, Meta ads, and perhaps a local agency. GEO does not replace any of that. In our experience, 20 to 30 percent of the existing SEO budget typically shifts toward GEO-specific activity without increasing total marketing spend.
Is your practice already being recommended?
Get a free AI visibility scan across ChatGPT, Perplexity, Gemini, and Copilot.
Request your free scan →