
Best AI Scribe for Dermatologists: Automated Lesion Mapping That Converts Voice to Structured FHIR Grids
TL;DR — Why This Matters for Board-Certified Dermatologists
What Competitors Miss: Dermatology Demands Structured Multi-Lesion Body-Site Architecture, Not Better Transcription
Scribing.io Clinical Logic: How Voice-Mapped Lesion Grids Prevent a $1,120 Denial on a 44-Patient Day
Technical Reference: ICD-10 Documentation Standards for High-Volume Lesion Encounters
Voice-Mapped Lesion Grids vs. Manual Body Diagrams: A Workflow Comparison
FHIR R4 Architecture: BodyStructure, Observation, and Procedure Resource Graph
CPT Threshold Logic: 17000/17003 vs. 17004 vs. 17110/17111 Decision Engine
Audit Defense: How Structured SNOMED-Coded Grids Withstand MAC Scrutiny
Implementation: EHR Integration, Onboarding, and Go-Live Timeline
Book a 15-Minute Demo
TL;DR — Why This Matters for Board-Certified Dermatologists
On a 40+ patient day, you may treat 15 or more actinic keratoses across multiple body sites. Generic AI scribes transcribe your words; Scribing.io structures them. When you say "8 AKs left dorsal forearm, 4 scalp vertex, 5 RLQ back—LN2 two 10-second cycles," the system voice-maps each lesion cluster to a SNOMED CT–coded body site, creates linked FHIR R4 Observations for size and morphology, and applies the correct CPT threshold logic (17004 when ≥15 lesions are destroyed, not 17000 + 17003 × 14). The result: zero manual drawing, zero coding denials, and a defensible audit trail exported directly to your EHR.
What Competitors Miss: Dermatology Demands Structured Multi-Lesion Body-Site Architecture, Not Better Transcription
Every AI scribe vendor pitching dermatology makes the same promise: listen to the encounter, produce a cleaner note faster. Sunoh.ai claims to "recognize specialized medical terminology, including complex lesion descriptions and anatomical mapping required in skin care." DeepScribe markets ambient capture with specialty awareness. Abridge emphasizes real-time summarization. Strip away the positioning and you find the same product: a transcription engine wrapped in a note template. Recognizing the term "actinic keratosis" and structuring it into a queryable, billable, auditable data object tied to a discrete body site are fundamentally different engineering problems.
Here is the gap no competitor has closed, and the core reason Scribing.io exists as a distinct category of product for dermatology:
Dermatology is the only high-volume specialty where a single encounter routinely generates 15+ discrete clinical findings, each requiring its own anatomical location, morphological description, measurement, and procedural linkage—and where the billing outcome changes materially based on the count of those findings per body region.
A cardiologist dictating a murmur grade documents one finding with one location. A psychiatrist scoring a PHQ-9 records a single composite metric. Neither faces the combinatorial explosion that occurs when 17 actinic keratoses spread across three body regions each need a coded site, a count, a morphology label, a procedure method, and a freeze-cycle parameter—all of which must reconcile to the correct CPT code before the claim drops. (For how Scribing.io handles specialty-specific logic in lower-combinatorial fields, see our workflow guides for Cardiology and Psychiatry.)
The Anchor Truth driving Scribing.io's dermatology architecture: dermatologists see 40+ patients a day, as JAMA Dermatology workforce analyses consistently document. They need AI that can Voice-Map 15+ lesion locations—spoken naturally as "RLQ of back," "left dorsal forearm," or "scalp vertex"—into a structured grid without manual drawing, template clicking, or post-visit editing.
Scribing.io's original contribution is a three-layer pipeline that no competitor publicly replicates:
Voice-to-SNOMED CT Mapping — Spoken anatomical shorthand (e.g., "RLQ of back") is resolved in real time to the appropriate SNOMED CT bodySite concept (e.g., SCTID 368148009 for a region of the trunk), eliminating free-text ambiguity. The NLM's SNOMED CT terminology service provides the canonical reference hierarchy.
FHIR R4 BodyStructure Persistence — Each resolved body site is persisted as a FHIR R4 BodyStructure resource. Multiple Observation resources (size, morphology, procedure method, freeze-cycle count) reference that single BodyStructure, creating a one-to-many relational graph per lesion cluster.
Automated CPT Threshold Logic — The system queries the total lesion count across all BodyStructures tagged with destruction procedures to determine whether the correct code is 17000/17003, 17004, 17110/17111, or a combination—before the note ever reaches a biller. This logic derives directly from the AMA's CPT code definitions and CMS billing guidelines.
No competitor publicly documents SNOMED-to-FHIR body-site persistence for dermatology. No competitor automates the 15-lesion CPT threshold switch. This is not an incremental feature; it is a different category of product.
Scribing.io Clinical Logic: How Voice-Mapped Lesion Grids Prevent a $1,120 Denial on a 44-Patient Day
This section walks through the exact scenario that dermatology billing teams encounter weekly—and demonstrates, step by step, how Scribing.io's structured pipeline eliminates the denial.
The Scenario
On a 44-patient day, a board-certified dermatologist treats 17 actinic keratoses distributed across three body regions:
| Body Region (Spoken) | Lesion Count | SNOMED CT bodySite (Resolved) |
|---|---|---|
| Left dorsal forearm | 8 | Structure of dorsal surface of forearm (SCTID 368149001) |
| Scalp vertex | 4 | Skin structure of vertex of scalp (SCTID 43067004) |
| RLQ of back | 5 | Skin structure of right lower quadrant of back (SCTID 368148009) |
| Total | 17 | — |
The treatment is cryotherapy with liquid nitrogen, two 10-second freeze–thaw cycles per lesion.
What Goes Wrong Without Structured Data
The clinician's note says: "Multiple AKs treated with LN2." The word "multiple" is clinically accurate but computationally useless. The billing team, lacking discrete per-site counts, submits:
17000 × 1 (first lesion destruction, premalignant)
17003 × 14 (each additional lesion, 2nd through 15th)
This appears reasonable—until the Medicare Administrative Contractor (MAC) reprocesses the claim. CMS guidelines stipulate that when 15 or more premalignant lesions are destroyed, the provider must bill 17004 alone, not 17000 + 17003 stacked. The AMA's CPT codebook defines 17004 as "Destruction … of premalignant lesions … 15 or more lesions." The submitted coding is structurally incorrect. The MAC denies for:
Incorrect CPT selection — The 15-lesion threshold mandates 17004 as the sole code.
Missing discrete body-site documentation — Without per-region counts, the medical record cannot substantiate the total lesion count or the clinical rationale for destruction at each site.
Financial exposure: $1,120 in reimbursement at risk, plus audit flag propagation to the next 12 months of claims.
How Scribing.io Resolves This in Real Time
The clinician speaks naturally during the encounter:
"Map 8 AKs left dorsal forearm, 4 scalp vertex, 5 RLQ back—LN2 two 10-second cycles."
Here is the system's processing sequence:
| Pipeline Stage | Action | Output |
|---|---|---|
| 1. Speech Recognition | Ambient capture with dermatology-tuned vocabulary model; numeric triggers ("8," "4," "5") and anatomical noun phrases detected | Raw transcript with lesion-count triggers and body-region spans tagged |
| 2. Entity Extraction | NLP identifies (count, lesion type, body region, procedure, parameters) tuples using a dermatology-specific named entity recognition layer | Three structured tuples, one per body region |
| 3. SNOMED CT Resolution | Each spoken body region maps to a canonical SNOMED CT bodySite concept | Three coded bodySite entries with fully qualified SNOMED CT identifiers |
| 4. FHIR R4 Resource Creation | Each bodySite persists as a FHIR R4 BodyStructure resource | 3 BodyStructure resources, 9+ Observation resources (count, morphology, procedure detail per site) |
| 5. CPT Threshold Logic | System sums total lesion count across all destruction-tagged BodyStructures: 8 + 4 + 5 = 17 → ≥15 threshold met | Auto-selects 17004 (not 17000 + 17003 × 14). If total were 14, system would select 17000 + 17003 × 13. |
| 6. ICD-10 Linkage | Actinic keratosis diagnosis linked to each BodyStructure site | L57.0 attached to all three BodyStructure resources with site-specific pointers |
| 7. Photo Anchoring | Dermoscopic or clinical images tagged to their corresponding BodyStructure by SNOMED CT bodySite code | Images queryable by anatomical site for audit defense; FHIR Media resources reference the same BodyStructure |
| 8. EHR Export | Structured note with embedded FHIR references exported via HL7 FHIR R4 API or C-CDA wrapper to Epic, athenahealth, Modernizing Medicine, or other target EHR | Discrete, coded documentation in the EHR—not a text blob. Billable data flows directly to the practice management system. |
Result: The $1,120 denial is avoided. The audit trail is airtight. The clinician never drew on a body diagram, clicked a template, or typed a character. Total interaction time: the 8 seconds it took to speak the sentence.
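To make stages 1–2 concrete, here is a deliberately simplified sketch of the count-and-site extraction step. The synonym table and regular expression are illustrative assumptions for this playbook, not Scribing.io's production NLP or its actual SNOMED CT resolution service.

```python
import re

# Illustrative synonym table (assumption): real resolution would query a
# SNOMED CT terminology service, not a hard-coded dict.
SITE_SYNONYMS = {
    "left dorsal forearm": "368149001",  # Structure of dorsal surface of forearm
    "scalp vertex": "43067004",          # Skin structure of vertex of scalp
    "rlq back": "368148009",             # Skin structure of RLQ of back
}

def extract_lesion_tuples(utterance: str) -> list[tuple[int, str]]:
    """Return (count, SCTID) pairs for each '<N> AKs <site>' span."""
    tuples = []
    # A count, an optional "AK(s)" label, then a site phrase ending at a
    # comma, a dash, or end of string.
    for count, site in re.findall(
        r"(\d+)\s+(?:aks?\s+)?([a-z ]+?)(?=,|—|$)", utterance.lower()
    ):
        key = site.strip()
        if key in SITE_SYNONYMS:
            tuples.append((int(count), SITE_SYNONYMS[key]))
    return tuples

spoken = "Map 8 AKs left dorsal forearm, 4 scalp vertex, 5 RLQ back—LN2 two 10-second cycles."
lesions = extract_lesion_tuples(spoken)
total = sum(count for count, _ in lesions)
print(lesions)  # [(8, '368149001'), (4, '43067004'), (5, '368148009')]
print(total)    # 17, which meets the >=15 threshold that selects 17004
```

A production pipeline would also extract morphology, procedure method, and freeze-cycle parameters into the same tuple, as the table above describes.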
Technical Reference: ICD-10 Documentation Standards for High-Volume Lesion Encounters
Accurate ICD-10 coding is the foundation upon which CPT selection, medical necessity, and payer adjudication rest. In dermatology, two codes dominate high-volume destruction encounters, and the CMS ICD-10 coding guidelines require maximum specificity to prevent automatic denials:
| ICD-10 Code | Description | Documentation Requirements | Common CPT Pairing | Scribing.io Automation |
|---|---|---|---|---|
| L57.0 | Actinic keratosis | Lesion count per body site; morphological description; treatment method (cryotherapy, curettage, photodynamic therapy); clinical rationale for destruction vs. biopsy; documentation of UV-damaged skin context per AAD clinical guidelines | 17000, 17003, 17004 (premalignant destruction series) | Auto-links L57.0 to each BodyStructure tagged with AK morphology; validates that premalignant CPT codes (17000 series) are selected, not benign codes (11200 series); flags missing per-site counts before note finalization |
| L82.1 | Other seborrheic keratosis | Lesion count per body site; notation that lesion is benign; treatment method; laterality when applicable | 17110, 17111 (benign destruction series) | Auto-links L82.1 to each BodyStructure tagged with SK morphology; validates benign CPT track; flags if provider inadvertently mixes premalignant and benign codes on the same lesion site |
Maximum Specificity Enforcement
The difference between the specific code L57.0 and the unspecified parent L57.9 ("skin changes due to chronic exposure to nonionizing radiation, unspecified") is frequently the difference between a clean adjudication and a denial with an additional documentation request (ADR). Scribing.io enforces maximum specificity through three mechanisms:
Morphology validation: The NLP layer requires a lesion-type classification (AK, SK, verruca, etc.) before generating a diagnosis code. If the clinician says "keratosis" without qualification, the system prompts for premalignant vs. benign distinction before committing a code.
Body-site completeness gate: No ICD-10 code is attached to a claim-ready note until at least one SNOMED CT–coded BodyStructure is linked. This prevents the "L57.0 floating in free text with no anatomical anchor" pattern that triggers MAC ADRs.
Cross-track collision detection: If a clinician documents both AKs (premalignant → 17000 series) and SKs (benign → 17110 series) in the same encounter, the system ensures each lesion type routes to its correct CPT track. Mixed-track errors—billing an AK destruction under 17110, or an SK under 17003—are blocked at the resource-creation layer, not caught downstream by a biller.
Why Discrete Body-Site Counts Are a Documentation Standard
The CMS MAC audit protocols for dermatology destruction claims increasingly require per-region lesion counts rather than aggregate "multiple" descriptors. The logic is straightforward: without discrete counts, an auditor cannot verify that the 15-lesion threshold (17004 vs. 17000/17003) was correctly applied, that each site-specific ICD-10 code is substantiated, or that benign and premalignant lesions were coded on separate CPT tracks.
Scribing.io enforces this standard structurally. Because each lesion set is stored as a FHIR Observation referencing a specific BodyStructure, the system cannot produce a note that says "multiple AKs" without an associated integer count and coded body site. The architecture makes incomplete documentation impossible, not merely discouraged.
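The "impossible by construction" rule can be modeled as a simple validation gate: a finding without an integer count, a coded site, and a morphology never reaches the claim-ready note. The sketch below is an illustration under assumed field names, not Scribing.io's internal schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LesionFinding:
    count: Optional[int]             # discrete lesion count ("multiple" parses to None)
    snomed_body_site: Optional[str]  # SNOMED CT bodySite SCTID
    morphology: Optional[str]        # e.g., "actinic keratosis"

def is_claim_ready(finding: LesionFinding) -> bool:
    """Gate: reject any finding lacking discrete, coded documentation."""
    return (
        isinstance(finding.count, int) and finding.count > 0
        and bool(finding.snomed_body_site)
        and bool(finding.morphology)
    )

# "8 AKs, left dorsal forearm" passes the gate...
assert is_claim_ready(LesionFinding(8, "368149001", "actinic keratosis"))
# ...while "multiple AKs" with no coded site is blocked before finalization.
assert not is_claim_ready(LesionFinding(None, None, "actinic keratosis"))
```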
Voice-Mapped Lesion Grids vs. Manual Body Diagrams: A Workflow Comparison
Dermatology EHR modules have historically relied on clickable body diagrams or freehand drawing tools to document lesion locations. These interfaces were designed for encounters with 3–5 lesions. They collapse under the weight of 15+ lesions spread across scalp, trunk, and extremities. Systematic reviews of EHR usability in dermatology consistently identify body-diagram interaction as a primary documentation bottleneck in high-volume practices.
| Workflow Dimension | Manual Body Diagram (Legacy EHR) | Generic AI Scribe (e.g., Sunoh.ai, DeepScribe) | Scribing.io Voice-Mapped Lesion Grid |
|---|---|---|---|
| Input method | Click/draw on 2D diagram per lesion; type annotations | Ambient transcription → free-text note blob | Ambient voice → structured SNOMED CT grid |
| Time per 15-lesion encounter | 4–7 minutes of manual clicking and annotation | ~0 minutes clinician time; 3–5 minutes biller/coder time to extract counts from prose | ~8 seconds of spoken input; 0 minutes downstream extraction |
| Body-site coding | Free-text labels on diagram; no SNOMED CT linkage | Free-text in note body; no coded bodySite | SNOMED CT bodySite per lesion cluster; canonical identifiers |
| Lesion count discreteness | Implicit from pin count on diagram (often miscounted) | "Multiple" or enumerated in prose (not machine-queryable) | Integer count per BodyStructure (machine-queryable, summable) |
| CPT logic | Manual selection by biller | Manual selection by biller; transcript may not contain count | Automated threshold detection: 17004 if ≥15, 17000/17003 if <15, separate 17110/17111 benign track |
| Photo linkage | Uploaded separately; manual matching to diagram pins | Not linked to structured data | FHIR Media resources reference same BodyStructure; queryable by SNOMED site |
| Audit defensibility | Moderate — diagram exists but lacks coded structure | Low — note is prose; counts may be ambiguous | High — every claim element traces to a discrete FHIR resource with SNOMED, ICD-10, and CPT linkage |
| EHR interoperability | Diagram is a proprietary image; does not export as structured data | Note exports as text; no FHIR resource export | Full FHIR R4 bundle export (BodyStructure, Observation, Procedure, Media) or C-CDA wrapper |
The workflow math is stark. On a 44-patient day where 12 encounters involve multi-lesion destruction, the manual body diagram approach consumes 48–84 minutes of post-encounter documentation time. A generic AI scribe shifts that burden to the billing team. Scribing.io eliminates it entirely—structured data is generated during the encounter, not reconstructed after it.
FHIR R4 Architecture: BodyStructure, Observation, and Procedure Resource Graph
Understanding the data model is essential for practice administrators evaluating EHR integration requirements. Scribing.io generates a FHIR R4–compliant resource graph for each multi-lesion encounter. Here is the structure for the 17-AK scenario:
Resource Hierarchy
Patient (1) — The encounter subject.
Encounter (1) — The clinical visit context, with date, provider, and facility.
Condition (1) — L57.0 Actinic keratosis, linked to the encounter.
BodyStructure (3) — One per anatomical region:
Left dorsal forearm (SCTID 368149001)
Scalp vertex (SCTID 43067004)
Right lower quadrant of back (SCTID 368148009)
Observation (9+) — Per-site observations:
Lesion count (valueInteger: 8, 4, 5 respectively)
Morphology (valueCodeableConcept: actinic keratosis)
Procedure method (valueCodeableConcept: cryotherapy, liquid nitrogen, 2 cycles × 10 seconds)
Procedure (1) — Destruction of premalignant lesions; references all three BodyStructures; carries the CPT code 17004.
Media (variable) — Dermoscopic images, each referencing the BodyStructure corresponding to its anatomical site.
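For teams evaluating the integration surface, the sketch below shows roughly what one site's BodyStructure and its lesion-count Observation look like as JSON-ready structures. Field shapes follow the public FHIR R4 schemas, but the ids and references are invented for illustration. (Strictly, an R4 Observation points at a BodyStructure resource via the `bodyStructure` extension; this simplified version repeats the SNOMED coding in `Observation.bodySite` instead.)

```python
import json

# One lesion cluster from the 17-AK scenario: left dorsal forearm, 8 AKs.
body_structure = {
    "resourceType": "BodyStructure",
    "id": "bs-forearm-left",                 # invented id
    "patient": {"reference": "Patient/example"},
    "location": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "368149001",
            "display": "Structure of dorsal surface of forearm",
        }]
    },
}

lesion_count_observation = {
    "resourceType": "Observation",
    "id": "obs-forearm-ak-count",            # invented id
    "status": "final",
    "subject": {"reference": "Patient/example"},
    "code": {"text": "Lesion count"},
    "valueInteger": 8,
    # Simplification: reuse the site coding rather than the R4
    # bodyStructure extension reference.
    "bodySite": body_structure["location"],
}

print(json.dumps(lesion_count_observation, indent=2))
```

The morphology and procedure-method Observations for the same site follow the identical pattern, each repeating the same coded bodySite.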
Why This Architecture Matters for Interoperability
Epic's FHIR R4 endpoints, athenahealth's API platform, and Modernizing Medicine's EMA all accept FHIR R4 resources. When Scribing.io exports a structured bundle rather than a text note, the EHR can natively index lesion counts, body sites, and procedure codes as discrete data elements. This means:
Population health queries can surface all patients with ≥10 AKs treated in the last 12 months—impossible with free-text notes.
Quality measure reporting (e.g., MIPS) can pull procedure volume and diagnosis prevalence directly from structured resources.
Prior authorization systems can consume BodyStructure and Observation data to pre-validate medical necessity without manual chart review.
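As a toy illustration of the population-health point above: once lesion counts exist as discrete `valueInteger` Observations, the query reduces to a filter and a sum. The resource shapes here are simplified, and `exported_observations` stands in for the result of an EHR FHIR search.

```python
# Simplified Observation export: subject reference, code, integer value.
exported_observations = [
    {"subject": "Patient/a", "code": "lesion-count", "valueInteger": 8},
    {"subject": "Patient/a", "code": "lesion-count", "valueInteger": 4},
    {"subject": "Patient/a", "code": "lesion-count", "valueInteger": 5},
    {"subject": "Patient/b", "code": "lesion-count", "valueInteger": 3},
]

def patients_with_min_lesion_count(observations, threshold=10):
    """Sum per-patient lesion counts; keep patients at or above threshold."""
    totals: dict[str, int] = {}
    for obs in observations:
        if obs["code"] == "lesion-count":
            totals[obs["subject"]] = totals.get(obs["subject"], 0) + obs["valueInteger"]
    return sorted(p for p, n in totals.items() if n >= threshold)

print(patients_with_min_lesion_count(exported_observations))  # ['Patient/a']
```

Against a free-text note, answering the same question requires manual chart review.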
CPT Threshold Logic: 17000/17003 vs. 17004 vs. 17110/17111 Decision Engine
The CPT code selection for lesion destruction is not a simple lookup—it is a conditional decision tree with two independent tracks (premalignant and benign) and a critical threshold gate. Scribing.io implements this logic as a deterministic rule engine, not a probabilistic model, ensuring 100% reproducibility.
Premalignant Track (AK, Bowen's Disease, etc.)
| Total Premalignant Lesion Count | CPT Code(s) | AMA CPT Definition |
|---|---|---|
| 1 | 17000 | Destruction, premalignant lesion, first lesion |
| 2–14 | 17000 + 17003 × (N−1) | 17003: each additional lesion, 2nd through 14th |
| ≥15 | 17004 only | Destruction, premalignant lesions, 15 or more. Replaces 17000/17003; not additive. |
Benign Track (SK, Verruca, Molluscum, etc.)
| Total Benign Lesion Count | CPT Code(s) | AMA CPT Definition |
|---|---|---|
| 1–14 | 17110 | Destruction of benign lesions, up to 14 |
| ≥15 | 17111 | Destruction of benign lesions, 15 or more |
Mixed-Encounter Logic
When a single encounter includes both premalignant (AK) and benign (SK) destructions, the counts are tracked independently. Example: 12 AKs + 6 SKs in one visit → 17000 + 17003 × 11 (premalignant track, <15) plus 17110 (benign track, <15). Scribing.io maintains separate counters per morphology classification, ensuring that the tracks never cross-contaminate—a common billing error when coders work from unstructured notes that intermingle AK and SK descriptions in the same paragraph.
This logic is deterministic, auditable, and version-controlled against the current AMA CPT codebook edition. When CPT definitions change in the AMA's annual update cycle, the rule engine is updated before the new code year takes effect.
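The two-track decision tree above is compact enough to express as a deterministic function. The sketch below mirrors the published threshold rules; the function names are our own illustration, not Scribing.io's API.

```python
def premalignant_cpt(n: int) -> list[str]:
    """AK/Bowen's track: 17000 + 17003 below 15 lesions, 17004 alone at 15+."""
    if n <= 0:
        return []
    if n >= 15:
        return ["17004"]                    # sole code; replaces, not additive
    return ["17000"] + ["17003"] * (n - 1)  # first lesion + each additional

def benign_cpt(n: int) -> list[str]:
    """SK/verruca track: 17110 covers up to 14 lesions, 17111 covers 15+."""
    if n <= 0:
        return []
    return ["17111"] if n >= 15 else ["17110"]

def encounter_codes(premalignant: int, benign: int) -> list[str]:
    """Mixed encounter: the two tracks are counted independently."""
    return premalignant_cpt(premalignant) + benign_cpt(benign)

# 17-AK scenario: the >=15 gate collapses the track to 17004 alone.
print(encounter_codes(17, 0))  # ['17004']
# Mixed example from the text: 12 AKs + 6 SKs.
print(encounter_codes(12, 6))  # 17000, then 17003 x 11, then 17110
```

Because the threshold gate is a plain comparison rather than a model inference, the same inputs always yield the same codes, which is what makes the output reproducible under audit.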
Audit Defense: How Structured SNOMED-Coded Grids Withstand MAC Scrutiny
MAC audits and Recovery Audit Contractor (RAC) reviews for dermatology destruction claims follow a predictable pattern. The auditor's checklist, derived from CMS compliance program guidance, includes:
Does the medical record document the total number of lesions destroyed? — An integer, not "multiple" or "several."
Does the record identify each body site where destruction occurred? — Specific anatomical locations, not "trunk" or "extremity."
Is there a clinical description supporting the diagnosis code? — Morphology consistent with AK (premalignant) vs. SK (benign).
Does the CPT code match the documented lesion count? — 17004 only if ≥15 premalignant; 17000/17003 if <15.
Is there photographic or histopathological evidence supporting the clinical finding? — When available, photos should be linked to specific sites.
Scribing.io's structured output satisfies every checklist item by construction, not by chance:
Item 1: Each BodyStructure's linked Observation contains a valueInteger for lesion count. The total is computed and displayed in the note.
Item 2: Each BodyStructure carries a SNOMED CT bodySite code with a human-readable display name. "Skin structure of right lower quadrant of back" leaves no ambiguity.
Item 3: The morphology Observation explicitly codes "actinic keratosis" or "seborrheic keratosis," which maps directly to the ICD-10 code.
Item 4: The CPT code is derived from the lesion count by a deterministic rule engine, not selected by a biller interpreting prose.
Item 5: FHIR Media resources are tagged with the same BodyStructure reference, creating a verifiable link between the photo and the coded site.
The practical impact: practices using Scribing.io can respond to an ADR by exporting the FHIR resource bundle for the encounter. Every data element the auditor needs—count, site, morphology, CPT, ICD-10, photo—is present as structured, machine-readable data with human-readable rendering. No chart abstraction. No biller reconstruction. No ambiguity.
Implementation: EHR Integration, Onboarding, and Go-Live Timeline
Scribing.io integrates with the EHR systems that dermatology practices actually use. The implementation path varies by platform:
| EHR Platform | Integration Method | Typical Go-Live Timeline |
|---|---|---|
| Epic | FHIR R4 API (via App Orchard / Open.Epic); SMART on FHIR launch context | 4–6 weeks (includes Epic security review) |
| Modernizing Medicine (EMA) | REST API with FHIR resource mapping; direct integration with EMA's procedure and diagnosis modules | 3–4 weeks |
| athenahealth | athenahealth Marketplace API; FHIR R4 clinical data endpoints | 3–5 weeks |
| AdvancedMD / Nextech | C-CDA document import with embedded FHIR references; HL7v2 for legacy workflows | 4–6 weeks |
| Custom / On-Premise | FHIR R4 bundle export to local integration engine (Mirth, Rhapsody) | 6–8 weeks |
Onboarding Sequence
Week 1: Technical kickoff — EHR API credentials provisioned, SNOMED CT body-site synonym library customized to the practice's preferred anatomical shorthand (e.g., "RLQ back" vs. "right lower back" vs. "infrascapular right").
Week 2: Clinical workflow mapping — Scribing.io's dermatology product team shadows 2–3 clinic sessions to calibrate the speech model to the provider's diction, pace, and lesion-description patterns.
Week 3: Parallel operation — Scribing.io runs alongside existing documentation workflow. Structured output is compared to manually generated notes for accuracy validation.
Week 4: Go-live — Scribing.io becomes the primary documentation and coding pipeline. The practice's billing team validates the first week of claims against the structured FHIR output.
Post-go-live, the system continuously refines its synonym resolution and entity extraction accuracy based on provider corrections, with quarterly accuracy reports delivered to the practice administrator.
Book a 15-Minute Demo
See SNOMED-coded voice lesion mapping with FHIR BodyStructure/Observation export that auto-picks 17004 vs. 17000/17003 and posts structured data to your EHR—no manual drawing. In 15 minutes, we will walk through the exact 17-AK scenario described in this playbook using your practice's EHR platform and your preferred anatomical shorthand.
Book your demo at Scribing.io →
Bring your most complex multi-lesion encounter from the last month. We will show you the structured FHIR output, the auto-selected CPT code, and the audit-ready documentation—generated from a single spoken sentence.

