Posted on

May 7, 2026

New York AI Disclosure Laws for Doctors: The 2026 Clinical Operations Playbook


TL;DR — What Every Chief Compliance Officer Needs to Know

New York's proposed 2026 AI Transparency Act requires that patients be told before the visit if an algorithm is summarizing their medical history. This intersects with the state's existing six-year medical-record-retention mandate, meaning every disclosure—and every patient opt-out—must be encounter-scoped, audit-traceable, and retained for a minimum of six years. The AMA's 2024 AI principles call for "appropriate disclosure and documentation" but provide zero implementation guidance for state-level mandates, FHIR/HL7 interoperability, or pre-visit timing triggers. This playbook closes that gap with field-tested clinical logic, ICD-10 documentation standards, and a concrete interoperability architecture that Scribing.io enforces by default. Read the full Safety & Privacy Guide for foundational context.

  • What Competitors Missed: The Six-Year Retention–Disclosure Intersection

  • Scribing.io Clinical Logic: The Manhattan Endocrinology Scenario

  • Pre-Visit Disclosure Trigger Architecture

  • FHIR & HL7 v2 Interoperability for AI Disclosure Compliance

  • Technical Reference: ICD-10 Documentation Standards

  • Retention, Audit, and Civil-Penalty Risk Matrix

  • Multi-State Compliance Comparison: New York vs. California vs. Federal

  • Implementation Roadmap for Chief Compliance Officers

What Competitors Missed: The Six-Year Retention–Disclosure Intersection That Creates Real Liability

The AMA's 2024 AI principles represent an important aspirational framework. They call for "appropriate disclosure and documentation when AI directly impacts patient care, access to care, medical decision making, communications, or the medical record." That language matters—but it is directional, not operational.

Here is what the AMA framework—and every competitor analysis we have reviewed—fails to address: New York's proposed 2026 "AI Transparency Act" does not exist in a vacuum. It intersects with New York Education Law § 6530(32) and 10 NYCRR § 415.5, which together mandate a minimum six-year retention period for medical records from the date of last encounter (or, for minors, until age 21 plus six years). When the AI Transparency Act requires pre-visit disclosure of algorithmic involvement, that disclosure itself becomes a medical-record artifact subject to the same retention timeline. Scribing.io was built to treat disclosure artifacts as first-class clinical documents—not afterthought metadata.

This creates a compound obligation that no high-level principle can satisfy. For additional context on how federal requirements layer onto state mandates, see our HIPAA 2026 Update.

| Obligation Layer | Source | Requirement | AMA Coverage |
|---|---|---|---|
| Pre-visit AI disclosure | NY AI Transparency Act (2026 proposed) | Patient must be informed before the visit that an algorithm will summarize their history | Mentioned in principle only; no timing specification |
| Patient opt-out capture | NY AI Transparency Act (2026 proposed) | Patient preference must be recorded at the encounter level | Not addressed |
| Record retention | NY Education Law § 6530(32) | All encounter documentation retained ≥ 6 years | Not addressed |
| Audit traceability | OPMC investigation authority | Model name, version, timestamp, and any corrections must be reproducible | Not addressed |
| HIPAA intersection | 45 CFR § 164.530(j) | Policies/procedures and compliance documentation retained ≥ 6 years | Mentioned generically |

The practical implication: a disclosure that is delivered but not retained in an encounter-scoped, auditable format for six years is functionally equivalent to no disclosure at all under New York regulatory scrutiny.

Scribing.io's architecture was designed for exactly this intersection. Every AI-involved encounter generates an encounter-level FHIR Consent resource (with policyAuthority='NY', scope='patient-privacy', category='ai-history-summarization') plus a FHIR AuditEvent and Provenance chain capturing model name, version, and timestamp. These resources are immutable and retention-policy-tagged at creation.
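As a concrete illustration, the encounter-scoped Consent artifact described above can be sketched as a plain Python dict. This is a simplified shape, not a complete FHIR R4 Consent resource (which carries more required elements); the field values (policy authority `NY`, scope `patient-privacy`, category `ai-history-summarization`, the `ny-6yr` retention tag) follow the text, while the patient and encounter IDs are illustrative.

```python
# Sketch: an encounter-scoped FHIR-style Consent artifact as a plain dict.
# Simplified shape for illustration; a real FHIR R4 Consent has additional
# required elements. IDs are hypothetical.
from datetime import date

def build_ai_disclosure_consent(patient_id: str, encounter_id: str,
                                encounter_date: date, permit: bool) -> dict:
    decision = "permit" if permit else "deny"
    return {
        "resourceType": "Consent",
        "status": "active",
        "scope": {"coding": [{"code": "patient-privacy"}]},
        "category": [{"coding": [{"code": "ai-history-summarization"}]}],
        "patient": {"reference": f"Patient/{patient_id}"},
        "policy": [{"authority": "NY"}],
        "provision": {
            "type": decision,
            # Encounter-scoped: a single calendar day, not a blanket authorization
            "period": {
                "start": encounter_date.isoformat(),
                "end": encounter_date.isoformat(),
            },
        },
        # Retention tag so data-lifecycle tooling respects the six-year floor
        "meta": {"tag": [{"system": "https://scribing.io/retention-policy",
                          "code": "ny-6yr"}]},
    }

# A patient who opts out yields a provision.type of "deny"
consent = build_ai_disclosure_consent("12345", "67890",
                                      date(2026, 3, 15), permit=False)
```

Because the `provision.period` spans only the encounter date, each visit produces its own resource rather than extending a blanket authorization.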

The contrast is stark: competitors tell you that you should disclose. We tell you how the disclosure must be structured, where it must persist, how long it must survive, and what happens when a regulator asks to see it six years later.

Scribing.io Clinical Logic: Handling the Manhattan Endocrinology Allergy-Misclassification Scenario

This section walks through a scenario that exposes the full liability surface of non-compliant AI documentation—and demonstrates how Scribing.io's clinical logic resolves it at every failure point. It is modeled on complaint patterns we observe from the New York OPMC.

The Scenario

A Manhattan endocrinology clinic uses an AI pre-chart summarizer for a new type 2 diabetes patient. No pre-visit disclosure or opt-out capture occurs. After an allergy misclassification is caught in the note, the patient files a complaint. OPMC requests proof of disclosure and an AI audit trail, stalling dozens of encounters and creating civil-penalty exposure.

Failure-Point Analysis Without Scribing.io

| Failure Point | What Went Wrong | Regulatory Exposure |
|---|---|---|
| 1. No pre-visit disclosure | Patient was never informed that AI was summarizing their medical history before the encounter | Direct violation of proposed AI Transparency Act pre-visit trigger; OPMC inquiry grounds |
| 2. No opt-out capture | Patient preference regarding AI involvement was never solicited or recorded | No Consent resource exists; clinic cannot demonstrate patient autonomy was respected |
| 3. Allergy misclassification propagated | AI summarizer reclassified a documented sulfa allergy as "sulfonamide sensitivity—low risk," and no human verification step caught it before the note was signed | Clinical safety event; malpractice exposure; potential adverse drug event if metformin alternatives with sulfonamide structure were prescribed |
| 4. No AI audit trail | No record of which model version generated the summary, when it ran, or what the original vs. modified output was | OPMC cannot reconstruct the AI's role; investigation scope expands to all encounters using the same summarizer |
| 5. Encounter cascade | OPMC's inability to isolate the issue to a single encounter forces a broad records request | Dozens of encounters stalled; 99204 and related claims at risk of recoupment; civil-penalty exposure per encounter |

How Scribing.io Resolves Every Failure Point — Step by Step

Step 1: Pre-Visit Disclosure (Automated, Multi-Channel)

When the appointment is scheduled, Scribing.io's integration layer triggers disclosure through three channels simultaneously:

  • Appointment-reminder SMS (≤160 characters): "[Clinic Name]: Your upcoming visit uses AI-assisted chart review. You may opt out. Reply STOP-AI or ask at check-in. See [portal link]."

  • Patient portal banner: Persistent notification on the appointment detail page, linking to a plain-language explanation of AI involvement and a one-click opt-out form.

  • Telehealth waiting room (if applicable): Interstitial disclosure screen requiring acknowledgment before the session begins.

All three channels log delivery confirmation and are linked to the encounter via appointment ID. This satisfies the "before the visit" temporal trigger in the Act's anchor requirement: patients must be told before the visit if an algorithm is summarizing their medical history.

Step 2: Encounter-Level Consent Capture

At check-in (or portal acknowledgment), the patient's preference is recorded as a discrete FHIR Consent resource. Critical design decisions:

  • The provision.period is encounter-scoped (single calendar day), not a blanket authorization

  • policyAuthority explicitly references New York

  • category specifies ai-history-summarization — not generic "AI use"

  • The resource is immutable once the encounter closes; modifications generate a new versioned resource

If a patient opts out, the Consent resource records provision.type = "deny", and the scribe workflow suppresses AI summarization for that encounter entirely.

Step 3: Human Verification of the Allergy Line

Scribing.io's scribe workflow includes a mandatory allergy-line verification checkpoint. When the AI summarizer outputs an allergy list, the scribe interface flags any allergy reclassifications, severity changes, or deletions in a highlighted diff view. The physician or scribe must explicitly confirm or correct each flagged item before the note can be signed. This aligns with JAMA's 2024 guidance on clinician oversight of AI-generated clinical content.

In this scenario, the sulfa → "sulfonamide sensitivity—low risk" reclassification would be flagged as a severity downgrade. The clinician reviews, corrects it back to the documented allergy, and the correction is logged as a discrete event.

Step 4: AuditEvent and Provenance Chain

Every AI-involved action generates a FHIR AuditEvent linked to a Provenance resource. The AuditEvent captures:

  • Model identifier: Scribing.io Pre-Chart Summarizer v1.4.2-20260301

  • Recorded timestamp: 2026-03-15T08:45:12-05:00

  • Entity reference: linked to Encounter/67890

  • Agent role: requestor = false (machine agent, not human)

When the allergy correction occurs, a second AuditEvent logs the original AI output, the corrected value, the correcting clinician's NPI, and the correction timestamp. The Provenance resource chains both events to create a complete audit trail.
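The two-event chain described above, AI generation followed by a logged human correction, can be sketched as follows. The dataclass is a loose stand-in for a FHIR AuditEvent, not the real resource schema; the model version, encounter ID, and NPI are the illustrative values from the scenario.

```python
# Sketch: logging AI generation and a subsequent human correction as two
# linked audit events. Loosely modeled on FHIR AuditEvent; values are
# illustrative, taken from the scenario above.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    encounter: str
    agent: str              # model identifier or clinician NPI
    is_machine: bool        # mirrors requestor=false for machine agents
    action: str
    detail: dict = field(default_factory=dict)
    recorded: str = ""

    def __post_init__(self):
        if not self.recorded:
            self.recorded = datetime.now(timezone.utc).isoformat()

# Event 1: the AI summarizer runs
generation = AuditEvent(
    encounter="Encounter/67890",
    agent="Scribing.io Pre-Chart Summarizer v1.4.2-20260301",
    is_machine=True,
    action="summarize-history",
)

# Event 2: clinician corrects the allergy line; original and corrected
# values are both preserved so the diff is reproducible years later
correction = AuditEvent(
    encounter="Encounter/67890",
    agent="NPI:1234567890",
    is_machine=False,
    action="correct-allergy",
    detail={"original": "sulfonamide sensitivity—low risk",
            "corrected": "sulfa allergy (documented)"},
)

# A Provenance-style chain links both events to the encounter
provenance_chain = [generation, correction]
```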

Step 5: Regulatory Response Readiness

When OPMC requests proof of disclosure and an AI audit trail, the clinic exports from Scribing.io's audit dashboard:

  1. The Consent resource showing pre-visit disclosure delivery and patient acknowledgment

  2. The SMS carrier delivery receipt with timestamp proving "before the visit" compliance

  3. The AuditEvent chain showing model v1.4.2, the allergy misclassification, the human correction, and the final signed note

  4. The Provenance resource linking all artifacts to Encounter/67890

The inquiry is scoped to a single encounter. The 99204 claim is preserved. No cascade. No civil penalty. No license jeopardy.

Compare this to California AI Laws requirements, where the timing trigger differs but the audit-trail obligation is equally stringent.

Pre-Visit Disclosure Trigger Architecture: Satisfying "Before the Visit" in Three Channels

The proposed AI Transparency Act's most operationally challenging requirement is its temporal trigger: the patient must be informed before the visit. This is not "at check-in." It is not "during the encounter." It is before. The NIH's 2023 systematic review on patient notification effectiveness demonstrates that multi-channel redundancy significantly improves both comprehension and documented compliance rates.

Scribing.io addresses this with a redundant three-channel architecture:

| Channel | Trigger Event | Timing | Format Constraints | Proof of Delivery | Fallback |
|---|---|---|---|---|---|
| Appointment-Reminder SMS | Appointment created or confirmed in PM system | 24–72 hours before visit (configurable) | ≤160 characters; plain language; opt-out instruction | Carrier delivery receipt logged to encounter | Escalate to phone call queue or portal-only disclosure |
| Patient Portal Banner | Appointment visible on patient dashboard | Persistent from appointment creation until encounter close | HTML banner with expandable explanation; one-click opt-out | Banner impression + click-through logged per session | If no portal account, SMS and check-in bear full weight |
| Telehealth Waiting Room | Patient joins virtual waiting room | Before clinician connection; blocks session start until acknowledged | Full-screen interstitial; requires explicit "I understand" or "Opt out" action | Click event + timestamp stored as AuditEvent | Phone call disclosure with verbal consent recorded |

Timing-Proof Logic

The system enforces a hard constraint: if no disclosure delivery proof exists with a timestamp preceding the encounter's period.start, the AI summarizer is suppressed for that encounter. This is a fail-closed design. The scribe workflow proceeds without AI pre-chart summarization, and the clinician is notified at chart open that AI was not used due to disclosure non-delivery.
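A minimal sketch of the fail-closed gate, assuming the delivery-proof timestamp and opt-out flag are already resolved upstream; function and parameter names are illustrative:

```python
# Sketch of the fail-closed timing check: AI summarization runs only when a
# disclosure delivery proof exists with a timestamp strictly before the
# encounter's period.start. Names are illustrative.
from datetime import datetime
from typing import Optional

def ai_summarizer_allowed(disclosure_delivered_at: Optional[datetime],
                          encounter_start: datetime,
                          patient_opted_out: bool) -> bool:
    if patient_opted_out:
        return False                     # explicit deny always wins
    if disclosure_delivered_at is None:
        return False                     # fail closed: no proof, no AI
    return disclosure_delivered_at < encounter_start  # "before the visit"

visit = datetime(2026, 3, 15, 9, 0)
assert ai_summarizer_allowed(datetime(2026, 3, 13, 10, 0), visit, False)
assert not ai_summarizer_allowed(None, visit, False)              # no proof
assert not ai_summarizer_allowed(datetime(2026, 3, 15, 9, 30), visit, False)
```

Note that the comparison is strict: a disclosure delivered at or after `period.start` still suppresses the summarizer, which is what makes the design fail-closed rather than best-effort.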

This fail-closed behavior is the critical differentiator. Competitors that allow AI processing without confirmed pre-visit disclosure create an affirmative violation on every such encounter. Scribing.io makes non-compliance structurally impossible.

SMS Character Budget Breakdown

The 160-character constraint is non-negotiable for single-segment delivery on all carriers. Our template allocates:

  • Clinic identifier: 15–25 characters

  • AI disclosure statement: 60–80 characters

  • Opt-out instruction: 30–40 characters

  • URL (shortened): 20–25 characters

Example: "DrSmith Endo: Your 3/15 visit uses AI chart review. Opt out: reply STOP-AI or visit scrbng.io/opt" (97 characters, well within budget).
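The budget check itself is trivial to enforce in code. A sketch using the template above (GSM-7 character set assumed; non-GSM characters shrink the single-segment budget to 70, so a production check should also validate the character set):

```python
# Sketch: validating a disclosure SMS against the 160-character
# single-segment budget described above.
def sms_within_budget(message: str, limit: int = 160) -> bool:
    # Assumes GSM-7 encoding; UCS-2 messages would cap at 70 characters
    return len(message) <= limit

template = ("DrSmith Endo: Your 3/15 visit uses AI chart review. "
            "Opt out: reply STOP-AI or visit scrbng.io/opt")
assert sms_within_budget(template)
```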

FHIR & HL7 v2 Interoperability for AI Disclosure Compliance

Not every EHR in a Manhattan practice supports FHIR R4 write operations. Scribing.io's interoperability layer accounts for this reality with a dual-path architecture:

Path A: FHIR R4 Native (Preferred)

When the EHR supports FHIR R4 write, Scribing.io creates three linked resources per AI-involved encounter:

  1. Consent — encounter-scoped AI disclosure and patient preference

  2. AuditEvent — model name, version, timestamp, and any corrections

  3. Provenance — chain linking Consent, AuditEvent, and the DocumentReference (signed note)

All three resources include a meta.tag with system='https://scribing.io/retention-policy' and code='ny-6yr', ensuring the EHR's data lifecycle management respects the six-year floor.

Path B: HL7 v2 Fallback (Legacy EHR Systems)

When FHIR write is unavailable, Scribing.io transmits an HL7 v2 ORU^R01 message with:

| Segment | Field | Value | Purpose |
|---|---|---|---|
| OBX-3 | Observation Identifier | AI_DISCLOSURE^AI Involvement Disclosure^SCRIBING | Discrete code identifying this as an AI disclosure artifact |
| OBX-5 | Observation Value | PERMIT or DENY | Patient's encounter-level preference |
| OBX-14 | Date/Time of Observation | Disclosure delivery timestamp | Proves "before the visit" timing |
| OBX-17 | Observation Method | SMS^Portal^Telehealth | Channel(s) used for disclosure |

Additionally, a signed PDF is attached as a Media resource (or HL7 v2 OBX with ED datatype) containing the full disclosure text, patient acknowledgment timestamp, and model-version metadata. This PDF is mapped to the encounter and subject to the same six-year retention tag.
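A sketch of how the OBX segment in the table might be assembled: the field positions (OBX-3, OBX-5, OBX-14, OBX-17) follow the table above, but the builder is deliberately simplified (no MSH envelope, no HL7 escaping, single segment only).

```python
# Sketch: assembling a pipe-delimited OBX segment for the HL7 v2 fallback
# path. Field positions follow the table above; everything else (no MSH
# envelope, no escaping) is simplified for illustration.
def build_obx(set_id: int, identifier: str, value: str,
              observed_at: str, method: str) -> str:
    # OBX fields: 1=set ID, 2=value type, 3=identifier, 5=value,
    # 14=date/time of observation, 17=method
    fields = [""] * 17
    fields[0] = str(set_id)
    fields[1] = "ST"
    fields[2] = identifier
    fields[4] = value
    fields[13] = observed_at
    fields[16] = method
    return "OBX|" + "|".join(fields)

segment = build_obx(
    1,
    "AI_DISCLOSURE^AI Involvement Disclosure^SCRIBING",
    "PERMIT",
    "20260313104500-0500",       # illustrative pre-visit timestamp
    "SMS^Portal^Telehealth",
)
```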

Interoperability Decision Matrix

| EHR Capability | Scribing.io Path | Disclosure Artifact | Retention Enforcement |
|---|---|---|---|
| FHIR R4 Write + Search | Path A (Native) | Consent + AuditEvent + Provenance | meta.tag retention policy; immutable after encounter close |
| FHIR R4 Read-Only | Path B + FHIR Read verification | HL7 v2 ORU^R01 + signed PDF + read-back confirmation | PDF stored in EHR document store; Scribing.io mirror with 6-year TTL |
| HL7 v2 Only | Path B (Full Fallback) | ORU^R01 with OBX AI_DISCLOSURE + signed PDF Media | Scribing.io retention store; quarterly reconciliation with EHR |

This dual-path design ensures compliance regardless of EHR modernization status—a critical consideration for independent practices and community health centers that may still run HL7 v2 interfaces.

Technical Reference: ICD-10 Documentation Standards

AI-generated documentation introduces specific denial risks when codes lack the specificity that payers require. Scribing.io's code-validation layer enforces maximum specificity before note signature, preventing the most common AI-related documentation failures.

Encounter-Type Codes Relevant to AI Disclosure

When an encounter involves AI-related patient counseling or administrative documentation, two codes apply: Z71.89 (Other specified counseling) and Z02.9 (Encounter for administrative examinations, unspecified). Z71.89 is appropriate when the clinician spends documented time counseling the patient about AI involvement in their care, for example when the patient asks questions about the AI pre-chart summary or requests an explanation of how the AI processed their history. Z02.9 applies in the limited administrative contexts where the visit exists primarily to satisfy documentation requirements tied to AI disclosure.

How AI Summarizers Cause Specificity Failures

The most common AI-related coding failure occurs when a summarizer defaults to unspecified codes rather than carrying forward the specificity documented in the source record. For example, a patient with documented familial hypercholesterolemia (E78.01) may be summarized with unspecified hyperlipidemia (E78.5) — a downgrade that triggers payer edits and potential denials.

Scribing.io addresses this with a code-specificity guard:

  1. Source comparison: The AI summarizer's output codes are compared against the most recent problem list and prior encounter codes

  2. Specificity-loss flag: Any code that is less specific than the prior-documented code is flagged for clinician review

  3. Hard block on unspecified defaults: If a 4th-character unspecified code is used when a more specific code exists in the patient's history, the note cannot be signed without explicit clinician override

  4. Audit trail: Any clinician override of a specificity flag is logged in the AuditEvent chain with clinical justification
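The specificity-loss flag in step 2 can be sketched with a simple prefix heuristic that compares codes within the same three-character ICD-10 category. A production guard would consult the full code hierarchy rather than comparing code lengths, but the shape of the check is the same:

```python
# Sketch of the code-specificity guard: flag any ICD-10 code in the AI
# output that is less specific than the code already documented for the
# same three-character category. Length comparison is a heuristic;
# a real implementation would walk the actual code hierarchy.
def specificity_flags(prior_codes: list[str],
                      ai_codes: list[str]) -> list[tuple[str, str]]:
    """Return (prior, ai) pairs where the AI output lost specificity."""
    flags = []
    prior_by_category = {c[:3]: c for c in prior_codes}
    for code in ai_codes:
        prior = prior_by_category.get(code[:3])
        # Shorter code in the same category = specificity downgrade
        if prior and len(code) < len(prior):
            flags.append((prior, code))
    return flags

# Example from the text: familial hypercholesterolemia (E78.01)
# downgraded to unspecified hyperlipidemia (E78.5) gets flagged;
# E11.65 carried forward unchanged does not.
flags = specificity_flags(["E78.01", "E11.65"], ["E78.5", "E11.65"])
```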

Documentation Standards for the Endocrinology Scenario

In our Manhattan scenario, the type 2 diabetes patient's encounter requires documentation supporting a 99204 (new patient, moderate complexity). Scribing.io ensures:

  • E11.65 (Type 2 diabetes mellitus with hyperglycemia) carries forward from the referral note rather than defaulting to E11.9 (without complications)

  • The allergy documentation (sulfa allergy) is captured as a discrete AllergyIntolerance resource, not buried in narrative text where it cannot be coded or queried

  • Medical decision-making elements are explicitly documented to support the 99204 level, with AI-assisted documentation clearly delineated from clinician-authored assessment

Per CMS E/M documentation guidelines, the time and complexity thresholds for 99204 require 45–59 minutes of total time or moderate-complexity medical decision-making. AI-assisted documentation must not obscure the elements that justify this level.

Retention, Audit, and Civil-Penalty Risk Matrix

The following matrix quantifies the regulatory exposure when disclosure artifacts are absent, incomplete, or non-retainable:

| Compliance Gap | Regulatory Trigger | Penalty Range | Claim Impact | Scribing.io Mitigation |
|---|---|---|---|---|
| No pre-visit disclosure delivered | Patient complaint → OPMC inquiry | Civil penalty per encounter; potential licensure action | All AI-involved encounter claims vulnerable to recoupment | Fail-closed design: AI suppressed if no delivery proof exists |
| Disclosure delivered but not retained | OPMC records request at year 4 | Equivalent to no disclosure under NY retention law | Retrospective claim vulnerability | Immutable Consent resource with 6-year retention tag |
| No model-version tracking | Adverse event investigation | Investigation scope expands to all encounters using same model | Cascade risk: dozens to hundreds of claims stalled | AuditEvent captures model-version per encounter |
| No correction trail | Allergy-related adverse event | Malpractice + OPMC + potential HIPAA breach (inaccurate record) | Claim denial + civil liability | Diff-based correction logging with clinician NPI and timestamp |
| Blanket consent (not encounter-scoped) | Patient disputes AI use for specific visit | Consent cannot be mapped to the encounter in question | Unable to demonstrate compliance for that specific encounter | Encounter-scoped provision.period on every Consent resource |

Six-Year Retention Implementation

Scribing.io's retention architecture uses three mechanisms in parallel:

  • EHR-resident artifacts: FHIR resources or HL7 v2 segments stored within the EHR's native data store, subject to the organization's existing retention policies

  • Scribing.io mirror store: An encrypted, HIPAA-compliant secondary store with a hard six-year TTL floor and automatic legal-hold capability

  • Quarterly reconciliation: Automated comparison between EHR-resident and mirror artifacts to detect accidental deletion or data-lifecycle policy conflicts
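The quarterly reconciliation step reduces to a set comparison over artifact identifiers. A sketch with illustrative IDs:

```python
# Sketch of the quarterly reconciliation: compare artifact IDs held in the
# EHR against the mirror store and surface anything missing on either side.
# Store contents are illustrative.
def reconcile(ehr_ids: set[str], mirror_ids: set[str]) -> dict[str, set[str]]:
    return {
        "missing_in_ehr": mirror_ids - ehr_ids,     # possible EHR auto-purge
        "missing_in_mirror": ehr_ids - mirror_ids,  # mirror write failure
    }

report = reconcile({"consent-1", "audit-1"},
                   {"consent-1", "audit-1", "audit-2"})
```

Anything surfacing in `missing_in_ehr` points at a data-lifecycle policy purging artifacts before the six-year floor, which is exactly the conflict Phase 1 of the roadmap below asks you to hunt for.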

Multi-State Compliance Comparison: New York vs. California vs. Federal

Practices operating across state lines face divergent requirements. This comparison identifies the specific operational differences that demand jurisdiction-aware configuration:

| Requirement | New York (Proposed 2026) | California (AB 3030, effective 2025) | Federal (HIPAA/ONC) |
|---|---|---|---|
| Disclosure timing | Before the visit | Before or during the encounter | No AI-specific timing requirement |
| Disclosure content | Must specify that an algorithm is summarizing medical history | Must disclose AI involvement in "patient communications" | General TPO disclosure under Notice of Privacy Practices |
| Opt-out right | Explicit encounter-level opt-out required | Opt-out required; scope less defined | No AI-specific opt-out right |
| Record retention | 6 years (medical records); applies to disclosure artifacts | 7 years (adults); 1 year past age 18 (minors) | 6 years (HIPAA policies/procedures) |
| Audit trail specificity | Model name, version, timestamp, corrections | Less prescriptive; "meaningful transparency" | Access logs under Security Rule; no AI-specific fields |
| Enforcement body | OPMC + AG | Medical Board + AG | OCR + OIG |

Scribing.io's jurisdiction engine automatically applies the most restrictive applicable standard based on practice location, patient residence, and encounter type. A New York practice treating a California-resident patient via telehealth triggers both states' requirements simultaneously. For California-specific implementation details, see our California AI Laws guide.
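For retention specifically, the "most restrictive applicable standard" rule reduces to taking the maximum across applicable jurisdictions. A sketch using the year values from the comparison table (treat them as illustrative of the proposed and enacted rules, not legal advice):

```python
# Sketch: applying the most restrictive retention period when multiple
# jurisdictions attach to one encounter (practice location, patient
# residence, telehealth). Year values follow the comparison table above.
RETENTION_YEARS = {"NY": 6, "CA": 7, "FEDERAL": 6}

def applicable_retention(jurisdictions: set[str]) -> int:
    return max(RETENTION_YEARS[j] for j in jurisdictions)

# NY practice treating a CA-resident patient via telehealth:
# both states (and HIPAA) apply, so California's 7-year floor governs
years = applicable_retention({"NY", "CA", "FEDERAL"})
```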

Implementation Roadmap for Chief Compliance Officers

Deploying compliant AI disclosure infrastructure is not a single-sprint project. The following roadmap sequences activities by dependency and regulatory deadline:

Phase 1: Discovery and Gap Assessment (Weeks 1–3)

  1. Inventory all AI touchpoints: Identify every algorithm that reads, summarizes, or modifies patient records — including third-party tools embedded in your EHR

  2. Map EHR interoperability capability: Determine whether your EHR supports FHIR R4 write, FHIR R4 read-only, or HL7 v2 only

  3. Audit current disclosure practices: Document existing consent forms, notice of privacy practices, and patient communication templates

  4. Identify retention-policy conflicts: Check whether your EHR's data lifecycle policies could auto-purge disclosure artifacts before six years

Phase 2: Architecture Configuration (Weeks 4–6)

  1. Configure Scribing.io interoperability path: FHIR native or HL7 v2 fallback based on Phase 1 findings

  2. Deploy pre-visit disclosure templates: SMS, portal banner, and telehealth interstitial configured for your practice branding

  3. Enable allergy-line verification: Activate the diff-based checkpoint in the scribe workflow

  4. Set retention tags: Confirm ny-6yr retention policy is applied to all disclosure-related resources

Phase 3: Staff Training and Workflow Testing (Weeks 7–9)

  1. Front-desk training: Check-in staff must understand the opt-out workflow and how to record verbal preferences

  2. Clinician training: Physicians must understand the allergy-verification checkpoint and why override requires documentation

  3. Tabletop OPMC drill: Simulate a records request; verify that the audit export produces a complete, encounter-scoped artifact chain within 72 hours

Phase 4: Go-Live and Continuous Monitoring (Week 10+)

  1. Enable production disclosure triggers: All new appointments generate pre-visit AI disclosures

  2. Monitor delivery rates: Dashboard tracking SMS delivery success, portal impressions, and opt-out rates

  3. Quarterly retention reconciliation: Automated comparison between EHR and Scribing.io mirror stores

  4. Annual policy review: As the AI Transparency Act moves through the legislative process, Scribing.io pushes configuration updates to match final enacted language


Book a 15-minute demo to see our NY AI Transparency Act workflow in action: automated pre-visit disclosure (SMS/portal/telehealth), encounter-level Consent + AuditEvent with six-year retention, model-version tracking, and instant audit export. Schedule at Scribing.io →

Conclusion: Operational Compliance Is the Only Compliance

Aspirational frameworks do not survive OPMC inquiries. AMA principles do not generate FHIR Consent resources. Vendor whitepapers do not produce encounter-scoped AuditEvents with model-version metadata.

New York's 2026 AI Transparency Act—whether enacted as proposed or modified during committee—establishes the regulatory floor that every AI-using practice in the state must build to. The six-year retention intersection makes this a compound obligation that cannot be satisfied with a one-time consent form or a generic privacy notice.

Scribing.io does not sell disclosure compliance as a feature. It is the architectural foundation of how our scribe workflow operates. Every encounter. Every model version. Every correction. Logged, retained, exportable, defensible.

That is what operational compliance looks like when a regulator is at the door.

Still not sure? Book a free discovery call now.

Frequently Asked Questions

What is Scribing.io?

How does the AI medical scribe work?

Does Scribing.io support ICD-10 and CPT codes?

Can I edit or review notes before they go into my EHR?

Does Scribing.io work with telehealth and video visits?

Is Scribing.io HIPAA compliant?

Is patient data used to train your AI models?

How do I get started?


Didn’t find what you’re looking for?
Book a call with our AI experts.
