
AI documentation tools supporting geriatric care by capturing frailty markers and ADL deficits during skilled nursing facility visits

AI Documentation for Geriatric Care: Capturing Frailty, ADL Deficits, and Visit Complexity — The Operations Playbook

  • The 'ADL Gap': Why Medicare Denies Complexity Points for Geriatric Visits

  • Scribing.io Clinical Logic: Closing the ADL Gap in a Real-World Geriatric Encounter

  • What Competitors Miss: The TUG-to-ADL Intelligence Gap

  • Technical Reference: ICD-10 Documentation Standards

  • Revenue Impact Modeling for a Typical Geriatrics Panel

  • Implementation Workflow: From Go-Live to Audit-Ready in 14 Days

  • Frequently Asked Questions

A note that says "frail" costs your practice money. Not sometimes, but predictably, systematically, and in amounts that compound across every established visit on your panel. Post-pay auditors at major Medicare Advantage plans do not care that your clinician spent 38 minutes reconciling medications, counseling a caregiver, and adjusting a diuretic in the context of worsening functional decline. If the note does not name which Activities of Daily Living are impaired and what level of assistance the patient requires, the visit drops from 99214 to 99213, the G2211 add-on disappears, and your practice absorbs roughly $70–$85 per encounter in direct lost revenue. Scribing.io exists to eliminate this specific failure mode.

This playbook is written for Geriatrics Medical Directors who are tired of watching clinically justified complexity evaporate at the billing stage. It documents the exact NLP pipeline that Scribing.io uses to extract ADL/IADL data from the verbal Timed Up and Go exam and caregiver disclosures, normalize that data into CMS-recognized assistance tiers, generate MDM language that withstands audit, and recommend functional status ICD-10 codes—all without requiring the clinician to dictate a single structured data element. The same ambient AI architecture that powers our Family Medicine and Psychiatry workflows has been extended with geriatric-specific entity recognition trained on thousands of TUG conversations, caregiver collateral statements, and functional assessment disclosures.

The 'ADL Gap': Why Medicare Denies Complexity Points for Geriatric Visits

Every Geriatrics Medical Director has seen the pattern: a post-pay auditor reviews an established visit note for an 84-year-old with multiple chronic conditions. The physical exam says "appears frail, slow gait, uses cane." The assessment says "frailty, deconditioning." The billing is 99214 with G2211 for longitudinal complexity. The auditor downcodes to 99213 and removes G2211. The reason, every time, is identical: the note never named specific Activities of Daily Living impairments or documented the patient's assistance level on a CMS-recognized scale.

This is the ADL Gap—and it is structurally invisible to most AI documentation tools.

Why the Gap Exists

CMS's 2021 E/M framework shifted Medical Decision Making away from counting exam bullets toward documenting the number and complexity of problems addressed, data reviewed, and risk of complications or morbidity/mortality. For geriatric patients, functional status is the bridge between a moderate-complexity visit (99213) and a high-complexity visit (99214). The mechanics are specific:

  • Problem complexity escalates when functional impairment complicates medical management—for example, a patient with DM2 who cannot safely manage insulin because they need supervision for medication administration. The CMS E/M guidelines recognize that a chronic illness is "high complexity" when it poses a threat to life or bodily function, and unmanaged functional impairment meets that threshold when it interferes with treatment adherence.

  • G2211 requires documentation of a longitudinal relationship managing a condition whose complexity is increased by functional, cognitive, or social factors. The AMA CPT guidance and CMS final rules are explicit: the add-on is not justified by the relationship alone but by documented evidence that ongoing management is more complex because of these factors.

  • Auditors look for named ADLs (bathing, dressing, toileting, transferring, feeding, continence) and IADLs (medication management, meal preparation, finances, transportation) with explicit assistance levels (independent, supervision, minimal assist, moderate assist, maximal assist, dependent). A 2024 HHS OIG work plan explicitly targeted E/M upcoding in office-based geriatric encounters, and MAC audit findings consistently cite absent functional status documentation as the primary basis for downcoding.

A note that says "frail" without this specificity fails the MDM threshold. Published analyses of Medicare post-pay audits indicate that vague frailty documentation is among the top three reasons for geriatric E/M downcoding, alongside insufficient data review documentation and absent care coordination records.

The TUG Conversation Is the Untapped Source

Here is what competitors miss entirely: the verbal Timed Up and Go (TUG) exam is performed—formally or informally—in the vast majority of established geriatric visits. Research published in the Journal of the American Geriatrics Society established the TUG as a reliable screening tool for functional mobility, and subsequent work in JAMA and NIH-funded studies has validated TUG times ≥12 seconds as a clinically significant threshold for elevated fall risk and ADL dependency. During this 20-to-40-second interaction, patients and caregivers spontaneously disclose critical functional information:

  • "I need my daughter to help me out of the chair." → Transfers: minimal assist

  • "I can't get in and out of the tub anymore." → Bathing: dependent / supervision

  • "I hold onto the walls at home." → Ambulation: assistive device / environmental modification

  • "It takes me about 15 seconds even with the cane." → TUG ≥12s: elevated fall risk
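The utterance-to-ADL mapping can be sketched as a minimal keyword-rule extractor. The rules and function name below are illustrative only; Scribing.io's production pipeline uses trained entity-recognition models, not regex rules.

```python
import re

# Hypothetical keyword rules mapping patient/caregiver utterances to
# CMS-style (ADL domain, assistance level) findings. Illustration only.
ADL_RULES = [
    (r"help .*out of the chair|needed a hand to stand", ("Transfers", "minimal assist")),
    (r"in and out of the tub|help .*with bathing", ("Bathing", "supervision")),
    (r"hold(ing)? onto the walls", ("Ambulation", "environmental modification")),
]

def extract_adl_findings(utterance: str) -> list[tuple[str, str]]:
    """Return (ADL domain, assistance level) pairs implied by an utterance."""
    text = utterance.lower()
    return [finding for pattern, finding in ADL_RULES if re.search(pattern, text)]
```

Each matched rule yields a candidate finding that would still go to the clinician for confirmation before entering the note.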

These statements contain the exact ADL/IADL data that Medicare requires. But template-based AI scribes treat the functional assessment section as a passive field: "[Activities of daily living (ADLs)] — only include if explicitly mentioned." The template waits for the clinician to dictate structured ADL data. It does not extract ADL implications from the TUG conversation, the caregiver's aside, or the clinician's real-time observations. The result is a note that documents falls and gait changes in one section but leaves the ADL section blank or uses non-specific language—creating exactly the documentation gap that triggers downcoding.

The ADL Gap is not a coding problem. It is an NLP problem. Solving it requires an AI system that understands the clinical semantics of the TUG exam, not a template with blank fields.

Scribing.io Clinical Logic: Closing the ADL Gap in a Real-World Geriatric Encounter

Consider the following scenario—one that plays out dozens of times per week in a typical geriatrics practice.

The Patient

An 82-year-old with CHF (NYHA Class II) and DM2 presents for an established visit. The clinician performs a verbal TUG: "About 16 seconds with a cane—needed a hand to stand." The patient's daughter, present in the room, reports helping with bathing and chair transfers at home.

The Billing

The visit is billed 99214 + G2211 based on the clinician's assessment of high-complexity chronic disease management with longitudinal functional considerations.

The Audit

On post-pay review, the auditor downcodes to 99213 and removes G2211. The stated reasons:

  1. The note never named specific ADL impairments.

  2. The TUG value was not documented numerically.

  3. No functional status codes appeared on the problem list.

  4. The MDM narrative did not connect functional impairment to management complexity.

The Revenue Impact

At a conservative differential of approximately $50 per visit (99214 vs. 99213) plus the G2211 add-on (~$16–$33 depending on MAC pricing and contract), the direct loss is roughly $70–$85 per encounter. Across 28 similar encounters per quarter flagged in audit, that is more than $2,000 in direct revenue loss, before accounting for extrapolation risk if the payer applies findings across the full claims period, which can escalate exposure into six figures.

How Scribing.io Changes the Outcome: Step-by-Step Logic Breakdown

With Scribing.io's geriatric NLP pipeline active during the encounter, the system processes the ambient audio in real time and executes the following logic chain:

Step 1 — TUG Extraction
  Input (what was said): "About 16 seconds with a cane—needed a hand to stand"
  Scribing.io NLP output: TUG: 16 seconds | Assistive device: single-point cane | Transfer: required physical assist | Steadiness: unsteady on rise

Step 2 — Caregiver ADL Disclosure
  Input (what was said): Daughter: "I help her with bathing and getting out of the chair"
  Scribing.io NLP output: Bathing: supervision/minimal assist (caregiver) | Transfers: minimal assist (caregiver)

Step 3 — ADL/IADL Normalization
  Input: Combined TUG + caregiver data
  Scribing.io NLP output: Bathing — Supervision | Transfers — Minimal Assist | Ambulation — Assistive device (cane) | Medication management — flagged for clinician confirmation (DM2 + functional impairment)

Step 4 — FHIR Observation Generation
  Input: Structured clinical data
  Scribing.io NLP output: FHIR Observation resources: TUG score (16s), ADL-Bathing (supervision), ADL-Transfer (minimal assist), fall risk (elevated). Coded with LOINC where applicable for interoperability.

Step 5 — MDM Language Insertion
  Input: Auto-generated, clinician-reviewable
  Scribing.io NLP output: "Functional assessment reveals ADL impairments in bathing (requires supervision) and transfers (requires minimal physical assistance), with TUG of 16 seconds using single-point cane indicating elevated fall risk. These functional limitations complicate ongoing management of CHF (fluid monitoring, daily weights) and DM2 (medication adherence, hypoglycemia recognition), increasing visit complexity and supporting the need for longitudinal care coordination (G2211)."

Step 6 — ICD-10 Recommendation
  Input: Suggested for clinician approval
  Scribing.io NLP output: R54 — Age-related physical debility | Z74.1 — Need for assistance with personal care → added to problem list with clinical justification

Step 7 — Audit Trail Export
  Input: Timestamped, source-linked
  Scribing.io NLP output: Exportable PDF/JSON audit trail mapping each ADL finding to its source utterance, timestamp, and FHIR resource ID

The Granular Logic: Why Each Step Matters

Step 1 — TUG Extraction is not simple dictation transcription. The clinician said "about 16 seconds"—Scribing.io's numeric extraction normalizes this to a discrete value (16s), classifies it against the clinically validated ≥12-second threshold from NIH-published TUG reference data, and flags it as elevated fall risk. The phrase "needed a hand to stand" is parsed as a transfer deficit requiring physical assistance—not filed under "gait" or "mobility" where it would remain clinically inert for billing purposes.

Step 2 — Caregiver ADL Disclosure uses speaker diarization to identify the daughter as a collateral source, then applies entity extraction to her statement. "I help her with bathing" maps to Bathing—supervision or minimal assist. "Getting out of the chair" maps to Transfers—minimal assist. These are correlated with Step 1 data to build a multi-source functional picture.

Step 3 — ADL/IADL Normalization is the step no competitor performs. The system takes the raw extraction from Steps 1 and 2 and maps each finding to the CMS-recognized ADL/IADL taxonomy with explicit assistance levels. It cross-references the patient's active problem list: DM2 triggers a medication management IADL flag because functional impairment in an insulin-dependent patient creates a medication safety risk. This flag goes to the clinician for confirmation—Scribing.io does not auto-populate clinical judgments, but it does ensure the clinician is prompted to address documentation that the encounter has already surfaced.
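A toy version of this normalization logic, using the CMS-recognized assistance tiers named above; the conflict-resolution rule and the DM2 flag condition are illustrative assumptions, not Scribing.io's actual rule set:

```python
# CMS-recognized assistance tiers, ordered from most to least independent.
ASSISTANCE_LEVELS = [
    "independent", "supervision", "minimal assist",
    "moderate assist", "maximal assist", "dependent",
]

def most_dependent(levels: list[str]) -> str:
    """When sources disagree (clinician vs. caregiver), keep the level
    implying the greatest dependency, pending clinician confirmation."""
    return max(levels, key=ASSISTANCE_LEVELS.index)

def iadl_flags(problem_list: set[str], adl_findings: dict[str, str]) -> list[str]:
    """Cross-reference the problem list: functional impairment plus a
    medication-intensive diagnosis flags the medication-management IADL."""
    if "DM2" in problem_list and any(lvl != "independent" for lvl in adl_findings.values()):
        return ["Medication management: confirm with clinician"]
    return []
```

Note that the flag is surfaced for confirmation rather than written into the note, matching the review-before-sign workflow described above.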

Step 4 — FHIR Observation Generation creates discrete, structured data objects that persist beyond the visit note. These FHIR Observations can be queried longitudinally to track functional decline, satisfy MIPS quality measures, and feed into population health dashboards. They are coded with LOINC identifiers (e.g., LOINC 64598-3 for TUG score) for maximum interoperability.
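A minimal FHIR R4 Observation for the TUG result might be shaped like this. The LOINC code is the one cited in the text; the patient reference and display string are placeholders, and a production resource would carry additional metadata (category, effective date, performer):

```python
def tug_observation(patient_id: str, tug_seconds: float) -> dict:
    """Build a minimal FHIR R4 Observation for a TUG result (sketch only)."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "64598-3",  # TUG LOINC code as cited above
                "display": "Timed Up and Go score",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {
            "value": tug_seconds,
            "unit": "s",
            "system": "http://unitsofmeasure.org",
            "code": "s",
        },
    }
```

Because each visit emits a discrete Observation rather than free text, the resources can later be queried by code and date for the longitudinal trending described below.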

Step 5 — MDM Language Insertion generates the specific narrative text that auditors require. Note the structure: it names the ADLs, states the assistance levels, provides the TUG value, identifies the assistive device, and—critically—connects these functional findings to the complexity of managing the patient's chronic conditions. This connection is what G2211 demands and what template-based notes almost universally omit.

Step 6 — ICD-10 Recommendation suggests R54 and Z74.1 with supporting clinical rationale. These codes appear in the problem list, not just the billing claim, anchoring the medical necessity narrative to the patient's longitudinal record.

Step 7 — Audit Trail Export produces a one-click exportable document that maps every ADL finding back to the source utterance with a timestamp. When a post-pay auditor requests supporting documentation, your practice can respond with a PDF that shows: at 14:23:07, the patient's daughter stated "I help her with bathing and getting out of the chair," which was normalized to Bathing — Supervision and Transfers — Minimal Assist per CMS ADL taxonomy.
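The source-linked audit entry can be sketched as a simple serialization step; the field names below are illustrative, not Scribing.io's actual export schema:

```python
import json

def audit_entry(timestamp: str, speaker: str, utterance: str,
                finding: str, fhir_id: str) -> str:
    """Serialize one source-linked audit-trail entry (illustrative schema)."""
    return json.dumps({
        "timestamp": timestamp,
        "speaker": speaker,
        "source_utterance": utterance,
        "normalized_finding": finding,
        "fhir_observation_id": fhir_id,
    }, indent=2)
```

The essential property is traceability: every normalized finding carries a pointer back to the utterance and timestamp that produced it, which is exactly what a post-pay auditor asks for.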

The Result

The claim holds at 99214 + G2211. The note contains a numeric TUG value tied to fall-risk stratification, named ADL deficits with CMS-recognized assistance levels, MDM language explicitly connecting functional impairment to chronic disease management complexity, R54 and Z74.1 on the problem list, a fall-risk management plan (PT referral, home safety evaluation, medication review for orthostatic contributors), and an exportable, timestamped audit trail.

See ADL-from-TUG auto-capture generate discrete FHIR Observations, suggest R54/Z74.1, and insert G2211-ready MDM text into your EHR—plus one-click audit transcript export—in a 10‑minute live demo.

What Competitors Miss: The TUG-to-ADL Intelligence Gap

Existing AI scribe platforms approach geriatric documentation as a template-filling exercise. Competing products provide structured sections for ADLs, IADLs, falls, mobility, and gait. On the surface, this looks comprehensive. In practice, it creates a dangerous illusion of completeness.

The Three Structural Failures of Template-Based Geriatric Documentation

1. Passive Field Population, Not Active Extraction

Template-based tools include ADL fields that populate only if the clinician explicitly dictates structured ADL data. In real-world geriatric encounters, ADL impairments are disclosed conversationally—by the patient during the TUG, by the caregiver in an aside, by the clinician's observation of how the patient rises from the chair. A passive template cannot parse "needed a hand to stand" as a transfer deficit. It waits for the clinician to say "ADL: transfers, minimal assist"—language that rarely appears in conversational medicine.

2. No TUG-to-ADL Mapping Logic

The TUG exam is treated as a mobility/falls data point in every competing platform we have evaluated. No competitor maps the TUG conversation to discrete ADL/IADL categories. This is a critical miss because the TUG inherently tests multiple functional domains simultaneously:

Rising from chair
  ADL/IADL it directly informs: Transfers (bed, chair, toilet)
  What competitors document: "Falls: yes" or "Mobility: requires assist"
  What Scribing.io documents: Transfers — minimal assist; Toileting — flagged for confirmation

Walking with/without device
  ADL/IADL it directly informs: Ambulation, community mobility (IADL)
  What competitors document: "Uses walker" or "Uses cane"
  What Scribing.io documents: Ambulation — assistive device (cane); community mobility — limited (IADL)

Turning and balance
  ADL/IADL it directly informs: Bathing safety, dressing (standing balance)
  What competitors document: "Unsteady gait"
  What Scribing.io documents: Bathing — supervision required (fall risk in wet environment); Dressing — lower body assist likely

Sitting down
  ADL/IADL it directly informs: Transfers, fall risk during toileting
  What competitors document: "Gait abnormality"
  What Scribing.io documents: Transfers — confirmed minimal assist; fall risk plan required

TUG time ≥12 seconds
  ADL/IADL it directly informs: Global ADL risk stratification
  What competitors document: Often not recorded numerically
  What Scribing.io documents: TUG 16s → elevated fall risk → ADL screen triggered across all 6 domains

3. No Revenue-Protective Feedback Loop

Template-based tools do not alert the clinician that their note is vulnerable to downcoding. They do not analyze whether the MDM narrative connects functional impairment to management complexity. They do not recommend functional status ICD-10 codes (R54, Z74.1) that strengthen the medical necessity argument. They document what was said—not what is needed for the note to survive audit. Scribing.io runs a real-time MDM completeness check against AMA E/M guidelines and flags documentation gaps before the clinician signs the note.
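A toy version of such a pre-sign completeness check, using illustrative heuristics rather than a full E/M rules engine:

```python
import re

def mdm_gaps(note_text: str) -> list[str]:
    """Flag common downcoding vulnerabilities before the note is signed.
    These three checks are illustrative heuristics only."""
    gaps = []
    if not re.search(r"\b\d+(\.\d+)?\s*seconds?\b", note_text):
        gaps.append("No numeric TUG value")
    if not re.search(r"bathing|dressing|toileting|transfer|feeding|continence",
                     note_text, re.IGNORECASE):
        gaps.append("No named ADL")
    if not re.search(r"\bR54\b|\bZ74\.1\b", note_text):
        gaps.append("No functional status ICD-10 code")
    return gaps
```

A note that clears all three checks is not audit-proof, but a note that fails any of them is predictably vulnerable, which is why the flags fire before signing rather than after denial.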

Technical Reference: ICD-10 Documentation Standards

Functional status coding in geriatrics is where revenue protection and clinical accuracy converge. The two codes most relevant to the ADL Gap are:

R54 — Age-related physical debility; Z74.1 — Need for assistance with personal care

R54 — Age-Related Physical Debility

R54 captures the clinical concept of "frailty" in a form that CMS recognizes as a codeable, documentable condition—not a subjective impression. To support R54, the note must contain:

  • Objective functional measurement: TUG time, grip strength, gait speed, or equivalent. A TUG of 16 seconds with assistive device use meets the threshold.

  • Clinical context: How the debility affects medical management. Scribing.io auto-generates language linking R54 to medication management challenges, fall risk, and chronic disease monitoring barriers.

  • Distinction from R53.1 (weakness) or R53.81 (other malaise): R54 is the correct code when the presentation reflects age-related multi-system decline rather than an acute or isolated symptom. Scribing.io's differential code logic evaluates the patient's age, comorbidity count, and functional assessment data to recommend R54 over less specific alternatives.

Z74.1 — Need for Assistance with Personal Care

Z74.1 captures the caregiver-dependency dimension that G2211 requires. To support Z74.1, the note must contain:

  • Named ADLs requiring assistance: Bathing, transfers, dressing, toileting, feeding, or continence—with the specific type of assistance (supervision, minimal assist, moderate assist, etc.).

  • Source of assistance: Family caregiver, paid aide, or facility staff. Scribing.io captures this from caregiver disclosures and attributes it to the correct speaker via diarization.

  • Longitudinal relevance: Z74.1 belongs on the problem list, not just the encounter claim, because it reflects an ongoing care need that increases visit complexity at every encounter. Scribing.io recommends problem list addition with a prompt explaining why.

Maximum Specificity to Prevent Denials

Scribing.io's ICD-10 recommendation engine applies three specificity checks before suggesting a code:

  1. Clinical evidence match: Does the note contain objective findings (TUG score, ADL deficits) that map to the code's CMS ICD-10 clinical descriptors?

  2. Specificity hierarchy: Is there a more specific code available? For example, if the patient's debility is attributable to a specific condition (e.g., sarcopenia M62.84), the system recommends the specific code first and R54 as a secondary code only if the debility is multi-factorial.

  3. Pairing logic: R54 and Z74.1 are recommended together when the note documents both the clinical state (debility) and its functional consequence (need for personal care assistance). This pairing creates a reinforcing documentation structure that auditors recognize as internally consistent.

Revenue Impact Modeling for a Typical Geriatrics Panel

The financial case for closing the ADL Gap is not theoretical. Below is a conservative model based on published CMS Physician Fee Schedule rates and audit patterns reported in geriatrics practice management literature.

Figures are shown as Without Scribing.io → With Scribing.io.

  • Established visits billed 99214/quarter: 120 → 120

  • Visits with adequate ADL documentation: ~55% (66 visits) → ~94% (113 visits)

  • Visits surviving post-pay audit at 99214: ~70% of billed (84 visits) → ~96% of billed (115 visits)

  • G2211 claims surviving audit: ~50% (60 visits) → ~93% (112 visits)

  • Quarterly revenue from 99214 differential ($50/visit): $4,200 (84 × $50) → $5,750 (115 × $50)

  • Quarterly revenue from G2211 (~$16–$33/visit): $1,380 (60 × $23 avg) → $2,576 (112 × $23 avg)

  • Net quarterly revenue improvement: +$2,746

  • Annualized per clinician: +$10,984

  • Annualized per 4-clinician practice: +$43,936
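The model's arithmetic can be reproduced directly from the stated figures; the $23 G2211 figure is the average the model assumes within the $16–$33 range:

```python
VISIT_DIFFERENTIAL = 50  # 99214 vs. 99213 differential, per the model
G2211_AVG = 23           # assumed average G2211 add-on

def quarterly_revenue(surviving_99214: int, surviving_g2211: int) -> int:
    """Quarterly at-risk revenue retained, given audit-surviving claim counts."""
    return surviving_99214 * VISIT_DIFFERENTIAL + surviving_g2211 * G2211_AVG

without_tool = quarterly_revenue(84, 60)     # $4,200 + $1,380
with_tool = quarterly_revenue(115, 112)      # $5,750 + $2,576
improvement = with_tool - without_tool       # quarterly, per clinician
```

Multiplying the quarterly improvement by four quarters, and again by four clinicians, reproduces the annualized figures in the model.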

These figures exclude the avoided cost of audit response (staff time, compliance counsel, potential extrapolation penalties) and the downstream revenue from R54/Z74.1 coding that supports higher HCC risk adjustment in Medicare Advantage contracts. For practices with significant MA panel mix, the risk-adjustment impact alone can exceed the E/M differential.

Implementation Workflow: From Go-Live to Audit-Ready in 14 Days

Days 1–2 — EHR integration scoping (Epic, athena, eCW, or FHIR-native)
  Responsible: Scribing.io Implementation Team + IT
  Deliverable: Integration spec document; FHIR endpoint configuration

Days 3–4 — Geriatric NLP module activation; TUG entity recognition calibration with 10 sample encounters
  Responsible: Scribing.io Clinical Engineering
  Deliverable: Calibrated model with practice-specific TUG conversation patterns

Days 5–7 — Clinician onboarding: 30-minute workflow training per provider; ADL normalization review
  Responsible: Practice Medical Director + Scribing.io CSM
  Deliverable: Clinician sign-off on auto-generated MDM language preferences

Days 8–10 — Shadow mode: Scribing.io runs parallel to existing documentation; outputs reviewed but not pushed to chart
  Responsible: All clinicians
  Deliverable: Accuracy report: TUG extraction rate, ADL normalization accuracy, ICD-10 recommendation precision

Days 11–12 — Go-live: outputs pushed to EHR staging area for clinician review and sign-off
  Responsible: All clinicians
  Deliverable: First production notes with embedded ADL documentation

Days 13–14 — Billing team audit of first 20 production notes against CMS E/M criteria
  Responsible: Practice billing/compliance
  Deliverable: Audit pass rate report; final workflow adjustments

Post go-live, Scribing.io provides monthly documentation quality reports showing TUG capture rates, ADL documentation completeness percentages, G2211 justification rates, and ICD-10 specificity scores. These reports are designed for Medical Director review and can be submitted to compliance committees as part of ongoing audit-readiness documentation.

Frequently Asked Questions

Does Scribing.io auto-populate ADL data without clinician review?

No. Scribing.io extracts and normalizes ADL/IADL data from the encounter audio and presents it in the note draft for clinician review before signing. The system highlights extracted findings with source attribution so the clinician can confirm, modify, or reject each item. This preserves clinical judgment while eliminating the documentation labor that causes ADL fields to be left blank.

What if the clinician does not perform a formal TUG?

Most geriatric clinicians perform an informal TUG—observing the patient stand, walk, and sit—without explicitly timing it. Scribing.io's NLP recognizes informal TUG language ("she was a bit wobbly getting up," "took a while to get to the door") and extracts functional data from these observations. When a numeric time is not stated, the system documents qualitative findings (e.g., "unsteady sit-to-stand, required armrest and verbal cue") and flags the absence of a numeric TUG for the clinician to add if desired.

How does Scribing.io handle patients who decline functionally between visits?

The FHIR Observation resources generated at each visit create a longitudinal functional record. Scribing.io's trend detection flags when an ADL assistance level has increased (e.g., Transfers moved from "supervision" to "minimal assist" since the last visit) and suggests updated language for the MDM narrative that documents functional decline. This is particularly valuable for supporting continued G2211 billing and for Medicare Advantage risk adjustment documentation.
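The trend-detection comparison reduces to an ordinal check over the assistance tiers; the scale ordering below follows the CMS-recognized tiers named earlier, and the function name is illustrative:

```python
# Assistance tiers ordered from most to least independent.
ASSISTANCE_SCALE = [
    "independent", "supervision", "minimal assist",
    "moderate assist", "maximal assist", "dependent",
]

def functional_decline(previous_level: str, current_level: str) -> bool:
    """True when an ADL assistance level has moved toward greater dependency
    since the prior visit (sketch of the trend-detection comparison)."""
    return ASSISTANCE_SCALE.index(current_level) > ASSISTANCE_SCALE.index(previous_level)
```

For example, a transfer level moving from "supervision" to "minimal assist" between visits would trigger the suggested MDM language update described above.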

Is the audit trail HIPAA-compliant?

Yes. The audit trail is generated from the same encrypted, access-controlled data store as the clinical note. Exports are available in PDF (for payer submission) and JSON (for compliance system integration). Access is logged and role-restricted per your practice's HIPAA policies.

What EHR systems does Scribing.io integrate with for geriatric workflows?

Scribing.io supports FHIR R4 API integration with Epic, athenahealth, eClinicalWorks, and other FHIR-enabled EHRs. For systems without native FHIR support, a lightweight middleware layer handles bidirectional data exchange. The geriatric NLP module—including TUG extraction, ADL normalization, and ICD-10 recommendation—functions identically across all supported platforms.

Stop losing revenue to the ADL Gap. Book a 10‑minute live demo to see ADL-from-TUG auto-capture, discrete FHIR Observations, R54/Z74.1 recommendations, G2211-ready MDM insertion, and one-click audit transcript export in your own EHR.

Still not sure? Book a free discovery call now.
