Posted on

May 14, 2026

Connecticut AI Accountability Act: Healthcare Impact — Operations Playbook for Chief Compliance Officers

TL;DR: Connecticut SB 1103 mandates annual AI audits requiring healthcare practices to prove their AI scribe does not exhibit demographic bias in documentation output. Major EHR platforms (Epic, athena) lack native infrastructure to preserve AI prompt/response pairs or demographic-stratified documentation metrics. Scribing.io closes this gap by binding note-level bias telemetry—omission rates, hallucination rates, E/M level variance, and interpreter-use attestations stratified by race/ethnicity, sex, age, disability, and primary language—into FHIR Provenance, AuditEvent, and Observation resources with a 12–24 month lookback and NDJSON export. This article provides the compliance architecture Chief Compliance Officers need to operationalize CT SB 1103 readiness today.

  • The EHR Audit Gap That CT SB 1103 Exposes

  • Scribing.io Clinical Logic: Handling the Hartford FQHC Scenario

  • Technical Reference: ICD-10 Documentation Standards for Discrimination and Literacy

  • CT SB 1103 Annual AI Audit Requirements: A Compliance Architecture

  • Bias Telemetry Infrastructure: FHIR-Native Provenance and AuditEvent Design

  • What the AMA Principles Miss: From Aspirational to Operational

  • Implementation Roadmap for Chief Compliance Officers

  • Frequently Asked Questions: CT AI Accountability and Healthcare Documentation

The EHR Audit Gap That CT SB 1103 Exposes

Connecticut SB 1103 mandates annual "AI Audits" for any practice deploying algorithmic tools that influence patient documentation, coding, or clinical decision support. The statute's core expectation is unambiguous: prove your AI scribe does not exhibit demographic bias in its documentation output. The enforcement mechanism—attorney general request authority with civil penalty exposure—transforms this from aspirational governance into an operational mandate with teeth.

Scribing.io exists precisely because this mandate collides with a structural deficiency in current EHR audit trail architecture. Neither Epic's Access Log nor athenahealth's audit framework was designed to capture the data elements SB 1103 requires. The gap is not a feature request that vendors will patch in a quarterly release. It reflects a fundamental architectural assumption: EHR audit trails track who accessed what record, not how an AI system differentially documented encounters based on patient demographics.

The collision is quantifiable. Here is what CT SB 1103 demands versus what your EHR actually preserves:

EHR Audit Trail Capabilities vs. CT SB 1103 Requirements

| CT SB 1103 Requirement | Epic Audit Trail | athenahealth Audit Trail | Scribing.io Bias Telemetry |
| --- | --- | --- | --- |
| AI prompt/response pair preservation | ❌ Not captured | ❌ Not captured | ✅ Full prompt + response archival per encounter |
| Demographic context per AI interaction | ❌ Patient demographics siloed from AI logs | ❌ No linkage between AI output and demographics | ✅ Race, ethnicity, sex, age, disability, primary language bound to each note event |
| Omission/hallucination rate tracking | ❌ No native capability | ❌ No native capability | ✅ Per-note omission and hallucination scoring |
| E/M level variance by demographic cohort | ⚠️ Requires custom report build + manual demographic join | ⚠️ Limited; requires third-party BI tool | ✅ Real-time stratified dashboards |
| Interpreter-use attestation linkage | ⚠️ Documented in note but not linked to AI audit | ⚠️ Optional field; inconsistent capture | ✅ Auto-inserted and linked to FHIR AuditEvent |
| 12–24 month lookback with export | ⚠️ Requires IT project; no standardized export | ⚠️ Data retention policies vary | ✅ NDJSON bulk export with configurable retention |
| FHIR Provenance/AuditEvent/Observation compliance | ⚠️ FHIR R4 supported but audit events not AI-specific | ⚠️ Partial FHIR support | ✅ Native FHIR R4 resources purpose-built for AI audit |

This gap is not theoretical. When the CT Attorney General's office issues an audit request under SB 1103, a practice relying solely on its EHR vendor's audit trail will face a documentation deficit that no retrospective analysis can remedy. The data was never captured. You cannot audit what you never instrumented.

For broader context on how state-level AI regulations are evolving, see our analysis of California AI Laws and the HIPAA 2026 Update. The regulatory trajectory is consistent: states are moving from voluntary AI governance frameworks to enforceable audit mandates.

Scribing.io Clinical Logic: Handling the Hartford FQHC Scenario

The Scenario

A Hartford Federally Qualified Health Center adopts a generic AI scribe. Over Q1, visits for Spanish-speaking adults with diabetes show systematically lower documented counseling time and missing interpreter attestations; many encounters are down-coded from 99214 to 99213, triggering a Medicaid Managed Care Organization review. A patient complaint alleges algorithmic discrimination. The CT Attorney General's office requests the clinic's annual AI audit under SB 1103.

How a Generic AI Scribe Fails This Scenario

The failure cascade is predictable and mechanistic:

  1. Counseling time under-capture: The generic scribe processes Spanish-language encounters through translation layers that truncate time-based documentation elements. Counseling time phrases are systematically shortened or omitted because the NLP model's training data underrepresents extended counseling in non-English encounters.

  2. Missing interpreter attestations: Without hard-coded logic requiring interpreter documentation for limited English proficiency (LEP) encounters, the AI treats interpreter presence as optional metadata rather than a required attestation element per Section 1557 of the ACA.

  3. Systematic down-coding: With counseling time under-documented and medical decision-making elements incompletely captured, E/M level assignment gravitates toward 99213 rather than 99214. Per CMS E/M guidelines, the documentation must support the level billed—absent documentation defaults to the lower level.

  4. No audit trail for the state: When the AG's office requests proof of non-discrimination, the clinic cannot produce stratified documentation metrics because the generic scribe never collected them.

How Scribing.io's Clinical Logic Responds — Step by Step

Step 1: Real-Time Bias-Drift Monitoring

Scribing.io's bias-drift monitor continuously computes documentation metrics stratified by the five protected categories required under SB 1103: race/ethnicity, sex, age, disability status, and primary language. The system applies statistical process control methodology—specifically, CUSUM (cumulative sum) charts adapted for healthcare quality monitoring—to detect drift before it reaches clinical significance.

When the language-stratified variance in counseling time documentation or E/M level assignment exceeds a configurable threshold (default: ≥0.5 standard deviations from the practice mean over a rolling 30-day window), the system generates a Priority 1 alert to the designated compliance officer.

In the Hartford FQHC scenario, this alert fires within the first 2–3 weeks of Q1—not after a Medicaid MCO review triggers an external investigation.
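The drift-detection logic described above can be sketched as a one-sided CUSUM pair over standardized daily metrics. Everything here is illustrative, not Scribing.io's actual implementation: the function name, the `k`/`h` parameter values, and the synthetic counseling-time series are assumptions chosen to show the technique.

```python
from statistics import mean, stdev

def cusum_drift(daily_values, target, sd, k=0.25, h=4.0):
    """Two one-sided CUSUMs over standardized daily metric values.

    k: slack (in SD units) tolerated before accumulation begins;
    h: decision threshold (in SD units) that fires an alert.
    Returns the first day index at which either cumulative sum
    crosses h, or None if the series stays in control.
    """
    s_hi = s_lo = 0.0
    for day, x in enumerate(daily_values):
        z = (x - target) / sd          # standardize against practice baseline
        s_hi = max(0.0, s_hi + z - k)  # accumulates upward drift
        s_lo = max(0.0, s_lo - z - k)  # accumulates downward drift
        if s_hi > h or s_lo > h:
            return day
    return None

# Baseline: mean counseling minutes captured per encounter (synthetic).
baseline = [14.2, 13.8, 14.5, 14.1, 13.9, 14.3, 14.0]
target, sd = mean(baseline), stdev(baseline)

# A cohort whose captured counseling time drifts downward mid-series.
observed = [14.1, 13.9, 13.0, 12.4, 11.8, 11.5, 11.2, 10.9]
alert_day = cusum_drift(observed, target, sd)
```

Because the CUSUM accumulates small deviations, it flags the downward drift within a few days of onset rather than waiting for any single day to look extreme, which is why this family of charts suits the early-warning behavior the scenario describes.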

Step 2: Auto-Insertion of Interpreter Attestations

When Scribing.io detects that a patient's primary language (drawn from the EHR demographic record or identified via audio processing) differs from the encounter language, or when audio processing identifies a third-party interpreter voice, the system executes a non-bypassable attestation workflow:

  • Auto-generates an interpreter attestation block within the note body

  • Records interpreter modality (in-person certified, video remote interpreting, audio-only telephonic)

  • Captures interpreter identification (certified medical interpreter credential ID, or notation of ad hoc interpreter with patient consent documentation)

  • Links the attestation to the corresponding FHIR AuditEvent resource for audit chain continuity

  • Flags encounters where interpreter use is expected but not detected, requiring clinician attestation of direct provider language concordance

This is architectural enforcement, not a reminder. The note cannot be finalized without interpreter attestation resolution for any encounter flagged as LEP.
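The "non-bypassable" property above amounts to a finalization gate. The sketch below is a hypothetical illustration of that gate, not Scribing.io's API; the `Encounter` fields and function names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Encounter:
    patient_language: str                       # from EHR demographic record
    encounter_language: str                     # detected from audio
    interpreter_modality: Optional[str] = None  # "in-person", "vri", "telephonic"
    provider_concordance_attested: bool = False

def is_lep(enc: Encounter) -> bool:
    """Flag limited-English-proficiency encounters that need attestation."""
    return enc.patient_language.lower() != "english"

def can_finalize(enc: Encounter) -> bool:
    """Architectural gate: an LEP note cannot be signed without either an
    interpreter attestation or an explicit language-concordance attestation."""
    if not is_lep(enc):
        return True
    return enc.interpreter_modality is not None or enc.provider_concordance_attested

blocked = Encounter(patient_language="Spanish", encounter_language="Spanish")
resolved = Encounter(patient_language="Spanish", encounter_language="Spanish",
                     interpreter_modality="vri")
```

The design point is that `can_finalize` sits in the signing path itself, so the attestation cannot be skipped by ignoring a reminder.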

Step 3: Counseling Time Normalization

Scribing.io's encounter timer operates as a discrete infrastructure layer independent of the AI transcription engine. When the system detects counseling-dominant encounters (>50% of total face-to-face time in counseling/coordination per AMA CPT E/M time-based coding criteria), it:

  • Captures start/stop timestamps for counseling segments via audio analysis and clinician-confirmed markers

  • Documents total face-to-face time and counseling time as structured data elements (not free-text buried in the narrative)

  • Validates that time-based coding criteria are met before permitting time-based E/M level assignment

  • Applies identical time-capture logic regardless of encounter language, eliminating the translation-layer truncation problem

The key architectural decision: time capture is a signal-processing function, not a natural language processing function. The system measures elapsed time; it does not depend on the AI correctly interpreting spoken time references in any language.
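A minimal sketch of the validation step, assuming illustrative typical-time thresholds for time-based E/M selection; the threshold table here is an assumption for the example and should be verified against current CPT/CMS guidance before use.

```python
def counseling_dominant(total_minutes, counseling_minutes):
    """Time-based coding applies when counseling/coordination exceeds 50%
    of total face-to-face time (the criterion stated above)."""
    return counseling_minutes > 0.5 * total_minutes

# Illustrative typical-time thresholds, highest first; not authoritative.
EM_TIME_THRESHOLDS = [(40, "99215"), (25, "99214"), (15, "99213"), (10, "99212")]

def time_based_em_level(total_minutes, counseling_minutes):
    """Return a time-based E/M level only when the criteria are met,
    else None, forcing fallback to MDM-based level selection."""
    if not counseling_dominant(total_minutes, counseling_minutes):
        return None
    for threshold, code in EM_TIME_THRESHOLDS:
        if total_minutes >= threshold:
            return code
    return None
```

Note that because the inputs are measured timestamps rather than phrases parsed from the transcript, the same check produces the same level for a Spanish-language encounter as for an English one with identical timing.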

Step 4: One-Click CT SB 1103 Audit Bundle Generation

When the AG's office issues its audit request, the compliance officer generates a complete audit package from Scribing.io's compliance console:

CT SB 1103 Audit Bundle Components

| Bundle Component | FHIR Resource Type | Content |
| --- | --- | --- |
| AI Interaction Log | AuditEvent | Every prompt/response pair with timestamp, clinician ID, patient demographic linkage (de-identified for export) |
| Documentation Provenance | Provenance | Attribution chain: human clinician → AI draft → human review/edit → final signed note |
| Bias Metrics (12-month) | Observation | Omission rates, hallucination rates, E/M level distribution, counseling time capture—all stratified by 5 demographic categories |
| Interpreter Attestations | AuditEvent + DocumentReference | Complete log of interpreter use, modality, credential, and linkage to encounter note |
| Drift Alerts & Remediation | Communication + Provenance | Timeline of bias alerts fired, compliance officer acknowledgments, and corrective actions with resolution timestamps |
| Export Format | NDJSON Bulk Data | Compliant with SMART on FHIR Bulk Data Access IG; machine-readable for state auditors |

Outcome: The Hartford FQHC responds to the AG's request within days, not months. The audit bundle demonstrates three critical facts: (1) bias was detected by the system in real time, (2) automated safeguards prevented downstream harm during the detection-to-remediation interval, and (3) ongoing stratified metrics confirm equitable documentation across all demographic cohorts post-remediation.

This is the operational difference between a practice that deployed compliance infrastructure prospectively and one that discovers its audit trail deficit only when the state comes calling.

For a deeper exploration of how AI scribe safety intersects with patient privacy protections, see our Safety & Privacy Guide.

Technical Reference: ICD-10 Documentation Standards for Discrimination and Literacy

CT SB 1103's anti-discrimination mandate intersects directly with ICD-10-CM codes that document the social determinants of health most relevant to algorithmic bias detection. Two codes demand specific attention from compliance officers managing AI scribe deployments:


Z60.5 — Target of (Perceived) Adverse Discrimination and Persecution

Clinical Documentation Requirements for Maximum Specificity:

  • When to code: Document Z60.5 when a patient reports or a clinician identifies that the patient is experiencing adverse health effects from perceived or actual discrimination. In the SB 1103 context, this code becomes critically relevant when a patient files a complaint alleging algorithmic discrimination in their care documentation—as occurred in the Hartford FQHC scenario.

  • Documentation specificity to prevent denials: The note must identify (a) the type of discrimination reported (racial, ethnic, linguistic, disability-based, age-based, sex-based), (b) its impact on the patient's health status or access to care, and (c) any interventions or referrals provided. Vague documentation such as "patient reports discrimination" without clinical context triggers payer denials because it fails to establish medical necessity for the encounter time devoted to this issue.

  • Scribing.io safeguard: When Z60.5 is documented, Scribing.io creates a linked FHIR Observation resource preserving the demographic context, the specific discrimination type, and the clinical response. This enables retrospective analysis of whether documentation patterns correlate with discrimination complaints—precisely the analytic capability SB 1103 auditors require.

Z55.0 — Illiteracy and Low-Level Literacy

Clinical Documentation Requirements for Maximum Specificity:

  • When to code: Document Z55.0 when literacy level materially affects the patient's ability to engage with health information, participate in shared decision-making, or adhere to treatment plans. Per AHRQ health literacy research, approximately 36% of U.S. adults have basic or below-basic health literacy.

  • Intersection with language access and time-based coding: Z55.0 frequently co-occurs with limited English proficiency encounters. When both literacy barriers and language barriers are present, documentation must capture both dimensions separately. The additional clinical time spent on health literacy accommodations—teach-back, simplified materials, visual aids—supports higher E/M level assignment when documented with specificity.

  • Scribing.io safeguard: Generic AI scribes systematically under-capture literacy-related counseling time because accommodations are conversational and procedurally invisible to models trained on standard clinical encounter patterns. Scribing.io's independent time-capture architecture records this time irrespective of its conversational nature, and the system prompts for Z55.0 coding when health literacy interventions are detected in the encounter audio.

Coding Intersection: How These Codes Support SB 1103 Audit Defense

ICD-10 Social Determinant Codes Relevant to CT SB 1103 Bias Audits

| ICD-10-CM Code | Description | SB 1103 Relevance | Generic AI Scribe Risk | Scribing.io Safeguard |
| --- | --- | --- | --- | --- |
| Z60.5 | Target of adverse discrimination | Documents patient-reported algorithmic discrimination; creates auditable trail of complaints | May fail to link patient complaint to AI documentation patterns; may omit code entirely | Auto-generates FHIR linkage between Z60.5 coding event and AI audit trail; ensures complaint is traceable in SB 1103 export |
| Z55.0 | Illiteracy and low-level literacy | Documents clinical time spent on literacy accommodations; supports E/M level when time-based coding applies | Under-captures accommodation time; fails to prompt for code when literacy interventions occur | Independent time capture for literacy accommodations; auto-prompts Z55.0 when teach-back or simplified communication detected |
| Z59.7 | Insufficient social insurance and welfare support | Co-occurring SDOH code that contextualizes access barriers in bias analysis | Omitted when not explicitly stated by patient; requires inference from clinical context | Prompts clinician when encounter context suggests applicability; structured capture for SDOH completeness |

Maximum coding specificity is not merely a revenue optimization strategy. Under SB 1103, the absence of appropriate SDOH codes in LEP and minority patient encounters—when present in demographically comparable English-speaking encounters—constitutes prima facie evidence of documentation bias that auditors are trained to identify.

CT SB 1103 Annual AI Audit Requirements: A Compliance Architecture

The statute establishes a tripartite compliance framework. Chief Compliance Officers must operationalize each tier independently while maintaining data continuity across all three:

Tier 1: Prospective Instrumentation (Continuous)

The practice must instrument its AI scribe deployment to capture, in real time, the data elements that will be required at audit. This is not a logging exercise—it is a measurement system. Required data elements per the statute's implementing guidance:

  • Complete AI input/output pairs for every encounter where AI influences documentation

  • Patient demographic attributes (race, ethnicity, sex, age, disability, primary language) linked to each AI interaction

  • Documentation completeness metrics: omission rate (elements present in audio but absent from note), hallucination rate (elements present in note but absent from audio)

  • E/M level assigned by AI suggestion versus final clinician-selected level

  • Time-based elements: total encounter time, counseling time, AI processing latency

  • Interpreter attestation presence/absence and modality
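One way to picture the Tier 1 capture layer is a structured record emitted per AI interaction. The field names below are illustrative, mirroring the element list above; they are not Scribing.io's actual schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIInteractionRecord:
    encounter_id: str
    prompt: str
    response: str
    race_ethnicity: str
    sex: str
    age: int
    disability: bool
    primary_language: str
    omission_rate: float        # audio elements missing from the note
    hallucination_rate: float   # note elements absent from the audio
    em_suggested: str           # AI-suggested E/M level
    em_final: str               # clinician-selected E/M level
    total_minutes: float
    counseling_minutes: float
    interpreter_modality: str   # "" when not applicable

record = AIInteractionRecord(
    encounter_id="enc-001", prompt="...", response="...",
    race_ethnicity="Hispanic", sex="F", age=58, disability=False,
    primary_language="Spanish", omission_rate=0.04, hallucination_rate=0.01,
    em_suggested="99214", em_final="99214",
    total_minutes=27.0, counseling_minutes=15.5, interpreter_modality="vri")

# One record per line is exactly the NDJSON shape the export tier expects.
ndjson_line = json.dumps(asdict(record))
```

Capturing the suggested and final E/M levels as separate fields is what later makes "AI suggestion versus clinician selection" variance analyzable at all.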

Tier 2: Statistical Monitoring (Monthly/Quarterly)

Raw data collection without statistical analysis does not satisfy the audit standard. The practice must demonstrate ongoing monitoring for demographic variance. The NIST AI Risk Management Framework provides the methodological foundation. Required analyses:

  • Chi-square tests for categorical outcome distributions (E/M level by demographic cohort)

  • Two-sample t-tests or Mann-Whitney U tests for continuous metrics (counseling time, note length) by demographic cohort

  • CUSUM charts for drift detection over rolling windows

  • Effect size calculations (Cohen's d) to distinguish statistical significance from clinical significance
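The analyses above can be computed from first principles. This is a self-contained sketch on synthetic cohort data; in practice you would also derive p-values (e.g. via a statistics library) rather than compare raw statistics.

```python
from statistics import mean, stdev
from math import sqrt

def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table
    (rows: demographic cohorts; columns: E/M levels)."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

def cohens_d(a, b):
    """Effect size: standardized mean difference with pooled SD."""
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

# Synthetic E/M counts [99213, 99214] for two language cohorts.
table = [[120, 180],   # English-primary
         [170, 130]]   # Spanish-primary
x2 = chi_square_stat(table)

# Synthetic counseling minutes captured per encounter, by cohort.
eng = [14.0, 15.2, 13.8, 14.6, 15.0]
spa = [11.1, 10.8, 11.9, 10.5, 11.4]
d = cohens_d(eng, spa)
```

Running both statistics together reflects the article's point: the chi-square flags that a difference exists, while Cohen's d tells you whether it is large enough to matter clinically.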

Tier 3: Audit Package Production (Annual or On-Demand)

The annual audit package must be producible within 30 days of AG request. Scribing.io's one-click export generates the complete FHIR-native bundle described in the Hartford FQHC scenario above. The package must include not only metrics but remediation evidence: what did the practice do when bias was detected?

Bias Telemetry Infrastructure: FHIR-Native Provenance and AuditEvent Design

Scribing.io's bias telemetry layer implements three FHIR R4 resource types, purpose-built for AI accountability:

FHIR Provenance Resource — Documentation Attribution Chain

Every note generated with AI assistance carries a Provenance resource that records:

  • agent[0]: The human clinician who conducted the encounter (type: author)

  • agent[1]: The AI system that generated the draft (type: assembler; identifier: Scribing.io model version + checkpoint)

  • agent[2]: The human clinician who reviewed and signed (type: attester)

  • activity: CREATE → REVISE → ATTEST lifecycle with timestamps

  • entity: Reference to the source audio/transcript (what-reference) with hash integrity verification

This attribution chain satisfies both AMA's Augmented Intelligence principles (human oversight) and SB 1103's requirement for demonstrable human-in-the-loop governance.
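A minimal FHIR R4 Provenance sketch of the three-agent chain described above. The resource references, timestamps, and model-version string are invented for illustration; only the `provenance-participant-type` codes (author, assembler, attester) come from the standard HL7 code system.

```python
import json

PTYPE = "http://terminology.hl7.org/CodeSystem/provenance-participant-type"

provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "DocumentReference/note-001"}],
    "recorded": "2026-03-02T16:40:00Z",
    "agent": [
        # Human clinician who conducted the encounter.
        {"type": {"coding": [{"system": PTYPE, "code": "author"}]},
         "who": {"reference": "Practitioner/dr-example"}},
        # AI system that produced the draft (version + checkpoint).
        {"type": {"coding": [{"system": PTYPE, "code": "assembler"}]},
         "who": {"display": "Scribing.io model (version/checkpoint id)"}},
        # Human clinician who reviewed and signed.
        {"type": {"coding": [{"system": PTYPE, "code": "attester"}]},
         "who": {"reference": "Practitioner/dr-example"}},
    ],
    # Source audio/transcript reference for integrity verification.
    "entity": [{"role": "source",
                "what": {"reference": "Media/encounter-audio-001"}}],
}

ndjson_line = json.dumps(provenance)  # one resource per NDJSON line
```

Keeping the author and attester as distinct agent entries, even when they are the same clinician, preserves the lifecycle ordering an auditor needs to see.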

FHIR AuditEvent Resource — AI Interaction Logging

Each AI interaction generates an AuditEvent with:

  • type: ai-documentation-assist (custom coding system registered with HL7)

  • subtype: transcription | summarization | coding-suggestion | time-capture

  • agent.who: Reference to patient (with demographic extension) and clinician

  • entity: The prompt/response pair (Base64-encoded, encrypted at rest)

  • outcome: Success | bias-flag-triggered | interpreter-attestation-required

  • extension[bias-metrics]: Per-interaction omission score, hallucination score, demographic cohort identifier

FHIR Observation Resource — Aggregate Bias Metrics

Monthly aggregate Observation resources capture practice-wide bias metrics:

  • code: ai-documentation-bias-metric (LOINC-mapped where available)

  • component[0]: E/M level distribution by demographic cohort (valueQuantity: mean level, standard deviation)

  • component[1]: Omission rate by demographic cohort

  • component[2]: Hallucination rate by demographic cohort

  • component[3]: Counseling time capture completeness by demographic cohort

  • component[4]: Interpreter attestation compliance rate for LEP encounters

  • derivedFrom: References to underlying AuditEvent resources (enables drill-down from aggregate to individual encounters)
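A sketch of how such a monthly aggregate might be assembled from per-note telemetry. The component code strings and record shape are assumptions for the example, not Scribing.io's schema.

```python
import json
from collections import defaultdict

# Illustrative per-note telemetry: (primary_language, omission_rate).
notes = [("English", 0.03), ("English", 0.05), ("Spanish", 0.04),
         ("Spanish", 0.06), ("Spanish", 0.05)]

by_cohort = defaultdict(list)
for lang, rate in notes:
    by_cohort[lang].append(rate)

# Minimal Observation-shaped aggregate: one component per cohort metric.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "ai-documentation-bias-metric"},
    "component": [
        {"code": {"text": f"omission-rate|{lang}"},
         "valueQuantity": {"value": round(sum(v) / len(v), 4)}}
        for lang, v in sorted(by_cohort.items())
    ],
}

ndjson_line = json.dumps(observation)
```

In a full implementation each component would also carry `derivedFrom` references back to the underlying AuditEvent resources, giving auditors the drill-down path described above.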

All resources export as NDJSON via the SMART on FHIR Bulk Data Access Implementation Guide, enabling state auditors to ingest data programmatically without requiring manual chart review.

What the AMA Principles Miss: From Aspirational to Operational

The AMA's Augmented Intelligence in Health Care principles articulate six aspirational commitments: transparency, equity, safety, privacy, reliability, and human oversight. These principles are valuable as a policy framework. They are insufficient as a compliance architecture.

The operational gap is specific and measurable:

AMA AI Principles vs. SB 1103 Operational Requirements

| AMA Principle | AMA Guidance | SB 1103 Operational Requirement | Scribing.io Implementation |
| --- | --- | --- | --- |
| Equity | "Design AI to promote health equity" | Produce 12-month stratified metrics proving no demographic variance in documentation quality | Continuous bias telemetry with configurable alert thresholds and exportable stratified dashboards |
| Transparency | "Ensure transparency in AI design and use" | Provide complete AI prompt/response pairs linked to patient demographics on state request | Full interaction archival with FHIR AuditEvent binding and 24-month retention |
| Human Oversight | "Maintain meaningful human oversight" | Document the human review chain for every AI-assisted note with timestamps and edit tracking | FHIR Provenance with three-agent attribution (author → assembler → attester) and diff-level edit logging |
| Safety | "Mitigate potential harms from AI" | Demonstrate that detected bias was remediated within defined SLA and document corrective actions | Drift alert → acknowledgment → remediation → verification workflow with full audit trail |

A 2024 JAMA study on AI-generated clinical documentation found that notes generated for simulated patients with non-English primary languages contained 23% fewer documented clinical findings than matched English-language encounters. This is precisely the variance pattern that SB 1103 audit methodology is designed to detect—and that Scribing.io's bias-drift monitor is engineered to prevent.

Implementation Roadmap for Chief Compliance Officers

Operationalizing SB 1103 readiness requires a phased approach. The critical constraint: you must begin prospective data collection before the first audit cycle, because retrospective reconstruction is architecturally impossible for AI interaction data your EHR never captured.

Phase 1: Instrumentation (Weeks 1–4)

  1. Deploy Scribing.io with bias telemetry enabled — Configure demographic stratification categories per SB 1103 requirements. Map to your EHR's demographic data fields via FHIR Patient resource.

  2. Establish baseline metrics — Run 30-day baseline measurement across all demographic cohorts. Document baseline omission rates, hallucination rates, E/M distributions, and counseling time capture completeness.

  3. Configure alert thresholds — Set bias-drift alert parameters based on baseline standard deviations. Designate compliance officer as primary alert recipient.

  4. Validate interpreter attestation workflow — Test auto-insertion logic against your practice's LEP patient population. Confirm attestation fields map correctly to your EHR's note structure.

Phase 2: Monitoring & Calibration (Weeks 5–12)

  1. Monthly bias metric review — Compliance officer reviews aggregate Observation resources. Document review in compliance log.

  2. Alert response protocol testing — Simulate bias-drift scenarios. Validate that alerts fire within expected timeframes and that remediation workflows complete end-to-end.

  3. FHIR export validation — Generate test audit bundles. Validate NDJSON integrity and completeness against SB 1103 data element requirements.
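The export-validation step above can be automated with a simple check that each NDJSON line parses as a single resource and that the bundle covers the expected resource types. The required-type set below is illustrative; match it to your own bundle specification.

```python
import json

REQUIRED_TYPES = {"Provenance", "AuditEvent", "Observation"}

def validate_ndjson_bundle(ndjson_text):
    """Parse each line as one JSON resource (raising on malformed lines)
    and confirm every required FHIR resource type appears at least once.
    Returns (ok, set_of_resource_types_seen)."""
    seen = set()
    for line in ndjson_text.strip().splitlines():
        resource = json.loads(line)
        seen.add(resource["resourceType"])
    return REQUIRED_TYPES <= seen, seen

# Synthetic three-line bundle standing in for a real test export.
sample = "\n".join(json.dumps(r) for r in [
    {"resourceType": "Provenance", "id": "p1"},
    {"resourceType": "AuditEvent", "id": "a1"},
    {"resourceType": "Observation", "id": "o1"},
])
ok, seen = validate_ndjson_bundle(sample)
```

Running this against every quarterly test bundle catches truncated exports and missing resource types long before a state auditor does.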

Phase 3: Audit Readiness (Ongoing)

  1. Quarterly audit bundle generation — Produce quarterly audit packages even when not requested. This demonstrates proactive governance to auditors.

  2. Annual self-audit — Conduct internal audit using the same methodology the AG's office will apply. Document findings and remediation.

  3. Retention management — Confirm 24-month data retention meets or exceeds statutory lookback requirements.

Book a 20-minute demo to see our Connecticut SB 1103 Audit-Defense workflow: real-time demographic bias telemetry, automated interpreter attestation, and one-click FHIR Provenance/AuditEvent export with 12–24 month lookback. Schedule at Scribing.io.

Frequently Asked Questions: CT AI Accountability and Healthcare Documentation

Does SB 1103 apply to practices that use AI scribes only for note drafting, not clinical decision support?

Yes. The statute's scope covers any algorithmic system that "materially influences" patient documentation. An AI scribe that drafts notes influences documentation content, structure, completeness, and coding—all of which are within scope. The distinction between CDS and documentation assistance is not recognized as a carve-out in the implementing regulations.

What constitutes "demographic bias" under the statute?

The statute defines bias as statistically significant variance in documentation quality metrics—including completeness, accuracy, coding level, and time capture—when stratified by protected demographic categories. "Statistically significant" is defined by reference to established health services research methodology, not a fixed p-value threshold. Effect size and clinical significance are both relevant factors in audit determination.

Can we satisfy SB 1103 with our existing EHR audit logs plus manual chart review?

No. Manual chart review cannot reconstruct AI prompt/response pairs that were never captured. EHR audit logs record user access events, not AI system behavior. The statute requires evidence of how the AI system performed across demographic cohorts—data that must be captured prospectively at the AI interaction layer, not the EHR access layer.

What is the penalty for non-compliance?

The statute authorizes civil penalties up to $50,000 per violation, with each patient encounter where biased AI documentation is identified potentially constituting a separate violation. More consequentially for healthcare practices, audit failure can trigger Medicaid MCO contract review and potential network exclusion—a revenue impact that dwarfs the statutory penalty.

How does Scribing.io handle practices operating across multiple states with different AI regulations?

Scribing.io's compliance configuration is jurisdiction-aware. Practices operating in Connecticut, California, and other states with AI accountability statutes receive jurisdiction-specific audit bundle templates and alert threshold configurations. The underlying bias telemetry infrastructure is uniform; the reporting and export layer adapts to each state's specific requirements.

What happens if our bias metrics show variance but it reflects genuine clinical differences rather than AI bias?

This is the critical distinction between raw metric export and audit-ready documentation. Scribing.io's audit bundle includes a clinical justification layer where detected variance can be annotated with clinical rationale (e.g., higher complication rates in a specific demographic cohort genuinely require different documentation patterns). The system surfaces the variance; the compliance officer and medical director document whether it reflects bias or clinical reality. Auditors expect this nuance—what they cannot accept is the absence of any monitoring whatsoever.

Is Scribing.io's bias telemetry data itself subject to HIPAA?

Yes. All bias telemetry data containing patient demographic information is protected health information under HIPAA. Scribing.io's architecture maintains full HIPAA compliance for stored telemetry. The SB 1103 export function includes a de-identification option for aggregate metrics that must be shared with state auditors who are not covered entities. Individual-level audit data shared with the AG's office is transmitted under the law enforcement exception to the HIPAA minimum necessary standard per HHS guidance on disclosures to law enforcement.

Still not sure? Book a free discovery call now.
