AI Governance Staff Training
Your practice uses AI tools that listen to patient visits and generate clinical notes. The AI creates a DRAFT — not a finished record.
Your provider reviews it, corrects errors, and signs it. Only then does it become a legal medical record.
Your role depends on your position, but everyone has a part in making sure AI is used safely and correctly.
AI can fabricate clinical findings that never happened. Published studies show error rates above 25% across major platforms. Every error in a signed note is a potential liability.
Fabricated Findings
Exam details you never performed documented as if they happened.
Invented Medications
Drugs never prescribed listed in the patient's record.
False History
Past medical history or allergies that don't exist.
Wrong Codes
Billing codes inflated beyond what the visit supports.
When a provider signs an AI-generated note, it carries the same legal weight as a handwritten note.
The AI vendor is not liable — their contracts explicitly disclaim accuracy. The provider who signed it is liable.
Insurance carriers are now adding AI exclusion endorsements. If your policy has one and an AI-related claim is filed, the claim gets denied. You bear the full cost.
Provider
- Review every note word-for-word, not a skim
- Verify all exam findings — did you actually perform them?
- Verify all medications — did you actually prescribe them?
- Document your verification (amendment, addendum, or signature note)
- Never sign a note you can't fully defend
Medical Assistant
AI notes are drafts with errors. Flag obvious problems before the provider signs. Make sure patient disclosure is documented. Know how to disable the AI system in an emergency.
Front Desk
Give every patient the AI disclosure notice. Document their consent or opt-out. If they opt out, flag the record immediately.
Billing
Never submit AI-suggested codes without provider verification. AI frequently recommends inflated codes. Penalties run up to $28,619 per false claim under the False Claims Act.
Every patient must be informed that AI is used in their care.
Error Correction Protocol:
Kill Switch: disable AI immediately when any of the following occurs:
- Patient harm event
- Systematic errors detected
- Data breach suspected
- Regulatory inquiry received
Following these protocols isn't just about the practice — it protects you personally.
- Malpractice claims: Documented governance shows you followed proper procedures
- Insurance reviews: Governance documentation demonstrates responsible AI use
- Regulatory audits: Your compliance record is already built
- Personal liability: Following protocols creates evidence that you individually met your obligations
Questions about your practice's certification? Contact us at sentinelriskgrp.com
If your practice provides any behavioral health, psychiatric, or substance use disorder (SUD) treatment — or if you see patients via telehealth who receive those services — two additional layers of regulation apply.
Substance use disorder treatment records have stricter federal protections than standard HIPAA. Key points every staff member must know:
- Separate consent required. A general HIPAA authorization does NOT cover SUD records. Patients must give specific, written consent before SUD information can be disclosed — even to other providers.
- AI systems don't know the difference. An AI scribe will document substance use history, treatment details, and medication-assisted therapy the same way it documents anything else. If that note is shared without proper Part 2 consent, the practice is in violation.
- Re-disclosure prohibition. Anyone who receives Part 2 information cannot re-disclose it. If an AI-generated note containing SUD information is sent to a referring provider or insurer without consent, every downstream disclosure is also a violation.
- Practical rule: Before signing any note involving SUD treatment, verify the patient's Part 2 consent is on file and current. Flag SUD records in your EHR so they are not automatically shared through health information exchanges.
Effective July 1, 2026, a Tennessee statute becomes the first state law to allow patients to sue directly over AI misuse in behavioral health:
$5,000 Per Violation
Each improper use of AI in behavioral health treatment is a separate violation.
Treble Damages
Willful violations = triple damages. No annual cap.
No Independent Decisions
AI cannot make therapeutic decisions without direct provider oversight and approval.
Telehealth Reach
Applies if the patient is in Tennessee, regardless of where the provider is located.
You need 10 out of 12 correct to pass (an 80% passing threshold).