AI Boundaries in Clinical Practice

Essential guidelines for using AI safely in medical practice—what AI can help with, what it cannot do, and where doctors must always remain in control.


You have just finished a busy OPD. Thirty-seven patients seen. You used an AI tool to help draft some notes, create patient education handouts, and summarise a complex case for referral. But a nagging question remains: Where exactly should I draw the line?

This article answers that question. It establishes the guardrails you need to use AI safely, ethically, and in full compliance with your professional responsibilities under Indian law and medical ethics guidelines.

The core principle: AI is your assistant, never your replacement. The National Medical Commission (NMC), which replaced the Medical Council of India in 2020, and the state medical councils hold the registered medical practitioner accountable for all clinical decisions. No AI system shares that accountability with you.


What Problem This Solves

Doctors face a genuine dilemma: AI tools are powerful and tempting, but the boundaries are unclear. Without clear guardrails, you risk:

  • Over-reliance: Trusting AI-generated content without verification, leading to errors
  • Privacy violations: Inadvertently sharing patient identifiers with external AI systems, violating the Digital Personal Data Protection (DPDP) Act, 2023
  • Medico-legal exposure: Treating AI outputs as final clinical decisions, when clinical decision-making remains your sole professional responsibility
  • Scope creep: Gradually letting AI handle tasks it should never touch (emergency triage, diagnosis, medication dosing)

This article gives you a simple traffic-light system (GREEN, YELLOW, RED) so you always know where you stand.


How to Do It (Steps)

Step 1: Understand the Traffic-Light Framework

GREEN: Safe for AI assistance with minimal oversight.
  Examples: Administrative drafts, patient education materials, general health information, template creation, scheduling scripts

YELLOW: AI assists, doctor verifies every output.
  Examples: Documentation drafts (SOAP notes, discharge summaries), differential considerations, treatment plan drafts, referral letters

RED: Doctor only; AI must not be used.
  Examples: Final diagnosis, medication prescribing, emergency triage decisions, dosing calculations for high-risk drugs, psychiatric assessments, breaking bad news

Step 2: Apply the “Three Questions” Test

Before using AI for any clinical task, ask yourself:

  1. Would I sign this output without changes? If yes, it is likely a GREEN task. If no, it is YELLOW or RED.
  2. Could an error here directly harm the patient? If yes, this is YELLOW (requires verification) or RED (no AI involvement).
  3. Does this require my clinical judgment or patient relationship? If yes, this is RED.
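
The three questions apply in a fixed order: question 3 forces RED, question 2 forces at least YELLOW, and question 1 separates GREEN from YELLOW. A minimal sketch of that ordering, purely illustrative (the function below is this article's framework expressed as code, not an existing tool):

def classify_task(needs_my_judgment: bool,
                  error_could_harm_patient: bool,
                  would_sign_without_changes: bool) -> str:
    """Map the Three Questions Test onto the traffic-light zones."""
    # Question 3: clinical judgment or the patient relationship means doctor only
    if needs_my_judgment:
        return "RED"
    # Question 2: potential for direct patient harm means mandatory verification
    if error_could_harm_patient:
        return "YELLOW"
    # Question 1: output you would sign unchanged is a low-stakes GREEN task
    return "GREEN" if would_sign_without_changes else "YELLOW"

# Example: a SOAP note draft needs no bedside judgment, but errors could harm,
# so it lands in YELLOW and must be verified word for word.
print(classify_task(needs_my_judgment=False,
                    error_could_harm_patient=True,
                    would_sign_without_changes=False))  # prints "YELLOW"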

Step 3: Implement the Verification Checkpoint

For all YELLOW tasks, establish a verification checkpoint:

  • Read every word of AI output before using it
  • Cross-check clinical facts against the patient record
  • Remove any statements you cannot independently verify
  • Add your clinical judgment where the AI has left placeholders
  • Document that the final version is your professional work product
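
If you also want the checkpoint above to leave an audit trail, each verification can be captured as one line in a local log. A minimal sketch, assuming a CSV file on the clinic computer; the file name and fields are illustrative, not a prescribed format:

import csv
from datetime import date

def log_verification(doctor: str, task_type: str, accepted: bool,
                     path: str = "ai_verification_log.csv") -> None:
    """Append one row recording that an AI draft was reviewed before use."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), doctor, task_type,
                                "accepted after review" if accepted else "rejected"])

# Example: record that today's referral letter draft was read and corrected
log_verification("Dr. Example", "referral letter draft", accepted=True)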

Step 4: Establish Absolute RED Lines

These tasks must never involve AI assistance:

  • Diagnosis: AI can list differential considerations; only you can diagnose
  • Prescribing: AI can format medication lists; only you determine what to prescribe
  • Emergency decisions: AI has no role in acute triage
  • Informed consent: The conversation must be human-to-human
  • Prognosis discussions: Patients deserve your presence, not generated text
  • Mental health assessments: Nuance and rapport cannot be automated

Step 5: Document Your AI Usage Policy

Create a one-page clinic AI policy (covered in detail in Article C4) that your staff understands. This protects you medico-legally and ensures consistent practice.


Example Prompts

These prompts demonstrate safe usage patterns within proper boundaries.

GREEN Zone Prompt: Administrative Task

Create a standard operating procedure for my clinic's front desk
regarding patient check-in. Include steps for verifying identity,
collecting consent forms, and explaining waiting times.
Format as a numbered checklist.

GREEN Zone Prompt: Patient Education (Non-Prescriptive)

Write a patient-friendly explanation of type 2 diabetes for an
Indian audience. Reading level: Class 8. Include what diabetes is,
why lifestyle matters, and general dietary principles (no specific
prescriptions). Add a "When to see your doctor" section.
Languages: English and Hindi.

YELLOW Zone Prompt: Documentation Draft (Requires Verification)

Act as a medical documentation assistant. Convert the following
de-identified OPD notes into a SOAP note structure.

Constraints:
- Do not add clinical facts not present in the input
- Mark any missing information as [MISSING: describe what]
- List assessment as "Considerations" not "Diagnosis"
- Do not recommend specific medications

Notes: [PASTE DE-IDENTIFIED NOTES HERE]

YELLOW Zone Prompt: Differential Considerations (Not Diagnosis)

A 45-year-old male presents with epigastric pain, worse after meals,
with occasional nausea. No red flags identified.

List differential considerations a doctor might explore, organised
by likelihood. Include relevant history questions and examination
findings that would help distinguish between them.

Note: This is for educational reference only. The treating doctor
will make all diagnostic decisions.

Bad Prompt → Improved Prompt

Bad Prompt (Dangerous)

Patient has chest pain and sweating. What is the diagnosis and
what medicine should I give?

Why this is dangerous:

  • Asks AI to diagnose (RED zone violation)
  • Asks AI to prescribe (RED zone violation)
  • No context about clinical setting or your assessment
  • Could delay appropriate emergency care

Improved Prompt (Safe)

I am reviewing a case for educational purposes. A patient presented
with chest pain and diaphoresis. I have already initiated emergency
protocols and the patient is being managed.

For my learning, summarise the standard approach to acute coronary
syndrome evaluation as per current guidelines, including key history
questions, examination findings, and the rationale for urgent
investigations.

This is not for immediate patient care decisions.

Why this is better:

  • Explicitly educational, not for immediate decision-making
  • Confirms emergency protocols already initiated
  • Asks for guideline information, not diagnosis
  • Clear disclaimer about purpose

Common Mistakes

Mistake 1: “The AI said it, so it must be accurate”

AI models can confidently generate incorrect information. They do not have access to your patient’s actual condition, current Indian drug availability, or recent guideline updates. Always verify.

Mistake 2: Pasting identifiable patient data

Under the DPDP Act, 2023, you are responsible for protecting patient data. Most AI tools send data to external servers. Never paste: names, phone numbers, Aadhaar numbers, addresses, or any combination that could identify a patient.
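
A quick scrubbing pass can catch the obvious number-based identifiers before anything is pasted into a tool, but treat it only as a first line of defence: simple patterns catch phone and Aadhaar numbers, not names or addresses written in free text, so a manual read-through is still essential. A minimal sketch; the patterns and function name are illustrative:

import re

def scrub_obvious_identifiers(text: str) -> str:
    """Mask common Indian identifier patterns before sending text to an AI tool."""
    # Aadhaar numbers: 12 digits, often written in groups of four
    text = re.sub(r"\b\d{4}\s?\d{4}\s?\d{4}\b", "[AADHAAR REMOVED]", text)
    # Mobile numbers: 10 digits starting 6-9, with an optional +91 prefix
    text = re.sub(r"(?:\+91[\s-]?)?\b[6-9]\d{9}\b", "[PHONE REMOVED]", text)
    # Email addresses
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL REMOVED]", text)
    return text

# Names, addresses, and hospital ID numbers still need a manual check.
print(scrub_obvious_identifiers("Contact 9876543210, Aadhaar 1234 5678 9012"))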

Mistake 3: Using AI for emergency triage

“Is this chest pain serious?” is not a question for AI. Emergency triage requires immediate clinical assessment, not text generation. AI has no ability to examine your patient.

Mistake 4: Assuming AI understands Indian context

AI models are often trained on Western data. Drug names, formulations, costs, and availability in India may differ. Local disease patterns (dengue, typhoid, tuberculosis) may not be weighted appropriately. Apply your local knowledge.

Mistake 5: Skipping the verification step for “simple” tasks

Even patient education materials need review. AI might include advice inappropriate for your specific patient population, suggest foods not locally available, or miss cultural considerations.


Clinic-Ready Templates

AI Verification Checklist (Use for all YELLOW zone outputs)

AI OUTPUT VERIFICATION CHECKLIST
--------------------------------
Date: ___________  Doctor: ___________  Task Type: ___________

Before using this AI-generated content, confirm:

[ ] I have read the entire output word-by-word
[ ] All clinical facts match the patient record
[ ] No patient identifiers were used in the prompt
[ ] No definitive diagnoses are stated (only "considerations")
[ ] No specific medications are prescribed by the AI
[ ] Content is appropriate for Indian clinical context
[ ] I have made necessary modifications
[ ] I take professional responsibility for the final version

Signature: ___________

Traffic-Light Quick Reference Card

GREEN (AI Assists Freely)          YELLOW (AI Drafts, Doctor Verifies)
---------------------------        -----------------------------------
- Clinic SOPs                      - SOAP note drafts
- General health education         - Discharge summary drafts
- Appointment reminders            - Referral letter drafts
- Staff training materials         - Differential considerations
- Waiting room content             - Treatment plan structure
- FAQ responses (non-clinical)     - Patient instruction drafts

RED (Doctor Only — No AI)
-------------------------
- Final diagnosis
- Medication prescribing
- Emergency triage
- Dosing calculations (especially paediatric, renal, hepatic)
- Informed consent conversations
- Breaking bad news
- Psychiatric assessments
- Prognosis discussions
- Medico-legal opinions

Safety Note

Legal Framework: Under the Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002, the registered medical practitioner bears full responsibility for patient care. AI tools are not recognised as clinical decision-makers.

DPDP Act, 2023 Compliance: Patient health data is personal data protected under the Act and must be handled with heightened care. Before using any AI tool:

  • Ensure no identifiable information is shared
  • Understand where the AI provider stores and processes data
  • Maintain records of your de-identification practices
  • Obtain appropriate consent if any data sharing is involved

Professional Indemnity: Your professional indemnity insurance covers your clinical decisions, not AI-generated content. If an AI-assisted error occurs, you remain liable.

The Golden Rule: If you would not be comfortable explaining your use of AI to your medical council or in a court of law, do not use it for that task.


Copy-Paste Prompts

Prompt 1: Safe Documentation Assistant Setup

You are a medical documentation assistant helping an Indian doctor.
Your role is to help structure and format clinical notes.

Rules you must follow:
1. Never provide diagnoses — only list "considerations for the
   doctor to evaluate"
2. Never recommend specific medications or dosages
3. Flag missing information as [MISSING: description]
4. If asked to do something outside documentation support,
   decline and explain why
5. Remind the doctor to verify all content before use

Acknowledge these rules and wait for the documentation task.
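
If your clinic reaches a model through an API rather than a chat window, the same rules can be pinned as a system message so they apply to every request automatically. A minimal sketch, assuming the OpenAI Python client purely as an example; the model name is illustrative, and any chat-style API works the same way:

from openai import OpenAI

SAFE_DOC_RULES = """You are a medical documentation assistant helping an Indian doctor.
Never provide diagnoses; list only considerations for the doctor to evaluate.
Never recommend specific medications or dosages.
Flag missing information as [MISSING: description].
Remind the doctor to verify all content before use."""

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

def format_soap_draft(deidentified_notes: str) -> str:
    """Send de-identified notes with the safety rules pinned as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use your provider's equivalent
        messages=[
            {"role": "system", "content": SAFE_DOC_RULES},
            {"role": "user", "content": deidentified_notes},
        ],
    )
    return response.choices[0].message.content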

Prompt 2: Patient Education with Safety Boundaries

Create patient education content about [CONDITION] for an Indian
clinic setting.

Boundaries:
- Do not recommend specific medications (say "as prescribed by
  your doctor")
- Do not provide dosing information
- Include "When to seek immediate medical help" section
- Use simple language (Class 8 reading level)
- Include culturally appropriate dietary suggestions for India
- End with: "This information supports but does not replace
  your doctor's advice"

Format: Bullet points, short paragraphs, Hindi translation included.

Prompt 3: Verification Request for AI Output

Review the following AI-generated clinical content for safety issues:

[PASTE CONTENT]

Check for:
1. Any statements that sound like definitive diagnoses
2. Specific medication recommendations
3. Dosing suggestions
4. Advice that could delay emergency care
5. Information that contradicts current Indian clinical practice
6. Missing safety warnings

List all concerns found, with specific quotes and corrections needed.

Prompt 4: RED Zone Reminder Prompt

I need help with: [DESCRIBE TASK]

Before proceeding, categorise this request:
- GREEN (administrative/educational, safe for AI)
- YELLOW (clinical documentation, requires doctor verification)
- RED (diagnosis/prescribing/emergency, AI should not assist)

If this is a RED zone request, explain why and suggest what
alternative support you can safely provide instead.

Do’s and Don’ts

Do’s

  • Do use AI to save time on administrative and documentation formatting
  • Do verify every piece of AI-generated clinical content
  • Do de-identify all patient data before using AI tools
  • Do maintain a verification checklist for YELLOW zone tasks
  • Do create a written clinic AI policy
  • Do stay updated on DPDP Act requirements and NMC guidelines
  • Do treat AI as a junior assistant whose work you must supervise
  • Do document your verification process for medico-legal protection

Don’ts

  • Don’t let AI make diagnostic conclusions
  • Don’t use AI for prescribing or dosing decisions
  • Don’t paste identifiable patient information into AI tools
  • Don’t use AI for emergency clinical decisions
  • Don’t skip verification because “it looks correct”
  • Don’t assume AI understands Indian drug availability or local disease patterns
  • Don’t use AI-generated content without adding your clinical judgment
  • Don’t delegate the consent conversation to AI-generated scripts read verbatim

1-Minute Takeaway

AI is a powerful assistant for Indian doctors, but it requires clear boundaries:

  1. GREEN tasks (admin, education, templates): Use AI freely with basic review
  2. YELLOW tasks (documentation drafts): AI drafts, you verify everything
  3. RED tasks (diagnosis, prescribing, emergencies): Doctor only, no AI

Use the Three Questions Test before any AI task: Would I sign this? Could errors harm the patient? Does this need my judgment?

Always de-identify patient data (DPDP Act, 2023). Always verify AI outputs. Always remember: your medical council holds you accountable, not the AI.

The AI is your stethoscope, not your brain. You listen, you interpret, you decide.


Next article: Module B begins with “The 5-Part Prompt Formula” — a simple framework to write prompts that work reliably in clinical practice.
