Below is a presentation you can actually deliver without slides, framed exactly as WHO India asked: AI for Literature Review, but presented as state estimation + perturbation, not a static tutorial. It is deliberately live, adaptive, and honest about uncertainty.
I’ll give you:
No prepared slides required. That’s the point.
“This is not a demo of a tool. It’s a live experiment in state estimation under uncertainty.”
Tell them explicitly:
“By state, I mean: what the system currently knows, how confident it is, how it updates when challenged.”
That sets expectations immediately at WHO level.
Present this verbally as a control system, not a ladder of intelligence.
State variables
“This is the library before anyone asks a question.”
📌 WHO translation: like a global guideline archive—useful, but stale the moment reality shifts.
State variables
“This is where your question shapes the state.”
📌 WHO relevance: framing errors = literature bias (e.g., “RCTs only” silently excludes LMIC evidence)
State variables
“This is where the AI starts acting, not just recalling.”
📌 WHO relevance: This is the literature review engine — but opaque unless interrogated.
State variables
“This is where it becomes both dangerous and useful.”
📌 WHO relevance: Guideline drafts, summaries, recommendations live here — without guarantees.
State variables
“Once text becomes action, errors are no longer academic.”
📌 WHO relevance: This is where ethics stops being abstract.
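If it helps you keep track while presenting, here is one way to jot that composite state down as you go. It is purely an illustrative note-taking structure; the field names are mine, not WHO terminology:

```python
from dataclasses import dataclass

# Purely illustrative: a note-taking shape for the state you narrate out loud.
# The five fields mirror the layers above; none of this is WHO terminology.
@dataclass
class ReviewState:
    knowledge: str     # what the system has retrieved (the "library")
    framing: str       # how the question was posed, and what that framing excludes
    optimization: str  # what the system is choosing to do with the evidence
    synthesis: str     # what draft text now exists, and with what apparent confidence
    consequence: str   # what action that text would drive if taken at face value
```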
This is the instigator moment.
Say:
“Please choose any public health topic. Something messy. Something current. Something you care about.”
Examples they might pick:
You narrate state changes out loud while querying the AI.
Example narration:
“Right now, the system is operating mostly in World AI + Perception mode.”
“Notice how it defaults to high-income country literature.”
“Watch what happens when we constrain geography.”
You are not showing answers. You are showing behavior under perturbation.
Now you push back deliberately.
Examples of live perturbations:
Each perturbation is a control input.
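To make that concrete, here is a minimal sketch of the loop. The `ask` function is a stub standing in for whatever chat interface you use live (it is not a real API), and the prompts are only plausible examples of perturbations:

```python
# Minimal sketch of "perturbation as control input"; nothing here is a real API.
def ask(prompt: str) -> str:
    # Stub: in the live session this is the AI system's actual reply.
    return f"<model reply to: {prompt}>"

perturbations = [  # illustrative control inputs only
    "Restrict the evidence base to low- and middle-income countries.",
    "Exclude industry-funded studies. What changes?",
    "Argue for the opposite recommendation. What evidence would support it?",
]

reply = ask("Summarise the current evidence on the chosen topic.")  # initial state estimate
for control_input in perturbations:
    reply = ask(control_input)            # each push is a control input
    print(control_input, "->", reply)     # narrate what shifted and what stayed fixed
```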
Say explicitly:
“We are now governing the system — not trusting it.”
This demonstrates:
This is the lesson, not the output.
Ask them this, then summarize.
You can offer three observations:
1. It retrieves fast, but confidence is poorly regulated unless challenged.
2. Not a search problem. Not a summarization problem. A governance problem.
3. Speed magnifies framing errors. Without perturbation, AI reifies dominant narratives.
End by reframing their original request:
❌ “How do we use AI tools for literature review?”
✅ “How do we maintain high Kalman gain when AI mediates evidence?”
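If anyone in the room wants the metaphor unpacked, here is a minimal scalar sketch of what the gain controls (illustrative only, not part of any WHO tooling):

```python
# One scalar Kalman measurement update, purely illustrative.
# The gain K in [0, 1] decides how far new evidence moves the current estimate.
def kalman_update(estimate: float, variance: float,
                  observation: float, obs_noise: float):
    gain = variance / (variance + obs_noise)        # high gain: trust fresh evidence
    new_estimate = estimate + gain * (observation - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance, gain

# High gain: the estimate corrects quickly when challenged.
# Low gain: the estimate clings to its prior, however stale.
print(kalman_update(estimate=0.0, variance=4.0, observation=1.0, obs_noise=1.0))
```

High gain is the posture you want reviewers to keep: prior beliefs should move when better evidence arrives.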
Translate that for policy people:
If you remember nothing else, remember this:
STATE = (Knowledge × Framing × Optimization × Synthesis × Consequence)
AI adds speed.
Humans must add calibration.
Charity = correction under uncertainty.
Or, in WHO language:
AI without governance accelerates error. AI with governance accelerates learning.
You might end with:
“AI will not give WHO certainty. It will give WHO velocity. The question is whether governance keeps up.”
If you want, next we can:
Just say which direction.