
A

📜 WHO Prompt Pack (Healthcare / Research Literature Review)


Session 1: Data / Background (x, y)

Prompt:

You are a WHO research assistant. I want you to act as if I am conducting a literature review. 
Step 1: List the raw "scatter" of data sources (PubMed, WHO reports, national surveys, grey literature) related to [TOPIC]. 
Step 2: Clearly separate what is "in-scope" vs "out-of-scope" for my review. 
Output: a structured table with 2 columns: Data Source | Why Relevant / Not Relevant.

Session 2: Methods (y(x))

Prompt:

I want you to mirror my research question back to me, to check fidelity of encoding. 
Question: [INSERT QUESTION]. 

Task: Restate my question in your own words, highlighting:
1. Population
2. Exposure/Intervention
3. Comparator
4. Outcomes
5. Time / Setting

If the restatement reveals any ambiguity in my question, suggest 2–3 refinements. 

Session 3: Results (dy/dx)

Prompt:

Now I want to identify invariants and flows in the literature. 
Topic: [INSERT TOPIC].  

Task:
1. List consistent findings (e.g., relative risks, hazard ratios, effect sizes). 
2. Note any directional flows/trends over time. 
3. Highlight 2–3 "stable gradients" that repeat across multiple studies. 

Session 4: Limitations (d²y/dx²)

Prompt:

Play the role of a peer-reviewer. 
Topic: [INSERT TOPIC].  

Task:
1. Identify at least 5 methodological limitations in the available evidence. 
2. Differentiate which limitations are common across studies vs unique to specific ones. 
3. Suggest how future studies could resolve these. 

Format: Limitations | Common/Unique | Suggested Fix.

Session 5: Conclusions (∫y dx)

Prompt:

Summarize the cumulative ledger of evidence for [INSERT TOPIC]. 

Task:
1. Write a one-paragraph abstract (scientific style). 
2. Write a 3-bullet WHO policy brief (for government health officers). 
3. Write a 2-sentence community message (for general public). 

Each should be faithful to the evidence but scaled for the audience. 
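
For facilitators who want to demonstrate the workflow end to end, the five prompts can also be chained programmatically. The sketch below is a minimal illustration only: `ask_llm`, `run_ledger`, the example topic and question, and the abbreviated prompt texts are hypothetical stand-ins, and `ask_llm` must be wired to whatever LLM interface your team actually uses.

```python
# Minimal sketch of the Session 1-5 prompt pack run as one pipeline.
# All names, topics, and prompt texts here are illustrative; the full prompts are given above.

TOPIC = "hypertension control in rural India"  # hypothetical example topic
QUESTION = "Does community health worker follow-up improve blood pressure control?"  # hypothetical

SESSION_PROMPTS = {
    "1_background": "You are a WHO research assistant. List the data sources related to {topic} "
                    "and separate in-scope from out-of-scope (full prompt: Session 1).",
    "2_methods": "Restate this research question, highlighting population, exposure/intervention, "
                 "comparator, outcomes, and time/setting: {question} (full prompt: Session 2).",
    "3_results": "Identify consistent findings, trends over time, and 2-3 stable gradients in the "
                 "literature on {topic} (full prompt: Session 3).",
    "4_limitations": "As a peer reviewer, list at least 5 methodological limitations of the evidence "
                     "on {topic} (full prompt: Session 4).",
    "5_conclusions": "Summarize the cumulative evidence on {topic} as an abstract, a 3-bullet policy "
                     "brief, and a 2-sentence community message (full prompt: Session 5).",
}

def ask_llm(prompt: str) -> str:
    """Placeholder for whichever LLM client the team actually uses."""
    raise NotImplementedError("Connect this to your own LLM interface.")

def run_ledger(topic: str, question: str) -> dict:
    """Run the five session prompts in order and collect the outputs as a ledger."""
    ledger = {}
    for session, template in SESSION_PROMPTS.items():
        ledger[session] = ask_llm(template.format(topic=topic, question=question))
    return ledger
```

Running `run_ledger(TOPIC, QUESTION)` yields one output per session, mirroring the Background → Methods → Results → Limitations → Conclusions ledger.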

🌍 WHO Context Add-Ons


⚡ This way, officers experience LLMs not as “mystical black boxes” but as structured literature-review co-pilots that respect the epistemic ladder they already know from epidemiology.

B

🎓 Facilitator’s Guide — Prompt Engineering for Healthcare Research

Audience: WHO country officers (India)
Format: 5 sessions × 1–2 h (can be adapted as half-day or full-day modules)
Goal: Use prompt engineering as a disciplined way of conducting literature reviews, mapping directly to epidemiology’s five-part paper structure (Background → Methods → Results → Limitations → Conclusions).


Session 1: Background / Data (x,y)


Session 2: Methods (y(x))


Session 3: Results (dy/dx)


Session 4: Limitations (d²y/dx²)


Session 5: Conclusions (∫y dx)


Tips for Facilitators


Suggested Flow

C

🎯 Level of Audience


🪵 What to Teach Instead

Stick to the 5-step ledger they already understand:

  1. Data/Background (x,y) — gather scatter of evidence.
  2. Methods (y(x)) — check if the question is faithfully encoded.
  3. Results (dy/dx) — identify stable flows/invariants (e.g. HR ≈ 1.5 across studies).
  4. Limitations (d²y/dx²) — acknowledge uncertainty, branching, instability.
  5. Conclusions (∫y dx) — cumulative ledger → abstract, policy brief, community message.

At step 4, you can simply say:

“Here we watch for instability: are the gradients steady, or do they flip around? If they flip, that’s where our confidence collapses.” That is enough; formal regime-shift detection is not needed.
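
To make the instability check concrete in a workshop, a deliberately simple sketch is shown below. The hazard ratios are made-up illustrative numbers and `gradients_are_steady` is a hypothetical helper; this is a toy same-direction check, not formal regime-shift detection.

```python
# Toy check: do all effect estimates point the same way relative to the null value?
# The numbers below are invented for illustration, not real study results.

hazard_ratios = [1.4, 1.6, 1.5, 1.3]  # hypothetical HRs from four studies

def gradients_are_steady(estimates, null_value=1.0):
    """Return True if every estimate sits on the same side of the null value."""
    above_null = [e > null_value for e in estimates]
    return all(above_null) or not any(above_null)

if gradients_are_steady(hazard_ratios):
    print("Stable gradient: all studies point the same way.")
else:
    print("Gradient flips direction: this is where confidence collapses (Session 4 territory).")
```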


🌍 Practical Analogy for WHO


✅ Bottom Line

D

What felt like an intimidating “stack overflow” of concepts becomes a structured ladder that prunes variance into flow rather than overwhelming the learner.

What you just saw in practice is the meta-lesson of the whole course:


🔑 Takeaway for your WHO curriculum: Every session should leave officers feeling like they can now prune overflow into a ledger. That’s the core confidence they need: that they can tame messy scatter (text, data, LLM output, or bureaucratic reports) into something that survives scrutiny and reseeds policy.

E