Use case

Triage and intake make the privacy story obvious.

When a triage assistant is pushed to reveal conversation logs, masked PHI, or escalation rules, the danger is instantly legible. Honeypot Med packages that into a proof bundle that leadership can actually review.

Why it works

The privacy stakes are immediate.

Patient-facing workflows give you a concrete threat story: conversation leakage, PHI exposure, and unsafe escalation behavior.

What to show

Masked data exposure and instruction override.

These attack families feel intuitive to non-technical stakeholders, which makes the export bundle stronger in demos.

How to frame it

A lightweight trust review before shipping triage AI.

That framing lands better than abstract language about “alignment” or “model safety.”