Discover every task across your organization through AI interviews, score automation potential on five axes, and ship in priority order — every quarter.
“Not a single executive in our company can tell you, with confidence, which tasks could be automated tomorrow and which truly require a human.”
— CHRO of a publicly listed firm, January 2026
Run quarterly. The first cycle is a 60–90 minute deep interview; subsequent cycles are 30-minute delta interviews.
Department-aware 1:1 interview. Extracts work across 16 axes from the standard work-analysis canon (5W1H + Worker Spec + Critical Incident).
Specifiability · data · verification · reversibility · accountability. Same scale across every task. AI classifies into Auto / Assist / Lead / Human-only.
Drill down: company → division → team → member → session → task. After human verification, every task lands on the priority matrix.
Quick-win quadrant first. Auto-routes each task to one of three tracks: employee self-service (Gemini · NotebookLM), IT project (Agent Spec), or human-only.
Specifiability, data, verification, reversibility, accountability — the five-axis decision framework from automation research, applied uniformly so cross-team comparison actually means something.
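The five-axis scoring and the Auto / Assist / Lead / Human-only split can be sketched as follows. The axis names and class labels come from the copy above; the 1–5 scale, the averaging, and the cutoffs are illustrative assumptions, not the product's actual rubric.

```python
# Five evaluation axes, as named in the text.
AXES = ("specifiability", "data", "verification", "reversibility", "accountability")

def classify(scores: dict) -> str:
    """Map five 1-5 axis scores to an automation class.

    The averaging and the cutoff values are hypothetical placeholders."""
    if any(axis not in scores for axis in AXES):
        raise ValueError("all five axes are required")
    avg = sum(scores[axis] for axis in AXES) / len(AXES)
    if avg >= 4.0:
        return "Auto"
    if avg >= 3.0:
        return "Assist"
    if avg >= 2.0:
        return "Lead"
    return "Human-only"
```

Because every task is scored on the same five axes, two teams' results land on one comparable scale.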
Value × automation score × monthly hours, auto-binned into four quadrants.
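A minimal sketch of the quadrant binning described above. The priority formula (value × automation score × monthly hours) and the "quick-win" label are from the text; the split into an impact axis and a feasibility axis, the other three quadrant names, and all cutoff values are assumptions for illustration.

```python
def bin_task(value: float, automation_score: float, monthly_hours: float) -> str:
    """Bin one task into a priority quadrant (illustrative cutoffs)."""
    priority = value * automation_score * monthly_hours  # headline score from the text
    impact = value * monthly_hours            # assumed impact axis
    feasible = automation_score >= 3.0        # assumed feasibility cutoff on a 1-5 scale
    high_impact = impact >= 20.0              # assumed impact cutoff
    if feasible and high_impact:
        return "quick-win"
    if feasible:
        return "fill-in"        # hypothetical label
    if high_impact:
        return "strategic"      # hypothetical label
    return "deprioritize"       # hypothetical label
```

Quick-win tasks (feasible and high-impact) are shipped first, per the routing rule above.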
Full → delta. Each subsequent cycle measures only what changed, so it gets faster.
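The full → delta idea reduces to a diff between two cycles' task inventories. A minimal sketch, assuming tasks are keyed by a stable identifier mapped to their recorded state:

```python
def delta(prev: dict, current: dict) -> dict:
    """Return the tasks added, removed, or changed since the previous cycle."""
    return {
        "added": {t for t in current if t not in prev},
        "removed": {t for t in prev if t not in current},
        "changed": {t for t in current if t in prev and current[t] != prev[t]},
    }
```

A delta interview then only needs to cover the three resulting sets, which is why later cycles shrink toward 30 minutes.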
The system decides the right delivery path per task.
D1 + R2 prefix isolation. No path from one tenant's data to another's.
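Prefix isolation of this kind usually means every storage key is built through one helper that pins the tenant prefix. A sketch of the pattern, assuming R2-style flat object keys; the `tenants/` prefix layout and validation rules are illustrative, not the product's actual scheme:

```python
def tenant_key(tenant_id: str, object_name: str) -> str:
    """Build an object key under the tenant's own prefix.

    Routing every read and write through this helper means a request for
    tenant A can never yield a key under tenant B's prefix."""
    if not tenant_id or "/" in tenant_id:
        raise ValueError("invalid tenant id")
    return f"tenants/{tenant_id}/{object_name}"
```

The same discipline applies on the relational side: every query is scoped by the tenant identifier before it touches storage.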
16-axis extraction rate — measured at interview close
Average interview length (delta cycles under 30 min)
Automation classes — Auto / Assist / Lead / Human
Evaluation axes — work analysis + Worker Spec + Critical Incident
Start with one team of 5 to 10 people. If the first cycle doesn't deliver, we refund, no questions asked.