Cancer Clinical Trial Eligibility Screening
How we built an AI agent that reads unstructured doctor notes, extracts clinical data, and determines patient eligibility for cancer trials — cutting hours of manual chart review down to minutes.
The Challenge
Clinical trial coordinators face a documentation bottleneck. Patient records are written in free-form narrative by clinicians — dense, unstructured, and full of medical shorthand. To screen a patient for trial eligibility, a coordinator has to read through pages of notes, extract specific data points (diagnosis, stage, prior treatments, biomarkers, performance status), and manually cross-reference them against a set of inclusion and exclusion criteria.
For an oncology practice running multiple trials simultaneously, this process is slow, expensive, and inconsistent. High-value clinical staff spend significant time on a task that is fundamentally data extraction and classification.
What We Built
We built an AI agent pipeline that takes raw doctor notes as input and returns a structured eligibility assessment — along with the exact follow-up questions the care team needs to gather any missing information.
The system extracts relevant clinical variables from unstructured text, maps them to trial eligibility criteria, classifies the patient as likely eligible, likely ineligible, or needs more information, and generates targeted questions to fill any gaps before the coordination call.
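To make that output concrete, here is a minimal sketch of what the structured assessment could look like. All names here (`EligibilityAssessment`, `CriterionResult`, the status values) are illustrative assumptions, not the production schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class EligibilityStatus(str, Enum):
    LIKELY_ELIGIBLE = "likely_eligible"
    LIKELY_INELIGIBLE = "likely_ineligible"
    NEEDS_MORE_INFO = "needs_more_information"


@dataclass
class CriterionResult:
    criterion_id: str        # e.g. "INC-01: ECOG performance status 0-1"
    met: bool | None         # None when the notes never answer the criterion
    evidence: str            # snippet from the notes supporting the call


@dataclass
class EligibilityAssessment:
    patient_id: str
    trial_id: str
    status: EligibilityStatus
    criteria: list[CriterionResult] = field(default_factory=list)
    missing_data: list[str] = field(default_factory=list)         # flagged gaps
    follow_up_questions: list[str] = field(default_factory=list)  # for the care team
```

Tying each criterion verdict to quoted evidence is what makes the profile auditable rather than a black-box score.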
How It Works
- Doctor notes input — raw clinical notes passed to the extraction agent
- Variable extraction — AI pulls key clinical data: diagnosis, staging, treatment history, biomarkers, dates, and performance status (see the extraction sketch after this list)
- Patient profile structuring — extracted variables mapped into a standardised, auditable profile
- Eligibility classification — profile compared against the trial's inclusion and exclusion criteria: likely eligible / likely ineligible / needs more information (see the classification sketch after this list)
- Gap detection — any missing or ambiguous data points are flagged
- Follow-up question generation — specific clinical questions generated for the care team to resolve gaps before the coordination call
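For the variable-extraction step, a minimal sketch of the idea, assuming an OpenAI-style chat API with JSON-mode output; the prompt wording, field list, and model name are placeholders, not the production setup:

```python
import json

from openai import OpenAI  # assumption: any LLM client with a chat endpoint works here

client = OpenAI()

EXTRACTION_PROMPT = """Extract the following fields from the clinical note below.
Return JSON with keys: diagnosis, stage, prior_treatments, biomarkers,
ecog_performance_status, relevant_dates. Use null for anything not stated.
Quote the supporting sentence for each value in a parallel "evidence" object.

Note:
{note}"""


def extract_variables(note_text: str) -> dict:
    """Step 2: pull key clinical variables out of a free-text doctor note."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(note=note_text)}],
        response_format={"type": "json_object"},  # force parseable output
        temperature=0,  # extraction should be deterministic, not creative
    )
    return json.loads(response.choices[0].message.content)
```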
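For the classification, gap-detection, and question-generation steps, a deterministic sketch that reuses the assessment types from the schema above; the two criteria, their predicates, and the question text are invented examples:

```python
# Assumption: each criterion pairs a profile field with a predicate and a
# ready-made follow-up question for when that field is missing from the notes.
TRIAL_CRITERIA = [
    {
        "id": "INC-01",
        "field": "ecog_performance_status",
        "check": lambda v: v in (0, 1),
        "question": "What is the patient's current ECOG performance status?",
    },
    {
        "id": "EXC-02",
        "field": "prior_treatments",
        "check": lambda v: not any("platinum" in t.lower() for t in v),
        "question": "Has the patient received prior platinum-based chemotherapy?",
    },
]


def assess(profile: dict, patient_id: str, trial_id: str) -> EligibilityAssessment:
    """Steps 4-6: classify, flag gaps, and generate follow-up questions."""
    results, gaps, questions = [], [], []
    for c in TRIAL_CRITERIA:
        value = profile.get(c["field"])
        if value is None:  # step 5: missing data is a gap, not a failure
            results.append(CriterionResult(c["id"], met=None, evidence=""))
            gaps.append(c["field"])
            questions.append(c["question"])  # step 6
        else:
            results.append(CriterionResult(c["id"], met=c["check"](value), evidence=str(value)))

    if any(r.met is False for r in results):
        status = EligibilityStatus.LIKELY_INELIGIBLE  # one hard failure decides it
    elif gaps:
        status = EligibilityStatus.NEEDS_MORE_INFO
    else:
        status = EligibilityStatus.LIKELY_ELIGIBLE

    return EligibilityAssessment(patient_id, trial_id, status, results, gaps, questions)
```

Keeping this step rule-based rather than asking the model for a verdict means every status traces back to a named criterion, which is what lets the output slot directly into trial management workflows.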
The Results
- Chart review time reduced from ~45 minutes per patient to under 3 minutes
- Consistent extraction criteria applied to every patient — no variation by reviewer
- Structured output integrates directly into trial management workflows
- Follow-up question generation catches missing data before coordination calls
- Eligible patients identified faster — more trial slots filled
