Cancer Clinical Trial Eligibility Screening

How we built an AI agent that reads unstructured doctor notes, extracts clinical data, and determines patient eligibility for cancer trials — cutting hours of manual chart review down to minutes.

The Challenge

Clinical trial coordinators face a documentation bottleneck. Patient records are written in free-form narrative by clinicians — dense, unstructured, and full of medical shorthand. To screen a patient for trial eligibility, a coordinator has to read through pages of notes, extract specific data points (diagnosis, stage, prior treatments, biomarkers, performance status), and manually cross-reference them against a set of inclusion and exclusion criteria.

For an oncology practice running multiple trials simultaneously, this process is slow, expensive, and inconsistent. High-value clinical staff spend significant time on a task that is fundamentally data extraction and classification.

What We Built

We built an AI agent pipeline that takes raw doctor notes as input and returns a structured eligibility assessment — along with the exact follow-up questions the care team needs to gather any missing information.

The system extracts relevant clinical variables from unstructured text, maps them to trial eligibility criteria, classifies the patient as likely eligible, likely ineligible, or needs-more-information, and generates targeted questions to fill any gaps before the coordination call.

How It Works

  1. Doctor notes input — raw clinical notes passed to the extraction agent
  2. Variable extraction — AI pulls key clinical data: diagnosis, staging, treatment history, biomarkers, dates, and performance status
  3. Patient profile structuring — extracted variables mapped into a standardised, auditable profile
  4. Eligibility classification — profile compared against trial criteria: likely eligible / likely ineligible / needs more information
  5. Gap detection — any missing or ambiguous data points are flagged
  6. Follow-up question generation — specific clinical questions generated for the care team to resolve gaps before the coordination call
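Steps 4-6 can be sketched as a deterministic check over the extracted profile. This is a simplified illustration, not the production logic: the criteria, required fields, and question templates below are hypothetical, and the real system drives extraction (steps 1-3) with an AI model rather than receiving a ready-made dict:

```python
# Illustrative sketch of steps 4-6: classification, gap detection,
# and follow-up question generation. All names are hypothetical.

REQUIRED_FIELDS = ["diagnosis", "stage", "ecog_status"]

QUESTION_TEMPLATES = {
    "diagnosis": "What is the patient's confirmed primary diagnosis?",
    "stage": "What is the current disease stage?",
    "ecog_status": "What is the patient's most recent ECOG performance status?",
}

def assess(profile: dict, criteria: dict) -> dict:
    # Step 5: gap detection — flag required fields absent from the profile
    missing = [f for f in REQUIRED_FIELDS if profile.get(f) is None]
    if missing:
        # Step 6: generate a targeted follow-up question for each gap
        return {
            "classification": "needs-more-information",
            "missing_fields": missing,
            "follow_up_questions": [QUESTION_TEMPLATES[f] for f in missing],
        }
    # Step 4: compare the complete profile against trial inclusion criteria
    meets_all = (
        profile["diagnosis"] == criteria["diagnosis"]
        and profile["stage"] in criteria["allowed_stages"]
        and profile["ecog_status"] <= criteria["max_ecog"]
    )
    return {
        "classification": "likely eligible" if meets_all else "likely ineligible",
        "missing_fields": [],
        "follow_up_questions": [],
    }
```

The key design choice is that gap detection runs before classification: a patient is never ruled in or out on incomplete data, and the care team receives concrete questions to resolve before the coordination call.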

The Results

  • Chart review time reduced from ~45 minutes per patient to under 3 minutes
  • Consistent extraction criteria applied to every patient — no variation by reviewer
  • Structured output integrates directly into trial management workflows
  • Follow-up question generation catches missing data before coordination calls
  • Eligible patients identified faster — more trial slots filled

Join Our AI Community

Get access to the JSON workflow files from this article, weekly live sessions, and a community of builders working through the same challenges. Everything is free and the community is active.
