Enhancing Clinical Documentation with a Clinician-in-the-Loop Approach

Introduction

Clinical documentation is a critical yet time-consuming task for healthcare professionals. Traditional AI scribe tools generate notes directly from transcripts, and the resulting drafts often require significant revision by the clinician.

At Corti, we are developing a clinician-in-the-loop approach that empowers clinicians to curate medical facts before note generation. This method improves note relevance, enhances clinician control, and streamlines the documentation process.

Context: Corti’s Fact-based workflow shifts the clinician’s role from editing a completed note to refining a structured set of facts. By selecting relevant facts and adding missing information, clinicians ensure that the final note is both precise and meaningful. This approach also enables seamless delegation to a Large Language Model (LLM) for final note composition, reducing cognitive load and improving efficiency.

This article explores our study on clinician-in-the-loop documentation, its impact on documentation quality, and our next steps in refining the approach.

The Study

We conducted a study to test the concept of clinician-in-the-loop documentation using real data. Our objectives were to:

  1. Demonstrate the feasibility of a clinician-in-the-loop approach.

  2. Evaluate the performance of Corti's alignment model.

  3. Develop a framework to assess the effects of changes in prompts, user experience, and other factors.

  4. Prepare for a clinical test with real clinicians refining the fact lists.

To achieve these goals, we utilized the Primock57 dataset, a benchmark set of 57 clinical encounters containing raw transcripts of doctor-patient conversations and corresponding physician-written notes. The physician-written notes served as the gold standard for evaluating meaning retention and accuracy.
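To make the setup concrete, here is a minimal sketch of pairing each Primock57 transcript with its physician-written note. The parallel `transcripts/` and `notes/` folder layout with shared file stems is an assumption for illustration, not the dataset's documented structure.

```python
from pathlib import Path

def load_encounters(root: str) -> list[dict]:
    """Pair each transcript with its gold-standard physician note.

    Assumes parallel `transcripts/` and `notes/` folders whose files
    share a stem per encounter (an illustrative layout, not necessarily
    Primock57's actual one).
    """
    root_path = Path(root)
    encounters = []
    for transcript_file in sorted((root_path / "transcripts").glob("*.txt")):
        note_file = root_path / "notes" / transcript_file.name
        if note_file.exists():
            encounters.append({
                "id": transcript_file.stem,
                "transcript": transcript_file.read_text(),
                "gold_note": note_file.read_text(),  # D_G in the notation below
            })
    return encounters
```

Each returned record carries the raw conversation plus the note that serves as the evaluation reference ($D_G$).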

Methodology

Generating Facts

  1. Fact Extraction ($F$): Corti’s fact generation model extracts medical facts from segmented transcript chunks, mimicking real-time inference.

  2. Fact Pruning ($F_P$): Using the Corti alignment model, facts that do not align with the gold-standard note ($D_G$) are removed, simulating clinician-driven pruning.

  3. Fact Augmentation ($F_{P'}$): Facts missing from $F_P$ but present in $D_G$ are identified and added to form an enhanced set.
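The pruning and augmentation steps above can be sketched as follows. `aligns` and `extract_facts` are hypothetical stand-ins for Corti's alignment and fact-generation models, used here only to show the data flow.

```python
def prune_and_augment(facts, gold_note, aligns, extract_facts):
    """Simulate clinician pruning and augmentation against a gold note.

    `aligns(fact, note)` and `extract_facts(note)` are hypothetical
    callables standing in for the alignment and fact-generation models.
    """
    # F_P: drop extracted facts that do not align with the gold note D_G.
    pruned = [f for f in facts if aligns(f, gold_note)]
    # F_P': add gold-note facts not already covered by the pruned set.
    augmented = pruned + [g for g in extract_facts(gold_note) if g not in pruned]
    return pruned, augmented
```

In the study, pruning plays the role of a clinician deselecting irrelevant facts, and augmentation plays the role of a clinician typing in facts the model missed.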

Generating Notes

  1. Traditional Scribe Approach ($D_T$): Notes generated directly from transcripts using Corti’s legacy scribe LLM.

  2. Fact-Based Note ($D_F$): Notes generated from the full fact list.

  3. Pruned Fact-Based Note ($D_P$): Notes generated from clinician-pruned facts.

  4. Augmented Fact-Based Note ($D_{P'}$): Notes generated from pruned and supplemented facts.
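The four documentation conditions can be assembled in one place. `scribe_llm` and `fact_llm` below are hypothetical callables representing the legacy scribe model and the fact-based note composer; the real models are not shown here.

```python
def build_note_variants(transcript, facts, pruned, augmented, scribe_llm, fact_llm):
    """Produce the four note conditions compared in the study.

    `scribe_llm` and `fact_llm` are hypothetical stand-ins for Corti's
    legacy scribe LLM and fact-based note composer.
    """
    return {
        "D_T": scribe_llm(transcript),   # traditional scribe, straight from transcript
        "D_F": fact_llm(facts),          # full fact list
        "D_P": fact_llm(pruned),         # clinician-pruned facts
        "D_P'": fact_llm(augmented),     # pruned + supplemented facts
    }
```

Keeping all four variants behind one interface makes it easy to swap prompts or models and re-run the comparison.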

Results & Insights

We analyzed the generated documentation using four key metrics:

  • Completeness: Proportion of $D_G$ meaning retained in generated notes.

  • Conciseness: Proportion of generated content that aligns with $D_G$.

  • Groundedness: Proportion of statements grounded in the transcript.

  • Word Count: Average length of notes.
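As a sketch of how these proportions could be computed, treat each note as a list of fact-like units and score them with a semantic-alignment predicate. `supported(unit, reference)` is a hypothetical stand-in for the alignment model; the unit segmentation and predicate are assumptions for illustration.

```python
def score_note(generated, gold, transcript, supported):
    """Compute completeness, conciseness, groundedness, and word count.

    `generated` and `gold` are lists of fact-like units; `transcript` is
    the raw conversation text; `supported(unit, reference)` is a
    hypothetical semantic-alignment predicate.
    """
    # Completeness: how much of the gold note's meaning the generated note retains.
    completeness = sum(supported(u, generated) for u in gold) / len(gold)
    # Conciseness: how much of the generated content aligns with the gold note.
    conciseness = sum(supported(u, gold) for u in generated) / len(generated)
    # Groundedness: how much of the generated content the transcript supports.
    groundedness = sum(supported(u, transcript) for u in generated) / len(generated)
    word_count = sum(len(u.split()) for u in generated)
    return {"completeness": completeness, "conciseness": conciseness,
            "groundedness": groundedness, "word_count": word_count}
```

Note that completeness is measured against the gold note while groundedness is measured against the transcript, so a note can be fully grounded yet incomplete.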

Key Findings

  • Maintaining Completeness: Fact-based documentation methods ($D_F$, $D_P$, $D_{P'}$) maintained completeness comparable to the traditional scribe approach ($D_T$), indicating that the clinician-in-the-loop approach does not compromise information capture.

  • Improved Conciseness: Pruned fact-based notes ($D_P$ and $D_{P'}$) were more concise than transcript-based notes, reducing irrelevant details.

  • Groundedness Consistency: Groundedness remained stable across all approaches, confirming that fact-based documentation does not introduce additional inaccuracies.

  • Minimal Impact from ASR Quality: The study compared documentation generated from automatic speech recognition (ASR) transcripts and human-annotated transcripts, finding negligible differences in completeness and conciseness, indicating robustness to ASR imperfections.

Implications & Next Steps

Our findings demonstrate that incorporating clinicians in the documentation process reduces irrelevant facts while maintaining completeness.

Future steps include:

  • Refining prompt engineering to reduce verbosity in fact generation.

  • Exploring improved note-splitting techniques for enhanced clarity.

  • Iterating with different prompt templates to refine quality assurance.

  • Conducting trials with real clinicians actively curating facts.

By empowering clinicians in the note-generation process, we aim to create a documentation experience that is efficient, precise, and conducive to improved patient care.

Stay tuned for further developments as we refine and expand this approach!
