Expert Review
Automated extraction provides powerful efficiency, but sometimes you need human-validated clinical data. Patient Journey Intelligence lets you explore data generated by curation jobs, examine the evidence behind each extracted value, and make corrections when needed. Every edit is saved and versioned, creating a complete audit trail of how your curated data evolved from automated extraction to clinically validated results.
Why Expert Review Matters
NLP models extract vast amounts of data efficiently, but clinical judgment is sometimes essential for ensuring accuracy. A pathology report might contain ambiguous phrasing, clinical context might suggest a different interpretation than the literal text, or edge cases might require domain expertise to resolve correctly. Additionally, the extraction instructions in your ontology definitions may be unclear or ambiguous, leading to extraction errors. Expert review provides a way to identify these patterns, understand where your ontology needs refinement, and validate results against clinical standards.
Patient Journey Intelligence bridges the gap between automated extraction and clinical validation by providing transparent access to both the extracted values and the source evidence that supports them. This transparency lets clinical experts verify accuracy, correct errors, and ensure every value in your dataset meets clinical standards.
Exploring Curated Data
After a curation job completes, the results are immediately available for review and validation. You can browse through the extracted data, examining what the NLP models found and how they interpreted the clinical documentation.
Accessing Extraction Results
Navigate to your completed curation jobs and select one to open the results workspace. You'll see a list of all patients processed by the job, along with the data extracted for each patient based on your ontology schema.
The interface organizes results to make exploration intuitive—you can browse by patient, filter by specific fields or values, search for particular cases, and quickly identify records that might need attention.
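Browsing can also be scripted. The sketch below assumes a hypothetical REST interface; the base URL, endpoint path, token placeholder, and response fields are all illustrative assumptions, not the platform's documented API.

```python
# A minimal sketch of listing results for a completed job, assuming a
# hypothetical REST API. The base URL, endpoint path, token placeholder,
# and response fields are all illustrative, not the documented interface.
import requests

BASE_URL = "https://your-instance.example.com/api/v1"  # hypothetical

session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"  # placeholder credential

# List the patients processed by a completed curation job.
resp = session.get(f"{BASE_URL}/curation-jobs/job-123/patients")
resp.raise_for_status()

for patient in resp.json()["patients"]:
    print(patient["patient_id"], patient["extracted_fields"])
```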
Understanding Extracted Values
Each extracted value comes with complete context. When you examine a field, you can see the final extracted value, the source documents that contributed to it, highlighted text showing where the information came from, confidence scores indicating the NLP model's certainty, and any contradictory evidence found in other documents.
This evidence transparency is crucial for validation. Instead of accepting extracted values blindly, you can verify that the NLP correctly interpreted the clinical documentation and identified the right information.
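To make these pieces concrete, here is a minimal sketch of how a single extracted value and its evidence might be modeled. The class and attribute names are assumptions for illustration, not the platform's actual schema.

```python
# Illustrative data model for one extracted value; names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    document_id: str      # source clinical note, report, or record
    snippet: str          # highlighted text the value was drawn from
    supports_value: bool  # False when the passage contradicts the value

@dataclass
class ExtractedValue:
    field_name: str                      # e.g. "tumor_stage"
    value: str                           # the final extracted value
    confidence: float                    # NLP model certainty, 0.0 to 1.0
    evidence: list[Evidence] = field(default_factory=list)

    def has_conflicts(self) -> bool:
        # Contradictory evidence flags a case worth a human look.
        return any(not e.supports_value for e in self.evidence)
```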
Validating and Correcting Extractions
When you identify values that need correction—whether due to NLP errors, ambiguous documentation, or clinical judgment differing from the automated interpretation—you can edit them directly.
Making Corrections
Click on any extracted field to open it for editing. The system shows you the current value and all supporting evidence, then lets you modify the value as needed. You might correct a misidentified diagnosis code, update staging information based on clinical context the NLP missed, clarify ambiguous terminology, or override automated extractions when human judgment suggests a different interpretation.
As you make edits, you can document your reasoning, explaining why the correction was necessary. This documentation becomes part of the permanent record, helping other reviewers understand the decision and supporting quality assurance processes.
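If your deployment also accepts edits programmatically, a correction and its rationale might be submitted as a single request. The endpoint and payload below are hypothetical, shown only to illustrate pairing a new value with documented reasoning.

```python
# Sketch of submitting a correction with its rationale; the endpoint and
# payload shape are hypothetical, not the platform's documented API.
import requests

session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"  # placeholder credential

correction = {
    "new_value": "T2N0M0",
    "rationale": "Pathology addendum supersedes the preliminary report "
                 "the NLP extraction was based on.",
}
resp = session.post(
    "https://your-instance.example.com/api/v1"
    "/patients/pt-42/fields/tumor_stage/edits",  # hypothetical path
    json=correction,
)
resp.raise_for_status()  # the edit is saved as a new version, not an overwrite
```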
Version Control for All Changes
Every edit is automatically saved and versioned. The system maintains a complete history showing the original automated extraction, all subsequent modifications, who made each change and when, and the rationale documented for each edit.
This version control serves multiple purposes. It provides a complete audit trail for regulatory compliance and research reproducibility, lets you compare how values evolved from initial extraction to final validated state, enables you to restore previous values if needed, and creates transparency for quality assurance reviews.
You can view the version history for any field at any time, seeing the complete evolution of that value through the validation process.
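Conceptually, each field's history is an ordered list of versions running from the original automated extraction to the current validated value. The sketch below models that idea; the class and attribute names are illustrative.

```python
# Illustrative model of a field's version history; names are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FieldVersion:
    value: str
    author: str          # e.g. "nlp-pipeline" for the automated extraction
    timestamp: datetime
    rationale: str       # documented reasoning; empty for automated versions

def print_history(versions: list[FieldVersion]) -> None:
    # Oldest first: index 0 is the original extraction, the last entry
    # is the current validated value.
    for i, v in enumerate(versions):
        note = v.rationale or "automated extraction"
        print(f"v{i}: {v.value!r} by {v.author} on {v.timestamp:%Y-%m-%d} ({note})")
```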
Working with Evidence
The evidence system is what transforms Expert Review from simple data editing into clinically grounded validation.
Examining Source Documents
For any extracted value, you can access the complete source documentation. The system shows you the specific clinical notes, reports, or records that contained the information, highlights the relevant text passages, indicates which documents supported the extraction and which contain potentially contradictory information, and provides confidence scores from the NLP models.
This direct connection to source evidence lets you verify that extractions accurately reflect the clinical documentation. You're not just editing values in a database—you're validating them against the actual patient record.
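One way to picture the highlighting is as character-offset spans into the source text. The sketch below assumes evidence arrives as (start, end) offsets; that span format is an illustrative assumption.

```python
# Sketch of rendering highlighted evidence passages from character offsets.
# The (start, end) span format is an assumption for illustration.
def highlight(document_text: str, spans: list[tuple[int, int]]) -> str:
    """Wrap each evidence span in >> << markers for side-by-side review."""
    out, cursor = [], 0
    for start, end in sorted(spans):  # assumes non-overlapping spans
        out.append(document_text[cursor:start])
        out.append(f">>{document_text[start:end]}<<")
        cursor = end
    out.append(document_text[cursor:])
    return "".join(out)

note = "Biopsy confirms invasive ductal carcinoma, grade 2."
print(highlight(note, [(16, 41)]))
# Biopsy confirms >>invasive ductal carcinoma<<, grade 2.
```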
Evidence-Based Corrections
When you make corrections, the evidence view helps ensure accuracy. You can review all available documentation before changing a value, identify cases where the NLP missed important context, spot patterns where certain types of extractions consistently need correction, and document exactly why a correction was necessary with reference to specific evidence.
This evidence-based approach ensures corrections improve data quality rather than introducing new errors.
Building Trust Through Transparency
Expert Review creates trustworthy registries by making every aspect of the curation process transparent and auditable.
Complete Audit Trails
The combination of version control and evidence tracking creates comprehensive audit trails. For every value in your registry, you can trace its complete history from initial automated extraction through any manual corrections to its final validated state. You can see who made changes, when they were made, what evidence supported the decisions, and what reasoning guided the validation process.
This level of documentation supports research publications, regulatory submissions, quality reporting, and internal quality assurance processes. Stakeholders can verify that your registry data is both accurate and properly validated.
Continuous Quality Improvement
The patterns revealed through expert review help improve your entire curation process. By examining which extractions consistently need correction, you can refine ontology extraction instructions to reduce future errors, identify types of clinical documentation that pose particular challenges for NLP, understand where additional training data might improve model performance, and optimize your review process based on where human validation adds the most value.
Each correction becomes a learning opportunity, gradually improving the accuracy of automated extractions and reducing the validation burden over time.
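A simple way to surface these patterns is to compute per-field correction rates from the review history. The sketch below assumes you can export (field, was_corrected) pairs; that export shape is illustrative.

```python
# Sketch of mining correction patterns to target ontology refinement.
# The exported (field, was_corrected) pairs are an illustrative assumption.
from collections import Counter

edits = [
    ("tumor_stage", True), ("tumor_stage", True), ("tumor_stage", False),
    ("histology", False), ("histology", True),
]

reviewed = Counter(f for f, _ in edits)
corrected = Counter(f for f, was_corrected in edits if was_corrected)

# Fields with high correction rates are candidates for clearer
# extraction instructions in the ontology definitions.
for name in reviewed:
    rate = corrected[name] / reviewed[name]
    print(f"{name}: {rate:.0%} of reviewed values needed correction")
```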
The Expert Review Workflow
In practice, expert review follows a natural workflow that balances efficiency with thoroughness.
Start by reviewing the extraction results from your curation job, focusing first on cases flagged for attention—perhaps those with low confidence scores or conflicting evidence. Examine the extracted values and their supporting evidence, verifying that the NLP correctly interpreted the clinical documentation. Make corrections where needed, documenting your reasoning. Review the version history to track how values evolved through validation.
This workflow ensures systematic coverage while allowing you to focus validation effort where it matters most. Not every extracted value requires manual review—focus on cases where clinical judgment is essential or where the automated extraction seems uncertain.
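A triage pass captures this prioritization: route low-confidence or conflicting extractions to the review queue and auto-accept the rest. In the sketch below, the record shape and the confidence cutoff are illustrative assumptions.

```python
# Sketch of triaging extraction results for review; the record shape and
# the 0.85 cutoff are illustrative assumptions to be tuned per registry.
CONFIDENCE_THRESHOLD = 0.85

def needs_review(record: dict) -> bool:
    # Flag values the model was unsure about or that have conflicting evidence.
    return (record["confidence"] < CONFIDENCE_THRESHOLD
            or record["has_conflicting_evidence"])

results = [
    {"field": "tumor_stage", "confidence": 0.62, "has_conflicting_evidence": True},
    {"field": "histology", "confidence": 0.97, "has_conflicting_evidence": False},
]

review_queue = [r for r in results if needs_review(r)]
auto_accepted = [r for r in results if not needs_review(r)]
print(f"{len(review_queue)} for expert review, {len(auto_accepted)} auto-accepted")
```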
Scaling Expert Validation
While every registry benefits from expert review, the level of validation required varies by use case. Research registries might require extensive validation with dual review of ambiguous cases. Quality reporting registries might focus validation on specific high-value fields. Operational registries might use sampling-based validation to verify overall accuracy.
Expert Review adapts to your needs. The evidence transparency and editing capabilities remain consistent, but you control how much validation effort to invest. Use confidence scores and evidence quality indicators to prioritize which cases need human review, rely on automated extractions where confidence is high, and focus expert attention where clinical judgment adds the most value.
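For sampling-based validation, a reproducible random sample keeps the audit trail intact. This sketch assumes you can export patient identifiers and record each reviewer's confirm-or-correct outcome; both shapes are illustrative.

```python
# Sketch of sampling-based validation: review a reproducible random sample
# and estimate overall accuracy. Sample size and data shapes are illustrative.
import random

def sample_for_validation(patient_ids: list[str], n: int = 50,
                          seed: int = 7) -> list[str]:
    # A fixed seed keeps the sample reproducible for the audit trail.
    rng = random.Random(seed)
    return rng.sample(patient_ids, min(n, len(patient_ids)))

def estimate_accuracy(outcomes: list[bool]) -> float:
    # outcomes: True where the reviewer confirmed the automated extraction.
    return sum(outcomes) / len(outcomes)

sample = sample_for_validation([f"pt-{i}" for i in range(1, 501)])
```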
The Partnership Between Automation and Expertise
Expert Review embodies the principle that automated extraction and clinical expertise complement each other. Automation provides the scale—efficiently processing thousands of patients and millions of documents. Human experts provide the judgment—validating nuanced cases, catching errors, and ensuring every value meets clinical standards.
By combining transparent evidence, flexible editing, and comprehensive version control, Expert Review creates registries that are both efficient to maintain and trustworthy to use. The result is curated data you can confidently rely on for research, quality improvement, and clinical decision support.