Natural language processing to extract actionable findings from radiology reports
This project takes historical data from the EMR for a single, relatively well understood disease process (appendicitis), in which a radiology exam (CT scan) is known to lead to an improved outcome (decreased incidence of negative appendectomies), and applies NLP algorithms to text reports (radiology and pathology) in order to identify significant radiographic, pathologic, and physical exam findings. Ultimately, this information can be cross-referenced with laboratory test results to form the basis of an automated decision support tool for physicians ordering radiology studies, radiologists dictating reports, and physicians interpreting radiology reports. For Year 2, the project will expand to Children's Medical Center to test the same algorithms at a different institution, and the cohort will be expanded to include ICD9 789.0, unspecified abdominal pain.
Year 1 Results:
Our test set consists of 400 radiology reports from UT Southwestern Medical Center. The
radiology MDs, Seth Toomay and Travis Browning, annotated the reports as positive or negative
for appendicitis. Using our lexicons for anatomical terms near the appendix and inflammation
terms, we created four groups of 100 reports each: (1) reports containing at least one term
from both lexicons, (2) reports containing an inflammation term, but not an anatomical term,
(3) reports containing an anatomical term, but not an inflammation term, and (4) reports
containing neither inflammation nor anatomical terms. This resulted in a more balanced
training set, since many of the initial reports were not relevant to appendicitis.
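The four-way split described above can be sketched as follows. This is a minimal illustration: the mini-lexicons, the substring-matching helper, and the `stratify` function are placeholders for the project's actual curated term lists and matching logic.

```python
import random

# Hypothetical mini-lexicons; the project's real lexicons (anatomical terms
# near the appendix, inflammation terms) are larger and manually curated.
ANATOMY = {"appendix", "cecum", "right lower quadrant", "ileocecal"}
INFLAMMATION = {"inflamed", "inflammation", "fat stranding", "abscess", "phlegmon"}

def contains_term(text, lexicon):
    """True if any lexicon term appears in the lowercased report text."""
    text = text.lower()
    return any(term in text for term in lexicon)

def stratify(reports, per_group=100, seed=0):
    """Assign each report to one of the four lexicon-based groups, then
    sample up to per_group reports from each to build a balanced set."""
    groups = {1: [], 2: [], 3: [], 4: []}
    for report in reports:
        has_anat = contains_term(report, ANATOMY)
        has_infl = contains_term(report, INFLAMMATION)
        if has_anat and has_infl:
            groups[1].append(report)   # both lexicons
        elif has_infl:
            groups[2].append(report)   # inflammation only
        elif has_anat:
            groups[3].append(report)   # anatomy only
        else:
            groups[4].append(report)   # neither
    rng = random.Random(seed)
    return {g: rng.sample(rs, min(per_group, len(rs)))
            for g, rs in groups.items()}
```

Sampling equally from each group, rather than at random from the full corpus, is what yields the more balanced set noted above.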
In this project we have developed an approach for identifying radiology reports which indicate
appendicitis. The approach is based on recognizing direct assertions of appendicitis and indirect
evidence such as indications of inflammation near the appendix. These indications are detected
through the use of patterns which capture the syntactic dependency structure of the text.
Various linguistic phenomena such as negation and conjunction are handled. Each report is
automatically categorized as indicative or non-indicative of appendicitis by combining all
relevant statements found within the reports. Direct indications of appendicitis are given more
influence in the decision and indications which are general or specific to anatomical structures
other than the appendix are given less influence. Our evaluation shows that our approach can
identify reports consistent with appendicitis with a precision of 84%, a recall of 87%, and an F1
score of 85%. Error analysis has revealed promising directions for relatively easy improvement,
including section identification and continued input from the physicians.
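The weighted combination of statement-level evidence described above might be sketched roughly as follows. The statement kinds, weights, and threshold here are illustrative assumptions, not the values used in the project, and the upstream dependency-pattern matcher is assumed to have already produced the statements.

```python
# Illustrative weights: direct assertions of appendicitis count most,
# indirect evidence (inflammation near the appendix) less, and findings
# that are general or specific to other anatomy least.
WEIGHTS = {
    "direct": 3.0,       # e.g. "acute appendicitis"
    "indirect": 1.0,     # e.g. inflammation near the appendix
    "nonspecific": 0.5,  # general or other-anatomy findings
}

def classify_report(statements, threshold=0.0):
    """statements: list of (kind, negated) pairs extracted upstream by
    the dependency-pattern matcher. Negated statements count against the
    diagnosis; the signed sum decides the report-level label."""
    score = 0.0
    for kind, negated in statements:
        weight = WEIGHTS[kind]
        score += -weight if negated else weight
    return "indicative" if score > threshold else "non-indicative"
```

A report with one direct assertion and one negated nonspecific finding would still score positive, while a report whose direct assertion is negated (e.g. "no evidence of appendicitis") would not.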
The automatic identification of actionable findings in radiology reports can lead to
improvements in patient outcomes and quality of care. One application could be an interpretive
software layer which could alert the referring clinician about radiology reports containing
language associated with a given disease process. A second application would enable the
radiologist to verify that the language they used led to the intended outcome.
The study cohort will include any patient previously seen at a UT Southwestern hospital or clinic with a diagnosis of appendicitis or another disease of the appendix, based on the following set of ICD9 codes entered into the Epic EHR system.
ICD9 540 – 543.99 Appendicitis
ICD9 540.0 Acute appendicitis with generalized peritonitis
ICD9 540.1 Acute appendicitis with peritoneal abscess
ICD9 540.9 Acute appendicitis without mention of peritonitis
ICD9 541 Appendicitis, unqualified
ICD9 542 Other appendicitis
ICD9 543 Other diseases of the appendix
ICD9 543.0 Hyperplasia of appendix (lymphoid)
ICD9 543.9 Other and unspecified diseases of the appendix
ICD9 789.0 – 789.9 Unspecified abdominal pain
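As a sketch, the ICD9-based cohort selection above could be implemented as a numeric range check over each patient's diagnosis codes. The helper names and the treatment of codes as decimal numbers are assumptions for illustration; they work for the purely numeric codes in this cohort definition but not for ICD9 V/E codes.

```python
def icd9_in_range(code, lo, hi):
    """Numeric range check for an ICD9 code given as a string,
    e.g. icd9_in_range('540.9', 540, 543.99)."""
    try:
        value = float(code)
    except ValueError:
        return False  # non-numeric (V/E) codes fall outside this cohort
    return lo <= value <= hi

def in_cohort(codes):
    """True if any of a patient's ICD9 codes falls in the appendicitis
    range (540 - 543.99) or the abdominal-pain range (789.0 - 789.9)."""
    return any(icd9_in_range(c, 540, 543.99) or icd9_in_range(c, 789.0, 789.9)
               for c in codes)
```

In practice this filter would run against diagnosis codes exported from the Epic EHR system, with the 789.0 range enabled only for the expanded Year 2 cohort.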