Automatic disease annotation from radiology reports using artificial intelligence implemented by a recurrent neural network

Changhwan Lee, Yeesuk Kim, Young Soo Kim, Jongseong Jang

Research output: Contribution to journal › Article › Research › peer-review

Abstract

OBJECTIVE. Radiology reports are rich resources for biomedical researchers. Before radiology reports can be used, however, experts must manually review them to identify the relevant categories. Automatically categorizing electronic medical record (EMR) text with key annotations is difficult because the text is in a free-text format. To address these problems, we developed an automated system for disease annotation. MATERIALS AND METHODS. Reports of musculoskeletal radiography examinations performed from January 1, 2016, through December 31, 2016, were exported from the database of Hanyang University Medical Center. After sentences not written in English and sentences containing typos were excluded, 3032 sentences were included. As a preliminary study, we built a system that uses a recurrent neural network (RNN) to automatically identify fracture and nonfracture cases. We trained and tested the system on reports classified by orthopedic surgeons. We evaluated the system across different numbers of layers in two ways: by the word error rate of the output sentences and by performance as a binary classifier using standard evaluation metrics, including accuracy, precision, recall, and F1 score. RESULTS. The word error rate, computed using the Levenshtein distance, was lowest for the three-layer model at 1.03%. The three-layer model also showed the best overall performance, with the highest precision (0.967), recall (0.967), accuracy (0.982), and F1 score (0.967). CONCLUSION. Our results indicate that the RNN-based system can classify important findings in radiology reports with a high F1 score. We expect that our system can be used for cohort construction, such as in retrospective studies, because it can efficiently analyze large amounts of data.
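For readers who want to see how the two evaluation views in the abstract are typically computed, a minimal sketch follows, assuming a word-level Levenshtein distance for the word error rate and the usual confusion-matrix definitions for accuracy, precision, recall, and F1. This is not the authors' implementation; the function names and example inputs are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code) of the two evaluation approaches the
# abstract describes: word error rate via Levenshtein distance, and standard
# binary-classification metrics for fracture (1) vs. nonfracture (0) labels.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from gold and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(y_true) if y_true else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}


if __name__ == "__main__":
    # Hypothetical example sentences and labels, for illustration only.
    print(word_error_rate("no acute fracture is seen", "no acute fractures is seen"))
    print(binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
```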

Original language: English
Pages (from-to): 734-740
Number of pages: 7
Journal: American Journal of Roentgenology
Volume: 212
Issue number: 4
DOIs: 10.2214/AJR.18.19869
State: Published - 2019 Apr 1

Fingerprint

Artificial Intelligence
Radiology
Electronic Health Records
Radiography
Orthopedics
Retrospective Studies
Research Personnel
Databases

Keywords

  • Automatic annotation
  • Deep learning
  • Natural language processing
  • Radiology reports
  • Recurrent neural network

Cite this

@article{7a41ab0c7437479fbaa629b9436a59bd,
title = "Automatic disease annotation from radiology reports using artificial intelligence implemented by a recurrent neural network",
abstract = "OBJECTIVE. Radiology reports are rich resources for biomedical researchers. Before utilization of radiology reports, experts must manually review these reports to identify the categories. In fact, automatically categorizing electronic medical record (EMR) text with key annotation is difficult because it has a free-text format. To address these problems, we developed an automated system for disease annotation. MATERIALS AND METHODS. Reports of musculoskeletal radiography examinations performed from January 1, 2016, through December 31, 2016, were exported from the database of Hanyang University Medical Center. After sentences not written in English and sentences containing typos were excluded, 3032 sentences were included. We built a system that uses a recurrent neural network (RNN) to automatically identify fracture and nonfracture cases as a preliminary study. We trained and tested the system using orthopedic surgeon–classified reports. We evaluated the system for the number of layers in the following two ways: the word error rate of the output sentences and performance as a binary classifier using standard evaluation metrics including accuracy, precision, recall, and F1 score. RESULTS. The word error rate using Levenshtein distance showed the best performance in the three-layer model at 1.03{\%}. The three-layer model also showed the highest overall performance with the highest precision (0.967), recall (0.967), accuracy (0.982), and F1 score (0.967). CONCLUSION. Our results indicate that the RNN-based system has the ability to classify important findings in radiology reports with a high F1 score. We expect that our system can be used in cohort construction such as for retrospective studies because it is efficient for analyzing a large amount of data.",
keywords = "Automatic annotation, Deep learning, Natural language processing, Radiology reports, Recurrent neural network",
author = "Changhwan Lee and Yeesuk Kim and Kim, {Young Soo} and Jongseong Jang",
year = "2019",
month = "4",
day = "1",
doi = "10.2214/AJR.18.19869",
language = "English",
volume = "212",
pages = "734--740",
journal = "American Journal of Roentgenology",
issn = "0361-803X",
number = "4",

}

