GLORIA

GEOMAR Library Ocean Research Information Access

Export
  • 1
    In: Quantitative Imaging in Medicine and Surgery, AME Publishing Company, Vol. 11, No. 6 (2021-6), p. 2486-2498
    Type of Medium: Online Resource
    ISSN: 2223-4292, 2223-4306
    Language: Unknown
    Publisher: AME Publishing Company
    Publication Date: 2021
    ZDB-ID: 2653586-5
  • 2
    In: Journal of Clinical Medicine, MDPI AG, Vol. 10, No. 1 (2020-12-28), p. 84-
    Abstract: (1) Background: Time-consuming SARS-CoV-2 RT-PCR suffers from limited sensitivity in early infection stages whereas fast available chest CT can already raise COVID-19 suspicion. Nevertheless, radiologists’ performance to differentiate COVID-19, especially from influenza pneumonia, is not sufficiently characterized. (2) Methods: A total of 201 pneumonia CTs were identified and divided into subgroups based on RT-PCR: 78 COVID-19 CTs, 65 influenza CTs and 62 Non-COVID-19-Non-influenza (NCNI) CTs. Three radiology experts (blinded from RT-PCR results) raised pathogen-specific suspicion (separately for COVID-19, influenza, bacterial pneumonia and fungal pneumonia) according to the following reading scores: 0—not typical/1—possible/2—highly suspected. Diagnostic performances were calculated with RT-PCR as a reference standard. Dependencies of radiologists’ pathogen suspicion scores were characterized by Pearson’s Chi2 Test for Independence. (3) Results: Depending on whether the intermediate reading score 1 was considered as positive or negative, radiologists correctly classified 83–85% (vs. NCNI)/79–82% (vs. influenza) of COVID-19 cases (sensitivity up to 94%). Contrarily, radiologists correctly classified only 52–56% (vs. NCNI)/50–60% (vs. COVID-19) of influenza cases. The COVID-19 scoring was more specific than the influenza scoring compared with suspected bacterial or fungal infection. (4) Conclusions: High-accuracy COVID-19 detection by CT might expedite patient management even during the upcoming influenza season.
    Type of Medium: Online Resource
    ISSN: 2077-0383
    Language: English
    Publisher: MDPI AG
    Publication Date: 2020
    ZDB-ID: 2662592-1
  • 3
    In: Scientific Reports, Springer Science and Business Media LLC, Vol. 12, No. 1 (2022-07-27)
    Abstract: Artificial intelligence (AI) algorithms evaluating [supine] chest radiographs ([S]CXRs) have remarkably increased in number recently. Since training and validation are often performed on subsets of the same overall dataset, external validation is mandatory to reproduce results and reveal potential training errors. We applied a multicohort benchmarking to the publicly accessible (S)CXR analyzing AI algorithm CheXNet, comprising three clinically relevant study cohorts which differ in patient positioning ([S]CXRs), the applied reference standards (CT-/[S]CXR-based) and the possibility to also compare algorithm classification with different medical experts’ reading performance. The study cohorts include [1] a cohort, characterized by 563 CXRs acquired in the emergency unit that were evaluated by 9 readers (radiologists and non-radiologists) in terms of 4 common pathologies, [2] a collection of 6,248 SCXRs annotated by radiologists in terms of pneumothorax presence, its size and presence of inserted thoracic tube material which allowed for subgroup and confounding bias analysis and [3] a cohort consisting of 166 patients with SCXRs that were evaluated by radiologists for underlying causes of basal lung opacities, all of those cases having been correlated to a timely acquired computed tomography scan (SCXR and CT within < 90 min). CheXNet non-significantly exceeded the radiology resident (RR) consensus in the detection of suspicious lung nodules (cohort [1], AUC AI/RR: 0.851/0.839, p = 0.793) and the radiological readers in the detection of basal pneumonia (cohort [3], AUC AI/reader consensus: 0.825/0.782, p = 0.390) and basal pleural effusion (cohort [3], AUC AI/reader consensus: 0.762/0.710, p = 0.336) in SCXR, partly with AUC values higher than originally published (“Nodule”: 0.780, “Infiltration”: 0.735, “Effusion”: 0.864). The classifier “Infiltration” turned out to be very dependent on patient positioning (best in CXR, worst in SCXR). The pneumothorax SCXR cohort [2] revealed poor algorithm performance in CXRs without inserted thoracic material and in the detection of small pneumothoraces, which can be explained by a known systematic confounding error in the algorithm training process. The benefit of clinically relevant external validation is demonstrated by the differences in algorithm performance as compared to the original publication. Our multi-cohort benchmarking finally enables the consideration of confounders, different reference standards and patient positioning as well as the AI performance comparison with differentially qualified medical readers.
    Type of Medium: Online Resource
    ISSN: 2045-2322
    Language: English
    Publisher: Springer Science and Business Media LLC
    Publication Date: 2022
    ZDB-ID: 2615211-3
  • 4
    In: Critical Care Medicine, Ovid Technologies (Wolters Kluwer Health), Vol. 48, No. 7 (2020-07), p. e574-e583
    Abstract: Interpretation of lung opacities in ICU supine chest radiographs remains challenging. We evaluated a prototype artificial intelligence algorithm to classify basal lung opacities according to underlying pathologies. Design: Retrospective study. The deep neural network was trained on two publicly available datasets including 297,541 images of 86,876 patients. Patients: One hundred sixty-six patients received both supine chest radiograph and CT scans (reference standard) within 90 minutes without any intervention in between. Measurements and Main Results: Algorithm accuracy was referenced to board-certified radiologists who evaluated supine chest radiographs according to side-separate reading scores for pneumonia and effusion (0 = absent, 1 = possible, and 2 = highly suspected). Radiologists were blinded to the supine chest radiograph findings during CT interpretation. Performances of radiologists and the artificial intelligence algorithm were quantified by receiver-operating characteristic curve analysis. Diagnostic metrics (sensitivity, specificity, positive predictive value, negative predictive value, and accuracy) were calculated based on different receiver-operating characteristic operating points. Regarding pneumonia detection, radiologists achieved a maximum diagnostic accuracy of up to 0.87 (95% CI, 0.78–0.93) when considering only the supine chest radiograph reading score 2 as positive for pneumonia. Radiologist’s maximum sensitivity up to 0.87 (95% CI, 0.76–0.94) was achieved by additionally rating the supine chest radiograph reading score 1 as positive for pneumonia and taking previous examinations into account. Radiologic assessment essentially achieved nonsignificantly higher results compared with the artificial intelligence algorithm: artificial intelligence-area under the receiver-operating characteristic curve of 0.737 (0.659–0.815) versus radiologist’s area under the receiver-operating characteristic curve of 0.779 (0.723–0.836), diagnostic metrics of receiver-operating characteristic operating points did not significantly differ. Regarding the detection of pleural effusions, there was no significant performance difference between radiologist’s and artificial intelligence algorithm: artificial intelligence-area under the receiver-operating characteristic curve of 0.740 (0.662–0.817) versus radiologist’s area under the receiver-operating characteristic curve of 0.698 (0.646–0.749) with similar diagnostic metrics for receiver-operating characteristic operating points. Conclusions: Considering the minor level of performance differences between the algorithm and radiologists, we regard artificial intelligence as a promising clinical decision support tool for supine chest radiograph examinations in the clinical routine with high potential to reduce the number of missed findings in an artificial intelligence–assisted reading setting.
    Type of Medium: Online Resource
    ISSN: 0090-3493
    Language: English
    Publisher: Ovid Technologies (Wolters Kluwer Health)
    Publication Date: 2020
    ZDB-ID: 2034247-0
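Records 2 and 4 above share the same evaluation pattern: an ordinal reading score (0 = not typical/absent, 1 = possible, 2 = highly suspected) is dichotomized at different cutoffs and compared against a reference standard (RT-PCR or CT) to obtain sensitivity, specificity, positive and negative predictive value, and accuracy. Below is a minimal sketch of that calculation; the function name and all data are hypothetical and not taken from the cited studies.

```python
# Minimal sketch (hypothetical data): dichotomize a 0/1/2 reading score at a
# cutoff and compute the diagnostic metrics named in records 2 and 4.

def diagnostic_metrics(scores, reference, positive_cutoff):
    """Treat reading scores >= positive_cutoff as test-positive and compare
    them against a binary reference standard (1 = disease present, 0 = absent)."""
    tp = fp = tn = fn = 0
    for score, truth in zip(scores, reference):
        predicted = score >= positive_cutoff
        if predicted and truth:
            tp += 1
        elif predicted and not truth:
            fp += 1
        elif not predicted and truth:
            fn += 1
        else:
            tn += 1
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else None,
        "specificity": tn / (tn + fp) if tn + fp else None,
        "ppv": tp / (tp + fp) if tp + fp else None,
        "npv": tn / (tn + fn) if tn + fn else None,
        "accuracy": (tp + tn) / len(scores),
    }

# Hypothetical reading scores and reference standard (e.g. RT-PCR or CT)
scores = [2, 1, 0, 2, 1, 0, 2, 0]
reference = [1, 1, 0, 1, 0, 0, 1, 0]

# Score 1 counted as positive (cutoff >= 1) vs. counted as negative (cutoff >= 2),
# mirroring the two readings of the intermediate score described in record 2.
print(diagnostic_metrics(scores, reference, positive_cutoff=1))
print(diagnostic_metrics(scores, reference, positive_cutoff=2))
```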
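Records 3 and 4 additionally quantify reader and algorithm performance by receiver-operating characteristic (ROC) curve analysis and report AUC values and operating points. The sketch below shows one way such numbers could be derived, assuming scikit-learn as tooling (the cited studies do not name their software); labels and scores are hypothetical.

```python
# Illustrative only: ROC/AUC analysis on hypothetical data.
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]             # reference standard (CT/RT-PCR)
ai_prob = [0.9, 0.7, 0.4, 0.8, 0.3, 0.2, 0.6, 0.5, 0.85, 0.1]  # algorithm output
rad_score = [2, 1, 0, 2, 1, 0, 1, 1, 2, 0]          # radiologist reading score

# Areas under the ROC curve for the continuous algorithm output and the ordinal score
print("AI AUC:         ", roc_auc_score(y_true, ai_prob))
print("Radiologist AUC:", roc_auc_score(y_true, rad_score))

# Operating points along the radiologist ROC curve (one per score threshold)
fpr, tpr, thresholds = roc_curve(y_true, rad_score)
for f, t, thr in zip(fpr, tpr, thresholds):
    print(f"score >= {thr}: sensitivity {t:.2f}, specificity {1 - f:.2f}")
```

Diagnostic metrics at a chosen operating point can then be read off exactly as in the previous sketch.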