GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    In: Applied Sciences, MDPI AG, Vol. 11, No. 16 (2021-08-15), p. 7488
    Abstract: We seek the development and evaluation of a fast, accurate, and consistent method for general-purpose segmentation, based on interactive machine learning (IML). To validate our method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground truth annotations. Utilizing very brief user training annotations and the adaptive geodesic distance transform, an ensemble of SVMs is trained, providing a patient-specific model applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found significant (p < 0.001) overlap difference for spleen (DiceIML/DiceManual = 0.91/0.87), breast tumors (DiceIML/DiceManual = 0.84/0.82), and lung nodules (DiceIML/DiceManual = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (DiceIML/DiceManual = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (DiceIML/DiceManual = 0.91/0.87), breast (DiceIML/DiceManual = 0.86/0.81), lung (DiceIML/DiceManual = 0.85/0.89), the non-enhancing (DiceIML/DiceManual = 0.79/0.67) and the enhancing (DiceIML/DiceManual = 0.79/0.84) brain tumor sub-regions, which, in aggregation, favored our method. Quantitative evaluation for speed, spatial overlap, and consistency reveals the benefits of our proposed method when compared with manual annotation, for several clinically relevant problems. We publicly release our implementation through CaPTk (Cancer Imaging Phenomics Toolkit) and as an MITK plugin.
    Type of Medium: Online Resource
    ISSN: 2076-3417
    Language: English
    Publisher: MDPI AG
    Publication Date: 2021
    detail.hit.zdb_id: 2704225-X
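The spatial-overlap comparisons above are stated in terms of the Dice similarity coefficient (DSC). A minimal sketch of the metric, with toy masks standing in for real segmentations:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy masks standing in for an IML and a manual segmentation
iml_mask = np.array([[1, 1, 0],
                     [0, 1, 0]])
manual_mask = np.array([[1, 0, 0],
                        [0, 1, 1]])
print(round(dice(iml_mask, manual_mask), 2))  # 2*2/(3+3) -> 0.67
```

A DSC of 0.91 vs. 0.87, as reported for spleen, means the IML contours overlapped the reference more closely than the manual ones did.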
  • 2
    In: Neuro-Oncology Advances, Oxford University Press (OUP), Vol. 2, No. Supplement_4 (2020-12-31), p. iv22-iv34
    Abstract: Gliomas represent a biologically heterogeneous group of primary brain tumors with uncontrolled cellular proliferation and diffuse infiltration that renders them almost incurable, thereby leading to a grim prognosis. Recent comprehensive genomic profiling has greatly elucidated the molecular hallmarks of gliomas, including the mutations in isocitrate dehydrogenase 1 and 2 (IDH1 and IDH2), loss of chromosomes 1p and 19q (1p/19q), and epidermal growth factor receptor variant III (EGFRvIII). Detection of these molecular alterations is based on ex vivo analysis of a surgically resected tissue specimen that sometimes is not adequate for testing and/or does not capture the spatial tumor heterogeneity of the neoplasm. Methods: We developed a method for noninvasive detection of radiogenomic markers of IDH both in lower-grade gliomas (WHO grade II and III tumors) and glioblastoma (WHO grade IV), 1p/19q in IDH-mutant lower-grade gliomas, and EGFRvIII in glioblastoma. Preoperative MRIs of 473 glioma patients from 3 of the studies participating in the ReSPOND consortium (collection I: Hospital of the University of Pennsylvania [HUP: n = 248], collection II: The Cancer Imaging Archive [TCIA: n = 192], and collection III: Ohio Brain Tumor Study [OBTS: n = 33]) were collected. Neuro-Cancer Imaging Phenomics Toolkit (neuro-CaPTk), a modular platform available for cancer imaging analytics and machine learning, was leveraged to extract histogram, shape, anatomical, and texture features from delineated tumor subregions and to integrate these features using support vector machines to generate models predictive of IDH, 1p/19q, and EGFRvIII. The models were validated using 3 configurations: (1) 70–30% training–testing splits or 10-fold cross-validation within individual collections, (2) 70–30% training–testing splits within merged collections, and (3) training on one collection and testing on another. Results: These models achieved a classification accuracy of 86.74% (HUP), 85.45% (TCIA), and 75.15% (TCIA) in identifying EGFRvIII, IDH, and 1p/19q, respectively, in configuration 1. The model, when applied to the combined data in configuration 2, yielded a classification success rate of 82.50% in predicting IDH mutation (HUP + TCIA + OBTS). In configuration 3, the model trained on the TCIA dataset yielded a classification accuracy of 84.88% in predicting IDH in the HUP dataset. Conclusions: Using machine learning algorithms, high accuracy was achieved in the prediction of IDH, 1p/19q, and EGFRvIII mutations. Neuro-CaPTk encompasses all the pipelines required to replicate these analyses in multi-institutional settings and could also be used for other radio(geno)mic analyses.
    Type of Medium: Online Resource
    ISSN: 2632-2498
    Language: English
    Publisher: Oxford University Press (OUP)
    Publication Date: 2020
    detail.hit.zdb_id: 3009682-0
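As an illustration of the histogram features mentioned above, here is a minimal first-order-statistics sketch; the particular feature names and selection are illustrative assumptions, not neuro-CaPTk's actual feature set or API:

```python
import numpy as np

def histogram_features(intensities: np.ndarray) -> dict:
    """First-order (histogram) statistics of voxel intensities in a tumor subregion."""
    mu, sigma = intensities.mean(), intensities.std()
    return {
        "mean": float(mu),
        "std": float(sigma),
        "skewness": float(((intensities - mu) ** 3).mean() / (sigma ** 3 + 1e-12)),
        "p10": float(np.percentile(intensities, 10)),
        "p90": float(np.percentile(intensities, 90)),
    }

# Toy intensity values for a delineated subregion
region = np.array([10.0, 12.0, 11.0, 30.0, 12.0, 11.0])
feats = histogram_features(region)
```

Feature vectors like this, computed per subregion and per MRI modality, are what the support vector machines would consume.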
  • 3
    In: Journal of Clinical Oncology, American Society of Clinical Oncology (ASCO), Vol. 40, No. 16_suppl (2022-06-01), p. e13538-e13538
    Abstract: e13538 Background: Breast density is considered a well-established breast cancer risk factor. As quasi-3D, digital breast tomosynthesis (DBT) becomes increasingly utilized for screening, there is an opportunity to routinely estimate volumetric breast density (VBD). However, current methods extrapolate VBD from 2D images acquired with DBT and/or depend on the existence of raw DBT data, which is rarely archived due to cost and storage constraints. Using a racially diverse screening cohort, this study evaluates the potential of deep learning for VBD assessment based solely on 3D reconstructed, “for presentation” DBT images. Methods: We retrospectively analyzed 1,080 negative DBT screening exams obtained between 2011 and 2016 from the Hospital of the University of Pennsylvania (racial makeup, 41.2% White, 54.2% Black, 4.6% Other; mean age ± SD, 57 ± 11 years; mean BMI ± SD, 28.7 ± 7.1 kg/m²), for which both 2D raw and 3D reconstructed DBT images (Selenia Dimensions, Hologic Inc) were available. Corresponding 3D reference-standard tissue segmentations were generated from previously validated software that uses both 3D reconstructed slices and raw 2D DBT data to provide VBD metrics, shown to be strongly correlated with VBD measures from MRI image volumes. We based our deep learning algorithm on the U-Net architecture within the open-source Generally Nuanced Deep Learning Framework (GaNDLF) and created a 3-label image segmentation task (background, dense tissue, and fatty tissue). Our dataset was randomly split into training (70%), validation (15%), and test (15%) sets. We report on the performance of our deep learning algorithm against corresponding reference-standard segmentations for a cranio-caudal (CC) view-only subset. We also stratify our results by the two main racial groups (White and Black). Our evaluation measure was the weighted Dice score (DSC), with 0 signifying no overlap and 1 signifying perfect overlap, overall and separately for each label. Results: Our deep learning algorithm achieved an overall DSC of 0.682 (STD = 0.136). It accurately segmented the three labels of background, fatty tissue, and dense tissue, with DSC scores of 0.995, 0.884, and 0.617, respectively. DSCs for White and Black women were 0.688 (STD = 0.127) and 0.680 (STD = 0.146), respectively. Conclusions: Our preliminary analysis suggests that deep learning shows promise in the estimation of VBD using 3D DBT reconstructed, “for presentation” CC view images and does not demonstrate bias among racial groups. Future work involving optimization of performance in other breast views as well as transfer learning based on ground truth masks by clinical radiologists could further enhance this method. In view of rapid clinical conversion to DBT screening, such a tool has the potential to enable large retrospective epidemiological and personalized risk assessment studies of breast density with DBT.
    Type of Medium: Online Resource
    ISSN: 0732-183X, 1527-7755
    Language: English
    Publisher: American Society of Clinical Oncology (ASCO)
    Publication Date: 2022
    detail.hit.zdb_id: 2005181-5
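The weighted Dice score used above can be sketched as a per-label Dice averaged with label-frequency weights. Weighting by each label's voxel count in the reference segmentation is an assumption about how the weighting was done, and the arrays are toy data:

```python
import numpy as np

def weighted_dice(pred: np.ndarray, truth: np.ndarray, labels) -> float:
    """Per-label Dice, averaged with weights given by each label's
    voxel count in the reference segmentation."""
    scores, weights = [], []
    for lab in labels:
        p, t = (pred == lab), (truth == lab)
        denom = p.sum() + t.sum()
        scores.append(1.0 if denom == 0 else 2.0 * (p & t).sum() / denom)
        weights.append(t.sum())
    return float(np.average(scores, weights=weights))

# Toy 1D "images" with labels 0 = background, 1 = dense, 2 = fatty
pred = np.array([0, 0, 1, 2])
truth = np.array([0, 1, 1, 2])
print(weighted_dice(pred, truth, [0, 1, 2]))  # (1*2/3 + 2*2/3 + 1*1) / 4 = 0.75
```

Because background dominates a mammographic image and is segmented almost perfectly (DSC 0.995), such a weighted average sits well above the dense-tissue score alone.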
  • 4
    In: Cancer Research, American Association for Cancer Research (AACR), Vol. 82, No. 12_Supplement (2022-06-15), p. 1929-1929
    Abstract: Background: It has been widely established that breast density is an independent breast cancer risk factor. With the increasing utilization of digital breast tomosynthesis (DBT) in breast cancer screening, there is an opportunity to estimate volumetric breast density (VBD) routinely. However, currently available methods extrapolate VBD from 2D images acquired with DBT and/or depend on the existence of raw DBT data, which is rarely archived by clinical centers due to cost and storage constraints. This study aims to harness deep learning to develop a computational tool for VBD assessment based solely on 3D reconstructed, “for presentation” DBT images. Methods: We retrospectively analyzed 1,080 negative DBT screening exams (09/20/2011 - 11/25/2016) from the Hospital of the University of Pennsylvania (mean age ± SD, 57 ± 11 years; mean BMI ± SD, 28.7 ± 7.1 kg/m²; racial makeup, 41.2% White, 54.2% Black, 4.6% Other), for which both 3D reconstructed and 2D raw DBT images (Selenia Dimensions, Hologic Inc) were available. All available standard views (left and right mediolateral-oblique and cranio-caudal views) were included for each exam, leading to 7,850 DBT views. Corresponding 3D reference-standard tissue segmentations were generated from previously validated software that uses both 3D reconstructed slices and raw 2D DBT data to provide VBD metrics, shown to be strongly correlated with VBD measures from MRI image volumes. We based our deep learning algorithm on the U-Net architecture within the open-source Generally Nuanced Deep Learning Framework (GaNDLF) and created a 3-label image segmentation task (background, dense tissue, and fatty tissue). Our dataset was randomly split into training (70%), validation (15%), and test (15%) sets, while ensuring that all views of the same DBT exam were assigned to the same set. The performance of our deep learning algorithm against the corresponding reference-standard segmentations was measured in terms of Dice scores (DSC), with 0 signifying no overlap and 1 signifying perfect overlap, overall as well as separately for each label. Results: After training was complete, our deep learning algorithm achieved a DSC of 0.78 on both the validation and the test set. Our method accurately segmented background from breast tissue (DSC = 0.94) and demonstrated moderate performance in segmenting dense tissue (DSC = 0.49) and high performance in segmenting fatty tissue (DSC = 0.89). Conclusion: Our preliminary analysis suggests that deep learning shows promise in the estimation of VBD using 3D DBT reconstructed, “for presentation” images. Future work involving transfer learning based on ground truth masks by clinical radiologists could further enhance this method’s performance. In view of rapid clinical conversion to DBT screening, such a tool has the potential to enable large retrospective epidemiologic and personalized risk assessment studies of breast density with DBT. Citation Format: Vinayak S. Ahluwalia, Walter Mankowski, Sarthak Pati, Spyridon Bakas, Ari Brooks, Celine M. Vachon, Emily F. Conant, Aimilia Gastounioti, Despina Kontos. Deep-learning-enabled volumetric breast density estimation with digital breast tomosynthesis [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1929.
    Type of Medium: Online Resource
    ISSN: 1538-7445
    Language: English
    Publisher: American Association for Cancer Research (AACR)
    Publication Date: 2022
    detail.hit.zdb_id: 2036785-5
    detail.hit.zdb_id: 1432-1
    detail.hit.zdb_id: 410466-3
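The exam-level split described above (all views of one DBT exam assigned to the same set) can be sketched as a group-aware partition; the function name and exam IDs below are illustrative toy data, not the study's code:

```python
import random

def split_by_exam(view_to_exam: dict, seed: int = 0,
                  frac: tuple = (0.70, 0.15, 0.15)) -> dict:
    """Assign each view to train/val/test so that every view of a given
    exam lands in the same split (70/15/15 by exam count)."""
    exams = sorted(set(view_to_exam.values()))
    random.Random(seed).shuffle(exams)
    n_train = int(frac[0] * len(exams))
    n_val = int(frac[1] * len(exams))
    train = set(exams[:n_train])
    val = set(exams[n_train:n_train + n_val])
    return {view: ("train" if exam in train else "val" if exam in val else "test")
            for view, exam in view_to_exam.items()}

# Toy data: 10 exams, 4 standard views each (LCC, RCC, LMLO, RMLO)
views = {f"exam{i}_{v}": f"exam{i}"
         for i in range(10) for v in ("LCC", "RCC", "LMLO", "RMLO")}
assignment = split_by_exam(views)
```

Splitting by exam rather than by view prevents near-duplicate images of the same breast from leaking between training and test sets.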
  • 5
    In: Communications Engineering, Springer Science and Business Media LLC, Vol. 2, No. 1 (2023-05-16)
    Abstract: Deep Learning (DL) has the potential to optimize machine learning in both the scientific and clinical communities. However, greater expertise is required to develop DL algorithms, and the variability of implementations hinders their reproducibility, translation, and deployment. Here we present the community-driven Generally Nuanced Deep Learning Framework (GaNDLF), with the goal of lowering these barriers. GaNDLF makes the mechanism of DL development, training, and inference more stable, reproducible, interpretable, and scalable, without requiring an extensive technical background. GaNDLF aims to provide an end-to-end solution for all DL-related tasks in computational precision medicine. We demonstrate the ability of GaNDLF to analyze both radiology and histology images, with built-in support for k-fold cross-validation, data augmentation, multiple modalities and output classes. Our quantitative performance evaluation on numerous use cases, anatomies, and computational tasks supports GaNDLF as a robust application framework for deployment in clinical workflows.
    Type of Medium: Online Resource
    ISSN: 2731-3395
    Language: English
    Publisher: Springer Science and Business Media LLC
    Publication Date: 2023
    detail.hit.zdb_id: 3121995-0
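The k-fold cross-validation that GaNDLF has built-in support for can be illustrated generically. This index-generation sketch is only an illustration of the scheme, not GaNDLF's implementation:

```python
def k_fold_indices(n_samples: int, k: int):
    """Yield (train, test) index lists for k roughly equal, disjoint folds;
    every sample appears in exactly one test fold."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

folds = list(k_fold_indices(10, 5))  # 5 folds of 2 test samples each
```

Training k models, each held out on a different fold, gives a performance estimate that uses every sample for both training and testing without overlap.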
  • 6
    In: Scientific Data, Springer Science and Business Media LLC, Vol. 9, No. 1 (2022-07-23)
    Abstract: Breast cancer is one of the most pervasive forms of cancer and its inherent intra- and inter-tumor heterogeneity contributes towards its poor prognosis. Multiple studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of having consistency in: a) data quality, b) quality of expert annotation of pathology, and c) availability of baseline results from computational algorithms. To address these limitations, here we propose the enhancement of the I-SPY1 data collection, with uniformly curated data, tumor annotations, and quantitative imaging features. Specifically, the proposed dataset includes a) uniformly processed scans that are harmonized to match intensity and spatial characteristics, facilitating immediate use in computational studies, b) computationally-generated and manually-revised expert annotations of tumor regions, as well as c) a comprehensive set of quantitative imaging (also known as radiomic) features corresponding to the tumor regions. This collection describes our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.
    Type of Medium: Online Resource
    ISSN: 2052-4463
    Language: English
    Publisher: Springer Science and Business Media LLC
    Publication Date: 2022
    detail.hit.zdb_id: 2775191-0
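One common way to harmonize intensity characteristics across scans, as the curation above describes, is z-score normalization. The actual I-SPY1 processing pipeline is not specified in the abstract, so this is only an illustrative sketch of the general technique:

```python
import numpy as np

def zscore_normalize(volume: np.ndarray, mask: np.ndarray = None) -> np.ndarray:
    """Shift and scale intensities to zero mean and unit variance, optionally
    computing the statistics only over a region of interest."""
    vals = volume[mask] if mask is not None else volume
    return (volume - vals.mean()) / (vals.std() + 1e-12)

# Toy "scan": after normalization, mean ~0 and std ~1
scan = np.array([1.0, 2.0, 3.0])
normalized = zscore_normalize(scan)
```

Harmonizing intensity distributions this way makes radiomic features comparable across scanners and acquisition protocols.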
  • 7
    In: JCO Clinical Cancer Informatics, American Society of Clinical Oncology (ASCO), No. 4 (2020-11), p. 234-244
    Abstract: To construct a multi-institutional radiomic model that supports upfront prediction of progression-free survival (PFS) and recurrence pattern (RP) in patients diagnosed with glioblastoma multiforme (GBM) at the time of initial diagnosis. PATIENTS AND METHODS: We retrospectively identified data for patients with newly diagnosed GBM from two institutions (institution 1, n = 65; institution 2, n = 15) who underwent gross total resection followed by standard adjuvant chemoradiation therapy, with pathologically confirmed recurrence, sufficient follow-up magnetic resonance imaging (MRI) scans to reliably determine PFS, and available presurgical multiparametric MRI (MP-MRI). The advanced software suite Cancer Imaging Phenomics Toolkit (CaPTk) was leveraged to analyze standard clinical brain MP-MRI scans. A rich set of imaging features was extracted from the MP-MRI scans acquired before the initial resection and was integrated into two distinct imaging signatures for predicting mean shorter or longer PFS and near or distant RP. The predictive signatures for PFS and RP were evaluated on the basis of different classification schemes: single-institutional analysis, multi-institutional analysis with random partitioning of the data into discovery and replication cohorts, and multi-institutional assessment with data from institution 1 as the discovery cohort and data from institution 2 as the replication cohort. RESULTS: These predictors achieved cross-validated classification performance (i.e., area under the receiver operating characteristic curve) of 0.88 (single-institution analysis) and 0.82 to 0.83 (multi-institution analysis) for prediction of PFS, and 0.88 (single-institution analysis) and 0.56 to 0.71 (multi-institution analysis) for prediction of RP. CONCLUSION: Imaging signatures of presurgical MP-MRI scans reveal relatively high predictability of time and location of GBM recurrence, subject to the patients receiving standard first-line chemoradiation therapy. Through its graphical user interface, CaPTk offers easy accessibility to advanced computational algorithms for deriving imaging signatures predictive of clinical outcome and could similarly be used for a variety of radiomic and radiogenomic analyses.
    Type of Medium: Online Resource
    ISSN: 2473-4276
    Language: English
    Publisher: American Society of Clinical Oncology (ASCO)
    Publication Date: 2020
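The classification performance reported above is the area under the ROC curve. It can be computed as the rank statistic below, shown here on toy scores rather than the study's data:

```python
def auc(scores, labels) -> float:
    """AUC as the rank statistic: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # perfect separation -> 1.0
```

An AUC of 0.88, as reported for the single-institution PFS signature, means a shorter-PFS patient outranks a longer-PFS patient 88% of the time.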