GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    In: Physics and Imaging in Radiation Oncology, Elsevier BV, Vol. 12 ( 2019-10), p. 80-86
    Type of Medium: Online Resource
    ISSN: 2405-6316
    Language: English
    Publisher: Elsevier BV
    Publication Date: 2019
    ZDB-ID: 2963795-8
  • 2
    In: Medical Physics, Wiley, Vol. 50, No. 8 ( 2023-08), p. 4854-4870
    Abstract: Dose escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. Purpose To construct and validate a model for deep‐learning‐based automatic segmentation of PCa DIL defined by Gleason score (GS) ≥ 3+4 from MR images applied to MR‐guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences. Methods Five deep‐learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients: internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter‐rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The networks comprised the multiple resolution residually connected network (MRRN), MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN‐DS), Unet, Unet++, ResUnet, fast panoptic segmentation (FPSnet), and fast panoptic segmentation with smoothed labels (FPSnet‐SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (Dataset 3). Upon acceptance for publication, the segmentation models will be made available in an open‐source GitHub repository. Results In general, MRRN‐DS segmented tumors more accurately than the other methods on the testing datasets. MRRN‐DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45, p = 0.04). FPSnet‐SL was similarly accurate to MRRN‐DS in Dataset 2 (p = 0.30), but MRRN‐DS significantly outperformed FPSnet and FPSnet‐SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN‐DS produced slightly higher agreement with an experienced radiologist than the agreement between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). Conclusions MRRN‐DS generalized to MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN‐DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    Language: English
    Publisher: Wiley
    Publication Date: 2023
    ZDB-ID: 1466421-5
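    The record above reports volumetric segmentation accuracy as the Dice similarity coefficient (DSC). Below is a minimal sketch of how the volumetric DSC is conventionally computed on binary 3D masks; the function and variable names are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def dice_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
        """Volumetric DSC = 2|A intersect B| / (|A| + |B|) for binary 3D masks."""
        pred = pred.astype(bool)
        ref = ref.astype(bool)
        denom = pred.sum() + ref.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, ref).sum() / denom
    ```

    A DSC of 1.0 indicates identical masks and 0.0 indicates no overlap.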
  • 3
    In: Medical Physics, Wiley, Vol. 46, No. 10 ( 2019-10), p. 4392-4404
    Abstract: Accurate tumor segmentation is a requirement for magnetic resonance (MR)‐based radiotherapy, but the lack of large, expert‐annotated MR datasets makes training deep learning models difficult. Therefore, a cross‐modality (MR‐CT) deep learning segmentation approach was developed that augments training data using pseudo MR images produced by transforming expert‐segmented CT images. Methods Eighty‐one T2‐weighted MRI scans from 28 patients with non‐small cell lung cancer (nine with pretreatment and weekly MRI and the remainder with pretreatment MRI scans only) were analyzed. A cross‐modality model encoding the transformation of CT to pseudo MR images resembling T2w MRI was learned as a generative adversarial deep learning network. This model was used to translate 377 expert‐segmented non‐small cell lung cancer CT scans from the Cancer Imaging Archive into pseudo MRI that served as an additional training set. The method was benchmarked against shallow learning using random forests, standard data augmentation, and three state‐of‐the‐art adversarial‐learning‐based cross‐modality data (pseudo MR) augmentation methods. Segmentation accuracy was computed using the Dice similarity coefficient (DSC), Hausdorff distance metrics, and volume ratio. Results The proposed approach produced the lowest statistical variability in the intensity distribution between pseudo and T2w MR images, measured as a Kullback–Leibler divergence of 0.069. It produced the highest segmentation accuracy, with a DSC of 0.75 ± 0.12, and the lowest Hausdorff distance, of 9.36 mm ± 6.00 mm, on the test dataset using a U‐Net structure, and produced estimations of tumor growth highly similar to an expert's (P = 0.37). Conclusions A novel deep learning MR segmentation method was developed that overcomes the limitation of learning robust models from small datasets by leveraging learned cross‐modality information, using a model that explicitly incorporates knowledge of tumors in modality translation to augment segmentation training. The results show the feasibility of the approach and its improvement over state‐of‐the‐art methods.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    Language: English
    Publisher: Wiley
    Publication Date: 2019
    ZDB-ID: 1466421-5
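    The study above quantifies how closely pseudo MR intensities match real T2w MR using the Kullback–Leibler divergence. A minimal sketch of a histogram‐based KL divergence between two sets of image intensities follows; the bin count, smoothing constant, and names are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def kl_divergence(p_vals: np.ndarray, q_vals: np.ndarray, bins: int = 128) -> float:
        """KL(P || Q) between intensity histograms of two image sets."""
        lo = min(p_vals.min(), q_vals.min())
        hi = max(p_vals.max(), q_vals.max())
        p, _ = np.histogram(p_vals, bins=bins, range=(lo, hi))
        q, _ = np.histogram(q_vals, bins=bins, range=(lo, hi))
        eps = 1e-12  # smoothing to avoid log(0) and division by zero
        p = p.astype(float) + eps
        q = q.astype(float) + eps
        p /= p.sum()
        q /= q.sum()
        return float(np.sum(p * np.log(p / q)))
    ```

    A lower value (the paper reports 0.069) means the pseudo MR intensity distribution lies closer to that of real T2w MRI.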
  • 4
    In: Physics in Medicine & Biology, IOP Publishing, Vol. 65, No. 20 ( 2020-10-07), p. 205001-
    Type of Medium: Online Resource
    ISSN: 1361-6560
    Language: Unknown
    Publisher: IOP Publishing
    Publication Date: 2020
    ZDB-ID: 1473501-5
  • 5
    In: IEEE Transactions on Medical Imaging, Institute of Electrical and Electronics Engineers (IEEE), Vol. 39, No. 12 ( 2020-12), p. 4071-4084
    Type of Medium: Online Resource
    ISSN: 0278-0062 , 1558-254X
    Language: Unknown
    Publisher: Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2020
    ZDB-ID: 2068206-2
    ZDB-ID: 622531-7
    SSG: 12
  • 6
    In: Medical Physics, Wiley, Vol. 50, No. 8 ( 2023-08), p. 4758-4774
    Abstract: Adaptive radiation treatment (ART) for locally advanced pancreatic cancer (LAPC) requires consistently accurate segmentation of the extremely mobile gastrointestinal (GI) organs at risk (OAR), including the stomach, duodenum, and large and small bowel. In addition, because sufficiently accurate and fast deformable image registration (DIR) is lacking, accumulated dose to the GI OARs is currently only approximated, further limiting the ability to adapt treatments more precisely. Purpose To develop a 3‐D Progressively refined joint Registration‐Segmentation (ProRSeg) deep network to deformably align and segment treatment‐fraction magnetic resonance images (MRIs), and to evaluate its segmentation accuracy, registration consistency, and feasibility for OAR dose accumulation. Methods ProRSeg was trained using five‐fold cross‐validation with 110 T2‐weighted MRI acquired at five treatment fractions from 10 different patients, taking care that scans from the same patient were not placed in both training and testing folds. Segmentation accuracy was measured using the Dice similarity coefficient (DSC) and the Hausdorff distance at the 95th percentile (HD95). Registration consistency was measured using the coefficient of variation (CV) in displacement of OARs. Statistical comparisons to other deep learning and iterative registration methods were done using the Kruskal‐Wallis test, followed by pair‐wise comparisons with Bonferroni correction applied for multiple testing. Ablation tests and accuracy comparisons against multiple methods were done. Finally, the applicability of ProRSeg to segment cone‐beam CT (CBCT) scans was evaluated on a publicly available dataset of 80 scans using five‐fold cross‐validation. Results ProRSeg processed 3D volumes (128 × 192 × 128) in 3 s on an NVIDIA Tesla V100 GPU. Its segmentations were significantly more accurate than those of the compared methods, achieving a DSC of 0.94 ± 0.02 for liver, 0.88 ± 0.04 for large bowel, 0.78 ± 0.03 for small bowel, and 0.82 ± 0.04 for stomach‐duodenum from MRI. ProRSeg achieved a DSC of 0.72 ± 0.01 for small bowel and 0.76 ± 0.03 for stomach‐duodenum on the public CBCT dataset. ProRSeg registrations resulted in the lowest CV in displacement (stomach‐duodenum: 0.75%, 0.73%, and 0.81%; small bowel: 0.80%, 0.80%, and 0.68%; large bowel: 0.71%, 0.81%, and 0.75%). ProRSeg‐based dose accumulation accounting for intra‐fraction (pre‐treatment to post‐treatment MRI scan) and inter‐fraction motion showed that the organ dose constraints were violated in four patients for stomach‐duodenum and in three patients for small bowel. Study limitations include the lack of independent testing and of ground‐truth phantom datasets to measure dose accumulation accuracy. Conclusions ProRSeg produced more accurate and consistent GI OAR segmentation and DIR of MRI and CBCTs than multiple compared methods. Preliminary results indicate the feasibility of OAR dose accumulation using ProRSeg.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    Language: English
    Publisher: Wiley
    Publication Date: 2023
    ZDB-ID: 1466421-5
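    Segmentation accuracy in the record above is reported with the 95th‐percentile Hausdorff distance (HD95), which is less sensitive to single outlier points than the maximum Hausdorff distance. A minimal sketch over surface point clouds follows, assuming SciPy is available; the step of extracting surface points from masks is omitted, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def hd95(surf_a: np.ndarray, surf_b: np.ndarray) -> float:
        """Symmetric 95th-percentile Hausdorff distance between two
        surfaces given as (N, 3) arrays of physical coordinates (mm)."""
        d_ab = cKDTree(surf_b).query(surf_a)[0]  # each a-point to nearest b-point
        d_ba = cKDTree(surf_a).query(surf_b)[0]  # each b-point to nearest a-point
        return float(np.percentile(np.hstack([d_ab, d_ba]), 95))
    ```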
  • 7
    In: Medical Physics, Wiley, Vol. 46, No. 10 ( 2019-10), p. 4699-4707
    Abstract: To predict the spatial and temporal trajectories of lung tumors during radiotherapy, monitored in a longitudinal magnetic resonance imaging (MRI) study, via a deep learning algorithm for facilitating adaptive radiotherapy (ART). Methods We monitored 10 lung cancer patients by acquiring weekly T2w MRI scans over a course of radiotherapy. Under an ART workflow, we developed a predictive neural network (P‐net) to predict the spatial distributions of tumors in the coming weeks utilizing images acquired earlier in the course. The three‐step P‐net consisted of a convolutional neural network to extract relevant features of the tumor and its environment, followed by a recurrent neural network constructed with gated recurrent units to analyze trajectories of tumor evolution in response to radiotherapy, and finally an attention model to weight the importance of weekly observations and produce the predictions. The performance of P‐net was measured with Dice and root mean square surface distance (RMSSD) between the algorithm‐predicted and expert‐contoured tumors under a leave‐one‐out scheme. Results Tumor shrinkage was 60% ± 27% (mean ± standard deviation) by the end of radiotherapy across nine patients. Using images from the first three weeks, P‐net predicted tumors in future weeks (4, 5, 6) with Dice values of (0.78 ± 0.22, 0.69 ± 0.24, 0.69 ± 0.26) and RMSSDs of (2.1 ± 1.1 mm, 2.3 ± 0.8 mm, 2.6 ± 1.4 mm), respectively. Conclusion The proposed deep learning algorithm can capture and predict spatial and temporal patterns of tumor regression in a longitudinal imaging study. It closely follows the clinical workflow and could facilitate the decision‐making of ART. A prospective study including more patients is warranted.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    Language: English
    Publisher: Wiley
    Publication Date: 2019
    ZDB-ID: 1466421-5
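    The P‐net study above scores predicted contours with the root mean square surface distance (RMSSD). A minimal sketch under the same surface‐point‐cloud convention as the HD95 example follows; the symmetric pooling of distances and all names are illustrative assumptions, not the paper's exact definition.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def rmssd(surf_pred: np.ndarray, surf_ref: np.ndarray) -> float:
        """Root mean square of nearest-neighbour surface distances,
        pooled symmetrically between predicted and reference surfaces."""
        d_pr = cKDTree(surf_ref).query(surf_pred)[0]
        d_rp = cKDTree(surf_pred).query(surf_ref)[0]
        d = np.hstack([d_pr, d_rp])
        return float(np.sqrt(np.mean(d ** 2)))
    ```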
  • 8
    In: Medical Physics, Wiley, Vol. 47, No. 2 ( 2020-02), p. 626-642
    Abstract: To evaluate pix2pix and CycleGAN and to assess the effects of multiple combination strategies on accuracy for patch‐based synthetic computed tomography (sCT) generation for magnetic resonance (MR)‐only treatment planning in head and neck (HN) cancer patients. Materials and methods Twenty‐three deformably registered pairs of CT and mDixon FFE MR datasets from HN cancer patients treated at our institution were retrospectively analyzed to evaluate patch‐based sCT accuracy via the pix2pix and CycleGAN models. To test the effects of overlapping sCT patches on estimations, we (a) trained the models for three orthogonal views to observe the effects of spatial context, (b) increased the effective set size by using per‐epoch data augmentation, and (c) evaluated the performance of three different approaches for combining overlapping Hounsfield unit (HU) estimations for varied patch‐overlap parameters. Twelve of the twenty‐three cases corresponded to a curated dataset previously used for atlas‐based sCT generation and were used for training with leave‐two‐out cross‐validation. Eight cases were used for independent testing and included previously unseen image features such as fused vertebrae, a small protruding bone, and tumors large enough to deform normal body contours. We analyzed the impact of MR image preprocessing, including histogram standardization and intensity clipping, on sCT generation accuracy. Effects of mDixon contrast (in‐phase vs. water) differences were tested with three additional cases. The sCT generation accuracy was evaluated using the mean absolute error (MAE) and mean error (ME) in HU between the plan CT and sCT images. Dosimetric accuracy was evaluated for all clinically relevant structures in the independent testing set, and digitally reconstructed radiographs (DRRs) were evaluated with respect to the plan CT images. Results The cross‐validated MAEs for the whole‐HN region using pix2pix and CycleGAN were 66.9 ± 7.3 and 82.3 ± 6.4 HU, respectively. On the independent testing set with additional artifacts and previously unseen image features, whole‐HN region MAEs were 94.0 ± 10.6 and 102.9 ± 14.7 HU for pix2pix and CycleGAN, respectively. For patients with different tissue contrast (water mDixon MR images), the MAEs increased to 122.1 ± 6.3 and 132.8 ± 5.5 HU for pix2pix and CycleGAN, respectively. Our results suggest that combining overlapping sCT estimations at each voxel reduced both MAE and ME compared to single‐view non‐overlapping patch results. Absolute percent mean/max dose errors were 2% or less for the PTV and all clinically relevant structures in our independent testing set, including structures with image artifacts. Quantitative DRR comparison between planning CTs and sCTs showed agreement of bony region positions to < 1 mm. Conclusions The dosimetric and MAE‐based accuracy, along with the similarity between DRRs from sCTs, indicate that pix2pix and CycleGAN are promising methods for MR‐only treatment planning for HN cancer. Our investigation of overlapping patch‐based HU estimations also indicates that combining transformation estimations of overlapping patches can reduce generation errors while providing a tool to potentially estimate the aleatoric uncertainty of the MR‐to‐CT model transformation. However, because of small patient sample sizes, further studies are required.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    Language: English
    Publisher: Wiley
    Publication Date: 2020
    ZDB-ID: 1466421-5
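    The sCT study above combines overlapping patch‐wise HU estimations at each voxel and scores the result with MAE and ME against the plan CT. A minimal sketch follows using simple mean fusion; the paper compares three combination strategies, so mean fusion, the patch/origin layout, and all names here are illustrative assumptions.

    ```python
    import numpy as np

    def fuse_patches_mean(patches, origins, volume_shape):
        """Average overlapping patch-wise HU estimates at each voxel."""
        acc = np.zeros(volume_shape, dtype=np.float64)
        cnt = np.zeros(volume_shape, dtype=np.float64)
        for patch, (z, y, x) in zip(patches, origins):
            dz, dy, dx = patch.shape
            acc[z:z + dz, y:y + dy, x:x + dx] += patch
            cnt[z:z + dz, y:y + dy, x:x + dx] += 1
        cnt[cnt == 0] = 1  # avoid division by zero where no patch covered a voxel
        return acc / cnt

    def mae_me_hu(ct, sct, mask):
        """MAE and ME (in HU) between plan CT and sCT over a boolean region mask."""
        diff = sct[mask] - ct[mask]
        return float(np.abs(diff).mean()), float(diff.mean())
    ```

    Averaging the overlapping estimates is the simplest of the possible combination rules; the abstract reports that combining overlapping estimations reduced both MAE and ME relative to single‐view non‐overlapping patches.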