GLORIA

GEOMAR Library Ocean Research Information Access


  • 1
    In: Medical Physics, Wiley, Vol. 46, No. 5 ( 2019-05), p. 2052-2063
    Abstract: This work aims to develop a new framework of image quality assessment using a deep learning‐based model observer (DL‐MO) and to validate it in a low‐contrast lesion detection task that involves CT images with patient anatomical background. Methods The DL‐MO was developed using the transfer learning strategy to incorporate a pretrained deep convolutional neural network (CNN), a partial least square regression discriminant analysis (PLS‐DA) model and an internal noise component. The CNN was previously trained to achieve state‐of‐the‐art classification accuracy on a natural image database. The earlier layers of the CNN were used as a deep feature extractor, with the assumption that similarity exists between the CNN and the human visual system. The PLS‐DA model was used to further engineer the deep features for the lesion detection task in CT images. The internal noise component was incorporated to model the inefficiency and variability of human observer (HO) performance, and to generate the ultimate DL‐MO test statistics. Seven abdominal CT exams were retrospectively collected from the same type of CT scanner. To compare DL‐MO with HO, 12 experimental conditions with varying lesion size, lesion contrast, radiation dose, and reconstruction types were generated, each condition with 154 trials. CT images of a real liver metastatic lesion were numerically modified to generate lesion models with four lesion sizes (5, 7, 9, and 11 mm) and three contrast levels (15, 20, and 25 HU). The lesions were inserted into patient liver images using a projection‐based method. A validated noise insertion tool was used to synthesize CT exams at 50% and 25% of the routine radiation dose level. CT images were reconstructed using the weighted filtered back projection algorithm and an iterative reconstruction algorithm. 
Four medical physicists performed a two‐alternative forced choice (2AFC) detection task (with multislice scrolling viewing mode) on patient images across the 12 experimental conditions. DL‐MO was operated on the same datasets. Statistical analyses were performed to evaluate the correlation and agreement between DL‐MO and HO. Results A statistically significant positive correlation was observed between DL‐MO and HO for the 2AFC low‐contrast detection task that involves patient liver background. The corresponding Pearson product moment correlation coefficient was 0.986 [95% confidence interval (0.950, 0.996)]. Bland–Altman agreement analysis did not indicate statistically significant differences. Conclusions The proposed DL‐MO is highly correlated with HO in a low‐contrast detection task that involves realistic patient liver background. This study demonstrated the potential of the proposed DL‐MO to assess image quality directly based on patient images in realistic, clinically relevant CT tasks.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2019
    detail.hit.zdb_id: 1466421-5
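A note on the task above: in a two‐alternative forced choice (2AFC) experiment, proportion correct (PC) is conventionally linked to the detectability index d′ by PC = Φ(d′/√2). A minimal sketch of that standard conversion (not the paper's DL‐MO test statistic itself):

```python
from statistics import NormalDist

def dprime_to_pc(dprime):
    # Standard 2AFC relation: proportion correct PC = Phi(d' / sqrt(2)),
    # where Phi is the standard normal CDF.
    return NormalDist().cdf(dprime / 2 ** 0.5)

def pc_to_dprime(pc):
    # Inverse mapping via the standard normal quantile function.
    return NormalDist().inv_cdf(pc) * 2 ** 0.5
```

With this convention, chance performance (d′ = 0) corresponds to PC = 0.5, and higher d′ monotonically increases PC.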
  • 2
    In: Medical Physics, Wiley, Vol. 50, No. 3 ( 2023-03), p. 1428-1435
    Abstract: To measure the accuracy of material decomposition using a dual‐source photon‐counting‐detector (DS‐PCD) CT operated in the high‐pitch helical scanning mode and compare the results against dual‐source energy‐integrating‐detector (DS‐EID) CT, which requires use of a low‐pitch value in dual‐energy mode. Methods A DS‐PCD CT and a DS‐EID CT were used to scan a cardiac motion phantom consisting of a 3‐mm diameter iodine cylinder. Iodine maps were reconstructed using DS‐PCD in high‐pitch mode and DS‐EID in low‐pitch mode. Image‐based circularity, diameter, and iodine concentration of the iodine cylinder were calculated and compared between the two scanners. With institutional review board approval, in vivo exams were performed with the DS‐PCD CT in high‐pitch mode. Images were qualitatively compared against those of patients with similar heart rates who were scanned with DS‐EID CT in low‐pitch dual‐energy mode. Results On iodine maps, the mean circularity was 0.97 ± 0.02 with DS‐PCD in high‐pitch mode and 0.95 ± 0.06 with DS‐EID in low‐pitch mode. The mean diameter was 2.9 ± 0.2 mm with DS‐PCD and 3.1 ± 0.2 mm with DS‐EID, both of which are close to the 3 mm ground truth. For DS‐PCD, the mean iodine concentration was 9.6 ± 0.8 mg/ml and this was consistent with the 9.4 ± 0.6 mg/ml value obtained with the cardiac motion disabled. For DS‐EID, the concentration was 12.7 ± 1.2 mg/ml with motion enabled and 11.7 ± 0.5 mg/ml disabled. The background noise in the iodine maps was 15.1 HU with DS‐PCD and 14.4 HU with DS‐EID, whereas the volume CT dose index (CTDIvol) was 3 mGy with DS‐PCD and 11 mGy with DS‐EID. On comparison of six patients (three on PCD, three on EID) with similar heart rates, DS‐PCD provided iodine maps with well‐defined coronaries even at a high heart rate of 86 beats per minute. Meanwhile, there were substantial motion artifacts in iodine maps obtained with DS‐EID for patients with similar heart rates. 
Conclusion In a cardiac motion phantom, DS‐PCD CT can perform accurate material decomposition in high‐pitch mode, providing iodine maps with excellent geometric accuracy and robustness to motion at approximately 38% of the dose for similar noise as DS‐EID CT.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2023
    detail.hit.zdb_id: 1466421-5
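The circularity values above (0.97 for DS‐PCD, 0.95 for DS‐EID) are a standard shape metric, 4πA/P², which equals 1 for a perfect circle and drops as motion distorts the cross‐section. A minimal sketch of that metric; the study's segmentation pipeline is not described here:

```python
import math

def circularity(area, perimeter):
    # Circularity = 4*pi*A / P^2: 1.0 for a perfect circle,
    # smaller for elongated or blurred cross-sections.
    return 4 * math.pi * area / perimeter ** 2
```

For example, a unit square (area 1, perimeter 4) gives 4π/16 ≈ 0.785, reflecting its deviation from circularity.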
  • 3
    In: Medical Physics, Wiley, Vol. 47, No. 12 ( 2020-12), p. 6294-6309
    Abstract: To develop a convolutional neural network (CNN) that can directly estimate material density distribution from multi‐energy computed tomography (CT) images without performing conventional material decomposition. Methods The proposed CNN (denoted as Incept‐net) followed the general framework of encoder–decoder network, with an assumption that local image information was sufficient for modeling the nonlinear physical process of multi‐energy CT. Incept‐net was implemented with a customized loss function, including an in‐house‐designed image‐gradient‐correlation (IGC) regularizer to improve edge preservation. The network consisted of two types of customized multibranch modules exploiting multiscale feature representation to improve the robustness to local image noise and artifacts. Inserts with various densities of different materials [hydroxyapatite (HA), iodine, a blood–iodine mixture, and fat] were scanned using a research photon‐counting detector (PCD) CT with two energy thresholds and multiple radiation dose levels. The network was trained using phantom image patches only, and tested with different configurations of full field‐of‐view phantom and in vivo porcine images. Furthermore, the nominal mass densities of insert materials were used as the labels in CNN training, which potentially provided an implicit mass conservation constraint. The Incept‐net performance was evaluated in terms of image noise, detail preservation, and quantitative accuracy. Its performance was also compared to common material decomposition algorithms including least‐square‐based material decomposition (LS‐MD), total‐variation regularized material decomposition (TV‐MD), and a U‐net‐based method. 
Results Incept‐net improved the accuracy of the predicted mass density of basis materials compared with the U‐net, TV‐MD, and LS‐MD: the mean absolute error (MAE) of iodine was 0.66, 1.0, 1.33, and 1.57 mgI/cc for Incept‐net, U‐net, TV‐MD, and LS‐MD, respectively, across all iodine‐present inserts (2.0–24.0 mgI/cc). With the LS‐MD as the baseline, Incept‐net and U‐net achieved comparable noise reduction (both around 95%), both higher than TV‐MD (85%). The proposed IGC regularizer effectively helped both Incept‐net and U‐net to reduce image artifacts. Incept‐net closely conserved the total mass densities (i.e., mass conservation constraint) in porcine images, which heuristically validated the quantitative accuracy of its outputs in anatomical background. In general, Incept‐net performance was less dependent on radiation dose levels than the two conventional methods; with approximately 40% fewer parameters, Incept‐net achieved improved performance relative to the comparator U‐net, indicating that the performance gain was not achieved by simply increasing network learning capacity. Conclusion Incept‐net demonstrated superior qualitative image appearance, quantitative accuracy, and lower noise than the conventional methods, and was less sensitive to dose change. Incept‐net generalized and performed well with unseen image structures and different material mass densities. This study provided preliminary evidence that the proposed CNN may be used to improve the material decomposition quality in multi‐energy CT.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2020
    detail.hit.zdb_id: 1466421-5
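The iodine accuracy above is summarized as mean absolute error (MAE) in mgI/cc. A minimal sketch of that metric over paired density estimates:

```python
def mean_absolute_error(predicted, truth):
    # MAE over paired estimates, e.g. predicted vs. nominal
    # iodine densities in mgI/cc across phantom inserts.
    return sum(abs(p - t) for p, t in zip(predicted, truth)) / len(predicted)
```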
  • 4
    In: Medical Physics, Wiley, Vol. 50, No. 2 ( 2023-02), p. 821-830
    Abstract: Deep artificial neural networks such as convolutional neural networks (CNNs) have been shown to be effective models for reducing noise in CT images while preserving anatomic details. A practical bottleneck for developing CNN‐based denoising models is the procurement of training data consisting of paired examples of high‐noise and low‐noise CT images. Obtaining these paired data is not practical in a clinical setting where the raw projection data is not available. This work outlines a technique to optimize CNN denoising models using methods that are available in a routine clinical setting. Purpose To demonstrate a phantom‐based training framework for CNN noise reduction that can be efficiently implemented on any CT scanner. Methods The phantom‐based training framework uses supervised learning in which training data are synthesized using an image‐based noise insertion technique. Ten patient image series were used for training and validation (9:1), and noise‐only images were obtained from anthropomorphic phantom scans. Phantom noise‐only images were superimposed on patient images to imitate low‐dose CT images for use in training. A modified U‐Net architecture was used with mean‐squared‐error and feature reconstruction loss. The training framework was tested for clinically indicated whole‐body‐low‐dose CT images, as well as routine abdomen‐pelvis exams for which projection data was unavailable. Performance was assessed based on root‐mean‐square error, structural similarity, line profiles, and visual assessment. Results When the CNN was tested on five reserved quarter‐dose whole‐body‐low‐dose CT images, noise was reduced by 75%, root‐mean‐square error was reduced by 34%, and structural similarity increased by 60%. Visual analysis and line profiles indicated that the method significantly reduced noise while maintaining spatial resolution of anatomic features. 
Conclusion The proposed phantom‐based training framework demonstrated strong noise reduction while preserving spatial detail. Because this method is based within the image domain, it can be easily implemented without access to projection data.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2023
    detail.hit.zdb_id: 1466421-5
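The image‐based noise insertion step described above superimposes phantom noise‐only images on patient images to imitate low‐dose scans. A minimal sketch on flattened pixel lists; the `scale` knob is an illustrative assumption, not a parameter from the paper:

```python
def synthesize_low_dose(patient_img, noise_only_img, scale=1.0):
    # Image-domain noise insertion: add phantom-derived noise-only
    # values to a (flattened) patient image to imitate a lower-dose
    # acquisition. `scale` is an illustrative knob for noise magnitude.
    return [p + scale * n for p, n in zip(patient_img, noise_only_img)]
```

The paired (noisy, original) images produced this way serve as CNN training inputs and labels without requiring projection data.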
  • 5
    In: Medical Physics, Wiley, Vol. 49, No. 1 ( 2022-01), p. 70-83
    Abstract: Conventional model observers (MO) in CT are often limited to a uniform background or varying background that is random and can be modeled in an analytical form. It is unclear if these conventional MOs can be readily generalized to predict human observer performance in clinical CT tasks that involve realistic anatomical background. Deep‐learning‐based model observers (DL‐MO) have recently been developed, but have not been validated for challenging low contrast diagnostic tasks in abdominal CT. We consequently sought to validate a DL‐MO for a low‐contrast hepatic metastases localization task. Methods We adapted our recently developed DL‐MO framework for the liver metastases localization task. Our previously validated projection‐domain lesion‐/noise‐insertion techniques were used to synthesize realistic positive and low‐dose abdominal CT exams, using the archived patient projection data. Ten experimental conditions were generated, which involved different lesion sizes/contrasts, radiation dose levels, and image reconstruction types. Each condition included 100 trials generated from a patient cohort of 7 cases. Each trial was presented as liver image patches (160×160×5 voxels). The DL‐MO performance was calculated for each condition and was compared with human observer performance, which was obtained by three sub‐specialized radiologists in an observer study. The performance of DL‐MO and radiologists was gauged by the area under localization receiver‐operating‐characteristic curves. The generalization performance of the DL‐MO was estimated with the repeated twofold cross‐validation method over the same set of trials used in the human observer study. A multi‐slice channelized Hotelling observer (CHO) was compared with the DL‐MO across the same experimental conditions. Results The performance of DL‐MO was highly correlated to that of radiologists (Pearson's correlation coefficient: 0.987; 95% CI: [0.942, 0.997]). 
The performance level of DL‐MO was comparable to that of the grouped radiologists; that is, the mean performance difference was −3.3%. The CHO performance was poorer than the grouped radiologist performance even before internal noise was added. The correlation between CHO and radiologists was weaker (Pearson's correlation coefficient: 0.812; 95% CI: [0.378, 0.955]), and the corresponding performance bias (−29.5%) was statistically significant. Conclusion The presented study demonstrated the potential of using the DL‐MO for image quality assessment in patient abdominal CT tasks.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2022
    detail.hit.zdb_id: 1466421-5
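The DL‐MO/radiologist agreement above is quantified with a Pearson product‐moment correlation coefficient. A minimal pure‐Python sketch of that statistic over paired performance values (e.g., per‐condition AUCs):

```python
def pearson_r(x, y):
    # Pearson product-moment correlation between two paired series,
    # e.g. DL-MO vs. radiologist performance across conditions.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```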
  • 6
    In: Medical Physics, Wiley, Vol. 49, No. 10 ( 2022-10), p. 6346-6358
    Abstract: Dual‐energy CT with virtual noncalcium (VNCa) images allows the evaluation of focal intramedullary bone marrow involvement in patients with multiple myeloma. However, current commercial VNCa techniques suffer from excessive image noise and artifacts due to material decomposition used in synthesizing VNCa images. Objectives In this work, we aim to improve VNCa image quality for the assessment of focal multiple myeloma, using an Artificial intelligence based Generalizable Algorithm for mulTi‐Energy CT (AGATE) method. Materials and methods The AGATE method used a custom dual‐task convolutional neural network (CNN) that concurrently carries out material classification and quantification. The material classification task provided an auxiliary regularization to the material quantification task. CNN parameters were optimized using custom loss functions that involved cross‐entropy, physics‐informed constraints, structural redundancy in spectral and material images, and texture information in spectral images. For training data, CT phantoms (diameters 30 to 45 cm) with tissue‐mimicking inserts were scanned on a third generation dual‐source CT system. Scans were performed at routine dose and half of the routine dose. Small image patches (i.e., 40 × 40 pixels) of tissue‐mimicking inserts with known basis material densities were extracted for training samples. Numerically simulated insert materials with various shapes increased diversity of training samples. Generalizability of AGATE was evaluated using CT images from phantoms and patients. In phantoms, material decomposition accuracy was estimated using mean‐absolute‐percent‐error (MAPE), using physical inserts that were not used during the training. Noise power spectrum (NPS) and modulation transfer function (MTF) were compared across phantom sizes and radiation dose levels. Five patients with multiple myeloma underwent dual‐energy CT, with VNCa images generated using a commercial method and AGATE. 
Two fellowship‐trained musculoskeletal radiologists reviewed the VNCa images (commercial and AGATE) side‐by‐side using a dual‐monitor display, blinded to VNCa type, rating the image quality for focal multiple myeloma lesion visualization using a 5‐level Likert comparison scale (−2 = worse visualization and diagnostic confidence, −1 = worse visualization but equivalent diagnostic confidence, 0 = equivalent visualization and diagnostic confidence, 1 = improved visualization but equivalent diagnostic confidence, 2 = improved visualization and diagnostic confidence). A post hoc assignment of comparison ratings was performed to rank AGATE images in comparison to commercial ones. Results AGATE demonstrated consistent material quantification accuracy across phantom sizes and radiation dose levels, with MAPE ranging from 0.7% to 4.4% across all testing materials. Compared to commercial VNCa images, the AGATE‐synthesized VNCa images yielded considerably lower image noise (50–77% noise reduction) without compromising noise texture or spatial resolution across different phantom sizes and two radiation doses. AGATE VNCa images had markedly reduced area under NPS curves and maintained NPS peak frequency (0.7 lp/cm to 1.0 lp/cm), with similar MTF curves (50% MTF at 3.0 lp/cm). In patients, AGATE demonstrated reduced image noise and artifacts with improved delineation of focal multiple myeloma lesions (all readers comparison scores indicating improved overall diagnostic image quality [scores 1 or 2]). Conclusions AGATE demonstrated reduced noise and artifacts in VNCa images and ability to improve visualization of bone marrow lesions for assessing multiple myeloma.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2022
    detail.hit.zdb_id: 1466421-5
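Material decomposition accuracy above is reported as mean absolute percent error (MAPE) against known insert densities. A minimal sketch:

```python
def mape(predicted, truth):
    # Mean absolute percent error of decomposed material densities
    # against known (nonzero) insert values.
    return 100.0 * sum(abs(p - t) / abs(t) for p, t in zip(predicted, truth)) / len(predicted)
```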
  • 7
    In: Medical Physics, Wiley, Vol. 49, No. 6 ( 2022-06), p. 3683-3691
    Abstract: The purpose of this work is to evaluate the scaled computed tomography (CT) number accuracy of an artificial 120 kV reconstruction technique based on phantom experiments in the context of radiation therapy planning. Methods An abdomen‐shaped electron density phantom was scanned on a clinical CT scanner capable of artificial 120 kV reconstruction using different tube potentials from 70 to 150 kV. A series of tissue‐equivalent phantom inserts (lung, adipose, breast, solid water, liver, inner bone, 30%/50% CaCO3, cortical bone) were placed inside the phantom. Images were reconstructed using a conventional quantitative reconstruction kernel as well as the artificial 120 kV reconstruction kernel. Scaled CT numbers of inserts were measured from images acquired at different kVs and compared with those acquired at 120 kV, which were deemed as the ground truth. The relative error was quantified as the percentage deviation of scaled CT numbers acquired at different tube potentials from their ground truth values acquired at 120 kV. Results Scaled CT numbers measured from images reconstructed using the conventional reconstruction demonstrated a strong kV‐dependence. The relative error in scaled CT numbers ranged from 0.6% (liver insert) to 31.1% (cortical bone insert). The artificial 120 kV reconstruction reduced the kV dependence, especially for bone tissues. The relative error in scaled CT number was reduced to 0.4% (liver insert) and 2.6% (30% CaCO3 insert) using this technique. When tube potential selection was limited to the range of 90 to 150 kV, the relative error was further restrained to <1.2% for all tissue types. Conclusion Phantom results demonstrated that using the artificial 120 kV technique, it was feasible to acquire raw projection data at the desired tube potential and then reconstruct images with scaled CT numbers comparable to those obtained directly at 120 kV. 
In radiotherapy applications, this technique may allow optimization of tube potential without complicating clinical workflow by eliminating the necessity of maintaining multiple sets of CT calibration curves.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2022
    detail.hit.zdb_id: 1466421-5
  • 8
    In: Medical Physics, Wiley
    Abstract: Photon‐counting‐detector CT (PCD‐CT) enables the production of virtual monoenergetic images (VMIs) at a high spatial resolution (HR) via simultaneous acquisition of multi‐energy data. However, noise levels in these HR VMIs are markedly increased. Purpose To develop a deep learning technique that utilizes a lower noise VMI as prior information to reduce image noise in HR, PCD‐CT coronary CT angiography (CTA). Methods Coronary CTA exams of 10 patients were acquired using PCD‐CT (NAEOTOM Alpha, Siemens Healthineers). A prior‐information‐enabled neural network (Pie‐Net) was developed, treating one lower‐noise VMI (e.g., 70 keV) as a prior input and one noisy VMI (e.g., 50 keV or 100 keV) as another. For data preprocessing, noisy VMIs were reconstructed by filtered back‐projection (FBP) and iterative reconstruction (IR), which were then subtracted to generate “noise‐only” images. Spatial decoupling was applied to the noise‐only images to mitigate overfitting and improve randomization. Thicker slice averaging was used for the IR and prior images. The final training inputs for the convolutional neural network (CNN) inside the Pie‐Net consisted of thicker‐slice signal images with the reinsertion of spatially decoupled noise‐only images and the thicker‐slice prior images. The CNN training labels consisted of the corresponding thicker‐slice label images without noise insertion. Pie‐Net's performance was evaluated in terms of image noise, spatial detail preservation, and quantitative accuracy, and compared to a U‐net‐based method that did not include prior information. Results Pie‐Net provided strong noise reduction, by 95 ± 1% relative to FBP and by 60 ± 8% relative to IR. For HR VMIs at different keV (e.g., 50 keV or 100 keV), Pie‐Net maintained spatial and spectral fidelity. 
The inclusion of prior information from the spectral domain of the PCD‐CT data enabled more robust deep learning‐based denoising than the U‐net‐based method, which caused some loss of spatial detail and introduced some artifacts. Conclusion The proposed Pie‐Net achieved substantial noise reduction while preserving the spatial and spectral properties of HR VMIs.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    Language: English
    Publisher: Wiley
    Publication Date: 2023
    detail.hit.zdb_id: 1466421-5
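The preprocessing above derives "noise‐only" images by subtracting the smoother IR reconstruction from the FBP reconstruction of the same data, and the results are reported as noise reduction relative to each. A minimal sketch of both steps on flattened pixel lists:

```python
def noise_only(fbp_img, ir_img):
    # Approximate a "noise-only" image as the difference between the
    # noisier FBP reconstruction and the smoother IR reconstruction
    # of the same acquisition.
    return [a - b for a, b in zip(fbp_img, ir_img)]

def noise_reduction_pct(noise_before, noise_after):
    # Relative noise reduction, e.g. "95% relative to FBP" means the
    # denoised noise level is 5% of the FBP noise level.
    return 100.0 * (noise_before - noise_after) / noise_before
```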
  • 9
    In: Medical Physics, Wiley, Vol. 48, No. 11 ( 2021-11), p. 6710-6723
    Abstract: Eye‐tracking approaches have been used to understand the visual search process in radiology. However, previous eye‐tracking work in computed tomography (CT) has been limited largely to single cross‐sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities and their interactivity with three‐dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) for visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye‐tracking hardware with in‐house‐developed reader workstation software to allow monitoring of the visual search process and reader‐image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye‐tracking data using this platform for different eye‐tracking data acquisition modes. Methods An eye‐tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real‐time eye movement and workstation events at 1000 Hz sampling frequency. The eye‐tracker was operated either in head‐stabilized mode or in free‐movement mode. In head‐stabilized mode, the reader positioned their head on a manufacturer‐provided chinrest. In free‐movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident were invited to participate in three studies to determine eye‐tracking spatial accuracy under three constraint conditions: head‐stabilized mode (i.e., with use of a chin rest), free movement with general biofeedback, and free movement with strict biofeedback. 
Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross‐hair target prior to the integration of the eye‐tracker with the image viewing workstation. In Study 2, after integration of the eye‐tracker and reader workstation, readers were asked to fixate on targets that were randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation using units of image pixels and the degree of visual angle. Results The three head position constraint conditions yielded comparable accuracy in the studies using digital phantoms. For Study 1 involving the digital crosshairs, the median ± the standard deviation of offset values among readers were 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. For Study 2 using the random dot phantom, the median ± standard deviation offset values were 16.7 ± 28.8 pixels with use of a chinrest, 16.5 ± 24.6 pixels using strict biofeedback, and 18.0 ± 22.4 pixels using general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye‐tracking accuracy and target size or view time. In Study 3 viewing patient images, use of the chinrest and strict biofeedback demonstrated comparable accuracy, while the use of general biofeedback demonstrated a slightly worse accuracy. The median ± standard deviation of offset values were 14.8 ± 11.4 pixels with use of a chinrest, 21.0 ± 16.2 pixels using strict biofeedback, and 29.7 ± 20.9 image pixels using general biofeedback. 
These corresponded to visual angles ranging from 0.7° to 1.3°. Conclusions An integrated eye‐tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head‐free movement condition with audio biofeedback performed similarly to head‐stabilized mode.
    Type of Medium: Online Resource
    ISSN: 0094-2405 , 2473-4209
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication Date: 2021
    detail.hit.zdb_id: 1466421-5
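The spatial accuracy above is reported both in image pixels and in degrees of visual angle. A minimal sketch of the pixel‐to‐angle conversion; the pixel pitch and viewing distance used below are illustrative assumptions, not values reported by the study:

```python
import math

def offset_to_visual_angle_deg(offset_px, pixel_pitch_mm, view_dist_mm):
    # Convert a fixation offset in screen pixels to degrees of visual
    # angle subtended at the eye. pixel_pitch_mm (display pixel size)
    # and view_dist_mm (eye-to-screen distance) are assumed inputs.
    offset_mm = offset_px * pixel_pitch_mm
    return math.degrees(2.0 * math.atan(offset_mm / (2.0 * view_dist_mm)))
```

For instance, a ~17‑pixel offset on a 0.25 mm‑pitch display viewed at 650 mm subtends well under one degree, consistent in scale with the sub‑degree accuracies reported above.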