GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Publication Date: 2023-06-21
    Description: Underwater images are used to explore and monitor ocean habitats, generating huge datasets with unusual data characteristics that preclude traditional data management strategies. Due to the lack of universally adopted data standards, image data collected from the marine environment are increasing in heterogeneity, preventing objective comparison. The extraction of actionable information thus remains challenging, particularly for researchers not directly involved with the image data collection. Standardized formats and procedures are needed to enable sustainable image analysis and processing tools, as are solutions for image publication in long-term repositories to ascertain reuse of data. The FAIR principles (Findable, Accessible, Interoperable, Reusable) provide a framework for such data management goals. We propose the use of image FAIR Digital Objects (iFDOs) and present an infrastructure environment to create and exploit such FAIR digital objects. We show how these iFDOs can be created, validated, managed and stored, and which data associated with imagery should be curated. The goal is to reduce image management overheads while simultaneously creating visibility for image acquisition and publication efforts.
    Repository Name: EPIC Alfred Wegener Institut
    Type: Article , NonPeerReviewed , info:eu-repo/semantics/article
    Format: application/pdf
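    A minimal sketch of what an iFDO-style metadata header could look like, assuming hypothetical field names loosely modelled on the abstract above; the actual iFDO specification defines the authoritative keys and validation rules:

      # Hypothetical iFDO-style header for one underwater image; the key
      # names below are illustrative assumptions, not the official spec.
      import hashlib
      import json

      def build_image_header(image_bytes: bytes, filename: str, latitude: float,
                             longitude: float, depth_m: float, timestamp_utc: str) -> dict:
          """Collect minimal descriptive metadata for a single image."""
          return {
              "image-filename": filename,
              "image-hash-sha256": hashlib.sha256(image_bytes).hexdigest(),  # verifiable identity
              "image-datetime": timestamp_utc,
              "image-latitude": latitude,
              "image-longitude": longitude,
              "image-depth": depth_m,
              "image-license": "CC-BY-4.0",
          }

      def missing_fields(header: dict) -> list:
          """Return mandatory keys that are absent or empty (empty list = valid)."""
          mandatory = ["image-filename", "image-hash-sha256", "image-datetime",
                       "image-latitude", "image-longitude"]
          return [k for k in mandatory if header.get(k) in (None, "")]

      header = build_image_header(b"\x00" * 16, "dive_0042/img_0001.jpg",
                                  54.33, 10.18, 1450.0, "2023-06-21T12:00:00Z")
      print(json.dumps(header, indent=2))
      print("missing:", missing_fields(header))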
  • 2
    Publisher: Springer
    In: Pattern Recognition. ICPR International Workshops and Challenges, ed. by Del Bimbo, A., Cucchiara, R., Sclaroff, S., Farinella, G. M., Mei, T., Bertini, M., Escalante, H. J. and Vezzani, R. Springer, Cham, pp. 390-397, 8 pp.
    Publication Date: 2021-03-08
    Description: In deep water conditions, vision systems mounted on underwater robotic platforms require artificial light sources to illuminate the scene. The particular lighting configurations significantly influence the quality of the captured underwater images and can make their analysis much harder or easier. Nowadays, classical monolithic Xenon flashes are gradually being replaced by more flexible setups of multiple powerful LEDs. However, this raises the question of how to arrange these light sources, given different types of seawater and different flying altitudes of the capture platforms. Hence, this paper presents a rendering-based coarse-to-fine approach to optimize recent multi-light setups for underwater vehicles. It uses physical underwater light transport models and target ocean and mission parameters to simulate the underwater images as would be observed by a camera system with particular lighting setups. This paper proposes to systematically vary certain design parameters, such as each LED’s orientation, and to analyse the rendered image properties (such as illuminated image area and light uniformity) to find optimal light configurations. We report first results on a real, ongoing AUV light design process for deep sea mission conditions.
    Type: Book chapter , NonPeerReviewed
    Format: text
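    As a rough illustration of the coarse-to-fine idea described in the abstract above, the sketch below sweeps a single hypothetical design parameter (one LED tilt angle) on a coarse grid and then refines around the best candidate; render_and_score is a made-up placeholder standing in for the physical underwater rendering and image-quality analysis:

      # Coarse-to-fine search over one LED tilt angle (all values illustrative).
      import math

      def render_and_score(tilt_deg: float, altitude_m: float) -> float:
          """Placeholder quality score, e.g. illuminated area times uniformity."""
          return math.exp(-((tilt_deg - 35.0) ** 2) / 200.0) / (1.0 + 0.05 * altitude_m)

      def coarse_to_fine(altitude_m: float) -> int:
          # Coarse pass: 10-degree steps over the full tilt range.
          coarse = max(range(0, 91, 10), key=lambda t: render_and_score(t, altitude_m))
          # Fine pass: 1-degree steps around the best coarse candidate.
          lo, hi = max(0, coarse - 10), min(90, coarse + 10)
          return max(range(lo, hi + 1), key=lambda t: render_and_score(t, altitude_m))

      print("best tilt at 3 m altitude:", coarse_to_fine(3.0), "degrees")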
  • 3
    Publisher: Springer
    In: Pattern Recognition. ICPR International Workshops and Challenges, ed. by Del Bimbo, A., Cucchiara, R., Sclaroff, S., Farinella, G. M., Mei, T., Bertini, M., Escalante, H. J. and Vezzani, R. Springer, Cham, pp. 375-389.
    Publication Date: 2021-08-03
    Description: Nowadays underwater vision systems are being widely applied in ocean research. However, the largest portion of the ocean - the deep sea - still remains mostly unexplored. Only relatively few image sets have been taken from the deep sea due to the physical limitations caused by technical challenges and enormous costs. Deep sea images are very different from the images taken in shallow waters and this area did not get much attention from the community. The shortage of deep sea images and the corresponding ground truth data for evaluation and training is becoming a bottleneck for the development of underwater computer vision methods. Thus, this paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs, to generate underwater image sequences taken by robots in deep ocean scenarios. Different from shallow water conditions, artificial illumination plays a vital role in deep sea image formation as it strongly affects the scene appearance. Our radiometric image formation model considers both attenuation and scattering effects with co-moving spotlights in the dark. By detailed analysis and evaluation of the underwater image formation model, we propose a 3D lookup table structure in combination with a novel rendering strategy to improve simulation performance. This enables us to integrate an interactive deep sea robotic vision simulation in the Unmanned Underwater Vehicles simulator. To inspire further deep sea vision research by the community, we release the source code of our deep sea image converter to the public (https://www.geomar.de/en/omv-research/robotic-imaging-simulator).
    Type: Book chapter , NonPeerReviewed
    Format: text
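    The sketch below shows a generic simplified form of such a radiometric model: an attenuated direct signal plus distance-dependent backscatter. It is a textbook-style simplification with made-up coefficients, not the paper's full formulation, which additionally handles co-moving spotlights and a 3D lookup table for rendering speed:

      # Simplified per-pixel, per-channel deep sea image formation:
      #   I = J * exp(-a * d) + B_inf * (1 - exp(-b * d))
      # J: in-air color, d: camera-to-scene distance, a/b: attenuation and
      # backscatter coefficients, B_inf: veiling light. Illustrative only.
      import math

      def simulate_pixel(j: float, d: float, a: float, b: float, b_inf: float) -> float:
          direct = j * math.exp(-a * d)                    # attenuated scene signal
          backscatter = b_inf * (1.0 - math.exp(-b * d))   # light scattered into the path
          return direct + backscatter

      # Red attenuates fastest in water, so it fades first with distance.
      for d in (1.0, 5.0, 10.0):
          print(d, simulate_pixel(j=0.8, d=d, a=0.6, b=0.4, b_inf=0.05))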
  • 4
    Publication Date: 2022-07-11
    Description: Underwater images are challenging for correspondence search algorithms, which are traditionally designed based on images captured in air and under uniform illumination. In water, however, medium interactions have a much higher impact on the light propagation. Absorption and scattering cause wavelength- and distance-dependent color distortion, blurring and contrast reductions. For deeper or turbid waters, artificial illumination is required that usually moves rigidly with the camera and thus increases the appearance differences of the same seafloor spot in different images. Correspondence search, e.g. using image features, is however a core task in underwater visual navigation employed in seafloor surveys and is also required for 3D reconstruction, image retrieval and object detection. For underwater images, it has to be robust against the challenging imaging conditions to avoid decreased accuracy or even failure of computer vision algorithms. However, explicitly taking underwater nuisances into account during the feature extraction and matching process is challenging. On the other hand, learned feature extraction models have achieved high performance in many in-air problems in recent years. Hence we investigate how such a learned robust feature model, D2Net, can be applied to the underwater environment and particularly look into the issue of cross-domain transfer learning as a strategy to deal with the lack of annotated underwater training data.
    Type: Book chapter , NonPeerReviewed
    Format: text
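    A generic transfer-learning skeleton in PyTorch, sketching the idea of freezing the early layers of a pretrained in-air feature extractor and fine-tuning the rest on underwater data; the tiny backbone, the random batch and the placeholder loss are assumptions for illustration and do not reproduce D2Net or the paper's training setup:

      # Generic fine-tuning sketch (placeholder network, not D2Net).
      import torch
      import torch.nn as nn

      backbone = nn.Sequential(                      # stand-in for a pretrained CNN
          nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
      )
      head = nn.Conv2d(32, 64, 3, padding=1)         # dense descriptor head

      # Freeze the earliest, most domain-agnostic layer; adapt the rest.
      for p in backbone[0].parameters():
          p.requires_grad = False

      trainable = [p for p in list(backbone.parameters()) + list(head.parameters())
                   if p.requires_grad]
      optimizer = torch.optim.Adam(trainable, lr=1e-4)

      images = torch.rand(2, 3, 64, 64)              # pretend underwater batch
      descriptors = head(backbone(images))
      loss = descriptors.pow(2).mean()               # placeholder for a matching loss
      loss.backward()
      optimizer.step()
      print("descriptor map shape:", tuple(descriptors.shape))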
  • 5
    Publication Date: 2023-02-23
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text
  • 6
    Publication Date: 2023-01-24
    Description: Spectacular advances have been made in the field of machine vision over the past decade. While this discipline is traditionally driven by geometric models, neural networks have proven to be superior in some applications and have significantly expanded the limits of what is possible. At the same time, conventional graphic models describe the relationship between images and the associated scene with textures and light in a physically realistic manner and are an important part of photogrammetry. Differentiable renderers combine these approaches by enabling gradient-based optimization within the fixed structures of a graphics pipeline and thus adopt the learning process of neural networks. This fusion of formalized knowledge and machine learning motivates the idea of a modular differentiable renderer in which physical and statistical models can be recombined depending on the use case. We therefore present Gemini Connector: an initiative for the modular development and combination of differentiable physical models and neural networks. We examine opportunities and problems and motivate the idea with the extension of a differentiable rendering pipeline to include models of underwater optics for the analysis of deep sea images. Finally, we discuss use cases, especially within the Cross-Domain Fusion initiative.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
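    A toy illustration of gradient-based optimization through a physical image formation model, which is the core mechanism behind differentiable rendering; the one-parameter exponential attenuation model and the synthetic measurement are assumptions for illustration, not the Gemini Connector pipeline itself:

      # Recover an attenuation coefficient by gradient descent through a
      # differentiable (toy) renderer.  All data here is synthetic.
      import torch

      true_attenuation = torch.tensor(0.45)
      depth = torch.linspace(1.0, 10.0, 50)
      observed = torch.exp(-true_attenuation * depth)      # synthetic "measurement"

      attenuation = torch.tensor(0.1, requires_grad=True)  # parameter to recover
      optimizer = torch.optim.Adam([attenuation], lr=0.05)

      for _ in range(200):
          rendered = torch.exp(-attenuation * depth)       # differentiable forward model
          loss = ((rendered - observed) ** 2).mean()
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()

      print("recovered attenuation:", float(attenuation))  # converges towards 0.45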
  • 7
    Publisher: arXiv
    In: arXiv e-prints (Submitted).
    Publication Date: 2021-11-24
    Description: Underwater cameras are typically placed behind glass windows to protect them from the water. Spherical glass, a dome port, is well suited for high water pressures at great depth, allows for a large field of view, and avoids refraction if a pinhole camera is positioned exactly at the sphere's center. Adjusting a real lens perfectly to the dome center is a challenging task, both in terms of how to actually guide the centering process (e.g. visual servoing) and how to measure the alignment quality, but also, how to mechanically perform the alignment. Consequently, such systems are prone to being decentered by some offset, leading to challenging refraction patterns at the sphere that invalidate the pinhole camera model. We show that the overall camera system becomes an axial camera, even for thick domes as used for deep sea exploration and provide a non-iterative way to compute the center of refraction without requiring knowledge of exact air, glass or water properties. We also analyze the refractive geometry at the sphere, looking at effects such as forward- vs. backward decentering, iso-refraction curves and obtain a 6th-degree polynomial equation for forward projection of 3D points in thin domes. We then propose a pure underwater calibration procedure to estimate the decentering from multiple images. This estimate can either be used during adjustment to guide the mechanical position of the lens, or can be considered in photogrammetric underwater applications.
    Type: Article , NonPeerReviewed , info:eu-repo/semantics/article
    Format: text
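    The sketch below traces a single ray from a decentered pinhole through a thin spherical dome using the vector form of Snell's law, to make the decentering effect tangible; dome radius, offset and refractive indices are made-up example values, and the glass layer, the thick-dome case and the paper's 6th-degree forward projection are not modelled:

      # One ray, thin dome, decentered camera: the ray is no longer radial,
      # so it refracts at the sphere and the pinhole model breaks down.
      import numpy as np

      def refract(direction, normal, n1, n2):
          """Vector Snell's law; normal points back toward the incident medium."""
          d = direction / np.linalg.norm(direction)
          n = normal / np.linalg.norm(normal)
          cos_i = -np.dot(d, n)
          ratio = n1 / n2
          k = 1.0 - ratio ** 2 * (1.0 - cos_i ** 2)
          if k < 0.0:
              return None                                 # total internal reflection
          return ratio * d + (ratio * cos_i - np.sqrt(k)) * n

      sphere_center = np.zeros(3)
      sphere_radius = 0.05                                # 5 cm dome (example value)
      ray_origin = np.array([0.005, 0.0, 0.0])            # pinhole decentered by 5 mm
      ray_dir = np.array([0.0, 0.0, 1.0])

      # Intersect ray with sphere: solve |o + t*d - c|^2 = r^2 for t > 0.
      oc = ray_origin - sphere_center
      b = np.dot(ray_dir, oc)
      t = -b + np.sqrt(b ** 2 - (np.dot(oc, oc) - sphere_radius ** 2))
      hit = ray_origin + t * ray_dir
      outward_normal = (hit - sphere_center) / sphere_radius

      water_dir = refract(ray_dir, -outward_normal, n1=1.0, n2=1.33)
      print("direction after entering water:", water_dir)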
  • 8
    In: [Paper] 3DV 2021 International Conference on 3D Vision, 01.-03.12.2021, Online.
    Publication Date: 2022-01-14
    Description: Macro photography is characterized by a very shallow depth of field, which challenges classical structure from motion and even camera calibration techniques, since images suffer from large defocussed areas. Computational photography methods such as focus stacking combine the sharp areas of many photos into one, which can produce spectacular images of insects or small structures. In this contribution we analyse the camera model to describe such focus stacked images in photogrammetry and computer vision and derive a camera calibration pipeline for macro photography to enable photogrammetry and 3D reconstruction of tiny objects. We demonstrate the effectiveness of the approach on raytraced images with ground truth and real images.
    Type: Conference or Workshop Item , NonPeerReviewed , info:eu-repo/semantics/conferenceObject
    Format: text
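    A toy version of the focus-stacking composite step that such images are built from: per pixel, keep the value from the frame with the highest local sharpness. The random focal stack and the Laplacian-based sharpness measure are illustrative assumptions; this is not the paper's calibration pipeline:

      # Per-pixel sharpest-frame composite from a grayscale focal stack.
      import numpy as np
      from scipy.ndimage import laplace, uniform_filter

      def focus_stack(frames: np.ndarray) -> np.ndarray:
          """frames: (N, H, W) focal stack -> (H, W) all-in-focus composite."""
          sharpness = np.stack([uniform_filter(np.abs(laplace(f)), size=9)
                                for f in frames])         # local sharpness per frame
          best = np.argmax(sharpness, axis=0)             # index of sharpest frame
          return np.take_along_axis(frames, best[None, ...], axis=0)[0]

      stack = np.random.rand(5, 64, 64)                   # dummy focal stack
      print("composite shape:", focus_stack(stack).shape)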
  • 9
    In: [Paper] 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 11.-17.10.2021, Montreal, Canada.
    Publication Date: 2022-01-14
    Type: Conference or Workshop Item , NonPeerReviewed , info:eu-repo/semantics/conferenceObject
    Format: text
  • 10
    Publication Date: 2024-02-07
    Description: Underwater cameras are typically placed behind glass windows to protect them from the water. Spherical glass, a dome port, is well suited for high water pressures at great depth, allows for a large field of view, and avoids refraction if a pinhole camera is positioned exactly at the sphere’s center. Adjusting a real lens perfectly to the dome center is a challenging task, both in terms of how to actually guide the centering process (e.g. visual servoing) and how to measure the alignment quality, but also, how to mechanically perform the alignment. Consequently, such systems are prone to being decentered by some offset, leading to challenging refraction patterns at the sphere that invalidate the pinhole camera model. We show that the overall camera system becomes an axial camera, even for thick domes as used for deep sea exploration and provide a non-iterative way to compute the center of refraction without requiring knowledge of exact air, glass or water properties. We also analyze the refractive geometry at the sphere, looking at effects such as forward- vs. backward decentering, iso-refraction curves and obtain a 6th-degree polynomial equation for forward projection of 3D points in thin domes. We then propose a pure underwater calibration procedure to estimate the decentering from multiple images. This estimate can either be used during adjustment to guide the mechanical position of the lens, or can be considered in photogrammetric underwater applications.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text