GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Publication Date: 2023-02-08
    Description: Nowadays, various methods and sensors are available for 3D reconstruction tasks; however, it is still necessary to integrate the advantages of different technologies to optimize the quality of 3D models. Computed tomography (CT) is an imaging technique that takes a large number of radiographic measurements from different angles in order to generate slices of the object, however without colour information. The aim of this study is to put forward a framework to extract colour information from photogrammetric images for corresponding CT surface data with high precision. The 3D models of the same object are generated from the CT and photogrammetry methods respectively, and a transformation matrix is determined to align the extracted CT surface to the photogrammetric point cloud through a coarse-to-fine registration process. The estimated pose information of the images relative to the photogrammetric point cloud, which can be obtained from the standard image alignment procedure, also applies to the aligned CT surface data. For each camera pose, a depth image of the CT data is calculated by projecting all CT points to the image plane. The depth image should in principle agree with the corresponding photogrammetric image. Points that cannot be seen from the pose but are nevertheless projected onto the depth image are excluded from the colouring process; this is realized by comparing the range values of neighbouring pixels and finding the corresponding 3D points with larger range values. The same procedure is implemented for all image poses to obtain the coloured CT surface. Thus, by using photogrammetric images, we achieve a coloured CT dataset with high precision, which combines the advantages of both methods. Rather than simply stitching different data together, we dive deep into the photogrammetric 3D reconstruction process and optimize the CT data with colour information. This process can also provide an initial route and more options for other data fusion processes.
    Type: Article , PeerReviewed
    Format: text
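The depth-image visibility test described in this abstract can be sketched as a simple per-pixel z-buffer; a minimal illustration assuming a pinhole model with hypothetical intrinsics `K`, pose `(R, t)` and depth tolerance `depth_eps` (none of these values come from the paper):

```python
import numpy as np

def colour_ct_points(ct_points, image, K, R, t, depth_eps=0.01):
    """Assign colours from one photogrammetric image to CT surface points.

    All points are projected into the image plane; a per-pixel depth
    buffer keeps only the nearest point, so points occluded from this
    camera pose (larger range than the buffered value) stay uncoloured.
    """
    h, w = image.shape[:2]
    cam = (R @ ct_points.T + t.reshape(3, 1)).T      # world -> camera frame
    z = cam[:, 2]
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    depth = np.full((h, w), np.inf)                  # per-pixel nearest range
    for i in np.flatnonzero(valid):
        if z[i] < depth[v[i], u[i]]:
            depth[v[i], u[i]] = z[i]

    colours = np.full((len(ct_points), 3), np.nan)   # NaN = not visible
    for i in np.flatnonzero(valid):
        if z[i] <= depth[v[i], u[i]] + depth_eps:    # visible from this pose
            colours[i] = image[v[i], u[i]]
    return colours
```

Repeating this over all image poses and merging the per-pose colours would yield the coloured CT surface the abstract describes.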
  • 2
    Publication Date: 2024-02-07
    Description: Underwater cameras are typically placed behind glass windows to protect them from the water. Spherical glass, a dome port, is well suited for high water pressures at great depth, allows for a large field of view, and avoids refraction if a pinhole camera is positioned exactly at the sphere’s center. Adjusting a real lens perfectly to the dome center is a challenging task, in terms of how to guide the centering process (e.g. by visual servoing), how to measure the alignment quality, and how to mechanically perform the alignment. Consequently, such systems are prone to being decentered by some offset, leading to challenging refraction patterns at the sphere that invalidate the pinhole camera model. We show that the overall camera system becomes an axial camera, even for thick domes as used in deep sea exploration, and provide a non-iterative way to compute the center of refraction without requiring knowledge of exact air, glass or water properties. We also analyze the refractive geometry at the sphere, looking at effects such as forward vs. backward decentering and iso-refraction curves, and obtain a 6th-degree polynomial equation for the forward projection of 3D points in thin domes. We then propose a pure underwater calibration procedure to estimate the decentering from multiple images. This estimate can either be used during adjustment to guide the mechanical positioning of the lens, or can be accounted for in photogrammetric underwater applications.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
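The effect of decentring at a dome port can be illustrated with the vector form of Snell's law at a thin spherical interface; a minimal sketch with illustrative refractive indices, not the paper's axial-camera derivation or calibration procedure:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law: refract unit direction d at a surface
    with unit normal n (pointing toward the incoming ray), passing from
    medium n1 into n2. Returns None on total internal reflection."""
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

def trace_through_dome(origin, direction, centre, radius, n_air, n_water):
    """Intersect a camera ray with a thin spherical dome (camera inside)
    and refract it into the water. A pinhole exactly at the dome centre
    hits the glass perpendicularly, so the ray passes undeviated; any
    decentring offset bends it."""
    d = direction / np.linalg.norm(direction)
    oc = origin - centre
    b = np.dot(d, oc)
    disc = b * b - (np.dot(oc, oc) - radius * radius)  # > 0 inside the dome
    t = -b + np.sqrt(disc)            # far intersection along the ray
    p = origin + t * d                # point on the sphere
    n = -(p - centre) / radius        # inward normal faces the incoming ray
    return p, refract(d, n, n_air, n_water)
```

With the pinhole at the centre the returned direction equals the input direction; shifting the origin by a small offset produces the decentring-induced refraction the abstract analyzes.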
  • 3
    Publication Date: 2024-02-07
    Description: Reliable quantification of natural and anthropogenic gas release (e.g. CO2, methane) from the seafloor into the water column, and potentially to the atmosphere, is a challenging task. While ship-based echo sounders such as single beam and multibeam systems allow detection of free gas (bubbles) in the water even from a great distance, exact quantification from the hydroacoustic data requires additional parameters such as rise speed and bubble size distribution. Optical methods are complementary in the sense that they can provide high temporal and spatial resolution of single bubbles or bubble streams from close distance. In this contribution we introduce a complete instrument and evaluation method for optical bubble stream characterization, targeted at flows of up to 100 ml/min and bubbles of a few millimeters radius. The dedicated instrument employs a high-speed, deep sea capable stereo camera system that can record terabytes of bubble imagery when deployed at a seep site for later automated analysis. Bubble characteristics can be obtained for short sequences before relocating the instrument to other locations, or in autonomous mode at definable intervals over up to several days, in order to capture bubble flow variations due to, e.g., tide-dependent pressure changes or reservoir depletion. Besides reporting the steps taken to make bubble characterization robust and autonomous, we carefully evaluate the achievable accuracy to be in the range of 1–2% of the bubble radius, and propose a novel auto-calibration procedure that, due to the lack of point correspondences, uses only the silhouettes of bubbles. The system has been operated successfully at 1000 m water depth on the Cascadia margin offshore Oregon to assess methane fluxes from various seep locations. Besides sample results we also report failure cases and lessons learnt during deployment and method development.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
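The step from optically measured bubble sizes to a volume flux can be sketched as follows; the function name and the spherical-bubble simplification are illustrative, not the instrument's actual evaluation pipeline:

```python
import math

def volume_flux_ml_per_min(radii_mm, bubbles_per_second):
    """Estimate gas volume flux from optically measured bubble radii.

    Each bubble is approximated as a sphere; the mean bubble volume
    times the bubble release rate gives the flux. 1 mm^3 = 1e-3 ml.
    """
    mean_volume_mm3 = sum((4.0 / 3.0) * math.pi * r ** 3
                          for r in radii_mm) / len(radii_mm)
    flux_mm3_per_min = mean_volume_mm3 * bubbles_per_second * 60.0
    return flux_mm3_per_min * 1e-3   # mm^3/min -> ml/min
```

Because bubble volume scales with the cube of the radius, the 1–2% radius accuracy reported in the abstract translates into roughly 3–6% accuracy in the derived volume flux.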
  • 4
    Publication Date: 2024-02-07
    Description: Most of the Earth’s surface is situated in the deep ocean. To explore this visually rather adverse environment with cameras, they have to be protected by pressure housings. These housings, in turn, need interfaces to the outside world that endure the extreme pressures of the water column. Commonly, a flat window or a glass half-sphere, called flat-port or dome-port respectively, is used to implement such an interface. Hence, multi-media interfaces between water, glass and air are introduced, entailing refraction effects in the images taken through them. To obtain unbiased 3D measurements and a geometrically faithful reconstruction of the scene, it is mandatory to deal with these effects properly. We therefore propose an optical digital twin of an underwater environment, geometrically verified to resemble a real water lab tank, that features the two most common optical interfaces. It can be used to develop, evaluate, train, test and tune refractive algorithms. Alongside this paper, we publish the model for further extension, jointly with code to dynamically generate samples from the dataset. Finally, we also publish a pre-rendered dataset ready for use at https://git.geomar.de/david-nakath/geodt.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
  • 5
    Publication Date: 2024-02-07
    Description: Visual systems are receiving increasing attention in underwater applications. While the photogrammetric and computer vision literature has so far largely targeted shallow water applications, recently deep sea mapping research has also come into focus. The majority of the seafloor, and of Earth’s surface, is located in the deep ocean below 200 m depth, and is still largely uncharted. Here, on top of the general image quality degradation caused by water absorption and scattering, additional artificial illumination of the survey areas is mandatory, as they otherwise reside in permanent darkness where no sunlight reaches. This creates unintended non-uniform lighting patterns in the images and non-isotropic scattering effects close to the camera. If not compensated properly, such effects dominate seafloor mosaics and can obscure the actual seafloor structures. Moreover, cameras must be protected from the high water pressure, e.g. by housings with thick glass ports, which can lead to refractive distortions in images. Additionally, no satellite navigation is available to support localization. All these issues render deep sea visual mapping a challenging task, and most methods and strategies developed for land or shallow water cannot be directly transferred to the seafloor at several kilometers depth. In this survey we provide a state-of-the-art review of deep ocean mapping, starting from existing systems and challenges, then discussing shallow and deep water models and corresponding solutions. Finally, we identify open issues for future lines of research.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
  • 6
    Publication Date: 2024-02-07
    Description: Vision in the deep sea is acquiring increasing interest from many fields, as the deep seafloor represents the largest surface portion on Earth. Unlike common shallow underwater imaging, deep sea imaging requires artificial lighting to illuminate the scene in perpetual darkness. Deep sea images suffer from degradation caused by scattering, attenuation and effects of artificial light sources, and have a very different appearance to images taken in shallow water or on land. This impairs transferring current vision methods to deep sea applications. Development of adequate algorithms requires some data with ground truth in order to evaluate the methods. However, it is practically impossible to capture a deep sea scene both with and without water or artificial lighting effects. This situation impairs progress in deep sea vision research, where synthesized images with ground truth could be a good solution. Most current methods either render a virtual 3D model, or use atmospheric image formation models to convert real world scenes to a shallow water appearance illuminated by sunlight. Currently, there is a lack of image datasets dedicated to deep sea vision evaluation. This paper introduces a pipeline to synthesize deep sea images using existing real world RGB-D benchmarks, and exemplarily generates deep sea twin datasets for the well known Middlebury stereo benchmarks. They can be used both for testing underwater stereo matching methods and for training and evaluating underwater image processing algorithms. This work aims towards establishing an image benchmark intended particularly for deep sea vision developments.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
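The kind of synthesis pipeline described here can be illustrated with a basic attenuation-plus-veiling-light image formation model applied to an RGB-D pair; the per-channel coefficients are illustrative placeholders, and a faithful deep sea rendering would additionally model the artificial light sources:

```python
import numpy as np

def synthesize_underwater(rgb, depth,
                          beta=(0.10, 0.05, 0.02),
                          b_inf=(0.05, 0.15, 0.30)):
    """Degrade an in-air RGB-D pair with a simple underwater model:

        I_c = J_c * exp(-beta_c * z) + B_inf_c * (1 - exp(-beta_c * z))

    rgb:   H x W x 3 array in [0, 1]
    depth: H x W range in metres
    beta / b_inf: per-channel attenuation and veiling-light parameters
    (illustrative: red attenuates fastest, blue scatters most).
    """
    beta = np.asarray(beta).reshape(1, 1, 3)
    b_inf = np.asarray(b_inf).reshape(1, 1, 3)
    trans = np.exp(-beta * depth[..., None])   # per-channel transmission
    return rgb * trans + b_inf * (1.0 - trans)
```

At zero range the image is unchanged; with growing range every pixel converges to the veiling light, which is the behaviour such benchmarks exercise in stereo matchers.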
  • 7
    Publication Date: 2024-03-22
    Description: Underwater image restoration has been a challenging problem for decades, since the advent of underwater photography. Most solutions focus on shallow water scenarios, where the scene is uniformly illuminated by sunlight. However, the vast majority of uncharted underwater terrain is located beyond 200 meters depth, where natural light is scarce and artificial illumination is needed. In such cases, light sources co-moving with the camera dynamically change the scene appearance, which makes shallow water restoration methods inadequate. In particular for multi-light-source systems (nowadays composed of dozens of LEDs), calibrating each light is time-consuming, error-prone and tedious, and we observe that only the integrated illumination within the viewing volume of the camera is critical, rather than the individual light sources. The key idea of this paper is therefore to exploit the appearance changes of objects or the seafloor as they traverse the viewing frustum of the camera. Through new constraints assuming Lambertian surfaces, corresponding image pixels constrain the light field in front of the camera, and for each voxel a signal factor and a backscatter value are stored in a volumetric grid. This grid enables very efficient image restoration for camera-light platforms, which facilitates consistently texturing large 3D models and maps that would otherwise be dominated by lighting and medium artifacts. To validate the effectiveness of our approach, we conducted extensive experiments on simulated and real-world datasets. The results demonstrate the robustness of our approach in restoring the true albedo of objects while mitigating the influence of lighting and medium effects. Furthermore, we demonstrate that our approach can be readily extended to other scenarios, including in-air imaging with artificial illumination and other similar cases.
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
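The per-voxel restoration idea (store an integrated signal factor and a backscatter value per voxel, then invert them per pixel) can be sketched as follows; the class name, grid layout and nearest-voxel lookup are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

class LightVolume:
    """Volumetric grid over the camera's viewing volume storing, per
    voxel, the integrated illumination ("signal factor") S and the
    backscatter value B accumulated from all co-moving light sources."""

    def __init__(self, shape, extent):
        self.signal = np.ones(shape)        # S, estimated offline
        self.backscatter = np.zeros(shape)  # B, estimated offline
        self.shape = np.array(shape)
        self.extent = np.array(extent)      # physical grid size (m)

    def lookup(self, xyz):
        """Nearest-voxel lookup for 3D points given in grid coordinates."""
        idx = np.clip((xyz / self.extent * self.shape).astype(int),
                      0, self.shape - 1)
        i, j, k = idx.T
        return self.signal[i, j, k], self.backscatter[i, j, k]

def restore_pixel(intensity, point_xyz, volume):
    """Invert the stored light field at one pixel: subtract backscatter,
    then divide out the integrated illumination to recover albedo."""
    s, b = volume.lookup(np.atleast_2d(point_xyz))
    return (intensity - b) / s
```

Since the grid replaces per-light calibration with a single per-voxel lookup, restoring a full image is one subtraction and one division per pixel, which is what makes texturing large 3D models practical.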
  • 8
    Publication Date: 2024-03-25
    Description: Imaging is increasingly used to capture information on the marine environment, thanks to recent improvements in imaging equipment, camera-carrying platforms and data storage. In that context, biologists, geologists, computer specialists and end-users must come together to discuss the methods and procedures for optimising the quality and quantity of data collected from images. The 4th Marine Imaging Workshop was organised from 3-6 October 2022 in Brest (France) in a hybrid mode. More than a hundred participants were welcomed in person and about 80 people attended the online sessions. The workshop was organised as a single plenary session of presentations followed by discussion sessions. These were based on dynamic polls and open questions that allowed the imaging community’s current and future ideas to be recorded. In addition, a whole day was dedicated to practical sessions on image analysis, data standardisation and communication tools. The format of this edition allowed the participation of a wider community, including lower-income countries and early career scientists, all working on laboratory, benthic and pelagic imaging. This article summarises the topics addressed during the workshop, in particular the outcomes of the discussion sessions, for future reference and to make the workshop results available to the public.
    Type: Article , NonPeerReviewed
    Format: text
    Format: archive