GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Publication Date: 2019-09-23
    Type: Article, PeerReviewed
    Format: text
  • 2
    Publication Date: 2019-09-23
    Description: Highlights:
    • Marine Image Annotation Software (MIAS) are used to assist annotation of underwater imagery.
    • We compare 23 MIAS assisting human annotation, some of which include automated annotation.
    • MIAS can run in real time (50%), allow posterior annotation (95%), and interact with databases and data flows (44%).
    • MIAS differ in data input/output and display, customization, image analysis and re-annotation.
    • We provide important considerations for selecting MIAS and outline future trends.
    Abstract: Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review their functioning, application trends and developments by comparing general and advanced features of 23 different tools used in underwater image analysis. MIAS requiring human input are essentially graphical user interfaces with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating posteriorly, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow the input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis (e.g. length, area, image segmentation, point counts) and, in a few cases, the possibility of browsing and editing previous dive logs or analyzing annotation data. The interaction with a database allows automatic integration of annotations from different surveys, repeated and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed and future trends are presented. A sketch of such an annotation record follows this entry.
    Type: Article, PeerReviewed
    Format: text
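The reviewed tools all reduce to logging semantic events against a time code and, optionally, a platform position. Below is a minimal sketch of such a time-stamped, geo-referenced annotation record; the field set is an assumption drawn from the abstract, not any specific tool's schema.

# Illustrative annotation record; field names are assumptions, not a
# specific MIAS schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AnnotationEvent:
    timestamp: datetime   # video time code or image capture time
    latitude: float       # position of the recording platform
    longitude: float
    label: str            # object or event at the semantic level
    annotator: str        # supports collaborative annotation

event = AnnotationEvent(datetime(2015, 6, 1, 12, 30, 5),
                        54.33, 10.18, "sea cucumber", "observer_1")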
  • 3
    Publication Date: 2020-06-26
    Description: Highlights:
    • The proposed method automatically assesses the abundance of poly-metallic nodules on the seafloor.
    • No manually created feature reference set is required.
    • Large collections of benthic images from a range of acquisition gear can be analysed efficiently.
    Abstract: Underwater image analysis is a new field for computational pattern recognition. In academia as well as in industry, it is increasingly common to use camera-equipped stationary landers, autonomous underwater vehicles, ocean floor observatory systems or remotely operated vehicles for image-based monitoring and exploration. The resulting image collections create a bottleneck for manual data interpretation owing to their size. In this paper, the problem of measuring the size and abundance of poly-metallic nodules in benthic images is considered. A foreground/background separation (i.e. separating the nodules from the surrounding sediment) is required to determine the targeted quantities. Poly-metallic nodules are compact (convex) but vary in size and appear as composites with different visual features (color, texture, etc.). Methods for automating nodule segmentation have so far relied on manual training data. However, a hand-drawn, ground-truthed segmentation of nodules and sediment is difficult (or even impossible) to achieve for a sufficient number of images. The new ES4C algorithm (Evolutionary tuned Segmentation using Cluster Co-occurrence and a Convexity Criterion) is presented, which can be applied to a segmentation task without a reference ground truth. First, a learning vector quantization groups the visual features in the images into clusters. Secondly, a segmentation function is constructed by assigning the clusters to classes automatically according to defined heuristics. Using evolutionary algorithms, a quality criterion is maximized to assign cluster prototypes to classes. This criterion integrates the morphological compactness of the nodules as well as feature similarity in different parts of nodules. To assess its applicability, the ES4C algorithm is tested on two real-world data sets. For one of these data sets, a reference gold standard is available, and we report a sensitivity of 0.88 and a specificity of 0.65. Our results show that the applied heuristics, which combine patterns in the feature domain with patterns in the spatial domain, lead to good segmentation results and allow full automation of the resource-abundance assessment for benthic poly-metallic nodules. A sketch of this two-step procedure follows this entry.
    Type: Article, PeerReviewed, info:eu-repo/semantics/article
    Format: text
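The two-step idea described above (unsupervised clustering of visual features, then an evolutionary search that assigns clusters to the nodule or sediment class by maximizing a compactness criterion) can be sketched as follows. This is a hedged illustration: KMeans stands in for learning vector quantization, the compactness proxy is simplified, and all names are illustrative rather than the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def cluster_features(features, n_clusters=8, seed=0):
    """Step 1: group per-pixel visual features (N x D array) into clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(features)

def compactness_score(cluster_map, assignment):
    """Quality criterion: reward foreground (nodule) regions that are compact.
    cluster_map: H x W integer cluster ids; assignment: bool per cluster
    (True = nodule class)."""
    fg = assignment[cluster_map]          # H x W boolean foreground mask
    if fg.sum() == 0 or fg.all():
        return -np.inf                    # degenerate segmentations score worst
    # Crude compactness proxy: fraction of foreground pixels whose
    # 4-neighbours are also foreground (compact blobs score high).
    interior = (fg[1:-1, 1:-1] & fg[:-2, 1:-1] & fg[2:, 1:-1]
                & fg[1:-1, :-2] & fg[1:-1, 2:])
    return interior.sum() / fg.sum()

def es4c_like_assignment(cluster_map, n_clusters, iters=500, seed=0):
    """Step 2: simple (1+1) evolutionary search over binary
    cluster-to-class assignments, maximizing the criterion."""
    rng = np.random.default_rng(seed)
    best = rng.random(n_clusters) < 0.5   # random initial assignment
    best_score = compactness_score(cluster_map, best)
    for _ in range(iters):
        cand = best.copy()
        cand[rng.integers(n_clusters)] ^= True   # mutate: flip one cluster
        score = compactness_score(cluster_map, cand)
        if score >= best_score:
            best, best_score = cand, score
    return best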
  • 4
    Publication Date: 2021-02-08
    Description: Digital imaging has become one of the most important techniques in environmental monitoring and exploration. In the case of the marine environment, mobile platforms such as autonomous underwater vehicles (AUVs) are now equipped with high-resolution cameras to capture huge collections of images from the seabed. However, the timely evaluation of all these images presents a bottleneck problem, as tens of thousands or more images can be collected during a single dive. This makes computational support for marine image analysis essential. Computer-aided analysis of environmental images (and marine images in particular) with machine learning algorithms is promising, but challenging and different from other imaging domains because training data and class labels cannot be collected as efficiently and comprehensively as in other areas. In this paper, we present Machine learning Assisted Image Annotation (MAIA), a new image annotation method for environmental monitoring and exploration that overcomes the obstacle of missing training data. The method uses a combination of autoencoder networks and Mask Region-based Convolutional Neural Network (Mask R-CNN), which allows human observers to annotate large image collections much faster than before. We evaluated the method with three marine image datasets featuring different types of background, imaging equipment and object classes. Using MAIA, we were able to annotate objects of interest with an average recall of 84.1%, more than twice as fast as with “traditional” annotation methods, which are purely based on software-supported direct visual inspection and manual annotation. The speed gain increases proportionally with the size of the dataset. The MAIA approach represents a substantial improvement on the path to greater efficiency in the annotation of large benthic image collections. A sketch of this two-stage pipeline follows this entry.
    Type: Article, PeerReviewed
    Format: text
    Format: archive
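A hedged sketch of the two-stage MAIA idea follows: an autoencoder trained on (mostly background) patches flags high-reconstruction-error patches as candidate object regions, which, after human review, would train an instance-segmentation network such as Mask R-CNN. The architecture and threshold below are illustrative assumptions, not the published configuration.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Compresses image patches; high reconstruction error marks patches
    that deviate from the (mostly background) training distribution."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def propose_patches(model, patches, threshold=0.01):
    """Stage 1: flag patches (N x 3 x H x W, values in [0, 1]) whose mean
    reconstruction error is high; these become candidates for human review."""
    with torch.no_grad():
        err = ((model(patches) - patches) ** 2).mean(dim=(1, 2, 3))
    return (err > threshold).nonzero(as_tuple=True)[0]

# Stage 2 (omitted here): the reviewed proposals train an instance-
# segmentation network, e.g. torchvision's maskrcnn_resnet50_fpn, which
# then detects objects of interest in the remaining images.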
  • 5
    Publication Date: 2018-01-04
    Description: Marine researchers continue to create large quantities of benthic images, e.g. using AUVs (autonomous underwater vehicles). In order to quantify the size of sessile objects in the images, a pixel-to-centimeter ratio is required for each image, often provided indirectly through a geometric laser point (LP) pattern projected onto the seafloor. Manual annotation of these LPs in all images is too time-consuming and thus infeasible for today's data volumes. Because of the technical evolution of camera rigs, the geometric layout and color features of the LPs vary between expeditions and projects. This also makes the application of a single algorithm tuned to a strictly defined LP pattern ineffective. Here we present the web tool DELPHI, which efficiently learns the LP layout for one image transect/collection from just a small number of hand-labeled LPs and applies this layout model to the rest of the data. The efficiency in adapting to new data allows the LPs, and thus the pixel-to-centimeter ratio, to be computed fully automatically and with high accuracy. DELPHI is applied to two real-world examples and shows clear improvements, reducing the tuning effort for new LP patterns and increasing detection performance. A sketch of the resulting scale computation follows this entry.
    Type: Article, PeerReviewed
    Format: text
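The pixel-to-centimeter ratio that DELPHI ultimately delivers follows from simple geometry once the laser points are located. Below is a minimal sketch, assuming a pattern with a known physical spacing between consecutive points; the function name and the 10 cm spacing are illustrative assumptions.

import math

def pixel_to_cm_ratio(lp_pixels, spacing_cm=10.0):
    """lp_pixels: list of (x, y) pixel coordinates of detected laser points,
    ordered so consecutive points are 'spacing_cm' apart on the seafloor.
    Returns pixels per centimeter, averaged over all consecutive pairs."""
    ratios = []
    for (x0, y0), (x1, y1) in zip(lp_pixels, lp_pixels[1:]):
        dist_px = math.hypot(x1 - x0, y1 - y0)
        ratios.append(dist_px / spacing_cm)
    return sum(ratios) / len(ratios)

# Example: three roughly collinear points 10 cm apart, ~250 px between them
print(pixel_to_cm_ratio([(100, 500), (350, 505), (600, 498)]))  # ~= 25 px/cm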
  • 6
    Publication Date: 2019-09-23
    Description: Optic technologies and methods/procedures are established across all areas and scales of limnic and marine research in Germany and are continuously being developed further. The working group “Aquatic Optic Technologies” (AOT) constitutes a common platform for knowledge transfer among scientists and users, provides a synergistic environment for the national developer community and will enhance the international visibility of German activities in this field. This document summarizes the AOT procedures and techniques applied by national research institutions. We expect it to initiate a trend towards harmonization across institutes. This will facilitate the establishment of open standards, provide better access to documentation, and render technical assistance for systems integration. The document consists of the following parts: “Platforms and carrier systems” outlines the main application areas and the technologies used. “Focus parameters” specifies the parameters measured by means of optical methods/techniques and indicates to what extent these parameters have a socio-political dimension. “Methods” presents the individual optical sensors and their underlying physical methods. “Similarities” identifies the common ground of AOT techniques and applications. “National developments” lists projects and developer groups in Germany designing optical high technologies for limnic and marine scientific purposes.
    Type: Report, NonPeerReviewed
    Format: text
  • 7
    In: OCEANS 2015. Institute of Electrical and Electronics Engineers (IEEE), Washington, pp. 1-5. ISBN 978-0-933957-43-5
    Publication Date: 2016-12-01
    Description: Computational underwater image analysis is developing into a mature field of research, with an increasing number of companies, academic groups and researchers showing interest in it. While the basic question of how algorithms can be applied to automatically detect and classify objects of interest (OOI) in underwater image footage is addressed by many groups, the question of efficiency and performance, i.e. the time a computer (or a compute cluster) needs to perform this task, has not received much attention yet. In this paper we show how modern methods for high-performance computing, such as parallelization and GPU computing via CUDA (Compute Unified Device Architecture), can be used to achieve both image enhancement and segmentation in less than 0.2 seconds per image (4224 x 2376 pixels) on average, which paves the way to real-time online applications. A sketch of this kind of GPU offloading follows this entry.
    Type: Book chapter, NonPeerReviewed
    Format: text
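Below is a hedged sketch of the kind of GPU offloading the paper exploits, using CuPy as a Python stand-in for hand-written CUDA kernels; the percentile contrast stretch is an illustrative enhancement step, not the authors' pipeline.

import cupy as cp   # requires a CUDA-capable GPU
import numpy as np

def enhance_on_gpu(image_u8):
    """Percentile-based contrast stretch computed entirely on the GPU."""
    img = cp.asarray(image_u8, dtype=cp.float32)          # host -> device
    lo, hi = cp.percentile(img, 1), cp.percentile(img, 99)
    stretched = cp.clip((img - lo) / (hi - lo + 1e-6), 0.0, 1.0)
    return cp.asnumpy((stretched * 255).astype(cp.uint8))  # device -> host

# A full-resolution frame like those in the paper; batching such frames
# while the CPU handles I/O is what makes sub-second per-image
# throughput feasible.
frame = np.random.randint(0, 256, (2376, 4224, 3), dtype=np.uint8)
enhanced = enhance_on_gpu(frame)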
  • 8
    Publication Date: 2022-01-31
    Description: The evaluation of large amounts of digital image data is of growing importance for biology, including for the exploration and monitoring of marine habitats. However, only a tiny percentage of the image data collected is evaluated by marine biologists, who manually interpret and annotate the image contents, which can be slow and laborious. In order to overcome the bottleneck in image annotation, two strategies are increasingly proposed: “citizen science” and “machine learning”. In this study, we investigated how the combination of citizen science, to detect objects, and machine learning, to classify megafauna, could be used to automate annotation of underwater images. For this purpose, multiple large data sets of citizen science annotations, with different degrees of the common errors and inaccuracies observed in citizen science data, were simulated by modifying “gold standard” annotations done by an experienced marine biologist. The parameters of the simulation were determined on the basis of two citizen science experiments. This allowed us to analyze the relationship between the outcome of a citizen science study and the quality of the classifications of a deep learning megafauna classifier. The results show great potential for combining citizen science with machine learning, provided that the participants are informed precisely about the annotation protocol. Inaccuracies in the position of the annotation had the most substantial influence on the classification accuracy, whereas the size of the marking and false positive detections had a smaller influence. A sketch of this error simulation follows this entry.
    Type: Article, PeerReviewed
    Format: text
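The simulation strategy described above (perturbing gold-standard annotations with position jitter, rescaled markings and injected false positives) can be sketched as follows; the jitter magnitudes and rates are illustrative assumptions, not the study's parameters.

import random

def simulate_citizen_annotations(gold, pos_jitter_px=15, size_scale=(0.7, 1.4),
                                 false_positive_rate=0.1,
                                 image_size=(4224, 2376), seed=0):
    """gold: list of dicts with 'x', 'y', 'radius', 'label'. Returns a
    perturbed copy with shifted positions, rescaled markings and a few
    injected false positives (label None)."""
    rng = random.Random(seed)
    simulated = []
    for ann in gold:
        simulated.append({
            "x": ann["x"] + rng.uniform(-pos_jitter_px, pos_jitter_px),
            "y": ann["y"] + rng.uniform(-pos_jitter_px, pos_jitter_px),
            "radius": ann["radius"] * rng.uniform(*size_scale),
            "label": ann["label"],
        })
    # Inject spurious detections at random image positions.
    for _ in range(int(false_positive_rate * len(gold))):
        simulated.append({"x": rng.uniform(0, image_size[0]),
                          "y": rng.uniform(0, image_size[1]),
                          "radius": rng.uniform(10, 40),
                          "label": None})
    return simulated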
  • 9
    Publication Date: 2017-06-01
    Description: Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review their functioning, application trends and developments by comparing general and advanced features of 23 different tools used in underwater image analysis. MIAS requiring human input are essentially graphical user interfaces with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating posteriorly, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow the input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis (e.g. length, area, image segmentation, point counts) and, in a few cases, the possibility of browsing and editing previous dive logs or analyzing annotation data. The interaction with a database allows automatic integration of annotations from different surveys, repeated and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed and future trends are presented.
    Repository Name: EPIC Alfred Wegener Institut
    Type: Article, isiRev
  • 10
    Publication Date: 2016-01-13
    Description: Optic technologies and methods/procedures are established across all areas and scales of limnic and marine research in Germany and are continuously being developed further. The working group “Aquatic Optic Technologies” (AOT) constitutes a common platform for knowledge transfer among scientists and users, provides a synergistic environment for the national developer community and will enhance the international visibility of German activities in this field. This document summarizes the AOT procedures and techniques applied by national research institutions. We expect it to initiate a trend towards harmonization across institutes. This will facilitate the establishment of open standards, provide better access to documentation, and render technical assistance for systems integration. The document consists of the following parts: “Platforms and carrier systems” outlines the main application areas and the technologies used. “Focus parameters” specifies the parameters measured by means of optical methods/techniques and indicates to what extent these parameters have a socio-political dimension. “Methods” presents the individual optical sensors and their underlying physical methods. “Similarities” identifies the common ground of AOT techniques and applications. “National developments” lists projects and developer groups in Germany designing optical high technologies for limnic and marine scientific purposes.
    Repository Name: EPIC Alfred Wegener Institut
    Type: Article, notRev
    Format: application/pdf