GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Keywords: Forschungsbericht (research report)
    Type of Medium: Online Resource
    Pages: 1 online resource (18 pages, 8.10 MB), illustrations, diagrams
    Language: German
    Note: Author and implementing institution taken from the report documentation sheet; funding code BMWK 0324254D; joint project number 01182152; project duration: 01.06.2018 to 31.12.2021; differences between the printed document and the electronic resource cannot be ruled out
  • 2
    PANGAEA
    In:  Supplement to: Bergmann, Melanie; Langwald, Nina; Ontrup, Jörg; Soltwedel, Thomas; Schewe, Ingo; Klages, Michael; Nattkemper, Tim W (2011): Megafaunal assemblages from two shelf stations west of Svalbard. Marine Biology Research, 7(6), 525-539, https://doi.org/10.1080/17451000.2010.535834
    Publication Date: 2023-12-13
    Description: Megafauna plays an important role in benthic ecosystems and contributes significantly to benthic biomass in the Arctic. Their distribution is mostly studied using towed cameras. Here, we compare the megafauna from two sites located at different distances from the Kongsfjord: one station at the entrance to the fjord, another on the outer shelf. Although they are located only 25 km apart and at comparable depth, there were significant differences in their species composition. While the inshore station was characterized by shrimps (2.57 +/- 2.18 ind./m**2) and brittlestars (3.21 +/- 3.21 ind./m**2), the offshore site harboured even higher brittlestar densities (15.23 +/- 9.32 ind./m**2) and high numbers of the sea urchin Strongylocentrotus pallidus (1.23 +/- 1.09 ind./m**2). Phytodetrital concentrations of the upper sediment centimetres were significantly higher inshore compared with offshore. At a smaller scale, there were also differences in the composition of different transect sections. Several taxa were characterized by a patchy distribution along transects. We conclude that these differences were caused primarily by habitat characteristics. The seafloor inshore was characterized by glacial soft sediments, whereas the station offshore harboured large quantities of stones. Although the use of a new web-2.0-based tool, BIIGLE (http://www.BIIGLE.de), allowed us to analyse more images (~90) than could have been achieved by hand, taxon-area curves indicated that the number of images analysed was not sufficient to capture the species inventory fully. New automated image analysis tools would enable a rapid analysis of larger quantities of camera footage. (An illustrative taxon-accumulation sketch follows this entry.)
    Keywords: Actiniaria; Actiniaria, standard deviation; Actiniidae; Actinostolidae; Alcyonacea; Alcyonacea, standard deviation; Amblyraja radiata; Amblyraja radiata, standard deviation; Amphicteis gunneri; Amphipoda; Amphipoda, standard deviation; Anarhichas minor; Anarhichas minor, standard deviation; Anobothrus gracilis; Anthozoa; Anthozoa, standard deviation; Area/locality; Aristias tumidus; ARK-XXIII/2; Arrhis phyllonyx; Artacama proboscidea; Artediellus atlanticus; Artemisina apollinis; Artemisina apollinis, standard deviation; Ascidiacea; Ascidiacea, standard deviation; Astarte montagui; Asteroidea; Asteroidea, standard deviation; AWI; Brada granulosa; Brada inhabilis; Branchiomma sp.; Bylgides elegans; Bylgides groenlandicus; Capitella capitata; Caridea; Caridea, standard deviation; Ceriantharia; Ceriantharia, standard deviation; Chaetozone spp.; Chirimia biceps; Chlamys islandica; Chone sp.; Ciliatocardium ciliatum; Cirratulus sp.; Colossendeis proboscidea; Colossendeis proboscidea, standard deviation; Colus sabini; Coryphella salmonacea; Crossaster papposus; Crossaster papposus, standard deviation; Crustacea; Crustacea, standard deviation; Cryptonatica affinis; Ctenodiscus crispatus; Cylichna sp.; Dendrobeania cf. fruticosa; Eteone flava; Eteone foliosa; Eunoe nodosa; Euphrosine sp.; Eupyrgus scaber; Event label; Frigidoalvania janmayeni; Gadus morhua; Gadus morhua, standard deviation; Gastropoda; Gastropoda, standard deviation; Gattyana cirrhosa; Gersemia rubiformis; Gersemia rubiformis, standard deviation; Golfingia margaritacea; Gymnelus sp.; Halecium muricatum; Halecium scutum; Halirages fulvocincta; Haploops sp.; Harmothoe sp.; Henricia perforata; Heteromastus filiformis; Hiatella sp.; Hippoglossoides platessoides; Hippoglossoides platessoides, standard deviation; Hormathia digitata; Hormathia nodosa; Hyas spp.; Hyas spp., standard deviation; Icasterias panopla; Icasterias panopla, standard deviation; International Polar Year (2007-2008); IPY; Jasmineira cf. schaudinni; Laonice cf. cirrata; Laonice sp.; Leitoscoloplos mammosus; Lepeta caeca; Leptochiton sp.; Lumbrinereidae; Lumpenus lampretaeformis; Lumpenus lampretaeformis, standard deviation; Lycodes gracilis; Lysippe labiata; Maldane cf. arctica; Maldane sarsi; Maldanidae; Method comment; MF; Microcionidae; Multi frame; Munnopsis typica; Myriapora coarctata; Myriapora coarctata, standard deviation; Myriochele cf. oculata; Myriochele heeri; Myxilla sp.; Nemertea; Neoamphitrite affinis; Nephasoma diaphanes; Nephtys ciliata; Neptunea despecta; Nicomache lumbricalis; Nothria conchylega; Nuculana pernula; Nymphon hirtipes; Oedicerotidae; Oenopota sp.; OFOS photographic survey with BIIGLE analysis; Ophiacantha bidentata; Ophiopholis aculeata; Ophiura robusta; Ophiura sarsi; Ophiuroidea; Ophiuroidea, standard deviation; Pandalus sp.; Paramphithoe hystrix; Pedicellaster typicus; Phascolion strombi; Pherusa sp.; Philine finmarchica; Pholoe cf. assimilis; Phoxocephalus holbolli; Pisces; Pisces, standard deviation; Polarstern; Polynoidae; Porifera; Porifera, standard deviation; Praxillura longissima; Prionospio sp.; PS72; PS72/106-4; PS72/107-4; Pteraster cf. 
pulvillus; Sabellidae; Sclerocrangon sp.; Scoletoma fragilis; Sepiolidae; Serpulidae; Serpulidae, standard deviation; Similipecten greenlandicus; Solariella obscura; Spiochaetopterus typicus; Spiophanes kroeyeri; Stegocephalopsis ampulla; Stegopoma plicatile; Strongylocentrotus pallidus; Strongylocentrotus pallidus, standard deviation; Syllis cornuta; Tachyrhynchus reticulatus; Tedania suctoria; Terebellides sp.; Themisto sp.; Volutopsius norwegicus; Yoldiella propinqua; Yoldiella solidula; Zoarcidae; Zoarcidae, standard deviation
    Type: Dataset
    Format: text/tab-separated-values, 392 data points
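    The taxon-area curves mentioned above can be approximated as taxon accumulation curves computed from per-image annotation counts. A minimal sketch in Python, assuming a hypothetical counts array with one row per image and one column per taxon; the array and its values are illustrative placeholders, not the layout of the PANGAEA file.

        # Sketch: taxon accumulation curve from per-image counts (illustrative data).
        import numpy as np

        rng = np.random.default_rng(0)

        def accumulation_curve(counts, permutations=100):
            """counts: (n_images, n_taxa); mean number of taxa observed after
            1..n_images images, averaged over random image orders."""
            presence = counts > 0
            n_images = presence.shape[0]
            curves = np.empty((permutations, n_images))
            for p in range(permutations):
                order = rng.permutation(n_images)
                seen = np.cumsum(presence[order], axis=0) > 0  # taxon seen so far?
                curves[p] = seen.sum(axis=1)                   # richness per step
            return curves.mean(axis=0)

        # Fabricated numbers, purely illustrative: ~90 images, 40 candidate taxa.
        counts = rng.poisson(0.3, size=(90, 40))
        curve = accumulation_curve(counts)
        # If the curve has not flattened at the last image, the image sample is
        # probably too small to capture the full species inventory.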
  • 3
    Publication Date: 2019-09-23
    Type: Article , PeerReviewed
    Format: text
  • 4
    Publication Date: 2019-09-23
    Description: Highlights: • Marine Image Annotation Software (MIAS) are used to assist the annotation of underwater imagery. • We compare 23 MIAS that assist human annotation, including some that also offer automated annotation. • MIAS can run in real time (50%), allow posterior annotation (95%), and interact with databases and data flows (44%). • MIAS differ in data input/output and display, customization, image analysis and re-annotation. • We provide important considerations for selecting MIAS and outline future trends. Abstract: Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interaction and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review their functioning, application trends and developments by comparing general and advanced features of 23 different tools used in underwater image analysis. MIAS requiring human input are essentially graphical user interfaces with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with the video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition (posterior annotation), and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow the input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior, human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation and point counts, and in a few cases the possibility of browsing and editing previous dive logs or analyzing annotation data. The interaction with a database allows the automatic integration of annotations from different surveys, repeated and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of specific software are outlined, the ideal software is discussed and future trends are presented. (A minimal sketch of a time-stamped annotation record follows this entry.)
    Type: Article , PeerReviewed
    Format: text
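    The review above characterizes MIAS as graphical tools that log events in a time-stamped and/or geo-referenced manner and merge them with the position data of the recording platform. A minimal sketch of such an annotation record in Python; all field names and example values are illustrative placeholders, not taken from any particular MIAS.

        # Sketch: a time-stamped, geo-referenced annotation event (illustrative
        # field names; real MIAS schemas differ per tool).
        from dataclasses import dataclass
        from datetime import datetime, timezone

        @dataclass
        class AnnotationEvent:
            timestamp: datetime      # video time code or image capture time
            label: str               # semantic class, e.g. a taxon name
            latitude: float          # platform position at that time
            longitude: float
            depth_m: float
            source: str              # video file + time code, or image file name
            annotator: str           # supports collaborative / repeated annotation

        event = AnnotationEvent(
            timestamp=datetime(2016, 7, 1, 12, 30, 5, tzinfo=timezone.utc),
            label="Kolga hyalina",
            latitude=79.06, longitude=4.18, depth_m=2500.0,
            source="dive042.mp4@00:42:17", annotator="observer_1",
        )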
  • 5
    Publication Date: 2020-06-26
    Description: Highlights: • The proposed method automatically assesses the abundance of poly-metallic nodules on the seafloor. • No manually created feature reference set is required. • Large collections of benthic images from a range of acquisition gear can be analysed efficiently. Abstract: Underwater image analysis is a new field for computational pattern recognition. In academia as well as in industry, it is increasingly common to use camera-equipped stationary landers, autonomous underwater vehicles, ocean floor observatory systems or remotely operated vehicles for image-based monitoring and exploration. The resulting image collections create a bottleneck for manual data interpretation owing to their size. In this paper, the problem of measuring the size and abundance of poly-metallic nodules in benthic images is considered. A foreground/background separation (i.e. separating the nodules from the surrounding sediment) is required to determine the targeted quantities. Poly-metallic nodules are compact (convex), but vary in size and appear as composites with different visual features (color, texture, etc.). Methods for automating nodule segmentation have so far relied on manual training data. However, a hand-drawn, ground-truthed segmentation of nodules and sediment is difficult (or even impossible) to achieve for a sufficient number of images. The new ES4C algorithm (Evolutionary tuned Segmentation using Cluster Co-occurrence and a Convexity Criterion) is presented, which can be applied to a segmentation task without a reference ground truth. First, learning vector quantization groups the visual features in the images into clusters. Second, a segmentation function is constructed by assigning the clusters to classes automatically according to defined heuristics. Using evolutionary algorithms, a quality criterion is maximized to assign cluster prototypes to classes. This criterion integrates the morphological compactness of the nodules as well as feature similarity in different parts of nodules. To assess its applicability, the ES4C algorithm is tested with two real-world data sets. For one of these data sets, a reference gold standard is available, and we report a sensitivity of 0.88 and a specificity of 0.65. Our results show that the applied heuristics, which combine patterns in the feature domain with patterns in the spatial domain, lead to good segmentation results and allow full automation of the resource-abundance assessment for benthic poly-metallic nodules. (A minimal sensitivity/specificity sketch follows this entry.)
    Type: Article , PeerReviewed , info:eu-repo/semantics/article
    Format: text
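    The gold-standard comparison above (sensitivity 0.88, specificity 0.65) can be reproduced conceptually by a pixel-wise comparison of a binary segmentation with a reference mask. A minimal sketch, assuming both masks are boolean arrays of the same shape; this is a generic evaluation, not the ES4C implementation.

        # Sketch: pixel-wise sensitivity and specificity of a nodule segmentation
        # against a reference mask (generic evaluation, not the ES4C code).
        import numpy as np

        def sensitivity_specificity(predicted, reference):
            """predicted, reference: boolean arrays (True = nodule pixel)."""
            tp = np.logical_and(predicted, reference).sum()
            tn = np.logical_and(~predicted, ~reference).sum()
            fp = np.logical_and(predicted, ~reference).sum()
            fn = np.logical_and(~predicted, reference).sum()
            return tp / (tp + fn), tn / (tn + fp)

        # Fabricated masks, purely illustrative:
        rng = np.random.default_rng(1)
        reference = rng.random((256, 256)) < 0.2
        predicted = np.logical_or(reference, rng.random((256, 256)) < 0.1)
        sens, spec = sensitivity_specificity(predicted, reference)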
  • 6
    Publication Date: 2021-01-08
    Description: The volume of digital image data collected in the field of marine environmental monitoring and exploration has been growing at rapidly increasing rates in recent years. Computational support is essential for the timely evaluation of the high volume of marine imaging data, but modern techniques such as deep learning often cannot be applied due to the lack of training data. In this article, we present Unsupervised Knowledge Transfer (UnKnoT), a new method to use the limited amount of training data more efficiently. In order to avoid time-consuming annotation, it employs a technique we call “scale transfer” and enhanced data augmentation to reuse existing training data for object detection of the same object classes in new image datasets. We introduce four fully annotated marine image datasets acquired in the same geographical area but with different gear and distance to the sea floor. We evaluate the new method on the four datasets and show that it can greatly improve object detection performance in the relevant cases compared to object detection without knowledge transfer. We conclude with a recommendation for an image acquisition and annotation scheme that ensures good applicability of modern machine learning methods in the field of marine environmental monitoring and exploration. (A minimal scale-transfer sketch follows this entry.)
    Type: Article , PeerReviewed
    Format: text
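    The “scale transfer” technique described above reuses annotations across datasets acquired with different gear and camera altitudes by bringing images to a common ground resolution before training. A minimal sketch, assuming the resolution in centimetres per pixel is known for both datasets; the function name, file name and values are illustrative assumptions, not the UnKnoT API.

        # Sketch: rescale an image so its ground resolution (cm per pixel) matches
        # that of a target dataset (illustrative, not the UnKnoT implementation).
        from PIL import Image

        def scale_to_resolution(img, source_cm_per_px, target_cm_per_px):
            """Resize so that one pixel covers target_cm_per_px centimetres."""
            factor = source_cm_per_px / target_cm_per_px
            new_size = (round(img.width * factor), round(img.height * factor))
            return img.resize(new_size)

        # Images taken closer to the sea floor (smaller cm/px) are downscaled when
        # transferred to a dataset acquired from a higher altitude, and vice versa.
        img = Image.open("example.jpg")  # hypothetical input image
        img_rescaled = scale_to_resolution(img, source_cm_per_px=0.05,
                                           target_cm_per_px=0.10)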
  • 7
    Frontiers
    In:  Frontiers in Artificial Intelligence, 3 (49).
    Publication Date: 2021-01-08
    Description: Deep artificial neural networks have become the go-to method for many machine learning tasks. In the field of computer vision, deep convolutional neural networks achieve state-of-the-art performance for tasks such as classification, object detection, or instance segmentation. As deep neural networks become more and more complex, their inner workings become more and more opaque, rendering them a “black box” whose decision-making process is no longer comprehensible. In recent years, various methods have been presented that attempt to peek inside the black box and to visualize the inner workings of deep neural networks, with a focus on deep convolutional neural networks for computer vision. These methods can serve as a toolbox to facilitate the design and inspection of neural networks for computer vision and the interpretation of the network's decision-making process. Here, we present the new tool Interactive Feature Localization in Deep neural networks (IFeaLiD), which provides a novel visualization approach to convolutional neural network layers. The tool interprets neural network layers as multivariate feature maps and visualizes the similarity between the feature vectors of individual pixels of an input image in a heat map display. The similarity display can reveal how the input image is perceived by different layers of the network and how the perception of one particular image region compares to the perception of the remaining image. IFeaLiD runs interactively in a web browser and can process even high-resolution feature maps in real time by using GPU acceleration with WebGL 2. We present examples from four computer vision datasets with feature maps from different layers of a pre-trained ResNet101. IFeaLiD is open source and available online at https://ifealid.cebitec.uni-bielefeld.de. (A minimal feature-similarity sketch follows this entry.)
    Type: Article , PeerReviewed
    Format: text
    Format: other
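    The per-pixel similarity display described above can be illustrated by comparing one pixel's feature vector with every position of a convolutional feature map. A minimal sketch using cosine similarity, assuming a feature map of shape (height, width, channels) is already available; this only shows the idea, not IFeaLiD's WebGL implementation.

        # Sketch: cosine-similarity heat map between one reference pixel and all
        # positions of a CNN feature map (conceptual, not the IFeaLiD code).
        import numpy as np

        def similarity_heatmap(features, ref_y, ref_x):
            """features: array (H, W, C); returns (H, W) cosine similarities."""
            ref = features[ref_y, ref_x]                         # (C,)
            dots = np.tensordot(features, ref, axes=([-1], [0]))
            norms = np.linalg.norm(features, axis=-1) * np.linalg.norm(ref)
            return dots / np.maximum(norms, 1e-12)

        # Fabricated feature map, purely illustrative:
        features = np.random.default_rng(2).normal(size=(32, 32, 256))
        heatmap = similarity_heatmap(features, ref_y=10, ref_x=20)
        # High values mark regions that the chosen layer "perceives" as similar
        # to the reference position.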
  • 8
    Publication Date: 2021-02-08
    Description: Digital imaging has become one of the most important techniques in environmental monitoring and exploration. In the case of the marine environment, mobile platforms such as autonomous underwater vehicles (AUVs) are now equipped with high-resolution cameras to capture huge collections of images from the seabed. However, the timely evaluation of all these images presents a bottleneck problem, as tens of thousands or more images can be collected during a single dive. This makes computational support for marine image analysis essential. Computer-aided analysis of environmental images (and marine images in particular) with machine learning algorithms is promising, but challenging and different from other imaging domains because training data and class labels cannot be collected as efficiently and comprehensively as in other areas. In this paper, we present Machine learning Assisted Image Annotation (MAIA), a new image annotation method for environmental monitoring and exploration that overcomes the obstacle of missing training data. The method uses a combination of autoencoder networks and Mask Region-based Convolutional Neural Network (Mask R-CNN), which allows human observers to annotate large image collections much faster than before. We evaluated the method with three marine image datasets featuring different types of background, imaging equipment and object classes. Using MAIA, we were able to annotate objects of interest with an average recall of 84.1%, more than twice as fast as with “traditional” annotation methods, which are purely based on software-supported direct visual inspection and manual annotation. The speed gain increases proportionally with the size of a dataset. The MAIA approach represents a substantial improvement on the path to greater efficiency in the annotation of large benthic image collections. (A minimal sketch of the underlying proposal idea follows this entry.)
    Type: Article , PeerReviewed
    Format: text
    Format: archive
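    The entry above combines autoencoder networks with Mask R-CNN so that only automatically proposed regions need human review. The proposal step can be illustrated by flagging image patches whose reconstruction error is unusually high. A minimal sketch with a stand-in reconstruction function; the threshold, patch layout and stand-in are illustrative assumptions, not the MAIA networks.

        # Sketch: flag unusual image patches by reconstruction error, the basic
        # idea behind autoencoder-based annotation candidates (stand-in
        # reconstruction, not the trained MAIA autoencoders).
        import numpy as np

        def propose_patches(patches, reconstruct, percentile=95):
            """patches: array (N, h, w); returns indices of candidate patches
            whose reconstruction error exceeds the given percentile."""
            errors = np.array([np.mean((p - reconstruct(p)) ** 2) for p in patches])
            return np.nonzero(errors > np.percentile(errors, percentile))[0]

        # Stand-in "autoencoder": reconstructs each patch as its mean value.
        # A trained network reconstructs common seafloor texture well and objects
        # of interest poorly, so the latter receive high errors.
        rng = np.random.default_rng(3)
        patches = rng.normal(size=(1000, 32, 32))
        candidates = propose_patches(patches, reconstruct=lambda p: p.mean())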
  • 9
    Publication Date: 2017-07-18
    Description: Megafauna play an important role in benthic ecosystem function and are sensitive indicators of environmental change. Non-invasive monitoring of benthic communities can be accomplished by seafloor imaging. However, manual quantification of megafauna in images is labor-intensive; therefore, this organism size class is often neglected in ecosystem studies. Automated image analysis has been proposed as a possible approach to such analysis, but the heterogeneity of megafaunal communities poses a non-trivial challenge for such automated techniques. Here, the potential of a generalized object detection architecture, referred to as iSIS (intelligent Screening of underwater Image Sequences), for the quantification of a heterogeneous group of megafauna taxa is investigated. The iSIS system is tuned for a particular image sequence (i.e. a transect) using a small subset of the images, in which megafauna taxa positions were previously marked by an expert. To investigate the potential of iSIS and compare its results with those obtained from human experts, a group of eight different taxa from one camera transect of seafloor images taken at the Arctic deep-sea observatory HAUSGARTEN is used. The results show that inter- and intra-observer agreement of human experts exhibits considerable variation between species, with a similar degree of variation apparent in the automatically derived results obtained by iSIS. Whilst some taxa (e.g. Bathycrinus stalks, Kolga hyalina, small white sea anemone) were well detected by iSIS (i.e. overall Sensitivity: 87%, overall Positive Predictive Value: 67%), some taxa such as the small sea cucumber Elpidia heckeri remain challenging, for both human observers and iSIS. (A minimal detection-evaluation sketch follows this entry.)
    Type: Article , PeerReviewed
    Format: text
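    The sensitivity and positive predictive value reported above can be computed once automatic detections are matched to expert annotations. A minimal sketch with greedy nearest-neighbour matching of point positions; the distance threshold and the matching scheme are generic assumptions, not necessarily the evaluation protocol used for iSIS.

        # Sketch: match detections to expert annotations and derive sensitivity
        # and positive predictive value (generic evaluation, not the iSIS code).
        import numpy as np

        def detection_scores(detections, annotations, max_dist=30.0):
            """detections, annotations: arrays of shape (N, 2) / (M, 2), pixels."""
            unmatched = list(range(len(annotations)))
            tp = 0
            for d in detections:
                if not unmatched:
                    break
                dists = np.linalg.norm(annotations[unmatched] - d, axis=1)
                best = int(np.argmin(dists))
                if dists[best] <= max_dist:
                    tp += 1
                    unmatched.pop(best)
            fp = len(detections) - tp
            fn = len(unmatched)
            sensitivity = tp / (tp + fn) if tp + fn else 0.0
            ppv = tp / (tp + fp) if tp + fp else 0.0
            return sensitivity, ppv

        # Fabricated positions, purely illustrative:
        sens, ppv = detection_scores(np.array([[10.0, 10.0], [200.0, 50.0]]),
                                     np.array([[12.0, 11.0], [400.0, 300.0]]))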
  • 10
    Publication Date: 2018-01-04
    Description: Marine researchers continue to create large quantities of benthic images, e.g. using AUVs (Autonomous Underwater Vehicles). In order to quantify the size of sessile objects in the images, a pixel-to-centimeter ratio is required for each image, often indirectly provided through a geometric laser point (LP) pattern projected onto the seafloor. Manual annotation of these LPs in all images is too time-consuming and thus infeasible for today's data volumes. Because of the technical evolution of camera rigs, the LPs' geometric layout and color features vary between expeditions and projects. This also makes the application of a single algorithm, tuned to one strictly defined LP pattern, ineffective. Here we present the web tool DELPHI, which efficiently learns the LP layout for one image transect/collection from just a small number of hand-labeled LPs and applies this layout model to the rest of the data. This efficient adaptation to new data allows the LPs and the pixel-to-centimeter ratio to be computed fully automatically and with high accuracy. DELPHI is applied to two real-world examples and shows clear improvements regarding the reduction of tuning effort for new LP patterns as well as increased detection performance. (A minimal pixel-to-centimeter sketch follows this entry.)
    Type: Article , PeerReviewed
    Format: text
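    Once the LPs have been located in an image, the pixel-to-centimeter ratio follows from their known physical spacing. A minimal sketch, assuming three detected LPs whose pairwise distance on the seafloor is 50 cm; the spacing, positions and function name are illustrative assumptions, not DELPHI's learned layout model.

        # Sketch: pixel-to-centimeter ratio from detected laser points with a
        # known physical spacing (illustrative values, not the DELPHI model).
        import numpy as np

        def px_to_cm_ratio(laser_points_px, spacing_cm):
            """laser_points_px: (N, 2) detected LP positions in pixels, assumed
            pairwise spacing_cm apart; returns centimeters per pixel."""
            pts = np.asarray(laser_points_px, dtype=float)
            dists = [np.linalg.norm(pts[i] - pts[j])
                     for i in range(len(pts)) for j in range(i + 1, len(pts))]
            return spacing_cm / np.mean(dists)

        ratio = px_to_cm_ratio([[512, 300], [812, 310], [660, 560]], spacing_cm=50.0)
        object_length_cm = 120 * ratio  # an object spanning 120 pixels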