In:
PLOS ONE, Public Library of Science (PLoS), Vol. 18, No. 2 (2023-02-24), p. e0272103
Abstract:
Diatoms represent one of the morphologically and taxonomically most diverse groups of microscopic eukaryotes. Light microscopy-based taxonomic identification and enumeration of frustules, the silica shells of these microalgae, is broadly used in aquatic ecology and biomonitoring. One key step in emerging digital variants of such investigations is segmentation, a task that has been addressed before, but usually in manually captured megapixel-sized images of individual diatom cells with a mostly clean background. In this paper, we applied deep learning-based segmentation methods to gigapixel-sized, high-resolution scans of diatom slides with a realistically cluttered background. This setup requires large slide scans to be subdivided into small images (tiles) before a segmentation model can be applied to them. This subdivision (tiling), when done using a sliding window approach, often crops relevant objects at the boundaries of individual tiles. We hypothesized that in the case of diatom analysis, reducing the number of such cropped objects in the training data can improve segmentation performance by allowing for a better discrimination of relevant, intact frustules or valves from small diatom fragments, which are considered irrelevant when counting diatoms. We tested this hypothesis by comparing a standard sliding window / fixed-stride tiling approach with two new approaches we term object-based tile positioning with and without object integrity constraint. With all three tiling approaches, we trained Mask R-CNN and U-Net models with different amounts of training data and compared their performance. Object-based tiling with object integrity constraint improved pixel-based precision by 12–17 percentage points without substantially impairing recall when compared with standard sliding window tiling.
We thus propose that training segmentation models with object-based tiling schemes can improve diatom segmentation from large gigapixel-sized images and could potentially also be relevant for other image domains.
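The contrast between the tiling schemes described above can be sketched in a few lines of Python. This is not the authors' implementation; it assumes square tiles, object annotations given as `(x0, y0, x1, y1)` pixel boxes, and reads the "object integrity constraint" as: keep a tile only if no annotated object is partially cropped by its border.

```python
"""Sketch of tile-positioning schemes for extracting training tiles
from a large slide scan (illustrative, not the paper's code)."""

def sliding_window_tiles(img_w, img_h, tile, stride):
    """Standard fixed-stride tiling over the whole scan."""
    return [(x, y)
            for y in range(0, img_h - tile + 1, stride)
            for x in range(0, img_w - tile + 1, stride)]

def _intact(box, x, y, tile):
    """True if `box` is fully inside or fully outside the tile at (x, y),
    i.e. the tile border does not crop it."""
    x0, y0, x1, y1 = box
    inside = x <= x0 and y <= y0 and x1 <= x + tile and y1 <= y + tile
    outside = x1 <= x or x0 >= x + tile or y1 <= y or y0 >= y + tile
    return inside or outside

def object_based_tiles(boxes, img_w, img_h, tile, integrity=True):
    """Center one tile on each annotated object; with `integrity=True`,
    discard tiles whose border would crop any annotated object."""
    tiles = []
    for (x0, y0, x1, y1) in boxes:
        cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
        # clamp the tile so it stays inside the scan
        x = min(max(cx - tile // 2, 0), img_w - tile)
        y = min(max(cy - tile // 2, 0), img_h - tile)
        if integrity and not all(_intact(b, x, y, tile) for b in boxes):
            continue
        tiles.append((x, y))
    return tiles
```

For example, with two objects at `(10, 10, 30, 30)` and `(40, 40, 60, 60)` in a 100 x 100 scan and a tile size of 50, every object-centered tile crops the other object, so the integrity-constrained variant yields no tiles, while the unconstrained variant yields one tile per object.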
Type of Medium:
Online Resource
ISSN:
1932-6203
DOI:
10.1371/journal.pone.0272103
Language:
English
Publisher:
Public Library of Science (PLoS)
Publication Date:
2023
ZDB ID:
2267670-3