GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    In: Biodiversity Data Journal, Pensoft Publishers, Vol. 7 (2019-09-27)
    Abstract: The 150 grassland plots were located in three study regions in Germany, 50 in each region. The dataset describes the yearly grassland management for each grassland plot using 116 variables. General information includes plot identifier, study region and survey year. Additionally, grassland plot characteristics describe the presence and starting year of drainage and whether arable farming had taken place 25 years before our assessment, i.e. between 1981 and 2006. In each year, the size of the management unit is given which, in some cases, changed slightly across years. Mowing, grazing and fertilisation were systematically surveyed: Mowing is characterised by mowing frequency (i.e. number of cuts per year), dates of cutting and different technical variables, such as type of machine used or use of a conditioner. For grazing, the livestock species and age (e.g. cattle, horse, sheep), the number of animals, stocking density per hectare and total duration of grazing were recorded. As a derived variable, the mean grazing intensity was then calculated by multiplying the livestock units by the duration of grazing per hectare [LSU days/ha]. Different grazing periods during a year, partly involving different herds, were summed to an annual grazing intensity for each grassland. For fertilisation, information on the type and amount of different types of fertilisers was recorded separately for mineral and organic fertilisers, such as solid farmland manure, slurry and mash from a bioethanol factory. Our fertilisation measures neglect dung dropped by livestock during grazing. For each type of fertiliser, we calculated its total nitrogen content, derived from chemical analyses by the producer or agricultural guidelines (Table 3). All three management types, mowing, fertilisation and grazing, were used to calculate a combined land use intensity index (LUI), which is frequently used as a measure of land use intensity. Here, fertilisation is expressed as total nitrogen per hectare [kg N/ha], but does not consider potassium and phosphorus. Information on additional management practices in grasslands was also recorded, including levelling (to tear up matted grass covers), rolling (to remove surface irregularities) and seed addition (to close gaps in the sward). Investigating the relationship between human land use and biodiversity is important to understand if and how humans affect it through the way they manage the land and to develop sustainable land use strategies. Quantifying land use (the ‘X’ in such graphs) can be difficult as humans manage land using a multitude of actions, all of which may affect biodiversity, yet most studies use rather simple measures of land use, for example, by creating land use categories such as conventional vs. organic agriculture. Here, we provide detailed data on grassland management to allow for detailed analyses and the development of land use theory. The raw data have already been used for >100 papers on the effect of management on biodiversity (e.g. Manning et al. 2015).
    Type of Medium: Online Resource
    ISSN: 1314-2828 , 1314-2836
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2019
    ZDB ID: 2736709-5
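
    The abstract above describes two derived measures: the mean grazing intensity (livestock units multiplied by grazing duration, per hectare, in LSU days/ha) and a combined land use intensity index (LUI) built from mowing, fertilisation and grazing. The sketch below is not the dataset authors' code: the function and variable names are invented, and the LUI formulation shown (each component standardized by its regional mean, summed and square-root transformed) is a commonly used form that the abstract itself does not spell out.

      # Illustrative sketch only; names are hypothetical, not the dataset's column names.
      import math

      def grazing_intensity(livestock_units, days_grazed, area_ha):
          # Mean grazing intensity [LSU days/ha]: livestock units multiplied
          # by the duration of grazing, per hectare (as described in the abstract).
          return livestock_units * days_grazed / area_ha

      def annual_grazing_intensity(periods):
          # Several grazing periods in one year (possibly different herds) are
          # summed to a single annual grazing intensity for the grassland.
          return sum(grazing_intensity(lsu, days, ha) for lsu, days, ha in periods)

      def land_use_intensity(mowing, fert_n, grazing, mowing_mean, fert_mean, grazing_mean):
          # Assumed LUI formulation: mowing frequency, fertilisation [kg N/ha] and grazing
          # intensity, each standardized by its regional mean, summed and square-root
          # transformed. The abstract does not give the exact formula.
          return math.sqrt(mowing / mowing_mean + fert_n / fert_mean + grazing / grazing_mean)

      # Example: two grazing periods of (LSU, days, hectares) on one 5 ha plot
      g = annual_grazing_intensity([(12, 30, 5.0), (8, 45, 5.0)])
      print(round(g, 1), "LSU days/ha")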
  • 2
    In: Research Ideas and Outcomes, Pensoft Publishers, Vol. 6 (2020-01-24)
    Abstract: The SYNTHESYS consortium has been operational since 2004, and has facilitated physical access by individual researchers to European natural history collections through its Transnational Access programme (TA). For the first time, SYNTHESYS+ will be offering virtual access to collections through digitisation, with two calls for the programme, the first in 2020 and the second in 2021. The Virtual Access (VA) programme is not a direct digital parallel of Transnational Access: proposals for collections digitisation will be prioritised and carried out based on community demand, and data must be made openly available immediately. A key feature of Virtual Access is that, unlike TA, it does not select the researchers to whom access is provided. Because Virtual Access in this form is new to the community and to the collections-holding institutions, the SYNTHESYS+ consortium invited ideas through an Ideas Call, which opened on 7th October 2019 and closed on 22nd November 2019, in order to assess interest and to trial procedures. This report is intended to provide feedback to those who participated in the Ideas Call and to help all applicants to the first SYNTHESYS+ Virtual Access Call that will be launched on 20th February 2020.
    Type of Medium: Online Resource
    ISSN: 2367-7163
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2020
    ZDB ID: 2833254-4
  • 3
    In: Proceedings of TDWG, Pensoft Publishers, Vol. 1 (2017-07-25), p. e14778-
    Type of Medium: Online Resource
    ISSN: 2535-0897
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2017
    ZDB ID: 3028709-1
  • 4
    In: Biodiversity Information Science and Standards, Pensoft Publishers, Vol. 2 (2018-05-22), p. e26177-
    Abstract: SOCCOMAS is a ready-to-use Semantic Ontology-Controlled Content Management System (http://escience.biowikifarm.net/wiki/SOCCOMAS). Each web content management system (WCMS) run by SOCCOMAS is controlled by a set of ontologies and an accompanying Java-based middleware, with the data housed in a Jena tuple store. The ontologies describe the behavior of the WCMS, including all of its input forms, input controls, data schemes and workflow processes (Fig. 1). Data is organized into different types of data entries, which represent collections of data referring to a particular material entity, for instance an individual specimen. SOCCOMAS implements a suite of general processes, which can be used to manage and organize all data entry types. One category of processes manages the life-cycle of a data entry, including all those required for changing between the following possible entry states: current draft version; backup draft version; recycle bin draft version; deleted draft version; current published version; previously published version. The processes also allow a user to create a revised draft based on the current published version. Another category of processes automatically tracks the overall provenance (i.e. creator, authors, creation and publication date, contributors, relations between different versions, etc.) for each particular data entry. Additionally, on a significantly finer level of granularity, SOCCOMAS also tracks in a detailed change-history log all changes made to a particular data record at the level of individual input fields. All information (data, provenance metadata, change-history metadata) is stored according to Resource Description Framework (RDF)-compliant data schemes in different named graphs (i.e. URIs under which triple statements are stored in the tuple store). All recorded information can be accessed through a SPARQL endpoint. All data entries are Linked Open Data and thus provide access to an HTML representation of the data for visualization in a web browser or as a machine-readable RDF file. The ontology-controlled design of SOCCOMAS allows administrators to easily customize already existing templates for input forms of data entries, define new templates for new types of data entries, and define underlying RDF-compliant data schemes and apply them to each relevant input field. SOCCOMAS provides an engine for running and developing semantic WCMSs, where only ontology editing, but no middleware or front-end programming, is required for adapting the WCMS to one's own specific requirements.
    Type of Medium: Online Resource
    ISSN: 2535-0897
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2018
    ZDB ID: 3028709-1
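
    To make the storage pattern described in the abstract above concrete (data and provenance metadata kept in separate RDF named graphs and exposed through a SPARQL endpoint), here is a minimal rdflib sketch. It is not SOCCOMAS code; the URIs, graph names and the entry class are invented for the example.

      # Minimal named-graph sketch with rdflib; all URIs are hypothetical examples.
      from rdflib import Dataset, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      EX = Namespace("http://example.org/")
      ds = Dataset()

      # One named graph for the data entry itself, one for its provenance metadata
      data_graph = ds.graph(URIRef("http://example.org/graph/entry-42/data"))
      prov_graph = ds.graph(URIRef("http://example.org/graph/entry-42/provenance"))

      data_graph.add((EX["entry-42"], RDF.type, EX.SpecimenEntry))
      prov_graph.add((EX["entry-42"], DCTERMS.creator, Literal("A. Curator")))
      prov_graph.add((EX["entry-42"], DCTERMS.created, Literal("2018-05-22")))

      # SPARQL restricted to the provenance graph, as an endpoint would allow
      results = ds.query("""
          SELECT ?entry ?creator WHERE {
            GRAPH <http://example.org/graph/entry-42/provenance> {
              ?entry <http://purl.org/dc/terms/creator> ?creator .
            }
          }
      """)
      for row in results:
          print(row.entry, row.creator)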
  • 5
    In: Proceedings of TDWG, Pensoft Publishers, Vol. 1 (2017-08-04), p. e20033-
    Type of Medium: Online Resource
    ISSN: 2535-0897
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2017
    ZDB ID: 3028709-1
  • 6
    In: Biodiversity Information Science and Standards, Pensoft Publishers, Vol. 2 (2018-05-22), p. e25535-
    Abstract: Providing data in a semantically structured format has become the gold standard in data science. However, a significant amount of data is still provided as unstructured text, either because it is legacy data or because adequate tools for storing and disseminating data in a semantically structured format are still missing. We have developed a description module for Morph·D·Base, a semantic knowledge base for taxonomic and morphologic data, that enables users to generate highly standardized and formalized descriptions of anatomical entities using free text and ontology-based descriptions. The main organizational backbone of a description in Morph·D·Base is a partonomy, to which the user adds all the anatomical entities of the specimen that they want to describe. Each element of this partonomy is an instance of an ontology class and can be further described in two different ways: as a semantically enriched free-text description annotated with terms from ontologies, and semantically through defined input forms with a wide range of ontology terms to choose from. To facilitate the integration of the free text into a semantic context, text can be automatically annotated using jAnnotator, a JavaScript library that uses about 700 ontologies with more than 8.5 million classes from the National Center for Biomedical Ontology (NCBO) BioPortal. Users choose from suggested class definitions and link them to terms in the text, resulting in a semantic markup of the text. This markup may also include labels of elements that the user has already added to the partonomy. Anatomical entities marked in the text can be added to the partonomy as new elements that can subsequently be described semantically using the input forms. Each free text together with its semantic annotations is stored following the W3C Web Annotation Data Model standard (https://www.w3.org/TR/annotation-model). The whole description, with the annotated free text and the formalized semantic descriptions for each element of the partonomy, is saved in the tuple store of Morph·D·Base. The demonstration is targeted at developers and users of data portals and will give an insight into the semantic Morph·D·Base knowledge base (https://proto.morphdbase.de) and jAnnotator (http://git.morphdbase.de/christian/jAnnotator).
    Type of Medium: Online Resource
    ISSN: 2535-0897
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2018
    ZDB ID: 3028709-1
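
    The abstract above says each annotated free-text span is stored following the W3C Web Annotation Data Model. The sketch below shows roughly what such a record looks like when it links a span of text to an ontology class; it is not actual Morph·D·Base or jAnnotator output, and the identifiers, character offsets and chosen ontology class are invented for the example.

      # Hypothetical Web Annotation (https://www.w3.org/TR/annotation-model) for one
      # annotated term in a free-text description; all IRIs and offsets are examples.
      import json

      annotation = {
          "@context": "http://www.w3.org/ns/anno.jsonld",
          "id": "http://example.org/annotation/1",
          "type": "Annotation",
          # Body: the ontology class the user linked to the marked term (example IRI)
          "body": "http://purl.obolibrary.org/obo/UBERON_0000033",
          "target": {
              # Source: the free-text description the annotated span belongs to
              "source": "http://example.org/description/12/free-text",
              "selector": {
                  # Character offsets of the annotated term within the text
                  "type": "TextPositionSelector",
                  "start": 104,
                  "end": 108
              }
          }
      }

      print(json.dumps(annotation, indent=2))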
  • 7
    In: Proceedings of TDWG, Pensoft Publishers, Vol. 1 (2017-07-25), p. e15141-
    Type of Medium: Online Resource
    ISSN: 2535-0897
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2017
    ZDB ID: 3028709-1
  • 8
    In: Biodiversity Information Science and Standards, Pensoft Publishers, Vol. 3 (2019-06-26)
    Abstract: The landscape of currently existing repositories of specimen data consists of isolated islands, with each applying its own underlying data model. Using standardized protocols such as Darwin Core or ABCD, specimen data and metadata are exchanged and published on web portals such as GBIF. However, data models differ across repositories. This can lead to problems when comparing and integrating content from different systems. For example, in one system there is a field with the label 'determination', in another there is a field with the label 'taxonomic identification'. Both might refer to the same concept of an organism identification process (e.g. 'obi:organism identification assay'; http://purl.obolibrary.org/obo/OBI_0001624), but the intuitive meaning of the content is not clear and the understanding of the providers of the information might differ from that of the users. Without additional information, data integration across isolated repositories is thus difficult and error-prone. As a consequence, interoperability and retrievability of data across isolated repositories is difficult. Linked Open Data (LOD) promises an improvement. URIs can be used for concepts that are ideally created and accepted by a community and that provide machine-readable meanings. LOD thereby supports transfer of data into information and then into knowledge, thus making the data FAIR (Findable, Accessible, Interoperable, Reusable; Wilkinson et al. 2016). Annotating specimen-associated data with LOD therefore seems to be a promising approach to guarantee interoperability across different repositories. However, all currently used specimen collection management systems are based on relational database systems, which lack semantic transparency and thus do not provide easily accessible, machine-readable meanings for the terms used in their data models. As a consequence, transferring their data contents into an LOD framework may lead to loss or misinterpretation of information. This discrepancy between LOD and relational databases results from the lack of semantic transparency and machine-readability of data in relational databases. Storing specimen collection data as semantic Knowledge Graphs provides semantic transparency and machine-readability of data. Semantic Knowledge Graphs are graphs that are based on the ‘Subject – Property – Object’ syntax of the Resource Description Framework (RDF). The ‘Subject’ and ‘Property’ positions are taken by URIs and the ‘Object’ position can be taken either by a URI or by a label or value. Since a given URI can take the ‘Subject’ position in one RDF statement and the ‘Object’ position in another RDF statement, several RDF statements can be connected to form a directed labeled graph, i.e. a semantic graph. Semantic Knowledge Graphs are graphs in which each described specimen and its parts and properties possess their own URI and thus can be individually referenced. These URIs are used to describe the respective specimen and its properties using the RDF syntax. Additional RDF statements specify the ontology class that each part and property instantiates. The reference to the URIs of the instantiated ontology classes guarantees the Findability, Interoperability, and Reusability of information contained in semantic Knowledge Graphs. Specimen collection data contained in semantic Knowledge Graphs can be made Accessible in a human-readable form through an interface and in a machine-readable form through a SPARQL endpoint (https://en.wikipedia.org/wiki/SPARQL). As a consequence, semantic Knowledge Graphs comply with the FAIR guiding principles. By using URIs for the semantic Knowledge Graph of each specimen in the collection, it is also available as LOD. With semantic Morph·D·Base, we have implemented a prototype of this approach that is based on Semantic Programming. We present the prototype and discuss different aspects of how specimen collection data are handled. By using community-created terminologies and standardized methods for the contents created (e.g. species identification), as well as URIs for each expression, we make the data and metadata semantically transparent and communicable. The source code for Semantic Programming and for semantic Morph·D·Base is available from https://github.com/SemanticProgramming. The prototype of semantic Morph·D·Base can be accessed at https://proto.morphdbase.de.
    Type of Medium: Online Resource
    ISSN: 2535-0897
    Language: Unknown
    Publisher: Pensoft Publishers
    Publication Date: 2019
    ZDB ID: 3028709-1
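
    The RDF pattern described in the abstract above (a URI for the specimen and each of its parts, typed against ontology classes and connected by Subject – Property – Object statements) can be sketched with rdflib as follows. This is not Morph·D·Base code: apart from the 'organism identification assay' class (OBI_0001624) cited in the abstract itself, the URIs, the linking property and the label value are invented for the example.

      # Illustrative specimen knowledge graph; the EX URIs, the hasIdentification
      # property and the label value are hypothetical.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import RDF, RDFS

      EX = Namespace("http://example.org/specimen/")
      OBO = Namespace("http://purl.obolibrary.org/obo/")

      g = Graph()
      specimen = EX["S-0001"]
      identification = EX["S-0001-identification-1"]

      # Each part or property of the specimen gets its own URI and instantiates an ontology class
      g.add((identification, RDF.type, OBO.OBI_0001624))        # 'organism identification assay' (cited in the abstract)
      g.add((specimen, EX.hasIdentification, identification))   # hypothetical linking property
      g.add((identification, RDFS.label, Literal("Apis mellifera Linnaeus, 1758")))  # example result

      # Human-readable Turtle; the same graph could also be served via a SPARQL endpoint
      print(g.serialize(format="turtle"))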