GLORIA

GEOMAR Library Ocean Research Information Access



  • 1
    Publication Date: 2021-05-03
    Description: The identity of authors and data providers is crucial for personalized interoperability. The marketplace of available identifiers is crowded, and the right choice is becoming more and more complicated. Even though more than 15 different systems are available, some are still under development and proposed to launch by the end of 2012 ('PubMed Central Author ID' and ORCID). Data management on a scale beyond a single research institute, namely a scientific site including a university with its student education programme, needs to tackle this problem, and so did the Kiel Data Management Infrastructure. The main problem with the identities of researchers is the rather high frequency of position changes during a scientist's career. The required system therefore had to be one that already contains a pool of preregistered people together with their scientific publications from other countries, institutions and organizations. Scanning the author-ID marketplace revealed a high risk of additional workload for the researchers themselves or for the administration, because individuals either need to register an ID for themselves or the chosen register is not yet large enough to simply find the right entry. Libraries, on the other hand, have dealt with authors and their publications for centuries and already maintain high-quality catalogues of person identities. Millions of internationally mapped records are available through collaboration with libraries and can be used in exactly this scope. The international collaboration between libraries (VIAF) provides a mapping between libraries from the US, Canada, the UK, France, Germany and many more. This international library author identification system made it possible to match 60% of all scientists on the first pass. An additional advantage is that librarians can finalize the identity system in a kind of background process.
The Kiel Data Management Infrastructure initiated a web service at Kiel for mapping from one ID to another. This web service supports the scientific workflows that automate the data archiving process at the world data archive PANGAEA. The long-lasting nature of the library identifier enables its use beyond the employment period, since it is independent of the institutional IDM. Access rights and ownership of data can be assured for a very long time, because the national library, with its national scope, hosts the basic system. Making use of this existing system freed resources planned for this task and opened up the chance of interoperability on an international scale for a regional data management infrastructure.
    Type: Conference or Workshop Item , NonPeerReviewed , info:eu-repo/semantics/conferenceObject
    Format: text
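The ID-to-ID mapping web service described in the abstract above can be pictured as a lookup over author records that each carry several identifier schemes. The sketch below is illustrative only: the mapping table, scheme names and example values are invented, not the actual KDMI service or its data.

```python
# Minimal sketch of an ID-to-ID mapping lookup. Each record links one person's
# identifiers across schemes (library authority ID, ORCID, repository-local ID).
# All values here are invented examples.

MAPPINGS = [
    {"gnd": "123456789", "orcid": "0000-0002-1825-0097", "local": "a0042"},
    {"gnd": "987654321", "orcid": "0000-0001-5109-3700", "local": "a0107"},
]

def map_id(value, from_scheme, to_scheme):
    """Resolve an identifier in one scheme to the same person's ID in another."""
    for record in MAPPINGS:
        if record.get(from_scheme) == value:
            return record.get(to_scheme)
    return None  # no mapping known yet; librarians can add it in the background

print(map_id("123456789", "gnd", "orcid"))  # -> 0000-0002-1825-0097
```

Returning `None` for unknown identifiers mirrors the "background process" idea from the abstract: unmatched scientists are resolved later by librarians rather than blocking the workflow.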
  • 2
    In: [Poster] AGU Fall Meeting 2013, 09.-13.12.2013, San Francisco, USA.
    Publication Date: 2021-05-03
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text
  • 3
    In: [Poster] EGU General Assembly 2016, 17.-22.04.2016, Vienna, Austria.
    Publication Date: 2021-05-03
    Description: In times when whole city centres are available at a mouse click to walk through virtually in 3D, reality sometimes becomes neglected. To the up-and-coming generation of scientists it seems unbelievable that scientific sample collections have not been digitised down to the essence of molecules, isotopes and electrons. Like any other geological institute, the Helmholtz Centre for Ocean Research GEOMAR has accumulated thousands of specimens. The samples, collected mainly during marine expeditions, date back as far as 1964. Today GEOMAR houses a central geological sample collection of at least 17,000 m of sediment core and more than 4,500 boxes of hard rock samples and refined sample specimens. This repository, having lain dormant, missed the onset of the interconnected digital age. Physical samples without barcodes, QR codes or RFID tags urgently need to be migrated and reconnected. In our use case, GEOMAR opted for the International Geo Sample Number (IGSN) as the persistent identifier. Consequently, the software CurationDIS by smartcube GmbH was selected as the central component of this project. The software is designed to handle the acquisition and administration of sample material and sample archiving in storage places. In addition, it allows direct embedding of IGSN. We plan to adopt IGSN as a future asset, while for the initial inventory of our sample material, simple but unique QR codes act as "bridging identifiers" during the process. Currently we are compiling an overview of the broad variety of sample types and their associated data. QR-coding of the boxes of rock samples and sediment cores is near completion, delineating their location in the repository and linking a particular sample to any information available about the object. Planning is in progress to streamline the flow from receiving new samples, through their curation, to sharing samples and information publicly.
Additionally, interface planning for linkage to the GEOMAR databases OceanRep (publications) and OSIS (expeditions), as well as for external data retrieval, is in the pipeline. Looking ahead, implementing IGSN, taking on board lessons learned from earlier generations, will enable compliance with our institute's open science policy. It will also allow newly collected samples to be registered already during ship expeditions, so that they receive their "birth certificate" right away in this ever faster revolving scientific world.
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text
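The "bridging identifier" workflow in the abstract above, in which a unique QR code links a box to its shelf location and metadata until a persistent IGSN is minted, can be sketched roughly as follows. The inventory structure, field names and the example IGSN string are assumptions for illustration, not the CurationDIS data model.

```python
# Illustrative sketch of bridging identifiers for a sample repository:
# each physical box gets a unique code (the payload a QR label would encode),
# which links location and metadata until a persistent IGSN replaces it.
import uuid

inventory = {}

def label_box(location, metadata):
    """Assign a unique bridging identifier to a sample box."""
    code = str(uuid.uuid4())  # payload to be printed as the QR code
    inventory[code] = {"location": location, "metadata": metadata, "igsn": None}
    return code

def assign_igsn(code, igsn):
    """Later step: attach the persistent IGSN to the bridged record."""
    inventory[code]["igsn"] = igsn

code = label_box("hall 2, shelf 14", {"cruise": "SO123", "type": "sediment core"})
assign_igsn(code, "IGSN:XX.EXAMPLE.0001")
```

Keeping the bridging code as the primary key even after IGSN assignment means the printed QR labels never have to be replaced; the persistent identifier is simply layered on top.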
  • 4
    In: [Poster] Future Ocean Retreat 2014, 29.-30.09.2014, Schleswig, Germany.
    Publication Date: 2021-05-03
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text
  • 5
    In: [Poster] AGU Fall Meeting 2013, 09.-13.12.2013, San Francisco, USA.
    Publication Date: 2021-05-03
    Description: The architecture of the Kiel Data Management Infrastructure (KDMI) is set up to serve from the data creation process all the way to the data publication procedure. Accordingly, the KDMI manages data right at the beginning of the data life cycle and does not leave data unattended at this very crucial time. Starting from the chosen working procedure through to handwritten protocols or lab notes, the provenance of the resulting research data is captured within the KDMI. The provenance definition system is the fundamental capturing tool for working procedures (see figure 1). The provenance definition is used to enable data input by file import, web client or handwriting recognition. The provenance system for data takes care of unpublished in-house research data created directly on site. This system serves as a master for research data systems with more degrees of freedom in regard to technology, design or performance (e.g. GraphDB). Such research systems can be regarded as compilations of unpublished data and public-domain data, e.g. from World Data Centers or archives. These compilations can be used to run statistical data mining and pattern finding algorithms on these specially designed platforms. The architecture of the KDMI ensures that a technical solution for data correction from the slave systems back to the master system is possible, which improves the quality of the data stored in the provenance system. After the research phase is over and the interpretation is finished, the provenance system is used by a workflow-based publication system called PubFlow. Within PubFlow it is possible to create repeatable workflows to publish data into various external long-term archives or World Data Centers. The KDMI relies on persistent identifiers for samples and person identities to support this automated publication process.
The publication process is the final step of the KDMI; the management responsibility for the long-term part of the data life cycle is handed over to the chosen archive. Nevertheless, the provenance information remains at the KDMI, and the definition may serve for future datasets again. Unattended data may get lost or be destroyed.
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: image
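The capture idea in the abstract above, in which a provenance definition describes a working procedure and every dataset records which definition and input channel produced it, can be sketched as below. The procedure name, field list and record layout are invented for illustration and are not the actual KDMI schema.

```python
# Sketch of provenance-driven data capture: a definition lists the fields a
# working procedure must deliver; capture() validates incoming values against
# it and stamps each record with its origin (definition + input channel).

definition = {
    "procedure": "CTD cast",
    "fields": ["station", "depth_m", "temperature_C", "salinity_PSU"],
}

def capture(values, channel):
    """Validate input against the provenance definition and attach its origin."""
    missing = [f for f in definition["fields"] if f not in values]
    if missing:
        raise ValueError(f"incomplete record, missing: {missing}")
    return {"definition": definition["procedure"], "channel": channel, "data": values}

record = capture(
    {"station": "PS-07", "depth_m": 1200, "temperature_C": 3.4, "salinity_PSU": 34.9},
    channel="web client",  # could also be "file import" or "handwriting recognition"
)
```

Rejecting incomplete records at capture time is the point of putting the definition first: data never enters the master system without the provenance its definition demands.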
  • 6
    Department of Computer Science, Kiel University, Germany
    In: Bericht / Institut für Informatik der Christian-Albrechts-Universität zu Kiel, 0605. Department of Computer Science, Kiel University, Kiel, Germany, 71 pp.
    Publication Date: 2021-05-03
    Description: Nowadays, content management systems are an established technology. Based on the experiences from several application scenarios, we discuss the points of contact between content management systems and other disciplines of information systems engineering, such as data warehouses, data mining, and data integration. We derive a system architecture called "content warehouse" that integrates these technologies and defines a more general and more sophisticated view of content management. As an example, a system for the collection, maintenance, and evaluation of biological content such as survey data or multimedia resources is presented as a case study.
    Type: Report , NonPeerReviewed
    Format: text
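The "content warehouse" idea from the report abstract above, combining content management with warehouse-style evaluation over heterogeneous items, can be pictured with a toy example. The schema and the biological example data are invented for illustration; the report's actual architecture is more general.

```python
# Toy sketch of a content warehouse: structured survey data and multimedia
# resources live in one store, and an evaluation aggregates across both,
# which a plain content management system alone would not offer.

contents = [
    {"kind": "survey", "species": "cod", "count": 12},
    {"kind": "survey", "species": "cod", "count": 7},
    {"kind": "image", "file": "cod_station4.jpg", "species": "cod"},
]

def evaluate(species):
    """Warehouse-style evaluation: aggregate surveys, collect linked media."""
    total = sum(c["count"] for c in contents
                if c["kind"] == "survey" and c["species"] == species)
    media = [c["file"] for c in contents
             if c["kind"] == "image" and c["species"] == species]
    return {"species": species, "total_count": total, "media": media}

print(evaluate("cod"))  # aggregated counts plus the related multimedia items
```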
  • 7
    In: [Poster] EGU General Assembly 2012, 22.-27.04.2012, Vienna, Austria.
    Publication Date: 2021-05-03
    Description: During the last decades, data managers dedicated their work to the pursuit of importable data. In recent years this chase seems to be coming to an end, as funding organisations assume that the approach of data publications with citable data sets will overcome scientists' reluctance to submit their data. But is this true for all the problems we face at the edge of a data avalanche and data-intensive science? The concept of citable data is a logical consequence of connecting the dots. Potential data providers in the past usually complained about the missing credit assignment for data providers, and they still do. The selected way of DOI-captured data sets fits perfectly into the credit system of publisher-driven publications with countable citations, a system scientists have known for approximately 400 years now. Unfortunately, there is a double bind between citability and reusability. While cooperations between publishers and data archives are coming into existence, one question needs to be answered: "Is it really worthwhile in the twenty-first century to force data into the publication process of the seventeenth century?" Data publications enable easy citability, but they do not support easy data reusability for future users. Additional problems occur in such an environment when taking into account the chances of collaborative data corrections in the institutional repository. A future with huge amounts of data connected with publications makes reconsideration towards a more integrated approach reasonable. In the past, data archives were the only infrastructures taking care of long-term data retrievability and availability. Nevertheless, they were never part of the scientific process of data creation, analysis, interpretation and publication. Data archives were regarded as isolated islands in the sea of scientific data.
Accordingly, scientists considered data publications a stumbling block in their daily routines, and still do. The creation of data sets as additional publications is extra workload that many scientists are not yet convinced about. These times are coming to an end now because of the efforts of the funding organisations and the increased awareness of scientific institutions. Right now data archives have their expertise in retrievability and availability, but the new demand for data provenance is not yet included in their systems. So why not take the chance of the scientific institutes stepping in, and split the workload of retrievability and provenance? Such an integrated data environment will be characterized by increased functionality, creditability and structured data from creation onwards, all accompanied by data managers. The Kiel Data Management Infrastructure is creating such an institutional provenance system for the scientific site of Kiel, keeping data sets up to date by synchronisation with the institutional provenance system, which captures all changes and improvements right where they happen. A sophisticated and scalable landscape needs to combine the advantages of the existing data centers, such as their usability and retrievability functionality, with the advantages of decentralised data capturing and provenance. Such a data environment, with synchronisation features and creditability of scientific data to future users, would be capable of the future tasks.
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text
  • 8
    In: [Poster] SOPRAN Annual Meeting 2012, 20.-21.03.2012, Kiel.
    Publication Date: 2021-05-03
    Description: The scientific site of Kiel provides support for projects with data management requirements arising from project size or interdisciplinarity. This infrastructure is the Kiel Data Management Infrastructure (KDMI), initially created by SFB574, SFB754, the Excellence Cluster 'The Future Ocean' and the GEOMAR | Helmholtz Centre for Ocean Research Kiel. To achieve public availability of data from publicly funded projects by the end of the funding period, it is necessary to initiate data acquisition during the data creation process. Accordingly, the KDMI uses a three-level approach to achieve this goal in SOPRAN III. Data management is already involved in the planning of expeditions or experiments. The resulting schedule for data files can be used by the project coordination to increase the efficiency of data sharing within SOPRAN III. The scientists provide files with basic meta-information, which are made available within the virtual research environment to all project members as soon as possible. Final data will be transferred to PANGAEA for long-term availability once the data are analysed and interpreted in a scientific publication, or by the end of SOPRAN III. The Kiel Data Management Team offers a portal for all GEOMAR and Kiel University marine projects. This portal will be used in SOPRAN III in combination with PANGAEA to fulfill the project's data management requirements and to enhance data sharing within SOPRAN III through a file sharing environment for preliminary data not yet suitable for PANGAEA.
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text
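The three-level approach in the abstract above amounts to a simple life cycle for each data file: announced during expedition planning, shared as a preliminary file with basic meta-information in the project portal, and finally transferred to PANGAEA. The stage names and file record below are illustrative, not the actual portal's model.

```python
# Sketch of the three-stage path a data file takes in the described setup.
# Stage names are invented labels for the stages the abstract outlines.

STAGES = ["planned", "shared_preliminary", "archived_pangaea"]

def advance(record):
    """Move a data file to the next stage of its life cycle (no-op at the end)."""
    i = STAGES.index(record["stage"])
    if i + 1 < len(STAGES):
        record["stage"] = STAGES[i + 1]
    return record

f = {"name": "sopran_ctd_leg2.tab", "stage": "planned"}
advance(f)  # now visible to all project members in the portal
advance(f)  # transferred for long-term availability
print(f["stage"])  # -> archived_pangaea
```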
  • 9
    Publication Date: 2021-05-03
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text
  • 10
    Istituto Nazionale di Oceanografia e di Geofisica Sperimentale - OGS, Trieste, Italy
    In: [Poster] IMDIS 2013 International Conference on Marine Data and Information Systems, 23.-25.09.2013, Lucca, Italy. Book of Abstracts, p. 257.
    Publication Date: 2021-05-03
    Type: Conference or Workshop Item , NonPeerReviewed
    Format: text