GLORIA

GEOMAR Library Ocean Research Information Access


Export
  • 1
    In: Advanced Materials, Wiley, Vol. 32, No. 52 (2020-12)
    Abstract: Developing high-performance donor polymers is important for nonfullerene organic solar cells (NF-OSCs), as state-of-the-art nonfullerene acceptors can only perform well if they are coupled with a matching donor with suitable energy levels. However, there are very limited choices of donor polymers for NF-OSCs, and the most commonly used ones are the polymers PM6 and PM7, which suffer from several problems. First, the performance of these polymers (particularly PM7) relies on precise control of their molecular weights. Also, their optimal morphology is extremely sensitive to any structural modification. In this work, a family of donor polymers is developed based on a random polymerization strategy. These polymers achieve well-controlled morphology and high performance across a variety of chemical structures and molecular weights. The polymer donors are D–A1–D–A2-type random copolymers in which the D and A1 units are monomers originating from PM6 or PM7, while the A2 unit comprises an electron-deficient core flanked by two thiophene rings with branched alkyl chains. Consequently, multiple cases of highly efficient NF-OSCs are achieved with efficiencies between 16.0% and 17.1%. As the electron-deficient cores can be exchanged for many other structural units, the strategy can easily expand the choices of high-performance donor polymers for NF-OSCs.
    Material type: Online resource
    ISSN: 0935-9648, 1521-4095
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication date: 2020
    ZDB Id: 1474949-X
  • 2
    In: Geophysical Prospecting, Wiley, Vol. 71, No. 8 (2023-10), p. 1420-1437
    Abstract: Microseismic datasets typically have waveforms with a relatively low signal-to-noise ratio. To that end, several noise suppression techniques are often applied to improve the signal-to-noise ratio of the recorded waveforms. We apply a linear geometric mode decomposition approach to microseismic datasets for background noise suppression. The geometric mode decomposition method optimizes linear patterns within amplitude-frequency modulated modes and can efficiently distinguish microseismic events (signal) from the background noise. This method can also split linear and non-linear dispersive seismic events into linear modes. The segmented events in different modes can then be added carefully to reconstruct the denoised signal. Geometric mode decomposition is well suited for microseismic acquisitions with smaller receiver spacing, where the signal may exhibit either (nearly) linear or non-linear recording patterns, depending on the source location relative to the receiver array. Using synthetic and real microseismic data examples from limited-aperture downhole recordings only, we show that geometric mode decomposition is robust in suppressing the background noise from the recorded waveforms. We also compare the filtering results from geometric mode decomposition with those obtained from FX-Decon and one-dimensional variational mode decomposition methods. For the examples used, geometric mode decomposition outperforms both FX-Decon and one-dimensional variational mode decomposition in background noise suppression.
    Material type: Online resource
    ISSN: 0016-8025, 1365-2478
    URL: Issue
    Language: English
    Publisher: Wiley
    Publication date: 2023
    ZDB Id: 2020311-1
    ZDB Id: 799178-2
    SSG: 16,13
  • 3
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 88, No. 4 (2023-07-01), p. V303-V315
    Abstract: Ground roll and random noise usually mask primary reflections in land seismic data. Different sets of signal processing methods, based on statistical and/or transformation filtering, are used to suppress these two types of noise. Among these methods, linear-mode decomposition (LMD) decomposes linear and nonlinear seismic events into amplitude-frequency modulated modes using the Wiener filter. Different combinations of these decomposed linear modes can then be used to represent different seismic events. However, LMD requires predefining the level of decomposition, which must be selected carefully to avoid suboptimal binning that can degrade the fidelity of the decomposed seismic modes. To that end, we introduce an adaptive LMD (ALMD) that optimally separates seismic events, ground roll, and random noise. ALMD uses the correlation between the decomposed modes and the input data to determine the decomposition level. Consequently, an optimum decomposition divides the data into linear modes with minimum mixing. In addition, unlike conventional ground roll suppression methods, ALMD does not require estimating the slope or the frequency bandwidth of the ground roll. Moreover, ALMD automates the random noise segregation by separating modes into signal, noise, and mixed modes based on permutation entropy and kurtosis criteria. ALMD iteratively decomposes mixed modes with remnant random noise until a signal or noise criterion is met. Using synthetic and real data examples, we demonstrate that the proposed ALMD is an effective method for separating desired linear and nonlinear events, unwanted ground roll energy, and random noise from the seismic data.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2023
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
  • 4
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 81, No. 4 (2016-07), p. V327-V340
    Abstract: Seismic data denoising and interpolation are essential preprocessing steps in any seismic data processing chain. Sparse transforms with a fixed basis are often used in these two steps. Recently, we have developed an adaptive learning method, the data-driven tight frame (DDTF) method, for seismic data denoising and interpolation. With its adaptability to seismic data, the DDTF method achieves high-quality recovery. For 2D seismic data, the DDTF method is much more efficient than traditional dictionary learning methods, but for 3D or 5D seismic data it incurs a high computational expense. The motivation behind this work is to accelerate the filter bank training process in DDTF while doing as little damage as possible to the recovery quality. The most common acceleration method trains on only a randomly selected subset of the training set; however, this random selection uses no prior information about the data. We have designed a new patch selection method for DDTF seismic data recovery. We suppose that patches with higher variance contain more information related to complex structures and should be selected into the training set with higher probability. First, we calculate the variance of all available patches. Then, for each patch, a uniformly distributed random number is generated, and the patch is preserved if its variance is greater than the random number. Finally, all selected patches are used for filter bank training. We call this procedure the Monte Carlo DDTF method. We have tested the trained filter bank on seismic data denoising and interpolation. Numerical results using this Monte Carlo DDTF method surpass random or regular patch selection DDTF when the sizes of the training sets are the same. We have also used state-of-the-art methods based on the curvelet transform, block matching 4D, and multichannel singular spectrum analysis as comparisons when dealing with field data.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2016
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
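The variance-guided Monte Carlo patch selection described in this abstract is simple to sketch: compute each patch's variance, draw a uniform random number per patch, and keep the patch if its variance exceeds the draw. The snippet below is a minimal illustration of that rule, not the authors' implementation; normalizing the variances to [0, 1] so they share a scale with the uniform draw is an assumption made here.

```python
import numpy as np

def monte_carlo_select(patches, rng=None):
    """Keep each patch with probability tied to its normalized variance:
    draw u ~ Uniform[0, 1) per patch and keep the patch if variance > u.
    `patches` has shape (n_patches, height, width); assumes at least one
    patch has nonzero variance."""
    rng = np.random.default_rng() if rng is None else rng
    var = patches.var(axis=(1, 2))          # per-patch variance
    v = var / var.max()                     # normalize: busiest patch scores 1.0
    return patches[v > rng.uniform(size=v.shape)]
```

With this rule the highest-variance patch is always kept, constant (zero-variance) patches are never kept, and intermediate patches survive in proportion to their variance.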
  • 5
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 83, No. 2 (2018-03-01), p. V83-V97
    Abstract: Acquisition technology advances, as well as the exploration of geologically complex areas, are pushing the quantity of data to be analyzed into the "big-data" era. In our related work, we found that a machine-learning method based on support vector regression (SVR) for intelligent seismic data interpolation can fully use large data sets as training data and can eliminate certain prior assumptions of existing methods, such as linear events, sparsity, or low rank. However, immense training sets not only encompass high redundancy but also result in considerable computational costs, especially for high-dimensional seismic data. We have developed a criterion based on the Monte Carlo method for the intelligent reduction of training sets. For seismic data, the pixel values in each local patch can be regarded as a set of statistical data, and a variance value can be calculated for the patch. A high variance means that events are centered around the corresponding patch or that the pixel values in the patch vary markedly. Patches with high variances are regarded as more representative patches. The Monte Carlo method uses the variance as a constraint and selects only the representative patches, with higher probability, through a series of random positive numbers. After the training set is intelligently reduced through the Monte Carlo method, only these representative patches, constituting the new training set, are input to the SVR-based machine-learning framework to construct a continuous regression model. Meanwhile, the patches with lower variances can be readily interpolated using a simple method and have only a minor influence on the construction of the regression model. Thus, the representative patches are called effective patches. Finally, the missing traces can be generated from the learned regression model. Numerical illustrations on 2D seismic data and results on 3D or 5D data show that the Monte Carlo method can intelligently select the effective patches as the new training set, which greatly decreases redundancy while preserving the reconstruction quality.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2018
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
  • 6
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 85, No. 2 (2020-03-01), p. V157-V168
    Abstract: Unlike a surface survey, a vertical seismic profile (VSP) survey deploys sources on the surface and geophones in a well. VSP provides higher-resolution information about subsurface structures. Faults that cannot be imaged with surface seismic data may be detected with VSP data, and detailed analysis of fracture zones can be achieved with multicomponent VSP. However, one of the main problems is that the sources are seldom acquired on a regular grid in realistic VSP surveys. The irregular sampling causes serious artifacts in migration or imaging, such that data regularization must be implemented first. We have developed a compressive sensing (CS)-based method to regularize nonstationary VSP data. Our method directly operates on irregularly gridded data sets, which is a key contribution compared to the existing CS-based reconstruction methods that work on regular grids. The CS framework consists of a sparsity constraint and a penalty term. We use the curvelet transform as the sparsity constraint on nonstationary events in the regularization term and the nonequispaced Fourier transform to regularize the VSP data in the penalty term. An alternating direction method of multipliers is used to solve the optimization problem. Our method is tested on synthetic and field 2D and 3D VSP data sets. It obtains improved reconstructions of the continuity of the events and produces fewer artifacts compared to the well-known antileakage Fourier transform method.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2020
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
  • 7
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 84, No. 6 (2019-11-01), p. V333-V350
    Abstract: In contrast with traditional seismic noise attenuation algorithms that depend on signal models and their corresponding prior assumptions, a deep neural network for noise removal is trained on a large training set in which the inputs are the raw data sets and the corresponding outputs are the desired clean data. After the completion of training, the deep-learning (DL) method achieves adaptive denoising with no requirement for (1) accurate modeling of the signal and noise or (2) optimal parameter tuning. We call this intelligent denoising. We have used a convolutional neural network (CNN) as the basic tool for DL. For random and linear noise attenuation, the training set is generated with artificially added noise; for the multiple-attenuation step, the training set is generated with the acoustic wave equation. Stochastic gradient descent is used to solve for the optimal parameters of the CNN. The runtime of DL on a graphics processing unit for denoising has the same order as the f-x deconvolution method. Synthetic and field results indicate the potential applications of DL in automatic attenuation of random noise (with unknown variance), linear noise, and multiples.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2019
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
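For the random-noise attenuation described in this abstract, the training set pairs clean data with artificially noised copies. A minimal sketch of that pairing step follows; the Gaussian noise model and the helper name `make_denoise_pairs` are assumptions for illustration, as the abstract does not specify the noise distribution.

```python
import numpy as np

def make_denoise_pairs(clean_patches, noise_std, rng=None):
    """Build (noisy input, clean target) training pairs by adding
    synthetic random noise to clean patches. The network would then be
    trained to map `noisy` back to `clean_patches`."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = clean_patches + rng.normal(0.0, noise_std, clean_patches.shape)
    return noisy, clean_patches
```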
  • 8
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 88, No. 4 (2023-07-01), p. V291-V302
    Abstract: Seismic samples are generally designed to be placed on perfect Cartesian coordinates, that is, on the grid. However, sampling geometry is disturbed by obstacles in field applications. Large obstacles result in missing samples; for small obstacles, geophones or sources are placed at an available off-the-grid location nearest to the designed grid. To achieve simultaneous off-the-grid regularization and missing-data reconstruction for 3D seismic data, we develop a new mathematical model based on a new combined sampling operator, a 3D curvelet transform, and a fast projection onto convex sets (FPOCS) algorithm. The sampling operator combines a binary mask for on-the-grid sample reconstruction with a barycentric Lagrangian (BL) operator for off-the-grid sample regularization. A 2D BL operator is obtained using the tensor product of two 1D BL operators. The inversion problem is efficiently solved with FPOCS. This method is tested on synthetic and field data sets. The reconstruction results outperform methods based on the binary mask alone in terms of signal-to-noise ratio and visual quality.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2023
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
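The 1D barycentric Lagrangian operator this abstract builds its 2D tensor-product operator from follows the standard barycentric interpolation formula: weights w_j = 1 / prod_{k != j}(x_j - x_k), and p(x) = sum(w_j f_j / (x - x_j)) / sum(w_j / (x - x_j)). The sketch below illustrates that generic formula, not the paper's operator; the function names are hypothetical.

```python
import numpy as np

def barycentric_weights(nodes):
    """Barycentric Lagrange weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    diff = nodes[:, None] - nodes[None, :]
    np.fill_diagonal(diff, 1.0)             # avoid dividing by zero on the diagonal
    return 1.0 / diff.prod(axis=1)

def barycentric_eval(nodes, values, x):
    """Evaluate the Lagrange interpolant through (nodes, values) at an
    off-the-grid point x using the barycentric formula."""
    if np.any(np.isclose(x, nodes)):        # x coincides with a node: return its value
        return values[np.argmin(np.abs(nodes - x))]
    w = barycentric_weights(nodes)
    t = w / (x - nodes)
    return (t @ values) / t.sum()
```

Because the interpolant through n + 1 nodes reproduces polynomials up to degree n exactly, evaluating a quadratic sampled at four nodes recovers its exact off-grid values.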
  • 9
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 82, No. 5 (2017-09-01), p. V321-V334
    Abstract: We have found that seismic data can be described by a low-dimensional manifold, and we therefore investigated a low-dimensional manifold model (LDMM) method for extremely strong noise attenuation. The LDMM supposes that the dimension of the patch manifold of seismic data should be low; in other words, the degrees of freedom of the patches should be few. Under the assumption of linear events in a patch, the patch can be parameterized by the intercept and slope of the event if the seismic wavelet is identical everywhere. The denoising problem is formulated as an optimization problem comprising a fidelity term and an LDMM regularization term. We have tested LDMM on synthetic seismic data with different noise levels. LDMM achieves better denoising results than the Fourier, curvelet, and nonlocal-means filtering methods, especially in the presence of strong noise or in low signal-to-noise ratio situations. We have also tested LDMM on field records, showing that LDMM can handle relatively strong noise while preserving weak features.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2017
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
  • 10
    In: GEOPHYSICS, Society of Exploration Geophysicists, Vol. 88, No. 3 (2023-05-01), p. V249-V265
    Abstract: Seismic data interpolation is an essential procedure in seismic data processing. However, conventional interpolation methods may generate inaccurate results due to the simplicity of their assumptions, such as linear events or sparsity. In contrast, deep learning trains a deep neural network on a large data set without relying on predefined assumptions. However, the lack of physical priors in traditional purely data-driven deep-learning frameworks may cause poor generalization across different sampling patterns. Inspired by the framework of projection onto convex sets (POCS), a new neural network is proposed for seismic interpolation, called POCS-Net. The forward Fourier transform, the inverse Fourier transform, and the threshold parameter in POCS are replaced by neural networks that are independent in different iterations. The threshold is trainable in POCS-Net rather than manually set, and a nonnegative constraint is imposed on it to keep it consistent with traditional POCS. POCS-Net is essentially an end-to-end neural network with priors of a sampling pattern and a predefined iterative framework. Numerical results on 3D synthetic and field seismic data sets demonstrate the superior reconstruction accuracy of the proposed method compared with traditional and natural-image-learned POCS methods.
    Material type: Online resource
    ISSN: 0016-8033, 1942-2156
    Language: English
    Publisher: Society of Exploration Geophysicists
    Publication date: 2023
    ZDB Id: 2033021-2
    ZDB Id: 2184-2
    SSG: 16,13
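POCS-Net unrolls the classical POCS iteration, in which a Fourier-domain threshold alternates with reinsertion of the observed samples. As a rough sketch of that classical loop (not the POCS-Net network itself), with an exponentially decaying hard threshold as an assumed schedule:

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=50, decay=0.99):
    """Classical POCS reconstruction: alternate a Fourier-domain hard
    threshold (sparsity projection) with reinsertion of the known
    samples (data-consistency projection). `mask` is 1.0 at observed
    samples and 0.0 at missing ones."""
    x = observed.copy()
    for k in range(n_iter):
        X = np.fft.fft2(x)
        tau = np.abs(X).max() * decay ** (k + 1)   # shrinking hard threshold
        X[np.abs(X) < tau] = 0.0                   # keep strongest coefficients
        x = np.fft.ifft2(X).real
        x = observed * mask + x * (1.0 - mask)     # keep observed samples exactly
    return x
```

For a signal that is sparse in the Fourier domain, e.g. a plane wave with randomly missing samples, this loop recovers the gaps far more accurately than leaving them zero.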