GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: In this paper we investigate and compare different gradient algorithms designed for the domain expression of the shape derivative. Our main focus is to examine the usefulness of reproducing kernel Hilbert spaces for PDE-constrained shape optimisation problems. We show that radial kernels provide convenient formulas for the shape gradient that can be efficiently used in numerical simulations. The shape gradients associated with radial kernels depend on a so-called smoothing parameter that allows a smoothness adjustment of the shape during the optimisation process. Besides, this smoothing parameter can be used to modify the movement of the shape. The theoretical findings are verified in a number of numerical experiments.
    Type of Medium: Online Resource
    Pages: 1 online resource (33 pages, 6,275 kB), diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik No. 2244
    Language: English
    Note: Bibliography: pages 29-31
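    Illustrative sketch (not from the preprint): a minimal NumPy example of the radial-kernel idea summarised above, where a raw shape-gradient field on boundary nodes is turned into a smoother descent direction by a Gaussian kernel whose width sigma plays the role of the smoothing parameter. The circle geometry, the synthetic gradient and the row normalisation are assumptions made purely for illustration.

      import numpy as np

      # Assumed toy setup: boundary nodes on a circle and a noisy "raw" shape
      # gradient; the Gaussian (radial) kernel acts as the smoothing operator.
      rng = np.random.default_rng(0)

      def gaussian_kernel(x, y, sigma):
          d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
          return np.exp(-d2 / (2.0 * sigma**2))

      def smoothed_descent_direction(nodes, raw_gradient, sigma):
          """Map a raw (L2) shape gradient to a smoother descent direction;
          a larger sigma yields a smoother boundary deformation field."""
          K = gaussian_kernel(nodes, nodes, sigma)
          return K @ raw_gradient / K.sum(axis=1, keepdims=True)

      theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
      nodes = np.stack([np.cos(theta), np.sin(theta)], axis=1)
      raw = np.cos(3.0 * theta)[:, None] * nodes + 0.3 * rng.standard_normal((200, 2))

      for sigma in (0.05, 0.2, 0.8):
          v = smoothed_descent_direction(nodes, raw, sigma)
          print(f"sigma={sigma}: max |v| = {np.linalg.norm(v, axis=1).max():.3f}")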
  • 2
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: A hierarchical a posteriori error estimator for the first-order finite element method (FEM) on a red-refined triangular mesh is presented for the 2D Poisson model problem. Reliability and efficiency with some explicit constant are proved for triangulations with inner angles smaller than or equal to π/2. The error estimator does not rely on any saturation assumption and is valid even in the pre-asymptotic regime on arbitrarily coarse meshes. The evaluation of the estimator is a simple post-processing of the piecewise linear FEM without any extra solve plus a higher-order approximation term. The results also allow the striking observation that arbitrary local averaging of the primal variable leads to a reliable and efficient error estimation. Several numerical experiments illustrate the performance of the proposed a posteriori error estimator for computational benchmarks.
    Type of Medium: Online Resource
    Pages: 1 online resource (27 pages, 1,713 kB), diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik No. 2251
    Language: English
    Note: Bibliography: pages 23-25
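    Illustrative sketch (not from the preprint): the red refinement mentioned above splits every triangle into four congruent children via its edge midpoints. Only this mesh operation is shown; the hierarchical error estimator itself, a post-processing of the piecewise linear FEM solution, is not reproduced here.

      import numpy as np

      def red_refine(nodes, triangles):
          """Red-refine a triangulation: nodes is an (N, 2) array, triangles a
          list of 3-tuples of node indices; every triangle becomes four."""
          nodes = list(map(tuple, nodes))
          midpoint = {}  # edge (i, j), i < j  ->  index of its midpoint node

          def mid(i, j):
              key = (min(i, j), max(i, j))
              if key not in midpoint:
                  nodes.append(tuple((np.asarray(nodes[i]) + np.asarray(nodes[j])) / 2.0))
                  midpoint[key] = len(nodes) - 1
              return midpoint[key]

          fine = []
          for a, b, c in triangles:
              ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
              fine += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
          return np.array(nodes), fine

      # toy mesh of the unit square with two triangles
      nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      tris = [(0, 1, 2), (1, 3, 2)]
      nodes2, tris2 = red_refine(nodes, tris)
      print(len(tris2), "triangles after one red refinement of", len(tris))  # 8 from 2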
  • 3
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Type of Medium: Online Resource
    Pages: 1 online resource (54 pages, 983.53 KB), diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 3049
    Language: English
    Note: Bibliography: pages 42-46
  • 4
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: Sampling from probability densities is a common challenge in fields such as Uncertainty Quantification (UQ) and Generative Modelling (GM). In GM in particular, the use of reverse-time diffusion processes that depend on the log-densities of Ornstein-Uhlenbeck forward processes is a popular sampling tool. In [5] the authors point out that these log-densities can be obtained by solution of a Hamilton-Jacobi-Bellman (HJB) equation known from stochastic optimal control. While this HJB equation is usually treated with indirect methods such as policy iteration and unsupervised training of black-box architectures like Neural Networks, we propose instead to solve the HJB equation by direct time integration, using compressed polynomials represented in the Tensor Train (TT) format for spatial discretization. Crucially, this method is sample-free, agnostic to normalization constants and can avoid the curse of dimensionality due to the TT compression. We provide a complete derivation of the HJB equation's action on Tensor Train polynomials and demonstrate the performance of the proposed time-step-, rank- and degree-adaptive integration method on a nonlinear sampling task in 20 dimensions.
    Type of Medium: Online Resource
    Pages: 1 online resource (36 pages, 2.08 MB), illustrations, diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 3078
    Language: English
    Note: Bibliography: pages 25-27
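    Illustrative sketch (not the preprint's tensor-train HJB solver): a toy NumPy example of the sampling mechanism referred to above, in which an Ornstein-Uhlenbeck forward process is reversed in time using the score, i.e. the gradient of the log-density of its marginals. The target is assumed to be a 1D Gaussian N(mu, s^2) so that the score is available in closed form.

      import numpy as np

      mu, s = 2.0, 0.5                    # assumed toy target N(mu, s^2)
      T, n_steps, n_samples = 4.0, 400, 10_000
      dt = T / n_steps
      rng = np.random.default_rng(0)

      def score(x, t):
          """d/dx log p_t(x) for the OU marginal when X_0 ~ N(mu, s^2)."""
          mean = mu * np.exp(-t)
          var = s**2 * np.exp(-2.0 * t) + 1.0 - np.exp(-2.0 * t)
          return -(x - mean) / var

      # Start from (approximately) the OU equilibrium N(0, 1) and integrate the
      # reverse-time SDE dY = [Y + 2 * score(Y, t)] ds + sqrt(2) dW back to t = 0.
      y = rng.standard_normal(n_samples)
      for k in range(n_steps):
          t = T - k * dt
          y += dt * (y + 2.0 * score(y, t)) + np.sqrt(2.0 * dt) * rng.standard_normal(n_samples)

      # up to time-discretisation error the samples follow the target
      print(f"mean {y.mean():.3f} (target {mu}), std {y.std():.3f} (target {s})")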
  • 5
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: We sample from a given target distribution by constructing a neural network which maps samples from a simple reference, e.g., the standard normal distribution, to samples from the target. To that end, we propose using a neural network architecture inspired by the Langevin Monte Carlo (LMC) algorithm. Based on LMC perturbation results, we show approximation rates of the proposed architecture for smooth, log-concave target distributions measured in the Wasserstein-2 distance. The analysis heavily relies on the notion of sub-Gaussianity of the intermediate measures of the perturbed LMC process. In particular, we derive bounds on the growth of the intermediate variance proxies under different assumptions on the perturbations. Moreover, we propose an architecture similar to deep residual neural networks and derive expressivity results for approximating the sample-to-target distribution map.
    Type of Medium: Online Resource
    Pages: 1 online resource (52 pages, 805.07 KB), illustrations, diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 3077
    Language: English
    Note: Bibliography: pages 45-50
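    Illustrative sketch (not the architecture analysed in the preprint): unadjusted Langevin steps act like residual layers that push standard normal reference samples towards a log-concave target, which is the construction principle described above. The 2D Gaussian target, the step size h and the number of layers are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)
      A = np.array([[3.0, 0.5], [0.5, 1.0]])   # precision of the assumed target
      mean = np.array([1.0, -2.0])

      def grad_log_pi(x):
          """Gradient of log N(mean, A^{-1}) for row-vector samples x."""
          return -(x - mean) @ A

      h, n_layers = 0.05, 200
      x = rng.standard_normal((5000, 2))        # samples from the reference
      for _ in range(n_layers):                 # one loop iteration = one "layer"
          x = x + h * grad_log_pi(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)

      print("empirical mean:", x.mean(axis=0))  # close to [1, -2]
      print("empirical cov:\n", np.cov(x.T))    # close to inv(A), up to O(h) bias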
  • 6
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: We present a novel approach to solve Stochastic Differential Equations (SDEs) with Deep Neural Networks by a Deep Operator Network (DeepONet) architecture. The notion of DeepONets relies on operator learning in terms of a reduced basis. We make use of a polynomial chaos expansion (PCE) of stochastic processes and call the corresponding architecture SDEONet. The PCE has been used extensively in the area of uncertainty quantification with parametric partial differential equations. This, however, is not the case with SDEs, where classical sampling methods dominate and functional approaches are rarely seen. A main challenge with truncated PCEs occurs due to the drastic growth of the number of components with respect to the maximum polynomial degree and the number of basis elements. The proposed SDEONet architecture aims to alleviate the issue of exponential complexity by learning a sparse truncation of the Wiener chaos expansion. A complete convergence analysis is presented, making use of recent Neural Network approximation results. Numerical experiments illustrate the promising performance of the suggested approach in 1D and higher dimensions.
    Type of Medium: Online Resource
    Pages: 1 online resource (32 pages, 764.06 KB), diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 3079
    Language: English
    Note: Bibliography: pages 28-30
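    Illustrative sketch: the combinatorial growth mentioned above can be made concrete with a few lines of bookkeeping. With m Gaussian basis coefficients and total polynomial degree up to p, a full chaos truncation has C(m + p, p) terms, which is what motivates learning a sparse truncation. The chosen values of m and p are arbitrary.

      from itertools import product
      from math import comb

      def n_chaos_terms(m, p):
          """Number of multi-indices alpha in N_0^m with |alpha| <= p."""
          return comb(m + p, p)

      # brute-force check of the formula for a small case
      m, p = 4, 3
      explicit = sum(1 for a in product(range(p + 1), repeat=m) if sum(a) <= p)
      assert explicit == n_chaos_terms(m, p)

      for m in (5, 10, 20, 50):
          for p in (1, 2, 3, 4):
              print(f"m={m:3d} basis elements, degree p={p}: {n_chaos_terms(m, p):>12,} terms")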
  • 7
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: Topology optimisation is a mathematical approach relevant to different engineering problems in which material in a defined domain is distributed in some optimal way, subject to a predefined cost function representing desired (e.g., mechanical) properties and constraints. The computation of such an optimal distribution depends on the numerical solution of some physical model (in our case linear elasticity), and robustness is achieved by introducing uncertainties into the model data, namely the forces acting on the structure and variations of the material stiffness, rendering the task high-dimensional and computationally expensive. To alleviate this computational burden, we develop two neural network (NN) architectures that are capable of predicting the gradient step of the optimisation procedure. Since state-of-the-art methods use adaptive mesh refinement, the neural networks are designed to use a sufficiently fine reference mesh such that only one training phase of the neural network suffices. As a first architecture, a convolutional neural network is adapted to the task. To include sequential information of the optimisation process, a recurrent neural network is constructed as a second architecture. A common 2D bridge benchmark is used to illustrate the performance of the proposed architectures. It is observed that the NN prediction of the gradient step clearly outperforms the classical optimisation method, in particular since larger iteration steps become viable.
    Type of Medium: Online Resource
    Pages: 1 online resource (39 pages, 32.65 MB), illustrations, diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 2982
    Language: English
    Note: Bibliography: pages 33-34
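    Illustrative sketch (layer sizes, channels and input resolution are assumptions, not the preprint's architecture): a small PyTorch convolutional network mapping the current material-density field plus a load channel on a fixed reference grid to a predicted gradient step, which is the role of the first architecture described above.

      import torch
      import torch.nn as nn

      class GradientStepCNN(nn.Module):
          """Toy CNN: (density, load) fields in, predicted density update out."""
          def __init__(self, in_channels=2, width=32):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(in_channels, width, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
                  nn.Conv2d(width, 1, kernel_size=3, padding=1),
              )

          def forward(self, fields):
              return self.net(fields)

      model = GradientStepCNN()
      fields = torch.randn(4, 2, 64, 128)   # batch of assumed density + load fields
      print(model(fields).shape)            # torch.Size([4, 1, 64, 128])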
  • 8
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: Ensemble methods have become ubiquitous for the solution of Bayesian inference problems. State-of-the-art Langevin samplers such as the Ensemble Kalman Sampler (EKS), Affine Invariant Langevin Dynamics (ALDI) or its extension using weighted covariance estimates rely on successive evaluations of the forward model or its gradient. A main drawback of these methods hence is their vast number of required forward calls as well as their possible lack of convergence in the case of more involved posterior measures such as multimodal distributions. The goal of this paper is to address these challenges to some extent. First, several possible adaptive ensemble enrichment strategies that successively enlarge the number of particles in the underlying Langevin dynamics are discussed, which in turn leads to a significant reduction of the total number of forward calls. Second, analytical consistency guarantees of the ensemble enrichment method are provided for linear forward models. Third, to address more involved target distributions, the method is extended by applying adapted Langevin dynamics based on a homotopy formalism for which convergence is proved. Finally, numerical investigations of several benchmark problems illustrate the possible gain of the proposed method, comparing it to state-of-the-art Langevin samplers.
    Type of Medium: Online Resource
    Pages: 1 online resource (54 pages, 1.40 MB), illustrations, diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 2987
    Language: English
    Note: Bibliography: pages 35-38
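    Illustrative sketch (heavily simplified): covariance-preconditioned Langevin updates for an ensemble of particles, combined with a crude doubling step standing in for the adaptive enrichment discussed above. ALDI's finite-ensemble correction term and the actual enrichment criterion are omitted; the linear Gaussian setup, step size and doubling schedule are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      G = np.array([[1.0, 0.5], [0.0, 1.0]])   # assumed linear forward model
      y_obs = np.array([1.0, 2.0])
      noise_prec = 4.0 * np.eye(2)
      prior_prec = np.eye(2)

      def grad_log_post(u):
          """Gradient of the Gaussian log-posterior for y = G u + noise."""
          return (y_obs - u @ G.T) @ noise_prec @ G - u @ prior_prec

      h, n_outer, n_inner = 0.02, 5, 400
      u = rng.standard_normal((8, 2))           # small initial ensemble
      for _ in range(n_outer):
          for _ in range(n_inner):
              C = np.cov(u.T) + 1e-6 * np.eye(2)        # ensemble preconditioner
              L = np.linalg.cholesky(C)
              noise = rng.standard_normal(u.shape) @ L.T
              u = u + h * grad_log_post(u) @ C + np.sqrt(2.0 * h) * noise
          # "enrichment": double the ensemble by jittering the existing particles
          u = np.vstack([u, u + 0.1 * rng.standard_normal(u.shape)])

      post_prec = G.T @ noise_prec @ G + prior_prec
      post_mean = np.linalg.solve(post_prec, G.T @ noise_prec @ y_obs)
      print("ensemble size:", len(u))
      print("ensemble mean:", u.mean(axis=0), " exact posterior mean:", post_mean)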
  • 9
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: We propose a new kernel learning approach based on efficient low-rank tensor compression for Gaussian process (GP) regression. The central idea is to compose a low-rank function represented in a hierarchical tensor format with a GP covariance function. Compared to similar deep neural network architectures, this approach makes it possible to learn significantly more expressive features at lower computational cost, as illustrated in the examples. Additionally, over-fitting is avoided with this compositional model by taking advantage of its inherent regularisation properties. Estimates of the generalisation error are compared to five baseline models on three synthetic and six real-world data sets. The experimental results show that the incorporated tensor network enables a highly accurate GP regression with a comparatively low number of trainable parameters. The observed performance is clearly superior (usually by an order of magnitude in mean squared error) to all examined standard models, in particular to deep neural networks with more than 1000 times as many parameters.
    Type of Medium: Online Resource
    Pages: 1 online resource (22 pages, 360.70 KB)
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 2981
    Language: English
    Note: Bibliography: pages 10-14
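    Illustrative sketch: the compositional kernel idea described above, with the GP covariance evaluated on the output of a low-rank function f rather than on the inputs directly. Here f is a fixed separable toy feature map standing in for the hierarchical tensor format, and the data, length scale and noise level are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      def f(x):
          """Toy separable (rank-one) feature map R^2 -> R."""
          return np.sin(2.0 * x[:, 0]) * np.cos(x[:, 1])

      def k(x, z, ell=0.5):
          """Squared-exponential covariance composed with the feature map f."""
          d = f(x)[:, None] - f(z)[None, :]
          return np.exp(-0.5 * (d / ell) ** 2)

      X = rng.uniform(-1.0, 1.0, size=(80, 2))
      y = f(X) ** 3 + 0.05 * rng.standard_normal(80)   # noisy training targets

      Kxx = k(X, X) + 1e-4 * np.eye(80)                # add nugget for stability
      L = np.linalg.cholesky(Kxx)
      alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

      Xtest = rng.uniform(-1.0, 1.0, size=(200, 2))
      pred = k(Xtest, X) @ alpha                       # GP posterior mean
      print("test RMSE:", np.sqrt(np.mean((pred - f(Xtest) ** 3) ** 2)))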
  • 10
    Online Resource
    Berlin : Weierstraß-Institut für Angewandte Analysis und Stochastik Leibniz-Institut im Forschungsverbund Berlin e.V.
    Keywords: Research report
    Description / Table of Contents: Imperfections and inaccuracies in real technical products often influence the mechanical behavior and the overall structural reliability. The prediction of real stress states and possibly resulting failure mechanisms is essential and a real challenge, e.g. in the design process. In this contribution, imperfections in elastic materials such as air voids in adhesive bonds between fiber-reinforced composites are investigated. They are modeled as arbitrarily shaped and positioned. The focus is on local displacement values as well as on associated stress concentrations caused by the imperfections. For this purpose, the resulting complex random one-scale finite element model is numerically solved by a newly developed surrogate model using an overlapping domain decomposition scheme based on the Schwarz alternating method. Here, the actual response of local subproblems associated with isolated material imperfections is determined by a single appropriate surrogate model that allows for an accelerated propagation of randomness. The efficiency of the method is demonstrated for imperfections with elliptical and ellipsoidal shape in 2D and 3D and extended to arbitrarily shaped voids. For the latter, a local surrogate model based on artificial neural networks (ANN) is constructed. Finally, a comparison to experimental results validates the numerical predictions for a real engineering problem.
    Type of Medium: Online Resource
    Pages: 1 online resource (22 pages, 9.16 MB), illustrations, diagrams
    Series Statement: Preprint / Weierstraß-Institut für Angewandte Analysis und Stochastik no. 2928
    Language: English
    Note: Bibliography: pages 19-20
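    Illustrative sketch: the overlapping Schwarz alternating method underlying the surrogate approach described above, reduced to a 1D Poisson problem -u'' = 1 on (0, 1) with homogeneous Dirichlet data, two overlapping subdomains and finite differences. The local subdomain solves shown here are the pieces the preprint replaces by a single reusable surrogate for the imperfection subproblems.

      import numpy as np

      n = 101
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.ones(n)
      u = np.zeros(n)                        # global iterate, satisfies the BCs

      def local_solve(u, lo, hi):
          """Solve -u'' = f on x[lo..hi] with Dirichlet data taken from u."""
          m = hi - lo - 1                    # number of interior unknowns
          A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
               - np.diag(np.ones(m - 1), -1)) / h**2
          b = f[lo + 1:hi].copy()
          b[0] += u[lo] / h**2
          b[-1] += u[hi] / h**2
          u[lo + 1:hi] = np.linalg.solve(A, b)

      lo1, hi1 = 0, 60                       # subdomain 1: [0, 0.6]
      lo2, hi2 = 40, n - 1                   # subdomain 2: [0.4, 1], overlap [0.4, 0.6]
      for _ in range(20):                    # alternating Schwarz sweeps
          local_solve(u, lo1, hi1)
          local_solve(u, lo2, hi2)

      exact = 0.5 * x * (1.0 - x)
      print("max nodal error after 20 sweeps:", np.abs(u - exact).max())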