Speaker: Jiahua Jiang, Virginia Polytechnic Institute and State University
The Reduced Basis Method (RBM) is a popular certified model reduction approach for solving parametrized partial differential equations. One critical stage of the offline portion of the algorithm is a greedy algorithm, requiring maximization of an error estimate over parameter space. In practice this maximization is usually performed by replacing the parameter domain continuum with a discrete "training" set. When the dimension of parameter space is large, it is necessary to significantly increase the size of this training set in order to effectively search parameter space. Large training sets diminish the attractiveness of RBM algorithms since this proportionally increases the cost of the offline phase.
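For concreteness, below is a minimal sketch of the greedy offline loop described above, assuming hypothetical placeholders solve_full (the high-fidelity solver) and error_estimate (the a posteriori error estimator); the expensive step is the sweep of the error estimator over the entire training set at every iteration.

```python
import numpy as np

def greedy_offline(training_set, solve_full, error_estimate, tol=1e-6, max_dim=50):
    """Greedily enrich a reduced basis by maximizing an a posteriori
    error estimate over a discrete training set of parameters."""
    basis = []
    mu = training_set[0]                              # arbitrary starting parameter
    for _ in range(max_dim):
        basis.append(solve_full(mu))                  # one expensive full-order solve
        # Costly step: evaluate the error estimator at EVERY training parameter.
        errors = np.array([error_estimate(m, basis) for m in training_set])
        i_max = int(np.argmax(errors))
        if errors[i_max] < tol:                       # certified tolerance reached
            break
        mu = training_set[i_max]                      # next snapshot parameter
    return basis
```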
In this work we propose novel strategies for offline RBM algorithms that mitigate the computational difficulty of maximizing error estimates over a training set. The main idea is to identify a subset of the training set, a "Surrogate Training Set" (STS), on which to perform the greedy algorithm. The STSs we construct are much smaller in size than the full training set, yet our examples suggest that they are accurate enough to represent the solution manifold of interest at the current offline RBM iteration. We propose two algorithms to construct the STS: the first, the Successive Maximization Method (SMM), is inspired by inverse transform sampling for non-standard univariate probability distributions; the second constructs an STS by identifying pivots in the Cholesky decomposition of an approximate error correlation matrix. We demonstrate the algorithms through numerical experiments, showing that they are capable of accelerating offline RBM procedures without degrading accuracy, assuming that the solution manifold has rapidly decaying Kolmogorov width.
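As a rough illustration of the second strategy (a sketch, not the authors' implementation), the code below selects STS members as the pivots of a partial pivoted Cholesky factorization of a given approximate error correlation matrix C; the matrix C and the target STS size n_sts are assumptions introduced purely for illustration.

```python
import numpy as np

def sts_by_pivoted_cholesky(C, n_sts):
    """Pick STS indices as pivots of a partial pivoted Cholesky
    factorization of an (approximate) error correlation matrix C."""
    n = C.shape[0]
    d = np.diag(C).astype(float)          # diagonal of the running Schur complement
    L = np.zeros((n, n_sts))
    pivots = []
    for k in range(n_sts):
        p = int(np.argmax(d))             # next pivot: largest remaining diagonal entry
        if d[p] <= 1e-14:                 # correlation already captured; stop early
            break
        pivots.append(p)
        # New column of the partial Cholesky factor.
        col = (C[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        L[:, k] = col
        d = np.maximum(d - col**2, 0.0)   # update residual diagonal, clamp round-off
        d[pivots] = 0.0                   # never re-select an existing pivot
    return pivots
```

In this sketch the greedy offline loop would then be run only over the returned pivot indices rather than the full training set.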