Entropy stable shock capturing spacetime DG methods for approximating systems of conservation laws.
We propose shock capturing spacetime Discontinuous Galerkin (DG) methods for multi-dimensional systems of conservation laws. We show that these methods are (formally) arbitrarily high-order accurate, satisfy a discrete version of the entropy inequality, and converge to entropy measure valued solutions. We present preconditioners that enhance the efficiency of the method and highlight the ability of spacetime DG methods to compute all-speed flows and allow for spacetime adaptivity. A large number of numerical experiments illustrating the method are also presented. The talk is based on joint work with Andreas Hiltebrand (ETH Zurich).
Dr. Emil Wiedemann, Department of Mathematics, University of British Columbia
Orientation-Preserving Young Measures
Young measures are an important tool in the calculus of variations and nonlinear elasticity theory, where they are used e.g. to describe microstructures in crystals. A realistic mathematical model for the deformation of a solid should, however, not allow the deformation to collapse the solid's volume to zero, or to reverse orientation. It has therefore been an important open question to characterize those Young measures which arise from such physically admissible deformations. I will present such a characterization, which is achieved by the so-called method of convex integration, along with several applications. Joint work with K. Koumatos (Oxford) and F. Rindler (Warwick).
Operator Based Approach to the Problem of Heterogeneous Data Fusion
The problem of data integration and fusion is a longstanding problem in the remote sensing community. It deals with finding effective and efficient ways to integrate information from heterogeneous sensing modalities. In this talk we shall present a completely deterministic approach which exploits fused representations of certain well known data-dependent operators, such as the graph Laplacian and graph Schroedinger operators. It is through the eigendecomposition of these operators that we introduce the notion of fusion/integration of heterogeneous data, such as hyperspectral imagery (HSI) and LIDAR, or spatial information. We verify the results of our methods by applying them to HSI classification.
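As a rough illustration of the operator-based idea (not the speaker's actual pipeline), the sketch below builds a normalized graph Laplacian for each of two stand-in modalities, combines the operators, and uses low-frequency eigenvectors of the fused operator as joint coordinates. The feature matrices and the equal weighting are hypothetical choices.

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Symmetric normalized graph Laplacian from feature vectors (rows of X)."""
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)  # pairwise squared distances
    W = np.exp(-D2 / (2.0 * sigma**2))          # Gaussian affinities
    np.fill_diagonal(W, 0.0)                    # no self-loops
    dinv = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    return np.eye(len(X)) - (dinv[:, None] * W) * dinv[None, :]

rng = np.random.default_rng(0)
hsi = rng.random((50, 8))      # hypothetical per-pixel HSI features
lidar = rng.random((50, 2))    # hypothetical per-pixel LIDAR features

# "Fuse" the modalities by combining their operators, then embed each
# pixel with the low-frequency eigenvectors of the fused operator.
L = 0.5 * graph_laplacian(hsi) + 0.5 * graph_laplacian(lidar)
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:4]       # joint coordinates a classifier could use
```

The fused embedding would then feed a standard classifier; how the two operators are actually combined is precisely the modeling question the talk addresses.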
Dr. Lin Lin, Computational Research Division, Lawrence Berkeley National Laboratory
Elliptic preconditioner for accelerating the self consistent field iteration of Kohn-Sham density functional theory
Kohn-Sham density functional theory (KSDFT) is the most widely used electronic structure theory for molecules and condensed matter systems. Although KSDFT is often stated as a nonlinear eigenvalue problem, an alternative formulation of the problem, which is more convenient for understanding the convergence of numerical algorithms for solving this type of problem, is based on a nonlinear map known as the Kohn-Sham map. The solution to the KSDFT problem is a fixed point of this nonlinear map. The simplest way to solve the KSDFT problem is to apply a fixed point iteration to the nonlinear equation defined by the Kohn-Sham map. This is commonly known as the self-consistent field (SCF) iteration in the condensed matter physics and chemistry communities. However, this simple approach often fails to converge. The difficulty of reaching convergence can be seen from the analysis of the Jacobian matrix of the Kohn-Sham map, which we will present in this talk. The Jacobian matrix is directly related to the dielectric matrix or the linear response operator in the condensed matter community. We will show the different behaviors of insulating and metallic systems in terms of the spectral property of the Jacobian matrix. A particularly difficult case for SCF iteration is systems with mixed insulating and metallic nature, such as metal padded with vacuum, or metallic slabs. We discuss how to use these properties to approximate the Jacobian matrix and to develop effective preconditioners to accelerate the convergence of the SCF iteration. In particular, we introduce a new technique called elliptic preconditioner, which unifies the treatment of large scale metallic and insulating systems at low temperature. Numerical results show that the elliptic preconditioner can effectively accelerate the SCF convergence of metallic systems, insulating systems, and systems of mixed metallic and insulating nature. (Joint work with Chao Yang)
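The fixed-point structure described above can be illustrated with a toy stand-in for the Kohn-Sham map. The map below is invented for illustration (it is not an actual Kohn-Sham map), and the scalar damping parameter only hints at the role a genuine preconditioner plays:

```python
import numpy as np

def ks_map(rho, A, g=0.1):
    """Toy stand-in for the Kohn-Sham map: build an 'effective Hamiltonian'
    from the density rho, diagonalize it, and return the density of the
    lowest orbital. (Invented map, for illustrating the fixed-point setup.)"""
    H = A + g * np.diag(rho)
    _, vecs = np.linalg.eigh(H)
    psi = vecs[:, 0]
    return psi**2

def scf(rho0, A, alpha=0.5, tol=1e-10, maxit=500):
    """Damped fixed-point (simple mixing) iteration rho <- rho + alpha*(F(rho) - rho).
    The scalar alpha stands in for the role a real preconditioner would play."""
    rho = rho0
    for k in range(maxit):
        res = ks_map(rho, A) - rho
        if np.linalg.norm(res) < tol:
            return rho, k
        rho = rho + alpha * res
    return rho, maxit

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2.0                      # fixed symmetric "kinetic + external" part
rho, iters = scf(np.full(8, 1.0 / 8), A)
```

Replacing the scalar mixing by an operator that approximates the inverse Jacobian of the map is the role of the preconditioners discussed in the talk.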
Sparse non-negative matrix factorization with Markov chain Monte Carlo with applications to cancer genomics
Matrix factorization techniques are essential to infer dominant patterns related to temporal processes in big data, such as time course genomics. However, the orthogonality constraint in standard pattern-finding algorithms, including notably principal components analysis (PCA), confounds inference of simultaneous biological processes. Non-negative matrix factorization (NMF) techniques were initially introduced to find non-orthogonal patterns in data, making them ideal techniques for inference of patterns that distinguish concurrent processes. We introduce a Markov chain Monte Carlo NMF algorithm, CoGAPS, which uses an atomic prior to naturally model biological sparsity and correlation structure of data matrices. Comparisons to gradient-based NMF algorithms show that CoGAPS yields more robust and biologically relevant patterns related to biochemical perturbations to cancer cells.
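For orientation, here is the classical multiplicative-update NMF of Lee and Seung, a gradient-based baseline of the kind CoGAPS is compared against. The synthetic data and rank are made up, and this is not the CoGAPS sampler:

```python
import numpy as np

def nmf(X, r, iters=1000, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0:
    a standard gradient-based NMF baseline (not the CoGAPS sampler)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update keeps H nonnegative
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # likewise for W
    return W, H

# Synthetic data: two overlapping, non-orthogonal "patterns" mixed together.
rng = np.random.default_rng(1)
X = np.abs(rng.random((30, 2))) @ np.abs(rng.random((2, 20)))
W, H = nmf(X, 2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because the columns of W need not be orthogonal, the recovered patterns can share genes, which is exactly what makes NMF-type methods suited to concurrent biological processes.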
A kinetic scheme on staggered grids; application to multifluid flows
We introduce a new scheme for 2x2 systems of conservation laws, typically barotropic Euler equations. The design of the scheme is motivated by its application to the simulation of "multifluid flows", where models involve an intricate constraint on the divergence of the velocity field. The scheme works on staggered grids and numerical fluxes have the flavor of kinetic fluxes. We analyze the stability properties: positivity of the density, decay of entropy.
The talk is devoted to theoretical aspects of sparse approximation and optimization. The main motivation for the study of sparse approximation is that many real world signals can be well approximated by sparse ones. Sparse approximation automatically implies a need for nonlinear approximation, in particular, for greedy approximation. We will discuss greedy approximation in different settings: with respect to bases and redundant dictionaries, in Hilbert and in Banach spaces.
We also discuss sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of a few elements from a given system of elements. There is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms. The problem of approximation of a given element of a Banach space by linear combinations of elements from a given system (dictionary) is well studied in nonlinear approximation theory. At first glance the settings of approximation and optimization problems are very different. In the approximation problem an element is given and our task is to find a sparse approximation of it. In optimization theory an energy function is given and we should find an approximate sparse solution to the minimization problem. It turns out that the same technique can be used for solving both problems. We discuss how the technique developed in nonlinear approximation theory, in particular the greedy approximation technique, can be adjusted for finding a sparse solution of an optimization problem.
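A concrete instance of greedy sparse approximation with respect to a redundant dictionary is Orthogonal Matching Pursuit; the Gaussian dictionary and sparsity level below are arbitrary choices for illustration, not tied to the talk's setting:

```python
import numpy as np

def omp(D, y, s):
    """Orthogonal Matching Pursuit: greedily build an s-sparse approximation
    of y from columns ("atoms") of the dictionary D (columns normalized)."""
    residual = y.copy()
    support = []
    for _ in range(s):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the whole support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((40, 100))
D /= np.linalg.norm(D, axis=0)                       # normalized redundant dictionary
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
y = D @ x_true                                       # exactly 3-sparse signal
x_hat = omp(D, y, 3)
```

The same greedy step (pick the element best correlated with the current residual, then re-optimize over the chosen elements) is what carries over from approximation to sparse convex optimization.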
Prof. Manuel Tiglio, CSCAMM and Department of Physics, University of Maryland
Status on Reduced Order Modeling in Gravitational Wave Physics
I will give a status overview of a research program aimed at predicting, evaluating and analyzing gravitational waves from binary compact coalescences, such as binary black hole collisions or binary neutron star inspirals, in real time rather than in months of supercomputer time, in particular on mobile devices. The effort is based on an offline-online decomposition of the problem, reduced bases, surrogate models and fast analysis. The goal of the program is to extract as much astrophysical information as possible from the advanced network of gravitational wave detectors, worth billions of dollars, but the techniques are generic and applicable to many parametrized systems, especially complex ones for which rapid prediction is desirable.
The effort is an interdisciplinary one; the work reported is in collaboration with Pablo Perez De Angelis (LVK Labs), Harbir Antil (GMU), Jonathan Blackman (Caltech), Priscilla Canizares (Cambridge, UK), Sarah Caudill (UWM), Scott Field (UMD), Jonathan Gair (Cambridge, UK), Chad Galley (Caltech), Jason Kaye (Brown Univ), Jan Hesthaven (EPFL, Switzerland), Frank Herrmann (London, UK), Andres Pagliano (LVKLabs), Ricardo Nochetto (UMD), Evan Ochsner (UWM), Vivien Raymond (Caltech), Rory Smith (Caltech), Bela Szilagyi (Caltech).
From Boltzmann to Euler: Hilbert’s 6th problem revisited
This talk addresses the hydrodynamic limit of the Boltzmann equation, namely the compressible Euler equations of gas dynamics. An exact summation of the Chapman-Enskog expansion originally given by Gorban and Karlin is the key to the analysis. An appraisal of the role of viscosity and capillarity in the limiting process is then given, where the analogy is drawn to the limit of the Korteweg-de Vries/Burgers equations as a small parameter tends to zero.
Regularity, blow up, and small scale creation in fluids
The Euler equation of fluid mechanics describes a flow of inviscid and incompressible fluid, and was first written down in 1755. The equation is both nonlinear and nonlocal, and its solutions often create small scales easily and tend to be unstable. I will review some of the background, and then discuss a recent sharp result on small scale creation in solutions of the 2D Euler equation. I will also indicate links to the long open question of finite time blow up for solutions of the 3D Euler equation.
How long does it take to compute the eigenvalues of a random symmetric matrix?
In the early days of scientific computing, Goldstine and von Neumann suggested that it would be fruitful to study the "typical" performance of Gaussian elimination on random data. This approach lay dormant for decades until Alan Edelman's 1989 thesis on the distribution of condition numbers of random matrices. Since then numerical linear algebraists have made basic contributions to random matrix theory, and the study of condition numbers of random matrices has proven to be a rich subject.
We approach the symmetric eigenvalue problem from a similar viewpoint. The underlying mathematical issue is to analyze the number of iterations required for an eigenvalue algorithm to converge. Our study focuses on the QR algorithm, the Toda algorithm and a version of the matrix-sign algorithm. All three algorithms have intimate ties with completely integrable Hamiltonian systems. Our results stress an empirical discovery of "universality in computation". We also show that this problem admits an elegant mathematical formulation and suggests interesting new questions in integrable systems, kinetic theory and random matrix theory. Very little background in these areas will be presumed, and the talk will be self-contained.
This is joint work with Percy Deift (Courant Institute) and Christian Pfrang (JP Morgan).
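The quantity being studied can be caricatured by counting QR steps directly. The sketch below runs the shifted QR algorithm with deflation on a small random symmetric matrix and reports the total number of iterations; the Wilkinson shift and tolerance are standard textbook choices, not necessarily the variant analyzed in the talk:

```python
import numpy as np

def qr_eig_steps(A, tol=1e-12):
    """Symmetric eigenvalues via shifted QR with deflation, returning the
    eigenvalues and the total number of QR steps taken -- the iteration
    count whose distribution over random matrices is the object of study."""
    T = np.array(A, dtype=float)
    scale = np.linalg.norm(T)
    n = T.shape[0]
    eigs, steps = [], 0
    while n > 1 and steps < 1000:
        if np.linalg.norm(T[n-1, :n-1]) <= tol * scale:
            eigs.append(T[n-1, n-1])             # converged: deflate last row/column
            T = T[:n-1, :n-1]
            n -= 1
            continue
        a, b, c = T[n-2, n-2], T[n-1, n-2], T[n-1, n-1]
        d = (a - c) / 2.0
        sgn = np.sign(d) if d != 0 else 1.0
        mu = c - sgn * b * b / (abs(d) + np.hypot(d, b))   # Wilkinson shift
        Q, R = np.linalg.qr(T - mu * np.eye(n))
        T = R @ Q + mu * np.eye(n)               # shifted QR step (a similarity)
        steps += 1
    eigs.append(T[0, 0])
    return np.sort(np.array(eigs)), steps

rng = np.random.default_rng(0)
B = rng.standard_normal((10, 10))
A = (B + B.T) / 2.0                              # random symmetric (GOE-like) matrix
vals, steps = qr_eig_steps(A)
```

Histogramming `steps` over many random draws of `A` is the kind of experiment behind the observed "universality in computation".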