Joint CSCAMM/Mathematics Seminar
January 25th
at 2:00PM Math Colloquium Room #3206
Professor Guillaume Bal, Department of
Applied Physics and Applied Mathematics,
Columbia University
Radiative transfer models: derivation and applications
Radiative transfer equations
have long been used to model the energy density
of waves in random media, with applications in
quantum waves in semiconductors, light
propagation through turbulent atmospheres,
underwater acoustics, and elastic wave
propagation in the Earth's crust. In this talk,
we consider the derivation of such models from
first principles, i.e., from equations for high
frequency wave fields. Mathematically rigorous
derivations are presented in the paraxial and
Itô-Schrödinger approximations of wave
propagation and in the random Liouville regime.
The radiative transfer models are also applied
to the understanding of the enhanced refocusing
properties of time reversed waves propagating in
random media.
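As background, a standard form of the radiative transfer equation for the
phase-space energy density a(t, x, k) (written here in its generic textbook
form for a statistically homogeneous background, not necessarily the precise
model derived in the talk) is

    \[
    \partial_t a + \nabla_k \omega(k) \cdot \nabla_x a
      = \int \sigma(k, k') \bigl( a(t, x, k') - a(t, x, k) \bigr) \, dk' ,
    \]

where \omega(k) is the dispersion relation of the underlying wave equation and
\sigma(k, k') is the scattering cross-section determined by the power spectrum
of the random medium.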
The validity of radiative transfer models to
predict the energy density of high frequency
waves is then investigated numerically.
Two-dimensional acoustics systems of equations
are solved over large domains (on the order of
500 times 500 wavelengths) using parallel
architectures. We demonstrate the very good
accuracy of the macroscopic radiative transfer
models and show the relative statistical
stability of the wave energy density, i.e., the
fact that the latter depends on macroscopic
statistics of the random medium and not on its
specific realization.
The kissing number k(n) is the maximal number of equal nonoverlapping spheres in n-dimensional space that can touch another sphere of the same size. In dimension three this problem was the subject of a famous discussion between Isaac Newton and David Gregory in 1694, and it was finally solved only in 1953 by Schütte and van der Waerden. It has also been proved that the bounds given by Delsarte's method are not good enough to solve the problem in 4-dimensional space.
Delsarte's linear programming method is widely used in coding theory. In this talk we will discuss a solution of the kissing number problem in four dimensions which is based on an extension of the Delsarte method.
This extension also yields a new proof that k(3) < 13.
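For reference, the classical Delsarte-type bound reads as follows (this is the
standard statement from the literature; the talk's extension modifies this
scheme): if f(t) = \sum_{k=0}^{d} f_k G_k^{(n)}(t) is a polynomial expansion in
Gegenbauer polynomials with f_0 > 0 and f_k \ge 0 for all k \ge 1, and if
f(t) \le 0 for all t \in [-1, 1/2], then

    \[
    k(n) \le \frac{f(1)}{f_0} .
    \]

Optimizing over such f is a linear program; in dimension 4 this LP alone only
gives k(4) \le 25, which is why an extension of the method is needed to reach
k(4) = 24.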
Joint CSCAMM/Mathematics Seminar
*TUESDAY*, January 31st *NOTE SPECIAL DAY*
at 2:00PM Math Colloquium Room #3206
Professor Becca Thomases, Courant
Institute of Mathematical Sciences, New York
University
Nonlinear Elasticity and Viscoelasticity
In this talk I will present a
global existence result for small data nonlinear
elasticity which also applies to the Oldroyd-B
viscoelastic fluid model. These results can be
obtained via a general local decay theorem which
applies to a wide variety of isotropic hyperbolic
systems.
While small solutions decay,
the problem for large data is much more
complicated. I will present some recent
numerical work on the Oldroyd-B system which
shows that the system develops large stress
gradients for certain curvilinear flows. These
stress gradients appear to become unbounded as
material parameters (in particular the
Weissenberg number) are increased.
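For context, one common nondimensional form of the Oldroyd-B system (a
standard reference form, not necessarily the scaling used in the talk) couples
the incompressible Navier-Stokes equations to an upper-convected evolution
equation for the extra stress \tau:

    \[
    \mathrm{Re}\,(\partial_t u + u \cdot \nabla u) + \nabla p
      = \beta \Delta u + \nabla \cdot \tau, \qquad \nabla \cdot u = 0,
    \]
    \[
    \partial_t \tau + u \cdot \nabla \tau - (\nabla u)\,\tau - \tau\,(\nabla u)^{T}
      = \frac{1}{\mathrm{Wi}} \bigl( 2 (1 - \beta)\, D(u) - \tau \bigr),
    \]

where D(u) = (\nabla u + \nabla u^{T})/2, Re and Wi are the Reynolds and
Weissenberg numbers, and \beta is the ratio of solvent to total viscosity.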
Professor David Keyes, Applied Physics
and Applied Mathematics, Columbia University
Scalable Solution Algorithms for Magnetically Confined Fusion Energy Simulations
The Terascale Optimal PDE Simulations (TOPS) project is sponsored by the U.S.
Department of Energy to research and deploy a collection of open-source, scalable
solvers (PETSc, Hypre, SuperLU, etc.) for discrete problems arising in several
large-scale applications, including fusion reactor modeling and design. Optimal
complexity methods, such as multigrid/multilevel preconditioners, keep the time
spent in dominant algebraic kernels close to linear as the applications scale on
parallel computers. Krylov accelerators and Jacobian-free variants of Newton's method,
as appropriate, are wrapped outside to deliver robustness in multirate, multiscale
coupled systems, which are solved either implicitly or in more traditional forms of
operator splitting. The TOPS software framework is being extended beyond forward
simulation to optimization. We outline the TOPS research agenda and illustrate with a
range of applications in magnetically confined fusion energy, as the fusion community
gears up for participation in the International Thermonuclear Experimental Reactor
(ITER) consortium, the ultimate goal of which is abundant energy production outside of
the planetary carbon cycle.
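To illustrate the "Jacobian-free" idea mentioned above, here is a minimal
generic sketch in Python/SciPy (not the TOPS/PETSc implementation; all names
are illustrative): the Krylov solver only needs Jacobian-vector products,
which are approximated by a finite difference of the residual, so the Jacobian
is never formed.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
        """Minimal Jacobian-free Newton-Krylov sketch (illustrative only)."""
        u = u0.astype(float).copy()
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            # Matrix-free Jacobian action: J v ~ (F(u + eps*v) - F(u)) / eps
            J = LinearOperator((u.size, u.size),
                               matvec=lambda v: (F(u + eps * v) - r) / eps)
            du, _ = gmres(J, -r)   # inexact inner Krylov solve
            u += du
        return u

    # Usage: solve u**3 + u - 1 = 0 componentwise (root near 0.6823).
    print(jfnk(lambda u: u**3 + u - 1.0, np.zeros(4)))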
Dr. Hao-Min Zhou, School of Mathematics, Georgia Institute of Technology
Variation Models and PDE Techniques in Wavelet Inpainting
In this talk, I will present recent work (in collaboration with Tony Chan (UCLA) and Jackie Shen (Minnesota)) on image inpainting in the wavelet domain. The problem is closely related to classical image inpainting, with the difference that the inpainting regions are in the wavelet domain. This brings new challenges to the reconstruction: there is no geometrically well-defined inpainting region in the pixel domain, and the damage is inhomogeneous. We propose new variational models, in particular total variation minimization in conjunction with wavelets, for the wavelet inpainting problem. The models lead to PDEs, the Euler-Lagrange equations of the variational formulations, in the wavelet domain, which can be solved numerically. The proposed models give effective and automatic control over geometric features of the inpainted images, including sharp edges, even in the presence of substantial loss of wavelet coefficients, including in the low frequencies.
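A model of the type described above can be written schematically as follows (a
generic reconstruction from the abstract, not necessarily the authors' exact
functional): with \beta_{j,k} the received wavelet coefficients, I the index
set of damaged coefficients, and u(\alpha) = \sum_{j,k} \alpha_{j,k} \psi_{j,k},

    \[
    \min_{\alpha} \; \int_{\Omega} \bigl| \nabla u(\alpha) \bigr| \, dx
      \;+\; \frac{\lambda}{2} \sum_{(j,k) \notin I}
      \bigl( \alpha_{j,k} - \beta_{j,k} \bigr)^2 .
    \]

The Euler-Lagrange equation of this functional, taken with respect to the
coefficients \alpha, is the PDE in the wavelet domain that is solved
numerically.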
Joint CSCAMM/Mathematics Seminar
February 15th
at 2:00PM Math Colloquium Room #3206
Professor Wei Cai, Department of Mathematics,
University of North Carolina at Charlotte
Numerical Methods for Modeling Photonics and Nano-Electronics
Computational modeling has established itself as
an indispensable tool for studying new physical
phenomena and behaviors in micro-to-mesoscopic
systems. Numerical modeling based on classical
and quantum physics provides unparalleled
possibilities in investigating physical
processes not accessible to experimental
explorations. Due to the intrinsic multi-scale
and multi-physics nature of devices
operating far from equilibrium and involving
ultrafast kinetics and high-field transport,
there is a great demand for efficient and
accurate numerical algorithms for fast
simulations. In this talk, we will present our
research in computational methods for the
following two problems: (1) Energy transfer in
resonant systems such as coupled microcavity
dielectric resonators and coupled resonant
plasmon silver nanowires, (2) Carrier transport
in quantum dots and nanotransistors.
Several numerical approaches will be studied. In
order to compute the optical field propagation
in heterogeneous media, we have developed 4th
order upwinding embedded boundary methods and
discontinuous spectral element methods for
dispersive lossy Maxwell’s systems. For the
transport in quantum dots and nano-transistors,
we have developed fast integral equation methods
for layered media based on acceleration
algorithms for Green’s functions and 2-D FMM.
Issues of boundary conditions for open quantum
systems using Green’s functions will also be
addressed. Numerical methods for calculating
self-energy and quantum conductance with the
Landauer formula in nano-MOSFET, including the
geometric effects of the quantum devices, will
be presented. Finally, future research issues
will be discussed.
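For reference, the Landauer formula mentioned above expresses the
low-temperature linear-response conductance of a device in terms of its
transmission function at the Fermi energy,

    \[
    G = \frac{2 e^2}{h}\, T(E_F), \qquad
    T(E) = \mathrm{Tr}\bigl[ \Gamma_L\, G^{r}(E)\, \Gamma_R\, G^{a}(E) \bigr],
    \]

where G^{r,a} are the retarded and advanced Green's functions of the device
region and \Gamma_{L,R} = i(\Sigma^{r}_{L,R} - \Sigma^{a}_{L,R}) are the
broadenings obtained from the lead self-energies, which is where the boundary
conditions for the open quantum system enter.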
Joint CSCAMM/Mathematics Seminar
February 22nd
at 2:00PM Math Colloquium Room #3206
Professor Hongkai Zhao, Department of
Mathematics, University of California-Irvine
The Fast Sweeping Method
for Hyperbolic Problems
An efficient iterative algorithm, the fast
sweeping method, for steady hyperbolic equations
will be presented. I will show that the
iterative algorithm has an optimal complexity,
i.e., the number of iterations is finite and is
independent of the mesh size, for the Eikonal equation,
which is a nonlinear hyperbolic boundary value
problem. I will also explain different
convergence mechanisms for hyperbolic and
elliptic problems. Extensions to more general
Hamilton-Jacobi equations and hyperbolic
conservation laws will also be discussed. If
time permits, applications to computer vision
and image processing will be shown.
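A minimal sketch of the fast sweeping idea for the 2-D Eikonal equation
|grad u| = f with u = 0 at source points (a generic textbook version on a
uniform grid, not Prof. Zhao's code): Gauss-Seidel updates are performed with
four alternating sweep orderings so that each family of characteristics is
covered by at least one sweep.

    import numpy as np

    def fast_sweep_eikonal(f, src, h, n_sweeps=4):
        """Solve |grad u| = f on a uniform 2-D grid by fast sweeping.

        f   : (n, m) array of positive slowness values
        src : list of (i, j) source indices where u = 0
        h   : grid spacing
        """
        n, m = f.shape
        u = np.full((n, m), 1e10)
        for i, j in src:
            u[i, j] = 0.0
        for _ in range(n_sweeps):
            # Four sweep orderings: (i ascending/descending) x (j asc/desc)
            for di in (1, -1):
                for dj in (1, -1):
                    for i in range(n)[::di]:
                        for j in range(m)[::dj]:
                            a = min(u[max(i-1, 0), j], u[min(i+1, n-1), j])
                            b = min(u[i, max(j-1, 0)], u[i, min(j+1, m-1)])
                            fh = f[i, j] * h
                            if abs(a - b) >= fh:   # causal one-sided update
                                ubar = min(a, b) + fh
                            else:                  # two-sided quadratic update
                                ubar = 0.5 * (a + b
                                              + np.sqrt(2*fh*fh - (a-b)**2))
                            u[i, j] = min(u[i, j], ubar)
        return u

    # Usage: distance function from the center of a 21x21 grid (f = 1).
    u = fast_sweep_eikonal(np.ones((21, 21)), [(10, 10)], h=1.0)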
Nail A. Gumerov, Institute for Advanced Computer Studies, University of Maryland
Method of Scalar Potentials for the Solution of Maxwell’s Equations in Three Dimensions
By Nail A. Gumerov and Ramani Duraiswami
It is hard to overestimate the practical importance of developing computationally efficient methods for the solution of Maxwell’s equations for large-scale problems. Computation of propagation and scattering of electromagnetic waves is a key issue for antenna design, radar applications, wireless networks, optical devices and materials, diffraction tomography, and many more emerging areas. In the present study we develop a computational method based on a scalar potential representation, which efficiently reduces the solution of Maxwell’s equations to the solution of two scalar Helmholtz equations.
One of the major challenges in developing our formulation was the lack of an existing theory for the translation of such a representation, since the form of the decomposition is not invariant with respect to translations. We developed such a theory by introducing the concept of “conversion” operators, which enables representation of the electric and magnetic vector fields via scalar potentials in an arbitrary reference frame. This speeds up methods such as the Fast Multipole Method, since only two Helmholtz equations need be solved, and moreover, the divergence free constraints are satisfied automatically by construction.
For illustration of the method we implemented an algorithm for the solution of the Mie-type scattering problem from a system of spherical objects of different sizes and dielectric properties, using a variant of the T-matrix method and GMRES. Results of the computations agree well with previous theoretical and experimental results. We also discuss opportunities for applying the method of scalar potentials to the solution of other boundary value problems for Maxwell’s equations.
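The scalar-potential idea can be recalled in its classical (Debye) form; the
following is the textbook representation for source-free time-harmonic fields
in a homogeneous medium, up to sign and normalization conventions, not the
authors' generalized translation theory. Any such field can be written via two
scalar potentials u and v, each solving the Helmholtz equation
(\Delta + k^2) u = (\Delta + k^2) v = 0, as

    \[
    \mathbf{E} = \nabla \times \nabla \times (\mathbf{r}\, u)
               + i \omega \mu\, \nabla \times (\mathbf{r}\, v), \qquad
    \mathbf{H} = \nabla \times \nabla \times (\mathbf{r}\, v)
               - i \omega \epsilon\, \nabla \times (\mathbf{r}\, u).
    \]

The divergence-free constraints \nabla \cdot \mathbf{E} = \nabla \cdot
\mathbf{H} = 0 then hold automatically, which is the "by construction"
property mentioned above.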
Professor Ping Lin, Department of
Mathematics, The National University of
Singapore
A quasi-continuum approximation and its analysis
In many applications materials are modeled by a large number of particles (or atoms),
where each particle interacts with all the others through a pair potential.
The equilibrium configuration of the material is the minimizer of the total energy of the
system. The computational cost is high since the number of atoms is huge. Recently much
attention has been paid to a so-called quasicontinuum (QC) approximation which is a mixed
atomistic/continuum model.
The QC method solves a fully atomistic problem in regions where
the material contains defects (or large deformation gradients), but uses continuum finite
elements to integrate out the majority of the atomistic degrees of freedom in regions where
deformation gradients are small. However, its numerical analysis is still in its infancy. In
this talk we will conduct a convergence analysis of the QC method in the case that there
is no serious defect or that the defect region is small. The difference of our analysis
from conventional finite element analysis is that our exact solution is not a solution of
a continuous partial differential equation but a discrete atomic scale solution which is
not simply related to any conventional partial differential equation. We will consider
both one dimensional and two dimensional cases. Some thoughts about the dynamical case
may be mentioned as well. The QC method may be related to some other fields such as model
reduction and pre-conditioning.
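Schematically (in generic notation with a pair potential \varphi, assumed here
for illustration), the atomistic problem minimizes the total energy

    \[
    E(y) = \sum_{i < j} \varphi\bigl( |y_i - y_j| \bigr)
    \]

over all atom positions y_i, while in regions of smooth deformation the QC
method replaces the atoms by finite elements whose stored energy density is
given by the Cauchy-Born rule,

    \[
    W(F) = \frac{1}{2 V_0} \sum_{r \in \mathcal{L} \setminus \{0\}}
           \varphi\bigl( |F r| \bigr),
    \]

so that only representative atoms near defects keep their own degrees of
freedom.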
Numerical Analysis of Deterministic Approximations for
Elliptic PDEs with Stochastic Coefficients
We present deterministic FEM for the solution of
elliptic problems with coefficients which are
spatially inhomogeneous random fields.
The FEM is based on an M-term Karhunen-Loeve type
(principal component) expansion of the input data,
computed by a generalized FMM, and
on sparse wavelet approximations of meshwidth h,
resp. spectral approximations of order p, for the
random solution's joint probability densities,
parametrized in the principal components of the input data.
Numerical analysis of the random solution's regularity and
of the complexity of the method as h→0, resp.
p→∞, simultaneously with M→∞,
is given.
Joint work with R.A. Todor and P. Frauenfelder of ETH.
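In standard notation, the Karhunen-Loeve (principal component) expansion
referred to above truncates a random coefficient a(x, \omega) with mean
\bar a and covariance kernel C(x, x') after M terms:

    \[
    a(x, \omega) \approx \bar a(x)
      + \sum_{m=1}^{M} \sqrt{\lambda_m}\, \varphi_m(x)\, Y_m(\omega),
    \]

where (\lambda_m, \varphi_m) are the eigenpairs of the covariance operator
with kernel C and the Y_m are uncorrelated random variables; the random
solution is then parametrized by the coordinates Y_1, ..., Y_M.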
Professor Maria Lukacova, Institute of Numerical Simulation at
Hamburg University of Technology
Well-balanced genuinely multidimensional finite volume schemes for hyperbolic balance laws
Many models of geophysical flow arising in oceanography, meteorology or climatology belong to the class of hyperbolic balance laws. We present a new well-balanced finite volume method within the framework of the finite volume evolution Galerkin (FVEG) schemes. The methodology will be illustrated for the shallow water equations with source terms modeling the bottom topography, friction effects and Coriolis forces. Results can be generalized to more complex systems of balance laws.
The FVEG methods belong to the class of genuinely multidimensional methods. They couple a finite volume formulation with approximate evolution operators. The latter are constructed using the bicharacteristics of the multidimensional hyperbolic systems, such that all of the infinitely many directions of wave propagation are taken into account explicitly. We derive a well-balanced approximation of the integral equations and prove that the FVEG scheme is well-balanced for the stationary steady states as well as for steady jets in the rotational frame. Several illustrative examples will confirm this property experimentally as well.
Further, we compare the results obtained by the FVEG scheme with solutions of finite element methods, which are typically used in river engineering. We also present results of an extensive numerical study of the accuracy and computational efficiency of both methods.
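For concreteness, the balance law in question is the shallow water system with
bottom topography b(x, y) and Coriolis parameter f_c (friction terms omitted
in this sketch):

    \[
    \begin{aligned}
    h_t + (hu)_x + (hv)_y &= 0, \\
    (hu)_t + \bigl(hu^2 + \tfrac{1}{2} g h^2\bigr)_x + (huv)_y
      &= -g h\, b_x + f_c\, h v, \\
    (hv)_t + (huv)_x + \bigl(hv^2 + \tfrac{1}{2} g h^2\bigr)_y
      &= -g h\, b_y - f_c\, h u.
    \end{aligned}
    \]

A scheme is well-balanced if it preserves discrete steady states such as the
lake at rest, h + b = const with u = v = 0, exactly.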
Professor Antony Beris, Department of Chemical Engineering, University of Delaware
Numerical Challenges in the Direct Numerical Simulation of Turbulent Viscoelastic Flows
Polymer-induced turbulent drag reduction has
been the subject of intense investigations ever
since its accidental discovery by Toms and
Mysels during the Second World War, with
important applications, the best known of which
is the facilitation of oil transport through
pipelines, such as the Alaskan pipeline.
Recent developments in numerical methods have
allowed us for the first time to
obtain accurate and stable simulations of highly
drag reduced (more than 60%) turbulent channel
flows of dilute polymer solutions. The data
confirm earlier results in our group at lower
drag reduction values whereby the primary
mechanism for drag reduction is the decreased
intensity of the wall eddies, which is the result
of a significantly increased resistance to extensional
deformations contributed by the polymer
additives, exactly as proposed earlier by
Metzner and Lumley. Recent work has been able
to more systematically investigate the changes
to the flow structure effected by the
polymers, and in particular the coherent
structures, using Karhunen-Loeve (K-L) Proper
Orthogonal Decomposition (POD) analysis of the
data. This demonstrated a dramatic enhancement
of the importance of large scale motions with
increased viscoelasticity and an equally
dramatic decrease in the K-L dimension of the
flow (an order of magnitude) as viscoelasticity
increases versus similar Newtonian results.
More recent work aims at building up better
numerical techniques able to probe even higher
drag reductions, close to the maximum drag
reduction (Virk) limit. Towards that goal we
recently developed an exponential mapping for
the viscoelastic conformation tensor that allows
us to preserve its positive definiteness under
all flow conditions exactly. This preservation
avoids the formation of numerically-induced
Hadamard instabilities (thus offering
exceptional stability to the numerical
calculations) and always yields
physically meaningful results under all flow
conditions. These numerical capabilities can
prove essential for the understanding of
turbulence modification through polymer
additives that can potentially lead to the
development of better drag reducers.
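The exponential mapping mentioned above can be summarized as follows (a
schematic of the general idea, closely related to the log-conformation
formulation of Fattal and Kupferman, rather than the authors' exact
equations): instead of evolving the conformation tensor c directly, one
evolves a symmetric tensor \Psi with

    \[
    c = e^{\Psi}, \qquad \Psi = \Psi^{T}.
    \]

Since the matrix exponential of a symmetric tensor is automatically symmetric
positive definite, positive definiteness of c is preserved exactly under all
flow conditions, independently of discretization errors in \Psi.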
Anna Gilbert, Department of Mathematics, University of Michigan
The interplay of analysis and algorithms
It has recently been observed that sparse and
compressible signals can be sketched using very
few nonadaptive linear measurements in
comparison with the length of the signal. This
sketch can be viewed as an embedding of an
entire class of compressible signals into a
low-dimensional space. In particular,
d-dimensional signals with m nonzero entries
(m-sparse signals) can be embedded in O(m log d)
dimensions. To date, most algorithms for
approximating or reconstructing the signal from
the sketch, such as the linear programming
approach proposed by Candès–Tao and Donoho,
require time polynomial in the signal length. I
will talk about new methods for sketching both
m-sparse and compressible signals and novel
signal recovery algorithms. I will also talk
about the types of statistical information which
can be recovered from these sketches and how
sparse representations are obtained more
generally.
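To make the recovery problem concrete, here is a minimal generic sketch of
sparse recovery by orthogonal matching pursuit in Python/NumPy (a standard
greedy baseline; the talk's algorithms are faster, sublinear-time methods,
which this is not):

    import numpy as np

    def omp(Phi, y, m):
        """Recover an m-sparse x from the sketch y = Phi @ x by
        orthogonal matching pursuit (greedy column selection)."""
        residual, support = y.copy(), []
        for _ in range(m):
            # Pick the column most correlated with the current residual.
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x

    # Usage: d = 1000, m = 5 sparse signal, ~60 Gaussian measurements.
    rng = np.random.default_rng(0)
    d, m, n = 1000, 5, 60
    x = np.zeros(d); x[rng.choice(d, m, replace=False)] = rng.normal(size=m)
    Phi = rng.normal(size=(n, d)) / np.sqrt(n)
    print(np.linalg.norm(omp(Phi, Phi @ x, m) - x))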
Professor Chrysoula Tsogka, Department of Mathematics, University of Chicago
Adaptive coherent interferometric imaging
I will discuss a robust, coherent interferometric approach for array imaging in cluttered media, in regimes with significant multipathing of the waves by the inhomogeneities in clutter. In such scattering regimes, the recorded traces at the array have long and noisy codas and classic imaging methods give unstable results. Coherent interferometry is essentially a very efficient statistical smoothing technique that exploits systematically the spatial and temporal coherence in the data to obtain stable images.
I will show that in coherent interferometry, there is a delicate balance between having stable and sharp images and achieving the optimal resolution. This balance depends on two clutter dependent decoherence parameters. I will explain briefly how we can estimate these parameters efficiently during the image formation process.
The robustness of the proposed imaging method will be illustrated with several results.
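For reference, the coherent interferometric (CINT) imaging functional has the
generic form (standard in the CINT literature; the adaptive estimation of the
thresholds is the subject of the talk):

    \[
    I(\mathbf{y}) = \sum_{|x_r - x_{r'}| \le X_d}\;
      \iint_{|\omega - \omega'| \le \Omega_d}
      \hat P(x_r, \omega)\, \overline{\hat P(x_{r'}, \omega')}\;
      e^{\, i \omega' \tau(x_{r'}, \mathbf{y}) - i \omega \tau(x_r, \mathbf{y})}
      \, d\omega \, d\omega' ,
    \]

where \hat P(x_r, \omega) are the Fourier-transformed array traces,
\tau(x_r, \mathbf{y}) is the travel time from receiver x_r to the search point
\mathbf{y}, and X_d, \Omega_d are the decoherence length and decoherence
frequency: the two clutter-dependent parameters that govern the trade-off
between statistical stability and resolution.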
Professor Yosef Yomdin, Weizmann Institute of Science
Fourier Transform of "Simple" Functions
The rate of Fourier approximation of a given
function is determined by its
regularity. For functions with singularities,
even very simple ones, like the Heaviside
step function, the convergence of the Fourier
series is slow, and their reconstruction
from the truncated Fourier data involves
systematic errors (“Gibbs effect”).
It was recently discovered in the work of D.
Donoho, E. Candès and others that
accurate reconstruction from sparse
measurement data (in particular, from
truncated Fourier data) is possible not only
for regular functions, but also for
“compressible” ones - those possessing a sparse
representation in a certain basis.
In many important applications (like Image
Processing) the linear representation
of the data in a certain fixed basis may not be
the most natural starting
point. Instead, we can approximate the data with
geometric models, explicitly
incorporating nonlinear geometric elements like
edges, ridges, etc.
Accordingly, instead of “linear sparseness” we
use another notion of “simplicity”,
based on the rate of the best approximation of a
given function by semialgebraic
functions of a prescribed degree.
The main subject of the present talk is that
such “simple” functions can be
accurately reconstructed (by a non-linear
inversion) from their truncated Fourier
or moment data.
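As a one-line illustration of the slow convergence mentioned at the start: the
Heaviside step function on (-\pi, \pi) has the Fourier series

    \[
    H(x) = \frac{1}{2} + \frac{2}{\pi} \sum_{k \ \mathrm{odd}}
           \frac{\sin kx}{k},
    \]

whose coefficients decay only like 1/k, so the truncated series converges
slowly and overshoots the jump by a fixed amount (about 9% of the jump
height), the Gibbs effect referred to above.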