Prof Radu Balan, Department of Mathematics and CSCAMM, University of Maryland
Signal Reconstruction From Its Spectrogram
This paper presents a framework for discrete-time signal reconstruction from the absolute values of its short-time Fourier coefficients. Our approach has two steps. In step one we reconstruct a band-diagonal matrix associated with the rank-one operator Kx = xx*. In step two we recover the signal x by solving an optimization problem. The two steps are somewhat independent, and one purpose of this talk is to present a framework that decouples the two problems. The solution to the first step is connected to the problem of constructing frames for spaces of Hilbert-Schmidt operators. The second step is somewhat more elusive. Due to the inherent redundancy in recovering x from its associated rank-one operator Kx, the reconstruction problem allows for imposing supplemental conditions. In this paper we make one such choice that yields a fast and robust reconstruction; however, this choice may not necessarily be optimal in other situations. It is worth mentioning that this second step is related to the problem of finding a rank-one approximation to a matrix with missing data.
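As a minimal illustration of the second step, assuming step one has already produced an approximation K of the rank-one operator Kx = xx* (with any missing entries filled in), the sketch below recovers x, up to a global phase, from the leading eigenpair. This is a generic rank-one approximation, not the specific optimization chosen in the talk.

```python
import numpy as np

def recover_from_rank_one(K):
    """Recover x (up to a global phase) from an approximation K of x x^*.

    Generic rank-one (leading eigenpair) approximation; the actual step-two
    optimization in the talk imposes additional conditions.
    """
    K = 0.5 * (K + K.conj().T)          # symmetrize the noisy estimate
    w, V = np.linalg.eigh(K)            # eigenvalues in ascending order
    lam, v = w[-1], V[:, -1]            # leading eigenpair
    return np.sqrt(max(lam, 0.0)) * v   # rank-one factor, phase-ambiguous

# Tiny usage example: a noisy version of x x^* is approximately inverted.
x = np.array([1.0, -2.0, 0.5, 3.0])
K = np.outer(x, x) + 0.01 * np.random.randn(4, 4)
x_hat = recover_from_rank_one(K)
print(np.abs(np.vdot(x_hat, x)) / (np.linalg.norm(x_hat) * np.linalg.norm(x)))
```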
Dr Triet Le, Department of Mathematics, Yale University
Local scales of oscillatory patterns
In this talk, we study the problem of extracting local scales of oscillatory patterns in images. Given a multi-scale representation {u(t)} of an image f, we are interested in automatically picking out a few choices of t_i(x), which we call local scales, that better represent the multi-scale structure of f at x. We will characterize local scales coming from the Gaussian kernel. Diffusions of nonlinear type are also discussed.
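To make the notion of a multi-scale representation concrete, here is a small sketch that builds a Gaussian scale-space of an image and, at each pixel, picks the scale maximizing a normalized Laplacian response. This is a standard Lindeberg-style selection rule used purely for illustration; the characterization of local scales given in the talk need not coincide with it.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def local_scale_map(f, scales):
    """At each pixel, pick the scale t maximizing |t * Laplacian(G_sqrt(t) * f)|.

    Illustrative Lindeberg-style scale selection, not necessarily the
    definition of local scales characterized in the talk.
    """
    responses = np.stack([np.abs(t * gaussian_laplace(f, sigma=np.sqrt(t)))
                          for t in scales])
    return np.asarray(scales)[np.argmax(responses, axis=0)]

# Usage: a synthetic image with a small and a large blob gets two local scales.
yy, xx = np.mgrid[0:64, 0:64]
f = np.exp(-((xx - 16)**2 + (yy - 16)**2) / (2 * 2.0**2)) \
  + np.exp(-((xx - 44)**2 + (yy - 44)**2) / (2 * 6.0**2))
t_map = local_scale_map(f, scales=[1, 2, 4, 8, 16, 32])
print(t_map[16, 16], t_map[44, 44])   # small scale at the small blob, large at the large one
```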
Prof Brian Hunt, Inst for Physical Science & Technology, University of Maryland
Ensemble Methods and Data Assimilation
I will speak about ensemble methods that approximate and/or analyze a
trajectory (or pseudotrajectory) of a chaotic dynamical system without
linearizing the system, including a particular method ("Local Ensemble
Transform Kalman Filter") developed at the University of Maryland for
data assimilation in spatially extended systems. By "ensemble method"
I mean an iterative procedure that alternately: (1) makes a short-term
forecast from an ensemble of initial conditions; and then (2) adjusts
the ensemble by some prescribed algorithm to determine initial
conditions for the next forecast. The methods I consider seek to
maintain an ensemble whose spread is reasonably small and that
approximately spans the most unstable directions in its vicinity. For
data assimilation, the method inputs a time series of observations of
an otherwise unknown trajectory, and seeks to keep the ensemble close
to that trajectory. I will discuss theoretical results for hyperbolic
systems, and practical results for data assimilation in spatially
extended systems, including weather forecast models.
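As a schematic of the forecast/adjust cycle described above, here is a minimal stochastic (perturbed-observations) ensemble Kalman analysis step. The LETKF developed at Maryland uses a deterministic, spatially localized transform instead, so this is only an illustration of the generic ensemble idea.

```python
import numpy as np

def enkf_analysis(X, y, H, R):
    """One perturbed-observations EnKF analysis step.

    X : (n, m) forecast ensemble (m members), y : (p,) observations,
    H : (p, n) observation operator, R : (p, p) observation error covariance.
    Returns the adjusted ensemble used to start the next forecast.
    Schematic only; the LETKF uses a deterministic square-root transform
    computed locally in space.
    """
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble perturbations
    P = A @ A.T / (m - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + np.linalg.cholesky(R) @ np.random.randn(len(y), m)
    return X + K @ (Y - H @ X)                       # adjusted ensemble

# Usage: 3-variable state, 2 observed components, 10 ensemble members.
X = np.random.randn(3, 10)
H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
R = 0.1 * np.eye(2)
Xa = enkf_analysis(X, np.array([0.5, -0.2]), H, R)
print(Xa.mean(axis=1))
```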
Vortex sheets are idealized models for flows with strong shears concentrated in a thin region, such as the flow trailing an airfoil. In 1991 J.-M. Delort established the existence of a weak solution of the incompressible two-dimensional Euler equations with general vortex sheet initial data of distinguished sign, i.e., in which the vorticity (the curl of the velocity) is a bounded Radon measure of distinguished sign. The original result was proved for flows in the full plane, on a compact manifold and in a bounded domain.
In this talk we revisit the case of a bounded domain and seek to understand it from the vortex dynamics point of view. In particular, we note that, despite circulation (around boundary components) being a conserved quantity for smooth flows, it may cease to be conserved for vortex sheet flows. We extend Delort's proof to include exterior domain flows and show that the same issue arises. Finally, we discuss what is missing for conservation of circulation to hold. This is joint work with D. Iftimie (Lyon), M. C. Lopes Filho (UNICAMP) and F. Sueur (Paris VI).
Prof Alina Chertock, Department of Mathematics, North Carolina State University
Particle Methods for Pressureless Gas Dynamics
In this talk, I will consider one- and two-dimensional pressureless gas dynamics equations. These equations arise in different applications and, in particular, serve as a basic model of formation of large structures in the universe.
The main feature of interest in this problem is the occurrence of strong singularities (delta-functions along surfaces as well as at separate points) and the emergence of vacuum states. The same pressureless equations also arise in semiclassical approximations of highly oscillating solutions of the Schrödinger equation with high-frequency initial data, in which case multivalued solutions -- not the single-valued ones -- are physically relevant.
Both the single-valued and multivalued solutions of the pressureless gas dynamics equations can be accurately captured by low-dissipation particle methods. In these methods, the solution is sought in the form of a linear combination of delta-functions, whose positions and coefficients are evolved in time according to a system of ODEs obtained from a weak formulation of the system of PDEs.
I will first introduce a new sticky particle method which is used for capturing the singular single valued solutions. The method is based on the idea of merging the particles that cluster at singularities and determining the particle velocities from the conservation requirements.
This particle merger procedure results in the desired mass concentration.
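A schematic of the merger step, assuming particles carrying masses m_i and velocities v_i: when a cluster forms, its particles are replaced by a single particle whose mass and momentum are the sums of those of its members, so the merged velocity is the mass-weighted average. This is the conservation requirement in its simplest one-dimensional form; the actual scheme is more elaborate.

```python
import numpy as np

def merge_cluster(m, v):
    """Replace a cluster of particles (masses m, velocities v) by one particle.

    Mass and momentum are conserved, so the merged velocity is the
    mass-weighted average; this produces the delta-function (mass
    concentration) at the clustering point.  Simplified 1-D illustration.
    """
    M = np.sum(m)             # total mass of the merged particle
    V = np.dot(m, v) / M      # velocity from momentum conservation
    return M, V

# Example: three particles colliding at the same location.
print(merge_cluster(np.array([1.0, 2.0, 1.0]), np.array([1.0, 0.0, -2.0])))
```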
I will then discuss how to apply particle methods to the computation of multivalued solutions. In this case, the particles do not interact, and thus several particles are allowed to be located at exactly the same point (representing several branches of the computed solution) and to propagate with velocities that are completely independent of those of their neighbors. The main issue to be discussed here is how the point values of the computed solution can be recovered from its particle distribution.
Analysis of the Boltzmann collision kernel via an N-particle stochastic system
In 1956, Mark Kac proposed a novel approach to the study of the
Boltzmann equation via the large-N limit of a stochastic system of N
particles undergoing binary collisions. In the 1960s, Henry McKean and
his students made many significant contributions to this program,
particularly with regard to the problem of propagation of chaos.
However, analysis of the rate of equilibration for this model remained
an open problem for many years; progress on this front is much more
recent and, until now, had been made only for "Maxwellian molecules".
Recent work by myself, Carvalho and Loss extends this progress to the
physically significant hard-sphere case, as will be explained in this
lecture.
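For orientation, here is a minimal simulation of Kac's caricature of the N-particle collision process: one-dimensional velocities and energy-conserving random rotations of random pairs, i.e., the "Maxwellian molecules" setting with a uniform collision rate. The hard-sphere case discussed in the talk replaces this uniform rate by one depending on relative speeds.

```python
import numpy as np

def kac_walk(v, steps, rng=np.random.default_rng(0)):
    """Kac's N-particle model: repeatedly pick a random pair (i, j) and
    rotate their velocities by a random angle, conserving total energy.
    This is the Maxwellian-molecules caricature, not the hard-sphere model."""
    v = v.copy()
    n = len(v)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        theta = rng.uniform(0, 2 * np.pi)
        vi, vj = v[i], v[j]
        v[i] = vi * np.cos(theta) + vj * np.sin(theta)
        v[j] = -vi * np.sin(theta) + vj * np.cos(theta)
    return v

# Starting far from equilibrium, the empirical velocity distribution
# relaxes toward a Gaussian while the total kinetic energy stays fixed.
v0 = np.ones(1000)
v1 = kac_walk(v0, steps=20000)
print(np.sum(v0**2), np.sum(v1**2))   # energy is conserved
```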
Adaptive regularization methods for nonlinear optimization
Nonlinear nonconvex optimization problems represent the bedrock of numerous
real-life applications, such as data assimilation for weather prediction,
radiation therapy treatment planning, optimal design of energy transmission
networks, and many more. Finding (local) solutions of these problems usually
involves iteratively constructing easier-to-solve local models of the function
to be optimized, with the optimizer of the model taken as an estimate of the
sought-after solution. Linear or quadratic models are usually employed locally
in this context, yielding the well-known steepest descent and Newton approaches,
respectively. However, these approximations are often unsatisfactory either
because they are unbounded in the presence of nonconvexity and hence cannot be
meaningfully optimized, or they are accurate representations of the function
only in a small neighbourhood, yielding only small or no iterative improvements.
Hence such models require some form of regularization to improve algorithm
performance and avoid failure; traditionally, linesearch and trust-region
techniques have been employed for this purpose and represent the
state-of-the-art. Here, we first show that the steepest descent and Newton's
methods, even with the above-mentioned regularization strategies, may both
require a similar number of iterations and function evaluations to reach within
approximate first-order optimality, implying that the known bound for worst-case
complexity of steepest descent is tight and that Newton's method may be as slow
as steepest descent under standard assumptions. Then, a new class of methods
will be presented that approximately globally minimize a quadratic model of the
objective regularized by a cubic term, extending to practical large-scale
problems earlier regularization approaches by Nesterov (2007) and Griewank
(1982). Preliminary numerical experiments show our methods to perform better
than a trust-region implementation, while our convergence results show them to
be at least as reliable as the latter approach. Furthermore, their worst-case
complexity is provably better than that of steepest descent, Newton's and
trust-region methods. This is joint work with Nick Gould (Rutherford Appleton
Laboratory, UK) and Philippe Toint (University of Namur, Belgium).
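The following sketch conveys the flavor of the adaptive cubic-regularization idea, assuming exact gradients and Hessians and a crude fixed-point solve of the cubic subproblem; the practical methods described in the talk minimize the model only approximately and scale to large problems.

```python
import numpy as np

def cubic_step(g, H, sigma, iters=30):
    """Approximate minimizer of m(s) = g.s + 0.5 s'Hs + (sigma/3)||s||^3.

    The global minimizer satisfies (H + sigma*||s||*I) s = -g; iterate on
    ||s||.  Works when H + sigma*||s||*I stays positive definite (a sketch,
    not a robust subproblem solver)."""
    s, ns = np.zeros_like(g), 0.0
    for _ in range(iters):
        s = np.linalg.solve(H + sigma * ns * np.eye(len(g)), -g)
        ns = np.linalg.norm(s)
    return s

def arc_minimize(f, grad, hess, x, sigma=1.0, tol=1e-8, max_iter=100):
    """Adaptive cubic regularization loop: accept steps that realize enough of
    the decrease predicted by the cubic model, otherwise increase sigma."""
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        s = cubic_step(g, H, sigma)
        pred = -(g @ s + 0.5 * s @ H @ s + sigma / 3 * np.linalg.norm(s) ** 3)
        rho = (f(x) - f(x + s)) / max(pred, 1e-300)
        if rho > 0.1:
            x, sigma = x + s, max(0.5 * sigma, 1e-8)   # successful step
        else:
            sigma *= 2.0                               # model was too optimistic
    return x

# Usage on the Rosenbrock function.
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
g = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                        200 * (z[1] - z[0]**2)])
h = lambda z: np.array([[2 - 400 * z[1] + 1200 * z[0]**2, -400 * z[0]],
                        [-400 * z[0], 200.0]])
print(arc_minimize(f, g, h, np.array([-1.2, 1.0])))
```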
Phase transition phenomenon in Compressed Sensing
Compressed Sensing reconstruction algorithms typically exhibit a zeroth-order phase transition phenomenon for large problem sizes: there is a domain of problem sizes for which successful recovery occurs with overwhelming probability, and a domain of problem sizes for which recovery failure occurs with overwhelming probability.
The mathematics underlying this phenomenon will be outlined for ℓ1 regularization, non-negative feasibility point regions, and bounded constraints. Each instance employs a large deviation analysis of the associated geometric probability event. These results give precise if and only if conditions on the number of samples needed in Compressed Sensing applications.
Lower bounds on the phase transitions implied by the Restricted Isometry Property for Gaussian random matrices will also be presented for the following algorithms: ℓq regularization for q ∈ (0,1), CoSaMP, Subspace Pursuit, and Iterated Hard Thresholding.
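To illustrate how such phase transitions are measured empirically, the sketch below runs Iterated Hard Thresholding (one of the algorithms listed above, here in a normalized-step variant for numerical stability) on random Gaussian problems and estimates the probability of successful recovery at one undersampling/sparsity pair; sweeping these parameters traces out the empirical transition curve.

```python
import numpy as np

def iht(A, b, k, iters=200):
    """Iterated Hard Thresholding with an adaptive (normalized) step size:
    x <- H_k(x + mu * A^T (b - A x)), keeping the k largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    S = np.argsort(np.abs(A.T @ b))[-k:]             # initial support guess
    for _ in range(iters):
        g = A.T @ (b - A @ x)
        Ag = A[:, S] @ g[S]
        mu = (g[S] @ g[S]) / max(Ag @ Ag, 1e-30)      # adaptive step size
        y = x + mu * g
        S = np.argsort(np.abs(y))[-k:]                # keep the k largest entries
        x = np.zeros_like(y)
        x[S] = y[S]
    return x

def success_rate(n=200, m=100, k=10, trials=20, rng=np.random.default_rng(1)):
    """Fraction of random k-sparse signals exactly recovered from m Gaussian
    measurements; sweeping (m/n, k/m) maps out the empirical phase transition."""
    ok = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        ok += np.linalg.norm(iht(A, A @ x, k) - x) <= 1e-4 * np.linalg.norm(x)
    return ok / trials

print(success_rate())
```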
Prof Howard Barnum, Perimeter Institute for Theoretical Physics, Ontario, Canada
Quantum information processing in context: information processing in convex operational theories
The rise of quantum information science has been paralleled
by the development of a vigorous research program aimed at obtaining
characterizations or reconstructions of the quantum formalism in terms of
the possibilities it makes available for information processing. This
development has involved a variety of approaches; I will review
results, primarily ones by me and my collaborators, in a broad
framework for stochastic theories that encompasses quantum and
classical theory, but also a wide variety of other theories that can
serve as foils to them. One goal is an understanding of the
conceptual features of quantum mechanics that account for its
greater-than-classical information-processing power, an understanding
which could perhaps help guide the search for new quantum algorithms
and protocols.
The main results reviewed are: (1) the fact that the only information
that can be obtained in the framework without disturbance is inherently
classical, together with no-cloning and no-broadcasting theorems in the
generalized framework; (2) the existence of exponentially secure bit
commitment in non-classical theories without entanglement; (3) the
consequences for theories of the existence of a conclusive
teleportation scheme; (4) sufficient conditions for the existence of a
deterministic teleportation scheme; and (5) sufficient conditions for
what Schrödinger called "steering" of ensembles using entangled
states of generalized theories, rendering insecure bit commitment
protocols of the form shown to be secure in the unentangled case.
If time permits, I may cover (6) some notions of entropy and mutual
information in this framework, connecting properties of these
(specifically, the equality of preparation and measurement entropy,
and a data processing inequality for mutual information) to bounds on
quantum correlations via the "information causality" property,
and (7) a generalized notion of interference in such theories, showing
how Rafael Sorkin's hierarchy of "levels" of interference, of which
quantum theory exhibits only the lowest order, may be adapted to this
framework.
Joint work with various groups of collaborators including Jonathan
Barrett, Matthew Leifer, Alexander Wilce, Oscar Dahlsten, Ben Toner,
Cozmin Ududec, Joe Emerson, Rob Spekkens, Lisa Orloff-Clark, Phillip
Gaebler, Nicholas Stepanik, Robin Wilke.
Prof Edriss Titi, The Weizmann Institute of Science and The University of California - Irvine
Alpha Sub-grid Scale Models of Turbulence and Inviscid Regularization
In recent years many analytical sub-grid scale models of turbulence
were introduced based on the Navier--Stokes-alpha model (also known
as the viscous Camassa--Holm equations or the Lagrangian Averaged
Navier--Stokes-alpha (LANS-alpha) model). Some of these are the
Leray-alpha, the modified Leray-alpha, the simplified Bardina-alpha
and the Clark-alpha models. In this talk I will show the global
well-posedness of these models, provide estimates for the
dimension of their global attractors, and relate these estimates to
the relevant physical parameters. Furthermore, I will show that up
to a certain wave number in the inertial range the energy power
spectra of these models obey the Kolmogorov -5/3 power law; however,
for the rest of the inertial range the energy spectra are much
steeper. In addition, I will show that by using these alpha models as
closure models for the Reynolds-averaged Navier--Stokes equations one
obtains very good agreement with empirical and numerical data of
turbulent flows for a wide range of large Reynolds numbers in
infinite pipes and channels.
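The alpha models above share the Helmholtz filter u = (I - alpha^2 Laplacian)^{-1} v, which damps scales smaller than alpha. Purely as an illustration of that common ingredient, here is the filter applied spectrally to a periodic 1-D field; the models themselves couple this filter to the nonlinear terms in different ways.

```python
import numpy as np

def helmholtz_filter(v, alpha, L=2 * np.pi):
    """Apply the alpha-model smoothing u = (I - alpha^2 * Laplacian)^{-1} v
    to a periodic 1-D field sampled on [0, L): in Fourier space the filter
    is simply 1 / (1 + alpha^2 * k^2)."""
    k = 2 * np.pi * np.fft.fftfreq(len(v), d=L / len(v))
    return np.real(np.fft.ifft(np.fft.fft(v) / (1 + (alpha * k) ** 2)))

# Usage: small-scale oscillations are strongly damped, large scales survive.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
v = np.sin(x) + np.sin(32 * x)
u = helmholtz_filter(v, alpha=0.2)
print(np.max(np.abs(u - np.sin(x))))   # small: the k = 32 mode is ~98% removed
```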
It will also be observed that, unlike the three-dimensional Euler
equations and other inviscid alpha models, the inviscid simplified
Bardina model has global regular solutions for all initial data.
Inspired by this observation I will introduce new inviscid
regularizing schemes for the three-dimensional Euler, Navier--Stokes
and MHD equations, which do not require, in the viscous case, any
additional boundary conditions. This same kind of inviscid
regularization is also used to regularize the Surface
Quasi-Geostrophic model.
Finally, based on the alpha regularization, I will present, if
time allows, some error estimates for the rate of convergence of the
alpha models to the Navier--Stokes equations, and will also present a
new approximation of vortex sheet dynamics.
Prof. Michael Evans, Department of Geology and ESSIC, University of Maryland
Applications of proxy system modeling in high resolution paleoclimatology
Paleoclimatic reconstructions are generally based on robust multivariate linear regression of indirect climate measures
("proxies") on contemporaneous direct climate observations.
Mechanistic models of proxy systems may be multivariate and/or nonlinear. We explore the extent to which such a model for the environmental controls on the tree-ring width climate proxy explains ring width observations, and discuss the potential to use proxy system models as data level constraints on Bayesian hierarchical reconstructions of recent paleoclimates.
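As a toy illustration of a nonlinear, multivariate proxy system model (in the spirit of threshold growth-response models for tree-ring width; the specific model examined in the talk may differ), annual ring width can be taken proportional to the more limiting of a temperature response and a soil-moisture response:

```python
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear growth response: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def ring_width(temp, moisture, t_lo=2.0, t_hi=18.0, m_lo=0.05, m_hi=0.3):
    """Toy nonlinear proxy model: growth in each year is limited by whichever
    of temperature or moisture is more stressful (Liebig's law of the minimum).
    All thresholds here are illustrative, not calibrated values."""
    return np.minimum(ramp(temp, t_lo, t_hi), ramp(moisture, m_lo, m_hi))

# A warm but dry year and a cool but wet year both yield narrow rings,
# something a single linear regression on temperature alone cannot capture.
print(ring_width(np.array([20.0, 5.0]), np.array([0.08, 0.28])))
```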
Prof Sennur Ulukus, Electrical and Computer Engineering, University of Maryland
Securing Wireless Communications in the Physical Layer using Signal Processing
The last couple of decades have witnessed an amazing
growth in wireless communications and networking applications.
More and more subscribers are relying solely on their wireless
communication and computing devices for communicating sensitive
information. Ensuring secure transfer of information is thus
essential. This important issue is currently dealt with at the
higher layers of the protocol hierarchies using cryptographic
algorithms, which provide useful protection against
computationally-limited adversaries.
Our research explores providing security in the physical layer
using techniques from information theory, communication theory
and signal processing. This approach is a fundamental departure
from the currently available cryptographic solutions in that
the security we provide is unbreakable, provable and
quantifiable (in bits/sec). We use unique characteristics of
the wireless medium, such as the inherent random fluctuations
in the wireless channels (that enable opportunistic
transmissions), overheard information (that enables
cooperation), and the use of multiple antennas (that provides
spatial diversity and multiplexing gains) to secure wireless
communications in the physical layer. In this talk, we will
demonstrate how opportunistic transmissions in fading channels,
cooperation (with trusted or untrusted relays), multiple
antennas, and signal alignment can be used to enhance security
of wireless communications. We will also present results on
secure data compression.
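The "quantifiable (in bits/sec)" claim refers to secrecy rates. As a standard reference point (a textbook result, not a contribution of this talk), the secrecy capacity of the degraded Gaussian wiretap channel is the excess of the legitimate channel's capacity over the eavesdropper's, and fading lets the legitimate pair transmit opportunistically whenever their channel happens to be the stronger one:

```python
import numpy as np

def secrecy_capacity(snr_main, snr_eve):
    """Secrecy capacity (bits per channel use) of the Gaussian wiretap channel:
    C_s = [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+, zero whenever the
    eavesdropper's channel is at least as good as the legitimate one."""
    return max(np.log2(1 + snr_main) - np.log2(1 + snr_eve), 0.0)

# With fading, opportunistic transmission exploits instants where the main
# channel beats the eavesdropper's, even if it is weaker on average.
rng = np.random.default_rng(0)
h_main = rng.exponential(1.0, 10000)   # Rayleigh-fading power gains (mean 1)
h_eve = rng.exponential(2.0, 10000)    # eavesdropper is stronger on average
snr = 10.0
avg_rate = np.mean([secrecy_capacity(snr * hm, snr * he)
                    for hm, he in zip(h_main, h_eve)])
print(avg_rate)   # strictly positive despite the stronger eavesdropper
```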