Prof. Doron Levy, Department of Mathematics and CSCAMM, University of Maryland
On the Dynamics of Cancer Stem Cells and Drug Resistance
Drug resistance is often an obstacle to the successful treatment of cancer. In spite of the importance of the problem, the actual mechanisms that control the evolution of drug resistance are not fully understood. In this talk we present our recent results on mathematical models for studying cancer stem cells and their role in developing drug resistance. We derive a new estimate of the probability of developing drug resistance by the time a tumor is detected. We then combine our mathematical results with clinical and experimental data on Chronic Myelogenous Leukemia to propose answers to open problems regarding the dynamics of hematopoietic cancer stem cells. This is joint work with Cristian Tomasetti.
Operator splitting for the dissipative quasi-geostrophic equation
In this talk, I will discuss operator splitting for the surface quasi-geostrophic equation
modeling strongly rotating atmospheric flow. The numerical algorithms are based on
evolving the solution by alternately applying the action of transport and diffusion.
The main result is that both Godunov and Strang splitting converge with the
expected orders, provided the initial data is sufficiently regular. The analysis
can be generalized to a large class of well-posed active scalar equations
including the Topaz-Bertozzi aggregation equation and the quasi-geostrophic
equation with dispersion.
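For reference (the notation below is generic and not taken from the abstract), the dissipative surface quasi-geostrophic equation for the transported scalar \(\theta\) can be written as

\[
\partial_t \theta + u \cdot \nabla \theta + \kappa (-\Delta)^{\alpha} \theta = 0,
\qquad u = \nabla^{\perp} (-\Delta)^{-1/2} \theta,
\]

and one splitting step alternates the two sub-flows: Godunov splitting applies a full transport step \(\partial_t \theta + u \cdot \nabla \theta = 0\) followed by a full diffusion step \(\partial_t \theta + \kappa (-\Delta)^{\alpha} \theta = 0\), while Strang splitting symmetrizes this as a half diffusion step, a full transport step, and another half diffusion step, giving first- and second-order accuracy in time, respectively, for sufficiently regular data.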
This is joint work with Helge Holden and Kenneth Karlsen.
Dr. Shai Dekel, Imaging Solutions, GE Healthcare IT, Israel
PACS systems, present and future
The huge imaging datasets generated by medical imaging modalities (CT, MR, Ultrasound, Nuclear Medicine, etc.) are eventually stored in a PACS (Picture Archiving and Communication System). One of the holy grails of Healthcare IT systems is an intelligent decision support system that, with a 'single click', provides the users (radiologists, cardiologists, pathologists, etc.) with the required medical records and imaging data, presented in a way that enables them to focus on the critical information needed for the diagnosis. This requires technologies such as image compression and streaming, automatic segmentation/labeling, content-based image retrieval, multi-modality image fusion and machine learning.
Prof. Tom Haine, Department of Earth and Planetary Sciences, Johns Hopkins University
Simulation and Assimilation of Denmark Strait Overflow
The North Atlantic is a special place where the ocean circulation has an important influence on Earth's climate. The deep flow in particular plays a critical role and the fluid dynamics of the surface-to-sea-floor circulation is subtle and sensitive to extrinsic forcing. The system seems to have the potential for unexpected rapid change that has widespread impact. Indeed, global warming projections clearly point to increasing likelihood of circulation change in the coming decades. In this light, monitoring the deep North Atlantic circulation is an urgent scientific challenge. We focus on the Denmark Strait where the flow is channeled by bathymetry and dense Arctic waters overflow a shallow gap into the deep North Atlantic. We simulate the fluid dynamics of this region using numerical circulation models. We also perform a synthesis of all available observations using variational data assimilation. This process yields insight into the nature of the overflow itself and suggests ways in which the circulation can be efficiently monitored. The role of scientific computing and numerical modeling is central to these efforts, and is emphasized throughout.
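For orientation only (this is the generic form of the method, not necessarily the exact formulation used in this work), variational data assimilation selects the model state or control \(\mathbf{x}\) that minimizes a cost of the form

\[
J(\mathbf{x}) \;=\; \tfrac12\,(\mathbf{x}-\mathbf{x}_b)^{\top}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac12 \sum_{k} \bigl(\mathbf{y}_k - \mathcal{H}_k(\mathbf{x})\bigr)^{\top}\mathbf{R}_k^{-1}\bigl(\mathbf{y}_k - \mathcal{H}_k(\mathbf{x})\bigr),
\]

where \(\mathbf{x}_b\) is a background (prior) estimate, \(\mathbf{y}_k\) are the observations, \(\mathcal{H}_k\) maps the model state to observation space, and \(\mathbf{B}\), \(\mathbf{R}_k\) are background- and observation-error covariances, with the circulation model acting as the constraint that propagates the state between observation times.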
Optimal control with budget constraints and resets.
Consider a model problem:
Given a room with multiple obstacles and a stationary enemy observer,
find the fastest path to the target for a robot, with the constraint that
the observer should not be able to see that robot for more than five
seconds in a row.
Many realistic control problems involve multiple criteria for optimality
and/or integral constraints on allowable controls. This can be
conveniently modeled by introducing a budget for each secondary
criterion/constraint. An augmented Hamilton-Jacobi-Bellman equation is
then solved on an expanded state space, and its discontinuous viscosity
solution yields the value function for the primary criterion/cost. This
formulation was previously used by Kumar & Vladimirsky to build a fast
(non-iterative) method for problems in which the resources/budgets are
monotone decreasing. We currently address a more challenging case, where
the resources can be instantaneously renewed (& budgets can be "reset")
upon entering a pre-specified subset of the state space. This leads to a
hybrid control problem with more subtle causal properties of the value
function & additional challenges in constructing efficient numerical
methods.
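As a sketch in generic notation (the symbols below are not taken from the abstract), the augmented formulation works with a value function u(x, b) on the expanded state space, where x is the physical state and b is the remaining budget of a secondary resource depleted at a rate K ≥ 0; formally,

\[
\min_{a \in A} \Bigl\{ C(x,a) \;+\; \nabla_x u(x,b) \cdot f(x,a) \;-\; K(x,a)\, \partial_b u(x,b) \Bigr\} \;=\; 0,
\]

where f is the controlled dynamics and C the primary running cost. When b can only decrease, the equation can be solved causally in b (the non-iterative method mentioned above); allowing resets couples u(x, b) to the value at the replenished budget on the reset set, which destroys this simple causal ordering.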
(Joint work with R. Takei, W. Chen, Z. Clawson, and S. Kirov)
Nonlinear integrate and fire neuron models: analysis and numerics
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for networks of
neurons can be written as Fokker-Planck-Kolmogorov equations on
the probability density of neurons, the main parameters in the model being
the connectivity of the network and the noise. We analyse several aspects
of the NNLIF model: the number of steady states, a priori estimates,
blow-up issues and convergence toward equilibrium in the linear case. In
particular, for excitatory networks, blow-up always occurs for initial
data concentrated close to the firing potential. These results show how
critical the balance between noise and excitatory/inhibitory interactions,
encoded in the connectivity parameter, is for the network dynamics. This is
mainly work in collaboration with M. J. Cáceres and B. Perthame. Time
permitting, some extensions to models with conductance variables will be
discussed, corresponding to work in collaboration with M. J. Cáceres and L. Tao.
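For reference, a standard form of the NNLIF Fokker-Planck system (written here as a sketch; the scalings in the talk may differ) for the density p(v, t) of neurons at membrane potential v is

\[
\partial_t p \;+\; \partial_v\!\bigl[(-v + b\,N(t))\,p\bigr] \;-\; a\,\partial_{vv} p \;=\; \delta(v - V_R)\,N(t), \qquad v \le V_F,
\]

with the boundary condition \(p(V_F, t) = 0\) and the firing rate \(N(t) = -a\,\partial_v p(V_F, t)\): neurons reaching the firing potential \(V_F\) are instantaneously reset to \(V_R\), the connectivity parameter \(b\) is positive for excitatory and negative for inhibitory networks, and \(a\) measures the noise.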
Prof. Maria Lukacova, Institute of Mathematics, Johannes Gutenberg University Mainz
Large time step FVEG schemes for shallow water flows
In this talk we will present new Finite Volume Evolution Galerkin (FVEG) schemes for the solution of the shallow water equations with source terms.
The FVEG methods couple a finite volume formulation with approximate evolution operators. The latter are constructed using the bicharacteristics of multidimensional hyperbolic systems, such that all of the infinitely many directions of wave propagation are taken into account explicitly. In order to approximate multiscale waves we present two variants of the large time step FVEG method:
a semi-implicit time approximation and an explicit time approximation using several evolution steps along bicharacteristic cones. The schemes are also well-balanced and a new entropy fix improves the reproduction of sonic rarefaction waves.
Another important aspect of the new schemes is the preservation of positivity of the water height and the treatment of dry states. Our approach to dry states is general and can be applied to arbitrary finite volume schemes.
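For completeness (standard notation, not taken from the abstract; the schemes themselves are genuinely multidimensional), the one-dimensional shallow water equations with a bottom-topography source term read

\[
\partial_t h + \partial_x (h u) = 0,
\qquad
\partial_t (h u) + \partial_x\!\Bigl(h u^2 + \tfrac12 g h^2\Bigr) = -\,g\,h\,\partial_x b,
\]

where h is the water height, u the velocity, g the gravitational acceleration and b(x) the bottom topography. Well-balancing means that the scheme preserves the lake-at-rest state u = 0, h + b = const exactly at the discrete level, and positivity preservation means that h ≥ 0 is maintained, in particular near dry states h = 0.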
Dr. Charles W. Clark, Chief, Electron and Optical Physics Division, National Institute of Standards and Technology (NIST)
Dr. Bruce R. Miller, Applied and Computational Mathematics Division, National Institute of Standards and Technology
The NIST Digital Library of Mathematical Functions
The NIST Digital Library of Mathematical Functions (DLMF) is an ongoing project that provides a freely accessible digital library of the special functions of mathematics. It was first released at http://dlmf.nist.gov in May 2010, with simultaneous publication of a book by Cambridge University Press. We present an overview that will describe some of the challenges associated with conducting a project of this scope, discoveries and decisions made concerning the representation of mathematical information in a digital library, and resources that the project offers to the computational science community. We conclude with a live demonstration of DLMF features.
April 6
No seminar this week.
April 13
2.00PM, 3206 Math Bldg (note location)
Monroe H. Martin Lecture Joint CSCAMM-Math-IPST Seminar
Prof. Joel Tropp, Department of Applied and Computational Mathematics, California Institute of Technology
Finding structure with randomness: Probabilistic algorithms for constructing low-rank matrix decompositions
Computer scientists have long known that randomness can be used to improve the performance of algorithms. A familiar application is the process of dimension reduction, in which a random map transports data from a high-dimensional space to a lower-dimensional space while approximately preserving some geometric properties. By operating with the compact representation of the data, it is theoretically possible to produce approximate solutions to certain large problems very efficiently.
Recently, it has been observed that dimension reduction has powerful applications in numerical linear algebra and numerical analysis. This talk provides a high-level introduction to randomized methods for computing standard matrix approximations, and it summarizes a new analysis that offers (nearly) optimal bounds on the performance of these methods. In practice, the techniques are so effective that they compete with—or even outperform—classical algorithms. Since matrix approximations play a ubiquitous role in areas ranging from information processing to scientific computing, it seems certain that randomized algorithms will eventually supplant the standard methods in some application domains.
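As an illustration of this class of methods, the following minimal Python sketch implements the basic randomized range-finder prototype (the oversampling parameter and matrix sizes are hypothetical choices, and refinements such as power iterations discussed in the analysis are omitted):

import numpy as np

def randomized_svd(A, k, p=10):
    # Rank-k approximation via a random range sketch:
    # sample the range of A with a Gaussian test matrix, orthonormalize,
    # and compute an exact SVD of the small projected matrix.
    rng = np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))        # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal basis for the sampled range
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Toy usage: a 500 x 400 matrix of rank at most 60 is recovered to near machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 60)) @ rng.standard_normal((60, 400))
U, s, Vt = randomized_svd(A, k=60)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))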
Joint work with Gunnar Martinsson and Nathan Halko.
April 13
3.30PM, 3206 Math Bldg (note location and time)
Monroe H. Martin Lecture Joint CSCAMM-Math-IPST Seminar
Discrete Geometric Game Interpretations of Nonlinear Elliptic Partial Differential Equations.
This talk is about building effective approximation methods for a class of nonlinear elliptic partial differential equations. These equations have applications in diverse areas: Differential Geometry, Stochastic Control, Mathematical Finance, and Homogenization. Typical examples include: Hamilton-Jacobi equations, the Monge-Ampère equation, and the equation for the Convex Envelope.
Capturing weak (viscosity) solutions is a challenge, because solutions can be singular, and standard numerical methods can fail. The right (convergent) way to build solution methods for these equations is to find a simple discrete approximation. These approximations can often be interpreted as a game.
In this mostly nontechnical talk, I will present discrete geometric games, which in simple cases you will be able to solve on a blackboard. Starting with familiar games, such as random walks, we will add new twists (choosing biased diffusions, exit strategies, random turns) which lead to interpretations and effective solution methods for these equations.
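Two standard examples of such value-function characterizations (illustrative; not necessarily the ones used in the lecture) are the unbiased random walk and random-turn tug-of-war on a grid of spacing h:

\[
u(x) \;\approx\; \frac{1}{2d}\sum_{i=1}^{d}\bigl(u(x + h e_i) + u(x - h e_i)\bigr)
\qquad (\text{random walk} \;\leftrightarrow\; \text{Laplace equation}),
\]
\[
u(x) \;\approx\; \tfrac12\Bigl(\max_{|v| \le 1} u(x + h v) \;+\; \min_{|v| \le 1} u(x + h v)\Bigr)
\qquad (\text{random-turn tug-of-war} \;\leftrightarrow\; \text{infinity Laplace equation}),
\]

and modifying the players' moves, the bias of the diffusion, or the exit rules produces monotone discretizations of other nonlinear elliptic operators in the class above.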
Prof. Jian-Guo Liu, Departments of Mathematics and Physics, Duke University
Dynamics of orientational alignment and phase transition
Phase transitions of a directional field appear in physical and biological systems such as ferromagnetism near the Curie temperature and flocking dynamics near the critical mass of self-propelled particles. The dynamics of orientational alignment associated with the phase transition can be effectively described by a mean-field kinetic equation. The natural free energy of the kinetic equation is non-convex, with a minimum level set consisting of a sphere in the super-critical case, a typical spontaneous symmetry-breaking behavior in physics. In this talk, I will present some analytical results on this dynamic equation of orientational alignment: an exponential convergence rate to the equilibria in both the super- and sub-critical cases, as well as an algebraic convergence rate in the critical case.
A new entropy and a spontaneous symmetry-breaking argument play an important role in our analysis. This is joint work with Pierre Degond and Amic Frouvelle.
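A representative mean-field kinetic equation of this type (a sketch; the precise scaling in the talk may differ) evolves a density f(ω, t) of orientations on the unit sphere by

\[
\partial_t f \;=\; \nabla_\omega \cdot \Bigl( \tau\, \nabla_\omega f \;-\; f\, \nabla_\omega (\omega \cdot J_f) \Bigr),
\qquad
J_f(t) = \int_{\mathbb{S}^{d-1}} \omega\, f(\omega, t)\, d\omega,
\]

where τ measures the noise and the drift aligns orientations with the mean direction J_f; the associated free energy \(\tau \int f \ln f \, d\omega - \tfrac12 |J_f|^2\) is the non-convex functional referred to above, whose minimizers change from the uniform state to a family of aligned states parametrized by the sphere of mean directions across the phase transition.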
Prof. Caroline Japhet, University Paris 13 and CSCAMM
Optimized Schwarz waveform relaxation for heterogeneous problems
We present domain decomposition strategies whose final objective is the parallel solution of heterogeneous
advection-diffusion-reaction equations modeling ocean-atmosphere coupling or radionuclide transport in the subsurface, in the context of nuclear waste repositories. For such problems with high variability in the coefficients we consider numerical methods that allow for the use of nonconforming space-time grids, so as to match the local space and time scales, while at the same time simplifying the parallel generation and adaptation of meshes. Schwarz waveform relaxation algorithms are a class of such methods, based on domain decomposition and on iterations which converge quickly and require communications only once per iteration, at the end of the time interval. For nuclear waste applications, we propose a new approach to determine optimized transmission conditions when the size of the waste package becomes small with respect to the size of the surrounding clay subdomains. Numerical results illustrate the method on examples inspired from ocean-atmosphere coupling and from nuclear waste disposal simulations.
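To fix ideas (generic notation; the actual transmission operators in the talk may be higher order and adapted to the discontinuous coefficients), an optimized Schwarz waveform relaxation iteration for two non-overlapping subdomains Ω_1, Ω_2 with interface Γ solves, at iteration k+1, the space-time problems

\[
\mathcal{L}\, u_i^{k+1} = f \;\text{ in } \Omega_i \times (0,T),
\qquad
\bigl(\partial_{n_i} + p_i\bigr)\, u_i^{k+1} = \bigl(\partial_{n_i} + p_i\bigr)\, u_j^{k} \;\text{ on } \Gamma \times (0,T), \quad j \neq i,
\]

where \(\mathcal{L}\) is the advection-diffusion-reaction operator and the Robin parameters \(p_i\) are chosen to minimize the convergence factor of the iteration over the relevant range of time frequencies; the interface traces over the whole time window are exchanged only once per iteration.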
This is joint work with L. Halpern, J. Szeftel, P. Omnes, E. Blayo and F. Lemarié.
Prof. Vivek Goyal, Department of Electrical Engineering and Computer Science, Research Laboratory of Electronics at MIT
Bayesian Analysis of Compressed Sensing and Improved Reconstruction from Quantized Samples
Compressed sensing has brought the use of sparsity- and
compressibility-based signal models to the forefront of data
acquisition. The well-known analyses of compressed sensing are indirect
and hold pointwise over the possible signals of interest. Inspired by
the extreme conservatism of these analyses, we develop a Bayesian
analysis using the replica method. This gives asymptotically-exact
performance analyses for a large class of estimators applied to a large
class of problems. In particular, it shows that lasso typically
performs much better than predicted by previous analyses.
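To fix notation, the lasso estimate referenced above solves \(\min_x \tfrac12 \|y - Ax\|_2^2 + \lambda \|x\|_1\); the toy Python sketch below (an illustration only, unrelated to the replica analysis itself, with hypothetical problem sizes) recovers a sparse vector from random compressed measurements using scikit-learn:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                     # signal length, measurements, sparsity (hypothetical)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                       # random sensing matrix
y = A @ x + 0.01 * rng.standard_normal(m)                          # noisy compressed measurements

x_hat = Lasso(alpha=0.01, max_iter=100000).fit(A, y).coef_
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))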
Bayesian formulations are amenable to new generalized approximate
message passing (GAMP) algorithms. We develop a GAMP algorithm for
estimation from quantized samples, which arise in analog-to-digital
conversion and compression. The GAMP algorithm provides large
improvements over conventional reconstruction. The state evolution
formalism of GAMP enables efficient optimization of quantizers, leading
to further improvement.
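For context (a generic statement of the measurement model, not of the GAMP recursion itself), estimation from quantized samples refers to recovering \(x\) from

\[
y \;=\; Q\bigl(A x + w\bigr),
\]

where A is the mixing or sampling matrix, w is pre-quantization noise, and Q is a scalar quantizer applied componentwise. Conventional reconstruction treats the quantization error as additive noise; the Bayesian/GAMP approach instead models the quantizer through the componentwise output channel \(p(y_i \mid (Ax)_i)\), which is the feature exploited in the approach described above.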
The talk is based on joint work with Alyson Fletcher, Ulugbek Kamilov,
and Sundeep Rangan.