Dr. Alethea Barbaro, Department of Mathematics, UCLA
Predicting Migration Routes with an Interacting Particle Model
The capelin is a pelagic fish species that lives in the northern oceans. We focus on the stock around Iceland, which migrates hundreds of kilometers every year. The capelin is important to the region both economically and ecologically: the stock is fished, and it is also one of the main food sources for the area's cod. Recently the migration route of the capelin has been shifting, and research vessels have had difficulty locating the stock, making accurate stock assessments difficult. We have implemented an interacting particle model for the spawning migration of the capelin around Iceland based on the work of Vicsek, Czirók, Ben-Jacob, Cohen, and Shochet. In February 2008, our model accurately predicted an unusual path for the yearly spawning migration, and the same parameters were used to successfully reproduce the migration routes from two previous years. We will describe our model, including how environmental data are incorporated, and show simulation results alongside data collected by the Marine Research Institute of Iceland. Our code is implemented in object-oriented C++; I will describe the architecture of the code and why using C++ was vital to our success. We will also derive scaling laws relating the radii of the interaction zones, the number of particles in the simulation, and the timestep, and discuss the possible implications of these scaling laws.
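The alignment rule at the heart of this class of models is compact enough to state in a few lines. Below is a minimal sketch of a 2D Vicsek-type update, the model the capelin simulation builds on; this is illustrative only, not the authors' C++ code, and the function name and parameter values are assumptions. Each particle adopts the average heading of its neighbors within a radius r, perturbed by noise, and then moves at constant speed in a periodic box.

```python
import numpy as np

def vicsek_step(pos, theta, L, r=1.0, v0=0.03, eta=0.1, rng=None):
    """One update of a 2D Vicsek-type model: each particle adopts the
    mean heading of all neighbors within radius r (itself included),
    plus uniform angular noise of strength eta, then moves at constant
    speed v0 in a periodic box of side L."""
    rng = np.random.default_rng() if rng is None else rng
    # Pairwise displacements with minimum-image periodic boundaries.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d ** 2).sum(-1) <= r ** 2
    # Average heading over neighbors, via mean unit vectors (avoids
    # the branch-cut problem of averaging angles directly).
    sin_m = (neigh * np.sin(theta)[None, :]).sum(1)
    cos_m = (neigh * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(sin_m, cos_m) \
        + eta * rng.uniform(-np.pi, np.pi, len(theta))
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], 1)) % L
    return pos, theta
```

The capelin model adds, among other things, environmental forcing (temperature, currents) and the separate interaction zones whose radii enter the scaling laws mentioned above; none of that is shown here.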
Professor Christopher Jones, Department of Mathematics, University of North Carolina at Chapel Hill and University of Warwick
Going with the Flow and Using the Information
This is a talk about assimilating Lagrangian data. Such data come from instruments that move with a fluid flow, such as ocean floats and drifters. This type of assimilation presents a challenge because the data are not expressed in terms of state variables and also reflect the nonlinearities of the Lagrangian flow. The data, however, lie in a well-defined low-dimensional space and this fact allows us to consider techniques that can exploit to full advantage the interesting (nonlinear, chaotic, ...) Lagrangian dynamics.
Dr. Harald P. Pfeiffer,
Theoretical Astrophysics, Caltech
Binary Black Hole Simulations and Implicit Time-stepping
Numerical simulations of black hole binaries have made tremendous progress in recent years. The usefulness of such simulations is limited by their tremendous computational cost, which ultimately results from a separation of time-scales: emission of gravitational radiation drives the evolution of the binary toward smaller separation and eventual merger, yet the time-scale for inspiral is far longer than the dynamical time-scale of each black hole. Therefore, the currently deployed explicit time-steppers are severely limited by Courant instabilities. Implicit time-stepping algorithms provide an obvious route around the Courant limit, thus offering tremendous potential to speed up the simulations. However, the complexity of Einstein's equations makes this a highly non-trivial endeavor. This talk will first present a general overview of the status of black hole simulations, followed by a status report on ongoing work aimed at implementing modern implicit/explicit (IMEX) evolution schemes for Einstein's equations.
Professor Benjamin Shapiro,
Department of Aerospace Engineering, University of Maryland
Control of Small Things: From Steering Cells on Chips to Targeting Drugs to Deep Tumors
My group is interested in applying control to miniaturized systems, usually for biological or medical applications. I will show results on using micro-scale flow control to precisely steer particles on chip (e.g. cells or quantum dots), as well as simulation and control algorithms for precision control of droplets actuated by electrically modified surface tension. Here smart control can give existing devices novel capabilities. For example, we can make simple, cheap, easy-to-make handheld devices replicate the capabilities of sophisticated, expensive, large laser tweezers. Applications include manipulating cells in complex, dirty samples for disease diagnosis and creating multi-dot quantum information systems.
I will also describe where we are on a new project: magnetic control of drug-coated particles inside people (to focus chemotherapy on deep tumors). This problem is interesting, hard, and important. I'll describe the state of the art, the modeling (what is known, what can be known, and what cannot), simulations, a pesky theorem, and initial control results.
Spectral evolutions of Einstein's equations normally require a fully first-order reduction of the system. While this is the only well-known way to obtain a stable system, it has the disadvantage of introducing additional constraints and equations. A new method for evolving second-order (in space) equations spectrally is derived for the simplest analogous system, the scalar wave equation. Surprisingly, even this simple case turns out to be non-trivial. The derivation of the method and its application to binary black hole spectral evolutions will be presented and discussed.
Dr. Werner Benger,
Center for Computation & Technology, Louisiana State University
Fiberbundle-based Visualization of a Stir Tank Fluid
We describe a novel approach to treat data from a complex numerical
simulation in a unified environment using a generic data model for
scientific visualization. The model is constructed out of building blocks
in a hierarchical scheme of seven levels, out of which only three are
exposed to the end-user. This generic scheme allows for a wide variety of
input formats, and results in powerful capabilities to connect data. We
review the theory of this data model, implementation aspects in our
visualization environment, and its application to computational fluid
dynamic simulation covering a fluid in a stir tank. The computational data
are given as a vector field and a scalar field describing pressure on 2088
blocks in curvilinear coordinates.
Professor Chen Greif,
Department of Computer Science, University of British Columbia
An Inner-Outer Iteration for Computing PageRank
PageRank is a method for ranking Web pages whereby a page's importance (or
rank) is determined according to the link structure of the Web. This model
has been used by Google as part of its search engine technology. The exact
ranking techniques and calculation methods used by Google today are no
longer public information, but the PageRank model has taken on a life of
its own and has received considerable attention in the scientific
community in the last few years. The PageRank vector is essentially the
stationary distribution vector of a Markov chain whose transition matrix
is a convex combination of the Web link graph and a certain rank-1 matrix.
In this talk I will present a new iterative scheme for PageRank
computation. The algorithm is applied to the linear system formulation of
the problem, using inner-outer stationary iterations. It is simple, can be
easily implemented and parallelized, and requires minimal storage
overhead. Convergence analysis shows that the algorithm is effective for a
crude inner tolerance and is not sensitive to the choice of the parameters
involved. The same idea can be used as a preconditioning technique for
non-stationary schemes. Numerical examples featuring Web link graphs of
dimensions exceeding 100,000,000 in sequential and parallel environments
demonstrate the merits of the technique.
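The inner-outer idea described above can be sketched compactly. The following is a minimal dense-matrix illustration, not the authors' implementation: the outer iteration rewrites the PageRank system x = alpha*P@x + (1-alpha)*v using a smaller damping parameter beta < alpha, and each outer step solves (I - beta*P) y = (alpha - beta)*P@x + (1-alpha)*v only crudely by inner Richardson sweeps. The function name, default parameter values, and dense NumPy representation are assumptions for illustration; P is assumed column-stochastic (dangling nodes already handled).

```python
import numpy as np

def inner_outer_pagerank(P, v=None, alpha=0.85, beta=0.5,
                         outer_tol=1e-8, inner_tol=1e-2):
    """Inner-outer stationary iteration for PageRank (illustrative sketch).

    Solves the linear-system formulation x = alpha*P@x + (1-alpha)*v
    via outer iterations on (I - beta*P) x = (alpha-beta)*P@x + (1-alpha)*v,
    each solved only crudely (to tolerance inner_tol) by inner Richardson
    sweeps y <- beta*P@y + f.  P must be column-stochastic."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n) if v is None else v
    x = v.copy()
    while True:
        Px = P @ x
        # Residual of the original PageRank system: stop when small.
        if np.linalg.norm(alpha * Px + (1 - alpha) * v - x, 1) < outer_tol:
            return x
        f = (alpha - beta) * Px + (1 - alpha) * v
        y = x                    # warm-start the inner solve
        Py = P @ y
        while np.linalg.norm(f + beta * Py - y, 1) >= inner_tol:
            y = beta * Py + f
            Py = P @ y
        x = y
```

The point of the crude inner tolerance is that each inner sweep is just one matrix-vector product, the same cost as a power-method step, while the beta-damped inner system is better conditioned than the original; in a real implementation P would of course be a sparse matrix and the products parallelized.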
Professor Merav Opher,
Department of Physics and Astronomy, George Mason University
Plasma Effects in the Interaction of the Solar System with the Interstellar Medium
Plasma effects are ubiquitous and known to be crucial in space and astrophysical media; these media are excellent plasma laboratories and provide observational data that add valuable constraints to theoretical models. Modern computational techniques such as magnetohydrodynamic (MHD) and kinetic modeling are currently used to explore several fundamental plasma effects such as turbulence, reconnection, and shock evolution.
In this talk I will discuss one example of how cross-fertilization among basic plasma physics, sophisticated computational work, and detailed observational data can be guided toward resolving important physical phenomena. The twin Voyager spacecraft, both approximately 100 AU from the Sun, are providing us with an unexpected view of how stars interact with their surrounding media. For the first time we are able to make in situ measurements of particles and fields at the boundaries of the solar system. Although our understanding of collisionless shocks has advanced in recent years, we still lack a comprehensive study of how the evolution of shocks (and their characteristics) is affected by the environment in which they propagate. From the in situ observations of the termination shock by Voyager 1 and 2, it became clear that shock properties are much more complex than previously expected. Voyager 1 crossed the termination shock in December 2004 and is now in the heliosheath; Voyager 2 crossed the shock in August 2007. In this talk I will review our recent work in which we were able to constrain the direction of the local interstellar magnetic field. We have shown that, as a result of the interstellar magnetic field, the solar system is asymmetric, being pushed toward the Sun in the southern hemisphere. I will also review our work exploring reconnection effects near the heliopause (the boundary that separates the solar system from the interstellar medium) and waves, instabilities, and turbulence in the heliosheath (the region where the solar wind is subsonic). Future Voyager observations will be able to sample these effects, providing direct tests of the theories we have developed. I will discuss the implications of these results for our understanding of shocks and particle acceleration, and for plasma phenomena in the laboratory.
Dr. Amir Riaz,
Department of Mechanical Engineering, University of Maryland
Unstable Dynamics of Fluid Interfaces in Geologic CO2 Sequestration Systems
Carbon dioxide concentration in the atmosphere has increased to high levels in the past 100 years due to human activity, leading to an unprecedented level of global warming. Injection of CO2 into deep saline geologic aquifers, with subsequent sequestration for hundreds of years through solubility-driven gravity currents, has been proposed to mitigate the harmful effects. The supercritical CO2 phase dissolves in the aquifer fluid and creates a gravitational instability at the interface. The time for the onset of instability and the corresponding wavenumber determine how much CO2 can be sequestered over a given period of time. Because the onset time is short compared to the rate of dissolution, existing methods of determining instability characteristics are inadequate, incurring errors on the order of the sequestration time. We propose a method based on the projection of the zeroth mode onto the full spectrum of the linearized stability operator in a self-similar coordinate system to obtain the correct time for the onset of instability. Our analysis reveals the existence of a short-wavenumber cut-off due to the truncated Hermite polynomial basis for the zeroth mode. Amplification of the intrinsic instability of a homogeneous system is then analyzed as a function of the geologic heterogeneity of the aquifer rock. High-accuracy numerical simulations based on spectral methods are carried out to determine the accuracy of the stability analysis as well as to understand the late-time nonlinear behavior of the unstable system.
April 16
*Thursday*
3:30-4:30pm, CSS Building, Auditorium - Room 2400
(note special day, time and place)
Joint AOSC-CSCAMM Seminar
Professor James C. McWilliams,
Institute of Geophysics and Planetary Physics and Department of Atmospheric and Oceanic Sciences, UCLA
Turbulent Fluid Dynamics at the Margins of Rotational and Stratified Control: Submesoscale Fluid Dynamics in the Ocean
Geophysical fluid dynamicists have developed a mature perspective on the dynamical influence of Earth's rotation, while most other areas of fluid dynamics can safely disregard rotation. Similarly, geophysical problems usually arise under the influence of stable density stratification, which matters at least as much as velocity shear. In this talk the dominant turbulence and wave behaviors in the rotating and non-rotating, stratified and non-stratified fluid-dynamical realms are described, and particular attention is given to their borderlands, where rotational and stratified influences are significant but not dominant. Contrary to the inverse energy cascade of geostrophic turbulence toward larger scales, a forward energy cascade develops within the borderlands from the breakdown of diagnostic force balances, frontogenesis and frontal instabilities, and filamentogenesis with strong surface convergences. The cascade then continues through the small-scale, non-rotating, unstratified (a.k.a. universal) realm until it dissipates at the microscale. In particular, this submesoscale cascade behavior is of interest as a global route to kinetic and available-potential energy dissipation in the oceanic general circulation, as well as an energy source for microscale material mixing across stably stratified density surfaces and a penetration route for potential vorticity across those surfaces.
Dr. David Luebke,
NVIDIA Research, NVIDIA Corporation
Graphics Hardware & GPU Computing: Past, Present, and Future
Modern GPUs have emerged as the world's most successful parallel architecture. GPUs provide a level of massively parallel computation that was once the preserve of supercomputers like the MasPar and Connection Machine. For example, NVIDIA's GeForce GTX 280 is a fully programmable, massively multithreaded chip with up to 240 cores and 30,720 threads, capable of performing up to a trillion operations per second. The raw computational horsepower of these chips has expanded their reach well beyond graphics. Today's GPUs not only render video game frames, they also accelerate physics computations, video transcoding, image processing, astrophysics, protein folding, seismic exploration, computational finance, and radio astronomy; the list goes on and on. Enabled by platforms like the CUDA architecture, which provides a scalable programming model, researchers across science and engineering are accelerating applications in their disciplines by up to two orders of magnitude. These success stories, and the tremendous scientific and market opportunities they open up, imply a new and diverse set of workloads that in turn carry implications for the evolution of future GPU architectures.
In this talk I will discuss the evolution of GPUs from fixed-function graphics accelerators to general-purpose massively parallel processors. I will briefly motivate GPU computing and explore the transition it represents in massively parallel computing: from the domain of supercomputers to that of commodity "manycore" hardware available to all. I will discuss the goals, implications, and key abstractions of the CUDA architecture. Finally I will close with a discussion of future workloads in games, high-performance computing, and consumer applications, and their implications for future GPU architectures.
Bio:
Dr. David Luebke
Manager, NVIDIA Research
NVIDIA Corporation
http://luebke.us
David Luebke helped found NVIDIA Research in 2006 after eight years on the faculty of the University of Virginia. Luebke received his Ph.D. under Fred Brooks at the University of North Carolina in 1998. His principal research interests are GPU computing and real-time computer graphics. Luebke's honors include the NVIDIA Distinguished Inventor award, the NSF CAREER and DOE Early Career PI awards, and the ACM Symposium on Interactive 3D Graphics "Test of Time Award". Dr. Luebke has co-authored a book, a SIGGRAPH Electronic Theater piece, a major museum exhibit visited by over 110,000 people, and dozens of papers, articles, chapters, and patents.
Professor Alexander Vladimirsky,
Department of Mathematics, Cornell University
Multiobjective optimization and homogenization -- computational challenges
I will present two recent projects related to front propagation & optimal control.
The first of these (joint with A. Kumar) deals with multiple criteria for
optimality (e.g., fastest versus shortest trajectories) and optimality
under integral constraints. We show that an augmented PDE on a
higher-dimensional domain describes all Pareto-optimal trajectories. Our
numerical method uses the causality of this PDE to approximate its
discontinuous viscosity solution efficiently. The method is illustrated
by problems in robotic navigation (e.g., minimizing the path length and
exposure to an enemy observer simultaneously).
The second project (joint with A. Oberman and R. Takei) deals with 2-scale
and 3-scale computations in geometric optics. We propose a new &
efficient method to homogenize first-order Hamilton-Jacobi PDEs. Unlike
the prior cell-problem methods, our algorithm is based on homogenizing the
related Finsler metric. We illustrate this by computing the effective
velocity profiles for a number of periodic and "random" composite materials.
Professor Thomas Strohmer,
Department of Mathematics, University of California, Davis
Helmholtz meets Heisenberg: Sparse Remote Sensing
We consider the problem of detecting targets via remote sensing.
This imaging problem is typically plagued by non-uniqueness and instability, and is hence mathematically challenging. Traditional methods such as matched field processing are rather limited in the number of targets that they can reliably recover at high resolution. By utilizing sparsity and tools from compressed sensing, I will present methods that significantly improve upon existing radar imaging techniques.
I will derive fundamental performance and resolution limits for compressed radar imaging with respect to the number of sensors and resolvable targets. These theoretical results demonstrate the advantages as well as limitations of compressed remote sensing. Numerical simulations confirm the theoretical analysis. This is joint work with Albert Fannjiang, Mike Yan, and Matt Herman.