16 September

Frequency Bias in Deep Learning

2:00 PM
Online via Zoom

Prof. David Jacobs, Department of Computer Science and UMIACS, University of Maryland


Recent results have shown that highly overparameterized deep neural networks act as linear systems. In particular, fully connected networks are equivalent to kernel methods with a neural tangent kernel (NTK). This talk will describe our work on better understanding the properties of this kernel. We study the eigenvalues and eigenvectors of the NTK and quantify a frequency bias in neural networks, which causes them to learn low-frequency functions more quickly than high-frequency ones. In fact, these eigenvectors and eigenvalues are the same as those of the well-known Laplace kernel, implying that the two kernels interpolate functions with the same smoothness properties. On a large number of datasets, we show that kernel-based classification with the NTK and with the Laplace kernel performs quite similarly.
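
As an illustration of this spectral picture, the following minimal Python sketch (not the speakers' code; the grid size, kernel bandwidth, and step size are arbitrary illustrative choices) runs kernel gradient descent with a Laplace kernel on a 1-D target containing one low and one high frequency, and tracks how quickly each frequency component of the residual is fit:

    # Illustrative sketch only: kernel gradient descent with a Laplace kernel,
    # which (per the abstract) shares its spectrum with the NTK. Low-frequency
    # content of the target should be fit in far fewer steps than high-frequency
    # content. All parameter values below are arbitrary illustrative choices.
    import numpy as np

    n = 256
    x = np.linspace(0.0, 1.0, n, endpoint=False)                # uniform 1-D grid
    y = np.sin(2 * np.pi * 2 * x) + np.sin(2 * np.pi * 40 * x)  # low + high frequency target

    sigma = 0.1
    K = np.exp(-np.abs(x[:, None] - x[None, :]) / sigma)        # Laplace kernel Gram matrix

    f = np.zeros(n)                                             # current fit
    lr = 1.0
    for step in range(1, 2001):
        f += (lr / n) * K @ (y - f)                             # gradient step in function space
        if step in (10, 100, 2000):
            resid = np.abs(np.fft.rfft(y - f)) / n              # residual spectrum
            print(f"step {step:5d}: residual amplitude at freq 2 = {resid[2]:.3f}, "
                  f"at freq 40 = {resid[40]:.3f}")
    # Expected trend: the freq-2 residual is nearly gone within tens of steps,
    # while the freq-40 residual decays far more slowly.

On a uniform periodic grid the eigenvectors of such a stationary kernel are (approximately) Fourier modes, so each frequency is learned at a rate set by the corresponding eigenvalue; this is the kind of frequency bias the talk quantifies for the NTK.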

30 September

Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics

2:00 PM
Online via Zoom

Prof. Furong Huang, Department of Computer Science, University of Maryland


Poisoning attacks, although they have been studied extensively in supervised learning, are not well understood in Reinforcement Learning (RL), especially in deep RL. Prior work on poisoning RL usually either assumes the attacker knows the underlying Markov Decision Process (MDP) or directly applies poisoning methods from supervised learning to RL. In this work, we build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous types and victims of poisoning attacks in RL, considering challenges unique to RL such as the data no longer being i.i.d. Without any prior knowledge of the MDP, we propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for most policy-based deep RL agents and uses a novel metric, the stability radius in RL, that measures the vulnerability of RL algorithms. Experiments on multiple deep RL agents and multiple environments show that our poisoning algorithm successfully prevents agents from learning a good policy with a limited attacking budget. Our experimental results also demonstrate the varying vulnerabilities of different deep RL agents across environments, which benefits the understanding and application of deep RL under security threats.
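
For readers unfamiliar with the threat model, here is a toy, hypothetical illustration in Python. It is not the VA2C-P algorithm (which targets policy-based deep RL agents); it only shows what a budget-limited reward-poisoning attacker can do to a simple tabular Q-learning agent on a small chain MDP, with all environment and tuning parameters chosen arbitrarily:

    # Toy, hypothetical illustration of a budget-limited reward-poisoning attack.
    # This is NOT the VA2C-P algorithm from the talk; it only conveys the threat
    # model: the attacker may corrupt at most `budget` rewards seen by the learner.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, goal = 6, 5                 # chain 0..5, reward +1 on reaching state 5
    moves = (-1, +1)                      # actions: left, right

    def env_step(s, a):
        s2 = int(np.clip(s + moves[a], 0, n_states - 1))
        return s2, (1.0 if s2 == goal else 0.0), s2 == goal

    def greedy(q_row):
        best = np.flatnonzero(q_row == q_row.max())
        return int(rng.choice(best))      # break ties randomly

    def run(budget):
        Q = np.zeros((n_states, 2))
        returns = []
        for _ in range(300):                              # episodes
            s, ep_return = 0, 0.0
            for _ in range(30):                           # step limit per episode
                a = rng.integers(2) if rng.random() < 0.1 else greedy(Q[s])
                s2, r, done = env_step(s, a)
                ep_return += r                            # true (unpoisoned) return
                if budget > 0 and a == 1 and s >= 3:      # attacker: penalize progress
                    r, budget = -1.0, budget - 1          # ...while budget remains
                Q[s, a] += 0.5 * (r + 0.9 * Q[s2].max() - Q[s, a])
                s = s2
                if done:
                    break
            returns.append(ep_return)
        return float(np.mean(returns[-50:]))

    for b in (0, 500):
        print(f"poison budget {b:3d}: mean true return over last 50 episodes = {run(b):.2f}")
    # Typical outcome: the clean run learns to reach the goal in almost every
    # episode, while the poisoned run's average return stays near zero.

Even this crude attack corrupts only a bounded number of rewards, yet it can keep the learner away from the goal; the talk's contribution is a principled, vulnerability-aware way to mount such attacks on deep, policy-based agents without knowing the MDP.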

14 October

Solutions to two conjectures in branched transport: stability and regularity of optimal paths

2:00 PM
Online via Zoom

Prof. Antonio De Rosa, Department of Mathematics, University of Maryland


Transport models involving branched structures are employed to describe several biological, natural and supply-demand systems. The transportation cost in these models is proportional to a concave power of the intensity of the flow.
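
For readers new to the area, this cost can be written schematically as the standard Gilbert energy of a discrete transport network (the talk's setting may be more general):

    \[
      \mathbf{E}^{\alpha}(T) \;=\; \sum_{e \in E(T)} \theta(e)^{\alpha}\,\mathcal{H}^{1}(e),
      \qquad \alpha \in [0,1),
    \]

where T is a network transporting a source measure onto a target measure, θ(e) is the mass (intensity of the flow) carried by an edge e, and the second factor is the length of e. Because t ↦ t^α is concave, transporting mass jointly along shared edges is cheaper than transporting it separately, which is what produces branched structures.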

In this talk, we focus on the stability of optimal transports with respect to variations of the source and target measures. Stability was previously known to hold only in special regimes (supercritical concave powers degenerating with the dimension). We prove that stability holds for every lower semicontinuous cost functional that is continuous at 0, and we provide counterexamples when these assumptions are not satisfied. Thus we completely solve a conjecture of Bernot, Caselles and Morel.

To conclude, we prove stability for the mailing problem as well. This was completely open in the literature and allows us to obtain the regularity of the optimal networks.

18 November

Gaps and effective gaps in Floquet media

Online via Zoom

Prof. Amir Sagiv, Applied Mathematics, Columbia University


Applying time-periodic forcing is a common technique to effectively change material properties. A well-known example is the transformation of graphene from a conductor to an insulator (a "Floquet topological insulator") by applying a time-dependent magnetic potential to it. We will see how this phenomenon is derived from certain reduced models of graphene. We will then turn to the first-principles, continuum model of graphene, where it is not at all obvious how to derive the insulation property, or even what its mathematical expression is. We will introduce the notion of an "effective gap", or low-oscillations gap, and prove its existence in forced graphene. This new notion distinguishes a part of the energy spectrum in a quantitative way, and it implies that the medium is approximately insulating for a class of physically likely wavepackets.
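
Schematically, and with illustrative conventions that need not match the talk's, the driven continuum model is a time-periodic Schrödinger equation:

    \[
      i\,\partial_t \psi \;=\; \Big( \big(-i\nabla - A(t)\big)^{2} + V(x) \Big)\,\psi,
      \qquad A(t+T) = A(t),
    \]

where V is a honeycomb-lattice potential modeling graphene and A is the time-periodic (magnetic) forcing. Floquet theory looks for solutions of the form ψ(x,t) = e^{-iεt} p(x,t) with p periodic in t, and the quasi-energies ε then play the role of an energy spectrum for the driven problem.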

Based on joint work with M. I. Weinstein.

2 December

Dynamic Network-level Analysis of Neural Data Underlying Behavior: Case Studies in Auditory Processing

2:00 PM
Online via Zoom

Prof. Behtash Babadi, Department of Electrical and Computer Engineering, University of Maryland


In this talk, I present computational methodologies for extracting the dynamic neural functional networks that underlie behavior. These methods aim to capture the sparsity, dynamicity and stochasticity of these networks by integrating techniques from high-dimensional statistics, point processes, state-space modeling, and adaptive filtering. I demonstrate their utility through several case studies involving auditory processing, including 1) functional auditory-prefrontal interactions during attentive behavior in the ferret brain, 2) network-level signatures of decision-making in the mouse primary auditory cortex, and 3) cortical dynamics of speech processing in the human brain.
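
As a generic, deliberately simplified illustration of the kind of sparse, dynamic network estimation the abstract refers to (not Prof. Babadi's actual methods), the Python sketch below fits an L1-penalized first-order vector autoregressive model in two sliding windows of simulated data and recovers a directed coupling that switches off halfway through the recording; all simulation and tuning parameters are arbitrary choices:

    # Generic sketch (not the speaker's methods): sparse, time-varying functional
    # connectivity estimated by an L1-penalized first-order VAR fit per window.
    # Simulated data: channel 0 drives channel 1 during the first half only.
    import numpy as np

    rng = np.random.default_rng(1)
    n_ch, T = 5, 2000
    A_true = np.zeros((n_ch, n_ch))
    A_true[1, 0] = 0.8                                   # coupling 0 -> 1

    X = np.zeros((T, n_ch))
    for t in range(1, T):
        A_t = A_true if t < T // 2 else np.zeros_like(A_true)
        X[t] = A_t @ X[t - 1] + 0.5 * rng.standard_normal(n_ch)

    def lasso_ista(Y, Z, lam, n_iter=500):
        """Minimize ||Y - Z B||_F^2 / (2 n) + lam * ||B||_1 by proximal gradient (ISTA)."""
        n = Z.shape[0]
        L = np.linalg.norm(Z, 2) ** 2 / n                # Lipschitz constant of the gradient
        B = np.zeros((Z.shape[1], Y.shape[1]))
        for _ in range(n_iter):
            B -= (Z.T @ (Z @ B - Y) / n) / L             # gradient step
            B = np.sign(B) * np.maximum(np.abs(B) - lam / L, 0.0)   # soft-threshold
        return B

    win = 400
    for start in (100, 1400):                            # one window per regime
        seg = X[start:start + win]
        B = lasso_ista(seg[1:], seg[:-1], lam=0.03)      # B rows: "from" channel, cols: "to"
        print(f"window at t={start}: estimated 0 -> 1 coefficient = {B[0, 1]:+.2f}")
    # Expected behavior: the 0 -> 1 coefficient comes out clearly nonzero in the
    # first window and is shrunk to (or very near) zero in the second.

The actual methods in the talk additionally handle point-process (spiking) observations, state-space dynamics, and adaptive (online) estimation, but the basic idea of window-by-window sparse regression conveys what a "dynamic functional network" estimate looks like.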