
Information Systems Lab (ISL) Colloquium

ISL Colloquium presents "Information-theoretic Privacy"

Topic: 
Information-theoretic Privacy: A holistic view via leakage measures, robust privacy guarantees, and adversarial models for mechanism design
Abstract / Description: 

Privacy is the problem of ensuring limited leakage of information about sensitive features while sharing information (utility) about non-private features to legitimate data users. Even as differential privacy has emerged as a strong desideratum for privacy, there is a need for varied yet rigorous approaches for applications with different requirements. This talk presents an information-theoretic approach and takes a holistic view focusing on leakage measures, design of privacy mechanisms, and verifiable implementations using generative adversarial models. Specifically, we introduce maximal alpha leakage as a new class of adversarially motivated tunable leakage measures that quantifies the maximal gain of an adversary in refining a tilted belief of any (potentially random) function of a dataset conditioned on a disclosed dataset. The choice of alpha determines the specific adversarial action ranging from refining a belief for alpha = 1 to guessing the best posterior for alpha = ∞, and for these extremal values this measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. The problem of guaranteeing privacy can then be viewed as one of designing a randomizing mechanism that minimizes alpha leakage subject to utility constraints. We then present bounds on the robustness of privacy guarantees that can be made when designing mechanisms from a finite number of samples. Finally, we focus on a data-driven approach, generative adversarial privacy (GAP), to design privacy mechanisms using neural networks. GAP is modeled as a constrained minimax game between a privatizer (intent on publishing a utility-guaranteeing learning representation that limits leakage of the sensitive features) and an adversary (intent on learning the sensitive features). We demonstrate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI dataset. 
Time permitting, we will briefly discuss the learning-theoretic underpinnings of GAP as well as connections to the problem of algorithmic fairness.
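
The two endpoints of the alpha range can be checked numerically. The sketch below computes mutual information (the alpha = 1 endpoint) and maximal leakage (the alpha = ∞ endpoint, via its known closed form log Σ_y max_x P(y|x)) for a small joint distribution; the distribution itself is made up for illustration and is not from the talk.

```python
import numpy as np

# Hypothetical joint distribution P(X, Y): rows index X, columns index Y.
P = np.array([[0.30, 0.10],
              [0.05, 0.25],
              [0.10, 0.20]])

Px = P.sum(axis=1, keepdims=True)   # marginal P(X), shape (3, 1)
Py = P.sum(axis=0, keepdims=True)   # marginal P(Y), shape (1, 2)

# Mutual information I(X;Y) in nats (the alpha = 1 endpoint).
mi = float(np.sum(P * np.log(P / (Px * Py))))

# Maximal leakage (the alpha = infinity endpoint): log sum_y max_x P(y|x).
P_y_given_x = P / Px
max_l = float(np.log(P_y_given_x.max(axis=0).sum()))

print(f"I(X;Y) = {mi:.4f} nats")
print(f"MaxL   = {max_l:.4f} nats")
```

Since maximal leakage upper-bounds mutual information, `max_l` should never fall below `mi`, which makes a handy sanity check when trying other distributions.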

This work is a result of multiple collaborations: (a) maximal alpha leakage with J. Liao (ASU), O. Kosut (ASU), and F. P. Calmon (Harvard); (b) robust mechanism design with M. Diaz (ASU), H. Wang (Harvard), and F. P. Calmon (Harvard); and (c) GAP with C. Huang (ASU), P. Kairouz (Google), X. Chen (Stanford), and R. Rajagopal (Stanford).

Date and Time: 
Wednesday, December 5, 2018 - 4:15pm
Venue: 
Packard 202

ISL Colloquium presents "Estimation After Parameter Selection"

Topic: 
Estimation After Parameter Selection
Abstract / Description: 

In many practical parameter estimation problems, such as medical experiments and cognitive radio communications, parameter selection is performed prior to estimation. The selection process has a major impact on subsequent estimation by introducing a selection bias and creating coupling between otherwise decoupled parameters. As a result, classical estimation theory may be inappropriate and inaccurate, and a new methodology is needed. In this study, the problem of estimating a preselected unknown deterministic parameter, chosen from a parameter set based on a predetermined data-based selection rule, Ψ, is considered. In this talk, I will present a general non-Bayesian estimation theory for estimation after parameter selection, including estimation methods, performance analysis, and adaptive sampling strategies. The new theory is based on the post-selection mean-square-error (PSMSE) criterion as a performance measure instead of the commonly used mean-square-error (MSE). We derive the corresponding Cramér-Rao-type bound on the PSMSE of any Ψ-unbiased estimator, where Ψ-unbiasedness is in the Lehmann-unbiasedness sense. Then, the post-selection maximum-likelihood (PSML) estimator is presented and its Ψ-efficiency properties are demonstrated. Practical implementations of the PSML estimator are proposed as well. As time permits, I will discuss how similar ideas can be applied to estimation after model selection and to estimation in Good-Turing models.

Date and Time: 
Monday, December 3, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium presents "Transportation Systems Resilience: Capacity-Aware Control and Value of Information"

Topic: 
Transportation Systems Resilience: Capacity-Aware Control and Value of Information
Abstract / Description: 

Resilience of a transportation system is its ability to operate under adverse events like incidents and storms. Availability of real-time traffic data provides new opportunities for predicting travelers' routing behavior and implementing network control operations during adverse events. In this talk, we will discuss two problems: controlling highway corridors in response to disruptions and modeling strategic route choices of travelers with heterogeneous access to incident information. Firstly, we present an approach to designing control strategies for highway corridors facing stochastic capacity disruptions such as random incidents and vehicle platoons/moving bottlenecks. We exploit the properties of traffic flow dynamics under recurrent incidents to derive verifiable conditions for stability of traffic queues, and also obtain guarantees on the system throughput. Secondly, we introduce a routing game in which travelers receive asymmetric and incomplete information about an uncertain network state, and make route choices based on their private beliefs about the state and other travelers' behavior. We study the effects of information heterogeneity on travelers' equilibrium route choices and costs. Our analysis is useful for evaluating the value of receiving state information for travelers, which can be positive, zero, or negative in equilibrium. These results demonstrate the advantages of considering network state uncertainty in both strategic and operational aspects of system resilience.
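
The queue-stability idea can be illustrated with a toy point-queue model under random capacity drops (all numbers hypothetical; this is a sketch of the stability condition, not the talk's control design): as long as demand stays below the time-average capacity, the queue stays bounded.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 50_000
demand = 0.7                                         # arrivals per time step
# Incidents occur with probability 0.2 per step and cut capacity to 0.3.
capacity = np.where(rng.random(T) < 0.2, 0.3, 1.0)

q = np.zeros(T)
for t in range(1, T):
    q[t] = max(q[t - 1] + demand - capacity[t], 0.0)  # point-queue dynamics

# Mean capacity = 0.2 * 0.3 + 0.8 * 1.0 = 0.86 > demand, so the queue
# drains between incidents and its time average stays small.
print("mean capacity:", capacity.mean())
print("time-average queue:", q.mean())
```

Raising `demand` above the mean capacity (e.g. to 0.9) makes the same loop produce a queue that grows roughly linearly in time, which is the instability regime.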

Date and Time: 
Friday, November 30, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium presents "Deconstructing the Blockchain to Approach Physical Limits"

Topic: 
Deconstructing the Blockchain to Approach Physical Limits
Abstract / Description: 

The concept of a blockchain was invented by Satoshi Nakamoto to maintain a distributed ledger for an electronic payment system, Bitcoin. In addition to its security, important performance measures of a blockchain protocol are its transaction throughput, confirmation latency and confirmation reliability. These measures are limited by two underlying physical network attributes: communication capacity and speed-of-light propagation delay. Existing systems operate far away from these physical limits. In this work we introduce Prism, a new blockchain protocol, which can provably achieve 1) security against up to 50% adversarial hashing power; 2) optimal throughput up to the capacity C of the network; 3) confirmation latency for honest transactions proportional to the propagation delay D, with confirmation error probability exponentially small in the bandwidth-delay product CD; 4) eventual total ordering of all transactions. Our approach to the design of this protocol is based on deconstructing the blockchain into its basic functionalities and systematically scaling up these functionalities to approach their physical limits.

This is joint work with Vivek Bagaria, David Tse, Giulia Fanti and Pramod Viswanath.

Date and Time: 
Thursday, November 29, 2018 - 3:00pm
Venue: 
Packard 101

ISL Colloquium presents "Artwork Personalization via Online Learning"

Topic: 
Artwork Personalization via Online Learning
Abstract / Description: 

For many years, the main goal of the Netflix recommendation engine has been to get the right titles in front of each member at the right time. Today, we use nonlinear, probabilistic and deep learning approaches to make better and better rankings of our movies and TV shows for each user. But the job of recommendation does not end there. The homepage should be able to convey to the member enough evidence of why this is a good title for her, especially for shows that the member has never heard of. One way to address this challenge is to personalize the way we portray the titles on our service. Our image personalization engine is driven by online learning and contextual bandits. Traditional bandits frameworks make strong assumptions which do not apply when predictions entail actions in the real world. We will discuss how learning algorithms can be augmented to better deal with causality, bias, and noncompliance.
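
A minimal version of the online-learning machinery can be sketched with a plain Bernoulli Thompson-sampling bandit (no context, made-up click-through rates; a generic illustration, not Netflix's engine):

```python
import numpy as np

rng = np.random.default_rng(2)

true_ctr = np.array([0.04, 0.06, 0.09])   # hypothetical per-artwork CTRs
alpha = np.ones(3)                         # Beta posterior: successes + 1
beta = np.ones(3)                          # Beta posterior: failures + 1
pulls = np.zeros(3, dtype=int)

for _ in range(20_000):
    arm = rng.beta(alpha, beta).argmax()   # sample a CTR per arm, pick best
    reward = rng.random() < true_ctr[arm]  # simulate a click
    alpha[arm] += reward
    beta[arm] += 1 - reward
    pulls[arm] += 1

print("pulls per arm:", pulls)             # concentrates on the best artwork
```

After enough impressions, the posterior sampling concentrates play on the highest-CTR arm while still occasionally exploring the others, which is the basic trade-off behind the personalization engine described above.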

Date and Time: 
Tuesday, November 13, 2018 - 4:15pm
Venue: 
Packard 202

ISL Colloquium presents "Estimating the Information Flow in Deep Neural Networks"

Topic: 
Estimating the Information Flow in Deep Neural Networks
Abstract / Description: 

This talk will discuss the flow of information and the evolution of internal representations during deep neural network (DNN) training, aiming to demystify the compression aspect of the information bottleneck theory. The theory suggests that DNN training comprises a rapid fitting phase followed by a slower compression phase, in which the mutual information I(X;T) between the input X and internal representations T decreases. Several papers observe compression of estimated mutual information on different DNN models, but the true I(X;T) over these networks is provably either constant (discrete X) or infinite (continuous X). We will explain this discrepancy between theory and experiments, and clarify what these past works actually measured.

To this end, an auxiliary (noisy) DNN framework will be introduced, in which I(X;T) is a meaningful quantity that depends on the network's parameters. We will show that this noisy framework is a good proxy for the original (deterministic) system both in terms of performance and the learned representations. To accurately track I(X;T) over noisy DNNs, a differential entropy estimator tailored to exploit the DNN's layered structure will be developed, and theoretical guarantees on the associated minimax risk will be provided. Using this estimator along with a certain analogy to an information-theoretic communication problem, we will elucidate the geometric mechanism that drives compression of I(X;T) in noisy DNNs. Based on these findings, we will circle back to deterministic networks and explain what the past observations of compression were in fact showing. Future research directions inspired by this study, aimed at a comprehensive information-theoretic understanding of deep learning, will also be discussed.
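
Why noise makes I(X;T) meaningful can be seen already in the scalar linear case. For T = wX + Z with independent Gaussians X ~ N(0, var_x) and Z ~ N(0, var_z), the standard Gaussian-channel formula gives I(X;T) = ½ log(1 + w² var_x / var_z), which is finite for any var_z > 0 and diverges as var_z → 0. The numbers below are arbitrary; this is a toy check of that formula, not the talk's estimator, which handles general layered networks.

```python
import numpy as np

# One noisy linear "layer": T = w * X + Z.
w, var_x, var_z = 2.0, 1.0, 0.5

# Closed-form mutual information for the scalar Gaussian channel, in nats.
mi_closed = 0.5 * np.log(1 + w**2 * var_x / var_z)
print("I(X;T) =", mi_closed, "nats")

# In the deterministic limit var_z -> 0, I(X;T) blows up, matching the
# observation that I(X;T) is infinite for continuous X without noise.
for vz in (0.5, 0.05, 0.005):
    print(f"var_z = {vz}: I(X;T) = {0.5 * np.log(1 + w**2 * var_x / vz):.3f}")
```

The dependence on `w` is the point: in the noisy framework, I(X;T) actually changes as the network's parameters train, so tracking it over epochs is well-posed.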

Date and Time: 
Friday, November 9, 2018 - 1:15pm
Venue: 
Packard 202

ISL Colloquium presents "When do neural networks have bad local minima, and when not?"

Topic: 
When do neural networks have bad local minima, and when not?
Abstract / Description: 

To explain the recent success of neural networks, researchers have conjectured that all local minima are global minima despite the non-convexity of the problem. Is this really true? Is this just hand-wavy intuition that is roughly true in special cases, or can it be made rigorous in a broad setting?

In this talk, instead of explaining "why neural-nets are good", we try to understand "when neural-nets are good, and when not", with a restricted definition of "good" as "every local-min is global-min". We focus on the binary classification problem and discuss how architecture and data affect the landscape. On the positive side, we prove that no bad local minima exist under reasonable assumptions on the neuron types, the neural-net structure, the loss function, and the dataset. On the negative side, we provide dozens of counterexamples that show the necessity of most assumptions.

Our approach can be viewed as a game of "local-min attack" and "defense". An attacker tries to construct examples in which bad local minima exist, and the defender modifies the setting to eliminate them. For instance, the attacker constructs bad local minima for a 1-hidden-layer ReLU network with linearly separable data; the defender then proves that smooth versions of ReLU eliminate them. Finally, we present a strong defense consisting of a special neuron and a special regularizer that can eliminate bad local minima for a deep neural-net in the realizable case.
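
A minimal version of the attack/defense flavor, using our own toy construction rather than any of the paper's counterexamples: a single ReLU neuron fit to one data point has a "dead" stationary point with suboptimal loss, and replacing ReLU with the smooth softplus restores a descent direction there.

```python
import numpy as np

# Fit one neuron, prediction = activation(w * x), to the point (x, y) = (1, 1)
# under squared loss. The global minimum is at w = 1 with loss 0.
x, y = 1.0, 1.0

def relu_loss_grad(w):
    pred = max(w * x, 0.0)
    grad = (pred - y) * x if w * x > 0 else 0.0   # ReLU derivative is 0 when dead
    return 0.5 * (pred - y) ** 2, grad

def softplus_loss_grad(w):
    pred = np.log1p(np.exp(w * x))                 # softplus(w * x)
    grad = (pred - y) * x / (1 + np.exp(-w * x))   # chain rule via sigmoid
    return 0.5 * (pred - y) ** 2, grad

# At w = -1 the ReLU neuron is dead: gradient 0 at a suboptimal loss of 0.5...
print("ReLU     (loss, grad):", relu_loss_grad(-1.0))
# ...while softplus still has a nonzero gradient there, so descent can escape.
print("softplus (loss, grad):", softplus_loss_grad(-1.0))
```

This mirrors, in one dimension, the talk's attack (dead-ReLU regions creating bad stationary points) and defense (smoothing the activation).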

Joint work with Shiyu Liang, Yixuan Li, Jason Lee and R. Srikant.

Date and Time: 
Thursday, November 8, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium presents "Taming the Devil of Gradient-based Optimization Methods with the Angel of Differential Equations"

Topic: 
Taming the Devil of Gradient-based Optimization Methods with the Angel of Differential Equations
Abstract / Description: 

In this talk, we use ordinary differential equations to model, analyze, and interpret gradient-based optimization methods. In the first part of the talk, we derive a second-order ODE that is the limit of Nesterov's accelerated gradient method for non-strongly convex objectives (NAG-C). The continuous-time ODE is shown to allow for a better understanding of NAG-C and, as a byproduct, we obtain a family of accelerated methods with similar convergence rates. In the second part, we begin by recognizing that existing ODEs in the literature are inadequate to distinguish between two fundamentally different methods, Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In response, we derive high-resolution ODEs as more accurate surrogates for the three aforementioned methods. These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms by translating properties of the amenable ODEs into those of their discrete counterparts. As the first application of this framework, we identify the effect of a term referred to as 'gradient correction', present in NAG-SC but not in the heavy-ball method, shedding light on why the former achieves acceleration while the latter does not. Moreover, in this high-resolution ODE framework, NAG-C is shown to minimize the squared gradient norm at the inverse cubic rate, which is the sharpest known rate for NAG-C itself. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the same accelerated convergence rates as NAG-C for smooth convex functions. This is based on joint work with Stephen Boyd, Emmanuel Candes, Simon Du, Michael Jordan, and Bin Shi.
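
The low-resolution ODE from the first part, X'' + (3/t) X' + grad f(X) = 0, can be integrated numerically to watch the O(1/t^2) decay of the objective on a toy quadratic. The step size, horizon, and test function below are made up; this is a crude forward-Euler sketch, not the paper's analysis.

```python
# Forward-Euler integration of the NAG-C limiting ODE
#   X'' + (3/t) X' + grad f(X) = 0
# for f(x) = x^2 / 2, starting from x(t0) = 1 with zero velocity.
grad = lambda x: x        # gradient of the toy quadratic

dt, t0 = 1e-3, 1e-3
x, v, t = 1.0, 0.0, t0
for _ in range(100_000):  # integrate out to t = 100
    a = -(3.0 / t) * v - grad(x)      # acceleration from the ODE
    x, v, t = x + dt * v, v + dt * a, t + dt

f_final = 0.5 * x * x
# The ODE analysis gives f(X(t)) - f* = O(1/t^2), so by t = 100 the
# objective should be far below its initial value of 0.5.
print("f at t = 100:", f_final)
```

The damped oscillations visible if one logs `x` along the trajectory are the continuous-time counterpart of the non-monotone behavior of Nesterov's discrete iterates.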

Date and Time: 
Thursday, November 1, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Battling Demons in Peer Review

Topic: 
Battling Demons in Peer Review
Abstract / Description: 

Peer review is the backbone of scholarly research. It is however faced with a number of challenges (or "demons") such as subjectivity, bias/miscalibration, noise, and strategic behavior. The growing number of submissions in many areas of research such as machine learning has significantly increased the scale of these demons. This talk will present some principled and practical approaches to battle these demons in peer review:

(1) Subjectivity: How to ensure that all papers are judged by the same yardstick?

(2) Bias/miscalibration: How to use ratings in presence of arbitrary or adversarial miscalibration?

(3) Noise: How to assign reviewers to papers to simultaneously ensure fair and accurate evaluations in the presence of review noise?

(4) Strategic behavior: How to insulate peer review from strategic behavior of author-reviewers?

The work uses tools from statistics and learning theory, social choice theory, information theory, game theory, and decision theory. (No prior knowledge of these topics will be assumed.)


The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Thursday at 4:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Martin Zhang, Farzan Farnia, and Zhengyuan Zhou. To suggest speakers, please contact any of the students.

Date and Time: 
Thursday, October 25, 2018 - 4:15pm
Venue: 
Packard 202

ISL Colloquium: New Breakthroughs in Scheduling Theory

Topic: 
New Breakthroughs in Scheduling Theory
Abstract / Description: 

Scheduling policies are at the heart of computer systems. The right scheduling policy can dramatically reduce response times, ensure fairness, provide class-based priority, etc., without requiring additional resources. While stochastic response time analysis of different scheduling policies has been the focus of thousands of theoretical papers, results are limited to analyzing a relatively small number of "simple" scheduling policies.

In this talk, we introduce the SOAP class of scheduling policies: Schedule Ordered by Age-based Priority. The SOAP policies include almost all scheduling policies in the literature as well as an infinite number of variants which have never been analyzed, or maybe even conceived. SOAP policies include the highly intractable SERPT policy (Shortest-Expected-Remaining-Processing-Time), as well as the famously complex Gittins Index policy. SOAP also includes all sorts of "combination" policies, like a policy that performs Shortest-Job-First on jobs with known size and Foreground-Background scheduling on jobs with unknown size, or a policy that is allowed to preempt jobs only when they hit certain ages. In this talk we present a stochastic response time analysis of all SOAP policies via a single universal framework.
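
The gains such policies target are visible even in the simplest non-trivial comparison. The sketch below (a generic illustration, not the SOAP framework itself) computes the mean response time of FCFS versus Shortest-Job-First on a batch of jobs released at time 0:

```python
def mean_response(sizes):
    """Mean response time when jobs run to completion in the given order."""
    clock, total = 0.0, 0.0
    for s in sizes:
        clock += s            # job finishes at the current clock + its size
        total += clock        # response time = completion time (all arrive at 0)
    return total / len(sizes)

jobs = [8.0, 1.0, 4.0, 2.0]   # hypothetical job sizes

fcfs = mean_response(jobs)            # serve in arrival order
sjf = mean_response(sorted(jobs))     # serve shortest first

print("FCFS mean response time:", fcfs)
print("SJF  mean response time:", sjf)
```

Running short jobs first strictly reduces the mean response time here without any extra resources, which is the basic effect that age-based priority policies generalize.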

Joint work with: Ziv Scully and Alan Scheller-Wolf

Appeared in ACM SIGMETRICS/POMACS 2018. APS 2018 Best Student Paper Finalist.


Date and Time: 
Thursday, October 11, 2018 - 4:15pm
Venue: 
Packard 202
