EE Student Information


EE Student Information, Spring Quarter through Academic Year 2020-2021: FAQs and Updated EE Course List.

Updates will be posted on this page, as well as emailed to the EE student mail list.

Please see Stanford University Health Alerts for course and travel updates.

As always, use your best judgment and consider your own and others' well-being at all times.

Information Systems Lab (ISL) Colloquium

ISL Colloquium presents "Computational Barriers to Estimation from Low-Degree Polynomials"

Topic: 
Computational Barriers to Estimation from Low-Degree Polynomials
Abstract / Description: 

One fundamental goal of high-dimensional statistics is to detect or recover planted structure (such as a low-rank matrix) hidden in noisy data. A growing body of work studies low-degree polynomials as a restricted model of computation for such problems. Many leading algorithmic paradigms (such as spectral methods and approximate message passing) can be captured by low-degree polynomials, and thus, lower bounds against low-degree polynomials serve as evidence for computational hardness of statistical problems.

Prior work has studied the power of low-degree polynomials for the detection (i.e. hypothesis testing) task. In this work, we extend these methods to address problems of estimating (i.e. recovering) the planted signal instead of merely detecting its presence. For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any degree-D polynomial. These are the first results to establish low-degree hardness of recovery problems for which the associated detection problem is easy. As applications, we study the planted submatrix and planted dense subgraph problems, resolving (in the low-degree framework) open problems about the computational complexity of recovery in both cases.

Joint work with Tselil Schramm, available at: https://arxiv.org/abs/2008.02269
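
The "best possible mean squared error achievable by any degree-D polynomial" benchmark is easy to probe numerically in a toy setting. Below is a minimal Monte Carlo sketch under a hypothetical univariate signal-plus-noise model (not the planted-submatrix setting of the paper): it fits the least-squares-optimal degree-D polynomial estimator of the signal and reports its empirical MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical univariate "signal plus noise" model, for illustration only:
# X is a Rademacher signal, Y = X + Gaussian noise.
n = 200_000
X = rng.choice([-1.0, 1.0], size=n)
Y = X + rng.normal(scale=2.0, size=n)

# Best degree-D polynomial estimator of X from Y, found by least squares
# over the monomial features 1, Y, ..., Y^D; report its empirical MSE.
for D in [1, 3, 5, 7]:
    features = np.vander(Y, D + 1, increasing=True)  # columns Y^0 .. Y^D
    coeffs, *_ = np.linalg.lstsq(features, X, rcond=None)
    mse = np.mean((features @ coeffs - X) ** 2)
    print(f"degree {D}: empirical MSE of best polynomial ~ {mse:.4f}")
```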

Date and Time: 
Thursday, October 8, 2020 - 4:30pm
Venue: 
Zoom registration required

ISL Colloquium welcomes Prof. Ramesh Johari

Topic: 
TBA
Abstract / Description: 

The ISL Colloquium meets weekly during the academic year. Seminars are each Thursday at 4:30pm PT, unless indicated otherwise.

Until further notice, the ISL Colloquium convenes exclusively via Zoom (on Thursdays at 4:30pm PT) due to the ongoing pandemic. To avoid "Zoom-bombing", we ask attendees to register at https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ to receive the Zoom meeting details via email.

Date and Time: 
Thursday, November 19, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium welcomes Prof. David Woodruff

Topic: 
TBA
Abstract / Description: 

Date and Time: 
Friday, November 13, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Connecting Online Optimization and Control"

Topic: 
Connecting Online Optimization and Control
Abstract / Description: 

Online optimization is a powerful framework in machine learning that has seen numerous applications to problems in distributed systems, robotics, autonomous planning, and sustainability. In my group at Caltech, we began by applying online optimization to 'right-size' capacity in data centers a decade ago; since then, we have used tools from online optimization to develop algorithms for demand response, energy storage management, video streaming, drone navigation, autonomous driving, and beyond. In this talk, I will highlight both the applications of online optimization and the theoretical progress these applications have driven. Over the past decade, the community has moved from designing algorithms for one-dimensional problems with restrictive assumptions on costs to general results for high-dimensional non-convex problems that highlight the role of constraints, predictions, delay, and more. In the last two years, a connection between online optimization and adversarial control has emerged, and I will highlight how advances in online optimization can lead to advances in the control of linear dynamical systems.
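
As a concrete illustration of the 'right-sizing' setup, here is a minimal smoothed online optimization sketch. The demand trace and parameters are hypothetical, and the greedy rule is for illustration only, not an algorithm from the talk: it balances a quadratic hitting cost against a switching cost, which for this objective amounts to soft-thresholding the move toward the current target.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D "right-sizing" instance: y[t] is the ideal capacity at
# time t, the hitting cost is (x - y[t])^2, and changing the provisioning
# incurs a switching cost beta * |x_t - x_{t-1}|.
T, beta = 100, 0.5
y = np.clip(np.cumsum(rng.normal(size=T)), 0.0, None)  # drifting demand

x_prev, total = 0.0, 0.0
for t in range(T):
    # Greedily minimize the current hitting + switching cost. For this
    # quadratic-plus-absolute-value objective, the minimizer moves toward
    # y[t] but stops beta/2 short (a soft-threshold / dead-zone rule).
    step = y[t] - x_prev
    x = x_prev + np.sign(step) * max(abs(step) - beta / 2, 0.0)
    total += (x - y[t]) ** 2 + beta * abs(x - x_prev)
    x_prev = x

print(f"total cost incurred by the greedy policy: {total:.2f}")
```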

Date and Time: 
Thursday, October 22, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Strategies for Active Machine Learning"

Topic: 
Strategies for Active Machine Learning
Abstract / Description: 

The field of Machine Learning (ML) has advanced considerably in recent years, but mostly in well-defined domains and often using huge amounts of human-labeled training data. Machines can recognize objects in images and translate text, but they must be trained with more images and text than a person can see in nearly a lifetime. The computational complexity of training has been offset by recent technological advances, but the cost of training data is measured in the human effort of labeling it. People are not getting faster or cheaper, so generating labeled training datasets has become a major bottleneck in ML pipelines.

Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling so that human time is not wasted labeling irrelevant, redundant, or trivial examples. This talk explores the development of active ML theory and methods over the past decade, including a new approach applicable to kernel methods and neural networks, which views the learning problem through the lens of representer theorems. This perspective highlights the effect of adding a new training example on the functional representation, leading to a new criterion for actively selecting examples.
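
The classic illustration of why adaptive label selection pays off is learning a one-dimensional threshold, where uncertainty sampling reduces to binary search. The sketch below uses a hypothetical pool-based setup (it is not the representer-theorem method from the talk) and locates the threshold with logarithmically many labels.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pool-based setup: a 1-D threshold classifier, where each
# label costs human effort. Points are sorted so label queries can bisect.
pool = np.sort(rng.uniform(-1, 1, size=200))
true_labels = (pool > 0.137).astype(int)       # unknown threshold to learn

labeled = {0: int(true_labels[0]), len(pool) - 1: int(true_labels[-1])}
while True:
    # Uncertainty sampling: the hypothesis is only uncertain between the
    # rightmost known 0 and the leftmost known 1, so query that interval's
    # midpoint -- binary search, i.e., exponentially fewer labels.
    lo = max(i for i, y in labeled.items() if y == 0)
    hi = min(i for i, y in labeled.items() if y == 1)
    query = (lo + hi) // 2
    if query in labeled:
        break
    labeled[query] = int(true_labels[query])

print(f"threshold located near x = {pool[hi]:.3f} "
      f"using {len(labeled)} labels instead of {len(pool)}")
```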

Date and Time: 
Thursday, October 15, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Optimizing the Cost of Distributed Learning"

Topic: 
Optimizing the Cost of Distributed Learning
Abstract / Description: 

As machine learning models are trained on ever-larger and more complex datasets, it has become standard to distribute this training across multiple physical computing devices. Such an approach offers a number of potential benefits, including reduced training time and storage needs due to parallelization. Distributed stochastic gradient descent (SGD) is a common iterative framework for training machine learning models: in each iteration, local workers compute parameter updates on a local dataset. These are then sent to a central server, which aggregates the local updates and pushes global parameters back to local workers to begin a new iteration. Distributed SGD, however, can be expensive in practice: training a typical deep learning model might require several days and thousands of dollars on commercial cloud platforms. Cloud-based services that allow occasional worker failures (e.g., locating some workers on Amazon spot or Google preemptible instances) can reduce this cost, but may also reduce the training accuracy. We quantify the effect of worker failure and recovery rates on the model accuracy and wall-clock training time, and show both analytically and experimentally that these performance bounds can be used to optimize the SGD worker configurations. In particular, we can optimize the number of workers that utilize spot or preemptible instances. Compared to heuristic worker configuration strategies and standard on-demand instances, we dramatically reduce the cost of training a model, with modest increases in training time and the same level of accuracy. Finally, we discuss implications of our work for federated learning environments, which use a variant of distributed SGD. Two major challenges in federated learning are unpredictable worker failures and a heterogeneous (non-i.i.d.) distribution of data across the workers, and we show that our characterization of distributed SGD's performance under worker failures can be adapted to this setting.
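
A toy simulation makes the accuracy/cost trade-off concrete. The sketch below runs synchronous distributed SGD on a least-squares model in which each spot-instance worker is preempted with some probability each round, and the server averages whatever gradients survive. The failure rate, minibatch sizes, and learning rate are hypothetical, not numbers from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy synchronous distributed SGD on least squares, where each "spot"
# worker fails (contributes no gradient) with probability p_fail per round.
d, n_workers, p_fail, lr = 10, 8, 0.3, 0.1
w_true = rng.normal(size=d)
w = np.zeros(d)

for step in range(200):
    grads = []
    for _ in range(n_workers):
        if rng.random() < p_fail:      # worker preempted this round
            continue
        A = rng.normal(size=(32, d))   # worker's local minibatch
        b = A @ w_true + 0.1 * rng.normal(size=32)
        grads.append(A.T @ (A @ w - b) / 32)
    if grads:                          # server aggregates surviving updates
        w -= lr * np.mean(grads, axis=0)

print(f"final error ||w - w*|| = {np.linalg.norm(w - w_true):.4f}")
```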

Date and Time: 
Thursday, October 1, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Quantum Rényi relative entropies and their use"

Topic: 
Quantum Rényi relative entropies and their use
Abstract / Description: 

The past decade of research in quantum information theory has witnessed extraordinary progress in understanding communication over quantum channels, due in large part to quantum generalizations of the classical Rényi relative entropy. One generalization, known as the sandwiched Rényi relative entropy, is used to characterize asymptotic behavior in quantum hypothesis testing; it has also been used to establish strong converse theorems (fundamental limits on communication capacity) for a variety of quantum communication tasks. Another generalization, known as the geometric Rényi relative entropy, is used to establish strong converse theorems for feedback-assisted protocols, which apply to quantum key distribution and distributed quantum computing scenarios. Finally, a generalization now known as the Petz–Rényi relative entropy plays a critical role in achievability statements in quantum communication. In this talk, I will review these quantum generalizations of the classical Rényi relative entropy, discuss their relevant information-theoretic properties, and describe the applications mentioned above.
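
For reference, the sandwiched Rényi relative entropy of states rho and sigma is D~_alpha(rho||sigma) = (1/(alpha-1)) log Tr[ (sigma^((1-alpha)/(2 alpha)) rho sigma^((1-alpha)/(2 alpha)))^alpha ]. The sketch below simply evaluates that definition numerically for small density matrices (assuming sigma is full rank, as required for alpha > 1); for commuting states it reduces to the classical Rényi divergence.

```python
import numpy as np

def mpow(A, p):
    """Fractional power of a Hermitian positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    """D~_alpha(rho||sigma); sigma must be full rank for alpha > 1."""
    a = (1.0 - alpha) / (2.0 * alpha)
    s = mpow(sigma, a)
    return np.log(np.trace(mpow(s @ rho @ s, alpha)).real) / (alpha - 1.0)

# Commuting example, where the quantity reduces to the classical Renyi
# divergence (1/(alpha-1)) * log(sum_i p_i^alpha q_i^(1-alpha)):
rho = np.diag([0.8, 0.2])
sigma = np.diag([0.5, 0.5])
print(sandwiched_renyi(rho, sigma, alpha=2.0))   # log(1.36) ~ 0.3075
```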

Date and Time: 
Thursday, September 17, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Federated Learning at Google and Beyond"

Topic: 
Federated Learning at Google and Beyond
Abstract / Description: 

Federated Learning enables mobile devices to collaboratively learn a shared prediction model or analytic while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud. It embodies the principles of focused collection and data minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning. In this talk, I will discuss: (1) how federated learning differs from more traditional distributed machine learning paradigms, focusing on the main defining characteristics and challenges of the federated learning setting; (2) practical algorithms for federated learning that address the unique challenges of this setting; (3) extensions to federated learning, including differential privacy, secure aggregation, and compression for model updates; and (4) a range of valuable research directions that could have significant real-world impact.
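
As a minimal sketch of the federated averaging idea behind points (1) and (2) -- not Google's production system, and with hypothetical client sizes and rates -- the toy below has heterogeneous clients take a few local gradient steps on their own data, after which the server averages their updates weighted by client dataset size.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy FedAvg-style sketch: each client's data stays local (non-i.i.d.
# across clients); the server only sees model updates, which it averages
# weighted by client dataset size.
n_clients, rounds, lr, local_steps = 5, 30, 0.1, 5
client_data = [rng.normal(loc=float(c), scale=1.0, size=20 * (c + 1))
               for c in range(n_clients)]

w = 0.0                                        # shared scalar model
for _ in range(rounds):
    updates, sizes = [], []
    for data in client_data:
        w_local = w
        for _ in range(local_steps):           # local SGD on squared error
            w_local -= lr * 2 * (w_local - data.mean())
        updates.append(w_local - w)
        sizes.append(len(data))
    w += np.average(updates, weights=sizes)    # server-side aggregation

target = np.average([d.mean() for d in client_data],
                    weights=[len(d) for d in client_data])
print(f"global model w = {w:.3f} (size-weighted mean of client means = {target:.3f})")
```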

Date and Time: 
Friday, February 28, 2020 - 1:15pm
Venue: 
Packard 202

RL forum presents "Temporal Abstraction in Reinforcement Learning with the Successor Representation"

Topic: 
Temporal Abstraction in Reinforcement Learning with the Successor Representation
Abstract / Description: 

Reasoning at multiple levels of temporal abstraction is one of the key abilities for artificial intelligence. In the reinforcement learning problem, this is often instantiated with the options framework. Options allow agents to make predictions and to operate at different levels of abstraction within an environment. Nevertheless, when a reasonable set of options is not known beforehand, there are no definitive answers for characterizing which options one should consider. Recently, a new paradigm for option discovery has been introduced. This paradigm is based on the successor representation (SR), which defines state generalization in terms of how similar successor states are. In this talk, I'll discuss the existing methods from this paradigm, providing a big-picture look at how the SR can be used in the options framework. I'll present methods for discovering "bottleneck" options, as well as options that improve an agent's exploration capabilities. I'll also discuss the option keyboard, which uses the SR to extend a finite set of options to a combinatorially large counterpart without additional learning.
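
A small sketch of the SR object itself: for a fixed policy with transition matrix P and discount gamma, the successor representation is Psi = sum_t gamma^t P^t = (I - gamma P)^{-1}, whose rows record discounted expected visit counts to successor states. The option-discovery methods in the talk build on this matrix; the random-walk chain below is just a hypothetical example environment.

```python
import numpy as np

# Successor representation of a fixed random-walk policy on a 6-state
# chain with reflecting endpoints: Psi = (I - gamma * P)^{-1}.
n, gamma = 6, 0.9
P = np.zeros((n, n))
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5

psi = np.linalg.inv(np.eye(n) - gamma * P)
print(np.round(psi, 2))   # nearby states have similar rows -> SR similarity
```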

Date and Time: 
Tuesday, February 25, 2020 - 2:00pm
Venue: 
Packard 202

ISL Colloquium presents "Fully Convolutional Pixelwise Context-Adaptive Denoiser"

Topic: 
Fully Convolutional Pixelwise Context-Adaptive Denoiser
Abstract / Description: 

Denoising is a classical problem in signal processing and information theory, and a variety of methods have been applied to it over several decades. Recently, supervised neural network-based methods have achieved impressive denoising performance, significantly surpassing classical approaches such as prior- or optimization-based denoisers. However, these methods have two drawbacks: they are not adaptive, i.e., the neural network cannot correct itself when there is a distributional mismatch between training and test data, and they require clean source data and an exact noise model for training, which is not always available in practical scenarios. In this talk, I will introduce a framework that tackles both drawbacks jointly, based on an unbiased estimate of the loss of a particular class of pixelwise context-adaptive denoisers. Using this framework with neural networks to learn the denoisers, I will show that the resulting image denoiser can adapt to mismatched distributions based solely on the given noisy images, and achieves state-of-the-art performance on several benchmark datasets. Moreover, combined with standard noise transform/estimation techniques, I will show that our denoiser can be trained completely blindly, using only the noisy images and without an exact noise model, and yet be very effective for denoising more sophisticated, source-dependent real-world noise, e.g., Poisson-Gaussian noise.

This is joint work with my students, Sungmin Cha and Jaeseok Byun at SKKU.
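
The core idea -- tuning a denoiser from noisy data alone via an unbiased estimate of its loss -- can be illustrated with classical SURE for additive Gaussian noise and a soft-threshold denoiser. This is a stand-in for the neural, context-adaptive denoisers in the talk, and the sparse signal and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sparse signal with additive Gaussian noise. The denoiser is
# a soft-threshold rule, tuned via SURE: an unbiased estimate of its MSE
# computed from the noisy data alone (no clean images needed).
n, sigma = 10_000, 1.0
x = rng.choice([0.0, 5.0], size=n, p=[0.9, 0.1])
y = x + sigma * rng.normal(size=n)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for t in [0.5, 1.0, 2.0, 3.0]:
    xhat = soft(y, t)
    divergence = np.mean(np.abs(y) > t)        # mean of d soft(y,t) / dy
    sure = np.mean((y - xhat) ** 2) - sigma**2 + 2 * sigma**2 * divergence
    true_mse = np.mean((xhat - x) ** 2)        # uses x, for checking only
    print(f"t={t:.1f}: SURE estimate {sure:.3f} vs true MSE {true_mse:.3f}")
```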

Date and Time: 
Friday, February 21, 2020 - 1:15pm
Venue: 
Packard 202
