Information Systems Lab (ISL) Colloquium

ISL Colloquium presents "Information Directed Sampling" - Lecture 1 & 2

Topic: 
Information Directed Sampling
Abstract / Description: 

Tor will give a whirlwind tour of a series of recent papers on the information directed sampling (IDS) algorithm for sequential decision-making. The results come in three flavours: first, generalising and applying the IDS algorithm to problems with a rich information structure, such as convex bandits and partial monitoring; second, showing a connection between the optimisation problem solved by IDS and the optimisation problem that determines the asymptotic lower bound for stochastic structured bandit problems; and third, showing a deep connection between IDS and the mirror descent framework for convex optimisation.
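
For orientation, here is a minimal sketch of the action-selection step at the heart of IDS, assuming per-action estimates of expected regret and information gain are already available; the function and its inputs are illustrative, not code from the papers.

```python
import numpy as np

# Minimal sketch of the IDS action-selection step (illustrative names and
# inputs; not code from the papers). IDS samples from the distribution over
# actions minimising the information ratio
# (expected regret)^2 / expected information gain; the minimiser is known to
# be supported on at most two actions, so a pairwise search suffices.
def ids_distribution(regret, gain, n_grid=1001):
    """regret[i], gain[i]: current estimates of the expected regret and
    information gain of action i (assumed computed from the posterior)."""
    k = len(regret)
    q = np.linspace(0.0, 1.0, n_grid)
    best_ratio, best_dist = np.inf, None
    for i in range(k):
        for j in range(k):
            d = q * regret[i] + (1 - q) * regret[j]   # regret of the mixture
            g = q * gain[i] + (1 - q) * gain[j]       # info gain of the mixture
            ratio = np.where(g > 1e-12, d**2 / np.maximum(g, 1e-12), np.inf)
            m = int(np.argmin(ratio))
            if ratio[m] < best_ratio:
                p = np.zeros(k)
                p[i] += q[m]
                p[j] += 1.0 - q[m]
                best_ratio, best_dist = ratio[m], p
    return best_dist  # sample the next action from this distribution
```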

Date and Time: 
Monday, January 11, 2021 - 10:00am (Lecture 1)
Wednesday, January 13, 2021 - 10:00am (Lecture 2)

ISL Colloquium presents "Accelerating Reinforcement Learning in Emerging Wireless IoT Systems and Applications via System Awareness"

Topic: 
Accelerating Reinforcement Learning in Emerging Wireless IoT Systems and Applications via System Awareness
Abstract / Description: 

Traditional reinforcement learning (RL) algorithms are purely data-driven and operate without any a priori knowledge about the nature of the available actions, the system’s state transition dynamics, or its cost/reward function. This generality allows them to solve a wide variety of problems, but it severely limits their ability to meet the critical requirements of emerging wireless applications, because the algorithms learn inefficiently from their interactions with the environment. In this presentation, we describe foundational advances in system-aware RL that are achieved by systematically integrating basic system models into the learning process. These solutions use real-time data in conjunction with basic knowledge of the underlying communication system, and can achieve orders-of-magnitude improvements in key performance metrics, such as sample, compute, and memory complexity, compared to well-established RL benchmarks. Integration of this framework with deep RL, and its further acceleration via stochastic computing and hardware optimization, are also discussed.
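
As one illustration of how system knowledge can accelerate learning (our sketch of the general idea, not necessarily the speaker's method), a known system model can let a single observed transition update many state-action values at once:

```python
import numpy as np

# Illustration of one way system awareness can accelerate RL (our sketch of
# the general idea, not necessarily the speaker's method): when a known
# system model can infer the costs and transitions of related state-action
# pairs, a single observed transition drives many Q-updates at once
# ("virtual experience"), improving sample complexity.
def q_update(Q, s, a, cost, s_next, related, alpha=0.1, gamma=0.95):
    """Q: (n_states, n_actions) table.  related(s, a, s_next) is a
    hypothetical model-supplied callback yielding extra (s', a', c', s'')
    transitions implied by the one that was actually observed."""
    for sv, av, cv, snv in [(s, a, cost, s_next)] + list(related(s, a, s_next)):
        target = cv + gamma * Q[snv].min()       # cost-minimizing Bellman target
        Q[sv, av] += alpha * (target - Q[sv, av])
    return Q
```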

Date and Time: 
Thursday, January 14, 2021 - 4:30pm
Venue: 
Zoom registration required

ISL Colloquium presents "Learning Convolutions from Scratch"

Topic: 
Learning Convolutions from Scratch
Abstract / Description: 

Convolution is one of the most essential components of architectures used in computer vision. As machine learning moves towards reducing expert bias and learning structure from data, a natural next step seems to be learning convolution-like structures from scratch. This, however, has proven elusive. For example, current state-of-the-art architecture search algorithms use convolution as one of their pre-existing modules rather than learning it from data. In an attempt to understand the inductive bias that gives rise to convolutions, we investigate minimum description length as a guiding principle and show that in some settings, it can indeed be indicative of the performance of architectures. To find architectures with small description length, we propose β-LASSO, a simple variant of the LASSO algorithm that, when applied to fully-connected networks for image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for training fully-connected nets on CIFAR-10 (85.19%), CIFAR-100 (59.56%), and SVHN (94.07%), bridging the gap between fully-connected and convolutional nets.
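
For readers unfamiliar with the LASSO machinery underneath, here is a toy sketch of a thresholded update of the kind β-LASSO builds on; the names, defaults, and exact pruning rule here are illustrative assumptions, and the paper should be consulted for the precise algorithm.

```python
import numpy as np

# Toy sketch of a thresholded LASSO-style update (names, defaults, and the
# exact pruning rule are assumptions for illustration; see the paper for the
# precise beta-LASSO algorithm).
def soft_threshold(w, thresh):
    # Proximal operator of the l1 norm: shrink each weight toward zero.
    return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

def beta_lasso_step(w, grad, lr=0.1, lam=1e-4, beta=50.0):
    w = w - lr * grad                   # ordinary gradient step on the loss
    w = soft_threshold(w, lr * lam)     # l1 shrinkage, as in plain LASSO
    w[np.abs(w) < beta * lam] = 0.0     # more aggressive pruning, scaled by beta
    return w
```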

Date and Time: 
Thursday, November 5, 2020 - 4:30pm
Venue: 
Zoom registration required

ISL Colloquium presents "Computational Barriers to Estimation from Low-Degree Polynomials"

Topic: 
Computational Barriers to Estimation from Low-Degree Polynomials
Abstract / Description: 

One fundamental goal of high-dimensional statistics is to detect or recover planted structure (such as a low-rank matrix) hidden in noisy data. A growing body of work studies low-degree polynomials as a restricted model of computation for such problems. Many leading algorithmic paradigms (such as spectral methods and approximate message passing) can be captured by low-degree polynomials, and thus, lower bounds against low-degree polynomials serve as evidence for computational hardness of statistical problems.

Prior work has studied the power of low-degree polynomials for the detection (i.e. hypothesis testing) task. In this work, we extend these methods to address problems of estimating (i.e. recovering) the planted signal instead of merely detecting its presence. For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any degree-D polynomial. These are the first results to establish low-degree hardness of recovery problems for which the associated detection problem is easy. As applications, we study the planted submatrix and planted dense subgraph problems, resolving (in the low-degree framework) open problems about the computational complexity of recovery in both cases.
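
In symbols, with notation adopted here purely for orientation (hidden signal x, observation Y), the quantity being bounded is the degree-D minimum mean squared error:

```latex
% Degree-D minimum mean squared error: the best error achievable by any
% estimator that is a polynomial of degree at most D in the observation Y
% (scalar signal x shown for simplicity; notation assumed for orientation).
\mathrm{MMSE}_{\le D} \;=\; \inf_{f :\, \deg f \le D} \; \mathbb{E}\!\left[\,(x - f(Y))^{2}\,\right]
```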

Joint work with Tselil Schramm, available at: https://arxiv.org/abs/2008.02269

Date and Time: 
Thursday, October 8, 2020 - 4:30pm
Venue: 
Zoom registration required

ISL Colloquium presents "Interference in Experimental Design in Online Platforms"

Topic: 
Interference in Experimental Design in Online Platforms
Abstract / Description: 

Many experiments ("A/B tests") in online platforms exhibit interference, where an intervention applied to one market participant influences the behavior of another participant. This interference can lead to biased estimates of the treatment effect of the intervention.

In this talk we first focus on such experiments in two-sided platforms where "customers" book "listings". We develop a stochastic market model and associated mean field limit to capture dynamics in such experiments, and use our model to investigate how the performance of different designs and estimators is affected by marketplace interference effects. Platforms typically use two common experimental designs: demand-side ("customer") randomization (CR) and supply-side ("listing") randomization (LR), along with their associated estimators. We show that good experimental design depends on market balance: in highly demand-constrained markets, CR is unbiased, while LR is biased; conversely, in highly supply-constrained markets, LR is unbiased, while CR is biased. We also introduce and study a novel experimental design based on two-sided randomization (TSR) where both customers and listings are randomized to treatment and control. We show that appropriate choices of TSR designs can be unbiased in both extremes of market balance, while yielding relatively low bias in intermediate regimes of market balance. (This is based on joint work with Hannah Li, Inessa Liskovich, and Gabriel Weintraub.)
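
As a toy illustration of a TSR design (our construction, with a deliberately naive estimator rather than the authors'), one can randomize both sides independently and contrast the cells where both sides are treated and both are in control:

```python
import numpy as np

# Toy two-sided randomization (TSR) design: customers and listings are
# randomized independently. The outcome model and the naive estimator below
# are our illustrative constructions, not the authors' estimator.
rng = np.random.default_rng(0)
n_customers, n_listings, n_bookings = 1000, 800, 5000

cust_treat = rng.random(n_customers) < 0.5     # demand-side assignment
list_treat = rng.random(n_listings) < 0.5      # supply-side assignment

c = rng.integers(0, n_customers, n_bookings)   # each booking pairs a customer...
l = rng.integers(0, n_listings, n_bookings)    # ...with a listing

# Hypothetical outcome model: the intervention only helps when both the
# customer and the listing are treated.
p_book = 0.10 + 0.03 * (cust_treat[c] & list_treat[l])
y = rng.random(n_bookings) < p_book

both_treated = cust_treat[c] & list_treat[l]
both_control = ~cust_treat[c] & ~list_treat[l]
print("naive TSR contrast:", y[both_treated].mean() - y[both_control].mean())
```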

Time permitting, we will conclude the talk with some discussion of other experimental designs used by online platforms to address interference, including adaptive designs such as switchback experiments, and clustered experimental designs. The goal is to provide an overview of some of the open challenges that arise in this domain. (This part of the talk is based in part on joint work with Peter Glynn and Mohammad Rasouli.)

The ISL Colloquium meets weekly during the academic year. Seminars are each Thursday at 4:30pm PT, unless indicated otherwise.

Until further notice, the ISL Colloquium convenes exclusively via Zoom (on Thursdays at 4:30pm PT) due to the ongoing pandemic. To avoid "Zoom-bombing", we ask attendees to input their email address here https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ to receive the Zoom meeting details via email.

Date and Time: 
Thursday, November 19, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Connecting Online Optimization and Control"

Topic: 
Connecting Online Optimization and Control
Abstract / Description: 

Online optimization is a powerful framework in machine learning that has seen numerous applications to problems in distributed systems, robotics, autonomous planning, and sustainability. In my group at Caltech, we began by applying online optimization to 'right-size' capacity in data centers a decade ago, and we have since used tools from online optimization to develop algorithms for demand response, energy storage management, video streaming, drone navigation, autonomous driving, and beyond. In this talk, I will highlight both the applications of online optimization and the theoretical progress that has been driven by these applications. Over the past decade, the community has moved from designing algorithms for one-dimensional problems with restrictive assumptions on costs to general results for high-dimensional non-convex problems that highlight the role of constraints, predictions, delay, and more. In the last two years, a connection between online optimization and adversarial control has emerged, and I will highlight how advances in online optimization can lead to advances in the control of linear dynamical systems.
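
A canonical formalization in this line of work is smoothed online convex optimization, where each round incurs a hitting cost plus a cost for moving the decision. The following generic sketch (not specific to the talk, with illustrative parameter choices) regularizes each round's minimization toward the previous point:

```python
import numpy as np

# Generic sketch of smoothed online convex optimization (SOCO), a canonical
# formulation in this line of work (parameters and policy are illustrative):
# each round pays a hitting cost f_t(x_t) plus a movement cost relative to
# x_{t-1}; the policy below regularizes toward the previous decision.
def soco_step(grad_f, x_prev, lam=1.0, lr=0.1, iters=200):
    """Approximately minimize f_t(x) + (lam/2) * ||x - x_prev||^2,
    given grad_f(x), the gradient of the round's hitting cost."""
    x = x_prev.copy()
    for _ in range(iters):
        x -= lr * (grad_f(x) + lam * (x - x_prev))
    return x

# Example: quadratic hitting costs whose minimizer drifts over time.
x = np.zeros(2)
for t in range(5):
    target = np.array([t, -t], dtype=float)
    x = soco_step(lambda z: z - target, x)   # grad of (1/2)||z - target||^2
    print(t, x)
```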

Date and Time: 
Thursday, October 22, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Strategies for Active Machine Learning"

Topic: 
Strategies for Active Machine Learning
Abstract / Description: 

The field of Machine Learning (ML) has advanced considerably in recent years, but mostly in well-defined domains and often using huge amounts of human-labeled training data. Machines can recognize objects in images and translate text, but they must be trained with more images and text than a person could see in a lifetime. The computational complexity of training has been offset by recent technological advances, but the cost of training data is measured in terms of human labeling effort. People are not getting faster or cheaper, so generating labeled training datasets has become a major bottleneck in ML pipelines.

Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling so that human time is not wasted labeling irrelevant, redundant, or trivial examples. This talk explores the development of active ML theory and methods over the past decade, including a new approach applicable to kernel methods and neural networks, which views the learning problem through the lens of representer theorems. This perspective highlights the effect of adding a new training example on the functional representation, leading to a new criterion for actively selecting examples.
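
For context, a generic active-learning loop looks as follows. This toy uses a standard margin-based uncertainty criterion rather than the representer-theorem criterion of the talk, and the fit, predict_proba, and oracle callbacks are hypothetical placeholders:

```python
import numpy as np

# Generic active-learning loop with a margin-based uncertainty criterion
# (a standard baseline, not the representer-theorem criterion of the talk).
# fit, predict_proba, and oracle are hypothetical placeholders supplied by
# the caller: a trainer, a probabilistic predictor, and a human labeler.
def active_learning(fit, predict_proba, X_pool, oracle, budget=50, seed=0):
    rng = np.random.default_rng(seed)
    labeled = [int(i) for i in rng.choice(len(X_pool), size=5, replace=False)]
    y = {i: oracle(i) for i in labeled}
    model = None
    for _ in range(budget):
        model = fit(X_pool[labeled], np.array([y[i] for i in labeled]))
        probs = predict_proba(model, X_pool)          # shape (n, n_classes)
        top2 = np.sort(probs, axis=1)[:, -2:]
        margin = top2[:, 1] - top2[:, 0]              # small = ambiguous
        margin[labeled] = np.inf                      # never re-query a label
        i = int(np.argmin(margin))
        labeled.append(i)
        y[i] = oracle(i)                              # spend one human label
    return model
```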

Date and Time: 
Thursday, October 15, 2020 - 4:30pm
Venue: 
Zoom

ISL Colloquium presents "Optimizing the Cost of Distributed Learning"

Topic: 
Optimizing the Cost of Distributed Learning
Abstract / Description: 

As machine learning models are trained on ever-larger and more complex datasets, it has become standard to distribute this training across multiple physical computing devices. Such an approach offers a number of potential benefits, including reduced training time and storage needs due to parallelization. Distributed stochastic gradient descent (SGD) is a common iterative framework for training machine learning models: in each iteration, local workers compute parameter updates on a local dataset. These are then sent to a central server, which aggregates the local updates and pushes global parameters back to local workers to begin a new iteration. Distributed SGD, however, can be expensive in practice: training a typical deep learning model might require several days and thousands of dollars on commercial cloud platforms. Cloud-based services that allow occasional worker failures (e.g., locating some workers on Amazon spot or Google preemptible instances) can reduce this cost, but may also reduce the training accuracy. We quantify the effect of worker failure and recovery rates on the model accuracy and wall-clock training time, and show both analytically and experimentally that these performance bounds can be used to optimize the SGD worker configurations. In particular, we can optimize the number of workers that utilize spot or preemptible instances. Compared to heuristic worker configuration strategies and standard on-demand instances, we dramatically reduce the cost of training a model, with modest increases in training time and the same level of accuracy. Finally, we discuss implications of our work for federated learning environments, which use a variant of distributed SGD. Two major challenges in federated learning are unpredictable worker failures and a heterogeneous (non-i.i.d.) distribution of data across the workers, and we show that our characterization of distributed SGD's performance under worker failures can be adapted to this setting.
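
A minimal sketch of the server-side aggregation such a failure-tolerant configuration implies (our illustration under simplifying assumptions, not the speakers' system):

```python
import numpy as np

# Sketch of the server side of distributed SGD with preemptible workers
# (our simplified illustration, not the speakers' system): aggregate
# whichever gradients arrive this round, accepting extra gradient noise in
# exchange for much cheaper spot/preemptible compute.
def server_round(theta, worker_grads, lr=0.01):
    """worker_grads: one gradient per worker, or None if it was preempted."""
    arrived = [g for g in worker_grads if g is not None]
    if not arrived:
        return theta                        # all workers lost; retry the round
    return theta - lr * np.mean(arrived, axis=0)
```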

Date and Time: 
Thursday, October 1, 2020 - 4:30pm

ISL Colloquium presents "Quantum Renyi relative entropies and their use"

Topic: 
Quantum Rényi relative entropies and their use
Abstract / Description: 

The past decade of research in quantum information theory has witnessed extraordinary progress in understanding communication over quantum channels, due in large part to quantum generalizations of the classical Rényi relative entropy. One generalization, known as the sandwiched Rényi relative entropy, finds use in characterizing asymptotic behavior in quantum hypothesis testing. It has also been used to establish strong converse theorems (fundamental communication capacity limitations) for a variety of quantum communication tasks. Another generalization, known as the geometric Rényi relative entropy, finds use in establishing strong converse theorems for feedback-assisted protocols, which apply to quantum key distribution and distributed quantum computing scenarios. Finally, a generalization now known as the Petz–Rényi relative entropy plays a critical role in achievability statements for quantum communication. In this talk, I will review these quantum generalizations of the classical Rényi relative entropy, discuss their relevant information-theoretic properties, and describe the applications mentioned above.
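
For reference, the sandwiched and Petz generalizations have the following standard definitions (stated for quantum states ρ, σ and parameter α in the appropriate ranges; both recover the standard Umegaki relative entropy as α → 1):

```latex
% Standard definitions, stated for orientation. Both recover the Umegaki
% relative entropy Tr[rho (log rho - log sigma)] in the limit alpha -> 1.
\widetilde{D}_{\alpha}(\rho \Vert \sigma)
  = \frac{1}{\alpha - 1}
    \log \operatorname{Tr}\!\left[
      \left( \sigma^{\frac{1-\alpha}{2\alpha}} \, \rho \,
             \sigma^{\frac{1-\alpha}{2\alpha}} \right)^{\!\alpha}
    \right]
  \quad \text{(sandwiched)}

D_{\alpha}(\rho \Vert \sigma)
  = \frac{1}{\alpha - 1}
    \log \operatorname{Tr}\!\left[ \rho^{\alpha} \sigma^{1-\alpha} \right]
  \quad \text{(Petz)}
```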

Date and Time: 
Thursday, September 17, 2020 - 4:30pm
Venue: 
Zoom

IT-Forum presents "A Very Sketchy Talk"

Topic: 
A Very Sketchy Talk
Abstract / Description: 

We give an overview of dimensionality reduction methods, collectively known as sketching, for a number of problems in optimization. We first survey work using these methods for classical problems, which gives near-optimal algorithms for regression, low-rank approximation, and natural variants. We then survey recent work applying sketching to column subset selection, kernel methods, sublinear algorithms for structured matrices, tensors, trace estimation, and so on. The focus of the talk will be on fast algorithms.
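
To make "sketching" concrete, here is a minimal sketch-and-solve example for least-squares regression (a standard textbook use of the technique, not code from the talk):

```python
import numpy as np

# Minimal "sketch-and-solve" example for least-squares regression, a
# standard textbook use of sketching (not code from the talk): compress an
# n x d problem with a random m x n sketch, m << n, and solve the small one.
rng = np.random.default_rng(0)
n, d, m = 10_000, 20, 300

A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

S = rng.normal(size=(m, n)) / np.sqrt(m)   # dense Gaussian sketch; faster
                                           # transforms are used in practice
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact))  # close: the sketched solution
                                           # approximates the exact one
```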


The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum that focuses on mathematical aspects of information processing. While the primary emphasis is on information theory, we also welcome talks from researchers in signal processing, learning and statistical inference, and control and optimization, and we warmly welcome industrial affiliates in these fields. The forum is typically held every Friday at 1:15pm during the academic year.

Until further notice, the IT Forum convenes exclusively via Zoom (on Fridays at 1:15pm PT) due to the ongoing pandemic. To avoid "Zoom-bombing", we ask attendees to input their email address here https://stanford.zoom.us/meeting/register/tJwkf-uvqjoqHNIWxY4HHon4K107QMo22PVR to receive the Zoom meeting details via email.

Date and Time: 
Friday, November 13, 2020 - 1:15pm
Venue: 
Zoom
