Seminar / Colloquium

Applied Physics/Physics Colloquium: Recent results on Gravitational Waves from LIGO and Virgo

Topic: 
Recent results on Gravitational Waves from LIGO and Virgo
Abstract / Description: 

Over the last two years, the Advanced LIGO and Advanced Virgo detectors have observed a handful of gravitational-wave events from the inspiral and merger of binary black holes in distant galaxies. These events have yielded the first measurements of the fundamental properties of gravitational waves, tests of General Relativity in the strong-field, highly dynamical regime, and constraints on the population, masses, and spins of black holes in the universe. Most recently, signals were detected from the inspiral of a binary neutron star system, GW170817. That event is thus far the loudest (highest signal-to-noise ratio) and closest gravitational-wave event observed. A gamma-ray burst detected 1.7 seconds after merger confirms the long-held hypothesis that binary neutron star mergers are associated with short gamma-ray bursts. The LIGO and Virgo data produced a three-dimensional sky localization of the source, enabling a successful electromagnetic follow-up campaign that identified an associated electromagnetic transient in a galaxy ~40 Mpc from Earth. A multi-messenger view of GW170817 from ~100 seconds before merger through weeks afterward provides evidence of a "kilonova" and of the production of heavy elements. For the first time, using gravitational waves we are able to constrain the equation of state of dense neutron-star matter and infer the local rate of binary neutron star mergers. When we include electromagnetic observations, we are able to directly measure the speed of gravitational waves, constrain their polarization content, independently measure the Hubble constant, probe the validity of the equivalence principle, and gain new insight into the astrophysical engine driving these events.
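
As a rough order-of-magnitude illustration (mine, not the collaboration's published analysis): a 1.7-second arrival-time difference accumulated over a ~40 Mpc path constrains any fractional difference between the speed of gravitational waves and the speed of light at roughly the parts-in-10^15 level,

```latex
\frac{|c_{\mathrm{gw}} - c|}{c} \;\lesssim\; \frac{c\,\Delta t}{d}
  \;\approx\; \frac{(3\times10^{8}\,\mathrm{m/s})(1.7\,\mathrm{s})}
                   {(40\,\mathrm{Mpc})(3.1\times10^{22}\,\mathrm{m/Mpc})}
  \;\approx\; 4\times10^{-16},
```

loosening by an order of magnitude or so once a plausible intrinsic delay between the merger and the gamma-ray emission is allowed for.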

Date and Time: 
Tuesday, November 14, 2017 - 4:30pm
Venue: 
Hewlett 201

SCIEN Talk: Next Generation Wearable AR Display Technologies

Topic: 
Next Generation Wearable AR Display Technologies
Abstract / Description: 

Wearable AR/VR displays have a long history, and earlier efforts failed due to various limitations. Advances in sensors, optics, and computing have renewed interest in this area. Most people are convinced AR will be very big. A key question is whether AR glasses can become the new computing platform and replace smartphones. I'll discuss some of the challenges ahead. We have been working on various wearable display architectures, and I'll discuss our efforts related to MEMS scanned-beam displays, head-mounted projectors and smart telepresence screens, and holographic near-eye displays.

Date and Time: 
Wednesday, November 29, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Near-Eye Varifocal Augmented Reality Displays

Topic: 
Near-Eye Varifocal Augmented Reality Displays
Abstract / Description: 

With the goal of registering dynamic synthetic imagery onto the real world, Ivan Sutherland envisioned a fundamental idea: combining digital displays with conventional optical components in a wearable fashion. Since then, advances in the display engineering domain and a broader understanding in the vision science domain have led us to computational displays for virtual reality and augmented reality applications. Today, such displays promise a more realistic and comfortable experience through techniques such as lightfield displays, holographic displays, always-in-focus displays, multiplane displays, and varifocal displays. In this talk, as an Nvidian, I will present our new optical layouts for see-through computational near-eye displays that are simple, compact, and varifocal, and that provide a wide field of view with clear peripheral vision and a large eyebox. Key to our efforts so far are novel see-through rear-projection holographic screens and deformable mirror membranes. We establish fundamental trade-offs between resolution, field of view, and the form factor of our designs, opening an intriguing avenue for future work on accommodation-supporting augmented reality displays.

Date and Time: 
Wednesday, November 15, 2017 - 4:30pm
Venue: 
Packard 101

SystemX Seminar: Efficient battery usage for wireless IoT nodes

Topic: 
Efficient battery usage for wireless IoT nodes: Battery lifetime prediction
Abstract / Description: 

The prediction of the lifetime of battery-powered IoT devices and wireless sensor networks is almost exclusively based on the assumption that the total charge in a battery (i.e., the mAh rating) is consumed linearly over time. This is not the case in reality. Batteries are complex electro-chemical systems, and their discharge behavior depends heavily on the timing and intensity of the applied load. There are very few empirical data or reliable models available for the kinds of batteries and loads typically used in IoT sensor nodes over very long operational times of 5-10 years.

We characterize inexpensive CR2032 lithium coin cells using carefully controlled synthetic loads and a wide range of IoT-typical load parameters. We observe that actual lifetimes can differ from the predicted linear ones by almost a factor of three. Furthermore, loads with similar average currents can vary significantly in how much of the battery's capacity they can utilize. We find that short-duration loads generally fare better than sustained loads, which was not anticipated. We propose an improved prediction model that captures this non-linear, short-duration behavior and can be implemented on constrained IoT devices.
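
To make the contrast concrete, here is a minimal, hypothetical sketch (not the speakers' model): it compares the naive linear estimate against a simple rate-dependent correction in which higher pulse currents recover less of the rated capacity. The capacity rating, correction factor, and load parameters are illustrative assumptions, not measured values.

```python
# Illustrative only: a naive linear lifetime estimate for a coin cell versus a
# simple pulse-current-dependent correction. All parameters are made up for the sketch.

RATED_CAPACITY_MAH = 225.0  # nominal CR2032 rating (assumed)

def linear_lifetime_years(avg_current_ma):
    """Naive estimate: assume the full rated capacity is usable at any load."""
    hours = RATED_CAPACITY_MAH / avg_current_ma
    return hours / (24 * 365)

def corrected_lifetime_years(pulse_ma, pulse_s, period_s, sleep_ma=0.002,
                             usable_fraction=lambda peak: max(0.3, 1.0 - 0.02 * peak)):
    """Hypothetical non-linear estimate: the usable fraction of capacity shrinks
    as the pulse current grows (placeholder model, not a fitted one)."""
    avg_ma = (pulse_ma * pulse_s + sleep_ma * (period_s - pulse_s)) / period_s
    hours = RATED_CAPACITY_MAH * usable_fraction(pulse_ma) / avg_ma
    return hours / (24 * 365)

if __name__ == "__main__":
    # A node that sleeps at 2 uA, wakes every 60 s, and transmits for 20 ms at 15 mA.
    avg = (15.0 * 0.02 + 0.002 * 59.98) / 60.0
    print(f"average current    : {avg * 1000:.1f} uA")
    print(f"linear estimate    : {linear_lifetime_years(avg):.1f} years")
    print(f"corrected estimate : {corrected_lifetime_years(15.0, 0.02, 60.0):.1f} years")
```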

Date and Time: 
Thursday, November 16, 2017 - 4:30pm
Venue: 
Huang 018

IT Forum: Tight regret bounds for a latent variable model of recommendation systems

Topic: 
Tight regret bounds for a latent variable model of recommendation systems
Abstract / Description: 

We consider an online model for recommendation systems, in which each user is recommended an item at each time step and provides 'like' or 'dislike' feedback. A latent variable model specifies the user preferences: both users and items are clustered into types. The model captures structure in both the item and user spaces, and our focus is on the simultaneous use of both structures. We analyze the situation in which the type preference matrix has i.i.d. entries. Our analysis elucidates the system operating regimes in which existing algorithms are nearly optimal and highlights the sub-optimality of using only one of item or user structure (as is done in the commonly used item-item and user-user collaborative filtering). This prompts a new algorithm that is nearly optimal in essentially all parameter regimes.

Joint work with Prof. Guy Bresler.
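
For readers unfamiliar with the setup, here is a tiny sketch of the latent variable model as described above (a toy, noiseless simulation of my own; the sizes and the Bernoulli(1/2) type preferences are arbitrary assumptions): users and items are clustered into types, and feedback is determined by a type preference matrix with i.i.d. entries.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 1_000, 1_000
n_user_types, n_item_types = 10, 10

# latent (unobserved) types for each user and item
user_type = rng.integers(n_user_types, size=n_users)
item_type = rng.integers(n_item_types, size=n_items)

# type preference matrix with i.i.d. entries: does user type u like item type i?
type_pref = rng.integers(0, 2, size=(n_user_types, n_item_types))

def feedback(user, item):
    """'like' (1) or 'dislike' (0) feedback, determined by the latent types."""
    return int(type_pref[user_type[user], item_type[item]])

# at each time step, every user is recommended one item and reports feedback
print(feedback(user=0, item=42))
```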

Date and Time: 
Friday, November 10, 2017 - 1:15pm
Venue: 
Packard 202

ISL Colloquium: Delay, memory, and messaging tradeoffs in a distributed service system

Topic: 
Delay, memory, and messaging tradeoffs in a distributed service system
Abstract / Description: 

We consider the classical supermarket model: jobs arrive as a Poisson process of rate lambda N, with 0 < lambda < 1, and are to be routed to one of N identical servers with unit-mean, exponentially distributed processing times. We review a variety of policies and architectures that have been considered in the literature, and which differ in terms of the direction and number of messages that are exchanged and the memory that they employ; for example, the "power-of-d-choices" or pull-based policies. In order to compare policies of this kind, we focus on the resources (memory and messaging) that they use, and on whether the expected delay of a typical job vanishes as N increases.
We show that if (i) the message rate increases superlinearly, or (ii) the memory size increases superlogarithmically, as a function of N, then there exists a policy that drives the delay to zero, and we outline an analysis using fluid models. On the other hand, if neither condition (i) nor (ii) holds, then no policy within a broad class of symmetric policies can yield vanishing delay.
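
As a toy illustration of the kind of policy discussed, the following sketch simulates the queue-length Markov chain of the supermarket model and compares purely random routing (d = 1) with the power-of-2-choices rule; this is my own illustrative code with arbitrary parameters, not the speaker's analysis.

```python
import random

def simulate(num_servers=100, lam=0.9, d=1, num_events=200_000, seed=0):
    """Toy simulation of the supermarket model's queue-length Markov chain:
    arrivals occur at rate lam*N, each busy server completes a job at rate 1,
    and every arrival joins the shortest of d uniformly sampled queues
    (d = 1 is purely random routing). Returns the time-averaged number of jobs
    per server, starting from an empty system."""
    rng = random.Random(seed)
    q = [0] * num_servers
    busy = 0                 # servers with at least one job
    total_jobs = 0
    t = area = 0.0
    for _ in range(num_events):
        rate = lam * num_servers + busy
        dt = rng.expovariate(rate)
        area += total_jobs * dt
        t += dt
        if rng.random() < lam * num_servers / rate:
            # arrival: sample d servers and join the shortest queue among them
            i = min(rng.sample(range(num_servers), d), key=lambda s: q[s])
            busy += q[i] == 0
            q[i] += 1
            total_jobs += 1
        else:
            # departure at a uniformly chosen busy server
            i = rng.choice([s for s in range(num_servers) if q[s] > 0])
            q[i] -= 1
            busy -= q[i] == 0
            total_jobs -= 1
    return area / t / num_servers

if __name__ == "__main__":
    for d in (1, 2):
        print(f"d = {d}: average jobs per server ~ {simulate(d=d):.2f}")
```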

Date and Time: 
Thursday, November 9, 2017 - 4:15pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Interactive 3D Digital Humans

Topic: 
Interactive 3D Digital Humans
Abstract / Description: 

This talk will cover recent methods for recording and displaying interactive life-sized digital humans using the ICT Light Stage, natural language interfaces, and automultiscopic 3D displays. We will then discuss the first full application of this technology to preserve the experience of in-person interactions with Holocaust survivors.

More Information: http://gl.ict.usc.edu/Research/TimeOffsetConversations/


The SCIEN Colloquia are open to the public. The talks are also videotaped and posted the following week on talks.stanford.edu.

There will be a reception following the presentation.

Date and Time: 
Wednesday, November 8, 2017 - 4:30pm
Venue: 
Packard 101

EE380 Computer Systems Colloquium: Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing

Topic: 
Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing
Abstract / Description: 

This presentation details a fully programmable, associative, content-based, compute-in-memory architecture that changes the concept of computing from serial data processing--where data is moved back and forth between the processor and memory--to massively parallel data processing, compute, and search directly in place.

This associative processing unit (APU) can be used in many machine learning applications, including one-shot/few-shot learning, convolutional neural networks, recommender systems, and data mining tasks such as prediction, classification, and clustering.

Additionally, the architecture is well suited to processing large corpora and can be applied to Question Answering (QA) and various NLP tasks such as language translation. The architecture can embed long documents, compute any type of memory network in place, and answer complex questions in O(1).
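
To give a flavor of what content-based (associative) lookup means in general terms (a generic sketch of mine, not the APU's actual programming model), the following compares a query pattern against every stored row in one vectorized step rather than iterating over memory serially.

```python
import numpy as np

# Generic content-addressable search sketch: every stored row is compared against
# the query "in place" in a single vectorized operation, and the indices of all
# matching rows come back at once. Sizes and data are arbitrary.
rng = np.random.default_rng(0)
memory = rng.integers(0, 2, size=(1_000_000, 64), dtype=np.uint8)  # stored bit patterns
query = memory[123_456].copy()                                     # pattern to search for

matches = np.flatnonzero((memory == query).all(axis=1))
print("matching rows:", matches)
```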

Date and Time: 
Wednesday, November 8, 2017 - 4:30pm
Venue: 
Gates B03

A New Class of Memory & Storage Technology: 3D XPoint™ and Optane™ Technology Overview

Topic: 
A New Class of Memory & Storage Technology: 3D XPoint™ and Optane™ Technology Overview
Abstract / Description: 

3D XPoint™ memory, the first new memory to ship in volume in decades, combines attributes of DRAM and NAND: high memory density (NAND-like capacity), high performance (closer to DRAM), and non-volatility. The arrival of 3D XPoint™ memory – offering the full promise of a storage class memory – has the potential to fundamentally change the memory-storage hierarchy at the hardware, system software, and application levels. This memory is currently available as an Intel® Optane™ SSD with access times as fast as the rest of the system, a departure from classical storage technologies. System changes to match the low latency of these SSDs are already well advanced, and in many cases they enable applications to utilize all of the Optane SSD's performance. The implications of this new memory for computing are significant, with applications that use the new technology as storage being the first to benefit. Applications such as key–value stores and real-time analytics can benefit immediately, both through faster runtimes and through access to larger data sets via application- or OS-based paging. To best measure this high-performance SSD, we borrow memory performance measurement techniques and apply them to storage. The next step in this convergence of memory and storage will be 3D XPoint memory accessed through processor load/store operations on DIMM buses.

Date and Time: 
Tuesday, November 7, 2017 - 4:00pm
Venue: 
Gates 415


Applied Physics / Physics Colloquium

Applied Physics/Physics Colloquium: Recent results on Gravitational Waves from LIGO and Virgo

Topic: 
Recent results on Gravitational Waves from LIGO and Virgo
Abstract / Description: 

Over the last two years, the Advanced LIGO and Advanced Virgo detectors have observed a handful of gravitational-wave events from the inspiral and merger of binary black holes in distant galaxies. These events have yielded the first measurements of the fundamental properties of gravitational waves, tests of General Relativity in the strong-field, highly dynamical regime, and constraints on the population, masses, and spins of black holes in the universe. Most recently, signals were detected from the inspiral of a binary neutron star system, GW170817. That event is thus far the loudest (highest signal-to-noise ratio) and closest gravitational-wave event observed. A gamma-ray burst detected 1.7 seconds after merger confirms the long-held hypothesis that binary neutron star mergers are associated with short gamma-ray bursts. The LIGO and Virgo data produced a three-dimensional sky localization of the source, enabling a successful electromagnetic follow-up campaign that identified an associated electromagnetic transient in a galaxy ~40 Mpc from Earth. A multi-messenger view of GW170817 from ~100 seconds before merger through weeks afterward provides evidence of a "kilonova" and of the production of heavy elements. For the first time, using gravitational waves we are able to constrain the equation of state of dense neutron-star matter and infer the local rate of binary neutron star mergers. When we include electromagnetic observations, we are able to directly measure the speed of gravitational waves, constrain their polarization content, independently measure the Hubble constant, probe the validity of the equivalence principle, and gain new insight into the astrophysical engine driving these events.

Date and Time: 
Tuesday, November 14, 2017 - 4:30pm
Venue: 
Hewlett 201

Making a Physicist with Jazz

Topic: 
Making a Physicist with Jazz
Abstract / Description: 

In 2005, theoretical physicist S. James Gates related a story about Abdus Salam where Salam explained that once Black people entered physics in large numbers, they would create something like jazz. Is this an essentialization of Black people or getting at the essence of how Black people have responded to the wake of slavery and colonialism? Using texts from a diverse set of disciplines -- English, ethnomusicology, and science, technology, and society studies -- I will reflect on possible answers to this question, what they tell us about how physicists are made, and whether this framework offers lessons for how physicists should be made.

Open to all interested. Limited seating. To attend this talk and discussion, please register online via this link: https://stanforduniversity.qualtrics.com/jfe/form/SV_2to6cpsyuDlqFo1

Date and Time: 
Monday, October 2, 2017 - 3:30pm
Venue: 
Black Community Services Center, Community Room

Applied Physics/Physics Colloquium: Physics in the Future

Topic: 
Physics in the Future
Abstract / Description: 

The greatest American philosopher of the 20th century and Hall of Fame Yankee baseball catcher, Yogi Berra, wisely noted, "It's hard to make predictions, especially about the future." Yogi Berra also warned, "If you don't know where you are going, you might wind up someplace else."

What excites physicists more than anything else are the unexpected discoveries that will open new horizons of the endless frontiers of science. The talk will sketch a few selected areas and offer a personal view of how physicists can position themselves to become lucky enough to stumble onto discoveries that lead us "someplace else."


 

APPLIED PHYSICS/PHYSICS COLLOQUIUM is held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200 (see map). Refreshments in the lobby of Varian Physics at 4:15 pm.

Autumn 2017/2018, Committee: Roger Blandford (Chair), Aharon Kapitulnik, Bob Laughlin, Leonardo Senatore

Date and Time: 
Tuesday, November 7, 2017 - 4:15pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Quantum Simulations and Tensor Networks in Condensed Matter and High Energy Physics

Topic: 
Quantum Simulations and Tensor Networks in Condensed Matter and High Energy Physics
Abstract / Description: 

Many-body quantum systems are very hard to describe, since the number of parameters required to describe them grows exponentially with the number of particles, volume, etc. This problem appears in different areas of science, and several methods have been developed in the fields of quantum chemistry, condensed matter, and high energy physics in order to circumvent it in certain situations. In recent years, other approaches inspired by quantum information theory have been introduced to address this problem. On the one hand, quantum simulation uses a different system to emulate the behavior of the problem under study. On the other, tensor networks aim at the accurate description of many-body quantum states with few parameters. In this talk, I will give a basic introduction to those approaches and explain current efforts to use them to attack both condensed-matter and high-energy physics problems.


 

APPLIED PHYSICS/PHYSICS COLLOQUIUM is held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200 (see map). Refreshments in the lobby of Varian Physics at 4:15 pm.

Autumn 2017/2018, Committee: Roger Blandford (Chair), Aharon Kapitulnik, Bob Laughlin, Leonardo Senatore

Date and Time: 
Tuesday, October 31, 2017 - 4:15pm
Venue: 
Hewlett 200

Increasing Accuracy and Increasing Tension in H0 [Applied Physics/Physics Colloquium]

Topic: 
Increasing Accuracy and Increasing Tension in H0
Abstract / Description: 

 

The Hubble constant, H0, provides a measure of the current expansion rate of the universe. In recent decades, there has been a huge increase in the accuracy with which extragalactic distances, and hence H0, can be measured. While the historical factor-of-two uncertainty in H0 has been resolved, a new discrepancy has arisen between the value of H0 measured in the local universe and that estimated from cosmic microwave background measurements assuming a Lambda cold dark matter model. I will review the advances that have led to the increase in accuracy of measurements of H0, as well as describe exciting future prospects with the James Webb Space Telescope (JWST) and Gaia, which will make it feasible to measure extragalactic distances at percent-level accuracy in the next decade.
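
For orientation only (rough, widely quoted figures from around the time of this talk, not taken from the abstract): local distance-ladder measurements were finding values near 73 km/s/Mpc, while the CMB-based estimate under Lambda CDM was near 67 km/s/Mpc, a discrepancy of roughly eight to nine percent in the constant that relates recession velocity to distance:

```latex
v = H_0\, d, \qquad
H_0^{\mathrm{local}} \approx 73\ \mathrm{km\,s^{-1}\,Mpc^{-1}}, \qquad
H_0^{\mathrm{CMB},\,\Lambda\mathrm{CDM}} \approx 67\ \mathrm{km\,s^{-1}\,Mpc^{-1}}.
```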


APPLIED PHYSICS/PHYSICS COLLOQUIUM is held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200 (see map). Refreshments in the lobby of Varian Physics at 4:15 pm.

Autumn 2017/2018, Committee: Roger Blandford (Chair), Aharon Kapitulnik, Bob Laughlin, Leonardo Senatore

Date and Time: 
Tuesday, October 24, 2017 - 4:15pm
Venue: 
Hewlett 200

Applied Physics/Physics Colloquium: The Brayton Battery

Topic: 
The Brayton Battery
Abstract / Description: 

This talk will give an overview of work I have been doing off campus for the past several years on the matter of grid energy storage using reversible heat engines and molten nitrate salt. The most recent work, a secret project at X code-named "Project Malta", was revealed in Bloomberg on 31 Aug 17. My own scientific writing on the matter came out about the same time as J. Renew. Sustain. Energy 9, 044103 (2017). The talk will touch on the grid storage problem itself and why I think heat beats all other ways of solving it - including, in particular, electrochemistry. The bulk of the talk will focus on the concept of a closed-cycle Brayton engine and the many aspects of "old time physics" associated with it: equation-of-state thermodynamics, adiabatic efficiency, metals creep, entropy creation in counterflow heat exchange, salt corrosion, high-speed bearings, hard permanent magnetism, torque limits of high-speed motor-generators imposed by Maxwell's equations and vibration physics, simulation of physical linkages with power semiconductors, and so forth. The larger idea underneath is that lowering costs to the bone is the problem, and that the physics of heat transfer facilitates this. It has no land use issues, it can be easily scaled up to the size of megacities at low cost, and it is safe - meaning that the nuclear weapon's worth of energy one must store cannot be released all at once. I will show images of the (tiny) prototype engine, which is about 2 m long, 0.6 m in diameter, and has the power of a diesel locomotive.


 

APPLIED PHYSICS/PHYSICS COLLOQUIUM is held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200 (see map). Refreshments in the lobby of Varian Physics at 4:15 pm.

Autumn 2017/2018, Committee: Roger Blandford (Chair), Aharon Kapitulnik, Bob Laughlin, Leonardo Senatore

Date and Time: 
Tuesday, October 17, 2017 - 4:15pm
Venue: 
Hewlett 200

Quantum Matter without Quasiparticles: Strange Metals and Black Holes [Applied Physics/Physics Colloquium]

Topic: 
Quantum Matter without Quasiparticles: Strange Metals and Black Holes
Abstract / Description: 

The quasiparticle concept is the foundation of our understanding of the dynamics of quantum many-body systems. It originated in the theory of metals, which have electron-like quasiparticles, but it is also useful in more exotic states like those found in fractional quantum Hall systems. However, many modern materials exhibit a 'strange metal' phase to which the quasiparticle picture does not apply, and developing its theory remains one of the important challenges in condensed matter physics. I will describe the simplest known quantum many-body models without quasiparticle excitations. Some of these models have a dual description as black holes in a curved spacetime with an emergent spatial direction, and the black hole mapping has proved useful in understanding some experiments.


APPLIED PHYSICS/PHYSICS COLLOQUIUM is held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200 (see map). Refreshments in the lobby of Varian Physics at 4:15 pm.

Autumn 2017/2018, Committee: Roger Blandford (Chair), Aharon Kapitulnik, Bob Laughlin, Leonardo Senatore

Date and Time: 
Tuesday, October 10, 2017 - 4:15pm
Venue: 
Hewlett 200

A Wandering Path to Plasma Fusion [Applied Physics/Physics Colloquium]

Topic: 
A Wandering Path to Plasma Fusion
Abstract / Description: 

 The speaker recounts an unconventional journey through academic physics and the tech industry, beginning and ending with the study of plasma fusion. Over the past three years, a team at Google has collaborated with Tri Alpha Energy in applying modern data science techniques, in particular in experiment design, to the C-2U plasma confinement machine. The collaboration continues on the much more energetic "Norman" machine, being commissioned now.


 

 

APPLIED PHYSICS/PHYSICS COLLOQUIUM is held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200 (see map). Refreshments in the lobby of Varian Physics at 4:15 pm.

Autumn 2017/2018, Committee: Roger Blandford (Chair), Aharon Kapitulnik, Bob Laughlin, Leonardo Senatore

Date and Time: 
Tuesday, October 3, 2017 - 4:15pm
Venue: 
Hewlett 200

Protecting Quantum Superpositions in Superconducting Circuits [Applied Physics/Physics Colloquium]

Topic: 
Protecting Quantum Superpositions in Superconducting Circuits
Abstract / Description: 

Can we prolong the coherence of a two-state manifold in a complex quantum system beyond the coherence of its longest-lived component? This question is the starting point in the construction of a scalable quantum computer. It translates into the search for processes that operate as some sort of Maxwell's demon, reliably correcting the errors resulting from the coupling between qubits and their environment. The presentation will review recent experiments that tested the dynamical protection, by Josephson circuits, of a logical qubit memory based on superpositions of particular coherent states of a superconducting resonator.


 

APPLIED PHYSICS/PHYSICS COLLOQUIUM

Held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200. Refreshments in the lobby of Varian Physics at 4:15 pm.

Winter 2016/2017, Committee: A. Linde (Chair), S. Kivelson, S. Zhang

Date and Time: 
Tuesday, May 30, 2017 - 4:15pm
Venue: 
Hewlett 200


CS300 Seminar

SpaceX's journey on the road to Mars

Topic: 
SpaceX's journey on the road to Mars
Abstract / Description: 

SSI will be hosting Gwynne Shotwell — President and COO of SpaceX — to discuss SpaceX's journey on the road to Mars. The event will be on Wednesday, Oct 11th, from 7pm to 8pm in Dinkelspiel Auditorium. After the talk, there will be a Q&A session hosted by Steve Jurvetson from DFJ Venture Capital.

Claim your tickets now on Eventbrite.

 

Date and Time: 
Wednesday, October 11, 2017 - 7:00pm
Venue: 
Dinkelspiel Auditorium

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Subhasish Mitra

5:15-6:00, Silvio Savarese

Date and Time: 
Wednesday, December 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Phil Levis

5:15-6:00, Ron Fedkiw

Date and Time: 
Monday, December 5, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Dan Boneh

5:15-6:00, Aaron Sidford

Date and Time: 
Wednesday, November 30, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, John Mitchell

5:15-6:00, James Zou

Date and Time: 
Monday, November 28, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Emma Brunskill

5:15-6:00, Doug James

Date and Time: 
Wednesday, November 16, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, James Landay

5:15-6:00, Dan Jurafsky

Date and Time: 
Monday, November 14, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Ken Salisbury

5:15-6:00, Noah Goodman

Date and Time: 
Wednesday, November 9, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Kunle Olukotun

5:15-6:00, Jure Leskovec

Date and Time: 
Monday, November 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research and gives new CS PhD students the chance to learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Omer Reingold

5:15-6:00, Oussama Khatib

Date and Time: 
Wednesday, November 2, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad


EE380 Computer Systems Colloquium

EE380 Computer Systems Colloquium: Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing

Topic: 
Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing
Abstract / Description: 

This presentation details a fully programmable, associative, content-based, compute-in-memory architecture that changes the concept of computing from serial data processing--where data is moved back and forth between the processor and memory--to massively parallel data processing, compute, and search directly in place.

This associative processing unit (APU) can be used in many machine learning applications, including one-shot/few-shot learning, convolutional neural networks, recommender systems, and data mining tasks such as prediction, classification, and clustering.

Additionally, the architecture is well suited to processing large corpora and can be applied to Question Answering (QA) and various NLP tasks such as language translation. The architecture can embed long documents, compute any type of memory network in place, and answer complex questions in O(1).

Date and Time: 
Wednesday, November 8, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Petascale Deep Learning on a Single Chip

Topic: 
Petascale Deep Learning on a Single Chip
Abstract / Description: 

Vathys.ai is a deep learning startup that has been developing a new deep learning processor architecture with the goal of massively improved energy efficiency and performance. The architecture is also designed to be highly scalable and amenable to next-generation DL models. Although deep learning processors appear to be the "hot topic" of the day in computer architecture, the majority (we argue all) of such designs incorrectly identify the bottleneck as computation and thus neglect the true culprits of inefficiency: data movement and miscellaneous control-flow processor overheads. This talk will cover many of the architectural strategies that the Vathys processor uses to reduce data movement and improve efficiency. The talk will also cover some circuit-level innovations and will include a quantitative and qualitative comparison to many DL processor designs, including the Google TPU, presenting numerical evidence for massive improvements compared to the TPU and other such processors.

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, December 6, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: NLV Agents

Topic: 
NLV Agents
Abstract / Description: 

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, November 29, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Partisan Gerrymandering and the Supreme Court: The Role of Social Science

Topic: 
Partisan Gerrymandering and the Supreme Court: The Role of Social Science
Abstract / Description: 

The U.S. Supreme Court is considering a case this term, Gill v Whitford, that might lead to the first constitutional constraints on partisanship in redistricting. Eric McGhee is the inventor of the efficiency gap, a measure of gerrymandering that the court is considering in the case. He will describe the case's legal background, discuss some of the metrics that have been proposed for measuring gerrymandering, and reflect on the role of social science in the litigation.


 

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, November 1, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computing with High-Dimensional Vectors

Topic: 
Computing with High-Dimensional Vectors
Abstract / Description: 

Computing with high-dimensional vectors complements traditional computing and occupies the gap between symbolic AI and artificial neural nets. Traditional computing treats bits, numbers, and memory pointers as basic objects on which all else is built. I will consider the possibility of computing with high-dimensional vectors as basic objects, for example with 10,000-bit words, when no individual bit nor subset of bits has a meaning of its own--when any piece of information encoded into a vector is distributed over all components. Thus a traditional data record subdivided into fields is encoded as a high-dimensional vector with the fields superposed.

Computing power arises from the operations on the basic objects--from what is called their algebra. Operations on bits form Boolean algebra, and the addition and multiplication of numbers form an algebraic structure called a "field." Two operations on high-dimensional vectors correspond to the addition and multiplication of numbers. With permutation of coordinates as the third operation, we end up with a system of computing that in some ways is richer and more powerful than arithmetic, and also different from linear algebra. Computing of this kind was anticipated by von Neumann, described by Plate, and has proven to be possible in high-dimensional spaces of different kinds.

The three operations, when applied to orthogonal or nearly orthogonal vectors, allow us to encode, decode, and manipulate sets, sequences, lists, and arbitrary data structures. One reason for high dimensionality is that it provides a nearly endless supply of nearly orthogonal vectors. Making them is simple because a randomly generated vector is approximately orthogonal to any vector encountered so far. The architecture includes a memory which, when cued with a high-dimensional vector, finds its nearest neighbors among the stored vectors. A neural-net associative memory is an example of such a memory.

Circuits for computing in high-D are thousands of bits wide but the components need not be ultra-reliable nor fast. Thus the architecture is a good match to emerging nanotechnology, with applications in many areas of machine learning. I will demonstrate high-dimensional computing with a simple algorithm for identifying languages.
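
As an illustrative sketch of this style of computing (my own toy example, not the speaker's system), the following encodes a small record as one high-dimensional bipolar vector, binding role-filler pairs by elementwise multiplication and superposing them by summation, then recovers a field by unbinding and nearest-neighbor search over the codebook.

```python
import numpy as np

DIM = 10_000
rng = np.random.default_rng(0)

def rand_vec():
    """A random bipolar (+1/-1) hypervector; independently drawn vectors are
    nearly orthogonal in high dimensions."""
    return rng.choice([-1, 1], size=DIM)

def bind(a, b):
    """Bind two vectors; elementwise multiplication is its own inverse here."""
    return a * b

def bundle(vectors):
    """Superpose several vectors (sign of the elementwise sum)."""
    v = np.sign(np.sum(vectors, axis=0))
    v[v == 0] = 1
    return v

def cosine(a, b):
    return float(a @ b) / DIM

# codebook of atomic symbols (roles and fillers)
symbols = {name: rand_vec() for name in
           ["name", "age", "city", "alice", "42", "paris", "bob"]}

# encode the record {name: alice, age: 42, city: paris} as a single vector
record = bundle([bind(symbols["name"], symbols["alice"]),
                 bind(symbols["age"], symbols["42"]),
                 bind(symbols["city"], symbols["paris"])])

# query: what is bound to the role "name"?  Unbind, then find the nearest symbol.
noisy = bind(record, symbols["name"])
best = max(symbols, key=lambda s: cosine(noisy, symbols[s]))
print("value of the 'name' field:", best)   # expected: alice
```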

Date and Time: 
Wednesday, October 25, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Generalized Reversible Computing and the Unconventional Computing Landscape

Topic: 
Generalized Reversible Computing and the Unconventional Computing Landscape
Abstract / Description: 

With the end of transistor scaling now in sight, the raw energy efficiency (and thus, practical performance) of conventional digital computing is expected to soon plateau. Thus, there is presently a growing interest in exploring various unconventional types of computing that may have the potential to take us beyond the limits of conventional CMOS technology. In this talk, I survey a range of unconventional computing approaches, with an emphasis on reversible computing (defined in an appropriately generalized way), which fundamental physical arguments indicate is the only possible approach that can potentially increase energy efficiency and affordable performance of arbitrary computations by unboundedly large factors as the technology is further developed.
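
For context, the fundamental physical argument usually invoked here is Landauer's bound (a standard textbook figure, not a number from the talk): irreversibly erasing one bit of information dissipates at least kT ln 2, about 3 x 10^-21 J at room temperature, and only logically reversible operations can, in principle, dip below that floor.

```latex
E_{\min} = k_B T \ln 2
  \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
  \approx 2.9\times10^{-21}\,\mathrm{J} \approx 0.018\,\mathrm{eV}.
```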

Date and Time: 
Wednesday, October 18, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: scratchwork, a tool for developing and communicating technical ideas

Topic: 
scratchwork: a tool for developing and communicating technical ideas
Abstract / Description: 

Digital tablets are no longer new or even expensive, but most of us still struggle to input our technical ideas (such as equations and diagrams) into a computer as easily as we write them on paper. I will discuss relevant existing technology and present scratchworktool.com, a tool designed to help simplify the digital writing process even without a tablet. I will also cover some of the important decisions and mistakes I made especially as I started building it. I hope these lessons will be helpful for anyone who is (or may eventually be) interested in developing similarly sophisticated products to solve a consumer-facing problem.

Date and Time: 
Wednesday, October 11, 2017 - 4:30pm
Venue: 
Gates B03

NVIDIA GPU Computing: A Journey from PC Gaming to Deep Learning [EE380 Computer Systems Colloquium]

Topic: 
NVIDIA GPU Computing: A Journey from PC Gaming to Deep Learning
Abstract / Description: 

Deep Learning and GPU Computing are now being deployed across many industries, helping to solve big data problems ranging from computer vision and natural language processing to self-driving cars. At the heart of these solutions is the NVIDIA GPU, providing the computing power both to train these massive deep neural networks and to perform inference with them efficiently. But how did the GPU get to this point?

In this talk I will present a personal perspective and some lessons learned during the GPU's journey and evolution from being the heart of the PC gaming platform, to today also powering the world's largest datacenters and supercomputers.

Date and Time: 
Wednesday, October 4, 2017 - 4:30pm
Venue: 
Gates B03

The Machineries of Doubt & Disinformation: Cigarettes, Climate & Other Electronic Confusions [EE380 Computer Systems Colloquium]

Topic: 
The Machineries of Doubt & Disinformation: Cigarettes, Climate & Other Electronic Confusions
Abstract / Description: 

The tobacco industry has long employed the best marketing techniques and adopted the latest technologies for disinformation. The fossil energy industries have employed similar tactics and technologies. For both, the Internet has proved a fertile ground and by now, similar tactics have gained force in politics.

For example, the 2009 "Climategate" theft and use of emails against climate scientists seems a precursor of recent Russian efforts in the American and French elections.

This talk uses insights from the well-documented history of tobacco and fossil-fuel disinformation machinery to anticipate further attacks on science and political processes, including thoughts about the challenges of informed skepticism in the world of the Internet, Twitter, and Facebook, and of electronic cigarettes that monitor and control usage and may report back to the vendor.

Date and Time: 
Wednesday, May 10, 2017 - 4:30pm
Venue: 
Gates B03

Ethics, Algorithms, and Systems [EE380 Computer Systems]

Topic: 
Ethics, Algorithms, and Systems
Abstract / Description: 

The Internet has made possible new means of manipulating opinions, purchases and votes that are unprecedented in human history in their effectiveness, scale and clandestine nature. Whether closely guided by human hands or operating independently of their creators, these algorithms now guide human decision making 24/7, often in ways that have ethical consequences. Biased search rankings, for example, have been shown to shift the voting preferences of undecided voters dramatically without any awareness on their part that they are being manipulated (the Search Engine Manipulation Effect, or SEME).

Recent research shows that SEME can impact a wide range of opinions, not just voting preferences, and that multiple searches increase SEME's impact. New experiments also help to explain why SEME is so powerful and demonstrate how SEME can be suppressed to some extent.

In 2016, new research demonstrated that search suggestions (in "autocomplete") can also be used to shift opinions and votes (the Search Suggestion Effect, or SSE).

Demonstrating these possibilities in research is one thing; do search engine companies actually show people search suggestions or search results that are biased in some way?

In 2016, AIBRT researchers recruited a nationwide network of field agents whose election-related searches were collected and aggregated for six months before the November election, thus preserving 13,207 searches and the 98,044 web pages to which the search results linked. This unique data set revealed that search results were indeed biased toward one candidate during most of this period in all 10 search positions on the first page of search results - enough, perhaps, to shift millions of votes without people's knowledge.

Based on the success of this tracking effort, in early 2017 experts in multiple fields and at multiple universities in the US and Europe came together to create The Sunlight Society (http://TheSunlightSociety.org), a nonprofit organization devoted to creating a worldwide ecosystem of passive monitoring software that will reveal a wide range of online manipulations as they occur, thus providing a means for identifying unethical algorithms as they are launched.

Date and Time: 
Wednesday, June 7, 2017 - 4:30pm
Venue: 
Gates B03


Ginzton Lab

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

Ginzton Lab / AMO Seminar

Topic: 
2D/3D Photonic Integration Technologies for Arbitrary Optical Waveform Generation in Temporal, Spectral, and Spatial Domains
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 / Ginzton Lab / AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 29, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
Silicon-Plus Photonics for Tomorrow's (Astronomically) Large-Scale Networks
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 / Ginzton Lab / AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 22, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
'Supermode-Polariton Condensation in a Multimode Cavity QED-BEC System' and 'Probing Ultrafast Electron Dynamics in Atoms and Molecules'
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 / Ginzton Lab / AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, January 4, 2016 - 4:15pm to 5:30pm
Venue: 
Spilker 232

Ginzton Lab: Special Optics Seminar

Topic: 
A Carbon Nanotube Optical Rectenna
Abstract / Description: 

An optical rectenna – that is, a device that directly converts free-propagating electromagnetic waves at optical frequencies to d.c. electricity – was first proposed over 40 years ago, yet this concept has not been demonstrated experimentally due to fabrication challenges at the nanoscale. Realizing an optical rectenna requires that an antenna be coupled to a diode that operates on the order of 1 petahertz (switching speed on the order of a femtosecond). Ultralow capacitance, on the order of a few attofarads, enables a diode to operate at these frequencies; and the development of metal-insulator-metal tunnel junctions with nanoscale dimensions has emerged as a potential path to diodes with ultralow capacitance, but these structures remain extremely difficult to fabricate and couple to a nanoscale antenna reliably. Here we demonstrate an optical rectenna by engineering metal-insulator-metal tunnel diodes, with ultralow junction capacitance of approximately 2 attofarads, at the tips of multiwall carbon nanotubes, which act as the antenna and metallic electron field emitter in the diode. This demonstration is achieved using very small diode areas based on the diameter of a single carbon nanotube (about 10 nanometers), geometric field enhancement at the carbon nanotube tips, and a low work function semitransparent top metal contact. Using vertically-aligned arrays of the diodes, we measure d.c. open-circuit voltage and short-circuit current at visible and infrared electromagnetic frequencies that is due to a rectification process, and quantify minor contributions from thermal effects. In contrast to recent reports of photodetection based on hot electron decay in plasmonic nanoscale antenna, a coherent optical antenna field is rectified directly in our devices, consistent with rectenna theory. Our devices show evidence of photon-assisted tunneling that reduces diode resistance by two orders of magnitude under monochromatic illumination. Additionally, power rectification is observed under simulated solar illumination. Numerous current-voltage scans on different devices, and between 5-77 degrees Celsius, show no detectable change in diode performance, indicating a potential for robust operation.
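
One way to see why attofarad-scale capacitance matters (a back-of-the-envelope RC estimate of mine, not a figure quoted from the talk): for the diode's RC cutoff to reach the petahertz range at a capacitance on the order of 2 aF, the effective diode resistance must be of order 100 ohms,

```latex
f_c = \frac{1}{2\pi R C}
\;\Rightarrow\;
R \lesssim \frac{1}{2\pi\,(10^{15}\,\mathrm{Hz})\,(2\times10^{-18}\,\mathrm{F})}
\approx 80\ \Omega.
```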

Date and Time: 
Tuesday, October 20, 2015 - 2:00pm to 3:00pm
Venue: 
Spilker 232


Information Systems Lab (ISL) Colloquium

ISL Colloquium: Delay, memory, and messaging tradeoffs in a distributed service system

Topic: 
Delay, memory, and messaging tradeoffs in a distributed service system
Abstract / Description: 

We consider the classical supermarket model: jobs arrive as a Poisson process of rate lambda N, with 0 < lambda < 1, and are to be routed to one of N identical servers with unit-mean, exponentially distributed processing times. We review a variety of policies and architectures that have been considered in the literature, and which differ in terms of the direction and number of messages that are exchanged and the memory that they employ; for example, the "power-of-d-choices" or pull-based policies. In order to compare policies of this kind, we focus on the resources (memory and messaging) that they use, and on whether the expected delay of a typical job vanishes as N increases.
We show that if (i) the message rate increases superlinearly, or (ii) the memory size increases superlogarithmically, as a function of N, then there exists a policy that drives the delay to zero, and we outline an analysis using fluid models. On the other hand, if neither condition (i) nor (ii) holds, then no policy within a broad class of symmetric policies can yield vanishing delay.

Date and Time: 
Thursday, November 9, 2017 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Dykstra’s Algorithm, ADMM, and Coordinate Descent: Connections, Insights, and Extensions

Topic: 
Dykstra’s Algorithm, ADMM, and Coordinate Descent: Connections, Insights, and Extensions
Abstract / Description: 

We study connections between Dykstra's algorithm for projecting onto an intersection of convex sets, the augmented Lagrangian method of multipliers or ADMM, and block coordinate descent. We prove that coordinate descent for a regularized regression problem, in which the penalty is a separable sum of support functions, is exactly equivalent to Dykstra's algorithm applied to the dual problem. ADMM on the dual problem is also seen to be equivalent, in the special case of two sets, with one being a linear subspace. These connections, aside from being interesting in their own right, suggest new ways of analyzing and extending coordinate descent. For example, from existing convergence theory on Dykstra's algorithm over polyhedra, we discern that coordinate descent for the lasso problem converges at an (asymptotically) linear rate. We also develop two parallel versions of coordinate descent, based on the Dykstra and ADMM connections.
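
To fix ideas, here is a minimal sketch of Dykstra's algorithm itself in its generic textbook form (my own illustration on two simple sets, not the paper's regression formulation): it cycles through projections onto a box and a halfspace while carrying correction terms, and converges to the projection of the starting point onto the intersection.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    """Euclidean projection onto the halfspace {z : a.z <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def dykstra(y, projections, iters=200):
    """Dykstra's algorithm: project y onto the intersection of convex sets,
    each given by its projection operator. Unlike plain alternating projections,
    the correction terms p[i] make the limit the *nearest* feasible point,
    not just some feasible point."""
    x = y.copy()
    p = [np.zeros_like(y) for _ in projections]
    for _ in range(iters):
        for i, proj in enumerate(projections):
            x_old = x
            x = proj(x_old + p[i])
            p[i] = x_old + p[i] - x
    return x

if __name__ == "__main__":
    a = np.array([1.0, 1.0])
    y = np.array([3.0, 0.2])
    sol = dykstra(y, [project_box, lambda x: project_halfspace(x, a, b=1.0)])
    print("projection of y onto box-and-halfspace intersection:", sol)  # ~ [1, 0]
```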

Date and Time: 
Wednesday, October 11, 2017 - 4:15pm
Venue: 
Packard 202

An Information Theoretic Perspective of Fronthaul Constrained Cloud and Fog Radio Access Networks [Special Seminar: ISL Colloquium]

Topic: 
An Information Theoretic Perspective of Fronthaul Constrained Cloud and Fog Radio Access Networks
Abstract / Description: 

Cloud radio access networks (C-RANs) are emerging as appealing architectures for next-generation wireless/cellular systems, whereby the processing/decoding is migrated from the local base stations/radio units (RUs) to a control/central unit (CU) in the "cloud". Fog radio access networks (F-RANs) address the case where the RUs are enhanced by the ability to locally cache popular contents. The network operates via fronthaul digital links connecting the CU and the RUs. In this talk we will address basic information theoretic aspects of such networks, with an emphasis on simple oblivious processing. Theoretical results illustrate the considerable performance gains to be expected for different cellular models. Some interesting theoretical directions conclude the presentation.

Date and Time: 
Wednesday, May 24, 2017 - 2:00pm
Venue: 
Packard 202

Cracking Big Data with Small Data [ISL Colloquium]

Topic: 
Cracking Big Data with Small Data
Abstract / Description: 

For the last several years, we have witnessed the emergence of datasets of an unprecedented scale across different scientific disciplines. The large volume of such datasets presents new computational challenges, as the diverse, feature-rich, and usually high-resolution data does not allow for effective data-intensive inference. In this regard, data summarization is a compelling (and sometimes the only) approach that aims at both exploiting the richness of large-scale data and remaining computationally tractable. Instead of operating on complex and large data directly, carefully constructed summaries not only enable the execution of various data analytics tasks but also improve their efficiency and scalability.

A systematic way to do data summarization is to turn the problem into selecting a subset of data elements that optimizes a utility function quantifying the "representativeness" of the selected set. Oftentimes, these objective functions satisfy submodularity, an intuitive notion of diminishing returns stating that selecting any given element earlier helps more than selecting it later. Thus, many problems in data summarization require maximizing submodular set functions subject to cardinality constraints, and massive data means we have to solve these problems at scale.

In this talk, I will present our recent efforts in developing practical schemes for data summarization. In particular, I will first discuss the fastest centralized solution, whose query complexity is only linear in the data size. However, to truly summarize massive data we need to opt for scalable methods. I will then present a streaming algorithm that, with a single pass over the data, provides a constant-factor approximation guarantee to the optimum solution. Finally, I will talk about a distributed approach that summarizes tens of millions of data points in a timely fashion. I will also demonstrate experiments on several applications, including sparse Gaussian process inference and exemplar-based clustering using Apache Spark.
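
As a deliberately simple illustration of the kind of objective involved (a generic textbook greedy of my own, not the accelerated, streaming, or distributed algorithms from the talk), the following maximizes a facility-location utility, a classic monotone submodular measure of representativeness, under a cardinality constraint.

```python
import numpy as np

def facility_location(selected, similarity):
    """Monotone submodular utility: how well the selected exemplars cover the data.
    similarity[i, j] is the similarity between data points i and j."""
    if not selected:
        return 0.0
    return float(similarity[:, selected].max(axis=1).sum())

def greedy_summarize(similarity, k):
    """Standard greedy: repeatedly add the element with the largest marginal gain.
    For monotone submodular objectives this gives a (1 - 1/e) approximation."""
    selected = []
    for _ in range(k):
        gains = {
            j: facility_location(selected + [j], similarity)
               - facility_location(selected, similarity)
            for j in range(similarity.shape[0]) if j not in selected
        }
        selected.append(max(gains, key=gains.get))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                                       # toy dataset
    sim = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=2))      # RBF-style similarity
    print("chosen exemplars:", greedy_summarize(sim, k=5))
```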

Date and Time: 
Thursday, May 11, 2017 - 4:15pm
Venue: 
Packard 101

An information-theoretic perspective on interference management [ISL Colloquium]

Topic: 
An information-theoretic perspective on interference management
Abstract / Description: 

For high data rates and massive connectivity, next-generation cellular networks are expected to deploy many small base stations. While such dense deployment provides the benefit of bringing radio closer to end users, it also increases the amount of interference from neighboring cells. Consequently, efficient and effective management of interference is expected to become one of the main challenges for high-spectral-efficiency, low-power, broad-coverage wireless communications.

In this talk, we introduce two competing paradigms of interference management and discuss recent developments in network information theory under these paradigms. In the first "distributed network" paradigm, the network consists of autonomous cells with minimal cooperation. We explore advanced channel coding techniques for the corresponding mathematical model of the "interference channel," focusing mainly on the sliding-window superposition coding scheme that achieves the performance of simultaneous decoding through point-to-point channel codes and low-complexity decoding. In the second "centralized network" paradigm, the network is a group of neighboring cells connected via backhaul links. For uplink and downlink communications over this "two-hop relay network," we develop dual coding schemes – noisy network coding and distributed decode-forward – that achieve capacity universally within a few bits per degree of freedom.
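
For readers less familiar with the model, the standard two-user Gaussian interference channel (a textbook formulation with generic notation, not necessarily the exact setting analyzed in the talk) is

    $Y_1 = g_{11} X_1 + g_{12} X_2 + Z_1, \qquad Y_2 = g_{21} X_1 + g_{22} X_2 + Z_2,$

where $Z_1, Z_2$ are independent unit-variance Gaussian noises, each input satisfies a power constraint $E[X_k^2] \le P_k$, and receiver $k$ is interested only in the message carried by $X_k$. The cross gains $g_{12}, g_{21}$ capture the inter-cell interference that the coding schemes above are designed to manage.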

Date and Time: 
Thursday, April 20, 2017 - 4:15pm
Venue: 
Packard 101

When Exploration is Expensive -- Reducing and Bounding the Amount of Experience Needed to Learn to Make Good Decisions [ISL]

Topic: 
When Exploration is Expensive -- Reducing and Bounding the Amount of Experience Needed to Learn to Make Good Decisions
Abstract / Description: 

Understanding the limits of how much experience is needed to learn to make good decisions is both a foundational issue in reinforcement learning and a question with important applications. Indeed, the potential for artificial agents that help augment human capabilities, in the form of automated coaches or teachers, is enormous. Such reinforcement learning agents must explore in costly domains, since each experience comes from interacting with a human. I will discuss some of our recent theoretical results on sample-efficient reinforcement learning.


 

The Information Systems Laboratory Colloquium (ISLC) is typically held in Packard 101 every Thursday at 4:15 pm during the academic year. Refreshments are usually served after the talk.

The Colloquium is organized by graduate students Martin Zhang, Farzan Farnia, Reza Takapoui, and Zhengyuan Zhou. To suggest speakers, please contact any of the students.

Date and Time: 
Thursday, April 27, 2017 - 4:15pm
Venue: 
Packard 101

Self-Driving Networks Workshop [ISL]

Topic: 
Self-Driving Networks Workshop
Abstract / Description: 

Networks have become very complex over the past decade. The users and operators of large cloud platforms and campus networks have desired a much more programmable network infrastructure to meet the dynamic needs of different applications and reduce the friction they can cause to each other. This has culminated in the Software-defined Networking paradigm. But you cannot program what you do not understand: the volume, velocity and richness of network applications and traffic seem beyond the ability of direct human comprehension. What is needed is a sensing, inference and learning system which can observe the data emitted by a network during the course of its operation, reconstruct the network's evolution, infer key performance metrics, continually learn the best responses to rapidly-changing load and operating conditions, and help the network adapt to them in real-time. The workshop brings together academic and industry groups interested in the broad themes of this topic. It highlights ongoing research at Stanford and describes initial prototype systems and results from pilot deployments.

Date and Time: 
Wednesday, April 12, 2017 (All day)
Venue: 
Arrillaga Alumni Center

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Insensitivity of Loss Systems under Randomized SQ(d) Algorithms [ISL Colloquium]

Topic: 
Insensitivity of Loss Systems under Randomized SQ(d) Algorithms
Abstract / Description: 

In many applications, such as cloud computing and managing server farm resources, an incoming task or job has to be matched with an appropriate server in order to minimise the latency or blocking associated with the processing. Ideally the best choice would be to match a job to the fastest available server. However, when there are thousands of servers, requiring information on the state of every server is overkill.

Pioneered in the 1990's, the idea of randomised sampling of a few servers was proposed by Vvedenskaya and Dobrushin in Russia and by Mitzenmacher in the US, and popularised as the "power of two" scheme: sampling two servers at random and sending the job to the "better" server (i.e. the one with the shortest queue, or the most resources) provides most of the benefits of sampling all the servers.

In the talk I will discuss multi-server loss models under the power-of-d routing scheme when service time distributions are general with finite mean. Previous works on these models assume that the service times are exponentially distributed, and insensitivity was suggested through simulations; showing insensitivity to the service time distribution has remained an open problem. We address this problem by considering service time distributions that are Mixed-Erlang distributions, which are dense in the class of general distributions on (0, ∞). We derive the mean field equations (MFE) of the empirical distributions for the system and establish the existence and uniqueness of the fixed point of the MFE. Furthermore, we show that this fixed point coincides with the fixed point of the MFE for a system with exponential service times, showing that the fixed point is insensitive to the distribution. Due to the lack of uniformity of the Mixed-Erlang convergence, the true general case needs to be handled differently. I will conclude with the case of the MFE with general service times, showing that the MFE is now characterized by a PDE whose stationary point coincides with the fixed point in the case with exponential service times. The techniques developed in this paper are applicable to studying mean field limits for Markov processes on general state spaces and insensitivity properties of other queueing models.
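
To make the SQ(d) routing rule concrete, here is a toy Monte Carlo sketch of a loss system with power-of-d routing. Poisson arrivals, exponential service times, and all parameter values are simplifying assumptions for illustration; the talk concerns the mean-field limit and general service distributions, which this sketch does not reproduce.

    # Toy loss system under power-of-d (SQ(d)) routing: each of n servers holds
    # at most C jobs; an arrival probes d servers chosen uniformly at random,
    # joins the least occupied one, and is blocked if all probed servers are full.
    import heapq, random

    def blocking_probability(n=100, C=5, d=2, lam_per_server=4.0, T=500.0, seed=0):
        rng = random.Random(seed)
        occ = [0] * n                    # current number of jobs at each server
        departures = []                  # min-heap of (departure_time, server)
        t, arrivals, blocked = 0.0, 0, 0
        while t < T:
            t += rng.expovariate(lam_per_server * n)   # next Poisson arrival
            while departures and departures[0][0] <= t:
                _, s = heapq.heappop(departures)       # clear finished jobs
                occ[s] -= 1
            arrivals += 1
            sample = rng.sample(range(n), d)           # SQ(d): probe d random servers
            s = min(sample, key=lambda i: occ[i])
            if occ[s] >= C:
                blocked += 1                           # every probed server is full
            else:
                occ[s] += 1
                heapq.heappush(departures, (t + rng.expovariate(1.0), s))
        return blocked / arrivals

    print(blocking_probability(d=1), blocking_probability(d=2))

Running it shows the familiar effect: even d=2 reduces blocking dramatically compared with purely random routing (d=1).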

Date and Time: 
Monday, March 20, 2017 - 3:00pm
Venue: 
Packard 202

Anonymity in the Bitcoin Peer-to-Peer Network [ISL Colloquium]

Topic: 
Anonymity in the Bitcoin Peer-to-Peer Network
Abstract / Description: 

Bitcoin enjoys a public perception of being a privacy-preserving financial system. In reality, Bitcoin has a number of privacy vulnerabilities, including the well-studied fact that transactions can be linked through the public blockchain. More recently, researchers have demonstrated deanonymization attacks that exploit a lower-layer weakness: the Bitcoin peer-to-peer (P2P) networking stack. In particular, the P2P network currently forwards content in a structured way that allows observers to deanonymize users by linking their transactions to the originating IP addresses. In this work, we first demonstrate that current protocols exhibit poor anonymity guarantees, both theoretically and in practice. Then, we consider a first-principles redesign of the P2P network, with the goal of providing strong, provable anonymity guarantees. We propose a simple networking policy called Dandelion, which achieves nearly-optimal anonymity guarantees at minimal cost to the network's utility.
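
A rough sketch of the two-phase forwarding idea behind Dandelion as described in the literature: a "stem" phase in which each node relays the transaction to a single chosen neighbor, followed by a "fluff" phase of ordinary diffusion. The graph, the continuation probability q, and the node behavior below are illustrative assumptions, not the actual Bitcoin networking code.

    # Toy illustration of Dandelion-style forwarding: stem, then fluff.
    import random

    def dandelion_spread(graph, source, q=0.9, rng=random.Random(1)):
        # Stem phase: pass the transaction along a random path, continuing w.p. q.
        node, stem_path = source, [source]
        while rng.random() < q:
            node = rng.choice(graph[node])
            stem_path.append(node)
        # Fluff phase: ordinary flooding (diffusion) from wherever the stem ended.
        seen, frontier = {node}, [node]
        while frontier:
            frontier = [v for u in frontier for v in graph[u] if v not in seen]
            seen.update(frontier)
        # An observer of the diffusion sees it originate at stem_path[-1], not at source.
        return stem_path, seen

    ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
    print(dandelion_spread(ring, source=0))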

Date and Time: 
Thursday, March 16, 2017 - 4:15pm
Venue: 
Packard 101


IT-Forum

IT Forum: Tight regret bounds for a latent variable model of recommendation systems

Topic: 
Tight regret bounds for a latent variable model of recommendation systems
Abstract / Description: 

We consider an online model for recommendation systems, with each user being recommended an item at each time-step and providing 'like' or 'dislike' feedback. A latent variable model specifies the user preferences: both users and items are clustered into types. The model captures structure in both the item and user spaces, and our focus is on the simultaneous use of both structures. We analyze the situation in which the type preference matrix has i.i.d. entries. Our analysis elucidates the system operating regimes in which existing algorithms are nearly optimal, and highlights the sub-optimality of using only one of the item or user structures (as is done in commonly used item-item and user-user collaborative filtering). This prompts a new algorithm that is nearly optimal in essentially all parameter regimes.
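
For intuition, here is a minimal generator for the kind of latent variable model described above: users and items each carry a hidden type, and feedback depends only on the pair of types through a preference matrix with i.i.d. entries. The sizes, the noise flip, and the probabilities are placeholder assumptions, not the parameters analyzed in the talk.

    # Sketch of the latent-variable feedback model behind the analysis.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, user_types, item_types = 200, 300, 4, 5

    xi = rng.integers(user_types, size=n_users)               # latent user types
    theta = rng.integers(item_types, size=n_items)            # latent item types
    pref = rng.integers(0, 2, size=(user_types, item_types))  # i.i.d. {0,1} type preferences

    def feedback(user, item, noise=0.05):
        """'like' (1) / 'dislike' (0), flipped with small probability (added assumption)."""
        bit = pref[xi[user], theta[item]]
        return int(bit ^ (rng.random() < noise))

    print(feedback(0, 0), feedback(0, 1))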

Joint work with Prof. Guy Bresler.

Date and Time: 
Friday, November 10, 2017 - 1:15pm
Venue: 
Packard 202

IT-Forum: Information Theoretic Limits of Molecular Communication and System Design Using Machine Learning

Topic: 
Information Theoretic Limits of Molecular Communication and System Design Using Machine Learning
Abstract / Description: 

Molecular communication is a new and bio-inspired field, where chemical signals are used to transfer information instead of electromagnetic or electrical signals. In this paradigm, the transmitter releases chemicals or molecules and encodes information on some property of these signals, such as their timing or concentration. The signal then propagates through the medium between the transmitter and the receiver by different means, such as diffusion, until it arrives at the receiver, where the signal is detected and the information decoded. This new multidisciplinary field can be used for in-body communication, secrecy, networking microscale and nanoscale devices, infrastructure monitoring in smart cities and industrial complexes, as well as for underwater communications. Since these systems are fundamentally different from telecommunication systems, most techniques that have been developed over the past few decades to advance radio technology cannot be applied to them directly.

In this talk, we first explore some of the fundamental limits of molecular communication channels, evaluating how capacity scales with the number of particles released by the transmitter and characterizing the optimal input distribution. Then, since the underlying channel models for some molecular communication systems are unknown, we demonstrate how techniques from machine learning and deep learning can be used to design components such as detection algorithms directly from transmission data, without any knowledge of the underlying channel models.
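
As a purely illustrative toy (the channel model, parameters, and detector below are assumptions, not the systems studied in the talk), concentration-based on-off keying can be detected by thresholding a Poisson-distributed count of received molecules:

    # Toy molecular on-off keying: bit 1 -> release N molecules, bit 0 -> release none;
    # the receiver observes a Poisson count (capture probability p plus background).
    import numpy as np

    rng = np.random.default_rng(0)

    def transmit(bits, N=200, p=0.1, background=3.0):
        means = [background + (N * p if b else 0.0) for b in bits]
        return rng.poisson(means)          # molecule counts seen at the receiver

    def detect(counts, threshold=10):
        return [int(c > threshold) for c in counts]

    bits = [1, 0, 1, 1, 0]
    print(detect(transmit(bits)))          # usually recovers [1, 0, 1, 1, 0]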

Date and Time: 
Monday, October 16, 2017 - 3:25pm to 4:25pm
Venue: 
Packard 202

Estimation of entropy and differential entropy beyond i.i.d. and discrete distributions

Topic: 
Estimation of entropy and differential entropy beyond i.i.d. and discrete distributions
Abstract / Description: 

Recent years have witnessed significant progress in entropy and mutual information estimation, in particular in the large alphabet regime. Concretely, there exist efficiently computable estimators whose performance with n samples is essentially that of the maximum likelihood estimator with n log(n) samples, a phenomenon termed "effective sample size enlargement". Generalizations to processes with memory (estimation of the entropy rate) and continuous distributions (estimation of the differential entropy) have remained largely open. This talk is about the challenges behind those generalizations and recent progress in this direction. For estimating the entropy rate of a Markov chain, we show that when the mixing time is not too slow, at least S^2/log(S) samples are required to consistently estimate the entropy rate, where S is the size of the state space. In contrast, the empirical entropy rate requires S^2 samples to achieve consistency even if the Markov chain is i.i.d. We propose a general approach to achieve the S^2/log(S) sample complexity, and illustrate our results through estimating the entropy rate of the English language from the Penn Treebank (PTB) and the Google 1 Billion Word Dataset. For differential entropy estimation, we characterize the minimax behavior over Besov balls, and show that a fixed-k nearest neighbor estimator adaptively achieves the minimax rates up to logarithmic factors without knowing the smoothness of the density. The "effective sample size enlargement" phenomenon holds in both the Markov chain case and the case of continuous distributions.
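
For reference, the naive empirical (plug-in) entropy-rate estimator that the abstract compares against looks like the sketch below; the improved S^2/log(S) estimator from the talk involves additional bias correction that is not reproduced here.

    # Plug-in estimate of the entropy rate of a finite-state Markov chain from one
    # trajectory: estimate the transition matrix and the empirical state frequencies,
    # then compute H = -sum_i pi_i sum_j P_ij log P_ij (in nats per symbol).
    import numpy as np

    def empirical_entropy_rate(x, S):
        x = np.asarray(x)
        counts = np.zeros((S, S))
        np.add.at(counts, (x[:-1], x[1:]), 1)          # transition counts
        pi = counts.sum(axis=1) / counts.sum()         # empirical state frequencies
        P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
        logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
        return float(-np.sum(pi[:, None] * P * logP))

    rng = np.random.default_rng(0)
    P_true = np.array([[0.9, 0.1], [0.3, 0.7]])        # toy 2-state chain
    x = [0]
    for _ in range(50000):
        x.append(rng.choice(2, p=P_true[x[-1]]))
    print(empirical_entropy_rate(x, S=2))              # close to the true rate (about 0.40 nats)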

 

Joint work with Weihao Gao, Yanjun Han, Chuan-Zheng Lee, Pramod Viswanath, Tsachy Weissman, Yihong Wu, and Tiancheng Yu.

Date and Time: 
Friday, October 13, 2017 - 1:15pm
Venue: 
Packard 202

IT-Forum: Multi-Agent Online Learning under Imperfect Information: Algorithms, Theory and Applications

Topic: 
Multi-Agent Online Learning under Imperfect Information: Algorithms, Theory and Applications
Abstract / Description: 

We consider a model of multi-agent online learning under imperfect information, where the reward structures of agents are given by a general continuous game. After introducing a general equilibrium stability notion for continuous games, called variational stability, we examine the well-known online mirror descent (OMD) learning algorithm and show that the "last iterate" (that is, the actual sequence of actions) of OMD converges to variationally stable Nash equilibria provided that the feedback delays faced by the agents are synchronous and bounded. We then extend the result to almost sure convergence to variationally stable Nash equilibria under both unbiased noise and synchronous and bounded delays. Subsequently, to tackle fully decentralized, asynchronous environments with unbounded feedback delays, we propose a variant of OMD which we call delayed mirror descent (DMD), and which relies on the repeated leveraging of past information. With this modification, the algorithm converges to variationally stable Nash equilibria, with no feedback synchronicity assumptions, and even when the delays grow super-linearly relative to the game's horizon. We then again extend it to the case where there are both delays and noise.
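
For readers unfamiliar with OMD, here is the basic single-agent update with the entropic mirror map on the probability simplex (exponentiated gradient). This is only a baseline sketch under my own choice of mirror map; the game-theoretic feedback, delays, noise, and the DMD variant discussed in the talk are not reproduced.

    # Online mirror descent with the entropic mirror map: x_{t+1} is proportional
    # to x_t * exp(eta * g_t) for a payoff gradient g_t, then renormalized.
    import numpy as np

    def omd_entropic(gradient_oracle, dim, eta=0.1, T=1000):
        x = np.full(dim, 1.0 / dim)           # start at the uniform distribution
        for t in range(T):
            g = gradient_oracle(x, t)         # (possibly delayed/noisy) payoff gradient
            x = x * np.exp(eta * g)           # mirror step in the dual space...
            x = x / x.sum()                   # ...then map back onto the simplex
        return x

    # Example: maximize the concave payoff f(x) = -||x - target||^2 over the simplex.
    target = np.array([0.7, 0.2, 0.1])
    oracle = lambda x, t: -2.0 * (x - target)
    print(omd_entropic(oracle, dim=3))        # approaches the target distribution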

In the second part of the talk, we present two applications of the multi-agent online learning framework. The first application is to non-convex stochastic optimization, where we characterize almost sure convergence of the well-known stochastic mirror descent algorithm to global optima for a large class of non-convex stochastic optimization problems (strictly including convex, quasi-convex and star-convex problems). Going a step further, our results also include as a special case the large-scale stochastic optimization problem, where stochastic mirror descent is applied in a distributed, asynchronous manner across multiple machines/processors. Time permitting, we will discuss how these results help (at least in part) clarify and affirm the recent successes of mirror-descent type algorithms in large-scale machine learning. The second application concerns power management on random wireless networks, where we use a game-design approach to derive robust power control algorithms that converge (almost surely) to the optimal power allocation in the presence of randomly fluctuating networks.

This is joint work with Nick Bambos, Stephen Boyd, Panayotis Mertikopoulos, Peter Glynn and Claire Tomlin.


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, October 6, 2017 - 1:15pm
Venue: 
Packard 101

Biology as Information Dynamics [IT forum]

Topic: 
Biology as Information Dynamics
Abstract / Description: 

If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the 'replicator equation' – a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher's fundamental theorem of natural selection.
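
For reference, the replicator equation and the calculation behind the relative-information viewpoint (a standard derivation, written here in generic notation rather than taken verbatim from the talk): with $x_i$ the fraction of type $i$ and $f_i(x)$ its fitness,

    $\dot{x}_i = x_i\big(f_i(x) - \bar{f}(x)\big), \qquad \bar{f}(x) = \sum_j x_j f_j(x),$

and for any fixed distribution $q$,

    $\frac{d}{dt} D\big(q \,\|\, x(t)\big) = \bar{f}(x) - \sum_i q_i f_i(x).$

So $D(q \| x(t))$, the information of $q$ relative to the current population state, decreases whenever the $q$-weighted mean fitness exceeds the population mean fitness, which is the sense in which the population can be said to "learn" the distribution $q$.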

Date and Time: 
Thursday, April 20, 2017 - 4:20pm
Venue: 
Clark S361

Minimum Rates of Approximate Sufficient Statistics [IT-Forum]

Topic: 
Minimum Rates of Approximate Sufficient Statistics
Abstract / Description: 

Given a sufficient statistic for a parametric family of distributions, one can estimate the parameter without access to the data itself. However, the memory needed to store the sufficient statistic may be prohibitive. Indeed, for $n$ independent data samples drawn from a $k$-nomial distribution with $d=k-1$ degrees of freedom, the length of the code scales as $d\log n+O(1)$. In many applications, though, we may not have a useful notion of sufficient statistics and also may not need to reconstruct the generating distribution exactly. By adopting an information-theoretic approach in which we allow a small error in estimating the generating distribution, we construct various notions of approximate sufficient statistics and show that the code length can be reduced to $\frac{d}{2}\log n + O(1)$. We consider errors measured according to the relative entropy and variational distance criteria. For the code constructions, we leverage Rissanen's minimum description length principle, which yields a non-vanishing error measured using the relative entropy. For the converse parts, we use Clarke and Barron's asymptotic expansion for the relative entropy between a parametrized distribution and the corresponding mixture distribution. The limitation of this method is that only a weak converse for the variational distance can be shown. We develop new techniques to achieve vanishing errors, and we also prove strong converses for all our statements. The latter means that even if the code is allowed to have a non-vanishing error, its length must still be at least $\frac{d}{2}\log n$.

This is joint work with Prof. Masahito Hayashi (Graduate School of Mathematics, Nagoya University, and Center for Quantum Technologies, NUS).


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:00 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Monday, April 17, 2017 - 1:15pm
Venue: 
Packard 202

Information Theory, Geometry, and Cover's Open Problem [IT-Forum]

Topic: 
Information Theory, Geometry, and Cover's Open Problem
Abstract / Description: 

Formulating the problem of determining the communication capacity of point-to-point channels as a problem in high-dimensional geometry is one of Shannon's most important insights, one that led to the conception of information theory. However, such geometric insights have been limited to the point-to-point case and have not been effectively utilized to attack network problems. In this talk, we present our recent work, which develops a geometric approach to make progress on one of the central problems in network information theory, namely the capacity of the relay channel. In particular, consider a memoryless relay channel where the channel from the relay to the destination is an isolated bit pipe of capacity C0. Let C(C0) denote the capacity of this channel as a function of C0. What is the critical value of C0 such that C(C0) first equals C(infinity)? This is a long-standing open problem posed by Cover and named "The Capacity of the Relay Channel" in Open Problems in Communication and Computation, Springer-Verlag, 1987. In this talk, we answer this question in the Gaussian case and show that C(C0) cannot equal C(infinity) unless C0 is itself infinite, regardless of the SNR of the Gaussian channels, while the cut-set bound would suggest that C(infinity) can be achieved at finite C0. The key step in our proof is a strengthening of the isoperimetric inequality on a high-dimensional sphere, which we use to develop a packing argument on a spherical cap that resembles Shannon's sphere packing idea for point-to-point channels.

Joint work with Leighton Barnes and Ayfer Ozgur.


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:00 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 24, 2017 - 1:15pm
Venue: 
Packard 202

Codes and card tricks: Magic for adversarial crowds [IT-Forum]

Topic: 
Codes and card tricks: Magic for adversarial crowds
Abstract / Description: 

Rated by Ron Graham as one of the top ten mathematical card tricks of the 20th century, Diaconis' mind reader is a magic trick that involves interaction with five collaborating volunteers. Inspired by this, we perform a similar card trick in this talk, upgraded to tolerate bluffing volunteers. The theory behind this trick will be used to develop fundamental limits as well as code constructions for faster delay estimation in positioning systems.

This is a joint work with Sihuang Hu and Ofer Shayevitz (https://arxiv.org/abs/1605.09038).

Date and Time: 
Friday, January 27, 2017 - 1:15pm
Venue: 
Packard 202

On Two Problems in Coded Statistical Inference [IT-Forum]

Topic: 
On Two Problems in Coded Statistical Inference
Abstract / Description: 

While statistical inference and information theory are deeply related fields, problems that lie at the intersection of the two disciplines often fall between two stools and lack definitive answers. In this talk, I will discuss recent advances in two such problems.

In the first part of the talk, I will discuss a distributed hypothesis testing problem, in which the hypotheses regard the joint statistics of two sequences, one available to the decision function directly (as side information), while the other is conveyed through a limited-rate link. The goal is to design a system which obtains the optimal trade-off between the false-alarm and misdetection exponents. I will define a notion of "channel detection codes", and show that the optimal exponents of the distributed hypothesis testing problem are directly related to the exponents of these codes. Then, I will discuss a few bounds on the exponents of channel detection codes, as well as prospective improvements. This approach has two merits over previous works: it is suitable for any pair of memoryless joint distributions, and it provides bounds on the entire false-alarm/misdetection curve, rather than just bounds on its boundary points (Stein's exponent).

In the second part of the talk (time permitting), I will discuss a parameter estimation problem over an additive Gaussian noise channel with a bandlimited input. When one is allowed to design both the modulator and the estimator, the absolute $\alpha$-th moment of the estimation error can decrease exponentially with the transmission time. I will discuss several new upper (converse) bounds on the optimal decrease rate.

Joint work with Yuval Kochman (Hebrew University) and Neri Merhav (Technion).


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:00 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 17, 2017 - 1:15pm
Venue: 
Packard 202


Optics and Electronics Seminar

High Precision Motion Control 101: Tools for Emerging Applications [Stanford Optical Society]

Topic: 
High Precision Motion Control 101: Tools for Emerging Applications
Abstract / Description: 

Precision motion control is an important subset of automation, which encompasses a diverse array of applications. Many aspects of optical engineering research and development depend on the appropriate selection and use of sensors, actuators, and controllers. In precision motion projects, the actual needs may vary widely from the originally intended specifications.

Some of the most difficult tasks in optics research require complex multi-axis motion control methods. Some popular examples include Additive Manufacturing (3D Printing), Sample Positioning for Crystallography, and Alignment with Silicon Photonics. In this seminar we will address concepts and challenges in selecting actuator and sensing technologies, as well as the appropriate controller techniques. We will provide researchers with an ability to understand and apply the fundamental concepts for R&D precision motion projects and automation systems.

Date and Time: 
Wednesday, May 17, 2017 - 1:25pm
Venue: 
Spilker 232

Scientific Visualization with Blender [Stanford Optical Society Workshop]

Topic: 
Scientific Visualization with Blender
Abstract / Description: 

Have you ever said to yourself: "I'm sure this paper would have been accepted if I had just included a pretty picture..."? The pretty pictures you often see gracing the cover of Nature can be made in a number of ways, from simple sketching in PowerPoint to full-blown 3D modeling. Among the numerous software packages available for 3D computer graphics is Blender, a free and open-source package that can be used for modeling, sculpting, animating, rendering, and more. In this hands-on workshop, I will introduce basic principles of operation for the modeling and rendering of objects in Blender. Together, we will create some simple models before diving into some advanced techniques that are necessary to make images like the one seen below. Are you the type of person who prefers working from the command line? Blender is built on Python, and can be directly manipulated from a Python console or script. I will show you how to perform some unique operations using a simple Python script, with an eye towards visualizing your own data in Blender. Are you simply looking for a method of converting a 2D image into a shiny 3D model? I will also show you how to take an SVG file and turn it into a 3D model in Blender that can be manipulated. This workshop is intended for people who are entirely unfamiliar with Blender software, but the concepts covered can easily be applied to any rendering software of your choice.
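
To give a taste of scripting Blender from Python (a minimal sketch; the CSV path is a placeholder, and exact operator arguments vary slightly between Blender versions), one can place a small sphere at each data point of a point cloud:

    # Run inside Blender's Python console, or via: blender --python this_script.py
    # Reads x,y,z points from a CSV file (placeholder path) and adds a small UV
    # sphere at each one, a bare-bones way to turn data into renderable geometry.
    import csv
    import bpy

    with open("/tmp/points.csv") as f:          # placeholder path
        for row in csv.reader(f):
            x, y, z = (float(v) for v in row[:3])
            bpy.ops.mesh.primitive_uv_sphere_add(radius=0.05, location=(x, y, z))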

This is a hands-on workshop: bring your laptops, a keyboard with a numpad, and a mouse!

Date and Time: 
Wednesday, May 24, 2017 - 1:00pm to 5:00pm
Venue: 
-Venue information will be provided to registered attendees-

Supercontinuum Fiber lasers: Technology and Applications [Stanford Optical Society Seminar]

Topic: 
Supercontinuum Fiber lasers: Technology and Applications
Abstract / Description: 

In the 1970s, wide spectral broadening of intense laser light in a non-linear material, or supercontinuum generation, was first demonstrated in the laboratory. With the development of recent fiber and fiber laser technology, namely compact high-power picosecond lasers and micro-structured Photonic Crystal Fiber (PCF), commercial supercontinuum lasers have become a reality. With a typical spectral bandwidth covering over 2000 nm and output powers exceeding 20 W, these sources have proved an invaluable tool. In this talk, we will cover:

  • Fundamentals of how supercontinuum lasers work and the importance of the PCF design in tailoring the spectrum.
  • The properties of supercontinuum laser light and what makes these sources unique.
  • The main applications today for supercontinuum lasers in imaging, spectroscopy, Optical Coherence Tomography (OCT) and illumination.
  • The supercontinuum technology roadmap and future applications.
Date and Time: 
Monday, May 22, 2017 - 11:00am
Venue: 
Spilker 232

Always-On Vision Becomes a Reality [OSA Seminar]

Topic: 
Always-On Vision Becomes a Reality
Abstract / Description: 

Intelligent devices equipped with human-like senses such as always-on touch, audio and motion detection have enabled a variety of new use cases and applications, transforming the way we interact with each other and our surroundings. While the vast majority (>80%) of human insight comes through the eyes, enabling always-on vision (defined as < 1 mA power) for devices is challenging due to power-hungry hardware and the high complexity of inference algorithms. Qualcomm Research has pioneered an Always-on Computer Vision Module (CVM) combining innovations in the system architecture, ultra-low power design and dedicated hardware for vision algorithms running at the "edge." With low end-to-end power consumption, a tiny form factor and low cost, the CVM can be integrated into a wide range of battery- and line-powered devices (IoT, mobile, VR/AR, automotive, etc.), performing object detection, feature recognition, change/motion detection, and other tasks. Its processor performs all computation within the module itself and outputs metadata.

Date and Time: 
Thursday, May 11, 2017 - 4:15pm
Venue: 
Spilker 232

Quest for Energy Efficiency in Computing Technologies [Applied Physics 483 Optics & Electronics]

Topic: 
Quest for Energy Efficiency in Computing Technologies
Abstract / Description: 

As computing becomes increasingly pervasive in our daily life, it is generally recognized that energy efficiency will be one of the key design considerations for any future computing scheme. Consequently, significant research is currently under way exploring new physics, material systems and system-level designs to improve energy efficiency. In this talk, I shall discuss some of our recent progress in this regard. Specifically, the physics of ordered and correlated systems allows for a fundamental improvement in energy efficiency when a transition happens between two distinguishable states. Our recent experiments show that this theoretical promise can indeed be realized in electronic devices. The resulting gain in energy efficiency could exceed orders of magnitude.

Date and Time: 
Monday, March 13, 2017 - 4:00pm
Venue: 
Spilker 232

Synopsys LightTools Hands-On Training [Optical Society Workshop]

Topic: 
Synopsys LightTools Hands-On Training
Abstract / Description: 

Capacity: 18 people

LightTools is a 3D optical engineering and design software product that supports virtual prototyping, simulation, optimization, and photorealistic renderings of illumination applications. Its unique design and analysis capabilities, combined with ease of use, support for rapid design iterations, and automatic system optimization, help to ensure the delivery of illumination designs according to specifications and schedule.

LightTools is used by industry leaders for engineering applications such as LEDs, displays, lighting, solar, automotive, head-mounted displays, projectors, etc.

Please read:

  • This is a hands-on interactive training session; you will need to actively participate.
  • The software runs on Windows. Therefore, you will need a computer that runs Windows for the training.
  • You should register only if you are absolutely sure that you can commit to the full 4 hours.
  • Since it is a hands-on session, the capacity is limited to 18 people. The first 18 people to RSVP will receive the download link, license information, and event location.
  • Please RSVP using the following link: https://www.surveymonkey.com/r/DSHDMD5
Date and Time: 
Monday, February 27, 2017 - 1:00pm to 5:00pm

Data-driven, Interactive Scientific Articles in a Collaborative Environment with Authorea [OSA; WEE]

Topic: 
Data-driven, Interactive Scientific Articles in a Collaborative Environment with Authorea
Abstract / Description: 

Most tools that scientists use for the preparation of scholarly manuscripts, such as Microsoft Word and LaTeX, function offline and don't account for the digital-born nature of research objects. Further, most authoring tools in use today are not designed for collaboration. As scientific collaborations grow in size, research transparency and the attribution of scholarly credit are at stake. I will show how Authorea allows scientists to write rich data-driven manuscripts on the web; articles that natively offer readers a dynamic, interactive experience with an article's full text, images, data, and code, paving the way to increased data sharing, research reproducibility, and Open Science. I will also demonstrate how Authorea differs from Overleaf and ShareLaTeX.

 

Please bring your laptop to actively participate in the demo (suggested; not mandatory)

Date and Time: 
Tuesday, January 24, 2017 - 12:30pm
Venue: 
Spilker 143

OSA Special Seminar

Topic: 
Widely-Tunable High-Performance Lasers: From sophisticated optical set-ups to robust products
Abstract / Description: 

Unique manufacturing techniques enable the integration of complex optical set-ups, such as optical parametric oscillators (OPO), into highly stable products. We will discuss two such laser products that have been recently developed and released on the market: The Hübner C-WAVE, which is tunable in the visible (450-650nm) and NIR (900-1300nm); and the Cobolt Odin, which operates in the IR (2-5µm). The talk will focus on bringing new optical technology from the benchtop to market-ready products. We will also present performance data and application results from studies in atomic physics, single molecule and gas spectroscopy, and the characterization of integrated optical devices using the aforementioned products.

Date and Time: 
Thursday, November 10, 2016 - 4:15pm to 5:00pm
Venue: 
Spilker 232


SCIEN Talk

SCIEN Talk: Next Generation Wearable AR Display Technologies

Topic: 
Next Generation Wearable AR Display Technologies
Abstract / Description: 

Wearable AR/VR displays have a long history and earlier efforts failed due to various limitations. Advances in sensors, optical technologies, and computing technologies renewed the interest in this area. Most people are convinced AR will be very big. A key question is whether AR glasses can be the new computing platform and replace smart phones? I'll discuss some of the challenges ahead. We have been working on various wearable display architectures and I'll discuss our efforts related to MEMS scanned beam displays, head-mounted projectors and smart telepresence screens, and holographic near-eye displays.

Date and Time: 
Wednesday, November 29, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Near-Eye Varifocal Augmented Reality Displays

Topic: 
Near-Eye Varifocal Augmented Reality Displays
Abstract / Description: 

With the goal of registering dynamic synthetic imagery onto the real world, Ivan Sutherland envisioned a fundamental idea to combine digital displays with conventional optical components in a wearable fashion. Since then, various new advancements in the display engineering domain, and a broader understanding in the vision science domain have led us to computational displays for virtual reality and augmented reality applications. Today, such displays promise a more realistic and comfortable experience through techniques such as lightfield displays, holographic displays, always-in-focus displays, multiplane displays, and varifocal displays. In this talk, as an Nvidian, I will be presenting our new optical layouts for see-through computational near-eye displays that is simple, compact, varifocal, and provides a wide field of view with clear peripheral vision and large eyebox. Key to our efforts so far contain novel see-through rear-projection holographic screens, and deformable mirror membranes. We establish fundamental trade-offs between the quantitative parameters of resolution, field of view, and the form-factor of our designs; opening an intriguing avenue for future work on accommodation-supporting augmented reality display.

Date and Time: 
Wednesday, November 15, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Interactive 3D Digital Humans

Topic: 
Interactive 3D Digital Humans
Abstract / Description: 

This talk will cover recent methods for recording and displaying interactive life-sized digital humans using the ICT Light Stage, natural language interfaces, and automultiscopic 3D displays. We will then discuss the first full application of this technology to preserve the experience of in-person interactions with Holocaust survivors.

More Information: http://gl.ict.usc.edu/Research/TimeOffsetConversations/


The SCIEN Colloquia are open to the public. The talks are also videotaped and posted the following week on talks.stanford.edu.

There will be a reception following the presentation.

Date and Time: 
Wednesday, November 8, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Mapping molecular orientation using polarized light microscopy

Topic: 
Mapping molecular orientation using polarized light microscopy
Abstract / Description: 

Polarization is a basic property of light, but the human eye is not sensitive to it. Therefore, we don't have an intuitive understanding of polarization and of optical phenomena that are based on it. They either elude us, like the polarization of the blue sky or the rainbow, or they puzzle us, like the effect of Polaroid sunglasses. Meanwhile, polarized light plays an important role in nature and can be used to manipulate and analyze molecular order in materials, including living cells, tissues, and whole organisms, by observation with the polarized light microscope.

In this seminar, Rudolf Oldenbourg will first illustrate the nature of polarized light and its interaction with aligned materials using hands-on demonstrations. He will then introduce a modern version of the polarized light microscope, the LC-PolScope, created at the MBL. Enhanced by liquid crystal devices, electronic imaging, and digital image processing techniques, the LC-PolScope reveals and measures the orientation of molecules in every resolved specimen point at once. In recent years, his lab expanded the LC-PolScope technique to include the measurement of polarized fluorescence of GFP and other fluorescent molecules, and applied it to record the remarkable choreography of septin proteins during cell division, observed in systems ranging from yeast to mammalian cells.

Talon Chandler will then discuss extending polarized light techniques to multi-view microscopes, including light sheet and light field microscopes. In contrast to traditional, single-view microscopy, the recording of specimen images along two or more viewing directions allows us to unambiguously measure the three dimensional orientation of molecules and their aggregates. Chandler will discuss ongoing work on optimizing the design and reconstruction algorithms for multi-view polarized light microscopy.


The SCIEN Colloquia are open to the public. The talks are also videotaped and posted the following week on talks.stanford.edu.

There will be a reception following the presentation.

Date and Time: 
Wednesday, November 1, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN colloquium: Light field Retargeting for Integral and Multi-panel Displays

Topic: 
Light field Retargeting for Integral and Multi-panel Displays
Abstract / Description: 

Light fields are a collection of rays emanating from a 3D scene in various directions that, when properly captured, provide a means of projecting depth and parallax cues on 3D displays. However, due to the limited aperture size and the constrained spatial-angular sampling of many light field capture systems (e.g. plenoptic cameras), the displayed light fields provide only a narrow viewing zone in which parallax views can be supported. In addition, the autostereoscopic display devices may have an unmatched spatio-angular resolution (e.g. integral displays) or a different architecture (e.g. multi-panel displays) compared with the capturing plenoptic system, which requires careful engineering between the capture and display stages. This talk presents an efficient light field retargeting pipeline for integral and multi-panel displays which provides controllable, parallax-enhanced content. This is accomplished by slicing the captured light fields according to their depth content, boosting the parallax, and merging these slices with data filling. In integral displays, the synthesized views are simply resampled and reordered to create elemental images that, beneath a lenslet array, can collectively create multi-view rendering. For multi-panel displays, additional processing steps are needed to achieve a seamless transition over different depth panels and viewing angles, where displayed views are synthesized and aligned dynamically according to the position of the viewer. The retargeting technique is simulated and verified experimentally on actual integral and multi-panel displays.

Date and Time: 
Wednesday, October 25, 2017 - 4:30pm
Venue: 
Packard 101

Holographic Near-Eye Displays for Virtual and Augmented Reality

Topic: 
Holographic Near-Eye Displays for Virtual and Augmented Reality
Abstract / Description: 

Today's near-eye displays are a compromise of field of view, form factor, resolution, supported depth cues, and other factors. There is no clear path to obtain eyeglasses-like displays that reproduce the full fidelity of human vision. Computational displays are a potential solution in which hardware complexity is traded for software complexity, where it is easier to meet many conflicting optical constraints. Among computational displays, digital holography is a particularly attractive solution that may scale to meet all the optical demands of an ideal near-eye display. I will present novel designs for virtual and augmented reality near-eye displays based on phase-only holographic projection. The approach is built on the principles of Fresnel holography and double phase amplitude encoding with additional hardware, phase correction factors, and spatial light modulator encodings to achieve full color, high contrast and low noise holograms with high resolution and true per-pixel focal control. A unified focus, aberration correction, and vision correction model, along with a user calibration process, accounts for any optical defects between the light source and retina. This optical correction ability not only fixes minor aberrations but also enables truly compact, eyeglasses-like displays with wide fields of view (80 degrees) that would be inaccessible through conventional means. All functionality is evaluated across a series of proof-of-concept hardware prototypes; I will discuss remaining challenges to incorporate all features into a single device and obtain practical displays.
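
To give a flavor of double phase amplitude encoding in its basic textbook form (the talk's specific SLM encodings and correction factors are not reproduced here): a complex field A*e^{i*phi} with A <= 1 equals the average of two phase-only terms e^{i(phi + theta)} and e^{i(phi - theta)} with theta = arccos(A), so the two phase maps can be interleaved in a checkerboard on a phase-only SLM.

    # Basic double phase amplitude coding: split the target complex field into two
    # phase-only fields and interleave them in a checkerboard on the phase-only SLM.
    import numpy as np

    def double_phase_encode(field):
        amp = np.abs(field) / (np.abs(field).max() + 1e-12)   # normalize amplitude to [0, 1]
        phase = np.angle(field)
        theta = np.arccos(amp)            # A*e^{i phi} = (e^{i(phi+theta)} + e^{i(phi-theta)}) / 2
        phi_a, phi_b = phase + theta, phase - theta
        checker = (np.indices(field.shape).sum(axis=0) % 2).astype(bool)
        return np.where(checker, phi_a, phi_b)                # phase map sent to the SLM

    target = 0.5 * np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, (4, 4)))
    print(double_phase_encode(target))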

Date and Time: 
Wednesday, October 18, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Focal Surface Displays

Topic: 
Focal Surface Displays
Abstract / Description: 

Conventional binocular head-mounted displays (HMDs) vary the stimulus to vergence with the information in the picture, while the stimulus to accommodation remains fixed at the apparent distance of the display, as created by the viewing optics. Sustained vergence-accommodation conflict (VAC) has been associated with visual discomfort, motivating numerous proposals for delivering near-correct accommodation cues. We introduce focal surface displays to meet this challenge, augmenting conventional HMDs with a phase-only spatial light modulator (SLM) placed between the display screen and viewing optics. This SLM acts as a dynamic freeform lens, shaping synthesized focal surfaces to conform to the virtual scene geometry. We introduce a framework to decompose target focal stacks and depth maps into one or more pairs of piecewise smooth focal surfaces and underlying display images. We build on recent developments in "optimized blending" to implement a multifocal display that allows the accurate depiction of occluding, semi-transparent, and reflective objects. Practical benefits over prior accommodation-supporting HMDs are demonstrated using a binocular focal surface display employing a liquid crystal on silicon (LCOS) phase SLM and an organic light-emitting diode (OLED) display.

Date and Time: 
Wednesday, October 11, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Computational Near-Eye Displays

Topic: 
Computational Near-Eye Displays
Abstract / Description: 

Virtual reality is a new medium that provides unprecedented user experiences. Eventually, VR/AR systems will redefine communication, entertainment, education, collaborative work, simulation, training, telesurgery, and basic vision research. In all of these applications, the primary interface between the user and the digital world is the near-eye display. While today's VR systems struggle to provide natural and comfortable viewing experiences, next-generation computational near-eye displays have the potential to provide visual experiences that are better than the real world. In this talk, we explore the frontiers of VR/AR systems engineering and discuss next-generation near-eye display technology, including gaze-contingent focus, light field displays, monovision, holographic near-eye displays, and accommodation-invariant near-eye displays.

Date and Time: 
Wednesday, October 4, 2017 - 4:30pm
Venue: 
Packard 101

Computational Imaging for Robotic Vision [SCIEN]

Topic: 
Computational Imaging for Robotic Vision
Abstract / Description: 

This talk argues for combining the fields of robotic vision and computational imaging. Both consider the joint design of hardware and algorithms, but with dramatically different approaches and results. Roboticists seldom design their own cameras, and computational imaging seldom considers performance in terms of autonomous decision-making. The union of these fields considers whole-system design from optics to decisions. This yields impactful sensors offering greater autonomy and robustness, especially in challenging imaging conditions. Motivating examples are drawn from autonomous ground and underwater robotics, and the talk concludes with recent advances in the design and evaluation of novel cameras for robotics applications.

Date and Time: 
Wednesday, June 7, 2017 - 4:30pm
Venue: 
Packard 101

Carl Zeiss Smart Glasses [SCIEN Talk]

Topic: 
Carl Zeiss Smart Glasses
Abstract / Description: 

Kai Stroeder, Managing Director at Carl Zeiss Smart Optics GmbH, will talk about the Carl Zeiss Smart Glasses.

This will be an informal session with an introduction and prototype demo of the Smart Glasses and an open discussion about future directions and applications.

Date and Time: 
Tuesday, May 30, 2017 - 10:00am
Venue: 
Lucas Center for Imaging, P083


SmartGrid

SmartGrid Seminar welcomes Saurabh Amin

Topic: 
TBA
Abstract / Description: 

The seminars are scheduled for 1:30 pm on the dates listed above. The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.


Yours sincerely,

Smart Grid Seminar Organization Team,

Ram Rajagopal, Assistant Professor, Civil & Environmental Engineering, and Electrical Engineering
Chin-Woo Tan, Director, Stanford Smart Grid Lab
Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering
Emre Kara, Associate Staff Scientist, SLAC National Accelerator Laboratory

Date and Time: 
Thursday, November 16, 2017 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Optimization, Inference and Learning for District-Energy Systems

Topic: 
Optimization, Inference and Learning for District-Energy Systems
Abstract / Description: 

We discuss how Optimization, Inference and Learning (OIL) methodology is expected to re-shape future demand-response technologies acting across interdependent energy infrastructures, i.e. power, natural gas and heating/cooling, at the district/metropolitan/distribution level. We describe a hierarchy of deterministic and stochastic planning and operational problems emerging in the context of physical flows over networks associated with the laws of electricity, gas-, fluid- and heat-mechanics. We proceed to illustrate the development and challenges of the physics-informed OIL methodology on examples of: a) Graphical Models approaches applied to a broad spectrum of energy flow problems, including online reconstruction of the grid(s) topology from measurements; b) direct and inverse dynamical problems for timely delivery of services in district heating/cooling systems; c) Ensemble Control of phase-space cycling energy loads via Markov Decision Processes (MDP) and related reinforcement learning approaches.

Date and Time: 
Thursday, November 2, 2017 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Emerging Technologies and Their Impact on the Grid

Topic: 
Emerging Technologies and Their Impact on the Grid
Abstract / Description: 

As rooftop solar, electric vehicles, and residential battery storage continue to become more and more commonplace, they can have significant impacts on the way the energy grid operates. By embracing these new technologies, PG&E is helping to create a vision for what a next-generation energy company will look like and seeking to answer key questions such as: Is energy storage changing the way in which utilities operate the grid? What is needed for new technologies, such as residential battery energy storage, to go mainstream? What are some of the key factors driving the inevitable transition from a one-way grid to a two-way grid?

This presentation will focus on the technology changes happening in the energy space as well as some of the technology advancements helping to reshape how the energy grid engages with these changes. It will cover these topics while exploring a case study of a recent pilot project in which PG&E, Tesla, GE, and Green Charge teamed up in San Jose to demonstrate how battery storage and rooftop solar connected to smart inverters can be used to support the electric grid during periods of high demand while providing participating residents and businesses with backup power and bill reduction. The project is a microcosm of what the grid will look like in the near future with the rapid adoption of distributed energy resources such as solar, battery storage, and EVs.


 

The seminars are scheduled for 1:30 pm on the dates listed above. The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1-unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in the course.

 

Yours sincerely,


Smart Grid Seminar Organization Team,

Ram Rajagopal, Assistant Professor, Civil & Environmental Engineering, and Electrical Engineering
Chin-Woo Tan, Director, Stanford Smart Grid Lab
Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering
Emre Kara, Associate Staff Scientist, SLAC National Accelerator Laboratory

Date and Time: 
Thursday, November 9, 2017 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Smart Distribution Systems Research at Future Renewable Electric Energy Delivery and Management Systems Center

Topic: 
Smart Distribution Systems Research at Future Renewable Electric Energy Delivery and Management Systems Center
Abstract / Description: 

This talk will first highlight the challenges associated with upgrading the current electric power distribution system infrastructure towards a smart distribution system that can accommodate high levels of distributed energy resources (DERs). Then, an overview of the research efforts that have been undertaken at the FREEDM center will be provided. The focus will be on the new monitoring and control methods needed for future smart distribution systems.

Date and Time: 
Thursday, October 12, 2017 - 1:30pm
Venue: 
Y2E2 111

Design, stability and control of ad-hoc microgrids [SmartGrid Seminar]

Topic: 
Design, stability and control of ad-hoc microgrids
Abstract / Description: 

Microgrids are a promising and viable solution for integrating distributed generation resources into future power systems. Similar to large-scale power systems, microgrids are prone to a range of instability mechanisms and are naturally fragile with respect to disturbances. However, existing planning and operation practices employed in large-scale transmission grids usually cannot be downscaled to small low-voltage microgrids. This talk will discuss the concept of ad-hoc microgrids that allow for arbitrary interconnection and switching with guaranteed stability. Although the problem of microgrid stability and control has received a lot of attention in recent years, the vast majority of existing works assume that the network configuration is given and fixed. Moreover, only a few works have accounted for electromagnetic delays, which will be shown to play a critical role in the context of stability.

The talk will introduce a new mathematical framework for the characterization and certification of stability in an ad-hoc setting and derive formal design constraints for both DC and AC networks. In the context of low-voltage DC networks, the corresponding derivations will employ the Brayton-Moser potential theory and result in simple conditions on load capacitances that guarantee both small-signal and transient stability. For AC microgrids, singular perturbation analysis will be used to derive simple relations for the droop coefficients of neighboring networks. The talk will conclude with a discussion of key open problems and challenges.
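
For readers less familiar with droop control, the droop coefficients referred to above enter through the standard primary droop relations used in AC microgrids. The generic textbook form is shown below; it is background only and not the specific stability conditions derived in the talk.

    \begin{align}
      \omega_i &= \omega^{*} - m_{P,i}\,\bigl(P_i - P_i^{*}\bigr), \\
      V_i      &= V^{*} - n_{Q,i}\,\bigl(Q_i - Q_i^{*}\bigr)
    \end{align}

Here m_{P,i} and n_{Q,i} are the frequency and voltage droop coefficients of inverter i, and the starred quantities are its nominal set-points.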

Date and Time: 
Wednesday, June 28, 2017 - 1:30pm
Venue: 
Y2E2 101

Research Perspectives on Smart Electric Distribution Systems [SLAC-Stanford SmartGrid]

Topic: 
Research Perspectives on Smart Electric Distribution Systems
Abstract / Description: 

Electric distribution systems are transforming from a traditionally passive element into an active component of the Smart Grid, with a hitherto unprecedented availability of new technologies, data, control, and options for end-users to participate in the daily operations of the grid. To realize the full potential of this transformation, there is a dire need for new architectures, markets, tools, techniques, and testbeds. In that regard, this talk presents a comprehensive approach to energy management in the emerging smart distribution system, based on cyber-physical-social systems, with new research results from on-going efforts. Topics of aggregators, incentive pricing, customer-side intelligence, and sustainability metrics, as well as aspects of current and future trends in this research, will be addressed.

Date and Time: 
Friday, June 16, 2017 - 2:00pm
Venue: 
Y2E2 101

The New Utility: Basic Enabler of Sustainable and Resilient Electric Energy Services? [SmartGrid Seminar]

Topic: 
The New Utility: Basic Enabler of Sustainable and Resilient Electric Energy Services?
Abstract / Description: 

In this talk, we consider problems concerning the role of future utilities. Innovative operations and financial mechanisms are needed to transform utilities into future enablers of sustainable and resilient electric energy services. We address both the technical and the financial issues on the road to modernizing today's utilities.

First, we illustrate real-world operating problems limiting the penetration and utilization of distributed energy resources (DERs) and show how these problems can be systematically solved using advanced automation and control. Automation represents a fundamental opportunity to overcome today's worst-case approach to electric energy services and offer more sustainable and resilient services. Mechanisms for better voltage support, power-electronics-based automation for stable operation, and fast storage systems for abnormal conditions must be introduced. Although utilities should consider this approach as an alternative to building strong grids, some of these solutions are too complex for end users. Fortunately, there exists a win-win range of technological solutions for both utilities and end users. This is particularly the case when solutions are needed to operate these grids during natural disasters and cyber-attacks.

Second, we discuss financial roadblocks to deploying these promising technological innovations. We assess electricity markets in terms of their ability to enable DER integration at value. We also show how DERs can participate in electricity markets for energy and regulation during normal operations, but stress that there are no good mechanisms to value automation and storage. Utilities should move forward as providers of last resort at value and be paid for taking on the financial risks. If end users require uninterrupted clean services, market mechanisms must be put in place to give utilities incentives to deploy effective technological solutions.


 

This quarter's speakers are renowned experts in power and energy systems, and we believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1-unit seminar course, CEE 272T/EE292T, for interested students; the course can be repeated for credit.

SmartGrid Seminar Organization Team:

  • Ram Rajagopal, Assistant Professor, Civil and Environmental Engineering
  • Chin-Woo Tan, Director, Stanford Smart Grid Lab
  • Wenyuan Tang, Postdoctoral Scholar, Civil and Environmental Engineering
  • Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering
  • Emre Kara, Associate Staff Scientist, SLAC
Date and Time: 
Thursday, May 18, 2017 - 1:30pm
Venue: 
Shriram 104

Electric Vehicles in the Smart Grid: Optimization & Control [SmartGrid Seminar]

Topic: 
Electric Vehicles in the Smart Grid: Optimization & Control
Abstract / Description: 

The rapid electrification of the transportation fleet imposes unprecedented demands on the electric grid. If controlled, however, these electric vehicles (EVs) provide an immense opportunity for smart grid services that enable renewable penetration and increased reliability. In this talk we discuss paradigms for aggregating and optimally controlling EV charging. Specifically, we discuss (i) aggregate modeling via partial differential equations, (ii) distributed optimization of large-scale EV fleets, (iii) and plug-and-play model predictive control in distribution networks. The talk closes with future perspectives for EVs in the Smart Grid.
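
As a loose, self-contained illustration of aggregate charging control, the sketch below performs greedy "valley filling": each vehicle's energy need is pushed into the time slots where the aggregate load is currently lowest. The base load profile, fleet energy needs, and power limits are made-up numbers, and this is not the speakers' PDE, distributed-optimization, or MPC formulation.

    # Greedy valley filling: schedule EV charging into the lowest-load time slots.
    # Base load profile, energy needs, and power limits are illustrative only.

    base_load = [60, 55, 50, 48, 52, 70, 90, 95]   # kW in each one-hour slot
    ev_energy_needs = [30, 45, 20]                  # kWh required by each EV
    max_charge_power = 10.0                         # kW per EV per slot
    slot_hours = 1.0

    def schedule(base, needs, p_max, dt):
        load = list(base)
        plans = []
        for need in needs:
            plan = [0.0] * len(load)
            remaining = need
            while remaining > 1e-9:
                # Charge in the slot with the lowest aggregate load that has headroom.
                open_slots = [t for t in range(len(load)) if plan[t] < p_max]
                t = min(open_slots, key=lambda k: load[k])
                p = min(p_max - plan[t], remaining / dt)
                plan[t] += p
                load[t] += p
                remaining -= p * dt
            plans.append(plan)
        return plans, load

    plans, total_load = schedule(base_load, ev_energy_needs, max_charge_power, slot_hours)
    print("per-EV charging plans (kW per slot):", plans)
    print("aggregate load after charging:", total_load)

A real controller would re-solve this kind of problem repeatedly as forecasts and plug-in events change, which is where the model predictive control mentioned above comes in.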


This quarter's speakers are renowned experts in power and energy systems, and we believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1-unit seminar course, CEE 272T/EE292T, for interested students; the course can be repeated for credit.

SmartGrid Seminar Organization Team:

  • Ram Rajagopal, Assistant Professor, Civil and Environmental Engineering
  • Chin-Woo Tan, Director, Stanford Smart Grid Lab
  • Wenyuan Tang, Postdoctoral Scholar, Civil and Environmental Engineering
  • Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering
  • Emre Kara, Associate Staff Scientist, SLAC
Date and Time: 
Thursday, May 11, 2017 - 1:30pm
Venue: 
Shriram 104

Toward Real-Time Monitoring, Look-Ahead Assessment and Forecasting Engine for Active Distribution Networks [SmartGrid Seminar]

Topic: 
Toward Real-Time Monitoring, Look-Ahead Assessment and Forecasting Engine for Active Distribution Networks
Abstract / Description: 

United Kingdom Power Networks (UKPN) provides power to a quarter of the UK's population via its electricity distribution networks in London and the east and southeast of England. This talk will present an advanced distribution analytics power network tool (ADAPT) co-developed by BSI and UKPN. ADAPT is an advanced platform for real-time monitoring, state estimation, contingency analysis, and corrective control. In addition, a look-ahead mode (30 minutes to 2 hours ahead) offers an assessment of the network that takes the uncertainties of renewable energy into account. ADAPT is complemented by energy forecasting tools which provide input for forecasting future system cases (e.g. 1 hour ahead to 24 hours ahead). ADAPT has several key features: state estimation, power flow, contingency analysis, an interactive single-line diagram (132 kV, 33 kV, and external connections), energy forecasting for load, solar, and wind, and corrective control for removing violations in the system. The ADAPT platform provides operators and engineers with real-time situational awareness and facilitates network reliability management as new distributed generation comes online. It also enhances the capability of outage planners to minimize constraints placed on the output from distributed generators during the summer maintenance season and during any major construction and reconfiguration activities. The look-ahead mode allows engineers to include the uncertainty of renewable output as well as energy forecasting to produce cases with new renewable contingencies and alternate dispatch cases. As a by-product of the tool's analysis capabilities, it can also identify root causes of system and component power losses as well as ways to minimize them. Some challenges and theoretical issues faced during the development of ADAPT will also be presented.
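
For context, the state estimation at the core of a platform like this is conventionally posed as a weighted least-squares problem; the generic formulation below is standard background and not necessarily ADAPT's specific implementation.

    \hat{x} \;=\; \arg\min_{x}\; \bigl(z - h(x)\bigr)^{\top} R^{-1} \bigl(z - h(x)\bigr)

Here z is the vector of measurements (flows, injections, voltage magnitudes), h(x) is the measurement model evaluated at the network state x (bus voltage magnitudes and angles), and R is the covariance of the measurement noise.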


This quarter's speakers are renowned experts in power and energy systems, and we believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1-unit seminar course, CEE 272T/EE292T, for interested students; the course can be repeated for credit.

SmartGrid Seminar Organization Team:

  • Ram Rajagopal, Assistant Professor, Civil and Environmental Engineering
  • Chin-Woo Tan, Director, Stanford Smart Grid Lab
  • Wenyuan Tang, Postdoctoral Scholar, Civil and Environmental Engineering
  • Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering
  • Emre Kara, Associate Staff Scientist, SLAC
Date and Time: 
Thursday, May 4, 2017 - 1:30pm
Venue: 
Shriram 104

Smart grids and energy systems [SmartGrid Seminar]

Topic: 
Smart grids and energy systems
Abstract / Description: 

As the share of renewable energy in electricity generation continues to grow, electric vehicles (EVs) have the potential to be used as virtual power plants (VPPs) that provide reliable back-up power. This could generate additional profits for EV carsharing rental firms. We design a computational control mechanism for VPPs that decides whether EVs should be charging, discharging, or rented out. We validate our computational design by developing a discrete-event simulation platform based on real-time GPS information from 1,100 electric cars from Daimler's carsharing service Car2Go in San Diego, Amsterdam, and Stuttgart. We compute trading prices (bids and asks) for participating in secondary control reserve markets and investigate what effect the density of charging infrastructure, battery technology, and rental demand for vehicles have on the payoff for the carsharing fleet. We show that VPPs can create sustainable revenue streams for electric vehicle carsharing fleets without compromising their rental business.

The theme of this quarter's Stanford SmartGrid seminar series is on smart grids and energy systems, scheduled to be held on Thursdays, with speakers from academic institutions and industry.


This quarter's speakers are renowned experts in power and energy systems, and we believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1-unit seminar course, CEE 272T/EE292T, for interested students; the course can be repeated for credit.

SmartGrid Seminar Organization Team:

  • Ram Rajagopal, Assistant Professor, Civil and Environmental Engineering
  • Chin-Woo Tan, Director, Stanford Smart Grid Lab
  • Wenyuan Tang, Postdoctoral Scholar, Civil and Environmental Engineering
  • Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering
  • Emre Kara, Associate Staff Scientist, SLAC
Date and Time: 
Thursday, April 27, 2017 - 1:30pm
Venue: 
Shriram 104


Stanford's NetSeminar

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is forced to choose between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.
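
The flavor of matching detection rules without seeing plaintext can be conveyed with a toy sketch based on keyed, deterministic token digests. This is emphatically not the BlindBox protocol or its encryption schemes (it omits rule handling, probable-cause decryption, and the actual security analysis), and the key handling and window size below are invented for illustration.

    # Toy illustration only: rule matching over opaque tokens via a keyed PRF (HMAC).
    # NOT the BlindBox protocol; it merely conveys matching without plaintext access.
    import hmac
    import hashlib

    TOKEN_LEN = 8  # bytes per sliding-window token (arbitrary illustrative choice)

    def tokenize(payload: bytes, key: bytes):
        """Slide a fixed-size window over the payload and emit keyed digests."""
        n = max(len(payload) - TOKEN_LEN + 1, 1)
        return {hmac.new(key, payload[i:i + TOKEN_LEN], hashlib.sha256).digest()
                for i in range(n)}

    def rule_digest(rule: bytes, key: bytes):
        return hmac.new(key, rule[:TOKEN_LEN].ljust(TOKEN_LEN, b"\0"),
                        hashlib.sha256).digest()

    # The endpoints share `key`; a middlebox holding only digests never sees plaintext.
    key = b"shared-session-key"
    tokens = tokenize(b"GET /exfiltrate?ssn=123-45-6789", key)
    rules = {b"exfiltra": "exfiltration attempt", b"malware!": "known malware"}

    for pattern, name in rules.items():
        if rule_digest(pattern, key) in tokens:
            print("ALERT:", name)

Deterministic digests like these leak token equality, one reason a toy of this kind falls far short of the guarantees the real system aims for.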

Date and Time: 
Wednesday, November 11, 2015 - 12:15pm to 1:30pm
Venue: 
Packard 202

NetSeminar

Topic: 
Precise localization and high throughput backscatter using WiFi signals
Abstract / Description: 

Indoor localization holds great promise to enable applications like location-based advertising, indoor navigation, inventory monitoring and management. SpotFi is an accurate indoor localization system that can be deployed on commodity WiFi infrastructure. SpotFi only uses information that is already exposed by WiFi chips and does not require any hardware or firmware changes, yet achieves the same accuracy as state-of-the-art localization systems.

We then talk about BackFi, a novel communication system that enables high-throughput, long-range communication between very low power backscatter IoT sensors and WiFi APs using ambient WiFi transmissions as the excitation signal. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 m and 1 Mbps at a range of 5 m. Such performance is one to three orders of magnitude better than the best known prior WiFi backscatter system.

Date and Time: 
Thursday, October 15, 2015 - 12:15pm to 1:30pm
Venue: 
Gates 104

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is forced to choose between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.

Date and Time: 
Wednesday, October 7, 2015 - 12:15pm to 1:30pm
Venue: 
AllenX Auditorium


Statistics and Probability Seminars

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Statistics Seminar

Topic: 
Brownian Regularity for the Airy Line Ensemble
Abstract / Description: 

The Airy line ensemble is a positive-integer-indexed, ordered system of continuous random curves on the real line whose finite-dimensional distributions are given by the multi-line Airy process. It is a natural object in the KPZ universality class: for example, its highest curve, the Airy2 process, describes, after the subtraction of a parabola, the limiting law of the scaled weight of a geodesic running from the origin to a variable point on an anti-diagonal line in such problems as Poissonian last passage percolation. The Airy line ensemble enjoys a simple and explicit spatial Markov property, the Brownian Gibbs property.


In this talk, I will discuss how this resampling property may be used to analyse the Airy line ensemble. Arising results include a close comparison between the ensemble's curves after affine shift and Brownian bridge. The Brownian Gibbs technique is also used to compute the value of a natural exponent describing the decay in probability for the existence of several near geodesics with common endpoints in Brownian last passage percolation, where the notion of "near" refers to a small deficit in scaled geodesic weight, with the parameter specifying this nearness tending to zero.

Date and Time: 
Monday, September 26, 2016 - 4:30pm
Venue: 
Sequoia Hall, room 200

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

Probability Seminar

Topic: 
Upper tails and independence polynomials in sparse random graphs
Abstract / Description: 

The upper tail problem in the Erdős–Rényi random graph G ∼ G(n, p) is to estimate the probability that the number of copies of a graph H in G exceeds its expectation by a factor 1 + δ. Already for the case of triangles, the order in the exponent of the tail probability was a long-standing open problem until fairly recently, when it was solved by Chatterjee (2012), and independently by DeMarco and Kahn (2012). Recently, Chatterjee and Dembo (2014) showed that in the sparse regime, the logarithm of the tail probability reduces to a natural variational problem on the space of weighted graphs. In this talk we derive the exact asymptotics of the tail probability by solving this variational problem for any fixed graph H. As it turns out, the leading order constant in the large deviation rate function is governed by the independence polynomial of H.
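
In symbols, writing X_H for the number of copies of H in G and taking δ > 0 fixed, the upper tail problem described above asks for the asymptotics of

    \Pr\bigl[\, X_H \ge (1+\delta)\,\mathbb{E}[X_H] \,\bigr]
    \quad \text{as } n \to \infty,\ p = p(n) \to 0,

with the sparse regime p → 0 being the setting in which the variational reduction of Chatterjee and Dembo applies.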


This is based on joint work with Shirshendu Ganguly, Eyal Lubetzky, and Yufei Zhao.


 

The Probability Seminars are held in Sequoia Hall, Room 200, at 4:30pm on Mondays. Refreshments are served at 4pm in the Lounge on the first floor.

Date and Time: 
Monday, January 11, 2016 - 4:30pm to 5:30pm
Venue: 
Sequoia Hall, Room 200

Probability Seminar

Topic: 
The Yang–Mills free energy
Abstract / Description: 

The construction of four-dimensional quantum Yang–Mills theories is a central open question in mathematical physics, famously posed as one of the millennium prize problems by the Clay Institute. While much progress has been made for the two dimensional problem, the techniques mostly break down in dimensions three and four. In this talk I will present a partial advance on this question, taking the program one step beyond the results proved in the Eighties.


 

The Probability Seminars are held in Sequoia Hall, Room 200, at 4:30pm on Mondays. Refreshments are served at 4pm in the Lounge on the first floor.

Date and Time: 
Monday, January 25, 2016 - 4:30pm to 5:30pm
Venue: 
Sequoia Hall, Room 200


SystemX

SystemX Seminar: Efficient battery usage for wireless IoT nodes

Topic: 
Efficient battery usage for wireless IoT nodes: Battery lifetime prediction
Abstract / Description: 

The prediction of the lifetime of battery-powered IoT devices and wireless sensor networks is almost exclusively based on the assumption that the total charge in a battery (i.e. its mAh rating) is consumed linearly in time. This is not the case in reality. Batteries are complex electro-chemical systems and their discharge behavior depends heavily on the timing and intensity of the applied load. There is very little empirical data, and there are few reliable models, for the kinds of batteries and loads typically used in IoT sensor nodes over very long operational times of 5-10 years.

We characterize inexpensive CR2032 lithium coin cells using carefully controlled synthetic loads and a wide range of IoT-typical load parameters. We observe that actual lifetimes can differ from predicted linear ones by almost a factor of three. Furthermore, loads with similar average currents can vary significantly in how much of the battery's capacity they can utilize. We conclude that short-duration loads generally fare better than sustained loads, which was not anticipated. We suggest a better prediction model that captures the non-linear, short-duration behavior and can be implemented on constrained IoT devices.
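
To make the contrast concrete, the naive linear estimate simply divides rated capacity by average current, while a Peukert-style correction is one textbook way to capture the dependence on load intensity. The sketch below uses illustrative numbers and a generic model; it is not the prediction model proposed in the talk.

    # Naive linear battery-lifetime estimate vs. a simple Peukert-style correction.
    # Capacity, rated current, and exponent are illustrative, not measured values.

    CAPACITY_MAH = 225.0      # nominal CR2032 capacity
    RATED_CURRENT_MA = 0.2    # current at which that capacity is specified
    PEUKERT_EXPONENT = 1.2    # >1 models reduced usable capacity under heavier loads

    def linear_lifetime_hours(avg_current_ma):
        """Assumes every mAh is available no matter how the load is drawn."""
        return CAPACITY_MAH / avg_current_ma

    def peukert_lifetime_hours(avg_current_ma):
        """Discounts usable capacity as the load rises above the rated current."""
        rated_hours = CAPACITY_MAH / RATED_CURRENT_MA
        return rated_hours * (RATED_CURRENT_MA / avg_current_ma) ** PEUKERT_EXPONENT

    for i_ma in (0.2, 1.0, 5.0):
        print(f"{i_ma:4.1f} mA: linear {linear_lifetime_hours(i_ma):9.0f} h, "
              f"Peukert-style {peukert_lifetime_hours(i_ma):9.0f} h")

Even this crude correction shows how strongly lifetime can deviate from the linear estimate at higher currents; capturing the pulsed, short-duration effects reported above requires a richer model than average current alone.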

Date and Time: 
Thursday, November 16, 2017 - 4:30pm
Venue: 
Huang 018

SystemX Seminar: Reconfigurable platforms, the thirst for bandwidth, and the future of computing

Topic: 
Reconfigurable platforms, the thirst for bandwidth, and the future of computing
Abstract / Description: 

Today, designers of specialized systems are increasingly tantalized by the enormous energy efficiency of custom silicon solutions...but are just as turned off by the spiraling costs of building and verifying those very chips. Enter the FPGA, a reconfigurable substrate that would be the absolutely perfect solution were it not for two pieces of conventional wisdom: they're impossible to build, and impossible to use. In this talk we will discuss the motivation and context of FPGAs, notably from the perspective of IO circuits, and what interesting problems and solutions they pose to designers and users.

Date and Time: 
Thursday, November 9, 2017 - 4:30pm
Venue: 
Huang 018

SystemX Seminar: Efficient Machine Learning Hardware Design: From High-Throughput Computer Vision to Ultra-Low-Power Biomedical Applications

Topic: 
Efficient Machine Learning Hardware Design: From High-Throughput Computer Vision to Ultra-Low-Power Biomedical Applications
Abstract / Description: 

In recent years, deep learning algorithms have become widespread across many practical applications. Algorithms trained by offline back-propagation using pre-defined datasets show impressive performance, but state-of-the-art algorithms are compute- and memory-intensive, making it difficult to perform low-power real-time classification, especially on area- and power-constrained embedded hardware platforms.

In this talk, we present our recent research on how hardware designs for machine learning algorithms are efficiently customized for two divergent applications: deep convolutional neural networks (VGG, ResNet) for high-throughput image/video applications (e.g. autonomous driving), and compressed neural networks for ECG-based ultra-low-power biomedical applications (e.g. wearable devices). We present FPGA and ASIC prototype designs that improve energy efficiency by optimizing computation, memory, and communication for representative neural networks.

Date and Time: 
Thursday, November 2, 2017 - 4:30pm
Venue: 
Huang 018

SystemX Alliance Fall Conference - Nov 14-16

Topic: 
SystemX Alliance Fall Conference
Abstract / Description: 

Tuesday, Nov 14 (Day 1) - starts with a SystemX overview given by Prof. Boris Murmann and Prof. Philip Wong, SystemX's faculty directors. This will be followed by overviews of our Focus Areas by their respective faculty leaders. Breakout sessions for the Bio Interfaces and Design Productivity Focus Areas will follow in the afternoon. The day concludes with an FMA student poster session / industry mixer and reception from 4:30-6:00 pm.

Wednesday, Nov 15 (Day 2) - begins with topical presentations on machine learning research at Stanford. The intention is to be well rounded and to present the challenges as well as the opportunities to be found in ML. The afternoon will hold three breakout sessions, for the Computation for Data Analytics, Heterogeneous Integration, and Photonic & Quantum Technologies Focus Areas, after which member company representatives will assemble for our Business Meeting. The day concludes with the SystemX Dinner, also at the Li Ka Shing Center.

Our featured speaker at dinner will be Dr. Wally Rhines, President & CEO of Mentor Graphics, a Siemens business.

Thursday, Nov 16 (Day 3) - will hold two morning breakout sessions, for the Energy/Power Management Systems and Internet of Everything Focus Areas. This final day will end at noon.

On Tuesday and Wednesday, the conference will be held at the Li Ka Shing Center. The Thursday morning sessions will be held in the Allen buildings. Please consult the preliminary agenda for more details.

 

 

PRELIMINARY SYSTEMX NOVEMBER CONFERENCE AGENDA

PARKING INSTRUCTIONS FOR GALVEZ LOT

PARKING AND CONFERENCE MAP

Date and Time: 
Tuesday, November 14, 2017 (All day) to Thursday, November 16, 2017 (All day)
Venue: 
Li Ka Shing Conference Center, Stanford University

The Vision and AI Startup Landscape: Dispatches from the Entrepreneurial Front Lines [SystemX Seminar]

Topic: 
The Vision and AI Startup Landscape: Dispatches from the Entrepreneurial Front Lines
Abstract / Description: 

No recent technology trend has attracted as much hype as deep neural networks. Thousands of startups have attached themselves to the AI wave, chasing opportunities for new cloud applications for business and social media, new system platforms like self-driving cars, and new silicon platforms. In this talk, we deconstruct the major technologies and end-system functions and examine what startups and applications are most likely to succeed, where entrepreneurship is thriving, and how deep learning innovations in real-time voice and vision, in convolutional, recurrent and generative network structures, and in new high-efficiency neural processor architectures are likely to shape future systems.

Date and Time: 
Thursday, September 28, 2017 - 4:30pm
Venue: 
Huang 018

Maximizing Server Efficiency: from microarchitecture to machine-learning accelerators [SystemX Seminar]

Topic: 
Maximizing Server Efficiency: from microarchitecture to machine-learning accelerators
Abstract / Description: 

Datacenter servers have emerged as the predominant computing platform, impacting nearly every aspect of our daily lives. The need to study, understand, and improve server systems has been the main driver of my research career. I will begin this talk with a brief overview of our work on the CloudSuite benchmarks and Dark Silicon in Servers, which identified a number of challenges and opportunities for server architecture. Then, I will describe how these results have led to our work on Scale-Out Processors and Temporal Memory Streaming, and dive deeper into my students' recent results on instruction sequence memorization and FPGA-based accelerators for machine learning.

Date and Time: 
Tuesday, October 3, 2017 - 4:30pm
Venue: 
Huang 018

Satisfiability Modulo Theories [SystemX Seminar]

Topic: 
Satisfiability Modulo Theories
Abstract / Description: 

I will give an introduction to Satisfiability Modulo Theories (SMT) solvers, including the DPLL(T) architecture that combines a Boolean satisfiability (SAT) solver with a theory solver, techniques for building individual theory solvers, and techniques for theory combination. I will also cover some applications of SMT solvers, including symbolic execution and equivalence checking.
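
As a concrete taste of what an SMT query looks like in practice, here is a minimal sketch using the Z3 solver's Python bindings (one widely used DPLL(T)-based solver); the constraints themselves are arbitrary and chosen only to mix a Boolean skeleton with linear integer arithmetic.

    # Minimal SMT example: Boolean structure combined with linear integer arithmetic.
    # Requires the z3-solver package (pip install z3-solver).
    from z3 import Ints, Bool, Solver, Or, Implies, sat

    x, y = Ints("x y")
    use_fast_path = Bool("use_fast_path")

    s = Solver()
    s.add(Or(x + y == 10, x - y == 4))   # theory atoms inside a Boolean skeleton
    s.add(Implies(use_fast_path, x > 7))
    s.add(use_fast_path, y >= 0)

    if s.check() == sat:
        print("satisfiable:", s.model())  # one assignment to x, y, use_fast_path
    else:
        print("unsatisfiable")

Under the hood, the SAT engine searches over the Boolean skeleton while the arithmetic theory solver checks whether each candidate assignment of theory atoms is consistent, which is the DPLL(T) interplay the talk covers.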

Date and Time: 
Thursday, October 5, 2017 - 4:30pm
Venue: 
Huang 018

Quantum Computing: State of the Art and Industrial Applications [SystemX Seminar]

Topic: 
Quantum Computing: State of the Art and Industrial Applications
Abstract / Description: 

Theoretically, quantum computation offers a significant asymptotic runtime advantage for specific calculations compared to their best classical counterparts. By leveraging entanglement in multipartite quantum systems as a computational resource, it is hoped that quantum computers will find widespread utility in industry. In this talk I will present an overview of quantum computation with a focus on the scientific and technological advancements that could make quantum computers a reality. Special attention will be given to superconducting qubits as a promising hardware platform that can be fabricated with standard CMOS processes.

Date and Time: 
Thursday, October 12, 2017 - 4:30pm
Venue: 
Huang 018

Ultrasound & Breast Cancer Detection [SystemX Seminar]

Topic: 
Ultrasound & Breast Cancer Detection
Abstract / Description: 

Each year it is estimated that over 250,000 women in the United States will be diagnosed with breast cancer and more than 40,000 will die. Breast cancer is the most commonly diagnosed cancer in women and is the second leading cause of cancer death among women. According to the World Health Organization, breast cancer is the most common cancer among women worldwide, claiming the lives of hundreds of thousands of women each year and affecting countries at all levels of modernization. Population-based screening has been successful in the early detection of some cancers, including cervical, colon, and breast cancer. However, the success of screening mammography in reducing mortality has been limited in women with mammographically dense tissue. Ultrasound has the potential to be an ideal screening tool because it is relatively inexpensive and requires no injected contrast or ionizing radiation. However, the relatively poor conspicuity of some cancers under hand scanning and the considerable radiologist time required limit its use. Automated breast ultrasound (ABUS) allows the radiologist to read the images quickly, and it separates image acquisition from interpretation, allowing for an efficient screening workflow. Several studies using automated breast ultrasound show that adding ABUS to mammography significantly increases the cancer detection rate compared to mammography alone, and triples the number of invasive cancers of 1 cm or less found in women with dense breasts.

Date and Time: 
Thursday, October 19, 2017 - 4:30pm
Venue: 
Huang 018

