Seminar / Colloquium

SystemX Seminar: Using the Stanford Driving Simulator for Human Machine Interaction Studies

Topic: 
Using the Stanford Driving Simulator for Human Machine Interaction Studies
Abstract / Description: 

The driving simulator at Stanford is used for human-in-the-loop, human-machine interaction (HMI) driving studies. Many of the studies focus on shared control between humans and autonomous systems. The simulator's toolset collects objective driving-behavior data directly from the simulator, along with data streams from eye trackers, cameras, and other physiological sensors that we use to understand human responses to myriad circumstances in the simulated environment. This presentation will describe the hardware and software behind the driving studies, outline what is possible, and survey some similar labs at other universities.

Date and Time: 
Thursday, January 25, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Programmable and Smart Silicon Interposers for 3D Chip Stacks

Topic: 
Programmable and Smart Silicon Interposers for 3D Chip Stacks
Abstract / Description: 

With increased demands for computation and the slowdown of CMOS scaling, alternative methods for further miniaturization of electronics are gaining momentum. Heterogeneous integration (HI) of chips from various manufacturing lines onto a silicon interposer is a newly recognized approach that has already been used in a number of high-performance applications. However, these 3D-IC chip stacks are time-consuming to develop and application-specific, resulting in prohibitive costs.

Similar cost issues have been addressed before, in the form of field-programmable gate arrays. In an analogous fashion, programmable silicon interposers open new possibilities for design reuse of silicon across multiple applications, yielding cost savings and time-to-market advantages. Programmable reuse of silicon interposers also enables just-in-time manufacturing that simultaneously produces several smaller lots with a high mix of components.

In addition, programmable silicon interposers for 3D stacking allow system-level control of functions that can be embedded in the interposer, such as power management, built-in self-test, and manufacturing-defect repair. Power-management techniques previously applied to single-chip solutions can be re-architected to achieve higher system-level efficiency in these 3D chip stacks. We will demonstrate one such system built with a smart, programmable silicon interposer from zGlue, the first commercial implementation of a product in this category. This technology will help proliferate internet-of-things (IoT) devices, give product designers a broader array of choices, and accelerate the spread of ultra-small-form-factor electronics in the healthcare, industrial, and consumer spaces.

Date and Time: 
Thursday, January 18, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Smart Internet Connections: Your internet connection’s use of artificial intelligence and machine learning

Topic: 
Smart Internet Connections: Your internet connection’s use of artificial intelligence and machine learning
Abstract / Description: 

The next generation of internet communication has many uses for machine learning. This talk will review some of the applications for, and types of, 5th-generation converged software-defined communication networks, including the important access links to all users/consumers and devices/things, upon which humanity increasingly and crucially depends. The general problem well addressed by communications theory is the inference, from a large set of data (sometimes called a "channel" output), of a desired/intended conclusion (sometimes called the "channel input" or the data "transmitted"); this is also known as "decoding." Many learning systems, such as search engines, disease detection, and facial recognition, are forms of this "decoding." Many "machine learning" methods can be recast in this more general setting and then reused to further advance the art of next-generation communication. The talk will encourage further investigation into both the "learning" and the advancement of the future networks that will increasingly connect us all. Some of these topics will be examined further in EE392AA (spring quarter), which can be used for the EE MS Communications depth sequence.
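
The "decoding" framing above can be made concrete with a toy example. The sketch below is not from the talk; it is a minimal Python/NumPy illustration of maximum-likelihood inference of a channel input (a codeword) from a noisy channel output, the same inference problem the abstract generalizes to learning systems. The codebook and noise level are invented for illustration.

```python
import numpy as np

# A hypothetical codebook: each row is a codeword the transmitter may send.
codebook = np.array([
    [ 1.0,  1.0,  1.0],
    [ 1.0, -1.0, -1.0],
    [-1.0,  1.0, -1.0],
    [-1.0, -1.0,  1.0],
])

def ml_decode(received, codebook):
    """Return the index of the codeword closest to the received vector.

    Under additive white Gaussian noise, minimizing Euclidean distance
    is exactly maximum-likelihood inference of the channel input -- the
    "decoding" step that the abstract generalizes to learning systems.
    """
    dists = np.linalg.norm(codebook - received, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
sent = 2                                   # transmit codeword 2
received = codebook[sent] + 0.3 * rng.standard_normal(3)
print(ml_decode(received, codebook))       # recovers the input with high probability
```

Search, detection, and classification systems fit the same template: the "channel output" is the observed data, and "decoding" selects the hypothesis that best explains it.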

Date and Time: 
Thursday, January 11, 2018 - 4:30pm
Venue: 
Y2E2 111

Applied Physics/Physics Colloquium: Quantum vs. Classical optimization: A Status Update on the Arms Race

Topic: 
Quantum vs. Classical optimization: A Status Update on the Arms Race
Abstract / Description: 

Can quantum computers meet the tantalizing promise of solving complex calculations - such as optimization problems or database queries - faster than classical computers based on transistor technologies? Although IBM recently opened up their five-qubit programmable quantum computer to the public to tinker with, the holy grail of a useful large-scale programmable universal quantum computer is decades away. While working mid-scale programmable special-purpose quantum optimization machines exist, a conclusive detection of quantum speedup remains controversial despite recent promising results. In this talk, a head-to-head comparison between quantum and classical optimization approaches is given. Current quantum annealing technologies must outperform classical devices to claim the crown in the race for quantum speedup.

Date and Time: 
Tuesday, January 9, 2018 - 4:15pm
Venue: 
Hewlett 201

SystemX Seminar: Materials and device innovations in the scaling and post-scaling eras

Topic: 
Materials and device innovations in the scaling and post-scaling eras
Abstract / Description: 

With creative innovations and significant technical effort, semiconductor technology scaling now continues deeper into nanometer dimensions. The ultimate lateral dimensions, or the ultimate number of layers in 3D stacking, may be under debate, but not the fact that there are fundamental and practical (technical and economic) limits to exponential improvement. The industry is already transitioning toward an era in which innovations confer advantages for just one or two generations. This talk presents an overview of scaling, with examples of how innovations in materials, devices, and design-technology co-optimization have enabled scaling and continue to do so toward the 5 nm and 3 nm nodes. We also discuss some fundamental limits of pitch scaling, as well as perspectives on beyond-pitch-scaling approaches, 3D stacking, and heterogeneous and system-level integration that will allow system capabilities to keep improving. Finally, we consider how emerging applications such as neuromorphic computing shape and drive hardware requirements and development, opening new growth opportunities.

Date and Time: 
Thursday, December 7, 2017 - 4:30pm
Venue: 
Huang 018

IEEE-EDS Distinguished Lecture: 2D Electronics – Opportunities and Challenges

Topic: 
2D Electronics – Opportunities and Challenges
Abstract / Description: 

During the past decade, 2D (two-dimensional) materials have attracted enormous attention from various scientific communities ranging from chemists and physicists to material scientists and device engineers. The rise of the 2D materials began in 2004 with the work on graphene done at Manchester University and Georgia Tech. In particular, the observed high carrier mobilities raised early expectations that graphene could be a perfect electronic material. It soon became clear, however, that due to its zero bandgap graphene is not suitable for most electronic devices, in particular transistors. On the other hand, researchers have extended their work to 2D materials beyond graphene, and the number of 2D materials under investigation is continuously rising. Many of them possess sizeable bandgaps and are therefore considered useful for transistors. Indeed, progress in the field of 2D transistors has been rapid, and experimental MOSFETs using semiconducting 2D channel materials have been reported by many groups. A recent achievement was the demonstration of a well-performing 1-nm-gate MoS2 MOSFET in 2016. Nevertheless, in spite of the progress in the field, the real prospects of 2D materials for future electronics remain controversial.

In the present lecture, the most important classes of 2D materials are introduced and the potential of 2D transistors is assessed as realistically as possible. To this end, two material properties – bandgap and mobility – are examined in detail and the mobility-bandgap tradeoff is discussed. The state of the art of 2D transistors is reviewed by summarizing relevant results of leading groups in the field, presenting examples of the lecturer's own work on 2D electronics, and comparing the performance of 2D transistors to that of competing conventional transistors. Based on these considerations, a balanced view of both the pros and cons of 2D transistors is provided and their potential in both the More Moore (digital CMOS) and the More Than Moore domains of semiconductor electronics is discussed. It is shown that due to the rather conservative CMOS scaling scenario of the 2015 ITRS (compared to the more aggressive scenarios of the previous ITRS editions) it will be difficult for 2D materials to make inroads into mainstream CMOS. However, due to their specific properties (for example, 2D materials are bendable and stretchable) they may enable entirely new applications in the More Than Moore domain.

Date and Time: 
Friday, December 8, 2017 - 4:00pm
Venue: 
Packard 101

SCIEN & EE 292E: Compressed Ultrafast Photography and Microscopy: Redefining the Limit of Passive Ultrafast Imaging

Topic: 
Compressed Ultrafast Photography and Microscopy: Redefining the Limit of Passive Ultrafast Imaging
Abstract / Description: 

High-speed imaging is an indispensable technology for blur-free observation of fast transient dynamics in virtually all areas, including science, industry, defense, energy, and medicine. Unfortunately, the frame rates of conventional cameras are significantly constrained by their data-transfer bandwidth and onboard storage. We demonstrate a two-dimensional dynamic imaging technique, compressed ultrafast photography (CUP), which can capture non-repetitive time-evolving events at up to 100 billion fps. Compared with existing ultrafast imaging techniques, CUP has the prominent advantage of measuring an x, y, t (x, y, spatial coordinates; t, time) scene with a single camera snapshot, thereby allowing observation of transient events occurring on time scales down to tens of picoseconds. Thanks to the CUP technology, humans can, for the first time, see light pulses on the fly. Because this technology advances the imaging frame rate by orders of magnitude, it opens a new imaging regime and new scientific vistas.

In this talk, I will discuss our recent effort to develop a second-generation CUP system and demonstrate its applications at scales from macroscopic to microscopic. For the first time, we imaged photonic Mach cones and captured a "sonic boom" of light in action. Moreover, by adapting CUP for microscopy, we enabled two-dimensional fluorescence lifetime imaging at an unprecedented speed. An advantage of CUP recording is that even visually simple systems can be scientifically interesting when captured at such high speed. Given CUP's capability, we expect it to find widespread applications in both fundamental and applied sciences, including biomedical research.
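
CUP's single-snapshot recovery of an (x, y, t) scene rests on compressed sensing: far fewer measurements than unknowns, made invertible by a sparsity prior. The toy below is not the actual CUP reconstruction (whose forward model involves spatial encoding and temporal shearing); it is a minimal sketch of the same principle, recovering a sparse signal from random linear measurements with ISTA (iterative soft-thresholding). All sizes, the sensing matrix, and the regularization weight are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, k = 200, 60, 5            # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # one "compressed snapshot"

# ISTA: iterate gradient steps on ||Ax - y||^2 with soft-thresholding,
# i.e. minimize ||Ax - y||^2 + lam * ||x||_1 (the sparsity prior).
lam, step = 0.02, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    g = x - step * A.T @ (A @ x - y)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(float(rel_err), 3))   # small: 60 measurements recover 200 unknowns
```

The point mirrored from CUP: because the unknown is sparse in a suitable basis, a single heavily underdetermined measurement still determines it.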

Date and Time: 
Wednesday, December 6, 2017 - 4:30pm
Venue: 
Packard 101

Special Seminar: Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design

Topic: 
Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design
Abstract / Description: 

Cyber-physical systems (CPS) are computational systems tightly integrated with physical processes. Examples include modern automobiles, fly-by-wire aircraft, software-controlled medical devices, robots, and many more. In recent times, these systems have exploded in complexity due to the growing amount of software and networking integrated into physical environments via real-time control loops, as well as the growing use of machine learning and artificial intelligence (AI) techniques. At the same time, these systems must be designed with strong verifiable guarantees.

In this talk, I will describe our research explorations at the intersection of machine learning and formal methods that address some of the challenges in CPS design. First, I will describe how machine learning techniques can be blended with formal methods to address challenges in specification, design, and verification of industrial CPS. In particular, I will discuss the use of formal inductive synthesis (algorithmic synthesis from examples with formal guarantees) for CPS design. Next, I will discuss how formal methods can be used to improve the level of assurance in systems that rely heavily on machine learning, such as autonomous vehicles using deep learning for perception. Both theory and industrial case studies will be discussed, with a special focus on the automotive domain. I will conclude with a brief discussion of the major remaining challenges posed by the use of machine learning and AI in CPS.

Date and Time: 
Monday, December 4, 2017 - 4:00pm
Venue: 
Gates 463A

Applied Physics / Physics Colloquium

Applied Physics/Physics Colloquium: Searches for new Physics with Nuclear Spin Precession

Topic: 
Searches for new Physics with Nuclear Spin Precession
Abstract / Description: 

Prof. Mike Romalis of Princeton University will give the Applied Physics/Physics colloquium on Jan. 23, 2018, entitled "Searches for new Physics with Nuclear Spin Precession."

Date and Time: 
Tuesday, January 23, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Symmetries of Time

Topic: 
Symmetries of Time
Abstract / Description: 

Time is a basic element in our models of the physical world, as is symmetry. Several issues at the frontiers of modern physics concern the interplay of those concepts. Elaborating on this theme, I will survey the current state of axions and time crystals, including very recent work.

Date and Time: 
Tuesday, January 16, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Probing Cosmology with the Dark Energy Survey

Topic: 
Probing Cosmology with the Dark Energy Survey
Abstract / Description: 

I will overview the Dark Energy Survey (DES) project and highlight its early science results, focusing on the recently released cosmology results from the first year of the survey. The DES collaboration built the 570-megapixel Dark Energy Camera for the Blanco 4-meter telescope at NOAO's Cerro Tololo Inter-American Observatory in Chile to carry out a deep, wide-area, multi-band optical survey of several hundred million galaxies and a time-domain survey to discover several thousand supernovae. The survey started in Aug. 2013 and is now in its fifth observing season. DES was designed to address two questions: Why is the expansion of the Universe speeding up? Is cosmic acceleration due to dark energy, or does it require a modification of General Relativity? DES is addressing these questions by measuring the history of cosmic expansion and the growth of structure through multiple complementary techniques: galaxy clusters, the large-scale galaxy distribution, gravitational lensing, and supernovae, as well as through cross-correlation with other data sets. I will also discuss how the DES data are being used to make a variety of other astronomical discoveries, from the outer Solar System to ultra-faint dwarf galaxies to the kilonova counterpart of a binary neutron star gravitational-wave source.

Date and Time: 
Tuesday, November 28, 2017 - 4:15pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Recent results on Gravitational Waves from LIGO and Virgo

Topic: 
Recent results on Gravitational Waves from LIGO and Virgo
Abstract / Description: 

Over the last two years, the Advanced LIGO and Advanced Virgo detectors have observed a handful of gravitational-wave events from the inspiral and merger of binary black holes in distant galaxies. These events have yielded the first measurements of the fundamental properties of gravitational waves, tests of General Relativity in the strong-field, highly dynamical regime, and constraints on the population, masses, and spins of black holes in the universe. Most recently, signals were detected from the inspiral of a binary neutron star system, GW170817. That event is thus far the loudest (highest signal-to-noise ratio) and closest gravitational-wave event observed. A gamma-ray burst detected 1.7 seconds after merger confirms the long-held hypothesis that BNS mergers are associated with short gamma-ray bursts. The LIGO and Virgo data produced a three-dimensional sky localization of the source, enabling a successful electromagnetic follow-up campaign that identified an associated electromagnetic transient in a galaxy ~40 Mpc from Earth. A multi-messenger view of GW170817 from ~100 seconds before merger through weeks afterward provides evidence of a "kilonova" and of the production of heavy elements. For the first time, using gravitational waves, we are able to constrain the equation of state of dense neutron stars and infer the rate of local binary neutron star mergers. When we include EM observations, we are able to directly measure the speed of gravitational waves, constrain their polarization content, independently measure the Hubble constant, probe the validity of the equivalence principle, and gain new insight into the astrophysical engine driving these events.

Date and Time: 
Tuesday, November 14, 2017 - 4:30pm
Venue: 
Hewlett 201

Making a Physicist with Jazz

Topic: 
Making a Physicist with Jazz
Abstract / Description: 

In 2005, theoretical physicist S. James Gates related a story about Abdus Salam where Salam explained that once Black people entered physics in large numbers, they would create something like jazz. Is this an essentialization of Black people or getting at the essence of how Black people have responded to the wake of slavery and colonialism? Using texts from a diverse set of disciplines -- English, ethnomusicology, and science, technology, and society studies -- I will reflect on possible answers to this question, what they tell us about how physicists are made, and whether this framework offers lessons for how physicists should be made.

Open to all interested. Limited seating. To attend this talk and discussion, please register online via this link: https://stanforduniversity.qualtrics.com/jfe/form/SV_2to6cpsyuDlqFo1

Date and Time: 
Monday, October 2, 2017 - 3:30pm
Venue: 
Black Community Services Center, Community Room

Applied Physics/Physics Colloquium: Physics in the Future

Topic: 
Physics in the Future
Abstract / Description: 

The greatest American philosopher of the 20th century and Hall of Fame Yankee baseball catcher, Yogi Berra, wisely noted, "It's hard to make predictions, especially about the future." Yogi Berra also warned, "If you don't know where you are going, you might wind up someplace else."

What excites physicists more than anything else are the unexpected discoveries that will open new horizons of the endless frontiers of science. The talk will sketch a few selected areas and offer a personal view of how physicists can position themselves to become lucky enough to stumble onto discoveries that lead us "someplace else."

APPLIED PHYSICS/PHYSICS COLLOQUIUM is held Tuesdays at 4:30 pm in the William R. Hewlett Teaching Center, room 200. Refreshments are served in the lobby of Varian Physics at 4:15 pm.

Autumn 2017/2018, Committee: Roger Blandford (Chair), Aharon Kapitulnik, Bob Laughlin, Leonardo Senatore

Date and Time: 
Tuesday, November 7, 2017 - 4:15pm
Venue: 
Hewlett 201

CS300 Seminar

SpaceX's Journey on the Road to Mars

Topic: 
SpaceX's Journey on the Road to Mars
Abstract / Description: 

SSI will be hosting Gwynne Shotwell, President and COO of SpaceX, to discuss SpaceX's journey on the road to Mars. The event will be on Wednesday, October 11, from 7 to 8 pm in Dinkelspiel Auditorium. After the talk, there will be a Q&A session hosted by Steve Jurvetson of DFJ Venture Capital.

Claim your tickets now on Eventbrite.

Date and Time: 
Wednesday, October 11, 2017 - 7:00pm
Venue: 
Dinkelspiel Auditorium

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Subhasish Mitra

5:15-6:00, Silvio Savarese

Date and Time: 
Wednesday, December 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Phil Levis

5:15-6:00, Ron Fedkiw

Date and Time: 
Monday, December 5, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Dan Boneh

5:15-6:00, Aaron Sidford

Date and Time: 
Wednesday, November 30, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, John Mitchell

5:15-6:00, James Zou

Date and Time: 
Monday, November 28, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Emma Brunskill

5:15-6:00, Doug James

Date and Time: 
Wednesday, November 16, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, James Landay

5:15-6:00, Dan Jurafsky

Date and Time: 
Monday, November 14, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Ken Salisbury

5:15-6:00, Noah Goodman

Date and Time: 
Wednesday, November 9, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to present their research, so that new CS PhD students can learn about the professors and their work before permanently aligning with a research group.

4:30-5:15, Kunle Olukotun

5:15-6:00, Jure Leskovec

Date and Time: 
Monday, November 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

EE380 Computer Systems Colloquium

EE380 Computer Systems Colloquium: Combining Physical and Statistical Models in Order to Narrow Uncertainty in Projections of Global Warming

Topic: 
Combining Physical and Statistical Models in Order to Narrow Uncertainty in Projections of Global Warming
Abstract / Description: 

A key question in climate science is: How much global warming should we expect for a given increase in the atmospheric concentration of greenhouse gases like carbon dioxide? One strategy for addressing this question is to run physical models of the global climate system, but these models vary in their estimates of future warming by about a factor of two. Our research has attempted to narrow this range of uncertainty around model-projected future warming and to assess whether the upper or lower end of the model range is more likely. We showed that there are strong statistical relationships between how models simulate fundamental features of the Earth's energy budget over the recent past and how much warming they simulate in the future. Importantly, we find that the models that best match observations over the recent past tend to simulate more warming in the future than the average model. Thus, statistically combining information from physical models and observations tells us to expect more warming (with smaller uncertainty ranges) than we would if we looked at physical models in isolation and ignored observations.
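
The across-model statistical relationship described above can be sketched in a few lines. Every number below is hypothetical, invented purely for illustration; the real analysis uses simulated energy-budget features and warming projections from climate-model ensembles.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: each "model" i has an observable feature of its
# simulated recent-past energy budget (x[i]) and a projected future
# warming (y[i], in degrees C).
n_models = 20
x = rng.normal(0.0, 1.0, n_models)
y = 3.0 + 0.8 * x + rng.normal(0.0, 0.3, n_models)

# Across-model regression: the observable statistically predicts warming.
slope, intercept = np.polyfit(x, y, 1)

# A real-world measurement of the same observable then constrains the
# projection to the part of model space consistent with observations.
x_obs = 0.9                      # hypothetical observed value
constrained = slope * x_obs + intercept

print(round(float(constrained), 2))
```

With an observed value above the model mean and a positive regression slope, the observationally constrained projection exceeds the raw ensemble average, which is the structure of the result described in the abstract.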

Date and Time: 
Wednesday, January 17, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing

Topic: 
Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing
Abstract / Description: 

This presentation details a fully programmable, associative, content-based, compute-in-memory architecture that changes the concept of computing from serial data processing, where data is moved back and forth between the processor and memory, to massively parallel data processing, compute, and search directly in place.

This associative processing unit (APU) can be used in many machine learning applications: one-shot/few-shot learning, convolutional neural networks, recommender systems, and data mining tasks such as prediction, classification, and clustering.

Additionally, the architecture is well-suited to processing large corpora and can be applied to Question Answering (QA) and various NLP tasks such as language translation. The architecture can embed long documents, compute any type of memory network in place, and answer complex questions in O(1) time.

Date and Time: 
Wednesday, November 8, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Petascale Deep Learning on a Single Chip

Topic: 
Petascale Deep Learning on a Single Chip
Abstract / Description: 

Vathys.ai is a deep learning startup that has been developing a new deep learning processor architecture with the goal of massively improved energy efficiency and performance. The architecture is also designed to be highly scalable and amenable to next-generation DL models. Although deep learning processors appear to be the "hot topic" of the day in computer architecture, the majority (we argue all) of such designs incorrectly identify the bottleneck as computation and thus neglect the true culprits of inefficiency: data movement and miscellaneous control-flow processor overheads. This talk will cover many of the architectural strategies that the Vathys processor uses to reduce data movement and improve efficiency. The talk will also cover some circuit-level innovations and will include a quantitative and qualitative comparison to many DL processor designs, including the Google TPU, presenting numerical evidence for massive improvements over the TPU and other such processors.

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, December 6, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Deep Learning in Speech Recognition

Topic: 
Deep Learning in Speech Recognition
Abstract / Description: 

While neural networks had been used in speech recognition in the early 1990s, they did not outperform traditional machine learning approaches until 2010, when Alex's team members at Microsoft Research demonstrated the superiority of Deep Neural Networks (DNNs) for large-vocabulary speech recognition systems. The speech community rapidly adopted deep learning, followed by the image processing community and many other disciplines. In this talk I will give an introduction to speech recognition, go over the fundamentals of deep learning, explain what it took for the speech recognition field to adopt deep learning, and describe how that has contributed to popularizing personal assistants like Siri.


Date and Time: 
Wednesday, November 29, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Partisan Gerrymandering and the Supreme Court: The Role of Social Science

Topic: 
Partisan Gerrymandering and the Supreme Court: The Role of Social Science
Abstract / Description: 

The U.S. Supreme Court is considering a case this term, Gill v Whitford, that might lead to the first constitutional constraints on partisanship in redistricting. Eric McGhee is the inventor of the efficiency gap, a measure of gerrymandering that the court is considering in the case. He will describe the case's legal background, discuss some of the metrics that have been proposed for measuring gerrymandering, and reflect on the role of social science in the litigation.


Date and Time: 
Wednesday, November 1, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computing with High-Dimensional Vectors

Topic: 
Computing with High-Dimensional Vectors
Abstract / Description: 

Computing with high-dimensional vectors complements traditional computing and occupies the gap between symbolic AI and artificial neural nets. Traditional computing treats bits, numbers, and memory pointers as basic objects on which all else is built. I will consider the possibility of computing with high-dimensional vectors as basic objects, for example with 10,000-bit words, when no individual bit nor subset of bits has a meaning of its own--when any piece of information encoded into a vector is distributed over all components. Thus a traditional data record subdivided into fields is encoded as a high-dimensional vector with the fields superposed.

Computing power arises from the operations on the basic objects--from what is called their algebra. Operations on bits form Boolean algebra, and the addition and multiplication of numbers form an algebraic structure called a "field." Two operations on high-dimensional vectors correspond to the addition and multiplication of numbers. With permutation of coordinates as the third operation, we end up with a system of computing that in some ways is richer and more powerful than arithmetic, and also different from linear algebra. Computing of this kind was anticipated by von Neumann, described by Plate, and has proven to be possible in high-dimensional spaces of different kinds.

The three operations, when applied to orthogonal or nearly orthogonal vectors, allow us to encode, decode, and manipulate sets, sequences, lists, and arbitrary data structures. One reason for high dimensionality is that it provides a nearly endless supply of nearly orthogonal vectors. Making them is simple because a randomly generated vector is approximately orthogonal to any vector encountered so far. The architecture includes a memory which, when cued with a high-dimensional vector, finds its nearest neighbors among the stored vectors. A neural-net associative memory is an example of such a memory.

Circuits for computing in high-D are thousands of bits wide but the components need not be ultra-reliable nor fast. Thus the architecture is a good match to emerging nanotechnology, with applications in many areas of machine learning. I will demonstrate high-dimensional computing with a simple algorithm for identifying languages.
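The three operations are concrete enough to sketch in a few lines. The following toy illustration (my own, not the speaker's code) uses one common instantiation: 10,000-bit binary vectors with componentwise XOR as the multiplication, bitwise majority as the addition, and cyclic shift as the permutation:

```python
import random

DIM = 10_000  # high-dimensional binary vectors

def rand_hv():
    # a freshly drawn random vector is approximately orthogonal to all previous ones
    return [random.getrandbits(1) for _ in range(DIM)]

def bind(a, b):
    # componentwise XOR plays the role of multiplication (and is self-inverse)
    return [x ^ y for x, y in zip(a, b)]

def bundle(vs):
    # bitwise majority plays the role of addition (superposition)
    return [1 if sum(bits) * 2 > len(vs) else 0 for bits in zip(*vs)]

def permute(a, shift=1):
    # cyclic shift of coordinates is the third operation
    return a[-shift:] + a[:-shift]

def hamming(a, b):
    # normalized Hamming distance: 0.5 means "unrelated"
    return sum(x != y for x, y in zip(a, b)) / DIM

a, b, c = rand_hv(), rand_hv(), rand_hv()
print(hamming(a, b))                  # concentrates near 0.5 at this dimension
print(hamming(bundle([a, b, c]), a))  # near 0.25: the bundle resembles each input
assert bind(a, bind(a, b)) == b       # binding is invertible
```

Because XOR binding is invertible and bundles stay close to their inputs, field/value pairs can be superposed into one vector and later recovered by unbinding, which is the record-encoding idea described above.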

Date and Time: 
Wednesday, October 25, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Generalized Reversible Computing and the Unconventional Computing Landscape

Topic: 
Generalized Reversible Computing and the Unconventional Computing Landscape
Abstract / Description: 

With the end of transistor scaling now in sight, the raw energy efficiency (and thus, practical performance) of conventional digital computing is expected to soon plateau. Thus, there is presently a growing interest in exploring various unconventional types of computing that may have the potential to take us beyond the limits of conventional CMOS technology. In this talk, I survey a range of unconventional computing approaches, with an emphasis on reversible computing (defined in an appropriately generalized way). Fundamental physical arguments indicate that reversible computing is the only approach that can potentially increase the energy efficiency and affordable performance of arbitrary computations by unboundedly large factors as the technology is further developed.

Date and Time: 
Wednesday, October 18, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: scratchwork, a tool for developing and communicating technical ideas

Topic: 
scratchwork: a tool for developing and communicating technical ideas
Abstract / Description: 

Digital tablets are no longer new or even expensive, but most of us still struggle to input our technical ideas (such as equations and diagrams) into a computer as easily as we write them on paper. I will discuss relevant existing technology and present scratchworktool.com, a tool designed to simplify the digital writing process even without a tablet. I will also cover some of the important decisions and mistakes I made, especially as I started building it. I hope these lessons will be helpful for anyone who is (or may eventually be) interested in developing similarly sophisticated products to solve a consumer-facing problem.

Date and Time: 
Wednesday, October 11, 2017 - 4:30pm
Venue: 
Gates B03

NVIDIA GPU Computing: A Journey from PC Gaming to Deep Learning [EE380 Computer Systems Colloquium]

Topic: 
NVIDIA GPU Computing: A Journey from PC Gaming to Deep Learning
Abstract / Description: 

Deep Learning and GPU Computing are now being deployed across many industries, helping to solve big data problems ranging from computer vision and natural language processing to self-driving cars. At the heart of these solutions is the NVIDIA GPU, providing the computing power both to train these massive deep neural networks and to run inference on those networks efficiently. But how did the GPU get to this point?

In this talk I will present a personal perspective and some lessons learned during the GPU's journey and evolution from being the heart of the PC gaming platform, to today also powering the world's largest datacenters and supercomputers.

Date and Time: 
Wednesday, October 4, 2017 - 4:30pm
Venue: 
Gates B03


Ginzton Lab

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.


Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

Ginzton Lab / AMO Seminar

Topic: 
2D/3D Photonic Integration Technologies for Arbitrary Optical Waveform Generation in Temporal, Spectral, and Spatial Domains
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab and AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 29, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
Silicon-Plus Photonics for Tomorrow's (Astronomically) Large-Scale Networks
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab and AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 22, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
'Supermode-Polariton Condensation in a Multimode Cavity QED-BEC System' and 'Probing Ultrafast Electron Dynamics in Atoms and Molecules'
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab and AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, January 4, 2016 - 4:15pm to 5:30pm
Venue: 
Spilker 232

Ginzton Lab: Special Optics Seminar

Topic: 
A Carbon Nanotube Optical Rectenna
Abstract / Description: 

An optical rectenna – that is, a device that directly converts free-propagating electromagnetic waves at optical frequencies to d.c. electricity – was first proposed over 40 years ago, yet this concept has not been demonstrated experimentally due to fabrication challenges at the nanoscale. Realizing an optical rectenna requires that an antenna be coupled to a diode that operates on the order of 1 petahertz (switching speed on the order of a femtosecond). Ultralow capacitance, on the order of a few attofarads, enables a diode to operate at these frequencies; and the development of metal-insulator-metal tunnel junctions with nanoscale dimensions has emerged as a potential path to diodes with ultralow capacitance, but these structures remain extremely difficult to fabricate and couple to a nanoscale antenna reliably. Here we demonstrate an optical rectenna by engineering metal-insulator-metal tunnel diodes, with ultralow junction capacitance of approximately 2 attofarads, at the tips of multiwall carbon nanotubes, which act as the antenna and metallic electron field emitter in the diode. This demonstration is achieved using very small diode areas based on the diameter of a single carbon nanotube (about 10 nanometers), geometric field enhancement at the carbon nanotube tips, and a low work function semitransparent top metal contact. Using vertically-aligned arrays of the diodes, we measure d.c. open-circuit voltage and short-circuit current at visible and infrared electromagnetic frequencies that is due to a rectification process, and quantify minor contributions from thermal effects. In contrast to recent reports of photodetection based on hot electron decay in plasmonic nanoscale antenna, a coherent optical antenna field is rectified directly in our devices, consistent with rectenna theory. Our devices show evidence of photon-assisted tunneling that reduces diode resistance by two orders of magnitude under monochromatic illumination. 
Additionally, power rectification is observed under simulated solar illumination. Numerous current-voltage scans on different devices, and between 5-77 degrees Celsius, show no detectable change in diode performance, indicating a potential for robust operation.

Date and Time: 
Tuesday, October 20, 2015 - 2:00pm to 3:00pm
Venue: 
Spilker 232


Information Systems Lab (ISL) Colloquium

ISL Colloquium: Delay, memory, and messaging tradeoffs in a distributed service system

Topic: 
Delay, memory, and messaging tradeoffs in a distributed service system
Abstract / Description: 

We consider the classical supermarket model: jobs arrive as a Poisson process of rate lambda N, with 0 < lambda < 1, and are to be routed to one of N identical servers with unit-mean, exponentially distributed processing times. We review a variety of policies and architectures that have been considered in the literature, which differ in the direction and number of messages that are exchanged and in the memory that they employ; for example, the "power-of-d-choices" or pull-based policies. In order to compare policies of this kind, we focus on the resources (memory and messaging) that they use, and on whether the expected delay of a typical job vanishes as N increases.
We show that if (i) the message rate grows superlinearly, or (ii) the memory size grows superlogarithmically, as a function of N, then there exists a policy that drives the delay to zero, and we outline an analysis using fluid models. On the other hand, if neither condition (i) nor (ii) holds, then no policy within a broad class of symmetric policies can yield vanishing delay.
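The benefit of sampling d queues instead of one is easy to reproduce in a toy simulation of the supermarket model. A rough sketch (my own; the event-averaged queue length below ignores holding times and is only a qualitative comparison, and all parameter choices are illustrative):

```python
import random

def supermarket(n, lam, d, events=100_000, seed=0):
    """Embedded-chain simulation: Poisson arrivals of rate lam*n, unit-rate
    exponential servers, each arrival joining the shortest of d sampled queues."""
    rng = random.Random(seed)
    q = [0] * n
    total = 0
    for _ in range(events):
        busy = sum(1 for x in q if x > 0)
        # the next event is an arrival with probability lam*n / (lam*n + busy)
        if rng.random() < lam * n / (lam * n + busy):
            i = min(rng.sample(range(n), d), key=q.__getitem__)
            q[i] += 1
        else:
            # departure from a uniformly chosen busy server
            i = rng.choice([j for j in range(n) if q[j] > 0])
            q[i] -= 1
        total += sum(q)
    return total / events / n  # event-averaged queue length per server

# d = 2 ("power of two choices") yields a far shorter average queue than d = 1
print(supermarket(50, 0.9, 1), supermarket(50, 0.9, 2))
```

At lambda = 0.9 the d = 1 (random routing) average sits near the M/M/1 value, while d = 2 collapses it to a small constant, which is the messaging/delay tradeoff the talk quantifies.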

Date and Time: 
Thursday, November 9, 2017 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Dykstra’s Algorithm, ADMM, and Coordinate Descent: Connections, Insights, and Extensions

Topic: 
Dykstra’s Algorithm, ADMM, and Coordinate Descent: Connections, Insights, and Extensions
Abstract / Description: 

We study connections between Dykstra's algorithm for projecting onto an intersection of convex sets, the alternating direction method of multipliers (ADMM), and block coordinate descent. We prove that coordinate descent for a regularized regression problem, in which the penalty is a separable sum of support functions, is exactly equivalent to Dykstra's algorithm applied to the dual problem. ADMM on the dual problem is also seen to be equivalent, in the special case of two sets, with one being a linear subspace. These connections, aside from being interesting in their own right, suggest new ways of analyzing and extending coordinate descent. For example, from existing convergence theory on Dykstra's algorithm over polyhedra, we discern that coordinate descent for the lasso problem converges at an (asymptotically) linear rate. We also develop two parallel versions of coordinate descent, based on the Dykstra and ADMM connections.
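As a concrete reference point, Dykstra's algorithm itself is only a few lines. A minimal numerical sketch (my own toy example, projecting onto the intersection of a unit ball and a halfplane, not the talk's regression setting):

```python
import math

def proj_ball(p, r=1.0):
    # Euclidean projection onto the ball of radius r centered at the origin
    n = math.hypot(p[0], p[1])
    return p if n <= r else (r * p[0] / n, r * p[1] / n)

def proj_halfplane(p, a=0.5):
    # Euclidean projection onto the halfplane {(x, y) : x >= a}
    return (max(p[0], a), p[1])

def dykstra(x0, iters=100):
    """Dykstra's algorithm for two sets: like alternating projections, but the
    correction terms p and q make it converge to the *projection* of x0 onto
    the intersection, not merely to some point of the intersection."""
    x, p, q = x0, (0.0, 0.0), (0.0, 0.0)
    for _ in range(iters):
        y = proj_ball((x[0] + p[0], x[1] + p[1]))
        p = (x[0] + p[0] - y[0], x[1] + p[1] - y[1])
        x = proj_halfplane((y[0] + q[0], y[1] + q[1]))
        q = (y[0] + q[0] - x[0], y[1] + q[1] - x[1])
    return x

# projecting (2, 2) onto (unit ball) ∩ {x >= 0.5} lands on the ball's
# boundary at (1/sqrt(2), 1/sqrt(2))
print(dykstra((2.0, 2.0)))
```

The correction terms p and q are exactly what the talk's equivalence identifies with the dual variables maintained by coordinate descent.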

Date and Time: 
Wednesday, October 11, 2017 - 4:15pm
Venue: 
Packard 202

An Information Theoretic Perspective of Fronthaul Constrained Cloud and Fog Radio Access Networks [Special Seminar: ISL Colloquium]

Topic: 
An Information Theoretic Perspective of Fronthaul Constrained Cloud and Fog Radio Access Networks
Abstract / Description: 

Cloud radio access networks (C-RANs) have emerged as appealing architectures for next-generation wireless/cellular systems, whereby the processing/decoding is migrated from the local base stations/radio units (RUs) to a control/central unit (CU) in the "cloud". Fog radio access networks (F-RANs) address the case where the RUs are enhanced with the ability to locally cache popular content. The network operates via fronthaul digital links connecting the CU and the RUs. In this talk we will address basic information theoretic aspects of such networks, with emphasis on simple oblivious processing. Theoretical results illustrate the considerable performance gains to be expected for different cellular models. Some interesting theoretical directions conclude the presentation.

Date and Time: 
Wednesday, May 24, 2017 - 2:00pm
Venue: 
Packard 202

Cracking Big Data with Small Data [ISL Colloquium]

Topic: 
Cracking Big Data with Small Data
Abstract / Description: 

For the last several years, we have witnessed the emergence of datasets of an unprecedented scale across different scientific disciplines. The large volume of such datasets presents new computational challenges, as the diverse, feature-rich, and usually high-resolution data does not allow for effective data-intensive inference. In this regard, data summarization is a compelling (and sometimes the only) approach that aims at both exploiting the richness of large-scale data and being computationally tractable: instead of operating on complex and large data directly, carefully constructed summaries not only enable the execution of various data analytics tasks but also improve their efficiency and scalability.

A systematic way to do data summarization is to turn the problem into selecting a subset of data elements that optimizes a utility function quantifying the "representativeness" of the selected set. Oftentimes, these objective functions satisfy submodularity, an intuitive notion of diminishing returns stating that selecting any given element earlier helps more than selecting it later. Thus, many problems in data summarization require maximizing submodular set functions subject to a cardinality constraint, and massive data means we have to solve these problems at scale.

In this talk, I will present our recent efforts in developing practical schemes for data summarization. In particular, I will first discuss the fastest centralized solution, whose query complexity is only linear in the data size. However, to truly summarize massive data we need to opt for scalable methods. I will then present a streaming algorithm that, with a single pass over the data, provides a constant-factor approximation guarantee to the optimum solution. Finally, I will talk about a distributed approach that summarizes tens of millions of data points in a timely fashion. I will also demonstrate experiments on several applications, including sparse Gaussian process inference and exemplar-based clustering using Apache Spark.
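The baseline these schemes compete with is the classic greedy algorithm for monotone submodular maximization under a cardinality constraint, which achieves a (1 - 1/e)-approximation. A small sketch on a made-up coverage function (the sets and names below are purely illustrative):

```python
def greedy_max(ground, f, k):
    # repeatedly add the element with the largest marginal gain f(S + e) - f(S)
    S = set()
    for _ in range(k):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# toy coverage utility: f(S) = number of points covered by the chosen sets
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}

def coverage(S):
    return len(set().union(*(sets[e] for e in S))) if S else 0

print(greedy_max(sets, coverage, 2))  # picks "c" first, then "a"
```

Each greedy step queries f once per remaining element, which is the query cost the talk's centralized, streaming, and distributed variants drive down.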

Date and Time: 
Thursday, May 11, 2017 - 4:15pm
Venue: 
Packard 101

An information-theoretic perspective on interference management [ISL Colloquium]

Topic: 
An information-theoretic perspective on interference management
Abstract / Description: 

For high data rates and massive connectivity, next-generation cellular networks are expected to deploy many small base stations. While such dense deployment provides the benefit of bringing radio closer to end users, it also increases the amount of interference from neighboring cells. Consequently, efficient and effective management of interference is expected to become one of the main challenges for high-spectral-efficiency, low-power, broad-coverage wireless communications.

In this talk, we introduce two competing paradigms of interference management and discuss recent developments in network information theory under these paradigms. In the first "distributed network" paradigm, the network consists of autonomous cells with minimal cooperation. We explore advanced channel coding techniques for the corresponding mathematical model of the "interference channel," focusing mainly on the sliding-window superposition coding scheme that achieves the performance of simultaneous decoding through point-to-point channel codes and low-complexity decoding. In the second "centralized network" paradigm, the network is a group of neighboring cells connected via backhaul links. For uplink and downlink communications over this "two-hop relay network," we develop dual coding schemes – noisy network coding and distributed decode-forward – that achieve capacity universally within a few bits per degree of freedom.

Date and Time: 
Thursday, April 20, 2017 - 4:15pm
Venue: 
Packard 101

When Exploration is Expensive -- Reducing and Bounding the Amount of Experience Needed to Learn to Make Good Decisions [ISL]

Topic: 
When Exploration is Expensive -- Reducing and Bounding the Amount of Experience Needed to Learn to Make Good Decisions
Abstract / Description: 

Understanding the limits of how much experience is needed to learn to make good decisions is both a foundational issue in reinforcement learning and important for applications. Indeed, the potential to have artificial agents that help augment human capabilities, in the form of automated coaches or teachers, is enormous. Such reinforcement learning agents must explore in costly domains, since each experience comes from interacting with a human. I will discuss some of our recent theoretical results on sample-efficient reinforcement learning.



The Information Systems Laboratory Colloquium (ISLC) is typically held in Packard 101 every Thursday at 4:15 pm during the academic year. Refreshments are usually served after the talk.

The Colloquium is organized by graduate students Martin Zhang, Farzan Farnia, Reza Takapoui, and Zhengyuan Zhou. To suggest speakers, please contact any of the students.

Date and Time: 
Thursday, April 27, 2017 - 4:15pm
Venue: 
Packard 101

Self-Driving Networks Workshop [ISL]

Topic: 
Self-Driving Networks Workshop
Abstract / Description: 

Networks have become very complex over the past decade. The users and operators of large cloud platforms and campus networks have desired a much more programmable network infrastructure to meet the dynamic needs of different applications and reduce the friction they can cause to each other. This has culminated in the Software-Defined Networking paradigm. But you cannot program what you do not understand: the volume, velocity and richness of network applications and traffic seem beyond the ability of direct human comprehension. What is needed is a sensing, inference and learning system which can observe the data emitted by a network during the course of its operation, reconstruct the network's evolution, infer key performance metrics, continually learn the best responses to rapidly-changing load and operating conditions, and help the network adapt to them in real-time. The workshop brings together academic and industry groups interested in the broad themes of this topic. It highlights ongoing research at Stanford and describes initial prototype systems and results from pilot deployments.

Date and Time: 
Wednesday, April 12, 2017 (All day)
Venue: 
Arrillaga Alumni Center


Insensitivity of Loss Systems under Randomized SQ(d) Algorithms [ISL Colloquium]

Topic: 
Insensitivity of Loss Systems under Randomized SQ(d) Algorithms
Abstract / Description: 

In many applications, such as cloud computing and managing server farm resources, an incoming task or job has to be matched with an appropriate server in order to minimise the latency or blocking associated with the processing. Ideally the best choice would be to match a job to the fastest available server. However, when there are thousands of servers, requiring information on the tasks at every server is overkill.

The idea of randomised sampling of a few servers was pioneered in the 1990s by Vvedenskaya and Dobrushin in Russia and by Mitzenmacher in the US, and popularised as "power of two" schemes: sampling two servers randomly and sending the job to the "better" server (i.e. the one with the shorter queue, or more resources) provides most of the benefits of sampling all the servers.

In the talk I will discuss multi-server loss models under the power-of-d routing scheme when service time distributions are general with finite mean. Previous works on these models assume that the service times are exponentially distributed, and insensitivity was suggested through simulations; showing insensitivity to service time distributions has remained an open problem. We address this problem by considering mixed-Erlang service time distributions, which are dense in the class of general distributions on (0, ∞). We derive the mean field equations (MFE) of the empirical distributions for the system and establish the existence and uniqueness of the fixed point of the MFE. Furthermore, we show that the fixed point of the MFE corresponds to the fixed point obtained from the MFE for a system with exponential service times, showing that the fixed point is insensitive to the distribution. Due to a lack of uniformity in the mixed-Erlang convergence, the truly general case needs to be handled differently. I will conclude with the case of the MFE with general service times, showing that the MFE is then characterized by a PDE whose stationary point coincides with the fixed point in the case with exponential service times. The techniques developed in this paper are applicable to the study of mean field limits for Markov processes on general state spaces and of insensitivity properties of other queueing models.

Date and Time: 
Monday, March 20, 2017 - 3:00pm
Venue: 
Packard 202

Anonymity in the Bitcoin Peer-to-Peer Network [ISL Colloquium]

Topic: 
Anonymity in the Bitcoin Peer-to-Peer Network
Abstract / Description: 

Bitcoin enjoys a public perception of being a privacy-preserving financial system. In reality, Bitcoin has a number of privacy vulnerabilities, including the well-studied fact that transactions can be linked through the public blockchain. More recently, researchers have demonstrated deanonymization attacks that exploit a lower-layer weakness: the Bitcoin peer-to-peer (P2P) networking stack. In particular, the P2P network currently forwards content in a structured way that allows observers to deanonymize users by linking their transactions to the originating IP addresses. In this work, we first demonstrate that current protocols exhibit poor anonymity guarantees, both theoretically and in practice. Then, we consider a first-principles redesign of the P2P network, with the goal of providing strong, provable anonymity guarantees. We propose a simple networking policy called Dandelion, which achieves nearly-optimal anonymity guarantees at minimal cost to the network's utility.

Date and Time: 
Thursday, March 16, 2017 - 4:15pm
Venue: 
Packard 101


IT-Forum

IT Forum: Tight regret bounds for a latent variable model of recommendation systems

Topic: 
Tight regret bounds for a latent variable model of recommendation systems
Abstract / Description: 

We consider an online model for recommendation systems, with each user being recommended an item at each time-step and providing 'like' or 'dislike' feedback. A latent variable model specifies the user preferences: both users and items are clustered into types. The model captures structure in both the item and user spaces, and our focus is on simultaneous use of both structures. We analyze the situation in which the type preference matrix has i.i.d. entries. Our analysis elucidates the system operating regimes in which existing algorithms are nearly optimal, as well as highlighting the sub-optimality of using only one of item or user structure (as is done in commonly used item-item and user-user collaborative filtering). This prompts a new algorithm that is nearly optimal in essentially all parameter regimes.

Joint work with Prof. Guy Bresler.

Date and Time: 
Friday, November 10, 2017 - 1:15pm
Venue: 
Packard 202

IT-Forum: Information Theoretic Limits of Molecular Communication and System Design Using Machine Learning

Topic: 
Information Theoretic Limits of Molecular Communication and System Design Using Machine Learning
Abstract / Description: 

Molecular communication is a new and bio-inspired field in which chemical signals are used to transfer information instead of electromagnetic or electrical signals. In this paradigm, the transmitter releases chemicals or molecules and encodes information in some property of these signals, such as their timing or concentration. The signal then propagates through the medium between the transmitter and the receiver by different means, such as diffusion, until it arrives at the receiver, where it is detected and the information decoded. This new multidisciplinary field can be used for in-body communication, secrecy, networking microscale and nanoscale devices, infrastructure monitoring in smart cities and industrial complexes, as well as for underwater communications. Since these systems are fundamentally different from telecommunication systems, most techniques that have been developed over the past few decades to advance radio technology cannot be applied to them directly.

In this talk, we first explore some of the fundamental limits of molecular communication channels, evaluating how capacity scales with the number of particles released by the transmitter and characterizing the optimal input distribution. Then, since the underlying channel models for some molecular communication systems are unknown, we demonstrate how techniques from machine learning and deep learning can be used to design components, such as detection algorithms, directly from transmission data, without any knowledge of the underlying channel models.
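As a toy illustration of the model-free, data-driven detection idea, one can learn a simple threshold detector for a concentration-modulated channel purely from labelled transmission data. The function names and the nearest-centroid rule below are my own simplification for illustration, not the deep-learning detectors from the talk; the point is only that no channel model appears anywhere.

```python
def train_detector(received, bits):
    """Learn a threshold from labelled (received value, transmitted bit)
    data: average the received values for each bit and put the threshold
    midway between the two means.  No channel model is assumed."""
    mean0 = sum(r for r, b in zip(received, bits) if b == 0) / bits.count(0)
    mean1 = sum(r for r, b in zip(received, bits) if b == 1) / bits.count(1)
    threshold = (mean0 + mean1) / 2
    flip = mean1 < mean0  # handle channels where bit 1 lowers the reading
    return lambda r: int((r > threshold) != flip)

# Train on observed transmissions, then decode new observations.
detect = train_detector([0.1, 0.9, 0.2, 0.8], [0, 1, 0, 1])
```

The same train-from-data pattern carries over when the simple threshold is replaced by a neural network, which is closer to what the talk describes.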

Date and Time: 
Monday, October 16, 2017 - 3:25pm to 4:25pm
Venue: 
Packard 202

Estimation of entropy and differential entropy beyond i.i.d. and discrete distributions

Topic: 
Estimation of entropy and differential entropy beyond i.i.d. and discrete distributions
Abstract / Description: 

Recent years have witnessed significant progress in entropy and mutual information estimation, in particular in the large alphabet regime. Concretely, there exist efficiently computable estimators whose performance with n samples is essentially that of the maximum likelihood estimator with n log(n) samples, a phenomenon termed "effective sample size enlargement". Generalizations to processes with memory (estimation of the entropy rate) and continuous distributions (estimation of the differential entropy) have remained largely open. This talk is about the challenges behind those generalizations and recent progress in this direction. For estimating the entropy rate of a Markov chain, we show that when the mixing time is not too slow, at least S^2/log(S) samples are required to consistently estimate the entropy rate, where S is the size of the state space. In contrast, the empirical entropy rate requires S^2 samples to achieve consistency even if the Markov chain is i.i.d. We propose a general approach to achieve the S^2/log(S) sample complexity, and illustrate our results through estimating the entropy rate of the English language from the Penn Treebank (PTB) and the Google 1 Billion Word Dataset. For differential entropy estimation, we characterize the minimax behavior over Besov balls, and show that a fixed-k nearest neighbor estimator adaptively achieves the minimax rates up to logarithmic factors without knowing the smoothness of the density. The "effective sample size enlargement" phenomenon holds in both the Markov chain case and the case of continuous distributions.

 

Joint work with Weihao Gao, Yanjun Han, Chuan-Zheng Lee, Pramod Viswanath, Tsachy Weissman, Yihong Wu, and Tiancheng Yu.
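For concreteness, the plug-in (empirical) estimator against which the improved estimators above are measured looks like this. A minimal sketch; the function name is mine.

```python
import math
from collections import Counter

def empirical_entropy(samples):
    """Plug-in estimate in bits: the entropy of the empirical
    distribution of the samples.  For an alphabet of size S this
    needs n >> S samples to be accurate; the estimators discussed
    above behave as if n were enlarged to n log n."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())
```

For example, `empirical_entropy('abab')` evaluates to exactly 1 bit. With few samples from a large alphabet, the plug-in estimate is badly biased downward, which is the bias the improved estimators correct.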

Date and Time: 
Friday, October 13, 2017 - 1:15pm
Venue: 
Packard 202

IT-Forum: Multi-Agent Online Learning under Imperfect Information: Algorithms, Theory and Applications

Topic: 
Multi-Agent Online Learning under Imperfect Information: Algorithms, Theory and Applications
Abstract / Description: 

We consider a model of multi-agent online learning under imperfect information, where the reward structures of agents are given by a general continuous game. After introducing a general equilibrium stability notion for continuous games, called variational stability, we examine the well-known online mirror descent (OMD) learning algorithm and show that the "last iterate" (that is, the actual sequence of actions) of OMD converges to variationally stable Nash equilibria provided that the feedback delays faced by the agents are synchronous and bounded. We then extend the result to almost sure convergence to variationally stable Nash equilibria under both unbiased noise and synchronous and bounded delays. Subsequently, to tackle fully decentralized, asynchronous environments with unbounded feedback delays, we propose a variant of OMD which we call delayed mirror descent (DMD), and which relies on the repeated leveraging of past information. With this modification, the algorithm converges to variationally stable Nash equilibria, with no feedback synchronicity assumptions, and even when the delays grow super-linearly relative to the game's horizon. We then again extend it to the case where there are both delays and noise.

In the second part of the talk, we present two applications of the multi-agent online learning framework. The first application is to non-convex stochastic optimization, where we characterize almost sure convergence of the well-known stochastic mirror descent algorithm to global optima for a large class of non-convex stochastic optimization problems (strictly including convex, quasi-convex and star-convex problems). Going a step further, our results also include as a special case the large-scale stochastic optimization problem, where stochastic mirror descent is applied in a distributed, asynchronous manner across multiple machines/processors. Time permitting, we will discuss how these results help (at least in part) clarify and affirm the recent successes of mirror-descent-type algorithms in large-scale machine learning. The second application concerns power management on random wireless networks, where we use a game-design approach to derive robust power control algorithms that converge (almost surely) to the optimal power allocation in the presence of randomly fluctuating networks.

This is joint work with Nick Bambos, Stephen Boyd, Panayotis Mertikopoulos, Peter Glynn and Claire Tomlin.


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, October 6, 2017 - 1:15pm
Venue: 
Packard 101

Biology as Information Dynamics [IT forum]

Topic: 
Biology as Information Dynamics
Abstract / Description: 

If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the 'replicator equation' – a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback-Leibler divergence. Using this, we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher's fundamental theorem of natural selection.
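In symbols (standard notation, not necessarily the speaker's): the replicator equation and the relative information that plays the central role are

```latex
% Replicator equation: the population fraction p_i of the i-th
% replicator grows according to its fitness relative to the mean.
\frac{dp_i}{dt} = p_i \bigl( f_i(p) - \bar{f}(p) \bigr),
\qquad \bar{f}(p) = \sum_j p_j f_j(p)

% Kullback--Leibler divergence of p from q:
D(q \,\|\, p) = \sum_i q_i \ln \frac{q_i}{p_i}
```

Roughly, under suitable stability conditions on q (e.g. q a dominant steady state of the dynamics), D(q || p(t)) is nonincreasing along solutions, which is the sense in which the evolving population can be said to 'learn' q.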

Date and Time: 
Thursday, April 20, 2017 - 4:20pm
Venue: 
Clark S361

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Minimum Rates of Approximate Sufficient Statistics [IT-Forum]

Topic: 
Minimum Rates of Approximate Sufficient Statistics
Abstract / Description: 

Given a sufficient statistic for a parametric family of distributions, one can estimate the parameter without access to the data itself, using only the sufficient statistic. However, the memory size for storing the sufficient statistic may be prohibitive. Indeed, for $n$ independent data samples drawn from a $k$-nomial distribution with $d=k-1$ degrees of freedom, the length of the code scales as $d\log n+O(1)$. In many applications though, we may not have a useful notion of sufficient statistics and also may not need to reconstruct the generating distribution exactly. By adopting an information-theoretic approach in which we allow a small error in estimating the generating distribution, we construct various notions of {\em approximate sufficient statistics} and show that the code length can be reduced to $\frac{d}{2}\log n + O(1)$. We consider errors measured according to the relative entropy and variational distance criteria. For the code constructions, we leverage Rissanen's minimum description length principle, which yields a non-vanishing error measured using the relative entropy. For the converse parts, we use Clarke and Barron's asymptotic expansion for the relative entropy of a parametrized distribution and the corresponding mixture distribution. The limitation of this method is that only a weak converse for the variational distance can be shown. We develop new techniques to achieve vanishing errors, and we also prove strong converses for all our statements. The latter means that even if the code is allowed to have a non-vanishing error, its length must still be at least $\frac{d}{2}\log n$.

This is joint work with Prof. Masahito Hayashi (Graduate School of Mathematics, Nagoya University, and Center for Quantum Technologies, NUS).


 


Date and Time: 
Monday, April 17, 2017 - 1:15pm
Venue: 
Packard 202

Information Theory, Geometry, and Cover's Open Problem [IT-Forum]

Topic: 
Information Theory, Geometry, and Cover's Open Problem
Abstract / Description: 

Formulating the problem of determining the communication capacity of point-to-point channels as a problem in high-dimensional geometry is one of Shannon's most important insights, and it led to the conception of information theory. However, such geometric insights have been limited to the point-to-point case, and have not been effectively utilized to attack network problems. In this talk, we present our recent work, which develops a geometric approach to make progress on one of the central problems in network information theory, namely the capacity of the relay channel. In particular, consider a memoryless relay channel where the channel from the relay to the destination is an isolated bit pipe of capacity C0. Let C(C0) denote the capacity of this channel as a function of C0. What is the critical value of C0 at which C(C0) first equals C(infinity)? This is a long-standing open problem posed by Cover and named ''The Capacity of the Relay Channel'' in Open Problems in Communication and Computation, Springer-Verlag, 1987. In this talk, we answer this question in the Gaussian case and show that C(C0) cannot equal C(infinity) unless C0 = infinity, regardless of the SNR of the Gaussian channels, while the cut-set bound would suggest that C(infinity) can be achieved at finite C0. The key step in our proof is a strengthening of the isoperimetric inequality on a high-dimensional sphere, which we use to develop a packing argument on a spherical cap that resembles Shannon's sphere-packing idea for point-to-point channels.

Joint work with Leighton Barnes and Ayfer Ozgur.


 


Date and Time: 
Friday, February 24, 2017 - 1:15pm
Venue: 
Packard 202

Codes and card tricks: Magic for adversarial crowds [IT-Forum]

Topic: 
Codes and card tricks: Magic for adversarial crowds
Abstract / Description: 

Rated by Ron Graham as one of the top 10 mathematical card tricks of the 20th century, Diaconis' mind reader is a magic trick that involves interaction with five collaborating volunteers. Inspired by it, we perform a similar card trick in this talk, upgraded to tolerate bluffing volunteers. The theory behind the trick will be used to develop fundamental limits, as well as code constructions, for faster delay estimation in positioning systems.

This is joint work with Sihuang Hu and Ofer Shayevitz (https://arxiv.org/abs/1605.09038).

Date and Time: 
Friday, January 27, 2017 - 1:15pm
Venue: 
Packard 202

On Two Problems in Coded Statistical Inference [IT-Forum]

Topic: 
On Two Problems in Coded Statistical Inference
Abstract / Description: 

While statistical inference and information theory are deeply related fields, problems which lie at the intersection of both disciplines usually fall between the two stools, and lack definitive answers. In this talk, I will discuss recent advances in two such problems.

In the first part of the talk, I will discuss a distributed hypothesis testing problem, in which the hypotheses regard the joint statistics of two sequences, one available to the decision function directly (as side information), while the other is conveyed over a limited-rate link. The goal is to design a system that obtains the optimal trade-off between the false-alarm and misdetection exponents. I will define a notion of "channel detection codes" and show that the optimal exponents of the distributed hypothesis testing problem are directly related to the exponents of these codes. Then, I will discuss a few bounds on the exponents of channel detection codes, as well as prospective improvements. This approach has two merits over previous works: it is suitable for any pair of memoryless joint distributions, and it provides bounds on the entire false-alarm/misdetection curve, rather than just on its boundary points (Stein's exponent).

In the second part of the talk (time permitting), I will discuss a parameter estimation problem over an additive Gaussian noise channel with bandlimited input. When one is allowed to design both the modulator and the estimator, the absolute $\alpha$-th moment of the estimation error can decrease exponentially with the transmission time. I will discuss several new upper (converse) bounds on the optimal decrease rate.

Joint work with Yuval Kochman (Hebrew University) and Neri Merhav (Technion).


 


Date and Time: 
Friday, February 17, 2017 - 1:15pm
Venue: 
Packard 202

Pages

Optics and Electronics Seminar

High Precision Motion Control 101: Tools for Emerging Applications [Stanford Optical Society]

Topic: 
High Precision Motion Control 101: Tools for Emerging Applications
Abstract / Description: 

Precision motion control is an important subset of automation, which encompasses a diverse array of applications. Many aspects of optical engineering research and development depend on the appropriate selection and use of sensors, actuators, and controllers. In precision motion projects, the actual needs may vary widely from the originally intended specifications.

Some of the most difficult tasks in optics research require complex multi-axis motion control methods. Popular examples include additive manufacturing (3D printing), sample positioning for crystallography, and alignment for silicon photonics. In this seminar we will address concepts and challenges in selecting actuator and sensing technologies, as well as the appropriate controller techniques. We will provide researchers with the ability to understand and apply the fundamental concepts for R&D precision motion projects and automation systems.

Date and Time: 
Wednesday, May 17, 2017 - 1:25pm
Venue: 
Spilker 232

Scientific Visualization with Blender [Stanford Optical Society Workshop]

Topic: 
Scientific Visualization with Blender
Abstract / Description: 

Have you ever said to yourself: "I'm sure this paper would have been accepted if I had just included a pretty picture..."? The pretty pictures you often see gracing the cover of Nature can be made in a number of ways, from simple sketching in PowerPoint to full-blown 3D modeling. Among the numerous software packages available for 3D computer graphics is Blender, a free and open-source package that can be used for modeling, sculpting, animating, rendering, and more. In this hands-on workshop, I will introduce basic principles of operation for the modeling and rendering of objects in Blender. Together, we will create some simple models before diving into some advanced techniques that are necessary to make images like the one seen below.

Are you the type of person who prefers working from the command line? Blender is built on Python and can be directly manipulated from a Python console or script. I will show you how to perform some unique operations using a simple Python script, with an eye towards visualizing your own data in Blender. Are you simply looking for a way to convert a 2D image into a shiny 3D model? I will also show you how to take an SVG file and turn it into a 3D model in Blender that can be manipulated.

This workshop is intended for people who are entirely unfamiliar with Blender, but the concepts covered can easily be applied to any rendering software of your choice.

This is a hands-on workshop: bring your laptops, a keyboard with a numpad, and a mouse!

Date and Time: 
Wednesday, May 24, 2017 - 1:00pm to 5:00pm
Venue: 
-Venue information will be provided to registered attendees-

Supercontinuum Fiber lasers: Technology and Applications [Stanford Optical Society Seminar]

Topic: 
Supercontinuum Fiber lasers: Technology and Applications
Abstract / Description: 

In the 1970s, wide spectral broadening of intense laser light in a nonlinear material, or supercontinuum generation, was first demonstrated in the laboratory. With the development of recent fiber and fiber laser technology, namely compact high-power picosecond lasers and micro-structured photonic crystal fiber (PCF), commercial supercontinuum lasers have become a reality. With a typical spectral bandwidth covering over 2000 nm and output powers exceeding 20 W, these sources have proved an invaluable tool. In this talk, we will cover:

  • Fundamentals of how supercontinuum lasers work and the importance of the PCF design in tailoring the spectrum.
  • The properties of supercontinuum laser light and what makes these sources unique.
  • The main applications of supercontinuum lasers today in imaging, spectroscopy, optical coherence tomography (OCT) and illumination.
  • The supercontinuum technology roadmap and future applications.

Date and Time: 
Monday, May 22, 2017 - 11:00am
Venue: 
Spilker 232

Always-On Vision Becomes a Reality [OSA Seminar]

Topic: 
Always-On Vision Becomes a Reality
Abstract / Description: 

Intelligent devices equipped with human-like senses such as always-on touch, audio and motion detection have enabled a variety of new use cases and applications, transforming the way we interact with each other and our surroundings. While the vast majority (>80%) of human insight comes through the eyes, enabling always-on vision (defined as < 1 mA power) for devices is challenging due to power-hungry hardware and the high complexity of inference algorithms. Qualcomm Research has pioneered an Always-on Computer Vision Module (CVM) combining innovations in the system architecture, ultra-low power design and dedicated hardware for vision algorithms running at the "edge." With low end-to-end power consumption, a tiny form factor and low cost, the CVM can be integrated into a wide range of battery- and line-powered devices (IoT, mobile, VR/AR, automotive, etc.), performing object detection, feature recognition, change/motion detection, and other tasks. Its processor performs all computation within the module itself and outputs metadata.

Date and Time: 
Thursday, May 11, 2017 - 4:15pm
Venue: 
Spilker 232


Insensitivity of Loss Systems under Randomized SQ(d) Algorithms [ISL Colloquium]

Topic: 
Insensitivity of Loss Systems under Randomized SQ(d) Algorithms
Abstract / Description: 

In many applications, such as cloud computing and the management of server farm resources, an incoming task or job has to be matched with an appropriate server in order to minimise the latency or blocking associated with its processing. Ideally, the best choice would be to match a job to the fastest available server. However, when there are thousands of servers, requiring information on the state of every server is overkill.

The idea of randomised sampling of a few servers was pioneered in the 1990s by Vvedenskaya and Dobrushin in Russia and by Mitzenmacher in the US, and popularised as the "power of two" scheme: sampling two servers at random and sending the job to the "better" server (i.e. the one with the shorter queue, or more resources) provides most of the benefits of sampling all the servers.
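The scheme is easy to simulate. A toy sketch under my own naming, where d = 1 is purely random placement and d = 2 is the "power of two" rule:

```python
import random

def simulate(n_servers, n_jobs, d, seed=0):
    """Assign each job to the shortest of d randomly sampled queues
    and return the maximum queue length reached (d=1 is random placement)."""
    rng = random.Random(seed)
    queues = [0] * n_servers
    for _ in range(n_jobs):
        choices = rng.sample(range(n_servers), d)      # sample d servers
        best = min(choices, key=lambda i: queues[i])   # pick the shortest queue
        queues[best] += 1
    return max(queues)
```

Running this with d = 2 versus d = 1 for many jobs shows the dramatic drop in maximum load that the "power of two" result predicts.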

In the talk I will discuss multi-server loss models under the power-of-d routing scheme when service time distributions are general with finite mean. Previous works on these models assume that the service times are exponentially distributed, and insensitivity was only suggested through simulations; showing insensitivity to the service time distribution has remained an open problem. We address this problem by considering service times with mixed-Erlang distributions, which are dense in the class of general distributions on (0, ∞). We derive the mean field equations (MFE) of the empirical distributions for the system and establish the existence and uniqueness of the fixed point of the MFE. Furthermore, we show that this fixed point coincides with the fixed point of the MFE for a system with exponential service times, showing that the fixed point is insensitive to the distribution. Due to the lack of uniformity of the mixed-Erlang convergence, the true general case needs to be handled differently. I will conclude with the case of the MFE with general service times, showing that the MFE is then characterised by a PDE whose stationary point coincides with the fixed point in the exponential case. The techniques developed in this work are applicable to studying mean field limits for Markov processes on general state spaces and insensitivity properties of other queueing models.

Date and Time: 
Monday, March 20, 2017 - 3:00pm
Venue: 
Packard 202

Quest for Energy Efficiency in Computing Technologies [Applied Physics 483 Optics & Electronics]

Topic: 
Quest for Energy Efficiency in Computing Technologies
Abstract / Description: 

As computing becomes increasingly pervasive in our daily lives, it is generally recognized that energy efficiency will be one of the key design considerations for any future computing scheme. Consequently, significant research is currently ongoing into new physics, material systems and system-level designs to improve energy efficiency. In this talk, I shall discuss some of our recent progress in this regard. Specifically, the physics of ordered and correlated systems allows for a fundamental improvement in energy efficiency when a transition happens between two distinguishable states. Our recent experiments show that this theoretical promise can indeed be realized in electronic devices. The resulting gain in energy efficiency could reach orders of magnitude.

Date and Time: 
Monday, March 13, 2017 - 4:00pm
Venue: 
Spilker 232

Synopsys LightTools Hands-On Training [Optical Society Workshop]

Topic: 
Synopsys LightTools Hands-On Training
Abstract / Description: 

Capacity: 18 people

LightTools is a 3D optical engineering and design software product that supports virtual prototyping, simulation, optimization, and photorealistic renderings of illumination applications. Its unique design and analysis capabilities, combined with ease of use, support for rapid design iterations, and automatic system optimization, help to ensure the delivery of illumination designs according to specifications and schedule.

LightTools is used by industry leaders for engineering applications such as LEDs, displays, lighting, solar, automotive, head-mounted displays, projectors, etc.

Please read:

  • This is a hands-on interactive training session; you will need to actively participate.
  • The software runs on Windows. Therefore, you will need a computer that runs Windows for the training.
  • You should register only if you are absolutely sure that you can commit for the full 4-hours.
  • Since it is a hands-on session, the capacity is limited to 18 people. The first 18 people to RSVP will receive the download link, license information, and event location.
  • Please RSVP using the following link: https://www.surveymonkey.com/r/DSHDMD5
Date and Time: 
Monday, February 27, 2017 - 1:00pm to 5:00pm

Data-driven, Interactive Scientific Articles in a Collaborative Environment with Authorea [OSA; WEE]

Topic: 
Data-driven, Interactive Scientific Articles in a Collaborative Environment with Authorea
Abstract / Description: 

Most tools that scientists use for the preparation of scholarly manuscripts, such as Microsoft Word and LaTeX, function offline and don't account for the digital-born nature of research objects. Further, most authoring tools in use today are not designed for collaboration. As scientific collaborations grow in size, research transparency and the attribution of scholarly credit are at stake. I will show how Authorea allows scientists to write rich data-driven manuscripts on the web; articles that natively offer readers a dynamic, interactive experience with an article's full text, images, data, and code, paving the way to increased data sharing, research reproducibility, and Open Science. I will also demonstrate how Authorea differs from Overleaf and ShareLaTeX.

 

Please bring your laptop to actively participate in the demo (suggested, not mandatory).

Date and Time: 
Tuesday, January 24, 2017 - 12:30pm
Venue: 
Spilker 143

OSA Special Seminar

Topic: 
Widely-Tunable High-Performance Lasers: From sophisticated optical set-ups to robust products
Abstract / Description: 

Unique manufacturing techniques enable the integration of complex optical set-ups, such as optical parametric oscillators (OPO), into highly stable products. We will discuss two such laser products that have been recently developed and released on the market: The Hübner C-WAVE, which is tunable in the visible (450-650nm) and NIR (900-1300nm); and the Cobolt Odin, which operates in the IR (2-5µm). The talk will focus on bringing new optical technology from the benchtop to market-ready products. We will also present performance data and application results from studies in atomic physics, single molecule and gas spectroscopy, and the characterization of integrated optical devices using the aforementioned products.

Date and Time: 
Thursday, November 10, 2016 - 4:15pm to 5:00pm
Venue: 
Spilker 232

Pages

SCIEN Talk

SCIEN Talk: Street View 2018 - The Newest Generation of Mapping Hardware

Topic: 
Street View 2018 - The Newest Generation of Mapping Hardware
Abstract / Description: 

A brief overview of Street View, from its inception 10 years ago until now, will be presented. Street-level imagery was the prime objective for Google's Street View in the past; it has since grown into a state-of-the-art mapping platform. Challenges and solutions in the design and fabrication of the imaging system, and the optimization of hardware to align with specific software post-processing, will be discussed. Real-world challenges of fielding hardware in 80+ countries will also be addressed.

Date and Time: 
Wednesday, February 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Learning where to look in 360 environments

Topic: 
Learning where to look in 360 environments
Abstract / Description: 

Many vision tasks require not just categorizing a well-composed, human-taken photo, but also intelligently deciding "where to look" in order to get a meaningful observation in the first place. We explore how an agent can anticipate the visual effects of its actions, and develop policies for learning to look around actively, both for the sake of a specific recognition task and for generic exploratory behavior. In addition, we examine how a system can learn from unlabeled video to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree panoramas. Finally, to facilitate 360-degree video processing, we introduce spherical convolution, which allows the application of off-the-shelf deep networks and object detectors to 360-degree imagery.
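To illustrate the geometric intuition behind spherical convolution on equirectangular (360-degree) images: pixels near the poles cover less angular width than pixels at the equator, so a kernel with a fixed angular footprint must widen, in pixels, toward the poles. The sketch below shows only this geometric idea with a simple per-row box blur; the widths and parameters are illustrative assumptions, not the paper's actual kernel-adaptation method.

```python
import numpy as np

def row_adaptive_blur(equirect, base_width=3):
    """Horizontal blur on an equirectangular image whose kernel widens by
    1/cos(latitude) per row, keeping the kernel's angular footprint fixed."""
    h, w = equirect.shape
    out = np.empty_like(equirect, dtype=float)
    for row in range(h):
        # latitude of the row center, in (-pi/2, pi/2); avoid the exact poles
        lat = (row + 0.5) / h * np.pi - np.pi / 2
        width = min(w, max(base_width,
                           int(round(base_width / max(np.cos(lat), 1e-3)))))
        kernel = np.ones(width) / width
        # circular convolution: the panorama wraps around in longitude
        out[row] = np.real(np.fft.ifft(np.fft.fft(equirect[row]) *
                                       np.fft.fft(kernel, w)))
    return out
```

A learned spherical convolution would replace the box kernel with row-specific trained filters, but the latitude-dependent footprint is the same.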

Date and Time: 
Wednesday, January 24, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Driverless Anything and the Role of LiDAR

Topic: 
Driverless Anything and the Role of LiDAR
Abstract / Description: 

LiDAR, or light detection and ranging, is a versatile light-based remote sensing technology that has been the subject of a great deal of attention in recent times. It has shown up in a number of media venues, and has even led to public debate about engineering choices of a well-known electric car company, Tesla Motors. During this talk the speaker will provide some background on LiDAR and discuss why it is a key link to the future autonomous vehicle ecosystem as well as its strong connection to power electronics technologies.

Date and Time: 
Wednesday, January 17, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Advancing Healthcare with AI and VR

Topic: 
Advancing Healthcare with AI and VR
Abstract / Description: 

Quality, cost, and accessibility form an iron triangle that has prevented healthcare from achieving accelerated advancement in the last few decades: improving any one of the three metrics may lead to degradation of the other two. However, thanks to recent breakthroughs in artificial intelligence (AI) and virtual reality (VR), this iron triangle can finally be shattered. In this talk, I will share the experience of developing DeepQ, an AI platform for AI-assisted diagnosis and VR-facilitated surgery. I will present three healthcare initiatives we have undertaken since 2012: Healthbox, Tricorder, and VR surgery, and explain how AI and VR play pivotal roles in improving diagnosis accuracy and treatment effectiveness. More specifically, I will describe how we have dealt not only with big-data analytics but also with small-data learning, which is typical of the medical domain. The talk concludes with roadmaps and a list of open research issues in signal processing and AI on the way to precision medicine and surgery.

Date and Time: 
Wednesday, January 10, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Compressed Ultrafast Photography and Microscopy: Redefining the Limit of Passive Ultrafast Imaging

Topic: 
Compressed Ultrafast Photography and Microscopy: Redefining the Limit of Passive Ultrafast Imaging
Abstract / Description: 

High-speed imaging is an indispensable technology for blur-free observation of fast transient dynamics in virtually all areas, including science, industry, defense, energy, and medicine. Unfortunately, the frame rates of conventional cameras are significantly constrained by their data transfer bandwidth and onboard storage. We demonstrate a two-dimensional dynamic imaging technique, compressed ultrafast photography (CUP), which can capture non-repetitive time-evolving events at up to 100 billion frames per second. Compared with existing ultrafast imaging techniques, CUP has the prominent advantage of measuring an x, y, t (x, y, spatial coordinates; t, time) scene with a single camera snapshot, thereby allowing observation of transient events occurring on time scales down to tens of picoseconds. Thanks to CUP, humans can, for the first time, see light pulses on the fly. Because this technology advances the imaging frame rate by orders of magnitude, it opens a new imaging regime and new scientific possibilities.

In this talk, I will discuss our recent effort to develop a second-generation CUP system and demonstrate its applications at scales from macroscopic to microscopic. For the first time, we imaged photonic Mach cones and captured the "sonic boom" of light in action. Moreover, by adapting CUP for microscopy, we enabled two-dimensional fluorescence lifetime imaging at an unprecedented speed. An advantage of CUP recording is that even visually simple systems can be scientifically interesting when captured at such high speeds. Given CUP's capability, we expect it to find widespread applications in both fundamental and applied sciences, including biomedical research.
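The single-snapshot idea behind CUP can be made concrete with a toy forward model: each time frame is encoded by a fixed pseudo-random mask, sheared by the streak camera's temporal deflection, and integrated onto one detector image. The array sizes, the one-pixel-per-frame shear, and the moving-spot scene below are illustrative assumptions; the reconstruction step (solved in practice with compressed-sensing algorithms) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def cup_forward(frames, mask):
    """Toy CUP forward model: mask each time frame, shear it by one pixel
    per frame (the streak deflection), and integrate into a 2D snapshot."""
    t, h, w = frames.shape
    snapshot = np.zeros((h + t - 1, w))
    for k in range(t):
        snapshot[k:k + h] += frames[k] * mask  # shear, then accumulate
    return snapshot

# a moving bright spot as the transient scene
frames = np.zeros((4, 8, 8))
for k in range(4):
    frames[k, 2, 2 + k] = 1.0

mask = (rng.random((8, 8)) > 0.5).astype(float)
y = cup_forward(frames, mask)  # one 2D measurement encodes all 4 frames
```

Because the mask breaks the ambiguity between space and time along the shear direction, the x, y, t datacube can later be recovered from the single snapshot `y`.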

Date and Time: 
Wednesday, December 6, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Next Generation Wearable AR Display Technologies

Topic: 
Next Generation Wearable AR Display Technologies
Abstract / Description: 

Wearable AR/VR displays have a long history, and earlier efforts failed due to various limitations. Advances in sensors, optical technologies, and computing have renewed interest in this area. Most people are convinced that AR will be very big; a key question is whether AR glasses can become the new computing platform and replace smartphones. I'll discuss some of the challenges ahead. We have been working on various wearable display architectures, and I'll discuss our efforts on MEMS scanned-beam displays, head-mounted projectors, smart telepresence screens, and holographic near-eye displays.

Date and Time: 
Wednesday, November 29, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Near-Eye Varifocal Augmented Reality Displays

Topic: 
Near-Eye Varifocal Augmented Reality Displays
Abstract / Description: 

With the goal of registering dynamic synthetic imagery onto the real world, Ivan Sutherland envisioned a fundamental idea: combining digital displays with conventional optical components in a wearable fashion. Since then, advancements in display engineering and a broader understanding in vision science have led us to computational displays for virtual reality and augmented reality applications. Today, such displays promise a more realistic and comfortable experience through techniques such as lightfield displays, holographic displays, always-in-focus displays, multiplane displays, and varifocal displays. In this talk, as an Nvidian, I will present our new optical layouts for see-through computational near-eye displays that are simple, compact, and varifocal, and that provide a wide field of view with clear peripheral vision and a large eyebox. Key to our efforts so far are novel see-through rear-projection holographic screens and deformable mirror membranes. We establish fundamental trade-offs between the quantitative parameters of resolution, field of view, and the form factor of our designs, opening an intriguing avenue for future work on accommodation-supporting augmented reality displays.

Date and Time: 
Wednesday, November 15, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Interactive 3D Digital Humans

Topic: 
Interactive 3D Digital Humans
Abstract / Description: 

This talk will cover recent methods for recording and displaying interactive life-sized digital humans using the ICT Light Stage, natural language interfaces, and automultiscopic 3D displays. We will then discuss the first full application of this technology to preserve the experience of in-person interactions with Holocaust survivors.

More Information: http://gl.ict.usc.edu/Research/TimeOffsetConversations/


The SCIEN Colloquia are open to the public. The talks are also videotaped and posted the following week on talks.stanford.edu.

There will be a reception following the presentation.

Date and Time: 
Wednesday, November 8, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Mapping molecular orientation using polarized light microscopy

Topic: 
Mapping molecular orientation using polarized light microscopy
Abstract / Description: 

Polarization is a basic property of light, but the human eye is not sensitive to it. Therefore, we don't have an intuitive understanding of polarization and of optical phenomena that are based on it. They either elude us, like the polarization of the blue sky or the rainbow, or they puzzle us, like the effect of Polaroid sunglasses. Meanwhile, polarized light plays an important role in nature and can be used to manipulate and analyze molecular order in materials, including living cells, tissues, and whole organisms, by observation with the polarized light microscope.

In this seminar, Rudolf Oldenbourg will first illustrate the nature of polarized light and its interaction with aligned materials using hands-on demonstrations. He will then introduce a modern version of the polarized light microscope, the LC-PolScope, created at the MBL. Enhanced by liquid crystal devices, electronic imaging, and digital image processing techniques, the LC-PolScope reveals and measures the orientation of molecules in every resolved specimen point at once. In recent years, his lab expanded the LC-PolScope technique to include the measurement of polarized fluorescence of GFP and other fluorescent molecules, and applied it to record the remarkable choreography of septin proteins during cell division, in organisms ranging from yeast to mammalian cells.

Talon Chandler will then discuss extending polarized light techniques to multi-view microscopes, including light sheet and light field microscopes. In contrast to traditional, single-view microscopy, the recording of specimen images along two or more viewing directions allows us to unambiguously measure the three dimensional orientation of molecules and their aggregates. Chandler will discuss ongoing work on optimizing the design and reconstruction algorithms for multi-view polarized light microscopy.


The SCIEN Colloquia are open to the public. The talks are also videotaped and posted the following week on talks.stanford.edu.

There will be a reception following the presentation.

Date and Time: 
Wednesday, November 1, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN colloquium: Light field Retargeting for Integral and Multi-panel Displays

Topic: 
Light field Retargeting for Integral and Multi-panel Displays
Abstract / Description: 

Light fields are a collection of rays emanating from a 3D scene in various directions; when properly captured, they provide a means of projecting depth and parallax cues on 3D displays. However, due to the limited aperture size and the constrained spatio-angular sampling of many light field capture systems (e.g. plenoptic cameras), the displayed light fields provide only a narrow viewing zone in which parallax views can be supported. In addition, the autostereoscopic display device may have an unmatched spatio-angular resolution (e.g. an integral display) or a different architecture (e.g. a multi-panel display) from the capturing plenoptic system, which requires careful engineering between the capture and display stages.

This talk presents an efficient light field retargeting pipeline for integral and multi-panel displays that provides controllable, enhanced-parallax content. This is accomplished by slicing the captured light fields according to their depth content, boosting the parallax, and merging these slices with data filling. For integral displays, the synthesized views are simply resampled and reordered to create elemental images that, beneath a lenslet array, collectively create a multi-view rendering. For multi-panel displays, additional processing steps are needed to achieve seamless transitions across depth panels and viewing angles, where displayed views are synthesized and aligned dynamically according to the position of the viewer. The retargeting technique is simulated and verified experimentally on actual integral and multi-panel displays.
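The slice-shift-merge step of the pipeline can be sketched in a few lines. The uniform depth binning, the linear disparity boost, and the omission of hole filling ("data filling" in the abstract) are simplifying assumptions for illustration, not the talk's actual implementation.

```python
import numpy as np

def retarget_view(view, depth, boost, n_slices=4):
    """Illustrative parallax boosting for a single light-field view: bin the
    image by depth, shift each bin horizontally in proportion to its boosted
    disparity, and merge far-to-near so near content occludes far content."""
    h, w = view.shape
    out = np.zeros_like(view)
    edges = np.linspace(depth.min(), depth.max() + 1e-9, n_slices + 1)
    for i in reversed(range(n_slices)):              # farthest slice first
        in_slice = (depth >= edges[i]) & (depth < edges[i + 1])
        shift = int(round(boost * (n_slices - 1 - i)))  # nearer slices move more
        mask = np.roll(in_slice, shift, axis=1)
        out[mask] = np.roll(np.where(in_slice, view, 0.0), shift, axis=1)[mask]
    return out
```

Running this over every view in the captured light field, with the shift scaled per view position, yields the enhanced-parallax views that are then resampled into elemental images (integral display) or distributed across panels (multi-panel display).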

Date and Time: 
Wednesday, October 25, 2017 - 4:30pm
Venue: 
Packard 101


SmartGrid

SmartGrid Seminar: Johanna Mathieu

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, March 1, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Daniel Kirschen

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 22, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Anthony Rowe

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 8, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Deepak Divan

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 1, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Adam Wierman

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, January 18, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar welcomes Saurabh Amin

Topic: 
TBA
Abstract / Description: 

The seminars are scheduled for 1:30 pm on the dates listed above. The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions
to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.


Yours sincerely,

Smart Grid Seminar Organization Team,

Ram Rajagopal, Assistant Professor, Civil & Environmental Engineering, and Electrical Engineering
Chin-Woo Tan, Director, Stanford Smart Grid Lab
Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering
Emre Kara, Associate Staff Scientist, SLAC National Accelerator Laboratory

Date and Time: 
Thursday, November 16, 2017 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Optimization, Inference and Learning for District-Energy Systems

Topic: 
Optimization, Inference and Learning for District-Energy Systems
Abstract / Description: 

We discuss how Optimization, Inference and Learning (OIL) methodology is expected to reshape future demand-response technologies acting across interdependent energy infrastructures, i.e. power, natural gas, and heating/cooling, at the district/metropolitan/distribution level. We describe a hierarchy of deterministic and stochastic planning and operational problems emerging in the context of physical flows over networks governed by the laws of electricity, gas, fluid, and heat mechanics. We proceed to illustrate the development and challenges of the physics-informed OIL methodology with examples of: a) a graphical-models approach applied to a broad spectrum of energy flow problems, including online reconstruction of grid topology from measurements; b) direct and inverse dynamical problems for timely delivery of services in district heating/cooling systems; c) ensemble control of phase-space cycling energy loads via Markov Decision Processes (MDPs) and related reinforcement learning approaches.

Date and Time: 
Thursday, November 2, 2017 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Emerging Technologies and Their Impact on the Grid

Topic: 
Emerging Technologies and Their Impact on the Grid
Abstract / Description: 

As rooftop solar, electric vehicles, and residential battery storage become increasingly commonplace, they can have significant impacts on the way the energy grid operates. By embracing these new technologies, PG&E is helping to create a vision for what a next-generation energy company will look like, and seeking to answer key questions such as: Is energy storage changing the way utilities operate the grid? What is needed for new technologies, such as residential battery energy storage, to go mainstream? What are some of the key factors driving the inevitable transition from a one-way grid to a two-way grid?

This presentation will focus on both the technology changes happening in the energy space and some of the technology advancements helping to reshape how the energy grid engages with these changes. It will explore these topics through a case study of a recent pilot project in which PG&E, Tesla, GE, and Green Charge teamed up in San Jose to demonstrate how battery storage and rooftop solar connected to smart inverters can support the electric grid during periods of high demand while providing participating residents and businesses with backup power and bill reductions. The project is a microcosm of what the grid will look like in the near future with the rapid adoption of distributed energy resources such as solar, battery storage, and EVs.


 

The seminars are scheduled for 1:30 pm on the dates listed above. The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions
to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

 


Date and Time: 
Thursday, November 9, 2017 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Smart Distribution Systems Research at Future Renewable Electric Energy Delivery and Management Systems Center

Topic: 
Smart Distribution Systems Research at Future Renewable Electric Energy Delivery and Management Systems Center
Abstract / Description: 

This talk will first highlight the challenges associated with upgrading the current electric power distribution system infrastructure toward a smart distribution system that can accommodate high levels of distributed energy resources (DERs). Then, an overview of the research efforts that have been undertaken at the FREEDM center will be provided. The focus will be on the new monitoring and control methods needed for future smart distribution systems.

Date and Time: 
Thursday, October 12, 2017 - 1:30pm
Venue: 
Y2E2 111

Design, stability and control of ad-hoc microgrids [SmartGrid Seminar]

Topic: 
Design, stability and control of ad-hoc microgrids
Abstract / Description: 

Microgrids are a promising and viable solution for integrating distributed generation resources into future power systems. Like large-scale power systems, microgrids are prone to a range of instability mechanisms and are naturally fragile with respect to disturbances. However, existing planning and operation practices employed in large-scale transmission grids usually cannot be downscaled to small low-voltage microgrids. This talk will discuss the concept of ad-hoc microgrids that allow for arbitrary interconnection and switching with guaranteed stability. Although the problem of microgrid stability and control has received a lot of attention in recent years, the vast majority of existing works have assumed that the network configuration is given and fixed. Moreover, only a few works have accounted for the electromagnetic delays that will be shown to play a critical role in the context of stability.

The talk will introduce a new mathematical framework for the characterization and certification of stability in an ad-hoc setting and derive formal design constraints for both DC and AC networks. In the context of low-voltage DC networks, the corresponding derivations will employ Brayton–Moser potential theory and result in simple conditions on load capacitances that guarantee both small-signal and transient stability. For AC microgrids, singular perturbation analysis will be used to derive simple relations for the droop coefficients of neighboring networks. The talk will conclude with a discussion of key open problems and challenges.

Date and Time: 
Wednesday, June 28, 2017 - 1:30pm
Venue: 
Y2E2 101


Stanford's NetSeminar

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with a choice between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.
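To convey the flavor of matching encrypted traffic against rules without decryption, here is a deliberately simplified sketch: keyword tokens are computed with a keyed hash (HMAC) under a session key shared by the endpoints, so the middlebox can compare tokens without learning the payload. BlindBox's real DPIEnc protocol is considerably more sophisticated (for instance, it randomizes repeated tokens and uses garbled circuits so the middlebox never learns the key); the names, keys, and keywords below are illustrative.

```python
import hmac
import hashlib

def token(key: bytes, word: bytes) -> bytes:
    # Deterministic keyed token for one keyword; a stand-in for BlindBox's
    # searchable encryption scheme.
    return hmac.new(key, word, hashlib.sha256).digest()

session_key = b"per-session key agreed by the endpoints"

# The rule set is tokenized under the session key (obliviously in the real
# protocol, so the rule generator never learns the key itself).
rules = {token(session_key, w) for w in [b"attack.exe", b"exfil"]}

# The sender emits keyword tokens alongside the normally encrypted stream.
payload_tokens = [token(session_key, w) for w in b"GET exfil data".split()]

# The middlebox matches tokens against rules without decrypting anything.
alerts = [t for t in payload_tokens if t in rules]
```

The key property is that the middlebox sees only pseudorandom tokens: it learns whether a rule keyword occurred, but nothing else about the payload.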

Date and Time: 
Wednesday, November 11, 2015 - 12:15pm to 1:30pm
Venue: 
Packard 202

NetSeminar

Topic: 
Precise localization and high throughput backscatter using WiFi signals
Abstract / Description: 

Indoor localization holds great promise to enable applications like location-based advertising, indoor navigation, inventory monitoring and management. SpotFi is an accurate indoor localization system that can be deployed on commodity WiFi infrastructure. SpotFi only uses information that is already exposed by WiFi chips and does not require any hardware or firmware changes, yet achieves the same accuracy as state-of-the-art localization systems.

We then present BackFi, a novel communication system that enables high-throughput, long-range communication between very low power backscatter IoT sensors and WiFi APs, using ambient WiFi transmissions as the excitation signal. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 m and 1 Mbps at a range of 5 m. This performance is one to three orders of magnitude better than the best known prior WiFi backscatter system.

Date and Time: 
Thursday, October 15, 2015 - 12:15pm to 1:30pm
Venue: 
Gates 104

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with a choice between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.

Date and Time: 
Wednesday, October 7, 2015 - 12:15pm to 1:30pm
Venue: 
AllenX Auditorium


Statistics and Probability Seminars

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g., short duration, small scale, unrepresentative subjects, simplistic designs) that limit their external validity. In this talk I describe how the web in general, and crowdsourcing sites like Amazon's Mechanical Turk in particular, allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of their limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Statistics Seminar

Topic: 
Brownian Regularity for the Airy Line Ensemble
Abstract / Description: 

The Airy line ensemble is a positive-integer-indexed, ordered system of continuous random curves on the real line whose finite-dimensional distributions are given by the multi-line Airy process. It is a natural object in the KPZ universality class: for example, its highest curve, the Airy2 process, describes, after the subtraction of a parabola, the limiting law of the scaled weight of a geodesic running from the origin to a variable point on an anti-diagonal line in such problems as Poissonian last passage percolation. The Airy line ensemble enjoys a simple and explicit spatial Markov property, the Brownian Gibbs property.


In this talk, I will discuss how this resampling property may be used to analyse the Airy line ensemble. Arising results include a close comparison between the ensemble's curves after affine shift and Brownian bridge. The Brownian Gibbs technique is also used to compute the value of a natural exponent describing the decay in probability for the existence of several near geodesics with common endpoints in Brownian last passage percolation, where the notion of "near" refers to a small deficit in scaled geodesic weight, with the parameter specifying this nearness tending to zero.

Date and Time: 
Monday, September 26, 2016 - 4:30pm
Venue: 
Sequoia Hall, room 200

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

Probability Seminar

Topic: 
Upper tails and independence polynomials in sparse random graphs
Abstract / Description: 

The upper tail problem in the Erdős–Rényi random graph G ∼ G(n,p) is to estimate the probability that the number of copies of a graph H in G exceeds its expectation by a factor 1 + δ. Even for the case of triangles, the order of the exponent of the tail probability was a long-standing open problem until fairly recently, when it was solved by Chatterjee (2012) and, independently, by DeMarco and Kahn (2012). Recently, Chatterjee and Dembo (2014) showed that in the sparse regime, the logarithm of the tail probability reduces to a natural variational problem on the space of weighted graphs. In this talk we derive the exact asymptotics of the tail probability by solving this variational problem for any fixed graph H. As it turns out, the leading-order constant in the large deviation rate function is governed by the independence polynomial of H.


This is based on joint work with Shirshendu Ganguly, Eyal Lubetzky, and Yufei Zhao.


 

The Probability Seminars are held in Sequoia Hall, Room 200, at 4:30pm on Mondays. Refreshments are served at 4pm in the Lounge on the first floor.

Date and Time: 
Monday, January 11, 2016 - 4:30pm to 5:30pm
Venue: 
Sequoia Hall, Room 200

Probability Seminar

Topic: 
The Yang–Mills free energy
Abstract / Description: 

The construction of four-dimensional quantum Yang–Mills theories is a central open question in mathematical physics, famously posed as one of the Millennium Prize Problems by the Clay Institute. While much progress has been made on the two-dimensional problem, the techniques mostly break down in dimensions three and four. In this talk I will present a partial advance on this question, taking the program one step beyond the results proved in the 1980s.


 


Date and Time: 
Monday, January 25, 2016 - 4:30pm to 5:30pm
Venue: 
Sequoia Hall, Room 200


SystemX

SystemX Seminar: Nanoscale MOSFET Modeling for the Design of Low-power Analog and RF Circuits

Topic: 
Nanoscale MOSFET Modeling for the Design of Low-power Analog and RF Circuits
Abstract / Description: 

The emergence of the Internet of Things (IoT) places stringent requirements on energy consumption and has therefore become the primary driver for low-power analog and RF circuit design. Implementing increasingly complex functions under tightly constrained power and area budgets, while circumventing the challenges posed by modern device technologies, makes analog and RF circuit design ever more demanding. Guidance for navigating this multi-variable design space is therefore invaluable to the designer.

This talk presents low-power analog and RF design techniques that can be applied from the device level to the circuit level. It starts by presenting the concept of the inversion coefficient (IC) as an essential design parameter that spans the entire range of operating points, from weak through moderate to strong inversion. Several figures of merit (FoMs), including Gm/ID, fT, and their product Gm·fT/ID, which capture the various trade-offs encountered in analog and RF circuit design, are presented. The simplicity of the IC-based models is emphasized and compared against measurements from 40- and 28-nm bulk CMOS processes and BSIM6 simulations. Finally, a simple technique for extracting the basic model parameters from measurements or simulations is described.
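The IC-centered view of the design space can be sketched in a few lines. The interpolation below is the standard EKV-style expression for transconductance efficiency; the default slope factor n and thermal voltage U_T are illustrative assumptions, not values from the talk:

```python
import math

def gm_over_id(ic, n=1.3, ut=0.0259):
    """EKV-style continuous interpolation of transconductance efficiency
    g_m/I_D across weak, moderate, and strong inversion.

    ic : inversion coefficient IC = I_D / I_spec
    n  : slope factor (process dependent, roughly 1.2-1.4; assumed here)
    ut : thermal voltage kT/q at room temperature [V]
    """
    return 1.0 / (n * ut * (math.sqrt(ic + 0.25) + 0.5))

# Weak inversion (IC << 1) approaches the bipolar-like limit 1/(n*U_T);
# strong inversion (IC >> 1) rolls off as 1/sqrt(IC).
for ic in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"IC = {ic:6.2f}  gm/ID = {gm_over_id(ic):5.1f} 1/V")
```

Sweeping IC like this is how a designer locates the weak-inversion efficiency ceiling and the strong-inversion roll-off before committing to a bias point.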

Date and Time: 
Thursday, February 15, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Advanced SAR ADCs – Efficiency, Accuracy, Calibration and References

Topic: 
Advanced SAR ADCs – Efficiency, Accuracy, Calibration and References
Abstract / Description: 

This talk will discuss several recent techniques that were developed in the context of SAR ADCs. The presentation will show a few design examples with different performance targets. The first topic deals with minimizing power consumption while aiming to increase accuracy by means of linearization and noise reduction techniques. The second topic is about efficient calibration techniques for SAR ADCs. The last part describes a method to co-integrate the reference buffer with the SAR ADC.
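To fix ideas before the talk's refinements, the basic successive-approximation loop can be sketched as follows; this is an idealized model with a perfect comparator, DAC, and reference, with none of the noise, mismatch, or reference-buffer effects the talk addresses:

```python
def sar_convert(vin, vref, bits):
    """Idealized successive-approximation register (SAR) conversion:
    binary-search the DAC code whose output best matches vin."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)            # tentatively set the next bit
        vdac = vref * trial / (1 << bits)  # ideal DAC output for this code
        if vin >= vdac:                    # comparator decision
            code = trial                   # keep the bit
    return code

# 8-bit conversion of 0.30 V against a 1.0 V reference.
print(sar_convert(0.30, 1.0, 8))  # → 76
```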

Date and Time: 
Friday, February 9, 2018 - 4:00pm
Venue: 
Allen 101X

SystemX Seminar: Using the Stanford Driving Simulator for Human Machine Interaction Studies

Topic: 
Using the Stanford Driving Simulator for Human Machine Interaction Studies
Abstract / Description: 

The driving simulator at Stanford is used for human-in-the-loop, human-machine interaction (HMI) driving studies. Many of these studies focus on shared control between humans and autonomous systems. The simulator’s toolset collects objective driving-behavior data directly from the simulator, along with data streams from eye trackers, cameras, and other physiological sensors that we employ to understand human responses to myriad circumstances in the simulated environment. This presentation will describe the hardware and software behind the driving studies, outline what is possible, and survey similar labs at other universities.

Date and Time: 
Thursday, January 25, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Programmable and Smart Silicon Interposers for 3D Chip Stacks

Topic: 
Programmable and Smart Silicon Interposers for 3D Chip Stacks
Abstract / Description: 

With increasing demands for computation and the slowdown of CMOS scaling, alternative methods for further miniaturization of electronics are gaining momentum. Heterogeneous integration (HI) of chips from various manufacturing lines onto a silicon interposer is a newly recognized approach that has been used in a number of high-performance applications. However, these 3D-IC chip stacks are time-consuming to develop and application-specific, resulting in prohibitive costs.

Similar cost issues have been addressed before by field-programmable gate arrays (FPGAs). In an analogous fashion, programmable silicon interposers open new possibilities for design reuse of silicon across multiple applications, resulting in cost savings and time-to-market advantages. Programmable reuse of silicon interposers also enables just-in-time manufacturing, producing several smaller lots with a high mix of components simultaneously.

In addition, programmable silicon interposers for 3D stacking allow system-level control of functions that can be embedded in the interposer, such as power management, built-in self-test, and manufacturing defect repair. Power management techniques previously applied to single-chip solutions can be re-architected to achieve higher system-level efficiency in these 3D chip stacks. We will demonstrate one such system built with a smart, programmable silicon interposer from zGlue, the first commercial implementation of a product in this category. This technology will help proliferate Internet of Things (IoT) devices, give product designers a broader array of choices, and accelerate the spread of ultra-small-form-factor electronics in the healthcare, industrial, and consumer spaces.

Date and Time: 
Thursday, January 18, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Smart Internet Connections: Your internet connection’s use of artificial intelligence and machine learning

Topic: 
Smart Internet Connections: Your internet connection’s use of artificial intelligence and machine learning
Abstract / Description: 

The next generation of internet communication has many uses for machine learning. This talk will review some of the applications for, and types of, 5th-generation converged software-defined communication networks, including the important access links to all users/consumers and devices/things, upon which humanity increasingly and crucially depends. The general problem well addressed by communications theory is the inference, from a large set of data (sometimes called the "channel output"), of a desired/intended conclusion (sometimes called the "channel input" or the data "transmitted"); this is also known as "decoding." Many learning systems, such as search engines, disease detection, and facial recognition, are forms of this "decoding." Many "machine learning" methods can be recast in this more general setting and then reused to further advance the art of next-generation communication. The talk will encourage further investigation into both this "learning" and the advancement of the future networks that will increasingly connect us all. Some of these topics will be examined further in EE392AA (spring quarter), which can be used for the EE MS Communications depth sequence.
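The "decoding as inference" framing can be made concrete with the simplest possible example, a repetition code over a binary symmetric channel, where maximum-likelihood decoding reduces to a majority vote; this toy sketch is ours, not the speaker's:

```python
from collections import Counter

def ml_decode_repetition(received):
    """Maximum-likelihood decoding of a binary repetition code over a
    memoryless binary symmetric channel (crossover probability < 1/2):
    infer the transmitted bit by majority vote over the received bits."""
    counts = Counter(received)
    return max(counts, key=counts.get)

# A transmitted 0 repeated five times, with two bit flips on the channel:
print(ml_decode_repetition([0, 1, 0, 1, 0]))  # → 0
```

A search engine or classifier does the same thing at a larger scale: it infers a hidden "input" (intent, diagnosis, identity) from a noisy "output" (query, symptoms, image).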

Date and Time: 
Thursday, January 11, 2018 - 4:30pm
Venue: 
Y2E2 111
