Seminar / Colloquium

Special Seminar: Molecular imaging at cellular resolution in vivo using Optical Coherence Tomography

Topic: 
Molecular imaging at cellular resolution in vivo using Optical Coherence Tomography
Abstract / Description: 

Molecular imaging offers a unique real-time view of biochemical processes taking place inside a living subject, typically at the cost of either spatial resolution or depth of tissue penetration. A holy grail would be to molecularly image live tissues at cellular resolution over a large 3D field of view, showing the molecular behavior of billions of cells in real time. In this presentation, we will share our approach for accomplishing that. Optical Coherence Tomography (OCT) enables real-time structural imaging of living tissues with cellular resolution over large 3D fields of view. However, functional and molecular capabilities for OCT remain elusive due to the difficulty of distinguishing exogenous contrast agents from intrinsic tissue scattering and absorption. We developed a variety of gold nanoparticles as contrast agents, along with new hardware and algorithmic approaches, to enable highly sensitive detection of these gold particles in live animals. We will show the versatility of this imaging method in a variety of applications. For example, we used this method to study the spatiotemporal behavior of lymphatic valves and their influence on lymphatic drainage. In another study, we labeled tumor-associated macrophages with the gold nanoparticles and tracked their movement in a brain tumor animal model. Finally, we will also describe a recent method we developed to enhance the quality of OCT images by eliminating speckle noise. We will show its utility in delineating the margins of brain tumors from healthy brain in both animal models and human brain cancer patients.

Date and Time: 
Thursday, February 28, 2019 - 4:00pm
Venue: 
Alway Building, Room M114

IEEE IT Society, Santa Clara Valley Chapter presents "Deep knockoff machines for replicable selections"

Topic: 
Deep knockoff machines for replicable selections
Abstract / Description: 

The Santa Clara Valley Chapter of the IEEE Information Theory Society sponsors this event. Please register if you plan to attend. Registration is not required for attendance, but it helps with planning a successful event.

This is a follow-up talk to the 2019 Kailath lecture. The speaker, Yaniv Romano, is one of Emmanuel Candes' postdoctoral researchers. He will go into more detail on model-X knockoffs.



Model-X knockoffs is a new statistical tool that reliably selects which of many potential explanatory variables of interest (e.g., the presence or absence of a mutation) are truly associated with the response under study (e.g., the risk of developing a specific form of cancer). This framework can deal with very complex statistical models; in fact, they may be so complex that they can be treated as black boxes. The idea is to construct fake variables, called knockoffs, which obey crucial exchangeability properties with the real variables we wish to assay, so that they can be used as negative controls. To leverage the full power of this framework, however, we need flexible tools to construct knockoffs from sampled data. This talk presents a machine that can produce knockoffs for arbitrary and unspecified data distributions, using deep generative models. The main idea is to iteratively refine a knockoff sampling mechanism until a criterion measuring the validity of the produced knockoffs is minimized. Extensive numerical experiments and quantitative tests confirm the generality, effectiveness, and power of our approach. The result is a model-free variable selection method, and we present an application to the study of mutations linked to changes in drug resistance in the human immunodeficiency virus.
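The selection step of the knockoff filter can be sketched in a few lines. The example below is a toy illustration, not the deep generative machine the talk describes: it assumes i.i.d. Gaussian features (for which an independent redraw from the same distribution is a valid knockoff copy), uses a simple marginal-correlation statistic, and applies the knockoff+ threshold to control the false discovery rate. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 2000, 50, 5          # samples, features, true signals

# With i.i.d. Gaussian features, an independent redraw is a valid
# knockoff copy (the exchangeability property holds trivially).
X = rng.standard_normal((n, p))
X_ko = rng.standard_normal((n, p))

beta = np.zeros(p)
beta[:k] = 1.0                 # only the first k features affect y
y = X @ beta + rng.standard_normal(n)

# Knockoff statistic: a real feature "beats" its knockoff when W_j > 0.
W = np.abs(X.T @ y) - np.abs(X_ko.T @ y)

# Knockoff+ threshold: smallest t whose estimated FDP is at most q.
q = 0.2
thresh = np.inf
for t in np.sort(np.abs(W)):
    fdp = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
    if fdp <= q:
        thresh = t
        break

selected = np.where(W >= thresh)[0]   # indices of selected variables
```

With this strong synthetic signal, the selected set recovers the truly active features while the symmetric null statistics are filtered out by the threshold.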



Date and Time: 
Wednesday, March 20, 2019 - 6:30pm
Venue: 
Packard 202

ICME presents Introduction to Active Learning

Topic: 
Introduction to Active Learning
Abstract / Description: 

The greatest challenge when building a high-performance model isn't choosing the right algorithm or tuning hyperparameters: it is getting high-quality labeled training data. Without good data, no algorithm, even the most sophisticated one, will deliver the results needed for real-life applications. And with most modern algorithms (such as deep learning models) requiring huge amounts of data to train, things aren't going to get better any time soon.

Active Learning is one possible solution to this dilemma, yet it is, quite surprisingly, left out of most data science conferences and computer science curricula. By the end of this presentation, you will understand the importance of Active Learning and how to apply it to make AI work in the real world.
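As a rough illustration of the pool-based uncertainty-sampling loop at the heart of Active Learning (a toy sketch, not material from the talk): the learner repeatedly fits a model on its labeled set, then pays an oracle to label the pool point it is least certain about. Here the "model" is a 1-D threshold and uncertainty is distance to that threshold; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled pool: 1-D points whose true label is 1 iff x > 0.
pool = rng.uniform(-1.0, 1.0, 500)
labels = (pool > 0).astype(int)        # the oracle, consulted one query at a time

# Seed with one labeled example per class (the two extreme points).
labeled = [int(np.argmin(pool)), int(np.argmax(pool))]

def fit_threshold(idx):
    """Toy learner: decision boundary = midpoint of the two class means."""
    xs, ys = pool[idx], labels[idx]
    return (xs[ys == 0].mean() + xs[ys == 1].mean()) / 2.0

for _ in range(10):                    # budget of 10 label queries
    t = fit_threshold(labeled)
    candidates = [i for i in range(len(pool)) if i not in labeled]
    # Uncertainty sampling: query the point closest to the current boundary.
    query = min(candidates, key=lambda i: abs(pool[i] - t))
    labeled.append(query)              # "pay" the oracle for one label

final_t = fit_threshold(labeled)
accuracy = float(np.mean((pool > final_t).astype(int) == labels))
```

With only 12 labels out of 500, the queried points cluster near the true boundary at 0, so the learned threshold classifies most of the pool correctly; labeling the same budget of points at random would typically do worse.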

Date and Time: 
Monday, February 25, 2019 - 4:30pm
Venue: 
Building 200, Room 305

IT Forum presents "Sub-packetization of Minimum Storage Regenerating codes"

Topic: 
Sub-packetization of Minimum Storage Regenerating codes: A lower bound and a work-around
Abstract / Description: 

Modern cloud storage systems need to store vast amounts of data in a fault-tolerant manner, while also preserving data reliability and accessibility in the wake of frequent server failures. Traditional MDS (Maximum Distance Separable) codes provide the optimal trade-off between redundancy and the number of worst-case erasures tolerated. Minimum storage regenerating (MSR) codes are a special sub-class of MDS codes that provide mechanisms for exact regeneration of a single code-block by downloading the minimum amount of information from the remaining code-blocks. As a result, MSR codes are attractive for use in distributed storage systems to ensure node repairs with optimal repair bandwidth. However, all known constructions of MSR codes require large sub-packetization levels (a measure of the granularity into which a single vector codeword symbol must be divided for efficient repair). This restricts the applicability of MSR codes in practice.

This talk will present a lower bound showing that exponentially large sub-packetization is inherent for MSR codes. We will also propose a natural relaxation of MSR codes that allows one to circumvent this lower bound, and present a general approach to constructing MDS codes that significantly reduces the required sub-packetization level at the cost of slightly higher repair bandwidth compared to MSR codes.

The lower bound is joint work with Omar Alrabiah, and the constructions are joint work with Ankit Rawat, Itzhak Tamo, and Klim Efremenko.

Date and Time: 
Wednesday, February 20, 2019 - 2:00pm
Venue: 
Gates 463A

IT Forum presents "Adapting Maximum Likelihood Theory in Modern Applications"

Topic: 
Adapting Maximum Likelihood Theory in Modern Applications
Abstract / Description: 

Maximum likelihood estimation (MLE) is influential because it can be easily applied to generate optimal, statistically efficient procedures for broad classes of estimation problems. Nonetheless, the theory does not apply to modern settings --- such as problems with computational, communication, or privacy considerations --- where our estimators have resource constraints. In this talk, I will introduce a modern maximum likelihood theory that addresses these issues, focusing specifically on procedures that must be computationally efficient or privacy-preserving. To do so, I first derive analogues of Fisher information for these applications, which allows a precise characterization of tradeoffs between statistical efficiency, privacy, and computation. To complete the development, I also describe a recipe that generates optimal statistical procedures (analogues of the MLE) in the new settings, showing how to achieve the new Fisher information lower bounds.

Date and Time: 
Friday, February 22, 2019 - 1:15pm
Venue: 
Packard 202

SystemX Seminar presents "What’s Holding Back the Digital Technology Revolution in the Clinical Development of Drugs and Biologics?"

Topic: 
What’s Holding Back the Digital Technology Revolution in the Clinical Development of Drugs and Biologics?
Abstract / Description: 

New technologies are revolutionizing clinical trials and transforming drug and biologic development in unprecedented ways. In the process, digital data capture technologies allow for more data types and greater volume, improved accuracy and precision, and reductions in variance. They are also opening a window for capturing "real-world" data, minimizing patient inconvenience, increasing patient compliance, and decreasing site-monitoring costs. These new technologies are helping to transform the concept of what should constitute a medicine, so that new types of tech-enabled therapeutics will advance both how medical treatments are delivered and how clinical trials are performed.

There are many goals associated with using the new digital technologies in clinical trials. Among these are using wearable biosensors (or "unawearables") and "smart delivery systems" to collect large amounts of new types of data, as well as to analyze them in sophisticated ways in "near real-time." These approaches are helping to transform the mechanics of clinical research so that site-less trials are now becoming possible in some therapeutic areas. The pace of ongoing evolution has been rapid, and according to the editors of Applied Clinical Trials, "the ability to amalgamate, organize and analyze real-world data sets with structured data sets... can provide new, provable indications of hidden relationships that can precipitate better support, new hypotheses and provoke new, potentially life-saving questions that we didn't think to ask before."

Pharmaceutical companies are attempting to leverage the newer digital technologies to help develop targeted therapeutics faster, more efficiently, and more cheaply than ever before. In many cases, the technologies to do so have already been developed. Yet, to date, they are not being used at scale. So, why isn't the world of clinical trials changing for the better more rapidly? The answer is the need for clinical validation (and qualification) of what amounts to a new world of digital biomarkers. As with wet biomarkers, these must be rigorously validated both technically and clinically to ensure that they are proper indicators of meaningful clinical changes – ones that are both useful and robust.

One of the current challenges faced by pharmaceutical companies is the intrinsic mismatch in cycle times between the development of high-assurance "smart medical devices" and overall drug development processes. As an example, consumer devices/software can typically be developed in 6-18 months (or less). For devices with medical-grade functions, development times can range up to 3-5 years, with iterative device upgrades occurring every 1-2 years. In contrast, drug/biologic development programs generally follow a more conservative and risk-averse strategy. The resulting timelines can be lengthy due to the lifecycle/biology of the disease under consideration, and they often require up to 6-8 years (i.e., patients must be recruited and retained in clinical trials to assess differences in hard clinical outcomes).

In general, medical device development is iterative and guided by Design Control and Quality System principles that foster rapid engineering processes and quick development times. To accelerate these processes further, medical device manufacturers have attempted to adopt agile development paradigms that retain the necessary compliance characteristics from a regulatory standpoint. This faster pace introduces a complication: Numerous versions of a medical device may be developed over the course of a single drug development program, resulting in the need to re-qualify and cross-validate the outputs of the devices to ensure that their data are consistent. These must then be mated to analytical paradigms that are fit-for-purpose and qualified to assess clinically meaningful differences and treatment effects. Lastly, the regulatory landscape for "digital medical devices" is evolving in different and non-equivalent ways in the United States, EU, and Japan.

In summary, currently available digital technologies represent a likely tipping point for the way that clinical trials of the future will be conducted. The challenge is to determine how to deploy currently available technology sensibly in order to advance the clinical trial process and existing drug pipelines faster.

The speaker, Gerard G. Nahum, MD (Vice President of Clinical Development at Bayer Pharmaceuticals, Head of Medical Devices & eHealth), will address these various issues from a pharmaceutical industry perspective and outline the challenges that explain why we haven't yet seen adoption of these new digital technologies in clinical trials at scale.

Date and Time: 
Thursday, February 21, 2019 - 4:30pm
Venue: 
Bldg. 380 Rm. 380X

SystemX presents Quantum Computing with Spins in Silicon

Topic: 
Quantum computing with spins in silicon
Abstract / Description: 

The realization of quantum computers will provide a new and unprecedented computing resource that could significantly impact many industries ranging from medicine to artificial intelligence. Recently, the field of quantum computing has been transitioning from experimental demonstrations of quantum bits (qubits) to engineering larger-scale quantum systems with the aid of industry. Electron spin qubits in silicon are an excellent candidate for this purpose as they can be made using transistor-like structures that are CMOS compatible, opening up the possibility of leveraging the semiconductor industry [1].

In this talk I will discuss the state of the art in the silicon spin qubit field, focusing on my recent work at TU Delft where we demonstrated a programmable two-qubit quantum processor that could perform the Deutsch-Jozsa and Grover search algorithms [2]. Moving to larger-scale qubit systems will require the ability to make reproducible qubit arrays, which is incredibly challenging for university clean rooms due to limited process control and slow turnaround. I will discuss how these issues are being addressed at Intel through the use of their industrial 300mm fabrication line and expertise in high-volume electrical testing.

[1] F. A. Zwanenburg et al. Silicon quantum electronics, Rev. Mod. Phys. 85, 961 (2013)

[2] T. F. Watson et al. A programmable two-qubit processor in silicon, Nature 555, 633-637 (2018)

Date and Time: 
Thursday, February 28, 2019 - 4:30pm
Venue: 
Bldg. 380 Rm. 380X

SystemX BONUS lecture: Secure and Spectrum-Aware Wireless Communications

Topic: 
Secure and Spectrum-Aware Wireless Communications: Challenges and Opportunities
Abstract / Description: 

The Internet of Things (IoT) is redefining how we interact with the world by supplying a global view based not only on human-provided data but also on human-device connected data. For example, in health care, IoT will bring decreased costs, improved treatment results, and better disease management. However, the connectivity-in-everything model brings heightened security concerns. Additionally, the projected growth of connected nodes not only increases security concerns, it also leads to a 1000-fold increase in wireless data traffic in the near future. This data storm results in spectrum scarcity, thereby driving the urgent need for shared spectrum access technologies. These security deficiencies and the wireless spectrum crunch require innovative system-level secure and scalable solutions.

This talk will introduce energy-efficient and application-driven system-level solutions for secure and spectrum-aware wireless communications. I will present an ultra-fast bit-level frequency-hopping scheme for physical-layer security. This scheme utilizes the frequency agility of devices in combination with time-interleaved radio frequency architectures and protocols to achieve secure wireless communications. To address the wireless spectrum crunch, future smart radio systems will evaluate the spectrum usage dynamically and opportunistically use the underutilized spectrum; this will require spectrum sensing for interferer avoidance. I will discuss a system-level approach using band-pass sparse signal processing for rapid interferer detection in a wideband spectrum to convert the abstract improvements promised by sparse signal processing theory, e.g., fewer measurements, to concrete improvements in time and energy efficiency. Beyond these system-level solutions, I will also discuss future research directions including secure package-less THz tags and ingestible micro-bio-electronic devices.


Date and Time: 
Friday, February 22, 2019 - 11:00am
Venue: 
Packard 202

EE380 Computer Systems Colloquium presents Karthik and Arushi

Topic: 
User Interface
Abstract / Description: 

TBA


The Stanford EE Computer Systems Colloquium (EE380) meets on Wednesdays 4:30-5:45 throughout the academic year. Talks are given before a live audience in Room 104 of the Shriram Building on the Stanford Campus. The live talks (and the videos hosted at Stanford and on YouTube) are open to the public.

Stanford students may enroll in EE380 to take the Colloquium as a one-unit S/NC class. Enrolled students are required to keep an electronic notebook or journal and to write a short, pithy comment about each of the ten lectures and a short free-form evaluation of the class in order to receive credit. Assignments are due at the end of the quarter, on the last day of examinations.

EE380 is a video class. Live attendance is encouraged but not required. We (the organizers) feel that watching the video is not a substitute for being present in the classroom. Questions are encouraged.

Many past EE380 talks are available on YouTube, see the EE380 Playlist.

Date and Time: 
Wednesday, March 13, 2019 - 4:30pm
Venue: 
Shriram 104

EE380 Computer Systems Colloquium presents Jamie Morgenstern

Topic: 
TBA
Abstract / Description: 

TBA



Date and Time: 
Wednesday, March 6, 2019 - 4:30pm
Venue: 
Shriram 104


Applied Physics / Physics Colloquium

AP483 Optics & Electronics Seminar presents Critical Fabulations: Reworking the Methods and Margins of Design

Topic: 
Critical Fabulations: Reworking the Methods and Margins of Design
Abstract / Description: 

AP 483 & AMO Seminar Series
Time:
4:15 pm, every Monday (Refreshments begin at 4 pm)


Speaker Daniela Rosner
Assistant Professor, Department of Human Centered Design & Engineering, University of Washington
Critical Fabulations: Reworking the Methods and Margins of Design

Date and Time: 
Monday, March 11, 2019 - 4:15pm
Venue: 
Spilker 232

AP483 Optics & Electronics Seminar presents Seeing is Believing: The Role of Materials in Painting Life

Topic: 
Seeing is Believing: The Role of Materials in Painting Life
Abstract / Description: 



Speaker Barbara Berrie
Head of Scientific Research Department, National Gallery of Art
Seeing is Believing: The Role of Materials in Painting Life

Date and Time: 
Monday, February 25, 2019 - 4:15pm
Venue: 
Spilker 232

AP483 Optics & Electronics Seminar presents Precision Chemical Sensing: Using techniques from Quantum Optics to reach part-per-trillion sensitivity in the field

Topic: 
Precision Chemical Sensing: Using techniques from Quantum Optics to reach part-per-trillion sensitivity in the field
Abstract / Description: 



Speaker Tony Miller
CEO, Entanglement Technologies
Precision Chemical Sensing: Using techniques from Quantum Optics to reach part-per-trillion sensitivity in the field

Date and Time: 
Monday, February 11, 2019 - 4:15pm
Venue: 
Spilker 232

AP483 Optics & Electronics Seminar presents Internal models and the neural control of prey interception

Topic: 
Internal models and the neural control of prey interception
Abstract / Description: 



Speaker Matthew Norcia
NRC Postdoctoral Fellow, JILA, University of Colorado at Boulder
Superradiance, enhanced cooling, and microscopic control with narrow-li


Date and Time: 
Monday, February 4, 2019 - 4:15pm
Venue: 
Spilker 232

Applied Physics/Physics Colloquium presents Pixels to Physics: The Promise and Challenges of Survey Cosmology

Topic: 
Pixels to Physics: The Promise and Challenges of Survey Cosmology
Abstract / Description: 

TBA



Wtr. Qtr. Colloq. committee: A. Linde (Chair), S. Kivelson, B. Lev, S. Zhang
Location: Hewlett Teaching Center, Rm. 201

Date and Time: 
Tuesday, March 5, 2019 - 4:15pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium presents New Perspectives on the Riemann Hypothesis

Topic: 
New Perspectives on the Riemann Hypothesis
Abstract / Description: 

The Riemann Hypothesis is widely considered the greatest unsolved problem in pure mathematics, conjectured over 160 years ago. Its importance lies in its many far-reaching implications for the distribution of prime numbers. In this colloquium, I will first review these well-known facts and history, with some emphasis on the appearance of Riemann's zeta function in physics, in particular in quantum statistical mechanics. I will then outline some recent work that offers a clear strategy towards proving the Riemann Hypothesis. One such strategy is based on an analogy with random walks and stochastic time series. As a concrete application of these ideas, I will explain how I calculated the googol-th Riemann zero.

This lecture is dedicated to Shoucheng Zhang. In discussions with him over the last year, I learned that he had a deep interest in and knowledge of this problem, and some interesting ideas about it.




Date and Time: 
Tuesday, February 26, 2019 - 4:15pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium presents The Road to Higher Tc Superconductivity

Topic: 
The Road to Higher Tc Superconductivity
Abstract / Description: 

Unravelling the mechanism of high-Tc superconductivity is one of the most challenging problems in physics. Equally challenging is enhancing the superconducting transition temperature Tc; at ambient conditions, Tc has stopped rising for almost 25 years, since 135 K was recorded for the Hg-based trilayer cuprate in 1993. However, in recent years, three different classes of high-Tc materials have shown signatures of higher Tc under extreme conditions: monolayer FeSe films deposited on a SrTiO3 substrate, H3S (and LaHx) under extremely high pressures, and YBCO pumped by c-axis-polarized THz light pulses. These indicate that there is room for Tc enhancement in the known high-Tc classes, specifically the cuprate and iron-based superconducting materials.

The first part of this colloquium addresses the question of why copper oxides and iron arsenides/selenides are special, and then suggests the possibility of finding a third class of high-Tc materials containing 3d transition-metal elements. The second part focuses on the hole-doped high-Tc cuprates and an attempt to search for a new type of cuprate material with structures more favorable for higher Tc. A candidate new cuprate is Ba2CuO4-y (and its Sr counterpart Sr2CuO4-y with Tc = 98 K), in which high-Tc superconductivity occurs in highly oxygen-deficient Cu-O planes with high hole density and a short apical oxygen distance. A possible structure of this new cuprate will be discussed.




Date and Time: 
Tuesday, February 19, 2019 - 4:15pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium welcomes John Doyle

Topic: 
Cold and Ultracold Molecules for Quantum Information and Particle Physics
Abstract / Description: 

TBA




Date and Time: 
Tuesday, February 12, 2019 - 4:15pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium presents "Adding Numbers and Shuffling Cards"

Topic: 
Adding Numbers and Shuffling Cards
Abstract / Description: 

Just like you folks, (some) mathematicians look at the world and try to make sense of it. For example, when numbers are added in the usual way, 'carries' occur. It is natural to ask, "How do the carries go?" How many carries are typical, and if we just had a carry, is it more (or less) likely that the next column will need a carry? It turns out that carries form a Markov chain with an "amazing" transition matrix. Surprisingly, this same matrix turns up in the analysis of shuffling cards (the "seven shuffles theorem"). I will explain the connection and links to all kinds of other parts of mathematics: for example, sections of generating functions, the Veronese imbedding, Foulkes characters, and Hopf algebras. The results "deform," and that is important in the analysis of casino "shelf shuffling machines."
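The Markov property of carries is easy to verify numerically. For two uniformly random base-10 digits, the chance of a carry out of a column is 45/100 when there is no incoming carry and 55/100 when there is one (an incoming carry lowers the required digit sum from 10 to 9). The short simulation below, offered as an independent sketch rather than part of the lecture, recovers both rows of the transition matrix:

```python
import random

random.seed(0)

# Column-by-column addition of two uniformly random base-10 digit streams.
# The carry into each column is 0 or 1; tally transitions to estimate
# P(carry out | carry in) and compare with the exact values 45/100, 55/100.
trials = 200_000
counts = {0: [0, 0], 1: [0, 0]}        # counts[c_in][c_out]

carry = 0
for _ in range(trials):
    column_sum = random.randrange(10) + random.randrange(10) + carry
    out = 1 if column_sum >= 10 else 0
    counts[carry][out] += 1
    carry = out

p_carry_given_none = counts[0][1] / sum(counts[0])   # exact: 0.45
p_carry_given_carry = counts[1][1] / sum(counts[1])  # exact: 0.55
```

Note that a carry makes the next carry more likely (0.55 versus 0.45), which is exactly the dependence that makes the process a nontrivial Markov chain.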




Date and Time: 
Tuesday, February 5, 2019 - 4:15pm
Venue: 
Hewlett 201


CS300 Seminar

John G. Linvill Distinguished Seminar on Electronic Systems Technology

Topic: 
Internet of Things and Internet of Energy for Connecting at Any Time and Any Place
Abstract / Description: 

In this presentation, I would like to discuss with you how to establish a sustainable and smart society through the Internet of Energy, for connecting at any time and any place. You may have heard the phrase "Internet of Energy" less often. Its meaning is simple: with a ubiquitous energy transmission system, you do not need to worry about a shortage of electric power. One of the most important items for establishing a sustainable society is [...]


"Inaugural Linvill Distinguished Seminar on Electronic Systems Technology," EE News, July 2018


Date and Time: 
Monday, January 14, 2019 - 4:30pm
Venue: 
Hewlett 200

Special Seminar: Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design

Topic: 
Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design
Abstract / Description: 

Cyber-physical systems (CPS) are computational systems tightly integrated with physical processes. Examples include modern automobiles, fly-by-wire aircraft, software-controlled medical devices, robots, and many more. In recent times, these systems have exploded in complexity due to the growing amount of software and networking integrated into physical environments via real-time control loops, as well as the growing use of machine learning and artificial intelligence (AI) techniques. At the same time, these systems must be designed with strong verifiable guarantees.

In this talk, I will describe our research explorations at the intersection of machine learning and formal methods that address some of the challenges in CPS design. First, I will describe how machine learning techniques can be blended with formal methods to address challenges in specification, design, and verification of industrial CPS. In particular, I will discuss the use of formal inductive synthesis (algorithmic synthesis from examples with formal guarantees) for CPS design. Next, I will discuss how formal methods can be used to improve the level of assurance in systems that rely heavily on machine learning, such as autonomous vehicles using deep learning for perception. Both theory and industrial case studies will be discussed, with a special focus on the automotive domain. I will conclude with a brief discussion of the major remaining challenges posed by the use of machine learning and AI in CPS.

Date and Time: 
Monday, December 4, 2017 - 4:00pm
Venue: 
Gates 463A

SpaceX's journey on the road to Mars

Topic: 
SpaceX's journey on the road to Mars
Abstract / Description: 

SSI will be hosting Gwynne Shotwell, President and COO of SpaceX, to discuss SpaceX's journey on the road to Mars. The event will be on Wednesday, Oct 11th, from 7pm to 8pm in Dinkelspiel Auditorium. After the talk, there will be a Q&A session hosted by Steve Jurvetson of DFJ Venture Capital.

Claim your tickets now on Eventbrite


Date and Time: 
Wednesday, October 11, 2017 - 7:00pm
Venue: 
Dinkelspiel Auditorium

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning.

4:30-5:15, Subhasish Mitra

5:15-6:00, Silvio Savarese

Date and Time: 
Wednesday, December 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning.

4:30-5:15, Phil Levis

5:15-6:00, Ron Fedkiw

Date and Time: 
Monday, December 5, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning.

4:30-5:15, Dan Boneh

5:15-6:00, Aaron Sidford

Date and Time: 
Wednesday, November 30, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning.

4:30-5:15, John Mitchell

5:15-6:00, James Zou

Date and Time: 
Monday, November 28, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 


4:30-5:15, Emma Brunskill

5:15-6:00, Doug James

Date and Time: 
Wednesday, November 16, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 


4:30-5:15, James Landay

5:15-6:00, Dan Jurafsky

Date and Time: 
Monday, November 14, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 


4:30-5:15, Ken Salisbury

5:15-6:00, Noah Goodman

Date and Time: 
Wednesday, November 9, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad


EE380 Computer Systems Colloquium

EE380 Computer Systems Colloquium presents Karthik and Arushi

Topic: 
User Interface
Abstract / Description: 

TBA


The Stanford EE Computer Systems Colloquium (EE380) meets on Wednesdays 4:30-5:45 throughout the academic year. Talks are given before a live audience in Room 104 of the Shriram Building on the Stanford Campus. The live talks (and the videos hosted at Stanford and on YouTube) are open to the public.

Stanford students may enroll in EE380 to take the Colloquium as a one-unit S/NC class. Enrolled students are required to keep an electronic notebook or journal, write a short, pithy comment about each of the ten lectures, and submit a short free-form evaluation of the class in order to receive credit. Assignments are due at the end of the quarter, on the last day of examinations.

EE380 is a video class. Live attendance is encouraged but not required, though we (the organizers) feel that watching the video is not a substitute for being present in the classroom. Questions are encouraged.

Many past EE380 talks are available on YouTube; see the EE380 Playlist.

Date and Time: 
Wednesday, March 13, 2019 - 4:30pm
Venue: 
Shriram 104

EE380 Computer Systems Colloquium presents Jamie Morgenstern

Topic: 
TBA
Abstract / Description: 

TBA



Date and Time: 
Wednesday, March 6, 2019 - 4:30pm
Venue: 
Shriram 104

EE380 Computer Systems Colloquium presents Judith Estrin

Topic: 
TBA
Abstract / Description: 

TBA




Date and Time: 
Wednesday, February 27, 2019 - 4:30pm
Venue: 
Shriram 104

EE380 Computer Systems Colloquium presents Babble Labble

Topic: 
Babble Labble: Training Classifiers with Natural Language Explanations
Abstract / Description: 

Training accurate classifiers requires many labels, but each label provides only limited information (one bit for binary classification). In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. A semantic parser converts these explanations into programmatic labeling functions that generate noisy labels for an arbitrary amount of unlabeled data, which is then used to train a classifier. On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores 5 to 100 times faster by providing explanations instead of just labels. Furthermore, given the inherent imperfection of labeling functions, we find that a simple rule-based semantic parser suffices.

The full paper can be found here: https://arxiv.org/abs/1805.03818.
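The pipeline described above lends itself to a small illustration. The sketch below is not the BabbleLabble code: the labeling functions, field names, and majority-vote aggregation are illustrative stand-ins for what the semantic parser would generate from an annotator's explanations.

```python
# Sketch of the weak-supervision loop behind BabbleLabble-style training:
# natural-language explanations are parsed into labeling functions (LFs),
# the LFs vote on unlabeled examples, and aggregation yields noisy labels.
# The LFs below are hand-written stand-ins for the semantic parser's output.

def lf_spouse_keyword(example):
    # "I labeled it positive because 'his wife' appears between the names."
    return 1 if "his wife" in example["between"] else 0

def lf_too_far_apart(example):
    # "I labeled it negative because the names are over 10 tokens apart."
    return -1 if example["distance"] > 10 else 0

LFS = [lf_spouse_keyword, lf_too_far_apart]

def noisy_label(example):
    """Majority vote of non-abstaining LFs; 0 means abstain."""
    votes = [lf(example) for lf in LFS if lf(example) != 0]
    if not votes:
        return 0
    return 1 if sum(votes) > 0 else -1

unlabeled = [
    {"between": "and his wife", "distance": 3},
    {"between": "met", "distance": 25},
]
labels = [noisy_label(x) for x in unlabeled]  # -> [1, -1]
```

A downstream classifier would then be trained on these noisy labels rather than on hand-labeled data.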



Date and Time: 
Wednesday, February 20, 2019 - 4:30pm
Venue: 
Shriram 104

EE380 Computer Systems Colloquium presents Electronic Design Automation (EDA) and the Resurgence of Chip Design

Topic: 
Electronic Design Automation (EDA) and the Resurgence of Chip Design
Abstract / Description: 

Electronic Design Automation (EDA) enables the design of semiconductors comprising tens of billions of devices. EDA consists of design software incorporating cutting-edge optimization and analysis algorithms, as well as pre-designed blocks known as "Intellectual Property" (IP). This talk reviews trends in the EDA industry in the context of the semiconductor industry, highlighting new developments such as the rise of IP to become the largest segment in EDA, the increased use of system-level tools, and the resurgence of chip design, particularly in the area of artificial intelligence. The talk ends with a brief mention of two related activities: the Stanford class "EDA and Machine Learning Hardware", in which students learn the inner workings of (some) EDA tools and how to use them effectively to design digital hardware, e.g. a convolutional neural network for image recognition implemented in an FPGA; and Silicon Catalyst, an innovative local incubator focused on accelerating solutions in silicon.

Date and Time: 
Wednesday, February 6, 2019 - 4:30pm
Venue: 
Shriram 104

EE380 Computer Systems Colloquium presents "Scalable Intelligent Systems Build and Deploy by 2025"

Topic: 
Scalable Intelligent Systems Build and Deploy by 2025
Abstract / Description: 

The next stage of human-computer evolution, Scalable Intelligent Systems, integrates people, communications, and computers into a unified cooperative environment. Think of it as moving beyond Google, Facebook, Instagram, and the other social networks. Scalable Intelligent Systems can be actualized by 2025 and are likely to include the following characteristics:

  • Interactive acquisition and presentation of information from video, web pages, hologlasses, online databases, sensors, articles, human speech and gestures, etc.
  • Real-time integration of massive, pervasively inconsistent information
  • Close human collaboration using hologlasses for secure mobile interaction
  • Organizations of people and IoT devices (Citadels) for trustworthiness, resilience, and performance with no single point of failure
  • Scalability in all important dimensions, including no hard barriers to continual improvement in the above areas

There is no computer-only solution that can implement the above by 2025. Consequently, people are fundamental to a Scalable Intelligent System.

Scalable Intelligent Systems, as envisioned, will be the most complex software that has ever been created. Every advanced country in the world has recently introduced its own development plan. The development of Scalable Intelligent Systems will create enormous social and policy challenges.

For example, Scalable Intelligent Systems can be of enormous value in Pain Management. Pain management requires much more than just prescribing opioids and other pain killers, which are often critical for short-term and less often longer-term use. Organizational aspects play an important role in pain management. Scalable Intelligent Systems can help users with appliances, entertainment, exercise, hypnosis, medication, meditation, physical therapy, and collaboration with medical helpers.

Building and deploying a Scalable Intelligent Technology stack is possible by 2025. The rewards are high but the amount of effort required is substantial. We have to get to work.


Date and Time: 
Wednesday, January 23, 2019 - 4:30pm
Venue: 
Shriram 104

EE380 Computer Systems Colloquium presents Erudite: A Low-Latency, High-Capacity, and High-efficiency Prototype System for Computational Intelligence

Topic: 
Erudite: A Low-Latency, High-Capacity, and High-efficiency Prototype System for Computational Intelligence
Abstract / Description: 

Since the rise of deep learning in 2012, much progress has been made in deep-learning-based AI tasks such as image/video understanding and natural language understanding, as well as GPU/accelerator architectures that greatly improve the training and inference speed for neural-network models. As the industry players race to develop ambitious applications such as self-driving vehicles, cashier-less supermarkets, human-level interactive robot systems, and human intelligence augmentation, major research challenges remain in computational methods as well as hardware/software infrastructures required for these applications to be effective, robust, responsive, accountable and cost-effective. Innovations in scalable iterative solvers and graph algorithms will be needed to achieve these application-level goals, but will also impose much higher demands on data storage capacity, access latency, energy efficiency, and processing throughput. In this talk, I will present our recent progress in building highly performant AI task libraries, creating full AI applications, providing AI application development tools, and prototyping the Erudite system at the IBM-Illinois C3SR.

Date and Time: 
Wednesday, January 16, 2019 - 4:30pm
Venue: 
Shriram 104

John G. Linvill Distinguished Seminar on Electronic Systems Technology

Topic: 
Internet of Things and Internet of Energy for Connecting at Any Time and Any Place
Abstract / Description: 

In this presentation, I would like to discuss with you how to establish a sustainable and smart society through the internet of energy for connecting at any time and any place. I suspect that you have heard the phrase "Internet of Energy" less often. The meaning of this phrase is simple. Because of a ubiquitous energy transmission system, you do not need to worry about a shortage of electric power. One of the most important items for establishing a sustainable society is [...]


"Inaugural Linvill Distinguished Seminar on Electronic Systems Technology," EE News, July 2018


Date and Time: 
Monday, January 14, 2019 - 4:30pm
Venue: 
Hewlett 200

EE380 Computer Systems Colloquium presents "Leela: a Semantic Intelligent Agent"

Topic: 
Leela: a Semantic Intelligent Agent
Abstract / Description: 

Leela is a semantic artificially intelligent agent modeled on the theories of Jean Piaget. She builds increasingly abstract semantic models of the world from her experiences of exploration, play, and experimentation. As an agent she is able to formulate, execute, and explain her own plans.

This talk will provide an introduction to Leela's background and design and will show her in action.

Date and Time: 
Wednesday, December 5, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium presents "Safe passwords made easy to use"

Topic: 
Safe passwords made easy to use
Abstract / Description: 

How do we choose and remember our secure access codes? So far, biometrics, password managers, and systems like Facebook Connect have not been able to guarantee the security we need. Remembering dozens of different passwords becomes a usability nightmare. More than 25 years into the online experience, each of us has many hard-to-remember or easy-to-guess passwords, with all the risks and frustration they imply.

We describe experiments showing how to create easy-to-remember codes and passwords, and a system for making them, called Cue-Pin-Select. It can generate (and regenerate) passwords on the go using only the user's brain for computation. It creates memorable passwords, requires no external storage or computing device, and can be executed in less than a minute to create a new password.

This talk will summarize recent usable security work done with Ted Selker. It will start with the Cue-Pin-Select algorithm, cover an improvement we found that applies to all passphrase-based security systems, and explain some of the work currently underway to have better tools to study password schemes and human computation.

Date and Time: 
Wednesday, November 28, 2018 - 4:30pm
Venue: 
Gates B03


Ginzton Lab

AP483 Optics & Electronics Seminar presents Ultrafast X-ray diffraction imaging with Free Electron Lasers

Topic: 
Ultrafast X-ray diffraction imaging with Free Electron Lasers
Abstract / Description: 

The advent of X-ray Free Electron Lasers (FELs) opens the door for unprecedented studies of non-crystalline nanoparticles with high spatial and temporal resolution. In the recent past, ultrafast X-ray imaging studies with intense, femtosecond FEL pulses have elucidated hidden processes in individual fragile specimens that are inaccessible with conventional imaging techniques. Examples include airborne soot particle formation [1], metastable states in the synthesis of metal nanoparticles [2], and transient vortices in superfluid quantum systems [3]. Theoretically, ultrafast coherent diffraction X-ray imaging (CDI) could achieve atomic resolution in combination with sub-femtosecond temporal precision. Currently, the spatial resolution of ultrafast X-ray CDI is limited to several nanometers by a combination of factors such as X-ray photon flux, image imperfections and, ultimately, sample damage [4].

In this talk, I will present several experimental studies that address these limitations and/or demonstrate the potential of ultrafast CDI. In the first part of the talk, I will report on a novel "in-flight" holographic method that overcomes the phase problem and paves the way for high-resolution X-ray imaging in the presence of noise and image imperfections [5]. The second part will focus on potential applications of ultrafast X-ray CDI, such as visualization of irreversible light-induced dynamics at the nanoscale with nanometer and sub-femtosecond resolution [6]. In the third part, I will present the world's first diffraction images of heavy-atom nanoparticles recorded with isolated soft X-ray attosecond pulses. The study indicates that the combined choice of pulse length and X-ray energy can deviate significantly from linear models, and that control over transient resonances might be an efficient pathway toward improved spatial resolution [7].

In summary, ultrafast CDI is a powerful method for studies of transient non-equilibrium dynamics at the nanoscale. The increasing number of X-ray FEL facilities, and the constant improvement in accelerator and X-ray focusing technology will broaden our capabilities to observe transient states of matter. This development will have a significant impact on research fields such as catalysis, nanophotonics, matter under extreme conditions, light-matter interactions and biological studies.
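The phase problem at the heart of CDI can be illustrated in a few lines: a detector records only diffraction intensities, so the Fourier phases of the object are lost and the object cannot be recovered by a simple inverse transform. A minimal numpy sketch with a toy object (not FEL data):

```python
import numpy as np

# The phase problem in a nutshell: a detector records only the diffraction
# intensity |F|^2, so the Fourier phases of the object are lost.
rng = np.random.default_rng(0)
obj = rng.random((8, 8))               # toy real-space "particle"

F = np.fft.fft2(obj)
intensity = np.abs(F) ** 2             # what the detector measures

# Inverting the magnitudes alone (zero phases) does NOT return the object:
naive = np.fft.ifft2(np.sqrt(intensity)).real
assert not np.allclose(naive, obj)

# With the true phases restored, inversion is exact:
recovered = np.fft.ifft2(np.sqrt(intensity) * np.exp(1j * np.angle(F))).real
assert np.allclose(recovered, obj)
```

Holographic methods like the one described above encode a phase reference into the measurement itself, which is what makes direct inversion possible.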

[1] Loh, N. D. et al. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight. Nature 486, 513–517 (2012).
[2] Barke, I. et al. The 3D-architecture of individual free silver nanoparticles captured by X-ray scattering. Nat. Commun. 6, 6187 (2015).
[3] Gomez, L. F. et al. Shapes and vorticities of superfluid helium nanodroplets. Science 345, 906–909 (2014).
[4] Aquila, A. et al. The Linac Coherent Light Source single particle imaging road map. Struct. Dyn. 2, 041701 (2015).
[5] Gorkhover, T. et al. Femtosecond X-ray Fourier holography imaging of free-flying nanoparticles. Nat. Photon. 12, 150 (2018).
[6] Gorkhover, T. et al. Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles. Nat. Photon. 10, 93 (2016).
[7] Kuschel, S. et al., in preparation.

Date and Time: 
Monday, January 14, 2019 - 4:15pm
Venue: 
Spilker 232


AP483, Ginzton Lab, & AMO Seminar Series presents "Quantum Electrodynamics of Superconducting Circuits"

Topic: 
Quantum Electrodynamics of Superconducting Circuits
Abstract / Description: 

The demand for rapid and high-fidelity execution of initialization, gate, and read-out operations places tight constraints on the accuracy of quantum electrodynamic modeling of superconducting integrated circuits. Attaining the required accuracies requires reconsidering our basic approach to the quantization of the electromagnetic field in a light-confining medium and the notion of normal modes. I will discuss a computational framework based on the Heisenberg-Langevin approach to address these fundamental questions. This framework allows the accurate determination of the quantum dynamics of a superconducting qubit in an arbitrarily complex electromagnetic environment, free of divergences that have plagued earlier approaches. I will also discuss the effectiveness of this computational approach in meeting the demands of present-day quantum computing research.


For the 2018-2019 academic year, please join us in Spilker room 232 every Monday afternoon for the AP483, Ginzton Lab, and AMO Seminar Series.

Refreshments begin at 4 pm; the seminar starts at 4:15 pm.

Date and Time: 
Monday, December 3, 2018 - 4:15pm
Venue: 
Spilker 232

AP483, Ginzton Lab, & AMO Seminar Series

Topic: 
When quantum-information scrambling met quasiprobabilities
Abstract / Description: 

We do physics partially out of a drive to understand essences. One topic whose essence merits understanding is the out-of-time-ordered correlator (OTOC). The OTOC reflects quantum many-body thermalization, chaos, and scrambling (the spread of quantum information through many-body entanglement). The OTOC, I will show, equals an average over a quasiprobability distribution. A quasiprobability resembles a probability but can become negative and nonreal. Such nonclassical values can signal nonclassical physics. The OTOC quasiprobability has several applications: Experimentally, the quasiprobability points to a scheme for measuring the OTOC (via weak measurements, which refrain from disturbing the measured system much). The quasiprobability also signals false positives in attempts to measure scrambling of open systems. Theoretically, the quasiprobability links the OTOC to uncertainty relations, to nonequilibrium statistical mechanics, and more strongly to chaos. As coarse-graining the quasiprobability yields the OTOC, the quasiprobability forms the OTOC's essence.
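The OTOC itself is easy to compute for a toy system. The sketch below evaluates F(t) = ⟨W†(t)V†W(t)V⟩ at infinite temperature for two qubits; the Hamiltonian and operator choices are illustrative assumptions, and the quasiprobability machinery of the talk is not reproduced here.

```python
import numpy as np

# Toy numerical OTOC F(t) = <W(t)^dag V^dag W(t) V> for two qubits; a
# decaying |F(t)| is the standard scrambling diagnostic that the
# quasiprobability distribution decomposes. Parameters are illustrative.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = np.kron(X, X) + np.kron(Z, I2) + np.kron(I2, Z)  # two-qubit Hamiltonian
W = np.kron(Z, I2)   # operator on qubit 1
V = np.kron(I2, Z)   # operator on qubit 2 (commutes with W at t = 0)

evals, evecs = np.linalg.eigh(H)

def heisenberg(op, t):
    # W(t) = e^{iHt} W e^{-iHt}, via the eigendecomposition of H.
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U.conj().T @ op @ U

def otoc(t, dim=4):
    Wt = heisenberg(W, t)
    # Infinite-temperature average: trace divided by Hilbert-space dimension.
    return np.trace(Wt.conj().T @ V.conj().T @ Wt @ V).real / dim

assert np.isclose(otoc(0.0), 1.0)  # [W, V] = 0 at t = 0, so F(0) = 1
```

As the coupling spreads W across both qubits, F(t) drops below 1, which is the scrambling signature referred to above.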

References
• NYH, Phys. Rev. A 95, 012120 (2017). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.012120
• NYH, Swingle, and Dressel, Phys. Rev. A 97, 042105 (2018). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.042105
• NYH, Bartolotta, and Pollack, arXiv:1806.04147 (2018). https://arxiv.org/abs/1806.04147
• Gonzàlez Alonso, NYH, and Dressel, arXiv:1806.09637 (2018). https://arxiv.org/abs/1806.09637
• Swingle and NYH, Phys. Rev. A 97, 062113 (2018). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.062113
• Dressel, Gonzàlez Alonso, Waegell, and NYH, Phys. Rev. A 98, 012132 (2018). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.012132



Date and Time: 
Monday, November 12, 2018 - 4:15pm
Venue: 
Spilker 232

AP483, Ginzton Lab, & AMO Seminar Series presents Conductivity of a perfect crystal

Topic: 
Conductivity of a perfect crystal
Abstract / Description: 

Dissipation of electrical current in typical metals is due to scattering off material defects and phonons. But what if the material were a perfect crystal, and sufficiently stiff or cold to eliminate phonons -- would conductivity become infinite? We realize an analogous scenario with atomic fermions in a cubic optical lattice, and measure conductivity. The equivalent of Ohm's law for neutral particles gives conductivity as the ratio of particle current to the strength of an applied force. Our measurements are at non-zero frequency (since a trapping potential prevents dc current flow), giving the low-frequency spectrum of real and imaginary conductivity. Since our atoms carry no charge, we measure particle currents with in-situ microscopy, with which both on- and off-diagonal response is visible. Sum rules are used to relate the observed conductivity to thermodynamic properties such as kinetic energy. We explore the effect of lattice depth, temperature, interaction strength, and atom number on conductivity. Using a relaxation-time approximation, we extract the transport time, i.e., the relaxation rate of current through collisions. Returning to the initial question, we demonstrate that fermion-fermion collisions damp current since the lattice breaks Galilean invariance.
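As a rough illustration of the relaxation-time analysis described above, the Drude form gives the conductivity as particle current per unit applied force. The parameter values and units below are illustrative, not the experiment's.

```python
import numpy as np

# Relaxation-time (Drude) form of the conductivity for neutral particles,
# defined here as particle current per unit applied force:
#   sigma(omega) = (n / m) * tau / (1 - 1j * omega * tau)
# n = density, m = mass, tau = transport (current relaxation) time.
def drude_sigma(omega, n=1.0, m=1.0, tau=2.0):
    return (n / m) * tau / (1 - 1j * omega * tau)

omega = np.linspace(0, 5, 501)
sigma = drude_sigma(omega)

# The dc limit gives sigma(0) = n * tau / m, so the transport time tau can
# be read off from the measured low-frequency conductivity.
assert np.isclose(sigma[0].real, 2.0) and np.isclose(sigma[0].imag, 0.0)

# Sum rule: the one-sided integral of Re[sigma] over frequency equals
# (pi / 2) * (n / m), independent of tau -- the kind of relation used to
# connect measured conductivity to thermodynamic quantities.
```

In a trapped system the dc current is forbidden, which is why the measurement is done at non-zero frequency and both real and imaginary parts of sigma(omega) carry information.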



Date and Time: 
Monday, October 29, 2018 - 4:15pm
Venue: 
Spilker 232

AP483, Ginzton Lab, & AMO Seminar Series

Topic: 
New opportunities with old photonic materials
Abstract / Description: 

Lithium niobate (LN) is an "old" material with many applications in optical and microwave technologies, owing to its unique properties, which include a large second-order nonlinear susceptibility, a large piezoelectric response, and a wide optical transparency window. Conventional LN components, including modulators and periodically poled frequency converters, have been the workhorse of the optoelectronic industry. They are reaching their limits, however, as they rely on weakly guiding, ion-diffusion-defined optical waveguides in bulk LN crystal. I will discuss our efforts aimed at the development of an integrated LN platform, featuring sub-wavelength-scale light confinement and dense integration of optical and electrical components, that has the potential to revolutionize optical communication networks and microwave photonic systems, as well as enable the realization of quantum photonic circuits. A good example is our recently demonstrated integrated LN electro-optic modulator that can be driven directly by a CMOS circuit and supports data rates > 200 gigabits per second with > 90% optical transmission efficiency. I will also discuss our work on ultra-high-Q LN optical cavities (Q ~ 10,000,000) and their applications, as well as nonlinear wavelength conversion using different approaches based on LN films.
Diamond is another "old" material with remarkable properties! It is transparent from the ultraviolet to the infrared, has a high refractive index, strong optical nonlinearity, and a wide variety of light-emitting defects of interest for quantum communication and computation. In my talk, I will summarize our efforts towards the development of an integrated diamond quantum photonics platform aimed at the realization of efficient photonic and phononic interfaces for diamond spin qubits.




Date and Time: 
Monday, October 22, 2018 - 4:15pm
Venue: 
Spilker 232

AP483, Ginzton Lab, & AMO Seminar Series presents Dynamic photonic structures

Topic: 
Dynamic photonic structures: non-reciprocity, gauge potential, and synthetic dimensions.
Abstract / Description: 


We show that dynamic photonic structures, in which the refractive index is modulated as a function of time, offer a wide range of possibilities for exploring the physics and applications of light. In particular, dynamic photonic structures naturally break reciprocity. With proper design, such structures can be used to achieve complete optical isolation and to reproduce magneto-optical effects without the use of gyrotropic materials. Moreover, the phase of the modulation corresponds to an effective magnetic gauge potential for photons, through which one can explore a wide variety of fundamental physical effects of synthetic magnetic fields using photons. Finally, such dynamic photonic structures can be used to explore physics, especially topological physics, in dimensions higher than the physical dimension of the structure, leading to intriguing possibilities for manipulating the frequency of light in non-trivial ways.




Date and Time: 
Monday, October 15, 2018 - 4:15pm
Venue: 
Spilker 232

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.


Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A


Information Systems Lab (ISL) Colloquium

IT Forum presents "Adapting Maximum Likelihood Theory in Modern Applications"

Topic: 
Adapting Maximum Likelihood Theory in Modern Applications
Abstract / Description: 

Maximum likelihood estimation (MLE) is influential because it can be easily applied to generate optimal, statistically efficient procedures for broad classes of estimation problems. Nonetheless, the theory does not apply to modern settings, such as problems with computational, communication, or privacy considerations, where our estimators face resource constraints. In this talk, I will introduce a modern maximum likelihood theory that addresses these issues, focusing specifically on procedures that must be computationally efficient or privacy-preserving. To do so, I first derive analogues of Fisher information for these applications, which allows a precise characterization of tradeoffs between statistical efficiency, privacy, and computation. To complete the development, I also describe a recipe that generates optimal statistical procedures (analogues of the MLE) in the new settings, showing how to achieve the new Fisher information lower bounds.
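A standard textbook illustration of the privacy/efficiency tradeoff the talk formalizes (this is the classical Laplace mechanism for locally private mean estimation, not the speaker's construction): each noisy report remains private, but the per-sample noise variance 2/eps^2 swamps the data variance, so far more samples are needed for the same accuracy than the classical Fisher-information bound would suggest.

```python
import math
import random

random.seed(0)

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean(samples, eps=1.0):
    # Each player adds Laplace(1/eps) noise to its own sample in [0, 1]
    # (sensitivity 1), then the referee averages the noisy reports.
    return sum(x + laplace_noise(1.0 / eps) for x in samples) / len(samples)

n, eps, true_mean = 20000, 1.0, 0.5
samples = [random.random() for _ in range(n)]

est = private_mean(samples, eps)
# Per-report noise variance is 2/eps^2 = 2, dwarfing the data variance
# 1/12: privacy inflates the estimator variance by a factor of ~25 here,
# though the estimate is still consistent (error ~ sqrt(2/n)).
assert abs(est - true_mean) < 0.1
```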

Date and Time: 
Friday, February 22, 2019 - 1:15pm
Venue: 
Packard 202

ISL Colloquium presents "Statistical Inference Under Local Information Constraints"

Topic: 
Statistical Inference Under Local Information Constraints
Abstract / Description: 

Independent samples from an unknown probability distribution p on a domain of size k are distributed across n players, with each player holding one sample. Each player can send a message to a central referee in a simultaneous message passing (SMP) model of communication, whose goal is to solve a pre-specified inference problem. The catch, however, is that each player cannot simply send their own sample to the referee; instead, the message they send must obey some (local) information constraint. For instance, each player may be limited to communicating only L bits, where L << log k; or they may seek to reveal as little information as possible and preserve local differential privacy.

We propose a general formulation for inference problems in this distributed setting, and instantiate it to two fundamental inference questions, learning and uniformity testing. We study the role of randomness for those questions, and obtain striking separations between public- and private-coin protocols for the latter, while showing the two settings are equally powerful for the former. (Put differently, "sharing with neighbors does help a lot for the test, but not really for learning.")
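
As a toy illustration of the distributed setting above (a hypothetical 1-bit protocol for the learning question, not the speakers' scheme): each player is pre-assigned one domain element and reports a single bit indicating whether its sample equals that element; the referee averages the bits per element to estimate p.

```python
import numpy as np

def one_bit_learning(p, n, rng):
    """Hypothetical 1-bit protocol: player j is pre-assigned symbol a_j
    (round-robin over the domain) and sends the single bit 1{sample_j == a_j}.
    The referee averages the bits per symbol to estimate p."""
    k = len(p)
    samples = rng.choice(k, size=n, p=p)   # one sample per player
    assigned = np.arange(n) % k            # round-robin symbol assignment
    bits = (samples == assigned).astype(float)
    return np.array([bits[assigned == a].mean() for a in range(k)])

rng = np.random.default_rng(0)
p = np.array([0.5, 0.25, 0.125, 0.125])
est = one_bit_learning(p, n=400_000, rng=rng)
print(np.abs(est - p).sum())  # l1 estimation error; shrinks as n grows
```

This naive protocol already learns p from single bits, at the cost of needing far more players than the unconstrained setting; the talk's results quantify exactly such costs.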

Based on joint works with Jayadev Acharya (Cornell University), Cody Freitag (Cornell University), and Himanshu Tyagi (IISc Bangalore).

Date and Time: 
Friday, February 1, 2019 - 1:15pm
Venue: 
Packard 202

ISL Colloquium presents "Learning in Non-Stationary Environments: Near-Optimal Guarantees"

Topic: 
Learning in Non-Stationary Environments: Near-Optimal Guarantees
Abstract / Description: 

Motivated by scenarios in which heterogeneous autonomous agents interact, in this talk we present recent work on the development of learning algorithms with performance guarantees for both simultaneous and hierarchical decision-making. Adoption of new technologies is transforming application domains from intelligent infrastructure to e-commerce, allowing operators and intelligently augmented humans to make decisions rapidly as they engage with these systems. Algorithms and market mechanisms supporting interactions occur on multiple time-scales, face resource constraints and system dynamics, and are exposed to exogenous uncertainties, information asymmetries, and behavioral aspects of human decision-making. Techniques for synthesis and analysis of decision-making algorithms, for either inference or influence, that fundamentally depend on an assumption of environment stationarity often break down in this context. For instance, humans engaging with platform-based transportation services make decisions that are dependent on their immediate observations of the environment and past experience, both of which are functions of the decisions of other users, multi-timescale policies (e.g., dynamic pricing and fixed laws), and even environmental context that may be non-stationary (e.g., weather patterns or congestion). Implementation of algorithms designed assuming stationarity may lead to unintended or unforeseen consequences.

Stochastic models with high-probability guarantees that capture the dynamics and the decision-making behavior of autonomous agents are needed to support effective interventions such as physical control, economic incentives, or information shaping mechanisms. Two fundamental components are missing in the state-of-the-art: (i) a toolkit for analysis of interdependent learning processes and for adaptive inference in these settings, and (ii) certifiable algorithms for co-designing adaptive influence mechanisms that achieve measurable improvement in system-level performance while ensuring individual-level quality of service through designed-in high-probability guarantees. In this talk, we discuss our work towards addressing these gaps. In particular, we provide (asymptotic and non-asymptotic) convergence guarantees for simultaneous-play multi-agent gradient-based learning (a class of algorithms that encompasses a broad set of multi-agent reinforcement learning algorithms) and performance guarantees (regret bounds) for hierarchical decision-making (incentive design) with bandit feedback in non-stationary, Markovian environments. Building on insights from these results, the talk concludes with a discussion of interesting future directions in the design of certifiable, robust algorithms for adaptive inference and influence.

Date and Time: 
Thursday, January 24, 2019 - 4:15pm
Venue: 
Packard 101

ISL Colloquium presents "Algorithms and Algorithmic Intractability in High Dimensional Linear Regression"

Topic: 
Algorithms and Algorithmic Intractability in High Dimensional Linear Regression
Abstract / Description: 

In this talk we will focus on the high dimensional linear regression problem. The goal is to recover a hidden k-sparse binary vector \beta under n noisy linear observations Y=X\beta+W where X is an n \times p matrix with iid N(0,1) entries and W is an n-dimensional vector with iid N(0,\sigma^2) entries. In the literature on this problem, an apparent asymptotic gap is observed between the optimal sample size for information-theoretic recovery, call it n*, and that for computationally efficient recovery, call it n_alg.
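
A minimal simulation of the observation model above, paired with one simple computationally efficient estimator (top-k correlations, chosen purely for illustration, not the algorithm analyzed in the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, sigma = 200, 50, 3, 0.5

# Hidden k-sparse binary coefficient vector beta.
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = 1.0

X = rng.standard_normal((n, p))        # iid N(0,1) design
W = sigma * rng.standard_normal(n)     # iid N(0, sigma^2) noise
Y = X @ beta + W                       # the model Y = X beta + W

# Illustrative efficient recovery: keep the k columns most correlated with Y.
est_support = np.argsort(-np.abs(X.T @ Y))[:k]
print(sorted(est_support.tolist()), sorted(support.tolist()))
```

With n well above both thresholds, as here, even this crude estimator recovers the support; the interesting regime in the talk is n between n* and n_alg, where no known efficient method succeeds.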

We will discuss several new contributions to the study of this gap. We first tightly identify the information limit of the problem using a novel analysis of the Maximum Likelihood Estimator (MLE) performance. Furthermore, we establish that the algorithmic barrier n_alg coincides with the phase transition point for the appearance of a certain Overlap Gap Property (OGP) over the space of k-sparse binary vectors. The presence of such an Overlap Gap Property phase transition, which originates in spin glass theory, is known to provide evidence of algorithmic hardness. Finally, we show that in the extreme case where the noise level is zero, i.e. \sigma=0, the computational-statistical gap closes: we propose an optimal polynomial-time algorithm based on the Lenstra-Lenstra-Lovász lattice basis reduction algorithm.

This is joint work with David Gamarnik.

Date and Time: 
Friday, January 18, 2019 - 3:00pm
Venue: 
Gates 463A

ISL Colloquium presents "Stochastic Descent Algorithms: Minimax Optimality, Implicit Regularization, and Deep Networks"

Topic: 
Stochastic Descent Algorithms: Minimax Optimality, Implicit Regularization, and Deep Networks
Abstract / Description: 

Stochastic descent methods have had a long history in optimization, adaptive filtering, and online learning and have recently gained tremendous popularity as the workhorse for deep learning. So much so that it is now widely recognized that the success of deep networks is not only due to their special deep architecture, but also due to the behavior of the stochastic descent methods used, which plays a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD)---originally developed for quadratic loss and linear models in the context of H-infinity control in the 1990's---and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. These minimax properties can be used to explain the convergence and implicit-regularization of the algorithms when the linear regression problem is over-parametrized (in what is now being called the "interpolating regime"). In the nonlinear setting, exemplified by training a deep neural network, we show that when the setup is "highly over-parametrized", stochastic descent methods enjoy similar convergence and implicit-regularization properties. This observation gives some insight into why deep networks exhibit such powerful generalization abilities. It is also a further example of what is increasingly referred to as the "blessing of dimensionality".
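
The linear-regression instance of this implicit regularization is easy to check numerically: it is a classical fact that SGD on squared loss, started from zero in the over-parametrized (interpolating) regime, converges to the minimum-l2-norm interpolating solution, because its iterates never leave the row space of the data. (This sketch covers only the SGD/quadratic case; the talk's SMD/mirror-descent generalization is not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 10, 50                          # over-parametrized: more weights than data
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                        # start at the origin
lr = 0.01
for epoch in range(2000):
    for i in rng.permutation(n):       # SGD on squared loss, one sample at a time
        w -= lr * (X[i] @ w - y[i]) * X[i]

# The minimum-l2-norm solution among all interpolators of (X, y).
w_min_norm = np.linalg.pinv(X) @ y
print(np.linalg.norm(w - w_min_norm))  # SGD from 0 lands on the min-norm interpolant
```

No explicit regularizer appears anywhere; the "regularization" is entirely a property of the algorithm's trajectory.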

Date and Time: 
Thursday, January 17, 2019 - 4:15pm
Venue: 
Packard 101

IT Forum & ISL presents Functional interfaces to compression (or: Down With Streams!)

Topic: 
Functional interfaces to compression (or: Down With Streams!)
Abstract / Description: 

From a computer-science perspective, the world of compression can seem like an amazing country glimpsed through a narrow straw. The problem isn't the compression itself, but the typical interface: a stream of symbols (or audio samples, pixels, video frames, nucleotides, or Lidar points...) goes in, an opaque bitstream comes out, and on the other side, the bitstream is translated back into some approximation of the input. The coding and decoding modules maintain an internal state that evolves over time. In practice, these "streaming" interfaces with inaccessible mutable state have limited the kinds of applications that can be built.

In this talk, I'll discuss my group's experience with what can happen when we build applications around compression systems that expose a "functional" interface, one that makes state explicit and visible. We've found it's possible to achieve tighter couplings between coding and the rest of an application, improving performance and allowing compression algorithms to be used in settings where they were previously infeasible. In Lepton (NSDI 2017), we implemented a Huffman and a JPEG encoder in purely functional style, allowing the system to compress images in parallel across a distributed network filesystem with arbitrary block boundaries (e.g., in the middle of a Huffman symbol or JPEG block). This free-software system is in production at Dropbox and has compressed, by 23%, hundreds of petabytes of user files. ExCamera (NSDI 2017) uses a purely functional video codec to parallelize video encoding into thousands of tiny tasks, each handling a fraction of a second of video, much shorter than the interval between key frames, and executing with 4,000-way parallelism on AWS Lambda. Salsify (NSDI 2018) uses a purely functional video codec to explore execution paths of the encoder without committing to them, letting it match the capacity estimates from a transport protocol. This architecture outperforms "streaming"-oriented video applications -- Skype, Facetime, Hangouts, WebRTC -- in delay and visual quality. I'll briefly discuss some of our ongoing work in trying to compress the communication between a pair of neural networks jointly trained to accomplish some goal, e.g. to support efficient evaluation when data and compute live in different places. In general, our findings suggest that while, in some contexts, improvements in codecs may have reached a point of diminishing returns, compression *systems* still have plenty of low-hanging fruit.
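
The interface idea can be sketched with a toy codec whose state is an explicit value passed in and out (this delta codec and its function names are purely illustrative; none of this is Lepton's, ExCamera's, or Salsify's actual API). Because nothing is mutated, a caller can resume encoding at an arbitrary boundary or fork the state to explore alternative continuations:

```python
from typing import List, Tuple

State = int  # the predictor: the previously seen sample

def encode(state: State, sample: int) -> Tuple[State, int]:
    """Return (new_state, emitted_symbol) without mutating anything."""
    return sample, sample - state

def decode(state: State, symbol: int) -> Tuple[State, int]:
    value = state + symbol
    return value, value

def encode_all(state: State, samples: List[int]) -> Tuple[State, List[int]]:
    out = []
    for s in samples:
        state, sym = encode(state, s)
        out.append(sym)
    return state, out

# Fork the explicit state: two candidate continuations from the same point,
# with no rollback machinery needed (the Salsify-style exploration).
st, syms = encode_all(0, [10, 12, 15])
_, branch_a = encode_all(st, [15, 16])
_, branch_b = encode_all(st, [20])

# Decoding round-trips the committed prefix.
st_d, decoded = 0, []
for sym in syms:
    st_d, v = decode(st_d, sym)
    decoded.append(v)
print(decoded)  # [10, 12, 15]
```

A conventional "streaming" codec hides `state` inside an opaque object, which is exactly what makes forking, parallel restart, and speculative encoding infeasible.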

Date and Time: 
Friday, January 11, 2019 - 1:15pm
Venue: 
Packard 202

John G. Linvill Distinguished Seminar on Electronic Systems Technology

Topic: 
Internet of Things and Internet of Energy for Connecting at Any Time and Any Place
Abstract / Description: 

In this presentation, I would like to discuss with you how to establish a sustainable and smart society through the internet of energy for connecting at any time and any place. I suspect that you have heard the phrase "Internet of Energy" less often. The meaning of this phrase is simple. Because of a ubiquitous energy transmission system, you do not need to worry about a shortage of electric power. One of the most important items for establishing a sustainable society is [...]


"Inaugural Linvill Distinguished Seminar on Electronic Systems Technology," EE News, July 2018

 

Date and Time: 
Monday, January 14, 2019 - 4:30pm
Venue: 
Hewlett 200

CANCELLED! ISL & IT Forum present "Bayesian Suffix Trees: Learning and Using Discrete Time Series"

Topic: 
CANCELLED! Bayesian Suffix Trees: Learning and Using Discrete Time Series
Abstract / Description: 

CANCELLED!  We apologize for any inconvenience.

One of the main obstacles in the development of effective algorithms for inference and learning from discrete time series data is the difficulty encountered in the identification of useful temporal structure. We will discuss a class of novel methodological tools for effective Bayesian inference and model selection for general discrete time series, which offer promising results on both small and big data. Our starting point is the development of a rich class of Bayesian hierarchical models for variable-memory Markov chains. The particular prior structure we adopt makes it possible to design effective, linear-time algorithms that can compute most of the important features of the resulting posterior and predictive distributions without resorting to MCMC. We have applied the resulting tools to numerous application-specific tasks, including on-line prediction, segmentation, classification, anomaly detection, entropy estimation, and causality testing, on data sets from different areas of application, including data compression, neuroscience, finance, genetics, and animal communication. Results on both simulated and real data will be presented.

Date and Time: 
Wednesday, December 12, 2018 - 3:00pm
Venue: 
Packard 202

ISL Colloquium presents "Information-theoretic Privacy"

Topic: 
Information-theoretic Privacy: A holistic view via leakage measures, robust privacy guarantees, and adversarial models for mechanism design
Abstract / Description: 

Privacy is the problem of ensuring limited leakage of information about sensitive features while sharing information (utility) about non-private features to legitimate data users. Even as differential privacy has emerged as a strong desideratum for privacy, there is a need for varied yet rigorous approaches for applications with different requirements. This talk presents an information-theoretic approach and takes a holistic view focusing on leakage measures, design of privacy mechanisms, and verifiable implementations using generative adversarial models. Specifically, we introduce maximal alpha leakage as a new class of adversarially motivated tunable leakage measures that quantifies the maximal gain of an adversary in refining a tilted belief of any (potentially random) function of a dataset conditioned on a disclosed dataset. The choice of alpha determines the specific adversarial action ranging from refining a belief for alpha = 1 to guessing the best posterior for alpha = ∞, and for these extremal values this measure simplifies to mutual information (MI) and maximal leakage (MaxL), respectively. The problem of guaranteeing privacy can then be viewed as one of designing a randomizing mechanism that minimizes alpha leakage subject to utility constraints. We then present bounds on the robustness of privacy guarantees that can be made when designing mechanisms from a finite number of samples. Finally, we focus on a data-driven approach, generative adversarial privacy (GAP), to design privacy mechanisms using neural networks. GAP is modeled as a constrained minimax game between a privatizer (intent on publishing a utility-guaranteeing learning representation that limits leakage of the sensitive features) and an adversary (intent on learning the sensitive features). We demonstrate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI dataset. 
Time permitting, we will briefly discuss the learning-theoretic underpinnings of GAP as well as connections to the problem of algorithmic fairness.

This work is a result of multiple collaborations: (a) maximal alpha leakage with J. Liao (ASU), O. Kosut (ASU), and F. P. Calmon (Harvard); (b) robust mechanism design with M. Diaz (ASU), H. Wang (Harvard), and F. P. Calmon (Harvard); and (c) GAP with C. Huang (ASU), P. Kairouz (Google), X. Chen (Stanford), and R. Rajagopal (Stanford).

Date and Time: 
Wednesday, December 5, 2018 - 4:15pm
Venue: 
Packard 202

ISL Colloquium presents "Estimation After Parameter Selection"

Topic: 
Estimation After Parameter Selection
Abstract / Description: 

In many practical parameter estimation problems, such as medical experiments and cognitive radio communications, parameter selection is performed prior to estimation. The selection process has a major impact on subsequent estimation by introducing a selection bias and creating coupling between decoupled parameters. As a result, classical estimation theory may be inappropriate and inaccurate and a new methodology is needed. In this study, the problem of estimating a preselected unknown deterministic parameter, chosen from a parameter set based on a predetermined data-based selection rule, Ψ, is considered. In this talk, I will present a general non-Bayesian estimation theory for estimation after parameter selection, which includes estimation methods, performance analysis, and adaptive sampling strategies. The new theory is based on the post-selection mean-square-error (PSMSE) criterion as a performance measure instead of the commonly used mean-square-error (MSE). We derive the corresponding Cramér-Rao-type bound on the PSMSE of any Ψ-unbiased estimator, where the Ψ-unbiasedness is in the Lehmann-unbiasedness sense. Then, the post-selection maximum-likelihood (PSML) estimator is presented and its Ψ-efficiency properties are demonstrated. Practical implementations of the PSML estimator are proposed as well. As time permits, I will discuss how similar ideas can be applied to estimation after model selection and to estimation in Good-Turing models.
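
The selection bias the abstract refers to is easy to demonstrate in a minimal simulation (illustrative only; the PSML estimator itself is not reproduced here): select the larger of two Gaussian sample means, then naively report that same sample mean. Even though each sample mean is unbiased before selection, the reported estimate systematically overshoots.

```python
import numpy as np

rng = np.random.default_rng(3)
trials, n = 100_000, 10
theta = np.array([1.0, 1.0])        # two equal unknown means

# Each trial: n observations per parameter; the selection rule picks the
# parameter with the larger sample mean, and the naive estimator then
# reports that same (pre-selection unbiased) sample mean.
sample_means = theta + rng.standard_normal((trials, 2, n)).mean(axis=2)
selected = sample_means.max(axis=1)  # data-based selection rule

bias = selected.mean() - 1.0
print(bias)  # positive: the naive post-selection estimate overshoots
```

Here the true value of the selected parameter is always 1.0, yet the average reported estimate exceeds it; this is the gap between the MSE and PSMSE viewpoints.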

Date and Time: 
Monday, December 3, 2018 - 4:15pm
Venue: 
Packard 101


IT-Forum

IT Forum presents "Sub-packetization of Minimum Storage Regenerating codes"

Topic: 
Sub-packetization of Minimum Storage Regenerating codes: A lower bound and a work-around
Abstract / Description: 

Modern cloud storage systems need to store vast amounts of data in a fault tolerant manner, while also preserving data reliability and accessibility in the wake of frequent server failures. Traditional MDS (Maximum Distance Separable) codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. Minimum storage regenerating (MSR) codes are a special sub-class of MDS codes that provide mechanisms for exact regeneration of a single code-block by downloading the minimum amount of information from the remaining code-blocks. As a result, MSR codes are attractive for use in distributed storage systems to ensure node repairs with optimal repair-bandwidth. However, all known constructions of MSR codes require large sub-packetization levels (the number of sub-symbols into which each vector codeword symbol must be divided to enable efficient repair). This restricts the applicability of MSR codes in practice.

This talk will present a lower bound showing that exponentially large sub-packetization is inherent to MSR codes. We will also propose a natural relaxation of MSR codes that allows one to circumvent this lower bound, and present a general approach to construct MDS codes that significantly reduces the required sub-packetization level while incurring slightly higher repair-bandwidth compared to MSR codes.

The lower bound is joint work with Omar Alrabiah, and the constructions are joint work with Ankit Rawat, Itzhak Tamo, and Klim Efremenko.

Date and Time: 
Wednesday, February 20, 2019 - 2:00pm
Venue: 
Gates 463A

IT Forum presents "Adapting Maximum Likelihood Theory in Modern Applications"

Topic: 
Adapting Maximum Likelihood Theory in Modern Applications
Abstract / Description: 

Maximum likelihood estimation (MLE) is influential because it can be easily applied to generate optimal, statistically efficient procedures for broad classes of estimation problems. Nonetheless, the theory does not apply to modern settings --- such as problems with computational, communication, or privacy considerations --- where our estimators have resource constraints. In this talk, I will introduce a modern maximum likelihood theory that addresses these issues, focusing specifically on procedures that must be computationally efficient or privacy-preserving. To do so, I first derive analogues of Fisher information for these applications, allowing a precise characterization of tradeoffs between statistical efficiency, privacy, and computation. To complete the development, I also describe a recipe that generates optimal statistical procedures (analogues of the MLE) in the new settings, showing how to achieve the new Fisher information lower bounds.
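
One concrete instance of the privacy/efficiency tradeoff the talk quantifies can be simulated with randomized response, a standard local differential privacy mechanism (used here only as an illustration, not necessarily the talk's construction): privatizing Bernoulli samples inflates the variance of the debiased estimator, i.e., shrinks the effective Fisher information.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, trials = 0.3, 1000, 2000
eps = 1.0                                   # local DP parameter
q = np.exp(eps) / (1 + np.exp(eps))         # randomized-response keep-probability

direct, private = [], []
for _ in range(trials):
    x = rng.random(n) < theta               # raw Bernoulli(theta) samples
    direct.append(x.mean())                 # non-private MLE
    flip = rng.random(n) > q                # flip each bit with probability 1 - q
    z = np.where(flip, ~x, x)
    # Debias: E[z] = q*theta + (1-q)*(1-theta), so invert the affine map.
    private.append((np.mean(z) - (1 - q)) / (2 * q - 1))

print(np.var(direct), np.var(private))  # privacy costs statistical efficiency
```

Both estimators are unbiased, but the private one pays a constant-factor variance penalty that grows as eps shrinks; the talk's Fisher-information analogues make such penalties precise and show they are unavoidable.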

Date and Time: 
Friday, February 22, 2019 - 1:15pm
Venue: 
Packard 202

IT Forum presents "Student Evaluations, Quantifauxcation, and Gender Bias"

Topic: 
Student Evaluations, Quantifauxcation, and Gender Bias
Abstract / Description: 

Student evaluations of teaching (SET) are widely used in academic personnel decisions as a measure of teaching effectiveness. The way SET are used is statistically unsound; worse, SET are biased and unreliable. Observational evidence shows that student ratings vary with instructor gender, ethnicity, and attractiveness; with course rigor, mathematical content, and format; and with students' grade expectations. Experiments show that the majority of student responses to some objective questions can be demonstrably false. A recent randomized experiment shows that giving students cookies increases SET scores. Randomized experiments show that SET are negatively associated with objective measures of teaching effectiveness and biased against female instructors by an amount that can cause more effective female instructors to get lower SET than less effective male instructors. Gender bias also affects how students rate objective aspects of teaching. It is not possible to adjust for the bias, because it depends on many factors, including course topic and student gender. Students are uniquely situated to observe some aspects of teaching and students' opinions matter. But for the purposes of evaluating and improving teaching quality, SET are biased, unreliable, and subject to strategic manipulation. Reliance on SET for employment decisions disadvantages protected groups and may violate federal law. For some administrators, risk mitigation might be a more persuasive argument than equity for ending reliance on SET in employment decisions: union arbitration and civil litigation over institutional use of SET are on the rise. Several major universities in the U.S. and Canada have already de-emphasized, substantially re-worked, or abandoned reliance on SET for personnel decisions.

Date and Time: 
Friday, February 8, 2019 - 1:15pm
Venue: 
Packard 202

IT Forum welcomes Paul Cuff, Renaissance Technologies

Topic: 
TBA
Abstract / Description: 

Soft covering is a phenomenon whereby an i.i.d. distribution on sequences of a given length is approximately produced from a very structured generation process. Specifically, a sequence is drawn uniformly at random from a "codebook" of sequences and then corrupted by memoryless noise (i.e. a discrete memoryless channel, or DMC). Among other things, soft covering means that the codebook is not recognizable in the resulting distribution. The soft covering phenomenon occurs when the codebook itself is constructed randomly, with a correspondence between the codebook distribution, the DMC, and the target i.i.d. distribution, and when the codebook is large enough. Mutual information is the minimum exponential rate for the codebook size. We show the exact exponential rate of convergence of the approximating distribution to the target, as measured by total variation distance, as a function of the excess codebook rate above mutual information. The proof involves a novel Poisson approximation step in the converse.
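
A tiny numerical illustration of the soft covering phenomenon (illustrative only; the exact exponents from the talk are not reproduced): on a binary symmetric channel, the output law of a uniformly chosen codeword from a random codebook approaches the i.i.d. target in total variation once the codebook rate exceeds the mutual information, here I(X;Y) = 1 - h(0.1), roughly 0.53 bits.

```python
import itertools
import numpy as np

rng = np.random.default_rng(5)
n, delta = 8, 0.1                         # blocklength and BSC crossover probability

# All 2^n output sequences, and the i.i.d. Bernoulli(1/2) target law
# (the BSC output distribution under an i.i.d. uniform input).
ys = np.array(list(itertools.product([0, 1], repeat=n)))
target = np.full(2 ** n, 2.0 ** -n)

def output_law(codebook):
    """Exact output law: draw a codeword uniformly, pass it through the BSC."""
    law = np.zeros(2 ** n)
    for cw in codebook:
        d = (ys != cw).sum(axis=1)                  # Hamming distance to cw
        law += delta ** d * (1 - delta) ** (n - d)  # memoryless channel law
    return law / len(codebook)

def tv(p_dist, q_dist):
    return 0.5 * np.abs(p_dist - q_dist).sum()

big = rng.integers(0, 2, size=(64, n))    # rate log2(64)/8 = 0.75, above MI
small = rng.integers(0, 2, size=(4, n))   # rate log2(4)/8 = 0.25, below MI

print(tv(output_law(big), target), tv(output_law(small), target))
```

Even at this tiny blocklength, the above-rate codebook is markedly harder to distinguish from the i.i.d. target than the below-rate one; the talk's result pins down the exact exponential rate of this convergence.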

Soft covering is a crucial tool for secrecy capacity proofs and has applications broadly in network information theory for analysis of encoder performance. Wyner invented this tool for the purpose of solving a problem he named "common information." The quantum analogue of Wyner's problem is "entanglement of purification," which is an important open problem with physical significance: What is the minimum entanglement needed to produce a desired quantum state spanning two locations? The literature on this problem identifies sufficient asymptotic rates for this question that are generally excessively high. I will make brief mention of how the soft covering principle might be utilized for a more efficient design in this quantum setting.

Date and Time: 
Friday, January 25, 2019 - 1:15pm
Venue: 
Packard 202

IT Forum presents "Understanding the limitations of AI: When Algorithms Fail"

Topic: 
Understanding the limitations of AI: When Algorithms Fail
Abstract / Description: 

Automated decision making tools are currently used in high stakes scenarios. From natural language processing tools used to automatically determine one's suitability for a job, to health diagnostic systems trained to determine a patient's outcome, machine learning models are used to make decisions that can have serious consequences on people's lives. In spite of the consequential nature of these use cases, vendors of such models are not required to perform specific tests showing the suitability of their models for a given task. Nor are they required to provide documentation describing the characteristics of their models, or disclose the results of algorithmic audits to ensure that certain groups are not unfairly treated. I will show some examples to examine the dire consequences of basing decisions entirely on machine learning based systems, and discuss recent work on auditing and exposing the gender and skin tone bias found in commercial gender classification systems. I will end with the concept of an AI datasheet to standardize information for datasets and pre-trained models, in order to push the field as a whole towards transparency and accountability.

Date and Time: 
Friday, January 18, 2019 - 1:15pm
Venue: 
Packard 202


IT Forum presents "Perceptual Engineering"

Topic: 
Perceptual Engineering
Abstract / Description: 

The distance between the real and the digital is clearest at the interface layer. The ways that our bodies interact with the physical world are rich and elaborate while digital interactions are far more limited. Through an increased level of direct and intuitive interaction, my work aims to raise computing devices from external systems that require deliberate usage to those that are truly an extension of us, advancing both the state of research and human ability. My approach is to use the entire body for input and output, to allow for implicit and natural interactions. I call my concept "perceptual engineering," i.e., a method to alter the user's perception (or more specifically the input signals to their perception) and manipulate it in subtle ways. Examples include modifying a user's sense of space, place, balance, and orientation, or manipulating their visual attention, all without the user's explicit input, in order to assist or guide their interactive experience effortlessly.

I build devices and immersive systems that explore the use of cognitive illusions to manage attention, physiological signals for interaction, deep learning for automatic VR generation, embodiment for remote collaborative learning, tangible interaction for augmenting play, haptics for enhancing immersion, and vestibular stimulation to mitigate motion sickness in VR. My "perceptual engineering" approach has been shown to (1) support implicit and natural interactions with haptic feedback, (2) induce believable physical sensations of motion in VR, (3) provide a novel way to communicate with the user through proprioception and kinesthesia, and (4) serve as a platform to question the boundaries of our sense of agency and trust. For decades, interaction design has been driven to answer the question: how can new technologies allow users to interact with digital content in the most natural way? If we look at the evolution of computing over the last 50 years, interaction has gone from punch cards to mouse and keyboard to touch and voice. Similarly, devices have become smaller and closer to the user's body. With every transition, the things people can do have become more personal. The main question that drives my research is: what is the next logical step?

Date and Time: 
Friday, December 7, 2018 - 1:15pm
Venue: 
Packard 202

Optics and Electronics Seminar

AP483 Optics & Electronics Seminar presents Ultrafast X-ray diffraction imaging with Free Electron Lasers

Topic: 
Ultrafast X-ray diffraction imaging with Free Electron Lasers
Abstract / Description: 

The advent of X-ray Free Electron Lasers (FELs) opens the door for unprecedented studies of non-crystalline nanoparticles with high spatial and temporal resolution. In the recent past, ultrafast X-ray imaging studies with intense, femtosecond FEL pulses have elucidated hidden processes in individual fragile specimens that are inaccessible with conventional imaging techniques. Examples include airborne soot particle formation [1], metastable states in the synthesis of metal nanoparticles [2] and transient vortices in superfluid quantum systems [3]. Theoretically, ultrafast coherent diffraction X-ray imaging (CDI) could achieve atomic resolution combined with sub-femtosecond temporal precision. Currently, the spatial resolution of ultrafast X-ray CDI is limited to several nanometers by a combination of factors such as X-ray photon flux, image imperfections and, ultimately, sample damage [4].

In this talk, I will present several experimental studies that address these limitations and/or demonstrate the potential of ultrafast CDI. In the first part of the talk, I will report on a novel "in-flight" holographic method which overcomes the phase problem and paves the way for high-resolution X-ray imaging in the presence of noise and image imperfections [5]. The second part will focus on potential applications of ultrafast X-ray CDI, such as the visualization of irreversible light-induced dynamics at the nanoscale with nanometer and sub-femtosecond resolution [6]. In the third part, I will present the world's first diffraction images of heavy-atom nanoparticles recorded with isolated soft X-ray attosecond pulses. The study indicates that the optimal combination of pulse length and X-ray energy can deviate significantly from linear models, and that control over transient resonances might be an efficient pathway toward improved spatial resolution [7].
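The phase problem mentioned above can be illustrated with a toy numerical sketch (synthetic data, purely for illustration, not from the referenced experiments): a diffraction detector records only the intensity |F(q)|^2, so the Fourier phases needed to invert the pattern back into the object are lost.

```python
import numpy as np

# Toy illustration of the CDI "phase problem": the detector records only the
# diffraction intensity |F(q)|^2, so the Fourier phases of the object are lost.
rng = np.random.default_rng(0)
obj = rng.random((32, 32))              # stand-in for an object's density

F = np.fft.fft2(obj)
intensity = np.abs(F) ** 2              # what the detector actually measures

# Inverting the magnitudes alone (all phases set to zero) does NOT recover
# the object; iterative phase retrieval or a holographic reference wave must
# supply the missing phases.
naive = np.fft.ifft2(np.sqrt(intensity)).real
phases_known = np.fft.ifft2(F).real     # with the phases, inversion is exact
```

Holographic methods such as the "in-flight" scheme described above sidestep iterative phase retrieval by interfering the object wave with a known reference, which encodes the phases directly in the measured pattern.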

In summary, ultrafast CDI is a powerful method for studies of transient non-equilibrium dynamics at the nanoscale. The increasing number of X-ray FEL facilities, and the constant improvement in accelerator and X-ray focusing technology will broaden our capabilities to observe transient states of matter. This development will have a significant impact on research fields such as catalysis, nanophotonics, matter under extreme conditions, light-matter interactions and biological studies.

[1] Loh, N. D. et al. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight. Nature 486, 513–517 (2012).
[2] Barke, I. et al. The 3D-architecture of individual free silver nanoparticles captured by X-ray scattering. Nat. Commun. 6, 6187 (2015).
[3] Gomez, L. F. et al. Shapes and vorticities of superfluid helium nanodroplets. Science 345, 906–909 (2014).
[4] Aquila, A. et al. The Linac Coherent Light Source single particle imaging road map. Struct. Dyn. 2, 041701 (2015).
[5] Gorkhover, T. et al. Femtosecond X-ray Fourier holography imaging of free-flying nanoparticles. Nat. Photon. 12, 150 (2018).
[6] Gorkhover, T. et al. Femtosecond and nanometre visualization of structural dynamics in superheated nanoparticles. Nat. Photon. 10, 93 (2016).
[7] Kuschel, S. et al., in preparation.

Date and Time: 
Monday, January 14, 2019 - 4:15pm
Venue: 
Spilker 232

John G. Linvill Distinguished Seminar on Electronic Systems Technology

Topic: 
Internet of Things and Internet of Energy for Connecting at Any Time and Any Place
Abstract / Description: 

In this presentation, I would like to discuss with you how to establish a sustainable and smart society through the Internet of Energy, for connecting at any time and any place. You have probably heard the phrase "Internet of Energy" less often. Its meaning is simple: with a ubiquitous energy transmission system, you do not need to worry about a shortage of electric power. One of the most important items for establishing a sustainable society is [...]


"Inaugural Linvill Distinguished Seminar on Electronic Systems Technology," EE News, July 2018

 

Date and Time: 
Monday, January 14, 2019 - 4:30pm
Venue: 
Hewlett 200

Optics & Electronics Seminar presents New designer materials: Sculpting electromagnetic fields on the atomic scale

Topic: 
New designer materials: Sculpting electromagnetic fields on the atomic scale
Abstract / Description: 

New optical nanomaterials hold the potential for breakthroughs in a wide range of areas, from ultrafast optoelectronics such as modulators, light sources and hyperspectral detectors, to efficient upconversion for energy applications, bio-sensing and on-chip components for quantum information science. An exciting opportunity to realize such new nanomaterials lies in controlling the local electromagnetic environment on the atomic and molecular scale (~1-10 nm), which enables extreme local field enhancements. We use creative nanofabrication techniques at the interface between chemistry and physics to realize this new regime, together with advanced, ultrafast optical techniques to probe the emerging phenomena. Here, I will provide an overview of our recent research, including high-speed thermal photodetectors, ultrafast spontaneous emission and enhanced biosensors.

Date and Time: 
Monday, November 26, 2018 - 4:15pm
Venue: 
Spilker 232

Detecting Single Photons with Superconductors

Topic: 
Detecting Single Photons with Superconductors
Abstract / Description: 

From space communications to quantum communications to sensing dark matter, ultrasensitive, ultrafast photodetectors are required. But conventional detector technologies often fall short, exhibiting noise, slow response times, poor sensitivity, or a combination of these issues. In contrast, superconducting detectors based on nanowires provide a unique combination of high speed, excellent efficiency, and low noise. Their underlying physical operating mechanism also provides a rich parameter space for applying physics across the optical, condensed-matter, and microwave domains. For example, we have recently used an ultra-slow plasmonic microwave mode in the nanowires to demonstrate single-photon-sensitive imaging. This rich physical parameter space for engineering has resulted in improved device performance and extended the impact of these devices even further.

Date and Time: 
Tuesday, November 27, 2018 - 4:15pm
Venue: 
Packard 101

OSA/SPIE Seminar presents Photovoltaic Restoration of Sight in Retinal Degeneration

Topic: 
Photovoltaic Restoration of Sight in Retinal Degeneration
Abstract / Description: 

Retinal degenerative diseases lead to blindness due to the loss of the "image-capturing" photoreceptors, while neurons in the "image-processing" inner retinal layers are relatively well preserved. Information can be reintroduced into the visual system using electrical stimulation of the surviving inner retinal neurons. Some electronic retinal prosthetic systems have already been approved for clinical use, but they provide low resolution and involve very difficult implantation procedures.

We developed a photovoltaic subretinal prosthesis which converts light into pulsed electric current, stimulating the nearby inner retinal neurons. Visual information is projected onto the retina from video goggles using pulsed near-infrared (~880 nm) light. This design avoids the use of bulky electronics and wiring, thereby greatly reducing the surgical complexity. Optical activation of the photovoltaic pixels allows scaling the implants to thousands of electrodes. In preclinical studies, we found that prosthetic vision with subretinal implants preserves many features of natural vision, including flicker fusion at high frequencies (>20 Hz), adaptation to static images, center-surround organization and non-linear summation of subunits in receptive fields, providing high spatial resolution. Results of the clinical trial with our implants (PRIMA, Pixium Vision) having 100 µm pixels, as well as preclinical measurements with 75 and 55 µm pixels, confirm that the spatial resolution of prosthetic vision can reach the sampling-density limit.

For broad acceptance of this technology by patients who have lost central vision due to age-related macular degeneration, visual acuity should exceed 20/100, which requires pixels smaller than 25 µm. I will describe the fundamental limitations in electro-neural interfaces and the 3-dimensional configurations which should enable such high spatial resolution. The ease of implantation of these wireless arrays, combined with high resolution, opens the door to highly functional restoration of sight.

Date and Time: 
Thursday, November 15, 2018 - 3:45pm
Venue: 
Shriram 262

AP483, Ginzton Lab, & AMO Seminar Series presents Impact of Structural Correlation and Monomer Heterogeneity in the Phase Behavior of Soft Materials and Chromosomal DNA

Topic: 
Impact of Structural Correlation and Monomer Heterogeneity in the Phase Behavior of Soft Materials and Chromosomal DNA
Abstract / Description: 

Polymer self-assembly plays a critical role in a range of soft-material applications and in the organization of chromosomal DNA in living cells. In many cases, the polymer chains are composed of incompatible monomers that are not regularly arranged along the chains. The resulting phase segregation exhibits considerable heterogeneity in the microstructures, and the size scale of these morphologies can be comparable to the statistical correlation that arises from the molecular rigidity of the polymer chains. To establish a predictive understanding of these effects, molecular models must retain sufficient detail to capture molecular elasticity and sequence heterogeneity. This talk highlights efforts to capture these effects using analytical theory and computational modeling. First, we demonstrate the impact of structural rigidity on the phase segregation of copolymer chains in the melt phase, resulting in non-universal phase phenomena due to the interplay of concentration fluctuations and structural correlation. We then demonstrate how these effects impact the phase behavior in statistical random copolymers and in heterogeneous copolymers based on chromosomal DNA properties. With these results, we demonstrate that the spatial segregation of DNA in living cells can be predicted using a heterogeneous copolymer model of microphase segregation.

Date and Time: 
Monday, November 5, 2018 - 4:15pm
Venue: 
Spilker 232

OSA/SPIE Seminar: Entanglement across disciplines

Topic: 
Entanglement across disciplines
Abstract / Description: 

As physicists or engineers we may be aware that philosophers and historians have long been interested in quantum theory and its potential ontological implications. Over the past few decades, diverse new branches of the humanities and social sciences have begun to grapple with aspects of quantum physics and to offer radical interpretive approaches. In this talk I'll briefly introduce some of these developments and then invite the audience to participate in an open discussion. The presentation will be non-technical in nature but I'll assume that everyone is familiar with the structure and application of quantum theory.

Date and Time: 
Wednesday, October 31, 2018 - 4:00pm
Venue: 
Spilker 232

AP483, Ginzton Lab, & AMO Seminar Series presents "Quantum Electrodynamics of Superconducting Circuits"

Topic: 
Quantum Electrodynamics of Superconducting Circuits
Abstract / Description: 

The demand for rapid and high-fidelity execution of initialization, gate and read-out operations casts tight constraints on the accuracy of quantum electrodynamic modeling of superconducting integrated circuits. Attaining the required accuracies requires reconsidering our basic approach to the quantization of the electromagnetic field in a light-confining medium and the notion of normal modes. I will discuss a computational framework based on the Heisenberg-Langevin approach to address these fundamental questions. This framework allows the accurate determination of the quantum dynamics of a superconducting qubit in an arbitrarily complex electromagnetic environment, free of divergences that have plagued earlier approaches. I will also discuss the effectiveness of this computational approach in meeting the demands of present-day quantum computing research.


During the 2018-2019 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483, Ginzton Lab, & AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, December 3, 2018 - 4:15pm
Venue: 
Spilker 232

AP483, Ginzton Lab, & AMO Seminar Series

Topic: 
When quantum-information scrambling met quasiprobabilities
Abstract / Description: 

We do physics partially out of a drive to understand essences. One topic whose essence merits understanding is the out-of-time-ordered correlator (OTOC). The OTOC reflects quantum many-body thermalization, chaos, and scrambling (the spread of quantum information through many-body entanglement). The OTOC, I will show, equals an average over a quasiprobability distribution. A quasiprobability resembles a probability but can become negative and nonreal. Such nonclassical values can signal nonclassical physics. The OTOC quasiprobability has several applications: Experimentally, the quasiprobability points to a scheme for measuring the OTOC (via weak measurements, which refrain from disturbing the measured system much). The quasiprobability also signals false positives in attempts to measure scrambling of open systems. Theoretically, the quasiprobability links the OTOC to uncertainty relations, to nonequilibrium statistical mechanics, and more strongly to chaos. As coarse-graining the quasiprobability yields the OTOC, the quasiprobability forms the OTOC's essence.
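As background, the OTOC referenced above is conventionally defined (for local unitary or Hermitian operators W and V evolving in the Heisenberg picture; this is the standard form, not a new result) as:

```latex
F(t) \;=\; \operatorname{Tr}\!\bigl[\rho\, W^{\dagger}(t)\, V^{\dagger}\, W(t)\, V\bigr],
\qquad W(t) := U^{\dagger}(t)\, W\, U(t)
```

For chaotic dynamics, F(t) decays from near unity as the evolved operator W(t) spreads and ceases to commute with V.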

References
• NYH, Phys. Rev. A 95, 012120 (2017). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.012120
• NYH, Swingle, and Dressel, Phys. Rev. A 97, 042105 (2018). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.042105
• NYH, Bartolotta, and Pollack, arXiv:1806.04147 (2018). https://arxiv.org/abs/1806.04147
• Gonzàlez Alonso, NYH, and Dressel, arXiv:1806.09637 (2018). https://arxiv.org/abs/1806.09637
• Swingle and NYH, Phys. Rev. A 97, 062113 (2018). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.062113
• Dressel, Gonzàlez Alonso, Waegell, and NYH, Phys. Rev. A 98, 012132 (2018). https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.012132


Date and Time: 
Monday, November 12, 2018 - 4:15pm
Venue: 
Spilker 232

SCIEN Talk

SCIEN Colloquium presents Deep Learning Meets Computational Imaging: Combining Data-Driven Priors and Domain Knowledge

Topic: 
Deep Learning Meets Computational Imaging: Combining Data-Driven Priors and Domain Knowledge
Abstract / Description: 

Neural networks have surpassed the performance of virtually any traditional computer vision algorithm thanks to their ability to learn priors directly from the data. The common encoder-decoder architecture with skip connections, for instance, has been successfully employed in a number of tasks, from optical flow estimation, to image deblurring, image denoising, and even higher-level tasks such as image-to-image translation.

To improve the results further, one must leverage the constraints of the specific problem at hand, in particular when the domain is fairly well understood, as is the case in computational imaging.

In this talk I will describe a few of my recent projects that build on this observation, ranging from reflection removal, to novel view synthesis, and deblurring.

Date and Time: 
Wednesday, February 20, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium presents Digital Humans At Disney Research

Topic: 
Digital Humans At Disney Research
Abstract / Description: 

Disney Research has been actively pushing the state-of-the-art in digitizing humans over the past decade, impacting both academia and industry. In this talk I will give an overview of a selected few projects in this area, from research into production. I will be talking about photogrammetric shape acquisition and dense performance capture for faces, eye and teeth scanning and parameterization, as well as physically based capture and modelling for hair and volumetric tissues.

Date and Time: 
Wednesday, February 13, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium presents Microscopic particle localization in 3D and in multicolor

Topic: 
Microscopic particle localization in 3D and in multicolor
Abstract / Description: 

Precise determination of the position of a single point source (e.g. a fluorescent molecule/protein, or a quantum dot) is at the heart of microscopy methods such as single particle tracking and super-resolution localization microscopy ((F)PALM, STORM). Localizing a point source in all three dimensions, i.e. including depth, poses a significant challenge; the depth of field of a standard high-NA microscope is fundamentally limited, and its point-spread function (PSF), namely the shape that a point source creates in the image plane, contains little information about the emitter's depth. Various techniques exist that enable 3D localization, prominent among them being PSF engineering, in which the PSF of a microscope is modified to encode the depth of the source. This is achieved by shaping the wavefront of the light emitted from the sample, using a phase mask in the pupil (Fourier) plane of the microscope.


In this talk, I will describe how our search for the optimal PSF for 3D localization, using tools from estimation theory, led to the development of microscopy systems with unprecedented capabilities in terms of depth of field and spectral discrimination. Such methods enable fast, precise, non-destructive localization in thick samples and in multicolor. Applications of these novel advances will be demonstrated, including super-resolution imaging, tracking biomolecules in living cells, and microfluidic flow profiling. I will also present our most recent results: (1) the application of deep learning to difficult localization problems (high density, low SNR, multicolor imaging), and (2) precise refractometry from minute volumes by super-critical-angle fluorescence.
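The depth-encoding idea behind PSF engineering can be sketched with a classic astigmatism-style toy model (a cylindrical-lens scheme with invented parameters, not the speaker's optimized PSFs): the phase mask makes the PSF widths along x and y depend oppositely on depth, so the measured width pair encodes z.

```python
import numpy as np

# Hypothetical toy model of astigmatic PSF engineering: the x and y widths
# of the PSF defocus in opposite directions, so the width ratio encodes the
# emitter depth z. All parameters (w0, z_R, focal offset d) are illustrative.
w0, zR, d = 1.0, 400.0, 250.0       # widths in a.u., depths in nm

def widths(z):
    wx = w0 * np.sqrt(1 + ((z - d) / zR) ** 2)
    wy = w0 * np.sqrt(1 + ((z + d) / zR) ** 2)
    return wx, wy

def estimate_z(wx, wy, zgrid=np.linspace(-600, 600, 2401)):
    # Invert the forward model by least squares over a depth grid.
    wxg, wyg = widths(zgrid)
    return zgrid[np.argmin((wxg - wx) ** 2 + (wyg - wy) ** 2)]

z_true = 150.0
z_hat = estimate_z(*widths(z_true))   # recovers z_true on this toy model
```

In practice, engineered PSFs are designed by optimizing an information-theoretic criterion (e.g. the Cramér-Rao bound on the depth estimate) rather than this simple grid inversion.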

Date and Time: 
Wednesday, February 6, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium presents Optical Super-resolution Microscopy with Spatial Frequency Shift

Topic: 
Optical Super-resolution Microscopy with Spatial Frequency Shift
Abstract / Description: 

Seeing beyond the diffraction limit of optical microscopy is of great significance. State-of-the-art super-resolution microscopy methods, such as STED and STORM, rely on fluorescent labeling of the sample. It remains challenging to obtain super-resolution for unlabeled samples without a fluorescent effect. To this end, we have developed a novel super-resolution method, called Spatial Frequency Shift (SFS), to realize deep super-resolution in wide-field imaging with or without a fluorescent effect. The principle and applications of this SFS technique will be presented.

Date and Time: 
Tuesday, February 5, 2019 - 10:00am
Venue: 
Packard 101

SCIEN Colloquium and EE292E present "Bottlenecks in Autonomy: The Last 1%"

Topic: 
Bottlenecks in Autonomy: The Last 1%
Abstract / Description: 

Accurate and reliable 3D perception is the key remaining bottleneck to making self-driving vehicles safe and ubiquitous. Today, it's relatively easy to get an autonomous car to work 99% of the time, but it's the incredibly long tail of edge cases that's preventing them from reaching real-world deployment without a backup driver constantly watching over. All of this comes down to how well the autonomous car can see and understand the world around it. The key to achieving accurate, safer-than-human level 3D perception all starts with the LiDAR. That said, both legacy LiDAR solutions and newer upstarts, which largely leverage off-the-shelf components, have still struggled to meet the stringent performance requirements needed to solve key edge cases encountered in everyday driving scenarios.
Luminar, founded in 2012 by Austin Russell, has taken an entirely new approach to LiDAR, building its system from the ground up at the component level for over 5 years. The result was the first and only solution that meets and exceeds all of the key performance requirements demanded by car and truck OEMs and technology leaders to achieve safe autonomy, with unit economics that can enable widespread adoption across even mainstream consumer vehicle platforms. This culminated in last year's release of their first scalable product for autonomous test and development fleets, which has subsequently led to rapidly accelerating adoption in the market. During this talk, raw Luminar LiDAR data from autonomous test vehicles will be presented, demonstrating real-world examples of life-threatening edge cases and how they can now be avoided.

Date and Time: 
Wednesday, January 30, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium presents "Leveraging lanthanide-based spectral encoding for highly multiplexed biological assays"

Topic: 
Leveraging lanthanide-based spectral encoding for highly multiplexed biological assays
Abstract / Description: 

Encoded microparticles have become a powerful tool for a wide array of applications, including high-throughput sample tracking and massively parallel biological multiplexing. Spectral encoding, where particles are encoded with distinct luminescence spectra, provides a particularly appealing encoding strategy because of the ease of reading codes and assay flexibility. We recently developed a microfluidic method for producing microparticles with >1,100 spectral codes by ratiometrically embedding different amounts of lanthanide nanophosphors within them, which we term MRBLEs (Microspheres with Ratiometric Barcode Lanthanide Encoding). We are now applying these MRBLEs to a wide variety of biological problems, from high-throughput, quantitative profiling of protein-peptide interactions to specific and sensitive detection of multiple bacterial species from blood for fast diagnosis of sepsis.
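The scale of ratiometric spectral encoding can be sketched with a simple count (the species and level numbers below are illustrative, not the actual MRBLE production parameters): if each of n lanthanide species is embedded at one of m distinguishable intensity levels relative to a reference, the code space grows as m**n.

```python
from itertools import product

# Illustrative count of ratiometric spectral codes: n lanthanide species,
# each at one of m distinguishable relative-intensity levels, gives m**n
# distinct codes. With 3 species and 11 levels this already exceeds the
# >1,100 codes quoted above.
n_lanthanides, n_levels = 3, 11
codes = list(product(range(n_levels), repeat=n_lanthanides))
```

The practical limit on the number of levels is set by how precisely the embedded intensity ratios can be produced and read out.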

Date and Time: 
Wednesday, January 23, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium presents How many pixels are too many?

Topic: 
How many pixels are too many?
Abstract / Description: 

We are starting to lack the processing power and bandwidth to drive 8K and high-resolution head-mounted displays. However, since the human eye and visual system have their own limitations, the relevant question is what spatial and temporal resolution constitutes the ultimate limit for any display technology. In this talk, I will review visual models of spatio-temporal and chromatic contrast sensitivity which can explain such limitations. Then I will show how they can be used to reduce rendering cost in VR applications, find more efficient encodings of high-dynamic-range images, and compress images in a visually lossless manner.
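A back-of-envelope calculation along these lines, using the commonly cited ~60 cycles/degree foveal acuity limit (the screen size and viewing distance below are illustrative):

```python
import math

# Back-of-envelope sketch of "how many pixels are too many": the fovea
# resolves at most ~60 cycles/degree, so by Nyquist ~120 pixels/degree
# suffice at the sharpest point of vision.
ACUITY_CPD = 60.0               # approximate foveal contrast-sensitivity cutoff
px_per_deg = 2 * ACUITY_CPD     # Nyquist: two samples per cycle

def required_horizontal_pixels(screen_width_m, distance_m):
    # Visual angle subtended by the screen, times the required pixel density.
    half_angle = math.degrees(math.atan((screen_width_m / 2) / distance_m))
    return 2 * half_angle * px_per_deg

# A 1.2 m-wide TV viewed from 2.5 m:
px = required_horizontal_pixels(1.2, 2.5)   # ≈ 3240 px; 4K already exceeds it
```

This ignores eccentricity (acuity falls off rapidly away from the fovea), which is exactly what perceptual models exploit to cut rendering and encoding cost further.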

Date and Time: 
Friday, January 18, 2019 - 10:00am
Venue: 
Packard 101

SCIEN Colloquium presents Image Domain Transfer

Topic: 
Image Domain Transfer
Abstract / Description: 

Image domain transfer includes methods that transform an image based on an example, commonly used in photorealistic and artistic style transfer, as well as learning-based methods that learn a transfer function based on a training set. These are usually based on generative adversarial networks (GANs), and can be supervised or unsupervised as well as unimodal or multimodal. I will present a number of our recent methods in this space that can be used to translate, for instance, a label map to a realistic street image, a day time street image to a night time street image, a dog to different cat breeds, and many more.

Date and Time: 
Wednesday, January 9, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Industry Affiliates Meeting

Topic: 
SCIEN Industry Affiliates Meeting
Abstract / Description: 

The SCIEN Industry Affiliates Meeting gives you the opportunity to meet new SCIEN faculty and the postdocs and graduate students working in image systems engineering, with expertise in optics, computational imaging, human vision and machine learning. Read poster abstracts and bios.

Registration is required
Date and Time: 
Friday, November 30, 2018 - 1:30pm to 5:30pm
Venue: 
- please register -

SmartGrid

SmartGrid Seminar presents "Electricity Network Design and Operation in an Era of Solar and Storage"

Topic: 
Electricity Network Design and Operation in an Era of Solar and Storage
Abstract / Description: 

As prices for solar photovoltaics and battery energy storage plummet, grids around the globe are undergoing tremendous changes. How should we design and operate grids in the future in the presence of these technologies? This talk will cover some of my group's recent efforts to answer this question, focusing on a new approach to decentralized network optimization (a variant of the primal-dual subgradient method) that can be used to enable grid integration of distributed energy resources such as solar photovoltaics, batteries and electric vehicles. I will then discuss how grids should be built in the future when distributed energy resource costs are so low. Using a simple concept called an iso-reliability curve, I will explain a method to identify cost-optimal, fully decentralized systems, i.e., standalone solar home systems. After applying this method to a large solar resource dataset, I will present results indicating that in many unelectrified parts of the world, future decentralized systems will be able to deliver electricity at costs and reliabilities better than existing centralized grids.
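The dual-decomposition idea behind such primal-dual methods can be sketched as follows (a generic textbook dual subgradient scheme with invented numbers, not the speaker's exact variant): each device responds only to a broadcast price, and the price adjusts until a shared network constraint is satisfied.

```python
import numpy as np

# Generic dual subgradient sketch: devices choose injections x_i to maximize
# local utilities a_i*log(1 + x_i), coupled only by a shared capacity
# constraint sum(x_i) <= cap. The price lam is the only shared information,
# so the primal step is fully decentralized.
a = np.array([1.0, 2.0, 4.0])    # illustrative utility weights
cap, lam, step = 2.0, 1.0, 0.01

for _ in range(3000):
    # Primal step: each device best-responds to the current price alone:
    # x_i = argmax_x [a_i*log(1 + x) - lam*x] = max(0, a_i/lam - 1).
    x = np.maximum(0.0, a / lam - 1.0)
    # Dual step: raise the price when the shared capacity is exceeded.
    lam = max(1e-6, lam + step * (x.sum() - cap))

x = np.maximum(0.0, a / lam - 1.0)   # converged allocation; sums to cap
```

For these numbers the iteration settles at a price lam ≈ 1.5 with allocation x ≈ [0, 1/3, 5/3]: the lowest-value device is priced out and the constraint binds exactly, without any device ever seeing the others' data.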


The seminars are scheduled for 1:30 pm on the dates listed above. The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

 

Yours sincerely,
Smart Grid Seminar Organization Team,

Ram Rajagopal, Associate Professor, Civil & Environmental Engineering, and Electrical Engineering
Sila Kiliccote, Managing Director of Grid Innovations, Bits & Watts 
Chin-Woo Tan, Director, Stanford Smart Grid Lab 
Yuting Ji, Postdoctoral Scholar, Civil and Environmental Engineering

Date and Time: 
Thursday, December 6, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Battery storage

Topic: 
Battery storage: New Applications, Markets and Business Models
Abstract / Description: 

Since 2015, Tesla has installed a total of over one gigawatt-hour of energy storage, which is critical for using renewable energy at scale. Over 20,000 customers across 40 countries are using Tesla stationary storage products for a variety of sustainable energy applications: powering filtration systems for clean water in Puerto Rico, stabilizing the grid in Australia, cooling classrooms in Hawaii, and powering entire islands in the South Pacific. This talk will introduce the general efforts of Tesla's Energy Optimization Team, which develops the "brain" of its energy storage products. Optimization and machine learning techniques are utilized across all of these products. A few recent projects will also be presented.


Date and Time: 
Thursday, November 15, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar presents Power Electronics: A Key Enabling Technology for Smart Grid

Topic: 
Power Electronics: A Key Enabling Technology for Smart Grid
Abstract / Description: 

Power electronic converters impact all aspects of power systems – generation (renewable), transmission, distribution and end use. Power electronics is the key technology that enables reliable and secure integration of very large-scale renewable resources to the grid, new architectures including micro-grids, distributed grid control, and the rapid shift to electric transportation. This talk will highlight power electronics and controls in advanced PV inverters, wind energy systems, solid-state transformers and EV infrastructure. Key concepts that explain how the advanced functionalities are realized will be described. Recent advances in high voltage power electronics with wide bandgap devices, new topologies, and emerging trends and research challenges will be presented.


Date and Time: 
Thursday, November 8, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Clean Energy at the Crossroads: A Look Ahead Through the Eyes of an Environmental Economist

Topic: 
Clean Energy at the Crossroads: A Look Ahead Through the Eyes of an Environmental Economist
Abstract / Description: 

California has the will, ambition, technology and legal requirement to decarbonize our energy sector by 2045. In the dozen years since passage of AB32, we have made great progress but we may be making some grave mistakes. In this discussion, Dr. Fine describes how distributed energy resources are presenting new opportunities in distribution resources, transmission and procurement planning, and market reforms that will determine if our clean energy future is one that is affordable for all.

Date and Time: 
Thursday, October 11, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Future Power System Control Functions: An Industry Perspective

Topic: 
Future Power System Control Functions: An Industry Perspective
Abstract / Description: 

This talk provides an overview of Siemens Corporate Technology's recent research on new control functions for future power systems. Three different topics are discussed: (a) adaptive power oscillation damping optimization to increase the stability reserve of power systems, (b) robust power flow optimization to increase power system resilience to volatile generation, and (c) new research challenges for autonomous microgrids that provide autonomous operation and plug-and-produce capabilities.

Date and Time: 
Thursday, May 31, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Trends in Electric Power Distribution System Analysis at PNNL

Topic: 
Trends in Electric Power Distribution System Analysis at PNNL
Abstract / Description: 

Pacific Northwest National Laboratory (PNNL) originated and continues to maintain one of the two leading open-source distribution system simulators, called GridLAB-D, which has been downloaded 80,000+ times worldwide. While continuing to improve its core functionality, PNNL has recently placed more emphasis on GridLAB-D as part of a development platform, improving its interoperability and opening the software up to more customization by researchers. This talk will cover two ongoing open-source development projects, funded by the U.S. Department of Energy, that incorporate and extend GridLAB-D. One of these projects is also expected to contribute distribution feeder model conversion tools for a new California Energy Commission project headed by SLAC. Highlights of the talk will include:

  • Transactive energy simulation platform, at tesp.readthedocs.io/en/latest
  • GridAPPS-D application development platform, at gridappsd.readthedocs.io/en/latest
  • Evolve GridLAB-D's co-simulation support from the FNCS interface to a multi-lab interface called HELICS, compliant with the Functional Mockup Interface (FMI): https://github.com/GMLC-TDC/HELICS-src
  • Leveraging new capabilities for large-building simulation in JModelica, power flow analysis in OpenDSS, and transactive energy system agents in Python
  • Implementation and use of the Common Information Model (CIM) in a NoSQL triple-store database for standardized feeder model conversion
  • Comparison of different types of stochastic modeling for load and distributed energy resource (DER) output variability, and its impact on feeder model order reduction and state estimation
  • Special system protection example concerns on urban secondary networks with high penetration of DER
Date and Time: 
Thursday, May 24, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Renewable Scenario Generation Using Adversarial Networks

Topic: 
Renewable Scenario Generation Using Adversarial Networks
Abstract / Description: 

Scenario generation is an important step in the operation and planning of power systems. In this talk, we present a data-driven approach for scenario generation using the popular generative adversarial networks, where two deep neural networks are used in tandem. Compared with existing methods that are often hard to scale or sample from, our method is easy to train, robust, and captures both spatial and temporal patterns in renewable generation. In addition, we show that different conditional information can be embedded in the framework. Because of the feedforward nature of the neural networks, scenarios can be generated extremely efficiently.
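The efficiency claim above follows from the fact that, once trained, the generator is just a feedforward map from noise to scenarios. A minimal sketch of that sampling step, using a toy NumPy network with random stand-in weights (not the authors' trained model; the dimensions are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward generator: maps Gaussian noise to 24-step "daily" scenarios.
# The weights below are random stand-ins for parameters a trained GAN would learn.
NOISE_DIM, HIDDEN, HORIZON = 8, 32, 24
W1 = rng.normal(scale=0.5, size=(NOISE_DIM, HIDDEN))
W2 = rng.normal(scale=0.5, size=(HIDDEN, HORIZON))

def generate_scenarios(n):
    """One feedforward pass: noise -> ReLU hidden layer -> scenario matrix."""
    z = rng.normal(size=(n, NOISE_DIM))
    h = np.maximum(z @ W1, 0.0)   # ReLU hidden layer
    return h @ W2                 # n scenarios, each HORIZON time steps

scenarios = generate_scenarios(1000)   # 1000 scenarios in a single pass
print(scenarios.shape)                 # (1000, 24)
```

Each sample costs only two matrix multiplications, which is why sampling scales so cheaply compared with methods that require iterative simulation.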

Date and Time: 
Thursday, April 19, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Increasing Power Grid Resiliency for Adverse Conditions & the Role of Renewable Energy Resources and Microgrids

Topic: 
Increasing Power Grid Resiliency for Adverse Conditions & the Role of Renewable Energy Resources and Microgrids
Abstract / Description: 

System resiliency is the number one concern for electrical utilities in 2018, according to the CEO of PJM, the nation's largest independent system operator. This talk will offer insights and practical answers, through examples, on how power grids can be affected by weather and how countermeasures, such as microgrids, can be applied to mitigate those effects. It will focus on two major events, Superstorm Sandy and Hurricane Maria, and the role of renewable energy resources and microgrids in these two natural disasters. It will also discuss the role of microgrids in black-starting the power grid after a blackout.

Date and Time: 
Thursday, April 12, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration

Topic: 
Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration
Abstract / Description: 

Transmission-distribution coordinated energy management (TDCEM) is recognized as a promising solution to the challenge of high DER penetration, but it lacks a distributed computation method that works universally and effectively. To bridge this gap, a generalized master-slave-splitting (G-MSS) method is proposed based on a general-purpose transmission-distribution coordination model (G-TDCM), enabling G-MSS to be applicable to most central functions of TDCEM. In G-MSS, a basic heterogeneous decomposition (HGD) algorithm is first derived from the heterogeneous decomposition of the coupling constraints in the KKT system of G-TDCM. Optimality and convergence properties of this algorithm are proved. Furthermore, a modified HGD algorithm is developed by utilizing the subsystems' response functions, resulting in faster convergence. The distributed G-MSS method is then demonstrated to successfully solve central functions of TDCEM, including power flow, contingency analysis, voltage stability assessment, economic dispatch, and optimal power flow. Severe issues of over-voltage and erroneous assessment of system security caused by DERs are thus resolved by G-MSS at modest computation cost. A real-world demonstration project in China will be presented.

Date and Time: 
Thursday, April 5, 2018 - 1:30pm
Venue: 
Y2E2 111

Pages

Stanford's NetSeminar

John G. Linvill Distinguished Seminar on Electronic Systems Technology

Topic: 
Internet of Things and Internet of Energy for Connecting at Any Time and Any Place
Abstract / Description: 

In this presentation, I would like to discuss with you how to establish a sustainable and smart society through the internet of energy for connecting at any time and any place. I suspect that you have heard the phrase "Internet of Energy" less often. The meaning of this phrase is simple. Because of a ubiquitous energy transmission system, you do not need to worry about a shortage of electric power. One of the most important items for establishing a sustainable society is [...]


"Inaugural Linvill Distinguished Seminar on Electronic Systems Technology," EE News, July 2018

 

Date and Time: 
Monday, January 14, 2019 - 4:30pm
Venue: 
Hewlett 200

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks which examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with the choice of only one of two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.
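BlindBox's actual protocol rests on purpose-built encryption schemes that are not reproduced here. Purely to convey the flavor of the core idea, matching DPI rules against opaque tokens without exposing the rest of the payload, here is a toy sketch that substitutes HMAC tags for BlindBox's real scheme (the key handling and tokenization below are illustrative assumptions, not the paper's design):

```python
import hmac, hashlib

def token(key: bytes, word: bytes) -> bytes:
    """Deterministic tag for one keyword; equal words yield equal tags."""
    return hmac.new(key, word, hashlib.sha256).digest()

session_key = b"per-session key established by the endpoints"

# The middlebox holds tokens for its rule keywords, not the plaintext traffic.
rules = [b"attack-signature", b"exfil-marker"]
rule_tokens = {token(session_key, r) for r in rules}

# The sender tokenizes the payload alongside normal encryption (omitted here).
payload_words = b"normal traffic with exfil-marker inside".split()
payload_tokens = [token(session_key, w) for w in payload_words]

# The middlebox flags rule hits by token equality alone.
matches = sum(t in rule_tokens for t in payload_tokens)
print(matches)  # 1
```

Even this toy version shows the trade-off the paper engineers around: deterministic tags leak match patterns, which is why BlindBox needs new encryption schemes rather than plain keyed hashing.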

Date and Time: 
Wednesday, November 11, 2015 - 12:15pm to 1:30pm
Venue: 
Packard 202

NetSeminar

Topic: 
Precise localization and high throughput backscatter using WiFi signals
Abstract / Description: 

Indoor localization holds great promise to enable applications like location-based advertising, indoor navigation, inventory monitoring and management. SpotFi is an accurate indoor localization system that can be deployed on commodity WiFi infrastructure. SpotFi only uses information that is already exposed by WiFi chips and does not require any hardware or firmware changes, yet achieves the same accuracy as state-of-the-art localization systems.

We then talk about BackFi, a novel communication system that enables high throughput, long range communication between very low power backscatter IoT sensors and WiFi APs using ambient WiFi transmissions as the excitation signal. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 meter and 1 Mbps at a range of 5 meters. Such performance is one to three orders of magnitude better than the best known prior WiFi backscatter system.

Date and Time: 
Thursday, October 15, 2015 - 12:15pm to 1:30pm
Venue: 
Gates 104

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks which examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with the choice of only one of two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.

Date and Time: 
Wednesday, October 7, 2015 - 12:15pm to 1:30pm
Venue: 
AllenX Auditorium

Pages

Statistics and Probability Seminars


Statistics Seminar: Inference, Computation, and Visualization for Convex Clustering and Biclustering

Topic: 
Inference, Computation, and Visualization for Convex Clustering and Biclustering
Abstract / Description: 

Hierarchical clustering enjoys wide popularity because of its fast computation, ease of interpretation, and appealing visualizations via the dendrogram and cluster heatmap. Recently, several authors have proposed and studied convex clustering and biclustering which, similar in spirit to hierarchical clustering, achieve cluster merges via convex fusion penalties. While these techniques enjoy superior statistical performance, they suffer from slower computation and are not generally conducive to representation as a dendrogram. In the first part of the talk, we present new convex (bi)clustering methods and fast algorithms that inherit all of the advantages of hierarchical clustering. Specifically, we develop a new fast approximation and variation of the convex (bi)clustering solution path that can be represented as a dendrogram or cluster heatmap. Also, as one tuning parameter indexes the sequence of convex (bi)clustering solutions, we can use these to develop interactive and dynamic visualization strategies that allow one to watch data form groups as the tuning parameter varies. In the second part of this talk, we consider how to conduct inference for convex clustering solutions, addressing questions like: Are there clusters in my data set? Should two clusters be merged into one? To achieve this, we develop a new geometric representation of Hotelling's T2-test that allows us to use the selective inference paradigm to test multivariate hypotheses for the first time. We can use this approach to test hypotheses and calculate confidence ellipsoids on the cluster means resulting from convex clustering. We apply these techniques to examples from text mining and cancer genomics.

This is joint work with John Nagorski and Frederick Campbell.
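The fusion-penalty mechanism behind convex clustering can be seen on toy 1-D data. The sketch below uses plain subgradient descent (a deliberately simple stand-in, not the fast algorithms of the talk): as the tuning parameter `lam` grows, the fitted centroids merge, which is the sequence of merges a dendrogram would visualize.

```python
import numpy as np

x = np.array([0.0, 0.1, 5.0, 5.1])   # toy 1-D data with two obvious groups

def convex_cluster(x, lam, steps=3000, lr=0.01):
    """Subgradient descent on the convex clustering objective
       0.5 * sum_i (u_i - x_i)^2 + lam * sum_{i<j} |u_i - u_j|."""
    u = x.copy()
    for _ in range(steps):
        # Subgradient of the pairwise fusion penalty for each centroid u_i
        fusion = np.sign(u[:, None] - u[None, :]).sum(axis=1)
        u -= lr * ((u - x) + lam * fusion)
    return u

spread = lambda u: u.max() - u.min()
u0 = convex_cluster(x, lam=0.0)   # no fusion: centroids stay at the data
u2 = convex_cluster(x, lam=2.0)   # strong fusion: centroids merge
print(round(spread(u0), 2) > round(spread(u2), 2))  # True
```

Sweeping `lam` from 0 upward traces the full solution path of merges; the talk's contribution is computing and visualizing that path far more efficiently than this naive iteration.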


The Statistics Seminars for Winter Quarter will be held in Room 380Y of the Sloan Mathematics Center in the Main Quad at 4:30pm on Tuesdays. 

Date and Time: 
Tuesday, March 13, 2018 - 4:30pm
Venue: 
Sloan Mathematics Building, Room 380Y

Statistics Seminar: Understanding rare events in models of statistical mechanics

Topic: 
Understanding rare events in models of statistical mechanics
Abstract / Description: 

Statistical mechanics models are ubiquitous at the interface of probability theory, information theory, and inference problems in high dimensions. To develop a refined understanding of such models, one often needs to study not only typical fluctuation theory but also the realm of atypical events. In this talk, we will focus on sparse networks and polymer models on lattices. In particular we will consider the rare events that a sparse random network has an atypical number of certain local structures, and that a polymer in random media has atypical weight. The random geometry associated with typical instances of these rare events is an important topic of inquiry: this geometry can involve merely local structures, or more global ones. We will discuss recent solutions to certain longstanding questions and connections to stochastic block models, exponential random graphs, eigenvalues of random matrices, and fundamental growth models.

Date and Time: 
Tuesday, January 30, 2018 - 4:30pm
Venue: 
Sloan Mathematics Building, Room 380Y

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Statistics Seminar

Topic: 
Brownian Regularity for the Airy Line Ensemble
Abstract / Description: 

The Airy line ensemble is a positive-integer indexed ordered system of continuous random curves on the real line whose finite dimensional distributions are given by the multi-line Airy process. It is a natural object in the KPZ universality class: for example, its highest curve, the Airy2 process, describes, after the subtraction of a parabola, the limiting law of the scaled weight of a geodesic running from the origin to a variable point on an anti-diagonal line in such problems as Poissonian last passage percolation. The Airy line ensemble enjoys a simple and explicit spatial Markov property, the Brownian Gibbs property.


In this talk, I will discuss how this resampling property may be used to analyse the Airy line ensemble. Arising results include a close comparison between the ensemble's curves after affine shift and Brownian bridge. The Brownian Gibbs technique is also used to compute the value of a natural exponent describing the decay in probability for the existence of several near geodesics with common endpoints in Brownian last passage percolation, where the notion of "near" refers to a small deficit in scaled geodesic weight, with the parameter specifying this nearness tending to zero.

Date and Time: 
Monday, September 26, 2016 - 4:30pm
Venue: 
Sequoia Hall, room 200

Pages

SystemX

SystemX Seminar presents "What’s Holding Back the Digital Technology Revolution in the Clinical Development of Drugs and Biologics?"

Topic: 
What’s Holding Back the Digital Technology Revolution in the Clinical Development of Drugs and Biologics?
Abstract / Description: 

New technologies are revolutionizing clinical trials and transforming drug and biologic development in ways that are unprecedented. In the process, digital data capture technologies allow for increasing data types and volume, improved accuracy and precision, and reductions in variance. They are also opening a window for capturing "real-world" data, minimizing patient inconvenience, increasing patient compliance, and decreasing site-monitoring costs. These new technologies are helping to transform the concept of what should constitute a medicine, so that new types of tech-enabled therapeutics will advance the ways in which both medical treatments are delivered and how clinical trials are performed.

There are many goals associated with using the new digital technologies in clinical trials. Among these are using wearable biosensors (or "unawearables") and "smart delivery systems" to collect large amounts of new types of data, as well as to analyze them in sophisticated ways in "near real-time." These approaches are helping to transform the mechanics of clinical research so that site-less trials are now becoming possibilities in some therapeutic areas. The pace of ongoing evolution has been rapid, and according to the editors of Applied Clinical Trials, "the ability to amalgamate, organize and analyze real-world data sets with structured data sets... can provide new, provable indications of hidden relationships that can precipitate better support, new hypotheses and provoke new, potentially life-saving questions that we didn't think to ask before."

Pharmaceutical companies are attempting to leverage the newer digital technologies to help develop targeted therapeutics faster, more efficiently, and more cheaply than ever before. In many cases, the technologies to do so have already been developed. Yet, to date, they are not being used at scale. So, why isn't the world of clinical trials changing for the better more rapidly? The answer is the need for clinical validation (and qualification) of what amounts to a new world of digital biomarkers. As with wet biomarkers, these must be rigorously validated both technically and clinically to ensure that they are proper indicators of meaningful clinical changes – ones that are both useful and robust.

One of the current challenges faced by pharmaceutical companies is that there is an intrinsic mismatch between the development times for high assurance "smart medical devices" and overall drug development processes due to intrinsic cycle time differences. As an example, consumer devices/software can typically be developed in 6-18 months (or less). For devices with medical-grade functions, development times can range up to 3-5 years, with iterative device upgrades occurring every 1-2 years. In contrast, drug/biologic development programs generally follow a more conservative and risk-averse strategy. The resulting timelines can be lengthy due to the lifecycle/biology of the disease under consideration, and they often require up to 6-8 years (i.e., patients must be recruited and retained in clinical trials to assess differences in hard clinical outcomes).

In general, medical device development is iterative and guided by Design Control and Quality System principles that foster rapid engineering processes and quick development times. To accelerate these processes further, medical device manufacturers have attempted to adopt agile development paradigms that retain the necessary compliance characteristics from a regulatory standpoint. This faster pace introduces a complication: Numerous versions of a medical device may be developed over the course of a single drug development program, resulting in the need to re-qualify and cross-validate the outputs of the devices to ensure that their data are consistent. These must then be mated to analytical paradigms that are fit-for-purpose and qualified to assess clinically meaningful differences and treatment effects. Lastly, the regulatory landscape for "digital medical devices" is evolving in different and non-equivalent ways in the United States, EU, and Japan.

In summary, currently available digital technologies represent a likely tipping point for the way that clinical trials of the future will be conducted. The challenge is to determine how to deploy currently available technology sensibly in order to advance the clinical trial process and existing drug pipelines faster.

The speaker, Gerard G. Nahum, MD (Vice President of Clinical Development at Bayer Pharmaceuticals, Head of Medical Devices & eHealth), will address these various issues from a pharmaceutical industry perspective and outline the challenges that explain why we haven't yet seen adoption of these new digital technologies in clinical trials at scale.

Date and Time: 
Thursday, February 21, 2019 - 4:30pm
Venue: 
Bldg. 380 Rm. 380X

SystemX presents Quantum Computing with Spins in Silicon

Topic: 
Quantum computing with spins in silicon
Abstract / Description: 

The realization of quantum computers will provide a new and unprecedented computing resource that could significantly impact many industries ranging from medicine to artificial intelligence. Recently, the field of quantum computing has been transitioning from experimental demonstrations of quantum bits (qubits) to engineering larger scale quantum systems with the aid of industry. Electron spin qubits in silicon are an excellent candidate for this purpose as they can be made using transistor-like structures that are CMOS compatible, opening up the possibility to leverage the semiconductor industry [1].

In this talk I will discuss the state of the art in the silicon spin qubit field, focusing on my recent work at TU Delft, where we demonstrated a programmable two-qubit quantum processor that could perform the Deutsch-Jozsa and Grover search algorithms [2]. Moving to larger-scale qubit systems will require the ability to make reproducible qubit arrays, which is incredibly challenging for university clean rooms due to limited process control and slow turnaround. I will discuss how these issues are being addressed at Intel through the use of their industrial 300mm fabrication line and expertise in high-volume electrical testing.

[1] F. A. Zwanenburg et al. Silicon quantum electronics, Rev. Mod. Phys. 85, 961 (2013)

[2] T. F. Watson et al. A programmable two-qubit processor in silicon, Nature 555, 633-637 (2018)

Date and Time: 
Thursday, February 28, 2019 - 4:30pm
Venue: 
Bldg. 380 Rm. 380X

SystemX BONUS lecture: Secure and Spectrum-Aware Wireless Communications

Topic: 
Secure and Spectrum-Aware Wireless Communications: Challenges and Opportunities
Abstract / Description: 

The Internet of Things (IoT) is redefining how we interact with the world by supplying a global view based not only on human-provided data but also on data from connected devices. For example, in Health Care, IoT will bring decreased costs, improved treatment results, and better disease management. However, the connectivity-in-everything model brings heightened security concerns. Additionally, the projected growth of connected nodes not only increases security concerns, it also leads to a 1000-fold increase in wireless data traffic in the near future. This data storm results in a spectrum scarcity thereby driving the urgent need for shared spectrum access technologies. These security deficiencies and the wireless spectrum crunch require innovative system-level secure and scalable solutions.

This talk will introduce energy-efficient and application-driven system-level solutions for secure and spectrum-aware wireless communications. I will present an ultra-fast bit-level frequency-hopping scheme for physical-layer security. This scheme utilizes the frequency agility of devices in combination with time-interleaved radio frequency architectures and protocols to achieve secure wireless communications. To address the wireless spectrum crunch, future smart radio systems will evaluate the spectrum usage dynamically and opportunistically use the underutilized spectrum; this will require spectrum sensing for interferer avoidance. I will discuss a system-level approach using band-pass sparse signal processing for rapid interferer detection in a wideband spectrum to convert the abstract improvements promised by sparse signal processing theory, e.g., fewer measurements, to concrete improvements in time and energy efficiency. Beyond these system-level solutions, I will also discuss future research directions including secure package-less THz tags and ingestible micro-bio-electronic devices.
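Frequency-hopping security of the kind described above depends on both endpoints deriving the same pseudorandom hop sequence that an eavesdropper cannot predict. A minimal sketch of one way that agreement could work (the channel count and HMAC-based key derivation are illustrative assumptions, not the speaker's protocol):

```python
import hmac, hashlib

N_CHANNELS = 64  # assumed channel count, for illustration only

def hop_channel(key: bytes, bit_index: int) -> int:
    """Derive the hop channel for a given bit from a shared secret key."""
    mac = hmac.new(key, bit_index.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac[:2], "big") % N_CHANNELS

key = b"secret shared by transmitter and receiver"

# Both endpoints compute the sequence independently and land on the same hops.
tx_hops = [hop_channel(key, i) for i in range(16)]
rx_hops = [hop_channel(key, i) for i in range(16)]
print(tx_hops == rx_hops)  # True
```

Without the key, each per-bit hop looks uniformly random across the channels, which is what makes interception and jamming hard; the talk's contribution is realizing such agility at ultra-fast, bit-level timescales in hardware.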

 

Date and Time: 
Friday, February 22, 2019 - 11:00am
Venue: 
Packard 202

SystemX BONUS lecture: Microrobots as the Future of Tools: Designing Effective Platforms and Collaborative Swarms

Topic: 
Microrobots as the Future of Tools: Designing Effective Platforms and Collaborative Swarms
Abstract / Description: 

In the near future, swarms of millimeter scale robots will be vital and common tools in industrial, commercial, and personal settings. With applications ranging from distributed chemical sensing to tangible 3D interfaces, providing mobility platforms to low-power sensing and actuation nodes will push us that much closer to the dream of ubiquitous computing. In this talk I will present my efforts to develop a flying microrobot, the "ionocraft", which uses atmospheric ion thrusters to move completely silently and with no mechanical moving parts. Spanning from development of novel MEMS actuators to incorporation of onboard sensor packages for control, I will discuss system design at the resource-constrained edge of robotics. Even given a working mobility platform, a bevy of interdisciplinary challenges remain to make microrobots useful tools; I will further discuss strategies for enabling future autonomous swarm deployments as well as for studying human-robot interaction outside the context of traditional social robotics.

Date and Time: 
Friday, February 1, 2019 - 2:00pm
Venue: 
Allen 101X

SystemX Seminar presents "Photonic MEMS: Coupling Mechanics & Photonics at the Micro- and Nanoscale"

Topic: 
Photonic MEMS: Coupling Mechanics & Photonics at the Micro- and Nanoscale
Abstract / Description: 

Photonic integrated circuits have seen a dramatic increase in complexity over the past decades, driven by recent applications in datacenter communications and enabled by the availability of standardized mature technology offerings. Among several directions that are currently pursued to enhance functionality and to reduce power consumption in photonic integrated circuits, we exploit in our research mechanical movement of wave-guiding structures at the micro- and nanoscale, motivated by the unique opportunities of access to a strong modulation of the effective index and the possibility to include mechanical latching and thus non-volatile states. In this talk, we will show how we can exploit nano-mechanics in photonic integrated circuits to perform basic operations on-chip, such as phase shifting, attenuation or photonic switching. Due to their small footprint and low insertion loss, such components can be integrated to form large arrays of several thousands of unit cells with outstanding system performance. We will discuss how movable waveguides can be fabricated in dedicated surface micromachining technology or by selective post-processing in a standard silicon photonics platform. We will discuss an experimental demonstrator of a mechanical waveguide latching mechanism, and provide an outlook on the implementation of the concept of large-scale reconfigurable photonic integrated circuits using Silicon Photonic MEMS. 

Date and Time: 
Thursday, January 31, 2019 - 4:30pm
Venue: 
Building 380, Room 380X

SystemX Seminar presents "Plate Mechanical Metamaterials and their Applications: from Energy Converters to Levitation"

Topic: 
Plate Mechanical Metamaterials and their Applications: from Energy Converters to Levitation
Abstract / Description: 

UPDATED TOPIC AND SPEAKER: We used atomic layer deposition (ALD) and microfabrication techniques to make robust plates out of a single continuous ALD layer with cm-scale lateral dimensions and thicknesses between 25 and 100 nm, creating the thinnest freestanding plates that can be picked up by hand. We also fabricated and characterized nanocardboard – plate metamaterials made from multiple layers of nanoscale thickness, whose geometry and properties are reminiscent of honeycomb sandwich plates or corrugated paper cardboard. Ultralow weight, mechanical robustness, thermal insulation, as well as chemical and thermal stability of alumina make plate metamaterials attractive for numerous applications, including structural elements in flying microrobots and interstellar light sails, high-temperature thermal insulation in energy converters, photophoretic levitation, as well as ultrathin sensors and resonators. I will briefly discuss our experimental progress on all these applications, including demonstrations of extremely robust thermal insulators that can sustain a temperature drop of ~1000 K across a micron-scale gap, macroscopic plates that levitate when illuminated by light, and hollow AFM cantilevers that offer greatly enhanced sensitivity and data acquisition rates.

Date and Time: 
Thursday, January 24, 2019 - 4:30pm
Venue: 
Allen 101X

SystemX Seminar presents "Fine-Grain Many-Core Processor Arrays for Efficient and High-Performance Computation"

Topic: 
Fine-Grain Many-Core Processor Arrays for Efficient and High-Performance Computation
Speaker: 
Prof. Bevan Baas - Electrical and Computer Engineering - UC Davis
Abstract / Description: 

The continually-growing number of devices available per chip assures the presence of many processing blocks per die communicating by some type of inter-processor interconnect. It is interesting to consider what the granularity of the processing blocks should be given a fixed amount of die area. The smallest reasonable tile size is on the order of an FPGA's LUT. Between the domains of FPGAs and traditional processors lies a lightly-explored region which we call fine-grain many-core, whose processors: can be programmed by simple traditional programs; typically operate with high throughput and high energy-efficiency; are well suited for deep submicron fabrication technologies; and are well matched to many DSP, multimedia, and embedded workloads, and--somewhat counterintuitively--also to some enterprise and scientific kernels.

The AsAP project has developed fine-grain many-core systems composed of large numbers of programmable reduced-complexity processors with no algorithm-specific hardware, in which each processor has its own digitally-tunable clock oscillator operating completely independently of the others in a globally asynchronous, locally synchronous (GALS) fashion. Due to the independence of the MIMD cores and near-optimal halting of the individual oscillators, the system's power dissipation is almost ideally proportional to its load.

A third-generation 32 nm design that integrates 1000 independent programmable processors and 12 memory modules has been designed and fabricated. The processors and memory modules communicate through a reconfigurable full-rate circuit-switched mesh network and a complementary very-small-area packet router, and they operate at an average maximum clock frequency of 1.78 GHz, which is believed to be the highest clock frequency achieved by a fabricated processor designed in a university. At a supply voltage of 0.9 V, processors operating at an average of 1.24 GHz dissipate 17 mW while issuing one instruction per cycle. At 0.56 V, processors operating at 115 MHz dissipate 0.61 mW, or 5.3 pJ/instruction, enabling 1000 fully active cores to be powered by a single AA battery.
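These figures are internally consistent, as a quick back-of-the-envelope check shows. The AA-battery capacity below is an assumed typical alkaline-cell value, not a number from the talk:

```python
def energy_per_instruction_pj(power_w, freq_hz, ipc=1.0):
    """Energy per instruction in picojoules: P / (f * IPC)."""
    return power_w / (freq_hz * ipc) * 1e12

# 0.61 mW at 115 MHz, one instruction per cycle -> ~5.3 pJ/instruction
pj = energy_per_instruction_pj(0.61e-3, 115e6)

# 1000 fully active cores at 0.61 mW each draw 0.61 W total; an assumed
# 1.5 V x 2000 mAh AA cell (10.8 kJ) then lasts roughly five hours.
total_power_w = 1000 * 0.61e-3
aa_battery_j = 1.5 * 2.0 * 3600
runtime_h = aa_battery_j / total_power_w / 3600
```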

Several dozen DSP and general-purpose tasks have been coded, along with more complex applications including: AES encryption engines, a full-rate H.264 1080p 30 fps HDTV residual encoder, a fully-compliant IEEE 802.11a/11g Wi-Fi wireless LAN baseband transmitter and receiver, a SAR radar engine, a complete first-pass H.264 encoder, convolutional neural networks, large sparse-matrix operations, sorting and processing of enterprise data, and others. Power, throughput, and die-area results generally compare very well with solutions on existing programmable processors. A C++ compiler and automatic mapping tool greatly simplify programming.

Date and Time: 
Thursday, January 17, 2019 - 4:30pm to 5:30pm
Venue: 
Building 380, Room 380X

SystemX BONUS Seminar - "Spectral signatures of many-body localization with interacting photons"

Topic: 
Spectral signatures of many-body localization with interacting photons
Abstract / Description: 

Statistical mechanics is founded on the assumption that a system can reach thermal equilibrium, regardless of its starting state. Interactions between particles facilitate thermalization, but can interacting systems always equilibrate, regardless of parameter values? The energy spectrum of a system can answer this question and reveal the nature of the underlying phases. However, most experimental techniques only indirectly probe the many-body energy spectrum. Using a chain of nine superconducting qubits, we implement a novel technique for directly resolving the energy levels of interacting photons. We benchmark this method by capturing the intricate energy spectrum predicted for 2D electrons in a magnetic field, the Hofstadter butterfly. As disorder increases, the spatial extent of energy eigenstates at the edge of the energy band shrinks, suggesting the formation of a mobility edge. At strong disorder, the energy levels cease to repel one another and their statistics approach a Poisson distribution, the hallmark of the transition from the thermalized to the many-body localized phase. Our work introduces a new many-body spectroscopy technique to study quantum phases of matter.
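The level-statistics diagnostic described above can be illustrated numerically. The sketch below uses a toy single-particle Harper-type chain with on-site disorder, not the nine-qubit superconducting Hamiltonian from the experiment; all parameter values are illustrative assumptions:

```python
import numpy as np

def chain_spectrum(n_sites=9, alpha=0.5, disorder=0.0, seed=0):
    """Eigenvalues of a tight-binding chain with a cosine (Harper) potential
    plus uniform on-site disorder -- a toy stand-in for the qubit chain."""
    rng = np.random.default_rng(seed)
    onsite = 2.0 * np.cos(2.0 * np.pi * alpha * np.arange(n_sites))
    onsite += disorder * rng.uniform(-1.0, 1.0, n_sites)
    hopping = np.ones(n_sites - 1)
    h = np.diag(onsite) + np.diag(hopping, 1) + np.diag(hopping, -1)
    return np.linalg.eigvalsh(h)  # eigenvalues in ascending order

def mean_gap_ratio(levels):
    """Mean adjacent-gap ratio: ~0.53 signals level repulsion (thermalizing),
    ~0.39 signals Poisson statistics (localized)."""
    s = np.diff(np.sort(levels))
    return float(np.mean(np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])))
```

Sweeping `disorder` and averaging `mean_gap_ratio` over many seeds shows the gap ratio drifting toward the Poisson value as localization sets in, mirroring the statistical crossover described in the abstract.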

Date and Time: 
Wednesday, January 16, 2019 - 4:00pm
Venue: 
Allen 101X

SystemX Seminar presents "Health research on a massive scale using smartphones"

Topic: 
Health research on a massive scale using smartphones: MyHeart Counts and Beyond
Abstract / Description: 

The widespread adoption of smartphones and wearables offers an unprecedented opportunity for low-cost health studies of far-ranging reach. Within 6 months of the launch of Apple’s ResearchKit, more than 100,000 people from all over the United States were recruited into the five launch mobile-application-based research studies. Similar libraries have since sprung up on Android. This talk will provide an introduction to mobile health studies on smartphones, including study design, regulatory approvals, production, recruitment, maintenance, and scientific discovery. I will highlight ways in which smartphones have been co-opted as data collection tools able to amass a wealth of survey, device, sensor, and clinical data, and the challenges that stand in the way of completing a successful study.

Date and Time: 
Thursday, January 10, 2019 - 4:30pm
Venue: 
Building 380, Room 380X

John G. Linvill Distinguished Seminar on Electronic Systems Technology

Topic: 
Internet of Things and Internet of Energy for Connecting at Any Time and Any Place
Abstract / Description: 

In this presentation, I would like to discuss with you how to establish a sustainable and smart society through the internet of energy for connecting at any time and any place. I suspect that you have heard the phrase "Internet of Energy" less often. The meaning of this phrase is simple: with a ubiquitous energy transmission system, you do not need to worry about a shortage of electric power. One of the most important items for establishing a sustainable society is [...]


"Inaugural Linvill Distinguished Seminar on Electronic Systems Technology," EE News, July 2018

Date and Time: 
Monday, January 14, 2019 - 4:30pm
Venue: 
Hewlett 200
