Seminar / Colloquium

EE380 Computer Systems Colloquium: News Diffusion - fighting misinformation

Topic: 
News Diffusion: Automatically scoring news articles to fight misinformation
Abstract / Description: 

Deepnews.ai wants to make a decisive contribution to the sustainability of the journalistic information ecosystem by addressing two problems:
1. The lack of correlation between the cost of producing great editorial content and its economic value.
2. The vast untapped potential for news editorial products.

Deepnews.ai will have a simple and accessible scoring system: the online platform receives a batch of news stories and scores each on a scale of 1 to 5 based on its journalistic quality. This is done automatically and in real time. This scoring system has multiple applications.

On the business side, the greatest potential lies in adjusting the price of an advertisement to the quality of the editorial context. There is room for improvement. Today, a story that required months of work and cost hundreds of thousands of dollars carries the same unitary value (a few dollars per thousand page views) as a short, gossipy article. But times are changing. In the digital ad business, indicators are blinking red: CPMs, click-through rates, and viewability are in steady decline. We believe that inevitably, advertisers and marketers will seek high-quality content--as long as they can rely on a credible indicator of quality. Deepnews.ai will interface with ad servers to assess the value of a story, then price and serve ads accordingly. The higher a story's quality score, the pricier the ad space adjacent to it can be. This adjustment will substantially raise the revenue per page to match the quality of news.

On the editorial side, the ability to assess the quality of news will open up opportunities for new products and services such as:
• Improved recommendation engines: instead of relying on keywords or frequency, Deepnews.ai will surface stories based on substantive quality, which will increase the number of articles read per visit. (Currently, visitors to many news sites read fewer than two articles per visit.)
• Personalization: We believe a reader's profile should not be limited to consumption analytics but should reflect his or her editorial preferences. Deepnews.ai is considering a dedicated "tag" that will connect stories' metadata with a reader's affinities.
• Curation: Publishers will be able to use Deepnews.ai to offer curation services, a business currently left to players like Google and Apple. By providing technology that can automatically surface the best stories from trusted websites (even small ones), Deepnews.ai can help publishers expand their footprint.

The platform will be based on two ML approaches: a feature-based model and a text-content analysis model.

The first model uses traditional ML methods, taking as input two sets of "signals" to assess the quality of journalistic work: Quantifiable Signals and Subjective Signals. Quantifiable Signals include the structure and patterns of the HTML page, advertising density, use of visual elements, bylines, word count, readability of the text, and information density (number of quotes and named entities). These are data processed from the news content itself. Subjective Signals are human scores of quality based on criteria such as writing style, thoroughness, balance and fairness, and timeliness. These measures are produced by editors and experienced journalists.
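
As an illustration only, here is a minimal sketch of how such a feature-based scorer might be wired together; the signal extractors, the toy corpus, and the choice of gradient boosting are assumptions for illustration, not the actual Deepnews.ai pipeline:

    import re
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def quantifiable_signals(article_html, article_text):
        # Hypothetical versions of the Quantifiable Signals listed above.
        words = article_text.split()
        return [
            len(words),                                           # word count
            article_text.count('"') / 2,                          # rough quote count
            len(re.findall(r"<img\b", article_html)),             # visual elements
            len(re.findall(r"<(?:ins|iframe)\b", article_html)),  # crude ad-density proxy
            np.mean([len(w) for w in words]),                     # crude readability proxy
        ]

    # Toy corpus; editor_scores stands in for the Subjective Signals (1-5 editor labels).
    corpus = [
        ('<p>story</p>', 'A long, carefully reported story "with quotes" and data.'),
        ('<iframe></iframe><img>', 'Short gossipy item.'),
    ]
    editor_scores = [5.0, 1.0]
    X = np.array([quantifiable_signals(h, t) for h, t in corpus])
    model = GradientBoostingRegressor().fit(X, editor_scores)
    print(model.predict(X))  # predicted scores on the 1-5 scale, in-sample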

The second approach is based on deep learning methods. Here, the goal is to build models that can accurately classify an unseen incoming article purely on the quality of the reporting, distinct from the metadata or the topic of discussion. The main challenge in many such deep learning approaches is the availability of labeled data. Nearly four million contemporary articles have been processed. They come from sources deemed "good" or "commodity" (with no journalistic value added). For the bulk of our data, the reputation and consistency of the news brand carried significant weight, but the objective is also to classify quality at a finer-grained level, detached from the name of the source. To this end, various models are used to capture differences in writing that are agnostic to topical differences.
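
Purely as a hypothetical sketch (the abstract does not specify the architecture), a tiny PyTorch text classifier trained to separate "good" from "commodity" articles might look like this; the two-document corpus and the embedding-bag model are toy stand-ins:

    import torch
    import torch.nn as nn

    # Toy stand-in for the "good" (1) vs "commodity" (0) corpus described above.
    texts = ["long investigative report with sourcing", "celebrity gossip listicle"]
    labels = torch.tensor([1, 0])
    vocab = {w: i for i, w in enumerate(sorted({w for t in texts for w in t.split()}))}

    class QualityClassifier(nn.Module):
        def __init__(self, vocab_size, dim=16):
            super().__init__()
            self.emb = nn.EmbeddingBag(vocab_size, dim)  # averages word embeddings
            self.out = nn.Linear(dim, 2)
        def forward(self, tokens, offsets):
            return self.out(self.emb(tokens, offsets))

    tokens = torch.tensor([vocab[w] for t in texts for w in t.split()])
    offsets = torch.tensor([0, len(texts[0].split())])   # start index of each document
    model = QualityClassifier(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(tokens, offsets), labels)
        loss.backward()
        opt.step()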

Date and Time: 
Wednesday, March 14, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Toward Managing the Complexity of Molecules: Letting Matter Compute Itself

Topic: 
Toward Managing the Complexity of Molecules: Letting Matter Compute Itself
Abstract / Description: 

Person-millennia are spent each year seeking useful molecules for medicine, food, agriculture and other uses. For biomolecules, the near-infinite universe of possibilities is staggering and humbling. As an example, antibodies, which make up the majority of the top-grossing medicines today, are composed of 1,100 amino acids chosen from the twenty used by living things. The binding part (variable region) that allows the antibody to bind and recognize pathogens is about 110 amino acids, giving rise to 20^110, or about 10^143, possible combinations. There are only about 10^80 atoms in the universe, illustrating the intractability of exploring the entire space of possibility. This is just one example…
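
A quick back-of-envelope check of the combinatorics quoted above (110 variable-region positions, 20 amino acids each):

    import math

    positions, alphabet = 110, 20
    log10_combos = positions * math.log10(alphabet)
    print(f"20^{positions} = 10^{log10_combos:.0f}")  # -> 10^143, vs ~10^80 atoms in the universe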

Presently, machine learning (ML), artificial intelligence (AI), quantum computing, and “big data” are often put forth as the solutions to all problems, particularly by pontificating TED presenters and in Sand Hill pitches dripping with hyperbole. Expecting these methods to provide intelligent prediction of molecular structure and function within our lifetimes is unrealistic. For example, a neural network trained on daily weather patterns in Palo Alto cannot develop an internal model for global weather. In a similar way, finite and reasonable molecular training sets will not magically cause a generalizable model of molecular quantum mechanics to arise within a neural network, no matter how many layers it is endowed with.

With that provocative preface, we turn to the notion of letting matter compute itself. Massive combinatorial libraries can now be intelligently and efficiently mined with appropriate molecular readouts (AKA “the question vector”) at ever-increasing throughputs, presently surpassing 10^12 unique molecules in a few hours. Once “matter-in-the-loop” exploration is embraced, AI, ML and other methods can be brought to bear usefully in closed-loop methods to follow veins of opportunity in molecular space. Several examples of mining massive molecular spaces will be presented, including drug discovery, digital pathology, and AI-guided continuous-flow chemical synthesis – all real, all working today.
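
A minimal sketch of what such a closed-loop, matter-in-the-loop screen might look like, with a random-forest surrogate standing in for the learning component and a synthetic function standing in for the physical assay (all names and numbers are hypothetical):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    library = rng.random((100_000, 32))   # candidate molecules as feature vectors
    weights = rng.random(32)
    def assay(batch):                     # stand-in for the physical readout
        return batch @ weights            # hypothetical "binding" signal

    picked = rng.choice(len(library), 96, replace=False)   # initial random screen
    X, y = library[picked], assay(library[picked])
    for _ in range(5):                    # closed loop: model -> select -> measure
        model = RandomForestRegressor(n_estimators=100).fit(X, y)
        scores = model.predict(library)
        batch = np.argsort(scores)[-96:]  # follow the predicted vein of opportunity
        X = np.vstack([X, library[batch]])
        y = np.concatenate([y, assay(library[batch])])
    print(f"best found: {y.max():.3f} of possible {assay(library).max():.3f}")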

Date and Time: 
Thursday, March 15, 2018 - 4:30pm to 5:30pm
Venue: 
Y2E2 Room 111

ISL Special Seminar: Low- and high-dimensional computations in neural circuits

Topic: 
Low- and high-dimensional computations in neural circuits
Abstract / Description: 

Computation in the brain is distributed across large populations. Individual neurons are noisy and receive limited information but, by acting collectively, neural populations perform a wide variety of complex computations. In this talk I will discuss two approaches to understanding these collective computations. First, I will introduce a method to identify and decode unknown variables encoded in the activity of neural populations. While the number of neurons in a population may be large, if the population encodes a low-dimensional variable there will be low-dimensional structure in the collective activity, and the method aims to find and parameterize this low-dimensional structure. In the rodent head direction (HD) system, the method reveals a nonlinear ring manifold and allows encoded head direction and the tuning curves of single cells to be recovered with high accuracy and without prior knowledge of what neurons were encoding. When applied to sleep, it provides mechanistic insight into the circuit construction of the ring manifold and, during nREM sleep, reveals a new dynamical regime possibly linked to memory consolidation in the brain. I will then address the problem of understanding genuinely high-dimensional computations in the brain, where low-dimensional structure does not exist. Modern work studying distributed algorithms on large sparse networks may provide a compelling approach to neural computation, and I will use insights from recent work on error correction to construct a novel architecture for high-capacity neural memory. Unlike previous models, which yield either weak (linear) increases in capacity with network size or exhibit poor robustness to noise, this network is able to store a number of states exponential in network size while preserving noise robustness, thus resolving a long-standing theoretical question.
These results demonstrate new approaches for studying neural representations and computation across a variety of scales, both when low-dimensional structure is present and when computations are high-dimensional.
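
As a toy illustration of the low-dimensional idea (the actual method handles nonlinear manifolds; this linear sketch only conveys the flavor): cosine-tuned head-direction activity traces out a ring in its leading principal components, and the angle around that ring recovers the latent variable. All parameters below are synthetic.

    import numpy as np

    # Hypothetical stand-in for HD-cell population activity: 50 cells with
    # cosine tuning to a latent head direction theta(t), plus noise.
    rng = np.random.default_rng(1)
    theta = rng.uniform(0, 2 * np.pi, 2000)            # latent variable, unknown to the method
    prefs = np.linspace(0, 2 * np.pi, 50, endpoint=False)
    rates = np.exp(2 * np.cos(theta[:, None] - prefs[None, :]))
    rates += rng.normal(0, 0.5, rates.shape)

    # Low-dimensional structure: the leading PCs trace out a ring, and the
    # angle around it recovers theta up to rotation/reflection.
    X = rates - rates.mean(0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:2].T
    decoded = np.arctan2(pcs[:, 1], pcs[:, 0])         # decoded head direction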

Date and Time: 
Tuesday, March 6, 2018 - 10:00am
Venue: 
Clark S360

Applied Physics/Physics Colloquium: The Entropic Matter(s) of an Ordered Universe

Topic: 
The Entropic Matter(s) of an Ordered Universe
Abstract / Description: 

Cosmic Information Theory and Analysis, CITA@CITA, uses entropy constrained by control/order parameters to relate our increasingly highly-entangled Cosmic Microwave Background and Large Scale Clustering big-sky data to how our Universe morphed from a coherently smooth accelerating Hubble-patch into the intricate evolving complexity of the cosmic web. I will chat about ongoing problems in (non-equilibrium) Information-Entropy generation: in post-inflation shock-in-time heating, stored now mostly in the cosmic photon and neutrino seas; in the space-shocked web of galaxies and clusters and its accompanying nuclear/black hole cosmic infrared waste. Central to our statistical analyses are the all-sky deep-volume ensembles of "webskys" we build to mock the real-sky webskys we observe. As in particle physics, simulating and discovering what lies Beyond the Standard Model of Cosmology is the goal, as yet with no B in the SMc in spite of tantalizing 2sigma-ish SMc anomalies and tensions.

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Hewlett 201

EE380 Computer Systems Colloquium: The Evolution of Public Key Cryptography

Topic: 
The Evolution of Public Key Cryptography
Abstract / Description: 

While public key cryptography is seen as revolutionary, after this talk you might wonder why it took Whit Diffie, Ralph Merkle and Martin Hellman so long to discover it. This talk also highlights the contributions of some unsung (or "under-sung") heroes: Ralph Merkle, John Gill, Stephen Pohlig, Richard Schroeppel, Loren Kohnfelder, and researchers at GCHQ (Ellis, Cocks, and Williamson).

Date and Time: 
Wednesday, February 28, 2018 - 4:30pm
Venue: 
Gates B03

Applied Physics/Physics Colloquium: Topological Quantum Chemistry

Topic: 
Topological Quantum Chemistry
Abstract / Description: 

The past decade has seen tremendous success in predicting and experimentally discovering distinct classes of topological insulators (TIs) and semimetals. We review the field and we propose an electronic band theory that highlights the link between topology and local chemical bonding, and combines this with the conventional band theory of electrons. Topological Quantum Chemistry is a description of the universal global properties of all possible band structures and materials, comprised of a graph theoretical description of momentum space and a dual group theoretical description in real space. We classify the possible band structures for all 230 crystal symmetry groups that arise from local atomic orbitals, and show which are topologically nontrivial. We show how our topological band theory sheds new light on known TIs, and demonstrate the power of our method to predict a plethora of new TIs.

Date and Time: 
Tuesday, February 27, 2018 - 4:15pm
Venue: 
Hewlett 201

IEEE IT Society, Santa Clara Valley presents From Differential Privacy to Generative Adversarial Privacy

Topic: 
From Differential Privacy to Generative Adversarial Privacy
Abstract / Description: 

6:00PM Refreshments and Conversation

6:30PM Talk

The explosive growth in connectivity and data collection is accelerating the use of machine learning to guide consumers through a myriad of choices and decisions. While this vision is expected to generate many disruptive businesses and social opportunities, it presents one of the biggest threats to privacy in recent history. In response to this threat, differential privacy (DP) has recently surfaced as a context-free, robust, and mathematically rigorous notion of privacy.

The first part of my talk will focus on understanding the fundamental tradeoff between DP and utility for a variety of unsupervised learning applications. Surprisingly, our results show the universal optimality of a family of extremal privacy mechanisms called staircase mechanisms. While the vast majority of works on DP have focused on using the Laplace mechanism, our results indicate that it is strictly suboptimal and can be replaced by a staircase mechanism to improve utility. Our results also show that the strong privacy guarantees of DP often come at a significant loss in utility.
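
For reference, the Laplace mechanism the talk argues against is a one-liner; the staircase mechanism replaces the Laplace noise density with a piecewise-constant one at the same epsilon. A sketch of the classic mechanism only (not the speaker's code):

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
        # Classic epsilon-DP release; the talk argues a staircase-shaped
        # noise density can strictly improve utility at the same epsilon.
        return true_value + rng.laplace(scale=sensitivity / epsilon)

    # e.g., releasing a count with sensitivity 1 at epsilon = 0.5
    print(laplace_mechanism(1042, sensitivity=1, epsilon=0.5))
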
The second part of my talk is motivated by the following question: can we exploit data statistics to achieve a better privacy-utility tradeoff? To address this question, I will present a novel context-aware notion of privacy called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to arrive at a unified framework for data-driven privacy that has deep game-theoretic and information-theoretic roots. I will conclude my talk by showcasing the performance of GAP on real-life datasets.
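
A minimal toy of the GAP game, under an assumed cross-entropy adversary loss and squared-error distortion budget (not the paper's exact formulation): a privatizer network is trained against an adversary network that tries to infer a private bit.

    import torch
    import torch.nn as nn

    # X is data, S a private bit correlated with X. The privatizer P distorts X;
    # the adversary A tries to infer S from P(X); P is trained to defeat A
    # while keeping distortion small.
    torch.manual_seed(0)
    S = torch.randint(0, 2, (512, 1)).float()
    X = S + 0.3 * torch.randn(512, 1)

    P = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
    A = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
    opt_P = torch.optim.Adam(P.parameters(), 1e-2)
    opt_A = torch.optim.Adam(A.parameters(), 1e-2)
    bce = nn.BCEWithLogitsLoss()

    for step in range(500):
        Y = P(X)
        loss_A = bce(A(Y.detach()), S)                 # adversary: infer S
        opt_A.zero_grad(); loss_A.backward(); opt_A.step()
        distortion = ((Y - X) ** 2).mean()
        loss_P = -bce(A(Y), S) + 5.0 * distortion      # privatizer: hide S, stay close to X
        opt_P.zero_grad(); loss_P.backward(); opt_P.step()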

Date and Time: 
Wednesday, February 28, 2018 - 6:00pm
Venue: 
Packard 202

ISL Colloquium: Deep Exploration via Randomized Value Functions

Topic: 
Deep Exploration via Randomized Value Functions
Abstract / Description: 

An important challenge in reinforcement learning concerns how an agent can simultaneously explore and generalize in a reliably efficient manner. It is difficult to claim that one can produce a robust artificial intelligence without tackling this fundamental issue. This talk will present a systematic approach to exploration that induces judicious probing through randomization of value function estimates and operates effectively in tandem with common reinforcement learning algorithms, such as least-squares value iteration and temporal-difference learning, that generalize via parameterized representations of the value function. Theoretical results offer assurances with tabular representations of the value function, and computational results suggest that the approach remains effective with generalizing representations.
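
As a rough tabular illustration of the idea (a stand-in for the randomized least-squares value iteration family, not the speaker's algorithm): each episode the agent acts greedily with respect to a randomly perturbed value table, rather than dithering with epsilon-greedy. The chain MDP and noise scale below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, horizon = 10, 2, 20
    Q = np.zeros((n_states, n_actions))
    counts = np.ones_like(Q)
    for episode in range(200):
        Q_sample = Q + rng.normal(0, 1.0 / np.sqrt(counts))  # posterior-style perturbation
        s = 0
        for t in range(horizon):
            a = int(np.argmax(Q_sample[s]))                  # greedy w.r.t. the sample
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0           # reward only at the far end
            counts[s, a] += 1
            Q[s, a] += (r + Q[s2].max() - Q[s, a]) / counts[s, a]
            s = s2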

Date and Time: 
Thursday, February 22, 2018 - 4:15pm
Venue: 
Packard 101

EE380 Computer Systems Colloquium: Graph Analysis of Russian Twitter Trolls using Neo4j

Topic: 
Graph Analysis of Russian Twitter Trolls using Neo4j
Abstract / Description: 

As part of the US House Intelligence Committee investigation into how Russia may have influenced the 2016 US election, Twitter released the screen names of nearly 3000 Twitter accounts tied to Russia's Internet Research Agency. These accounts were immediately suspended, removing the data from Twitter.com and Twitter's developer API. In this talk, we show how we can reconstruct a subset of the Twitter network of these Russian troll accounts and apply graph analytics to the data using the Neo4j graph database to uncover how these accounts were spreading fake news.

This case-study-style presentation will show how we collected and munged the data, taking advantage of the flexibility of the property graph. We'll dive into how NLP and graph algorithms like PageRank and community detection can be applied in the context of social media to make sense of the data. We'll show how Cypher, the query language for graphs, is used to work with graph data. And we'll show how visualization is used in combination with these algorithms to interpret results of the analysis and to help share the story of the data. No familiarity with graphs or Neo4j is necessary, as we'll start with a brief overview of graph databases and Neo4j.
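
For a flavor of what such queries look like from Python, using the official Neo4j driver; the node labels and relationship types here are hypothetical and may differ from the schema used in the talk:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    query = """
    MATCH (t:Troll)-[:POSTED]->(:Tweet)-[:RETWEETS]->(:Tweet)<-[:POSTED]-(u:Troll)
    RETURN u.screen_name AS amplified, count(*) AS retweets
    ORDER BY retweets DESC LIMIT 10
    """
    with driver.session() as session:
        for record in session.run(query):   # which accounts the network amplified most
            print(record["amplified"], record["retweets"])
    driver.close()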

Date and Time: 
Wednesday, February 21, 2018 - 4:30pm
Venue: 
Gates B03

ISL Special Seminar: Computational structure in large-scale neural population recordings: how to find it, and when to believe it

Topic: 
Computational structure in large-scale neural population recordings: how to find it, and when to believe it
Abstract / Description: 

One central challenge in neuroscience is to understand how neural populations represent and produce the remarkable computational abilities of our brains. Indeed, neuroscientists increasingly form scientific hypotheses that can only be studied at the level of the neural population, and exciting new large-scale datasets have followed. Capitalizing on this trend, however, requires two major efforts from applied statistical and machine learning researchers: (i) methods for finding structure in this data, and (ii) methods for statistically validating that structure. First, I will review our work that has used factor modeling and dynamical systems to advance understanding of the computational structure in the motor cortex of primates and rodents. Second, while these methods and the broader class of such methods are promising, they are also perilous: novel analysis techniques do not always consider the possibility that their results are an expected consequence of some simpler, already-known feature of the data. I will present two works that address this growing problem, the first of which derives a tensor-variate maximum entropy distribution with user-specified moment constraints along each mode. This distribution forms the basis of a statistical hypothesis test, and I will use this test to answer two active debates in the neuroscience community over the triviality of structure in the motor and prefrontal cortices. I will then discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures in the flavor of implicit generative modeling.
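
A much-simplified 2-D sketch of the surrogate idea (the talk's version is tensor-variate, with moment constraints along every mode; the Kronecker Gaussian below matches each mode covariance only up to an overall scale):

    import numpy as np

    # Generate null data matching the row (neuron) and column (time) covariances
    # of a recorded matrix X, then ask whether a statistic of interest exceeds
    # what those covariances alone predict.
    rng = np.random.default_rng(0)
    X = np.outer(rng.normal(size=80), rng.normal(size=40)) + rng.normal(size=(80, 40))
    X -= X.mean()
    Cr = np.cov(X)        # neuron-mode covariance (80 x 80)
    Cc = np.cov(X.T)      # time-mode covariance  (40 x 40)
    Lr = np.linalg.cholesky(Cr + 1e-6 * np.eye(80))
    Lc = np.linalg.cholesky(Cc + 1e-6 * np.eye(40))

    def surrogate():      # max-entropy (Gaussian) sample with matched mode covariances
        return Lr @ rng.normal(size=(80, 40)) @ Lc.T

    stat = lambda M: np.linalg.svd(M, compute_uv=False)[0]   # e.g., leading singular value
    null = np.array([stat(surrogate()) for _ in range(200)])
    print("p ~", (null >= stat(X)).mean())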

Date and Time: 
Thursday, February 15, 2018 - 10:00am
Venue: 
Munzer Auditorium

Applied Physics / Physics Colloquium

Applied Physics/Physics Colloquium: Ultracold Atom Quantum Simulations: From Exploring Low Temperature Fermi-Hubbard Phases to Many-body Localization

Topic: 
Ultracold Atom Quantum Simulations: From Exploring Low Temperature Fermi-Hubbard Phases to Many-body Localization
Abstract / Description: 

Ultracold-atom model systems offer a unique way to investigate a wide range of many-body quantum physics in uncharted regimes. Quantum gas microscopy enables us to "zoom in", both in space and time, at the single-particle level. We can explore many-body quantum physics in regimes that are not computationally accessible. In my talk I will present an overview of recent experiments, including the first observation of an anti-ferromagnetic phase of fermions in an optical lattice, and the observation of many-body localization.

Date and Time: 
Tuesday, April 24, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Reverse Engineering the Universe

Topic: 
Reverse Engineering the Universe
Abstract / Description: 

Prof. Andrei Linde of the Stanford Physics Department will give the Applied Physics/Physics colloquium on Tues., May 8, 2018 entitled "Reverse Engineering the Universe."

Date and Time: 
Tuesday, May 8, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Demonology: The Curious Role of Intelligence in Physics & Biology

Topic: 
Demonology: The Curious Role of Intelligence in Physics & Biology
Abstract / Description: 

For the lion's share of its history, physics analyzed the inanimate world. Or, that is the view it has of itself. Careful reflection, though, shows that physics regularly invoked an expressly extra-physical agency—intelligence—in its efforts to understand even the most basic physical phenomena. I will survey this curious proclivity, noting that similar appeals to intelligent "demons" go back to Laplace's theory of chance, Poincaré's discovery of deterministic chaos in the solar system, and Darwin's explanation of the origin of biological organisms in terms of natural selection. Today, we are on the verge of a new physics of information that will transform this bad "demonology" to a constructive, perhaps even an engineering, paradigm that explains information processing embedded in the natural world. In the process I will show how deterministic chaos arises in the operation of Maxwell's Demon and outline nanoscale experimental implementations ongoing at Caltech's Kavli Nanoscience Institute.

Date and Time: 
Tuesday, April 17, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Top Quarks: The New Flavor

Topic: 
Top Quarks: The New Flavor
Abstract / Description: 

The Large Hadron Collider is providing an enormous dataset of proton-proton collisions at the highest energies ever achieved in a laboratory.

With our new ability to study the Higgs boson and the unprecedentedly large sample of top quarks, a new frontier has opened: the flavor physics of the top quark - at heart, the question of how the top quark interacts with the Higgs field. We can start to ask questions such as whether the Higgs field is the unique source of the top quark's mass and whether there are unexpected interactions between the top quark and the Higgs boson. The answers to these questions will shed light on what may lie beyond the particle physics Standard Model and have cosmological implications.

Date and Time: 
Tuesday, April 10, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: The Search for Dark Energy and NASA’s WFIRST mission

Topic: 
The Search for Dark Energy and NASA’s WFIRST mission
Abstract / Description: 

Over the last twenty years, there has been growing evidence that our universe is dominated by dark energy. The nature of this dark energy remains a mystery. Is it the signature of the breakdown of general relativity or vacuum energy associated with quantum gravity? I will review the current observations and note the intriguing tensions between measurements based on the cosmic microwave background (CMB) and local measurements of the expansion rate of the universe and the amplitude of density fluctuations. I will then discuss on-going and upcoming CMB experiments and the role of the WFIRST mission in studying the nature of dark energy. I will also discuss the broader scientific mission of the WFIRST mission and its current status.

Date and Time: 
Tuesday, March 13, 2018 - 4:30pm
Venue: 
Hewlett 201

The 2018 Robert Hofstadter Memorial Lecture: The Dawn of Gravitational-Wave Astrophysics

Topic: 
The Dawn of Gravitational-Wave Astrophysics
Abstract / Description: 

In the past two years the gravitational-wave detections enabled by the LIGO detectors have launched a new field in observational astronomy allowing us to study compact object mergers involving pairs of black holes and neutron stars. I will discuss what current results reveal about compact object astrophysics, from binary black hole formation to short gamma-ray bursts and nuclear matter physics. I will also highlight what we can expect in the near future as detectors' sensitivity improves and multi-messenger astronomy further advances.

Date and Time: 
Tuesday, April 3, 2018 - 4:30pm
Venue: 
Hewlett 201

The 2018 Robert Hofstadter Memorial Lecture: Cosmic Collisions Reveal Einstein's Gravitational-Wave Universe

Topic: 
Cosmic Collisions Reveal Einstein's Gravitational-Wave Universe
Abstract / Description: 

For the first time, scientists have observed ripples in the fabric of spacetime called gravitational waves, arriving at the earth from a cataclysmic event in the distant universe. This confirms a major prediction of Albert Einstein's 1915 general theory of relativity and opens an unprecedented new window onto the cosmos. Gravitational waves carry unique information about their dramatic origins and about the nature of gravity that cannot otherwise be obtained. Detected gravitational waves were produced during the final fraction of a second of the mergers of two black holes but also during the last hundred seconds of the collision of two neutron stars. The latter is the first ever cosmic event to be observed both in gravitational waves and in electromagnetic waves, shedding light on several long-standing puzzles, like the production of gold in nature and the physics origins of brief gamma-ray flashes. I will review the beginnings of this exciting field of cosmic exploration and the unprecedented technology and engineering that made it possible.

Date and Time: 
Monday, April 2, 2018 - 7:30pm
Venue: 
Hewlett 200

Applied Physics/Physics Colloquium: Magic Angle Graphene: A New Platform for Strongly Correlated Physics

Topic: 
Magic Angle Graphene: A New Platform for Strongly Correlated Physics
Abstract / Description: 

The understanding of strongly-correlated quantum matter has challenged physicists for decades. Such difficulties have stimulated new research paradigms, such as ultra-cold atom lattices for simulating quantum materials. In this talk I will present a new platform to investigate strongly correlated physics, based on graphene moiré superlattices. In particular, I will show that when two graphene sheets are twisted by an angle close to the theoretically predicted 'magic angle', the resulting flat band structure near the Dirac point gives rise to a strongly-correlated electronic system. These flat bands exhibit half-filling insulating phases at zero magnetic field, which we show to be a Mott-like insulator arising from electrons localized in the moiré superlattice. These unique properties of magic-angle twisted bilayer graphene open up a new playground for exotic many-body quantum phases in a 2D platform made of pure carbon and without magnetic field. The easy accessibility of the flat bands, the electrical tunability, and the bandwidth tunability through twist angle may pave the way towards more exotic correlated systems, such as quantum spin liquids. I will end my talk with an unconventional experimental surprise.

Date and Time: 
Tuesday, February 13, 2018 - 4:30pm
Venue: 
Hewlett 201

CS300 Seminar

Special Seminar: Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design

Topic: 
Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design
Abstract / Description: 

Cyber-physical systems (CPS) are computational systems tightly integrated with physical processes. Examples include modern automobiles, fly-by-wire aircraft, software-controlled medical devices, robots, and many more. In recent times, these systems have exploded in complexity due to the growing amount of software and networking integrated into physical environments via real-time control loops, as well as the growing use of machine learning and artificial intelligence (AI) techniques. At the same time, these systems must be designed with strong verifiable guarantees.

In this talk, I will describe our research explorations at the intersection of machine learning and formal methods that address some of the challenges in CPS design. First, I will describe how machine learning techniques can be blended with formal methods to address challenges in specification, design, and verification of industrial CPS. In particular, I will discuss the use of formal inductive synthesis — algorithmic synthesis from examples with formal guarantees — for CPS design. Next, I will discuss how formal methods can be used to improve the level of assurance in systems that rely heavily on machine learning, such as autonomous vehicles using deep learning for perception. Both theory and industrial case studies will be discussed, with a special focus on the automotive domain. I will conclude with a brief discussion of the major remaining challenges posed by the use of machine learning and AI in CPS.

Date and Time: 
Monday, December 4, 2017 - 4:00pm
Venue: 
Gates 463A

SpaceX's Journey on the Road to Mars

Topic: 
SpaceX's Journey on the Road to Mars
Abstract / Description: 

SSI will be hosting Gwynne Shotwell — President and COO of SpaceX — to discuss SpaceX's journey on the road to Mars. The event will be on Wednesday Oct 11th from 7pm - 8pm in Dinkelspiel Auditorium. After the talk, there will be a Q&A session hosted by Steve Jurvetson from DFJ Venture Capital.

Claim your tickets now on Eventbrite.

Date and Time: 
Wednesday, October 11, 2017 - 7:00pm
Venue: 
Dinkelspiel Auditorium

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, Subhasish Mitra

5:15-6:00, Silvio Savarese

Date and Time: 
Wednesday, December 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, Phil Levis

5:15-6:00, Ron Fedkiw

Date and Time: 
Monday, December 5, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, Dan Boneh

5:15-6:00, Aaron Sidford

Date and Time: 
Wednesday, November 30, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, John Mitchell

5:15-6:00, James Zou

Date and Time: 
Monday, November 28, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, Emma Brunskill

5:15-6:00, Doug James

Date and Time: 
Wednesday, November 16, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, James Landay

5:15-6:00, Dan Jurafsky

Date and Time: 
Monday, November 14, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, Ken Salisbury

5:15-6:00, Noah Goodman

Date and Time: 
Wednesday, November 9, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, which allows new CS PhD students the chance to learn about the professors and their research before permanently aligning with an adviser.

4:30-5:15, Kunle Olukotun

5:15-6:00, Jure Leskovec

Date and Time: 
Monday, November 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

EE380 Computer Systems Colloquium

EE380 Computer Systems Colloquium: The End of Privacy

Topic: 
The End of Privacy
Abstract / Description: 

A growing proportion of human activities such as social interactions, entertainment, shopping, and gathering information are now mediated by digital devices and services. Such digitally mediated activities can be easily recorded, offering an unprecedented opportunity to study and measure intimate psycho-demographic traits using actual--rather than self-reported--behavior. Our research shows that digital records of behavior, such as samples of text, Tweets, Facebook Likes, web-browsing logs, or even facial images can be used to accurately measure a wide range of traits including personality, intelligence, and political views. Such Big Data assessment has a number of advantages: it does not require participants' active involvement; it can be easily and inexpensively applied to large populations; and it is relatively immune to cheating or misrepresentation. If used ethically, it could revolutionize psychological assessment, marketing, recruitment, insurance, and many other industries. In the wrong hands, however, such methods pose significant privacy risks. In this talk, we will discuss how to reap the benefits of Big Data assessment while avoiding the pitfalls.

Date and Time: 
Wednesday, April 11, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computer Accessibility

Topic: 
Exploring the implications of machine learning for people with cognitive disabilities
Abstract / Description: 

Advances in information technology have provided many benefits for people with disabilities, including wide availability of textual content via text to speech, flexible control of motor wheelchairs, captioned video, and much more. People with cognitive disabilities benefit from easier communication, and better tools for scheduling and reminders. Will advances in machine learning enhance this impact? Progress in natural language processing, autonomous vehicles, and emotion detection, all driven by machine learning, may deliver important benefits soon. Further out, can we look for systems that can help people with cognitive challenges understand our complex world more easily, work more effectively, stay safe, and interact more comfortably in social situations? What are the technical barriers to overcome in pursuing these goals, and what are the theoretical developments in machine learning that may overcome them?

Date and Time: 
Wednesday, April 18, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Information Theory of Deep Learning

Topic: 
Information Theory of Deep Learning
Abstract / Description: 

I will present a novel comprehensive theory of large scale learning with Deep Neural Networks, based on the correspondence between Deep Learning and the Information Bottleneck framework. The new theory has the following components:

  1. Rethinking learning theory: I will prove a new generalization bound, the input-compression bound, which shows that compression of the representation of the input variable is far more important for good generalization than the dimension of the network's hypothesis class, an ill-defined notion for deep learning.
  2. I will prove that for large-scale Deep Neural Networks the mutual information between the input and the output variables, for the last hidden layer, provides a complete characterization of the sample complexity and accuracy of the network. This makes the Information Bottleneck bound the optimal trade-off between sample complexity and accuracy for ANY learning algorithm. (A toy version of the underlying mutual-information estimate is sketched after this list.)
  3. I will show how Stochastic Gradient Descent, as used in Deep Learning, achieves this optimal bound. In that sense, Deep Learning is a method for solving the Information Bottleneck problem for large-scale supervised learning problems. The theory provides a new computational understanding of the benefit of the hidden layers, and gives concrete predictions for the structure of the layers of Deep Neural Networks and their design principles. These turn out to depend solely on the joint distribution of the input and output and on the sample size.
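
A toy version of the binned mutual-information estimate behind these quantities (the one-layer "network" and all sizes are hypothetical): discretize a hidden layer's activations T and estimate I(X;T) = H(T), which holds for a deterministic network after binning, and I(T;Y) = H(T) - H(T|Y) from empirical frequencies.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 12))
    Y = (X.sum(1) > 0).astype(int)
    W = rng.normal(size=(12, 4))
    T = np.tanh(X @ W)                     # one hidden layer's activations

    bins = np.digitize(T, np.linspace(-1, 1, 30))
    keys = [tuple(row) for row in bins]    # discretized layer state per sample

    def entropy(rows):
        _, counts = np.unique(rows, return_counts=True, axis=0)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    H_T = entropy(np.array(keys))
    H_T_given_Y = sum((Y == c).mean() *
                      entropy(np.array([k for k, y in zip(keys, Y) if y == c]))
                      for c in (0, 1))
    print(f"I(X;T) ~ {H_T:.2f} bits, I(T;Y) ~ {H_T - H_T_given_Y:.2f} bits")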

Based partly on works with Ravid Shwartz-Ziv and Noga Zaslavsky.

Date and Time: 
Wednesday, April 4, 2018 - 4:30pm
Venue: 
Gates B03


Ginzton Lab

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g., short duration, small scale, unrepresentative subjects, simplistic design) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

Ginzton Lab / AMO Seminar

Topic: 
2D/3D Photonic Integration Technologies for Arbitrary Optical Waveform Generation in Temporal, Spectral, and Spatial Domains
Abstract / Description: 

Beginning Academic year 2015-2016, please join us at Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab, and AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 29, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
Silicon-Plus Photonics for Tomorrow's (Astronomically) Large-Scale Networks
Abstract / Description: 

Beginning Academic year 2015-2016, please join us at Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab, and AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 22, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
'Supermode-Polariton Condensation in a Multimode Cavity QED-BEC System' and 'Probing Ultrafast Electron Dynamics in Atoms and Molecules'
Abstract / Description: 

Beginning Academic year 2015-2016, please join us at Spilker room 232 every Monday afternoon from 4 pm for the AP483 & Ginzton Lab, and AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, January 4, 2016 - 4:15pm to 5:30pm
Venue: 
Spilker 232

Ginzton Lab: Special Optics Seminar

Topic: 
A Carbon Nanotube Optical Rectenna
Abstract / Description: 

An optical rectenna – that is, a device that directly converts free-propagating electromagnetic waves at optical frequencies to d.c. electricity – was first proposed over 40 years ago, yet this concept has not been demonstrated experimentally due to fabrication challenges at the nanoscale. Realizing an optical rectenna requires that an antenna be coupled to a diode that operates on the order of 1 petahertz (switching speed on the order of a femtosecond). Ultralow capacitance, on the order of a few attofarads, enables a diode to operate at these frequencies; and the development of metal-insulator-metal tunnel junctions with nanoscale dimensions has emerged as a potential path to diodes with ultralow capacitance, but these structures remain extremely difficult to fabricate and couple to a nanoscale antenna reliably. Here we demonstrate an optical rectenna by engineering metal-insulator-metal tunnel diodes, with ultralow junction capacitance of approximately 2 attofarads, at the tips of multiwall carbon nanotubes, which act as the antenna and metallic electron field emitter in the diode. This demonstration is achieved using very small diode areas based on the diameter of a single carbon nanotube (about 10 nanometers), geometric field enhancement at the carbon nanotube tips, and a low work function semitransparent top metal contact. Using vertically-aligned arrays of the diodes, we measure d.c. open-circuit voltage and short-circuit current at visible and infrared electromagnetic frequencies that are due to a rectification process, and quantify minor contributions from thermal effects. In contrast to recent reports of photodetection based on hot electron decay in plasmonic nanoscale antenna, a coherent optical antenna field is rectified directly in our devices, consistent with rectenna theory. Our devices show evidence of photon-assisted tunneling that reduces diode resistance by two orders of magnitude under monochromatic illumination. Additionally, power rectification is observed under simulated solar illumination. Numerous current-voltage scans on different devices, and between 5 and 77 degrees Celsius, show no detectable change in diode performance, indicating a potential for robust operation.

Date and Time: 
Tuesday, October 20, 2015 - 2:00pm to 3:00pm
Venue: 
Spilker 232

Information Systems Lab (ISL) Colloquium

ISL Colloquium: Reinforcement Learning without Reinforcement

Topic: 
Reinforcement Learning without Reinforcement
Abstract / Description: 

Reinforcement Learning (RL) is concerned with solving sequential decision-making problems in the presence of uncertainty. RL is really about two problems together. The first is the 'Bellman problem': Finding the optimal policy given the model, which may involve large state spaces. Various approximate dynamic programming and RL schemes have been developed, but they either offer no guarantees, are not universal, or are rather slow. In fact, most RL algorithms have become synonymous with stochastic approximation (SA) schemes that are known to be rather slow. This is an even more difficult problem for MDPs with continuous state (and action) spaces. We present a class of non-SA algorithms for reinforcement learning in continuous state space MDP problems based on 'empirical' ideas, which are simple, effective and yet universal with probabilistic guarantees. The idea involves randomized kernel-based function fitting combined with 'empirical' updates. The key is the first known "probabilistic contraction analysis" method we have developed for analysis of fairly general stochastic iterative algorithms, wherein we show convergence to a probabilistic fixed point of a sequence of random operators via a stochastic dominance argument.

The second RL problem is the 'online learning (or the Lai-Robbins) problem' when the model itself is unknown. We propose a simple posterior sampling-based regret-minimization reinforcement learning algorithm for MDPs. It achieves O(sqrt(T)) regret, which is order-optimal. It not only optimally manages the "exploration versus exploitation tradeoff" but also obviates the need for expensive computation for exploration. The algorithm differs from classical adaptive control in its focus on non-asymptotic regret optimality as opposed to asymptotic stability. This seems to resolve a long-standing open problem in Reinforcement Learning.
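
In the degenerate one-state (bandit) case, the posterior-sampling idea reduces to Thompson sampling, which is easy to sketch (illustration only, not the talk's MDP algorithm): sample a model from the posterior, act optimally for that sample, then update the posterior.

    import numpy as np

    rng = np.random.default_rng(0)
    true_p = np.array([0.4, 0.6])              # unknown success probabilities
    wins = np.ones(2); losses = np.ones(2)     # Beta(1,1) priors per arm
    for t in range(1000):
        sampled = rng.beta(wins, losses)       # one posterior sample per arm
        a = int(np.argmax(sampled))            # optimal action for the sampled model
        r = rng.random() < true_p[a]
        wins[a] += r; losses[a] += 1 - r       # posterior update
    print("pulls of best arm:", int(wins[1] + losses[1] - 2))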

Date and Time: 
Tuesday, April 24, 2018 - 4:00pm
Venue: 
Packard 101

ISL Colloquium: Recent Developments in Compressed Sensing

Topic: 
Recent Developments in Compressed Sensing
Abstract / Description: 

Compressed sensing refers to the reconstruction of high-dimensional but low-complexity objects from a limited number of measurements. Examples include the recovery of high-dimensional but sparse vectors, and the recovery of high-dimensional but low-rank matrices, which includes the so-called partial realization problem in linear control theory. Much of the work to date focuses on probabilistic methods, which are CPU-intensive and have high computational complexity. In contrast, deterministic methods are far faster in execution and more efficient in terms of storage. Moreover, deterministic methods draw from many branches of mathematics, including graph theory and algebraic coding theory. In this talk a brief overview will be given of such recent developments.
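
As a concrete instance of the recovery problem (with a random measurement matrix for illustration, whereas the talk's focus is deterministic constructions): orthogonal matching pursuit recovers a sparse vector from far fewer measurements than its ambient dimension.

    import numpy as np

    # Recover a 5-sparse x in R^200 from m = 40 measurements y = A x.
    rng = np.random.default_rng(0)
    n, m, k = 200, 40, 5
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x

    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))    # most correlated column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                       # updated residual
    x_hat = np.zeros(n); x_hat[support] = coef
    print("recovery error:", np.linalg.norm(x_hat - x))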

Date and Time: 
Thursday, April 19, 2018 - 4:15pm
Venue: 
Packard 101

IT Forum: From Gaussian Multiterminal Source Coding to Distributed Karhunen–Loève Transform

Topic: 
From Gaussian Multiterminal Source Coding to Distributed Karhunen–Loève Transform
Abstract / Description: 

Characterizing the rate-distortion region of Gaussian multiterminal source coding is a longstanding open problem in network information theory. In this talk, I will show how to obtain new conclusive results for this problem using nonlinear analysis and convex relaxation techniques. A byproduct of this line of research is an efficient algorithm for determining the optimal distributed Karhunen–Loève transform in the high-resolution regime, which partially settles a question posed by Gastpar, Dragotti, and Vetterli. I will also introduce a generalized version of the Gaussian multiterminal source coding problem where the source-encoder connections can be arbitrary. It will be demonstrated that probabilistic graphical models offer an ideal mathematical language for describing how the performance limit of a generalized Gaussian multiterminal source coding system depends on its topology, and more generally they can serve as the long-sought platform for systematically integrating the existing achievability schemes and converse arguments. The architectural implication of our work for low-latency lossy source coding will also be discussed.
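
For orientation, the centralized Karhunen–Loève transform is just the eigenbasis of the source covariance; the distributed problem in the talk constrains each terminal to observe only its own coordinates. A sketch of the centralized baseline only, on a synthetic correlated Gaussian source:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8)) @ rng.normal(size=(8, 8))  # correlated source samples
    X -= X.mean(0)
    C = np.cov(X.T)
    eigvals, eigvecs = np.linalg.eigh(C)
    KLT = eigvecs[:, ::-1].T          # rows: principal directions, descending variance
    Z = X @ KLT[:2].T                 # keep 2 transform coefficients per sample
    X_hat = Z @ KLT[:2]               # best rank-2 linear reconstruction
    print("MSE with 2 of 8 coefficients:", ((X - X_hat) ** 2).mean())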

This talk is based on joint work with Jia Wang, Farrokh Etezadi, and Ashish Khisti.


The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, April 13, 2018 - 1:15pm
Venue: 
Packard 202

ISL Special Seminar: Low- and high-dimensional computations in neural circuits

Topic: 
Low- and high-dimensional computations in neural circuits
Abstract / Description: 

Computation in the brain is distributed across large populations. Individual neurons are noisy and receive limited information but, by acting collectively, neural populations perform a wide variety of complex computations. In this talk I will discuss two approaches to understanding these collective computations. First, I will introduce a method to identify and decode unknown variables encoded in the activity of neural populations. While the number of neurons in a population may be large, if the population encodes a low-dimensional variable there will be low-dimensional structure in the collective activity, and the method aims to find and parameterize this low-dimensional structure. In the rodent head direction (HD) system, the method reveals a nonlinear ring manifold and allows encoded head direction and the tuning curves of single cells to be recovered with high accuracy and without prior knowledge of what the neurons were encoding. When applied to sleep, it provides mechanistic insight into the circuit construction of the ring manifold and, during NREM sleep, reveals a new dynamical regime possibly linked to memory consolidation in the brain. I will then address the problem of understanding genuinely high-dimensional computations in the brain, where low-dimensional structure does not exist. Modern work studying distributed algorithms on large sparse networks may provide a compelling approach to neural computation, and I will use insights from recent work on error correction to construct a novel architecture for high-capacity neural memory. Unlike previous models, which either yield weak (linear) increases in capacity with network size or exhibit poor robustness to noise, this network is able to store a number of states exponential in network size while preserving noise robustness, thus resolving a long-standing theoretical question.
These results demonstrate new approaches for studying neural representations and computation across a variety of scales, both when low-dimensional structure is present and when computations are high-dimensional.

Date and Time: 
Tuesday, March 6, 2018 - 10:00am
Venue: 
Clark S360

ISL Colloquium: Deep Exploration via Randomized Value Functions

Topic: 
Deep Exploration via Randomized Value Functions
Abstract / Description: 

An important challenge in reinforcement learning concerns how an agent can simultaneously explore and generalize in a reliably efficient manner. It is difficult to claim that one can produce a robust artificial intelligence without tackling this fundamental issue. This talk will present a systematic approach to exploration that induces judicious probing through randomization of value function estimates and operates effectively in tandem with common reinforcement learning algorithms, such as least-squares value iteration and temporal-difference learning, that generalize via parameterized representations of the value function. Theoretical results offer assurances with tabular representations of the value function, and computational results suggest that the approach remains effective with generalizing representations.

Date and Time: 
Thursday, February 22, 2018 - 4:15pm
Venue: 
Packard 101

ISL Special Seminar: Computational structure in large-scale neural population recordings: how to find it, and when to believe it

Topic: 
Computational structure in large-scale neural population recordings: how to find it, and when to believe it
Abstract / Description: 

One central challenge in neuroscience is to understand how neural populations represent and produce the remarkable computational abilities of our brains. Indeed, neuroscientists increasingly form scientific hypotheses that can only be studied at the level of the neural population, and exciting new large-scale datasets have followed. Capitalizing on this trend, however, requires two major efforts from applied statistical and machine learning researchers: (i) methods for finding structure in this data, and (ii) methods for statistically validating that structure. First, I will review our work that has used factor modeling and dynamical systems to advance understanding of the computational structure in the motor cortex of primates and rodents. Second, while these and the broader class of such methods are promising, they are also perilous: novel analysis techniques do not always consider the possibility that their results are an expected consequence of some simpler, already-known feature of the data. I will present two works that address this growing problem, the first of which derives a tensor-variate maximum entropy distribution with user-specified moment constraints along each mode. This distribution forms the basis of a statistical hypothesis test, and I will use this test to answer two active debates in the neuroscience community over the triviality of structure in the motor and prefrontal cortices. I will then discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures in the spirit of implicit generative modeling.

Date and Time: 
Thursday, February 15, 2018 - 10:00am
Venue: 
Munzer Auditorium

ISL Colloquium: Data Driven Dialog Management

Topic: 
Data Driven Dialog Management
Abstract / Description: 

Modern virtual personal assistants provide a convenient interface for completing daily tasks via voice commands. An important consideration for these assistants is the ability to recover from automatic speech recognition (ASR) and natural language understanding (NLU) errors. I present our recent work on learning robust dialog policies to recover from these errors. To this end, we developed a user simulator which interacts with the assistant through voice commands in realistic scenarios with noisy audio, and use it to learn dialog policies through deep reinforcement learning. We show that dialogs generated by our simulator are indistinguishable from human generated dialogs, as determined by human evaluators. Furthermore, preliminary experimental results show that the learned policies in noisy environments achieve the same execution success rate with fewer dialog turns compared to fixed rule-based policies.

Date and Time: 
Wednesday, February 7, 2018 - 4:30pm
Venue: 
Packard 202

ISL Colloquium: Data-driven analysis of neuronal activity

Topic: 
Data-driven analysis of neuronal activity
Abstract / Description: 

Recent advances in experimental methods in neuroscience enable the acquisition of large-scale, high-dimensional and high-resolution datasets. In this talk I will present new data-driven methods based on global and local spectral embeddings for the processing and organization of high-dimensional datasets, and demonstrate their application to neuronal measurements. Looking deeper into the spectrum, we develop Local Selective Spectral Clustering, a new method capable of handling overlapping clusters and disregarding clutter. Applied to in-vivo calcium imaging, we extract hundreds of neuronal structures with detailed morphology, and demixed and denoised time-traces. Next we introduce a nonlinear model-free approach for the analysis of a dynamical system, developing data-driven tree-based transforms and metrics for multiscale co-organization of the data. Applied to trial-based neuronal measurements, we identify, solely from observations and in a purely unsupervised manner, functional subsets of neurons, activity patterns associated with particular behaviors and pathological dysfunction caused by external intervention.

Date and Time: 
Thursday, February 15, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Dynamical Systems on Weighted Lattices: Nonlinear Processing and Optimization

Topic: 
Dynamical Systems on Weighted Lattices: Nonlinear Processing and Optimization
Abstract / Description: 

In this talk we will present a unifying theoretical framework of nonlinear processing operators and dynamical systems that obey a superposition of a weighted max-* or min-* type and evolve on nonlinear spaces which we call complete weighted lattices. Their algebraic structure has a polygonal geometry. Some of the special cases unified include max-plus, max-product, and probabilistic dynamical systems. Such systems have found applications in diverse fields including nonlinear image analysis and vision scale-spaces, control of discrete-event dynamical systems, dynamic programming (e.g. shortest paths, Viterbi algorithm), inference on graphical models, tracking salient events in multimodal information streams using generalized Markov chains, and sparse modeling. Our theoretical approach establishes their representation in state and input-output spaces using monotone lattice operators, finds analytically their state and output responses using nonlinear convolutions of a weighted max-min type, studies their stability and reachability, and provides optimal solutions to solving max-* matrix equations. The talk will summarize the main concepts and our theoretical results in this broad field using weighted lattice algebra and will sample some application areas.

Date and Time: 
Thursday, February 8, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Communication in Machine Learning

Topic: 
Communication in Machine Learning
Abstract / Description: 

This Information Systems Seminar talk investigates the converse use of communication technologies and methods, sometimes in alternative forms or with different names, in machine learning. The next generation of internet communication has many uses for machine learning. This talk instead looks in more detail at some structures used in machine learning and draws analogies to methods previously used in communication. For instance, the recasting of a neural network with ReLU (rectified linear unit) activations as having a state (linear or nonlinear) allows some analogy with hidden Markov models and state machines. Further, some of the back-propagation learning methods have analogies with forward-backward decoding algorithms that are in use as communication decoders. The question is thus posed as to whether some of these communication methods might help certain applications of machine learning that are not initially viewed as communication problems. Some of these topics will be further examined in EE392AA (spring quarter), which can be used for the EE MS Communications Depth sequence.

ISL Colloquium: The Information Systems Laboratory Colloquium (ISLC) is typically held in Packard 101 every Thursday at 4:15 pm during the academic year. Refreshments are usually served after the talk.

The Colloquium is organized by graduate students Martin Zhang, Farzan Farnia, Reza Takapoui, and Zhengyuan Zhou.

Date and Time: 
Thursday, January 25, 2018 - 4:15pm
Venue: 
Packard 101

IT-Forum


IT-Forum & ISL presents Robust sequential change-point detection

Topic: 
Robust sequential change-point detection
Abstract / Description: 

Sequential change-point detection is a fundamental problem in statistics and signal processing, with broad applications in security, network monitoring, imaging, and genetics. Given a sequence of data, the goal is to detect any change in the underlying distribution as quickly as possible from the streaming data. Various algorithms have been developed, including the commonly used CUSUM procedure. However, there is still a gap when applying change-point detection methods to real problems, notably due to the lack of robustness. Classic approaches usually require exact specification of the pre- and post-change distribution forms, which may be quite restrictive, and they do not perform well with real data. On the other hand, Huber's classic robust statistics, built on least favorable distributions, are not directly applicable since they are computationally intractable in the multi-dimensional setting. In this seminar, I will present several of our recent works on developing computationally efficient and robust change-point detection algorithms with certain near-optimality properties, by building a connection between statistical sequential analysis and (online) convex optimization.
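
As a point of reference, here is a minimal sketch of the classic (non-robust) CUSUM recursion for a known Gaussian mean shift, the kind of exact-specification baseline the talk moves beyond. The pre- and post-change means and the alarm threshold are assumed for illustration.

    import numpy as np

    # Classic CUSUM for a known Gaussian mean shift (the non-robust baseline).
    rng = np.random.default_rng(0)
    mu0, mu1, threshold = 0.0, 0.5, 10.0
    x = np.concatenate([rng.normal(mu0, 1, 300),     # pre-change samples
                        rng.normal(mu1, 1, 300)])    # change occurs at t = 300

    W = 0.0
    for t, xt in enumerate(x):
        # log-likelihood ratio of one sample: post-change vs pre-change model
        llr = (mu1 - mu0) * (xt - (mu0 + mu1) / 2)
        W = max(0.0, W + llr)                        # CUSUM recursion
        if W > threshold:
            print("alarm at t =", t)                 # typically shortly after 300
            break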

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, March 16, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: Restricted Isometry Property of Random Projection for Low-Dimensional Subspaces

Topic: 
Restricted Isometry Property of Random Projection for Low-Dimensional Subspaces
Abstract / Description: 

Dimensionality reduction is in demand to reduce the complexity of solving large-scale problems with data lying in latent low-dimensional structures in machine learning and computer vision. Motivated by such needs, in this talk I will introduce the Restricted Isometry Property (RIP) of Gaussian random projections for low-dimensional subspaces in R^N, and prove that the projection Frobenius-norm distance between any two subspaces spanned by the projected data in R^n (for n smaller than N) remains almost the same as the distance between the original subspaces, with probability no less than 1 - e^(-O(n)).

Previously the well-known Johnson-Lindenstrauss (JL) Lemma and RIP for sparse vectors have been the foundation of sparse signal processing including Compressed Sensing. As an analogy to JL Lemma and RIP for sparse vectors, this work allows the use of random projections to reduce the ambient dimension with the theoretical guarantee that the distance between subspaces after compression is well preserved.

As a direct result of our theory, when solving the subspace clustering (SC) problem at a large scale, one may run an SC algorithm on randomly compressed samples to alleviate the high computational burden and still have a theoretical performance guarantee. Because the distance between subspaces remains almost unchanged after projection, the clustering error rate of any SC algorithm can remain as small as it would be in the original space. Since our theory is independent of the particular SC algorithm, it may also benefit future studies on other subspace-related topics.
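
A quick empirical check of the flavor of this result: generate two random low-dimensional subspaces of R^N, apply a Gaussian random projection down to R^n, and compare the projection Frobenius-norm distances before and after. The dimensions below are arbitrary illustrative choices.

    import numpy as np

    # Distance between two random subspaces, before and after random projection.
    rng = np.random.default_rng(0)
    N, n, d = 1000, 60, 3                          # ambient dim, target dim, subspace dim

    def proj_dist(A, B):
        """Projection Frobenius-norm distance between column spaces of A and B."""
        QA, _ = np.linalg.qr(A)
        QB, _ = np.linalg.qr(B)
        return np.linalg.norm(QA @ QA.T - QB @ QB.T) / np.sqrt(2)

    X1 = rng.standard_normal((N, d))               # bases of two random subspaces
    X2 = rng.standard_normal((N, d))
    Phi = rng.standard_normal((n, N)) / np.sqrt(n) # Gaussian random projection

    print("before: %.3f   after: %.3f" % (proj_dist(X1, X2), proj_dist(Phi @ X1, Phi @ X2)))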

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 23, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: BATS: Network Coding in Action

Topic: 
BATS: Network Coding in Action
Abstract / Description: 

Multi-hop wireless networks can be found in many application scenarios, including IoT, fog computing, satellite communication, underwater communication, etc. The main challenge in such networks is the accumulation of packet loss in the wireless links. With existing technologies, the throughput decreases exponentially fast with the number of hops.

In this talk, we introduce BATched Sparse code (BATS code) as a solution to this challenge. BATS code is a rateless implementation of network coding. The advantages of BATS codes include low encoding/decoding complexities, high throughput, low latency, and low storage requirement. This makes BATS codes ideal for implementation on IoT devices that have limited computing power and storage. At the end of the talk, we will show a video demonstration of BATS code over a Wi-Fi network with 10 IoT devices acting as relay nodes.


The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 9, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: Deterministic Random Matrices

Topic: 
Deterministic Random Matrices
Abstract / Description: 

Random matrices have become a very active area of research in recent years and have found enormous applications in modern mathematics, physics, engineering, biological modeling, and other fields. In this work, we focus on symmetric sign (+/-1) matrices (SSMs), which Wigner originally used to model the nuclei of heavy atoms in the mid-1950s. Assuming the entries of the upper triangular part to be independent +/-1 with equal probabilities, Wigner showed in his pioneering works that as the sizes of the matrices grow, their empirical spectra converge to a non-random measure having a semicircular shape. Later, this fundamental result was improved and substantially extended to more general families of matrices and finer spectral properties. In many physical phenomena, however, the entries of matrices exhibit significant correlations. At the same time, almost all available analytical tools rely heavily on the independence condition, making the study of matrices with structure (dependencies) very challenging. The few existing works in this direction consider very specific setups and are limited by particular techniques, lacking a unified framework and tight information-theoretic bounds that would quantify the exact amount of structure that matrices may possess without affecting the limiting semicircular form of their spectra.

From a different perspective, in many applications one needs to simulate random objects. Generation of large random matrices requires very powerful sources of randomness due to the independence condition, the experiments are impossible to reproduce, and atypical or non-random-looking outcomes may appear with positive probability. Reliable deterministic construction of SSMs with random-looking spectra and low algorithmic and computational complexity is of particular interest due to the natural correspondence between SSMs and undirected graphs, since the latter are extensively used in combinatorial and CS applications, e.g. for the purposes of derandomization. Unfortunately, most of the existing constructions of pseudo-random graphs focus on the extreme eigenvalues and do not provide guarantees on the whole spectrum. In this work, using binary Golomb sequences, we propose a simple, completely deterministic construction of circulant SSMs with spectra converging to the semicircular law at the same rate as in the original Wigner ensemble. We show that this construction has close to the lowest possible algorithmic complexity and is very explicit. Essentially, the algorithm requires at most 2log(n) bits, implying that the real amount of randomness conveyed by the semicircular property is quite small.
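
The pipeline can be sketched compactly: build a symmetric circulant +/-1 matrix from a short deterministic binary sequence and inspect its empirical spectrum. In the sketch below a maximal-length LFSR (m-)sequence stands in for the binary Golomb sequences of the actual construction, so the output should be read only as showing the mechanics, not as reproducing the proven semicircular convergence.

    import numpy as np
    from scipy.linalg import circulant

    # Symmetric circulant +/-1 matrix from a deterministic binary sequence.
    # An m-sequence stands in for the talk's binary Golomb sequences -- an
    # assumption for illustration, not the exact construction.
    def m_sequence(nbits=9, taps=(9, 5)):        # x^9 + x^5 + 1, period 511
        state, out = 1, []
        for _ in range(2 ** nbits - 1):
            out.append(1.0 if state & 1 else -1.0)
            bit = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
            state = (state >> 1) | (bit << (nbits - 1))
        return np.array(out)

    c = m_sequence()
    n = len(c)
    c[(n + 1) // 2:] = c[1:(n + 1) // 2][::-1]   # enforce c[k] = c[n-k]: symmetric
    S = circulant(c)                             # symmetric circulant sign matrix
    eigs = np.linalg.eigvalsh(S) / np.sqrt(n)    # Wigner scaling
    hist, _ = np.histogram(eigs, bins=8, range=(-2, 2), density=True)
    print(np.round(hist, 3))                     # empirical spectral histogram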

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 2, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: Recent Advances in Algorithmic High-Dimensional Robust Statistics

Topic: 
Recent Advances in Algorithmic High-Dimensional Robust Statistics
Abstract / Description: 

Fitting a model to a collection of observations is one of the quintessential problems in machine learning. Since any model is only approximately valid, an estimator that is useful in practice must also be robust in the presence of model misspecification. It turns out that there is a striking tension between robustness and computational efficiency. Even for the most basic high-dimensional tasks, such as robustly computing the mean and covariance, until recently the only known estimators were either hard to compute or could only tolerate a negligible fraction of errors.

In this talk, I will survey the recent progress in algorithmic high-dimensional robust statistics. I will describe the first robust and efficiently computable estimators for several fundamental statistical tasks that were previously thought to be computationally intractable. These include robust estimation of mean and covariance in high dimensions, and robust learning of various latent variable models. The new robust estimators are scalable in practice and yield a number of applications in exploratory data analysis.
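
A two-line experiment conveys why robustness is nontrivial in high dimensions: with an eps-fraction of adversarial points, the sample mean can be dragged arbitrarily far, while even the coordinate-wise median (a simple baseline, far weaker in high dimensions than the filtering-based estimators surveyed in the talk) stays bounded. The contamination model below is assumed for illustration.

    import numpy as np

    # Sample mean vs coordinate-wise median under eps-contamination.
    rng = np.random.default_rng(0)
    d, n, eps = 50, 1000, 0.1
    X = rng.standard_normal((n, d))                   # inliers: true mean is zero
    X[: int(eps * n)] = 100.0                         # adversarial outliers

    print("mean   error:", np.linalg.norm(X.mean(axis=0)))        # blows up with the outliers
    print("median error:", np.linalg.norm(np.median(X, axis=0)))  # stays bounded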

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, January 26, 2018 - 1:15pm
Venue: 
Packard 202

IT Forum: Tight regret bounds for a latent variable model of recommendation systems

Topic: 
Tight regret bounds for a latent variable model of recommendation systems
Abstract / Description: 

We consider an online model for recommendation systems, with each user being recommended an item at each time-step and providing 'like' or 'dislike' feedback. A latent variable model specifies the user preferences: both users and items are clustered into types. The model captures structure in both the item and user spaces, and our focus is on simultaneous use of both structures. We analyze the situation in which the type preference matrix has i.i.d. entries. Our analysis elucidates the system operating regimes in which existing algorithms are nearly optimal, as well as highlighting the sub-optimality of using only one of item or user structure (as is done in commonly used item-item and user-user collaborative filtering). This prompts a new algorithm that is nearly optimal in essentially all parameter regimes.

Joint work with Prof. Guy Bresler.

Date and Time: 
Friday, November 10, 2017 - 1:15pm
Venue: 
Packard 202

IT-Forum: Information Theoretic Limits of Molecular Communication and System Design Using Machine Learning

Topic: 
Information Theoretic Limits of Molecular Communication and System Design Using Machine Learning
Abstract / Description: 

Molecular communication is a new and bio-inspired field in which chemical signals are used to transfer information instead of electromagnetic or electrical signals. In this paradigm, the transmitter releases chemicals or molecules and encodes information in some property of these signals, such as their timing or concentration. The signal then propagates through the medium between the transmitter and the receiver by different means, such as diffusion, until it arrives at the receiver, where the signal is detected and the information decoded. This new multidisciplinary field can be used for in-body communication, secrecy, networking microscale and nanoscale devices, infrastructure monitoring in smart cities and industrial complexes, as well as for underwater communications. Since these systems are fundamentally different from telecommunication systems, most techniques that have been developed over the past few decades to advance radio technology cannot be applied to them directly.

In this talk, we first explore some of the fundamental limits of molecular communication channels, evaluating how capacity scales with the number of particles released by the transmitter and characterizing the optimal input distribution. Finally, since the underlying channel models for some molecular communication systems are unknown, we demonstrate how techniques from machine learning and deep learning can be used to design components such as detection algorithms directly from transmission data, without any knowledge of the underlying channel models.
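
To illustrate the last point, the sketch below fits a detector directly from labeled transmission data with no channel model: bits are encoded as molecule releases, the receiver observes noisy counts with inter-symbol interference, and a logistic detector is trained on the observed counts. The channel here is invented purely to generate data; it is a stand-in, not a model from the talk.

    import numpy as np

    # Learn a detector from data, without knowing the channel model.
    rng = np.random.default_rng(0)
    def channel(bits):                      # hidden from the detector
        signal = 50 * bits + 20 * np.roll(bits, 1)     # ISI from previous symbol
        return rng.poisson(signal + 5)                 # counting noise + background

    bits = rng.integers(0, 2, 20000)
    counts = channel(bits)
    X = np.stack([counts, np.roll(counts, 1)], axis=1).astype(float)  # count features
    X = (X - X.mean(0)) / X.std(0)

    w, b = np.zeros(2), 0.0                 # logistic regression via gradient descent
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - bits
        w -= 0.1 * X.T @ g / len(bits)
        b -= 0.1 * g.mean()

    p = 1 / (1 + np.exp(-(X @ w + b)))
    print("training accuracy: %.3f" % (((p > 0.5) == bits).mean()))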

Date and Time: 
Monday, October 16, 2017 - 3:25pm to 4:25pm
Venue: 
Packard 202

Estimation of entropy and differential entropy beyond i.i.d. and discrete distributions

Topic: 
Estimation of entropy and differential entropy beyond i.i.d. and discrete distributions
Abstract / Description: 

Recent years have witnessed significant progress in entropy and mutual information estimation, in particular in the large alphabet regime. Concretely, there exist efficiently computable estimators whose performance with n samples is essentially that of the maximum likelihood estimator with n log(n) samples, a phenomenon termed "effective sample size enlargement". Generalizations to processes with memory (estimation of the entropy rate) and continuous distributions (estimation of the differential entropy) have remained largely open. This talk is about the challenges behind those generalizations and recent progress in this direction. For estimating the entropy rate of a Markov chain, we show that when the mixing time is not too slow, at least S^2/log(S) samples are required to consistently estimate the entropy rate, where S is the size of the state space. In contrast, the empirical entropy rate requires S^2 samples to achieve consistency even if the Markov chain is i.i.d. We propose a general approach to achieve the S^2/log(S) sample complexity, and illustrate our results through estimating the entropy rate of the English language from the Penn Treebank (PTB) and the Google 1 Billion Word Dataset. For differential entropy estimation, we characterize the minimax behavior over Besov balls, and show that a fixed-k nearest neighbor estimator adaptively achieves the minimax rates up to logarithmic factors without knowing the smoothness of the density. The "effective sample size enlargement" phenomenon holds in both the Markov chain case and the case of continuous distributions.
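
For the continuous case, the fixed-k nearest-neighbor estimator mentioned above is short enough to sketch in full. Following the Kozachenko–Leonenko form, it combines digamma terms with the log-distances to each point's k-th nearest neighbor; the test case below (a standard Gaussian, whose true differential entropy is 0.5·log(2πe) per dimension) is an illustrative choice.

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gamma

    # Fixed-k nearest-neighbor (Kozachenko-Leonenko) differential entropy estimator.
    def kl_entropy(x, k=3):
        n, d = x.shape
        dist, _ = cKDTree(x).query(x, k=k + 1)   # column k = k-th neighbor (self excluded)
        eps = dist[:, k]
        log_ball = (d / 2) * np.log(np.pi) - np.log(gamma(d / 2 + 1))  # log vol of unit d-ball
        return digamma(n) - digamma(k) + log_ball + d * np.mean(np.log(eps))

    rng = np.random.default_rng(0)
    x = rng.standard_normal((5000, 1))
    print("estimate:", kl_entropy(x))
    print("truth   :", 0.5 * np.log(2 * np.pi * np.e))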

Joint work with Weihao Gao, Yanjun Han, Chuan-Zheng Lee, Pramod Viswanath, Tsachy Weissman, Yihong Wu, and Tiancheng Yu.

Date and Time: 
Friday, October 13, 2017 - 1:15pm
Venue: 
Packard 202

IT-Forum: Multi-Agent Online Learning under Imperfect Information: Algorithms, Theory and Applications

Topic: 
Multi-Agent Online Learning under Imperfect Information: Algorithms, Theory and Applications
Abstract / Description: 

We consider a model of multi-agent online learning under imperfect information, where the reward structures of agents are given by a general continuous game. After introducing a general equilibrium stability notion for continuous games, called variational stability, we examine the well-known online mirror descent (OMD) learning algorithm and show that the "last iterate" (that is, the actual sequence of actions) of OMD converges to variationally stable Nash equilibria provided that the feedback delays faced by the agents are synchronous and bounded. We then extend the result to almost sure convergence to variationally stable Nash equilibria under both unbiased noise and synchronous and bounded delays. Subsequently, to tackle fully decentralized, asynchronous environments with unbounded feedback delays, we propose a variant of OMD which we call delayed mirror descent (DMD), and which relies on the repeated leveraging of past information. With this modification, the algorithm converges to variationally stable Nash equilibria, with no feedback synchronicity assumptions, and even when the delays grow super-linearly relative to the game's horizon. We then again extend it to the case where there are both delays and noise.
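
The basic OMD update underlying all of this is compact. The sketch below runs entropic (multiplicative-weights) mirror descent on the probability simplex against noisy gradient feedback; the asynchronous delays, game-theoretic coupling, and stability analysis that are the talk's actual subject are deliberately omitted, and the loss sequence is a toy assumption.

    import numpy as np

    # Entropic online mirror descent on the simplex with noisy gradients.
    rng = np.random.default_rng(0)
    d, T, eta = 5, 1000, 0.1
    target = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # loss minimizer, inside the simplex
    x = np.ones(d) / d
    for t in range(T):
        grad = (x - target) + 0.1 * rng.standard_normal(d)  # noisy gradient feedback
        x = x * np.exp(-eta * grad)     # mirror step under the entropic map
        x /= x.sum()                    # normalize back onto the simplex
    print(np.round(x, 3))               # last iterate is approximately the target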

In the second part of the talk, we present two applications of the multi-agent online learning framework. The first application is on non-convex stochastic optimization, where we characterize almost sure convergence of the well-known stochastic mirror descent algorithm to global optima for a large class of non-convex stochastic optimization problems (strictly including convex, quasi-convex and star-convex problems). A step further, our results also include as a special case the large-scale stochastic optimization problem, where stochastic mirror descent is applied in a distributed, asynchronous manner across multiple machines/processors. Time permitting, we will discuss how these results help (at least in part) clarify and affirm the recent successes of mirror-descent type algorithms in large-scale machine learning. The second application concerns power management on random wireless networks, where we use a game-design approach to derive robust power control algorithms that converge (almost surely) to the optimal power allocation in the presence of randomly fluctuating networks.

This is joint work with Nick Bambos, Stephen Boyd, Panayotis Mertikopoulos, Peter Glynn and Claire Tomlin.

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, October 6, 2017 - 1:15pm
Venue: 
Packard 101

Optics and Electronics Seminar

OSA/SPIE Seminar: Noninvasive diffuse optical imaging of breast cancer risk and treatment response

Topic: 
Noninvasive diffuse optical imaging of breast cancer risk and treatment response
Abstract / Description: 

Diffuse optical spectroscopy and imaging (DOSI) is a class of non-invasive near-infrared imaging techniques based upon measuring the wavelength-dependent absorption and (reduced) scattering optical properties of living tissues. In the far-red to near-infrared optical therapeutic window, these optical properties provide information about deep (several cm) tissue composition, structure, and oxygen metabolism. In particular, DOSI is capable of quantifying tissue concentrations of the physiologically relevant molecules oxyhemoglobin, deoxygenated hemoglobin, lipid, and water, as well as structural parameters including cellular size and density (obtained from scattering spectra). The significance and applicability of these and other DOSI biomarkers collected with research devices have been demonstrated in numerous clinical studies of oncology, cardiovascular assessment, exercise physiology, and neuroscience.

In this presentation, I will discuss how DOSI has shown promise in the field of breast oncology for risk assessment, screening, differential diagnosis of benign and malignant lesions, and predicting and monitoring response to chemotherapy treatment. DOSI biomarkers vary significantly in abundance and molecular state between breast cancer and normal tissue and unique cancer-specific absorption signatures have been observed. Finally, I will demonstrate how we are working to translate this promising technology to clinical practice and my vision for the future.

Date and Time: 
Thursday, April 19, 2018 - 4:15pm
Venue: 
Spilker 232

Optics & Electronics Seminar: The Physics and Applications of high Q optical microcavities: Cavity Quantum Optomechanics

Topic: 
The Physics and Applications of high Q optical microcavities: Cavity Quantum Optomechanics
Abstract / Description: 

TBA

Date and Time: 
Monday, May 14, 2018 - 4:15pm
Venue: 
Spilker 232

Light-field-driven currents in graphene

Topic: 
Light-field-driven currents in graphene
Abstract / Description: 

The ability to steer electrons using the strong electromagnetic field of light has opened up the possibility of controlling electron dynamics on the sub-femtosecond timescale. In dielectrics and semiconductors, various light-field-driven effects have been explored, including high-harmonic generation and sub-optical-cycle interband population transfer. In contrast, much less is known about light-field-driven electron dynamics in narrow-bandgap systems or in conductors, in which screening due to free carriers or light absorption hinders the application of strong optical fields.

Graphene is a promising platform with which to achieve light-field-driven control of electrons in a conducting material because of its broadband and ultrafast optical response, weak screening and high damage threshold. We have recently shown that a current induced in monolayer graphene by two-cycle laser pulses is sensitive to the electric-field waveform, that is, to the exact shape of the optical carrier field of the pulse, which is controlled by the carrier-envelope phase, with a precision on the attosecond timescale. Such a current, dependent on the carrier-envelope phase, shows a striking reversal of the direction of the current as a function of the driving field amplitude at about two volts per nanometre. This reversal indicates a transition of light–matter interaction from the weak-field (photon-driven) regime to the strong-field (light-field-driven) regime, where the intraband dynamics influence interband transitions.

We show that in this strong-field regime the electron dynamics are governed by sub-optical-cycle Landau–Zener–Stückelberg interference, composed of coherent repeated Landau–Zener transitions on the femtosecond timescale. Time permitting, we will show another type of quantum path interference in multiphoton emission of electrons from nanoscale tungsten tips, where the admixture of a few percent of second harmonic radiation can suppress or enhance the emission with a visibility of 98%, depending on the relative phase of fundamental and second harmonic.

Date and Time: 
Monday, April 9, 2018 - 4:15pm
Venue: 
Spilker 232

OSA Seminar: From the Optics Lab to the Ear Clinic: Translating Photonic Techniques in Pediatric Ear Nose and Throat

Topic: 
From the Optics Lab to the Ear Clinic: Translating Photonic Techniques in Pediatric Ear Nose and Throat
Abstract / Description: 

Visible light pneumatic otoscopy is considered the best currently available diagnostic office tool for otitis media and congenital cholesteatoma. However, the adoption of pneumatic otoscopy by primary care physicians in their practice has not been optimal, leading to a lack of resident training and a perception that pneumatic otoscopy is difficult to master.

This has resulted in a diagnostic certainty for otitis media among primary care physicians of around 55-70%; nevertheless, antibiotics are still prescribed in large numbers for this condition despite this level of uncertainty. Cholesteatoma is a benign but destructive middle ear condition curable only with surgery. More than one surgical procedure is often required due to the high recurrence rate of cholesteatoma. We will discuss various optical techniques focused on understanding middle ear conditions in an attempt to improve our ability to make the diagnosis.

Date and Time: 
Tuesday, February 13, 2018 - 4:15pm
Venue: 
Spilker 232

SCIEN Talk

SCIEN & EE292E seminar: Transport-Aware Cameras

Topic: 
Transport-Aware Cameras
Abstract / Description: 

Conventional cameras record all light falling onto their sensor regardless of its source or its 3D path to the camera. In this talk I will present an emerging family of coded-exposure video cameras that can be programmed to record just a fraction of the light coming from an artificial source---be it a common street lamp or a programmable projector---based on the light path's geometry or timing. Live video from these cameras offers a very unconventional view of our everyday world: refraction and scattering that cannot be noticed with the naked eye become apparent, and the flicker of electric lights can be turned into a powerful cue for analyzing the electrical grid at scales from a room to a city.

I will discuss the unique optical properties and power efficiency of these "transport-aware cameras" through three case studies: the ACam for analyzing the electrical grid, EpiScan3D for robust live 3D scanning, and our progress toward designing a computational CMOS sensor for coded two-bucket imaging---a novel capability that promises much more flexible and powerful transport-aware cameras compared to existing off-the-shelf solutions.

Date and Time: 
Wednesday, April 18, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Light-field Display Architecture and the Heterogeneous Display Ecosystem FoVI3D

Topic: 
Light-field Display Architecture and the Heterogeneous Display Ecosystem FoVI3D
Abstract / Description: 

Human binocular vision and acuity, and the accompanying 3D retinal processing of the human eye and brain, are specifically designed to promote situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and, as a result, reduces the cognitive load accompanying the analysis of and collaboration on complex tasks.

A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows for perspective correct visualization within the display's projection volume. Binocular disparity, occlusion, specular highlights and gradient shading, and other expected depth cues are correct from the viewer's perspective as in the natural real-world light-field.

Light-field displays are no longer a science fiction concept and a few companies are producing impressive light-field display prototypes. This presentation will review:
· The application agnostic light-field display architecture being developed at FoVI3D.
· General light-field display properties and characteristics such as field of view, directional resolution, and their effect on the 3D aerial image.
· The computation challenge for generating high-fidelity light-fields.
· A display agnostic ecosystem.

Demo after the talk: The FoVI3D Light-field Display Developer Kit (LfD DK2) is a prototype wide field-of-view, full-parallax, monochrome light-field display capable of projecting ~100 million unique rays to fill a 9cm x 9cm x 9cm projection volume. The particulars of the light-field compute, photonics subsystem, and hogel optics will be discussed during the presentation.

Date and Time: 
Wednesday, April 11, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN presents Video-based Reconstruction of the Real World in Motion

Topic: 
Video-based Reconstruction of the Real World in Motion
Abstract / Description: 

New methods for capturing highly detailed models of moving real-world scenes with cameras, i.e., models of detailed deforming geometry, appearance, or even material properties, are becoming more and more important in many application areas. They are needed in visual content creation, for instance in visual effects, to build highly realistic models of virtual human actors. Furthermore, efficient, reliable and highly accurate dynamic scene reconstruction is nowadays an important prerequisite for many other application domains, such as: human-computer and human-robot interaction, autonomous robotics and autonomous driving, virtual and augmented reality, 3D and free-viewpoint TV, immersive telepresence, and even video editing.

The development of dynamic scene reconstruction methods has been a long standing challenge in computer graphics and computer vision. Recently, the field has seen important progress. New methods were developed that capture - without markers or scene instrumentation - rather detailed models of individual moving humans or general deforming surfaces from video recordings, and capture even simple models of appearance and lighting. However, despite this recent progress, the field is still at an early stage, and current technology is still starkly constrained in many ways. Many of today's state-of-the-art methods are still niche solutions that are designed to work under very constrained conditions, for instance: only in controlled studios, with many cameras, for very specific object types, for very simple types of motion and deformation, or at processing speeds far from real-time.

In this talk, I will present some of our recent works on detailed marker-less dynamic scene reconstruction and performance capture in which we advanced the state of the art in several ways. For instance, I will briefly show new methods for marker-less capture of the full body (like our VNECT approach) and hands that work in more general environments, and even in real-time and with one camera. I will then show some of our work on high-quality face performance capture and face reenactment. Here, I will also illustrate the benefits of both model-based and learning-based approaches and show how different ways to join the forces of the two open up new possibilities. Live demos included!

Date and Time: 
Wednesday, March 21, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Drone IoT Networks for Virtual Human Teleportation

Topic: 
Drone IoT Networks for Virtual Human Teleportation
Abstract / Description: 

Cyber-physical/human systems (CPS/CHS) are set to play an increasingly visible role in our lives, advancing research and technology across diverse disciplines. I am exploring novel synergies between three emerging CPS/CHS technologies of prospectively broad societal impact: virtual/augmented reality (VR/AR), the Internet of Things (IoT), and autonomous micro-aerial robots (UAVs). My long-term research objective is UAV-IoT-deployed ubiquitous VR/AR immersive communication that can enable virtual human teleportation to any corner of the world. Thereby, we can achieve a broad range of technological and societal advances that will enhance energy conservation, quality of life, and the global economy.
I am investigating fundamental problems at the intersection of signal acquisition and representation, communications and networking, (embedded) sensors and systems, and rigorous machine learning for stochastic control that arise in this context. I envision a future where UAV-IoT-deployed immersive communication systems will break existing barriers in remote sensing, monitoring, localization and mapping, navigation, and scene understanding. The presentation will outline some of my present and envisioned investigations. Interdisciplinary applications will be highlighted.

Date and Time: 
Wednesday, March 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Temporal coding of volumetric imagery

Topic: 
Temporal coding of volumetric imagery
Abstract / Description: 

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned or captured in parallel and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This talk describes systems and methods with which to efficiently detect and visualize image volumes by temporally encoding the extra dimensions' information into 2D measurements or displays. Some highlights of my research include video and 3D recovery from photographs, and true-3D augmented reality image display by time multiplexing. In the talk, I show how temporal optical coding can improve system performance, battery life, and hardware simplicity for a variety of platforms and applications.

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism

Topic: 
ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism
Abstract / Description: 

Computer-graphics engineers and vision scientists want to generate images that reproduce realistic depth-dependent blur. Current rendering algorithms take into account scene geometry, aperture size, and focal distance, and they produce photorealistic imagery as with a high-quality camera. But to create immersive experiences, rendering algorithms should aim instead for perceptual realism. In so doing, they should take into account the significant optical aberrations of the human eye. We developed a method that, by incorporating some of those aberrations, yields displayed images that produce retinal images much closer to the ones that occur in natural viewing. In particular, we create displayed images taking the eye's chromatic aberration into account. This produces different chromatic effects in the retinal image for objects farther or nearer than current focus. We call the method ChromaBlur. We conducted two experiments that illustrate the benefits of ChromaBlur. One showed that accommodation (eye focusing) is driven quite effectively when ChromaBlur is used and that accommodation is not driven at all when conventional methods are used. The second showed that perceived depth and realism are greater with imagery created by ChromaBlur than in imagery created conventionally. ChromaBlur can be coupled with focus-adjustable lenses and gaze tracking to reproduce the natural relationship between accommodation and blur in HMDs and other immersive devices. It can thereby minimize the adverse effects of vergence-accommodation conflicts.

Date and Time: 
Wednesday, February 28, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Data-driven Computational Imaging

Topic: 
Data-driven Computational Imaging
Abstract / Description: 

Between ever increasing pixel counts, ever cheaper sensors, and the ever expanding world-wide-web, natural image data has become plentiful. These vast quantities of data, be they high-frame-rate videos or huge curated datasets like ImageNet, stand to substantially improve the performance and capabilities of computational imaging systems. However, using this data efficiently presents its own unique set of challenges. In this talk I will use data to develop better priors, improve reconstructions, and enable new capabilities for computational imaging systems.

Date and Time: 
Wednesday, February 21, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Accelerated Computing for Light Field and Holographic Displays

Topic: 
Accelerated Computing for Light Field and Holographic Displays
Abstract / Description: 

In this talk, I will present two recently published papers from SIGGRAPH Asia 2017. In the first paper, we present a 4D light field sampling and rendering system for light field displays that supports both foveation and accommodation to reduce rendering cost while maintaining perceptual quality and comfort. In the second paper, we present a light-field-based Computer Generated Holography (CGH) rendering pipeline allowing for the reproduction of high-definition 3D scenes with continuous depth and support for intra-pupil view-dependent occlusion using computer-generated holograms. Our rendering and Fresnel integral accurately account for diffraction and support various types of reference illumination for holograms.

Date and Time: 
Wednesday, February 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Street View 2018 - The Newest Generation of Mapping Hardware

Topic: 
Street View 2018 - The Newest Generation of Mapping Hardware
Abstract / Description: 

A brief overview of Street View, from its inception 10 years ago until now, will be presented. Street-level imagery was the prime objective of Google's Street View in the past; the system has now migrated into a state-of-the-art mapping platform. Challenges and solutions in the design and fabrication of the imaging system, and the optimization of hardware to align with specific software post-processing, will be discussed. Real-world challenges of fielding hardware in 80+ countries will also be addressed.

Date and Time: 
Wednesday, February 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Learning where to look in 360 environments

Topic: 
Learning where to look in 360 environments
Abstract / Description: 

Many vision tasks require not just categorizing a well-composed human-taken photo, but also intelligently deciding "where to look" in order to get a meaningful observation in the first place. We explore how an agent can anticipate the visual effects of its actions, and develop policies for learning to look around actively---both for the sake of a specific recognition task as well as for generic exploratory behavior. In addition, we examine how a system can learn from unlabeled video to mimic human videographer tendencies, automatically deciding where to look in unedited 360 degree panoramas. Finally, to facilitate 360 video processing, we introduce spherical convolution, which allows application of off-the-shelf deep networks and object detectors to 360 imagery.

Date and Time: 
Wednesday, January 24, 2018 - 4:30pm
Venue: 
Packard 101

SmartGrid

SmartGrid Seminar: Renewable Scenario Generation Using Adversarial Networks

Topic: 
Renewable Scenario Generation Using Adversarial Networks
Abstract / Description: 

Scenario generation is an important step in the operation and planning of power systems. In this talk, we present a data-driven approach for scenario generation using the popular generative adversarial networks, where two deep neural networks are trained in tandem. Compared with existing methods, which are often hard to scale or to sample from, our method is easy to train, robust, and captures both spatial and temporal patterns in renewable generation. In addition, we show that different conditional information can be embedded in the framework. Because of the feedforward nature of the neural networks, scenarios can be generated extremely efficiently.
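
A minimal (unconditional) version of this setup is sketched below in PyTorch: a generator maps noise to a 24-step daily profile and a discriminator learns to separate real profiles from generated ones. The sinusoidal "real" data, the network sizes, and the training constants are stand-ins; the talk's models are conditional and trained on actual renewable-generation data.

    import torch
    import torch.nn as nn

    # Minimal GAN for scenario generation: noise -> 24-step daily profile.
    torch.manual_seed(0)
    T, z_dim = 24, 8
    G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, T))
    D = nn.Sequential(nn.Linear(T, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def real_batch(n=64):                     # toy diurnal profiles with noise
        t = torch.linspace(0, 2 * torch.pi, T)
        amp = 0.5 + 0.5 * torch.rand(n, 1)
        return amp * torch.sin(t).clamp(min=0) + 0.05 * torch.randn(n, T)

    for it in range(2000):
        x_real = real_batch()
        x_fake = G(torch.randn(64, z_dim))
        # discriminator update: real -> 1, fake -> 0
        loss_d = bce(D(x_real), torch.ones(64, 1)) + \
                 bce(D(x_fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator update: make fakes look real
        loss_g = bce(D(x_fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    scenarios = G(torch.randn(5, z_dim)).detach()   # 5 synthetic daily scenarios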

Date and Time: 
Thursday, April 19, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Increasing Power Grid Resiliency for Adverse Conditions & the Role of Renewable Energy Resources and Microgrids

Topic: 
Increasing Power Grid Resiliency for Adverse Conditions & the Role of Renewable Energy Resources and Microgrids
Abstract / Description: 

System resiliency is the number-one concern for electrical utilities in 2018, according to the CEO of PJM, the nation's largest independent system operator. This talk will offer insights and practical answers, through examples, into how power grids can be affected by weather and how countermeasures, such as microgrids, can be applied to mitigate these effects. It will focus on two major events, Super Storm Sandy and Hurricane Maria, and the role of renewable energy resources and microgrids in these two natural disasters. It will also discuss the role of microgrids in black-starting the power grid after a blackout.

Date and Time: 
Thursday, April 12, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration

Topic: 
Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration
Abstract / Description: 

Transmission-distribution coordinated energy management (TDCEM) is recognized as a promising solution to the challenge of high DER penetration, but there has been no distributed computation method that works universally and effectively for TDCEM. To bridge this gap, a generalized master-slave-splitting (G-MSS) method is proposed based on a general-purpose transmission-distribution coordination model (G-TDCM), making G-MSS applicable to most central functions of TDCEM. In G-MSS, a basic heterogeneous decomposition (HGD) algorithm is first derived from the heterogeneous decomposition of the coupling constraints in the KKT system of G-TDCM. Optimality and convergence properties of this algorithm are proved. Furthermore, a modified HGD algorithm is developed by utilizing the subsystems' response functions, resulting in faster convergence. The distributed G-MSS method is then demonstrated to successfully solve central functions of TDCEM, including power flow, contingency analysis, voltage stability assessment, economic dispatch, and optimal power flow. Severe issues of over-voltage and erroneous assessment of system security caused by DERs are thus resolved by G-MSS at modest computational cost. A real-world demonstration project in China will be presented.

Date and Time: 
Thursday, April 5, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Johanna Mathieu

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, March 1, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Optimizing the Operation and Deployment of Battery Energy Storage

Topic: 
Optimizing the Operation and Deployment of Battery Energy Storage
Abstract / Description: 

While the cost of battery energy storage systems is decreasing, justifying their deployment beyond pilot or subsidized projects remains challenging. In this talk, we will discuss how to optimize the size and location of batteries used for spatio-temporal arbitrage by either vertically-integrated utilities or merchant storage developers. We will also consider other applications of battery energy storage, such as reserve and frequency regulation, and discuss how battery degradation can be taken into account in optimal dispatch decisions.
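
As a minimal sketch of temporal arbitrage, one of the applications mentioned above, the following linear program schedules a single battery against known hourly prices using scipy. The prices and battery parameters are invented, and discharge losses, degradation, and a terminal state-of-charge target are deliberately omitted.

import numpy as np
from scipy.optimize import linprog

price = np.array([20., 15., 12., 14., 25., 40., 45., 30.])  # $/MWh by hour (assumed)
T = len(price)
P, S, eta, s0 = 1.0, 4.0, 0.9, 2.0   # MW limit, MWh capacity, charge eff., initial SoC

# decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]
c = np.concatenate([price, -price])  # minimize charging cost minus discharge revenue

# running SoC: s0 + eta*cumsum(charge) - cumsum(discharge) must stay in [0, S]
Lmat = np.tril(np.ones((T, T)))
A_ub = np.block([[eta * Lmat, -Lmat],     # SoC <= S at every hour
                 [-eta * Lmat, Lmat]])    # SoC >= 0 at every hour
b_ub = np.concatenate([np.full(T, S - s0), np.full(T, s0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, P)] * (2 * T))
charge, discharge = res.x[:T], res.x[T:]
print("profit: $%.2f" % -(c @ res.x))   # battery buys low, sells high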


 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 22, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: From Sensors to Software: The role of Wireless in the Smart Grid

Topic: 
From Sensors to Software: The role of Wireless in the Smart Grid
Abstract / Description: 

Sensor-enabled embedded systems are redefining how future communities sense, reason about, and manage utilities (water, electric, gas, sewage), roads, traffic lights, bridges, parking complexes, agriculture, waterways, and the broader environment. With advances in low-power wide-area networks (LP-WANs), we are seeing radios able to transmit small payloads at low data rates (a few kilobits per second) over long distances (several kilometers) with minimal power consumption. As such, LP-WANs have become both a target of study and an enabler for a variety of research projects. In this talk, I will describe our experiences in developing and deploying wireless sensing systems for energy-efficient building and smart-grid applications. I will start off by discussing a number of hardware platforms and sensing techniques developed to improve visibility into buildings and their occupants. These include new devices for occupancy estimation, demand-side management using electric water heaters, and an assortment of low-cost, easy-to-install sub-metering devices. I will then show how these devices can be easily integrated using an open-source platform called OpenChirp, which provides data context, storage, and visualization for sensing systems. Finally, I will go over a case study in which we electrified over 500 homes in rural Haiti with wireless smart meters, so that these homes no longer require expensive and toxic kerosene for lighting.


 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 8, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Deepak Divan

Topic: 
Massively Distributed Control – An Enabler for the Future Grid
Abstract / Description: 

The power infrastructure is poised for dramatic change. Drivers include rapid growth in the deployment of exponential technologies such as solar, wind, storage, EVs and power electronics; improved economic, operational and energy efficiency; and higher grid resiliency under cyber-attacks and natural disasters. Data from the field shows severe limitations with using the traditional top-down centralized control strategy, and an alternate decentralized approach with dynamic control capability is needed. The 'future' grid will involve a full integration of the physical and transactive grids, and will be more dynamic, with bidirectional power flows, and a real-time market that all generators and consumers will be able to participate in. This will translate into unique requirements for autonomous distributed control using power converters distributed around the grid. The presentation will highlight several key issues and possible solutions for addressing them, showing that decentralized dynamic control using power electronics is very feasible and provides a path to a future grid that is more resource-efficient, flexible, resilient and can support higher levels of PV and wind energy penetration.


 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 1, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Adam Wierman

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, January 18, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Saurabh Amin

Topic: 
TBA
Abstract / Description: 

The seminars are scheduled for 1:30 pm on the dates listed above. The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, November 16, 2017 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Optimization, Inference and Learning for District-Energy Systems

Topic: 
Optimization, Inference and Learning for District-Energy Systems
Abstract / Description: 

We discuss how the Optimization, Inference and Learning (OIL) methodology is expected to re-shape future demand-response technologies acting across interdependent energy infrastructures, i.e., power, natural gas, and heating/cooling, at the district/metropolitan/distribution level. We describe a hierarchy of deterministic and stochastic planning and operational problems emerging in the context of physical flows over networks associated with the laws of electricity, gas, fluid, and heat mechanics. We proceed to illustrate the development and challenges of the physics-informed OIL methodology with examples of: a) a Graphical Models approach applied to a broad spectrum of energy flow problems, including online reconstruction of the grid(s) topology from measurements; b) direct and inverse dynamical problems for timely delivery of services in district heating/cooling systems; c) ensemble control of phase-space-cycling energy loads via Markov Decision Processes (MDPs) and related reinforcement learning approaches.
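
Item (c) above rests on Markov Decision Processes; for readers unfamiliar with them, here is a generic value-iteration sketch on a toy two-state cycling load. The transition matrices and rewards are invented for illustration and bear no relation to the speaker's ensemble-control formulation.

import numpy as np

states, actions, gamma = 2, 2, 0.95     # load modes (on/off), actions (keep/switch)
P = np.zeros((actions, states, states)) # P[a, s, s']: transition probabilities
P[0] = np.array([[0.9, 0.1], [0.1, 0.9]])   # action 0: keep current mode (sticky)
P[1] = np.array([[0.1, 0.9], [0.9, 0.1]])   # action 1: switch mode
R = np.array([[1.0, 0.0], [0.2, -0.2]])     # R[a, s]: toy rewards (invented)

V = np.zeros(states)
for _ in range(500):
    Q = R + gamma * (P @ V)             # Q[a, s]: one-step lookahead values
    V_new = Q.max(axis=0)               # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=0)               # greedy action per state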

Date and Time: 
Thursday, November 2, 2017 - 1:30pm
Venue: 
Y2E2 111


Stanford's NetSeminar

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is forced to choose between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.
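
BlindBox's actual construction (its DPIEnc encryption scheme plus an oblivious rule-exchange protocol) is more involved than can be shown here; the sketch below conveys only the high-level idea of matching deterministically encrypted keyword tokens, with HMAC standing in for the real scheme and all keys, rules, and window sizes hypothetical.

import hmac, hashlib

def tokens(text, k=8):
    # sliding-window tokenization of the payload (window size k, assumed)
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def enc_token(key, tok):
    # deterministic keyed pseudorandom token; HMAC is a stand-in for
    # BlindBox's DPIEnc construction, not the actual scheme
    return hmac.new(key, tok.encode(), hashlib.sha256).digest()

session_key = b'per-connection key from the handshake (assumed)'
rules = ['attack.exe', 'evil-payload']
# the middlebox holds only encrypted rule tokens, obtained obliviously in BlindBox
enc_rules = {enc_token(session_key, t) for r in rules for t in tokens(r)}

payload = 'GET /download/attack.exe HTTP/1.1'
suspicious = any(enc_token(session_key, t) in enc_rules for t in tokens(payload))
print(suspicious)  # True: an encrypted payload token matched an encrypted rule token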

Date and Time: 
Wednesday, November 11, 2015 - 12:15pm to 1:30pm
Venue: 
Packard 202

NetSeminar

Topic: 
Precise localization and high throughput backscatter using WiFi signals
Abstract / Description: 

Indoor localization holds great promise to enable applications like location-based advertising, indoor navigation, inventory monitoring and management. SpotFi is an accurate indoor localization system that can be deployed on commodity WiFi infrastructure. SpotFi only uses information that is already exposed by WiFi chips and does not require any hardware or firmware changes, yet achieves the same accuracy as state-of-the-art localization systems.

We then talk about BackFi, a novel communication system that enables high-throughput, long-range communication between very low power backscatter IoT sensors and WiFi APs, using ambient WiFi transmissions as the excitation signal. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 meter and 1 Mbps at a range of 5 meters. Such performance is one to three orders of magnitude better than the best known prior WiFi backscatter system.

Date and Time: 
Thursday, October 15, 2015 - 12:15pm to 1:30pm
Venue: 
Gates 104

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is forced to choose between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.

Date and Time: 
Wednesday, October 7, 2015 - 12:15pm to 1:30pm
Venue: 
AllenX Auditorium


Statistics and Probability Seminars

Statistics Seminar: Inference, Computation, and Visualization for Convex Clustering and Biclustering

Topic: 
Inference, Computation, and Visualization for Convex Clustering and Biclustering
Abstract / Description: 

Hierarchical clustering enjoys wide popularity because of its fast computation, ease of interpretation, and appealing visualizations via the dendrogram and cluster heatmap. Recently, several authors have proposed and studied convex clustering and biclustering, which, similar in spirit to hierarchical clustering, achieve cluster merges via convex fusion penalties. While these techniques enjoy superior statistical performance, they suffer from slower computation and are not generally conducive to representation as a dendrogram. In the first part of the talk, we present new convex (bi)clustering methods and fast algorithms that inherit all of the advantages of hierarchical clustering. Specifically, we develop a new fast approximation and variation of the convex (bi)clustering solution path that can be represented as a dendrogram or cluster heatmap. Also, as one tuning parameter indexes the sequence of convex (bi)clustering solutions, we can use these to develop interactive and dynamic visualization strategies that allow one to watch data form groups as the tuning parameter varies. In the second part of this talk, we consider how to conduct inference for convex clustering solutions, addressing questions like: Are there clusters in my data set? Should two clusters be merged into one? To achieve this, we develop a new geometric representation of Hotelling's T^2-test that allows us to use the selective inference paradigm to test multivariate hypotheses for the first time. We can use this approach to test hypotheses and calculate confidence ellipsoids for the cluster means resulting from convex clustering. We apply these techniques to examples from text mining and cancer genomics.

This is joint work with John Nagorski and Frederick Campbell.
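
For reference, convex clustering solves min_U 0.5*||X - U||_F^2 + lambda * sum_{i<j} w_ij * ||u_i - u_j||, whose non-smooth fusion penalty drives rows of U to merge as lambda grows. The sketch below is a naive subgradient descent with unit weights, assumed purely for illustration; the speakers' path algorithms are far more efficient.

import numpy as np

def convex_clustering(X, lam=1.0, steps=2000, lr=0.01):
    """Naive subgradient descent on the convex clustering objective with
    unit fusion weights (an assumed simplification)."""
    n, _ = X.shape
    U = X.copy()
    for _ in range(steps):
        g = U - X                          # gradient of the fidelity term
        for i in range(n):
            for j in range(i + 1, n):
                d = U[i] - U[j]
                nrm = np.linalg.norm(d)
                if nrm > 1e-12:            # subgradient of the fusion penalty
                    g[i] += lam * d / nrm
                    g[j] -= lam * d / nrm
        U -= lr * g
    return U

X = np.vstack([np.random.randn(5, 2) + [3, 3],
               np.random.randn(5, 2) - [3, 3]])    # two toy clusters
U = convex_clustering(X, lam=0.5)  # rows of U fuse as lam grows, tracing the cluster path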


The Statistics Seminars for Winter Quarter will be held in Room 380Y of the Sloan Mathematics Center in the Main Quad at 4:30pm on Tuesdays. 

Date and Time: 
Tuesday, March 13, 2018 - 4:30pm
Venue: 
Sloan Mathematics Building, Room 380Y

Statistics Seminar: Understanding rare events in models of statistical mechanics

Topic: 
Understanding rare events in models of statistical mechanics
Abstract / Description: 

Statistical mechanics models are ubiquitous at the interface of probability theory, information theory, and inference problems in high dimensions. To develop a refined understanding of such models, one often needs to study not only typical fluctuation theory but also the realm of atypical events. In this talk, we will focus on sparse networks and polymer models on lattices. In particular we will consider the rare events that a sparse random network has an atypical number of certain local structures, and that a polymer in random media has atypical weight. The random geometry associated with typical instances of these rare events is an important topic of inquiry: this geometry can involve merely local structures, or more global ones. We will discuss recent solutions to certain longstanding questions and connections to stochastic block models, exponential random graphs, eigenvalues of random matrices, and fundamental growth models.

Date and Time: 
Tuesday, January 30, 2018 - 4:30pm
Venue: 
Sloan Mathematics Building, Room 380Y

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (short durations, small scale, unrepresentative subjects, simplistic designs, etc.) that limit their external validity. In this talk I describe how the web in general, and crowdsourcing sites like Amazon's Mechanical Turk in particular, allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Statistics Seminar

Topic: 
Brownian Regularity for the Airy Line Ensemble
Abstract / Description: 

The Airy line ensemble is a positive-integer-indexed, ordered system of continuous random curves on the real line whose finite-dimensional distributions are given by the multi-line Airy process. It is a natural object in the KPZ universality class: for example, its highest curve, the Airy2 process, describes, after the subtraction of a parabola, the limiting law of the scaled weight of a geodesic running from the origin to a variable point on an anti-diagonal line in such problems as Poissonian last passage percolation. The Airy line ensemble enjoys a simple and explicit spatial Markov property, the Brownian Gibbs property.


In this talk, I will discuss how this resampling property may be used to analyse the Airy line ensemble. Arising results include a close comparison between the ensemble's curves after affine shift and Brownian bridge. The Brownian Gibbs technique is also used to compute the value of a natural exponent describing the decay in probability for the existence of several near geodesics with common endpoints in Brownian last passage percolation, where the notion of "near" refers to a small deficit in scaled geodesic weight, with the parameter specifying this nearness tending to zero.
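
As a toy illustration of the Brownian Gibbs resampling step (not the talk's actual analysis), the sketch below redraws a curve segment as a Brownian bridge between fixed endpoints, conditioned by naive rejection to stay above the curve beneath it. The lower curve, endpoints, and discretization are all invented.

import numpy as np

def brownian_bridge(n, a, b, sigma=1.0):
    # standard bridge construction on [0, 1]: pin a random walk at both ends
    t = np.linspace(0.0, 1.0, n)
    dW = np.random.randn(n - 1) * sigma * np.sqrt(t[1] - t[0])
    W = np.concatenate(([0.0], np.cumsum(dW)))
    return W - t * W[-1] + a * (1 - t) + b * t

def gibbs_resample_above(lower, a, b, tries=100000):
    # redraw the segment, keeping only samples that avoid the curve below
    for _ in range(tries):
        cand = brownian_bridge(len(lower), a, b)
        if np.all(cand > lower):
            return cand
    raise RuntimeError("avoidance event too rare for naive rejection sampling")

lower = -1.0 + 0.2 * np.sin(np.linspace(0, np.pi, 200))   # invented lower curve
segment = gibbs_resample_above(lower, a=0.5, b=0.5)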

Date and Time: 
Monday, September 26, 2016 - 4:30pm
Venue: 
Sequoia Hall, room 200

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A


SystemX

SystemX Seminar: Power Electronics for the Future: Research Trends and Challenges

Topic: 
Power Electronics for the Future: Research Trends and Challenges
Abstract / Description: 

Power electronics can be found in everything from cellphones and laptops to gasoline/electric vehicles, industrial motors and inverters that connect solar panels to the electric grid. With close to 80% of electrical energy consumption in the US expected to flow through a power converter by 2030, innovative solutions are required to tackle key issues related to conversion efficiency, power density and cost. This talk will look at the trends in power electronics across different application spaces, describe the ongoing research efforts and highlight the challenges ahead.

Date and Time: 
Thursday, April 19, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Computational Near-Eye Displays (for VR/AR Applications)

Topic: 
Computational Near-Eye Displays (for VR/AR Applications)
Abstract / Description: 

Immersive visual and experiential computing systems, i.e., virtual and augmented reality (VR/AR), are entering the consumer market and have the potential to profoundly impact our society. Applications of these systems range from communication, entertainment, education, collaborative work, simulation, and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Thus, developing near-eye display systems that provide a high-quality user experience is of the utmost importance. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have been significantly improved in recent years. However, a significant source of visual discomfort prevails: the vergence-accommodation conflict (VAC). Furthermore, natural focus cues are not supported by any existing near-eye display. In this talk, we discuss frontiers of engineering next-generation opto-computational near-eye display systems that increase visual comfort and provide realistic and effective visual experiences.

Date and Time: 
Thursday, April 12, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Modeling and Simulation for neuromorphic applications with focus on RRAM and ferroelectric devices

Topic: 
Modeling and Simulation for neuromorphic applications with focus on RRAM and ferroelectric devices
Abstract / Description: 

Neuromorphic computing has recently emerged as one of the most promising options for reducing the power consumption of big-data analysis, paving the way for artificial intelligence systems with power efficiencies approaching that of the human brain. The key devices in a neuromorphic computing system are artificial two-terminal synapses that control signal processing and transmission. Their conductivity must be changed in an analog/continuous way depending on neural signal strengths. In addition, synaptic devices must have: symmetric/linear conductivity potentiation and depression; a high number of levels (~32), depending on the application and algorithm performance; high data retention (>10 years) and cycling endurance (>10^9 cycles); ultra-low power consumption (<10 fJ); low variability; high scalability (<10 nm); and the possibility of 3D integration.

A variety of device technologies have been explored, such as phase-change memories, ferroelectric random-access memory, and resistive random-access memory (RRAM). In each case, matching the desired specs is a complex multivariable problem requiring a deep quantitative understanding of the link between material properties at the atomic scale and electrical device performance. We have used the multiscale modeling platform GINESTRA™ to illustrate this for the case of RRAM and ferroelectric tunnel junctions (FTJs).

In the case of RRAM, modeling of the key mechanisms shows that a dielectric stack composed of two appropriately chosen dielectrics provides the best solution, in agreement with experimental data. In the case of FTJs, the hysteretic ferroelectric behavior of dielectric stacks fabricated from the orthorhombic phase of doped HfO2 is nicely captured by the simulations. These show that the Fe-HfO2 stack can easily be used for analog switching by simply tuning the set/reset voltage amplitudes. An added advantage of the simulations is that they point out ways to improve the performance, variability, and endurance of the devices in order to meet industrial requirements.
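
For readers unfamiliar with the potentiation-linearity spec above, a widely used empirical update model from the neuromorphic-device literature (not taken from this talk) expresses conductance after n identical pulses; a larger nonlinearity parameter A approaches the ideal linear update the abstract calls for. All values below are illustrative.

import numpy as np

def potentiation(n, N=32, G_min=0.0, G_max=1.0, A=5.0):
    # conductance after n of N identical potentiation pulses;
    # A controls nonlinearity, with large A approaching a linear staircase
    return G_min + (G_max - G_min) * (1 - np.exp(-n / A)) / (1 - np.exp(-N / A))

n = np.arange(33)
for A in (1.0, 5.0, 100.0):
    G = potentiation(n, A=A)
    print(A, np.round(G[[1, 16, 32]], 3))   # early/mid/final levels per nonlinearity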

Date and Time: 
Thursday, April 5, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Alliance hosts Spring 2018 Workshop

Topic: 
SystemX Alliance Spring 2018 Workshop
Abstract / Description: 

Join the SystemX Alliance for its Spring Workshop Week, Apr 30 - May 3, 2018.
Details are available on the SystemX Spring workshop page.

SystemX Alliance research broadly encompasses ubiquitous sensing, computing, and communications in various application areas. Currently affiliated SystemX faculty are found in departments across Stanford's School of Engineering and in some areas of the natural sciences and medicine. Their research agenda is continually evolving in accordance with the interests of Stanford faculty and industry affiliates.

Date and Time: 
Monday, April 30, 2018 (All day) to Thursday, May 3, 2018 (All day)
Venue: 
Li Ka Shing Center for Learning and Knowledge

SystemX Seminar: Toward Managing the Complexity of Molecules: Letting Matter Compute Itself

Topic: 
Toward Managing the Complexity of Molecules: Letting Matter Compute Itself
Abstract / Description: 

Person-millennia are spent each year seeking useful molecules for medicine, food, agriculture and other uses. For biomolecules, the near-infinite universe of possibilities is staggering and humbling. As an example, antibodies, which make up the majority of the top-grossing medicines today, comprise about 1,100 amino acids chosen from the twenty used by living things. The binding part (variable region) that allows the antibody to bind and recognize pathogens is about 110 amino acids, giving rise to 10^143 possible combinations. There are only about 10^80 atoms in the universe, illustrating the intractability of exploring the entire space of possibilities. This is just one example…
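
The 10^143 figure is simple arithmetic: twenty choices at each of roughly 110 positions gives 20^110, and a one-liner confirms the exponent.

from math import log10
print(110 * log10(20))   # ~143.1, so 20**110 is about 10**143 variable regions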

Presently, machine learning (ML), artificial intelligence (AI), quantum computing, and “big data” are often put forth as the solutions to all problems, particularly by pontificating TED presenters and in Sand Hill pitches dripping with hyperbole. Expecting these methods to provide intelligent prediction of molecular structure and function within our lifetimes is unrealistic. For example, a neural network trained on daily weather patterns in Palo Alto cannot develop an internal model for global weather. In a similar way, finite and reasonable molecular training sets will not magically cause a generalizable model of molecular quantum mechanics to arise within a neural network, no matter how many layers it is endowed with.

With that provocative preface, we turn to the notion of letting matter compute itself. Massive combinatorial libraries can now be intelligently and efficiently mined with appropriate molecular readouts (AKA “the question vector”) at ever-increasing throughputs, presently surpassing 10^12 unique molecules in a few hours. Once “matter-in-the-loop” exploration is embraced, AI, ML and other methods can be brought to bear usefully in closed-loop methods to follow veins of opportunity in molecular space. Several examples of mining massive molecular spaces will be presented, including drug discovery, digital pathology, and AI-guided continuous-flow chemical synthesis – all real, all working today.

Date and Time: 
Thursday, March 15, 2018 - 4:30pm to 5:30pm
Venue: 
Y2E2 Room 111

