Seminar / Colloquium

US-ATMC (EE402T) Seminar presents Bobby Lee

Topic: 
Co-founder of BTCC & Board Member of Bitcoin Foundation
Abstract / Description: 

About EE402T Seminar: Over the course of our public seminar series, we will explore the most recent trends, patterns, and challenges of entrepreneurship in Asia and their relevance to Silicon Valley and the U.S. Guest speakers include entrepreneurs, investors, mentors, and other experts on the current entrepreneurial ecosystems of major Asian economies.

Date and Time: 
Tuesday, May 1, 2018 - 4:30pm
Venue: 
Skilling Auditorium, 494 Lomita Mall

US-ATMC (EE402T) Seminar presents Allen Miner

Topic: 
Founder/General Partner of SunBridge Partners & Founder/CEO of SunBridge Corporation
Abstract / Description: 

As SunBridge CEO, Allen Miner aims to create a dynamic, collaborative environment in which Japanese information technology startups develop at a globally competitive pace.

About EE402T Seminar: Over the course of our public seminar series, we will explore the most recent trends, patterns, and challenges of entrepreneurship in Asia and their relevance to Silicon Valley and the U.S. Guest speakers include entrepreneurs, investors, mentors, and other experts on the current entrepreneurial ecosystems of major Asian economies.

Date and Time: 
Tuesday, April 24, 2018 - 4:30pm
Venue: 
Skilling Auditorium, 494 Lomita Mall

US-ATMC (EE402T) Seminar: Globalization in Shenzhen

Topic: 
Globalization in Shenzhen: A Perspective on Business Practices in China
Abstract / Description: 

About EE402T Seminar: Over the course of our public seminar series, we will explore the most recent trends, patterns, and challenges of entrepreneurship in Asia and their relevance to Silicon Valley and the U.S. Guest speakers include entrepreneurs, investors, mentors, and other experts on the current entrepreneurial ecosystems of major Asian economies.

Date and Time: 
Tuesday, April 17, 2018 - 4:30pm
Venue: 
Skilling Auditorium, 494 Lomita Mall

US-ATMC (EE402T) Seminar: The Venture Capital Industry and Entrepreneurship in China

Topic: 
The Venture Capital Industry and Entrepreneurship in China: Women in a Rapidly Growing Ecosystem
Abstract / Description: 

About EE402T Seminar: Over the course of our public seminar series, we will explore the most recent trends, patterns, and challenges of entrepreneurship in Asia and their relevance to Silicon Valley and the U.S. Guest speakers include entrepreneurs, investors, mentors, and other experts on the current entrepreneurial ecosystems of major Asian economies.

Date and Time: 
Tuesday, April 10, 2018 - 4:30pm
Venue: 
Skilling Auditorium, 494 Lomita Mall

Applied Physics / Physics Colloquium

Applied Physics/Physics Colloquium: GW170817: Hearing and Seeing a Binary Neutron Star Merger

Topic: 
GW170817: Hearing and Seeing a Binary Neutron Star Merger
Abstract / Description: 

With the detection of GW170817 in gravitational waves, together with the discovery of an associated short gamma-ray burst and an associated optical afterglow, we have finally entered the era of gravitational-wave multi-messenger astronomy. We will discuss LIGO/Virgo's detection of this binary coalescence and focus on some of the scientific implications, including insight into the origin of gold and platinum in the universe, tests of black holes and general relativity, elucidation of the formation mechanisms for black holes and neutron stars, and the first standard siren measurement of the Hubble constant.

Date and Time: 
Tuesday, May 29, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: The IceCube Neutrino Observatory and the Beginning of Neutrino Astrophysics

Topic: 
The IceCube Neutrino Observatory and the Beginning of Neutrino Astrophysics
Abstract / Description: 

The IceCube Neutrino Observatory is the world's largest neutrino detector, instrumenting a cubic kilometer of ice at the geographic South Pole. IceCube was designed to detect high-energy astrophysical neutrinos from potential cosmic ray acceleration sites such as active galactic nuclei, gamma-ray bursts and supernova remnants. IceCube announced the detection of a diffuse flux of astrophysical neutrinos in 2013, including the highest energy neutrinos ever detected. The sources of these neutrinos are as yet unknown, and IceCube continues to collect data and to collaborate with multi-messenger partners in order to explore the neutrino sky. I will discuss the latest results from IceCube and discuss prospects for future upgrades and expansions of the detector.

Date and Time: 
Tuesday, May 22, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Quantum Computing with Trapped Ions

Topic: 
Quantum Computing with Trapped Ions
Abstract / Description: 

Individual atoms are standards for quantum information technology, acting as qubits that have unsurpassed levels of quantum coherence, can be replicated and scaled with atomic-clock accuracy, and allow near-perfect measurement. Atomic ions can be confined in silicon-based chip traps with lithographically defined electrodes under high vacuum in a room-temperature environment. Entangling quantum gate operations can be mediated with control laser beams, allowing the qubit connectivity graph to be reconfigured and optimally adapted to a given algorithm or mode of quantum computing. Existing work has shown >99.9% fidelity gate operations, fully connected control with up to about 10 qubits, and quantum simulations with over 50 qubits. I will speculate on combining all this into a single universal quantum computing device that can be co-designed with future quantum applications and scaled to useful dimensions.

Date and Time: 
Tuesday, May 15, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Large Volume 3D Imaging by FIBSEM and Cryo-Fluorescence for Cell Biology and Neural Circuits

Topic: 
Large Volume 3D Imaging by FIBSEM and Cryo-Fluorescence for Cell Biology and Neural Circuits
Abstract / Description: 

3D electron microscopy volume data can be acquired by a variety of approaches. Focused ion beam-scanning electron microscopy (FIBSEM) offers no limitation on section thickness, so that isotropic voxels with 8 nm or less sampling in the x, y, and z dimensions can be acquired. The FIBSEM, which is normally limited to a couple of days of continuous operation, was refined to enable the year-long reliable data acquisition needed for the large volumes of neural imaging and the fly brain connectome. Concurrently, this capability opens a new regime in which entire cells can be imaged with 4 nm voxel sampling, thereby going beyond partial-cell or section-limited data to complete-cell data. The heavy metal staining used for EM contrast gives a spatially detailed but generic black-and-white rendering of protein- and membrane-defined structures. Fluorescence microscopy, on the other hand, is highly protein specific, labeling only a tiny subset (1-3) of the thousands of constituent proteins of the cell. More than 99.9% of the cell remains dark. Correlated cryogenic fluorescence microscopy offers a way to combine both without compromising the quality of either the EM or the fluorescence image. Fluorescent properties at low temperatures (down to 10 K) include new regimes of stable fluorescence with highly reduced bleaching, new blinking regimes, good contrast ratios usable for PALM, nonlinearity with respect to excitation power, and photo-reactivation. Multicolor 3D structured illumination microscopy (SIM) images can be acquired on such samples, and two-color 3D PALM images offer even higher resolution. Examples of such correlative cryo SIM/PALM and FIBSEM images will be presented for cultured cells.

Date and Time: 
Tuesday, May 1, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Ultracold Atom Quantum Simulations: From Exploring Low Temperature Fermi-Hubbard Phases to Many-body Localization

Topic: 
Ultracold Atom Quantum Simulations: From Exploring Low Temperature Fermi-Hubbard Phases to Many-body Localization
Abstract / Description: 

Ultracold-atom model systems offer a unique way to investigate a wide range of many-body quantum physics in uncharted regimes. Quantum gas microscopy enables us to "zoom in", both in space and time, at the single-particle level. We can explore many-body quantum physics in regimes that are not computationally accessible. In my talk I will present an overview of recent experiments, including the first observation of an antiferromagnetic phase of fermions in an optical lattice, and the observation of many-body localization.

Date and Time: 
Tuesday, April 24, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Reverse Engineering the Universe

Topic: 
Reverse Engineering the Universe
Abstract / Description: 

Prof. Andrei Linde of the Stanford Physics Department will give the Applied Physics/Physics colloquium on Tues., May 8, 2018 entitled "Reverse Engineering the Universe."

Date and Time: 
Tuesday, May 8, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Demonology: The Curious Role of Intelligence in Physics & Biology

Topic: 
Demonology: The Curious Role of Intelligence in Physics & Biology
Abstract / Description: 

For the lion's share of its history, physics analyzed the inanimate world. Or, that is the view it has of itself. Careful reflection, though, shows that physics regularly invoked an expressly extra-physical agency—intelligence—in its efforts to understand even the most basic physical phenomena. I will survey this curious proclivity, noting that similar appeals to intelligent "demons" go back to Laplace's theory of chance, Poincaré's discovery of deterministic chaos in the solar system, and Darwin's explanation of the origin of biological organisms in terms of natural selection. Today, we are on the verge of a new physics of information that will transform this bad "demonology" to a constructive, perhaps even an engineering, paradigm that explains information processing embedded in the natural world. In the process I will show how deterministic chaos arises in the operation of Maxwell's Demon and outline nanoscale experimental implementations ongoing at Caltech's Kavli Nanoscience Institute.

Date and Time: 
Tuesday, April 17, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: Top Quarks: The New Flavor

Topic: 
Top Quarks: The New Flavor
Abstract / Description: 

The Large Hadron Collider is providing an enormous dataset of proton-proton collisions at the highest energies ever achieved in a laboratory.

With our new ability to study the Higgs boson and the unprecedentedly large sample of top quarks, a new frontier has opened: the flavor physics of the top quark - at heart, the question of how the top quark interacts with the Higgs field. We can start to ask questions such as whether the Higgs field is the unique source of the top quark's mass and whether there are unexpected interactions between the top quark and the Higgs boson. The answers to these questions will shed light on what may lie beyond the particle physics Standard Model and have cosmological implications.

Date and Time: 
Tuesday, April 10, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: The Search for Dark Energy and NASA’s WFIRST mission

Topic: 
The Search for Dark Energy and NASA’s WFIRST mission
Abstract / Description: 

Over the last twenty years, there has been growing evidence that our universe is dominated by dark energy. The nature of this dark energy remains a mystery. Is it the signature of the breakdown of general relativity or vacuum energy associated with quantum gravity? I will review the current observations and note the intriguing tensions between measurements based on the cosmic microwave background (CMB) and local measurements of the expansion rate of the universe and the amplitude of density fluctuations. I will then discuss ongoing and upcoming CMB experiments and the role of the WFIRST mission in studying the nature of dark energy. I will also discuss the broader scientific goals of WFIRST and its current status.

Date and Time: 
Tuesday, March 13, 2018 - 4:30pm
Venue: 
Hewlett 201

Applied Physics/Physics Colloquium: The Entropic Matter(s) of an Ordered Universe

Topic: 
The Entropic Matter(s) of an Ordered Universe
Abstract / Description: 

Cosmic Information Theory and Analysis, CITA@CITA, uses entropy constrained by control/order parameters to relate our increasingly highly-entangled Cosmic Microwave Background and Large Scale Clustering big-sky data to how our Universe morphed from a coherently smooth accelerating Hubble-patch into the intricate evolving complexity of the cosmic web. I will chat about ongoing problems in (non-equilibrium) Information-Entropy generation: in post-inflation shock-in-time heating, stored now mostly in the cosmic photon and neutrino seas; in the space-shocked web of galaxies and clusters and its accompanying nuclear/black hole cosmic infrared waste. Central to our statistical analyses are the all-sky deep-volume ensembles of "webskys" we build to mock the real-sky webskys we observe. As in particle physics, simulating and discovering what lies Beyond the Standard Model of Cosmology is the goal, as yet with no B in the SMc in spite of tantalizing 2sigma-ish SMc anomalies and tensions.

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Hewlett 201

CS300 Seminar

Special Seminar: Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design

Topic: 
Formal Methods meets Machine Learning: Explorations in Cyber-Physical Systems Design
Abstract / Description: 

Cyber-physical systems (CPS) are computational systems tightly integrated with physical processes. Examples include modern automobiles, fly-by-wire aircraft, software-controlled medical devices, robots, and many more. In recent times, these systems have exploded in complexity due to the growing amount of software and networking integrated into physical environments via real-time control loops, as well as the growing use of machine learning and artificial intelligence (AI) techniques. At the same time, these systems must be designed with strong verifiable guarantees.

In this talk, I will describe our research explorations at the intersection of machine learning and formal methods that address some of the challenges in CPS design. First, I will describe how machine learning techniques can be blended with formal methods to address challenges in specification, design, and verification of industrial CPS. In particular, I will discuss the use of formal inductive synthesis --- algorithmic synthesis from examples with formal guarantees — for CPS design. Next, I will discuss how formal methods can be used to improve the level of assurance in systems that rely heavily on machine learning, such as autonomous vehicles using deep learning for perception. Both theory and industrial case studies will be discussed, with a special focus on the automotive domain. I will conclude with a brief discussion of the major remaining challenges posed by the use of machine learning and AI in CPS.

Date and Time: 
Monday, December 4, 2017 - 4:00pm
Venue: 
Gates 463A

SpaceX's journey on the road to Mars

Topic: 
SpaceX's journey on the road to Mars
Abstract / Description: 

SSI will be hosting Gwynne Shotwell — President and COO of SpaceX — to discuss SpaceX's journey on the road to Mars. The event will be on Wednesday, Oct 11th, from 7pm to 8pm in Dinkelspiel Auditorium. After the talk, there will be a Q&A session hosted by Steve Jurvetson of DFJ Venture Capital.

Claim your tickets now on Eventbrite.

Date and Time: 
Wednesday, October 11, 2017 - 7:00pm
Venue: 
Dinkelspiel Auditorium

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, Subhasish Mitra

5:15-6:00, Silvio Savarese

Date and Time: 
Wednesday, December 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, Phil Levis

5:15-6:00, Ron Fedkiw

Date and Time: 
Monday, December 5, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, Dan Boneh

5:15-6:00, Aaron Sidford

Date and Time: 
Wednesday, November 30, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, John Mitchell

5:15-6:00, James Zou

Date and Time: 
Monday, November 28, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, Emma Brunskill

5:15-6:00, Doug James

Date and Time: 
Wednesday, November 16, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, James Landay

5:15-6:00, Dan Jurafsky

Date and Time: 
Monday, November 14, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, Ken Salisbury

5:15-6:00, Noah Goodman

Date and Time: 
Wednesday, November 9, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

CS Department Lecture Series (CS300)

Topic: 
Faculty speak about their research to new PhD students
Abstract / Description: 

Offered to incoming first-year PhD students in the Autumn quarter.

The seminar gives CS faculty the opportunity to speak about their research, giving new CS PhD students the chance to learn about the professors and their work before permanently aligning with an advisor.

4:30-5:15, Kunle Olukotun

5:15-6:00, Jure Leskovec

Date and Time: 
Monday, November 7, 2016 - 4:30pm to 6:00pm
Venue: 
200-305 Lane History Corner, Main Quad

EE380 Computer Systems Colloquium

EE380 Computer Systems Colloquium presents Optional Static Typing for Python

Topic: 
Optional Static Typing for Python
Abstract / Description: 

Python is a dynamically typed language, and some of its appeal derives from this. Nevertheless, especially for large code bases, it would be nice if a compiler could find type errors before the code is even run. Optional static type checking promises exactly this, and over the past four years we have successfully introduced this feature into Python 3. This talk introduces the type system we've adopted and the syntax used for type annotations, some tips on how to get started with a large existing code base, and our experience using the 'mypy' type checker at Dropbox. The entire system is open source, and has also been adopted by other companies such as Lyft, Quora and Facebook.
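
For readers unfamiliar with the feature, here is a minimal, hypothetical illustration of the annotation syntax and the kind of error a checker catches; it is not code from the talk, and the function names are invented for this example.

  from typing import Optional

  def greeting(name: str, excited: bool = False) -> str:
      # The annotations let a checker reject calls like greeting(42)
      # before the code is ever run.
      message = "Hello, " + name
      return message + "!" if excited else message

  def find_user(user_id: int) -> Optional[str]:
      # Optional[str] documents that the result may be None.
      users = {1: "guido"}
      return users.get(user_id)

Running a checker such as mypy over a file like this reports type errors statically, while the annotations do not change the program's runtime behavior.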

Date and Time: 
Wednesday, June 6, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Artificial Intelligence: Current and Future Paradigms and Implications

Topic: 
Artificial Intelligence: Current and Future Paradigms and Implications
Abstract / Description: 

Artificial intelligence has advanced rapidly in the last five years. This talk intends to provide high-level answers to questions like:

  • What can the evolution of intelligence in the animal kingdom teach us about the evolution of AI?
  • How should people who are not AI researchers view the societal transformation that is now underway? What are some of the social, economic, and political implications of this technology as it exists now?
  • What will future AI systems likely be capable of, and what are the largest expected impacts of these systems?

The talk will be understandable for non-computer scientists.

Date and Time: 
Wednesday, May 30, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Towards theories of single-trial high dimensional neural data analysis

Topic: 
Towards theories of single-trial high dimensional neural data analysis
Abstract / Description: 

Neuroscience has entered a golden age in which experimental technologies now allow us to record thousands of neurons, over many trials during complex behaviors, yielding large-scale, high dimensional datasets. However, while we can record thousands of neurons, mammalian circuits controlling complex behaviors can contain tens of millions of behaviorally relevant neurons. Thus, despite significant experimental advances, neuroscience remains in a vastly undersampled measurement regime. Nevertheless, a wide array of statistical procedures for dimensionality reduction of multineuronal recordings uncover remarkably insightful, low dimensional neural state space dynamics whose geometry reveals how behavior and cognition emerge from neural circuits. What theoretical principles explain this remarkable success; in essence, how is it that we can understand anything about the brain while recording an infinitesimal fraction of its degrees of freedom?

We present a theory that addresses this question, and test it using neural data recorded from reaching monkeys. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover neural state space dynamics in such under-sampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase transition boundaries in our ability to successfully decode cognition and behavior on single trials as a function of the number of recorded neurons, the complexity of the task, and the smoothness of neural dynamics. We will also discuss non-negative tensor analysis methods to perform multi-timescale dimensionality reduction and demixing of neural dynamics that reveal how rapid neural dynamics within single trials mediate perception, cognition and action, and how slow changes in these dynamics mediate learning.
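
As a point of reference for the "random projection" picture described above (generic notation, not taken from the talk), the recorded activity can be modeled as

  y_t = A x_t, \qquad A \in \mathbb{R}^{M \times N}, \; M \ll N,

where x_t is the full N-dimensional circuit state and the M rows of A reflect which neurons happen to be sampled by the experiment.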

Date and Time: 
Wednesday, May 2, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: The End of Privacy

Topic: 
The End of Privacy
Abstract / Description: 

A growing proportion of human activities such as social interactions, entertainment, shopping, and gathering information are now mediated by digital devices and services. Such digitally mediated activities can be easily recorded, offering an unprecedented opportunity to study and measure intimate psycho-demographic traits using actual--rather than self-reported--behavior. Our research shows that digital records of behavior, such as samples of text, Tweets, Facebook Likes, web-browsing logs, or even facial images can be used to accurately measure a wide range of traits including personality, intelligence, and political views. Such Big Data assessment has a number of advantages: it does not require participants' active involvement; it can be easily and inexpensively applied to large populations; and it is relatively immune to cheating or misrepresentation. If used ethically, it could revolutionize psychological assessment, marketing, recruitment, insurance, and many other industries. In the wrong hands, however, such methods pose significant privacy risks. In this talk, we will discuss how to reap the benefits of Big Data assessment while avoiding the pitfalls.

Date and Time: 
Wednesday, April 11, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: The future of low power circuits and embedded intelligence: emerging devices and new design paradigms

Topic: 
The future of low power circuits and embedded intelligence: emerging devices and new design paradigms
Abstract / Description: 

The circuit and design division at CEA LETI focuses on innovative architectures and circuits dedicated to digital, imaging, wireless, sensing, power management, and embedded software applications. After a brief overview of adaptive circuits for low-power multi-processors and IoT architectures, the talk will detail new technology opportunities for greater flexibility. Digital and mixed-signal architectures using 3D technologies will be presented in the scope of the multi-processor activity, as well as imagers and neuro-inspired circuits. The integration of non-volatile memories will also be shown from the perspective of new computing architectures. Finally, embedded learning will be addressed to solve power challenges at the edge and in end devices, and some new design approaches will be discussed.

Date and Time: 
Wednesday, May 23, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: The Future of Wireless Communications (Hint: It's not a linear amplifier)

Topic: 
The Future of Wireless Communications (Hint: It's not a linear amplifier)
Abstract / Description: 

Wireless communications are ubiquitous in the 21st century--we use them to read the newspaper, talk to our colleagues or children, watch sporting events and other forms of entertainment, and monitor and control the environment we live in, to name just a few uses. This exponential growth in demand for wireless capacity has driven a new era of innovation in this space, because spectrum and energy are expensive and constrained resources.

The future of wireless communications will demand leaps in spectrum efficiency, bandwidth efficiency, and power efficiency for successful technology deployments. Key applications that will fundamentally change how we interact with wireless systems and the demands we place on wireless technologies include Dynamic Spectrum Access Networks, massive MIMO, and the elusive unicorn of the "universal handset". While each of these breakthrough "system" capabilities makes simultaneous demands of spectrum efficiency, bandwidth efficiency, and power efficiency, the current suite of legacy technologies forces system designers to make undesirable trade-offs because of the limitations of linear amplifier technology.

Eridan's solution is the antithesis of "linear". The Switch Mode Mixer Modulator (SM3) technology emphasizes precision and flexibility, and simultaneously delivers spectrum efficiency, bandwidth efficiency, and power efficiency. The resulting capabilities dramatically increase total wireless capacity with minimal need to expand operations into extended regions of the wireless spectrum.

This presentation will discuss the driving forces behind wireless system performance, the physics of linear amplifiers and SM3, measured performance of SM3 systems, and the implications for wireless system capabilities in the near future.

Date and Time: 
Wednesday, May 16, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Cryptocurrencies

Topic: 
Cryptocurrencies
Abstract / Description: 

I will give an introduction to blockchain technology, the current state of the industry and its challenges, and the Einsteinium Foundation, which is embarking on a truly ambitious path likely to change how cryptocurrency is viewed and used in everyday life. Scientific research is a long-term investment in our future, and the future of our planet. Funding for "big ideas" has fallen dramatically around the world in recent years. The defining characteristic of Einsteinium is its ongoing commitment to research and charitable missions. Einsteinium coin is a Bitcoin-like currency with a philanthropic objective of funding scientific, cutting-edge IT, and cryptocurrency projects.

Date and Time: 
Wednesday, May 9, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computer Accessibility

Topic: 
Exploring the implications of machine learning for people with cognitive disabilities
Abstract / Description: 

Advances in information technology have provided many benefits for people with disabilities, including wide availability of textual content via text to speech, flexible control of motor wheelchairs, captioned video, and much more. People with cognitive disabilities benefit from easier communication, and better tools for scheduling and reminders. Will advances in machine learning enhance this impact? Progress in natural language processing, autonomous vehicles, and emotion detection, all driven by machine learning, may deliver important benefits soon. Further out, can we look for systems that can help people with cognitive challenges understand our complex world more easily, work more effectively, stay safe, and interact more comfortably in social situations? What are the technical barriers to overcome in pursuing these goals, and what are the theoretical developments in machine learning that may overcome them?

Date and Time: 
Wednesday, April 18, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Information Theory of Deep Learning

Topic: 
Information Theory of Deep Learning
Abstract / Description: 

I will present a novel comprehensive theory of large scale learning with Deep Neural Networks, based on the correspondence between Deep Learning and the Information Bottleneck framework. The new theory has the following components:

  1. Rethinking learning theory: I will prove a new generalization bound, the input-compression bound, which shows that compression of the representation of the input variable is far more important for good generalization than the dimension of the network's hypothesis class, an ill-defined notion for deep learning.
  2. I will prove that for large-scale Deep Neural Networks, the mutual information between the last hidden layer and the input and output variables provides a complete characterization of the sample complexity and accuracy of the network. This makes the Information Bottleneck bound for the problem the optimal trade-off between sample complexity and accuracy achievable by ANY learning algorithm.
  3. I will show how Stochastic Gradient Descent, as used in Deep Learning, achieves this optimal bound. In that sense, Deep Learning is a method for solving the Information Bottleneck problem for large-scale supervised learning problems. The theory provides a new computational understanding of the benefit of the hidden layers, and gives concrete predictions for the structure of the layers of Deep Neural Networks and their design principles. These turn out to depend solely on the joint distribution of the input and output and on the sample size.

Based in part on joint work with Ravid Shwartz-Ziv and Noga Zaslavsky.
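
For reference, the Information Bottleneck trade-off invoked above is usually written as follows (standard notation, not taken from this abstract), where T is the learned representation (e.g., a hidden layer), X the input, Y the label, and beta sets the balance between compression and prediction:

  \min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)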

Date and Time: 
Wednesday, April 4, 2018 - 4:30pm
Venue: 
Gates B03

Ginzton Lab

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.


Events taking place around the world are listed at the IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

Ginzton Lab / AMO Seminar

Topic: 
2D/3D Photonic Integration Technologies for Arbitrary Optical Waveform Generation in Temporal, Spectral, and Spatial Domains
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab / AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 29, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
Silicon-Plus Photonics for Tomorrow's (Astronomically) Large-Scale Networks
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab / AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, February 22, 2016 - 4:15pm to 5:15pm
Venue: 
Spilker 232

Ginzton Lab / AMO Seminar

Topic: 
'Supermode-Polariton Condensation in a Multimode Cavity QED-BEC System' and 'Probing Ultrafast Electron Dynamics in Atoms and Molecules'
Abstract / Description: 

Beginning in the 2015-2016 academic year, please join us in Spilker room 232 every Monday afternoon from 4 pm for the AP 483 & Ginzton Lab / AMO Seminar Series.

Refreshments begin at 4 pm, seminar at 4:15 pm.

Date and Time: 
Monday, January 4, 2016 - 4:15pm to 5:30pm
Venue: 
Spilker 232

Ginzton Lab: Special Optics Seminar

Topic: 
A Carbon Nanotube Optical Rectenna
Abstract / Description: 

An optical rectenna – that is, a device that directly converts free-propagating electromagnetic waves at optical frequencies to d.c. electricity – was first proposed over 40 years ago, yet this concept has not been demonstrated experimentally due to fabrication challenges at the nanoscale. Realizing an optical rectenna requires that an antenna be coupled to a diode that operates on the order of 1 petahertz (switching speed on the order of a femtosecond). Ultralow capacitance, on the order of a few attofarads, enables a diode to operate at these frequencies; the development of metal-insulator-metal tunnel junctions with nanoscale dimensions has emerged as a potential path to diodes with ultralow capacitance, but these structures remain extremely difficult to fabricate and to couple reliably to a nanoscale antenna. Here we demonstrate an optical rectenna by engineering metal-insulator-metal tunnel diodes, with ultralow junction capacitance of approximately 2 attofarads, at the tips of multiwall carbon nanotubes, which act as the antenna and the metallic electron field emitter in the diode. This demonstration is achieved using very small diode areas based on the diameter of a single carbon nanotube (about 10 nanometers), geometric field enhancement at the carbon nanotube tips, and a low-work-function semitransparent top metal contact. Using vertically aligned arrays of the diodes, we measure d.c. open-circuit voltage and short-circuit current at visible and infrared electromagnetic frequencies that are due to a rectification process, and quantify minor contributions from thermal effects. In contrast to recent reports of photodetection based on hot electron decay in plasmonic nanoscale antennas, a coherent optical antenna field is rectified directly in our devices, consistent with rectenna theory. Our devices show evidence of photon-assisted tunneling that reduces diode resistance by two orders of magnitude under monochromatic illumination. Additionally, power rectification is observed under simulated solar illumination. Numerous current-voltage scans on different devices, at temperatures between 5 and 77 degrees Celsius, show no detectable change in diode performance, indicating a potential for robust operation.

Date and Time: 
Tuesday, October 20, 2015 - 2:00pm to 3:00pm
Venue: 
Spilker 232

Information Systems Lab (ISL) Colloquium

ISL Colloquium: A Differential View of Reliable Communications

Topic: 
A Differential View of Reliable Communications
Abstract / Description: 

This talk introduces a "differential" approach to information theory. In contrast to the more traditional "elemental" approach, in which we work to understand communication networks by studying the behavior of their elements in isolation, the differential approach works to understand the impact components can have on the larger networks in which they are employed. Results achieved through this differential viewpoint highlight some startling facts about network communications -- including both opportunities where even very small changes to a communication network can have a big impact on network performance and vulnerabilities where small failures can cause big harm.

Date and Time: 
Thursday, May 31, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Finite Sample Guarantees for Control of an Unknown Linear Dynamical System

Topic: 
Finite Sample Guarantees for Control of an Unknown Linear Dynamical System
Abstract / Description: 

In principle, control of a physical system is accomplished by first deriving a faithful model of the underlying dynamics from first principles, and then solving an optimal control problem with the modeled dynamics. In practice, the system may be too complex to precisely characterize, and an appealing alternative is to instead collect trajectories of the system and fit a model of the dynamics from the data. How many samples are needed for this to work? How sub-optimal is the resulting controller?

In this talk, I will shed light on these questions when the underlying dynamical system is linear and the control objective is quadratic, a classic optimal control problem known as the Linear Quadratic Regulator. Despite the simplicity of linear dynamical systems, deriving finite-time guarantees for both system identification and controller performance is non-trivial. I will first talk about our results in the "one-shot" setting, where measurements are collected offline, a model is estimated from the data, and a controller is synthesized using the estimated model with confidence bounds. Then, I will discuss our recent work on guarantees in the online regret setting, where noise injected into the system for learning the dynamics needs to trade-off with state regulation.
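
For context, the Linear Quadratic Regulator problem referred to above is commonly written as follows (standard textbook notation, not taken from the talk), the twist here being that the dynamics matrices (A, B) are unknown and must be estimated from trajectory data:

  x_{t+1} = A x_t + B u_t + w_t, \qquad \min_{u_0, u_1, \ldots} \; \lim_{T \to \infty} \frac{1}{T} \, \mathbb{E}\left[ \sum_{t=0}^{T-1} x_t^\top Q x_t + u_t^\top R u_t \right]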

This talk is based on joint work with Sarah Dean, Horia Mania, Nikolai Matni, and Benjamin Recht.

Date and Time: 
Thursday, May 24, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium & IT Forum: Random initialization and implicit regularization in nonconvex statistical estimation

Topic: 
Random initialization and implicit regularization in nonconvex statistical estimation
Abstract / Description: 

Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation / learning problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require suitable initialization and proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior theory is often either far from optimal or lacks theoretical guarantees altogether.

This talk is concerned with a striking phenomenon arising in two nonconvex problems (phase retrieval and matrix completion): even in the absence of careful initialization, proper saddle escaping, and/or explicit regularization, gradient descent converges to the optimal solution within a logarithmic number of iterations, thus achieving near-optimal statistical and computational guarantees at once. All of this is achieved by exploiting the statistical models in analyzing optimization algorithms, via a leave-one-out approach that enables the decoupling of certain statistical dependencies between the gradient descent iterates and the data. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal entrywise error control.
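
As one concrete instance of the vanilla setting discussed above, real-valued phase retrieval is often posed as minimizing the quartic least-squares loss below by plain gradient descent from a random start; the notation is generic and not taken from the talk:

  f(x) = \frac{1}{4m} \sum_{i=1}^{m} \left( (a_i^\top x)^2 - y_i \right)^2, \qquad x_{k+1} = x_k - \eta \, \nabla f(x_k), \qquad y_i = (a_i^\top x^\star)^2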

Date and Time: 
Wednesday, May 23, 2018 - 4:15pm
Venue: 
Building 370

ISL Colloquium: Low-dimensional Structures and Deep Models for High-dimensional Data

Topic: 
Low-dimensional Structures and Deep Models for High-dimensional Data
Abstract / Description: 

In this talk, we will discuss a class of models and techniques that can effectively model and extract rich low-dimensional structures in high-dimensional data such as images and videos, despite nonlinear transformation, gross corruption, or severely compressed measurements. This work leverages recent advancements in convex optimization from Compressive Sensing for recovering low-rank or sparse signals that provide both strong theoretical guarantees and efficient and scalable algorithms for solving such high-dimensional combinatorial problems. We illustrate how these new mathematical models and tools could bring disruptive changes to solutions to many challenging tasks in computer vision, image processing, and pattern recognition. We will also illustrate some emerging applications of these tools to other data types such as 3D range data, web documents, image tags, bioinformatics data, audio/music analysis, etc. Throughout the talk, we will discuss strong connections of algorithms from Compressive Sensing with other popular data-driven methods such as Deep Neural Networks, providing some new perspectives to understand Deep Learning.
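
One representative convex program of the kind described above, recovering a low-rank component L and a sparse corruption S from their sum M (often called Principal Component Pursuit), is, in generic notation:

  \min_{L, S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad L + S = M

where \|L\|_* is the nuclear norm (sum of singular values) promoting low rank and \|S\|_1 promotes sparsity.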

This is joint work with John Wright of Columbia, Emmanuel Candes of Stanford, Zhouchen Lin of Peking University, Shenghua Gao of ShanghaiTech, and my former students Zhengdong Zhang of MIT, Xiao Liang of Tsinghua University, Arvind Ganesh, Zihan Zhou, Kerui Min of UIUC.

Date and Time: 
Thursday, May 3, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Reinforcement Learning: Hidden Theory, and New Super-Fast Algorithms

Topic: 
Reinforcement Learning: Hidden Theory, and New Super-Fast Algorithms
Abstract / Description: 

Stochastic Approximation algorithms are used to approximate solutions to fixed point equations that involve expectations of functions with respect to possibly unknown distributions. The most famous examples today are TD- and Q-learning algorithms. The first half of this lecture will provide an overview of stochastic approximation, with a focus on optimizing the rate of convergence. A new approach to optimize the rate of convergence leads to the new Zap Q-learning algorithm. Analysis suggests that its transient behavior is a close match to a deterministic Newton-Raphson implementation, and numerical experiments confirm super fast convergence.
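
For readers less familiar with the stochastic approximation view mentioned above, the classical Q-learning update (standard notation, not the speaker's) is the iteration

  Q_{n+1}(s_n, a_n) = Q_n(s_n, a_n) + \alpha_n \left[ r_n + \gamma \max_{a'} Q_n(s_{n+1}, a') - Q_n(s_n, a_n) \right]

with scalar step sizes \alpha_n; as the abstract notes, the Zap variant is instead designed so that its transient behavior closely matches a deterministic Newton-Raphson recursion, which is what yields the faster convergence.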

Date and Time: 
Tuesday, May 1, 2018 - 4:30pm
Venue: 
Packard 101

ISL Seminar: Inventing Algorithms via Deep Learning

Topic: 
Inventing Algorithms via Deep Learning
Abstract / Description: 

Deep learning is a part of daily life, owing to its successes in computer vision and natural language processing. In these applications, the success of the model-free deep learning approach can be attributed to the lack of a (mathematical) generative model. In yet other applications, the data is generated by a simple model, the performance criterion is mathematically precise, and training/test samples are infinitely abundant, but the space of algorithmic choices is enormous (example: chess). Deep learning has recently shown strong promise in these problems too (example: AlphaZero). In this talk, we study two canonical problems of great scientific and engineering interest through the lens of deep learning.

The first is reliable communication over noisy media, where we successfully revisit classical open problems in information theory; we show that creatively trained and architected neural networks can beat the state of the art on the AWGN channel with noisy feedback, with a 100-fold improvement in bit error rate.

The second is optimization and classification problems on graphs, where the key algorithmic challenge is scalable performance to arbitrary sized graphs. Representing graphs as randomized nonlinear dynamical systems via recurrent neural networks, we show that creative adversarial training allows one to train on small size graphs and test on much larger sized graphs (100~1000x) with approximation ratios that rival state of the art on a variety of optimization problems across the complexity theoretic hardness spectrum.

Apart from the obvious practical value, this study of mathematically precise problems sheds light on the mysteries of deep learning methods: training example choices, architectural design decisions, and loss function/learning methodologies. Our (mostly) empirical research is conducted against the backdrop of a theoretical research program of understanding gated neural networks (e.g., attention networks, GRU, LSTM); we show the first provably (globally) consistent algorithms to recover the parameters of a classical gated neural network architecture: the mixture of experts (MoE).

Date and Time: 
Thursday, April 26, 2018 - 4:15pm
Venue: 
Packard 101

ISL Colloquium: Reinforcement Learning without Reinforcement

Topic: 
Reinforcement Learning without Reinforcement
Abstract / Description: 

Reinforcement Learning (RL) is concerned with solving sequential decision-making problems in the presence of uncertainty. RL is really about two problems together. The first is the 'Bellman problem': finding the optimal policy given the model, which may involve large state spaces. Various approximate dynamic programming and RL schemes have been developed, but they either come with no guarantees, are not universal, or are rather slow. In fact, most RL algorithms have become synonymous with stochastic approximation (SA) schemes that are known to be rather slow. This is an even more difficult problem for MDPs with continuous state (and action) spaces. We present a class of non-SA algorithms for reinforcement learning in continuous state space MDP problems based on 'empirical' ideas, which are simple, effective and yet universal, with probabilistic guarantees. The idea involves randomized kernel-based function fitting combined with 'empirical' updates. The key is the first known "probabilistic contraction analysis" method we have developed for the analysis of fairly general stochastic iterative algorithms, wherein we show convergence to a probabilistic fixed point of a sequence of random operators via a stochastic dominance argument.
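
For reference, the 'Bellman problem' above amounts to solving the Bellman optimality equation (standard notation, not the speaker's); roughly speaking, the 'empirical' updates in the talk replace the expectation over next states with sample averages:

  V^*(s) = \max_{a} \left[ r(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^*(s') \right]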

The second RL problem is the 'online learning (or the Lai-Robbins) problem', which arises when the model itself is unknown. We propose a simple posterior-sampling-based regret-minimization reinforcement learning algorithm for MDPs. It achieves O(sqrt(T)) regret, which is order-optimal. It not only optimally manages the "exploration versus exploitation" trade-off but also obviates the need for expensive computation for exploration. The algorithm differs from classical adaptive control in its focus on non-asymptotic regret optimality as opposed to asymptotic stability. This seems to resolve a long-standing open problem in Reinforcement Learning.

Date and Time: 
Tuesday, April 24, 2018 - 4:00pm
Venue: 
Packard 101

ISL Colloquium: Recent Developments in Compressed Sensing

Topic: 
Recent Developments in Compressed Sensing
Abstract / Description: 

Compressed sensing refers to the reconstruction of high-dimensional but low-complexity objects from a limited number of measurements. Examples include the recovery of high-dimensional but sparse vectors, and the recovery of high-dimensional but low-rank matrices, which includes the so-called partial realization problem in linear control theory. Much of the work to date focuses on probabilistic methods, which are CPU-intensive and have high computational complexity. In contrast, deterministic methods are far faster in execution and more efficient in terms of storage. Moreover, deterministic methods draw from many branches of mathematics, including graph theory and algebraic coding theory. In this talk a brief overview will be given of such recent developments.
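
As a concrete example of the recovery problems described above, the sparse-vector case is commonly posed as basis pursuit (generic notation, not tied to the talk); the low-rank matrix case analogously replaces the l1 norm with the nuclear norm:

  \min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad A x = y, \qquad A \in \mathbb{R}^{m \times n}, \; m \ll n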

Date and Time: 
Thursday, April 19, 2018 - 4:15pm
Venue: 
Packard 101

IT Forum: From Gaussian Multiterminal Source Coding to Distributed Karhunen–Loève Transform

Topic: 
From Gaussian Multiterminal Source Coding to Distributed Karhunen–Loève Transform
Abstract / Description: 

Characterizing the rate-distortion region of Gaussian multiterminal source coding is a longstanding open problem in network information theory. In this talk, I will show how to obtain new conclusive results for this problem using nonlinear analysis and convex relaxation techniques. A byproduct of this line of research is an efficient algorithm for determining the optimal distributed Karhunen–Loève transform in the high-resolution regime, which partially settles a question posed by Gastpar, Dragotti, and Vetterli. I will also introduce a generalized version of the Gaussian multiterminal source coding problem where the source-encoder connections can be arbitrary. It will be demonstrated that probabilistic graphical models offer an ideal mathematical language for describing how the performance limit of a generalized Gaussian multiterminal source coding system depends on its topology, and more generally they can serve as the long-sought platform for systematically integrating the existing achievability schemes and converse arguments. The architectural implication of our work for low-latency lossy source coding will also be discussed.

This talk is based on joint work with Jia Wang, Farrokh Etezadi, and Ashish Khisti.
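
For readers who have not met the Karhunen–Loève transform, a minimal centralized (non-distributed) sketch: the KLT is the eigenbasis of the source covariance and decorrelates the components. The distributed version studied in the talk constrains each encoder to observe only part of the source; that setting is not reproduced here, and all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 5000
idx = np.arange(d)
C = 0.7 ** np.abs(idx[:, None] - idx[None, :])   # AR(1)-style correlated Gaussian source
X = rng.multivariate_normal(np.zeros(d), C, size=T)

# Sample KLT: eigenvectors of the empirical covariance decorrelate the source.
cov = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
Y = X @ eigvec                                   # transform coefficients
print(np.round(np.cov(Y, rowvar=False), 2))      # approximately diagonal
```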


The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, April 13, 2018 - 1:15pm
Venue: 
Packard 202

ISL Special Seminar: Low- and high-dimensional computations in neural circuits

Topic: 
Low- and high-dimensional computations in neural circuits
Abstract / Description: 

Computation in the brain is distributed across large populations. Individual neurons are noisy and receive limited information but, by acting collectively, neural populations perform a wide variety of complex computations. In this talk I will discuss two approaches to understanding these collective computations. First, I will introduce a method to identify and decode unknown variables encoded in the activity of neural populations. While the number of neurons in a population may be large, if the population encodes a low-dimensional variable there will be low-dimensional structure in the collective activity, and the method aims to find and parameterize this low-dimensional structure. In the rodent head direction (HD) system, the method reveals a nonlinear ring manifold and allows encoded head direction and the tuning curves of single cells to be recovered with high accuracy and without prior knowledge of what neurons were encoding. When applied to sleep, it provides mechanistic insight into the circuit construction of the ring manifold and, during nREM sleep, reveals a new dynamical regime possibly linked to memory consolidation in the brain. I will then address the problem of understanding genuinely high-dimensional computations in the brain, where low-dimensional structure does not exist. Modern work studying distributed algorithms on large sparse networks may provide a compelling approach to neural computation, and I will use insights from recent work on error correction to construct a novel architecture for high-capacity neural memory. Unlike previous models, which yield either weak (linear) increases in capacity with network size or exhibit poor robustness to noise, this network is able to store a number of states exponential in network size while preserving noise robustness, thus resolving a long-standing theoretical question.
These results demonstrate new approaches for studying neural representations and computation across a variety of scales, both when low-dimensional structure is present and when computations are high-dimensional.
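
A toy illustration (not the speaker's method, which discovers the manifold without knowing what is encoded) of how a low-dimensional ring can be pulled out of synthetic head-direction population activity with plain PCA; tuning curves, noise model, and sizes are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, T = 60, 2000
pref = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)     # preferred directions
theta = np.cumsum(rng.normal(0, 0.05, T)) % (2 * np.pi)       # random-walk head direction
rates = np.exp(4.0 * np.cos(theta[:, None] - pref[None, :]))  # von Mises tuning
spikes = rng.poisson(rates * 0.05)                            # noisy population activity

# Project the population activity onto its top two principal components.
X = spikes - spikes.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc = X @ Vt[:2].T                                             # T x 2, traces out a ring

decoded = np.arctan2(pc[:, 1], pc[:, 0])                      # angle on the recovered ring
# 'decoded' tracks theta up to a fixed rotation/reflection of the ring.
```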

Date and Time: 
Tuesday, March 6, 2018 - 10:00am
Venue: 
Clark S360


IT-Forum

IT-Forum: Hardware-limited task-based quantization

Topic: 
Hardware-limited task-based quantization
Abstract / Description: 

Quantization plays a critical role in digital signal processing systems. Quantizers are typically designed to obtain an accurate digital representation of the input signal, operating independently of the system task, and are commonly implemented using serial scalar analog-to-digital converters (ADCs). This talk is concerned with hardware-limited task-based quantization, where a system utilizing a serial scalar ADC is designed to provide a suitable representation in order to allow the recovery of a parameter vector underlying the input signal. We propose hardware-limited task-based quantization systems for a fixed and finite quantization resolution, and characterize their achievable distortion. Our results illustrate the benefits of properly taking into account the underlying task in the design of the quantization scheme.
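
To make the 'task-based' idea concrete, a hedged toy comparison under a fixed total bit budget: quantizing all raw observations versus first reducing to the task-relevant combinations with an (assumed ideal) analog combiner. The combiner choice (a pseudo-inverse) and all sizes are illustrative and are not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, total_bits = 2, 8, 24             # parameters, observations, overall ADC budget

# Linear observation model: x = H s + w; the task is to recover s, not x itself.
H = rng.normal(size=(n, k))
s = rng.normal(size=k)
x = H @ s + 0.05 * rng.normal(size=n)

def uniform_adc(v, bits, v_max=4.0):
    """Serial scalar mid-rise uniform quantizer with 2**bits levels on [-v_max, v_max]."""
    levels = 2 ** bits
    delta = 2 * v_max / levels
    idx = np.clip(np.floor(v / delta), -levels // 2, levels // 2 - 1)
    return (idx + 0.5) * delta

# Task-ignorant pipeline: spend the budget quantizing all n raw observations,
# then estimate s digitally.
x_q = uniform_adc(x, total_bits // n)
s_ignorant = np.linalg.lstsq(H, x_q, rcond=None)[0]

# Task-aware pipeline (illustrative): an analog combiner reduces the signal to the
# k task-relevant combinations first, so each scalar ADC gets far more bits.
A = np.linalg.pinv(H)                   # k x n analog pre-processing
s_aware = uniform_adc(A @ x, total_bits // k)

print("task-ignorant error:", np.linalg.norm(s_ignorant - s))
print("task-aware   error:", np.linalg.norm(s_aware - s))
```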

Date and Time: 
Wednesday, June 13, 2018 - 1:15pm
Venue: 
Packard 202

IT Forum: Phase transitions in generalized linear models

Topic: 
Phase transitions in generalized linear models
Abstract / Description: 

This is joint work with Jean Barbier, Florent Krzakala, Nicolas Macris and Lenka Zdeborova.

We consider generalized linear models (GLMs) where an unknown $n$-dimensional signal vector is observed through the application of a random matrix and a non-linear (possibly probabilistic) componentwise output function.

We study these models in the high-dimensional limit, where the observations consist of $m$ points and $m/n \to \alpha > 0$ as $n \to \infty$. This situation is ubiquitous in applications ranging from supervised machine learning to signal processing.

We will analyze the model case in which the observation matrix has i.i.d. elements and the components of the ground-truth signal are drawn independently from some known distribution.

We will compute the limit of the mutual information between the signal and the observations in the large system limit. This quantity is particularly interesting because it is related to the free energy (i.e. the logarithm of the partition function) of the posterior distribution of the signal given the observations. Therefore, the study of the asymptotic mutual information allows us to deduce the limit of important quantities such as the minimum mean squared error for the estimation of the signal.

We will observe some phase transition phenomena. Depending on the noise level, the distribution of the signal, and the non-linear function of the GLM, we may encounter scenarios in which recovering the signal is impossible, hard (possible only with exponential-time algorithms), or easy (possible with polynomial-time algorithms).
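
The observation model described above is easy to write down; a minimal sketch of one instance (Rademacher signal, noisy-sign output channel), with all choices illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 1000, 2.0
m = int(alpha * n)

# Ground-truth signal with i.i.d. components from a known prior (here Rademacher).
x = rng.choice([-1.0, 1.0], size=n)

# Random i.i.d. observation matrix, scaled so row projections have unit variance.
A = rng.normal(size=(m, n)) / np.sqrt(n)

# Componentwise (probabilistic) output channel; here a noisy sign output.
noise = 0.1 * rng.normal(size=m)
y = np.sign(A @ x + noise)
```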

Date and Time: 
Friday, May 18, 2018 - 1:15pm
Venue: 
Packard 202

ISL Colloquium & IT Forum: Random initialization and implicit regularization in nonconvex statistical estimation

Topic: 
Random initialization and implicit regularization in nonconvex statistical estimation
Abstract / Description: 

Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation / learning problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require suitable initialization and proper regularization (e.g. trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descent, however, prior theory is often far from optimal or lacks theoretical guarantees altogether.

This talk is concerned with a striking phenomenon arising in two nonconvex problems (i.e. phase retrieval and matrix completion): even in the absence of careful initialization, proper saddle escaping, and/or explicit regularization, gradient descent converges to the optimal solution within a logarithmic number of iterations, thus achieving near-optimal statistical and computational guarantees at once. All of this is achieved by exploiting the statistical models in analyzing optimization algorithms, via a leave-one-out approach that enables the decoupling of certain statistical dependencies between the gradient descent iterates and the data. As a byproduct, for noisy matrix completion, we demonstrate that gradient descent achieves near-optimal entrywise error control.
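
A toy, real-valued numerical illustration of the phenomenon (randomly initialized, unregularized gradient descent for phase retrieval); the step size and problem sizes are heuristic choices, not those analyzed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400
x_star = rng.normal(size=n)
A = rng.normal(size=(m, n))
y = (A @ x_star) ** 2                  # phaseless (intensity-only) measurements

# Vanilla gradient descent on the nonconvex least-squares loss,
# started from a RANDOM point: no spectral initialization, no regularization.
x = rng.normal(size=n)
step = 0.1 / np.mean(y)                # heuristic step size
for _ in range(5000):
    r = (A @ x) ** 2 - y
    grad = (2.0 / m) * A.T @ (r * (A @ x))
    x -= step * grad

# success is measured up to the unavoidable global sign ambiguity
rel_err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star)) / np.linalg.norm(x_star)
print("relative error:", rel_err)
```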

Date and Time: 
Wednesday, May 23, 2018 - 4:15pm
Venue: 
Building 370

IT-Forum: Tight sample complexity bounds via dualizing LeCam's method

Topic: 
Tight sample complexity bounds via dualizing LeCam's method
Abstract / Description: 

In this talk we consider the general question of estimating a linear functional of a distribution from noisy samples. We discover that the (two-point) LeCam lower bound is in fact achievable by optimizing the bias-variance tradeoff of an empirical-mean-type estimator. We extend the method to certain symmetric functionals of high-dimensional parametric models.

Next, we apply this general framework to two problems: population recovery and predicting the number of unseen species. In population recovery, the goal is to estimate an unknown high-dimensional distribution (in $L_\infty$-distance) from noisy samples. In the case of \textit{erasure} noise, i.e. when each coordinate is erased with probability $\epsilon$, we discover a curious phase transition in sample complexity at $\epsilon=1/2$. In the second (classical) problem, we observe $n$ iid samples from an unknown distribution on a countable alphabet and the goal is to predict the number of new species that will be observed in the next (unseen) $tn$ samples. Again, we discover a phase transition at $t=1$. In both cases, the complete characterization of sample complexity relies on complex-analytic methods, such as Hadamard's three-lines theorem.

Joint work with Yihong Wu (Yale).
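
As context for the unseen-species problem, a sketch using the classical Good–Toulmin estimator (not the estimator developed in the talk); it is well behaved only for t <= 1, consistent with the phase transition at t = 1 mentioned above. The distribution and sample sizes are illustrative:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
# A heavy-tailed "species" distribution over a large alphabet.
K = 2000
p = 1.0 / np.arange(1, K + 1)
p /= p.sum()

n, t = 5000, 0.5
sample = rng.choice(K, size=n, p=p)
future = rng.choice(K, size=int(t * n), p=p)

# Prevalences: phi[i] = number of species seen exactly i times in the first sample.
counts = Counter(sample)
phi = Counter(counts.values())

# Classical Good-Toulmin estimate of the number of NEW species in the next t*n draws:
#   U_hat = -sum_{i>=1} (-t)^i * phi_i
U_hat = -sum((-t) ** i * phi_i for i, phi_i in phi.items())

truth = len(set(future) - set(sample))
print("predicted new species:", round(U_hat), " actual:", truth)
```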

Date and Time: 
Friday, May 4, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum & ISL presents Robust sequential change-point detection

Topic: 
Robust sequential change-point detection
Abstract / Description: 

Sequential change-point detection is a fundamental problem in statistics and signal processing, with broad applications in security, network monitoring, imaging, and genetics. Given a sequence of data, the goal is to detect any change in the underlying distribution as quickly as possible from the streaming data. Various algorithms have been developed, including the commonly used CUSUM procedure. However, there is still a gap when applying change-point detection methods to real problems, notably due to a lack of robustness. Classic approaches usually require exact specification of the pre- and post-change distributions, which may be quite restrictive and does not perform well with real data. On the other hand, Huber's classic robust statistics, built on least favorable distributions, are not directly applicable since they are computationally intractable in the multi-dimensional setting. In this seminar, I will present several of our recent works on developing computationally efficient and robust change-point detection algorithms with certain near-optimality properties, by building a connection between statistical sequential analysis and (online) convex optimization.
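
For reference, the CUSUM procedure mentioned above, in its textbook Gaussian mean-shift form; note that it needs the exact pre- and post-change distributions, which is precisely the robustness issue the talk addresses. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Pre-change: N(0,1); post-change (from time 500): N(0.5, 1).
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(0.5, 1.0, 300)])

# CUSUM with known pre/post-change densities: accumulate log-likelihood ratios,
# reset at zero, and raise an alarm when the statistic crosses a threshold b.
mu0, mu1, b = 0.0, 0.5, 5.0
llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0)     # Gaussian log-likelihood ratio (unit variance)
W, alarm = 0.0, None
for t, l in enumerate(llr):
    W = max(0.0, W + l)
    if W > b:
        alarm = t
        break
print("change at 500, alarm raised at", alarm)
```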

 


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, March 16, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: Restricted Isometry Property of Random Projection for Low-Dimensional Subspaces

Topic: 
Restricted Isometry Property of Random Projection for Low-Dimensional Subspaces
Abstract / Description: 

Dimensionality reduction is in demand for reducing the complexity of solving large-scale problems whose data lie in latent low-dimensional structures, in machine learning and computer vision. Motivated by this need, in this talk I will introduce the Restricted Isometry Property (RIP) of Gaussian random projections for low-dimensional subspaces in R^N, and prove that the projection Frobenius norm distance between any two subspaces spanned by the projected data in R^n, for n smaller than N, remains almost the same as the distance between the original subspaces, with probability no less than 1 - e^{-O(n)}.

Previously, the well-known Johnson-Lindenstrauss (JL) Lemma and the RIP for sparse vectors have been the foundation of sparse signal processing, including Compressed Sensing. As an analogue of the JL Lemma and the RIP for sparse vectors, this work allows the use of random projections to reduce the ambient dimension with the theoretical guarantee that the distance between subspaces is well preserved after compression.

As a direct result of our theory, when solving the subspace clustering (SC) problem at a large scale, one may run an SC algorithm on randomly compressed samples to alleviate the high computational burden and still have a theoretical performance guarantee. Because the distance between subspaces remains almost unchanged after projection, the clustering error rate of any SC algorithm may stay as small as that obtained in the original space. Since our theory is independent of the particular SC algorithm, it may also benefit future studies on other subspace-related topics.
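
A quick numerical sanity check of the statement, with illustrative dimensions: the projection Frobenius-norm distance between two subspaces is approximately preserved by a Gaussian random projection.

```python
import numpy as np

def proj(U):
    """Orthogonal projector onto the column span of U."""
    Q, _ = np.linalg.qr(U)
    return Q @ Q.T

def subspace_dist(U, V):
    """Projection Frobenius-norm distance between the spans of U and V."""
    return np.linalg.norm(proj(U) - proj(V), 'fro') / np.sqrt(2)

rng = np.random.default_rng(0)
N, n, d = 2000, 80, 5                       # ambient dim, projected dim, subspace dim
U = rng.normal(size=(N, d))
V = U + 0.3 * rng.normal(size=(N, d))       # a second, correlated subspace

Phi = rng.normal(size=(n, N)) / np.sqrt(n)  # Gaussian random projection
d_before = subspace_dist(U, V)
d_after = subspace_dist(Phi @ U, Phi @ V)
print(f"distance before: {d_before:.3f}, after projection: {d_after:.3f}")
```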


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 23, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: BATS: Network Coding in Action

Topic: 
BATS: Network Coding in Action
Abstract / Description: 

Multi-hop wireless networks can be found in many application scenarios, including IoT, fog computing, satellite communication, underwater communication, etc. The main challenge in such networks is the accumulation of packet loss in the wireless links. With existing technologies, the throughput decreases exponentially fast with the number of hops.

In this talk, we introduce BATched Sparse code (BATS code) as a solution to this challenge. BATS code is a rateless implementation of network coding. The advantages of BATS codes include low encoding/decoding complexities, high throughput, low latency, and low storage requirement. This makes BATS codes ideal for implementation on IoT devices that have limited computing power and storage. At the end of the talk, we will show a video demonstration of BATS code over a Wi-Fi network with 10 IoT devices acting as relay nodes.
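
As a rough aside, not part of the talk: the inner ingredient of batch-based network coding is random linear coding of a batch over GF(2), where a batch is decodable once the received coefficient vectors have full rank. The toy sketch below shows only that single-batch ingredient; real BATS codes add a fountain-style outer code over batches and recoding at relays, none of which is reproduced here. All sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, plen = 8, 12, 64                 # batch size, coded packets sent, packet length (bits)

batch = rng.integers(0, 2, size=(M, plen))   # M source packets as bit vectors
G = rng.integers(0, 2, size=(N, M))          # random GF(2) coefficient vectors
coded = (G @ batch) % 2                      # each coded packet is an XOR of a random subset

# Simulate link loss: only some coded packets arrive.
arrived = rng.random(N) < 0.7
G_rx, coded_rx = G[arrived], coded[arrived]

def gf2_rank(A):
    """Rank over GF(2) via Gaussian elimination."""
    A = A.copy() % 2
    r = 0
    for c in range(A.shape[1]):
        pivot = np.nonzero(A[r:, c])[0]
        if pivot.size == 0:
            continue
        p = pivot[0] + r
        A[[r, p]] = A[[p, r]]
        A[(A[:, c] == 1) & (np.arange(len(A)) != r)] ^= A[r]
        r += 1
        if r == A.shape[0]:
            break
    return r

# The batch is decodable iff the received coefficient matrix has full rank M.
print("received:", int(arrived.sum()), "decodable:", gf2_rank(G_rx) == M)
```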


 The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 9, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: Deterministic Random Matrices

Topic: 
Deterministic Random Matrices
Abstract / Description: 

Random matrices have become a very active area of research in recent years and have found enormous applications in modern mathematics, physics, engineering, biological modeling, and other fields. In this work, we focus on symmetric sign (+/-1) matrices (SSMs), which were originally used by Wigner in the mid-1950s to model the nuclei of heavy atoms. Assuming the entries of the upper triangular part to be independent +/-1 with equal probabilities, Wigner showed in his pioneering works that as the sizes of the matrices grow, their empirical spectra converge to a non-random measure with a semicircular shape. Later, this fundamental result was improved and substantially extended to more general families of matrices and finer spectral properties. In many physical phenomena, however, the entries of matrices exhibit significant correlations. At the same time, almost all available analytical tools rely heavily on the independence condition, making the study of matrices with structure (dependencies) very challenging. The few existing works in this direction consider very specific setups and are limited by particular techniques, lacking a unified framework and tight information-theoretic bounds that would quantify the exact amount of structure that matrices may possess without affecting the limiting semicircular form of their spectra.

From a different perspective, in many applications one needs to simulate random objects. Generation of large random matrices requires very powerful sources of randomness due to the independence condition, the experiments are impossible to reproduce, and atypical or non-random-looking outcomes may appear with positive probability. Reliable deterministic construction of SSMs with random-looking spectra and low algorithmic and computational complexity is of particular interest due to the natural correspondence between SSMs and undirected graphs, since the latter are extensively used in combinatorial and CS applications, e.g. for the purposes of derandomization. Unfortunately, most of the existing constructions of pseudo-random graphs focus on the extreme eigenvalues and do not provide guarantees on the whole spectrum. In this work, using binary Golomb sequences, we propose a simple, completely deterministic construction of circulant SSMs with spectra converging to the semicircular law at the same rate as in the original Wigner ensemble. We show that this construction has close to the lowest possible algorithmic complexity and is very explicit. Essentially, the algorithm requires at most 2 log(n) bits, implying that the real amount of randomness conveyed by the semicircular property is quite small.
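
As a numerical baseline for the first paragraph above, a quick check of Wigner's semicircle law for the random symmetric sign ensemble; the deterministic Golomb-based construction from the talk is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Classical Wigner ensemble of symmetric sign matrices: i.i.d. +/-1 above the diagonal.
A = rng.choice([-1.0, 1.0], size=(n, n))
A = np.triu(A, 1)
A = A + A.T                                # symmetric, zero diagonal
eigs = np.linalg.eigvalsh(A) / np.sqrt(n)  # scale so the support converges to [-2, 2]

# Compare the empirical spectrum with the semicircle density on [-2, 2].
hist, edges = np.histogram(eigs, bins=40, range=(-2.2, 2.2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4 - centers ** 2, 0, None)) / (2 * np.pi)
print("max deviation from the semicircle density:", np.abs(hist - semicircle).max())
```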


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, February 2, 2018 - 1:15pm
Venue: 
Packard 202

IT-Forum: Recent Advances in Algorithmic High-Dimensional Robust Statistics

Topic: 
Recent Advances in Algorithmic High-Dimensional Robust Statistics
Abstract / Description: 

Fitting a model to a collection of observations is one of the quintessential problems in machine learning. Since any model is only approximately valid, an estimator that is useful in practice must also be robust in the presence of model misspecification. It turns out that there is a striking tension between robustness and computational efficiency. Even for the most basic high-dimensional tasks, such as robustly computing the mean and covariance, until recently the only known estimators were either hard to compute or could only tolerate a negligible fraction of errors.

In this talk, I will survey the recent progress in algorithmic high-dimensional robust statistics. I will describe the first robust and efficiently computable estimators for several fundamental statistical tasks that were previously thought to be computationally intractable. These include robust estimation of mean and covariance in high dimensions, and robust learning of various latent variable models. The new robust estimators are scalable in practice and yield a number of applications in exploratory data analysis.
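
To illustrate the basic tension, a crude sketch comparing the empirical mean with a simple spectral-filtering heuristic under adversarial corruption; this is only loosely inspired by the filtering algorithms surveyed in the talk, and all thresholds below are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 50, 2000, 0.1
true_mean = np.zeros(d)

# Inliers from N(0, I); an eps-fraction of adversarial outliers placed far away
# along one direction, which badly corrupts the empirical mean.
X = rng.normal(size=(n, d))
n_bad = int(eps * n)
X[:n_bad] = 5.0 + 0.1 * rng.normal(size=(n_bad, d))

def filtered_mean(X, rounds=10, frac=0.02):
    """Crude spectral filter: repeatedly project onto the top eigenvector of the
    empirical covariance and discard the points with the largest scores."""
    X = X.copy()
    for _ in range(rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        w, V = np.linalg.eigh(cov)
        if w[-1] < 1.5:                  # covariance looks consistent with N(0, I)
            break
        scores = np.abs((X - mu) @ V[:, -1])
        keep = scores <= np.quantile(scores, 1 - frac)
        X = X[keep]
    return X.mean(axis=0)

print("empirical mean error:", np.linalg.norm(X.mean(axis=0) - true_mean))
print("filtered mean error: ", np.linalg.norm(filtered_mean(X) - true_mean))
```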


 

The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum which focuses on mathematical aspects of information processing. With a primary emphasis on information theory, we also welcome researchers from signal processing, learning and statistical inference, control and optimization to deliver talks at our forum. We also warmly welcome industrial affiliates in the above fields. The forum is typically held in Packard 202 every Friday at 1:15 pm during the academic year.

The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact any of the students.

Date and Time: 
Friday, January 26, 2018 - 1:15pm
Venue: 
Packard 202


Optics and Electronics Seminar

OSA/SPIE Seminar: Computational Optics for Multidimensional Nanoscale Imaging of Single Fluorescent Molecules

Topic: 
Computational Optics for Multidimensional Nanoscale Imaging of Single Fluorescent Molecules
Abstract / Description: 

Visualizing the dynamic movements and interactions between biomolecules remains a challenge, motivating the development of new optical technology and computational algorithms for imaging at the nanoscale. We have built two technologies for multidimensional imaging of single molecules (SMs): the Tri-spot point spread function (PSF) and the Robust Statistical Estimation (RoSE) algorithm. The Tri-spot PSF measures each second moment of SM orientation with near-uniform sensitivity, thereby capturing the orientation and rotational diffusion of SMs using just one camera frame. For 3D imaging, we developed RoSE to minimize the vectorial localization errors in super-resolution microscopy that result from both the structure of the sample and the PSF itself. By estimating the likelihood of a blinking event to be present in each imaging frame, RoSE localizes molecules accurately and minimizes false localizations even when images overlap.

Date and Time: 
Thursday, June 7, 2018 - 4:15pm
Venue: 
Spilker 232

OSA/SPIE Seminar: Noninvasive diffuse optical imaging of breast cancer risk and treatment response

Topic: 
Noninvasive diffuse optical imaging of breast cancer risk and treatment response
Abstract / Description: 

Diffuse optical spectroscopy and imaging (DOSI) is a class of non-invasive near-infrared imaging techniques based upon measuring the wavelength-dependent absorption and (reduced) scattering optical properties of living tissues. In the far-red to near-infrared optical therapeutic window, these optical properties provide information about deep (several cm) tissue composition, structure, and oxygen metabolism. In particular, DOSI is capable of quantifying tissue concentrations of the physiologically relevant molecules oxyhemoglobin, deoxygenated hemoglobin, lipid, and water, as well as structural parameters including cellular size and density (obtained from scattering spectra). The significance and applicability of these and other DOSI biomarkers collected with research devices have been demonstrated in numerous clinical studies of oncology, cardiovascular assessment, exercise physiology, and neuroscience.

In this presentation, I will discuss how DOSI has shown promise in the field of breast oncology for risk assessment, screening, differential diagnosis of benign and malignant lesions, and predicting and monitoring response to chemotherapy treatment. DOSI biomarkers vary significantly in abundance and molecular state between breast cancer and normal tissue and unique cancer-specific absorption signatures have been observed. Finally, I will demonstrate how we are working to translate this promising technology to clinical practice and my vision for the future.

Date and Time: 
Thursday, April 19, 2018 - 4:15pm
Venue: 
Spilker 232

Optics & Electronics Seminar: The Physics and Applications of high Q optical microcavities: Cavity Quantum Optomechanics

Topic: 
The Physics and Applications of high Q optical microcavities: Cavity Quantum Optomechanics
Abstract / Description: 

TBA

Date and Time: 
Monday, May 14, 2018 - 4:15pm
Venue: 
Spilker 232

Light-field-driven currents in graphene

Topic: 
Light-field-driven currents in graphene
Abstract / Description: 

The ability to steer electrons using the strong electromagnetic field of light has opened up the possibility of controlling electron dynamics on the sub-femtosecond timescale. In dielectrics and semiconductors, various light-field-driven effects have been explored, including high-harmonic generation and sub-optical-cycle interband population transfer. In contrast, much less is known about light-field-driven electron dynamics in narrow-bandgap systems or in conductors, in which screening due to free carriers or light absorption hinders the application of strong optical fields.

Graphene is a promising platform with which to achieve light-field-driven control of electrons in a conducting material because of its broadband and ultrafast optical response, weak screening and high damage threshold. We have recently shown that a current induced in monolayer graphene by two-cycle laser pulses is sensitive to the electric-field waveform, that is, to the exact shape of the optical carrier field of the pulse, which is controlled by the carrier-envelope phase, with a precision on the attosecond timescale. Such a current, dependent on the carrier-envelope phase, shows a striking reversal of the direction of the current as a function of the driving field amplitude at about two volts per nanometre. This reversal indicates a transition of light–matter interaction from the weak-field (photon-driven) regime to the strong-field (light-field-driven) regime, where the intraband dynamics influence interband transitions.

We show that in this strong-field regime the electron dynamics are governed by sub-optical-cycle Landau–Zener–Stückelberg interference, composed of coherent repeated Landau–Zener transitions on the femtosecond timescale. Time permitting, we will show another type of quantum path interference in multiphoton emission of electrons from nanoscale tungsten tips, where the admixture of a few percent of second harmonic radiation can suppress or enhance the emission with a visibility of 98%, depending on the relative phase of fundamental and second harmonic.

Date and Time: 
Monday, April 9, 2018 - 4:15pm
Venue: 
Spilker 232


SCIEN Talk

SCIEN Talk, eWear seminar: 'Immersive Technology and AI' with focus on mobile AR research

Topic: 
'Immersive Technology and AI' with focus on mobile AR research
Abstract / Description: 

Talk Title: Saliency in VR: How Do People Explore Virtual Environments, presented by Vincent Sitzmann

Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.

Talk Title: "Immersive Technology and AI" with focus on mobile AR research

Abstract: not available

 

Date and Time: 
Thursday, May 31, 2018 - 3:30pm
Venue: 
Spilker 232

SCIEN & EE 292E: Mobile VR for vision testing and treatment

Topic: 
Mobile VR for vision testing and treatment
Abstract / Description: 

Consumer-level HMDs are adequate for many medical applications. Vivid Vision (VV) takes advantage of their low cost, light weight, and large VR gaming code base to make vision tests and treatments. The company's software is built using the Unity engine, which allows it to run on many hardware platforms. New headsets are available every six months or less, which creates interesting challenges in the medical device space. VV's flagship product is the commercially available Vivid Vision System, used by more than 120 clinics to test and treat binocular dysfunctions such as convergence difficulties, amblyopia, strabismus, and stereo blindness. VV has recently developed a new, VR-based visual field analyzer.

Date and Time: 
Wednesday, June 6, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Emerging LIDAR concepts and sensor technologies for autonomous vehicles

Topic: 
Emerging LIDAR concepts and sensor technologies for autonomous vehicles
Abstract / Description: 

Sensor technologies such as radar, camera, and LIDAR have become the key enablers for achieving higher levels of autonomous control in vehicles, from fleets to commercial. There are, however, still questions remaining: to what extent will radar and camera technologies continue to improve, and which LIDAR concepts will be the most successful? This presentation will provide an overview of the tradeoffs for LIDAR vs. competing sensor technologies (camera and radar); this discussion will reinforce the need for sensor fusion. We will also discuss the types of improvements that are necessary for each sensor technology. The presentation will summarize and compare various LIDAR designs -- mechanical, flash, MEMS-mirror based, optical phased array, and FMCW (frequency modulated continuous wave) -- and then discuss each LIDAR concept's future outlook. Finally, there will be a quick review of guidelines for selecting photonic components such as photodetectors, light sources, and MEMS mirrors.

Date and Time: 
Wednesday, May 30, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: LiDAR Technology for Autonomous Vehicles

Topic: 
LiDAR Technology for Autonomous Vehicles
Abstract / Description: 

LiDAR is a key sensor for autonomous vehicles that enables them to understand their surroundings in three dimensions. I will discuss the evolution of LiDAR and describe various LiDAR technologies currently being developed. These include rotating sensors, MEMS and optical phased array scanning devices, flash detector arrays, and single-photon avalanche detectors. Requirements for autonomous vehicles are very challenging, and the different technologies each have advantages and disadvantages that will be discussed. The architecture of a LiDAR also affects how it fits into the overall vehicle architecture. Image fusion with other sensors, including radar, cameras, and ultrasound, will be part of the overall solution. Other LiDAR applications, including non-automotive transportation, mining, precision agriculture, UAVs, mapping, surveying, and security, will also be described.

Date and Time: 
Wednesday, May 23, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Pushing the Limits of Fluorescence Microscopy with adaptive imaging and machine learning

Topic: 
Pushing the Limits of Fluorescence Microscopy with adaptive imaging and machine learning
Abstract / Description: 

Fluorescence microscopy lets biologists see and understand the intricate machinery at the heart of living systems and has led to numerous discoveries. Any technological progress towards improving image quality would extend the range of possible observations and would consequently open up the path to new findings. I will show how modern machine learning and smart robotic microscopes can push the boundaries of observability. One fundamental obstacle in microscopy takes the form of a trade-off between imaging speed, spatial resolution, light exposure, and imaging depth. We have shown that deep learning can circumvent these physical limitations: microscopy images can be restored even if 60-fold fewer photons are used during acquisition, isotropic resolution can be achieved even with 10-fold under-sampling along the axial direction, and diffraction-limited structures can be resolved at 20-times higher frame rates compared to state-of-the-art methods. Moreover, I will demonstrate how smart microscopy techniques can achieve the full optical resolution of light-sheet microscopes — instruments capable of capturing the entire developmental arc of an embryo from a single cell to a fully formed motile organism. Our instrument improves spatial resolution and signal strength two- to five-fold, recovers cellular and sub-cellular structures in many regions otherwise not resolved, adapts to the spatiotemporal dynamics of genetically encoded fluorescent markers, and robustly optimises imaging performance during large-scale morphogenetic changes in living organisms.

Date and Time: 
Wednesday, May 16, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Advances in automotive image sensors

Topic: 
Advances in automotive image sensors
Abstract / Description: 

In this talk I present recent advances in 2D and 3D image sensors for automotive applications such as rear view cameras, surround view cameras, ADAS cameras and in cabin driver monitoring cameras. This includes developments in high dynamic range image capture, LED flicker mitigation, high frame rate capture, global shutter, near infrared sensitivity and range imaging. I will also describe sensor developments for short range and long range LIDAR systems.

Date and Time: 
Wednesday, May 9, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: 3D single-molecule super-resolution microscopy using a tilted light sheet

Topic: 
3D single-molecule super-resolution microscopy using a tilted light sheet
Abstract / Description: 

To obtain a complete picture of subcellular structures, cells must be imaged with high resolution in all three dimensions (3D). In this talk, I will present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. Here the axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The result is simple and flexible 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSFs for fiducial bead tracking and live axial drift correction. We think that TILT3D in the future will become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

Date and Time: 
Wednesday, May 2, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Video Coding before and beyond HEVC

Topic: 
Video Coding before and beyond HEVC
Abstract / Description: 

We enjoy video content in many situations. Even though it is already compressed down to 1/10 - 1/1000 of its original size, it has been reported that video traffic over the internet is increasing by 31% per year and will account for 82% of all internet traffic by 2020. This is why better compression technology is in strong demand. ITU-T/ISO/IEC jointly developed the latest video coding standard, High Efficiency Video Coding (HEVC), in 2013, and are about to start work on the next-generation standard. The corresponding proposals will be evaluated at the April 2018 meeting in San Diego, just a week before this talk.

In this talk, we will first review the advances in video coding technology over the last several decades, and then present the latest topics, including a report from the San Diego meeting and some new approaches such as deep learning techniques.

Date and Time: 
Wednesday, April 25, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Transport-Aware Cameras

Topic: 
Transport-Aware Cameras
Abstract / Description: 

Conventional cameras record all light falling onto their sensor regardless of its source or its 3D path to the camera. In this talk I will present an emerging family of coded-exposure video cameras that can be programmed to record just a fraction of the light coming from an artificial source---be it a common street lamp or a programmable projector---based on the light path's geometry or timing. Live video from these cameras offers a very unconventional view of our everyday world, in which refraction and scattering become apparent to the naked eye, and the flicker of electric lights can be turned into a powerful cue for analyzing the electrical grid from room to city.

I will discuss the unique optical properties and power efficiency of these "transport-aware cameras" through three case studies: the ACam for analyzing the electrical grid, EpiScan3D for robust 3D scanning, and progress toward designing a computational CMOS sensor for coded two-bucket imaging---a novel capability that promises much more flexible and powerful transport-aware cameras compared to existing off-the-shelf solutions.

Date and Time: 
Wednesday, April 18, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Light-field Display Architecture and the Heterogenous Display Ecosystem FoVI3D

Topic: 
Light-field Display Architecture and the Heterogenous Display Ecosystem FoVI3D
Abstract / Description: 

Human binocular vision and acuity, and the accompanying 3D retinal processing of the human eye and brain, are specifically designed to promote situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and as a result reduces the cognitive load that accompanies the analysis of, and collaboration on, complex tasks.

A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows for perspective correct visualization within the display's projection volume. Binocular disparity, occlusion, specular highlights and gradient shading, and other expected depth cues are correct from the viewer's perspective as in the natural real-world light-field.

Light-field displays are no longer a science fiction concept and a few companies are producing impressive light-field display prototypes. This presentation will review:
· The application agnostic light-field display architecture being developed at FoVI3D.
· General light-field display properties and characteristics such as field of view, directional resolution, and their effect on the 3D aerial image.
· The computation challenge for generating high-fidelity light-fields.
· A display agnostic ecosystem.

Demo after the talk: The FoVI3D Light-field Display Developer Kit (LfD DK2) is a prototype, wide field-of-view, full-parallax, monochrome light-field display capable of projecting ~100 million unique rays to fill a 9cm x 9cm x 9cm projection volume. The particulars of the light-field compute, photonics subsystem, and hogel optics will be discussed during the presentation.

Date and Time: 
Wednesday, April 11, 2018 - 4:30pm
Venue: 
Packard 101


SmartGrid

SmartGrid Seminar: Future Power System Control Functions: An Industry Perspective

Topic: 
Future Power System Control Functions: An Industry Perspective
Abstract / Description: 

This talk provides an overview of Siemens Corporate Technology's recent research on new control functions for future power systems. Three different topics are discussed: (a) adaptive power oscillation damping optimization to increase the stability reserve of power systems, (b) robust power flow optimization to increase power system resilience to volatile generation, and (c) new research challenges for autonomous microgrids that provide autonomous operation and plug-and-produce capabilities.

Date and Time: 
Thursday, May 31, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Trends in Electric Power Distribution System Analysis at PNNL

Topic: 
Trends in Electric Power Distribution System Analysis at PNNL
Abstract / Description: 

Pacific Northwest National Laboratory (PNNL) originated and continues to maintain one of the two leading open-source distribution system simulators, GridLAB-D, which has been downloaded 80,000+ times worldwide. While PNNL continues to improve the core functionality, it has recently placed more emphasis on GridLAB-D as part of a development platform, improving its interoperability and opening the software up to more customization by researchers. This talk will cover two ongoing open-source development projects, funded by the U.S. Department of Energy, that incorporate and extend GridLAB-D. One of these projects is also expected to contribute distribution feeder model conversion tools to a new California Energy Commission project headed by SLAC. Highlights of the talk will include:

  • Transactive energy simulation platform, at tesp.readthedocs.io/en/latest
  • GridAPPS-D application development platform, at gridappsd.readthedocs.io/en/latest
  • Evolve GridLAB-D's co-simulation support from the FNCS interface to a multi-lab interface called HELICS, compliant with the Functional Mockup Interface (FMI): https://github.com/GMLC-TDC/HELICS-src
  • Leveraging new capabilities for large-building simulation in JModelica, power flow analysis in OpenDSS, and transactive energy system agents in Python
  • Implementation and use of the Common Information Model (CIM) in a NoSQL triple-store database for standardized feeder model conversion
  • Comparison of different types of stochastic modeling for load and distributed energy resource (DER) output variability, and its impact on feeder model order reduction and state estimation
  • Special system protection example concerns on urban secondary networks with high penetration of DER
Date and Time: 
Thursday, May 24, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Renewable Scenario Generation Using Adversarial Networks

Topic: 
Renewable Scenario Generation Using Adversarial Networks
Abstract / Description: 

Scenario generation is an important step in the operation and planning of power systems. In this talk, we present a data-driven approach for scenario generation using the popular generative adversarial networks, in which two deep neural networks are trained in tandem. Compared with existing methods, which are often hard to scale or sample from, our method is easy to train, robust, and captures both spatial and temporal patterns in renewable generation. In addition, we show that different conditional information can be embedded in the framework. Because of the feedforward nature of the neural networks, scenarios can be generated extremely efficiently.
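
For readers unfamiliar with the setup, a minimal GAN sketch in PyTorch trained on a synthetic stand-in for renewable output profiles; the architecture, data, and training schedule are illustrative and are not those of the presented method:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
T, batch, z_dim = 24, 64, 8          # 24-hour scenarios, batch size, latent dimension

def real_scenarios(b):
    """Toy stand-in for historical renewable output: noisy diurnal profiles."""
    t = torch.linspace(0, 1, T)
    amp = 0.5 + 0.5 * torch.rand(b, 1)
    return amp * torch.sin(3.1416 * t) + 0.05 * torch.randn(b, T)

G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, T))
D = nn.Sequential(nn.Linear(T, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    # discriminator step: real scenarios vs generated ones
    x_real = real_scenarios(batch)
    x_fake = G(torch.randn(batch, z_dim)).detach()
    loss_d = bce(D(x_real), torch.ones(batch, 1)) + bce(D(x_fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator step: try to fool the discriminator
    x_fake = G(torch.randn(batch, z_dim))
    loss_g = bce(D(x_fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Once trained, new scenarios are generated with a single feedforward pass.
scenarios = G(torch.randn(10, z_dim)).detach()
```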

Date and Time: 
Thursday, April 19, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Increasing Power Grid Resiliency for Adverse Conditions & the Role of Renewable Energy Resources and Microgrids

Topic: 
Increasing Power Grid Resiliency for Adverse Conditions & the Role of Renewable Energy Resources and Microgrids
Abstract / Description: 

System resiliency is the number one concern for electrical utilities in 2018, according to the CEO of PJM, the nation's largest independent system operator. This talk will offer insights and practical answers, through examples, of how power grids can be affected by weather and how countermeasures, such as microgrids, can be applied to mitigate these effects. It will focus on two major events, Superstorm Sandy and Hurricane Maria, and the role of renewable energy resources and microgrids in these two natural disasters. It will also discuss the role of microgrids in black-starting the power grid after a blackout.

Date and Time: 
Thursday, April 12, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration

Topic: 
Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration
Abstract / Description: 

Transmission-distribution coordinated energy management (TDCEM) is recognized as a promising solution to the challenge of high DER penetration, but it has lacked a distributed computation method that works universally and effectively. To bridge this gap, a generalized master-slave-splitting (G-MSS) method is proposed based on a general-purpose transmission-distribution coordination model (G-TDCM), which makes G-MSS applicable to most central functions of TDCEM. In G-MSS, a basic heterogeneous decomposition (HGD) algorithm is first derived from the heterogeneous decomposition of the coupling constraints in the KKT system of the G-TDCM. Optimality and convergence properties of this algorithm are proved. Furthermore, a modified HGD algorithm is developed by utilizing the subsystems' response functions, resulting in faster convergence. The distributed G-MSS method is then demonstrated to successfully solve central functions of TDCEM including power flow, contingency analysis, voltage stability assessment, economic dispatch, and optimal power flow. Severe issues of over-voltage and erroneous assessment of system security caused by DERs are thus resolved by G-MSS at modest computational cost. A real-world demonstration project in China will be presented.

Date and Time: 
Thursday, April 5, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Johanna Mathieu

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, March 1, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Optimizing the Operation and Deployment of Battery Energy Storage

Topic: 
Optimizing the Operation and Deployment of Battery Energy Storage
Abstract / Description: 

While the cost of battery energy storage systems is decreasing, justifying their deployment beyond pilot or subsidized projects remains challenging. In this talk, we will discuss how to optimize the size and location of batteries used for spatio-temporal arbitrage by either vertically-integrated utilities or merchant storage developers. We will also consider other applications of battery energy storage, such as reserve and frequency regulation and how battery degradation can be taken into account in optimal dispatch decisions.


 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 22, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: From Sensors to Software: The role of Wireless in the Smart Grid

Topic: 
From Sensors to Software: The role of Wireless in the Smart Grid
Abstract / Description: 

Sensor-enabled embedded systems are redefining how future communities sense, reason about, and manage utilities (water, electric, gas, sewage), roads, traffic lights, bridges, parking complexes, agriculture, waterways, and the broader environment. With advances in low-power wide area networks (LP-WANs), we are seeing radios able to transmit small payloads at low data rates (a few kilobits per second) over long distances (several kilometers) with minimal power consumption. As such, LP-WANs have become both a target of study and an enabler for a variety of research projects. In this talk, I will describe our experiences in developing and deploying wireless sensing systems for energy-efficient building and smart-grid applications. I will start off by discussing a number of hardware platforms and sensing techniques developed to improve visibility into buildings and their occupants. These include new devices for occupancy estimation, demand-side management using electric water heaters, and an assortment of low-cost and easy-to-install sub-metering devices. I then show how these devices can be easily integrated using an open-source platform called OpenChirp that provides data context, storage, and visualization for sensing systems. Finally, I will go over a case study in which we electrified over 500 homes in rural Haiti with wireless smart meters, so that they no longer require expensive and toxic kerosene for lighting.


 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 8, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Deepak Divan

Topic: 
Massively Distributed Control – An Enabler for the Future Grid
Abstract / Description: 

The power infrastructure is poised for dramatic change. Drivers include rapid growth in the deployment of exponential technologies such as solar, wind, storage, EVs and power electronics; improved economic, operational and energy efficiency; and higher grid resiliency under cyber-attacks and natural disasters. Data from the field shows severe limitations with using the traditional top-down centralized control strategy, and an alternate decentralized approach with dynamic control capability is needed. The 'future' grid will involve a full integration of the physical and transactive grids, and will be more dynamic, with bidirectional power flows, and a real-time market that all generators and consumers will be able to participate in. This will translate into unique requirements for autonomous distributed control using power converters distributed around the grid. The presentation will highlight several key issues and possible solutions for addressing them, showing that decentralized dynamic control using power electronics is very feasible and provides a path to a future grid that is more resource-efficient, flexible, resilient and can support higher levels of PV and wind energy penetration.


 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, February 1, 2018 - 1:30pm
Venue: 
Y2E2 111

SmartGrid Seminar: Adam Wierman

Topic: 
TBA
Abstract / Description: 

The speakers are renowned scholars or industry experts in power and energy systems. We believe they will bring novel insights and fruitful discussions to Stanford. This seminar is offered as a 1 unit seminar course, CEE 272T/EE292T. Interested students can take this seminar course for credit by completing a project based on the topics presented in this course.

Date and Time: 
Thursday, January 18, 2018 - 1:30pm
Venue: 
Y2E2 111


Stanford's NetSeminar

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

 

Events taking place around the world are listed at IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is forced to choose between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.
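As a purely illustrative aside (this is not BlindBox's DPIEnc protocol or its actual rule-exchange mechanism), the toy sketch below shows the general flavor of matching detection rules against encrypted traffic: endpoints send keyed hashes of payload substrings, and a middlebox performs equality checks on those opaque tokens without seeing plaintext. The key, window size, and rules are hypothetical.

# Toy sketch of keyword matching on keyed-hash tokens instead of plaintext.
# NOT BlindBox's protocol; it only illustrates that equality-based DPI rules can
# be matched against pseudorandom tokens derived from an encrypted flow.
import hmac
import hashlib

SESSION_KEY = b"key-shared-by-the-two-endpoints"   # hypothetical per-connection key
WINDOW = 8                                          # token length in bytes

def token(chunk: bytes) -> bytes:
    """Keyed, deterministic token: equal plaintext chunks map to equal tokens."""
    return hmac.new(SESSION_KEY, chunk, hashlib.sha256).digest()

def tokenize(payload: bytes) -> set:
    """Endpoint side: sliding-window tokens sent alongside the encrypted flow."""
    return {token(payload[i:i + WINDOW]) for i in range(len(payload) - WINDOW + 1)}

# Rule setup: each detection keyword becomes a token.  In a real system these
# tokens would be handed to the middlebox pre-computed, so it never holds the key.
RULES = {token(b"launchco"): "exfiltration keyword", token(b"evilbyte"): "malware signature"}

def inspect(token_stream: set) -> list:
    """Middlebox-style check: equality tests on opaque tokens only."""
    return [name for tok, name in RULES.items() if tok in token_stream]

payload = b"POST /upload?data=launchcodes HTTP/1.1"
print("alerts:", inspect(tokenize(payload)))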

Date and Time: 
Wednesday, November 11, 2015 - 12:15pm to 1:30pm
Venue: 
Packard 202

NetSeminar

Topic: 
Precise localization and high throughput backscatter using WiFi signals
Abstract / Description: 

Indoor localization holds great promise to enable applications like location-based advertising, indoor navigation, inventory monitoring and management. SpotFi is an accurate indoor localization system that can be deployed on commodity WiFi infrastructure. SpotFi only uses information that is already exposed by WiFi chips and does not require any hardware or firmware changes, yet achieves the same accuracy as state-of-the-art localization systems.

We then present BackFi, a novel communication system that enables high-throughput, long-range communication between very low power backscatter IoT sensors and WiFi APs, using ambient WiFi transmissions as the excitation signal. We show via prototypes and experiments that it is possible to achieve communication rates of up to 5 Mbps at a range of 1 meter and 1 Mbps at a range of 5 meters. Such performance is one to three orders of magnitude better than the best known prior WiFi backscatter system.
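For readers unfamiliar with backscatter, the toy simulation below is an assumption-laden illustration, not BackFi's design: a tag conveys bits by toggling whether it reflects an ambient carrier, and a reader that knows the ambient waveform cancels it and detects the leftover reflected energy in each bit period.

# Toy numeric sketch of ambient-backscatter communication (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
SAMPLES_PER_BIT = 200
REFLECTION_GAIN = 0.05          # weak reflected path (illustrative)
NOISE_STD = 0.01

bits = [1, 0, 1, 1, 0, 0, 1, 0]
n = SAMPLES_PER_BIT * len(bits)
ambient = np.cos(2 * np.pi * 0.05 * np.arange(n))      # known excitation signal
tag_state = np.repeat(bits, SAMPLES_PER_BIT)            # 1 = reflect, 0 = absorb

received = ambient + REFLECTION_GAIN * tag_state * ambient + rng.normal(0, NOISE_STD, n)

residual = received - ambient                           # cancel the known signal
energy = (residual.reshape(len(bits), SAMPLES_PER_BIT) ** 2).mean(axis=1)
decoded = (energy > energy.mean()).astype(int).tolist() # simple energy detector

print("sent:   ", bits)
print("decoded:", decoded)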

Date and Time: 
Thursday, October 15, 2015 - 12:15pm to 1:30pm
Venue: 
Gates 104

NetSeminar

Topic: 
BlindBox: Deep Packet Inspection over Encrypted Traffic
Abstract / Description: 

SIGCOMM 2015, Joint work with: Justine Sherry, Chang Lan, and Sylvia Ratnasamy

Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks that examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is forced to choose between two desirable properties: the functionality of middleboxes and the privacy of encryption.

We propose BlindBox, the first system that simultaneously provides both of these properties. The approach of BlindBox is to perform the deep-packet inspection directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.

Date and Time: 
Wednesday, October 7, 2015 - 12:15pm to 1:30pm
Venue: 
AllenX Auditorium


Statistics and Probability Seminars

Statistics Seminar: Inference, Computation, and Visualization for Convex Clustering and Biclustering

Topic: 
Inference, Computation, and Visualization for Convex Clustering and Biclustering
Abstract / Description: 

Hierarchical clustering enjoys wide popularity because of its fast computation, ease of interpretation, and appealing visualizations via the dendrogram and cluster heatmap. Recently, several authors have proposed and studied convex clustering and biclustering which, similar in spirit to hierarchical clustering, achieve cluster merges via convex fusion penalties. While these techniques enjoy superior statistical performance, they suffer from slower computation and are not generally conducive to representation as a dendrogram. In the first part of the talk, we present new convex (bi)clustering methods and fast algorithms that inherit all of the advantages of hierarchical clustering. Specifically, we develop a new fast approximation and variation of the convex (bi)clustering solution path that can be represented as a dendrogram or cluster heatmap. Also, as one tuning parameter indexes the sequence of convex (bi)clustering solutions, we can use these to develop interactive and dynamic visualization strategies that allow one to watch data form groups as the tuning parameter varies. In the second part of this talk, we consider how to conduct inference for convex clustering solutions that addresses questions like: Are there clusters in my data set? Or, should two clusters be merged into one? To achieve this, we develop a new geometric representation of Hotelling's T²-test that allows us to use the selective inference paradigm to test multivariate hypotheses for the first time. We can use this approach to test hypotheses and calculate confidence ellipsoids on the cluster means resulting from convex clustering. We apply these techniques to examples from text mining and cancer genomics.

This is joint work with John Nagorski and Frederick Campbell.
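For context, the convex clustering problem referred to above is usually posed as minimizing 0.5 * sum_i ||x_i - u_i||^2 + lambda * sum_{i<j} w_ij * ||u_i - u_j||, so that the centroids u_i fuse as lambda grows. The sketch below solves this objective with cvxpy on a tiny synthetic data set; the data, weights, and lambda grid are illustrative choices, not the speaker's.

# Minimal convex-clustering sketch (illustrative, not the speaker's algorithms).
import itertools
import numpy as np
import cvxpy as cp

X = np.array([[0.0, 0.0], [0.1, -0.1], [3.0, 3.1], [3.2, 2.9]])  # two obvious groups
n, d = X.shape
pairs = list(itertools.combinations(range(n), 2))
w = {(i, j): np.exp(-np.sum((X[i] - X[j]) ** 2)) for i, j in pairs}  # Gaussian weights

for lam in (0.01, 0.5, 5.0):
    U = cp.Variable((n, d))
    fit = 0.5 * cp.sum_squares(X - U)
    fusion = sum(w[i, j] * cp.norm(U[i] - U[j], 2) for i, j in pairs)
    cp.Problem(cp.Minimize(fit + lam * fusion)).solve()
    # Count distinct (rounded) centroids as a crude proxy for the number of clusters.
    k = len({tuple(np.round(row, 2)) for row in U.value})
    print(f"lambda = {lam:4.2f}: {k} distinct centroid(s)")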


The Statistics Seminars for Winter Quarter will be held in Room 380Y of the Sloan Mathematics Center in the Main Quad at 4:30pm on Tuesdays. 

Date and Time: 
Tuesday, March 13, 2018 - 4:30pm
Venue: 
Sloan Mathematics Building, Room 380Y

Statistics Seminar: Understanding rare events in models of statistical mechanics

Topic: 
Understanding rare events in models of statistical mechanics
Abstract / Description: 

Statistical mechanics models are ubiquitous at the interface of probability theory, information theory, and inference problems in high dimensions. To develop a refined understanding of such models, one often needs to study not only typical fluctuation theory but also the realm of atypical events. In this talk, we will focus on sparse networks and polymer models on lattices. In particular we will consider the rare events that a sparse random network has an atypical number of certain local structures, and that a polymer in random media has atypical weight. The random geometry associated with typical instances of these rare events is an important topic of inquiry: this geometry can involve merely local structures, or more global ones. We will discuss recent solutions to certain longstanding questions and connections to stochastic block models, exponential random graphs, eigenvalues of random matrices, and fundamental growth models.

Date and Time: 
Tuesday, January 30, 2018 - 4:30pm
Venue: 
Sloan Mathematics Building, Room 380Y

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

Statistics Seminar

Topic: 
Brownian Regularity for the Airy Line Ensemble
Abstract / Description: 

The Airy line ensemble is a positive-integer-indexed, ordered system of continuous random curves on the real line whose finite-dimensional distributions are given by the multi-line Airy process. It is a natural object in the KPZ universality class: for example, its highest curve, the Airy2 process, describes, after the subtraction of a parabola, the limiting law of the scaled weight of a geodesic running from the origin to a variable point on an anti-diagonal line in such problems as Poissonian last passage percolation. The Airy line ensemble enjoys a simple and explicit spatial Markov property, the Brownian Gibbs property.


In this talk, I will discuss how this resampling property may be used to analyse the Airy line ensemble. Arising results include a close comparison between the ensemble's curves after affine shift and Brownian bridge. The Brownian Gibbs technique is also used to compute the value of a natural exponent describing the decay in probability for the existence of several near geodesics with common endpoints in Brownian last passage percolation, where the notion of "near" refers to a small deficit in scaled geodesic weight, with the parameter specifying this nearness tending to zero.

Date and Time: 
Monday, September 26, 2016 - 4:30pm
Venue: 
Sequoia Hall, room 200


SystemX

SystemX Seminar: Intracellular recording of thousands of connected neurons on a silicon chip

Topic: 
Intracellular recording of thousands of connected neurons on a silicon chip
Abstract / Description: 

Massively parallel intracellular recording from a large number of neurons across a network is a long-standing technological pursuit in neurobiology, but it has not yet been achieved. Intracellular recording with the patch-clamp electrode boasts unparalleled sensitivity, measuring down to sub-threshold synaptic events, but it is too bulky to be implemented in a dense, massive-scale array: so far only ~10 parallel patch recordings have been possible. Optical methods, e.g. voltage-sensitive dyes and proteins, have been developed in hopes of parallelizing intracellular recording, but they have not been able to record from more than ~30 neurons in parallel. At the opposite extreme, the microelectrode array can record from many more neurons, but this extracellular technique has too low a sensitivity to tap into synaptic activities. In this talk, I would like to share our ongoing effort, a silicon chip that conducts intracellular recording from thousands of connected mammalian neurons in vitro, and discuss applications in high-throughput screening, functional connectome mapping, neuromorphic engineering, and data science.

Date and Time: 
Tuesday, May 15, 2018 - 2:00pm
Venue: 
Allen 101X

SystemX Seminar: Hardware Opportunities for AI/Cognitive Computing

Topic: 
Hardware Opportunities for AI/Cognitive Computing
Abstract / Description: 

Deep Neural Networks (DNNs) are very large artificial neural networks trained using very large datasets, typically using the supervised learning technique known as backpropagation. Currently, CPUs and GPUs are used for these computations. Over the next few years, we can expect special-purpose hardware accelerators based on conventional digital-design techniques to optimize the GPU framework for these DNN computations. Here there are opportunities to increase speed and reduce power for two distinct but related tasks: training and forward-inference. During training, the weights of a DNN are adjusted to improve network performance through repeated exposure to the labelled data-examples of a large dataset. Often this involves a distributed network of chips working together in the cloud. During forward-inference, already trained networks are used to analyze new data-examples, sometimes in a latency-constrained cloud environment and sometimes in a power-constrained environment (sensors, mobile phones, "edge-of-network" devices, etc.)

Even after the improved computational performance and efficiency that is expected from these special-purpose digital accelerators, there would still be an opportunity for even higher performance and even better energy-efficiency from neuromorphic computation based on analog memories.

In this presentation, I discuss the origin of this opportunity as well as the challenges inherent in delivering on it, with some focus on materials and devices for analog volatile and non-volatile memory. I review our group's work towards neuromorphic chips for the hardware acceleration of training and inference of Fully-Connected DNNs [1-5]. Our group uses arrays of emerging non-volatile memories (NVM), such as Phase Change Memory, to implement the synaptic weights connecting layers of neurons. I will discuss the impact of real device characteristics – such as non-linearity, variability, asymmetry, and stochasticity – on performance, and describe how these effects determine the desired specifications for the analog resistive memories needed for this application. I present some novel solutions to finesse some of these issues in the near-term, and describe some challenges in designing and implementing the CMOS circuitry around the NVM array. I will end with an outlook on the prospects for analog memory-based DNN hardware accelerators.

[1] G. W. Burr et al., IEDM Tech. Digest, 29.5 (2014).
[2] G. W. Burr et al., IEEE Trans. Electron Devices, 62(11), p. 3498 (2015).
[3] G. W. Burr et al., IEDM Tech. Digest, 4.4 (2015).
[4] P. Narayanan et al., IBM J. Res. Dev., 61(4/5), 11:1-11 (2017).
[5] S. Ambrogio et al., Nature, to appear (2018).
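To make the impact of update asymmetry concrete, the toy model below (an illustrative assumption, not the speakers' device physics) stores a weight in a single bounded, nonlinear analog conductance. Because the potentiation and depression steps differ, a perfectly balanced stream of increase/decrease requests still drags the weight toward a device-determined fixed point rather than leaving it unchanged.

# Toy model of an asymmetric, nonlinear analog synapse (illustrative only).
import random

G_MAX = 1.0
ALPHA_UP = 0.02       # potentiation step scale (illustrative)
ALPHA_DOWN = 0.05     # depression step scale; deliberately asymmetric

def update(g: float, sign: int) -> float:
    if sign > 0:                                    # potentiation saturates near G_MAX
        return min(G_MAX, g + ALPHA_UP * (1.0 - g / G_MAX))
    return max(0.0, g - ALPHA_DOWN * (g / G_MAX))   # depression saturates near 0

random.seed(0)
requests = [+1, -1] * 1000                          # balanced: ideal net change is zero
random.shuffle(requests)

g = 0.5
for s in requests:
    g = update(g, s)

print(f"start 0.500, ideal end 0.500, simulated end {g:.3f}")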

Date and Time: 
Thursday, May 31, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Brain-machine Interfaces: From basic science and engineering to clinical trials

Topic: 
Brain-machine Interfaces: From basic science and engineering to clinical trials
Abstract / Description: 

Millions of people worldwide suffer from neurological disease and injury leading to paralysis, which is often so severe that people are unable to feed themselves or communicate. Cortically-controlled brain-machine interfaces (BMIs) aim to restore some of this lost function by converting neural activity from the brain into control signals for prosthetic devices. I will describe some of our group's recent investigations into basic motor neurophysiology focused on understanding neural population dynamics, pre-clinical BMIs focused on high-performance control algorithm design, and translational BMI development and pilot clinical trial results focused on helping establish clinical viability.
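As a generic illustration of the decoding step in a BMI (not the speaker's algorithms, which include Kalman-filter-based and other high-performance decoders), the sketch below fits a ridge-regression map from synthetic multichannel firing rates to intended 2-D cursor velocity and evaluates it on held-out time bins.

# Toy BMI decoder: binned firing rates -> 2-D cursor velocity (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_channels = 2000, 96                 # e.g. a 96-channel array, 2000 time bins

velocity = rng.normal(size=(n_bins, 2))                       # "intended" vx, vy
tuning = rng.normal(size=(2, n_channels))                     # each channel's tuning
rates = velocity @ tuning + 5.0 + rng.normal(scale=0.5, size=(n_bins, n_channels))

# Fit decoder weights on the first half, evaluate on the second half.
train, test = slice(0, n_bins // 2), slice(n_bins // 2, n_bins)
X = np.hstack([rates, np.ones((n_bins, 1))])                  # add a bias column
lam = 1.0                                                     # ridge penalty
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(X.shape[1]),
                    X[train].T @ velocity[train])

pred = X[test] @ W
corr = [np.corrcoef(pred[:, k], velocity[test][:, k])[0, 1] for k in range(2)]
print(f"held-out correlation: vx {corr[0]:.2f}, vy {corr[1]:.2f}")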

Date and Time: 
Thursday, May 24, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Design verification for unsupervised learning systems

Topic: 
Design verification for unsupervised learning systems
Abstract / Description: 

The deployment of artificial intelligence (AI), particularly of systems that learn from data and experience, is rapidly expanding in our society. Verified AI is the goal of designing AI-based systems that have strong, verified assurances of correctness with respect to mathematically specified requirements. In this talk, I will consider Verified AI from a formal methods perspective. I will describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges. I will illustrate these challenges and principles with examples and sample results from the domain of intelligent cyber-physical systems, with a particular focus on autonomous vehicles.
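One small, concrete instance of checking behavior against a mathematically specified requirement is runtime monitoring of a trace. The sketch below is my own toy example, not the speaker's framework: it checks a simulated vehicle trace against a bounded-response property, "whenever an obstacle is closer than 10 m, braking must begin within two time steps."

# Toy runtime monitor for a bounded-response safety requirement (illustrative).
from dataclasses import dataclass

@dataclass
class State:
    obstacle_distance: float   # metres to the nearest obstacle
    braking: bool

def satisfies_bounded_response(trace: list, dist: float = 10.0, k: int = 2) -> bool:
    """Every step with an obstacle closer than `dist` must be followed by braking within k steps."""
    for t, s in enumerate(trace):
        if s.obstacle_distance < dist and not any(u.braking for u in trace[t:t + k + 1]):
            return False
    return True

good = [State(30, False), State(12, False), State(8, False), State(7, True), State(9, True)]
bad = [State(30, False), State(8, False), State(7, False), State(6, False), State(5, True)]

print("good trace satisfies requirement:", satisfies_bounded_response(good))
print("bad trace satisfies requirement: ", satisfies_bounded_response(bad))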

Date and Time: 
Thursday, May 17, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: On the role of interaction in future mobility systems, from vehicle-centric to system-wide control

Topic: 
On the role of interaction in future mobility systems, from vehicle-centric to system-wide control
Abstract / Description: 

In this talk I will discuss my work on self-driving vehicles, with an emphasis on accounting for interactions with external counterparts at both the vehicle- and system-levels. Specifically, I will first discuss a decision-making framework that enables a self-driving vehicle to proactively interact with humans to infer their intents, and to use such information for safe and efficient driving. I will then turn the discussion to the operational and economic aspects of autonomous mobility-on-demand (AMoD) systems, with an emphasis on the interaction between AMoD and the electric power network.
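As a minimal sketch of what inferring intent from interaction can look like (a toy illustration, not the speaker's decision-making framework), the snippet below maintains a Bayesian belief over whether another driver will yield at a merge, updated from noisy observations of that driver's behavior. The hypothesis set and likelihood numbers are invented for illustration.

# Toy Bayesian intent inference over a two-hypothesis set (illustrative only).
# Likelihood of each observed action under each hypothesis (made-up numbers).
LIKELIHOOD = {
    "yield":     {"slows_down": 0.8, "keeps_speed": 0.2},
    "not_yield": {"slows_down": 0.2, "keeps_speed": 0.8},
}

def update(belief: dict, observation: str) -> dict:
    """One Bayes update: posterior is prior times likelihood, renormalised."""
    unnorm = {h: belief[h] * LIKELIHOOD[h][observation] for h in belief}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

belief = {"yield": 0.5, "not_yield": 0.5}
for obs in ["keeps_speed", "keeps_speed", "slows_down"]:
    belief = update(belief, obs)
    print(f"after '{obs}': P(yield) = {belief['yield']:.2f}")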

Date and Time: 
Thursday, April 26, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Power Electronics for the Future: Research Trends and Challenges

Topic: 
Power Electronics for the Future: Research Trends and Challenges
Abstract / Description: 

Power electronics can be found in everything from cellphones and laptops to gasoline/electric vehicles, industrial motors and inverters that connect solar panels to the electric grid. With close to 80% of electrical energy consumption in the US expected to flow through a power converter by 2030, innovative solutions are required to tackle key issues related to conversion efficiency, power density and cost. This talk will look at the trends in power electronics across different application spaces, describe the ongoing research efforts and highlight the challenges ahead.

Date and Time: 
Thursday, April 19, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Computational Near-Eye Displays (for VR/AR Applications)

Topic: 
Computational Near-Eye Displays (for VR/AR Applications)
Abstract / Description: 

Immersive visual and experiential computing systems, i.e. virtual and augmented reality (VR/AR), are entering the consumer market and have the potential to profoundly impact our society. Applications of these systems range from communication, entertainment, education, collaborative work, simulation and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Thus, developing near-eye display systems that provide a high-quality user experience is of the utmost importance. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have improved significantly in recent years. However, a significant source of visual discomfort remains: the vergence-accommodation conflict (VAC). Further, natural focus cues are not supported by any existing near-eye display. In this talk, we discuss frontiers of engineering next-generation opto-computational near-eye display systems to increase visual comfort and provide realistic and effective visual experiences.
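To make the vergence-accommodation conflict concrete, it is often quantified in diopters as the gap between the distance the eyes must converge to (the rendered object) and the fixed optical distance they must focus to (the display's focal plane). The numbers in the short sketch below are illustrative, not measurements of any particular headset.

# Small numeric illustration of the vergence-accommodation conflict (VAC).
FOCAL_PLANE_M = 1.5        # assumed fixed accommodation distance of the display

def vac_diopters(virtual_object_m: float, focal_plane_m: float = FOCAL_PLANE_M) -> float:
    """Vergence demand minus accommodation demand, in diopters (1 / metres)."""
    return abs(1.0 / virtual_object_m - 1.0 / focal_plane_m)

for d in (0.3, 0.5, 1.5, 10.0):
    print(f"object at {d:4.1f} m -> conflict of {vac_diopters(d):.2f} D")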

Date and Time: 
Thursday, April 12, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Seminar: Modeling and Simulation for neuromorphic applications with focus on RRAM and ferroelectric devices

Topic: 
Modeling and Simulation for neuromorphic applications with focus on RRAM and ferroelectric devices
Abstract / Description: 

Neuromorphic computing has recently emerged as one of the most promising options for reducing the power consumption of big-data analysis, paving the way for artificial intelligence systems with power efficiencies approaching that of the human brain. The key device for a neuromorphic computing system is the artificial two-terminal synapse, which controls signal processing and transmission. Its conductivity must be changed in an analog/continuous way depending on neural signal strengths. In addition, synaptic devices must have: symmetric/linear conductivity potentiation and depression; a high number of levels (~32), depending on the application and algorithm performance; high data retention (>10 years) and cycling endurance (>10^9 cycles); ultra-low power consumption (<10 fJ); low variability; and high scalability (<10 nm) with the possibility of 3D integration.

A variety of device technologies have been explored, such as phase-change memories, ferroelectric random-access memory, and resistive random-access memory (RRAM). In each case, matching the desired specs is a complex multivariable problem requiring a deep quantitative understanding of the link between material properties at the atomic scale and electrical device performance. We have used the multiscale modeling platform GINESTRA™ to illustrate this for the case of RRAM and ferroelectric tunnel junctions (FTJs).

In the case of RRAM, modeling of the key mechanisms shows that a dielectric stack composed of two appropriately chosen dielectrics provides the best solution, in agreement with experimental data. In the case of FTJs, the hysteretic ferroelectric behavior of dielectric stacks fabricated from the orthorhombic phase of doped HfO2 is nicely captured by the simulations. These show that the Fe-HfO2 stack can readily be used for analog switching by simply tuning the set/reset voltage amplitudes. An added advantage of the simulations is that they point out ways to improve the performance, variability, and endurance of the devices in order to meet industrial requirements.

Date and Time: 
Thursday, April 5, 2018 - 4:30pm
Venue: 
Gates B03

SystemX Alliance hosts Spring 2018 Workshop

Topic: 
SystemX Alliance Spring 2018 Workshop
Abstract / Description: 

Join the SystemX Alliance for its Spring Workshop Week, April 30 - May 3, 2018.
Details are available on the SystemX Spring workshop page.

SystemX Alliance research broadly encompasses ubiquitous sensing, computing, and communications in various application areas. Currently affiliated SystemX faculty are found in departments across Stanford's School of Engineering and in some areas of the natural sciences and medicine. Their research agenda is continually evolving in accordance with the interests of Stanford faculty and industry affiliates.

Date and Time: 
Monday, April 30, 2018 (All day) to Thursday, May 3, 2018 (All day)
Venue: 
Li Ka Shing Center for Learning and Knowledge
