IT-Forum: A Tale of Two Measures

Topic: 
A Tale of Two Measures
Abstract / Description: 

Information theory has traditionally been studied in the context of communication theory and statistical physics. However, it has also had important applications in other fields such as computer science, economics, mathematics, and statistics. This talk is very much in the spirit of discovering applications of information theory in other fields. We will discuss three such recent applications:

Statistics: The Hirschfeld-Gebelein-Rényi maximal correlation is an important tool in statistics that has found numerous applications, from correspondence analysis to the detection of non-linear patterns in data. We will describe a simple information-theoretic proof of a fundamental result on maximal correlation due to Dembo, Kagan, and Shepp (2001).
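For reference, the maximal correlation of a pair of random variables $(X, Y)$ is

$$\rho_m(X;Y) \;=\; \sup_{f,\,g}\, \mathbb{E}[f(X)\,g(Y)],$$

where the supremum ranges over all real-valued $f, g$ with $\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0$ and $\mathbb{E}[f(X)^2] = \mathbb{E}[g(Y)^2] = 1$. The Dembo-Kagan-Shepp result referenced above identifies this quantity for partial sums of i.i.d. random variables: with $S_k = X_1 + \cdots + X_k$, one has $\rho_m(S_m; S_n) = \sqrt{m/n}$ for $m \le n$.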

Computer Science: Boolean functions are among the most basic objects of study in theoretical computer science. We show how information-theoretic tools can complement Fourier-analytic tools in their study. Specifically, we will consider the correlation between Boolean functions on a noisy hypercube graph.
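For background, in the standard noisy-hypercube model one takes $x$ uniform on $\{-1,1\}^n$ and lets $y$ be a $\rho$-correlated copy of $x$ (independently in each coordinate, $\mathbb{E}[x_i y_i] = \rho$). The correlation of two Boolean functions $f, g : \{-1,1\}^n \to \{-1,1\}$ then has the Fourier expansion

$$\mathbb{E}[f(x)\,g(y)] \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\, \hat{f}(S)\, \hat{g}(S),$$

which is the natural meeting point of the Fourier-analytic and information-theoretic viewpoints mentioned above.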

Mathematics: Hypercontractivity and reverse hypercontractivity are very useful tools for studying concentration of measure and extremal questions in the geometry of high-dimensional spaces, both discrete and continuous. In this talk, we will describe a recent result by Chandra Nair characterizing hypercontractivity using information measures. We will extend this result to reverse hypercontractivity and discuss implications of these results.
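For concreteness, the classical hypercontractive inequality for the noise operator $T_\rho$ on the hypercube (Bonami-Beckner) reads

$$\|T_\rho f\|_q \;\le\; \|f\|_p \qquad \text{for } 1 < p \le q \text{ and } \rho \le \sqrt{(p-1)/(q-1)},$$

while reverse hypercontractivity asserts the opposite inequality, $\|T_\rho f\|_q \ge \|f\|_p$, for nonnegative $f$ and exponents $q < p < 1$ (with a correspondingly restricted range of $\rho$). The characterization discussed in the talk recasts when such inequalities hold in terms of information measures.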

The title of this presentation is derived from two measures of correlation - the maximal correlation and the so-called strong data processing constant - that will be key concepts used throughout. This talk is based on joint work with Venkat Anantharam, Amin Gohari, and Chandra Nair.

Date and Time: 
Friday, March 13, 2015 - 1:00pm to 2:00pm
Venue: 
Packard 202

IT-Forum: Precinct or Prejudice? Understanding Racial Disparities in New York City's Stop-and-Frisk Policy

Topic: 
Precinct or Prejudice? Understanding Racial Disparities in New York City's Stop-and-Frisk Policy
Abstract / Description: 

Recent studies have examined racial disparities in stop-and-frisk, a widely employed but controversial policing tactic. The statistical evidence, though, has been limited and contradictory. We investigate by analyzing three million stops in New York City over five years, focusing on cases where officers suspected the stopped individual of criminal possession of a weapon (CPW). For each CPW stop, we estimate the ex-ante probability that the detained suspect would have a weapon. We find that in 44% of cases, the likelihood of finding a weapon was less than 1%, raising concerns that the legal requirement of "reasonable suspicion" was often not met. We further find that blacks and Hispanics were disproportionately stopped in these low hit-rate contexts, a phenomenon largely attributable to lower thresholds for stopping individuals in high-crime, predominantly minority areas, particularly public housing. Even after adjusting for location effects, however, we find that stopped blacks and Hispanics were still less likely than similarly situated whites to possess weapons, indicative of racial bias in stop decisions. We demonstrate that by conducting only the 6% of stops with the highest ex-ante hit rates, one can both recover the majority of weapons and mitigate racial disparities. Finally, we develop stop heuristics that can be implemented as a simple scoring rule and have accuracy comparable to our full statistical models.
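As a rough illustration of what such a scoring rule could look like, here is a sketch in Python; the circumstances, weights, and threshold are invented for illustration and are not those of the paper:

```python
# Sketch of a point-based stop heuristic in the spirit of the abstract's
# "simple scoring rule". The circumstances, weights, and threshold below
# are invented for illustration; they are NOT the paper's fitted model.

EXAMPLE_WEIGHTS = {
    "suspicious_bulge": 3,
    "sights_and_sounds_of_criminal_activity": 2,
    "witness_report": 2,
    "furtive_movements": 1,
}
THRESHOLD = 4  # hypothetical cutoff: stop only at or above this many points

def stop_score(circumstances):
    """Total points for the observed stop circumstances."""
    return sum(EXAMPLE_WEIGHTS.get(c, 0) for c in circumstances)

def should_stop(circumstances):
    """Apply the scoring rule."""
    return stop_score(circumstances) >= THRESHOLD

print(should_stop(["suspicious_bulge", "witness_report"]))  # 5 points -> True
print(should_stop(["furtive_movements"]))                   # 1 point  -> False
```

The appeal of such a rule, as the abstract notes, is that an officer can apply it in the field without a statistical model, while approximating the model's ranking of stops by ex-ante hit rate.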

This work is joint with Justin Rao (Microsoft) and Ravi Shroff (NYU). A draft of the paper can be downloaded here: https://5harad.com/papers/frisky.pdf

Date and Time: 
Friday, February 20, 2015 - 1:00pm to 2:00pm
Venue: 
Packard 202

IT-Forum

Topic: 
A puzzle: How to communicate via a binary erasure channel with feedback without repeated ones?
Abstract / Description: 

In this talk we will present a simple and fundamental problem: communicating over a memoryless binary erasure channel with feedback, under the constraint that the input sequence contains no consecutive 1s.

First, we will present the problem as a puzzle and provide a simple solution. We will prove its optimality using only counting, logic, and basic probability arguments. Then we will show how we obtained the solution using information-theoretic tools (such as directed information) and optimization tools (such as dynamic programming).
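As a warm-up, and emphatically not the optimal scheme from the talk, one can simulate a naive zero-error strategy that respects the constraint: before each data bit the encoder repeatedly sends a marker 0 until it is received, then sends the bit once, restarting from the marker whenever the bit is erased. The input sequence then never contains two consecutive 1s, and feedback is used only to tell the encoder which symbols were erased. A minimal Python sketch (all names invented):

```python
import random

ERASE = "e"  # erasure symbol as seen by the decoder

def bec(symbol, eps, rng):
    """One use of a memoryless BEC(eps): the input is erased w.p. eps."""
    return ERASE if rng.random() < eps else symbol

def transmit(bits, eps, seed=0):
    """Naive zero-error feedback scheme: marker 0, then the data bit.

    A real decoder can track the marker/data phases from its own
    observations alone (a received 0 ends the marker phase, and the next
    channel use carries data), so appending each received data symbol
    below is exactly what it would output.
    """
    rng = random.Random(seed)
    channel_inputs, decoded = [], []
    for b in bits:
        delivered = False
        while not delivered:
            # Marker phase: send 0 until the decoder receives one.
            while True:
                channel_inputs.append(0)
                if bec(0, eps, rng) != ERASE:
                    break
            # Data phase: send the bit once; feedback tells the encoder
            # whether it was erased, in which case we restart the marker.
            channel_inputs.append(b)
            out = bec(b, eps, rng)
            if out != ERASE:
                decoded.append(out)
                delivered = True
    return channel_inputs, decoded

data_rng = random.Random(1)
bits = [data_rng.randint(0, 1) for _ in range(1000)]
inputs, decoded = transmit(bits, eps=0.3)
assert decoded == bits  # zero-error
assert all(not (a == 1 and b == 1) for a, b in zip(inputs, inputs[1:]))
print(f"naive rate: {len(bits) / len(inputs):.3f} bits per channel use")
```

This baseline is wasteful (roughly two channel uses per bit even without erasures); the gap between it and the optimal rate is exactly what the directed-information and dynamic-programming machinery in the talk pins down.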

The talk will be given mostly on a whiteboard.

Based on joint work with Oron Sabag of Ben-Gurion University and Navin Kashyap of the Indian Institute of Science.

Date and Time: 
Friday, February 13, 2015 - 1:00pm to 2:00pm
Venue: 
Packard 202

IT-Forum: On Optimal Solutions in Decentralized Control (Team) Problems

Topic: 
On Optimal Solutions in Decentralized Control (Team) Problems
Abstract / Description: 

Recently, there has been a lot of work on decentralized control (team) problems -- problems in which multiple agents with different information act, perhaps in a dynamic environment, to minimize a common objective function. Such scenarios occur naturally, for example, in large-scale control systems, communication systems, organizations, and networks. However, until recently very few team problems were known to admit optimal solutions.

In this talk, we discuss some recent results on this topic and show that a class of dynamic LQG teams with no-observation-sharing information structures admits team-optimal solutions. This result provides the first unified proof of the existence of optimal solutions in several different classes of stochastic teams, including the celebrated Witsenhausen counterexample, the Gaussian test channel, the Gaussian relay channel, and their non-scalar extensions.
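To make the celebrated counterexample concrete: in Witsenhausen's problem the state evolves as

$$x_1 = x_0 + u_1, \qquad y = x_1 + v, \qquad x_0 \sim \mathcal{N}(0,\sigma^2),\; v \sim \mathcal{N}(0,1),$$

where the first controller observes $x_0$ perfectly and chooses $u_1 = \gamma_1(x_0)$, while the second observes only the noisy $y$ and chooses $u_2 = \gamma_2(y)$; both minimize the common cost $J = \mathbb{E}\left[k^2 u_1^2 + (x_1 - u_2)^2\right]$. Because the second controller does not see the first controller's observation, the information structure is nonclassical, and nonlinear strategies can strictly outperform the best linear ones.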

Date and Time: 
Friday, January 30, 2015 - 1:00pm to 2:00pm
Venue: 
Packard 202

SmartGrid Seminar: Key Issues and Challenges in the Deepening Penetration of Demand Response Resources

Topic: 
Key Issues and Challenges in the Deepening Penetration of Demand Response Resources
Abstract / Description: 

We focus on the key developments in the implementation of demand response resources (DRRs), with special attention to their economic and policy aspects. The Federal Energy Regulatory Commission (FERC) forecasts an achievable 2019 DRR penetration range of 4-14% of system peak load in the various ISOs/RTOs under its jurisdiction. We discuss the three key factors driving the rapid growth in DRR implementation: the rollout of the smart grid, the emergence of curtailment service providers, or aggregators, and developments on the demand response policy front. The large-scale implementation of advanced metering solutions to replace the legacy metering infrastructure, and the deployment of appropriate technologies, devices, and services to access and leverage energy usage information, are direct outcomes of the smart grid advancements. The creation of an important new class of market participants, the load aggregators, makes possible the deeper penetration of DRRs as viable competitors to supply-side resources. Recent policies, starting with the Energy Policy Act of 2005 and followed by FERC Order Nos. 719 and 745, together with various state-level initiatives, have been instrumental in removing barriers to DRR participation and in bringing about the persistent deepening of DRR penetration. We highlight some of the unintended consequences of FERC Order No. 745 and the challenges that deepening DRR penetration presents. While DRR curtailments result in lower loads, which reduce prices and emissions at specific nodes in the system during the curtailment hours, some portion of the curtailed energy is recovered in subsequent hours, affecting prices and emissions in those hours; this is the so-called DRR payback effect. The recovery can substantially erode the economic benefits and emissions reductions. Such outcomes underline the importance of formulating and implementing effective DRR policies.
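To make the payback effect concrete, here is a back-of-the-envelope sketch; every number in it is invented for illustration and is not a FERC or ISO figure:

```python
# Back-of-the-envelope illustration of the DRR "payback" effect described
# above. All numbers are invented for illustration, not FERC/ISO figures.

curtailed_mwh = 100.0   # hypothetical energy curtailed during the DR event
payback_share = 0.6     # hypothetical fraction recovered in later hours
emis_event_t_per_mwh = 0.9    # hypothetical marginal emissions at the peak
emis_payback_t_per_mwh = 0.7  # hypothetical marginal emissions later on

recovered_mwh = payback_share * curtailed_mwh
net_energy_mwh = curtailed_mwh - recovered_mwh
gross_emissions_t = curtailed_mwh * emis_event_t_per_mwh
net_emissions_t = gross_emissions_t - recovered_mwh * emis_payback_t_per_mwh

print(f"net energy reduction: {net_energy_mwh:.0f} MWh"
      f" of {curtailed_mwh:.0f} MWh curtailed")
print(f"net emissions cut:    {net_emissions_t:.0f} t CO2"
      f" vs {gross_emissions_t:.0f} t ignoring payback")
```

With these illustrative numbers, only 40 of the 100 curtailed MWh are a net reduction, and the emissions benefit shrinks from 90 to 48 t CO2, which is why analyses that ignore payback overstate DRR benefits.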

Date and Time: 
Thursday, February 19, 2015 - 1:15pm to 2:15pm
Venue: 
Y2E2 270

SmartGrid Seminar: High Performance Computing for Power System Applications

Topic: 
High Performance Computing for Power System Applications
Abstract / Description: 

Control center operation is becoming more complex as new and often-conflicting reliability, economics, and public policy issues emerge. Computer simulations will be required to analyze larger and larger amounts of system data (of different types) and what-if scenarios to derive succinct information for operators to make informed decisions. Existing control center applications are primarily based on the original digital computing infrastructure first designed in the 1970s. While some incremental improvements have been made over the past several years, control center applications do not take full advantage of the computing power in their existing infrastructure or in the computing industry in general.

High Performance Computing (HPC) and advanced computers are used widely within government and in selected industry applications to solve important problems of high complexity, providing improvements in time-to-solution by factors ranging from hundreds to millions over desktop solutions. This presentation will share and discuss research on applying high performance computing to solve power system problems.

Date and Time: 
Thursday, January 29, 2015 - 1:15pm to 2:15pm
Venue: 
Y2E2 270

Computing with Fluids

Topic: 
Computing with Fluids
Abstract / Description: 

In this talk, I will discuss an emerging intersection of physical science and computer science, starting with a historical perspective on all-fluidic computation from the 1960s. Advances in modern microfluidics have led to renewed interest in fluidic mechanisms for coding and computation in fluids. In the absence of inertial forces at low Reynolds numbers in small geometries, we were faced with exploring new non-linearities that are intrinsic to multi-phase microfluidics. I will briefly describe our motivations for inventing a universal digital logic family in multi-phase microfluidics that uses drops and bubbles to compute. This first logic family was asynchronous, leading to timing errors. Next, I will discuss the latest advances from our lab, which finally demonstrate a new synchronous universal droplet fluidic logic family and its control. We will discuss scaling laws and demonstrate cascadable universal logic, feedback, fan-out, and non-volatile ring memory. The presented platform opens up a scalable means to both program and manipulate matter and information simultaneously in integrated microfluidic systems. Exploring applications of microfluidics in low-resource settings and global health, I will finally describe a new platform we have recently invented that enables programmable microfluidics using old-school "punch card" tapes.

I will close the talk with a (slightly unrelated) but fascinating demonstration of an artificial fluidic analogue of biological chemotaxis.

Date and Time: 
Monday, January 26, 2015 - 4:00pm to 5:00pm
Venue: 
AllenX Auditorium

Stanford Optical Society Seminar: Cleantech Investing - A Perspective from KPCB

Topic: 
Cleantech Investing - A Perspective from KPCB
Abstract / Description: 

John Denniston was previously a Partner at the venture capital firm Kleiner Perkins Caufield & Byers, where he co-founded and co-ran the firm's $1 billion Green Growth Fund. John retired from the firm in 2013. Before joining KPCB, John was a Managing Director and Head of Technology Investment Banking for the Western United States at Salomon Smith Barney. He also served on the Investment Committee for Salomon's venture capital direct investment fund and CitiGroup's venture capital fund-of-funds. Earlier in his career, John was a Partner at Brobeck, Phleger & Harrison, where he was Head of the firm's Venture Capital Practice Group and Co-Head of its Information Technology Practice Group. He also served on the investment committee for the firm's venture capital fund. John earned his B.A. in Economics and J.D. from the University of Michigan.

Date and Time: 
Wednesday, January 21, 2015 - 4:30pm to 5:30pm
Venue: 
Spilker 232

Toward energy-neutral computational sensing - challenges and opportunities

Topic: 
Toward energy-neutral computational sensing - challenges and opportunities
Abstract / Description: 

The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification, coupled with advanced multi-standard/multi-mode communication capabilities. Higher level intelligence, requiring local storage and complex search and matching algorithms, will come next, ultimately leading to situational awareness and truly "intelligent things" harvesting energy from their environment.


From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. We believe that CMOS technology can still take us a long way toward this vision. Our recent results with the PULP (parallel ultra-low power) open computing platform demonstrate that pJ/op (equivalently, GOPS/mW) computational efficiency is within reach in today's 28nm CMOS FD-SOI technology. In the longer term, looking toward the next 1000x of energy efficiency improvement, we will need to fully exploit the flexibility of heterogeneous 3D integration, stop being religious about analog vs. digital and von Neumann vs. "new" computing paradigms, and seriously look into relaxing traditional "hardware-software contracts" such as numerical precision and error-free permanent storage.
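As a unit check, the pJ/op and GOPS/mW figures quoted above are the same efficiency expressed two ways:

$$1\ \tfrac{\text{GOPS}}{\text{mW}} \;=\; \frac{10^{9}\ \text{op/s}}{10^{-3}\ \text{J/s}} \;=\; 10^{12}\ \tfrac{\text{op}}{\text{J}} \;=\; 1\ \tfrac{\text{op}}{\text{pJ}},$$

so a platform operating at 1 GOPS/mW spends one picojoule per operation, and the next 1000x of improvement corresponds to the femtojoule-per-operation regime.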

Date and Time: 
Tuesday, January 27, 2015 - 11:00am to 12:00pm
Venue: 
Gates 415
