Graduate

EE380 Computer Systems Colloquium: The future of low power circuits and embedded intelligence: emerging devices and new design paradigms

Topic: 
The future of low power circuits and embedded intelligence: emerging devices and new design paradigms
Abstract / Description: 

The circuit and design division at CEA LETI focuses on innovative architectures and circuits for digital, imaging, wireless, sensor, power-management, and embedded-software applications. After a brief overview of adaptive circuits for low-power multiprocessors and IoT architectures, the talk will detail new technology opportunities for greater flexibility. Digital and mixed-signal architectures using 3D technologies will be presented in the context of the multiprocessor activity, as well as imagers and neuro-inspired circuits. The integration of non-volatile memories will also be shown from the perspective of new computing architectures. Finally, embedded learning will be addressed as a way to solve power challenges at the edge and in end devices, and some new design approaches will be discussed.

Date and Time: 
Wednesday, May 23, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: The Future of Wireless Communications Hint: It's not a linear amplifier

Topic: 
The Future of Wireless Communications Hint: It's not a linear amplifier
Abstract / Description: 

Wireless communications are ubiquitous in the 21st century--we use them to read the newspaper, talk to our colleagues or children, watch sporting events and other forms of entertainment, and monitor and control the environment we live in, to name just a few. This exponential growth in demand for wireless capacity has driven a new era of innovation in this space, because spectrum and energy are expensive and constrained resources.

The future of wireless communications will demand leaps in spectrum efficiency, bandwidth efficiency, and power efficiency for successful technology deployments. Key applications that will fundamentally change how we interact with wireless systems and the demands we place on wireless technologies include Dynamic Spectrum Access Networks, massive MIMO, and the elusive unicorn of the "universal handset". While each of these breakthrough "system" capabilities makes simultaneous demands on spectrum efficiency, bandwidth efficiency, and power efficiency, the current suite of legacy technologies forces system designers to make undesirable trade-offs because of the limitations of linear amplifier technology.
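As background for the amplifier physics referenced above (and not drawn from the talk itself), the following minimal sketch uses the textbook drain-efficiency limits of ideal class-A and class-B linear stages to show why those trade-offs arise: a linear amplifier handling a high peak-to-average-power waveform spends most of its time backed off from peak power, exactly where its efficiency collapses. All figures below are standard ideal-class limits, not measured Eridan numbers.

    # Toy illustration (not Eridan data): textbook drain-efficiency limits of
    # ideal linear amplifier classes versus output back-off.  A class-A stage
    # draws constant bias current, so efficiency falls quadratically with the
    # normalized output amplitude; an ideal class-B stage falls only linearly.
    from math import pi

    def class_a_efficiency(backoff_db):
        """Ideal class-A drain efficiency at the given output back-off (dB)."""
        p = 10 ** (-backoff_db / 20)       # normalized output voltage amplitude
        return 0.5 * p ** 2                # peaks at 50% at full swing, falls as p^2

    def class_b_efficiency(backoff_db):
        """Ideal class-B drain efficiency at the given output back-off (dB)."""
        p = 10 ** (-backoff_db / 20)
        return (pi / 4) * p                # peaks at ~78.5%, falls only as p

    # High-PAPR modern waveforms spend most of their time 6-10 dB below peak.
    for backoff_db in (0, 3, 6, 10):
        print(f"{backoff_db:2d} dB back-off: "
              f"class A {class_a_efficiency(backoff_db):6.1%}, "
              f"class B {class_b_efficiency(backoff_db):6.1%}")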

Eridan's solution is the antithesis of "linear". The Switch Mode Mixer Modulator (SM3) technology emphasizes precision and flexibility, and simultaneously delivers spectrum efficiency, bandwidth efficiency, and power efficiency. The resulting capabilities dramatically increase total wireless capacity with minimal need to expand operations into extended regions of the wireless spectrum.

This presentation will discuss the driving forces behind wireless system performance, the physics of linear amplifiers and SM3, measured performance of SM3 systems, and the implications for wireless system capabilities in the near future.

Date and Time: 
Wednesday, May 16, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Cryptocurrencies

Topic: 
Cryptocurrencies
Abstract / Description: 

I will give an introduction to blockchain technology, the current state of the industry and its challenges, and the Einsteinium Foundation, which is embarking on a truly ambitious path likely to change how cryptocurrency is viewed and used in everyday life. Scientific research is a long-term investment in our future and the future of our planet. Funding for "big ideas" has fallen dramatically around the world in recent years. The defining characteristic of Einsteinium is its ongoing commitment to research and charitable missions. The Einsteinium coin is a Bitcoin-like currency with the philanthropic objective of funding scientific, cutting-edge IT, and cryptocurrency projects.

Date and Time: 
Wednesday, May 9, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computer Accessibility

Topic: 
Exploring the implications of machine learning for people with cognitive disabilities
Abstract / Description: 

Advances in information technology have provided many benefits for people with disabilities, including wide availability of textual content via text-to-speech, flexible control of motorized wheelchairs, captioned video, and much more. People with cognitive disabilities benefit from easier communication and better tools for scheduling and reminders. Will advances in machine learning enhance this impact? Progress in natural language processing, autonomous vehicles, and emotion detection, all driven by machine learning, may deliver important benefits soon. Further out, can we look for systems that can help people with cognitive challenges understand our complex world more easily, work more effectively, stay safe, and interact more comfortably in social situations? What are the technical barriers to overcome in pursuing these goals, and what are the theoretical developments in machine learning that may overcome them?

Date and Time: 
Wednesday, April 18, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Information Theory of Deep Learning

Topic: 
Information Theory of Deep Learning
Abstract / Description: 

I will present a novel comprehensive theory of large scale learning with Deep Neural Networks, based on the correspondence between Deep Learning and the Information Bottleneck framework. The new theory has the following components:

  1. Rethinking learning theory: I will prove a new generalization bound, the input-compression bound, which shows that compression of the representation of the input variable is far more important for good generalization than the dimension of the network's hypothesis class, an ill-defined notion for deep learning.
  2. I will prove that for large-scale Deep Neural Networks, the mutual information of the last hidden layer with the input and output variables provides a complete characterization of the sample complexity and accuracy of the network. This makes the Information Bottleneck bound the optimal trade-off between sample complexity and accuracy achievable with ANY learning algorithm.
  3. I will show how Stochastic Gradient Descent, as used in Deep Learning, achieves this optimal bound. In that sense, Deep Learning is a method for solving the Information Bottleneck problem for large-scale supervised learning problems. The theory provides a new computational understanding of the benefit of the hidden layers and gives concrete predictions for the structure of the layers of Deep Neural Networks and their design principles. These turn out to depend solely on the joint distribution of the input and output and on the sample size.

Based in part on joint work with Ravid Shwartz-Ziv and Noga Zaslavsky.
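For reference, the Information Bottleneck objective the talk builds on (Tishby, Pereira, and Bialek) can be stated as follows, where T is a compressed representation of the input X (for example, a hidden layer) used to predict Y; this is the standard formulation of the framework rather than anything specific to the new results above:

    \min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)

The first term measures how much the representation compresses the input, the second how much predictive information it keeps about the label, and the multiplier beta sets the trade-off between compression and accuracy that items 2 and 3 refer to.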

Date and Time: 
Wednesday, April 4, 2018 - 4:30pm
Venue: 
Gates B03

SmartGrid Seminar: Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration

Topic: 
Transmission-Distribution Coordinated Energy Management: A Solution to the Challenge of Distributed Energy Resource Integration
Abstract / Description: 

Transmission-distribution coordinated energy management (TDCEM) is recognized as a promising solution to the challenge of high DER penetration, but a distributed computation method that works universally and effectively for TDCEM has been lacking. To bridge this gap, a generalized master-slave-splitting (G-MSS) method is proposed based on a general-purpose transmission-distribution coordination model (G-TDCM), which makes G-MSS applicable to most central functions of TDCEM. In G-MSS, a basic heterogeneous decomposition (HGD) algorithm is first derived from the heterogeneous decomposition of the coupling constraints in the KKT system of the G-TDCM, and its optimality and convergence properties are proved. A modified HGD algorithm is then developed that exploits each subsystem's response function, resulting in faster convergence. The distributed G-MSS method is demonstrated to successfully solve central functions of TDCEM, including power flow, contingency analysis, voltage stability assessment, economic dispatch, and optimal power flow. Severe issues of over-voltage and erroneous assessment of system security caused by DERs are thus resolved by G-MSS at modest computational cost. A real-world demonstration project in China will be presented.
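To make the master-slave idea concrete (this is only an illustrative toy, not the G-MSS or HGD algorithm described above), the sketch below iterates between a "transmission" subproblem, reduced to a Thevenin equivalent seen from the boundary bus, and a "distribution" subproblem, lumped into a constant-power load; the two sides exchange only boundary voltage and current until they agree. All names and per-unit values are invented.

    # Minimal sketch (not the G-MSS/HGD method itself): the basic master-slave
    # boundary iteration behind transmission-distribution coordination, on a toy
    # system.  All quantities are per-unit; parameter values are invented.

    E_TH = 1.02 + 0.0j          # transmission Thevenin voltage (assumed)
    Z_TH = 0.01 + 0.05j         # transmission Thevenin impedance (assumed)
    S_DIST = 0.80 + 0.30j       # aggregate distribution demand, P + jQ (assumed)

    def transmission_step(i_boundary):
        """Master problem: given boundary current, return boundary voltage."""
        return E_TH - Z_TH * i_boundary

    def distribution_step(v_boundary):
        """Slave problem: given boundary voltage, return boundary current."""
        return (S_DIST / v_boundary).conjugate()

    v, i = 1.0 + 0.0j, 0.0 + 0.0j
    for it in range(1, 51):
        i_new = distribution_step(v)       # distribution solves with latest boundary voltage
        v_new = transmission_step(i_new)   # transmission solves with latest boundary current
        converged = abs(v_new - v) < 1e-8
        v, i = v_new, i_new
        if converged:
            break

    s_boundary = v * i.conjugate()
    print(f"converged in {it} iterations: |V| = {abs(v):.4f} pu, "
          f"boundary P = {s_boundary.real:.4f} pu, Q = {s_boundary.imag:.4f} pu")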

Date and Time: 
Thursday, April 5, 2018 - 1:30pm
Venue: 
Y2E2 111

US-ATMC (EE402) Seminar presents Asia Entrepreneurship Update – 2018

Topic: 
Asia Entrepreneurship Update – 2018
Abstract / Description: 

In this first session in our Spring Quarter weekly series on "Entrepreneurship in Asian High-Tech Industries," Professor Richard Dasher introduces new (updated) data and discusses trends in entrepreneurism and the supporting ecosystems for startup companies in major Asian economies, as well as the implications for U.S. investors and businesses.

Date and Time: 
Tuesday, April 3, 2018 - 4:30pm
Venue: 
Skilling Auditorium, 494 Lomita Mall

Special Seminar: New Algorithms and Hardware Acceleration for the IoT Revolution

Topic: 
New Algorithms and Hardware Acceleration for the IoT Revolution
Abstract / Description: 

Deep Neural Networks (DNNs) are computation intensive. Without efficient hardware implementations of DNNs, many promising IoT (Internet of Things) applications will not be practically realizable. In this talk, we will first take a detailed look at one type of compute accelerator, the FPGA, and evaluate its potential role in the upcoming IoT revolution. Although FPGAs can provide desirable customized hardware solutions, they are difficult to program and optimize. High-level synthesis is an effective design flow for FPGAs because of its improved productivity, debugging, and design-space-exploration capabilities. However, optimizing large DNNs under resource constraints on FPGAs is still a key challenge. We will present a series of effective design techniques for implementing DNNs on FPGAs with high performance and energy efficiency. These include the use of configurable DNN IPs, resource allocation across DNN layers, Winograd and FFT techniques, and DNN reduction and re-training. We showcase several design solutions, including a Long-term Recurrent Convolutional Network (LRCN) for video captioning, the Inception module (GoogLeNet) for face recognition, and Long Short-Term Memory (LSTM) for sound recognition. We will also present some of our recent work on developing new DNN models and data structures for achieving higher accuracy in several interesting applications such as crowd counting, genomics, and music modeling.
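As a concrete illustration of one of the techniques named above, the sketch below shows Winograd's minimal filtering algorithm in its simplest one-dimensional form, F(2,3); it is a generic textbook example, not the speaker's FPGA implementation. The point is that two outputs of a 3-tap convolution can be produced with 4 multiplications instead of 6, which on an FPGA trades scarce DSP multipliers for cheap additions.

    # Minimal sketch of Winograd F(2,3) in one dimension (textbook form, not the
    # speaker's design): 2 outputs of a 3-tap convolution with 4 multiplications
    # instead of the 6 a direct sliding dot-product needs.

    def conv_direct(d, g):
        """Direct 3-tap valid convolution over a 4-sample window: 6 multiplies."""
        return [d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                d[1]*g[0] + d[2]*g[1] + d[3]*g[2]]

    def conv_winograd_f23(d, g):
        """Winograd F(2,3): the same two outputs with only 4 multiplies."""
        m1 = (d[0] - d[2]) * g[0]
        m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
        m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
        m4 = (d[1] - d[3]) * g[2]
        return [m1 + m2 + m3, m2 - m3 - m4]

    d = [1.0, 2.0, -1.0, 3.0]   # 4 input samples (illustrative)
    g = [0.5, -0.25, 1.0]       # 3 filter taps (illustrative)
    assert all(abs(a - b) < 1e-12
               for a, b in zip(conv_direct(d, g), conv_winograd_f23(d, g)))
    print(conv_winograd_f23(d, g))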

Date and Time: 
Monday, April 9, 2018 - 4:00pm
Venue: 
Gates 104

SystemX Seminar: Modeling and Simulation for neuromorphic applications with focus on RRAM and ferroelectric devices

Topic: 
Modeling and Simulation for neuromorphic applications with focus on RRAM and ferroelectric devices
Abstract / Description: 

Neuromorphic computing has recently emerged as one of the most promising options for reducing the power consumption of big-data analysis, paving the way for artificial intelligence systems with power efficiencies approaching that of the human brain. The key devices in a neuromorphic computing system are artificial two-terminal synapses that control signal processing and transmission. Their conductivity must change in an analog/continuous way depending on neural signal strengths. In addition, synaptic devices must have: symmetric/linear conductivity potentiation and depression; a high number of levels (~32), depending on the application and algorithm performance; high data retention (>10 years) and cycling endurance (>10^9 cycles); ultra-low power consumption (<10 fJ); low variability; high scalability (<10 nm); and the possibility of 3D integration.
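To give a feel for the potentiation/depression requirement (this is not taken from the talk), the sketch below uses a commonly cited empirical exponential-saturation model of how a synaptic device's conductance evolves over a train of identical programming pulses; all parameter values are invented for illustration.

    # Illustrative sketch (not from the talk): an empirical exponential-saturation
    # model of analog conductance potentiation/depression in a synaptic device.
    # Parameter values below are invented for illustration.
    import math

    G_MIN, G_MAX = 1e-6, 32e-6      # conductance range in siemens (assumed)
    N_LEVELS = 32                   # number of programming pulses / levels
    A = 12.0                        # nonlinearity parameter: larger => more linear

    def potentiation(n):
        """Conductance after n identical SET pulses, starting from G_MIN."""
        frac = (1 - math.exp(-n / A)) / (1 - math.exp(-N_LEVELS / A))
        return G_MIN + (G_MAX - G_MIN) * frac

    def depression(n):
        """Conductance after n identical RESET pulses, starting from G_MAX."""
        frac = (1 - math.exp(-n / A)) / (1 - math.exp(-N_LEVELS / A))
        return G_MAX - (G_MAX - G_MIN) * frac

    # A perfectly linear device would change by the same amount on every pulse;
    # the ratio of the first step to the last step is one crude linearity metric.
    first = potentiation(1) - potentiation(0)
    last = potentiation(N_LEVELS) - potentiation(N_LEVELS - 1)
    print(f"first/last potentiation step ratio: {first / last:.2f}")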

A variety of device technologies have been explored, such as phase-change memories, ferroelectric random-access memory, and resistive random-access memory (RRAM). In each case, matching the desired specs is a complex multivariable problem requiring a deep quantitative understanding of the link between material properties at the atomic scale and electrical device performance. We have used the multiscale modeling platform GINESTRA™ to illustrate this for the cases of RRAM and ferroelectric tunnel junctions (FTJs).

In the case of RRAM, modeling of the key mechanisms shows that a stack composed of two appropriately chosen dielectrics provides the best solution, in agreement with experimental data. In the case of FTJs, the hysteretic ferroelectric behavior of dielectric stacks fabricated from the orthorhombic phase of doped HfO2 is nicely captured by the simulations. These show that a ferroelectric HfO2 stack can readily be used for analog switching simply by tuning the set/reset voltage amplitudes. An added advantage of the simulations is that they point out ways to improve the performance, variability, and endurance of the devices in order to meet industrial requirements.

Date and Time: 
Thursday, April 5, 2018 - 4:30pm
Venue: 
Gates B03
