SystemX

SystemX Seminar: Toward Managing the Complexity of Molecules: Letting Matter Compute Itself

Topic: 
Toward Managing the Complexity of Molecules: Letting Matter Compute Itself
Abstract / Description: 

Person-millennia are spent each year seeking useful molecules for medicine, food, agriculture, and other uses. For biomolecules, the near-infinite universe of possibilities is staggering and humbling. As an example, antibodies, which make up the majority of today's top-grossing medicines, are composed of about 1,100 amino acids chosen from the twenty used by living things. The binding part (the variable region), which allows the antibody to recognize and bind pathogens, is about 110 amino acids long, giving rise to 10^143 possible combinations. There are only about 10^80 atoms in the universe, illustrating the intractability of exploring the entire space of possibilities. This is just one example…
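As a rough back-of-the-envelope check of the combinatorics above (not part of the talk), the size of the variable-region sequence space can be computed in log space:

```python
import math

# Number of possible variable regions: 20 amino acids at each of ~110 positions.
positions = 110
alphabet = 20

# Work in log10 to avoid enormous integers: log10(20^110) = 110 * log10(20).
log10_combinations = positions * math.log10(alphabet)
print(f"20^110 is roughly 10^{log10_combinations:.0f}")

# Compare against the estimated number of atoms in the universe (~10^80).
print(f"Excess over atoms in the universe: ~10^{log10_combinations - 80:.0f}")
```

The exponent comes out to about 143, matching the figure in the abstract.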

Presently, machine learning (ML), artificial intelligence (AI), quantum computing, and “big data” are often put forth as the solutions to all problems, particularly by pontificating TED presenters and in Sand Hill pitches dripping with hyperbole. Expecting these methods to provide intelligent prediction of molecular structure and function within our lifetimes is unrealistic. For example, a neural network trained on daily weather patterns in Palo Alto cannot develop an internal model for global weather. In a similar way, finite and reasonable molecular training sets will not magically cause a generalizable model of molecular quantum mechanics to arise within a neural network, no matter how many layers it is endowed with.

With that provocative preface, we turn to the notion of letting matter compute itself. Massive combinatorial libraries can now be intelligently and efficiently mined with appropriate molecular readouts (a.k.a. "the question vector") at ever-increasing throughputs, presently surpassing 10^12 unique molecules in a few hours. Once "matter-in-the-loop" exploration is embraced, AI, ML, and other methods can be usefully brought to bear in closed-loop methods to follow veins of opportunity in molecular space. Several examples of mining massive molecular spaces will be presented, including drug discovery, digital pathology, and AI-guided continuous-flow chemical synthesis, all real and all working today.

Date and Time: 
Thursday, March 15, 2018 - 4:30pm to 5:30pm
Venue: 
Y2E2 Room 111

SystemX BONUS! Seminar: Soft Switching Inverters with Wide-Bandgap Devices

Topic: 
Soft Switching Inverters with Wide-Bandgap Devices
Abstract / Description: 

Soft switching has been successfully applied in switching power supplies, single-phase inverters for induction heating, and similar applications. However, applying soft switching to three-phase inverters or converters has so far been uncommon. Three-phase converters and inverters are widely used in data centers, UPS systems, fast EV chargers, PV and wind power inverters, and motor drives. In this presentation, a soft-switching inverter with a zero-voltage-switching SVM scheme (ZVS-SVM) is introduced. ZVS-SVM can be applied to either three-phase AC/DC converters or inverters and realizes zero-voltage switching for all switches, including both the inverter bridge switches and the auxiliary switch, in three-phase inverters. The impact of SiC devices on soft-switching inverters is then investigated with respect to power density and conversion efficiency. Finally, experimental results from a soft-switching 20 kW SiC MOSFET grid inverter with a 300 kHz switching frequency are presented.

Date and Time: 
Tuesday, February 20, 2018 - 4:30pm
Venue: 
Packard 204

SystemX Seminar: Coherent Ising machines for combinatorial optimization - Optical neural networks operating at the quantum limit

Topic: 
Coherent Ising machines for combinatorial optimization - Optical neural networks operating at the quantum limit
Abstract / Description: 

Optimization problems with discrete and continuous variables are ubiquitous in numerous important areas, including operations and scheduling, drug discovery, wireless communications, finance, integrated circuit design, compressed sensing, and machine learning. Despite rapid advances in both algorithms and digital computing technology, even modest-sized optimization problems that arise in practice can be very difficult to solve on modern digital computers. One alternative of current interest is adiabatic quantum computing (AQC), or quantum annealing (QA). Sophisticated AQC/QA devices are already under development, but providing dense connectivity between qubits remains a major challenge, with serious implications for the efficiency of AQC/QA approaches. In this talk, we will introduce a novel computing system, the coherent Ising machine, and describe its theoretical and experimental performance. We start with the physics of the quantum-to-classical crossover as a computational mechanism and how to construct physical devices that act as quantum neurons and synapses. We show performance comparisons against various classical neural network models implemented as algorithms on CPUs and supercomputers. We end the talk by introducing the portal of the QNNCloud service system based on coherent Ising machines.
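As a minimal illustration of the kind of problem an Ising machine targets (a sketch of the problem formulation, not of the coherent Ising machine itself), the Ising energy of a spin configuration can be evaluated and a tiny instance solved by brute force:

```python
import itertools

def ising_energy(s, J, h):
    """Energy H(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j for spins s_i = +/-1."""
    e = -sum(h[i] * s[i] for i in range(len(s)))
    e -= sum(J[i][j] * s[i] * s[j]
             for i in range(len(s)) for j in range(i + 1, len(s)))
    return e

def brute_force_ground_state(J, h):
    """Exhaustively search all 2^n configurations for the minimum-energy one."""
    n = len(h)
    return min(itertools.product((-1, 1), repeat=n),
               key=lambda s: ising_energy(s, J, h))

# Tiny frustrated triangle: all three couplings antiferromagnetic (J = -1),
# so no configuration can satisfy every pair simultaneously.
J = [[0, -1, -1], [0, 0, -1], [0, 0, 0]]
h = [0, 0, 0]
s = brute_force_ground_state(J, h)
print(s, ising_energy(s, J, h))
```

Exhaustive search scales as 2^n, which is exactly why physical annealers and Ising machines are of interest for large instances.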

Date and Time: 
Monday, January 29, 2018 - 2:00pm
Venue: 
Packard 204

SystemX Seminar: Hardware architectures for computational imaging and vision

Topic: 
Hardware architectures for computational imaging and vision
Abstract / Description: 

85% of images today are taken by cell phones. These images are not merely projections of light from the scene onto the camera sensor but the result of deep computation. This computation involves a number of computational imaging algorithms, such as high-dynamic-range (HDR) imaging, panorama stitching, image deblurring, and low-light imaging, that compensate for camera limitations, and a number of deep-learning-based vision algorithms, such as face recognition, object recognition, and scene understanding, that make inferences on these images for a variety of emerging applications. However, because of their high computational complexity, mobile CPU- or GPU-based implementations of these algorithms do not achieve real-time performance. Moreover, offloading these algorithms to the cloud is not a viable solution, because wirelessly transmitting large amounts of image data results in long latency and high energy consumption, making it unsuitable for mobile devices.

My approach to solving this problem has been to design energy-efficient hardware accelerators targeted at these applications. In this talk, I will present my work on the architecture design and implementation of three complete computational imaging systems for energy-constrained mobile environments: (1) an energy-scalable accelerator for blind image deblurring, (2) a reconfigurable bilateral filtering processor for computational photography applications such as HDR imaging, low-light imaging and glare reduction, and (3) a low-power processor for real-time motion magnification in videos. Each of these accelerator-based systems achieves 2 to 3 orders of magnitude improvement in runtime and 3 to 4 orders of magnitude improvement in energy compared to existing implementations on CPU or GPU platforms. In my talk, I will present the energy minimization techniques that I employed in my designs to obtain these improvements. In addition, I will talk about how these systems achieve energy scalability by trading off accuracy with execution time. This is essential in real-life applications where one might still want to run a complex algorithm in a low-battery scenario but might be willing to sacrifice some visual quality.
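For readers unfamiliar with one of the algorithms above, here is a naive 1-D bilateral filter, a sketch of the edge-preserving smoothing idea rather than the hardware-accelerated 2-D processor discussed in the talk:

```python
import math

def bilateral_filter_1d(signal, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive 1-D bilateral filter.

    Each output sample is a weighted average of its neighbors; weights decay
    with both spatial distance (sigma_s) and value difference (sigma_r), so
    noise is smoothed while sharp edges survive.
    """
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), n - 1)  # clamp indices at the borders
            w = math.exp(-k * k / (2 * sigma_s ** 2))  # spatial weight
            w *= math.exp(-(signal[j] - signal[i]) ** 2 / (2 * sigma_r ** 2))  # range weight
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step: the small fluctuations are averaged out, but the jump from
# ~0 to ~1 is preserved because cross-edge samples get near-zero range weight.
step = [0.0, 0.02, -0.01, 0.01, 1.0, 0.99, 1.02, 1.0]
print([round(v, 2) for v in bilateral_filter_1d(step)])
```

The nested loops make the cost grow with the window size, which hints at why a dedicated reconfigurable processor pays off for 2-D images at camera resolutions.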

I will conclude my talk by giving my vision for how such accelerator-based systems will enable energy-efficient integration of computational imaging and deep learning based vision algorithms into mobile and wearable devices for emerging applications such as autonomous driving, micro-robotics, assistive technology, medical imaging and augmented and virtual reality.

Date and Time: 
Thursday, March 8, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Beyond inspiration: Three lessons from biology on building intelligent machines

Topic: 
Beyond inspiration: Three lessons from biology on building intelligent machines
Abstract / Description: 

The only known systems that exhibit truly intelligent, autonomous behavior are biological. If we wish to build machines capable of such behavior, then it makes sense to learn as much as we can about how these systems work. Inspiration is a good start, but real progress will require gaining a more solid understanding of the principles of information processing at work in nervous systems. Here I will focus on three areas of investigation that I believe will be especially fruitful: 1) the study of perception-action loops, in particular how sensory information is actively acquired via motor commands, 2) developing good computational models of nonlinear signal integration in dendritic trees, and 3) elucidating the computational role of feedback in neural systems.

Date and Time: 
Thursday, February 22, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Nanoscale MOSFET Modeling for the Design of Low-power Analog and RF Circuits

Topic: 
Nanoscale MOSFET Modeling for the Design of Low-power Analog and RF Circuits
Abstract / Description: 

The emergence of the Internet of Things (IoT) poses stringent requirements on energy consumption and has hence become the primary driver for low-power analog and RF circuit design. Implementing increasingly complex functions under highly constrained power and area budgets, while circumventing the challenges posed by modern device technologies, makes analog and RF circuit design ever more challenging. Guidance is therefore invaluable for the designer navigating this multi-variable design space.

This talk presents low-power analog and RF design techniques that apply from the device level to the circuit level. It starts by presenting the concept of the inversion coefficient (IC) as an essential design parameter that spans the entire range of operating points, from weak through moderate to strong inversion. Several figures of merit (FoM), including Gm/ID, Ft, and their product Gm·Ft/ID, capturing the various trade-offs encountered in analog and RF circuit design, are presented. The simplicity of the IC-based models is emphasized, and the models are compared against measurements of 40- and 28-nm bulk CMOS processes and against BSIM6 simulations. Finally, a simple technique to extract the basic model parameters from measurements or simulations is described before concluding.
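To illustrate the kind of IC-based expression involved (a common EKV-style interpolation, not necessarily the exact model used in the talk; the slope factor n below is an assumed, process-dependent value), the gm/ID figure of merit can be evaluated as a function of the inversion coefficient:

```python
import math

UT = 0.02585   # thermal voltage kT/q at 300 K, in volts
n = 1.3        # slope factor (assumed, process-dependent)

def gm_over_id(ic):
    """EKV-style interpolation of gm/ID versus inversion coefficient IC.

    IC << 1: weak inversion, gm/ID saturates at 1/(n*UT).
    IC >> 1: strong inversion, gm/ID falls off roughly as 1/sqrt(IC).
    """
    return 1.0 / (n * UT * (0.5 + math.sqrt(0.25 + ic)))

# Sweep across weak, moderate, and strong inversion.
for ic in (0.01, 0.1, 1, 10, 100):
    print(f"IC = {ic:6g}: gm/ID = {gm_over_id(ic):5.1f} 1/V")
```

A single smooth expression covering all three regions is what makes the IC a convenient design knob.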

Date and Time: 
Thursday, February 15, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Advanced SAR ADCs – Efficiency, Accuracy, Calibration and References

Topic: 
Advanced SAR ADCs – Efficiency, Accuracy, Calibration and References
Abstract / Description: 

This talk will discuss several recent techniques that were developed in the context of SAR ADCs. The presentation will show a few design examples with different performance targets. The first topic deals with minimizing power consumption while aiming to increase accuracy by means of linearization and noise reduction techniques. The second topic is about efficient calibration techniques for SAR ADCs. The last part describes a method to co-integrate the reference buffer with the SAR ADC.
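For context (a textbook sketch, not one of the speaker's designs), the successive-approximation loop at the heart of a SAR ADC is a binary search of the DAC output against the input voltage:

```python
def sar_convert(vin, vref=1.0, nbits=8):
    """Idealized successive-approximation conversion.

    Starting from the MSB, each bit is trial-set, the DAC output for the
    trial code is compared against the input, and the bit is kept only if
    the DAC output does not exceed Vin.
    """
    code = 0
    for bit in reversed(range(nbits)):
        trial = code | (1 << bit)
        vdac = trial * vref / (1 << nbits)
        if vin >= vdac:        # ideal comparator decision
            code = trial
    return code

# 0.4 V into an 8-bit converter with a 1 V reference.
print(sar_convert(0.4))
```

Real designs must contend with comparator noise and offset, DAC mismatch, and reference settling, which is where the linearization, calibration, and reference-buffer co-integration techniques of the talk come in.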

Date and Time: 
Friday, February 9, 2018 - 4:00pm
Venue: 
Allen 101X

SystemX Seminar: Mobile technology trends in 5G and beyond

Topic: 
Mobile technology trends in 5G and beyond
Abstract / Description: 

The technology development and standardization of 5G radio access have been progressing rapidly. A major agreement was reached in the past few weeks, enabling industry to complete product development, with early commercial network deployments expected in 2018. In addition to enhancing the mobile broadband services that have dominated 4G, 5G aims to enable critical machine-type communications (cMTC) and to support the Internet of Things (IoT) on the same network. This ambition poses stringent design requirements and performance objectives along many different dimensions. For example, in addition to significant improvements in peak data rates and network capacity over existing cellular technologies, 5G's performance objectives include ultra-low latency and ultra-reliability for cMTC, as well as superior device energy efficiency, low device cost, ubiquitous coverage reaching devices deep indoors, and ultra-high device connection density for IoT. The three pillars of 5G technology (enhanced MBB, cMTC, and IoT) extend 5G services to many new use cases. In this talk, we first describe the principles adopted in 5G to achieve its performance objectives. We give an overview of upcoming early deployments, which primarily address MBB. We also give examples of how 5G enables smart cities and connected industry. Finally, we discuss the next steps in 5G and what may come beyond it.

Date and Time: 
Thursday, February 8, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Quantum Supremacy

Topic: 
Quantum supremacy: checking a quantum computer with a classical supercomputer
Abstract / Description: 

As microelectronics technology nears the end of its exponential growth over time, known as Moore's law, there is renewed interest in new computing paradigms such as quantum computing. A key step on the roadmap to building a scientifically or commercially useful quantum computer will be to demonstrate its exponentially growing computing power. I will explain how a 7-by-7 array of superconducting xmon qubits with nearest-neighbor coupling, and with programmable single- and two-qubit gates with errors of about 0.2%, can execute a modest-depth quantum computation that fully entangles the 49 qubits. Sampling of the resulting output can be checked against a classical simulation to demonstrate proper operation of the quantum computer and to compare its system error rate with predictions. With a computation space of 2^49 ≈ 5.6 x 10^14 states, the quantum computation can only be checked using the biggest supercomputers. I will show experimental data toward this demonstration from a 9-qubit adjustable-coupler "gmon" device, which implements the basic sampling algorithm of quantum supremacy for a computational (Hilbert) space of about 500 states. We have begun testing the quantum supremacy chip.
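The two state-space sizes quoted above follow directly from the Hilbert-space dimension 2^n, which doubles with every added qubit:

```python
# Hilbert-space dimension for the 9-qubit gmon device and the 49-qubit array.
for n in (9, 49):
    dim = 2 ** n
    print(f"{n} qubits -> {dim} basis states (~{dim:.1e})")
```

Nine qubits give 512 states (the "about 500" figure), while 49 qubits give roughly 5.6 x 10^14, at the edge of what classical supercomputers can simulate.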

Date and Time: 
Thursday, February 1, 2018 - 4:30pm
Venue: 
Y2E2 111

SystemX Seminar: Using the Stanford Driving Simulator for Human Machine Interaction Studies

Topic: 
Using the Stanford Driving Simulator for Human Machine Interaction Studies
Abstract / Description: 

The driving simulator at Stanford is used for human-in-the-loop, human-machine interaction (HMI) driving studies. Many of these studies focus on shared control between humans and autonomous systems. The simulator's toolset collects objective driving-behavior data directly from the simulator, as well as data streams from eye trackers, cameras, and other physiological sensors that we employ to understand human responses to myriad circumstances in the simulated environment. This presentation will describe the hardware and software associated with the driving studies, discuss what is possible, and show some similar labs at other universities.

Date and Time: 
Thursday, January 25, 2018 - 4:30pm
Venue: 
Y2E2 111
