Save the Date for the SystemX Fall Conference.
Details published as available.
Quantum technologies for computing, communication, and metrology become much more powerful when they are part of an integrated network operating at the quantum level. The first part of this talk will introduce some of the interconnect technologies that can help build up large quantum networks, including quantum memories and transducers. The second part will focus on demonstrations of these types of interconnects in crystals containing rare-earth ions, including on-chip technology developed at Caltech.
Computing technology has been a backbone of our society, and its importance is hard to overstate. Recent advances in artificial intelligence and deep learning confirm this once again: these emerging workloads impose arithmetic complexity and data-access demands that existing computing systems can barely handle. In particular, mobile and embedded computing systems face a major challenge in achieving the energy-efficient computing needed to truly enable intelligent systems. In this seminar, we will outline the bottlenecks of energy-efficient computing, notably the breakdown of Dennard scaling and the memory wall problem. We will then discuss several approaches our group has been working on, including a massively parallel near-threshold processor, circuits and architectures to tolerate variability, active leakage suppression, integrated DC-DC converters and voltage regulators for per-core DVFS, in-memory computing hardware, hybrid analog-digital computing, and a deep learning algorithm that reduces communication to off-chip memory. We will introduce several test-chip prototypes and their measurement results.
Within the last few years, quantum computing has moved from an academic field of study to a blossoming industry with a healthy ecosystem comprising startups as well as research groups at large corporations. In this talk we will give a general introduction to this topic, describe a selection of the currently competing hardware platforms, and demonstrate how to program a near-term quantum computer.
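To give a flavor of what "programming a near-term quantum computer" looks like at the gate level, here is a minimal sketch that simulates a two-qubit Bell-state circuit with a plain statevector. This is an illustrative toy simulator, not the vendor SDK the speaker may demonstrate; the gate matrices and basis ordering are standard conventions assumed for this example.

```python
import numpy as np

# Single-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
I = np.eye(2)

# Two-qubit CNOT (control = qubit 0, the most significant bit)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
state = np.kron(H, I) @ state                  # Hadamard on qubit 0
state = CNOT @ state                           # entangle the two qubits

probs = np.abs(state) ** 2
print(probs)  # [0.5, 0, 0, 0.5]: the Bell state (|00> + |11>)/sqrt(2)
```

Measuring either qubit then yields 0 or 1 with equal probability, with the two outcomes perfectly correlated; real hardware SDKs wrap exactly this circuit-building step in a higher-level API.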
The talk will provide an overview of Wireless Electric Vehicle Charging (WEVC) technology and its market. It will include a discussion of the system and its high-level components, safety and emissions considerations, and standards work. Current state-of-the-art systems in both late-stage and early-stage development will be discussed, including dynamic wireless charging. The talk will also include a brief overview of Formula E, the electric racing series.
Massively parallel intracellular recording from a large number of neurons across a network is a great technological pursuit in neurobiology, but it has not yet been achieved. Intracellular recording with the patch-clamp electrode boasts unparalleled sensitivity, measuring down to sub-threshold synaptic events, but the electrode is too bulky to be implemented in a dense, massive-scale array: so far only ~10 parallel patch recordings have been possible. Optical methods (e.g., voltage-sensitive dyes and proteins) have been developed in hopes of parallelizing intracellular recording, but they have not been able to record from more than ~30 neurons in parallel. At the opposite extreme, the microelectrode array can record from many more neurons, but this extracellular technique has too low a sensitivity to tap into synaptic activities. In this talk, I would like to share our ongoing effort, a silicon chip that conducts intracellular recording from thousands of connected mammalian neurons in vitro, and discuss applications in high-throughput screening, functional connectome mapping, neuromorphic engineering, and data science.
Deep Neural Networks (DNNs) are very large artificial neural networks trained using very large datasets, typically with the supervised learning technique known as backpropagation. Currently, CPUs and GPUs are used for these computations. Over the next few years, we can expect special-purpose hardware accelerators based on conventional digital-design techniques to optimize the GPU framework for these DNN computations. Here there are opportunities to increase speed and reduce power for two distinct but related tasks: training and forward-inference. During training, the weights of a DNN are adjusted to improve network performance through repeated exposure to the labelled data-examples of a large dataset; often this involves a distributed network of chips working together in the cloud. During forward-inference, already-trained networks are used to analyze new data-examples, sometimes in a latency-constrained cloud environment and sometimes in a power-constrained environment (sensors, mobile phones, "edge-of-network" devices, etc.).
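The training / forward-inference split described above can be sketched on a deliberately tiny model (one weight, squared loss); the data, learning rate, and epoch count here are illustrative choices, not anything from the talk. Real DNNs perform the same two phases at vastly larger scale.

```python
# Labelled examples of the target function y = 2x.
data = [(x, 2.0 * x) for x in range(1, 6)]

# Training: adjust the weight by gradient descent over repeated
# exposures to the labelled examples.
w, lr = 0.0, 0.01
for epoch in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # d/dw of the loss (w*x - y)^2
        w -= lr * grad

# Forward-inference: the trained weight is now frozen and applied to
# new, unseen data.
print(round(w, 3))       # ~2.0: the learned weight
print(round(w * 7, 2))   # prediction for the unseen input x = 7
```

The asymmetry matters for hardware: training needs many weight *updates* (read-modify-write), while inference only needs fast, low-power weight *reads*, which is why accelerators often specialize for one task or the other.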
Even after the improved computational performance and efficiency expected from these special-purpose digital accelerators, there will still be an opportunity for even higher performance and even better energy efficiency from neuromorphic computation based on analog memories.
In this presentation, I will discuss the origin of this opportunity as well as the challenges inherent in delivering on it, with some focus on materials and devices for analog volatile and non-volatile memory. I will review our group's work towards neuromorphic chips for the hardware acceleration of training and inference of fully-connected DNNs [1-5]. Our group uses arrays of emerging non-volatile memories (NVM), such as Phase Change Memory, to implement the synaptic weights connecting layers of neurons. I will discuss the impact of real device characteristics, such as non-linearity, variability, asymmetry, and stochasticity, on performance, and describe how these effects determine the desired specifications for the analog resistive memories needed for this application. I will present some novel solutions to finesse some of these issues in the near term, and describe some challenges in designing and implementing the CMOS circuitry around the NVM array. I will end with an outlook on the prospects for analog memory-based DNN hardware accelerators.
[1] G. W. Burr et al., IEDM Tech. Digest, 29.5 (2014).
[2] G. W. Burr et al., IEEE Trans. Electron Devices, 62(11), pp. 3498 (2015).
[3] G. W. Burr et al., IEDM Tech. Digest, 4.4 (2015).
[4] P. Narayanan et al., IBM J. Res. Dev., 61(4/5), 11:1-11 (2017).
[5] S. Ambrogio et al., Nature, to appear (2018).
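The analog-memory idea in the abstract above, synaptic weights stored as NVM conductances in a crossbar so that a layer's matrix-vector product happens in place via Ohm's and Kirchhoff's laws, can be sketched as follows. Device variability is modelled here as simple Gaussian conductance noise with an assumed 5% spread; real PCM non-idealities (asymmetry, drift, stochastic SET) are richer than this toy model.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.standard_normal((4, 8))   # ideal trained weights: 4 neurons, 8 inputs
x = rng.standard_normal(8)        # input activations (applied as voltages)

ideal = W @ x                     # exact digital matrix-vector product

# Analog read: each stored weight is perturbed by device-to-device
# conductance variability; column currents sum the products physically.
sigma = 0.05                      # assumed relative conductance spread
G_noise = sigma * rng.standard_normal(W.shape)
analog = (W + G_noise) @ x

error = np.max(np.abs(analog - ideal))
print(error)                      # output error grows with sigma
```

The talk's point is precisely that such non-idealities perturb every multiply-accumulate, so the training algorithm and device specifications must be co-designed to keep the resulting accuracy loss acceptable.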
Millions of people worldwide suffer from neurological disease and injury leading to paralysis, which is often so severe that people are unable to feed themselves or communicate. Cortically-controlled brain-machine interfaces (BMIs) aim to restore some of this lost function by converting neural activity from the brain into control signals for prosthetic devices. I will describe some of our group's recent investigations into basic motor neurophysiology focused on understanding neural population dynamics, pre-clinical BMIs focused on high-performance control algorithm design, and translational BMI development and pilot clinical trial results focused on helping establish clinical viability.
The deployment of artificial intelligence (AI), particularly of systems that learn from data and experience, is rapidly expanding in our society. Verified AI is the goal of designing AI-based systems that have strong, verified assurances of correctness with respect to mathematically-specified requirements. In this talk, I will consider Verified AI from a formal methods perspective. I will describe five challenges for achieving Verified AI, and five corresponding principles for addressing these challenges. I will illustrate these challenges and principles with examples and sample results from the domain of intelligent cyber-physical systems, with a particular focus on autonomous vehicles.