In recent years, deep learning algorithms have become widespread in many practical applications. Algorithms trained by offline back-propagation on pre-defined datasets show impressive performance, but state-of-the-art algorithms are compute- and memory-intensive, making low-power real-time classification difficult, especially on area- and power-constrained embedded hardware platforms.
In this talk, we present our recent research on how hardware designs of machine learning algorithms can be efficiently customized for two divergent applications: deep convolutional neural networks (VGG, ResNet) for high-throughput image/video applications (e.g., autonomous driving), and compressed neural networks for ECG-based ultra-low-power biomedical applications (e.g., wearable devices). We present FPGA and ASIC prototype designs that improve energy efficiency by optimizing computation, memory, and communication for representative neural networks.
Jae-sun Seo received his Ph.D. degree from the University of Michigan in 2010. From 2010 to 2013, he was with the IBM T. J. Watson Research Center, where he worked on energy-efficient circuits for microprocessors and cognitive computing chip design for the DARPA SyNAPSE project. In January 2014, he joined Arizona State University as an assistant professor in the School of ECEE. His research interests include efficient hardware design for deep learning and neuromorphic computing, as well as integrated power management. During the summer of 2015, he was a visiting faculty member at the Intel Circuits Research Lab. He received the IBM Outstanding Technical Achievement Award in 2012 and the NSF CAREER Award in 2017.