SystemX BONUS LECTURE: Enabling embedded deep neural networks: Co-optimization across processor architectures, schedulers and model cost-functions

Tuesday, February 18, 2020 - 4:30pm
Venue: 
Packard 202
Speaker: 
Marian Verhelst - KU Leuven
Abstract / Description: 

Deep neural network inference carries significant computational complexity, which until recently made its execution feasible only on power-hungry server or GPU platforms. The lab of Prof. Verhelst is pushing the state of the art in embedded neural network processing for edge and mobile devices through optimized algorithm-processor co-design. The talk will discuss how to exploit and jointly optimize NPU/TPU processor architectures, dataflow schedulers, quantized neural network models, and model training cost functions for maximum energy efficiency. The talk will quantify the gains of such co-optimization and illustrate them with practical examples.
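As background for the quantized network models mentioned above, the sketch below shows symmetric 8-bit post-training weight quantization, one common model-side lever for energy efficiency. The function names and per-tensor scaling scheme are illustrative assumptions, not the speaker's specific method:

```python
# Illustrative sketch (not from the talk): symmetric per-tensor
# int8 quantization of a weight vector. Storing weights in int8
# is 4x smaller than float32, and int8 multiply-accumulates cost
# far less energy than float32 ones on an embedded NPU.

def quantize_int8(weights):
    """Map float weights to int8 codes with one per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    # Clamp to the signed 8-bit range [-128, 127].
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.05, 0.89]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

The single per-tensor scale keeps the arithmetic trivial; finer-grained (per-channel) scales and quantization-aware training reduce the accuracy loss further, which is where the model-training cost functions in the abstract come in.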

Bio: 

Marian Verhelst is an associate professor at the MICAS laboratories of the EE Department of KU Leuven. Her research focuses on embedded machine learning, hardware accelerators, self-adaptive circuits and systems, sensor fusion, and low-power edge processing. She received her PhD from KU Leuven in 2008, was a visiting scholar at the BWRC of UC Berkeley in the summer of 2005, and worked as a research scientist at Intel Labs, Hillsboro, OR from 2008 to 2011. Marian is a member of the DATE and ISSCC executive committees, is TPC co-chair of AICAS 2020 and tinyML 2020, and is a TPC member of DATE and ESSCIRC. She is an SSCS Distinguished Lecturer, was a member of the Young Academy of Belgium, has served as an associate editor for TVLSI, TCAS-II, and JSSC, and was a member of the STEM advisory committee to the Flemish Government. Marian currently holds a prestigious ERC Starting Grant from the European Union and was a laureate of the Royal Academy of Belgium in 2016.