Yuanfang Li (M.S. candidate) and Dr. Ardavan Pedram have been awarded the 2017 IEEE ASAP Best Paper Award

July 2017

Co-authors Yuanfang Li (M.S. candidate) and Dr. Ardavan Pedram received the Best Paper Award at the 28th annual IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP).

The conference covers the theory and practice of application-specific systems, architectures, and processors, building on traditional strengths in areas such as computer arithmetic, cryptography, compression, signal and image processing, network processing, reconfigurable computing, application-specific instruction-set processors, and hardware accelerators.

Yuanfang Li is an M.S. candidate, and Dr. Ardavan Pedram is a senior research associate who manages the PRISM project, which enables the design of reconfigurable architectures to accelerate the building blocks of machine learning, high-performance computing, and data science routines.

Congratulations to Yuanfang and Ardavan on their well-deserved award!

Abstract "CATERPILLAR: Coarse Grain Reconfigurable Architecture for Accelerating the Training of Deep Neural Networks":
Accelerating the inference of a trained DNN is a well studied subject. In this paper we switch the focus to the training of DNNs. The training phase is compute intensive, demands complicated data communication, and contains multiple levels of data dependencies and parallelism. This paper presents an algorithm/architecture space exploration of efficient accelerators to achieve better network convergence rates and higher energy efficiency for training DNNs. We further demonstrate that an architecture with hierarchical support for collective communication semantics provides flexibility in training various networks performing both stochastic and batched gradient descent based techniques. Our results suggest that smaller networks favor non-batched techniques while performance for larger networks is higher using batched operations.