Deep learning has been a popular research area due to major successes in perception tasks such as speech recognition and object classification. In the first part of my talk, I will give a brief overview of the main concepts of deep learning. I will focus on recent advances in a topic my group has been working on especially actively: natural language processing and understanding using Recurrent Neural Networks (RNNs). In the past year, RNNs have done exceptionally well at learning to decode sequences of symbols from input signals. In the main part of the talk, I'll review some recent successes in machine translation, image understanding, and beyond. I'll finish with a discussion of some of the next challenges for deep learning, and some exciting research directions and applications that people in the field have started exploring.
The Stanford EE Computer Systems Colloquium (EE380) meets on Wednesdays 4:30-5:45 throughout the academic year. Talks are given before a live audience in Room B03 in the basement of the Gates Computer Science Building on the Stanford Campus. The live talks (and the videos hosted at Stanford and on YouTube) are open to the public.
Oriol Vinyals is a Research Scientist at Google. He works in deep learning with the Google Brain team. Oriol holds a Ph.D. in EECS from the University of California, Berkeley, and a Master's degree from the University of California, San Diego. He is a recipient of the 2011 Microsoft Research PhD Fellowship. He was an early adopter of the new deep learning wave at Berkeley, and in his thesis he focused on non-convex optimization and recurrent neural networks. At Google Brain he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.