Machine learning and communications are intrinsically connected. The fundamental problem of communications, as stated by Shannon, "reproducing at one point either exactly or approximately a message selected at another point," can be viewed as a classification problem: the receiver must decide which of the possible messages was transmitted from a noisy observation. With this connection in mind, I will focus on the fundamental joint source-channel coding problem using modern machine learning techniques. I will introduce uncoded "analog" schemes for wireless image transmission, and show their surprising performance through both simulations and a practical implementation. This result will motivate unsupervised learning techniques for wireless image transmission, leading to a "deep joint source-channel encoder" architecture that behaves similarly to analog transmission: it not only improves upon state-of-the-art digital schemes, but also degrades gracefully as channel quality decreases, and performs exceptionally well over fading channels without explicit pilot signals or channel state estimation.
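As a rough illustration of what such an end-to-end pipeline looks like, the sketch below maps image pixels directly to power-normalized channel symbols, passes them through an AWGN channel, and reconstructs the image at the receiver. The random linear maps, dimensions, and SNR values are placeholder assumptions standing in for the talk's trained convolutional encoder and decoder, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder linear maps standing in for the learned encoder/decoder networks.
n_pixels, n_symbols = 256, 64          # bandwidth compression: 64 channel uses for 256 pixels
W_enc = rng.normal(size=(n_symbols, n_pixels)) / np.sqrt(n_pixels)
W_dec = np.linalg.pinv(W_enc)

def encode(image_vec, avg_power=1.0):
    """Map the image directly to channel symbols, normalized to the power budget."""
    z = W_enc @ image_vec
    return z * np.sqrt(avg_power * n_symbols) / np.linalg.norm(z)

def awgn(x, snr_db):
    """Additive white Gaussian noise channel at the given SNR."""
    noise_power = 10 ** (-snr_db / 10)
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

def decode(y):
    """Map noisy channel outputs back to a pixel-domain reconstruction."""
    return W_dec @ y

image = rng.uniform(size=n_pixels)     # flattened toy "image"
for snr_db in (0, 10, 20):
    recon = decode(awgn(encode(image), snr_db))
    mse = np.mean((image - recon) ** 2)
    print(f"SNR {snr_db:2d} dB -> MSE {mse:.4f}")   # distortion varies smoothly with SNR
```

Because the channel symbols are a direct analog function of the source, the reconstruction error changes smoothly with the channel SNR rather than collapsing below a threshold, which is the graceful-degradation behavior described above.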
In the second part of the talk, I will focus on distributed machine learning, particularly in wireless edge networks, and show that ideas from coding and communication theory can help improve its performance. Finally, I will introduce the novel concept of "over-the-air stochastic gradient descent" for wireless edge learning, and show that it significantly improves the efficiency of machine learning across bandwidth- and power-limited wireless devices compared to the standard digital approach that separates computation from communication. This will bring the talk full circle, making another strong case for analog communication in future communication systems.
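A minimal numerical sketch of the over-the-air idea is given below, under assumed toy parameters (a least-squares model, ten devices, and an additive Gaussian noise term standing in for the multiple access channel). It is not the talk's actual scheme, which must also address power control and fading; it only shows how the channel's superposition of simultaneously transmitted analog gradients directly yields the aggregate needed for a gradient step.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_gradient(w, data):
    """Each device computes a gradient on its own data (toy least-squares loss)."""
    X, y = data
    return 2 * X.T @ (X @ w - y) / len(y)

# Hypothetical setup: K devices, a d-dimensional model, local datasets of size n_local.
K, d, n_local = 10, 5, 20
w_true = rng.normal(size=d)
datasets = []
for _ in range(K):
    X = rng.normal(size=(n_local, d))
    datasets.append((X, X @ w_true + 0.1 * rng.normal(size=n_local)))

w = np.zeros(d)
lr, noise_std = 0.05, 0.05
for step in range(200):
    # Over-the-air aggregation: all devices transmit their analog gradients
    # simultaneously; the wireless channel superimposes them, so the server
    # receives the (noisy) sum in a single channel use per model dimension.
    superposed = sum(local_gradient(w, data) for data in datasets)
    received = superposed + noise_std * rng.normal(size=d)   # receiver noise
    w -= lr * received / K                                   # averaged gradient step
print("distance to optimum:", np.linalg.norm(w - w_true))
```

The contrast with the standard digital approach is that here the channel itself performs the summation, rather than each device quantizing, encoding, and transmitting its gradient on orthogonal resources before the server decodes and averages them.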