(10 April, 09:00 CST, Room 416A, 4th Floor)
The design of codes for communicating reliably over a statistically well-defined channel is an important endeavor that involves deep mathematical research and has wide-ranging practical applications. Traditionally, it involves imposing structure on encoders (e.g., linearity of codes) and decoders (e.g., iterative decoding), which pares down the complexity of the space of encoders and decoders. These structures have entailed tremendous human effort and are the product of substantial ingenuity.
A natural question is whether some of the canonical codes can be “learned from data”. We will begin by illustrating recent successes in this vein. Specifically, we will cover the result that end-to-end learning of a code can recover the performance of turbo codes at moderate block lengths. In the second part, we will describe recent successes in discovering new codes and PHY algorithms that strictly improve upon the state of the art; examples include network settings, channels with feedback, and completely end-to-end neural communication algorithms. In terms of methodology, there is a wide spectrum of approaches, ranging from (a) using reinforcement learning to optimize existing codes in a near black-box fashion, to (b) neural augmentation, which introduces learnable components into existing communication algorithms, to (c) completely end-to-end learned neural algorithms.
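To make the idea of “learning a code from data” concrete, below is a minimal, self-contained sketch (not from the tutorial itself) in which the encoder is a classical repetition code over an AWGN channel and the decoder is a single logistic neuron trained by stochastic gradient descent on simulated (message, received word) pairs. The block length, noise level, learning rate, and training budget are all illustrative assumptions; the point is only that a learnable decoder recovers good performance from data, in the spirit of the neural-augmentation approach described above.

```python
import math
import random

random.seed(0)

N_REP = 5      # repetition-code block length (illustrative assumption)
SIGMA = 0.8    # AWGN noise standard deviation (assumed operating point)

def encode(bit):
    # Classical repetition encoder: map bit {0,1} to +/-1 BPSK symbols.
    s = 1.0 if bit else -1.0
    return [s] * N_REP

def channel(codeword):
    # AWGN channel: add independent Gaussian noise to each symbol.
    return [x + random.gauss(0.0, SIGMA) for x in codeword]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# "Learned" decoder: one logistic neuron over the received vector,
# trained with SGD on the cross-entropy loss.
w = [0.0] * N_REP
b = 0.0
LR = 0.1

for _ in range(5000):
    bit = random.randint(0, 1)
    r = channel(encode(bit))
    p = sigmoid(sum(wi * ri for wi, ri in zip(w, r)) + b)
    err = p - bit  # gradient of cross-entropy w.r.t. the pre-activation
    w = [wi - LR * err * ri for wi, ri in zip(w, r)]
    b -= LR * err

# Evaluate the bit error rate of the learned decoder.
TRIALS = 2000
errors = 0
for _ in range(TRIALS):
    bit = random.randint(0, 1)
    r = channel(encode(bit))
    p = sigmoid(sum(wi * ri for wi, ri in zip(w, r)) + b)
    errors += (p > 0.5) != bool(bit)

ber = errors / TRIALS
print(f"learned decoder BER: {ber:.4f}")
```

With enough training samples, the learned weights approach the matched filter (equal weighting of the repeated symbols), so the data-driven decoder matches the classical one; the end-to-end approaches covered in the tutorial additionally make the encoder learnable.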