Stanford EE

Data Compression with Neural Fields

Hyunjik Kim and Jonathan Schwarz (Google)
Date: Dec 1
Location: Zoom only

Abstract: Deep Learning-based Data Compression using non-linear transform coding has recently attracted significant attention due to its ability to achieve strong rate-distortion (RD) curves. Unfortunately, many existing methods rely on large and expressive architectures with high decoding complexity, rendering these techniques impractical for many applications.

In this talk, we instead approach this problem through a functional view of data using Neural Fields, also known as Implicit Neural Representations and popularized by NeRFs. We first introduce the general concept of Neural Fields as an alternative to multi-dimensional arrays for data representation and give a brief overview of foundational work in the field. We then cover a series of approaches that leverage this perspective for Neural Data Compression across a wide range of modalities, touching on architectural improvements, spatial representations, sparse neural networks and meta-learning. Finally, we conclude by introducing C3, our latest neural compression method for images and videos, and show that it matches the RD performance of Video Compression Transformers on the UVG video benchmark while using less than 0.1% of their MACs/pixel for decoding, pointing to a promising avenue for overcoming one of the major open problems in Neural Compression.
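
For intuition, the sketch below (a minimal PyTorch illustration, not the speakers' method; all names and hyperparameters are illustrative) shows the basic neural-field compression recipe: a small MLP is overfit to map pixel coordinates to colour values, and its (quantized) weights, rather than the pixel array, serve as the compressed representation. Decoding is simply a forward pass over the coordinate grid.

```python
# Minimal sketch: represent an image as a small coordinate MLP (a neural field)
# and treat the network weights as the compressed code. Illustrative only.
import torch
import torch.nn as nn


class NeuralField(nn.Module):
    """Tiny coordinate MLP: (x, y) in [-1, 1]^2 -> (r, g, b)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, coords):
        return self.net(coords)


def fit_image(image, steps=2000, lr=1e-3):
    """Overfit a neural field to a single image tensor of shape (H, W, 3)."""
    h, w, _ = image.shape
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)
    coords = grid.reshape(-1, 2)    # (H*W, 2) pixel coordinates
    targets = image.reshape(-1, 3)  # (H*W, 3) pixel values

    field = NeuralField()
    opt = torch.optim.Adam(field.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((field(coords) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    # The compressed representation is (a quantized version of) field.state_dict();
    # decoding the image is just field(coords) reshaped back to (H, W, 3).
    return field


if __name__ == "__main__":
    dummy = torch.rand(32, 32, 3)  # stand-in for a real image
    field = fit_image(dummy, steps=200)
    n_params = sum(p.numel() for p in field.parameters())
    print(f"stored parameters: {n_params} vs. raw pixel values: {32 * 32 * 3}")
```

For realistic images and videos the field is made far smaller than the raw data, and the approaches covered in the talk (better architectures, spatial representations, sparsity, meta-learned initializations) are about making that trade-off competitive with classical and learned codecs.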

Bios: 

Hyunjik Kim is a Research Scientist at DeepMind in London, working on various topics in Deep Learning. His recent research centres on Neural Fields, in particular using them for data compression and doing deep learning directly on the compressed representation rather than on traditional array data (e.g. pixels). Hyunjik has made contributions to a wide variety of other topics, including equivariant deep learning, theoretical properties of self-attention, unsupervised representation learning and scaling up Gaussian Processes. He obtained his PhD in Machine Learning from the University of Oxford under the supervision of Yee Whye Teh, and holds a B.A. in Mathematics and an M.Math from the University of Cambridge. He has been a research intern at Microsoft Research Cambridge and at DeepMind.

Jonathan Richard Schwarz is a Postdoctoral Research Fellow at Harvard University, focusing on various forms of Efficient Machine Learning and their applications to Neural Data Compression and Biomedicine. Jonathan has made contributions to Meta- and Continual Learning, Sparsity in Neural Networks, Implicit Neural Representations and Data Pruning. Previously, he was a Senior Research Scientist at DeepMind. He graduated from the joint DeepMind-UCL PhD programme under the supervision of Yee Whye Teh and Peter Latham, spent two years at the Gatsby Computational Neuroscience Unit, and graduated top of his class from the University of Edinburgh.