Dates
Friday, January 24, 2020 - 11:00am to Friday, January 24, 2020 - 12:00pm
Location
SCGP 313
Event Description

TITLE: Towards a Theory of Encoder/Decoder Architectures by Andrej Risteski of CMU

ABSTRACT: A common choice of architecture in representation learning (i.e., learning a good embedding of the data) is an encoder/decoder architecture, which maps part of the input into a good latent representation (via an encoder) and predicts the remaining part of the input (via a decoder). Two common examples are universal machine translation, where one tries to learn to translate between any pair in a set of languages via a common latent language, given paired corpora for only some of the pairs; and contextual encoders, where one tries to predict part of an image given the rest of the image.
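To make the encoder/decoder setup concrete, here is a minimal sketch of the "predict the rest of the input" pattern described above. All names, dimensions, and the linear/tanh maps are illustrative assumptions, not taken from the talk; the point is only the division of labor: the encoder sees the observed part, the decoder predicts the held-out part.

```python
import numpy as np

# Minimal encoder/decoder sketch (illustrative assumption, not the talk's model).
rng = np.random.default_rng(0)

d_visible, d_hidden, d_latent = 8, 8, 3  # observed part, held-out part, latent size

# Encoder weights: map the observed part of the input to a latent representation.
W_enc = rng.standard_normal((d_latent, d_visible)) * 0.1
# Decoder weights: predict the remaining (held-out) part from the latent code.
W_dec = rng.standard_normal((d_hidden, d_latent)) * 0.1

def encode(x_visible):
    # Nonlinear map from the observed part to the latent representation.
    return np.tanh(W_enc @ x_visible)

def decode(z):
    # Linear prediction of the held-out part from the latent code.
    return W_dec @ z

# Split one input vector into an observed part and a part to predict.
x = rng.standard_normal(d_visible + d_hidden)
x_visible, x_hidden = x[:d_visible], x[d_visible:]

z = encode(x_visible)   # latent representation of the observed part
x_hat = decode(z)       # prediction of the held-out part
loss = float(np.mean((x_hat - x_hidden) ** 2))  # reconstruction error on the held-out part
```

In the image case, `x_visible` plays the role of the surrounding context and `x_hidden` the masked region; in translation, the latent code `z` plays the role of the common latent language shared across language pairs.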

We will give a framework for analyzing the sample complexity of such architectures -- i.e., for how many pairs of languages do we need paired corpora? How many image prediction tasks do we have to solve to get a good representation?

Event Title
Talk: Towards a Theory of Encoder/Decoder Architectures by Andrej Risteski