
In this paper, I investigate the use of a disentangled VAE for downstream image classification tasks. I train a disentangled VAE in an unsupervised manner, and use the learned encoder as a feature extractor on top of which a linear classifier is learned. The models are trained and evaluated on the MNIST handwritten digits dataset. Experi-
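The pipeline described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual architecture: the layer sizes, latent dimensionality, and beta value are assumptions, and random tensors stand in for MNIST batches. A beta-VAE is trained without labels by minimizing reconstruction error plus a beta-weighted KL term; the frozen encoder mean then serves as the feature vector for a linear classifier.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """Minimal beta-VAE sketch (hypothetical sizes: 784 -> 256 -> 10)."""
    def __init__(self, in_dim=784, hidden=256, z_dim=10, beta=4.0):
        super().__init__()
        self.beta = beta  # beta > 1 pressures the posterior toward the
                          # factorized prior, encouraging disentanglement
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        x_hat = self.dec(z)
        # Per-sample reconstruction term (Bernoulli likelihood on pixels)
        recon = F.binary_cross_entropy_with_logits(
            x_hat, x, reduction="sum") / x.size(0)
        # Closed-form KL between diagonal Gaussian posterior and N(0, I)
        kl = -0.5 * torch.sum(
            1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        return recon + self.beta * kl

# Stand-in batch (real experiments would use MNIST images in [0, 1])
x = torch.rand(32, 784)
vae = BetaVAE()
loss = vae(x)  # unsupervised objective minimized during VAE training

# Linear probe: frozen encoder mean as features, 10 MNIST classes
with torch.no_grad():
    features, _ = vae.encode(x)
probe = nn.Linear(10, 10)
logits = probe(features)  # trained separately with cross-entropy on labels
```

In the probe stage, only the linear layer's weights are updated; the encoder stays fixed, so classification accuracy directly measures how linearly separable the learned latent representation is.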