Deep learning
Steerable Partial Differential Operators for Equivariant Neural Networks
We present a framework for equivariant partial differential operators, generalizing existing approaches and narrowing the gap between PDOs and convolutions.
Erik Jenner
,
Maurice Weiler
Deep implicit layers
Several new neural network architectures, such as Neural ODEs and deep equilibrium models, can be understood as replacing classical layers, which explicitly specify how to compute the output, with implicit layers. An implicit layer instead describes the conditions its output should satisfy and leaves the actual computation to a solver, which can be chosen freely. This post is a brief introduction to the main ideas behind implicit layers.
Mar 3, 2021
6 min read
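The idea can be made concrete with a toy sketch (not from the post itself): an implicit layer whose output is defined by the fixed-point condition z* = tanh(W z* + x), computed here with plain fixed-point iteration. All names (`W`, `x`, `tol`) are illustrative, and the small scale on `W` is an assumption that makes the iteration a contraction.

```python
import numpy as np

def implicit_layer(x, W, tol=1e-6, max_iter=1000):
    """Solve the fixed-point condition z = tanh(W @ z + x) by iteration.

    The layer is defined by the condition its output satisfies, not by an
    explicit forward computation; any solver could be swapped in here.
    """
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))  # small norm, so the map is a contraction
x = rng.standard_normal(4)
z_star = implicit_layer(x, W)
# The defining condition holds at the solution:
print(np.allclose(z_star, np.tanh(W @ z_star + x), atol=1e-5))  # → True
```

A deep equilibrium model works on the same principle, but with a learned transformation in place of `tanh(W z + x)` and a more sophisticated solver.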
VAEs from a generative perspective
Variational autoencoders are usually introduced as a probabilistic extension of autoencoders with regularization. An alternative view is that the encoder arises naturally as a tool for efficiently training the decoder. This is the perspective I take in this post, deriving VAEs without assuming an autoencoder architecture a priori.
Jan 6, 2021
12 min read
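Whichever perspective one starts from, training ends up maximizing the same objective, the ELBO. Below is a minimal numerical sketch of that objective for a Gaussian encoder q(z|x) = N(mu, sigma²) with a standard normal prior and a unit-variance Gaussian decoder; the function and argument names are illustrative, not taken from the post.

```python
import numpy as np

def elbo(x, mu, log_var, decode):
    """Single-sample estimate of the evidence lower bound for one input x."""
    # Reparameterization trick: sample z ~ q(z|x) = N(mu, exp(log_var))
    eps = np.random.default_rng(0).standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps
    # Reconstruction term: log p(x|z) for a unit-variance Gaussian decoder
    # (up to an additive constant).
    x_hat = decode(z)
    recon = -0.5 * np.sum((x - x_hat) ** 2)
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon - kl
```

Under the generative reading, the reconstruction term is what actually trains the decoder, and the KL term is the price of using the encoder as the sampling distribution rather than the prior.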