Several new neural network architectures, such as Neural ODEs and
deep equilibrium models, can be understood as replacing classical layers,
which explicitly specify how to compute the output, with implicit layers.
An implicit layer instead describes the conditions its output should satisfy
and leaves the actual computation to a solver, which can be chosen freely.
This post contains a brief introduction to the main ideas behind implicit layers.
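To make the idea concrete, here is a minimal toy sketch of an implicit layer as a fixed-point condition. The names (`implicit_layer`, the scalar parameter `a`) and the specific equation z = tanh(a·z + x) are illustrative assumptions, not taken from any particular library; the point is only that the layer states a condition on z and a generic solver (here, naive fixed-point iteration) finds it.

```python
import math

def implicit_layer(x, a=0.5, tol=1e-10, max_iter=1000):
    """Toy implicit layer: the output z is *defined* by the condition
    z = tanh(a * z + x), not by an explicit forward formula.
    Here we find z by naive fixed-point iteration; any other solver
    (e.g. Newton's method) could be swapped in without changing
    what the layer means."""
    z = 0.0
    for _ in range(max_iter):
        z_next = math.tanh(a * z + x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

z = implicit_layer(1.0)
# The returned value satisfies the defining condition:
print(abs(z - math.tanh(0.5 * z + 1.0)) < 1e-9)
```

Because |a| < 1 and tanh is 1-Lipschitz, the iteration is a contraction and converges regardless of the starting point, which is what lets us treat the solver as an interchangeable black box.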