We know that a rotated fish is still a fish. How can we design neural network architectures that respect such facts (symmetries)? Equivariant neural networks leverage knowledge of the symmetries of a learning problem to improve data efficiency, and have been shown to enjoy better scaling behavior than non-equivariant methods on problems with symmetries.
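Equivariance means the map commutes with the group action: transforming the input and then applying the map gives the same result as applying the map and then transforming the output. A minimal sketch (using cyclic shifts of a 1-D signal as the symmetry group, and circular convolution as the equivariant map; names are illustrative):

```python
import numpy as np

def shift(x, s):
    """Group action: cyclically shift a 1-D signal by s positions."""
    return np.roll(x, s)

def circ_conv(x, k):
    """Circular convolution: y[i] = sum_j x[j] * k[(i - j) mod n].
    This map is shift-equivariant by construction."""
    n = len(x)
    return np.array([sum(x[j] * k[(i - j) % n] for j in range(n))
                     for i in range(n)])

x = np.array([1., 2., 3., 4., 5.])
k = np.array([1., 0., -1., 0., 0.])

# Equivariance check: shift-then-convolve == convolve-then-shift
lhs = circ_conv(shift(x, 2), k)
rhs = shift(circ_conv(x, k), 2)
print(np.allclose(lhs, rhs))  # True
```

Convolutional layers are the classic instance of this idea (translation equivariance); equivariant architectures generalize it to rotations, reflections, and other groups.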
The manifold hypothesis states that real-life high-dimensional data of interest often lies on an unknown lower-dimensional manifold embedded in the ambient space.
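A minimal sketch of the idea (all names and numbers are illustrative): points on a 1-D manifold (a circle) are embedded in a 10-dimensional ambient space, and the singular values of the data matrix reveal that the data effectively occupies far fewer directions than the ambient dimension suggests.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)           # 1-D intrinsic coordinate
circle = np.stack([np.cos(t), np.sin(t)])    # circle living in R^2
A = rng.normal(size=(10, 2))                 # linear embedding into R^10
X = (A @ circle).T                           # ambient data, shape (500, 10)

# Singular values of the centered data: only the first two are nonzero,
# even though each data point has 10 coordinates.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
print(s[2] / s[0])  # ~0: the data spans only a 2-D subspace of R^10
```

Real data is of course curved and noisy rather than exactly linear, which is why nonlinear dimensionality reduction and deep learning come into play.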
Can we have a certificate of guarantee, in a quality-control sense, for neural networks? No. Neural networks are too expressive ("wiggly"): they can fit almost anything, including noise, so worst-case guarantees are out of reach. But all hope is not lost: neural networks combined with human supervision and good design lead to faster workflows.
Let's learn (almost) all the rules of badminton via an exhibition match.