This is how a neural network learns to add, multiply and compare handwritten digits WITHOUT knowing their values


In a previous post, I described how useful autoencoders are for automated labeling. The main property of these networks is their ability to learn features/patterns in the data. This is in fact not specific to autoencoders and can also be achieved with other unsupervised techniques, such as PCA.
The ability to detect and learn features in data can be applied in many other areas.

In this post, I will present some applications of convolutional autoencoders:

  • First, a convolutional autoencoder will be trained on MNIST data.
  • After the training of the encoder and decoder, we will freeze their weights and use them with additional dense layers to "learn" arithmetic operations, namely addition, multiplication and comparison.
    The trick is to never explicitly associate the handwritten digits in the MNIST dataset with their respective labels. We will see that the neural networks will nevertheless be able to reach 97+% accuracy in all cases on unseen data.

The first step of the design is described in the following diagram:

[Diagram: step 1, training a convolutional autoencoder (encoder + decoder) to reconstruct MNIST images]
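To make this first step concrete, here is a minimal sketch of such a convolutional autoencoder in Keras/TensorFlow. The layer sizes and the 32-dimensional latent code are illustrative assumptions, not necessarily the exact architecture used later in this post.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST images only -- the digit labels are never used in this step.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0   # (60000, 28, 28, 1)
x_test = x_test.astype("float32")[..., None] / 255.0

# Encoder: 28x28x1 image -> compact latent vector (dimension is an assumption).
encoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),  # 14x14
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),  # 7x7
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
], name="encoder")

# Decoder: latent vector -> reconstructed 28x28x1 image.
decoder = models.Sequential([
    layers.Input(shape=(32,)),
    layers.Dense(7 * 7 * 32, activation="relu"),
    layers.Reshape((7, 7, 32)),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
], name="decoder")

autoencoder = models.Sequential([encoder, decoder], name="autoencoder")
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# The reconstruction target is the input itself -- fully unsupervised training.
autoencoder.fit(x_train, x_train, epochs=10, batch_size=128,
                validation_data=(x_test, x_test))
```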

In the second step, we will use the encoder in series with dense layers to perform arithmetic operations: addition, multiplication and comparison. We will train only the dense layer weights, and supply the results of the operations as labels. Note that we will not supply the digit values (labels).

[Diagram: step 2, the frozen encoder in series with trainable dense layers, trained only on the results of the operations]
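As a sketch of this second step (assuming the `encoder` variable trained in the previous block), the model below freezes the encoder, encodes two digit images, and trains only the dense layers to predict their sum. The head sizes and the 19-class softmax over possible sums (0 to 18) are illustrative choices of mine, not necessarily the author's configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

encoder.trainable = False   # freeze the pretrained encoder weights

# Two handwritten digits go in; only the *result* of the operation is the label.
img_a = layers.Input(shape=(28, 28, 1))
img_b = layers.Input(shape=(28, 28, 1))
code = layers.Concatenate()([encoder(img_a), encoder(img_b)])

x = layers.Dense(128, activation="relu")(code)
x = layers.Dense(64, activation="relu")(x)
sum_out = layers.Dense(19, activation="softmax")(x)   # possible sums: 0..18

adder = models.Model([img_a, img_b], sum_out)
adder.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Build training pairs: the digit labels are used only to compute the sum,
# which is the sole target the network ever sees.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
idx_a = np.random.permutation(len(x_train))
idx_b = np.random.permutation(len(x_train))
sums = y_train[idx_a] + y_train[idx_b]

adder.fit([x_train[idx_a], x_train[idx_b]], sums, epochs=10, batch_size=128)
```

For comparison, the same head could end in a single sigmoid unit trained with binary cross-entropy, and for multiplication in a softmax over the possible products; in every case only the operation's result is supplied as a label, never the digit values themselves.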

Training an autoencoder on MNIST data