A new algorithm for training sparse autoencoders
Data representation plays an important role in the performance of machine learning algorithms. Since raw data often lacks the desired qualities, many efforts have been made to obtain more suitable representations. Among the many approaches, sparse representation has gained popularity in recent years. In this paper, we propose a new sparse autoencoder obtained by imposing the square of the smoothed L0 norm of the data representation on the hidden layer of a regular autoencoder. The square of the smoothed L0 norm increases the tendency of each data representation to be "individually" sparse. Moreover, with the proposed sparse autoencoder, once the model parameters are learned, the sparse representation of any new input is obtained by a single matrix-vector multiplication, without solving any optimization problem. When applied to the MNIST, CIFAR-10, and OPTDIGITS datasets, the results show that the proposed model guarantees a sparse representation for each input, which leads to better classification results.
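To make the objective concrete, the following is a minimal NumPy sketch of the idea described above: an autoencoder reconstruction loss augmented with the square of a smoothed L0 penalty on the hidden code. The Gaussian-based smoothing `1 - exp(-h^2 / (2*sigma^2))`, the `tanh` encoder, and the weights `lam` and `sigma` are illustrative assumptions, not the paper's exact formulation; the key property shown is that encoding a new input is just one matrix-vector multiplication.

```python
import numpy as np

def smoothed_l0(h, sigma=0.1):
    # Smoothed L0 norm: sum_i (1 - exp(-h_i^2 / (2 sigma^2))).
    # As sigma -> 0 this approaches the exact L0 count of
    # nonzero entries in h; larger sigma gives a smoother surrogate.
    return np.sum(1.0 - np.exp(-h**2 / (2.0 * sigma**2)))

def sparse_ae_loss(x, W_enc, b_enc, W_dec, b_dec, lam=0.01, sigma=0.1):
    # Encoder: a single matrix-vector multiply (plus nonlinearity),
    # so a new input is mapped to its sparse code directly,
    # with no per-sample optimization at inference time.
    h = np.tanh(W_enc @ x + b_enc)          # hidden (sparse) code
    x_hat = W_dec @ h + b_dec               # reconstruction
    recon = np.sum((x - x_hat) ** 2)        # squared reconstruction error
    # Penalty: the SQUARE of the smoothed L0 norm of the hidden code,
    # encouraging each representation to be individually sparse.
    penalty = smoothed_l0(h, sigma) ** 2
    return recon + lam * penalty, h

# Tiny usage example with random weights (for illustration only).
rng = np.random.default_rng(0)
d, k = 8, 4                                  # input dim, hidden dim
W_enc, b_enc = rng.standard_normal((k, d)), np.zeros(k)
W_dec, b_dec = rng.standard_normal((d, k)), np.zeros(d)
x = rng.standard_normal(d)
loss, code = sparse_ae_loss(x, W_enc, b_enc, W_dec, b_dec)
```

In a real setup, the loss would be minimized over `W_enc`, `b_enc`, `W_dec`, `b_dec` by gradient descent over the training set; only the encoder's forward pass is needed afterward.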