Sequential Learning of Sparse ResNet Blocks
Applied Mathematics and Information Science
Neural network sparsification is an active field of research with several practical applications, such as model compression for mobile devices. There are two main approaches: weight pruning and Bayesian sparsification. The latter generally produces better results and allows models to be trained from scratch; however, this does not hold for deep models: experiments show that training deep convolutional networks with sparse variational dropout fails to converge. This graduation thesis proposes layerwise learning as a workaround and explores its applicability to residual networks. Experiments show that the sequential sparsification approach can be applied to deep residual networks, and that models sparsified this way achieve high compression ratios without serious accuracy loss.
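To make the approach concrete, below is a minimal sketch, assuming PyTorch, of how sparse variational dropout (Molchanov et al., 2017) could be combined with sequential, block-by-block training. The class SparseVDLinear, the function train_blockwise, the pruning threshold, and all hyperparameters are illustrative assumptions for exposition, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVDLinear(nn.Module):
    """Linear layer whose weights carry a learned per-weight dropout rate alpha.
    (Illustrative sketch of sparse variational dropout, Molchanov et al., 2017.)"""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def _log_alpha(self):
        # log alpha = log(sigma^2 / theta^2), clamped away from log(0)
        return self.log_sigma2 - 2.0 * self.weight.abs().clamp_min(1e-8).log()

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample pre-activations, not weights
            mean = F.linear(x, self.weight, self.bias)
            var = F.linear(x ** 2, self.log_sigma2.exp()) + 1e-8
            return mean + var.sqrt() * torch.randn_like(mean)
        # At test time, prune weights with high dropout rate (log alpha > 3)
        mask = (self._log_alpha() < 3.0).float()
        return F.linear(x, self.weight * mask, self.bias)

    def kl(self):
        # Approximation of the KL term from Molchanov et al. (2017)
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        log_alpha = self._log_alpha()
        neg_kl = (k1 * torch.sigmoid(k2 + k3 * log_alpha)
                  - 0.5 * F.softplus(-log_alpha) - k1)
        return -neg_kl.sum()

def train_blockwise(blocks, head, loader, epochs_per_block=3):
    """Train one block at a time, freezing all previously trained blocks,
    so each block is sparsified on top of an already-converged prefix."""
    for i, block in enumerate(blocks):
        for prev in blocks[:i]:
            for p in prev.parameters():
                p.requires_grad_(False)
        opt = torch.optim.Adam(
            list(block.parameters()) + list(head.parameters()), lr=1e-3)
        for _ in range(epochs_per_block):
            for x, y in loader:
                out = x
                for b in blocks[:i + 1]:
                    out = b(out)
                logits = head(out)
                # ELBO: data term plus KL of the currently trained block
                kl = sum(m.kl() for m in block.modules()
                         if isinstance(m, SparseVDLinear))
                loss = F.cross_entropy(logits, y) + kl / len(loader.dataset)
                opt.zero_grad()
                loss.backward()
                opt.step()
```

Freezing the already-sparsified earlier blocks is what makes the procedure sequential: each block's variational parameters are optimized against a fixed prefix, sidestepping the convergence failure observed when the whole deep network is trained with sparse variational dropout end to end.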