Effects of Image Resolution on Accuracy and Robustness in CNNs and Smoothed Classifiers
Daniel A. Nissan
Abstract
Generalisation matters where training data ends and test data begins. Deep neural networks (DNNs) can easily memorise a training set, but partitioning the feature space too strictly leads to poor performance on novel data points, while an overly simple decision boundary, such as a linear separation, usually fails to fit even the training data.
This bias-variance trade-off motivates the study of generalisation, which allows us to walk the fine line between overfitting and underfitting. Many strategies can improve the generalisation of machine learning models, including data augmentation, the choice of activation function, hyperparameter tuning, early stopping, and batch normalisation.
However, the accuracy of DNNs is far more reliable than their robustness to perturbed inputs. Randomized smoothing is a simple and promising technique for obtaining certified robustness. In this paper, I analyse the accuracy and robustness of CNN classifiers on the MNIST dataset before and after the application of randomized smoothing.
Furthermore, I examine how this behaviour extrapolates to different input space dimensionalities and network architectures. The results show that modern CNN architectures can correctly classify noised MNIST samples even under noise far heavier than is typical for smoothing (σ = 255), completely flattening the accuracy-robustness trade-off one would usually expect between regular and smoothed classifiers, across all input scales.
I hypothesise that this is a consequence of the low complexity of MNIST, warranting further examination on more complex datasets.
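For concreteness, the smoothed classifier analysed here is the standard construction g(x) = argmax_c P(f(x + ε) = c) with ε ~ N(0, σ²I). The following is a minimal sketch of its Monte Carlo prediction rule, assuming a hypothetical PyTorch base classifier `f` that maps image batches to logits; the certification step, which requires a statistical test on the vote counts, is omitted.

```python
import torch

def smoothed_predict(f, x, sigma, n=1000, num_classes=10):
    """Monte Carlo estimate of the smoothed prediction
    g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I).

    f: base classifier mapping an image batch to logits (assumed).
    x: a single input tensor, e.g. a 1x28x28 MNIST image.
    sigma: noise standard deviation, in the same units as the
           pixel values (e.g. sigma = 255 for images in [0, 255]).
    """
    with torch.no_grad():
        batch = x.unsqueeze(0).repeat(n, 1, 1, 1)   # n copies of x
        noise = torch.randn_like(batch) * sigma     # isotropic Gaussian
        preds = f(batch + noise).argmax(dim=1)      # base predictions
        counts = torch.bincount(preds, minlength=num_classes)
    return counts.argmax().item()                   # majority vote

# Usage (hypothetical trained model `model`):
#   label = smoothed_predict(model, image, sigma=255.0)
```

For large n, the noisy copies would in practice be evaluated in smaller chunks to bound memory; a single batch keeps the sketch short.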
Keywords
Machine Learning
CNNs
Randomized Smoothing
Robustness
MNIST