Marcelo Romero , Matheus Gutoski , Leandro Takeshi Hattori , Manassés Ribeiro & Heitor Silvério Lopes
Abstract: Transfer learning is a paradigm in which classifiers are trained and tested on datasets drawn from distinct distributions. This technique makes it possible to solve a particular problem using a model that was trained for another purpose. In recent years, this practice has become very popular due to the increasing number of publicly available pre-trained models that can be fine-tuned for different scenarios. However, the relationship between the dataset used to train the model and the test data is usually not addressed, especially when the fine-tuning process updates only the fully connected layers of a Convolutional Neural Network with pre-trained weights. This work presents a study of the relationship between the datasets used in a transfer learning process, in terms of the performance achieved by the models and the complexity and similarity of the datasets. For this purpose, we fine-tune the final layer of Convolutional Neural Networks with pre-trained weights using diverse soft biometrics datasets. We evaluate the performance of the models when tested on datasets different from the one used for training, and complexity and similarity metrics are also used in the evaluation.
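The fine-tuning setup described in the abstract (frozen pre-trained layers, with only the final fully connected layer trained on the new dataset) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frozen random projection stands in for a pre-trained CNN backbone, and the toy data and labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained convolutional backbone: a fixed (frozen) random
# projection followed by ReLU. In the paper this would be a CNN with
# pre-trained weights; the placeholder keeps the sketch self-contained.
W_frozen = rng.normal(size=(20, 8)) / np.sqrt(20)

def extract_features(x):
    # Frozen layers: never updated during fine-tuning.
    return np.maximum(x @ W_frozen, 0.0)

# Toy data whose labels are linearly separable in the frozen feature space.
X = rng.normal(size=(200, 20))
F_all = extract_features(X)
y = (F_all[:, 0] > np.median(F_all[:, 0])).astype(float)

# Fine-tuning: gradient descent on the final (fully connected) layer only.
w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(1000):
    F = extract_features(X)
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # logistic output
    w -= lr * (F.T @ (p - y)) / len(y)      # cross-entropy gradient on w
    b -= lr * np.mean(p - y)                # cross-entropy gradient on b

accuracy = np.mean((p > 0.5) == (y == 1))
print(f"fine-tuned final-layer accuracy: {accuracy:.2f}")
```

Evaluating the same trained `w, b` on features extracted from a different dataset mirrors the paper's cross-dataset test protocol, where train and test data come from distinct distributions.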
Keywords: Neural Network, Convolutional Neural Network, Transfer Learning, Soft Biometrics, Data Complexity, Data Similarity.
DOI code: 10.21528/lnlm-vol18-no2-art5
PDF file: vol18-no2-art5.pdf
BibTex file: vol18-no2-art5.bib