Transfer learning for classification and prediction of time series for next generation networks
Abstract
Transfer learning (TL) is a useful technique that allows neural networks to be widely reused after re-adaptation of their weights. In this paper, two transfer learning methods are introduced for time-series neural networks: D-LSTM (a Long Short-Term Memory network with deep layers) and CNN-1D (a one-dimensional Convolutional Neural Network). The first improves the prediction of time series when datasets are too small to obtain satisfactory results on their own. The second enables the personalization, and hence re-adaptation, of an already-trained network to a new class of time series. The CNN-1D classifier is applied to real datasets to distinguish different classes of behavior in a large city. We show that our architecture drastically improves prediction when transfer learning is applied within the same class of behavior and also across different classes of behavior.
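The weight re-adaptation step summarized above can be sketched in code: pre-train an LSTM predictor on a large source dataset, copy its weights into an identical network, freeze the recurrent layers, and fine-tune only the output head on the small target dataset. This is a minimal illustration assuming a Keras-style implementation; the layer sizes, window length, freezing strategy, and the placeholder random data are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical input shape: windows of 24 time steps, 1 feature each.
WINDOW, FEATURES = 24, 1

def build_lstm():
    # Small stacked-LSTM regressor (illustrative architecture).
    model = models.Sequential([
        layers.Input(shape=(WINDOW, FEATURES)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(32),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# 1) Pre-train on a large source dataset (random placeholder data here).
x_src = np.random.rand(1000, WINDOW, FEATURES)
y_src = np.random.rand(1000, 1)
source_model = build_lstm()
source_model.fit(x_src, y_src, epochs=2, verbose=0)

# 2) Transfer: copy the pre-trained weights, freeze the recurrent layers,
#    and fine-tune only the output layer on the small target dataset.
target_model = build_lstm()
target_model.set_weights(source_model.get_weights())
for layer in target_model.layers[:-1]:
    layer.trainable = False
target_model.compile(optimizer="adam", loss="mse")  # recompile after freezing

x_tgt = np.random.rand(50, WINDOW, FEATURES)
y_tgt = np.random.rand(50, 1)
target_model.fit(x_tgt, y_tgt, epochs=2, verbose=0)
```

Freezing all but the last layer is only one possible re-adaptation strategy; depending on how similar the source and target behaviors are, more layers can be left trainable during fine-tuning.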