dc.description.abstract |
"Machine Learning (ML) has led to the building of cutting-edge Artificial Intelligence (AI) systems to solve complex real-world problems with less human interventions. The Artificial Neural Network (ANN) is one of the popular approaches used in ML to implement solutions to complex problems. Deep Neural Networks (DNNs),capable of learning and represent complex features and relationships from the given data set aiming at producing better solutions to problems than typical ANNs. The advanced DNNs claimed to cope with highly dynamic environments. However, continuous learning is an essential requirement for dynamic adaptations. One of the inherent problems of DNN in continuous adaptation is that it leads the Neural Network (NN) to forget the previously learned information. This phenomenon in DNN is known as Catastrophic Forgetting (CF).CF is defined as the tendency of many ML algorithms to forget previous tasks when they learn new ones. Therefore, CF is a significant barrier in deep-learning models.
This research is aimed at detecting CF in Convolutional Neural Networks (CNNs), exploring a number of solutions reported to reduce the effect of CF, and developing an approach to increase the accuracy of CNNs by diminishing the effect of CF. Several techniques exist to overcome CF in NNs. For some real-world use cases, it is extremely difficult to find labeled datasets, and without a properly labeled dataset, supervised learning-based CF mechanisms are less appealing for such use cases. In contrast, unsupervised CF signals cannot utilize the full benefit of a labeled dataset. Therefore, this research primarily targets using a signal from self-supervised mechanisms to overcome CF in CNNs, specifically by injecting the signal into the CNN. However, the same mechanism can be generalized to overcome CF in other NNs." |
en_US |