Abstract:
"The fact which is initialized to be carried on to this research is the inability of blind people to recognize the facial emotions of others whom they are talking with by only considering their utterances. Someone’s utterance may not exactly expose the facial emotions every time.
The model processes thermal images. The first step is to build a machine-learning model and train it to a high accuracy. The preprocessing stage consists of several core functions: detecting faces with Haar cascade classifiers and saving each detected face into a separate folder per emotion; reading these images one by one; dividing each extracted face into nine unequal regions; segmenting every region into its three dominant colors, selecting the color that best represents the patch of temperature expansion, and computing its pixel density; and saving all of these values to a CSV file. In the next step of the model, a Support Vector Machine (SVM) classifies the emotions (a feature-extraction sketch follows below).
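A minimal sketch of this preprocessing stage, assuming OpenCV and scikit-learn. The particular cascade file, the unequal 3x3 grid used for the nine regions, the choice of k-means for the three dominant colors, and the "brightest cluster" heuristic for the temperature-expansion patch are all illustrative assumptions; the abstract does not specify these details.

```python
import csv
import cv2
import numpy as np
from sklearn.cluster import KMeans

# Haar cascade face detector shipped with OpenCV (an assumption; the
# abstract only mentions "HaarCascade Libraries" without naming a file).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_features(thermal_image_path):
    """Detect the face, split it into 9 unequal regions, and compute one
    pixel-density value per region from its dominant colors."""
    img = cv2.imread(thermal_image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    x, y, w, h = cascade.detectMultiScale(gray, 1.3, 5)[0]  # first face
    face = img[y:y + h, x:x + w]

    # Illustrative unequal 3x3 grid (forehead / eyes-nose / mouth bands);
    # the abstract does not say how the nine regions are laid out.
    row_edges = [0, int(0.3 * h), int(0.7 * h), h]
    col_edges = [0, int(0.25 * w), int(0.75 * w), w]
    features = []
    for r in range(3):
        for c in range(3):
            region = face[row_edges[r]:row_edges[r + 1],
                          col_edges[c]:col_edges[c + 1]]
            pixels = region.reshape(-1, 3).astype(np.float32)
            # Segment the region into its 3 dominant colors via k-means.
            km = KMeans(n_clusters=3, n_init=10).fit(pixels)
            # Assumed heuristic: treat the brightest cluster centre as the
            # temperature-expansion patch and use its share of the region's
            # pixels as the pixel-density feature.
            hottest = np.argmax(km.cluster_centers_.sum(axis=1))
            features.append(float(np.mean(km.labels_ == hottest)))
    return features

def append_row(csv_path, image_path, emotion_label):
    """Append one labelled feature row per image to the CSV training file."""
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow(extract_features(image_path) + [emotion_label])
```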
At this stage, the outcome of this research is an automated system that plays a music track matched to each emotion: when the user supplies a thermal image, the system classifies the emotion with the ML model, displays it on the screen, and plays the corresponding track.
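A corresponding sketch of the classification and playback stage, reusing the extract_features function from the preprocessing sketch above. The SVM kernel, the features.csv file name, the emotion labels, the track file paths, and the playsound library are all illustrative assumptions; the abstract does not name the player or the tracks.

```python
import pandas as pd
from playsound import playsound  # hypothetical choice of audio library
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical mapping from predicted emotion to a music track.
TRACKS = {"happy": "tracks/happy.mp3",
          "sad": "tracks/sad.mp3",
          "angry": "tracks/angry.mp3",
          "neutral": "tracks/neutral.mp3"}

# Load the feature rows written by the preprocessing stage; the last
# column is assumed to hold the emotion label.
data = pd.read_csv("features.csv", header=None)
X, y = data.iloc[:, :-1], data.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = SVC(kernel="rbf")  # kernel choice is an assumption
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

def respond_to_image(image_path):
    """Classify a new thermal image, show the emotion, play its track."""
    emotion = clf.predict([extract_features(image_path)])[0]
    print("Detected emotion:", emotion)  # displayed on the screen
    playsound(TRACKS[emotion])           # play the matching music track
```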