Wuhan Ligong Daxue Xuebao (Jiaotong Kexue Yu Gongcheng Ban)/Journal of Wuhan University of Technology (Transportation Science and Engineering)

[This article belongs to Volume - 49, Issue - 07]

Abstract :

Facial expression is a crucial component of biometrics; it has emerged as a vibrant and significant research domain over the past decade owing to its role in conveying individuals' emotional states. Although emotion analysis can also draw on alternative cues such as voice, body gestures, and social and contextual parameters, facial expression remains the most expressive medium through which humans convey emotion, owing to its high degree of directness, friendliness, convenience, and robustness. Artificial intelligence research applies deep learning techniques to human-computer interaction as an efficient system application. Depending on the circumstance, an average person can exhibit seven distinct facial expressions: surprise, disgust, anger, sadness, happiness, neutrality, and fear; most studies consider these emotions universal across humanity. An Automated Facial Expression Recognition System (AFERS, or FERS) is a complex process that enables machines to identify emotions autonomously, without human assistance. Researchers in this field strive to improve models and techniques, and to extract diverse features, so that computers can predict emotions more accurately. Deep learning architectures can handle vast volumes of data while producing superior results compared with conventional emotion-analysis methods. In the current study, three techniques are presented to enhance the accuracy of facial expression classification. The first proposed technique is based on the CycleGAN method, while the second applies a deep learning architecture after preprocessing the dataset. The experimental findings show that these two approaches achieve test accuracies of 92% and 97%, respectively, on the FER2013 dataset. The third proposed method reaches training and validation accuracies of 99.7% and 99.5%, respectively, on the Cohn-Kanade (CK+) dataset.
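The abstract mentions preprocessing the FER2013 dataset (48×48 grayscale face images labeled with the seven emotions listed above) before feeding it to a deep learning classifier. The sketch below illustrates one common form such preprocessing can take; it is a minimal assumption-based example, not the paper's actual pipeline, and the label ordering and array shapes are illustrative only.

```python
import numpy as np

# Illustrative label set; the actual FER2013 label order may differ.
EMOTIONS = ["surprise", "disgust", "anger", "sadness",
            "happiness", "neutrality", "fear"]

def preprocess(images, labels, num_classes=len(EMOTIONS)):
    """Scale 48x48 grayscale images to [0, 1] and one-hot encode labels."""
    x = images.astype(np.float32) / 255.0        # normalize pixel intensities
    x = x.reshape(-1, 48, 48, 1)                 # add a channel axis for a CNN
    y = np.eye(num_classes, dtype=np.float32)[labels]  # one-hot encode labels
    return x, y

# Example: a batch of 4 random "images" with integer class labels
imgs = np.random.randint(0, 256, size=(4, 48, 48), dtype=np.uint8)
lbls = np.array([0, 3, 6, 1])
x, y = preprocess(imgs, lbls)
print(x.shape, y.shape)  # (4, 48, 48, 1) (4, 7)
```

After this step, the normalized tensors would typically be passed to a convolutional network for seven-way emotion classification.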