TY - GEN
T1 - Real-Time Facial Emotion Recognition Using Deep Learning Approach
AU - Chaitra, R.
AU - Vivekananda Bhat, K.
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Facial emotion recognition classifies a person's emotional state from face images. The goal is to classify each face image into one of seven facial emotions: fear, disgust, surprise, sadness, neutral, happiness, and anger. A convolutional neural network (CNN) is used for classification, with input obtained from grayscale images collected in a dataset and from real-time video. The CNN's convolution and pooling layers perform feature extraction, while a softmax layer performs the classification. Techniques used to reduce the model's overfitting include dropout, batch normalization, and L2 regularization. In experiments on a facial-expression image dataset, the developed model outperforms previous work in accurately predicting individual emotions; it also performs well when predicting the emotion in each frame of real-time video. The developed deep learning model can complement advances in neuroscience, contributing to our understanding of the brain's mechanisms for emotion recognition. This may lead to more biologically inspired models and treatments for emotion-related disorders such as autism.
AB - Facial emotion recognition classifies a person's emotional state from face images. The goal is to classify each face image into one of seven facial emotions: fear, disgust, surprise, sadness, neutral, happiness, and anger. A convolutional neural network (CNN) is used for classification, with input obtained from grayscale images collected in a dataset and from real-time video. The CNN's convolution and pooling layers perform feature extraction, while a softmax layer performs the classification. Techniques used to reduce the model's overfitting include dropout, batch normalization, and L2 regularization. In experiments on a facial-expression image dataset, the developed model outperforms previous work in accurately predicting individual emotions; it also performs well when predicting the emotion in each frame of real-time video. The developed deep learning model can complement advances in neuroscience, contributing to our understanding of the brain's mechanisms for emotion recognition. This may lead to more biologically inspired models and treatments for emotion-related disorders such as autism.
UR - https://www.scopus.com/pages/publications/105004661498
UR - https://www.scopus.com/pages/publications/105004661498#tab=citedBy
U2 - 10.1109/ETIS64005.2025.10960833
DO - 10.1109/ETIS64005.2025.10960833
M3 - Conference contribution
AN - SCOPUS:105004661498
T3 - ETIS International Conference on Emerging Technologies for Intelligent Systems, ETIS 2025
BT - ETIS International Conference on Emerging Technologies for Intelligent Systems, ETIS 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 International Conference on Emerging Technologies for Intelligent Systems, ETIS 2025
Y2 - 7 February 2025 through 9 February 2025
ER -