TY - JOUR
T1 - Automated emotion recognition
T2 - Current trends and future perspectives
AU - Maithri, M.
AU - Raghavendra, U.
AU - Gudigar, Anjan
AU - Samanth, Jyothi
AU - Barua, Prabal Datta
AU - Murugappan, Murugappan
AU - Chakole, Yashas
AU - Acharya, U. Rajendra
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/3
Y1 - 2022/3
N2 - Background: Human emotions greatly affect a person's actions. Automated emotion recognition has applications in multiple domains, such as healthcare, e-learning, and surveillance. The development of computer-aided diagnosis (CAD) tools has enabled the automated recognition of human emotions. Objective: This review provides insight into the methods employed using electroencephalogram (EEG), facial, and speech signals, together with multi-modal emotion recognition techniques. In this work, we have reviewed most of the state-of-the-art papers published on this topic. Method: This study considered the emotion recognition (ER) models proposed between 2016 and 2021. The papers were analysed based on the methods employed, the classifiers used, and the performance obtained. Results: There has been a significant rise in the application of deep learning techniques for ER. They have been widely applied to EEG, speech, facial expression, and multimodal features to develop accurate ER models. Conclusion: Our study reveals that most of the proposed machine and deep learning-based systems yield good performance for automated ER in a controlled environment. However, high ER performance still needs to be achieved in uncontrolled environments.
UR - http://www.scopus.com/inward/record.url?scp=85123614714&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85123614714&partnerID=8YFLogxK
U2 - 10.1016/j.cmpb.2022.106646
DO - 10.1016/j.cmpb.2022.106646
M3 - Review article
AN - SCOPUS:85123614714
SN - 0169-2607
VL - 215
JO - Computer Methods and Programs in Biomedicine
JF - Computer Methods and Programs in Biomedicine
M1 - 106646
ER -