MULTIMODAL FACIAL EXPRESSION RECOGNITION USING DIFFERENT ARCHITECTURE MODELS - A SURVEY



Authors

  • Laxmi Patil, Lakshmi Patil

DOI:

https://doi.org/10.15282/jmes.17.1.2023.10.0759


Keywords:

Facial Expression Recognition, Feature Extraction, Datasets, Multimodal Facial Expressions


Abstract

The use of trivial networks for Facial Expression Recognition (FER) has become more popular as researchers tackle difficult real-world conditions such as face occlusion, lighting fluctuations, and varied head poses. Convolutional Neural Networks (CNNs), one of the Deep Learning (DL) techniques, have significantly improved FER accuracy by recognizing complex patterns in facial expressions. The integration of global and local information has been crucial in enabling FER systems to capture both fine-grained facial features and the broader facial context. Benchmark datasets such as CK+, JAFFE, and AffectNet have proven essential, while current research continues to improve FER approaches by investigating novel neural architectures and data augmentation techniques, offering breakthroughs in fields such as human-computer interaction and healthcare.
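To make the global-local fusion idea mentioned in the abstract concrete, the sketch below shows one possible CNN layout in PyTorch. It is a minimal, hypothetical illustration only: the branch widths, crop resolutions, and seven-class output are assumptions for demonstration, not the architectures reviewed in this survey.

```python
# Minimal sketch (illustrative, not from the paper): a CNN that fuses a
# global whole-face branch with a local region branch (e.g., eyes or mouth)
# for facial expression recognition. Seven output classes mirror common
# FER benchmark label sets.
import torch
import torch.nn as nn


class GlobalLocalFER(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Global branch: whole 48x48 grayscale face crop.
        self.global_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Local branch: a 24x24 crop of a salient facial region.
        self.local_branch = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Concatenate global (64-dim) and local (32-dim) features, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(64 + 32, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, face: torch.Tensor, region: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.global_branch(face), self.local_branch(region)], dim=1
        )
        return self.classifier(fused)


if __name__ == "__main__":
    model = GlobalLocalFER()
    face = torch.randn(4, 1, 48, 48)    # batch of whole-face crops
    region = torch.randn(4, 1, 24, 24)  # batch of local region crops
    print(model(face, region).shape)    # torch.Size([4, 7])
```

In practice, the surveyed methods differ in how the local regions are chosen and how the two feature streams are combined; this sketch only illustrates the simplest concatenation-based fusion.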



Published

2023-12-30
