MULTIMODAL FACIAL EXPRESSION RECOGNITION USING DIFFERENT ARCHITECTURE MODELS - A SURVEY
Authors
- Laxmi Patil, Lakshmi Patil
DOI:
https://doi.org/10.15282/jmes.17.1.2023.10.0759
Keywords:
Facial Expression Recognition, Feature Extraction, Data Sets, and Multimodal Facial Expressions.
Abstract
The use of trivial networks for Facial Expression Recognition (FER) has become more popular as researchers tackle the difficult problems of real-world scenarios, such as face occlusion, lighting fluctuations, and varying poses. Convolutional Neural Networks (CNNs), one of the Deep Learning (DL) techniques, have significantly improved FER accuracy by recognizing complex patterns in facial expressions. The integration of global and local information has been crucial in enabling FER systems to capture both fine-grained facial features and broader context. Benchmark datasets such as CK+, JAFFE, and AffectNet have proven essential as current research continues to refine FER approaches, investigating novel neural architectures and data augmentation techniques and delivering advances in fields such as human-computer interaction and healthcare.
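The CNN pipeline the abstract describes, local feature extraction followed by global aggregation into expression probabilities, can be sketched minimally as below. This is an illustration only, assuming a 48x48 grayscale face crop (typical of FER-style datasets) and the 7 basic expression classes; the random weights, filter count, and layer sizes are placeholders, not any model surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fer_forward(face, n_classes=7):
    # Local features: a small bank of 3x3 convolution filters + ReLU.
    kernels = rng.standard_normal((4, 3, 3)) * 0.1
    feature_maps = [np.maximum(conv2d(face, k), 0) for k in kernels]
    # Global context: global average pooling collapses each map to one scalar.
    pooled = np.array([fm.mean() for fm in feature_maps])
    # Linear classifier over pooled features -> expression probabilities.
    W = rng.standard_normal((n_classes, pooled.size)) * 0.1
    return softmax(W @ pooled)

face = rng.random((48, 48))  # stand-in for a normalized face crop
probs = fer_forward(face)    # one probability per expression class
```

Real FER CNNs stack many such convolution/pooling stages and are trained end-to-end; the point here is only the structure: convolutions capture local patterns while pooling supplies the global context the abstract emphasizes.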
License
Copyright (c) 2023 Publishing
This work is licensed under a Creative Commons Attribution 4.0 International License.