A multi-modal visual emotion recognition method to instantiate an ontology

Article Description

This research was supported by the FONDO NACIONAL DE DESARROLLO CIENTÍFICO, TECNOLÓGICO Y DE INNOVACIÓN TECNOLÓGICA - FONDECYT, as the executing entity of CONCYTEC, under grant agreement no. 01-2019-FONDECYT-BM-INC.INV, in the project RUTAS: Robots for Urban Tourism, Autonomous and Semantic web based.
Bibliographic Details
Authors: Heredia J.P.A., Cardinale Y., Dongo I., Díaz-Amado J.
Format: conference object
Publication date: 2021
Institution: Consejo Nacional de Ciencia Tecnología e Innovación
Repository: CONCYTEC-Institucional
Language: English
OAI identifier: oai:repositorio.concytec.gob.pe:20.500.12390/2959
Resource link: https://hdl.handle.net/20.500.12390/2959
DOI: https://doi.org/10.5220/0010516104530464
Access level: open access
Subjects: Visual Expressions
Emotion Ontology
Emotion Recognition
Multi-modal Method
https://purl.org/pe-repo/ocde/ford#1.05.01
Scopus ID: 2-s2.0-85111776744
Published in: Proceedings of the 16th International Conference on Software Technologies, ICSOFT 2021
Publisher: SciTePress
License: https://creativecommons.org/licenses/by-nc-nd/4.0/
Repository name: Repositorio Institucional CONCYTEC
Repository contact: repositorio@concytec.gob.pe

Abstract

Human emotion recognition from visual expressions is an important research area in computer vision and machine learning owing to its significant scientific and commercial potential. Since visual expressions can be captured from different modalities (e.g., facial expressions, body posture, hand pose), multi-modal methods are becoming popular for analyzing human reactions. In contexts in which human emotion detection is performed to associate emotions with certain events or objects, to support decision making or for further analysis, it is useful to keep this information in semantic repositories, which offer a wide range of possibilities for implementing smart applications. We propose a multi-modal method for human emotion recognition and an ontology-based approach to store the classification results in EMONTO, an extensible ontology to model emotions. The multi-modal method analyzes facial expressions, body gestures, and features from the body and the environment to determine an emotional state; it processes each modality with a specialized deep learning model and applies a fusion method. Our fusion method, called EmbraceNet+, consists of a branched architecture that integrates the EmbraceNet fusion method with other fusion methods. We experimentally evaluate our multi-modal method on an adaptation of the EMOTIC dataset. Results show that our method outperforms the single-modal methods. Copyright © 2021 by SCITEPRESS - Science and Technology Publications, Lda. All rights reserved.
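
To make the fusion step concrete, the following is a minimal PyTorch sketch of an EmbraceNet-style stochastic fusion layer, the building block that EmbraceNet+ reportedly extends with additional branches: each modality is projected ("docked") to a shared embedding size and, for every embedding dimension, one modality is chosen at random to contribute that component. The class name, the uniform selection probabilities, and the feature sizes below are illustrative assumptions; this is not the authors' EmbraceNet+ implementation.

import torch
import torch.nn as nn


class EmbraceFusion(nn.Module):
    # Projects each modality to a shared embedding size ("docking"), then mixes
    # the docked vectors by picking, per embedding dimension, one modality at
    # random with uniform probability (the "embracement" step).
    def __init__(self, input_sizes, embedding_size=256):
        super().__init__()
        self.docking = nn.ModuleList(
            [nn.Linear(size, embedding_size) for size in input_sizes]
        )

    def forward(self, features):
        # features: list of per-modality tensors, each of shape (batch, input_size_k)
        docked = torch.stack(
            [torch.relu(dock(x)) for dock, x in zip(self.docking, features)], dim=1
        )  # (batch, n_modalities, embedding_size)
        batch, n_mod, emb = docked.shape
        probs = torch.full((batch, n_mod), 1.0 / n_mod, device=docked.device)
        choice = torch.multinomial(probs, emb, replacement=True)      # (batch, emb)
        mask = nn.functional.one_hot(choice, n_mod).permute(0, 2, 1)  # (batch, n_mod, emb)
        # Keep, for each dimension, only the component of the chosen modality.
        return (docked * mask.float()).sum(dim=1)  # (batch, embedding_size)


# Hypothetical usage: fuse face, body, and scene-context feature vectors, then
# classify into discrete emotion categories (e.g., the 26 used in EMOTIC).
fusion = EmbraceFusion(input_sizes=[512, 256, 1024], embedding_size=256)
classifier = nn.Linear(256, 26)
face, body, context = torch.randn(8, 512), torch.randn(8, 256), torch.randn(8, 1024)
logits = classifier(fusion([face, body, context]))

Sampling a different modality per embedding dimension is what gives the EmbraceNet family its reported robustness to missing or noisy modalities, since no single input stream dominates the fused representation.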
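On the ontology side, the sketch below illustrates with rdflib how a classification result could be written to a semantic repository as an ontology individual. The namespace and the class and property names (Emotion, hasEmotionType, confidence, detectedAt) are hypothetical placeholders chosen for illustration; they are not the actual EMONTO vocabulary.

from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace, RDF, XSD

# Placeholder namespace; the real EMONTO IRI is not reproduced here.
EMO = Namespace("http://example.org/emonto#")


def store_emotion(graph, person_id, label, score):
    # Create one individual for a detected emotion and attach the predicted
    # label, the classifier confidence, and a detection timestamp.
    stamp = datetime.now(timezone.utc)
    individual = EMO[f"emotion_{person_id}_{int(stamp.timestamp())}"]
    graph.add((individual, RDF.type, EMO.Emotion))
    graph.add((individual, EMO.hasEmotionType, Literal(label)))
    graph.add((individual, EMO.confidence, Literal(score, datatype=XSD.float)))
    graph.add((individual, EMO.detectedAt, Literal(stamp.isoformat(), datatype=XSD.dateTime)))
    return individual


g = Graph()
g.bind("emo", EMO)
store_emotion(g, person_id="p01", label="happiness", score=0.87)
print(g.serialize(format="turtle"))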
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository hosting this document or dataset. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Repository of Open Access Science, Technology and Innovation (ALICIA).