Recurrent neural networks for deception detection in videos
Article Description
Deception detection has always been a subject of interest. After all, determining whether a person is telling the truth could be decisive in many real-world cases. Current methods to discern deception require expensive equipment that needs specialists to read and interpret it. In this article, we carry out an exhaustive comparison between 9 different recurrent deep learning models based on facial landmark recognition, trained on a recent man-made database used to determine lies, comparing them by accuracy and AUC. We also propose two new metrics that represent the validity of each prediction. The results of a 5-fold cross-validation show that, out of all the tested models, the Stacked GRU neural model has the highest AUC of 0.9853 and the highest accuracy of 93.69% among the trained models. A comparison is then made between other machine and deep learning methods and our proposed Stacked GRU architecture, where the latter surpasses them in the AUC metric. These results indicate that we are not far from a future where deception detection could be accessible through computers or smart devices.
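As a rough illustration of the kind of model the abstract describes, here is a minimal sketch of a stacked-GRU binary classifier over per-frame facial-landmark sequences, written in Keras; the frame count, the 68-landmark input, and the layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of a stacked-GRU classifier over facial-landmark
# sequences, in the spirit of the models compared in the paper. The
# sequence length, the 68-landmark input, and the layer sizes are
# illustrative assumptions, not the authors' exact configuration.
import tensorflow as tf

NUM_FRAMES = 100       # frames sampled per video (assumed)
NUM_FEATURES = 68 * 2  # (x, y) coordinates of 68 facial landmarks (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    # Stacking: the first GRU returns the full sequence so the second
    # GRU can consume it frame by frame.
    tf.keras.layers.GRU(64, return_sequences=True),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # deceptive vs. truthful
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"],
)
```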
Authors: | Rodriguez-Meza, Bryan; Vargas-Lopez-Lavalle, Renzo; Ugarte, Willy |
---|---|
Format: | article |
Publication Date: | 2022 |
Institution: | Universidad Peruana de Ciencias Aplicadas |
Repository: | UPC-Institucional |
Language: | English |
OAI Identifier: | oai:repositorioacademico.upc.edu.pe:10757/659825 |
Resource link: | http://hdl.handle.net/10757/659825 |
Access level: | embargoed access |
Subject: | Deception detection; Deep learning; Facial landmarks recognition; Recurrent neural networks; Video database |
id |
UUPC_8b28e74637911c4ba62a0834acb5b2fc |
network_acronym_str |
UUPC |
network_name_str |
UPC-Institucional |
repository_id_str |
2670 |
dc.title.es_PE.fl_str_mv |
Recurrent neural networks for deception detection in videos |
dc.contributor.author.fl_str_mv |
Rodriguez-Meza, Bryan; Vargas-Lopez-Lavalle, Renzo; Ugarte, Willy |
dc.subject.es_PE.fl_str_mv |
Deception detection; Deep learning; Facial landmarks recognition; Recurrent neural networks; Video database |
description |
Deception detection has always been a subject of interest. After all, determining whether a person is telling the truth could be decisive in many real-world cases. Current methods to discern deception require expensive equipment that needs specialists to read and interpret it. In this article, we carry out an exhaustive comparison between 9 different recurrent deep learning models based on facial landmark recognition, trained on a recent man-made database used to determine lies, comparing them by accuracy and AUC. We also propose two new metrics that represent the validity of each prediction. The results of a 5-fold cross-validation show that, out of all the tested models, the Stacked GRU neural model has the highest AUC of 0.9853 and the highest accuracy of 93.69% among the trained models. A comparison is then made between other machine and deep learning methods and our proposed Stacked GRU architecture, where the latter surpasses them in the AUC metric. These results indicate that we are not far from a future where deception detection could be accessible through computers or smart devices. |
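For concreteness, the following is a hedged sketch of the 5-fold cross-validation protocol the abstract describes, computing AUC and accuracy per fold; `build_model` is a hypothetical constructor for any of the nine recurrent models (such as the stacked GRU sketched earlier), and the stratified split and 0.5 decision threshold are assumptions rather than details taken from the paper.

```python
# A sketch of the 5-fold cross-validation protocol described in the
# abstract, reporting mean AUC and accuracy. `build_model` is a
# hypothetical constructor for any of the nine recurrent models; the
# stratified split and threshold of 0.5 are assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

def evaluate_5fold(build_model, X, y, epochs=30):
    """Train and evaluate a fresh model on each of 5 stratified folds."""
    aucs, accs = [], []
    folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X, y):
        model = build_model()  # fresh weights for every fold
        model.fit(X[train_idx], y[train_idx], epochs=epochs, verbose=0)
        prob = model.predict(X[test_idx]).ravel()  # P(deceptive) per video
        aucs.append(roc_auc_score(y[test_idx], prob))
        accs.append(accuracy_score(y[test_idx], prob >= 0.5))
    return float(np.mean(aucs)), float(np.mean(accs))
```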
dc.date.accessioned.none.fl_str_mv |
2022-05-06T17:48:32Z |
dc.date.available.none.fl_str_mv |
2022-05-06T17:48:32Z |
dc.date.issued.fl_str_mv |
2022-01-01 |
dc.type.es_PE.fl_str_mv |
info:eu-repo/semantics/article |
dc.identifier.issn.none.fl_str_mv |
1865-0929 |
dc.identifier.doi.none.fl_str_mv |
10.1007/978-3-031-03884-6_29 |
dc.identifier.uri.none.fl_str_mv |
http://hdl.handle.net/10757/659825 |
dc.identifier.eissn.none.fl_str_mv |
1865-0937 |
dc.identifier.journal.es_PE.fl_str_mv |
Communications in Computer and Information Science |
dc.identifier.eid.none.fl_str_mv |
2-s2.0-85128491751 |
dc.identifier.scopusid.none.fl_str_mv |
SCOPUS_ID:85128491751 |
dc.language.iso.es_PE.fl_str_mv |
eng |
dc.relation.url.es_PE.fl_str_mv |
https://link.springer.com/chapter/10.1007/978-3-031-03884-6_29 |
dc.rights.es_PE.fl_str_mv |
info:eu-repo/semantics/embargoedAccess |
dc.format.es_PE.fl_str_mv |
application/html |
dc.publisher.es_PE.fl_str_mv |
Springer Science and Business Media Deutschland GmbH |
dc.source.none.fl_str_mv |
reponame:UPC-Institucional; instname:Universidad Peruana de Ciencias Aplicadas; instacron:UPC |
dc.source.journaltitle.none.fl_str_mv |
Communications in Computer and Information Science |
dc.source.volume.none.fl_str_mv |
1535 CCIS |
dc.source.beginpage.none.fl_str_mv |
397 |
dc.source.endpage.none.fl_str_mv |
411 |
bitstream.url.fl_str_mv |
https://repositorioacademico.upc.edu.pe/bitstream/10757/659825/1/license.txt |
bitstream.checksum.fl_str_mv |
8a4605be74aa9ea9d79846c1fba20a33 |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 |
repository.name.fl_str_mv |
Repositorio académico upc |
repository.mail.fl_str_mv |
upc@openrepository.com |
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository hosting this document or dataset. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Open Access Repository of Science, Technology and Innovation (ALICIA).