
GCTW Alignment for isolated gesture recognition


Bibliographic Details
Author: Guzmán Zenteno, Leonardo Braulio
Format: master's thesis
Publication Date: 2018
Institution: Universidad Católica San Pablo
Repository: UCSP-Institucional
Language: English
OAI Identifier: oai:repositorio.ucsp.edu.pe:20.500.12590/16008
Resource Link: https://hdl.handle.net/20.500.12590/16008
Access Level: open access
Subjects: Artificial Intelligence
Video Processing
Alignment of Multiple Sequences
https://purl.org/pe-repo/ocde/ford#1.02.01
id UCSP_ad3b90a6a4e6a436d13eca59c08085d8
oai_identifier_str oai:repositorio.ucsp.edu.pe:20.500.12590/16008
network_acronym_str UCSP
network_name_str UCSP-Institucional
repository_id_str 3854
dc.title.es_PE.fl_str_mv GCTW Alignment for isolated gesture recognition
title GCTW Alignment for isolated gesture recognition
spellingShingle GCTW Alignment for isolated gesture recognition
Guzmán Zenteno, Leonardo Braulio
Artificial Intelligence
Video Processing
Alignment of Multiple Sequences
https://purl.org/pe-repo/ocde/ford#1.02.01
title_short GCTW Alignment for isolated gesture recognition
title_full GCTW Alignment for isolated gesture recognition
title_fullStr GCTW Alignment for isolated gesture recognition
title_full_unstemmed GCTW Alignment for isolated gesture recognition
title_sort GCTW Alignment for isolated gesture recognition
author Guzmán Zenteno, Leonardo Braulio
author_facet Guzmán Zenteno, Leonardo Braulio
author_role author
dc.contributor.advisor.fl_str_mv Cámara Chávez, Guillermo
dc.contributor.author.fl_str_mv Guzmán Zenteno, Leonardo Braulio
dc.subject.es_PE.fl_str_mv Artificial Intelligence
Video Processing
Alignment of Multiple Sequences
topic Artificial Intelligence
Video Processing
Alignment of Multiple Sequences
https://purl.org/pe-repo/ocde/ford#1.02.01
dc.subject.ocde.es_PE.fl_str_mv https://purl.org/pe-repo/ocde/ford#1.02.01
description In recent years, there has been increasing interest in developing automatic Sign Language Recognition (SLR) systems because Sign Language (SL) is the main mode of communication among deaf people all over the world. However, most people outside the deaf community do not understand SL, which creates a communication problem between the two communities. Recognizing signs is challenging because manual signing (leaving facial gestures aside) has four components that must be recognized: handshape, movement, location and palm orientation. Even though the appearance and meaning of basic signs are well defined in sign language dictionaries, in practice many variations arise due to factors such as gender, age, education, or regional, social and ethnic background, which can lead to significant variations and make it hard to develop a robust SL recognition system. This project introduces the alignment of videos into isolated SLR, an approach that has not been studied deeply even though it has great potential for correctly recognizing isolated gestures. We also aim for user-independent recognition, meaning the system should achieve good recognition accuracy for signers who are not represented in the data set. The main features used for the alignment are the wrist coordinates extracted from the videos with OpenPose. These features are aligned using Generalized Canonical Time Warping, and the resulting videos are classified with a 3D CNN. Our experimental results show that the proposed method obtains 65.02% accuracy, which places us 5th in the 2017 ChaLearn LAP isolated gesture recognition challenge, only 2.69% away from first place.
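The abstract's core idea is temporal alignment of wrist trajectories before classification. The thesis uses Generalized Canonical Time Warping, which jointly warps multiple sequences via canonical correlation analysis; as a rough illustration of the underlying time-warping idea only, here is a plain dynamic time warping sketch over synthetic 2-D "wrist" trajectories. All function names and data below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def dtw_align(a, b):
    """Align two trajectories (arrays of shape T x D) with classic dynamic
    time warping; return the warping path and the total alignment cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    # Fill the accumulated-cost matrix.
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Backtrack from (n, m) to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.reverse()
    return path, cost[n, m]

# Two toy wrist trajectories tracing the same semicircle, the second one
# sampled more densely (i.e. the same gesture performed more slowly).
t_a = np.linspace(0, np.pi, 20)
seq_a = np.stack([np.cos(t_a), np.sin(t_a)], axis=1)
t_b = np.linspace(0, np.pi, 35)
seq_b = np.stack([np.cos(t_b), np.sin(t_b)], axis=1)

path, total_cost = dtw_align(seq_a, seq_b)
```

GCTW extends this idea to more than two sequences and adds a learned low-dimensional projection, but the warping-path notion is the same: after alignment, corresponding frames of different signers' videos can be compared or fed to a classifier on a common time axis.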
publishDate 2018
dc.date.accessioned.none.fl_str_mv 2019-07-09T16:15:56Z
dc.date.available.none.fl_str_mv 2019-07-09T16:15:56Z
dc.date.issued.fl_str_mv 2018
dc.type.none.fl_str_mv info:eu-repo/semantics/masterThesis
format masterThesis
dc.identifier.other.none.fl_str_mv 1070065
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/20.500.12590/16008
identifier_str_mv 1070065
url https://hdl.handle.net/20.500.12590/16008
dc.language.iso.es_PE.fl_str_mv eng
language eng
dc.relation.ispartof.fl_str_mv SUNEDU
dc.rights.es_PE.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.uri.es_PE.fl_str_mv https://creativecommons.org/licenses/by/4.0/
eu_rights_str_mv openAccess
rights_invalid_str_mv https://creativecommons.org/licenses/by/4.0/
dc.format.es_PE.fl_str_mv application/pdf
dc.publisher.es_PE.fl_str_mv Universidad Católica San Pablo
dc.publisher.country.es_PE.fl_str_mv PE
dc.source.es_PE.fl_str_mv Universidad Católica San Pablo
Repositorio Institucional - UCSP
dc.source.none.fl_str_mv reponame:UCSP-Institucional
instname:Universidad Católica San Pablo
instacron:UCSP
instname_str Universidad Católica San Pablo
instacron_str UCSP
institution UCSP
reponame_str UCSP-Institucional
collection UCSP-Institucional
bitstream.url.fl_str_mv https://repositorio.ucsp.edu.pe/backend/api/core/bitstreams/cfb849c1-9c63-4f32-9cea-4190f90d5be9/download
https://repositorio.ucsp.edu.pe/backend/api/core/bitstreams/209e63df-c8fe-40dd-ab96-a2661374f292/download
https://repositorio.ucsp.edu.pe/backend/api/core/bitstreams/49e5faf3-4ace-4a79-a630-0bafa79f1c6f/download
https://repositorio.ucsp.edu.pe/backend/api/core/bitstreams/c4d9fd9f-e0b8-41d4-bf29-c07ed9193539/download
bitstream.checksum.fl_str_mv 15e5824eef08f0a2225537037e2292c9
8a4605be74aa9ea9d79846c1fba20a33
bfc17b4858c8699c55bcef90480a526f
25f1f1636aff5e6ce0d867a7131c58db
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional de la Universidad Católica San Pablo
repository.mail.fl_str_mv dspace@ucsp.edu.pe
_version_ 1851053051967700992
spelling Cámara Chávez, Guillermo; Guzmán Zenteno, Leonardo Braulio
Maestro en Ciencia de la Computación
Universidad Católica San Pablo. Facultad de Ingeniería y Computación
Maestría
Ciencia de la Computación
Escuela Profesional de Ciencia de la Computación
Files: ORIGINAL GUZMAN_ZENTENO_LEO_GCT.pdf (application/pdf, 4202775 bytes); LICENSE license.txt (text/plain; charset=utf-8, 1748 bytes); TEXT GUZMAN_ZENTENO_LEO_GCT.pdf.txt, extracted text (text/plain, 95204 bytes); THUMBNAIL GUZMAN_ZENTENO_LEO_GCT.pdf.jpg, generated thumbnail (image/jpeg, 3528 bytes)
Record last updated: 2023-07-26 01:28:55.437
score 13.997017
Important Note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository holding this document or data set. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Repository of Open Access Science, Technology and Innovation (ALICIA).