Real-Time Retargeting of Human Poses from Monocular Images and Videos to the NAO Robot
Article Description
To date, there has been extensive research on human-robot motion retargeting, but the vast majority of existing methods rely on sensors or multiple cameras to detect human poses and movements, while many others are not suitable for use in real-time scenarios. This paper presents an integrated solution for real-time human-to-robot pose retargeting that uses only regular monocular images and video as input.
| Field | Value |
|---|---|
| Authors | Burga, Oscar; Villegas, Jonathan; Ugarte, Willy |
| Format | article |
| Publication date | 2024 |
| Institution | Universidad Peruana de Ciencias Aplicadas |
| Repository | UPC-Institucional |
| Language | English |
| OAI identifier | oai:repositorioacademico.upc.edu.pe:10757/676113 |
| Resource link | http://hdl.handle.net/10757/676113 |
| Access level | embargoed access |
| Subjects | Geometry; Human pose estimation; Humanoid robot; Motion retargeting; Vectors |
id |
UUPC_38873f177df8c2adac8edff3b98b5c1a |
oai_identifier_str |
oai:repositorioacademico.upc.edu.pe:10757/676113 |
network_acronym_str |
UUPC |
network_name_str |
UPC-Institucional |
repository_id_str |
2670 |
dc.title.es_PE.fl_str_mv |
Real-Time Retargeting of Human Poses from Monocular Images and Videos to the NAO Robot |
author |
Burga, Oscar |
dc.contributor.author.fl_str_mv |
Burga, Oscar; Villegas, Jonathan; Ugarte, Willy |
dc.subject.es_PE.fl_str_mv |
Geometry; Human pose estimation; Humanoid robot; Motion retargeting; Vectors |
description |
To date, there has been extensive research on human-robot motion retargeting, but the vast majority of existing methods rely on sensors or multiple cameras to detect human poses and movements, while many others are not suitable for use in real-time scenarios. This paper presents an integrated solution for real-time human-to-robot pose retargeting that uses only regular monocular images and video as input. We use deep learning models to perform three-dimensional human pose estimation on the monocular images and video, and then compute the set of joint angles the robot must adopt to reproduce the detected human pose as accurately as possible. We evaluate our solution on SoftBank's NAO robot and show that promising approximations and imitations of human motions and poses can be reproduced on the NAO, subject to the limitations imposed by the robot's degrees of freedom, joint constraints, and movement speed. Category: Real-Time Systems |
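The abstract describes a two-stage pipeline: a deep learning model first estimates three-dimensional human keypoints from a monocular frame, and vector geometry then converts those keypoints into joint angles within the robot's limits. Since the full text is embargoed here, the Python sketch below only illustrates that second stage under assumed keypoint names and placeholder joint limits; it is not the authors' implementation, and the NAO-specific bounds should be taken from the NAOqi documentation.

```python
import numpy as np

def joint_angle(a, b, c):
    """Interior angle at keypoint b (radians) formed by the 3D points a-b-c,
    i.e., the angle between the vectors b->a and b->c."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def clamp_to_joint_range(angle, lo, hi):
    """Respect the robot's joint constraints (one of the limitations noted in
    the abstract) by clamping the commanded angle to [lo, hi]."""
    return float(np.clip(angle, lo, hi))

# Hypothetical 3D keypoints (meters) from a monocular pose estimator.
shoulder, elbow, wrist = [0.0, 0.0, 0.0], [0.25, 0.0, 0.0], [0.25, -0.20, 0.10]

# A straight arm yields an interior elbow angle of pi, so pi minus that angle
# is a simple flexion measure; the limits below are placeholders, not NAO's exact bounds.
flexion = np.pi - joint_angle(shoulder, elbow, wrist)
command = clamp_to_joint_range(flexion, 0.035, 1.545)
print(f"elbow flexion: {flexion:.3f} rad -> commanded joint angle: {command:.3f} rad")
```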
publishDate |
2024 |
dc.date.accessioned.none.fl_str_mv |
2024-10-14T17:05:06Z |
dc.date.available.none.fl_str_mv |
2024-10-14T17:05:06Z |
dc.date.issued.fl_str_mv |
2024-01-01 |
dc.type.es_PE.fl_str_mv |
info:eu-repo/semantics/article |
format |
article |
dc.identifier.issn.none.fl_str_mv |
1976-4677 |
dc.identifier.doi.none.fl_str_mv |
10.5626/JCSE.2024.18.1.47 |
dc.identifier.uri.none.fl_str_mv |
http://hdl.handle.net/10757/676113 |
dc.identifier.eissn.none.fl_str_mv |
2093-8020 |
dc.identifier.journal.es_PE.fl_str_mv |
Journal of Computing Science and Engineering |
dc.identifier.eid.none.fl_str_mv |
2-s2.0-85195078940 |
dc.identifier.scopusid.none.fl_str_mv |
SCOPUS_ID:85195078940 |
dc.identifier.isni.none.fl_str_mv |
0000 0001 2196 144X |
dc.language.iso.es_PE.fl_str_mv |
eng |
dc.rights.es_PE.fl_str_mv |
info:eu-repo/semantics/embargoedAccess |
dc.format.es_PE.fl_str_mv |
application/html |
dc.publisher.es_PE.fl_str_mv |
Korean Institute of Information Scientists and Engineers |
dc.source.es_PE.fl_str_mv |
Universidad Peruana de Ciencias Aplicadas (UPC); Repositorio Académico - UPC |
dc.source.none.fl_str_mv |
reponame:UPC-Institucional; instname:Universidad Peruana de Ciencias Aplicadas; instacron:UPC |
dc.source.journaltitle.none.fl_str_mv |
Journal of Computing Science and Engineering |
dc.source.volume.none.fl_str_mv |
18 |
dc.source.issue.none.fl_str_mv |
1 |
dc.source.beginpage.none.fl_str_mv |
47 |
dc.source.endpage.none.fl_str_mv |
56 |
repository.name.fl_str_mv |
Repositorio Académico UPC |
repository.mail.fl_str_mv |
upc@openrepository.com |
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository hosting this document or dataset. CONCYTEC is not responsible for the contents (publications and/or data) accessible through the National Digital Repository of Science, Technology and Innovation in Open Access (ALICIA).