Bipedal locomotion based on a hybrid RL model in IS-MPC


Bibliographic Details

Author: Figueroa Mosquera, Nícolas Francisco
Format: doctoral thesis
Publication date: 2025
Institution: Pontificia Universidad Católica del Perú
Repository: PUCP-Tesis
Language: English
OAI identifier: oai:tesis.pucp.edu.pe:20.500.12404/31525
Resource link: http://hdl.handle.net/20.500.12404/31525
Access level: open access
Subjects: Androides--Locomoción; Control predictivo; Aprendizaje automático (Inteligencia artificial); https://purl.org/pe-repo/ocde/ford#2.00.00
Advisors: Tafur Sotelo, Julio César; Kheddar, Abderrahmane
Description: Maintaining the stability of bipedal walking remains a major challenge in humanoid robotics, primarily due to the large number of hyperparameters involved and the need to adapt to dynamic environments and external disturbances. Traditional methods for determining these hyperparameters, such as heuristic approaches, are often time-consuming and potentially suboptimal. In this thesis, we present an integrated approach combining advanced control and reinforcement learning techniques to improve the stability of bipedal walking, particularly in the face of ground disturbances and speed variations. Our main contribution lies in the integration of two complementary approaches: (1) an intrinsically stable model predictive controller (IS-MPC) combined with whole-body admittance control, and (2) a reinforcement learning module implemented in the mc_rtc framework. This system allows for continuous monitoring of the robot’s current states, maintaining recursive feasibility, and optimizing parameters in real time. Additionally, we propose an innovative reward function that combines changes in single and double support times, postural recovery, divergent motion control, and action generation based on training optimization. The optimization of the weights of this reward function plays a crucial role, and we systematically explore different configurations to maximize the robot’s stability and performance. Furthermore, this thesis introduces a novel approach that integrates experience variability (a criterion for determining changes in locomotion-manipulation) and experience accumulation (an efficient way to store and select acquired experiences) in the development of reinforcement learning (RL) agents and humanoid robots. This approach not only improves adaptability and efficiency in unpredictable environments but also facilitates more sophisticated modeling of these environments, significantly enhancing the systems’ ability to cope with real-world complexities.
By combining these techniques with advanced reinforcement learning methods, such as Proximal Policy Optimization (PPO) and Model-Agnostic Meta-Learning (MAML), and integrating stability-based self-learning, we strengthen the systems’ generalization capabilities, enabling rapid and effective learning in new and unprecedented situations. The evaluation of our approach was conducted through simulations and real-world experiments using the HRP-4 robot, demonstrating the effectiveness of the intrinsically stable predictive controller and the proposed reinforcement learning system. The results show a significant improvement in the robot’s stability and adaptability, thereby consolidating our contribution to the field of humanoid robotics.
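The abstract describes a reward that weights several locomotion terms (support-time changes, postural recovery, divergent-motion control, action generation). A minimal sketch of such a weighted reward follows; the term names, signs, and weight values here are illustrative assumptions, not the thesis's actual formulation.

```python
# Hypothetical weighted locomotion reward, assuming the four term families
# named in the abstract; all names and default weights are illustrative.
from dataclasses import dataclass


@dataclass
class RewardWeights:
    support_timing: float = 1.0   # changes in single/double support times
    posture: float = 0.5          # postural recovery error
    dcm: float = 2.0              # divergent component of motion tracking
    action: float = 0.1           # penalty on action magnitude


def step_reward(w: RewardWeights,
                support_time_error: float,
                posture_error: float,
                dcm_error: float,
                action_norm: float) -> float:
    """Per-step reward: each error term is penalized by its weight,
    so a perfectly tracked step yields 0 and errors drive it negative."""
    return -(w.support_timing * support_time_error
             + w.posture * posture_error
             + w.dcm * dcm_error
             + w.action * action_norm)
```

Tuning the `RewardWeights` fields would correspond to the weight-configuration search the abstract mentions; an RL algorithm such as PPO would then maximize the discounted sum of these per-step rewards.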
License: http://creativecommons.org/licenses/by-nc/2.5/pe/
Full text (PDF): https://tesis.pucp.edu.pe/bitstreams/0096d6c9-48e8-4560-9126-164399be13a2/download
Repository: Repositorio de Tesis PUCP (contact: raul.sifuentes@pucp.pe)
Degree: Doctor en Ingeniería, Pontificia Universidad Católica del Perú, Escuela de Posgrado
Jury: Perez Zuñiga, Carlos Gustavo; Tafur Sotelo, Julio César; Kheddar, Abderrahmane; Barrientos, Antonio; Rossi, Alessandra; Bayro Corrochano, Eduardo José; Seriai, Abdelhak-Djamel; Slawiñski, Emanuel
ORCID iDs: https://orcid.org/0000-0003-3415-1969; https://orcid.org/0000-0001-9033-9742
Important note:
The information contained in this record is the sole responsibility of the institution that manages the institutional repository hosting this document or dataset. CONCYTEC is not responsible for the content (publications and/or data) accessible through the National Digital Repository of Open Access Science, Technology and Innovation (ALICIA).