Doctoral thesis
Published 2023
State-of-the-art control and robotics challenges have long been tackled with model-based methods such as model predictive control (MPC) and with reinforcement learning (RL). These methods excel in complex dynamic domains, such as manipulation tasks, but struggle with real-world issues like wear-and-tear, uncalibrated sensors, and model misspecification. These factors perturb the system dynamics and give rise to the 'reality gap' problem when robots transition from simulation to real-world environments. This work aims to bridge this gap by combining RL and control in a learning framework that adapts MPC to the robot's decisions, optimizing performance despite uncertainty in the dynamics model parameters. This thesis presents three key contributions to robotics control. The first is a novel reward-based framework for refining stochastic MPC. It utilizes Bayesian Optimization (...
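To illustrate the idea of reward-based refinement of an MPC model, the following is a minimal sketch, not the thesis implementation: Bayesian Optimization (here via scikit-optimize's gp_minimize) searches over an uncertain dynamics parameter of a toy random-shooting MPC so that closed-loop task reward is maximized. The point-mass system, the friction parameter, and all function names are illustrative assumptions.

```python
"""Hedged sketch: tune an MPC model parameter with Bayesian Optimization
so that closed-loop episode reward is maximised. Toy system and names are
assumptions, not the thesis code."""
import numpy as np
from skopt import gp_minimize

TRUE_FRICTION = 0.3          # "real-world" parameter, unknown to the controller
DT, HORIZON, EP_LEN = 0.1, 10, 50
TARGET = 1.0                 # desired position of the point mass

def step(pos, vel, u, friction):
    """One Euler step of a 1-D point mass with viscous friction."""
    acc = u - friction * vel
    return pos + DT * vel, vel + DT * acc

def mpc_action(pos, vel, friction_est, rng, n_samples=64):
    """Random-shooting MPC: sample action sequences, roll them out with the
    *estimated* friction, and return the first action of the best sequence."""
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        us = rng.uniform(-1.0, 1.0, HORIZON)
        p, v, cost = pos, vel, 0.0
        for u in us:
            p, v = step(p, v, u, friction_est)
            cost += (p - TARGET) ** 2
        if cost < best_cost:
            best_cost, best_u = cost, us[0]
    return best_u

def episode_reward(friction_est):
    """Run the MPC (built on the candidate model parameter) on the true system."""
    pos, vel, reward = 0.0, 0.0, 0.0
    rng = np.random.default_rng(1)
    for _ in range(EP_LEN):
        u = mpc_action(pos, vel, friction_est, rng)
        pos, vel = step(pos, vel, u, TRUE_FRICTION)  # real dynamics differ from the model
        reward -= (pos - TARGET) ** 2
    return reward

# Bayesian Optimization over the model parameter; gp_minimize minimises,
# so we negate the reward.
result = gp_minimize(lambda x: -episode_reward(x[0]),
                     dimensions=[(0.0, 1.0)],
                     n_calls=20, random_state=0)
print("tuned friction estimate:", result.x[0], "episode reward:", -result.fun)
```

The design choice mirrored here is that the optimizer never needs gradients of the controller: it only observes the scalar task reward of each closed-loop rollout, which is what makes reward-based tuning of MPC parameters practical under unmodelled real-world effects.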