My life goal is to endow robots with the agility, dexterity, robustness, and autonomy of animals and humans, enabling them to work effectively in diverse environments, from human living and working spaces to unstructured field settings. In practice, I aim to design robots that can understand their tasks, interpret their environments, and maintain self-awareness; learn from other creatures; and carry out their duties so elegantly and efficiently that they eventually outperform their teachers.

My research focuses on numerical optimal control and reinforcement learning. I believe perception, prediction, and motion planning are inherently interconnected, and that we cannot solve AI problems by treating them merely as information-processing challenges. Instead, we must build physical robots that acquire knowledge and intelligence through direct interaction with the physical world.

Although the world is full of chaos, it is governed by principles of optimality. These principles are so ubiquitous that they underlie thermodynamics, fluid mechanics, the theory of relativity, quantum mechanics, particle physics, string theory, optics, and more. Thanks to the work of great scientists, we now have powerful mathematical tools to formulate these problems. And thanks to advances in deep learning and hardware acceleration, we have the computational power to solve them in our generation.