Kormushev, P., Ugurlu, B., Calinon, S., Tsagarakis, N. and Caldwell, D.G. (2011)
Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization
In Proc. of the IEEE/RSJ Intl Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, pp. 318-324.
Abstract
We present a learning-based approach for minimizing the electric energy consumption during walking of a passively-compliant bipedal robot. The energy consumption is reduced by learning a varying-height center-of-mass trajectory that efficiently exploits the robot's passive compliance. To do this, we propose a reinforcement learning method that evolves the policy parameterization dynamically during the learning process and thus manages to find better policies faster than with a fixed parameterization. The method is first tested on a function approximation task, and then applied to the humanoid robot COMAN, where it achieves a significant energy reduction.
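The core idea of evolving policy parameterization can be illustrated on the function approximation task mentioned in the abstract. The sketch below is a simplified, hypothetical illustration (not the paper's actual algorithm): a trajectory policy is encoded by a small set of interpolation knots, optimized by simple stochastic hill climbing, and the knot grid is periodically refined by resampling the best policy found so far, so learning continues on a finer parameterization without discarding progress. The reward function and all parameter values here are toy assumptions.

```python
import numpy as np

def evaluate(trajectory):
    # Toy stand-in for the task reward (e.g., negative tracking error on a
    # function approximation task); higher reward means a better policy.
    target = np.sin(np.linspace(0, 2 * np.pi, trajectory.size))
    return -np.sum((trajectory - target) ** 2)

def rollout(knots, n_samples=100):
    # Decode the policy parameters (interpolation knots) into a dense trajectory.
    x_knots = np.linspace(0, 1, knots.size)
    x_dense = np.linspace(0, 1, n_samples)
    return np.interp(x_dense, x_knots, knots)

rng = np.random.default_rng(0)
params = np.zeros(4)                 # start with a coarse parameterization
best_reward = evaluate(rollout(params))

for it in range(1, 401):
    # Simple stochastic policy search: keep only improving perturbations.
    candidate = params + rng.normal(0.0, 0.1, params.size)
    reward = evaluate(rollout(candidate))
    if reward > best_reward:
        params, best_reward = candidate, reward
    # Periodically evolve the parameterization: resample the best policy
    # onto a finer knot grid, preserving what has already been learned.
    if it % 100 == 0 and params.size < 32:
        old_x = np.linspace(0, 1, params.size)
        new_x = np.linspace(0, 1, params.size * 2)
        params = np.interp(new_x, old_x, params)
        best_reward = evaluate(rollout(params))
```

Because the refined knot grid reproduces the same piecewise-linear trajectory, the refinement step never degrades the current best policy; it only adds resolution for subsequent search.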
Bibtex reference
@inproceedings{Kormushev11IROS,
  author    = "Kormushev, P. and Ugurlu, B. and Calinon, S. and Tsagarakis, N. and Caldwell, D. G.",
  title     = "Bipedal Walking Energy Minimization by Reinforcement Learning with Evolving Policy Parameterization",
  booktitle = "Proc. {IEEE/RSJ} Intl Conf. on Intelligent Robots and Systems ({IROS})",
  year      = "2011",
  month     = "September",
  address   = "San Francisco, CA, USA",
  pages     = "318--324"
}
Video
The compliant humanoid robot COMAN learns to walk with two different gaits: one with a fixed height of the center of mass, and one with varying height. The varying-height center-of-mass trajectory was learned by reinforcement learning in order to minimize the electric energy consumption during walking. The optimized walking gait achieves an 18% reduction in energy consumption in the sagittal plane thanks to the passive compliance: the springs in the knees and ankles of the robot can store and release energy efficiently. In addition, the varying-height walking looks more natural and smooth than conventional fixed-height walking.