Autonomously Learning Walking Policy Using Deep Neural Networks
Vik, Christian
Training agents to learn policies for complex problems was computationally too
expensive for a long time. Even after the advent of the neural network, the computational
requirements exceeded the abilities of the hardware of the day. Now that hardware can
handle the larger computational loads of deep neural networks, we are able to leverage their
power against harder, more complex problems.
The problem of autonomous walking is a well-defined and well-explored one in computer
science and engineering. The famous DARPA Grand Challenge charged teams with building robots to
accomplish a series of complex tasks, aiming to push the boundaries of what had been
previously accomplished in the field. The Google DeepMind team famously trained AlphaGo
(and later AlphaGo Zero) to beat the best human player in the world at one of the most complex
games we have. Not long afterwards, Google DeepMind demonstrated learned policies for a
number of agents with different bodies that could complete complex, obstacle-ridden courses,
attacking the problem with deep reinforcement learning. The power of deep neural networks is
shown in the variety and complexity of the tasks they have accomplished, consistently outperforming
human adversaries in many fields.
In this project we will similarly use deep reinforcement learning to
teach a quadruped agent to walk. By applying different deep learning techniques and varying the
network architecture, we hope to solve the autonomous walking problem for our agent.
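
To make the approach concrete, the sketch below outlines the kind of training loop we have in mind. It is only illustrative: the environment (Gymnasium's Ant-v4, a standard MuJoCo quadruped standing in for our agent), the algorithm (PPO from stable-baselines3), and the network sizes and hyperparameters are assumptions rather than final design choices.

    # Illustrative sketch: train a quadruped to walk with PPO.
    # Environment, algorithm, network sizes, and hyperparameters are assumed,
    # not the project's final choices.
    import gymnasium as gym
    from stable_baselines3 import PPO

    # Quadruped locomotion benchmark: reward encourages forward velocity
    # while penalizing large control inputs.
    env = gym.make("Ant-v4")

    # Policy and value networks are small MLPs; varying this architecture is
    # one of the axes we intend to explore.
    model = PPO(
        "MlpPolicy",
        env,
        policy_kwargs=dict(net_arch=[256, 256]),  # hidden layer sizes (assumed)
        learning_rate=3e-4,
        verbose=1,
    )

    # Train the walking policy, then roll it out for a short evaluation episode.
    model.learn(total_timesteps=1_000_000)

    obs, _ = env.reset()
    for _ in range(1000):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            obs, _ = env.reset()

Changing the net_arch entry (or the policy class itself) is how we would vary the network architecture across experiments while keeping the rest of the training pipeline fixed.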