
DeepMind AI learns to play soccer using decades of match simulations

An artificial intelligence has learned to skillfully control digital humanoid soccer players by processing decades of simulated matches in just a few weeks


August 31, 2022

AI learned to control digital humanoid soccer players

Liu et al., Sci. Robot. 7, eabo0235

Artificial intelligence has learned to play soccer. Training on decades of computer simulations, an AI took digital humanoids from flailing toddlers to skilled players.

Researchers at AI research firm DeepMind taught the AI how to play soccer in a computer simulation through an athletic curriculum that resembles an accelerated version of a human baby growing into a soccer player. The AI controlled digital humanoids with realistic body masses and joint movements.

“We don’t put babies in an 11 against 11 game,” says Guy Lever at DeepMind. “First they learn to walk around, then they learn to dribble a ball, then you might play one against one or two against two.”

In the first phase of the curriculum, the digital humanoids were trained to run naturally by mimicking motion-capture video clips of people playing soccer. A second phase involved practicing dribbling and shooting through a form of trial-and-error machine learning that rewards the AI for staying close to the ball.
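A reward that favors staying close to the ball can be sketched as a simple distance-based shaping term. This is a hypothetical illustration only; the function name, the exponential form, and the `scale` parameter are assumptions, and the paper's actual reward design is more involved:

```python
import math

def ball_proximity_reward(player_pos, ball_pos, scale=5.0):
    """Hypothetical shaping reward: highest when the player is at the ball.

    player_pos, ball_pos: (x, y) coordinates on the pitch.
    scale: assumed decay length controlling how quickly reward falls off.
    """
    dx = player_pos[0] - ball_pos[0]
    dy = player_pos[1] - ball_pos[1]
    distance = math.hypot(dx, dy)
    # Exponentially decaying reward: 1.0 at the ball, approaching 0 far away.
    return math.exp(-distance / scale)
```

In trial-and-error (reinforcement) learning, a dense shaping signal like this gives the agent feedback at every step, which is what lets dribbling-adjacent behavior emerge without hand-scripting it.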

The first two phases represented about 1.5 years of simulation training time, which the AI went through in about 24 hours. But after five simulated years of soccer games, more complex behaviors began to emerge beyond movement and ball control. “They learned coordination, but they also learned movement skills that we hadn’t explicitly set up as training exercises before,” says Nicolas Heess at DeepMind.

The third phase of the training challenged the digital humanoids to score goals in two-on-two matches. Teamwork skills, such as anticipating where to receive a pass, developed over the course of about 20 to 30 simulated years of competitions, the equivalent of two to three weeks in the real world. This led to measurable improvements in the digital humanoids’ off-ball scoring opportunities, a real-world measure of how often a player gets into a favorable position on the field.

Such simulations will not immediately lead to flashy soccer-playing robots. The digital humanoids were trained on simplified rules that permitted fouls, enclosed the field in a wall-like boundary, and omitted set pieces such as throw-ins and goal kicks.

Such lengthy training makes the work harder to transfer directly to real soccer robots, says Sven Behnke at the University of Bonn in Germany. However, it would be interesting to see whether DeepMind’s approach is competitive in the annual RoboCup 3D Simulation League, he says.

The DeepMind team has started teaching real robots how to push a ball towards a goal and plans to investigate whether the same AI training strategy works outside of football.

Journal reference: Science Robotics, DOI: 10.1126/scirobotics.abo0235
