AI Soccer Evolution: Google Trains Robots with Improved Skills

by Dhruv Kudalkar
April 16, 2024

Google DeepMind researchers have successfully trained 20-inch-tall humanoid robots to play 1v1 soccer matches using a deep reinforcement learning approach. With this training, the robots learned to run, kick, block, get up from falls, and score goals without any manual programming.

Soccer players have to master a range of dynamic skills, from turning and kicking to chasing a ball. How could robots do the same? ⚽

We trained our AI agents to demonstrate a range of agile behaviors using reinforcement learning.

Here’s how. 🧵 https://t.co/RFBxLG6SMn pic.twitter.com/4B4S2YiVLh

— Google DeepMind (@GoogleDeepMind) April 11, 2024

Robot Soccer Training Approach

Researchers at Google DeepMind have developed a technique to train low-cost, off-the-shelf bipedal robots to play multi-robot soccer well beyond the level of agility and fluency intuitively expected from this type of hardware. They used a deep reinforcement learning (RL) approach and ran their experiments on the widely available Robotis OP3 robots.

The researchers explained that producing robust motor skills in bipedal robots is challenging because current control methods do not generalize well across tasks. They therefore used deep RL to control the full body of the Robotis OP3 robots, allowing them to play one-on-one matches.
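The article does not include training code, but conceptually the learned controller is a small neural network that maps the robot's proprioceptive observations (joint angles, IMU readings, ball and opponent positions) to target joint positions at a fixed control rate. Here is a minimal PyTorch sketch of such an actor; the observation size, hidden sizes, and layer choices are illustrative assumptions rather than DeepMind's actual architecture, though the 20-dimensional action does match the OP3's 20 actuated joints.

```python
import torch
import torch.nn as nn

class SoccerActor(nn.Module):
    """Illustrative policy network: proprioception + game features -> joint targets.

    The observation size and architecture are placeholders; only the
    20-dimensional action (one target per OP3 joint) reflects the real robot.
    """
    def __init__(self, obs_dim: int = 89, act_dim: int = 20, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # normalized joint targets in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# One control step: observe, compute joint targets, send them to the servos.
policy = SoccerActor()
obs = torch.zeros(1, 89)       # stacked joint angles, IMU, ball/opponent positions (placeholder)
joint_targets = policy(obs)
print(joint_targets.shape)     # torch.Size([1, 20])
```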

With this training method, they aimed to teach the robots a wide variety of skills, such as walking, kicking, getting up after a fall, scoring goals, and defending. They divided the training pipeline into two stages:

Figure: agent training setup for the two-stage soccer pipeline (Google DeepMind).

In the first stage, they trained two separate skill policies: one for getting up from the ground and another for scoring goals against an untrained opponent.

For the get-up skill, they used a sequence of target poses to bias the policy towards a stable and collision-free trajectory. For the soccer skill, the agent was trained to score as many goals as possible against an untrained opponent.
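The exact reward terms are not reproduced in the article, but the idea of biasing the get-up policy toward a sequence of target poses can be sketched as a shaping term that rewards staying close to an interpolated reference pose. Everything below (pose count, weighting, the exponential kernel) is an assumption for illustration.

```python
import numpy as np

def getup_pose_reward(joint_angles: np.ndarray,
                      target_poses: np.ndarray,
                      phase: float) -> float:
    """Illustrative shaping reward: stay close to an interpolated target pose.

    `target_poses` is a (K, num_joints) array of key poses for the get-up motion
    (e.g. roll over -> kneel -> stand) and `phase` in [0, 1] selects where we are
    in the sequence. The real poses and weights are not public; this only shows
    how a policy can be biased toward a stable, collision-free trajectory.
    """
    k = phase * (len(target_poses) - 1)
    lo, hi = int(np.floor(k)), int(np.ceil(k))
    target = target_poses[lo] + (k - lo) * (target_poses[hi] - target_poses[lo])
    error = np.linalg.norm(joint_angles - target)
    return float(np.exp(-2.0 * error ** 2))  # 1.0 on the reference pose, decays with distance

# Toy usage: 20 joints, three key poses
poses = np.zeros((3, 20))
print(getup_pose_reward(np.zeros(20), poses, phase=0.5))  # 1.0
```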

In the second stage, these skills were distilled into a single agent, which was then trained with self-play: the opponent in each match was sampled from a pool of partially trained copies of the agent itself, so the agent faced increasingly stronger opposition as training progressed.
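In code, self-play of this kind usually boils down to keeping snapshots of the agent and sampling one as the opponent for each match. The snapshot frequency and uniform sampling below are assumptions; the article only says opponents came from partially trained copies of the agent.

```python
import random

class OpponentPool:
    """Illustrative self-play pool: store snapshots of the agent, sample opponents."""

    def __init__(self, max_size: int = 50):
        self.snapshots = []
        self.max_size = max_size

    def add_snapshot(self, policy_state: dict) -> None:
        self.snapshots.append(policy_state)
        if len(self.snapshots) > self.max_size:
            self.snapshots.pop(0)  # drop the oldest copy

    def sample_opponent(self) -> dict:
        # Uniform over the pool; weighting toward recent (stronger) snapshots
        # is another common choice.
        return random.choice(self.snapshots)

# Sketch of the surrounding training loop
pool = OpponentPool()
pool.add_snapshot({"step": 0, "weights": None})
for step in range(1, 5):
    opponent = pool.sample_opponent()
    # ... play a 1v1 episode of agent vs. `opponent`, update the agent ...
    if step % 2 == 0:
        pool.add_snapshot({"step": step, "weights": None})
```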

They used policy distillation to enable the agent to learn from the skill policies, regularizing the agent’s policy towards the relevant skill policy depending on the agent’s state.
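A common way to implement that kind of regularization is a per-state KL penalty pulling the agent's action distribution toward the relevant teacher (skill) policy. The sketch below assumes Gaussian policies and a state-dependent weight; the exact KL direction and schedule DeepMind used are not spelled out in the article.

```python
import torch

def distillation_loss(agent_mean, agent_logstd,
                      teacher_mean, teacher_logstd,
                      weight: torch.Tensor) -> torch.Tensor:
    """Illustrative distillation term: per-state KL(agent || teacher) for Gaussian policies.

    `weight` encodes which teacher matters in the current state, e.g. the get-up
    teacher when the robot is on the ground, the soccer teacher when it is upright.
    """
    var_a, var_t = (2 * agent_logstd).exp(), (2 * teacher_logstd).exp()
    kl = (teacher_logstd - agent_logstd
          + (var_a + (agent_mean - teacher_mean) ** 2) / (2 * var_t) - 0.5)
    return (weight * kl.sum(dim=-1)).mean()

# Toy usage: batch of 4 states, 20-dimensional actions
B, A = 4, 20
loss = distillation_loss(torch.zeros(B, A), torch.zeros(B, A),
                         torch.zeros(B, A), torch.zeros(B, A),
                         weight=torch.ones(B))
print(loss.item())  # 0.0 when agent and teacher already match
```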

This allowed the agent to integrate previously learned skills, refine them for the full 1v1 soccer task, and anticipate the opponent's actions. To improve the robustness of the policy, aid exploration, and enable safe transfer to real robots, the researchers also applied reward shaping, domain randomization, and random perturbations during training.
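Domain randomization and perturbations are typically implemented by resampling physics parameters at the start of every training episode and occasionally shoving the robot. The parameter names and ranges below are placeholders to show the pattern; the values actually used for the OP3 experiments are not given in the article.

```python
import numpy as np

# Illustrative domain-randomization ranges for sim-to-real transfer (placeholders).
RANDOMIZATION = {
    "floor_friction":  (0.5, 1.2),   # coefficient of friction
    "joint_damping":   (0.8, 1.2),   # multiplier on nominal damping
    "torso_mass":      (0.9, 1.1),   # multiplier on nominal mass
    "action_delay_ms": (0.0, 20.0),  # control latency
}

def sample_episode_params(rng: np.random.Generator) -> dict:
    """Resample one set of physics parameters at the start of each episode."""
    return {name: float(rng.uniform(lo, hi)) for name, (lo, hi) in RANDOMIZATION.items()}

def random_push(rng: np.random.Generator, prob: float = 0.05) -> np.ndarray:
    """Occasional external perturbation (a shove on the torso), zero on most steps."""
    if rng.random() < prob:
        return rng.uniform(-5.0, 5.0, size=3)  # force in Newtons, placeholder magnitude
    return np.zeros(3)

rng = np.random.default_rng(0)
print(sample_episode_params(rng))
print(random_push(rng))
```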

The robots were trained in a simulated environment using the MuJoCo physics engine and then the resulting policy was directly deployed on real Robotis OP3 miniature humanoid robots without any fine-tuning.
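For readers curious what that simulation loop looks like, here is a minimal MuJoCo stepping loop with a toy one-joint model (the OP3 scene DeepMind used is not part of the article). The XML and the placeholder policy are illustrative only; the point is the simulate, observe, act pattern in which the soccer policy was trained before being copied to hardware.

```python
import mujoco
import numpy as np

# Toy one-joint model; the real training used a full OP3 humanoid scene.
TOY_XML = """
<mujoco>
  <worldbody>
    <body name="link" pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.5" size="0.05"/>
    </body>
  </worldbody>
  <actuator>
    <position joint="hinge" kp="10"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(TOY_XML)
data = mujoco.MjData(model)

def policy(obs: np.ndarray) -> np.ndarray:
    """Placeholder for the learned controller (here: drive the joint back to zero)."""
    return -obs[:1]

for _ in range(1000):
    obs = np.concatenate([data.qpos, data.qvel])  # proprioceptive observation
    data.ctrl[:] = policy(obs)                    # position targets, like the OP3's servos
    mujoco.mj_step(model, data)

print("final joint angle:", float(data.qpos[0]))
```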

Here Are the Results

The trained robots exhibited impressive performance in real-world tests. They outperformed specialized, manually designed controllers in key behaviours such as walking (181% faster), turning (302% faster), getting up (63% less time), and kicking (34% faster with a run-up approach).

The robots also demonstrated opponent awareness, adaptive footwork, and the ability to quickly recover from falls:

Our players were able to walk, turn, kick and stand up faster than manually programmed skills on this type of robot. 🔁

They could also combine movements to score goals, anticipate ball movements and block opponent shots – thereby developing a basic understanding of a 1v1 game. pic.twitter.com/1Bty4q9tDN

— Google DeepMind (@GoogleDeepMind) April 11, 2024

During 1v1 soccer matches, the robots showcased a variety of emergent behaviours, including agile movements, recovery from falls, object interaction, and strategic behaviours such as defending and protecting the ball with their bodies.

Remarkably, these robots moved faster than robots driven by traditional scripted controllers, suggesting the framework could also orchestrate more intricate interactions among multiple robots.

They smoothly transitioned between different behaviours and adapted their tactics based on the game context. This approach enabled emergent behaviours to be discovered and optimized for specific contexts, allowing the agent to learn and adapt more effectively.

The results showed that pretraining separate soccer and get-up skills was crucial for success, as attempting to learn end-to-end without these separate skills led to suboptimal solutions. By using a minimal set of pre-trained skills, they simplified reward design, improved exploration, and avoided poor locomotion outcomes.

The ability of the robots to learn and adapt to various complicated situations, combine different skills, and make strategic decisions during soccer gameplay highlights the potential of deep reinforcement learning in the field of AI-based robotics.

Such an approach is a step toward general-purpose robots, rather than robots trained for one specific task.

The Google DeepMind team is also working on TacticAI, a system that provides football experts with tactical insights, mainly on corner kicks.

Conclusion

The success achieved by researchers at Google DeepMind marks a significant step forward in the creation of intelligent autonomous robots. The deep reinforcement learning approach to training, combined with techniques like domain randomization and self-play, can enable robots to learn complex tasks and adapt to dynamic environments.

Dhruv Kudalkar

Hello, I'm Dhruv Kudalkar, a final year undergraduate student pursuing a degree in Information Technology. My research interests revolve around Generative AI and Natural Language Processing (NLP). I constantly explore new technologies and strive to stay up-to-date in these fields, driven by a passion for innovation and a desire to contribute to the ever-evolving landscape of intelligent systems.
