Scientists have trained a four-legged robot to play badminton against human opponents, with the machine darting across the court to keep up rallies of up to 10 shots.
By combining whole-body movement with visual perception, the robot, called “ANYmal,” learned with the help of artificial intelligence (AI) to reach the shuttlecock and return it successfully over the net.
The feat shows that four-legged robots can serve as opponents in “complex and dynamic sports scenarios.”
ANYmal is a four-legged, dog-like robot that weighs 110 pounds (50 kilograms) and stands about 1.5 feet (0.5 meters) tall. With their four legs, ANYmal-like quadruped robots can move across challenging terrain and clamber up and over obstacles.
Researchers have previously added arms to these dog-like machines and taught them to grab handles to fetch specific objects or open doors. However, coordinating limb control with visual perception in a dynamic environment remains a challenge in robotics.
Related: Watch a “robot dog” scramble through a basic parkour course with the help of AI
“Sports are suitable for this type of research because they allow for gradually increasing competitiveness and difficulty,” Yuntao Ma, who carried out the work as a robotics researcher at ETH Zürich and is now at the startup Light Robotics, told Live Science.
Teaching new dogs new tricks
In the study, Ma and his team attached a dynamic arm holding a badminton racket at a 45-degree angle to a standard ANYmal robot.
With the arm attached, the robot stood 5 feet 3 inches (1.6 m) tall and had 18 joints: three in each of its four legs and six in the arm. The researchers designed a control system that coordinates the arm and leg movements.
The team also added a stereo camera, with two lenses mounted side by side at the front of the robot’s body. The two lenses allowed the robot to process visual information about the incoming shuttlecock in real time and work out where it was heading.
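As a rough illustration of how two lenses can be turned into a 3D fix, the sketch below triangulates a shuttlecock from matched detections in the left and right images and extrapolates a drag-free landing point. The camera parameters, coordinate convention and simplified flight model are assumptions for illustration, not the robot’s actual setup.

```python
import numpy as np

# Illustrative stereo parameters (not the real ANYmal hardware values).
FOCAL_PX = 600.0       # focal length in pixels
BASELINE_M = 0.12      # distance between the two lenses, in meters
CX, CY = 320.0, 240.0  # principal point (image center), in pixels

def triangulate(u_left, v_left, u_right):
    """3D shuttlecock position in the camera frame from a matched detection
    in the left and right images (standard pinhole stereo model)."""
    disparity = u_left - u_right           # pixel shift between the two views
    z = FOCAL_PX * BASELINE_M / disparity  # depth grows as disparity shrinks
    x = (u_left - CX) * z / FOCAL_PX
    y = (v_left - CY) * z / FOCAL_PX       # y points downward in this frame
    return np.array([x, y, z])

def predict_landing(p0, p1, dt, floor_y=1.0, g=9.81):
    """Crude landing estimate: take the velocity between two consecutive 3D
    fixes, assume ballistic flight (air drag ignored) and extrapolate until
    the shuttlecock reaches floor height, 1 m below the camera here."""
    v = (p1 - p0) / dt
    # Solve floor_y = p1[1] + v[1]*t + 0.5*g*t^2 for the positive root.
    a, b, c = 0.5 * g, v[1], p1[1] - floor_y
    t_hit = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p1 + v * t_hit + 0.5 * np.array([0.0, g, 0.0]) * t_hit**2

# Example: two detections taken 20 milliseconds apart.
p0 = triangulate(400.0, 200.0, 340.0)
p1 = triangulate(395.0, 205.0, 333.0)
print(predict_landing(p0, p1, dt=0.02))
```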
The robot was then taught to become a badminton player through reinforcement learning. In this type of machine learning, a robot explores its environment and uses trial and error to learn how to find and track the shuttlecock, move toward it and swing the racket.
To do this, the researchers first created a simulated environment consisting of a badminton court, with the robot’s virtual counterpart standing at its center. Virtual shuttlecocks were launched from near the center of the opponent’s court, and the robot was tasked with tracking their location and estimating their flight trajectory.
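To give a sense of what one simulated serve might look like, the sketch below randomizes the launch position, speed and angle and then integrates the shuttlecock’s flight with simple air drag. The ranges, coordinate convention and drag coefficient are illustrative assumptions rather than details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_launch():
    """Sample a serve from near the center of the opponent's court.
    Axes: x sideways, y down-court (robot at y = 0), z up; ranges are made up."""
    pos = np.array([rng.uniform(-0.5, 0.5),      # sideways offset (m)
                    rng.uniform(5.5, 6.5),       # distance down-court (m)
                    rng.uniform(1.0, 1.5)])      # launch height (m)
    speed = rng.uniform(8.0, 16.0)               # launch speed (m/s)
    elev = np.radians(rng.uniform(20.0, 50.0))   # elevation angle
    azim = np.radians(rng.uniform(-10.0, 10.0))  # sideways angle
    vel = speed * np.array([np.sin(azim),
                            -np.cos(azim) * np.cos(elev),  # toward the robot
                            np.sin(elev)])
    return pos, vel

def simulate_flight(pos, vel, dt=0.005, k_drag=0.5, g=9.81):
    """Integrate the flight with quadratic air drag until the shuttlecock
    reaches the floor; returns the sampled trajectory."""
    traj = [pos.copy()]
    while pos[2] > 0.0:
        acc = np.array([0.0, 0.0, -g]) - k_drag * np.linalg.norm(vel) * vel
        vel = vel + acc * dt
        pos = pos + vel * dt
        traj.append(pos.copy())
    return np.array(traj)

p, v = sample_launch()
trajectory = simulate_flight(p, v)
print(f"{len(trajectory)} steps, lands at x={trajectory[-1][0]:.2f} m, y={trajectory[-1][1]:.2f} m")
```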
The researchers then created a rigorous training regimen to teach ANYmal how to strike shuttlecocks. A virtual coach rewarded the robot for a variety of factors, such as racket position, racket head angle and swing speed. Importantly, the swing rewards were time-based to encourage accurate, well-timed hits.
Because the shuttlecock could land anywhere on the court, the robot was also rewarded for moving across the court efficiently and for not speeding up unnecessarily. ANYmal’s goal was to maximize the total reward it collected across all of the trials.
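As a rough illustration of how such reward shaping can be written down, the sketch below combines hypothetical terms for racket position, racket-head angle, a time-gated swing speed and movement efficiency. The specific terms and weights are assumptions in the spirit of the description above, not the ones used in the study.

```python
import numpy as np

# Hypothetical weights for each reward term.
W_POS, W_ANGLE, W_SPEED, W_MOVE, W_ACCEL = 2.0, 1.0, 1.0, 0.5, 0.1

def step_reward(racket_pos, hit_point, racket_normal, desired_normal,
                swing_speed, time_to_impact, base_vel, base_accel,
                target_swing_speed=12.0, timing_window=0.05, max_base_speed=2.0):
    """Per-step reward sketch: dense shaping toward an accurate, well-timed
    swing, minus penalties for wasteful base motion."""
    # The racket should reach the predicted interception point...
    r_pos = np.exp(-np.linalg.norm(racket_pos - hit_point) ** 2)
    # ...with the racket head angled the right way...
    r_angle = float(np.dot(racket_normal, desired_normal))
    # ...and swing fast, but only around the moment of impact (time-based reward).
    timing = np.exp(-(time_to_impact / timing_window) ** 2)
    r_speed = timing * min(swing_speed / target_swing_speed, 1.0)
    # Efficient movement: penalize speeding up beyond what is needed,
    # and penalize jerky acceleration of the robot's base.
    p_move = max(np.linalg.norm(base_vel) - max_base_speed, 0.0)
    p_accel = np.linalg.norm(base_accel)
    return (W_POS * r_pos + W_ANGLE * r_angle + W_SPEED * r_speed
            - W_MOVE * p_move - W_ACCEL * p_accel)
```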
Over 50 million simulated trials, the researchers trained a neural network that controls the movement of all 18 joints, allowing the robot to move toward the shuttlecock and hit it.
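To give a flavor of what such a control policy can look like, here is a minimal sketch of a neural network that maps an observation vector to 18 joint targets. The observation contents, layer sizes and PyTorch framing are illustrative assumptions, not the architecture used in the study.

```python
import torch
import torch.nn as nn

class BadmintonPolicy(nn.Module):
    """Sketch of a policy network with assumed sizes: the observation layout
    and hidden dimensions are illustrative, not the study's design."""
    def __init__(self, obs_dim=60, num_joints=18, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, num_joints),  # one target per joint (4 legs x 3 + 6 arm joints)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs might stack joint positions and velocities, the base state and the
        # estimated shuttlecock position and velocity; the output is a set of
        # joint targets for a lower-level controller to track.
        return self.net(obs)

policy = BadmintonPolicy()
joint_targets = policy(torch.randn(1, 60))  # one batch of observations
print(joint_targets.shape)                  # torch.Size([1, 18])
```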
Fast learners
After the simulation phase, the scientists transferred the neural network to the physical robot and put ANYmal through its paces in the real world.
Here, the robot had to find and track a bright orange shuttlecock launched by another machine, which allowed the researchers to control the speed, angle and landing location of the shuttlecocks. ANYmal had to move across the court and hit the shuttlecock fast enough to return it over the net toward the center of the opposing court.
Following extensive training, the researchers discovered that the robot could track the shuttlecock and accurately return it at a swing speed of around 39 feet per second (12 meters per second).
ANYmal also adjusted its movement patterns based on how far it had to travel to the shuttlecock and how much time it had to get there. When the shuttlecock was set to land about 1.6 feet (0.5 meters) away, the robot barely had to move, but at about 5 feet (1.5 meters), ANYmal scrambled on all four legs to reach it. At roughly 7 feet (2.2 m) away, the robot lunged toward the shuttlecock, briefly leaving the ground in a way that extended the arm’s reach toward the target by about 3 feet (1 m).
“It’s not that trivial to control a robot to look at the shuttlecock,” Ma said. If the robot keeps its eyes on the shuttlecock, it cannot move very quickly; but if it loses sight of it, it doesn’t know where it needs to go. “This trade-off has to happen in a somewhat intelligent way,” he said.
Ma was surprised by how well the robot learned to move all 18 joints in a coordinated way. That is particularly difficult because each joint’s motor is controlled independently, yet in the final movement they must all operate in tandem.
The team also found that, after hitting a shot, the robot spontaneously began moving back toward the center of the court, similar to how human players position themselves for the next return.
However, the researchers noted that the robot does not take its opponent’s movements into account, which is an important cue human players use to predict the shuttlecock’s trajectory. Incorporating human pose estimation could help improve ANYmal’s performance, the team said in the study. A neck joint could also be added to let the robot keep the shuttlecock in view for longer, Ma pointed out.
He believes the research ultimately has applications beyond sports. Robots that can balance dynamic visual perception with agile movement could, for example, support debris removal during disaster-relief efforts, he said.