Humanoid robot teleoperator manages to kick himself where it hurts
By Trevor Mogg | Published December 28, 2025
Humanoid robots have come on leaps and bounds in recent times, and much is expected of the advanced machines in the coming year.
The process of training humanoid robots can take various forms. Unitree's G1 robot, for example, is trained partly through teleoperation, whereby a human operator wears a motion-capture suit or uses handheld controllers to perform particular movements or entire tasks while the robot mirrors those movements in real time.
Each session generates demonstration data that feeds into imitation-learning algorithms, giving the robot new autonomous skills. Additional reinforcement learning then hones the resulting model, making the movements smoother and more effective.
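To make the imitation-learning step a little more concrete, here is a minimal behavior-cloning sketch in Python with PyTorch. It is purely illustrative and not Unitree's actual pipeline: the dimensions, network architecture, and synthetic stand-in data are all assumptions.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only; a real humanoid's state and action
# spaces are more complex than this.
OBS_DIM = 48   # e.g., joint positions/velocities plus IMU readings (assumed)
ACT_DIM = 23   # e.g., target joint angles (assumed)

# A simple MLP policy: maps the robot's observed state to a motor command.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in for logged teleoperation data: each observation is paired
# with the action the human operator demonstrated at that moment.
# Real training would load recorded demonstrations instead.
demo_obs = torch.randn(1024, OBS_DIM)
demo_actions = torch.randn(1024, ACT_DIM)

# Behavior cloning: regress the policy's output onto the demonstrated
# actions so the robot can reproduce the operator's movements on its own.
for epoch in range(100):
    predicted = policy(demo_obs)
    loss = nn.functional.mse_loss(predicted, demo_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, a policy cloned this way would then typically be fine-tuned with reinforcement learning, which is the honing step described above.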
But teleoperation clearly carries some risks, especially if you get too close to the robot you’re training.
Take this recent viral video (below), which appears to show Unitree's G1 robot in a training session. The teleoperator performs a series of martial arts moves as he works his way around a small space shared with the humanoid robot.

https://twitter.com/hapico0109/status/2013480169840001437?s=20
Everything appears to be going smoothly as the robot mirrors the teleoperator’s kicks with great precision.
But the teleoperator then turns slightly and launches a big kick. The robot, mimicking his movements, delivers an identical kick, catching him right where it hurts.
He drops to the ground, letting out a yelp of pain. The robot, of course, hits the ground too. Had it been equipped with speech capabilities, we'd likely have heard a yelp from it as well.
The teleoperator learned the hard way that training a humanoid robot using this method has to be done with great care and attention. One wrong move and you could find yourself writhing on the ground in agony.
Unitree unveiled the impressive G1 humanoid robot in 2024 and made it available to purchase in early 2025 for around $13,000. The Chinese company is targeting research institutions, universities, and businesses for R&D in humanoid robotics and AI.