Imagine a robot that could fold your laundry, make your bed, cook dinner or restock milk at the grocery store. Engineers have long taught robots single tasks, but getting them to handle more complicated, variable jobs has been difficult — despite heavy investment in robotics.
A research team in Switzerland reports progress toward robots that can follow complex human instructions. The advance opens possibilities for useful helpers but also prompts questions about whether such machines could someday be turned toward harm.
Sthithpragya Gupta, a robotics researcher at École Polytechnique Fédérale de Lausanne, says he dreams of a robot that can make his coffee: “If I could just say ‘a little bit of sugar, a bit more creamer,’ that would be a dream come true.” A central challenge is adaptability. Robots can be trained to reliably repeat a precise action, such as a backhand shot, but small changes in the environment or task often break those behaviors. Humans adapt naturally; transferring that flexibility to robots has proved hard.
Gupta and colleagues published a paper in Science Robotics describing a new machine-learning method that leans on what they call kinematic intelligence — a robot’s internal understanding of how its own body can move safely through space. In demonstration videos, single-arm robots watch a human toss a ball into a small container. The robots then pick up a ball and mimic the action, compensating for their own positions and nonhuman bodies. The learned behavior can be passed on to other robots.
Robert Platt, an engineering and robotics researcher at Northeastern University, called the work a “breakthrough” and said it addresses a critical problem in robot learning. He cautioned that predicting when such robots will become common is difficult — change can happen quickly, as recent advances in large language models showed.
If robots can self-correct and teach each other, does that make them self-aware? Some scholars say no. Susan Schneider, who studies AI at Florida Atlantic University, notes that while these systems display impressive learning, they lack the felt experience that defines consciousness. “Consciousness is the felt quality of experience,” she says, the inner sensation of drinking an espresso or seeing a sunset, which these machines do not have.
Still, the technology raises ethical and safety concerns. Schneider warns that later versions could be weaponized. The researchers have built safety protocols to prevent harm, but they acknowledge the need for broader guardrails. Gupta urges regulatory frameworks to govern who operates robots and how they are used.
Humans stand at an inflection point with robotics. The work is exciting and promising, but its future direction — beneficial or dangerous — remains uncertain.