Humanoid Robots Still Can't Master Physics

March 23, 2026 · 4 min read

A decade ago, humanoid robots were notorious for their clumsy falls and awkward movements, struggling with basic tasks that humans perform effortlessly. Today, companies like Tesla and Agility Robotics showcase sleek robots performing complex maneuvers, suggesting a revolution in robotics. Yet despite dramatic advances in artificial intelligence and hardware, these humanoids still can't reliably handle everyday tasks like navigating all types of stairs or opening various doors. This persistent limitation reveals a fundamental gap in robotic capabilities that goes beyond flashy demonstrations.

The core problem, according to roboticists interviewed for this article, is that humanoid robots haven't truly mastered physics—specifically, the control of force and inertia. While AI has transformed how robots perceive and plan, and new hardware has made them more nimble, they still lack the sophisticated force regulation that humans develop naturally through years of interacting with their environment. This missing capability prevents them from achieving what researchers call "multipurpose mobile manipulation"—the ability to move anywhere and handle almost anything without causing damage.

Three major technological shifts have brought humanoid robotics to its current state. First, deep learning neural networks running on fast GPU chips dramatically improved computer vision and reinforcement learning, allowing robots to better perceive and interact with their environments. Then in 2016, a revolution in actuation replaced heavy hydraulic mechanisms with smaller, proprioceptive electric motors that gave legged robots animal-like flexibility. Most recently, large language models adapted for robotics enabled autonomous planning of multistep tasks, creating a unified pipeline for perception, planning, and control.

These advances produced remarkable improvements, as seen in Boston Dynamics' Atlas robot. Where the 2015 version moved haltingly and won second place in the DARPA Robotics Challenge, today's Atlas performs fluid breakdancing moves and autonomously moves irregular items between bins while dealing with human interference. This transformation came from using reinforcement learning to train neural networks as whole-body controllers through countless digital simulations, eliminating the need for hand-engineered algorithms that modeled simplified physics.
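The training loop described above can be caricatured in a few lines. The sketch below is purely illustrative and is not Boston Dynamics' pipeline: it uses random search in place of a real RL algorithm, a fabricated one-number "balance" task in place of a physics simulator, and randomized disturbances standing in for domain randomization. The point is only the shape of the process: sample candidate controllers, score them across many randomized simulated episodes, keep the best.

```python
import random

def simulate(policy_gain: float, push: float) -> float:
    """Toy 'balance' episode: the controller tries to cancel a random
    push; reward is the negative residual error. Dynamics are fabricated
    for illustration, not drawn from any real simulator."""
    correction = policy_gain * push
    return -abs(push - correction)

def train(episodes: int = 2000, seed: int = 0) -> float:
    """Random-search stand-in for RL: sample a candidate policy, score
    it on a batch of randomized pushes, keep the best parameters."""
    rng = random.Random(seed)
    best_gain, best_reward = 0.0, float("-inf")
    for _ in range(episodes):
        gain = rng.uniform(0.0, 2.0)                       # candidate policy
        pushes = [rng.uniform(-1, 1) for _ in range(10)]   # domain randomization
        reward = sum(simulate(gain, p) for p in pushes)
        if reward > best_reward:
            best_gain, best_reward = gain, reward
    return best_gain

print(train())  # should land near 1.0, the gain that exactly cancels pushes
```

Real systems replace the random search with gradient-based RL (e.g. policy-gradient methods) and the toy episode with a full rigid-body simulator, but the sample-score-improve structure is the same.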

However, the data shows that position-based control—where robots learn to move between defined poses—only goes so far. As Pulkit Agrawal of MIT's Improbable AI Lab explains, "These things are about [controlling] forces, if you want to do them at speeds of a human." Current approaches often work around this limitation by having robots move slowly, like using a car to carefully push a chair into position. This explains why Atlas moves deliberately when grasping auto parts but glides smoothly when only interacting with the floor.
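The distinction Agrawal draws can be made concrete with a one-joint toy model. In the hedged sketch below (all gains, limits, and dynamics are invented for illustration), a stiff position controller keeps ramping up torque until the pose error closes, even when an obstacle is in the way, while a compliant controller with low gains and a torque cap bounds the force it exerts on the environment. This is why a position-controlled robot must move slowly near contact: slowness is what keeps those uncontrolled contact forces small.

```python
def position_controller(q, qd, q_target, kp=500.0, kd=20.0):
    """Stiff PD position control: commanded torque grows with pose
    error, with no regard for contact forces."""
    return kp * (q_target - q) - kd * qd

def compliant_controller(q, qd, q_target, kp=20.0, kd=5.0, tau_max=3.0):
    """Compliant control: low gains plus a torque cap explicitly
    bound the force pushed into the environment."""
    tau = kp * (q_target - q) - kd * qd
    return max(-tau_max, min(tau_max, tau))

# A joint blocked by an obstacle at q = 0 while the target is 0.5 rad.
q, qd = 0.0, 0.0
stiff = position_controller(q, qd, q_target=0.5)      # 500 * 0.5 = 250.0
compliant = compliant_controller(q, qd, q_target=0.5) # capped at 3.0
print(stiff, compliant)
```

The stiff controller grinds 250 units of torque into the obstacle; the compliant one saturates at its 3-unit cap and yields, which is roughly what a human wrist does against a stuck doorknob.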

The hardware itself has evolved to help with force regulation. Proprioceptive electric actuators, pioneered by Sangbae Kim at MIT, act as both motors and force sensors, eliminating the need for separate force sensors and reducing complexity. These "quasi-direct drive" actuators convert electrical current into proportional force with minimal error, making them transparent to force control. Yet even with this hardware improvement, neural networks trained through simulation don't explicitly learn the physics of force—they learn generalized policies for body positioning where force regulation happens indirectly or as a side effect.
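The "transparency" of quasi-direct-drive actuators comes from a simple physical relationship: motor torque is approximately proportional to phase current (tau ≈ Kt · I), and a low gear ratio keeps that relationship visible at the joint. The sketch below shows the idea with made-up constants (the Kt and gear-ratio values are illustrative, not from any real actuator): reading current gives a force estimate, and inverting the same model lets software command a desired force.

```python
KT = 0.068        # motor torque constant, N·m per amp (illustrative value)
GEAR_RATIO = 6.0  # low ratio keeps the drivetrain "transparent" to force

def joint_torque_from_current(current_amps: float) -> float:
    """Estimate joint output torque from measured motor current,
    using the proportional model tau = Kt * I * gear ratio."""
    return KT * current_amps * GEAR_RATIO

def current_for_torque(torque_nm: float) -> float:
    """Invert the same model to command a desired contact force."""
    return torque_nm / (KT * GEAR_RATio) if False else torque_nm / (KT * GEAR_RATIO)

tau = joint_torque_from_current(10.0)  # 10 A measured at the motor
print(round(tau, 3))                   # 0.068 * 10 * 6 = 4.08 N·m
```

With a high gear ratio, friction and reflected inertia in the gearbox would swamp this proportionality, which is why heavily geared (or hydraulic) joints need separate force sensors while quasi-direct drives do not.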

Experts agree that current workarounds won't deliver the all-purpose dexterity needed for practical humanoid robots. Carolina Parada of Google DeepMind acknowledges that while their vision-language-action models go surprisingly far with position-based control, "it's very likely you [would] have to do some additional work" for true mastery. Humans feel forces working against them when opening bottles or turning doorknobs, but humanoids mostly don't, lacking the complex musculoskeletal and nervous systems that evolution gave humans.

Researchers are exploring different paths forward. Agrawal studies combining force control with reinforcement learning by having humanoids learn compliant behaviors in simulation. Russ Tedrake of MIT advocates for large-scale data collection and pretrained models similar to ChatGPT's development. Frank Park, author of the textbook "Modern Robotics," believes current AI approaches need complete rebuilding to make physics fundamentals learnable at a foundational level. The field has reached what Tedrake calls "the Volta stage"—making progress without fully understanding the fundamentals, much like early electricity experiments preceded Maxwell's equations.

What's striking isn't just the technical debate but how the scientific ethos has changed. Jonathan Hurst of Agility Robotics recalls warnings that reinforcement learning might enable robots to walk before we understood how it works—and that's essentially what's happening. Yet despite this gap in fundamental understanding, the technological bones are good, and progress continues. The question remains whether breakthroughs will come from better hardware, smarter software, or some combination that finally gives robots the physical intelligence humans take for granted.