Why the humanoid workforce is running late

Humanoid hype is dividing frenzied investors from skeptical roboticists, with reporters caught in between.

Image: a lackluster humanoid robot. Stephanie Arnett/MIT Technology Review | Adobe Stock

On Thursday I watched Daniela Rus, one of the world’s top experts on AI-powered robots, address a packed room at a Boston robotics expo. Rus spent a portion of her talk busting the notion that giant fleets of humanoids are already making themselves useful in manufacturing and warehouses around the world. 

That might come as a surprise. For years AI has made it faster to train robots, and investors have responded feverishly. Figure AI, a startup that aims to build general-purpose humanoid robots for both homes and industry, is looking at a $1.5 billion funding round (more on Figure shortly), and there are commercial experiments with humanoids at Amazon and auto manufacturers. Bank of America predicts that wider adoption of these robots is just around the corner, with a billion humanoids at work by 2050.

But Rus and many others I spoke with at the expo suggest that this hype just doesn’t add up.

Humanoids “are mostly not intelligent,” she said. Rus showed a video of herself speaking to an advanced humanoid that smoothly followed her instruction to pick up a watering can and water a nearby plant. It was impressive. But when she asked it to “water” her friend, the robot did not consider that humans don’t need watering like plants and moved to douse the person. “These robots lack common sense,” she said. 

I also spoke with Pras Velagapudi, the chief technology officer of Agility Robotics, who detailed the physical limitations the company has to overcome as well. To be strong, a humanoid needs a lot of power and a big battery. The stronger and heavier you make it, the less time it can run without charging, and the more you need to worry about safety. A robot like this is also complex to manufacture.

Some impressive humanoid demos don’t overcome these core constraints so much as they showcase other capabilities: nimble robotic hands, for instance, or the ability to converse with people via a large language model. But those capabilities don’t necessarily translate well to the jobs that humanoids are supposed to be taking over (it’s more useful to program a robot with a long list of detailed instructions than to speak to it, for example).

This is not to say fleets of humanoids won’t ever join our workplaces, but rather that the adoption of the technology will likely be drawn out, industry specific, and slow. It’s related to what I wrote about last week: To people who consider AI a “normal” technology, rather than a utopian or dystopian one, this all makes sense. The technology that succeeds in an isolated lab setting will appear very different from the one that gets commercially adopted at scale. 

All of this sets the scene for what happened with one of the biggest names in robotics last week. Figure AI has raised a tremendous amount of investment for its humanoids, and founder Brett Adcock claimed on X in March that the company was the “most sought-after private stock in the secondary market.” Its most publicized work is with BMW, and Adcock has shown videos of Figure’s robots working to move parts for the automaker, saying that the partnership took just 12 months to launch. Adcock and Figure have generally not responded to media requests and don’t make the rounds at typical robot trade shows. 

In April, Fortune published an article quoting a spokesperson from BMW, alleging that the pair’s partnership involves fewer robots at a smaller scale than Figure has implied. On April 25, Adcock posted on LinkedIn that “Figure’s litigation counsel will aggressively pursue all available legal remedies—including, but not limited to, defamation claims—to correct the publication’s blatant misstatements.” The author of the Fortune article did not respond to my request for comment, and a representative for Adcock and Figure declined to say what parts of the article were inaccurate. The representative pointed me to Adcock’s statement, which lacks details. 

The specifics of Figure aside, I think this conflict is quite indicative of the tech moment we’re in. A frenzied venture capital market—buoyed by messages like the statement from Nvidia CEO Jensen Huang that “physical AI” is the future—is betting that humanoids will create the largest market for robotics the field has ever seen, and that someday they will essentially be capable of most physical work. 

But achieving that means clearing countless hurdles. We’ll need safety regulations for humans working alongside humanoids that don’t even exist yet. Deploying such robots successfully in one industry, like automotive, may not lead to success in others. And we’ll have to hope that AI solves lots of problems along the way. These are all things that roboticists have reason to be skeptical about.

Roboticists, from what I’ve seen, are normally a patient bunch. The first Roomba launched more than a decade after its conception, and it took more than 50 years to go from the first robotic arm ever to the millionth in production. Venture capitalists, on the other hand, are not known for such patience. 

Perhaps that’s why Bank of America’s new prediction of widespread humanoid adoption was met with enthusiasm by investors but enormous skepticism by roboticists. Aaron Prather, a director at the robotics standards organization ASTM, said on Thursday that the projections were “wildly off-base.” 

As we’ve covered before, humanoid hype is a cycle: One slick video raises the expectations of investors, which then incentivizes competitors to make even slicker videos. This makes it quite hard for anyone—a tech journalist, say—to peel back the curtain and find out how much impact humanoids are poised to have on the workforce. But I’ll do my darndest.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

