From VEX Robotics to Silicon Valley: Why Physical Intelligence Is Harder Than It Looks

According to ACM TechNews (Wednesday, December 17, 2025), ACM Fellow Rodney Brooks argues that Silicon Valley’s current obsession with humanoid robots is misguided and overhyped. Drawing on decades of experience, he contends that general-purpose, humanlike robots remain far from practical, unsafe to deploy widely, and unlikely to achieve human-level dexterity in the near future. Brooks cautions that investors are confusing impressive demonstrations and AI training techniques with genuine real-world capability. Instead, he argues that meaningful progress will come from specialized, task-focused robots designed to work alongside humans rather than replace them. The original report was published in The New York Times under the title “Rodney Brooks, the Godfather of Modern Robotics, Says the Field Has Lost Its Way.”

I read the New York Times coverage of Rodney Brooks’ argument that Silicon Valley’s current enthusiasm for humanoid robots is likely to end in disappointment. Brooks is widely respected in the robotics community. He co-founded iRobot and has played a major role in shaping modern robotics research. His critique is not anti-technology rhetoric but a perspective grounded in long experience with the practical challenges of engineering physical systems. He makes a similar case in his blog post, “Why Today’s Humanoids Won’t Learn Dexterity”.

Here are his core points as I understand them:

Why he thinks this boom will fizzle

  • The industry is betting huge sums on general-purpose humanoid robots that can do everything humans do (walk, manipulate objects, adapt to new tasks) using current AI methods. Brooks argues that the belief that such robots are within near-term reach is “pure fantasy,” because we still lack the basic sensing and physical dexterity that humans take for granted.
  • He emphasizes that visual data and generative models aren’t a substitute for true touch sensing and force control. Current training methods can’t teach a robot to use its hands with the precision and adaptability humans have.
  • Safety and practicality matter too. Humanoid robots that fall or make a mistake could be dangerous around people, which slows deployment and commercial acceptance.
  • He expects a big hype phase followed by a trough of disappointment—a period where money flows out of the industry because the technology hasn’t lived up to its promises.

Where I agree with him

I think Brooks is right that engineering the physical world is harder than it looks. Software breakthroughs like large language models (LLMs) are impressive, but even brilliant language AI doesn’t give a robot the equivalent of muscle, touch, balance, and real-world adaptability. Robots that excel at one narrow task (like warehouse arms or autonomous vacuum cleaners) don’t generalize to ambiguous, unpredictable environments like a home or workplace the way vision-based AI proponents hope. The history of robotics is full of examples where clever demos got headlines long before practical systems were ready.

It would be naive to assume that because AI is making rapid progress in language and perception, physical autonomy will quickly follow from the same methods.

Where I think he might be too pessimistic

Fully dismissing the long-term potential of humanoid robots seems premature. Complex technology transitions often take longer and go in unexpected directions. For example, self-driving cars have taken far longer than early boosters predicted, but we are seeing incremental deployments in constrained zones. Humanoid robots could follow a similar curve: rather than arriving as general-purpose helpers, they may find niches first (healthcare support, logistics, elder care) where the environment and task structure make success easier. Brooks himself acknowledges that robots will work alongside humans; he just doubts they will take a humanlike form in everyday life for decades.

Also, breakthroughs can come from surprising angles. It’s too soon to say that current research paths won’t yield solutions to manipulation, balance, and safety, even if those solutions aren’t obvious yet.

Bottom line

Brooks’ critique is not knee-jerk pessimism. It is a realistic engineering assessment grounded in decades of robotics experience. He is right to question hype and to emphasize that physical intelligence is fundamentally different from digital intelligence.

My experience in VEX Robotics reinforces many of his concerns, even though VEX robots are not humanoid. Building competition robots showed me how fragile physical systems can be. Small changes in friction, battery voltage, alignment, or field conditions routinely caused failures that no amount of clever code could fully anticipate. Success came from tightly scoped designs, extensive iteration, and task-specific mechanisms rather than general intelligence. That contrast makes the current humanoid hype feel misaligned with how robotics actually progresses in practice, where reliability and constraint matter more than appearance or breadth.

Dismissing the possibility of humanoid robots entirely may be too absolute, but expecting rapid, general-purpose success is equally misguided. Progress will likely be slower, more specialized, and far less dramatic than Silicon Valley forecasts suggest.

— Andrew


How Is Computational Linguistics Powering the Future of Robotics?

As someone who’s been involved in competitive robotics through VEX for several years and recently started diving into computational linguistics, I’ve been wondering: how do these two fields connect?

At first, it didn’t seem obvious. VEX Robotics competitions (like the one my team Ex Machina participated in at Worlds 2025) are mostly about designing, building, and coding autonomous and driver-controlled robots to complete physical tasks. There’s no direct language processing involved… at least not yet. But the more I’ve learned, the more I’ve realized that computational linguistics plays a huge role in making real-world robots smarter, more useful, and more human-friendly.

Here’s what I’ve learned about how these two fields intersect and where robotics is heading.


1. Human-Robot Communication

The most obvious role of computational linguistics in robotics is helping robots understand and respond to human language. This is powered by natural language processing (NLP), a core area of computational linguistics. Think about assistants like Alexa or social robots like Pepper. They rely on language models and parsing techniques to interpret what we say and give meaningful responses.

This goes beyond voice control. It’s about making robots that can hold conversations, answer questions, or even ask for clarification when something is unclear. For robots to work effectively with people, they need language skills, not just motors and sensors.
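
To make that concrete, here is a minimal sketch of the clarification behavior I mean. Everything in it (the intents, the replies, the keyword matching) is a toy I made up for illustration; real assistants use trained language models, not keyword lookups:

```python
import re

# Toy dialogue handler: maps a few keyword "intents" to canned replies
# and asks for clarification when nothing matches.
# Illustrative sketch only, not a production NLP pipeline.

INTENTS = {
    "battery": "My battery is at 82 percent.",
    "name": "I'm a demo robot for this blog post.",
    "stop": "Stopping all motors now.",
}

def respond(utterance: str) -> str:
    words = re.findall(r"[a-z']+", utterance.lower())
    for keyword, reply in INTENTS.items():
        if keyword in words:
            return reply
    # No intent matched: ask for clarification instead of guessing.
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond("Robot, stop!"))                 # -> "Stopping all motors now."
print(respond("Do the thing with the stuff"))  # -> clarification request
```

Even this crude version captures the key design choice: when the robot isn't sure what you meant, asking beats guessing.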


2. Task Execution and Instruction Following

Another fascinating area is how robots can convert human instructions into actual actions. For example, if someone says, “Pick up the red cup from the table,” a robot must break that down: What object? What location? What action?

This is where semantic parsing comes in—turning language into structured data the robot can use to plan its moves. In VEX, we manually code our autonomous routines, but imagine if a future version of our robot could listen to instructions in plain English and adapt its behavior in real time.
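
Here is a rough sketch of what that breakdown could look like in code. The grammar and field names are a toy I invented for this post; real semantic parsers handle far more varied phrasing:

```python
import re

# Minimal semantic parser: turn a fixed-form pick-up command into
# structured fields that a motion planner could consume.

PATTERN = re.compile(
    r"pick up the (?P<color>\w+) (?P<object>\w+) from the (?P<location>\w+)",
    re.IGNORECASE,
)

def parse_command(text: str) -> dict | None:
    match = PATTERN.search(text)
    if match is None:
        return None  # command didn't fit the toy grammar
    return {"action": "pick_up", **match.groupdict()}

print(parse_command("Pick up the red cup from the table"))
# -> {'action': 'pick_up', 'color': 'red', 'object': 'cup', 'location': 'table'}
```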


3. Understanding Context and Holding a Conversation

Human communication is complex. We often leave things unsaid, refer to past ideas, or use vague phrases like “that one over there.” Research in discourse modeling and context tracking helps robots manage this complexity.

This is especially useful in collaborative environments. Think hospital robots assisting nurses, or factory robots working alongside people. They need to understand not just commands but also user intent, tone, and changing context.
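
A bare-bones illustration of context tracking is below. The object list and pronoun handling are invented stand-ins; real discourse models track much richer state than one remembered noun:

```python
import re

# Tiny dialogue-state tracker: remembers the last object mentioned so a
# later "it" or "that one" can be resolved against earlier context.

KNOWN_OBJECTS = {"cup", "wrench", "box"}

class ContextTracker:
    def __init__(self):
        self.last_object = None  # most recently mentioned object

    def resolve(self, utterance: str) -> str | None:
        words = set(re.findall(r"[a-z']+", utterance.lower()))
        mentioned = words & KNOWN_OBJECTS
        if mentioned:
            self.last_object = mentioned.pop()
        # If no object was named, fall back to the remembered one.
        return self.last_object

tracker = ContextTracker()
print(tracker.resolve("Grab the wrench"))    # -> wrench
print(tracker.resolve("Now hand it to me"))  # -> wrench (from context)
```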


4. Multimodal Understanding

Robots don’t just rely on language. They also use vision, sensors, and spatial awareness. A good example is interpreting a command like, “Hand me the tool next to the blue box.” The robot has to match those words with what it sees.

This is called multimodal integration, where the robot combines language and visual information. In my own robotics experience, we’ve used vision sensors to detect field elements, but future robots will need to combine that visual input with spoken instructions to act intelligently in dynamic spaces.
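
Here is a toy version of that matching step. The detections are hard-coded stand-ins for what a vision sensor would report, and I'm reducing "next to" to "closest in Euclidean distance," which is a big simplification:

```python
import math

# Multimodal grounding sketch: combine fake vision detections
# (label, x, y) with a spatial phrase like "next to the blue box".

detections = [
    {"label": "blue box", "x": 1.0, "y": 2.0},
    {"label": "tool",     "x": 1.2, "y": 2.1},
    {"label": "tool",     "x": 4.0, "y": 0.5},
]

def find_next_to(target_label: str, anchor_label: str) -> dict | None:
    anchors = [d for d in detections if d["label"] == anchor_label]
    targets = [d for d in detections if d["label"] == target_label]
    if not anchors or not targets:
        return None
    anchor = anchors[0]
    # Interpret "next to" as smallest distance to the anchor object.
    return min(targets, key=lambda d: math.hypot(d["x"] - anchor["x"],
                                                 d["y"] - anchor["y"]))

print(find_next_to("tool", "blue box"))
# -> {'label': 'tool', 'x': 1.2, 'y': 2.1}, the tool nearest the blue box
```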


5. Emotional and Social Intelligence

This part really surprised me. Sentiment analysis and affective computing are helping robots detect emotions in voice or text, which makes them more socially aware.

This could be important for assistive robots that help the elderly, teach kids, or support people with disabilities. It’s not just about understanding words. It’s about understanding people.
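
As a toy illustration, even a lexicon-based sentiment check could let a robot back off when a user sounds frustrated. The word lists and responses below are invented for the demo; real affective computing relies on trained models and tone of voice, not word counting:

```python
import re

# Lexicon-based sentiment toy: count positive vs. negative words and
# soften the robot's behavior when the score goes negative.

POSITIVE = {"great", "thanks", "good", "perfect", "love"}
NEGATIVE = {"no", "wrong", "stop", "frustrated", "bad"}

def sentiment(utterance: str) -> int:
    words = re.findall(r"[a-z']+", utterance.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def react(utterance: str) -> str:
    if sentiment(utterance) < 0:
        return "Slowing down. Want me to try a different approach?"
    return "Got it, continuing."

print(react("No, stop, that's wrong"))  # negative: the robot backs off
print(react("Great, thanks!"))          # positive: the robot proceeds
```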


6. Learning from Language

Computational linguistics also helps robots learn and adapt over time. Instead of hardcoding every behavior, researchers are working on ways for robots to learn from manuals, online resources, or natural language feedback.

This is especially exciting as large language models continue to evolve. Imagine a robot reading its own instruction manual or watching a video tutorial and figuring out how to do a new task.
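
A tiny sketch of that idea: extract numbered steps from a snippet of manual text and queue them as actions. The manual text here is made up, and mapping each free-text step to an actual robot skill is exactly the hard part a real system (likely an LLM) would have to solve:

```python
import re

# "Learning from language" sketch: pull numbered steps out of manual
# text and queue them for execution.

MANUAL = """
1. Open the gripper.
2. Lower the arm to the tray.
3. Close the gripper around the part.
"""

def extract_steps(text: str) -> list[str]:
    # Match lines that start with a number and a period.
    return re.findall(r"^\s*\d+\.\s*(.+)$", text, re.MULTILINE)

for i, step in enumerate(extract_steps(MANUAL), start=1):
    print(f"queued step {i}: {step}")
# queued step 1: Open the gripper. ...and so on
```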


Looking Ahead

While none of this technology is part of the current VEX Robotics competition (at least not yet), understanding how computational linguistics connects to robotics gives me a whole new appreciation for where robotics is going. It also makes me excited about studying this intersection more deeply in college.

Whether it’s through smarter voice assistants, more helpful home robots, or AI systems that respond naturally, computational linguistics is quietly shaping the next generation of robotics.

— Andrew

Ex Machina Goes Global: VEX Worlds 2025 Recap

From May 6 to May 8, 2025, my team and I had the chance to compete in the VEX Robotics World Championship—held at the Kay Bailey Hutchison Convention Center in Dallas. This annual event brings together the top-performing teams from around the globe for the VEX IQ, VEX V5, and VEX U competitions. We were there to represent Team 66475C – Ex Machina in the VEX V5 High School division.

Since 2021, my teams have qualified for Worlds five years in a row—each time representing Washington as one of the state’s top contenders. This year, we were proud to win the State Championship, earn our ticket to Dallas, and compete in the Design Division, which included 83 qualified teams from all over the world.

And we made it count:
🏆 Design Division Champions
🌍 Top 8 globally among 831 teams
💥 Quarterfinalists overall

Huge thanks to our incredible partner team 1010G (TenTon Robotics) from British Columbia, Canada, who helped make our division title possible. If you’re curious about how it all unfolded, you can catch the recap here:
👉 Watch the recap


My Role

As Main Builder, I used 3D modeling software to design the robot, which made planning and resource management far more efficient. I helped build every part of the robot, from the drive base to its various subsystems, and I managed the other builders, making sure their work stayed integrated with the overall design throughout the season.


Participating in this kind of international competition is incredibly rewarding—not just for the technical skills, but for what it teaches you about teamwork, dealing with pressure, and adapting to the unexpected. And honestly, one of the best parts is just making friends from all over the world.

If you’re interested in robotics, I highly recommend giving this competition a shot.

Coming soon: I’ll be sharing updates on my summer AI projects—stay tuned!

— Andrew
