Humanoid Robot Forum 2025: Where Industrial Innovation Takes Center Stage

If you’re as interested in the future of robotics as I am, here’s an event you’ll want to keep an eye on. The Humanoid Robot Forum 2025 is happening on September 23, 2025, in Seattle, Washington (my home city). Organized by the Association for Advancing Automation (A3), this one-day event brings together experts from the robotics and AI industries to explore how humanoid robots are being developed and deployed in real-world settings.

What makes this event exciting to me is that it focuses not just on hardware, but also on how technologies like AI and simulation are shaping the next generation of human-like robots. One of the keynotes I’m especially looking forward to is from Amit Goel, Head of Robotics Ecosystem at NVIDIA. His talk, “Advancing Humanoid Robotics Through Generative AI and Simulation,” will dive into how generative AI can help design, train, and test robot behaviors in simulated environments before deploying them in the real world. As someone who’s been exploring AI and NLP through my own projects, this intersection of AI and robotics is something I’m eager to learn more about.

The full agenda includes sessions and speakers from:

  • Diligent
  • Apptronik
  • Agility Robotics
  • PSYONIC
  • GXO
  • Association for Advancing Automation (A3)
  • Boston Dynamics
  • UCSD Advanced Robotics and Controls Lab
  • WiBotic
  • Cobot
  • NVIDIA
  • Cambridge Consultants
  • Toyota Research Institute
  • Sanctuary AI
  • True Ventures

Topics will include scaling up robotic hardware, AI-driven perception and control, power management, investment trends, and more. For anyone curious about how humanoid robots might start appearing in warehouses, hospitals, or even homes, this forum gives a front-row seat to what’s happening in the field.

Even though I won’t be attending in person (I’ve got school, college apps, and robotics season keeping me busy), I’ll definitely be keeping an eye out for takeaways and speaker highlights.

You can check out the full agenda and register for the event here:
👉 Humanoid Robot Forum 2025

— Andrew

How Computational Linguistics Is Powering the Future of Robotics

As someone who’s been involved in competitive robotics through VEX for several years and recently started diving into computational linguistics, I’ve been wondering: how do these two fields connect?

At first, it didn’t seem obvious. VEX Robotics competitions (like the one my team Ex Machina participated in at Worlds 2025) are mostly about designing, building, and coding autonomous and driver-controlled robots to complete physical tasks. There’s no direct language processing involved… at least not yet. But the more I’ve learned, the more I’ve realized that computational linguistics plays a huge role in making real-world robots smarter, more useful, and more human-friendly.

Here’s what I’ve learned about how these two fields intersect and where robotics is heading.


1. Human-Robot Communication

The most obvious role of computational linguistics in robotics is helping robots understand and respond to human language. This is powered by natural language processing (NLP), a core area of computational linguistics. Think about assistants like Alexa or social robots like Pepper. They rely on language models and parsing techniques to interpret what we say and give meaningful responses.

This goes beyond voice control. It’s about making robots that can hold conversations, answer questions, or even ask for clarification when something is unclear. For robots to work effectively with people, they need language skills, not just motors and sensors.
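
To make that concrete, here’s a toy sketch of what command interpretation can look like under the hood. Real assistants use trained language models, but even a rule-based parser shows the basic idea of mapping an utterance to an intent plus slots. The intent names and patterns below are my own invention for illustration, not any real robot’s API:

```python
import re

# Toy intent patterns: each maps a regex over the user's words to an
# intent name plus named slots. Real systems use trained NLP models,
# but the shape of the output is similar.
INTENT_PATTERNS = [
    ("move",  re.compile(r"(?:go|move|drive) to the (?P<place>\w+)")),
    ("fetch", re.compile(r"(?:bring|fetch|get) (?:me )?the (?P<object>\w+)")),
    ("stop",  re.compile(r"\bstop\b")),
]

def parse_command(utterance: str):
    """Return (intent, slots) for a recognized command, else (None, {})."""
    text = utterance.lower().strip()
    for intent, pattern in INTENT_PATTERNS:
        match = pattern.search(text)
        if match:
            return intent, match.groupdict()
    return None, {}

print(parse_command("Please bring me the wrench"))  # ('fetch', {'object': 'wrench'})
print(parse_command("Go to the kitchen"))           # ('move', {'place': 'kitchen'})
print(parse_command("STOP"))                        # ('stop', {})
```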


2. Task Execution and Instruction Following

Another fascinating area is how robots can convert human instructions into actual actions. For example, if someone says, “Pick up the red cup from the table,” a robot must break that down: What object? What location? What action?

This is where semantic parsing comes in—turning language into structured data the robot can use to plan its moves. In VEX, we manually code our autonomous routines, but imagine if a future version of our robot could listen to instructions in plain English and adapt its behavior in real time.
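
Here’s a minimal sketch of that idea, using that exact “red cup” sentence. The Action schema and the hand-written parser are my own illustration of what a semantic parser’s output might look like, not how production systems actually do it:

```python
from dataclasses import dataclass

@dataclass
class Action:
    verb: str      # what to do, e.g. "pick_up"
    obj: str       # target object, e.g. "cup"
    color: str     # attribute used to disambiguate
    location: str  # where to look, e.g. "table"

# A hand-written grammar for one sentence pattern; real semantic
# parsers generalize this with trained models.
def parse_instruction(sentence: str) -> Action:
    words = sentence.lower().replace(",", "").replace(".", "").split()
    # "pick up the red cup from the table"
    assert words[:2] == ["pick", "up"], "toy parser only handles 'pick up ...'"
    color, obj = words[3], words[4]
    location = words[-1]
    return Action(verb="pick_up", obj=obj, color=color, location=location)

plan = parse_instruction("Pick up the red cup from the table")
print(plan)  # Action(verb='pick_up', obj='cup', color='red', location='table')
```

Once the instruction is in a structured form like this, the robot’s planner can treat it the same way my team treats a hardcoded autonomous routine: a sequence of verbs with known targets.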


3. Understanding Context and Holding a Conversation

Human communication is complex. We often leave things unsaid, refer to past ideas, or use vague phrases like “that one over there.” Research in discourse modeling and context tracking helps robots manage this complexity.

This is especially useful in collaborative environments. Think hospital robots assisting nurses, or factory robots working alongside people. They need to understand not just commands but also user intent, tone, and changing context.
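
Here’s a tiny sketch of context tracking, assuming a made-up DialogueContext class: the robot remembers what was mentioned recently and resolves a vague phrase to the most recent object, or flags that it should ask for clarification. Real systems use full coreference resolution; this exact-phrase lookup is just the simplest possible version:

```python
class DialogueContext:
    """Tracks recently mentioned objects so vague references can be resolved."""

    VAGUE_PHRASES = {"it", "that", "that one", "this one", "that one over there"}

    def __init__(self):
        self.recent_objects = []  # most recently mentioned object goes last

    def mention(self, obj: str):
        self.recent_objects.append(obj)

    def resolve(self, phrase: str) -> str:
        phrase = phrase.lower().strip()
        if phrase in self.VAGUE_PHRASES:
            if not self.recent_objects:
                raise ValueError("Nothing to refer back to; ask for clarification")
            return self.recent_objects[-1]  # assume the last-mentioned object
        return phrase  # explicit references pass through unchanged

ctx = DialogueContext()
ctx.mention("red cup")                    # "Pick up the red cup"
print(ctx.resolve("that one over there")) # -> 'red cup'
print(ctx.resolve("blue box"))            # -> 'blue box'
```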


4. Multimodal Understanding

Robots don’t just rely on language. They also use vision, sensors, and spatial awareness. A good example is interpreting a command like, “Hand me the tool next to the blue box.” The robot has to match those words with what it sees.

This is called multimodal integration, where the robot combines language and visual information. In my own robotics experience, we’ve used vision sensors to detect field elements, but future robots will need to combine that visual input with spoken instructions to act intelligently in dynamic spaces.
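
Here’s a rough sketch of how that grounding step might work. The detections list is hypothetical vision output (labels plus 2-D positions, similar in spirit to what a VEX vision sensor reports), and the robot picks the candidate object closest to the named landmark:

```python
import math

# Hypothetical detections a vision system might produce: a label plus
# an (x, y) position in meters.
detections = [
    {"label": "blue box", "pos": (1.0, 0.5)},
    {"label": "tool",     "pos": (1.2, 0.6)},  # close to the blue box
    {"label": "tool",     "pos": (3.0, 2.0)},  # far away
]

def ground_reference(target: str, landmark: str):
    """Pick the detected `target` closest to the detected `landmark`."""
    anchor = next(d for d in detections if d["label"] == landmark)
    candidates = [d for d in detections if d["label"] == target]
    return min(candidates, key=lambda d: math.dist(d["pos"], anchor["pos"]))

# "Hand me the tool next to the blue box"
print(ground_reference("tool", "blue box"))
# -> {'label': 'tool', 'pos': (1.2, 0.6)}
```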


5. Emotional and Social Intelligence

This part really surprised me. Sentiment analysis and affective computing are helping robots detect emotions in voice or text, which makes them more socially aware.

This could be important for assistive robots that help the elderly, teach kids, or support people with disabilities. It’s not just about understanding words. It’s about understanding people.
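
As a toy illustration, here’s a lexicon-based sentiment check. Real affective computing uses trained models over voice, text, and even facial cues; the word lists below are made up for the example:

```python
import re

# A tiny lexicon-based sentiment check. The word lists are invented
# for illustration; real systems learn these signals from data.
POSITIVE = {"great", "thanks", "good", "love", "perfect"}
NEGATIVE = {"bad", "wrong", "hate", "frustrated", "stop"}

def sentiment(utterance: str) -> str:
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# An assistive robot might slow down and re-explain on negative sentiment.
print(sentiment("This is great, thanks!"))            # positive
print(sentiment("No, that's wrong, I'm frustrated"))  # negative
```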


6. Learning from Language

Computational linguistics also helps robots learn and adapt over time. Instead of hardcoding every behavior, researchers are working on ways for robots to learn from manuals, online resources, or natural language feedback.

This is especially exciting as large language models continue to evolve. Imagine a robot reading its own instruction manual or watching a video tutorial and figuring out how to do a new task.
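
Here’s a playful sketch of the simplest version of that: a robot adjusting one of its own parameters from spoken feedback instead of waiting for someone to re-code it. The trigger phrases and scaling factors are pure assumptions on my part:

```python
# Toy rules mapping feedback phrases to a speed adjustment factor.
# Real language-based learning is far richer; this only shows the loop
# of "hear feedback -> update behavior" without re-coding the robot.
FEEDBACK_RULES = {
    "slower": 0.8,   # scale speed down
    "faster": 1.25,  # scale speed up
    "gentler": 0.8,
}

class Robot:
    def __init__(self):
        self.speed = 1.0  # normalized drive speed

    def apply_feedback(self, utterance: str):
        for phrase, factor in FEEDBACK_RULES.items():
            if phrase in utterance.lower():
                # Clamp so feedback can't push the robot to unsafe values.
                self.speed = max(0.1, min(2.0, self.speed * factor))

robot = Robot()
robot.apply_feedback("A bit slower, please")
robot.apply_feedback("Slower!")
print(round(robot.speed, 2))  # 0.64 after two 'slower' corrections
```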


Looking Ahead

While none of this technology is part of the current VEX Robotics competition (at least not yet), understanding how computational linguistics connects to robotics gives me a whole new appreciation for where robotics is going. It also makes me excited about studying this intersection more deeply in college.

Whether it’s through smarter voice assistants, more helpful home robots, or AI systems that respond naturally, computational linguistics is quietly shaping the next generation of robotics.

— Andrew
