Humanoid Robot Forum 2025: Where Industrial Innovation Takes Center Stage

If you’re as interested in the future of robotics as I am, here’s an event you’ll want to keep an eye on. The Humanoid Robot Forum 2025 is happening on September 23, 2025, in Seattle, Washington (my city). Organized by the Association for Advancing Automation (A3), this one-day event brings together experts from the robotics and AI industries to explore how humanoid robots are being developed and deployed in real-world settings.

What makes this event exciting to me is that it focuses not just on hardware, but also on how technologies like AI and simulation are shaping the next generation of human-like robots. One of the keynotes I’m especially looking forward to is from Amit Goel, Head of Robotics Ecosystem at NVIDIA. His talk, “Advancing Humanoid Robotics Through Generative AI and Simulation,” will dive into how generative AI can help design, train, and test robot behaviors in simulated environments before deploying them in the real world. As someone who’s been exploring AI and NLP through my own projects, this intersection of AI and robotics is something I’m eager to learn more about.

The full agenda includes sessions and speakers from:

  • Diligent
  • Apptronik
  • Agility Robotics
  • PSYONIC
  • GXO
  • Association for Advancing Automation (A3)
  • Boston Dynamics
  • UCSD Advanced Robotics and Controls Lab
  • WiBotic
  • Cobot
  • NVIDIA
  • Cambridge Consultants
  • Toyota Research Institute
  • Sanctuary AI
  • True Ventures

Topics will include scaling up robotic hardware, AI-driven perception and control, power management, investment trends, and more. For anyone curious about how humanoid robots might start appearing in warehouses, hospitals, or even homes, this forum gives a front-row seat to what’s happening in the field.

Even though I won’t be attending in person (I’ve got school, college apps, and robotics season keeping me busy), I’ll definitely be keeping an eye out for takeaways and speaker highlights.

You can check out the full agenda and register for the event here:
👉 Humanoid Robot Forum 2025

— Andrew

How NLP Helps Robots Handle Interruptions: A Summary of JHU Research

I recently came across an awesome study from Johns Hopkins University on how computational linguistics and NLP can make robots better conversational partners by teaching them to handle interruptions, a skill that feels basic for humans but is surprisingly hard for machines.


What the Study Found

Researchers trained a social robot powered by a large language model (LLM) to manage real-time interruptions based on speaker intent. They categorized interruptions into four types: Agreement, Assistance, Clarification, and Disruption.

By analyzing human conversations from interviews to informal discussions, they designed strategies tailored to each interruption type. For example:

  • If someone agrees or helps, the robot pauses, nods, and resumes speaking.
  • When someone asks for clarification, the robot explains and continues.
  • For disruptive interruptions, the robot can either hold the floor to summarize its remaining points before yielding to the human user, or it can stop talking immediately.

How NLP Powers This System

The robot uses an LLM to:

  1. Detect overlapping speech
  2. Classify the interrupter’s intent
  3. Select the appropriate response strategy

In tests involving tasks and conversations, the system correctly interpreted interruptions about 89% of the time and responded appropriately 93.7% of the time.
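
The paper doesn’t include code, but to help myself understand the flow, I sketched what a pipeline like this could look like in Python. Everything here is my own guess, not the researchers’ implementation: I’m assuming an OpenAI-style chat API for the intent-classification step (step 1, detecting overlapping speech, would be handled upstream by the robot’s audio stack), and the prompt and strategy table are just illustrative.

```python
# Toy sketch of an interruption-handling pipeline (not the JHU system itself).
# Assumes the `openai` Python package and an API key; the prompt and the
# strategy table below are my own illustrative guesses.
from openai import OpenAI

INTENT_LABELS = ["Agreement", "Assistance", "Clarification", "Disruption"]

# One response strategy per interruption type, loosely following the
# descriptions summarized above.
STRATEGIES = {
    "Agreement": "pause, nod, then resume speaking",
    "Assistance": "pause, acknowledge the help, then resume",
    "Clarification": "answer the question, then continue",
    "Disruption": "summarize remaining points, then yield the floor",
}

client = OpenAI()

def classify_interruption(robot_utterance: str, interruption: str) -> str:
    """Ask an LLM to label the interrupter's intent with one of four types."""
    prompt = (
        f'The robot was saying: "{robot_utterance}"\n'
        f'A human interrupted with: "{interruption}"\n'
        f"Classify the interruption as one of: {', '.join(INTENT_LABELS)}.\n"
        "Answer with the label only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    label = response.choices[0].message.content.strip()
    return label if label in INTENT_LABELS else "Disruption"  # safe fallback

def respond(robot_utterance: str, interruption: str) -> str:
    intent = classify_interruption(robot_utterance, interruption)
    return STRATEGIES[intent]

print(respond("Step three is to calibrate the sensor...", "Wait, which sensor?"))
# Likely output: "answer the question, then continue"
```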


Why This Matters in NLP and Computational Linguistics

This work highlights how computational linguistics and NLP are essential to human-robot interaction.

  • NLP does more than generate responses; it helps robots understand nuance, context, and intent.
  • Developing systems like this requires understanding pause cues, intonation, and conversational flow, all core to computational linguistics.
  • It shows how multimodal AI, combining language with behavior, can enable more natural and effective interactions.

What I Found Most Interesting

The researchers noted that users didn’t like it when the robot “held the floor” too long during disruptive interruptions. It reminded me that pragmatic context matters: just as people expect certain conversational norms from one another, robots need those conversational skills too.


Looking Ahead

This research expands what NLP can do in real-world settings like healthcare, education, and social assistants. For someone like me who loves robots and language, it shows how computational linguistics helps build smarter, more human-friendly AI systems.

If you want to dive deeper, check out the full report from Johns Hopkins:
Talking robots learn to manage human interruptions

— Andrew

How Computational Linguistics Is Powering the Future of Robotics

As someone who’s been involved in competitive robotics through VEX for several years and recently started diving into computational linguistics, I’ve been wondering: how do these two fields connect?

At first, it didn’t seem obvious. VEX Robotics competitions (like the one my team Ex Machina participated in at Worlds 2025) are mostly about designing, building, and coding autonomous and driver-controlled robots to complete physical tasks. There’s no direct language processing involved… at least not yet. But the more I’ve learned, the more I’ve realized that computational linguistics plays a huge role in making real-world robots smarter, more useful, and more human-friendly.

Here’s what I’ve learned about how these two fields intersect and where robotics is heading.


1. Human-Robot Communication

The most obvious role of computational linguistics in robotics is helping robots understand and respond to human language. This is powered by natural language processing (NLP), a core area of computational linguistics. Think about assistants like Alexa or social robots like Pepper. They rely on language models and parsing techniques to interpret what we say and give meaningful responses.

This goes beyond voice control. It’s about making robots that can hold conversations, answer questions, or even ask for clarification when something is unclear. For robots to work effectively with people, they need language skills, not just motors and sensors.


2. Task Execution and Instruction Following

Another fascinating area is how robots can convert human instructions into actual actions. For example, if someone says, “Pick up the red cup from the table,” a robot must break that down: What object? What location? What action?

This is where semantic parsing comes in—turning language into structured data the robot can use to plan its moves. In VEX, we manually code our autonomous routines, but imagine if a future version of our robot could listen to instructions in plain English and adapt its behavior in real time.
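
To make that concrete, here’s a toy rule-based semantic parser I sketched; the regex pattern and the slot names are entirely my own, just to show the idea of turning a sentence into structured data a robot could act on. Real systems use learned parsers or LLMs instead of regexes, but the output, a structured action frame, is the same idea.

```python
# Minimal rule-based semantic parser: turns a pick-up command into a
# structured action frame. The pattern and slot names are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class ActionFrame:
    action: str     # what to do
    obj: str        # what to act on
    location: str   # where the object is

COMMAND = re.compile(
    r"pick up the (?P<obj>[\w\s]+?) from the (?P<location>[\w\s]+)",
    re.IGNORECASE,
)

def parse(utterance: str) -> ActionFrame | None:
    match = COMMAND.search(utterance)
    if match is None:
        return None  # a real system would fall back to a learned parser here
    return ActionFrame(action="pick_up",
                       obj=match["obj"].strip(),
                       location=match["location"].strip())

print(parse("Pick up the red cup from the table."))
# ActionFrame(action='pick_up', obj='red cup', location='table')
```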


3. Understanding Context and Holding a Conversation

Human communication is complex. We often leave things unsaid, refer to past ideas, or use vague phrases like “that one over there.” Research in discourse modeling and context tracking helps robots manage this complexity.

This is especially useful in collaborative environments. Think hospital robots assisting nurses, or factory robots working alongside people. They need to understand not just commands but also user intent, tone, and changing context.


4. Multimodal Understanding

Robots don’t just rely on language. They also use vision, sensors, and spatial awareness. A good example is interpreting a command like, “Hand me the tool next to the blue box.” The robot has to match those words with what it sees.

This is called multimodal integration, where the robot combines language and visual information. In my own robotics experience, we’ve used vision sensors to detect field elements, but future robots will need to combine that visual input with spoken instructions to act intelligently in dynamic spaces.
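
Here’s a tiny sketch of how that grounding step might work, assuming the vision system has already produced a list of detected objects with positions (the detections and the “next to” distance threshold below are made up for illustration):

```python
# Toy language grounding: pick the detected object that matches a referring
# expression like "the tool next to the blue box". Detections are assumed to
# come from a vision pipeline; positions here are (x, y) in meters, made up.
import math

detections = [
    {"name": "tool", "pos": (0.2, 1.1)},
    {"name": "tool", "pos": (1.4, 0.3)},
    {"name": "blue box", "pos": (1.5, 0.4)},
]

def find_near(target: str, landmark: str, objects: list[dict],
              max_dist: float = 0.5) -> dict | None:
    """Return the `target` detection closest to the `landmark` detection."""
    landmarks = [o for o in objects if o["name"] == landmark]
    targets = [o for o in objects if o["name"] == target]
    if not landmarks or not targets:
        return None
    lm = landmarks[0]
    best = min(targets, key=lambda o: math.dist(o["pos"], lm["pos"]))
    return best if math.dist(best["pos"], lm["pos"]) <= max_dist else None

# "Hand me the tool next to the blue box"
print(find_near("tool", "blue box", detections))
# {'name': 'tool', 'pos': (1.4, 0.3)}  (the tool nearest the blue box)
```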


5. Emotional and Social Intelligence

This part really surprised me. Sentiment analysis and affective computing are helping robots detect emotions in voice or text, which makes them more socially aware.

This could be important for assistive robots that help the elderly, teach kids, or support people with disabilities. It’s not just about understanding words. It’s about understanding people.
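
If you want to try the language half of this yourself, NLTK ships a lexicon-based sentiment analyzer called VADER. Here’s a minimal example (a real robot would of course combine scores like these with tone of voice and other cues):

```python
# Lexicon-based sentiment scoring with NLTK's VADER analyzer.
# Requires: pip install nltk, plus a one-time lexicon download.
import nltk
nltk.download("vader_lexicon", quiet=True)

from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
for text in ["I love how you explained that!", "This is so frustrating."]:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score
    print(text, "->", scores["compound"])
# Positive texts score near +1, negative texts near -1.
```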


6. Learning from Language

Computational linguistics also helps robots learn and adapt over time. Instead of hardcoding every behavior, researchers are working on ways for robots to learn from manuals, online resources, or natural language feedback.

This is especially exciting as large language models continue to evolve. Imagine a robot reading its own instruction manual or watching a video tutorial and figuring out how to do a new task.


Looking Ahead

While none of this technology is part of the current VEX Robotics competition (at least not yet), understanding how computational linguistics connects to robotics gives me a whole new appreciation for where robotics is going. It also makes me excited about studying this intersection more deeply in college.

Whether it’s through smarter voice assistants, more helpful home robots, or AI systems that respond naturally, computational linguistics is quietly shaping the next generation of robotics.

— Andrew

WAIC 2025: What Geoffrey Hinton’s “Tiger” Warning Taught Me About AI’s Future

At the end of July (7/26 – 7/28), Shanghai hosted the 2025 World Artificial Intelligence Conference (WAIC), drawing over 1,200 participants from more than 40 countries. Even though I wasn’t there, I followed the conference closely, especially the keynote from Geoffrey Hinton, the so-called “Godfather of AI.” His message? AI is advancing faster than we expect, and we need global cooperation to make sure it stays aligned with human values.

Hinton’s talk was historic. It was his first public appearance in China, and he even stood throughout his address despite back pain, which was noted by local media. One quote really stuck with me: “Humans have grown accustomed to being the most intelligent species in the world – what if that’s no longer the case?” That’s a big question, and as someone who’s diving deeper into computational linguistics and large language models, I felt both amazed and a little uneasy.

His warning compared superintelligent AI to a tiger we’re raising as a pet. If we’re not careful, he said, “the tiger” might one day turn on us. The point wasn’t to scare everyone, but to highlight why we can’t rely on simply pulling the plug if AI systems surpass human intelligence. Hinton believes we need to train AI to be good from the beginning because shutting it down later might not be an option.

WAIC 2025 wasn’t all doom and gloom though. Hinton also talked about the huge potential of AI to accelerate science. For example, he highlighted DeepMind’s AlphaFold as a breakthrough that solved a major biology challenge: predicting protein structures. That shows how powerful AI can be when guided properly.

What stood out the most was the recurring theme of cooperation. Hinton and others, like former Google CEO Eric Schmidt, emphasized the need for global partnerships on AI safety and ethics. Hinton even signed the “Shanghai AI Safety Consensus” with other experts to support international collaboration. The message was clear: no single country can or should handle AI’s future alone.

As a high school student passionate about AI and language, I’m still learning how these pieces fit together. But events like WAIC remind me that the future of AI isn’t just about building smarter systems, it’s also about making sure they work for everyone.

For those interested, here’s a more detailed summary of Hinton’s latest speech: Pandaily Report on WAIC 2025

You can also explore the official WAIC website here: https://www.worldaic.com.cn/

— Andrew

ACL 2025 New Theme Track: Generalization in NLP Models

The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) will be happening in Vienna, Austria from July 27 to August 1. I won’t be attending in person, but as someone planning to study and do research in computational linguistics and NLP in college, I’ve been following the conference closely to keep up with the latest trends.

One exciting thing about this year’s ACL is its new theme track: Generalization of NLP Models. According to the official announcement:

“Following the success of the ACL 2020–2024 Theme tracks, we are happy to announce that ACL 2025 will have a new theme with the goal of reflecting and stimulating discussion about the current state of development of the field of NLP.

Generalization is crucial for ensuring that models behave robustly, reliably, and fairly when making predictions on data different from their training data. Achieving good generalization is critically important for models used in real-world applications, as they should emulate human-like behavior. Humans are known for their ability to generalize well, and models should aspire to this standard.

The theme track invites empirical and theoretical research and position and survey papers reflecting on the Generalization of NLP Models. The possible topics of discussion include (but are not limited to) the following:

  • How can we enhance the generalization of NLP models across various dimensions—compositional, structural, cross-task, cross-lingual, cross-domain, and robustness?
  • What factors affect the generalization of NLP models?
  • What are the most effective methods for evaluating the generalization capabilities of NLP models?
  • While Large Language Models (LLMs) significantly enhance the generalization of NLP models, what are the key limitations of LLMs in this regard?

The theme track submissions can be either long or short. We anticipate having a special session for this theme at the conference and a Thematic Paper Award in addition to other categories of awards.”

This year’s focus on generalization really highlights where the field is going—toward more robust, ethical, and real-world-ready NLP systems. It’s not just about making cool models anymore, but about making sure they work well across different languages, cultures, and use cases.

If you’re into reading papers like I am, especially ones that dig into how NLP systems can perform reliably on new or unexpected inputs, this theme track will be full of insights. I’m looking forward to checking out the accepted papers when they’re released.

You can read more at the official conference page: ACL 2025 Theme Track Announcement

— Andrew

I-Language vs. E-Language: What Do They Mean in Computational Linguistics?

In the summer of 2025, I started working on a computational linguistics research project using Twitch data under the guidance of Dr. Sidney Wong, a Computational Sociolinguist. As someone who is still pretty new to this field, I was mainly focused on learning how to conduct literature reviews, help narrow down research topics, clean data, build models, and extract insights.

One day, Dr. Wong suggested I look into the concept of I-language vs. E-language from theoretical linguistics. At first, I wasn’t sure why this mattered. I thought, Isn’t language just… language?

But as I read more, I realized that understanding this distinction changes how we think about language data and what we’re actually modeling when we work with NLP.

In this post, I want to share what I’ve learned about I-language and E-language, and why this distinction is important for computational linguistics research.


What Is I-Language?

I-language stands for “internal language.” This idea was proposed by Noam Chomsky, who argued that language is fundamentally a mental system. I-language refers to the internal, cognitive grammar that allows us to generate and understand sentences. It is about:

  • The unconscious rules and structures stored in our minds
  • Our innate capacity for language
  • The mental system that explains why we can produce and interpret sentences we’ve never heard before

For example, if I say, “The cat sat on the mat,” I-language is the system in my brain that knows the sentence is grammatically correct and what it means, even though I may never have said that exact sentence before.

I-language focuses on competence (what we know about our language) rather than performance (how we actually use it in real life).


What Is E-Language?

E-language stands for “external language.” This is the language we actually hear and see in the world, such as:

  • Conversations between Twitch streamers and their viewers
  • Tweets, Reddit posts, books, and articles
  • Any linguistic data that exists outside the mind

E-language is about observable language use. It includes everything from polished academic writing to messy chat messages filled with abbreviations, typos, and slang.

Instead of asking, “What knowledge do speakers have about their language?”, E-language focuses on, “What do speakers actually produce in practice?”


Why Does This Matter for Computational Linguistics?

When it comes to computational linguistics and NLP, this distinction affects:

1. What We Model

  • I-language-focused research tries to model the underlying grammatical rules and mental representations. For example, building a parser that captures syntax structures based on linguistic theory.
  • E-language-focused research uses real-world data to build models that predict or generate language based on patterns, regardless of theoretical grammar. For example, training a neural network on millions of Twitch comments to generate chat responses.
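
To make those two bullets concrete, here’s a deliberately tiny contrast in Python: a hand-written grammar fragment in the I-language spirit (using NLTK’s real CFG tools) next to a purely data-driven bigram count in the E-language spirit. The grammar and the fake chat messages are my own toy examples:

```python
# I-language flavor: a hand-specified grammar fragment encoding what a
# speaker "knows"; nltk.CFG and nltk.ChartParser are real NLTK APIs.
import nltk
from collections import Counter

grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> Det N
    VP -> V NP | V PP
    PP -> P NP
    Det -> 'the'
    N -> 'cat' | 'mat'
    V -> 'sat'
    P -> 'on'
""")
parser = nltk.ChartParser(grammar)
for tree in parser.parse("the cat sat on the mat".split()):
    print(tree)  # prints the theory-driven structure of the sentence

# E-language flavor: no theory at all, just counts over observed usage
# (imagine these strings are scraped Twitch messages).
chat = ["gg wp", "gg ez", "wp all", "gg wp all"]
bigrams = Counter(
    (a, b) for msg in chat for a, b in zip(msg.split(), msg.split()[1:])
)
print(bigrams.most_common(2))  # e.g. [(('gg', 'wp'), 2), (('wp', 'all'), 2)]
```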

2. Research Goals

If your goal is to understand how humans process and represent language cognitively, you’re leaning towards I-language research. This includes computational psycholinguistics, cognitive modeling, and formal grammar induction.

If your goal is to build practical NLP systems for tasks like translation, summarization, or sentiment analysis, you’re focusing on E-language. These projects care about performance and usefulness, even if the model doesn’t match linguistic theory.


3. How Models Are Evaluated

I-language models are evaluated based on how well they align with linguistic theory or native speaker intuitions about grammaticality.

E-language models are evaluated using performance metrics, such as accuracy, BLEU scores, or perplexity, based on how well they handle real-world data.
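
Perplexity is the least self-explanatory of those metrics, so here’s a small worked example. Given the probability a model assigned to each token of a test text, perplexity is the exponential of the average negative log-probability (the token probabilities below are hypothetical):

```python
# Perplexity from per-token probabilities: exp(mean negative log prob).
# Lower is better; it's the metric most often reported for language models.
import math

# Hypothetical probabilities a model assigned to each token of a test sentence.
token_probs = [0.2, 0.1, 0.4, 0.25]

nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)
print(round(perplexity, 2))  # ~4.73: the model is roughly as uncertain as a
                             # uniform choice among ~4.7 tokens at each step
```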


My Thoughts as a Beginner

When Dr. Wong first told me about this distinction, I thought it was purely theoretical. But now, while working with Twitch data, I see the importance of both views.

For example:

  • If I want to study how syntax structures vary in Twitch chats, I need to think in terms of I-language to analyze grammar.
  • If I want to build an NLP model that generates Twitch-style messages, I need to focus on E-language to capture real-world usage patterns.

Neither approach is better than the other. They just answer different types of questions. I-language is about why language works the way it does, while E-language is about how language is actually used in the world.


Final Thoughts

Understanding I-language vs. E-language helps me remember that language isn’t just data for machine learning models. It’s a human system with deep cognitive and social layers. Computational linguistics becomes much more meaningful when we consider both perspectives: What does the data tell us? and What does it reveal about how humans think and communicate?

If you’re also just starting out in this field, I hope this post helps you see why these theoretical concepts matter for practical NLP and AI work. Let me know if you want a follow-up post about other foundational linguistics ideas for computational research.

— Andrew

What Is Computational Linguistics (and How Is It Different from NLP)?

When I first got interested in this field, I kept seeing the terms computational linguistics and natural language processing (NLP) used almost interchangeably. At first, I thought they were the same thing. As I dug deeper by reading papers, taking courses, and doing research, I realized that although they overlap significantly, they are not identical.

So in this post, I want to explain the difference (and connection) between computational linguistics and NLP from the perspective of a high school student who’s just getting started, but really interested in understanding both the language and the tech behind today’s AI systems.


So, what is computational linguistics?

Computational linguistics is the science of using computers to understand and model human language. It’s rooted in linguistics, the study of how language works, and applies computational methods to test linguistic theories, analyze language structure, or build tools like parsers and grammar analyzers.

It’s a field that sits at the intersection of computer science and linguistics. Think syntax trees, morphology, phonology, semantics, and using code to work with all of those.

For example, in computational linguistics, you might:

  • Use code to analyze sentence structure in different languages
  • Create models that explain how children learn grammar rules
  • Explore how prosody (intonation and stress) changes meaning in speech
  • Study how regional dialects appear in online chat platforms like Twitch

In other words, computational linguistics is often about understanding language (how it’s structured, how it varies, and how we can model it with computers).
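
As a small taste of that first bullet, here’s how you could inspect sentence structure with spaCy, assuming you’ve installed it along with its small English model (`pip install spacy`, then `python -m spacy download en_core_web_sm`):

```python
# Quick syntactic analysis with spaCy's small English model.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The streamer thanked the chat for the raid.")
for token in doc:
    # Each row: the word, its part of speech, its grammatical relation,
    # and which word it attaches to in the dependency tree.
    print(f"{token.text:<10} {token.pos_:<6} {token.dep_:<10} head={token.head.text}")
```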


Then what is NLP?

Natural language processing (NLP) is a subfield of AI and computer science that focuses on building systems that can process and generate human language. It’s more application-focused. If you’ve used tools like ChatGPT, Google Translate, Siri, or even grammar checkers, you’ve seen NLP in action.

While computational linguistics asks, “How does language work, and how can we model it?”, NLP tends to ask, “How can we build systems that understand or generate language usefully?”

Examples of NLP tasks:

  • Sentiment analysis (e.g., labeling text as positive, negative, or neutral)
  • Machine translation
  • Named entity recognition (e.g., tagging names, places, dates)
  • Text summarization or question answering

In many cases, NLP researchers care more about whether a system works than whether it matches a formal linguistic theory. That doesn’t mean theory doesn’t matter, but the focus is more on performance and results.
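
As an example of one of those tasks, here’s what named entity recognition looks like with spaCy (same installation assumptions as the earlier example; the labels in the comment are what the model will likely predict):

```python
# Named entity recognition with spaCy's small English model.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple opened a new office in Seattle on March 3, 2025.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
# Likely labels: Apple -> ORG, Seattle -> GPE, March 3, 2025 -> DATE
```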


So, what’s the difference?

The line between the two fields can get blurry (and many people work in both), but here’s how I think of it:

Computational Linguistics                     | NLP
----------------------------------------------|---------------------------------------------------
Rooted in linguistics                         | Rooted in computer science and AI
Focused on explaining and modeling language   | Focused on building tools and systems
Often theoretical or data-driven linguistics  | Often engineering-focused and performance-driven
Examples: parsing syntax, studying morphology | Examples: sentiment analysis, machine translation

Think of computational linguistics as the science of language and NLP as the engineering side of language technology.


Why this matters to me

As someone who’s really interested in computational linguistics, I find myself drawn to the linguistic side of things, like how language varies, how meaning is structured, and how AI models sometimes get things subtly wrong because they don’t “understand” language the way humans do.

At the same time, I still explore NLP, especially when working on applied projects like sentiment analysis or topic modeling. I think having a strong foundation in linguistics makes me a better NLP researcher (or student), because I’m more aware of the complexity and nuance of language.


Final thoughts

If you’re just getting started, you don’t have to pick one or the other. Read papers from both fields. Try projects that help you learn both theory and application. Over time, you’ll probably find yourself leaning more toward one, but having experience in both will only help.

I’m still learning, and I’m excited to keep going deeper into both sides. If you’re interested too, let me know! I’m always up for sharing reading lists, courses, or just thoughts on cool research.

— Andrew


Summer Programs and Activities in Computational Linguistics: My Personal Experiences and Recommendations

If you’re a high school student interested in computational linguistics, you might be wondering: What are some ways to dive deeper into this field over the summer? As someone who loves language, AI, and everything in between, I’ve spent the past year researching programs and activities, and I wanted to share what I’ve learned (along with some of my personal experiences).


1. Summer Linguistic Institute for Youth Scholars (SLIYS)

What it is:
SLIYS is a two-week summer program run by The Ohio State University’s Department of Linguistics. It focuses on introducing high school students to language analysis and linguistic theory in a fun and rigorous way. Students get to explore syntax, morphology, phonetics, language universals, and even some computational topics.

My experience:
I’m super excited to share that I’ll be participating in SLIYS this summer (July 14 – 25, 2025). I was so happy to be accepted, and I’m looking forward to learning from real linguistics professors and meeting other students who are passionate about language. I’ll definitely share a reflection post after I finish the program, so stay tuned if you want an inside look!

Learn more about SLIYS here.


2. Summer Youth Camp for Computational Linguistics (SYCCL)

What it is:
SYCCL is a summer camp hosted by the Department of Linguistics and the Institute for Advanced Computational Science at Stony Brook University. It introduces high school students to computational linguistics and language technology, covering topics like language data, NLP tools, and coding for language analysis.

My experience:
I had planned to apply for SYCCL this year as well, but unfortunately, its schedule (July 6 – 18, 2025) conflicted with SLIYS, which I had already accepted. Another challenge I faced was that SYCCL’s website wasn’t updated until late April 2025, which is quite late compared to other summer programs. I had actually contacted the university earlier this year and they confirmed it would run again, but I didn’t see the application open until April. My advice is to check their website frequently starting early spring, and plan for potential conflicts with other summer programs.

Learn more about SYCCL here.


3. North American Computational Linguistics Open Competition (NACLO)

What it is:
NACLO is an annual computational linguistics competition for high school students across North America. It challenges students with problems in linguistics and language data analysis, testing their ability to decipher patterns in unfamiliar languages.

My experience:
I’ve tried twice to participate in NACLO at my local test center. Unfortunately, both times the test dates were weekdays that conflicted with my school final exams, so I had to miss them. If you’re planning to participate, I strongly recommend checking the schedule early to make sure it doesn’t overlap with finals or other major commitments. Despite missing it, I still find their practice problems online really fun and useful for thinking like a computational linguist.

Learn more about NACLO here.


4. LSA Summer Institute

What it is:
The Linguistic Society of America (LSA) Summer Institute is an intensive four-week program held every two years at different universities. It offers courses and workshops taught by top linguists and is known as one of the best ways to explore advanced topics in linguistics, including computational linguistics.

My experience:
I was planning to apply for the LSA Summer Institute this year. However, I found out that it is only open to individuals aged 18 and older. I contacted the LSA Institute Registration Office to ask if there could be any exceptions or special considerations for underage participants, but it was disappointing to receive their response: “Unfortunately, the age limit is firm and the organizers will not be considering any exceptions.” So if you’re thinking about applying, my advice is to check the age qualifications early before starting the application process.

Learn more about LSA Summer Institute here.


5. Local University Outreach Events and Courses

Another great way to explore linguistics and computational linguistics is by checking out courses or outreach events at local universities. For example, last summer I took LING 234 (Language and Diversity) at the University of Washington (Seattle). It was an eye-opening experience to study language variation, identity, and society from a college-level perspective. I wrote a reflection about it in my blog post from November 29, 2024. If your local universities offer summer courses for high school students, I highly recommend checking them out.


6. University-Affiliated AI4ALL Summer Programs for High School Students

What it is:
AI4ALL partners with universities to offer summer programs introducing high school students to AI research, ethics, and applications, often including NLP and language technology projects. While these programs are not focused solely on computational linguistics, they provide a great entry point into AI and machine learning, which are essential tools for language technology research.

About AI4ALL:
AI4ALL is a U.S.-based nonprofit focused on increasing diversity and inclusion in artificial intelligence (AI) education, research, development, and policy, particularly for historically underrepresented groups such as Black, Hispanic/Latinx, Indigenous, women, non-binary, low-income, and first-generation college students. Their mission is to make sure the next generation of AI researchers and developers reflects the diversity of the world.

Examples:

  • Stanford AI4ALL
  • Princeton AI4ALL
  • Carnegie Mellon AI4ALL

These programs are competitive and have different focus areas, but all aim to broaden participation in AI by empowering future researchers early.


Final Thoughts

I feel grateful to have these opportunities to grow my passion for computational linguistics, and I hope this list helps you plan your own summer learning journey. Whether you’re solving NACLO problems in your free time or spending two weeks at SLIYS like I will this summer, every step brings you closer to understanding how language and AI connect.

Let me know if you want a future post reviewing SLIYS after I complete it in July!

— Andrew

My Thoughts on “The Path to Medical Superintelligence”

Recently, I read an article published on Microsoft AI’s blog titled “The Path to Medical Superintelligence”. As a high school student interested in AI, computational linguistics, and the broader impacts of technology, I found this piece both exciting and a little overwhelming.


What Is Medical Superintelligence?

The blog talks about how Microsoft AI is working to build models with superhuman medical reasoning abilities. In simple terms, the idea is to create an AI that doesn’t just memorize medical facts but can analyze, reason, and make decisions at a level that matches or even surpasses expert doctors.

One detail that really stood out to me was how their new AI models also consider the cost of healthcare decisions. The article explained that while health costs vary widely depending on country and system, their team developed a method to consistently measure trade-offs between diagnostic accuracy and resource use. In other words, the AI doesn’t just focus on getting the diagnosis right, but also weighs how expensive or resource-heavy its suggested tests and treatments would be.
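
The article doesn’t spell out how that measurement works, so purely as a toy illustration of the idea, here’s one way a cost-aware evaluation could score competing diagnostic plans. The formula, weights, and numbers are all my own invention, not Microsoft’s method:

```python
# Toy cost-aware scoring of diagnostic plans (NOT Microsoft's actual method;
# the formula, weights, and numbers are invented for illustration).
plans = [
    {"name": "order everything", "accuracy": 0.95, "cost_usd": 4200},
    {"name": "targeted workup", "accuracy": 0.92, "cost_usd": 900},
    {"name": "history only", "accuracy": 0.70, "cost_usd": 50},
]

COST_WEIGHT = 0.05  # how many accuracy points one thousand dollars is "worth"

def score(plan: dict) -> float:
    """Higher is better: accuracy minus a penalty per $1000 spent."""
    return plan["accuracy"] - COST_WEIGHT * plan["cost_usd"] / 1000

best = max(plans, key=score)
print(best["name"])  # "targeted workup": nearly as accurate, far cheaper
```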

They explained that their current models already show impressive performance on medical benchmarks, such as USMLE-style medical exams, and that future models could go beyond question answering to support real clinical decision-making in a way that is both effective and efficient.


What Excites Me About This?

One thing that stood out to me was the potential impact on global health equity. The article mentioned that billions of people lack reliable access to doctors or medical specialists. AI models with advanced medical reasoning could help provide high-quality medical advice anywhere, bridging the gap for underserved communities.

It’s also amazing to think about how AI could support doctors by:

  • Reducing their cognitive load
  • Cross-referencing massive amounts of research
  • Helping with diagnosis and treatment planning

For someone like me who is fascinated by AI’s applications in society, this feels like a real-world example of AI doing good.


What Concerns Me?

At the same time, the blog post emphasized that AI is meant to complement doctors and health professionals, not replace them. I completely agree with this perspective. Medical decisions aren’t just about making the correct diagnosis. Doctors also need to navigate ambiguity, understand patient emotions and values, and build trust with patients and their families in ways AI isn’t designed to do.

Still, even if AI is only used as a tool to support clinicians, there are important concerns:

  • AI could give wrong or biased recommendations if the training data is flawed
  • It might suggest treatments without understanding a patient’s personal situation or cultural background
  • There is a risk of creating new inequalities if only wealthier hospitals or countries can afford the best AI models

Another thought I had was about how roles will evolve. The article mentioned that AI could help doctors automate routine tasks, identify diseases earlier, personalize treatment plans, and even help prevent diseases altogether. This sounds amazing, but it also means future doctors will need to learn how to work with AI systems effectively, interpret their recommendations, and still make the final decisions with empathy and ethical reasoning.


Connections to My Current Interests

While this blog post was about medical AI, it reminded me of my own interests in computational linguistics and language models. Underneath these medical models are the same AI principles I study:

  • Training on large datasets
  • Fine-tuning models for specific tasks
  • Evaluating performance carefully and ethically

It also shows how domain-specific knowledge (like medicine) combined with AI skills can create powerful tools that can literally save lives. That motivates me to keep building my foundation in both language technologies and other fields, so I can be part of these interdisciplinary innovations in the future.


Final Thoughts

Overall, reading this blog post made me feel hopeful about the potential of AI in medicine, but also reminded me of the responsibility AI developers carry. Creating a medical superintelligence isn’t just about reaching a technological milestone. It’s about improving people’s lives safely, ethically, and equitably.

If you’re interested in AI for social good, I highly recommend reading the full article here. Let me know if you want me to write a future post about other applications of AI that I’ve been exploring this summer.

— Andrew

SCiL vs. ACL: What’s the Difference? (A Beginner’s Take from a High School Student)

As a high school student just starting to explore computational linguistics, I remember being confused by two organizations: SCiL (Society for Computation in Linguistics) and ACL (Association for Computational Linguistics). They both focus on language and computers, so at first, I assumed they were basically the same thing.

It wasn’t until recently that I realized they are actually two different academic communities. Each has its own focus, audience, and style of research. I’ve had the chance to engage with both, which helped me understand how they are connected and how they differ.

Earlier this year, I had the opportunity to co-author a paper that was accepted to a NAACL 2025 workshop (May 3–4). NAACL stands for the North American Chapter of the Association for Computational Linguistics. It is a regional chapter that serves researchers in the United States, Canada, and Mexico. NAACL follows ACL’s mission and guidelines but focuses on more local events and contributions.

This summer, I will be participating in SCiL 2025 (July 18–19), where I hope to meet researchers and learn more about how computational models are used to study language structure and cognition. Getting involved with both events helped me better understand what makes SCiL and ACL unique, so I wanted to share what I’ve learned for other students who might also be starting out.

SCiL and ACL: Same Field, Different Focus

Both SCiL and ACL are academic communities interested in studying human language using computational methods. However, they focus on different kinds of questions and attract different types of researchers.

Here’s how I would explain the difference.

SCiL (Society for Computation in Linguistics)

SCiL is more focused on using computational tools to support linguistic theory and cognitive science. Researchers here are often interested in how language works at a deeper level, including areas like syntax, semantics, and phonology.

The community is smaller and includes people from different disciplines like linguistics, psychology, and cognitive science. You are likely to see topics such as:

  • Computational models of language processing
  • Formal grammars and linguistic structure
  • Psycholinguistics and cognitive modeling
  • Theoretical syntax and semantics

If you are interested in how humans produce and understand language, and how computers can help us model that process, SCiL might be a great place to start.

ACL (Association for Computational Linguistics)

ACL has a broader and more applied focus. It is known for its work in natural language processing (NLP), artificial intelligence, and machine learning. The research tends to focus on building tools and systems that can actually use human language in practical ways.

The community is much larger and includes researchers from both academia and major tech companies like Google, OpenAI, Meta, and Microsoft. You will see topics such as:

  • Language models like GPT, BERT, and LLaMA
  • Machine translation and text summarization
  • Speech recognition and sentiment analysis
  • NLP benchmarks and evaluation methods

If you want to build or study real-world AI systems that use language, ACL is the place where a lot of that cutting-edge research is happening.

Which One Should You Explore First?

It really depends on what excites you most.

If you are curious about how language works in the brain or how to use computational tools to test theories of language, SCiL is a great choice. It is more theory-driven and focused on cognitive and linguistic insights.

If you are more interested in building AI systems, analyzing large datasets, or applying machine learning to text and speech, then ACL might be a better fit. It is more application-oriented and connected to the latest developments in NLP.

They both fall under the larger field of computational linguistics, but they come at it from different angles. SCiL is more linguistics-first, while ACL is more NLP-first.

Final Thoughts

I am still early in my journey, but understanding the difference between SCiL and ACL has already helped me navigate the field better. Each community asks different questions, uses different methods, and solves different problems, but both are helping to push the boundaries of how we understand and work with language.

I am looking forward to attending SCiL 2025 this summer, and I will definitely write about that experience afterward. In the meantime, I hope this post helps other students who are just starting out and wondering where to begin.

— Andrew
