How to Cold Email Professors for Research Opportunities as a High School Student: My Experience and Advice

One question I hear a lot from high school students (and one I asked myself when I first started) is: How can I find a research opportunity if I don’t already have connections in academia? Many of us don’t have family or school networks tied to university research, so it can feel impossible to break in. But one effective way is through cold emailing professors.

In this post, I’d like to share my personal experiences and practical advice on how to approach cold emailing, especially if you’re a high school student aiming to start your research journey.


1. Identify Professors in Your Research Area

Start by thinking about what you’re genuinely interested in researching. For me, it was computational linguistics and NLP. Then, search faculty pages on university department websites to find professors working in that field. Look at their personal websites or lab pages to understand their recent projects and publications.

Here’s what I learned:
Even if a professor’s website only mentions research positions for undergraduates or graduate students, it doesn’t necessarily mean they’re closed off to high school students. In many cases, if you are academically ready and motivated, they may be open to mentoring you as well.


2. Craft a Polite and Targeted Introduction Email

Your email should briefly:

  • Introduce yourself (name, grade, school)
  • Share your specific research interests
  • Explain why you are reaching out to them in particular, referencing their recent work
  • Mention any relevant projects you’ve done

When I reached out, I shared my school transcript, certificates (such as those from LinkedIn courses or my University of Washington summer class), and most importantly, my previous research projects and sample work. Demonstrating both preparation and passion makes a difference.

In my emails, I often referenced my nonprofit organization, Student Echo, and my research on analyzing survey data using LLMs. Show that you care about their work and that you want to learn under their guidance while contributing meaningfully to their projects.


3. Clarify Your Intentions and Expectations

Make it clear in your email that:

  • You are volunteering your time to assist with research
  • You do not expect compensation or an official title
  • You are simply seeking experience, mentorship, and an opportunity to contribute

Professors are often busy and have limited budgets. By clarifying that you’re offering help without adding financial or administrative burden, you make it easier for them to say yes.


4. Be Patient and Follow Up Politely

Professors receive many emails and have packed schedules. Wait at least two weeks before sending a follow-up email. In my case, some professors responded quickly with a clear “no” but were kind enough to refer me to colleagues. If they don’t offer a referral, you can politely ask whether they know anyone in their department who might accept high school students for research.

If you don’t hear back, don’t be discouraged. I cold emailed five professors at top-tier universities. Four didn’t work out, but one professor replied and became my first research mentor. That one response can change everything.


5. Prepare for Your First Meeting

Once a professor shows interest:

  • Set up a meeting (virtual or in-person, depending on location)
  • Before the meeting, email them your resume, sample research work, and a personal statement outlining your goals and why you’re interested in their lab

During the meeting:

  • Be humble, respectful, and grateful for their time
  • Show confidence and passion about your research interests
  • If they ask technical questions you don’t know, be honest and express your willingness to learn

In my case, after my virtual meeting, the professor invited me to attend his weekly lab meetings with his graduate students. Eventually, he assigned me to collaborate with one of his Ph.D. students. It was such an amazing opportunity, and I’m so grateful for his trust and mentorship.


Final Thoughts

Cold emailing professors can feel intimidating, but remember: every professor was once a student who started somewhere too. If you’re passionate, prepared, and polite, your efforts will eventually pay off. Even one “yes” can open the door to your first professional research experience.

I hope this post helps you take your first step toward finding a research opportunity. Feel free to let me know if you want me to share a sample cold email template in a future post.

Good luck, and keep pushing your curiosity forward!

— Andrew

What I Learned (and Loved) at SLIYS: Two Weeks of Linguistic Discovery at Ohio State

This summer, I had the chance to participate in both SLIYS 1 and SLIYS 2—the Summer Linguistic Institute for Youth Scholars—hosted by the Ohio State University Department of Linguistics. Across two weeks packed with lectures, workshops, and collaborative data collection, I explored the structure of language at every level: from the individual sounds we make to the complex systems that govern meaning and conversation. But if I had to pick just one highlight, it would be the elicitation sessions—hands-on explorations with real language data that made the abstract suddenly tangible.

SLIYS 1: Finding Language in Structure

SLIYS 1 started with the fundamentals—consonants, vowels, and the International Phonetic Alphabet (IPA)—but quickly expanded into diverse linguistic territory: morphology, syntax, semantics, and pragmatics. Each day featured structured lectures covering topics like sociolinguistic variation, morphological structures, and historical linguistics. Workshops offered additional insights, from analyzing sentence meanings to exploring language evolution.

The core experience, however, was our daily elicitation sessions. My group tackled Serbo-Croatian, collaboratively acting as elicitors and transcribers to construct a detailed grammar sketch. We identified consonant inventories, syllable structures (like CV, CVC, and CCV patterns), morphological markers for plural nouns and verb tenses, and syntactic word orders. Through interactions with our language consultant, we tested hypotheses directly, discovering intricacies like how yes/no questions were formed with the particle da li and how adjective-noun order worked. This daily practice gave theory immediate clarity and meaning, shaping our skills as linguists-in-training.

SLIYS 2: Choosing My Path in Linguistics

SLIYS 2 built upon our initial foundations, diving deeper into phonological analysis, morphosyntactic properties, and the relationship between language and cognition. This week offered more autonomy, allowing us to select workshops tailored to our interests. My choices included sessions on speech perception, dialectology, semiotics, and linguistic anthropology—each challenging me to think more broadly about language as both cognitive and cultural phenomena.

Yet again, the elicitation project anchored our experience, this time exploring Georgian. Our group analyzed Georgian’s distinctive pluralization system, polypersonal verb agreement (verbs agreeing with both subjects and objects), and flexible sentence orders (SVO/SOV). One fascinating detail we uncovered was how nouns remained singular when preceded by numbers. Preparing our final presentation felt especially rewarding, bringing together the week’s linguistic discoveries in a cohesive narrative. Presenting to our peers crystallized not just what we learned, but how thoroughly we’d internalized it.

More Than Just a Summer Program

What I appreciated most about SLIYS was how seriously it treated us as student linguists. The instructors didn’t just lecture—they listened, challenged us, and encouraged our curiosity. Whether we were learning about deixis or discourse analysis, the focus was always on asking better questions, not just memorizing answers.

By the end of SLIYS 2, I found myself thinking not only about how language works, but why we study it in the first place. Language is a mirror to thought, a map of culture, and a bridge between people—and programs like SLIYS remind me that it’s also something we can investigate, question, and build understanding from.

Moments from SLIYS 2: A Snapshot of a Summer to Remember

As SLIYS 2 came to a close, our instructors captured these Zoom screenshots to help us remember the community, curiosity, and collaboration that made this experience so meaningful.

Special Thanks to the SLIYS 2025 Team

This incredible experience wouldn’t have been possible without the passion, insight, and dedication of the SLIYS 2025 instructors. Each one brought something unique to the table—whether it was helping us break down complex syntax, introducing us to sociolinguistics through speech perception, or guiding us through our elicitation sessions with patience and curiosity. I’m especially grateful for the way they encouraged us to ask deeper questions and think like real linguists.

Special thanks to:

  • Kyler Laycock – For leading with energy, making phonetics and dialectology come alive, and always reminding us how much identity lives in the details of speech.
  • Jory Ross – For guiding us through speech perception and conversational structure, and for sharing her excitement about how humans really process language.
  • Emily Sagasser – For her insights on semantics, pragmatics, and focus structure, and for pushing us to think about how language connects to social justice and cognition.
  • Elena Vaikšnoraitė – For their thoughtful instruction in syntax and psycholinguistics, and for showing us the power of connecting data across languages.
  • Dr. Clint Awai-Jennings – For directing the program with care and purpose—and for showing us that it’s never too late to turn a passion for language into a life’s work.

Thank you all for making SLIYS 1 and 2 an unforgettable part of my summer.

— Andrew

WAIC 2025: What Geoffrey Hinton’s “Tiger” Warning Taught Me About AI’s Future

At the end of July (7/26 – 7/28), Shanghai hosted the 2025 World Artificial Intelligence Conference (WAIC), drawing over 1,200 participants from more than 40 countries. Even though I wasn’t there, I followed the conference closely, especially the keynote from Geoffrey Hinton, the so-called “Godfather of AI.” His message? AI is advancing faster than we expect, and we need global cooperation to make sure it stays aligned with human values.

Hinton’s talk was historic. It was his first public appearance in China, and he even stood throughout his address despite back pain, which was noted by local media. One quote really stuck with me: “Humans have grown accustomed to being the most intelligent species in the world – what if that’s no longer the case?” That’s a big question, and as someone who’s diving deeper into computational linguistics and large language models, I felt both amazed and a little uneasy.

His warning compared superintelligent AI to a tiger we’re raising as a pet. If we’re not careful, he said, “the tiger” might one day turn on us. The point wasn’t to scare everyone, but to highlight why we can’t rely on simply pulling the plug if AI systems surpass human intelligence. Hinton believes we need to train AI to be good from the beginning because shutting it down later might not be an option.

WAIC 2025 wasn’t all doom and gloom though. Hinton also talked about the huge potential of AI to accelerate science. For example, he highlighted DeepMind’s AlphaFold as a breakthrough that solved a major biology challenge, predicting protein structures. That shows how powerful AI can be when guided properly.

What stood out the most was the recurring theme of cooperation. Hinton and others, like former Google CEO Eric Schmidt, emphasized the need for global partnerships on AI safety and ethics. Hinton even signed the “Shanghai AI Safety Consensus” with other experts to support international collaboration. The message was clear: no single country can or should handle AI’s future alone.

As a high school student passionate about AI and language, I’m still learning how these pieces fit together. But events like WAIC remind me that the future of AI isn’t just about building smarter systems, it’s also about making sure they work for everyone.

For those interested, here’s a more detailed summary of Hinton’s latest speech: Pandaily Report on WAIC 2025

You can also explore the official WAIC website here: https://www.worldaic.com.cn/

— Andrew

How Dragon Years Shape Marriages and Births: Evidence from Statistical Analysis

Recently, I came across an interesting article published in the journal Significance, an official magazine of the Royal Statistical Society, the American Statistical Association, and the Statistical Society of Australia. Being a Chinese American, I’m always interested in learning about Chinese culture, in addition to the language. This article explored something I’ve heard a lot from my family but never thought about deeply: Do dragon years really make people get married or have babies more?


What Is This All About?

In Chinese astrology, each lunar year is assigned one of 12 animals. The dragon is considered the most powerful and auspicious. Growing up, I often heard my relatives say it’s best to get married or have children in a dragon year because it brings luck and prosperity.

The article shared the author’s personal story about how his Aunty Li would always nag him about getting married. But in the Year of the Dragon (2024), she suddenly stopped. Why? Because planning a wedding or having a baby in a dragon year takes time, and it was already too late for him to give her a “dragon wedding” or “dragon baby.” This story made me smile because it reminded me of my own family gatherings.


What Did the Research Find?

Researchers looked at birth and marriage data from 1970 to 2023 in six countries: Singapore, China, Malaysia, the UK, Kenya, and Mexico. Here are some highlights that stood out to me:

  • In Singapore, there was a strong positive dragon effect. The fertility rate increased by about 0.17 children per woman in dragon years, which is a noticeable boost.
  • In China, surprisingly, there wasn’t a big dragon effect overall. The researchers suggested this could be because of the one-child policy (1979–2015). Families couldn’t plan for a second dragon baby even if they wanted to.
  • In Malaysia, there was a small positive effect, but it wasn’t as strong as Singapore’s.
  • In countries with tiny Chinese populations (UK, Kenya, Mexico), there was no real dragon effect.
  • Snake years, which follow dragon years and are considered less lucky, showed slightly negative effects on fertility, though these were small and not consistent across countries.
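To see how a “dragon effect” like the ones above can be estimated, here is a toy sketch with invented fertility numbers (not the paper’s actual data or model). A dummy-variable regression of fertility on an is-dragon-year indicator reduces, in the simplest case, to a difference in group means:

```python
# Hypothetical total fertility rates by year (invented for illustration).
fertility = {
    1986: 1.43, 1987: 1.62, 1988: 1.96, 1989: 1.75,  # 1988 = dragon year
    1998: 1.48, 1999: 1.47, 2000: 1.60, 2001: 1.41,  # 2000 = dragon year
    2010: 1.15, 2011: 1.20, 2012: 1.29, 2013: 1.19,  # 2012 = dragon year
}
dragon_years = {1988, 2000, 2012}

dragon = [r for y, r in fertility.items() if y in dragon_years]
other = [r for y, r in fertility.items() if y not in dragon_years]

# Difference in mean fertility: dragon years vs. all other years.
effect = sum(dragon) / len(dragon) - sum(other) / len(other)
print(f"Estimated dragon effect: {effect:+.2f} children per woman")
```

The real study controls for trends and country, so its estimate is more careful than this two-group comparison, but the underlying idea is the same.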

What About Marriage?

The study also looked at marriage rates among ethnic Chinese in Singapore. They expected an increase in dragon years, but the results were mixed. There was no clear pattern, and some dragon years actually had fewer marriages. So, while having a dragon baby seems to matter, a dragon wedding might not be as big of a deal in the data (even though aunties still care a lot about it!).


Why Does This Matter?

For me, reading this was a cool reminder of how cultural beliefs can actually show up in real data. It also shows how statistical models can help us separate superstition from reality. In Singapore, the effect was strong enough that even the prime minister encouraged citizens to “add a little dragon” in his Lunar New Year speech.

At the same time, the study reminded me that traditions, culture, and policies (like China’s one-child policy) all interact to shape what people decide to do with their lives.


Final Thoughts

As a student interested in computational linguistics and social data, I find studies like this inspiring. They connect language, culture, demographics, and data analysis in a meaningful way. Plus, it makes me think about how traditions continue to shape decisions, even in modern societies.

I wonder if my parents also hoped I would be a dragon baby. (Spoiler: I’m not, but at least I wasn’t born in the Year of the Snake either!)

If you’re curious about Chinese culture, statistics, or demographic trends, I highly recommend reading the full article here (if your school has access). Let me know if you want a follow-up post explaining how the statistical model in the paper worked.

— Andrew

ACL 2025 New Theme Track: Generalization in NLP Models

The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) will be happening in Vienna, Austria from July 27 to August 1. I won’t be attending in person, but as someone planning to study and do research in computational linguistics and NLP in college, I’ve been following the conference closely to keep up with the latest trends.

One exciting thing about this year’s ACL is its new theme track: Generalization of NLP Models. According to the official announcement:

“Following the success of the ACL 2020–2024 Theme tracks, we are happy to announce that ACL 2025 will have a new theme with the goal of reflecting and stimulating discussion about the current state of development of the field of NLP.

Generalization is crucial for ensuring that models behave robustly, reliably, and fairly when making predictions on data different from their training data. Achieving good generalization is critically important for models used in real-world applications, as they should emulate human-like behavior. Humans are known for their ability to generalize well, and models should aspire to this standard.

The theme track invites empirical and theoretical research and position and survey papers reflecting on the Generalization of NLP Models. The possible topics of discussion include (but are not limited to) the following:

  • How can we enhance the generalization of NLP models across various dimensions—compositional, structural, cross-task, cross-lingual, cross-domain, and robustness?
  • What factors affect the generalization of NLP models?
  • What are the most effective methods for evaluating the generalization capabilities of NLP models?
  • While Large Language Models (LLMs) significantly enhance the generalization of NLP models, what are the key limitations of LLMs in this regard?

The theme track submissions can be either long or short. We anticipate having a special session for this theme at the conference and a Thematic Paper Award in addition to other categories of awards.”

This year’s focus on generalization really highlights where the field is going—toward more robust, ethical, and real-world-ready NLP systems. It’s not just about making cool models anymore, but about making sure they work well across different languages, cultures, and use cases.

If you’re into reading papers like I am, especially ones that dig into how NLP systems can perform reliably on new or unexpected inputs, this theme track will be full of insights. I’m looking forward to checking out the accepted papers when they’re released.

You can read more at the official conference page: ACL 2025 Theme Track Announcement

— Andrew

America’s AI Action Plan: What It Means for the Future of U.S. AI Leadership

On July 23, 2025, the White House released the long-awaited “America’s AI Action Plan”, a major step following President Donald Trump’s Executive Order 14179 signed back in January. The goal? To remove barriers to American leadership in artificial intelligence—and the plan outlines how the U.S. government wants to make that happen.

The 28-page report touches on a lot of areas, but here are a few highlights that stood out to me as a student passionate about AI:

  • Open Access to AI Research: The plan calls for expanding access to government-funded AI models and datasets, helping more students, researchers, and small businesses innovate.
  • Workforce Development: There’s a strong emphasis on education, especially training the next generation of AI talent. This could open new opportunities for students like us to get involved earlier.
  • Streamlining Regulations: The plan pushes for cutting red tape that slows down AI development, while still upholding national security and ethical standards.
  • Government Use of AI: Agencies are encouraged to adopt AI technologies to boost efficiency and modernize services. This is another signal that AI is becoming a core part of public infrastructure.

It’s fascinating to see how quickly AI policy is evolving at the national level. I’ll be keeping an eye on how this action plan plays out, especially in terms of education and research access for younger students.

Read the full report here: whitehouse.gov/AIActionPlan

— Andrew

Attending SCiL 2025: My First In-Person Computational Linguistics Conference at the University of Oregon

This July, I had the amazing opportunity to attend the 2025 Society for Computation in Linguistics (SCiL) conference, held at the University of Oregon in Eugene from July 18 to 20. This wasn’t just my first academic conference in person. It was also my first time attending a conference where I was (surprisingly) the only high school student in the room.


Road Trip to Eugene and My Badge Moment

My family and I made the drive from Seattle to Eugene, a nearly 300-mile road trip along I-5. I was super excited (and a little nervous) to be attending a professional conference alongside professors, postdocs, and graduate students.

When I checked in, I got my conference badge and immediately noticed something funny. My badge just said “Andrew Li,” with no school or organization listed, while everyone else had theirs printed with their university or research institute. I guess Redmond High School isn’t in their system yet!


The Crowd: Grad Students, Professors, and Me

The SCiL crowd was mostly made up of college professors and graduate students. At first, I felt a little out of place sitting in rooms full of experts discussing topics in areas such as pragmatics and large language models. But once the sessions started, I realized that even as a student just starting out in the field, there was so much I could follow and even more that I wanted to learn.

The conference covered a wide range of topics, all tied together by a focus on computational modeling in linguistics. You can find the full conference schedule here.

I was especially drawn to Dr. Malihe Alikhani’s keynote presentation, “Theory of Mind in Generative Models: From Uncertainty to Shared Meaning”. Her talk explored how generative models can effectively facilitate communicative grounding by incorporating theory of mind alongside uncertainty and human feedback. What stood out to me most was the idea that positive friction can be intentionally built into conversational systems to encourage contemplative thinking, such as reflection on uncertain assumptions by both users and AI systems. I was also fascinated by how generative models embody core mechanisms of pragmatic reasoning, offering linguists and cognitive scientists both methodological challenges and opportunities to question how computational systems reflect and shape our understanding of meaning and interaction.


Networking and New Connections

While I didn’t get the chance to meet Prof. Jonathan Dunn in person as planned (he’s teaching “Computational Construction Grammar” at the LSA 2025 Summer Institute from July 24 through August 7 and won’t arrive until July 23), I still made some great new connections.

One of them was Andrew Liu, a graduate student at the University of Toronto. We chatted about his project, “Similarity, Transformation, and the Newly Found Invariance of Influence Functions,” which he’s presenting during the poster session. He was super friendly and shared valuable advice about studying and doing research in computational linguistics and NLP. Here’s his LinkedIn profile if you’d like to check out his work.

Talking with grad students made me realize how wide the field of computational linguistics really is. Everyone had a different background — some came from linguistics, others from computer science or cognitive science — but they were all united by a shared passion for understanding language through computation.


Final Thoughts

Attending SCiL 2025 was eye-opening. Even though I was probably the youngest person there, I felt inspired, welcomed, and challenged in the best way. It confirmed my passion for computational linguistics/NLP and reminded me how much more I want to learn.

If you’re a high school student curious about computational linguistics/NLP, don’t be intimidated by professional conferences. Dive in, listen closely, ask questions, and you might be surprised by how much you take away.

— Andrew

I-Language vs. E-Language: What Do They Mean in Computational Linguistics?

In the summer of 2025, I started working on a computational linguistics research project using Twitch data under the guidance of Dr. Sidney Wong, a Computational Sociolinguist. As someone who is still pretty new to this field, I was mainly focused on learning how to conduct literature reviews, help narrow down research topics, clean data, build models, and extract insights.

One day, Dr. Wong suggested I look into the concept of I-language vs. E-language from theoretical linguistics. At first, I wasn’t sure why this mattered. I thought, Isn’t language just… language?

But as I read more, I realized that understanding this distinction changes how we think about language data and what we’re actually modeling when we work with NLP.

In this post, I want to share what I’ve learned about I-language and E-language, and why this distinction is important for computational linguistics research.


What Is I-Language?

I-language stands for “internal language.” This idea was proposed by Noam Chomsky, who argued that language is fundamentally a mental system. I-language refers to the internal, cognitive grammar that allows us to generate and understand sentences. It is about:

  • The unconscious rules and structures stored in our minds
  • Our innate capacity for language
  • The mental system that explains why we can produce and interpret sentences we’ve never heard before

For example, if I say, “The cat sat on the mat,” I-language is the system in my brain that knows the sentence is grammatically correct and what it means, even though I may never have said that exact sentence before.

I-language focuses on competence (what we know about our language) rather than performance (how we actually use it in real life).


What Is E-Language?

E-language stands for “external language.” This is the language we actually hear and see in the world, such as:

  • Conversations between Twitch streamers and their viewers
  • Tweets, Reddit posts, books, and articles
  • Any linguistic data that exists outside the mind

E-language is about observable language use. It includes everything from polished academic writing to messy chat messages filled with abbreviations, typos, and slang.

Instead of asking, “What knowledge do speakers have about their language?”, E-language focuses on, “What do speakers actually produce in practice?”


Why Does This Matter for Computational Linguistics?

When it comes to computational linguistics and NLP, this distinction affects:

1. What We Model

  • I-language-focused research tries to model the underlying grammatical rules and mental representations. For example, building a parser that captures syntax structures based on linguistic theory.
  • E-language-focused research uses real-world data to build models that predict or generate language based on patterns, regardless of theoretical grammar. For example, training a neural network on millions of Twitch comments to generate chat responses.
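The contrast between the two approaches can be made concrete with a toy sketch (the mini-grammar and chat messages below are invented, not from my Twitch project): an I-language-style model encodes a competence rule and judges grammaticality, while an E-language-style model just counts what speakers actually produced.

```python
from collections import Counter

# I-language flavor: a hand-written competence rule (Det + Noun + Verb).
LEXICON = {"the": "Det", "a": "Det", "cat": "N", "dog": "N", "sat": "V", "ran": "V"}

def grammatical(sentence):
    """Judge a sentence against the internal rule, not against usage data."""
    tags = [LEXICON.get(w) for w in sentence.split()]
    return tags == ["Det", "N", "V"]

# E-language flavor: learn word-bigram counts from observed messages.
chat = ["the cat sat", "the dog ran", "a cat ran"]
bigrams = Counter(
    pair for msg in chat for pair in zip(msg.split(), msg.split()[1:])
)

print(grammatical("the cat sat"))   # the rule accepts this order
print(grammatical("cat the sat"))   # the rule rejects this order
print(bigrams[("the", "cat")])      # the usage model only knows what it saw
```

The first model can reject a word order it has never seen; the second can only reflect the frequencies in its data, which is exactly the competence-versus-usage split.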

2. Research Goals

If your goal is to understand how humans process and represent language cognitively, you’re leaning towards I-language research. This includes computational psycholinguistics, cognitive modeling, and formal grammar induction.

If your goal is to build practical NLP systems for tasks like translation, summarization, or sentiment analysis, you’re focusing on E-language. These projects care about performance and usefulness, even if the model doesn’t match linguistic theory.


3. How Models Are Evaluated

I-language models are evaluated based on how well they align with linguistic theory or native speaker intuitions about grammaticality.

E-language models are evaluated using performance metrics, such as accuracy, BLEU scores, or perplexity, based on how well they handle real-world data.
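Perplexity, for example, is just the exponential of the average negative log-probability a model assigns to each token. Here is a minimal computation with made-up token probabilities (not from any real model):

```python
import math

# Hypothetical probabilities a language model assigned to four tokens.
token_probs = [0.25, 0.10, 0.50, 0.05]

# Perplexity = exp(-mean log-probability); lower means the model was
# less "surprised" by the text.
perplexity = math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))
print(round(perplexity, 2))
```

Intuitively, a perplexity of about 6 means the model was, on average, as uncertain as if it were choosing uniformly among about six tokens at each step.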


My Thoughts as a Beginner

When Dr. Wong first told me about this distinction, I thought it was purely theoretical. But now, while working with Twitch data, I see the importance of both views.

For example:

  • If I want to study how syntax structures vary in Twitch chats, I need to think in terms of I-language to analyze grammar.
  • If I want to build an NLP model that generates Twitch-style messages, I need to focus on E-language to capture real-world usage patterns.

Neither approach is better than the other. They just answer different types of questions. I-language is about why language works the way it does, while E-language is about how language is actually used in the world.


Final Thoughts

Understanding I-language vs. E-language helps me remember that language isn’t just data for machine learning models. It’s a human system with deep cognitive and social layers. Computational linguistics becomes much more meaningful when we consider both perspectives: What does the data tell us? and What does it reveal about how humans think and communicate?

If you’re also just starting out in this field, I hope this post helps you see why these theoretical concepts matter for practical NLP and AI work. Let me know if you want a follow-up post about other foundational linguistics ideas for computational research.

— Andrew

What Is Computational Linguistics (and How Is It Different from NLP)?

When I first got interested in this field, I kept seeing the terms computational linguistics and natural language processing (NLP) used almost interchangeably. At first, I thought they were the same thing. By delving deeper through reading papers, taking courses, and conducting research, I realized that although they overlap significantly, they are not entirely identical.

So in this post, I want to explain the difference (and connection) between computational linguistics and NLP from the perspective of a high school student who’s just getting started, but really interested in understanding both the language and the tech behind today’s AI systems.


So, what is computational linguistics?

Computational linguistics is the science of using computers to understand and model human language. It’s rooted in linguistics, the study of how language works, and applies computational methods to test linguistic theories, analyze language structure, or build tools like parsers and grammar analyzers.

It’s a field that sits at the intersection of computer science and linguistics. Think syntax trees, morphology, phonology, semantics, and using code to work with all of those.

For example, in computational linguistics, you might:

  • Use code to analyze sentence structure in different languages
  • Create models that explain how children learn grammar rules
  • Explore how prosody (intonation and stress) changes meaning in speech
  • Study how regional dialects appear in online chat platforms like Twitch

In other words, computational linguistics is often about understanding language (how it’s structured, how it varies, and how we can model it with computers).
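As a tiny illustration of the first bullet above, here is a toy recursive-descent parser for a hand-written grammar (S → NP VP, NP → Det N, VP → V (NP)). The lexicon and rules are invented for this sketch; real research tools are far richer:

```python
# Minimal lexicon mapping words to parts of speech.
LEXICON = {"the": "Det", "a": "Det", "cat": "N", "mat": "N", "sat": "V", "saw": "V"}

def parse(words):
    """Return a nested (label, children) tree, or None if no parse exists."""
    def np(i):  # NP -> Det N
        if i + 1 < len(words) and LEXICON.get(words[i]) == "Det" \
                and LEXICON.get(words[i + 1]) == "N":
            return ("NP", [words[i], words[i + 1]]), i + 2
        return None, i

    def vp(i):  # VP -> V (NP)
        if i < len(words) and LEXICON.get(words[i]) == "V":
            obj, j = np(i + 1)
            return ("VP", [words[i]] + ([obj] if obj else [])), (j if obj else i + 1)
        return None, i

    subj, i = np(0)
    pred, j = vp(i) if subj else (None, 0)
    if subj and pred and j == len(words):
        return ("S", [subj, pred])
    return None

tree = parse("the cat saw a mat".split())
print(tree)
```

Even this toy version shows the computational-linguistics mindset: the program embodies a structural claim about the language, and sentences that violate the claim simply fail to parse.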


Then what is NLP?

Natural language processing (NLP) is a subfield of AI and computer science that focuses on building systems that can process and generate human language. It’s more application-focused. If you’ve used tools like ChatGPT, Google Translate, Siri, or even grammar checkers, you’ve seen NLP in action.

While computational linguistics asks, “How does language work, and how can we model it?”, NLP tends to ask, “How can we build systems that understand or generate language usefully?”

Examples of NLP tasks:

  • Sentiment analysis (e.g., labeling text as positive, negative, or neutral)
  • Machine translation
  • Named entity recognition (e.g., tagging names, places, dates)
  • Text summarization or question answering
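To make the first task concrete, here is a toy lexicon-based sentiment scorer (the word lists are invented, and real NLP systems are far more sophisticated than counting keywords):

```python
# Tiny hand-made sentiment lexicons (illustrative only).
POSITIVE = {"great", "love", "amazing", "good"}
NEGATIVE = {"bad", "hate", "boring", "awful"}

def sentiment(text):
    """Label text positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this amazing lecture"))
print(sentiment("That talk was boring"))
```

A linguist would immediately spot this sketch's blind spots (negation like “not good”, sarcasm, domain slang), which is exactly why linguistic insight still matters in NLP.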

In many cases, NLP researchers care more about whether a system works than whether it matches a formal linguistic theory. That doesn’t mean theory doesn’t matter, but the focus is more on performance and results.


So, what’s the difference?

The line between the two fields can get blurry (and many people work in both), but here’s how I think of it:

Computational Linguistics | NLP
Rooted in linguistics | Rooted in computer science and AI
Focused on explaining and modeling language | Focused on building tools and systems
Often theoretical or data-driven linguistics | Often engineering-focused and performance-driven
Examples: parsing syntax, studying morphology | Examples: sentiment analysis, machine translation

Think of computational linguistics as the science of language and NLP as the engineering side of language technology.


Why this matters to me

As someone who’s really interested in computational linguistics, I find myself drawn to the linguistic side of things, like how language varies, how meaning is structured, and how AI models sometimes get things subtly wrong because they don’t “understand” language the way humans do.

At the same time, I still explore NLP, especially when working on applied projects like sentiment analysis or topic modeling. I think having a strong foundation in linguistics makes me a better NLP researcher (or student), because I’m more aware of the complexity and nuance of language.


Final thoughts

If you’re just getting started, you don’t have to pick one or the other. Read papers from both fields. Try projects that help you learn both theory and application. Over time, you’ll probably find yourself leaning more toward one, but having experience in both will only help.

I’m still learning, and I’m excited to keep going deeper into both sides. If you’re interested too, let me know! I’m always up for sharing reading lists, courses, or just thoughts on cool research.

— Andrew


AI-Driven Insights from the Class of 2025 Senior Exit Survey

In late June 2025, I led my nonprofit organization, Student Echo, in a collaboration with Redmond High School to analyze responses from the Class of 2025 Senior Exit Survey. This annual survey, organized by the school’s College & Career Center, collects information on seniors’ post-graduation plans.

While the survey covers multiple areas, our focus was on one key free-response question:
“What additional support do you need before you graduate?”

The College & Career Center team had limited tools to process and interpret open-ended responses at scale. That’s where we came in. Using Student Echo’s AI tools, we analyzed the free-text answers and uncovered themes that could help the school offer more effective and timely support for graduating seniors.
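As a simplified illustration of that kind of analysis (the responses and theme keywords below are invented, and this is much cruder than Student Echo’s actual AI pipeline), free-text answers can be tagged with themes and tallied:

```python
from collections import Counter

# Hypothetical theme-to-keyword mapping (illustrative only).
THEMES = {
    "transcripts": {"transcript", "records"},
    "scholarships": {"scholarship", "financial aid"},
    "mental health": {"stress", "burnout", "anxious"},
}

# Invented sample responses to the free-response question.
responses = [
    "Not sure how to send my transcript",
    "Need help finding scholarship deadlines",
    "Feeling a lot of burnout honestly",
    "I think I'm good?",
]

# Count how many responses touch each theme.
counts = Counter(
    theme
    for text in responses
    for theme, keywords in THEMES.items()
    if any(k in text.lower() for k in keywords)
)
print(counts.most_common())
```

Note that the last response matches no keyword at all; that is precisely the kind of passively confused answer an LLM-based analysis can surface and keyword matching misses.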


Recommendations:

  • Maintain a master checklist of graduation tasks.
  • Schedule quick counselor check-ins with all seniors.
  • Offer transcript and scholarship submission workshops.
  • Watch for students who indicate confusion passively (“I think I’m good?”).
  • Continue mental health messaging and support for burnout or senioritis.

These recommendations aim to make senior-year support more targeted, equitable, and proactive. We were especially excited to hear that the College & Career Center plans to share our findings with the Counseling Department Chair to explore ways to improve their processes based on our analysis.

The full report is available below.

— Andrew
