How Large Language Models Are Changing Relation Extraction in NLP

When you type a question into a search engine like “Who wrote Hamlet?” it does more than match keywords. It connects the dots between “Shakespeare” and “Hamlet,” identifying the relationship between a person and their work. This process of finding and labelling relationships in text is called relation extraction (RE). It powers everything from knowledge graphs to fact-checking systems.

In the past, relation extraction systems were built with hand-crafted rules or required thousands of annotated examples to train. Now, large language models (LLMs) such as GPT, T5, and LLaMA are making it possible to do relation extraction with far less data and more flexibility. In this post, I want to explore what relation extraction is, how LLMs are transforming it, and why this matters for anyone interested in the future of language technology.


The Old Way Versus the New Way

Relation extraction used to rely heavily on feature engineering. Researchers would first hand-craft linguistic or statistical features from the text, such as part-of-speech tags, dependency parses, named entity types, and the words appearing between two entities. They often built lexical patterns like “X, the capital of Y” or “X, founded by Y” to capture specific relationships. They also extracted syntactic paths (for example, the shortest path between two entities in a dependency tree) or used windowed word embeddings around the entities to encode local context. These features were then fed into classifiers like Support Vector Machines, logistic regression models, or decision trees to decide which relationship best described the entity pair. Later, pretrained transformer models like BERT and RoBERTa replaced much of this manual feature design by learning their own representations directly from text, but they still needed large, labeled datasets to achieve good performance.
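To make the old pipeline concrete, here is a minimal sketch of feature-based relation extraction with scikit-learn. The features and the tiny dataset are invented for illustration; real systems used far richer feature sets.

```python
# A minimal sketch of the classic pipeline: hand-crafted features fed to a
# linear classifier. Dataset and features are toy examples, not a benchmark.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(sentence, e1, e2):
    """Very simple hand-crafted features for the entity pair (e1, e2)."""
    between = sentence.split(e1)[-1].split(e2)[0].strip().lower()
    return {
        "words_between": between,          # lexical pattern, e.g. ", the capital of"
        "e1_first": sentence.find(e1) < sentence.find(e2),
        "distance": len(between.split()),  # token distance between the entities
    }

train = [
    ("Paris, the capital of France, is large.", "Paris", "France", "capital_of"),
    ("Rome, the capital of Italy, is old.", "Rome", "Italy", "capital_of"),
    ("Apple, founded by Steve Jobs, makes phones.", "Apple", "Steve Jobs", "founded_by"),
    ("Microsoft, founded by Bill Gates, sells software.", "Microsoft", "Bill Gates", "founded_by"),
]

X = [features(s, e1, e2) for s, e1, e2, _ in train]
y = [label for *_, label in train]

clf = make_pipeline(DictVectorizer(), LogisticRegression())
clf.fit(X, y)
print(clf.predict([features("Berlin, the capital of Germany.", "Berlin", "Germany")]))
# -> ['capital_of']
```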

Today, LLMs can often handle relation extraction with just a prompt. Instead of training a model from scratch, you can ask an LLM directly: “Identify the relationship between ‘Einstein’ and ‘the theory of relativity.’” With the right instructions, the model can output the relation (“Einstein proposed the theory of relativity”) even if it has never seen that exact pair in training.
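Here is a minimal sketch of that prompt-based approach, assuming the OpenAI Python client with a key set in the OPENAI_API_KEY environment variable; the model name is only an example.

```python
# A sketch of prompt-based relation extraction via an LLM API (assumptions:
# openai client installed, OPENAI_API_KEY set, model name illustrative).
from openai import OpenAI

client = OpenAI()

def extract_relation(sentence, e1, e2):
    prompt = (
        f"Read the sentence: '{sentence}'\n"
        f"State the relationship between '{e1}' and '{e2}' "
        "as one (subject, relation, object) triple."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(extract_relation("Marie Curie discovered radium.", "Marie Curie", "radium"))
```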

[Diagram: the classic relation extraction pipeline versus the LLM-based pipeline]


How LLMs Tackle Relation Extraction

LLMs have been trained on massive amounts of text, which gives them a broad sense of how entities and relationships co-occur. This makes them powerful at zero-shot and few-shot learning. In other words, you do not need to show them hundreds of examples of a relation type before they can identify it.

For example, a simple prompt could be:

Prompt: “Read the sentence: ‘Marie Curie discovered radium.’ What is the relationship between the entities?”
Model Output from ChatGPT-5 and Grok-4:

[ChatGPT-5] In the sentence “Marie Curie discovered radium”, the relationship between the entities is:

  • Marie Curie (the subject / agent) performed the action.
  • radium (the object / theme) is what was discovered.

This is a “discoverer–discovery” relationship (also called “person–invention/discovery” or “agent–object” in relation extraction). In other words, Marie Curie is the discoverer and radium is the thing discovered.

[Grok-4] In the sentence “Marie Curie discovered radium,” the relationship between the entities “Marie Curie” and “radium” is that of discoverer and discovered. Marie Curie, a scientist, is the person who identified or found the chemical element radium.

This ability to generalize makes LLMs more versatile, especially for relations in domains where labeled data is scarce, like medicine or law. However, it also introduces risks. LLMs may hallucinate relationships that are not actually in the text or mislabel subtle ones, so careful evaluation is still necessary.
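Evaluation does not have to be elaborate to be useful. A toy sketch, with invented triples, that scores extracted (subject, relation, object) triples against a small gold set:

```python
# Toy precision/recall/F1 over extracted triples; all triples invented.
gold = {("Marie Curie", "discovered", "radium"),
        ("Einstein", "proposed", "theory of relativity")}
predicted = {("Marie Curie", "discovered", "radium"),
             ("Einstein", "invented", "theory of relativity")}  # mislabeled relation

tp = len(gold & predicted)
precision = tp / len(predicted)
recall = tp / len(gold)
f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
# -> P=0.50 R=0.50 F1=0.50
```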


Recent Research Highlights

A major paper, A Survey on Cutting-Edge Relation Extraction Techniques Based on Language Models (Diaz-Garcia & López, 2024), reviews 137 recent ACL papers (2020–2023) that use language models for relation extraction. It shows that BERT-based methods still lead many benchmarks while models like T5 are rising in few-shot and unseen-relation settings.

Other papers from ACL 2024 and 2025 explore how well LLMs handle unseen relation types, cross-domain relation extraction, and low-resource settings. These studies show steady improvements but also highlight open questions about factuality, bias, and consistency.


Why This Matters Beyond Academia

Relation extraction sits at the core of knowledge-driven applications. Building or updating a knowledge graph for a company’s internal documents, mapping patient histories in healthcare, or connecting laws to court cases in legal tech all depend on accurately identifying relationships between entities.

LLMs make it possible to automate these tasks more quickly. Instead of spending months labeling data, organizations can draft knowledge structures with an LLM, then have humans verify or refine the results. This speeds up research and decision-making while expanding access to insights that would otherwise stay hidden in text.


Challenges and Open Questions

While LLMs are powerful, they are not flawless. They may infer relationships that are plausible but incorrect, especially if the prompt is ambiguous. Evaluating relation extraction at scale is also difficult, because many relations are context-specific or only partially expressed. Bias in training data can also skew the relationships a model “sees” as likely or normal.

Researchers are now working on ways to add uncertainty estimates, retrieval-augmented methods (i.e., combining information retrieval with generative models to improve response accuracy and relevance), and better benchmarks to test how well models extract relations across different domains and languages.
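To make the retrieval-augmented idea concrete, here is a minimal sketch under toy assumptions: TF-IDF retrieval stands in for a real retriever, and the retrieved passage is placed into the extraction prompt so the model stays grounded in actual text.

```python
# Retrieve the most relevant passage for an entity pair, then constrain
# extraction to that passage. Corpus and query are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Marie Curie discovered radium in 1898.",
    "Radium was once used in luminous paint.",
    "Pierre Curie shared the 1903 Nobel Prize.",
]
query = "Marie Curie radium"

vec = TfidfVectorizer()
doc_vecs = vec.fit_transform(corpus)
scores = cosine_similarity(vec.transform([query]), doc_vecs)[0]
support = corpus[scores.argmax()]

# The supporting passage then anchors the prompt sent to the LLM:
prompt = (f"Using only this passage: '{support}', "
          "state the relation between Marie Curie and radium.")
print(prompt)
```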


My Take as a High Schooler Working in NLP

As someone who has built a survey analysis platform and published research papers about sentiment classification, I find relation extraction exciting because it can connect scattered pieces of information into a bigger picture. Specifically, for projects like my nonprofit Student Echo, a future system could automatically link student concerns to policy areas or resources.

At the same time, I am cautious. Seeing how easily LLMs generate answers reminds me that relationships in text are often subtle. Automating them risks oversimplifying complex realities. Still, the idea that a model can find and organize connections that would take a person hours to spot is inspiring and worth exploring.


Conclusion

Relation extraction is moving from hand-built rules and large labeled datasets to flexible, generalist large language models. This shift is making it easier to build knowledge graphs, extract facts, and understand text at scale. But it also raises new questions about reliability, fairness, and evaluation.

If you want to dig deeper, check out A Survey on Cutting-Edge Relation Extraction Techniques Based on Language Models (arXiv link) or browse ACL 2024–2025 papers on relation extraction. Watching how this field evolves over the next few years will be exciting, and I plan to keep following it for future blog posts.

— Andrew


Introduction to Zotero: Your Free Personal Research Assistant

At the beginning of this summer (2025), I learned about a tool that I wish I had discovered years ago. I hadn’t even heard of Zotero until my research collaborator, computational sociolinguist Dr. Sidney Wong, introduced it to me while we were working on our computational linguistics project analyzing Twitch data.

After exploring it and learning to use it for my current research, I now realize how effective and essential Zotero is for managing academic work. Honestly, I wish I could have used it for all my previous research projects.


What is Zotero?

Zotero is a free, easy-to-use tool that helps researchers at any level:

  • Collect sources such as journal articles, books, web pages, and more
  • Organize them into collections and tag them for easy retrieval
  • Annotate PDFs directly within the app with highlights and notes
  • Cite sources seamlessly in any citation style while writing papers
  • Share references and collections with collaborators

It’s like having a personal research assistant that keeps all your readings, citations, and notes organized in one place.


Why I Recommend Zotero for High School Students

As high school students, we often juggle multiple classes, club projects, competitions, and research interests. Zotero makes it easy to:

  • Manage research projects efficiently, especially when writing papers that require formal citations
  • Keep track of readings and annotate PDFs, so you don’t lose key insights
  • Collaborate with teammates or research mentors by sharing folders and annotations
  • Avoid citation mistakes, as it automatically generates references in APA, MLA, Chicago, and many other styles

My Experience Using Zotero

When Dr. Wong first recommended Zotero to me, I was a bit hesitant because I thought, “Do I really need another app?” But after installing it and importing my Twitch-related research papers, I quickly saw its value. Now, I can:

  • Search across all my papers by keyword or tag
  • Keep notes attached to specific papers so I never lose insights
  • Instantly generate BibTeX entries for LaTeX documents or formatted citations for my blog posts and papers
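For anyone who likes to script things, Zotero also exposes a web API, and the community pyzotero client (pip install pyzotero) can pull a library into Python. A small sketch, where LIBRARY_ID and API_KEY are placeholders you get from your zotero.org settings:

```python
# Sketch: list recent items from a Zotero library via pyzotero.
# LIBRARY_ID and API_KEY are placeholders from zotero.org account settings.
from pyzotero import zotero

zot = zotero.Zotero(library_id="LIBRARY_ID", library_type="user", api_key="API_KEY")

# Print titles and tags of the five most recently added top-level items
for item in zot.top(limit=5):
    data = item["data"]
    print(data.get("title", "(untitled)"), [t["tag"] for t in data.get("tags", [])])
```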

I wish I had known about Zotero earlier, especially during my survey sentiment analysis project and my work preparing research paper submissions. It would have saved me so much time managing citations and keeping literature organized.


Zotero vs. Other Reference Managers: Pros and Cons

Here is a quick comparison of Zotero vs. similar tools like Mendeley and EndNote based on my research and initial use:

Pros of Zotero

  • ✅ Completely free and open source with no premium restrictions on core features
  • ✅ Easy to use with a clean interface suitable for beginners
  • ✅ Browser integration for one-click saving of articles and webpages
  • ✅ Excellent plugin support for Word, LibreOffice, and Google Docs
  • ✅ Strong community support and development
  • ✅ Works well for group projects with shared libraries

Cons of Zotero

  • ❌ Limited built-in cloud storage for PDFs (300 MB free; need WebDAV or paid plan for more)
  • ❌ Not as widely used in certain STEM fields compared to Mendeley or EndNote
  • ❌ Slightly fewer advanced citation style editing features than EndNote

Compared to Mendeley

  • Mendeley offers 2 GB free storage and a slightly more modern PDF viewer, but it is owned by Elsevier and some users dislike its closed ecosystem.
  • Zotero, being open-source, is often preferred for transparency and community-driven development.

Compared to EndNote

  • EndNote is powerful and widely used in academia but is expensive (>$100 license), making it inaccessible for many high school students.
  • Zotero offers most of the core features for free with a simpler setup.

Final Thoughts

If you’re a high school student interested in research, I highly recommend checking out Zotero. It’s free, easy to set up, and can make your academic life so much more organized and efficient.

You can explore and download it here. Let me know if you want a future blog post on how I set up my Zotero collections and notes for research projects.

— Andrew


Latest Applications of NLP to Recommender Systems at RecSys 2025

Introduction

The ACM Conference on Recommender Systems (RecSys) 2025 took place in Prague, Czech Republic, from September 22–26, 2025. The event brought together researchers and practitioners from academia and industry to present their latest findings and explore new trends in building recommendation technologies.

This year, one of the most exciting themes was the growing overlap between natural language processing (NLP) and recommender systems. Large language models (LLMs), semantic clustering, and text-based personalization appeared everywhere, showing how recommender systems are now drawing heavily on computational linguistics. As someone who has been learning more about NLP myself, it is really cool to see how the research world is pushing these ideas forward.


Paper Highlights

A Language Model-Based Playlist Generation Recommender System

Paper Link

Relevance:
Uses language models to generate playlists by creating semantic clusters from text embeddings of playlist titles and track metadata. This directly applies NLP for thematic coherence and semantic similarity in music recommendations.

Abstract:
The title of a playlist often reflects an intended mood or theme, allowing creators to easily locate their content and enabling other users to discover music that matches specific situations and needs. This work presents a novel approach to playlist generation using language models to leverage the thematic coherence between a playlist title and its tracks. Our method consists in creating semantic clusters from text embeddings, followed by fine-tuning a transformer model on these thematic clusters. Playlists are then generated considering the cosine similarity scores between known and unknown titles and applying a voting mechanism. Performance evaluation, combining quantitative and qualitative metrics, demonstrates that using the playlist title as a seed provides useful recommendations, even in a zero-shot scenario.
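As a rough illustration of the core idea (my own simplified sketch, not the authors’ code), you can embed playlist titles and rank known titles by cosine similarity to a new seed title; the sentence-transformers model name is just an example.

```python
# Rank known playlist titles by similarity to a seed title.
# Simplified illustration of the paper's idea; titles are invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model

known_titles = ["rainy day jazz", "gym motivation", "late night study", "summer road trip"]
seed = "focus music for homework"

emb_known = model.encode(known_titles, convert_to_tensor=True)
emb_seed = model.encode(seed, convert_to_tensor=True)

scores = util.cos_sim(emb_seed, emb_known)[0]
for title, score in sorted(zip(known_titles, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {title}")
```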


An Off-Policy Learning Approach for Steering Sentence Generation towards Personalization

Paper Link

Relevance:
Focuses on off-policy learning to guide LLM-based sentence generation for personalized recommendations. Involves NLP tasks like controlled text generation and personalization via language model fine-tuning.

Abstract:
We study the problem of personalizing the output of a large language model (LLM) by training on logged bandit feedback (e.g., personalizing movie descriptions based on likes). While one may naively treat this as a standard off-policy contextual bandit problem, the large action space and the large parameter space make naive applications of off-policy learning (OPL) infeasible. We overcome this challenge by learning a prompt policy for a frozen LLM that has only a modest number of parameters. The proposed Direct Sentence Off-policy gradient (DSO) effectively propagates the gradient to the prompt policy space by leveraging the smoothness and overlap in the sentence space. Consequently, DSO substantially reduces variance while also suppressing bias. Empirical results on our newly established suite of benchmarks, called OfflinePrompts, demonstrate the effectiveness of the proposed approach in generating personalized descriptions for movie recommendations, particularly when the number of candidate prompts and reward noise are large.


Enhancing Sequential Recommender with Large Language Models for Joint Video and Comment Recommendation

Paper Link

Relevance:
Integrates LLMs to enhance sequential recommendations by processing video content and user comments. Relies on NLP for joint modeling of multimodal text (like comments) and semantic user preferences.

Abstract:
Nowadays, reading or writing comments on captivating videos has emerged as a critical part of the viewing experience on online video platforms. However, existing recommender systems primarily focus on users’ interaction behaviors with videos, neglecting comment content and interaction in user preference modeling. In this paper, we propose a novel recommendation approach called LSVCR that utilizes user interaction histories with both videos and comments to jointly perform personalized video and comment recommendation. Specifically, our approach comprises two key components: sequential recommendation (SR) model and supplemental large language model (LLM) recommender. The SR model functions as the primary recommendation backbone (retained in deployment) of our method for efficient user preference modeling. Concurrently, we employ a LLM as the supplemental recommender (discarded in deployment) to better capture underlying user preferences derived from heterogeneous interaction behaviors. In order to integrate the strengths of the SR model and the supplemental LLM recommender, we introduce a two-stage training paradigm. The first stage, personalized preference alignment, aims to align the preference representations from both components, thereby enhancing the semantics of the SR model. The second stage, recommendation-oriented fine-tuning, involves fine-tuning the alignment-enhanced SR model according to specific objectives. Extensive experiments in both video and comment recommendation tasks demonstrate the effectiveness of LSVCR. Moreover, online A/B testing on KuaiShou platform verifies the practical benefits of our approach. In particular, we attain a cumulative gain of 4.13% in comment watch time.


LLM-RecG: A Semantic Bias-Aware Framework for Zero-Shot Sequential Recommendation

Paper Link

Relevance:
Addresses domain semantic bias in LLMs for cross-domain recommendations using generalization losses to align item embeddings. Employs NLP techniques like pretrained representations and semantic alignment to mitigate vocabulary differences across domains.

Abstract:
Zero-shot cross-domain sequential recommendation (ZCDSR) enables predictions in unseen domains without additional training or fine-tuning, addressing the limitations of traditional models in sparse data environments. Recent advancements in large language models (LLMs) have significantly enhanced ZCDSR by facilitating cross-domain knowledge transfer through rich, pretrained representations. Despite this progress, domain semantic bias arising from differences in vocabulary and content focus between domains remains a persistent challenge, leading to misaligned item embeddings and reduced generalization across domains.

To address this, we propose a novel semantic bias-aware framework that enhances LLM-based ZCDSR by improving cross-domain alignment at both the item and sequential levels. At the item level, we introduce a generalization loss that aligns the embeddings of items across domains (inter-domain compactness), while preserving the unique characteristics of each item within its own domain (intra-domain diversity). This ensures that item embeddings can be transferred effectively between domains without collapsing into overly generic or uniform representations. At the sequential level, we develop a method to transfer user behavioral patterns by clustering source domain user sequences and applying attention-based aggregation during target domain inference. We dynamically adapt user embeddings to unseen domains, enabling effective zero-shot recommendations without requiring target-domain interactions.

Extensive experiments across multiple datasets and domains demonstrate that our framework significantly enhances the performance of sequential recommendation models on the ZCDSR task. By addressing domain bias and improving the transfer of sequential patterns, our method offers a scalable and robust solution for better knowledge transfer, enabling improved zero-shot recommendations across domains.
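To build intuition for the item-level objective, here is a rough, hypothetical PyTorch sketch of a loss that pulls aligned cross-domain item embeddings together (inter-domain compactness) while penalizing near-duplicate embeddings within a domain (intra-domain diversity); the paper’s actual formulation may differ.

```python
# Hypothetical sketch of a two-part generalization loss; not the LLM-RecG code.
import torch
import torch.nn.functional as F

def generalization_loss(src, tgt, margin=0.5):
    # src, tgt: (n_items, dim) embeddings of aligned items in two domains
    compact = F.mse_loss(src, tgt)                 # inter-domain alignment
    sim = F.cosine_similarity(src.unsqueeze(1), src.unsqueeze(0), dim=-1)
    off_diag = sim - torch.eye(len(src))           # zero out self-similarity
    diverse = F.relu(off_diag - margin).mean()     # penalize near-duplicates
    return compact + diverse

src = torch.randn(8, 64, requires_grad=True)
tgt = torch.randn(8, 64)
loss = generalization_loss(src, tgt)
loss.backward()
print(loss.item())
```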


Trends Observed

These papers reflect a broader trend at RecSys 2025 toward hybrid NLP-RecSys approaches, with LLMs enabling better handling of textual side information (like reviews, titles, and comments) for cold-start problems and cross-domain generalization. This aligns with recent surveys on LLMs in recommender systems, which note improvements in semantic understanding over traditional embeddings.


Final Thoughts

As a high school student interested in computational linguistics, reading about these papers feels like peeking into the future. I used to think of recommender systems as black boxes that just show you more videos or songs you might like. But at RecSys 2025, it is clear the field is moving toward systems that actually “understand” language and context, not just click patterns.

For me, that is inspiring. It means the skills I am learning right now, from studying embeddings to experimenting with sentiment analysis, could actually be part of real-world systems that people use every day. It also shows how much crossover there is between disciplines. You can be into linguistics, AI, and even user experience design, and still find a place in recommender system research.

Seeing these studies also makes me think about the responsibility that comes with more powerful recommendation technology. If models are becoming better at predicting our tastes, we have to be careful about bias, fairness, and privacy. This is why conferences like RecSys are so valuable. They are a chance for researchers to share ideas, critique each other’s work, and build a better tech future together.

— Andrew


What Is an Annotated Bibliography (And Why Every Junior Researcher Should Make One)

Recently, I was fortunate to work with Dr. Sidney Wong on a computational linguistics research project using Twitch data. As a high school student just stepping into research in the field, I learned a lot—not just about the technical side of computational linguistics, but also about how research is actually done.

One of the most valuable lessons I took away was the importance of using a structured research process, especially when it comes to narrowing down a topic and conducting a literature survey. One tool that stood out to me was the annotated bibliography.

Although our project is still ongoing, I wanted to take a moment to introduce annotated bibliographies to other students who are just beginning their own research journeys.


What Is an Annotated Bibliography?

An annotated bibliography is more than just a list of sources. It’s a carefully organized collection of books, research papers, or articles. Each entry includes a short summary and analysis that helps explain what the source is about, how reliable it is, and how it fits into your research.

Each entry usually includes:

  • A full citation in a standard format (APA, MLA, Chicago, etc.)
  • A brief summary of the key points
  • An evaluation of the source’s quality or credibility
  • A reflection on how the source is useful for your project

In other words, it helps you stay organized and think critically while reading. It’s like building your own research map.
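For instance, here is what a single entry might look like. The citation is a real, well-known NLP paper; the reflection is a made-up example of how I might tie it to our Twitch project.

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT 2019, 4171–4186.

  • Summary: Introduces BERT, a transformer model pretrained with masked language modeling that can be fine-tuned for many NLP tasks.
  • Evaluation: Peer-reviewed and among the most cited papers in NLP, so a dependable anchor for a methods discussion.
  • Relevance: A plausible baseline model for classifying Twitch chat messages in our language-variation study.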


Why It Matters (Especially for Beginners)

When you’re new to a field, it’s easy to feel overwhelmed by all the papers and sources out there. Creating an annotated bibliography helps in several important ways:

1. Keeps you organized

Instead of juggling dozens of open tabs and scattered notes, you have everything in one place with clear summaries and citations.

2. Helps you truly understand what you read

Summarizing and reflecting on a source forces you to go beyond skimming. You learn to recognize the core arguments, methods, and relevance.

3. Highlights gaps in the literature

As you build your list, you’ll start to notice which topics are well studied and which ones aren’t. That can help you identify potential research questions.

4. Makes writing much easier later

When it’s time to write your literature review or paper, you’ll already have the core material prepared.


How I Got Started

When I began working with Dr. Wong on our project about Twitch chat data and language variation, he encouraged me to start building an annotated bibliography early. I started collecting articles on sociolinguistics, computational methods, and prior research involving Twitch or similar platforms.

For each article, I wrote down:

  • What the authors studied
  • How they conducted the research
  • What they concluded
  • And how it connects to my own research

Even though I’m still early in the process, having this document has already helped me organize my thoughts and see where our work fits in the broader field.


Final Thoughts

If you’re just starting out in research, I highly recommend keeping an annotated bibliography from day one. It may seem like extra work at first, but it will pay off in the long run. You’ll read more thoughtfully, remember more of what you read, and write more confidently when it’s time to publish or present.

I’ll share more about our Twitch project once it’s complete. Until then, I hope this helps you take your first step toward building strong research habits.

— Andrew


Drawing the Lines: The UN’s Push for Global AI Safeguards

On September 22, 2025, the UN General Assembly became the stage for an extraordinary plea, as more than 200 global leaders, scientists, Nobel laureates, and AI experts called for binding international safeguards to prevent the dangerous use of artificial intelligence. The plea centers on setting “red lines” — clear boundaries that AI must not cross (source: NBC News). The open letter urges policymakers to enact the accord by the end of 2026, given the rapid progress of AI capabilities.

This moment struck me as deeply significant not only for AI policy but for how computational linguistics, ethics, and global governance may intersect in the coming years.


Why this matters (beyond headlines)

Often when we read about AI risks, they feel abstract: unlikely scenarios decades away. But the UN’s call brings the framing into the political and normative domain: this is not just technical risk mitigation, it is now a matter of global legitimacy and enforceable rules.

Some of the proposed red lines include prohibiting AI from impersonating humans in a deceptive way, prohibiting autonomous self-replication, banning lethal autonomous weapons systems, and more, as outlined by the Global Call for AI Red Lines and echoed in the World Economic Forum’s overview of AI red lines, which lists “no impersonating a human” and “no self-replication” among the key behaviors to prohibit. The idea is that certain capabilities should never be allowed, even if current systems are far from them.

These red lines are not purely speculative. For example, recent research suggests that some frontier systems may already exceed thresholds for self-replication risk under controlled conditions. (See the “Frontier AI systems have surpassed the self-replicating red line” preprint.)

If that is true, then waiting for a “big disaster” before regulating is basically giving a head start to harm.


How this connects to what I care about (and have written before)

On this blog I often explore how language, algorithmic systems, and society intersect. For example, in “From Language to Threat: How Computational Linguistics Can Spot Radicalization Patterns Before Violence” I touched on how even text models have power and risk when used at scale.

Here the stakes are broader: we are no longer talking about misused speech or social media. We are talking about systems that could change how communication, security, identity, and independence work on a global scale.

Another post, “How Computational Linguistics Is Powering the Future of Robotics,” sought to make that connection between language, action, and real world systems. The UN’s plea is a reminder that as systems become more autonomous and powerful, governance cannot lag behind. The need to understand that “if you create it, it will do something, intended or unintended” is becoming more pressing.


What challenges the red lines initiative faces

This is a big idea, but turning it into reality is super tough. Here’s what I think the main challenges are:

  • Defining and measuring compliance
What exactly qualifies as “impersonation,” “self-replication,” or a “lethal autonomous system”? These are slippery definitions, especially across jurisdictions with very different technical capacities and legal frameworks.
  • Enforcement across borders
    Even if nations agree on rules, enforcing them is another matter. Will there be inspections, audits, or sanctions? Who will have the power to penalize violations?
  • Innovation vs. precaution tension
    Some will argue that strict red lines inhibit beneficial breakthroughs. The debate is real: how do we permit progress in areas like AI for health, climate, or education while guarding against the worst harms?
  • Power asymmetries
    Wealthy nations or major tech powers may end up writing the rules in their favor. Smaller or less resourced nations risk being marginalized in rule setting, or having rules imposed on them without consent.
  • Temporal mismatch
    Tech moves fast. Rule formation and global diplomacy tend to move slowly. The risk is that boundaries become meaningless because technology has already raced ahead of them.

What a hopeful path forward could look like

Even with those challenges, I believe this UN appeal is a crucial inflection point. Here is a sketch of what I would hope to see:

  • Incremental binding treaties or protocols
    Rather than one monolithic global pact, we could see modular treaties that cover specific domains (for example military AI, synthetic media, biological risk). Nations can adopt them in phases, giving room for capacity building.
  • Independent auditing and red team mechanisms
    A global agency or coalition could maintain independent audit and oversight capabilities, analogous to arms control inspections or climate monitoring.
  • Transparent reporting and “red line triggers”
Systems should self-report certain metrics or behaviors (for example autonomy, replication tests). If they cross thresholds, that triggers review or suspension.
  • Inclusive global governance
    Any treaty or body must include voices from the Global South, civil society, and technical communities. Otherwise legitimacy will be weak.
  • Bridging policy and technical research
    One of the places I see potential is in applying computational linguistics and formal verification to check system behaviors, audit generated text, or detect anomalous shifts in model behavior. In other words, the tools I often write about can help enforce the rules.
  • Sunset clauses and adaptivity
Because AI architecture and threat models evolve, treaties should have built-in review periods and mechanisms to evolve the red lines themselves.

What this means for us as researchers, citizens, readers

For those of us who study language, algorithms, or AI, the UN appeal is not just a distant policy issue. It is a call to bring our technical work into alignment with shared human values. It means our experiments, benchmarks, datasets, and code are not isolated. They sit within a political and ethical ecosystem.

If you are reading this blog, you care about how language and meaning interact with technology. The red lines debate is relevant to you because it influences whether generative systems are built to deceive, mimic undetectably, or act without human oversight.

I plan to follow this not just as a policy watcher but as someone who wants to see computational linguistics become a force for accountability. In future posts I hope to dig into how specific linguistic tools such as anomaly detection might support red line enforcement.

Thanks for reading. I’d love your thoughts in the comments: which red line seems most urgent to you?

— Andrew


From Language to Threat: How Computational Linguistics Can Spot Radicalization Patterns Before Violence

Platforms Under Scrutiny After Kirk’s Death

Recently the U.S. House Oversight Committee called the CEOs of Discord, Twitch, and Reddit to talk about online radicalization. This TechCrunch report shows how serious the problem has become, especially after tragedies like the death of Kirk, which shocked many communities. Extremist groups are not just on hidden sites anymore. They are using the same platforms where students, gamers, and communities hang out every day. While lawmakers argue about what platforms should do, there is also a growing interest in using computational linguistics to find patterns in online language that could reveal radicalization before it turns dangerous.

How Computational Linguistics Can Detect Warning Signs

Computational linguistics is the science of studying how people use language and teaching computers to understand it. By looking at text, slang, and even emojis, these tools can spot changes in tone, topics, and connections between users. For example, sentiment analysis can show if conversations are becoming more aggressive, and topic modeling can uncover hidden themes in big groups of messages. If these methods had been applied earlier, they might have helped spot warning signs in the kind of online spaces connected to cases like Kirk’s. This kind of technology could help social media platforms recognize early signs of radical behavior while still protecting regular online conversations. In fact, I explored a related approach in my NAACL 2025 paper, “A Bag-of-Sounds Approach to Multimodal Hate Speech Detection”, which shows how combining text and audio features can potentially improve hate speech detection models.
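As a small illustration of the sentiment-analysis idea (toy messages, off-the-shelf model), a script like this sketch could track how hostile a channel’s language is becoming; a real system would need careful thresholds, human review, and the privacy safeguards discussed next.

```python
# Sketch: score a sequence of chat messages with an off-the-shelf sentiment
# classifier. Messages are invented; the default pipeline model is generic.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default English model

messages = [
    "that stream was so fun last night",
    "i disagree but fair point",
    "people like you ruin everything",
    "they deserve whatever happens to them",
]

for msg in messages:
    result = classifier(msg)[0]
    print(f"{result['label']:>8}  {result['score']:.2f}  {msg}")
```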

Balancing Safety With Privacy

Using computational linguistics to prevent radicalization is promising but it also raises big questions. On one hand it could help save lives by catching warning signs early, like what might have been possible in Kirk’s case. On the other hand it could invade people’s privacy or unfairly label innocent conversations as dangerous. Striking the right balance between safety and privacy is hard. Platforms, researchers, and lawmakers need to work together to make sure these tools are used fairly and transparently so they actually protect communities instead of harming them.

Moving Forward Responsibly

Online radicalization is a real threat that can touch ordinary communities and people like Kirk. The hearings with Discord, Twitch, and Reddit show how much attention this issue is now getting. Computational linguistics gives us a way to see patterns in language that people might miss, offering a chance to prevent harm before it happens. But this technology only works if it is built and used responsibly, with clear limits and oversight. By combining smart tools with human judgment and community awareness, we can make online spaces safer while still keeping them open for free and fair conversation.



— Andrew


Rethinking AI Bias: Insights from Professor Resnik’s Position Paper

I recently read Professor Philip Resnik’s thought-provoking position paper, “Large Language Models Are Biased Because They Are Large Language Models,” published in Computational Linguistics 51(3), which is available via open access. This paper challenges conventional perspectives on bias in artificial intelligence, prompting a deeper examination of the inherent relationship between bias and the foundational design of large language models (LLMs). Resnik’s primary objective is to stimulate critical discussion by arguing that harmful biases are an inevitable outcome of the current architecture of LLMs. The paper posits that addressing these biases effectively requires a fundamental reevaluation of the assumptions underlying the design of AI systems driven by LLMs.

What the paper argues

  • Bias is built into the very goal of an LLM. A language model tries to predict the next word by matching the probability patterns of human text. Those patterns come from people. People carry stereotypes, norms, and historical imbalances. If an LLM learns the patterns faithfully, it learns the bad with the good. The result is not a bug that appears once in a while. It is a direct outcome of the objective the model optimizes.
  • Models cannot tell “what a word means” apart from “what is common” or “what is acceptable.” Resnik uses a nurse example. Some facts are definitional (A nurse is a kind of healthcare worker). Other facts are contingent but harmless (A nurse is likely to wear blue clothing at work). Some patterns are contingent and harmful if used for inference (A nurse is likely to wear a dress to a formal occasion). Current LLMs do not have an internal line that separates meaning from contingent statistics or that flags the normative status of an inference. They just learn distributions (see the small probe after this list).
  • Reinforcement Learning from Human Feedback (RLHF) and other mitigations help on the surface, but they have limits. RLHF tries to steer a pre-trained model toward safer outputs. The process relies on human judgments that vary by culture and time. It also has to keep the model close to its pretraining, or the model loses general ability. That tradeoff means harmful associations can move underground rather than disappear. Some studies even find covert bias remains after mitigation (Gallegos et al. 2024; Hofmann et al. 2024). To illustrate, think of squeezing a balloon: press it in one place and it bulges in another.
  • The root cause is a hard-core, distribution-only view of language. When meaning is treated as “whatever co-occurs with what,” the model has no principled way to encode norms. The paper suggests rethinking foundations. One direction is to separate stable, conventional meaning (like word sense and category membership) from contextual or conveyed meaning (which is where many biases live). Another idea is to modularize competence, so that using language in socially appropriate ways is not forced to emerge only from next-token prediction. None of this is easy, but it targets the cause rather than only tuning symptoms.
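To see what “just learning distributions” means in practice, here is a small probe in the spirit of Resnik’s nurse example (a well-known style of bias check, not code from the paper): compare the probabilities a masked language model assigns to “he” versus “she” after an occupation word.

```python
# Probe distributional gender associations with a fill-mask model.
# Illustrative check, not a full bias audit; the model name is an example.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "engineer"]:
    preds = unmasker(f"The {occupation} said [MASK] was tired.", targets=["he", "she"])
    scores = {p["token_str"]: round(p["score"], 4) for p in preds}
    print(occupation, scores)
```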

Why this matters

Resnik is not saying we should give up. He is saying that quick fixes will not fully erase harm when the objective rewards learning whatever is frequent in human text. If we want models that reason with norms, we need objectives and representations that include norms, not only distributions.

Conclusion

This paper offers a clear message. Bias is not only a content problem in the data. It is also a design problem in how we define success for our models. If the goal is to build systems that are both capable and fair, then the next steps should focus on objectives, representations, and evaluation methods that make room for norms and constraints. That is harder than prompt tweaks, but it is the kind of challenge that can move the field forward.

Link to the paper: Large Language Models Are Biased Because They Are Large Language Models

— Andrew


Computational Linguists Help Africa Try to Close the AI Language Gap

Introduction

The fact that African languages are underrepresented in the digital AI ecosystem has gained international attention. On July 29, 2025, Nature published a news article stating that

“More than 2,000 languages spoken in Africa are being neglected in the artificial intelligence (AI) era. For example, ChatGPT recognizes only 10–20% of sentences written in Hausa, a language spoken by 94 million people in Nigeria. These languages are under-represented in large language models (LLMs) because of a lack of training data.” (source: AI models are neglecting African languages — scientists want to change that)

Another example is BBC News, released on September 4, 2025, stating that

“Although Africa is home to a huge proportion of the world’s languages – well over a quarter according to some estimates – many are missing when it comes to the development of artificial intelligence (AI). This is both an issue of a lack of investment and readily available data. Most AI tools, such as ChatGPT, used today are trained on English as well as other European and Chinese languages. These have vast quantities of online text to draw from. But as many African languages are mostly spoken rather than written down, there is a lack of text to train AI on to make it useful for speakers of those languages. For millions across the continent this means being left out.” (source: Lost in translation – How Africa is trying to close the AI language gap)

To address this problem, linguists and computer scientists are collaborating to create AI-ready datasets in 18 African languages via The African Next Voices project. Funded by the Bill and Melinda Gates Foundation ($2.2-million grant), the project involves recording 9,000 hours of speech across 18 African languages in Kenya, Nigeria, and South Africa. The goal is to create a comprehensive dataset that can be utilized for developing AI tools, such as translation and transcription services, which are particularly beneficial for local communities and their specific needs. The project emphasizes the importance of capturing everyday language use to ensure that AI technologies reflect the realities of African societies. The 18 African languages selected represent only a fraction of the over 2,000 languages spoken across the continent, but project contributors aim to include more languages in the future.

Role of Computational Linguists in the Project

Computational linguists play a critical role in the African Next Voices project. Their key contributions include:

  • Data Curation and Annotation: They guide the transcription and translation of over 9,000 hours of recorded speech in languages like Kikuyu, Dholuo, Hausa, Yoruba, and isiZulu, ensuring linguistic accuracy and cultural relevance. This involves working with native speakers to capture authentic, everyday language use in contexts like farming, healthcare, and education.
  • Dataset Design: They help design structured datasets that are AI-ready, aligning the collected speech data with formats suitable for training large language models (LLMs) for tasks like speech recognition and translation. This includes ensuring data quality through review and validation processes.
  • Bias Mitigation: By leveraging their expertise in linguistic diversity, computational linguists work to prevent biases in AI models by curating datasets that reflect the true linguistic and cultural nuances of African languages, which are often oral and underrepresented in digital text.
  • Collaboration with Technical Teams: They work alongside computer scientists and AI experts to integrate linguistic knowledge into model training and evaluation, ensuring the datasets support accurate translation, transcription, and conversational AI applications.

Their involvement is essential to making African languages accessible in AI technologies, fostering digital inclusion, and preserving cultural heritage.
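To picture what “AI-ready” might mean in practice, here is a hypothetical sketch of a single manifest entry for one recording, with a basic completeness check. The field names are invented for illustration; the project defines its own schema.

```python
# Hypothetical manifest entry for one speech recording, plus a simple check.
import json

entry = {
    "audio_path": "recordings/kikuyu/farm_talk_0042.wav",
    "language": "kik",           # ISO 639-3 code for Kikuyu
    "domain": "farming",
    "transcript": "(verbatim transcription by a native speaker)",
    "translation_en": "(English translation for alignment)",
    "validated": True,           # passed human review
}

REQUIRED = {"audio_path", "language", "transcript", "validated"}

def check(entry):
    missing = REQUIRED - entry.keys()
    return "ok" if not missing else f"missing fields: {sorted(missing)}"

print(check(entry))
print(json.dumps(entry, ensure_ascii=False, indent=2))
```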

Final Thoughts

As a U.S. high school student interested in pursuing computational linguistics in college, and inspired by African Next Voices, here are my final thoughts and conclusions:

  • Impactful Career Path: Computational linguistics offers a unique opportunity to blend language, culture, and technology. For a student like me, the African Next Voices project highlights how this field can drive social good by preserving underrepresented languages and enabling AI to serve diverse communities, which I find deeply motivating.
  • Global Relevance: The project underscores the global demand for linguistic diversity in AI. As a future computational linguist, I can contribute to bridging digital divides, making technology accessible to millions in Africa and beyond, which is both a technical and humanitarian pursuit.
  • Skill Development: The work involves collaboration with native speakers, data annotation, and AI model training/evaluation, suggesting I’ll need strong skills in linguistics, programming (e.g., Python), and cross-cultural communication. Strengthening linguistics knowledge and enhancing coding skills could give me a head start.
  • Challenges and Opportunities: The vast linguistic diversity (over 2,000 African languages) presents challenges like handling oral traditions or limited digital resources. This complexity is exciting, as it offers a chance to innovate in dataset creation and bias mitigation, areas where I could contribute and grow.
  • Inspiration for Study: The focus on real-world applications (such as healthcare, education, and farming) aligns with my interest in studying computational linguistics in college and working on inclusive AI that serves people.

In short, as a high school student, I can see computational linguistics as a field where I can build tools that help people communicate and learn. I hope this post encourages you to look into the project and consider how you might contribute to similar initiatives in the future!

— Andrew


How to Connect with Professors for Research: A Practical Guide (That Also Works for High School Students)

Recently, I read an article from XRDS: Crossroads, The ACM Magazine for Students (vol. 31, issue 3, 2025). You can find it here. The article is called “Connecting with Your Future Professor: A Practical Guide” by Ph.D. students Swati Rajwal and Avinash Kumar Pandey at Emory University.

Even though the guide is written for students planning to apply for Ph.D. programs, it immediately reminded me of my own experience cold emailing professors to ask about research opportunities as a high school student. Honestly, their advice applies to us too, whether we are looking to join a lab, collaborate on a small project, or simply learn from an expert.

I wanted to share a quick summary of their practical tips for anyone who is thinking about reaching out to professors for research.


1. Engage Deeply with Their Research

Before emailing a professor, make sure you understand their work. This doesn’t mean reading every single paper they’ve ever published, but you should:

  • Look up their Google Scholar or university profile to see what topics they focus on
  • Read their most cited papers to understand their main contributions
  • Explore other outputs like software tools, patents, or public datasets they’ve created

Knowing their research deeply shows that you are serious and respectful of their time.


2. Interact with Their Current Students or Lab Members

If possible, find ways to connect with their current Ph.D. students or research assistants. You can:

  • Learn about the lab environment and expectations
  • Get advice on how to prepare before joining their group
  • Understand the professor’s mentoring style

For high school students like me, this might feel intimidating, but even reading lab websites with student profiles or LinkedIn posts can give hints about the culture.


3. Use Digital Platforms Strategically

The guide suggests checking:

  • Personal websites for updated research, upcoming talks, and recent publications
  • Social media (if they are active) to get a sense of their latest projects, collaborations, and sometimes even their personality

Of course, it’s important to keep boundaries professional, but this context can help you write a more personalized email.


4. Join Open Academic Forums or Reading Groups

Some research groups host open reading groups, seminars, or webinars. Joining these:

  • Exposes you to their research discussions
  • Gives you a chance to ask questions and show your interest
  • Helps you see if their group aligns with your goals and interests

Even if you’re a high school student, you can check if their university department posts public seminar recordings on YouTube or their website.


5. Watch Their Talks or Lectures Online

Many professors have guest lectures or conference presentations recorded online. Watching these helps you:

  • Learn their communication style and main research themes
  • Feel less nervous if you end up meeting them virtually
  • Prepare thoughtful questions when reaching out

6. Attend Academic Conferences

This might be harder for high school students due to cost, but if you get the chance to attend local NLP or AI conferences, take it. These are the best places to:

  • Introduce yourself briefly
  • Ask questions after their talks
  • Follow up later via email referencing your in-person interaction

7. Request Virtual Meetings (Respectfully)

Finally, if you email a professor to ask about research opportunities, consider asking for a short virtual meeting to introduce yourself and learn about their work. The guide emphasizes:

  • Doing your homework beforehand
  • Being concise in your request
  • Understanding that not all professors have time to meet, so be respectful if they decline

Key Caveats They Shared

The authors also noted a few important reminders:

  • Citation counts don’t always reflect research quality, especially for newer professors or niche fields
  • Other students’ experiences in the lab might not fully predict yours, so reflect on your own goals too
  • Digital research is great, but it shouldn’t replace direct communication
  • Always plan ahead for conference interactions or virtual meetings

Final Thoughts

Reading this article made me realize that building connections with professors is not just about sending one perfect cold email. It’s about understanding their work deeply, showing genuine interest, and being respectful of their time.

If you’re a high school student like me hoping to explore research, I think this guide is just as helpful for us. Professors might not always say yes, but thoughtful, well-informed outreach goes a long way.

Let me know if you want me to share a template of how I write my cold emails to professors. I’ve been refining mine and would love to help others start their research journey too.

— Andrew


Can Taco Bell’s Drive-Through AI Get Smarter?

Taco Bell has always been one of my favorite places to eat, so when I came across a recent Wall Street Journal report about their experiments with voice AI at the drive-through, I was instantly curious. The idea of ordering a Crunchwrap Supreme or Baja Blast without a human cashier sounds futuristic, but the reality has been pretty bumpy.

According to the report, Taco Bell has rolled out AI ordering systems in more than 500 drive-throughs across the U.S. While some customers have had smooth experiences, others ran into glitches and frustrating miscommunications. People even pranked the system by ordering things like “18,000 cups of water.” Because of this, Taco Bell is rethinking how it uses AI. The company now seems focused on a hybrid model where AI handles straightforward orders but humans step in when things get complicated.

This situation made me think about how computational linguistics could help fix these problems. Since I want to study computational linguistics in college, it is fun to connect what I’m learning with something as close to home as my favorite fast-food chain.


Where Computational Linguistics Can Help

  1. Handling Noise and Accents
    Drive-throughs are noisy, with car engines, music, and all kinds of background sounds, and customers speak with a wide range of accents. Tailoring noise-robust Automatic Speech Recognition (ASR) systems to this setting, for example through domain-specific acoustic modeling or data augmentation, would improve recognition reliability across diverse environments.
  2. Catching Prank Orders
    A simple “sanity check” in the AI, even a rule-based one, could flag ridiculous orders. By parsing quantities and menu items and validating them against logical limits and store policies, the system could catch requests for thousands of water cups or nonsense combinations, then politely ask for confirmation or switch to a human employee (see the sketch after this list).
  3. Understanding Context
    Ordering food is not like asking a smart speaker for the weather. People use slang, pause, or change their minds mid-sentence. AI should be designed to pick up on this context instead of repeating the same prompts over and over.
  4. Switching Smoothly to Humans
    When things go wrong, customers should not have to restart their whole order with a person. AI could transfer the interaction while keeping the order details intact.
  5. Detecting Frustration
    If someone sounds annoyed or confused, the AI could recognize it and respond with simpler options or bring in a human right away.
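Below is a hedged sketch of the rule-based sanity check from point 2; the menu items, limits, and parsing rule are all invented for illustration.

```python
# Flag order quantities that exceed per-item limits. Menu and limits invented.
import re

MAX_QTY = {"water cup": 10, "crunchwrap supreme": 20, "taco": 30}

def flag_implausible(order_text):
    text = order_text.lower().replace(",", "")
    flags = []
    for item, limit in MAX_QTY.items():
        match = re.search(rf"(\d+)\s+{item}s?\b", text)
        if match and int(match.group(1)) > limit:
            flags.append((int(match.group(1)), item))
    return flags

print(flag_implausible("I'd like 18,000 water cups and 2 tacos"))
# -> [(18000, 'water cup')] : hand off to a human for confirmation
```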

Why This Matters

The point of voice AI is not just to be futuristic. It is about making the ordering process easier and faster. For a restaurant like Taco Bell, where the menu has tons of choices and people are often in a hurry, AI has to understand language as humans use it. Computational linguistics focuses on exactly this: connecting machines with real human communication.

I think Taco Bell’s decision to step back and reassess is actually smart. Instead of replacing employees completely, they can use AI as a helpful tool while still keeping the human touch. Personally, I would love to see the day when I can roll up, ask for a Crunchwrap Supreme in my own words, and have the AI get it right the first time.


Further Reading

  • Cui, Wenqian, et al. “Recent Advances in Speech Language Models: A Survey.” Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, 2025, pp. 13943–13970. ACL Anthology
  • Zheng, Xianrui, Chao Zhang, and Philip C. Woodland. “DNCASR: End-to-End Training for Speaker-Attributed ASR.” Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, 2025, pp. 18369–18383. ACL Anthology
  • Imai, Saki, Tahiya Chowdhury, and Amanda J. Stent. “Evaluating Open-Source ASR Systems: Performance Across Diverse Audio Conditions and Error Correction Methods.” Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025), 2025, pp. 5027–5039. ACL Anthology
  • Hopton, Zachary, and Eleanor Chodroff. “The Impact of Dialect Variation on Robust Automatic Speech Recognition for Catalan.” Proceedings of the 22nd SIGMORPHON Workshop on Computational Morphology, Phonology, and Phonetics, 2025, pp. 23–33. ACL Anthology
  • Arora, Siddhant, et al. “On the Evaluation of Speech Foundation Models for Spoken Language Understanding.” Findings of the Association for Computational Linguistics: ACL 2024, 2024, pp. 11923–11938. ACL Anthology
  • Cheng, Xuxin, et al. “MoE-SLU: Towards ASR-Robust Spoken Language Understanding via Mixture-of-Experts.” Findings of the Association for Computational Linguistics: ACL 2024, 2024, pp. 14868–14879. ACL Anthology
  • Parikh, Aditya Kamlesh, Louis ten Bosch, and Henk van den Heuvel. “Ensembles of Hybrid and End-to-End Speech Recognition.” Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2024, pp. 6199–6205. ACL Anthology
  • Mujtaba, Dena, et al. “Lost in Transcription: Identifying and Quantifying the Accuracy Biases of Automatic Speech Recognition Systems Against Disfluent Speech.” Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024, pp. 4795–4809. ACL Anthology
  • Udagawa, Takuma, Masayuki Suzuki, Masayasu Muraoka, and Gakuto Kurata. “Robust ASR Error Correction with Conservative Data Filtering.” Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, 2024, pp. 256–266. ACL Anthology

— Andrew

