Back from Hibernation — A Paper, a Robot, and a Lot of Tests

It’s been a while—almost three months since my last post. Definitely not my usual pace. I wanted to check in and share why the blog has been a bit quiet recently—and more importantly, what I’ve been working on behind the scenes.

First, April and May were a whirlwind: I had seven AP exams, school finals, and was deep in preparation for the VEX Robotics World Championship. Balancing school with intense robotics scrimmages and code debugging meant there were a lot of late nights and early mornings—and not much time to write.

But the biggest reason for the radio silence? I’ve been working on a research paper that got accepted to NAACL 2025.

Our NAACL 2025 Paper: “A Bag-of-Sounds Approach to Multimodal Hate Speech Detection”

Over the past few months, I’ve had the opportunity to co-author a paper with Dr. Sidney Wong, focusing on multimodal hate speech detection using audio data. The paper was accepted to the Fifth Workshop on Speech, Vision, and Language Technologies for Dravidian Languages at NAACL 2025.

You can read the full paper here:
👉 A Bag-of-Sounds Approach to Multimodal Hate Speech Detection

What we did:
We explored a “bag-of-sounds” method, training our model on Mel spectrogram features extracted from spoken social media content in Dravidian languages—specifically Malayalam and Tamil. Unlike most hate speech systems that rely solely on text, we wanted to see how well speech-based signals alone could perform.
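
The paper has the details, but to make “bag-of-sounds” a little more concrete, here’s a rough sketch of the kind of feature extraction involved. This is my own simplification using librosa; the file path and parameters are illustrative, not the exact setup from the paper:

# Rough sketch: turn an audio clip into "bag-of-sounds" style features
# (illustrative only; parameters and the file path are not from the paper)
import librosa
import numpy as np

# Load a spoken clip at 16 kHz (hypothetical file)
y, sr = librosa.load("clip.wav", sr=16000)

# Mel spectrogram: rows are Mel frequency bands, columns are time frames
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
S_db = librosa.power_to_db(S, ref=np.max)

# Average each band over time, discarding temporal order,
# analogous to how bag-of-words discards word order
features = S_db.mean(axis=1)  # shape: (64,)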

How it went:
The results were mixed. Our system didn’t perform well on the final test set, but the training and dev sets showed promise. The takeaway? With enough balanced, labeled audio data, speech can absolutely play a role in multimodal hate speech detection systems. It’s a step toward understanding language in more realistic, cross-modal contexts.

More importantly, this project helped me dive into the intersection of language, sound, and AI—and reminded me just how much we still have to learn when it comes to processing speech from low-resource languages.


Thanks for sticking around even when the blog went quiet. I’ll be back soon with a post about my experience at the VEX Robotics World Championship—stay tuned!

— Andrew

My First Solo Publication: A Case Study on Sentiment Analysis in Survey Data

I’m excited to share that my first solo-authored research paper has just been published in the National High School Journal of Science! 🎉

The paper is titled “A Case Study of Sentiment Analysis on Survey Data Using LLMs versus Dedicated Neural Networks”, and it explores a question I’ve been curious about for a while: how do large language models (like GPT-4o or LLaMA-3) compare to task-specific neural networks when it comes to analyzing open-ended survey responses?

If you’ve read some of my earlier posts—like my reflection on the DravidianLangTech shared task or my thoughts on Jonathan Dunn’s NLP book—you’ll know that sentiment analysis has become a recurring theme in my work. From experimenting with XLM-RoBERTa on Tamil and Tulu to digging into how NLP can support corpus linguistics, this paper feels like the natural next step in that exploration.

Why This Matters to Me

Survey responses are messy. They’re full of nuance, ambiguity, and context—and yet they’re also where we hear people’s honest voices. I’ve always thought it would be powerful if AI could help us make sense of that kind of data, especially in educational or public health settings where understanding sentiment could lead to real change.

In this paper, I compare how LLMs and dedicated models handle that challenge. I won’t go into the technical details here (the paper does that!), but one thing that stood out to me was how surprisingly effective LLMs are—even without task-specific fine-tuning.
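To give a flavor of the LLM side (though not the exact prompts or evaluation from the paper), zero-shot sentiment classification can be as simple as asking the model directly. This sketch uses the OpenAI Python client with GPT-4o; the prompt and labels are my own illustration:

# Zero-shot sentiment labeling with an LLM
# (illustrative only; not the paper's exact prompt or setup)
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_sentiment(response_text):
    # Ask the model to pick exactly one label for an open-ended survey response
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the survey response as "
                        "Positive, Negative, or Neutral. Reply with one word."},
            {"role": "user", "content": response_text},
        ],
    )
    return completion.choices[0].message.content.strip()

print(classify_sentiment("The workshop was useful, but it ran way too long."))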

That said, they come with trade-offs: higher computational cost, more complexity, and the constant need to assess bias and interpretability. There’s still a lot to unpack in this space.

Looking Ahead

This paper marks a milestone for me, not just academically but personally. It brings together things I’ve been learning in courses, competitions, side projects, and books—and puts them into conversation with each other. I’m incredibly grateful to the mentors and collaborators who supported me along the way.

If you’re interested in sentiment analysis, NLP for survey data, or just want to see what a high school research paper can look like in this space, I’d love for you to take a look:
🔗 Read the full paper here

Thanks again for following along this journey. Stay tuned!

Shared Task at DravidianLangTech 2025

In 2025, I had the privilege of participating in the shared task on Sentiment Analysis in Tamil and Tulu at the DravidianLangTech workshop, co-located with NAACL 2025. The task was both challenging and enlightening: it meant applying machine learning techniques to multilingual data with subtle, varied sentiment. This post highlights the work I did, the methodology I followed, and the results I achieved.


The Task at Hand

The goal of the task was to classify text into one of four sentiment categories: Positive, Negative, Mixed Feelings, and Unknown State. The datasets provided were in Tamil and Tulu, which made it a fascinating opportunity to work with underrepresented languages.
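For concreteness: a classifier works with integer ids rather than label names, so the first practical step is a mapping like this (the specific ids are my own choice, not an official task encoding):

# Map the four sentiment categories to integer ids
# (id assignment here is arbitrary, not the official task encoding)
label2id = {"Positive": 0, "Negative": 1, "Mixed Feelings": 2, "Unknown State": 3}
id2label = {v: k for k, v in label2id.items()}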


Methodology

I implemented a pipeline to preprocess the data, tokenize it, train a transformer-based model, and evaluate its performance. My model of choice was XLM-RoBERTa, a transformer pre-trained on text in roughly 100 languages, which makes it a strong fit for multilingual tasks like this one. Below is a concise breakdown of my approach:

  1. Data Loading and Inspection:
    • Used training, validation, and test datasets in .xlsx format.
    • Inspected the data for missing values and label distributions.
  2. Text Cleaning:
    • Created a custom function to clean text by removing unwanted characters, punctuation, and emojis (a simplified version is sketched just after this list).
    • Removed common stopwords to focus on meaningful content.
  3. Tokenization:
    • Tokenized the cleaned text using the pre-trained XLM-RoBERTa tokenizer with a maximum sequence length of 128.
  4. Model Setup:
    • Leveraged XLMRobertaForSequenceClassification with 4 output labels.
    • Configured TrainingArguments to train for 3 epochs with evaluation at the end of each epoch.
  5. Evaluation:
    • Evaluated the model on the validation set, achieving a validation accuracy of 59.12%.
  6. Saved Model:
    • Saved the trained model and tokenizer for reuse.
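
Step 2’s cleaning function is worth a closer look. Here’s a minimal sketch of what it might look like; the regexes, stopword list, and column names are simplified stand-ins, not my exact pipeline:

# Simplified text-cleaning sketch (a stand-in for the custom function in step 2;
# regexes, stopwords, and column names here are illustrative)
import re

STOPWORDS = {"the", "a", "an", "is", "and", "or"}  # tiny illustrative list

def clean_text(text):
    # Drop URLs, then keep only word characters and whitespace
    # (this removes punctuation and most emojis)
    text = re.sub(r"http\S+", " ", text)
    text = re.sub(r"[^\w\s]", " ", text)
    # Lowercase, split on whitespace, and filter out stopwords
    tokens = text.lower().split()
    return " ".join(t for t in tokens if t not in STOPWORDS)

train['cleaned'] = train['text'].apply(clean_text)
val['cleaned'] = val['text'].apply(clean_text)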

Results

After training the model for three epochs, the validation accuracy was 59.12%. While there is room for improvement, this score shows the model can pick up on subtle sentiment cues in low-resource languages like Tamil and Tulu.


The Code

Below is an overview of the steps in the code:

  • Preprocessing: Cleaned and tokenized the text to prepare it for model input.
  • Model Training: Used Hugging Face’s Trainer API to simplify the training process.
  • Evaluation: Compared predictions against ground truth to compute accuracy.

To make this process more accessible, I’ve attached the complete code as a downloadable file. However, for a quick overview, here’s a snippet from the code that demonstrates how the text was tokenized:

# Tokenize text data using the XLM-RoBERTa tokenizer
from transformers import AutoTokenizer

# Load the pre-trained tokenizer (base checkpoint assumed here)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize_text(data, tokenizer, max_length=128):
    # Truncate or pad every example to a fixed length and return PyTorch tensors
    return tokenizer(
        data,
        truncation=True,
        padding='max_length',
        max_length=max_length,
        return_tensors="pt"
    )

train_tokenized = tokenize_text(train['cleaned'].tolist(), tokenizer)
val_tokenized = tokenize_text(val['cleaned'].tolist(), tokenizer)

This function ensures the input text is prepared correctly for the transformer model.
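
From there, the model and training setup (steps 4 through 6) follow the standard Hugging Face pattern. This is a condensed sketch rather than my full script; the label ids come from a mapping like the one in the task description above, and argument names such as eval_strategy can vary slightly between transformers versions:

# Condensed sketch of model setup, training, and evaluation (steps 4-6);
# train_labels / val_labels are assumed to be lists of integer label ids
import torch
from transformers import (XLMRobertaForSequenceClassification,
                          Trainer, TrainingArguments)

class SentimentDataset(torch.utils.data.Dataset):
    # Wraps tokenized encodings and labels so the Trainer can iterate over them
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

def compute_metrics(eval_pred):
    # Compare predicted classes against ground truth to get accuracy
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    return {"accuracy": (preds == labels).mean()}

train_dataset = SentimentDataset(train_tokenized, train_labels)
val_dataset = SentimentDataset(val_tokenized, val_labels)

model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=4)

args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,        # train for 3 epochs
    eval_strategy="epoch",     # evaluate at the end of each epoch
    per_device_train_batch_size=16,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_dataset, eval_dataset=val_dataset,
                  compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())  # reports eval_accuracy, among other metrics

# Save the trained model and tokenizer for reuse (step 6)
trainer.save_model("./sentiment-model")
tokenizer.save_pretrained("./sentiment-model")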


Reflections

Participating in this shared task was a rewarding experience. It highlighted the complexities of working with low-resource languages and the potential of transformers in tackling these challenges. Although the accuracy could be improved with hyperparameter tuning and advanced preprocessing, the results are a promising step forward.


Download the Code

I’ve attached the full code used for this shared task. Feel free to download it and explore the implementation in detail.


If you’re interested in multilingual NLP or sentiment analysis, I’d love to hear your thoughts or suggestions on improving this approach! Leave a comment below or connect with me via the blog.
