Reading List

Welcome to my Reading List! Here, you’ll find a collection of books, podcasts, articles, and videos that I’ve explored to expand my knowledge in computational linguistics, AI, and related topics. I highly recommend checking them out, so I’ve included links with each entry. Enjoy!


Books

AI & Data Literacy: Empowering Citizens of Data Science by Bill Schmarzo
Natural Language Understanding by James Allen
Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins by Garry Kasparov


Blogs

Linguistics Hub


Podcasts



Video Lessons and Courses

Machine Learning Tutorial Python | Machine Learning For Beginners
NLP Tutorial Python


Research Papers

I read many research papers while writing my own paper, “A Case Study of Sentiment Analysis on Survey Data Using LLMs versus Dedicated Neural Networks” (published online in The National High School Journal of Science). For simplicity, I’ve reproduced the paper’s references page here:

Cambria, Erik. “Affective Computing and Sentiment Analysis.” IEEE Intelligent Systems, vol. 31, no. 2, 2016, pp. 102–107, https://doi.org/10.1109/MIS.2016.31.

Anzum, Fahim, and Marina L. Gavrilova. “Emotion Detection from Micro-Blogs Using Novel Input Representation.” IEEE Access, vol. 11, 2023, pp. 19512–19522, https://doi.org/10.1109/ACCESS.2023.3248506.

Hussain, Amir, et al. “Information Fusion for Affective Computing and Sentiment Analysis.” Information Fusion, vol. 71, 2021, pp. 97–98, https://doi.org/10.1016/j.inffus.2021.02.010.

Groves, Robert M. “Three Eras of Survey Research.” Public Opinion Quarterly, vol. 75, no. 5, Special Issue 2011, pp. 861–871, https://doi.org/10.1093/poq/nfr057.

Wallace, Sherri L., et al. “The State of the Literature on Student Evaluations of Teaching and an Exploratory Analysis of Written Comments: Who Benefits Most?” College Teaching, vol. 67, no. 1, 2018, pp. 1–14, https://doi.org/10.1080/87567555.2018.1483317.

Parker, Michael J., et al. “A Large Language Model Approach to Educational Survey Feedback Analysis.” International Journal of Artificial Intelligence in Education, 2024, https://doi.org/10.1007/s40593-024-00414-0.

Shaik, Thanveer, et al. “Sentiment Analysis and Opinion Mining on Educational Data: A Survey.” Natural Language Processing Journal, vol. 2, 2023, https://doi.org/10.1016/j.nlp.2022.100003.

Hamzah, Almed, et al. “Discovering Trends of Mobile Learning Research Using Topic Modelling Approach.” International Journal of Interactive Mobile Technologies (iJIM), vol. 14, no. 9, 2020, https://doi.org/10.3991/ijim.v14i09.11069.

Buskirk, Trent D., et al. “An Introduction to Machine Learning Methods for Survey Researchers.” Survey Practice, vol. 11, no. 1, 2018, https://doi.org/10.29115/SP-2018-0004.

Zhang, Yue, et al. “Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models.” arXiv, arXiv:2309.01219v2, 24 Sep 2023, https://doi.org/10.48550/arXiv.2309.01219.

Chen, Yulong, et al. “See What LLMs Cannot Answer: A Self-Challenge Framework for Uncovering LLM Weaknesses.” arXiv, arXiv:2408.08978v2, 1 Oct 2024, https://doi.org/10.48550/arXiv.2408.08978.

Lee, R.S.T. “Natural Language Processing.” Artificial Intelligence in Daily Life, Springer, Singapore, 2020, https://doi.org/10.1007/978-981-15-7695-9_6.

Sunar, Ayse, and Md Khalid. “Natural Language Processing of Student’s Feedback to Instructors: A Systematic Review.” IEEE Transactions on Learning Technologies, vol. 17, 2023, https://ieeexplore.ieee.org/document/10310166.

Abram, Marissa, et al. “Methods to Integrate Natural Language Processing Into Qualitative Research.” International Journal of Qualitative Methods, 2020, https://doi.org/10.1177/1609406920984608.

Jim, Jamin Rahman, et al. “Recent Advancements and Challenges of NLP-based Sentiment Analysis: A State-of-the-art Review.” Natural Language Processing Journal, vol. 6, 2024, https://doi.org/10.1016/j.nlp.2024.100059.

Aliwy, Ahmed H. “Tokenization as Preprocessing for Arabic Tagging System.” International Journal of Information and Education Technology, vol. 2, no. 4, 2012, pp. 348–353.

Moudhich, Ihab, and Abdelhadi Fennan. “Evaluating Sentiment Analysis and Word Embedding Techniques on Brexit.” IAES International Journal of Artificial Intelligence (IJ-AI), vol. 13, no. 1, 2024, pp. 695–702.

Vaswani, Ashish, et al. “Attention Is All You Need.” arXiv, arXiv:1706.03762v7, 2 Aug 2023, https://doi.org/10.48550/arXiv.1706.03762.

Liu, B. “Opinion Mining and Sentiment Analysis.” Web Data Mining, Data-Centric Systems and Applications, Springer, Berlin, Heidelberg, 2011, https://doi.org/10.1007/978-3-642-19460-3_11.

Wankhade, Mayur, et al. “A Survey on Sentiment Analysis Methods, Applications, and Challenges.” Artificial Intelligence Review, vol. 55, 2022, pp. 5731–5780, https://doi.org/10.1007/s10462-022-10144-1.

Wang, Jingyi, and Ruijie Xu. “Performance Analysis of Sentiment Classification Based Neural Network.” Applied and Computational Engineering, vol. 5, no. 1, 2023, pp. 513–518.

Samuel, A.L. “Some Studies in Machine Learning Using the Game of Checkers.” IBM Journal of Research and Development, vol. 3, no. 3, 1959, pp. 210–229, https://doi.org/10.1147/rd.33.0210.

Buscemi, Alessio, and Daniele Proverbio. “ChatGPT vs Gemini vs LLaMA on Multilingual Sentiment Analysis.” arXiv, arXiv:2402.01715v1, 25 Jan 2024, https://doi.org/10.48550/arXiv.2402.01715.

Vanam, Harika, and Jeberson Retna Raj. “Novel Method for Sentiment Analysis in Social Media Data Using Hybrid Deep Learning Model.” Journal of Advanced Research in Applied Sciences and Engineering Technology, vol. 32, no. 1, 2023, https://doi.org/10.37934/araset.32.1.272289.

Rosenblatt, F. “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, vol. 65, no. 6, 1958, pp. 386–408, https://doi.org/10.1037/h0042519.

Schmidhuber, Jürgen. “Deep Learning in Neural Networks: An Overview.” Neural Networks, vol. 61, 2015, pp. 85–117.

Elman, J. L. “Finding Structure in Time.” Cognitive Science, vol. 14, no. 2, 1990, pp. 179–211.

Hochreiter, S., and Jürgen Schmidhuber. “Long Short-Term Memory.” Neural Computation, vol. 9, no. 8, 1997, pp. 1735–1780, https://doi.org/10.1162/neco.1997.9.8.1735.

Naveed, Humza, et al. “A Comprehensive Overview of Large Language Models.” arXiv, arXiv:2307.06435v10, 17 Oct 2024, https://doi.org/10.48550/arXiv.2307.06435.

Ouyang, Long, et al. “Training Language Models to Follow Instructions with Human Feedback.” arXiv, arXiv:2203.02155v1, 4 Mar 2022, https://doi.org/10.48550/arXiv.2203.02155.

Zhu, Hongyin. “Architectural Foundations for the Large Language Model Infrastructures.” arXiv, arXiv:2408.09205v2, 21 Aug 2024, https://doi.org/10.48550/arXiv.2408.09205.

Sennrich, Rico, et al. “Neural Machine Translation of Rare Words with Subword Units.” arXiv, arXiv:1508.07909v5, 10 Jun 2016, https://doi.org/10.48550/arXiv.1508.07909.

Liu, Yinhan, et al. “RoBERTa: A Robustly Optimized BERT Pretraining Approach.” arXiv, arXiv:1907.11692v1, 26 Jul 2019, https://doi.org/10.48550/arXiv.1907.11692.

Loureiro, Daniel, et al. “TimeLMs: Diachronic Language Models from Twitter.” Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Association for Computational Linguistics, 2022, pp. 251–260, https://doi.org/10.18653/v1/2022.acl-demo.25.

Camacho-Collados, Jose, et al. “TweetNLP: Cutting-Edge Natural Language Processing for Social Media.” Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Association for Computational Linguistics, 2022, pp. 38–49, https://aclanthology.org/2022.emnlp-demos.5.

Yang, Heng, et al. “Back to Reality: Leveraging Pattern-Driven Modeling to Enable Affordable Sentiment Dependency Learning.” CoRR, vol. abs/2110.08604, 2021, https://arxiv.org/abs/2110.08604.

Yang, Heng, et al. “PyABSA: A Modularized Framework for Reproducible Aspect-Based Sentiment Analysis.” Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023, Birmingham, United Kingdom, October 21–25, 2023, edited by Ingo Frommholz et al., ACM, 2023, pp. 5117–5122, https://doi.org/10.1145/3583780.3614752.

Meta. “Introducing Meta Llama 3: The Most Capable Openly Available LLM to Date.” 2024, https://ai.meta.com/blog/meta-llama-3/.

Subramanya, Amar. “Gemini’s Big Upgrade: Faster Responses with 1.5 Flash, Expanded Access and More.” 2024, https://blog.google/products/gemini/google-gemini-new-features-july-2024/.

OpenAI. “Hello GPT-4o.” 2024, https://openai.com/index/hello-gpt-4o/.

Devlin, Jacob, et al. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” arXiv, arXiv:1810.04805v2, 24 May 2019, https://doi.org/10.48550/arXiv.1810.04805.

Lossio-Ventura, Juan Antonio, et al. “Sentiment Analysis Test Dataset Created from Two COVID-19 Surveys: National Institutes of Health (NIH) and Stanford University.” figshare, Dataset, 2023, https://doi.org/10.6084/m9.figshare.24560584.v2.

