The Productivity Paradox of AI in Scientific Research

In January 2026, Nature published a paper with a title that immediately made me pause: “Artificial intelligence tools expand scientists’ impact but contract science’s focus” (Hao et al. 2026). The wording alone suggests a tradeoff that feels uncomfortable, especially for anyone working in AI while still early in their academic life.

The study, conducted by researchers at the University of Chicago and China’s Beijing National Research Center for Information Science and Technology, analyzes how AI tools are reshaping scientific research. Their findings are striking. Scientists who adopt AI publish roughly three times as many papers, receive nearly five times as many citations, and reach leadership positions one to two years earlier than their peers who do not use these tools (Hao et al. 2026). On the surface, this looks like a clear success story for AI in science.

But the paper’s core argument cuts in a different direction. While individual productivity and visibility increase, the collective direction of science appears to narrow. AI is most effective in areas that already have abundant data and well established methods. As a result, research effort becomes increasingly concentrated in the same crowded domains. Instead of pushing into unknown territory, AI often automates and accelerates what is already easiest to study (Hao et al. 2026).

James Evans, one of the authors, summarized this effect bluntly in an interview with IEEE Spectrum. AI, he argued, is turning scientists into publishing machines while quietly funneling them into the same corners of research (Dolgin 2026). The paradox is clear. Individual careers benefit, but the overall diversity of scientific exploration suffers.

Reading this as a high school senior who works in NLP and computational linguistics was unsettling. AI is the reason I can meaningfully participate in research at this stage at all. It lowers barriers, speeds up experimentation, and makes ambitious projects feasible for small teams or even individuals. At the same time, my own work often depends on large, clean datasets and established benchmarks. I am benefiting from the very dynamics this paper warns about.

The authors emphasize that this is not primarily a technical problem. It is not about whether transformer architectures are flawed or whether the next generation of models will be more creative. The deeper issue is incentives. Scientists are rewarded for publishing frequently, being cited often, and working in areas where success is legible and measurable. AI amplifies those incentives by making it easier to succeed where the path is already paved (Hao et al. 2026).

This raises an uncomfortable question. If AI continues to optimize research for speed and visibility, who takes responsibility for the slow, risky, and underexplored questions that do not come with rich datasets or immediate payoff? New fields rarely emerge from efficiency alone. They require intellectual friction, uncertainty, and a willingness to fail without quick rewards.

Evans has expressed hope that this work acts as a provocation rather than a verdict. AI does not have to narrow science’s focus, but using it differently requires changing what we value as progress (Dolgin 2026). That might mean funding exploratory work that looks inefficient by conventional metrics. It might mean rewarding scientists for opening new questions rather than closing familiar ones faster. Without changes like these, better tools alone will not lead to broader discovery.

For students like me, this tension matters. We are entering research at a moment when AI makes it easier than ever to contribute, but also easier than ever to follow the crowd. The challenge is not to reject AI, but to be conscious of how it shapes our choices. If the next generation of researchers only learns to optimize for what is tractable, science may become faster, cleaner, and more impressive on paper while quietly losing its sense of direction.

AI has the power to expand who gets to do science. Whether it expands what science is willing to ask remains an open question.

References

Hao, Q., Xu, F., Li, Y., et al. “Artificial Intelligence Tools Expand Scientists’ Impact but Contract Science’s Focus.” Nature, 2026. https://doi.org/10.1038/s41586-025-09922-y

Dolgin, Elie. “AI Boosts Research Careers but Flattens Scientific Discovery.” IEEE Spectrum, January 19, 2026. https://spectrum.ieee.org/ai-science-research-flattens-discovery-2674892739

“AI Boosts Research Careers, Flattens Scientific Discovery.” ACM TechNews, January 21, 2026. https://technews.acm.org/archives.cfm?fo=2026-01-jan/jan-21-2026.html

— Andrew

Is the Increasing Trend of Leveraging LLMs like ChatGPT in Writing Research Papers Concerning?

On August 4, 2025, Science published a tech news piece titled “One-fifth of computer science papers may include AI content,” written by Phie Jacobs, a general assignment reporter at Science. The article reports on a large-scale analysis conducted by researchers at Stanford University and the University of California, Santa Barbara. They examined over 1 million abstracts and introductions and found that by September 2024, 22.5% of computer science papers showed signs of input from large language models such as ChatGPT. The researchers used statistical modeling to detect common word patterns linked to AI-generated writing.

This caught my attention because I was surprised at how common AI-generated content has already become in academic research. I agree with the concern raised in the article, particularly this point:

Although the new study primarily looked at abstracts and introductions, Dmitry Kobak, a data scientist at the University of Tübingen, worries that authors will increasingly rely on AI to write the sections of scientific papers that reference related work. That could eventually cause these sections to become more similar to one another and create a “vicious cycle” in the future, in which new LLMs are trained on content generated by other LLMs.

From my own experience writing research papers over the past few years, I can see why this concern is valid. If you have followed my blog, you know I have published two research papers and am currently working on a third. While working on my papers, I occasionally used ChatGPT (including its Deep Research mode) to help find peer-reviewed sources for citations, instead of relying solely on search engines like Google Scholar. I quickly realized that depending on ChatGPT for this task can be risky: in my case, about 30% of the citations it suggested were inaccurate, so I had to verify each one manually. For reliable academic sourcing, I found Google Scholar much more trustworthy, because current LLMs are still prone to “hallucinations.” You may have encountered other AI tools like Consensus AI, a search engine tailored for scientific research and limited to peer-reviewed academic papers. Compared to ChatGPT Deep Research, it is faster and more reliable for academic queries, but I strongly recommend always verifying AI outputs, as both tools can occasionally produce inaccuracies.
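If you want to automate part of that verification, here is a rough sketch of the kind of check I mean. It looks a suggested title up through the Crossref REST API and accepts it only if a closely matching record exists. The helper names, the similarity threshold, and the matching logic are my own choices for illustration, not anything recommended in the Science article, and a failed lookup still means checking the reference by hand.

```python
import requests  # third-party: pip install requests
from difflib import SequenceMatcher

def crossref_lookup(title, rows=3):
    """Ask Crossref for works whose bibliographic data best matches the title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def looks_real(title, threshold=0.9):
    """Return (matched title, DOI) if a close Crossref match exists, else None."""
    for item in crossref_lookup(title):
        candidate = (item.get("title") or [""])[0]
        if SequenceMatcher(None, title.lower(), candidate.lower()).ratio() >= threshold:
            return candidate, item.get("DOI")
    return None  # no close match: treat the citation as suspect and check by hand

if __name__ == "__main__":
    # A title an LLM might suggest; a None result is a red flag, not proof of fabrication.
    print(looks_real("Attention Is All You Need"))
```

Even with a check like this, I still open each paper to confirm it actually says what the citation claims. The API only tells you the reference exists, not that it supports your sentence.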

The Science article also highlights that AI usage varies significantly across disciplines. “The amount of artificial intelligence (AI)-modified sentences in scientific papers had surged by September 2024, almost two years after the release of ChatGPT, according to an analysis.” A table in the article breaks down estimated AI usage by field, with certain disciplines adopting AI much faster than others. James Zou, a computational biologist at Stanford University, suggests these differences may reflect varying levels of familiarity with AI technology.

While the study from Stanford and UCSB is quite solid, Kobak pointed out that its estimates may actually be too low. One reason is that some authors may have started removing “red flag” words from manuscripts to avoid detection. For example, the word “delve” became more common right after ChatGPT launched, but its usage dropped sharply once it became widely recognized as a hallmark of AI-generated text.
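The study’s actual statistical model is far more careful than anything I could reproduce here, but the basic intuition behind tracking word-frequency shifts is easy to sketch. The toy Python snippet below is purely my own illustration, not the researchers’ method: it uses made-up mini corpora, naive tokenization, and an arbitrary smoothing floor, and simply ranks words by how much more often they appear in post-ChatGPT abstracts than in pre-ChatGPT ones.

```python
from collections import Counter
import re

def doc_frequencies(abstracts):
    """Share of abstracts that contain each word at least once."""
    counts = Counter()
    for text in abstracts:
        counts.update(set(re.findall(r"[a-z]+", text.lower())))
    return {word: n / len(abstracts) for word, n in counts.items()}

def frequency_shifts(pre_abstracts, post_abstracts, floor=1e-3):
    """Ratio of post- to pre-ChatGPT document frequency for every word."""
    pre = doc_frequencies(pre_abstracts)
    post = doc_frequencies(post_abstracts)
    return {word: freq / max(pre.get(word, 0.0), floor)  # floor smooths unseen words
            for word, freq in post.items()}

if __name__ == "__main__":
    pre = ["we study efficient graph algorithms",
           "we present a new dependency parsing method"]
    post = ["we delve into the intricate behavior of graph algorithms",
            "this work delves into a pivotal dependency parsing method"]
    ranked = sorted(frequency_shifts(pre, post).items(), key=lambda kv: -kv[1])
    print(ranked[:5])  # marker-style words rise to the top even in this toy example
```

Words like “delve” floating to the top of a comparison like this is exactly the pattern the researchers track, and it is also exactly what makes the signal fragile once authors learn which words to avoid.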

If you want to read the full article, you can find it here: Science – One-fifth of computer science papers may include AI content.

— Andrew

Update: Here is a more recent report from Nature.

How I Published My STEM Research in High School (and Where You Can Too)

Publishing as a high school student can be an exciting step toward academic growth and recognition. But if you’re anything like me when I started out, you’re probably wondering: Where do I even submit my work? And maybe more importantly, how do I avoid falling into the trap of predatory or low-quality journals?

In this post, I’ll walk through a curated list of reputable STEM journals that accept high school submissions—along with some honest thoughts from my own publishing journey. Whether you’re writing your first paper or looking for your next outlet, I hope this helps.


📚 10 Reputable Journals for High School Research (Especially STEM)

These are ranked loosely by selectiveness, peer-review rigor, and overall reputation. I’ve included each journal’s website, review cycle, and key details so you can compare.

  1. Columbia Junior Science Journal (CJSJ)
    Selection Rate: ~10–15% (very selective)
    Subjects: Natural sciences, engineering, social sciences
    Peer Review: Professional (Columbia faculty/editors)
    Cycle: Annual (6–9 months)
    🔗 cjsj.org
  2. Journal of Emerging Investigators (JEI)
    Selection Rate: ~70–75%
    Subjects: Biological/physical sciences (hypothesis-driven only)
    Peer Review: Graduate students and researchers
    Cycle: Rolling (7–8 months)
    🔗 emerginginvestigators.org
  3. STEM Fellowship Journal (SFJ)
    Selection Rate: ~15–20%
    Subjects: All STEM fields
    Peer Review: Canadian Science Publishing reviewers
    Cycle: Biannual (4–5 months)
    🔗 journal.stemfellowship.org
  4. International Journal of High School Research (IJHSR)
    Selection Rate: ~20–30%
    Subjects: STEM, behavioral, and social sciences
    Peer Review: Author-secured (3 academic reviewers)
    Cycle: Rolling (3–6 months)
    🔗 ijhsr.terrajournals.org
  5. The Young Researcher
    Selection Rate: ~20–25%
    Subjects: STEM, social sciences, humanities
    Peer Review: Faculty and researchers
    Cycle: Biannual (4–6 months)
    🔗 theyoungresearcher.com
  6. Journal of Student Research (JSR)
    Selection Rate: ~70–80%
    Subjects: All disciplines
    Peer Review: Faculty reviewers
    Cycle: Quarterly (6–7 months)
    🔗 jsr.org
  7. National High School Journal of Science (NHSJS)
    Selection Rate: ~20%
    Subjects: STEM and social sciences
    Peer Review: Student-led with academic oversight
    Cycle: Rolling (3–5 months)
    🔗 nhsjs.com
  8. Journal of High School Science (JHSS)
    Selection Rate: ~18%
    Subjects: STEM, arts (STEAM focus, quantitative research)
    Peer Review: Academic reviewers
    Cycle: Quarterly (4–6 months)
    🔗 jhss.scholasticahq.com
  9. Curieux Academic Journal
    Selection Rate: ~30–40%
    Subjects: STEM, humanities, social sciences
    Peer Review: Student-led with professional oversight
    Cycle: Monthly (fast-track: 2–5 weeks; standard: 1–3 months)
    🔗 curieuxacademicjournal.com
  10. Young Scientists Journal
    Selection Rate: ~40–50%
    Subjects: STEM (research, reviews, blogs)
    Peer Review: Student-led with expert input
    Cycle: Biannual (3–6 months)
    🔗 ysjournal.com

🧠 My Experience with JHSS, JSR, and NHSJS

1. Journal of High School Science (JHSS)
This was the first journal I submitted to, on November 13, 2024. The submission process was straightforward, and the portal clearly tracked every stage of the review. I received feedback on December 29, but unfortunately the reviewer seemed unfamiliar with the field of large language models. The decision was based on two Likert-scale questions:

  • “The paper makes a significant contribution to scholarship.”
  • “The literature review was thorough given the objectives and content.”

The first was marked low, and the second was marked neutral. I shared the feedback with LLM researchers from top-tier universities, and they agreed the review wasn’t well-grounded. So heads up: JHSS does have a formal structure, but you may run into an occasional reviewer mismatch.

2. Journal of Student Research (JSR)
Originally, I was going to submit my second paper here. But I ended up choosing NHSJS because JSR’s review timeline was too long for my goals (6–7 months vs. NHSJS’s 3–5 months). That said, JSR has one of the clearest submission guides I’ve come across:
👉 JSR Submission Info
If you’re not in a rush and want a polished process, it’s a solid option.

3. National High School Journal of Science (NHSJS)
This is where I published my first solo-authored research paper (see my earlier post). What stood out to me:

  • Quick response times
  • Detailed and constructive reviewer feedback

My reviewers gave me 19 major and 6 minor suggestions, each with specific guidance. It was incredibly helpful as a student navigating scientific writing for the first time.

That said, the journal’s submission format was a bit confusing (e.g., its citation style is non-standard), and the guidelines weren’t always followed by other authors. I had to clarify formatting details directly with the editor. So: highly recommend NHSJS—just make sure you confirm your formatting expectations early.


Final Thoughts

If you’re serious about publishing your research, take time to explore your options. The review process can be slow and sometimes frustrating, but it’s one of the best ways to grow as a thinker and writer.

Let me know if you have any questions. I’d be happy to share more from my experience.

— Andrew
