When More Shots Don’t Help

LLM Sensitivity and Variability in Social Media Annotation and Stance Detection of Health Information

DOI:

https://doi.org/10.5117/CCR2026.1.4.SUN

Keywords:

Large Language Models (LLMs), in-context learning, prompt engineering, fine-tuning, stance detection, social media annotation, machine learning classification, HPV vaccination

Abstract

This paper leverages large language models (LLMs) to experimentally determine strategies for scaling up social media annotation and stance detection of health information, using HPV vaccine-related tweets as a case study. We examine both conventional fine-tuning and emergent in-context learning methods, systematically varying prompt engineering and in-context learning strategies across widely used LLMs and their variants (e.g., GPT-4, Mistral, Llama 3, and Flan-UL2). Specifically, we varied prompt template design, shot sampling method, and shot quantity to detect stance on HPV vaccination. Our findings reveal that (a) in-context learning outperforms fine-tuning in stance detection for HPV vaccine social media content; (b) increasing shot quantity does not necessarily enhance performance across models; (c) stratified sampling often outperforms random sampling, with the performance gap more pronounced in smaller model variants; and (d) LLMs and their variants differ in their sensitivity to in-context learning conditions. This study highlights the potential of LLMs and provides an applicable approach for applying them to research on social media annotation and stance detection of health information.
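The abstract contrasts two shot sampling methods for in-context learning: random sampling and stratified sampling across stance labels. The sketch below illustrates the distinction under stated assumptions; the example tweets, labels, and function names are hypothetical and are not taken from the paper's data or code.

```python
import random
from collections import defaultdict

# Hypothetical labeled pool of (text, stance) exemplars; content is illustrative only.
pool = [
    ("HPV vaccine saved my cousin a lot of worry.", "favor"),
    ("Not letting my kids get that shot.", "against"),
    ("CDC updated its HPV vaccination schedule today.", "neutral"),
    ("Grateful my clinic offers the HPV vaccine.", "favor"),
    ("These vaccines feel rushed to me.", "against"),
    ("Panel discusses HPV vaccine uptake rates.", "neutral"),
]

def random_shots(pool, k, seed=0):
    """Draw k exemplars uniformly at random; may over-represent majority labels."""
    rng = random.Random(seed)
    return rng.sample(pool, k)

def stratified_shots(pool, k, seed=0):
    """Draw shots evenly across stance labels so every class appears in the prompt."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for text, label in pool:
        by_label[label].append((text, label))
    labels = sorted(by_label)
    per_label = k // len(labels)  # assumes k is a multiple of the label count
    shots = []
    for label in labels:
        shots.extend(rng.sample(by_label[label], per_label))
    return shots

def build_prompt(shots, target):
    """Assemble a simple few-shot stance-detection prompt from sampled exemplars."""
    lines = ["Classify the stance of the tweet toward HPV vaccination."]
    for text, label in shots:
        lines.append(f"Tweet: {text}\nStance: {label}")
    lines.append(f"Tweet: {target}\nStance:")
    return "\n\n".join(lines)
```

With a 3-shot budget, `stratified_shots` returns one exemplar per stance class, whereas `random_shots` can return three exemplars of the same class; the prompt built from either set is then sent to the LLM for completion.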

Author Biographies

  • Luhang Sun, School of Journalism and Mass Communication, University of Wisconsin–Madison, US

    Luhang Sun (M.A., Peking University) is a Ph.D. candidate in the School of Journalism and Mass Communication, University of Wisconsin–Madison. Her research lies at the intersection of feminist media studies, communication technologies, and computational social science. Specifically, she studies how gender dynamics and gender (in)justice are mediated and complicated by communication technologies, such as digital platforms and AI technologies. Her work spans two interconnected lines of research: (1) auditing and analyzing gender injustice, bias, and antagonism perpetuated by digital technologies, and (2) investigating prosocial media effects to develop feminist interventions.

  • Varsha Pendyala, Department of Electrical & Computer Engineering, University of Wisconsin–Madison, US

    Varsha Pendyala received her M.S. in Electrical and Computer Engineering from the University of Wisconsin–Madison and her B.Tech in Electrical Engineering from the Indian Institute of Technology, Hyderabad, India. She is a Ph.D. student at the University of Wisconsin–Madison. Her research interests include machine learning, signal processing, and the development of ML techniques using multimodal data for emotion recognition.

  • Jonathan Feldman, School of Interactive Computing, Georgia Institute of Technology, US

    Jonathan Feldman is a B.S. candidate in Computer Science and Mathematics at the Georgia Institute of Technology and a research assistant under Professor Munmun De Choudhury in the School of Interactive Computing. His interests lie in leveraging machine learning models, especially multimodal models, across disciplines to process and act upon vast amounts of data.

  • Andrew Zhao, Institute of People and Technology, Georgia Institute of Technology, US

    Andrew Zhao is a Research Scientist in the Institute of People and Technology (IPaT) and the SocWeB Lab. Andrew graduated with a Bachelor's and a Master's in Computer Science from Georgia Tech. He brings years of experience in full-stack development, social computing, and machine learning in social media contexts.

  • Munmun De Choudhury, School of Interactive Computing, Georgia Institute of Technology, US

    Munmun De Choudhury is an Associate Professor in the School of Interactive Computing and Co-Lead of Patient-Centered Care Delivery at the Pediatric Technology Center at the Georgia Institute of Technology. Dr. De Choudhury is known for her contributions to the fields of computational social science, human-computer interaction, and digital mental health.

  • Sijia Yang, School of Journalism and Mass Communication, University of Wisconsin–Madison, US

    Sijia Yang (Ph.D., University of Pennsylvania) is an associate professor in the School of Journalism and Mass Communication at the University of Wisconsin–Madison. His research applies experimental, computational (e.g., multimodal automated content analysis, web-based experiments, causal machine learning), and community-engaged approaches to the study of message effects and persuasion on digital media, particularly in the context of public health and science communication.

  • Dhavan Shah, School of Journalism and Mass Communication, University of Wisconsin–Madison, US

    Dhavan V. Shah (Ph.D., University of Minnesota) is the McLeod Professor of Communication Research and Maier-Bascom Chair in the School of Journalism and Mass Communication at the University of Wisconsin–Madison, where he is Director of the Mass Communication Research Center and Research Director of the Center for Communication and Civic Renewal. His intersecting lines of research focus on (1) the influence of message framing and cueing on social judgments and behaviors, (2) the communication dynamics that drive civic and political participation, and (3) the role of digital therapeutics in chronic disease management.

Published

2026-03-13

Section

Research Article (regular issue)

How to Cite

Sun, L., Pendyala, V., Chuang, Y.-S., Yang, S., Feldman, J., Zhao, A., De Choudhury, M., Yang, S., & Shah, D. (2026). When More Shots Don’t Help: LLM Sensitivity and Variability in Social Media Annotation and Stance Detection of Health Information. Computational Communication Research, 8(1). https://doi.org/10.5117/CCR2026.1.4.SUN