Call for Papers: CCR Special Issue on Computational Approaches to Researching Short Videos
Guest Editors
- Jing Zeng, University of Zurich, Switzerland
- Rupert Kiddle, Vrije Universiteit Amsterdam, the Netherlands
- Luca Rossi, IT University of Copenhagen, Denmark
Relevance and Rationale
Short videos—commonly understood as video content ranging from a few seconds to a few minutes—have become a dominant medium for communication, reshaping how knowledge is shared, campaigns are organized, and news is conveyed (Hautea et al., 2021; Newman et al., 2024). Designed for quick consumption, short-video platforms leverage algorithmic recommendation systems to maximize engagement. Prominent examples include TikTok, Douyin, Instagram Reels, and YouTube Shorts.
In recent years, short videos have been at the center of media scrutiny and geopolitical debates, and have inspired a fast-growing body of academic research. However, it is worth noting that the short-video industry had already gained traction over a decade ago, particularly in Asia. In Europe and the U.S., legacy short-video platforms such as Vine, Dubsmash, and Musical.ly were popular, especially among younger audiences (Kaye et al., 2022). The mainstreaming of short videos accelerated in 2018 with TikTok’s global expansion, leading U.S. tech companies to develop competing services.
Given the rapidly expanding influence of short videos, new methodological frameworks and innovations are needed to empirically examine their sociocultural implications. For computational communication scholars to address these needs, it is crucial for the community to navigate and reflect upon the opportunities and challenges posed by short videos as an evolving research field.
For instance, compared to long-format videos such as those on YouTube, short videos are more computationally affordable to process and analyze due to their “bite-sized” nature. Data accessibility has also improved, with multiple methods now available for retrieving short-video data: TikTok, for example, provides a Research API; open-source tools facilitate web scraping (e.g., Peeters, 2024); and data donation (Wedel et al., 2024) is another emerging method for acquiring short-video datasets. These advancements create possibilities for large-scale computational research on short-video platforms. Additionally, breakthroughs in video processing, audio models, and the latest multimodal large language models (MLLMs) offer new avenues for capturing and analyzing short-video content, enabling more nuanced insights into user behavior, emergent phenomena, and cross-cultural communication.
Short videos are inherently multimodal, integrating audio, visual elements, and text. Communicative features unique to short videos, such as stitches and duets, also raise new methodological questions regarding how to systematically study emerging forms of intra-video conversation. Despite the aforementioned advances in automated video analysis tools and models, effectively triangulating multimodal insights to answer meaningful communication questions remains a challenge (Lin et al., 2024). Moreover, the sensemaking of visual content is deeply influenced by cultural and contextual factors, and research must develop expertise in these areas to capture the nuances of interpretation and meaning. These subtleties pose additional challenges for automated computational visual analyses.
Furthermore, the development of computational visual research must proceed with caution to ensure ethical conduct and to minimize potential risks. Machine learning-based visual models can inadvertently reinforce biases and prejudices (Buolamwini & Gebru, 2018), and recent controversies surrounding generative AI have further highlighted its tendency to amplify harmful visual stereotypes (Seth et al., 2023). Additionally, short-video data presents issues related to privacy and data security. These concerns underscore the importance of ethical vigilance and the need for robust counter-strategies in computational visual research, including short-video studies.
While the preceding reflections provide the motivation behind launching this special issue, they represent only a partial view of the broader challenges and opportunities in the field. To address these complexities more comprehensively, this special issue invites contributions that critically assess, develop, and apply computational approaches to studying short videos. This special issue welcomes research that highlights cross-cultural perspectives and focuses on under-researched user communities and communication practices on short video platforms. Topics of interest include, but are not limited to:
- Short-video data acquisition methodologies and tools.
- Multimodal analytical tools for short-video research, such as MLLMs, computer vision models, and audio recognition and musicality analysis technologies.
- Studies integrating concepts of temporality and sequentiality in media selection and algorithmic curation. For example, examining sequences of video viewership to reveal patterns of user-algorithm interaction and the temporal evolution of content exposure.
- Methodological innovations for analyzing new communicative features of short videos, such as the conversational nature of content, enabled by platform-specific tools like stitches, duets, and other interactive elements.
- Empirical case studies leveraging computational approaches to examine the sociopolitical implications of short-video communication.
- Critical discussions and proposals on research ethics in computational short-video studies.
Submission
Please submit your abstract (600–1,000 words) via the submission portal before 25 July 2025. When submitting, please select the section “Special Issue: Short Video”.
The abstract should outline the research objective, methodology, and expected contributions. Please also indicate the type of paper (e.g., empirical case study, tool/software development, theoretical/review paper) in the abstract.
Timeline
- Abstract submission deadline: 25 July 2025
- Abstract selection announcement: 8 August 2025
- Full manuscript submission: 15 November 2025
- SI publication: Fall 2026 (individual papers will be published online-first upon acceptance)
If you have any questions about this special issue, please reach out to Jing Zeng (j.zeng@ikmz.uzh.ch), Rupert Kiddle (r.t.kiddle@vu.nl), or Luca Rossi (lucr@itu.dk). For questions relating to CCR account creation or the submission process, please contact ccreditorialteam@gmail.com.
References
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In R. Binns (Ed.), Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 77–91). PMLR.
Hautea, S., Parks, P., Takahashi, B., & Zeng, J. (2021). Showing they care (or don’t): Affective publics and ambivalent climate activism on TikTok. Social Media + Society, 7(2), 20563051211012344.
Kaye, D. B. V., Zeng, J., & Wikström, P. (2022). TikTok: Creativity and culture in short video. Polity.
Lin, H., Luo, Z., Gao, W., Ma, J., Wang, B., & Yang, R. (2024). Towards explainable harmful meme detection through multimodal debate between large language models. In Proceedings of the ACM Web Conference 2024 (pp. 2359–2370).
Newman, N., Fletcher, R., Robertson, C. T., Ross Arguedas, A., & Nielsen, R. K. (2024). Reuters Institute digital news report 2024. Reuters Institute for the Study of Journalism.
Peeters, S. (2024). Zeeschuimer (Version v1.11.3) [Computer software]. Zenodo.
Seth, A., Hemani, M., & Agarwal, C. (2023). Dear: Debiasing vision-language models with additive residuals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6820-6829).
Wedel, L., Ohme, J., & Araujo, T. (2024). Augmenting data download packages: Integrating data donations, video metadata, and the multimodal nature of audio-visual content. methods, data, analyses, 32.


