APA Style
Mohammad Amaz Uddin, Iqbal H. Sarker. (2026). Fine-tuning vs RAG: A Position Paper from the Perspective of LLM-based Cybersecurity Modeling. Computing&AI Connect, 3 (Article ID: 0031). https://doi.org/Registering DOI

MLA Style
Mohammad Amaz Uddin, Iqbal H. Sarker. "Fine-tuning vs RAG: A Position Paper from the Perspective of LLM-based Cybersecurity Modeling". Computing&AI Connect, vol. 3, 2026, Article ID: 0031, https://doi.org/Registering DOI.

Chicago Style
Mohammad Amaz Uddin, Iqbal H. Sarker. 2026. "Fine-tuning vs RAG: A Position Paper from the Perspective of LLM-based Cybersecurity Modeling." Computing&AI Connect 3 (2026): 0031. https://doi.org/Registering DOI.
Perspective
Volume 3, Article ID: 2026.0031
Mohammad Amaz Uddin
amazuddin722@gmail.com
Iqbal H. Sarker
m.sarker@ecu.edu.au
1 Department of Computer Science and Engineering, International Islamic University Chittagong, Chattogram 4318, Bangladesh
2 Department of Computer Science and Engineering, BGC Trust University Bangladesh, Chittagong 4381, Bangladesh
3 Centre for Securing Digital Futures, Edith Cowan University, Perth 6027, WA, Australia
* Author to whom correspondence should be addressed
Received: 04 Jul 2025 Accepted: 27 Jan 2026 Available Online: 28 Jan 2026
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) tasks, enabling critical capabilities in fields such as business, finance, and cybersecurity analysis. In particular, LLMs are becoming vital in cybersecurity, providing more accurate solutions to different types of threats, vulnerabilities, and security challenges through pattern recognition, intelligent response generation, and automated threat detection. Although LLMs are pre-trained on a large amount of knowledge, their application in a specific domain, such as cybersecurity, is often limited by the domain knowledge constraints of their pre-training data. To address this limitation, domain-specific knowledge is most often incorporated through fine-tuning or Retrieval-Augmented Generation (RAG). RAG enriches the model's responses at inference time by retrieving relevant external information, while fine-tuning embeds new knowledge directly into the model's parameters. In this position paper, we describe fine-tuning and RAG from the perspective of cybersecurity applications. We also discuss the hybrid approach that combines fine-tuning and RAG, and highlight when to select fine-tuning, RAG, or a hybrid approach based on the desired cybersecurity use case and operational constraints. Overall, this position paper aims to contribute to the ongoing discussion of LLM adaptation processes and best practices for applying RAG and/or fine-tuning.
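To make the contrast concrete, the minimal Python sketch below illustrates the two adaptation paths mentioned above: a RAG-style call that retrieves cybersecurity documents and injects them into the prompt at inference time, versus a call to a model whose weights already encode the domain knowledge through fine-tuning. All identifiers (query_llm, SECURITY_CORPUS, the "cybersec-finetuned" checkpoint) are hypothetical placeholders rather than part of any specific library, and the keyword-overlap retriever is a stand-in for a real embedding-based vector search.

```python
# Illustrative sketch only: contrasts RAG-style prompt augmentation with a
# fine-tuned model. All names below are hypothetical placeholders, not a
# specific library API.

SECURITY_CORPUS = [
    "CVE-2024-0001: buffer overflow in an example TLS library; patched in v2.3.1.",
    "Phishing campaigns often spoof internal IT helpdesk email addresses.",
    "Ransomware commonly spreads via exposed RDP and unpatched VPN gateways.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def query_llm(prompt: str, model: str = "base") -> str:
    """Stub standing in for a real LLM inference call."""
    return f"[{model}] response to: {prompt[:60]}..."

def rag_answer(query: str) -> str:
    """RAG path: retrieved external context is injected at inference time."""
    context = "\n".join(retrieve(query, SECURITY_CORPUS))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return query_llm(prompt)  # frozen, pre-trained model

def fine_tuned_answer(query: str) -> str:
    """Fine-tuning path: domain knowledge already lives in the model weights."""
    return query_llm(query, model="cybersec-finetuned")  # hypothetical checkpoint

if __name__ == "__main__":
    q = "How does ransomware typically enter a corporate network?"
    print(rag_answer(q))
    print(fine_tuned_answer(q))
```

As a rough rule of thumb discussed in the paper, the RAG path suits fast-changing knowledge such as new CVEs and threat intelligence feeds, while the fine-tuning path suits stable, task-specific behavior; the hybrid approach combines both.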
Disclaimer: This is not the final version of the article. Changes may occur when the manuscript is published in its final format.