TITLE:
Prototyping Large Language Model from Scratch as 1st Line Customer Engagement & Support Tool
AUTHORS:
Inn Keat Ng, Oscar Yung Qin Koh, Jia Lin Tan, Tong Ming Lim
KEYWORDS:
SMEs, Customer Engagement, Large Language Models, LLaMA 3.2, Fine-Tuning
JOURNAL NAME:
Journal of Computer and Communications, Vol. 13, No. 5, May 27, 2025
ABSTRACT: Small and Medium Enterprises (SMEs) in Malaysia face challenges in managing customer engagement due to resource constraints such as high labour costs, while their heavy reliance on commercial Large Language Models (LLMs), which often produce inconsistent and irrelevant outputs, poses a further major challenge. This study develops a domain-specific base LLM for SMEs from scratch, leveraging the advanced transformer architecture implemented in LLaMA 3.2, with Rotary Positional Embedding and Grouped Query Attention for enhanced scalability and efficiency. A rigorously curated dataset enabled fine-tuning, yielding significant improvements in generating relevant, human-like responses. While the LLaMA 3.2-based model outperforms GPT-2, challenges in coherence remain. The findings highlight the potential of LLMs to transform SME operations and offer a framework for scalable, domain-specific solutions.