Adaptive Curriculum Learning for Efficient Domain-Specific Fine-Tuning with Parameter-Efficient Strategies in LLMs
G. Srinivasa Raju, Research Scholar, Department of Computer Science and Engineering, Centurion University of Technology and Management (CUTM), Vizianagaram, Andhra Pradesh, India. vasuraju1996@gmail.com, ORCID: 0009-0005-9287-1739
Dr. A. Sri Krishna, Department of Artificial Intelligence, Shri Vishnu Engineering College for Women, Bhimavaram, Andhra Pradesh, India. srikrishna.au@gmail.com, ORCID: 0000-0002-9134-011X
Dr. Malijeddi Murali, Professor, Department of Electronics and Communication Engineering, ACE Engineering College, Hyderabad, India. muralitejas@gmail.com, ORCID: 0000-0002-8559-0091
Keywords: Adaptive Curriculum Learning, Parameter-Efficient Fine-Tuning, Large Language Models (LLMs), Domain-Specific Fine-Tuning, Computational Efficiency, Training Efficiency, Model Adaptation.
Abstract
The proposed ACL-PEFT-LLM model combines Adaptive Curriculum Learning (ACL) and Parameter-Efficient Fine-Tuning (PEFT) to overcome the shortcomings of full fine-tuning, which is computationally expensive and impractical in many domain-specific applications. Although PEFT methods such as LoRA, Prefix Tuning, and Prompt Tuning optimize only a small subset of model parameters, significantly reducing memory usage and training time, they typically lack an adaptive mechanism for adjusting task difficulty during training. By contrast, ACL-PEFT-LLM dynamically adjusts the difficulty of training examples based on the model's current performance, starting with easier tasks and progressively increasing difficulty. This ensures that the model learns effectively without being overwhelmed by challenging examples in the early stages of training. ACL-PEFT-LLM outperforms competing models in both accuracy and computational efficiency, achieving an F1 score of 96.2 and an accuracy of 96.8, indicating strong task-specific performance across a range of datasets including SST-2, SQuAD, and AIME. In addition, it attains a high Accuracy-Efficiency Ratio (AER) of 5.12, reflecting a favorable trade-off between performance and resource consumption. Among the baseline methods, LoRA is the most efficient in training time and GPU memory usage, with an accuracy of 94.0 and an F1 score of 93.7, but its performance is marginally lower. Other techniques, such as Full Fine-Tuning, Prefix Tuning, and Adapters, are either less accurate or more resource-intensive, whereas ACL-PEFT-LLM proves the most effective mechanism for domain-specific fine-tuning. ACL-PEFT-LLM thus offers a strong balance between peak performance and computational efficiency, enabling efficient domain adaptation at minimal resource cost and making it well suited to domains such as medicine, law, and finance, where computing resources are typically scarce.
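To illustrate the difficulty-scheduling idea described above, the following is a minimal sketch of a performance-driven curriculum filter. The function names, the linear schedule, and the threshold bounds are illustrative assumptions for exposition, not the authors' published algorithm; in practice this filter would sit in front of a PEFT (e.g., LoRA) training loop.

```python
def curriculum_threshold(accuracy: float,
                         min_diff: float = 0.2,
                         max_diff: float = 1.0) -> float:
    """Map current validation accuracy (0..1) to the maximum example
    difficulty admitted into the next training batch.

    Linear schedule is an assumption: low accuracy admits only easy
    examples; as accuracy approaches 1.0 the full range opens up.
    """
    a = max(0.0, min(1.0, accuracy))  # clamp to [0, 1]
    return min_diff + (max_diff - min_diff) * a


def select_batch(examples: list[dict], accuracy: float) -> list[dict]:
    """Keep only examples whose precomputed difficulty score is at or
    below the current threshold; harder examples enter the curriculum
    as measured performance improves."""
    t = curriculum_threshold(accuracy)
    return [ex for ex in examples if ex["difficulty"] <= t]


# Hypothetical pool with precomputed difficulty scores in [0, 1].
pool = [
    {"text": "easy example", "difficulty": 0.1},
    {"text": "medium example", "difficulty": 0.5},
    {"text": "hard example", "difficulty": 0.9},
]

print(len(select_batch(pool, accuracy=0.0)))  # only the easy example
print(len(select_batch(pool, accuracy=1.0)))  # the whole pool
```

How difficulty scores are assigned (e.g., loss-based, length-based, or annotated) is left open here; the abstract's mechanism only requires that admission of harder examples track the model's current performance.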