
LLaMA-Factory - Open-Source Fine-Tuning for LLaMA Models
LLaMA-Factory is a powerful open-source framework designed to streamline the training and fine-tuning of LLaMA models. Built on PyTorch and Hugging Face Transformers, it handles long-sequence training efficiently through memory optimization and parallelization, improving throughput on GPUs such as NVIDIA's A100. Key features include FlashAttention-2 for faster attention and LoRA for parameter-efficient fine-tuning.
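To illustrate how these pieces fit together, here is a minimal sketch of a LoRA supervised fine-tuning config in LLaMA-Factory's YAML style. The exact keys, model path, and dataset name are assumptions based on the project's documented conventions and may differ across versions; consult the repository's example configs for the authoritative format.

```yaml
# Sketch of a LoRA SFT config (key names assumed from LLaMA-Factory conventions)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # assumed model id
stage: sft                     # supervised fine-tuning
do_train: true
finetuning_type: lora          # parameter-efficient fine-tuning via LoRA
lora_rank: 8                   # low-rank adapter dimension
flash_attn: fa2                # enable FlashAttention-2 (assumed flag value)
dataset: alpaca_en_demo        # assumed built-in demo dataset name
template: llama3
cutoff_len: 2048               # maximum sequence length
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```

A config like this would then be passed to the framework's CLI, e.g. `llamafactory-cli train path/to/config.yaml`, which launches training with the specified adapters and attention backend.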