
Fine-tuning Llama 2 on Your Own Dataset | Train an LLM for Your Use Case with QLoRA on a Single GPU

Venelin Valkov

Full text tutorial (requires MLExpert Pro): https://www.mlexpert.io/promptengine...

Learn how to fine-tune the Llama 2 7B base model on a custom dataset (using a single T4 GPU). We'll use the QLoRA technique to train an LLM for text summarization of conversations between support agents and customers on Twitter.
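The core of the QLoRA approach is loading the base model in 4-bit precision and training only a small LoRA adapter on top of it. Below is a minimal sketch of that setup (not the video's exact code): the model id, quantization settings, and LoRA hyperparameters are illustrative assumptions, and the meta-llama weights are gated, so you need approved access on Hugging Face first.

```python
# Minimal QLoRA setup sketch: load Llama 2 7B quantized to 4-bit with
# bitsandbytes, then attach a LoRA adapter so only the adapter weights train.
# Hyperparameters below are illustrative, not the video's exact values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # gated model; request access first

# 4-bit NF4 quantization keeps the 7B base model within a single T4's memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter: only these low-rank matrices receive gradients during training
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of all weights
```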

Discord:   / discord  
Prepare for the Machine Learning interview: https://mlexpert.io
Subscribe: http://bit.ly/venelinsubscribe
GitHub repository: https://github.com/curiousily/GetThi...

Join this channel to get access to the perks and support my work:
   / @venelin_valkov  

00:00 When to Finetune an LLM?
00:30 Finetune vs Retrieval Augmented Generation (Custom Knowledge Base)
03:38 Text Summarization (our example)
04:14 Text Tutorial on MLExpert.io
04:47 Dataset Selection
05:36 Choose a Model (Llama 2)
06:22 Google Colab Setup
07:26 Process data
10:08 Load Llama 2 Model & Tokenizer
11:18 Training
14:49 Compare Base Model with Finetuned Model
18:08 Conclusion
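The chapters above walk through data processing, model loading, and training. As a rough companion to those steps, here is a hedged sketch that continues from the setup above: it builds instruction-style summarization prompts and runs a short training pass with the Hugging Face Trainer. The dataset (samsum stands in for the Twitter support conversations used in the video), prompt template, and training hyperparameters are assumptions for illustration only.

```python
# Sketch of data processing and training, continuing from the QLoRA setup:
# format dialogue/summary pairs into prompts, tokenize, and train the adapter.
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, Trainer, TrainingArguments

# samsum is a stand-in dialogue-summarization dataset; the video uses Twitter
# customer-support conversations instead. Column names below match samsum.
dataset = load_dataset("samsum", split="train")

def to_prompt(example):
    # Instruction-style prompt: conversation in, summary as the target
    example["text"] = (
        "### Instruction: Summarize the following conversation.\n"
        f"### Conversation:\n{example['dialogue']}\n"
        f"### Summary:\n{example['summary']}"
    )
    return example

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

train_data = dataset.map(to_prompt)
train_data = train_data.map(tokenize, remove_columns=train_data.column_names)

# With the 4-bit base model frozen, only the LoRA adapter weights are updated;
# small batch size plus gradient accumulation fits on a single T4.
args = TrainingArguments(
    output_dir="llama2-7b-summarizer",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,  # the PEFT model prepared in the earlier sketch
    args=args,
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("llama2-7b-summarizer")  # saves only the LoRA adapter
```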

#llama2 #llm #promptengineering #chatgpt #chatbot #langchain #gpt4 #summarization
