Implementation
Instruct Fine-tuning Mistral 7B Instruct using QLoRA and Supervised Fine-tuning¶
This is a comprehensive notebook and tutorial on how to fine-tune the Mistral-7B-Instruct model.
Meet Mistral 7B Instruct¶
The team at MistralAI has created an exceptional language model called Mistral 7B Instruct. It has consistently delivered outstanding results in a range of benchmarks, which positions it as an ideal option for natural language generation and understanding. This guide will concentrate on how to fine-tune the model for coding purposes, but the methodology can effectively be applied to other tasks.
All the code is available on my GitHub: adithya-s-k. Do drop by and give a follow and a star.
Github Code
I also post content about LLMs and what I have been working on over on Twitter/X: AdithyaSK (@adithya_s_k) / X
Prerequisites¶
Before delving into the fine-tuning process, ensure that you have the following prerequisites in place:
- GPU: This tutorial cannot run on a free Google Colab instance; it requires a more powerful GPU, such as an A100.
- Python Packages: Ensure that you have the necessary Python packages installed. The installation commands are given in Step 1 below.
Let's begin by checking if your GPU is correctly detected:
!nvidia-smi
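You can also verify from Python that PyTorch sees the GPU; this is just a quick sanity check, assuming torch is already installed in your environment:
import torch

# Quick sanity check that PyTorch can see the GPU (assumes torch is installed)
print(torch.cuda.is_available())       # should print True
print(torch.cuda.get_device_name(0))   # e.g. the name of your A100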
Later, we will also define a wrapper function that gets a completion from the model for a given user query.
Step 1 - Install necessary packages¶
First, install the dependencies to get started. Since some of these features are only available on the main branches, we need to install the libraries below from source.
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
!pip install -q datasets scipy
!pip install -q trl
Step 2 - Model loading¶
We'll load the model using QLoRA quantization to reduce memory usage.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
Now we specify the model ID and then we load it with our previously defined quantization configuration.
model_id = "mistralai/Mistral-7B-Instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True)
Run an inference on the base model. The model does not seem to understand our instruction and gives us a list of questions related to our query.
def get_completion(query: str, model, tokenizer) -> str:
    device = "cuda:0"
    prompt_template = """
<s>
[INST]
Below is an instruction that describes a task. Write a response that appropriately completes the request.
{query}
[/INST]
</s>
<s>
"""
    # Build the prompt, tokenize it, and move it to the GPU
    prompt = prompt_template.format(query=query)
    encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
    model_inputs = encodeds.to(device)
    # Generate a completion and decode it back to text
    generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.eos_token_id)
    decoded = tokenizer.batch_decode(generated_ids)
    return decoded[0]
result = get_completion(query="code the fibonacci series in python using recursion", model=model, tokenizer=tokenizer)
print(result)
Step 3 - Load dataset for finetuning¶
Let's Load the Dataset¶
For this tutorial, we will fine-tune Mistral 7B Instruct for code generation.
We will be using this dataset which is curated by TokenBender (e/xperiments) and is an excellent data source for fine-tuning models for code generation. It follows the alpaca style of instructions, which is an excellent starting point for this task. The dataset structure should resemble the following:
{
"instruction": "Create a function to calculate the sum of a sequence of integers.",
"input": "[1, 2, 3, 4, 5]",
"output": "# Python code def sum_sequence(sequence): sum = 0 for num in sequence: sum += num return sum"
}
from datasets import load_dataset
dataset = load_dataset("TokenBender/code_instructions_122k_alpaca_style", split="train")
dataset
df = dataset.to_pandas()
df.head(10)
Instruction Fine-tuning - prepare the dataset in a "prompt" format so the model can better understand the task:
- the generate_prompt function takes the instruction and output and generates a prompt
- shuffle the dataset
- tokenize the dataset
Formatting the Dataset¶
Now, let's format the dataset in the required Mistral-7B-Instruct-v0.1 format.
Many tutorials and blogs skip over this part, but I feel this is a really important step.
We'll wrap each instruction and input pair between [INST] and [/INST], with the output following, like this:
<s>[INST] What is your favorite condiment? [/INST]
Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavor to whatever I'm cooking up in the kitchen!</s>
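As a sanity check, recent versions of transformers can render this same format from the tokenizer's built-in chat template; this is just an optional sketch and assumes the tokenizer ships a chat template:
messages = [
    {"role": "user", "content": "What is your favorite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
]
# Render the conversation with the tokenizer's own chat template (assumes one is defined)
print(tokenizer.apply_chat_template(messages, tokenize=False))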
You can use the following code to process your dataset and build prompts in the correct format (and, optionally, write them out to a JSONL file):
def generate_prompt(data_point):
    """Generate the input text from the task instruction, (optional context), and answer.

    :param data_point: dict: a single data point
    :return: str: the formatted prompt
    """
    prefix_text = 'Below is an instruction that describes a task. Write a response that ' \
                  'appropriately completes the request.\n\n'
    # Samples with additional context (a non-empty "input" field)
    if data_point['input']:
        text = f"""<s>[INST]{prefix_text} {data_point["instruction"]} here are the inputs {data_point["input"]} [/INST]{data_point["output"]}</s>"""
    # Samples without additional context
    else:
        text = f"""<s>[INST]{prefix_text} {data_point["instruction"]} [/INST]{data_point["output"]} </s>"""
    return text
# add the "prompt" column in the dataset
text_column = [generate_prompt(data_point) for data_point in dataset]
dataset = dataset.add_column("prompt", text_column)
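If you also want a copy of the formatted prompts on disk (the JSONL file mentioned above), here is a minimal sketch using the standard json module; the filename is just an example:
import json

# Write each formatted prompt as one JSON object per line (filename is illustrative)
with open("code_instructions_formatted.jsonl", "w") as f:
    for prompt in dataset["prompt"]:
        f.write(json.dumps({"prompt": prompt}) + "\n")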
We'll need to tokenize our data so the model can understand.
dataset = dataset.shuffle(seed=1234) # Shuffle dataset here
dataset = dataset.map(lambda samples: tokenizer(samples["prompt"]), batched=True)
Split the dataset into 80% for training and 20% for testing
dataset = dataset.train_test_split(test_size=0.2)
train_data = dataset["train"]
test_data = dataset["test"]
After formatting, we should get something like this¶
{
  "text": "<s>[INST] Create a function to calculate the sum of a sequence of integers. here are the inputs [1, 2, 3, 4, 5] [/INST] # Python code def sum_sequence(sequence): sum = 0 for num in sequence: sum += num return sum</s>",
  "instruction": "Create a function to calculate the sum of a sequence of integers",
  "input": "[1, 2, 3, 4, 5]",
  "output": "# Python code def sum_sequence(sequence): sum = 0 for num in sequence: sum += num return sum",
  "prompt": "<s>[INST] Create a function to calculate the sum of a sequence of integers. here are the inputs [1, 2, 3, 4, 5] [/INST] # Python code def sum_sequence(sequence): sum = 0 for num in sequence: sum += num return sum</s>"
}
While using SFT (Supervised Fine-tuning Trainer) for fine-tuning, we will only pass in the "prompt" column of the dataset.
print(test_data)
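You can also print a single formatted example to double-check that the [INST] ... [/INST] wrapping looks right:
# Inspect the first formatted training prompt (truncated for readability)
print(train_data[0]["prompt"][:500])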
Step 4 - Apply LoRA¶
Here comes the magic with PEFT! Let's load a PeftModel and specify that we are going to use low-rank adapters (LoRA), using the get_peft_model utility function and the prepare_model_for_kbit_training method from PEFT.
from peft import LoraConfig, PeftModel, prepare_model_for_kbit_training, get_peft_model
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
print(model)
Use the following function to find all the linear layers to apply LoRA to. From the QLoRA paper: "We find that the most critical LoRA hyperparameter is how many LoRA adapters are used in total and that LoRA on all linear transformer block layers is required to match full finetuning performance."
import bitsandbytes as bnb

def find_all_linear_names(model):
    cls = bnb.nn.Linear4bit  # if args.bits == 4 else (bnb.nn.Linear8bitLt if args.bits == 8 else torch.nn.Linear)
    lora_module_names = set()
    for name, module in model.named_modules():
        if isinstance(module, cls):
            names = name.split('.')
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])
    if 'lm_head' in lora_module_names:  # needed for 16-bit
        lora_module_names.remove('lm_head')
    return list(lora_module_names)
modules = find_all_linear_names(model)
print(modules)
from peft import LoraConfig, get_peft_model
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=modules,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
trainable, total = model.get_nb_trainable_parameters()
print(f"Trainable: {trainable} | total: {total} | Percentage: {trainable/total*100:.4f}%")
Step 5 - Run the training!¶
from huggingface_hub import notebook_login
notebook_login()
Setting the training arguments:
- for demonstration purposes, we run training for only a few steps (100), just to showcase how to use this integration with existing tools in the HF ecosystem.
# from datasets import load_dataset
# data = load_dataset("TokenBender/code_instructions_122k_alpaca_style", split='train')
# data = data.train_test_split(test_size=0.1)
# train_data = data["train"]
# test_data = data["test"]
# import transformers
# tokenizer.pad_token = tokenizer.eos_token
# trainer = transformers.Trainer(
# model=model,
# train_dataset=train_data,
# eval_dataset=test_data,
# args=transformers.TrainingArguments(
# per_device_train_batch_size=1,
# gradient_accumulation_steps=4,
# warmup_steps=0.03,
# max_steps=100,
# learning_rate=2e-4,
# fp16=True,
# logging_steps=1,
# output_dir="outputs_mistral_b_finance_finetuned_test",
# optim="paged_adamw_8bit",
# save_strategy="epoch",
# ),
# data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
# )
Fine-Tuning with QLoRA and Supervised Fine-Tuning¶
We're ready to fine-tune our model using QLoRA. For this tutorial, we'll use the SFTTrainer from the trl library for supervised fine-tuning. Ensure that you've installed the trl library as mentioned in the prerequisites.
# new code using SFTTrainer
import transformers
from trl import SFTTrainer

tokenizer.pad_token = tokenizer.eos_token
torch.cuda.empty_cache()

trainer = SFTTrainer(
    model=model,
    train_dataset=train_data,
    eval_dataset=test_data,
    dataset_text_field="prompt",
    peft_config=lora_config,
    args=transformers.TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        warmup_ratio=0.03,  # warm up the learning rate over the first ~3% of steps
        max_steps=100,
        learning_rate=2e-4,
        logging_steps=1,
        output_dir="outputs",
        optim="paged_adamw_8bit",
        save_strategy="epoch",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
Let's start the training process¶
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
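Once training finishes, re-enable the KV cache before running inference, as the comment above suggests:
# Re-enable the cache for faster generation at inference time
model.config.use_cache = True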
Share adapters on the 🤗 Hub
new_model = "mistralai-Code-Instruct-Finetune-test" #Name of the model you will be pushing to huggingface model hub
trainer.model.save_pretrained(new_model)
base_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map={"": 0},
)
merged_model = PeftModel.from_pretrained(base_model, new_model)
merged_model = merged_model.merge_and_unload()

# Save the merged model
merged_model.save_pretrained("merged_model", safe_serialization=True)
tokenizer.save_pretrained("merged_model")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
# Push the model and tokenizer to the Hugging Face Model Hub
merged_model.push_to_hub(new_model, use_temp_dir=False)
tokenizer.push_to_hub(new_model, use_temp_dir=False)
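Later, you can load the pushed model back from the Hub like any other checkpoint; here is a minimal sketch, assuming the repository ends up under your username (replace your-username accordingly):
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the merged, fine-tuned model from the Hub (repo id is illustrative)
repo_id = "your-username/mistralai-Code-Instruct-Finetune-test"
reloaded_model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map={"": 0})
reloaded_tokenizer = AutoTokenizer.from_pretrained(repo_id)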
Step 6 - Evaluating the model qualitatively: run an inference!¶
def get_completion_merged(query: str, model, tokenizer) -> str:
    device = "cuda:0"
    prompt_template = """
<s>
[INST]
Below is an instruction that describes a task. Write a response that appropriately completes the request.
{query}
[/INST]
</s>
"""
    prompt = prompt_template.format(query=query)
    encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
    model_inputs = encodeds.to(device)
    # Generate with the (merged) model passed in as an argument
    generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.eos_token_id)
    decoded = tokenizer.batch_decode(generated_ids)
    return decoded[0]
result = get_completion_merged(query="code the fibonacci series in python using recursion", model=merged_model, tokenizer=tokenizer)
print(result)