Implementation
Instruct Fine-tuning Gemma using QLoRA and Supervised Fine-tuning¶
This is a comprehensive notebook and tutorial on how to fine-tune the gemma-7b-it model.
All the code will be available on my GitHub. Do drop by and give a follow and a star.
adithya-s-k
Github Code
I also post content about LLMs and what I have been working on over on Twitter: AdithyaSK (@adithya_s_k) / X
Prerequisites¶
Before delving into the fine-tuning process, ensure that you have the following prerequisites in place:
- GPU: gemma-2b can be fine-tuned on a T4 (free Google Colab), while gemma-7b requires an A100 GPU.
- Python Packages: Ensure that you have the necessary Python packages installed. You can use the following commands to install them:
Let's begin by checking if your GPU is correctly detected:
!nvidia-smi
Wed Feb 21 17:19:05 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100 80GB PCIe          On  | 00000001:00:00.0 Off |                    0 |
| N/A   33C    P0             42W / 300W  |      4MiB / 81920MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
!pip3 install -q -U bitsandbytes==0.42.0
!pip3 install -q -U peft==0.8.2
!pip3 install -q -U trl==0.7.10
!pip3 install -q -U accelerate==0.27.1
!pip3 install -q -U datasets==2.17.0
!pip3 install -q -U transformers==4.38.0
# if you are using google colab
# import os
# from google.colab import userdata
# os.environ["HF_TOKEN"] = userdata.get('HF_TOKEN')
from huggingface_hub import notebook_login
notebook_login()
Step 2 - Model loading¶
We'll load the model using QLoRA quantization to reduce memory usage.
Now we specify the model ID and then we load it with our previously defined quantization configuration.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "google/gemma-7b-it"
# model_id = "google/gemma-7b"
# model_id = "google/gemma-2b-it"
# model_id = "google/gemma-2b"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True)
Prompt/Chat templates¶
Gemma chat template
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
As you can see, each turn is preceded by a <start_of_turn> delimiter and then the role of the entity (either user, for content supplied by the user, or model for LLM responses). Turns finish with the <end_of_turn> token.
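If you prefer not to build this template by hand, recent transformers tokenizers expose apply_chat_template, which should produce the same format for Gemma. This is a minimal optional sketch (not part of the original notebook; the manual prompt_template used below works just as well):
# Optional sketch: build the Gemma prompt via the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Write a hello world program"}]
chat_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(chat_prompt)
# Expected output (roughly):
# <bos><start_of_turn>user
# Write a hello world program<end_of_turn>
# <start_of_turn>model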
def get_completion(query: str, model, tokenizer) -> str:
    device = "cuda:0"
    prompt_template = """
<start_of_turn>user
Below is an instruction that describes a task. Write a response that appropriately completes the request.
{query}
<end_of_turn>\n<start_of_turn>model
"""
    prompt = prompt_template.format(query=query)
    encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
    model_inputs = encodeds.to(device)
    generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.eos_token_id)
    decoded = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
    return decoded
result = get_completion(query="code the fibonacci series in python using reccursion", model=model, tokenizer=tokenizer)
print(result)
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
user Below is an instruction that describes a task. Write a response that appropriately completes the request. code the fibonacci series in python using reccursion model a Python function to calculate the nth Fibonacci number using recursion. Here's the code: ```python def fibonacci(n): if n == 0: return 0 elif n == 1: return 1 else: return fibonacci(n-1) + fibonacci(n-2) # Print the nth Fibonacci number print(fibonacci(10)) ``` **Explanation:** 1. The function `fibonacci` takes an integer `n` as input. 2. If `n` is 0 or 1, it returns the respective base case of 0 or 1. 3. Otherwise, it recursively calculates the Fibonacci number for `n-1` and `n-2` and adds their sum to return the Fibonacci number for `n`. 4. The function continues to recurse until `n` reaches the desired number, and the final result is returned. **Output:** ``` >>> print(fibonacci(10)) 5 ``` In this example, the code calculates the 10th Fibonacci number, which is 5. **Note:** Reccursion can be a powerful technique for solving problems that involve repeated calculations. However, it is important to note that recursion can also lead to stack overflow errors for large values of `n` due to its repeated function calls. For more efficient solutions, iterative approaches are often preferred.
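A note on the padding warning above: since we generate from a single prompt here, it does not affect the result. If you want to silence it for batched generation, one option (not used in this notebook, which keeps right padding for training) is to load a separate tokenizer with left padding. The gen_tokenizer name below is just illustrative:
# Optional: a left-padded tokenizer for batched generation; training later keeps right padding.
gen_tokenizer = AutoTokenizer.from_pretrained(model_id, add_eos_token=True, padding_side="left")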
Step 3 - Load dataset for fine-tuning¶
Let's Load the Dataset¶
For this tutorial, we will fine-tune Gemma 7B Instruct for code generation.
We will be using this dataset which is curated by TokenBender (e/xperiments) and is an excellent data source for fine-tuning models for code generation. It follows the alpaca style of instructions, which is an excellent starting point for this task. The dataset structure should resemble the following:
{
"instruction": "Create a function to calculate the sum of a sequence of integers.",
"input": "[1, 2, 3, 4, 5]",
"output": "# Python code def sum_sequence(sequence): sum = 0 for num in sequence: sum += num return sum"
}
from datasets import load_dataset
dataset = load_dataset("TokenBender/code_instructions_122k_alpaca_style", split="train")
dataset
df = dataset.to_pandas()
df.head(10)
|   | input | output | text | instruction |
|---|---|---|---|---|
| 0 | [1, 2, 3, 4, 5] | # Python code\ndef sum_sequence(sequence):\n ... | Below is an instruction that describes a task.... | Create a function to calculate the sum of a se... |
| 1 | str1 = "Hello "\nstr2 = "world" | def add_strings(str1, str2):\n """This func... | Below is an instruction that describes a task.... | Develop a function that will add two strings |
| 2 |  | #include <map>\n#include <string>\n\nclass Gro... | Below is an instruction that describes a task.... | Design a data structure in C++ to store inform... |
| 3 | [3, 1, 4, 5, 9, 0] | def bubble_sort(arr):\n n = len(arr)\n \n ... | Below is an instruction that describes a task.... | Implement a sorting algorithm to sort a given ... |
| 4 | Not applicable | import UIKit\n\nclass ExpenseViewController: U... | Below is an instruction that describes a task.... | Design a Swift application for tracking expens... |
| 5 | Not Applicable | <?php\n$timestamp = $_GET['timestamp'];\n\nif(... | Below is an instruction that describes a task.... | Create a REST API to convert a UNIX timestamp ... |
| 6 | website: www.example.com \ndata to crawl: phon... | import requests\nimport re\n\ndef crawl_websit... | Below is an instruction that describes a task.... | Generate a Python code for crawling a website ... |
| 7 |  | [x*x for x in [1, 2, 3, 5, 8, 13]] | Below is an instruction that describes a task.... | Create a Python list comprehension to get the ... |
| 8 |  | SELECT * FROM products ORDER BY price DESC LIM... | Below is an instruction that describes a task.... | Create a MySQL query to find the most expensiv... |
| 9 | Not applicable | public class Library {\n \n // map of books in... | Below is an instruction that describes a task.... | Create a data structure in Java for storing an... |
Instruction Fine-tuning - prepare the dataset in the "prompt" format so the model can better understand:
- the generate_prompt function takes the instruction and output and generates a prompt
- shuffle the dataset
- tokenize the dataset
Formatting the Dataset¶
Now, let's format the dataset in the required Gemma instruction format.
Many tutorials and blogs skip over this part, but I feel this is a really important step.
<start_of_turn>user What is your favorite condiment? <end_of_turn>
<start_of_turn>model Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavor to whatever I'm cooking up in the kitchen!<end_of_turn>
You can use the following code to process your dataset into the correct format:
def generate_prompt(data_point):
    """Generate input text based on the task instruction, (optional) context info, and answer.

    :param data_point: dict: a single data point from the dataset
    :return: str: formatted prompt text
    """
    prefix_text = 'Below is an instruction that describes a task. Write a response that ' \
                  'appropriately completes the request.\n\n'
    # Samples with additional context info.
    if data_point['input']:
        text = f"""<start_of_turn>user {prefix_text} {data_point["instruction"]} here are the inputs {data_point["input"]} <end_of_turn>\n<start_of_turn>model{data_point["output"]} <end_of_turn>"""
    # Samples without additional context info.
    else:
        text = f"""<start_of_turn>user {prefix_text} {data_point["instruction"]} <end_of_turn>\n<start_of_turn>model{data_point["output"]} <end_of_turn>"""
    return text
# add the "prompt" column in the dataset
text_column = [generate_prompt(data_point) for data_point in dataset]
dataset = dataset.add_column("prompt", text_column)
We'll need to tokenize our data so the model can understand it.
dataset = dataset.shuffle(seed=1234) # Shuffle dataset here
dataset = dataset.map(lambda samples: tokenizer(samples["prompt"]), batched=True)
Split the dataset into 80% for training and 20% for testing
dataset = dataset.train_test_split(test_size=0.2)
train_data = dataset["train"]
test_data = dataset["test"]
After formatting, we should get something like this¶
{
"instruction":"Create a function to calculate the sum of a sequence of integers",
"input":"[1, 2, 3, 4, 5]",
"output":"# Python code def sum_sequence(sequence): sum = 0 for num in,
sequence: sum += num return sum",
"prompt":"<start_of_turn>user Create a function to calculate the sum of a sequence of integers. here are the inputs [1, 2, 3, 4, 5] <end_of_turn>
<start_of_turn>model # Python code def sum_sequence(sequence): sum = 0 for num in sequence: sum += num return sum <end_of_turn>"
}
While using SFT (Supervised Fine-tuning Trainer) for fine-tuning, we will only be passing in the "prompt" column of the dataset (via dataset_text_field="prompt").
print(test_data)
Dataset({ features: ['input', 'output', 'text', 'instruction', 'prompt', 'input_ids', 'attention_mask'], num_rows: 24392 })
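As a quick sanity check (a minimal sketch, not in the original notebook), you can print the start of one formatted example to confirm the Gemma turn delimiters are in place; the exact record you see will depend on the shuffle seed:
# Inspect one formatted training example.
print(train_data[0]["prompt"][:300])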
Step 4 - Apply LoRA¶
Here comes the magic with PEFT! Let's load a PeftModel and specify that we are going to use low-rank adapters (LoRA) via the get_peft_model utility function and the prepare_model_for_kbit_training method from PEFT.
from peft import LoraConfig, PeftModel, prepare_model_for_kbit_training, get_peft_model
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
print(model)
GemmaForCausalLM(
  (model): GemmaModel(
    (embed_tokens): Embedding(256000, 3072, padding_idx=0)
    (layers): ModuleList(
      (0-27): 28 x GemmaDecoderLayer(
        (self_attn): GemmaSdpaAttention(
          (q_proj): Linear4bit(in_features=3072, out_features=4096, bias=False)
          (k_proj): Linear4bit(in_features=3072, out_features=4096, bias=False)
          (v_proj): Linear4bit(in_features=3072, out_features=4096, bias=False)
          (o_proj): Linear4bit(in_features=4096, out_features=3072, bias=False)
          (rotary_emb): GemmaRotaryEmbedding()
        )
        (mlp): GemmaMLP(
          (gate_proj): Linear4bit(in_features=3072, out_features=24576, bias=False)
          (up_proj): Linear4bit(in_features=3072, out_features=24576, bias=False)
          (down_proj): Linear4bit(in_features=24576, out_features=3072, bias=False)
          (act_fn): GELUActivation()
        )
        (input_layernorm): GemmaRMSNorm()
        (post_attention_layernorm): GemmaRMSNorm()
      )
    )
    (norm): GemmaRMSNorm()
  )
  (lm_head): Linear(in_features=3072, out_features=256000, bias=False)
)
import bitsandbytes as bnb
def find_all_linear_names(model):
    cls = bnb.nn.Linear4bit  # if args.bits == 4 else (bnb.nn.Linear8bitLt if args.bits == 8 else torch.nn.Linear)
    lora_module_names = set()
    for name, module in model.named_modules():
        if isinstance(module, cls):
            names = name.split('.')
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])
    if 'lm_head' in lora_module_names:  # needed for 16-bit
        lora_module_names.remove('lm_head')
    return list(lora_module_names)
modules = find_all_linear_names(model)
print(modules)
['o_proj', 'q_proj', 'up_proj', 'v_proj', 'k_proj', 'down_proj', 'gate_proj']
from peft import LoraConfig, get_peft_model
lora_config = LoraConfig(
r=64,
lora_alpha=32,
target_modules=modules,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
trainable, total = model.get_nb_trainable_parameters()
print(f"Trainable: {trainable} | total: {total} | Percentage: {trainable/total*100:.4f}%")
Trainable: 200015872 | total: 8737696768 | Percentage: 2.2891%
Step 5 - Run the training!¶
Setting the training arguments:
- for demonstration purposes, we only run it for a few steps (100), just to showcase how to use this integration with existing tools in the HF ecosystem.
# import transformers
# tokenizer.pad_token = tokenizer.eos_token
# trainer = transformers.Trainer(
# model=model,
# train_dataset=train_data,
# eval_dataset=test_data,
# args=transformers.TrainingArguments(
# per_device_train_batch_size=1,
# gradient_accumulation_steps=4,
# warmup_steps=0.03,
# max_steps=100,
# learning_rate=2e-4,
# fp16=True,
# logging_steps=1,
# output_dir="outputs_mistral_b_finance_finetuned_test",
# optim="paged_adamw_8bit",
# save_strategy="epoch",
# ),
# data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
# )
Fine-Tuning with QLoRA and Supervised Fine-Tuning¶
We're ready to fine-tune our model using QLoRA. For this tutorial, we'll use the SFTTrainer from the trl library for supervised fine-tuning. Ensure that you've installed the trl library as mentioned in the prerequisites.
#new code using SFTTrainer
import transformers
from trl import SFTTrainer
tokenizer.pad_token = tokenizer.eos_token
torch.cuda.empty_cache()
trainer = SFTTrainer(
model=model,
train_dataset=train_data,
eval_dataset=test_data,
dataset_text_field="prompt",
peft_config=lora_config,
args=transformers.TrainingArguments(
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
warmup_steps=0.03,
max_steps=100,
learning_rate=2e-4,
logging_steps=1,
output_dir="outputs",
optim="paged_adamw_8bit",
save_strategy="epoch",
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
Let's start training¶
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
/home/adithya/miniconda3/envs/gemma-venv/lib/python3.10/site-packages/torch/utils/checkpoint.py:460: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants. warnings.warn(
Step | Training Loss |
---|---|
1 | 10.299600 |
2 | 6.640600 |
3 | 7.763200 |
4 | 4.142200 |
5 | 3.619800 |
6 | 4.840600 |
7 | 2.886800 |
8 | 1.334500 |
9 | 1.177300 |
10 | 1.343300 |
11 | 1.306000 |
12 | 1.122000 |
13 | 1.337300 |
14 | 1.083400 |
15 | 1.399100 |
16 | 1.172800 |
17 | 0.838900 |
18 | 1.027200 |
19 | 0.847500 |
20 | 1.065700 |
21 | 0.944100 |
22 | 0.782900 |
23 | 1.030100 |
24 | 0.911900 |
25 | 0.868400 |
26 | 0.768300 |
27 | 0.845200 |
28 | 1.036400 |
29 | 0.766000 |
30 | 0.741500 |
31 | 0.738800 |
32 | 0.779200 |
33 | 0.833000 |
34 | 0.851800 |
35 | 0.824800 |
36 | 0.757800 |
37 | 0.804500 |
38 | 0.962900 |
39 | 0.716000 |
40 | 1.027000 |
41 | 0.743100 |
42 | 0.932800 |
43 | 0.623100 |
44 | 0.671600 |
45 | 0.807100 |
46 | 0.957200 |
47 | 0.643900 |
48 | 0.867800 |
49 | 0.810700 |
50 | 0.871100 |
51 | 0.628500 |
52 | 0.946400 |
53 | 0.918600 |
54 | 0.920300 |
55 | 0.746100 |
56 | 0.914800 |
57 | 0.705700 |
58 | 0.883300 |
59 | 1.016300 |
60 | 0.583200 |
61 | 0.872000 |
62 | 0.617600 |
63 | 0.858700 |
64 | 0.955700 |
65 | 0.854500 |
66 | 0.778000 |
67 | 0.733200 |
68 | 0.871200 |
69 | 0.847700 |
70 | 0.567400 |
71 | 1.078200 |
72 | 0.945800 |
73 | 0.762500 |
74 | 0.618800 |
75 | 0.803600 |
76 | 0.848400 |
77 | 0.504300 |
78 | 0.685300 |
79 | 0.700700 |
80 | 0.636400 |
81 | 0.832700 |
82 | 0.614600 |
83 | 0.899000 |
84 | 0.623800 |
85 | 0.637200 |
86 | 0.697500 |
87 | 0.770200 |
88 | 0.800000 |
89 | 0.748600 |
90 | 0.897100 |
91 | 0.718800 |
92 | 0.703900 |
93 | 0.635600 |
94 | 0.649800 |
95 | 0.627200 |
96 | 0.785200 |
97 | 0.808200 |
98 | 0.720500 |
99 | 0.656600 |
100 | 0.650900 |
TrainOutput(global_step=100, training_loss=1.1844707012176514, metrics={'train_runtime': 188.6612, 'train_samples_per_second': 2.12, 'train_steps_per_second': 0.53, 'total_flos': 3947993787666432.0, 'train_loss': 1.1844707012176514, 'epoch': 0.0})
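Training disabled the KV cache earlier (model.config.use_cache = False); as the comment there notes, re-enable it before running inference:
# Re-enable the KV cache for faster generation now that training is done.
model.config.use_cache = True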
Share adapters on the 🤗 Hub
new_model = "gemma-Code-Instruct-Finetune-test"  # Name of the model you will be pushing to the Hugging Face model hub
trainer.model.save_pretrained(new_model)
base_model = AutoModelForCausalLM.from_pretrained(
model_id,
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map={"": 0},
)
merged_model = PeftModel.from_pretrained(base_model, new_model)
merged_model = merged_model.merge_and_unload()
# Save the merged model
merged_model.save_pretrained("merged_model", safe_serialization=True)
tokenizer.save_pretrained("merged_model")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
# Push the model and tokenizer to the Hugging Face Model Hub
merged_model.push_to_hub(new_model, use_temp_dir=False)
tokenizer.push_to_hub(new_model, use_temp_dir=False)
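If you want to use the pushed model later (for example in a fresh session), you could reload it directly from the Hub. This is a hypothetical sketch: replace "your-username/gemma-Code-Instruct-Finetune-test" with your actual repo id, and the reloaded_model / reloaded_tokenizer names are just illustrative.
# Hypothetical example: reload the merged model from the Hub in a new session.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

hub_repo = "your-username/gemma-Code-Instruct-Finetune-test"  # replace with your repo id
reloaded_model = AutoModelForCausalLM.from_pretrained(hub_repo, torch_dtype=torch.float16, device_map={"": 0})
reloaded_tokenizer = AutoTokenizer.from_pretrained(hub_repo)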
Test out the Fine-tuned Model¶
result = get_completion(query="code the fibonacci series in python using reccursion", model=merged_model, tokenizer=tokenizer)
print(result)
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
user Below is an instruction that describes a task. Write a response that appropriately completes the request. code the fibonacci series in python using reccursion model a Python program to code the Fibonacci series using recursion. Here's a solution: ```python def fibonacci(n): """Calculates the nth Fibonacci number using recursion. Args: n: The index of the Fibonacci number to calculate. Returns: The nth Fibonacci number. """ # Base case: The first two Fibonacci numbers are 0 and 1. if n <= 1: return n # Recursive case: Otherwise, calculate the nth Fibonacci number by adding the previous two numbers. else: return fibonacci(n-1) + fibonacci(n-2) # Print the Fibonacci numbers. for i in range(10): print(fibonacci(i)) ``` **Explanation:** * The `fibonacci` function takes an integer `n` as input. * If `n` is less than or equal to 1, it returns `n` itself, as the base case. * Otherwise, it calculates `n`-th Fibonacci number recursively by adding the previous two numbers. * The function calls itself with smaller and smaller values of `n` until it reaches the base case. **Example Usage:** ```python # Print the Fibonacci numbers from 0 to 10. for i in range(10): print(fibonacci(i)) ``` **Output:** ``` 0 1 1 2 3 5 8 13 21 34 ``` **Note:** * This program calculates Fibonacci numbers recursively, which means that it may not be very efficient for large values of `n` as it repeats calculations. * For more efficient Fibonacci number calculation, consider using an iterative approach rather than recursion.