Data Embedding
import os
from dotenv import load_dotenv
from openai import OpenAI
from rich import print
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=OPENAI_API_KEY)
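The upload below expects a batchinput.jsonl file in which every line is one request against the /v1/embeddings endpoint. A minimal sketch of how such a file could be generated (the texts and the model name here are purely illustrative):
import json

texts = ["First passage to embed.", "Second passage to embed."]  # illustrative inputs

with open("batchinput.jsonl", "w") as f:
    for i, text in enumerate(texts):
        request = {
            "custom_id": f"request-{i}",  # used later to match results to inputs
            "method": "POST",
            "url": "/v1/embeddings",
            "body": {"model": "text-embedding-3-small", "input": text},
        }
        f.write(json.dumps(request) + "\n")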
batch_input_file = client.files.create(
    file=open("batchinput.jsonl", "rb"),
    purpose="batch"
)
print(batch_input_file)
FileObject( id='file-5UhXBkBba7yLBUhEALqUQea3', bytes=4758, created_at=1726847544, filename='batchinput.jsonl', object='file', purpose='batch', status='processed', status_details=None )
batch_input_file_id = batch_input_file.id
batch = client.batches.create(
    input_file_id=batch_input_file_id,
    endpoint="/v1/embeddings",
    completion_window="24h",
    metadata={
        "description": "nightly eval job"
    }
)
print(batch)
Batch( id='batch_katOshqMSik1DI8qwXtuHaUj', completion_window='24h', created_at=1726847573, endpoint='/v1/embeddings', input_file_id='file-5UhXBkBba7yLBUhEALqUQea3', object='batch', status='validating', cancelled_at=None, cancelling_at=None, completed_at=None, error_file_id=None, errors=None, expired_at=None, expires_at=1726933973, failed_at=None, finalizing_at=None, in_progress_at=None, metadata={'description': 'nightly eval job'}, output_file_id=None, request_counts=BatchRequestCounts(completed=0, failed=0, total=0) )
batch_status = client.batches.retrieve(batch.id)
print(batch_status)
Batch( id='batch_katOshqMSik1DI8qwXtuHaUj', completion_window='24h', created_at=1726847573, endpoint='/v1/embeddings', input_file_id='file-5UhXBkBba7yLBUhEALqUQea3', object='batch', status='completed', cancelled_at=None, cancelling_at=None, completed_at=1726847576, error_file_id=None, errors=None, expired_at=None, expires_at=1726933973, failed_at=None, finalizing_at=1726847576, in_progress_at=1726847574, metadata={'description': 'nightly eval job'}, output_file_id='file-fWmorUHxdNWkFGNLreI8JCCE', request_counts=BatchRequestCounts(completed=2, failed=0, total=2) )
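In this run the batch completed almost immediately, but in general it can take up to the 24-hour completion window, so you would normally poll until it reaches a terminal state. A minimal polling sketch (the sleep interval is arbitrary):
import time

while True:
    batch_status = client.batches.retrieve(batch.id)
    if batch_status.status in ("completed", "failed", "expired", "cancelled"):
        break
    time.sleep(30)

print(batch_status.status)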
file_response = client.files.content(batch_status.output_file_id)
file_response.write_to_file("result.json")
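Each line of the downloaded file is a JSON object whose response body has the same shape as a regular embeddings response, so the vectors can be read back out like this (a sketch assuming one input per request, as in the file built above):
import json

embeddings = {}
with open("result.json") as f:
    for line in f:
        record = json.loads(line)
        body = record["response"]["body"]
        embeddings[record["custom_id"]] = body["data"][0]["embedding"]

print(len(embeddings))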
OpenAI¶
You will need an OpenAI API key for this tutorial. Log in to your platform.openai.com account, click on your profile picture in the upper right corner, and choose 'API Keys' from the menu. Create an API key for this tutorial and save it. You will need it below.
Set your OpenAI API key (and, if you use Qdrant Cloud, your Qdrant API key and endpoint) in the .env file we created.
import os
from dotenv import load_dotenv
load_dotenv()
Setting up Vector Database¶
We will be using Qdrant as the vector database. There are 4 ways to initialize Qdrant:
- In-memory
  client = qdrant_client.QdrantClient(location=":memory:")
- Disk
  client = qdrant_client.QdrantClient(path="./data")
- Self-hosted or Docker
  client = qdrant_client.QdrantClient(
      # url="http://<host>:<port>"
      host="localhost", port=6333
  )
- Qdrant Cloud
  client = qdrant_client.QdrantClient(
      url=QDRANT_CLOUD_ENDPOINT,
      api_key=QDRANT_API_KEY,
  )
For this notebook we will be using a locally hosted Qdrant instance (host="localhost", port=6333); the in-memory, on-disk, and Qdrant Cloud options are left as comments in the cell below.
import qdrant_client

# LlamaIndex core imports
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core import Settings

# LlamaIndex vector store import
from llama_index.vector_stores.qdrant import QdrantVectorStore

# creating a qdrant client instance
client = qdrant_client.QdrantClient(
    # you can use :memory: mode for fast and light-weight experiments,
    # it does not require Qdrant to be deployed anywhere,
    # but requires qdrant-client >= 1.1.1
    # location=":memory:"
    # otherwise set the Qdrant instance address with:
    # url=QDRANT_CLOUD_ENDPOINT,
    # otherwise set the Qdrant instance with host and port:
    host="localhost",
    port=6333,
    # set the API key for Qdrant Cloud
    # api_key=QDRANT_API_KEY,
    # or use a local on-disk database:
    # path="./db/"
)
vector_store = QdrantVectorStore(client=client, collection_name="01_Data_Ingestion")
Build an Ingestion Pipeline from Scratch¶
We show how to build an ingestion pipeline as mentioned in the introduction.
Note that steps (2) and (3) can be handled via our NodeParser
abstractions, which handle splitting and node creation.
For the purposes of this tutorial, we show you how to create these objects manually.
1. Load Data¶
!mkdir -p data
!wget --user-agent "Mozilla" "https://arxiv.org/pdf/2307.09288.pdf" -O "data/llama2.pdf"
--2023-10-13 01:45:14-- https://arxiv.org/pdf/2307.09288.pdf Resolving arxiv.org (arxiv.org)... 128.84.21.199 Connecting to arxiv.org (arxiv.org)|128.84.21.199|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 13661300 (13M) [application/pdf] Saving to: ‘data/llama2.pdf’ data/llama2.pdf 100%[===================>] 13.03M 7.59MB/s in 1.7s 2023-10-13 01:45:16 (7.59 MB/s) - ‘data/llama2.pdf’ saved [13661300/13661300]
import fitz  # PyMuPDF

file_path = "data/llama2.pdf"
doc = fitz.open(file_path)
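As a quick sanity check, you can confirm the PDF opened correctly and peek at the first page:
print(doc.page_count)                 # number of pages in the PDF
print(doc[0].get_text("text")[:300])  # first few hundred characters of page 1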
2. Use a Text Splitter to Split Documents¶
Here we import our SentenceSplitter
to split document texts into smaller chunks, while preserving paragraphs/sentences as much as possible.
from llama_index.core.node_parser import SentenceSplitter
text_parser = SentenceSplitter(
    chunk_size=1024,
    # separator=" ",
)
text_chunks = []
# maintain relationship with source doc index, to help inject doc metadata in (3)
doc_idxs = []
for doc_idx, page in enumerate(doc):
    page_text = page.get_text("text")
    cur_text_chunks = text_parser.split_text(page_text)
    text_chunks.extend(cur_text_chunks)
    doc_idxs.extend([doc_idx] * len(cur_text_chunks))
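By construction, text_chunks and doc_idxs stay aligned: the chunk at position i came from the page at doc_idxs[i].
print(len(text_chunks), len(doc_idxs))  # both lists have the same length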
3. Manually Construct Nodes from Text Chunks¶
We convert each chunk into a TextNode object, a low-level data abstraction in LlamaIndex that stores content but also allows defining metadata and relationships with other Nodes.
We keep track of each chunk's source page so that document metadata can be injected into each node; a sketch of this is shown after the code below. This essentially replicates logic in our SentenceSplitter.
from llama_index.core.schema import TextNode

nodes = []
for idx, text_chunk in enumerate(text_chunks):
    node = TextNode(
        text=text_chunk,
    )
    src_doc_idx = doc_idxs[idx]
    src_page = doc[src_doc_idx]
    nodes.append(node)
print(nodes[0].metadata)
# print a sample node
print(nodes[0].get_content(metadata_mode="all"))
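The loop above looks up the source page (src_page) but does not yet attach anything to the nodes, so nodes[0].metadata is still empty at this point. A minimal, illustrative sketch of how page metadata and prev/next relationships could be attached (step 4 below adds richer metadata automatically):
from llama_index.core.schema import NodeRelationship, RelatedNodeInfo

for idx, node in enumerate(nodes):
    # attach the source page number as node metadata
    node.metadata["page_number"] = doc_idxs[idx] + 1
    # link neighbouring chunks so downstream components can walk the sequence
    if idx > 0:
        node.relationships[NodeRelationship.PREVIOUS] = RelatedNodeInfo(
            node_id=nodes[idx - 1].node_id
        )
    if idx < len(nodes) - 1:
        node.relationships[NodeRelationship.NEXT] = RelatedNodeInfo(
            node_id=nodes[idx + 1].node_id
        )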
[Optional] 4. Extract Metadata from each Node¶
We extract metadata from each Node using our Metadata extractors.
This will add more metadata to each Node.
from llama_index.core.extractors import (
    QuestionsAnsweredExtractor,
    TitleExtractor,
)
from llama_index.core.ingestion import IngestionPipeline
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo")

extractors = [
    TitleExtractor(nodes=5, llm=llm),
    QuestionsAnsweredExtractor(questions=3, llm=llm),
]

pipeline = IngestionPipeline(
    transformations=extractors,
)
nodes = await pipeline.arun(nodes=nodes, in_place=False)
print(nodes[0].metadata)
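The await works because notebook cells already run inside an event loop. In a plain Python script, the synchronous entry point should do the same job:
nodes = pipeline.run(nodes=nodes, in_place=False)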
5. Generate Embeddings for each Node¶
Generate document embeddings for each Node using our OpenAI embedding model (text-embedding-ada-002). Store these on the embedding property of each Node.
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding()

for node in nodes:
    node_embedding = embed_model.get_text_embedding(
        node.get_content(metadata_mode="all")
    )
    node.embedding = node_embedding
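Calling get_text_embedding once per node issues one request per node. As a sketch of a faster variant, the embedding model also exposes a batch helper that takes a list of texts and returns vectors in the same order:
texts = [node.get_content(metadata_mode="all") for node in nodes]
embeddings = embed_model.get_text_embedding_batch(texts, show_progress=True)
for node, node_embedding in zip(nodes, embeddings):
    node.embedding = node_embedding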
6. Load Nodes into a Vector Store¶
We now insert these nodes into our QdrantVectorStore.
NOTE: We skip the VectorStoreIndex abstraction, which is a higher-level abstraction that handles ingestion as well. We use VectorStoreIndex
in the next section to fast-track retrieval/querying.
vector_store.add(nodes)
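As an optional check, the Qdrant client can report how many points the collection now holds:
print(client.count(collection_name="01_Data_Ingestion", exact=True))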
Retrieve and Query from the Vector Store¶
Now that our ingestion is complete, we can retrieve/query this vector store.
NOTE: We can use our high-level VectorStoreIndex
abstraction here. See the next section to see how to define retrieval at a lower-level!
from llama_index.core import VectorStoreIndex
from llama_index.core import StorageContext
index = VectorStoreIndex.from_vector_store(vector_store)
query_engine = index.as_query_engine()
query_str = "Can you tell me about the key concepts for safety finetuning"
response = query_engine.query(query_str)
print(str(response))
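The response object also carries the retrieved source nodes, which is useful for checking what the answer was grounded on:
for source_node in response.source_nodes:
    print(source_node.score, source_node.node.get_content()[:200])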