📖 A minimal RAG (Retrieval-Augmented Generation) implementation that indexes FAQ data into Azure AI Search with vector embeddings and provides a terminal chat using Azure OpenAI to answer questions based on the retrieved content.
- Copy `.env.example` to `.env` and add your Azure endpoints & keys.
- Run `poetry install`.
- To use vector search in Azure AI Search, a vectorizer can connect an Azure OpenAI embedding model that turns your text into a numerical embedding. Alternatively, you can skip the vectorizer, create the embedding yourself, and provide it directly to Azure AI Search.
- Options 1 and 3 demonstrate how to provide the embedding directly without a vectorizer: `chat_app.py` uses `VectorizedQuery`.
- Options 2 and 4 demonstrate the use of a vectorizer along with semantic search: `chat_app_v2.py` uses `VectorizableTextQuery`.
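The difference between the two query styles can be sketched as follows. This is a minimal, stdlib-only illustration of the pattern, not the Azure SDK itself (the real `VectorizedQuery` and `VectorizableTextQuery` classes live in `azure.search.documents.models`), and `fake_embed` is a stand-in for the Azure OpenAI embedding model:

```python
import math

def fake_embed(text: str) -> list[float]:
    # Stand-in for an Azure OpenAI embedding call; the real apps use
    # the embedding model configured in .env.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

DOCS = {
    "faq1": "How do I reset my password?",
    "faq2": "What are the support hours?",
}
INDEX = {doc_id: fake_embed(text) for doc_id, text in DOCS.items()}

def search_vectorized(vector: list[float]) -> str:
    # VectorizedQuery pattern: the caller embeds the text itself
    # and sends only the vector to the search index.
    return max(INDEX, key=lambda doc_id: cosine(INDEX[doc_id], vector))

def search_vectorizable_text(text: str) -> str:
    # VectorizableTextQuery pattern: raw text is sent, and the index's
    # vectorizer produces the embedding on the service side.
    return search_vectorized(fake_embed(text))

# Both paths retrieve the same document for the same question:
question = "password reset help"
assert search_vectorized(fake_embed(question)) == search_vectorizable_text(question)
```

With the real SDK the only difference in `chat_app.py` vs `chat_app_v2.py` is which query object is passed to the search client; the index must have a vectorizer configured for the `VectorizableTextQuery` path to work.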
- 📤 Push: upload local data — `python push_aisearch_index.py`
- 📤 Push: upload local data with vectorizer and semantic search — `python push_aisearch_index_v2.py`
- 📥 Pull: using an Azure AI Search indexer — `python pull_aisearch_index.py`
- 📥 Pull: using an Azure AI Search indexer with vectorizer and semantic search — `python pull_aisearch_index_v2.py`
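The push scripts follow the pattern sketched below: read local FAQ rows, attach an embedding to each, and upload them as a batch. The field names (`id`, `content`, `content_vector`) and the `embed` helper are illustrative assumptions; the real scripts upload via `azure-search-documents` against the schema they define.

```python
def embed(text: str) -> list[float]:
    # Placeholder for an Azure OpenAI embedding call.
    return [float(len(text)), float(sum(map(ord, text)) % 97)]

def build_batch(rows: list[str]) -> list[dict]:
    # Attach a vector field to each row, mirroring a typical
    # push-style upload to an Azure AI Search index.
    return [
        {"id": str(i), "content": text, "content_vector": embed(text)}
        for i, text in enumerate(rows)
    ]

batch = build_batch(["How do I reset my password?", "What are the support hours?"])
# A real push script would now hand this batch to the search client's
# document-upload call instead of just asserting on it.
assert len(batch) == 2 and "content_vector" in batch[0]
```

The pull variants invert this flow: an Azure AI Search indexer reads the source data itself, so no local upload loop is needed.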
- To use an embedding directly with `VectorizedQuery`: `python chat_app.py`
- To use text with a vectorizer via `VectorizableTextQuery`: `python chat_app_v2.py`
- Type your question, or `exit` to quit.
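The chat loop in both apps boils down to the flow below: read a question, retrieve the best-matching FAQ entry, and pass it to the chat model as grounding context. `retrieve` and `generate` here are hypothetical stand-ins for the Azure AI Search query and the Azure OpenAI chat completion:

```python
FAQ = {
    "How do I reset my password?": "Use the 'Forgot password' link on the sign-in page.",
    "What are the support hours?": "Support is available 9am-5pm on weekdays.",
}

def retrieve(question: str) -> str:
    # Stand-in for a vector query against the Azure AI Search index:
    # here, pick the FAQ entry with the most overlapping words.
    best = max(FAQ, key=lambda q: len(set(q.lower().split()) & set(question.lower().split())))
    return FAQ[best]

def generate(question: str, context: str) -> str:
    # Stand-in for an Azure OpenAI chat completion grounded on the
    # retrieved context.
    return f"Based on the FAQ: {context}"

def chat_loop() -> None:
    while True:
        question = input("> ").strip()
        if question.lower() == "exit":  # matches the exit command above
            break
        print(generate(question, retrieve(question)))

# chat_loop()  # uncomment to run interactively
```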
To connect Azure AI Search with Azure AI Foundry, you need to add both a vectorizer and semantic search to the index. See: Use an existing AI Search index with the Azure AI Search tool.
🔗 Learn more: csv indexer