Custom Haystack components for creating embeddings and reranking documents using Voyage models.
Voyage's embedding models are state-of-the-art in retrieval accuracy. These models outperform top-performing embedding models like intfloat/e5-mistral-7b-instruct and OpenAI/text-embedding-3-large on the MTEB Benchmark.
- [v1.8.0 - 07/11/25]:
  - The new `VoyageContextualizedDocumentEmbedder` component supports Voyage's contextualized chunk embeddings.
  - Contextualized embeddings encode document chunks "in context" with other chunks from the same document, preserving semantic relationships and reducing context loss for improved retrieval accuracy.
- [v1.5.0 - 22/01/25]:
  - The new `VoyageRanker` component can be used to rerank documents using the Voyage Reranker models.
  - Matryoshka Embeddings and Quantized Embeddings can now be created using the `output_dimension` and `output_dtype` parameters (see the sketch after this changelog).
- [v1.4.0 - 24/07/24]:
  - The maximum timeout and the number of retries made by the client can now be set for the embedders using the `timeout` and `max_retries` parameters.
- [v1.3.0 - 18/03/24]:
  - Breaking Change: The import path for the embedders has been changed to `haystack_integrations.components.embedders.voyage_embedders`. Please replace all instances of `from voyage_embedders.voyage_document_embedder import VoyageDocumentEmbedder` and `from voyage_embedders.voyage_text_embedder import VoyageTextEmbedder` with `from haystack_integrations.components.embedders.voyage_embedders import VoyageDocumentEmbedder, VoyageTextEmbedder`.
  - The embedders now use the Haystack `Secret` API for authentication. For more information, please see the Secret Management Documentation.
- [v1.2.0 - 02/02/24]:
  - Breaking Change: `VoyageDocumentEmbedder` and `VoyageTextEmbedder` now accept the `model` parameter instead of `model_name`.
  - The embedders now use the new `voyageai.Client.embed()` method instead of the deprecated `get_embedding` and `get_embeddings` methods of the global namespace.
  - Support for the new `truncate` parameter has been added.
  - The embedders now return the total number of tokens used under the `"total_tokens"` key in the metadata.
- [v1.1.0 - 13/12/23]: Added support for the `input_type` parameter in `VoyageTextEmbedder` and `VoyageDocumentEmbedder`.
- [v1.0.0 - 21/11/23]: Added `VoyageTextEmbedder` and `VoyageDocumentEmbedder` to embed strings and documents.
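Matryoshka and quantized embeddings (v1.5.0 above) are requested at initialization time. Below is a minimal sketch, assuming the parameters are passed directly to the embedder constructor; the `voyage-3-large` model name and the specific `output_dimension`/`output_dtype` values are illustrative only:

```python
from haystack_integrations.components.embedders.voyage_embedders import VoyageTextEmbedder

# Request shorter (Matryoshka) vectors and quantized output via `output_dimension`
# and `output_dtype`. The model name and values below are illustrative; check the
# Voyage AI documentation for supported models and combinations.
embedder = VoyageTextEmbedder(
    model="voyage-3-large",
    output_dimension=512,
    output_dtype="int8",
)

result = embedder.run(text="Matryoshka embeddings keep the leading dimensions meaningful.")
print(len(result["embedding"]))  # expected to match the requested output_dimension
```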
Install the package from PyPI:

```bash
pip install voyage-embedders-haystack
```

You can use Voyage Embedding models with multiple components:
- VoyageTextEmbedder: For generating embeddings for queries.
- VoyageDocumentEmbedder: For creating semantic embeddings for documents in your indexing pipeline.
- VoyageContextualizedDocumentEmbedder: For creating contextualized embeddings where document chunks are embedded together to preserve context and improve retrieval accuracy.
The Voyage Reranker models can be used with the VoyageRanker component.
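A minimal reranking sketch might look like the following; the import path, the `rerank-2` model name, and the `run(query=..., documents=...)` signature are assumptions based on the standard Haystack ranker interface, so check the package documentation for the exact API:

```python
from haystack.dataclasses import Document

# Import path is an assumption; verify it against the package documentation.
from haystack_integrations.components.rankers.voyage import VoyageRanker

# "rerank-2" is an illustrative model name; see the Voyage AI docs for available rerankers.
ranker = VoyageRanker(model="rerank-2")

docs = [
    Document(content="Paris is the capital of France."),
    Document(content="The Eiffel Tower is located in Paris."),
    Document(content="Berlin is the capital of Germany."),
]

# Rerank previously retrieved documents against the query; the most relevant come first.
result = ranker.run(query="What is the capital of France?", documents=docs)
for doc in result["documents"]:
    print(doc.content)
```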
The VoyageContextualizedDocumentEmbedder uses Voyage's contextualized embedding models to encode document chunks "in context" with other chunks from the same document. This approach preserves semantic relationships between chunks and reduces context loss, leading to improved retrieval accuracy.
Key features:
- Documents are grouped by a metadata field (default: `source_id`)
- Chunks from the same source document are embedded together
- Maintains semantic connections between related chunks
- Recommended model: `voyage-context-3`
For detailed usage examples, see the contextualized embedder example.
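As a quick sketch of the idea (the import path and the `run(documents=...)` signature are assumed to mirror the other embedders in this package; refer to the linked example for the exact API):

```python
from haystack.dataclasses import Document

# Import path assumed to match the other embedders in this package.
from haystack_integrations.components.embedders.voyage_embedders import VoyageContextualizedDocumentEmbedder

# Chunks sharing the same "source_id" are grouped and embedded together, in context.
chunks = [
    Document(content="Chapter 1: The company was founded in 2015.", meta={"source_id": "report"}),
    Document(content="Headcount grew to over 200 employees by 2020.", meta={"source_id": "report"}),
    Document(content="An unrelated note from a different file.", meta={"source_id": "notes"}),
]

embedder = VoyageContextualizedDocumentEmbedder(model="voyage-context-3")
result = embedder.run(documents=chunks)

# Each returned Document carries an embedding computed with its sibling chunks as context.
print(len(result["documents"][0].embedding))
```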
Once you've selected the component that suits your use case, initialize it with the model name and your VoyageAI API key. You can also set the `VOYAGE_API_KEY` environment variable instead of passing the API key as an argument.
To get an API key, please see the Voyage AI website.
Information about the supported models can be found in the Voyage AI Documentation.
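For example, a minimal query-embedding setup that reads the key from the environment might look like this; it assumes the output keys follow the standard Haystack text-embedder interface (`embedding` and `meta`):

```python
import os

from haystack_integrations.components.embedders.voyage_embedders import VoyageTextEmbedder

# The embedders pick up VOYAGE_API_KEY from the environment if no key is passed explicitly.
# The value below is a placeholder for illustration; prefer exporting the key in your shell.
os.environ["VOYAGE_API_KEY"] = "your-voyage-api-key"

text_embedder = VoyageTextEmbedder(model="voyage-2", input_type="query")

result = text_embedder.run(text="What is Haystack?")
print(len(result["embedding"]))  # dimensionality of the query embedding
print(result["meta"])            # metadata, including the "total_tokens" used
```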
You can find all the examples in the examples folder.
Below is an example Semantic Search pipeline that uses the Simple Wikipedia Dataset from HuggingFace.
Load the dataset:
```python
# Install HuggingFace Datasets using "pip install datasets"
from datasets import load_dataset
from haystack import Pipeline
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.components.writers import DocumentWriter
from haystack.dataclasses import Document
from haystack.document_stores.in_memory import InMemoryDocumentStore
# Import Voyage Embedders
from haystack_integrations.components.embedders.voyage_embedders import VoyageDocumentEmbedder, VoyageTextEmbedder
# Load first 100 rows of the Simple Wikipedia Dataset from HuggingFace
dataset = load_dataset("pszemraj/simple_wikipedia", split="validation[:100]")
docs = [
Document(
content=doc["text"],
meta={
"title": doc["title"],
"url": doc["url"],
},
)
for doc in dataset
]
```

Index the documents to the `InMemoryDocumentStore` using the `VoyageDocumentEmbedder` and `DocumentWriter`:

```python
doc_store = InMemoryDocumentStore(embedding_similarity_function="cosine")
retriever = InMemoryEmbeddingRetriever(document_store=doc_store)
doc_writer = DocumentWriter(document_store=doc_store)
doc_embedder = VoyageDocumentEmbedder(
model="voyage-2",
input_type="document",
)
text_embedder = VoyageTextEmbedder(model="voyage-2", input_type="query")
# Indexing Pipeline
indexing_pipeline = Pipeline()
indexing_pipeline.add_component(instance=doc_embedder, name="DocEmbedder")
indexing_pipeline.add_component(instance=doc_writer, name="DocWriter")
indexing_pipeline.connect("DocEmbedder", "DocWriter")
indexing_pipeline.run({"DocEmbedder": {"documents": docs}})
print(f"Number of documents in Document Store: {len(doc_store.filter_documents())}")
print(f"First Document: {doc_store.filter_documents()[0]}")
print(f"Embedding of first Document: {doc_store.filter_documents()[0].embedding}")Query the Semantic Search Pipeline using the InMemoryEmbeddingRetriever and VoyageTextEmbedder:
text_embedder = VoyageTextEmbedder(model="voyage-2", input_type="query")
# Query Pipeline
query_pipeline = Pipeline()
query_pipeline.add_component(instance=text_embedder, name="TextEmbedder")
query_pipeline.add_component(instance=retriever, name="Retriever")
query_pipeline.connect("TextEmbedder.embedding", "Retriever.query_embedding")
# Search
results = query_pipeline.run({"TextEmbedder": {"text": "Which year did the Joker movie release?"}})
# Print text from top result
top_result = results["Retriever"]["documents"][0].content
print("The top search result is:")
print(top_result)
```

We welcome contributions from the community! Please take a look at our contributing guide for more details on how to get started.
Pull requests are welcome. For major changes, please open an issue first to discuss the proposed changes.
voyage-embedders-haystack is distributed under the terms of the Apache-2.0 license.
Maintained by Ashwin Mathur.