WeaviateStore
Weaviate is an open source vector database that stores both objects and vectors, allowing you to combine vector search with structured filtering. LangChain connects to Weaviate via the weaviate-client package, the official TypeScript client for Weaviate.
This guide provides a quick overview for getting started with Weaviate vector stores. For detailed documentation of all WeaviateStore features and configurations head to the API reference.
Overview
Integration details
| Class | Package | PY support | Package latest |
| --- | --- | --- | --- |
| WeaviateStore | @langchain/weaviate | ✅ | [npm](https://www.npmjs.com/package/@langchain/weaviate) |
Setup
To use Weaviate vector stores, you'll need to set up a Weaviate instance and install the @langchain/weaviate integration package. You should also install the weaviate-client package to initialize a client to connect to your instance with, and the uuid package if you want to assign ids to your indexed documents.
This guide will also use OpenAI embeddings, which require you to install the @langchain/openai integration package. You can also use other supported embeddings models if you wish.
- npm: npm i @langchain/weaviate @langchain/core weaviate-client uuid @langchain/openai
- yarn: yarn add @langchain/weaviate @langchain/core weaviate-client uuid @langchain/openai
- pnpm: pnpm add @langchain/weaviate @langchain/core weaviate-client uuid @langchain/openai
You'll need to run Weaviate either locally or on a server. See the Weaviate documentation for more information.
Credentials
Once you've set up your instance, set the following environment variables:
// If running locally, include port e.g. "localhost:8080"
process.env.WEAVIATE_URL = "YOUR_WEAVIATE_URL";
// Optional, for cloud deployments
process.env.WEAVIATE_API_KEY = "YOUR_API_KEY";
If you are using OpenAI embeddings for this guide, you'll need to set your OpenAI key as well:
process.env.OPENAI_API_KEY = "YOUR_API_KEY";
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
// process.env.LANGSMITH_TRACING="true"
// process.env.LANGSMITH_API_KEY="your-api-key"
Instantiation
Connect a Weaviate client
In most cases, you should use one of the connection helper functions to connect to your Weaviate instance:
- connectToWeaviateCloud
- connectToLocal
- connectToCustom
import { WeaviateStore } from "@langchain/weaviate";
import { OpenAIEmbeddings } from "@langchain/openai";
import weaviate from "weaviate-client";
const embeddings = new OpenAIEmbeddings({
model: "text-embedding-3-small",
});
// The v3 client's connectToWeaviateCloud helper is async and takes the
// cluster URL as its first argument
const weaviateClient = await weaviate.connectToWeaviateCloud(
  process.env.WEAVIATE_URL!,
  {
    authCredentials: new weaviate.ApiKey(process.env.WEAVIATE_API_KEY || ""),
    headers: {
      "X-OpenAI-Api-Key": process.env.OPENAI_API_KEY || "",
      "X-Cohere-Api-Key": process.env.COHERE_API_KEY || "",
    },
  }
);
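If you are running Weaviate locally instead, a minimal sketch using the connectToLocal helper might look like the following (the host and ports shown are Weaviate's defaults, HTTP on 8080 and gRPC on 50051; adjust them to your setup):

// Sketch: connect to a locally running Weaviate instance
const localClient = await weaviate.connectToLocal({
  host: "localhost",
  port: 8080,
  grpcPort: 50051,
  headers: {
    "X-OpenAI-Api-Key": process.env.OPENAI_API_KEY || "",
  },
});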
Initialize the vectorStore
To create a collection, specify at least the collection name. If you don't specify any properties, auto-schema creates them.
const vectorStore = new WeaviateStore(embeddings, {
client: weaviateClient,
// Must start with a capital letter
indexName: "Langchainjs_test",
});
To use Weaviate's named vectors, vectorizers, rerankers, generative models, etc., use the schema property when creating the vector store. The collection name and other properties in schema will take precedence when creating the vector store.
// dataType and vectorizer are named exports of the weaviate-client package
import { dataType, vectorizer } from "weaviate-client";

const vectorStore = new WeaviateStore(embeddings, {
client: weaviateClient,
schema: {
name: "Langchainjs_test",
description: "A simple dataset",
properties: [
{
name: "title",
dataType: dataType.TEXT,
},
{
name: "foo",
dataType: dataType.TEXT,
},
],
vectorizers: [
vectorizer.text2VecOpenAI({
name: "title",
sourceProperties: ["title"], // (Optional) Set the source property(ies)
// vectorIndexConfig: configure.vectorIndex.hnsw() // (Optional) Set the vector index configuration
}),
],
generative: weaviate.configure.generative.openAI(),
reranker: weaviate.configure.reranker.cohere(),
},
});
Manage vector store
Add items to vector store
Note: If you want to associate ids with your indexed documents, they must be UUIDs.
import type { Document } from "@langchain/core/documents";
import { v4 as uuidv4 } from "uuid";
const document1: Document = {
pageContent: "The powerhouse of the cell is the mitochondria",
metadata: { source: "https://example.com" },
};
const document2: Document = {
pageContent: "Buildings are made out of brick",
metadata: { source: "https://example.com" },
};
const document3: Document = {
pageContent: "Mitochondria are made out of lipids",
metadata: { source: "https://example.com" },
};
const document4: Document = {
pageContent: "The 2024 Olympics are in Paris",
metadata: { source: "https://example.com" },
};
const documents = [document1, document2, document3, document4];
const uuids = [uuidv4(), uuidv4(), uuidv4(), uuidv4()];
await vectorStore.addDocuments(documents, { ids: uuids });
[
'610f9b92-9bee-473f-a4db-8f2ca6e3442d',
'995160fa-441e-41a0-b476-cf3785518a0d',
'0cdbe6d4-0df8-4f99-9b67-184009fee9a2',
'18a8211c-0649-467b-a7c5-50ebb4b9ca9d'
]
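Alternatively, you can create the store and index documents in one step with the static fromDocuments factory, sketched below; it accepts the same configuration object as the constructor:

// Sketch: build a new WeaviateStore and add the documents in a single call
const vectorStoreFromDocs = await WeaviateStore.fromDocuments(
  documents,
  embeddings,
  {
    client: weaviateClient,
    indexName: "Langchainjs_test",
  }
);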
Delete items from vector store
You can delete items from the store by passing their ids:
await vectorStore.delete({ ids: [uuids[3]] });
Query vector store
Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it during the running of your chain or agent. In Weaviate's v3 client, collections are the primary way to work with objects in the database. The collection object can be reused throughout the codebase.
Query directly
Performing a simple similarity search can be done as follows. The Filter helper class makes it easier to use filters with conditions. The v3 client streamlines how you use Filter so your code is cleaner and more concise.
See this page for more on Weaviate filter syntax.
import { Filters } from "weaviate-client";
const collection = weaviateClient.collections.use("Langchainjs_test");
const filter = Filters.and(
collection.filter.byProperty("source").equal("https://example.com")
);
const similaritySearchResults = await vectorStore.similaritySearch(
"biology",
2,
filter
);
for (const doc of similaritySearchResults) {
console.log(`* ${doc.pageContent} [${JSON.stringify(doc.metadata, null)}]`);
}
* The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* Mitochondria are made out of lipids [{"source":"https://example.com"}]
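Filters can also combine multiple conditions. For example, here is a sketch (assuming the title property from the schema shown earlier exists on the collection):

// Sketch: match documents satisfying either condition
const combinedFilter = Filters.or(
  collection.filter.byProperty("source").equal("https://example.com"),
  collection.filter.byProperty("title").like("*cell*")
);
const filteredResults = await vectorStore.similaritySearch(
  "biology",
  2,
  combinedFilter
);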
If you want to execute a similarity search and receive the corresponding scores you can run:
const similaritySearchWithScoreResults =
await vectorStore.similaritySearchWithScore("biology", 2, filter);
for (const [doc, score] of similaritySearchWithScoreResults) {
console.log(
`* [SIM=${score.toFixed(3)}] ${doc.pageContent} [${JSON.stringify(
doc.metadata
)}]`
);
}
* [SIM=0.835] The powerhouse of the cell is the mitochondria [{"source":"https://example.com"}]
* [SIM=0.852] Mitochondria are made out of lipids [{"source":"https://example.com"}]
Hybrid Search
In Weaviate, hybrid search combines the results of a vector search and a keyword (BM25F) search by fusing the two result sets. To change the relative weights of the keyword and vector components, set the alpha value in your query: alpha = 1 is a pure vector search, while alpha = 0 is a pure keyword search.
Check the docs for the full list of hybrid search options.
const results = await vectorStore.hybridSearch("biology", {
limit: 1,
alpha: 0.25,
targetVector: ["title"],
rerank: {
property: "title",
query: "greeting",
},
});
Retrieval Augmented Generation (RAG)
Retrieval Augmented Generation (RAG) combines information retrieval with generative AI models.
In Weaviate, a RAG query consists of two parts: a search query and a prompt for the model. Weaviate first performs the search, then passes both the search results and your prompt to a generative AI model before returning the generated response. The generate method takes:
- query: the query to search for.
- options: available options for performing the hybrid search.
- generate: available options for the generation; check the docs for the complete list.
// generativeParameters is exported by weaviate-client
import { generativeParameters } from "weaviate-client";

const results = await vectorStore.generate(
"hello world",
{
singlePrompt: {
prompt: "Translate this into German: {title}",
},
config: generativeParameters.openAI({
model: "gpt-3.5-turbo",
}),
},
{
limit: 2,
targetVector: ["title"],
}
);
Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains.
const retriever = vectorStore.asRetriever({
// Optional filter
filter: filter,
k: 2,
});
await retriever.invoke("biology");
[
Document {
pageContent: 'The powerhouse of the cell is the mitochondria',
metadata: { source: 'https://example.com' },
id: undefined
},
Document {
pageContent: 'Mitochondria are made out of lipids',
metadata: { source: 'https://example.com' },
id: undefined
}
]
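As a quick illustration of using the retriever in a chain, here is a minimal sketch (assuming @langchain/openai is installed and OPENAI_API_KEY is set; the model name is just an example) that retrieves context and passes it to a chat model:

import { ChatOpenAI } from "@langchain/openai";

// Sketch: retrieve context with the retriever, then pass it to a chat model
const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const question = "What is the powerhouse of the cell?";
const contextDocs = await retriever.invoke(question);
const context = contextDocs.map((doc) => doc.pageContent).join("\n");
const answer = await llm.invoke(
  `Answer the question using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`
);
console.log(answer.content);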
Usage for retrieval-augmented generation
For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:
- Tutorials: working with external knowledge.
- How-to: Question and answer with RAG
- Retrieval conceptual docs
API reference
For detailed documentation of all WeaviateStore features and configurations head to the API reference.
Related
- Vector store conceptual guide
- Vector store how-to guides