Pinecone
You can use Pinecone vector stores with LangChain. To get started, install the LangChain integration package along with the official Pinecone SDK:
- npm: npm install -S @langchain/pinecone @pinecone-database/pinecone
- Yarn: yarn add @langchain/pinecone @pinecone-database/pinecone
- pnpm: pnpm add @langchain/pinecone @pinecone-database/pinecone
The examples below use OpenAI embeddings, but you can swap in whichever embedding provider you'd like. Keep in mind that different embedding models produce vectors with different numbers of dimensions, and that dimension must match the one your Pinecone index was created with:
- npm: npm install -S @langchain/openai
- Yarn: yarn add @langchain/openai
- pnpm: pnpm add @langchain/openai
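Whichever provider you choose, the embedding model's output dimension has to line up with your index's dimension. A minimal sketch of pinning this down explicitly (the model name and the commented-out dimensions value are illustrative; the dimensions option only applies to models that support shortened embeddings, such as OpenAI's text-embedding-3 family):

import { OpenAIEmbeddings } from "@langchain/openai";

// The embedding model's output dimension must match the dimension your
// Pinecone index was created with, or upserts will be rejected.
const embeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-small", // outputs 1536-dimensional vectors by default
  // dimensions: 512, // optionally truncate the output to match a smaller index
});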
Index docs
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);
const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "pinecone is a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "pinecones are the woody fruiting body of a pine tree",
  }),
];

await PineconeStore.fromDocuments(docs, new OpenAIEmbeddings(), {
  pineconeIndex,
  maxConcurrency: 5, // Maximum number of batch requests to allow at once. Each batch is 1000 vectors.
});
API Reference:
- Document from @langchain/core/documents
- OpenAIEmbeddings from @langchain/openai
- PineconeStore from @langchain/pinecone
Query docs
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);
/**
 * Pinecone allows you to partition the records in an index into namespaces.
 * Queries and other operations are then limited to one namespace,
 * so different requests can search different subsets of your index.
 * Read more about namespaces here: https://docs.pinecone.io/guides/indexes/use-namespaces
 *
 * NOTE: If your index stores records under a namespace, you must provide that
 * namespace when creating the PineconeStore, or your queries will come back empty.
 */
const namespace = "pinecone";
const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex, namespace }
);

/* Search the vector DB with metadata filters */
const results = await vectorStore.similaritySearch("pinecone", 1, {
  foo: "bar",
});

console.log(results);
/*
  [
    Document {
      pageContent: 'pinecone is a vector db',
      metadata: { foo: 'bar' }
    }
  ]
*/
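If you want the raw similarity scores alongside the documents, PineconeStore also implements similaritySearchWithScore, which returns [Document, score] pairs. A minimal sketch reusing the vectorStore from above (the printed score is illustrative):

/* Each result is a [Document, score] tuple */
const resultsWithScores = await vectorStore.similaritySearchWithScore(
  "pinecone",
  1,
  { foo: "bar" }
);

console.log(resultsWithScores);
/*
  [
    [
      Document {
        pageContent: 'pinecone is a vector db',
        metadata: { foo: 'bar' }
      },
      0.87
    ]
  ]
*/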
API Reference:
- OpenAIEmbeddings from @langchain/openai
- PineconeStore from @langchain/pinecone
Delete docs
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { Document } from "@langchain/core/documents";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);
const embeddings = new OpenAIEmbeddings();
const pineconeStore = new PineconeStore(embeddings, { pineconeIndex });
const docs = [
  new Document({
    metadata: { foo: "bar" },
    pageContent: "pinecone is a vector db",
  }),
  new Document({
    metadata: { foo: "bar" },
    pageContent: "the quick brown fox jumped over the lazy dog",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "lorem ipsum dolor sit amet",
  }),
  new Document({
    metadata: { baz: "qux" },
    pageContent: "pinecones are the woody fruiting body of a pine tree",
  }),
];

const pageContent = "some arbitrary content";

// addDocuments also takes an optional { ids: [] } parameter for upserting with your own IDs
const ids = await pineconeStore.addDocuments(docs);

const results = await pineconeStore.similaritySearch(pageContent, 2, {
  foo: "bar",
});
console.log(results);
/*
  [
    Document {
      pageContent: 'pinecone is a vector db',
      metadata: { foo: 'bar' }
    },
    Document {
      pageContent: 'the quick brown fox jumped over the lazy dog',
      metadata: { foo: 'bar' }
    }
  ]
*/

await pineconeStore.delete({
  ids: [ids[0], ids[1]],
});

const results2 = await pineconeStore.similaritySearch(pageContent, 2, {
  foo: "bar",
});

console.log(results2);
/*
  []
*/
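As the comment above notes, addDocuments also accepts an { ids } option, which lets you assign your own record IDs and therefore upsert: re-adding a document under an existing ID overwrites the stored record. The delete method can likewise clear everything at once. A brief sketch under those assumptions (the ID values are illustrative):

// Upsert: re-adding a document with the same ID overwrites the stored record.
await pineconeStore.addDocuments(docs, {
  ids: ["id-1", "id-2", "id-3", "id-4"],
});

// Delete every record in the index (scoped to the store's namespace, if one was set).
await pineconeStore.delete({ deleteAll: true });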
API Reference:
- Document from @langchain/core/documents
- OpenAIEmbeddings from @langchain/openai
- PineconeStore from @langchain/pinecone
Maximal marginal relevance search
Pinecone supports maximal marginal relevance (MMR) search: it first fetches the documents most similar to the query, then reranks them to optimize for diversity among the returned results.
/* eslint-disable @typescript-eslint/no-non-null-assertion */
import { Pinecone } from "@pinecone-database/pinecone";
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
// Instantiate a new Pinecone client, which will automatically read the
// env vars: PINECONE_API_KEY and PINECONE_ENVIRONMENT which come from
// the Pinecone dashboard at https://app.pinecone.io
const pinecone = new Pinecone();
const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX!);
/**
 * Pinecone allows you to partition the records in an index into namespaces.
 * Queries and other operations are then limited to one namespace,
 * so different requests can search different subsets of your index.
 * Read more about namespaces here: https://docs.pinecone.io/guides/indexes/use-namespaces
 *
 * NOTE: If your index stores records under a namespace, you must provide that
 * namespace when creating the PineconeStore, or your queries will come back empty.
 */
const namespace = "pinecone";
const vectorStore = await PineconeStore.fromExistingIndex(
  new OpenAIEmbeddings(),
  { pineconeIndex, namespace }
);

/* Perform a maximal marginal relevance search, optionally with metadata filters */
const results = await vectorStore.maxMarginalRelevanceSearch("pinecone", {
  k: 5,
  fetchK: 20, // The number of initial documents to fetch for reranking (20 is the default).
  // You can pass a metadata filter as well
  // filter: {},
});

console.log(results);
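You can also run MMR through the standard retriever interface, which is what most chains and agents consume. A minimal sketch reusing the vectorStore from above:

// Wrap the store as a retriever that performs MMR under the hood.
const retriever = vectorStore.asRetriever({
  searchType: "mmr",
  searchKwargs: {
    fetchK: 20, // number of candidates to fetch before reranking
  },
  k: 5, // number of diverse results to return
});

const mmrDocs = await retriever.invoke("pinecone");
console.log(mmrDocs);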
API Reference:
- OpenAIEmbeddings from @langchain/openai
- PineconeStore from @langchain/pinecone
Related
- Vector store conceptual guide
- Vector store how-to guides