Releases: pinecone-io/pinecone-java-client
v2.1.0 Release
Added: Support to disable TLS for data plane operations
This release adds support for disabling TLS verification for data plane operations. Users can disable it by setting the `enableTLS` parameter of the `PineconeConfig` class to `false`. We do not recommend going to production with TLS verification disabled. The following example shows how to disable TLS verification:
```java
import io.pinecone.clients.Index;
import io.pinecone.configs.PineconeConfig;
import io.pinecone.configs.PineconeConnection;
import io.pinecone.unsigned_indices_model.QueryResponseWithUnsignedIndices;
import io.pinecone.proto.UpsertResponse;

import java.util.Arrays;

public class DisableTLSExample {
    public static void main(String[] args) {
        PineconeConfig config = new PineconeConfig("api");
        config.setHost("localhost:5081");
        config.setTLSEnabled(false);
        PineconeConnection connection = new PineconeConnection(config);
        Index index = new Index(connection, "example-index");

        // Data plane operations
        // 1. Upsert data
        UpsertResponse upsertResponse = index.upsert("v1", Arrays.asList(1f, 2f, 3f));
        // 2. Query data
        QueryResponseWithUnsignedIndices queryResponse = index.queryByVectorId(1, "v1", true, true);
    }
}
```
What's Changed
- Change "client" to "SDK" by @jseldess in #149
- Updating issue templates by @anawishnoff in #148
- Add support for disabling TLS by @rohanshah18 in #150
- Prep for release v2.1.0 and add proxy config & disabling TLS examples to README by @rohanshah18 in #152
New Contributors
- @jseldess made their first contribution in #149
- @anawishnoff made their first contribution in #148
Full Changelog: v2.0.0...v2.1.0
v2.0.0 Release
Added: API versioning
This release of the Pinecone Java SDK depends on API version `2024-07`. The v2 SDK release line should continue to receive fixes as long as the `2024-07` API version is in support.
Added: Deletion Protection
Use deletion protection to prevent your most important indexes from accidentally being deleted. This feature is available for both serverless and pod indexes.
To enable this feature for existing pod indexes, use `configurePodsIndex()`:
```java
import io.pinecone.clients.Pinecone;
import org.openapitools.control.client.model.DeletionProtection;
...

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
pinecone.configurePodsIndex(indexName, DeletionProtection.ENABLED);
```
When deletion protection is enabled, calls to `deleteIndex()` will fail until you first disable deletion protection:

```java
// To disable deletion protection for a pod index
pinecone.configurePodsIndex(indexName, DeletionProtection.DISABLED);
```
If you want to enable this feature at the time of index creation, `createIndex` now accepts a `DeletionProtection` enum argument. The feature is disabled by default.
```java
import io.pinecone.clients.Pinecone;
import org.openapitools.client.model.IndexModel;
import org.openapitools.control.client.model.DeletionProtection;
...

Pinecone pinecone = new Pinecone.Builder("PINECONE_API_KEY").build();
String indexName = "example-index";
String similarityMetric = "cosine";
int dimension = 1538;
String cloud = "aws";
String region = "us-west-2";
IndexModel indexModel = pinecone.createServerlessIndex(indexName, similarityMetric, dimension, cloud, region, DeletionProtection.ENABLED);
```
What's Changed
- Release candidate/2024 07 by @rohanshah18 in #142
- Update generated code for fixing Collection Model's dimension error by @rohanshah18 in #144
- Prepare for v2.0.0 release and update README + add migration guide by @rohanshah18 in #145
- Clean up README by @rohanshah18 in #146
Full Changelog: v1.2.2...v2.0.0
v1.2.2 Release
Added: Support for proxy configuration using proxyHost and proxyPort
Users can now configure proxy settings without manually instantiating network handlers for data and control plane operations. Until now, users had to instantiate multiple Pinecone classes, pass a `customManagedChannel` for data plane operations, and configure `OkHttpClient` separately for control plane operations, which involved more complex setup steps. This update simplifies proxy configuration within the SDK, ensuring easier setup by allowing users to specify `proxyHost` and `proxyPort`.
Note: Users need to set up certificate authorities (CAs) to establish secure connections. Certificates verify server identities and encrypt data exchanged between the SDK and servers.
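As an illustrative sketch (not part of the SDK), a custom CA truststore can be supplied to the JVM through the standard JSSE system properties; the path and password below are placeholders:

```java
public class TrustStoreConfig {
    public static void main(String[] args) {
        // Standard JSSE system properties; the path and password are placeholders.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```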
Example
- The following example demonstrates how to configure a proxy for data plane operations via `NettyChannelBuilder` vs. using the newly added proxy config support:
Before:
```java
import io.grpc.HttpConnectProxiedSocketAddress;
import io.grpc.ManagedChannel;
import io.grpc.ProxiedSocketAddress;
import io.grpc.ProxyDetector;
import io.pinecone.clients.Index;
import io.pinecone.configs.PineconeConfig;
import io.pinecone.configs.PineconeConnection;
import io.grpc.netty.GrpcSslContexts;
import io.grpc.netty.NegotiationType;
import io.grpc.netty.NettyChannelBuilder;
import io.pinecone.exceptions.PineconeException;

import javax.net.ssl.SSLException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.concurrent.TimeUnit;
import java.util.Arrays;
...

String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

PineconeConfig config = new PineconeConfig(apiKey);
String endpoint = System.getenv("PINECONE_HOST");
NettyChannelBuilder builder = NettyChannelBuilder.forTarget(endpoint);

ProxyDetector proxyDetector = new ProxyDetector() {
    @Override
    public ProxiedSocketAddress proxyFor(SocketAddress targetServerAddress) {
        SocketAddress proxyAddress = new InetSocketAddress(proxyHost, proxyPort);
        return HttpConnectProxiedSocketAddress.newBuilder()
                .setTargetAddress((InetSocketAddress) targetServerAddress)
                .setProxyAddress(proxyAddress)
                .build();
    }
};

// Create custom channel
try {
    builder = builder.overrideAuthority(endpoint)
            .negotiationType(NegotiationType.TLS)
            .keepAliveTimeout(5, TimeUnit.SECONDS)
            .sslContext(GrpcSslContexts.forClient().build())
            .proxyDetector(proxyDetector);
} catch (SSLException e) {
    throw new PineconeException("SSL error opening gRPC channel", e);
}

// Build the managed channel with the configured options
ManagedChannel channel = builder.build();
config.setCustomManagedChannel(channel);
PineconeConnection connection = new PineconeConnection(config);

Index index = new Index(connection, "PINECONE_INDEX_NAME");

// Data plane operations
// 1. Upsert data
System.out.println(index.upsert("v1", Arrays.asList(1F, 2F, 3F, 4F)));
// 2. Describe index stats
System.out.println(index.describeIndexStats());
```
After:
```java
import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
...

String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

Pinecone pinecone = new Pinecone.Builder(apiKey)
        .withProxy(proxyHost, proxyPort)
        .build();

Index index = pinecone.getIndexConnection("PINECONE_INDEX_NAME");

// Data plane operation routed through the proxy server
// 1. Upsert data
System.out.println(index.upsert("v1", Arrays.asList(1F, 2F, 3F, 4F)));
// 2. Describe index stats
System.out.println(index.describeIndexStats());
```
- The following example demonstrates how to configure a proxy for control plane operations via `OkHttpClient` vs. using the newly added proxy config support:
Before:
```java
import io.pinecone.clients.Pinecone;
import okhttp3.OkHttpClient;

import java.net.InetSocketAddress;
import java.net.Proxy;
...

String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

// Instantiate OkHttpClient instance and configure the proxy
OkHttpClient client = new OkHttpClient.Builder()
        .proxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort)))
        .build();

// Instantiate Pinecone class with the custom OkHttpClient object
Pinecone pinecone = new Pinecone.Builder(apiKey)
        .withOkHttpClient(client)
        .build();

// Control plane operation routed through the proxy server
System.out.println(pinecone.describeIndex("PINECONE_INDEX"));
```
After:
```java
import io.pinecone.clients.Pinecone;
...

String apiKey = System.getenv("PINECONE_API_KEY");
String proxyHost = System.getenv("PROXY_HOST");
int proxyPort = Integer.parseInt(System.getenv("PROXY_PORT"));

Pinecone pinecone = new Pinecone.Builder(apiKey)
        .withProxy(proxyHost, proxyPort)
        .build();

// Control plane operation routed through the proxy server
System.out.println(pinecone.describeIndex("PINECONE_INDEX"));
```
Fixed: Adding source tag and setting custom user agent string for gRPC
The user agent string was not set up correctly for gRPC calls: instead of using the custom sourceTag + user agent string, the user agent would always default to `netty-java-grpc/<grpc_version>`. This update fixes the issue, so if users set a source tag, it is appended to the custom user agent string.
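A minimal sketch of the intended behavior (the helper and user agent format below are illustrative, not the SDK's actual implementation): the source tag should be appended to the custom user agent string rather than dropped.

```java
public class UserAgentExample {
    // Illustrative helper; the real SDK builds its user agent internally.
    static String buildUserAgent(String sdkVersion, String sourceTag) {
        String userAgent = "lang=java; pinecone-client/" + sdkVersion;
        if (sourceTag != null && !sourceTag.isEmpty()) {
            // The fix: append the source tag instead of falling back to the gRPC default
            userAgent += "; source_tag=" + sourceTag;
        }
        return userAgent;
    }

    public static void main(String[] args) {
        System.out.println(buildUserAgent("1.2.2", "my_app"));
    }
}
```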
What's Changed
- Allow : in the source tag and add pinecone_test as a source tag for all integration tests by @rohanshah18 in #137
- Add proxy configuration for OkHTTPClient and NettyChannelBuilder by @rohanshah18 in #136
- Fix useragent for grpc by @rohanshah18 in #138
- Update pinecone client version to 1.2.2 and remove redundant client version declaration by @rohanshah18 in #139
Full Changelog: v1.2.1...v1.2.2
v1.2.1 Release
Fixed: Uber jar
The `META-INF/services` directory contains service provider configuration files. It wasn't shaded correctly, so users saw a `NameResolverProvider` error when running a data plane operation such as upsert using the uber jar.
Error:

```
Exception in thread "main" java.lang.IllegalArgumentException: Could not find a NameResolverProvider for index-name-somehost.pinecone.io
```

The error is now fixed, and users can use the uber jar for data plane operations successfully.
Example
The following example demonstrates how to use the uber jar in a `pom.xml` for a Maven project:

```xml
<dependencies>
  <dependency>
    <groupId>io.pinecone</groupId>
    <artifactId>pinecone-client</artifactId>
    <version>1.2.1</version>
    <classifier>all</classifier>
  </dependency>
</dependencies>
```
What's Changed
- Shadow META-INF/services directory by @rohanshah18 in #134
- Release v1.2.1 by @rohanshah18 in #135
Full Changelog: v1.2.0...v1.2.1
v1.2.0 Release
Added: apiException as the cause to HttpErrorMapper.mapHttpStatusError to facilitate easier debugging
When a request fails before sending or receiving an HTTP response, the exception cause is now considered. Previously, the client would return an empty error message in such cases.
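A minimal sketch of the exception-chaining pattern this fix applies (generic exception types stand in for the SDK's own classes):

```java
public class CauseExample {
    public static void main(String[] args) {
        // Stand-in for the underlying ApiException raised before an HTTP response exists
        Exception apiException = new RuntimeException("connection refused");
        try {
            // Attaching the cause preserves the original failure in stack traces
            throw new IllegalStateException("HTTP request failed", apiException);
        } catch (IllegalStateException e) {
            System.out.println(e.getCause().getMessage());
        }
    }
}
```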
Added: list vector IDs with pagination token and limit but without prefix
We have added the ability to list vector IDs with a pagination token and limit but without a prefix. Until now, users who wanted to use a pagination token had to use one of the following methods, i.e. they had to pass a prefix:
- `list(String namespace, String prefix, String paginationToken)`
- `list(String namespace, String prefix, String paginationToken, int limit)`
Example
The following demonstrates how to use the list endpoint with limit and pagination token to get vector IDs from a specific namespace.
```java
import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.ListResponse;
...

Pinecone pinecone = new Pinecone.Builder(System.getenv("PINECONE_API_KEY")).build();
String indexName = "example-index";
Index index = pinecone.getIndexConnection(indexName);

// Get the pagination token
String paginationToken = index.list("example-namespace", 3).getPagination().getNext();
// Get vectors with limit 3 with the paginationToken obtained from the previous step
ListResponse listResponse = index.list("example-namespace", 3, paginationToken);
```
What's Changed
- Fix `generateJavadoc` errors so we can run in CI by @austin-denoble in #128
- Added ApiException as a cause to mapHttpStatusError by @kkashkovskii in #127
- Add additional list functions to reach parity with RESTful list requests #130 by @rasharab in #133
- Prep for v1.2.0 release by @rohanshah18 in #131
New Contributors
- @kkashkovskii made their first contribution in #127
- @rasharab made their first contribution in #133
Full Changelog: v1.1.0...v1.2.0
v1.1.0 Release
Added: List vector IDs
We have added the ability to list vector IDs as part of data plane operations. By default, list returns up to 100 IDs at a time, in sorted order. If the limit parameter is set, list returns up to that number of IDs instead. The list operation can be called using any of the following methods:
- list()
- list(String namespace)
- list(String namespace, int limit)
- list(String namespace, String prefix)
- list(String namespace, String prefix, int limit)
- list(String namespace, String prefix, String paginationToken)
- list(String namespace, String prefix, String paginationToken, int limit)
Briefly, the parameters are explained below:
- prefix – The prefix with which vector IDs must start to be included in the response.
- paginationToken – The token to paginate through the list of vector IDs.
- limit – The maximum number of vector IDs you want to retrieve.
- namespace – The namespace to list vector IDs from.
Example
The following demonstrates how to use the list endpoint to get vector IDs from a specific namespace, filtered by a given prefix.
```java
import io.pinecone.clients.Index;
import io.pinecone.clients.Pinecone;
import io.pinecone.proto.ListResponse;
...

Pinecone pinecone = new Pinecone.Builder(System.getenv("PINECONE_API_KEY")).build();
String indexName = "example-index";
Index index = pinecone.getIndexConnection(indexName);
ListResponse listResponse = index.list("example-namespace", "prefix-");
```
What's Changed
- Adds v1-migration.md by @ssmith-pc in #112
- Update upsert example by @rohanshah18 in #113
- Add list endpoint by @aulorbe in #115
- Refactor data plane tests to use `TestResourcesManager`, clean up lengthy `Thread.sleep()` calls, general clean up by @austin-denoble in #99
- Add new `build-and-publish-docs.yml` GitHub Workflow by @austin-denoble in #116
- Remove serverless public preview warning by @austin-denoble in #121
- Fix `setup-gradle` step in `build-docs` action by @austin-denoble in #123
- Use correct input for `gradle-version` by @austin-denoble in #125
- Update changelogs, README, and SDK version for v1.1 release by @rohanshah18 in #124
Full Changelog: v1.0.0...v1.1.0
v1.0.0 Release
- Existing users will want to check out the v1.0.0 Migration Guide for a walkthrough of all the new features and changes.
- New users should start with the README
Serverless indexes are currently in public preview, so make sure to review the current limitations and test thoroughly before using in production.
Changes overview
- Renamed `PineconeControlPlaneClient` to `Pinecone` and added overloaded methods, so you are not required to construct request objects.
- Added data plane wrappers `Index` and `AsyncIndex`, which eliminate the need to create Java classes for request objects. The `Index` class is the blocking gRPC stub, while `AsyncIndex` is an async gRPC class for data plane operations.
- Removed `PineconeClient` and `PineconeConnectionConfig`, and renamed `PineconeClientConfig` to `PineconeConfig`. `PineconeConfig` supports setting custom gRPC-managed channels for data plane operations along with setting a source tag.
- Updated dependencies to address vulnerabilities:
  - io.grpc:grpc-protobuf: from 1.57.0 to 1.61.0
  - io.grpc:grpc-stub: from 1.57.0 to 1.61.0
  - io.grpc:grpc-netty: from 1.57.0 to 1.61.0
  - com.squareup.okhttp3:okhttp: from 4.10.0 to 4.12.0
- Added the following model classes to address the limitation that Java has no native datatype for unsigned 32-bit integers, which is the expected datatype of Pinecone's backend API. Sparse indices will now accept Java `long` (rather than `int`), with an input range of `[0, 2^32 - 1]`. Everything outside of this range will throw a `PineconeValidationException`:
  - QueryResponseWithUnsignedIndices.java
  - ScoredVectorWithUnsignedIndices.java
  - SparseValuesWithUnsignedIndices.java
  - VectorWithUnsignedIndices.java
- Added read units as a part of `queryResponse`.
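The unsigned 32-bit range check described above can be sketched in plain Java as follows (the validation helper is illustrative; the SDK throws `PineconeValidationException` rather than `IllegalArgumentException`):

```java
public class UnsignedIndexCheck {
    // Largest value representable as an unsigned 32-bit integer: 2^32 - 1
    static final long MAX_UNSIGNED_INT = (1L << 32) - 1;

    // Illustrative range check; the SDK throws PineconeValidationException instead
    static long validateSparseIndex(long index) {
        if (index < 0 || index > MAX_UNSIGNED_INT) {
            throw new IllegalArgumentException("Sparse index out of [0, 2^32 - 1]: " + index);
        }
        return index;
    }

    public static void main(String[] args) {
        System.out.println(validateSparseIndex(4294967295L)); // largest valid value
        try {
            validateSparseIndex(4294967296L); // 2^32 is out of range
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```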
What's Changed
- Revert "Revert "Refactor dataplane "" by @rohanshah18 in #69
- Add data plane wrapper and accept sparse indices as unsigned 32-bit integer by @rohanshah18 in #70
- Refactor to improve user-experience and delete support for endpoint construction for data plane operations using projectId, env, and indexName by @rohanshah18 in #72
- Add Usage to QueryResponseWithUnsignedIndices by @austin-denoble in #74
- Regenerate gRPC classes after removing grpc-gateway / grpc -> OpenAPI spec metadata by @austin-denoble in #78
- Add PineconeDataPlaneClient Interface by @austin-denoble in #76
- Refactor configs and disable collections and configure tests by @rohanshah18 in #77
- Fix Collection and ConfigureIndex tests by @austin-denoble in #79
- Add upsert(List, String namespace), Rename clients, add createIndexConnection() and createAsyncIndexConnection() by @rohanshah18 in #80
- Refactor `CollectionTest`, `ConfigureIndexTest`, and `IndexManager` to improve integration test speed and reliability by @austin-denoble in #81
- Add Create Index Error + Optional-argument integration tests for Pod + Serverless by @austin-denoble in #82
- Add data plane tests for serverless index by @rohanshah18 in #83
- Add `user-agent` to control plane operations, add `sourceTag`, re-enable `Pinecone` client unit tests by @austin-denoble in #84
- Update OkHttpClient dependency version to 4.12.0 to address vulnerability issues and clean up codebase by @rohanshah18 in #86
- Poll for Index ready during cleanup in `ConfigureIndexTest` by @austin-denoble in #87
- Update gRPC version to 1.60.2 to address vulnerability concerns and fix data plane integration tests by @rohanshah18 in #88
- Handle nulls for some creation and configure methods and provide alternative method for creating collections by @aulorbe in #91
- Add `TestIndexResourcesManager` and `CleanupAllTestResourcesListener` by @austin-denoble in #89
- Abstract away Request objs in the ConfigureIndex method by @aulorbe in #93
- Add concurrent HashMap for storing indexNames and connection objects by @rohanshah18 in #92
- Add new CreateServerlessIndex method by @aulorbe in #94
- [Fix] Accept additional properties in API JSON responses by @jhamon in #95
- [Chore] Build output shows which tests are run by @jhamon in #96
- [Chore] Bump GHA gradle action by @jhamon in #97
- Add new createPodsIndex method by @aulorbe in #98
- Deprecate createIndex method by @aulorbe in #105
- Add new `queryByVectorId` and `queryByVector` functions to `IndexInterface` by @austin-denoble in #106
- Add javadoc docstrings to `Pinecone` class by @austin-denoble in #104
- Add doc-strings for unsigned indices model classes by @rohanshah18 in #108
- Updated javadoc for Pinecone class by @ssmith-pc in #111
- Add javadoc docstrings to `IndexInterface`, `Index`, `AsyncIndex` classes by @austin-denoble in #109
- Add docstrings for configs by @rohanshah18 in #110
- Update README and examples to reflect v1 changes by @aulorbe in #107
New Contributors
- @aulorbe made their first contribution in #91
- @ssmith-pc made their first contribution in #111
Full Changelog: v0.8.1...v1.0.0
v0.8.1 Release
Updated: Class `PodSpecMetadataConfig` is replaced with `CreateIndexRequestSpecPodMetadataConfig`
When new properties were added to API responses, the Java client would error. The generated control plane code has therefore been updated to ignore additional fields in API responses. As a result of this change, users who were relying on `PodSpecMetadataConfig` will now have to replace it with the `CreateIndexRequestSpecPodMetadataConfig` class.
Example
The following example shows how to replace `PodSpecMetadataConfig` with `CreateIndexRequestSpecPodMetadataConfig`.
```java
// v0.8.0
PodSpecMetadataConfig podSpecMetadataConfig = new PodSpecMetadataConfig();
List<String> indexedItems = Arrays.asList("A", "B", "C", "D");
podSpecMetadataConfig.setIndexed(indexedItems);
CreateIndexRequestSpecPod requestSpecPod = new CreateIndexRequestSpecPod()
        .pods(2)
        .podType("p1.x2")
        .replicas(2)
        .metadataConfig(podSpecMetadataConfig)
        .sourceCollection("step");

// v0.8.1: replace the class name
CreateIndexRequestSpecPodMetadataConfig podSpecMetadataConfig = new CreateIndexRequestSpecPodMetadataConfig();
List<String> indexedItems = Arrays.asList("A", "B", "C", "D");
podSpecMetadataConfig.setIndexed(indexedItems);
CreateIndexRequestSpecPod requestSpecPod = new CreateIndexRequestSpecPod()
        .pods(2)
        .podType("p1.x2")
        .replicas(2)
        .metadataConfig(podSpecMetadataConfig)
        .sourceCollection("step");
```
What's Changed
- Update changelogs, sdk version, and user-agent for v0.8.1 release by @rohanshah18 in #103
- [Fix] Accept additional properties in API JSON responses by @jhamon in #101
Full Changelog: v0.8.0...v0.8.1
v0.8.0 Release
Added: Control plane operations for serverless indexes
The Java SDK now supports control plane operations for serverless indexes. Users can now create, list, describe, and delete serverless indexes. Note that `PineconeIndexOperationClient` has been renamed to `PineconeControlPlaneClient`.
Example
The following example shows how to create, list, describe, and delete serverless indexes:
```java
import io.pinecone.PineconeControlPlaneClient;
import io.pinecone.helpers.RandomStringBuilder;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openapitools.client.model.*;

import java.util.Objects;

import static io.pinecone.helpers.IndexManager.isIndexReady;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

public class ServerlessIndexOperations {
    public void createAndDelete() throws InterruptedException {
        String indexName = RandomStringBuilder.build("index-name", 8);
        PineconeControlPlaneClient controlPlaneClient = new PineconeControlPlaneClient("PINECONE_API_KEY");
        ServerlessSpec serverlessSpec = new ServerlessSpec().cloud(ServerlessSpec.CloudEnum.AWS).region("us-west-2");
        CreateIndexRequestSpec createIndexRequestSpec = new CreateIndexRequestSpec().serverless(serverlessSpec);

        // Create the index
        CreateIndexRequest createIndexRequest = new CreateIndexRequest()
                .name(indexName)
                .metric(IndexMetric.COSINE)
                .dimension(10)
                .spec(createIndexRequestSpec);
        controlPlaneClient.createIndex(createIndexRequest);

        // Wait until index is ready
        Thread.sleep(3500);

        // Describe the index
        IndexModel indexModel = controlPlaneClient.describeIndex(indexName);
        assertNotNull(indexModel);
        assertEquals(10, indexModel.getDimension());
        assertEquals(indexName, indexModel.getName());
        assertEquals(IndexMetric.COSINE, indexModel.getMetric());

        // List the index
        IndexList indexList = controlPlaneClient.listIndexes();
        assert !Objects.requireNonNull(indexList.getIndexes()).isEmpty();

        // Delete the index
        controlPlaneClient.deleteIndex(indexName);
    }
}
```
Updated: Control plane operations for pod indexes
We have updated the APIs for the create, configure, list, describe, and delete operations for pod indexes.
Example
The following example shows how to create, list, describe, and delete pod indexes:
```java
import io.pinecone.PineconeControlPlaneClient;
import io.pinecone.helpers.RandomStringBuilder;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openapitools.client.model.*;

import java.util.Objects;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotNull;

public class PodIndexOperations {
    public void createAndDelete() throws InterruptedException {
        String apiKey = System.getenv("PINECONE_API_KEY");
        String environment = System.getenv("PINECONE_ENVIRONMENT");
        String indexName = RandomStringBuilder.build("index-name", 8);
        PineconeControlPlaneClient controlPlaneClient = new PineconeControlPlaneClient(apiKey);
        CreateIndexRequestSpecPod podSpec = new CreateIndexRequestSpecPod().environment(environment).podType("p1.x1");
        CreateIndexRequestSpec createIndexRequestSpec = new CreateIndexRequestSpec().pod(podSpec);

        // Create the index
        CreateIndexRequest createIndexRequest = new CreateIndexRequest()
                .name(indexName)
                .metric(IndexMetric.COSINE)
                .dimension(10)
                .spec(createIndexRequestSpec);
        controlPlaneClient.createIndex(createIndexRequest);

        // Wait until index is ready
        Thread.sleep(3500);

        // Describe the index
        IndexModel indexModel = controlPlaneClient.describeIndex(indexName);
        assertNotNull(indexModel);
        assertEquals(10, indexModel.getDimension());
        assertEquals(indexName, indexModel.getName());
        assertEquals(IndexMetric.COSINE, indexModel.getMetric());

        // List the index
        IndexList indexList = controlPlaneClient.listIndexes();
        assert !Objects.requireNonNull(indexList.getIndexes()).isEmpty();

        // Delete the index
        controlPlaneClient.deleteIndex(indexName);
    }
}
```
The following example shows how to scale a pod index up and down using the configure index operation:
```java
public class PodIndexOperations {
    public void scaleUpAndDownPodIndex() {
        try {
            String apiKey = System.getenv("PINECONE_API_KEY");
            String environment = System.getenv("PINECONE_ENVIRONMENT");
            String indexName = RandomStringBuilder.build("index-name", 8);
            PineconeControlPlaneClient controlPlaneClient = new PineconeControlPlaneClient(apiKey);

            // Scale up for the test
            ConfigureIndexRequestSpecPod pod = new ConfigureIndexRequestSpecPod().replicas(3);
            ConfigureIndexRequestSpec spec = new ConfigureIndexRequestSpec().pod(pod);
            ConfigureIndexRequest configureIndexRequest = new ConfigureIndexRequest().spec(spec);
            controlPlaneClient.configureIndex(indexName, configureIndexRequest);

            // Verify the scaled up replicas
            PodSpec podSpec = controlPlaneClient.describeIndex(indexName).getSpec().getPod();
            assert (podSpec != null);
            assertEquals(podSpec.getReplicas(), 3);

            // Scaling down
            pod = new ConfigureIndexRequestSpecPod().replicas(1);
            spec = new ConfigureIndexRequestSpec().pod(pod);
            configureIndexRequest = new ConfigureIndexRequest().spec(spec);
            controlPlaneClient.configureIndex(indexName, configureIndexRequest);

            // Verify replicas were scaled down
            podSpec = controlPlaneClient.describeIndex(indexName).getSpec().getPod();
            assert (podSpec != null);
            assertEquals(podSpec.getReplicas(), 1);
        } catch (Exception exception) {
            throw new PineconeException("Test failed: " + exception.getLocalizedMessage());
        }
    }
}
```
Added: Support for collections in the Java SDK
We have added support to create, list, describe, and delete collections in the Java SDK.
Example
The following example shows how to create, list, describe, and delete collections:
```java
public class Collections {
    public void testIndexToCollectionHappyPath() throws InterruptedException {
        String apiKey = System.getenv("PINECONE_API_KEY");
        String environment = System.getenv("PINECONE_ENVIRONMENT");
        PineconeControlPlaneClient controlPlaneClient = new PineconeControlPlaneClient(apiKey);
        String indexName = RandomStringBuilder.build("collection-test", 8);
        ArrayList<String> indexes = new ArrayList<>();
        ArrayList<String> collections = new ArrayList<>();
        IndexMetric indexMetric = IndexMetric.COSINE;
        List<String> upsertIds = Arrays.asList("v1", "v2", "v3");
        String namespace = RandomStringBuilder.build("ns", 8);
        int dimension = 4;
        String collectionName = RandomStringBuilder.build("collection-test", 8);

        // Create collection from index
        CreateCollectionRequest createCollectionRequest = new CreateCollectionRequest().name(collectionName).source(indexName);
        CollectionModel collection = controlPlaneClient.createCollection(createCollectionRequest);
        assertEquals(collection.getStatus(), CollectionModel.StatusEnum.INITIALIZING);

        // Wait until collection is ready
        Thread.sleep(120000);

        // List collections
        List<CollectionModel> collectionList = controlPlaneClient.listCollections().getCollections();

        // Verify collection is listed
        boolean collectionFound = false;
        if (collectionList != null && !collectionList.isEmpty()) {
            for (CollectionModel col : collectionList) {
                if (col.getName().equals(collectionName)) {
                    collectionFound = true;
                    break;
                }
            }
        }
        if (!collectionFound) {
            fail("Collection " + collectionName + " was not found when listing collections");
        }

        // Describe the collection
        collection = controlPlaneClient.describeCollection(collectionName);
        assertEquals(collection.getStatus(), CollectionModel.StatusEnum.READY);
        assertEquals(collection.getDimension(), dimension);
        assertEquals(collection.getVectorCount(), 3);
        assertNotEquals(collection.getVectorCount(), null);
        assertTrue(collection.getSize() > 0);

        // Delete the collection
        controlPlaneClient.deleteCollection(collectionName);
        collections.remove(collectionName);
        Thread.sleep(2500);
    }
}
```
What's Changed
- Add global control plane code by @rohanshah18 in #59
- Update index operations by @rohanshah18 in #62
- Update configure index test and clean up control plane client by @rohanshah18 in #63
- Add collections operations to `PineconeControlPlaneClient` with integration tests by @austin-denoble in #65
- Refactor dataplane by @rohanshah18 in #66
- Revert "Refactor dataplane " by ...
v0.7.4 Release
Fixed: Create and listIndexes calls for the gcp-starter environment
`Create` and `listIndexes` calls for the `gcp-starter` environment were failing because the path was set up incorrectly. Users can now create and list indexes in the `gcp-starter` environment using the Java SDK.
Added: Retry with assert mechanism
The integration test suite would often fail on the first try, so to make it more robust, we have added a retry-with-assert mechanism.
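A minimal sketch of such a retry-with-assert helper (the names and signature below are illustrative, not the test suite's actual code):

```java
public class RetryAssert {
    interface Check {
        void run();
    }

    // Re-run the assertion block until it passes or attempts are exhausted
    static void assertWithRetry(Check check, int maxAttempts, long delayMs) throws InterruptedException {
        for (int attempt = 1; ; attempt++) {
            try {
                check.run();
                return; // assertion passed
            } catch (AssertionError e) {
                if (attempt >= maxAttempts) throw e; // give up after the last attempt
                Thread.sleep(delayMs);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Simulated flaky check that only passes on the third attempt
        assertWithRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new AssertionError("not ready yet");
        }, 5, 10);
        System.out.println("passed after " + calls[0] + " attempts");
    }
}
```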
Deprecated: queries parameter
Added a deprecation warning for the `queries` parameter of `QueryRequest`. Use `vector` and its associated methods instead.
Example
The following example shows how the deprecated `queries` parameter was used vs. how to use the methods associated with `vector` when querying:
```java
package io.pinecone.integration.dataplane;

import io.pinecone.*;
import io.pinecone.helpers.RandomStringBuilder;
import io.pinecone.model.IndexMeta;
import io.pinecone.proto.QueryRequest;
import io.pinecone.proto.QueryResponse;
import io.pinecone.proto.VectorServiceGrpc;

import java.util.Arrays;

public class QueryVectors {
    public static void main(String[] args) throws InterruptedException {
        PineconeClientConfig config = new PineconeClientConfig()
                .withApiKey("YOUR_API_KEY")
                .withEnvironment("gcp-starter");
        PineconeIndexOperationClient controlPlaneClient = new PineconeIndexOperationClient(config);
        String indexName = "index-name";

        PineconeClient dataPlaneClient = new PineconeClient(config);
        IndexMeta indexMeta = controlPlaneClient.describeIndex(indexName);
        String host = indexMeta.getStatus().getHost();
        PineconeConnection connection = dataPlaneClient.connect(
                new PineconeConnectionConfig()
                        .withConnectionUrl("https://" + host));
        VectorServiceGrpc.VectorServiceBlockingStub blockingStub = connection.getBlockingStub();
        String namespace = RandomStringBuilder.build("ns", 8);

        // Commented code shows the example of using the deprecated queries parameter, which is part of QueryRequest
        /*
        float[] rawVector = {1.0F, 2.0F, 3.0F};
        QueryVector queryVector = QueryVector.newBuilder()
                .addAllValues(Floats.asList(rawVector))
                .setFilter(Struct.newBuilder()
                        .putFields("some_field", Value.newBuilder()
                                .setStructValue(Struct.newBuilder()
                                        .putFields("$lt", Value.newBuilder()
                                                .setNumberValue(3)
                                                .build()))
                                .build())
                        .build())
                .setNamespace(namespace)
                .build();

        QueryRequest batchQueryRequest = QueryRequest.newBuilder()
                .addQueries(queryVector) // addQueries() is deprecated as it belongs to the queries parameter
                .setNamespace(namespace)
                .setTopK(2)
                .setIncludeMetadata(true)
                .build();
        QueryResponse deprecatedQueryResponse = blockingStub.query(batchQueryRequest);
        */

        // Below example shows using addAllVector(), which is associated with the vector parameter of QueryRequest
        Iterable<Float> iterableVector = Arrays.asList(1.0F, 2.0F, 3.0F);
        QueryRequest queryRequest = QueryRequest.newBuilder()
                .addAllVector(iterableVector)
                .setNamespace(namespace)
                .setTopK(2)
                .setIncludeMetadata(true)
                .build();
        QueryResponse queryResponse = blockingStub.query(queryRequest);
    }
}
```
What's Changed
- Fix path for gcp-starter env, add assert with retry mechanism, and add deprecation warning in vector_service.proto by @rohanshah18 in #57
- Add source_collection to indexMetaDatabase object and ignore newly added fields by @rohanshah18 in #58
Full Changelog: v0.7.2...v0.7.4