fix: updated class name
shahrear33 authored and stainless-app[bot] committed Jan 5, 2025
1 parent cf7daea commit d4f46aa
Showing 22 changed files with 7,228 additions and 732 deletions.
README.md: 59 changes (27 additions & 32 deletions)
@@ -1,8 +1,8 @@
-# Vlm Node API Library
+# VlmRun Node API Library

[![NPM version](https://img.shields.io/npm/v/vlmrun.svg)](https://npmjs.org/package/vlmrun) ![npm bundle size](https://img.shields.io/bundlephobia/minzip/vlmrun)

-This library provides convenient access to the Vlm REST API from server-side TypeScript or JavaScript.
+This library provides convenient access to the VlmRun REST API from server-side TypeScript or JavaScript.

The REST API documentation can be found on [vlm.run](https://vlm.run/). The full API of this library can be found in [api.md](api.md).

@@ -20,32 +20,27 @@ The full API of this library can be found in [api.md](api.md).

<!-- prettier-ignore -->
```js
-import Vlm from 'vlmrun';
+// Example usage
+import VlmRun from 'vlmrun';

-const client = new Vlm();
+const client = new VlmRun({
+  bearerToken: process.env.BEARER_TOKEN || '',
+  baseUrl: 'https://dev.vlm.run',
+});
+
+const imageUrl = 'https://storage.googleapis.com/vlm-data-public-prod/hub/examples/document.invoice-extraction/invoice_1.jpg';

async function main() {
-  const completion = await client.openai.chatCompletions.create({ messages: [{}] });
-
-  console.log(completion.id);
+  try {
+    const response = await client.image.generate({
+      image: imageUrl,
+      domain: 'document.invoice',
+      model: 'vlm-1',
+    });
+    console.log('Response:', response);
+  } catch (error) {
+    console.error('Error:', error);
+  }
}

main();
```
-
-### Request & Response types
-
-This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
-
-<!-- prettier-ignore -->
-```ts
-import Vlm from 'vlmrun';
-
-const client = new Vlm();
-
-async function main() {
-  const params: Vlm.OpenAI.ChatCompletionCreateParams = { messages: [{}] };
-  const completion: Vlm.OpenAI.Completion = await client.openai.chatCompletions.create(params);
-}
-
-main();
-```
@@ -100,7 +95,7 @@ You can use the `maxRetries` option to configure or disable this:
<!-- prettier-ignore -->
```js
// Configure the default for all requests:
-const client = new Vlm({
+const client = new VlmRun({
maxRetries: 0, // default is 2
});
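
// SDKs of this style typically also accept per-request options as a second
// argument. A hedged sketch (the options argument is an assumption here;
// `imageUrl` as in the quickstart above):
await client.image.generate({ image: imageUrl, domain: 'document.invoice' }, { maxRetries: 5 });
```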

Expand All @@ -117,7 +112,7 @@ Requests time out after 1 minute by default. You can configure this with a `time
<!-- prettier-ignore -->
```ts
// Configure the default for all requests:
-const client = new Vlm({
+const client = new VlmRun({
timeout: 20 * 1000, // 20 seconds (default is 1 minute)
});
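
// A per-request timeout typically follows the same options pattern (a sketch:
// assumes the second-argument request options shown for `maxRetries` above):
await client.image.generate({ image: imageUrl, domain: 'document.invoice' }, { timeout: 5 * 1000 });
```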

@@ -141,7 +136,7 @@ You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.

<!-- prettier-ignore -->
```ts
-const client = new Vlm();
+const client = new VlmRun();

const response = await client.openai.chatCompletions.create({ messages: [{}] }).asResponse();
console.log(response.headers.get('X-My-Header'));
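
// The `.withResponse()` method mentioned above returns the parsed data together
// with the raw `Response` (a sketch: the `{ data, response }` shape is an assumption):
const { data: completion, response: raw } = await client.openai.chatCompletions
  .create({ messages: [{}] })
  .withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(completion.id);
```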
@@ -204,13 +199,13 @@ By default, this library uses `node-fetch` in Node, and expects a global `fetch` function in other environments.

If you would prefer to use a global, web-standards-compliant `fetch` function even in a Node environment,
(for example, if you are running Node with `--experimental-fetch` or using NextJS which polyfills with `undici`),
add the following import before your first import `from "Vlm"`:
add the following import before your first import `from "VlmRun"`:

```ts
// Tell TypeScript and the package to use the global web fetch instead of node-fetch.
// Note, despite the name, this does not add any polyfills, but expects them to be provided if needed.
import 'vlmrun/shims/web';
-import Vlm from 'vlmrun';
+import VlmRun from 'vlmrun';
```

To do the inverse, add `import "vlmrun/shims/node"` (which does import polyfills).
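For example, a minimal sketch of that inverse, restating the line above:

```ts
// Force the node-fetch implementation; unlike the web shim, this import
// does pull in polyfills.
import 'vlmrun/shims/node';
import VlmRun from 'vlmrun';
```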
@@ -223,9 +218,9 @@ which can be used to inspect or alter the `Request` or `Response` before/after each request:

```ts
import { fetch } from 'undici'; // as one example
-import Vlm from 'vlmrun';
+import VlmRun from 'vlmrun';

-const client = new Vlm({
+const client = new VlmRun({
fetch: async (url: RequestInfo, init?: RequestInit): Promise<Response> => {
console.log('About to make a request', url, init);
const response = await fetch(url, init);
@@ -250,7 +245,7 @@ import http from 'http';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Configure the default for all requests:
-const client = new Vlm({
+const client = new VlmRun({
httpAgent: new HttpsProxyAgent(process.env.PROXY_URL),
});
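
// A per-request agent override (a sketch: assumes request options accept the
// same `httpAgent` field as the client constructor; `imageUrl` as in the quickstart):
await client.image.generate(
  { image: imageUrl, domain: 'document.invoice' },
  { httpAgent: new http.Agent({ keepAlive: false }) },
);
```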
