Commit
Merge pull request #121 from arshad-yaseen/mistral
v1.0.0
arshad-yaseen authored Feb 26, 2025
2 parents 82031de + d5aa544 commit 873005c
Showing 51 changed files with 1,439 additions and 2,922 deletions.
5 changes: 3 additions & 2 deletions CONTRIBUTING.md
@@ -83,10 +83,11 @@ playground/ # NextJS app for testing changes in real-time
We provide a playground environment to test your changes in real-time:
- The `playground/` directory contains a NextJS app that automatically reflects changes made to the packages
- Before starting the playground, create a `.env.local` file in the `playground/` directory and add your OpenAI API key:
- Before starting the playground, create a `.env.local` file in the `playground/` directory and add your Mistral API key:
```
OPENAI_API_KEY=your_api_key_here
MISTRAL_API_KEY=your_api_key_here
```
Obtain your Mistral API Key from the [Mistral AI Console](https://console.mistral.ai/api-keys).
- Run `pnpm dev:playground` to start the playground application
- When you run `pnpm dev:monacopilot` or `pnpm dev:core`, your changes will be immediately visible in the playground
- Use this playground to verify your changes and test functionality before submitting a PR
1 change: 0 additions & 1 deletion README.md
@@ -2,7 +2,6 @@

### Features

- 🎯 Multiple AI Provider Support (Anthropic, OpenAI, Groq, Google, DeepSeek)
- 🔄 Real-time Code Completions
- ⚡️ Efficient Caching System
- 🎨 Context-Aware Suggestions
9 changes: 5 additions & 4 deletions docs/.vitepress/config.mts
@@ -59,6 +59,10 @@ export default defineConfig({
text: 'Advanced',
items: [
{text: 'Custom Model', link: '/advanced/custom-model'},
{
text: 'Custom Prompt',
link: '/advanced/custom-prompt',
},
{
text: 'Custom Request Handler',
link: '/advanced/custom-request-handler',
@@ -72,10 +76,7 @@ export default defineConfig({
{
text: 'Guides',
items: [
{
text: 'User-Selectable Models',
link: '/guides/user-selectable-models',
},
{text: 'Upgrade to v1.0.0', link: '/guides/upgrade-to-v1'},
],
},
{
2 changes: 1 addition & 1 deletion docs/advanced/cross-language.md
@@ -38,7 +38,7 @@ Check out the [prompt.ts](https://github.com/arshad-yaseen/monacopilot/blob/main

## Metadata Overview

The request body's `completionMetadata` object contains essential information for crafting a prompt for the LLM to generate accurate completions. See the [Completion Metadata](/configuration/request-options.html#completion-metadata) section for more details.
The request body's `completionMetadata` object contains essential information for crafting a prompt for the LLM to generate accurate completions. See the [Completion Metadata](/advanced/custom-prompt#completion-metadata) section for more details.

## Example Implementation (Python with FastAPI)

68 changes: 60 additions & 8 deletions docs/advanced/custom-model.md
@@ -11,7 +11,6 @@ You can use a custom LLM that isn't built into Monacopilot by setting up a `mode
```javascript
const copilot = new CompletionCopilot(process.env.HUGGINGFACE_API_KEY, {
// You don't need to set the provider if you are using a custom model.
// provider: 'huggingface',
model: {
config: (apiKey, prompt) => ({
endpoint:
@@ -21,7 +20,7 @@ const copilot = new CompletionCopilot(process.env.HUGGINGFACE_API_KEY, {
'Content-Type': 'application/json',
},
body: {
inputs: prompt.user,
inputs: `${prompt.context}\n\n${prompt.instruction}\n\n${prompt.fileContent}`,
parameters: {
max_length: 100,
num_return_sequences: 1,
@@ -38,10 +37,36 @@

The `model` option accepts an object with two functions:

| Function | Description | Type |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------- |
| `config` | A function that receives the API key and prompt data, and returns the configuration for the custom model API request. | `(apiKey: string, prompt: { system: string; user: string }) => { endpoint: string; body?: object; headers?: object }` |
| `transformResponse` | A function that takes the raw/parsed response from the custom model API and returns an object with the `text` property. | `(response: unknown) => { text: string \| null; }` |
| Function | Description | Type |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| `config` | A function that receives the API key and prompt data, and returns the configuration for the custom model API request. | `(apiKey: string, prompt: PromptData) => { endpoint: string; body?: object; headers?: object }` |
| `transformResponse` | A function that takes the raw/parsed response from the custom model API and returns an object with the `text` property. | `(response: unknown) => { text: string \| null; }` |

### Prompt Data Structure

The `prompt` parameter passed to the `config` function has the following structure:

```typescript
interface PromptData {
/**
* Contextual information about the code environment
* @example filename, technologies, etc.
*/
context: string;

/**
* Instructions for the AI model on how to generate the completion
*/
instruction: string;

/**
* The content of the file being edited
*/
fileContent: string;
}
```

### Config Return Value

The `config` function must return an object with the following properties:

@@ -51,7 +76,34 @@ The `config` function must return an object with the following properties:
| `body` | `object` or `undefined` | The body of the custom model API request. |
| `headers` | `object` or `undefined` | The headers of the custom model API request. |

### Response Transformation

The `transformResponse` function must return an object with the `text` property. This `text` property should contain the text generated by the custom model. If no valid text can be extracted, the function should return `null` for the `text` property.
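
As a sketch of what this can look like, the helper below assumes the Hugging Face text-generation response shape (an array of objects with a `generated_text` field); verify the actual shape against your model's API before relying on it:

```javascript
// Sketch of a transformResponse for a Hugging Face-style text-generation
// response, which typically looks like: [{ generated_text: "..." }].
// The response shape is an assumption -- check your model's API docs.
const transformResponse = (response) => {
    if (Array.isArray(response) && typeof response[0]?.generated_text === 'string') {
        return {text: response[0].generated_text};
    }
    // No valid text could be extracted, so signal an empty completion.
    return {text: null};
};
```

Returning `{text: null}` on unexpected shapes keeps a malformed or error response from crashing the completion flow.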

> [!NOTE]
> Please ensure you are using a high-quality model, especially for coding tasks, to get the best and most accurate completions. Also, use a model with very low response latency (preferably under 1.5 seconds) to enjoy a great experience and utilize the full power of Monacopilot.

## Working with Different Models

When working with different models, you'll need to format the prompt data appropriately for your specific model. For example:

- For models that expect a single string input, you might concatenate the prompt fields:

```javascript
body: {
inputs: `Context: ${prompt.context}\nInstructions: ${prompt.instruction}\nFile: ${prompt.fileContent}`,
// other parameters...
}
```

- For models that accept structured input:
```javascript
body: {
messages: [
{ role: "system", content: prompt.context },
{ role: "user", content: `${prompt.instruction}\n\n${prompt.fileContent}` }
],
// other parameters...
}
```
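
The two shapes above can be captured in small standalone helpers. These are illustrative only (not part of Monacopilot's API) and assume the three `PromptData` fields described earlier:

```javascript
// Illustrative helpers (not part of Monacopilot's API) that flatten a
// PromptData object into the two request shapes shown above.
const toSingleString = (prompt) =>
    `Context: ${prompt.context}\nInstructions: ${prompt.instruction}\nFile: ${prompt.fileContent}`;

const toMessages = (prompt) => [
    {role: 'system', content: prompt.context},
    {role: 'user', content: `${prompt.instruction}\n\n${prompt.fileContent}`},
];
```

Either helper can then be used inside your `config` function's `body` to match what your chosen model expects.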

::: note
Please ensure you are using a high-quality model, especially for coding tasks, to get the best and most accurate completions. The example above shows how to integrate with Hugging Face's GPT-2, but note that GPT-2 is not recommended for code completion and is shown only as an implementation example. For production use, choose specialized code-optimized models. Also, use a model with very low response latency (preferably under 1 second) to get the most out of Monacopilot.
:::
101 changes: 101 additions & 0 deletions docs/advanced/custom-prompt.md
@@ -0,0 +1,101 @@
---
title: Custom Prompt
---

# Custom Prompt

You can customize the prompt used for code completions by providing a `customPrompt` function in the options parameter of the `copilot.complete` method. This allows you to tailor how the AI completes your code based on your specific needs.

## Usage

```javascript
copilot.complete({
options: {
customPrompt: metadata => ({
context: 'Your custom codebase context information here',
instruction: 'Your custom instructions for code completion here',
fileContent: 'Your representation of file with cursor position',
}),
},
});
```

The `context`, `instruction`, and `fileContent` properties in the `customPrompt` function are all optional. If you omit any of these properties, the default values for those fields will be used.

## Parameters

The `customPrompt` function receives a `completionMetadata` object, which contains information about the current editor state and can be used to tailor the prompt.

### Completion Metadata

| Property | Type | Description |
| ------------------ | ---------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `language` | `string` or `undefined` | The programming language of the code being completed. |
| `cursorPosition` | `{ lineNumber: number; column: number }` | The current cursor position where the completion should begin. |
| `filename` | `string` or `undefined` | The name of the file being edited. Only available if you have provided the `filename` option in the `registerCompletion` function. |
| `technologies` | `string[]` or `undefined` | An array of technologies used in the project. Only available if you have provided the `technologies` option in the `registerCompletion` function. |
| `relatedFiles` | `object[]` or `undefined` | An array of objects containing the `path` and `content` of related files. Only available if you have provided the `relatedFiles` option in the `registerCompletion` function. |
| `textAfterCursor` | `string` | The text that appears after the cursor position. |
| `textBeforeCursor` | `string` | The text that appears before the cursor position. |

## Return Value Structure

The `customPrompt` function should return a `PromptData` object (or a partial one) with the following properties:

| Property | Type | Description |
| ------------- | ----------------------- | --------------------------------------------------------------------------------------------- |
| `context` | `string` or `undefined` | Information about the codebase context, including technologies, filename, language, etc. |
| `instruction` | `string` or `undefined` | Instructions for how the AI should complete the code after the cursor position. |
| `fileContent` | `string` or `undefined` | The representation of the file content showing where the cursor is positioned for completion. |

## Example

Here's an example of a custom prompt for completing React component code:

```javascript
const customPrompt = ({
textBeforeCursor,
textAfterCursor,
language,
filename,
technologies,
}) => ({
context: `You're working with a ${language} file named ${filename || 'unnamed'} in a project using ${technologies?.join(', ') || 'React'}.`,
instruction:
'Complete the code after the cursor position with appropriate React syntax. Ensure the code follows modern React best practices and matches the style of the existing code.',
fileContent: `${textBeforeCursor}[CURSOR]${textAfterCursor}`,
});
copilot.complete({
options: {customPrompt},
});
```

## Partial Customization

You can customize just one aspect of the prompt while letting the system handle the rest:

```javascript
// Only customize the instruction for code completion
copilot.complete({
options: {
customPrompt: metadata => ({
instruction:
'Complete this code with an efficient algorithm that handles edge cases.',
}),
},
});
// Only customize the context based on project information
copilot.complete({
options: {
customPrompt: ({language, technologies, filename}) => ({
context: `This is a ${language} file named ${filename || 'unnamed'} in a project using ${technologies?.join(', ')}. The code follows a functional programming paradigm with strict typing.`,
}),
},
});
```

By using a custom prompt, you can guide the AI to complete your code in ways that better match your coding style, project requirements, or specific technologies you're working with.

For additional `completionMetadata` needs, please [open an issue](https://github.com/arshad-yaseen/monacopilot/issues/new).
22 changes: 11 additions & 11 deletions docs/configuration/copilot-options.md
@@ -11,18 +11,18 @@ Configure your `CompletionCopilot` instance with different providers, models and
You can specify a different provider and model by setting the `provider` and `model` parameters in the `CompletionCopilot` instance.

```javascript
const copilot = new CompletionCopilot(process.env.ANTHROPIC_API_KEY, {
provider: 'anthropic',
model: 'claude-3-5-haiku',
const copilot = new CompletionCopilot(process.env.MISTRAL_API_KEY, {
provider: 'mistral',
model: 'codestral',
});
```

There are other providers and models available. Here is a list:
Currently, Monacopilot supports the following provider and model:

| Provider | Models | Notes |
| --------- | ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| groq | `llama-3-70b` | Offers moderate accuracy with extremely fast response times. Ideal for real-time completions while typing. |
| openai | `gpt-4o`, `gpt-4o-mini`, `o1-mini (beta model)` | |
| anthropic | `claude-3-5-sonnet`, `claude-3-haiku`, `claude-3-5-haiku` | Claude-3-5-haiku provides an optimal balance between accuracy and response time. |
| google | `gemini-1.5-pro`, `gemini-1.5-flash`, `gemini-1.5-flash-8b` | |
| deepseek | `v3` | Provides highly accurate completions using Fill-in-the-Middle (FIM) technology. While response times are slower, it excels in completion accuracy. Best choice when precision is the top priority. |
| Provider | Models | Notes | API Key |
| -------- | ----------- | ---------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- |
| mistral | `codestral` | Provides accurate code completions using Fill-in-the-Middle (FIM) technology with fast response time | [Get Mistral API Key](https://console.mistral.ai/api-keys) |

:::info
More providers and models will be added in future releases.
:::
16 changes: 4 additions & 12 deletions docs/configuration/register-options.md
@@ -24,16 +24,8 @@ registerCompletion(monaco, editor, {

[OnTyping Demo](https://github.com/user-attachments/assets/22c2ce44-334c-4963-b853-01b890b8e39f)

::: info
For the best experience with `onTyping` mode:

- Use super fast, cost-effective models like Groq's `llama-3-70b` for real-time completions
- For higher accuracy needs, consider using `onIdle` mode with models like `claude-3-5-sonnet`, `claude-3-5-haiku`, etc.
- The `onTyping` mode makes more API calls in the background to provide instant suggestions, so choose your model accordingly
:::

::: tip
Use `onTyping` mode with Groq's `llama-3-70b` for super fast, realtime completions while you type.
If you are using Mistral models with the `onTyping` trigger, Mistral's pay-as-you-go plan is recommended: it avoids rate-limit errors and keeps completions fast and accurate.
:::
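
For reference, enabling this mode is a single option on `registerCompletion`. This is a minimal sketch: `monaco` and `editor` are assumed to be your existing Monaco instances, and the endpoint URL is a placeholder for your own completion API.

```javascript
registerCompletion(monaco, editor, {
  endpoint: 'https://your-api-endpoint.com/complete', // placeholder endpoint
  language: 'javascript',
  trigger: 'onTyping', // request completions as the user types
});
```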

## Manually Trigger Completions
@@ -93,21 +85,21 @@ monaco.editor.addEditorAction({

## Multi-File Context

Improve the quality and relevance of Copilot's suggestions by providing additional code context from other files in your project. This feature allows Copilot to understand the broader scope of your codebase, resulting in more accurate and contextually appropriate completions.
Improve Copilot's suggestions by providing code context from other files in your project. This helps Copilot understand your broader codebase and offer more relevant completions.

```javascript
registerCompletion(monaco, editor, {
relatedFiles: [
{
path: './utils.js',
path: './utils.js', // The exact path you'd use when importing
content:
'export const reverse = (str) => str.split("").reverse().join("")',
},
],
});
```

For instance, if you begin typing `const isPalindrome = ` in your current file, Copilot will recognize the `reverse` function from the `utils.js` file you provided earlier. It will then suggest a completion that utilizes this function.
The `path` value should match how you actually import the file in your code. After registering, when you type `const isPalindrome = `, Copilot will suggest code that properly imports and uses the `reverse` function from your utils.js file.

## Filename
