feat(provider): add martian api #717
base: main
Conversation
Thanks for letting me contribute! Please feel free to provide any feedback.
@@ -1053,6 +1054,10 @@ export function constructConfigFromRequestHeaders(
    openaiProject: requestHeaders[`x-${POWERED_BY}-openai-project`],
  };

  const martianConfig = {
Hey! I can see that this mapping was added in the PR, but it's not used anywhere in the provider config. Can you please verify whether it's required? We can remove it if it's not used anywhere.
 * @param {string} provider - The provider string.
 * @returns {Array<string>} - An array of formatted stream chunks.
 */
export const MartianChatCompleteJSONToStreamResponseTransform: (
The JSONToStream transformer is only used for the OpenAI provider. It is not required to write one for each provider, because it assumes the JSON passed to the function will always be OpenAI compliant. It's already handled in the responseHandler function, so I think it would be safe to remove it from here.
const MartianConfig: ProviderConfigs = {
  api: MartianAPIConfig,
  chatComplete: MartianChatCompleteConfig,
  responseTransforms: {
For OpenAI-compliant providers, you can reuse the openai-base provider's function to avoid writing redundant code.
Please check the inference-net provider integration as a reference on how to use the base provider:

gateway/src/providers/inference-net/index.ts
Lines 9 to 11 in 213273d

  responseTransforms: responseTransformers(INFERENCENET, {
    chatComplete: true,
  }),
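For context, a minimal sketch of how the Martian provider index could look when reusing the base transformers is shown below. This is modeled on the inference-net example above, not the actual PR code: the import paths, the MARTIAN constant, and the location of the responseTransformers helper are assumptions.

```ts
// Hypothetical sketch only: import paths and the MARTIAN constant are
// assumptions modeled on the inference-net provider referenced above.
import { ProviderConfigs } from '../types';
import { responseTransformers } from '../open-ai-base'; // assumed helper location
import MartianAPIConfig from './api';
import { MartianChatCompleteConfig } from './chatComplete';
import { MARTIAN } from '../../globals'; // assumed provider constant

const MartianConfig: ProviderConfigs = {
  api: MartianAPIConfig,
  chatComplete: MartianChatCompleteConfig,
  // Reuse the OpenAI-compliant base response transformer instead of a
  // custom per-provider JSON-to-stream transform.
  responseTransforms: responseTransformers(MARTIAN, {
    chatComplete: true,
  }),
};

export default MartianConfig;
```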
Hey! Thanks for the PR. I have added a few comments to start with. Please let me know if you have any doubts.
Hey @lcrojano! Please let us know if you have any doubts related to the comments.
Moving this PR to draft state. Please update it once the comments are addressed.
Title:
Implement Martian Router API Integration
Description:
Added configuration for the Martian Router API at withmartian.com/api/openai/v1.
Updated provider settings to include the Martian API integration.
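As a rough illustration of the description above, an API config for an OpenAI-compatible endpoint might look like the sketch below. Only the base URL comes from this PR; the ProviderAPIConfig field names, the header shape, and the endpoint mapping are assumptions based on how other providers in the gateway are typically wired.

```ts
// Hypothetical sketch: field names follow the common ProviderAPIConfig
// shape used by other providers; only the base URL is taken from the PR.
import { ProviderAPIConfig } from '../types';

const MartianAPIConfig: ProviderAPIConfig = {
  // Martian exposes an OpenAI-compatible surface under this path.
  getBaseURL: () => 'https://withmartian.com/api/openai/v1',
  headers: ({ providerOptions }) => ({
    Authorization: `Bearer ${providerOptions.apiKey}`,
  }),
  getEndpoint: ({ fn }) => {
    switch (fn) {
      case 'chatComplete':
        return '/chat/completions';
      default:
        return '';
    }
  },
};

export default MartianAPIConfig;
```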
Related Issues
closes #48