The interface currently seems to be compatible only with GPT4All or LlamaCpp models. I have fine-tuned a Vicuna-7b base model and want to use it in the interface. How do I integrate a custom LLM into privategpt.py?

LangChain claims to support custom models with the class below, but how do I plug its CustomLLM class into this particular interface?
```python
from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        # Toy behavior from the LangChain docs: echo the first n characters
        # of the prompt instead of calling a real model.
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}


llm = CustomLLM(n=10)
```
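For reference, here is a hedged sketch of one way this could look. It subclasses the same LangChain `LLM` base class as above but replaces the toy `_call` with a real forward pass through a fine-tuned Vicuna-7b checkpoint loaded via Hugging Face `transformers`. The class name `VicunaLLM`, the `model_path` field, and the `load_model` helper are all illustrative, not part of privateGPT or LangChain; the only assumptions are that `transformers` is installed and the fine-tuned weights live in a local directory.

```python
from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from transformers import AutoModelForCausalLM, AutoTokenizer


class VicunaLLM(LLM):  # hypothetical class name, not part of privateGPT
    model_path: str           # local directory with the fine-tuned weights
    max_new_tokens: int = 256

    # Populated lazily on first call so constructing the object stays cheap.
    tokenizer: Any = None
    model: Any = None

    @property
    def _llm_type(self) -> str:
        return "vicuna-custom"

    def load_model(self) -> None:
        if self.model is None:
            self.tokenizer = AutoTokenizer.from_pretrained(self.model_path)
            # Add device_map="auto" (requires `accelerate`) to use a GPU.
            self.model = AutoModelForCausalLM.from_pretrained(self.model_path)

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Stop sequences are ignored in this sketch.
        self.load_model()
        inputs = self.tokenizer(prompt, return_tensors="pt")
        output_ids = self.model.generate(
            **inputs, max_new_tokens=self.max_new_tokens
        )
        # Drop the prompt tokens so only the completion is returned.
        completion = output_ids[0][inputs["input_ids"].shape[1]:]
        return self.tokenizer.decode(completion, skip_special_tokens=True)

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"model_path": self.model_path}
```

As for wiring it in: at the time of writing, privategpt.py builds `llm` in a `match` statement on the `MODEL_TYPE` setting (the LlamaCpp and GPT4All cases). Assuming that structure is unchanged, you would add a case such as `case "Vicuna": llm = VicunaLLM(model_path=model_path)` and set `MODEL_TYPE=Vicuna` in your `.env`. Everything downstream (the RetrievalQA chain) only sees the LangChain LLM interface, so no other changes should be needed.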