
Intelli 0.5.3

@intelligentnode released this on 01 Feb 20:15

New Features 🌟

  • Support NVIDIA-hosted models (DeepSeek and Llama 3.3) through a unified chatbot interface.
  • Add streaming responses when calling NVIDIA models.
  • Add a new embedding provider (see the sketch after this list).
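
A quick sketch of the new embedding provider is below. The class and input names used here (RemoteEmbedModel, EmbedInput, get_embeddings) are assumptions that mirror the library's existing controller pattern, not a confirmed API for this release.

from intelli.controller.remote_embed_model import RemoteEmbedModel
from intelli.model.input.embed_input import EmbedInput

# Assumed pattern: select the provider by name and pass its API key.
embed_model = RemoteEmbedModel("YOUR_API_KEY", "nvidia")

# Wrap the text to embed; EmbedInput is assumed by analogy with ChatModelInput.
embed_input = EmbedInput(["What is the secret to a balanced life?"])

# Request the embedding vector(s); the method name is an assumption.
embeddings = embed_model.get_embeddings(embed_input)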

Using NVIDIA Chat Features 💻

from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

# get your API key from https://build.nvidia.com/
nvidia_bot = Chatbot("YOUR_NVIDIA_KEY", ChatProvider.NVIDIA.value)

# prepare the input
input_obj = ChatModelInput("You are a helpful assistant.", model="deepseek-ai/deepseek-r1", max_tokens=1024, temperature=0.6)
input_obj.add_user_message("What do you think is the secret to a balanced life?")

Synchronous response example

response = nvidia_bot.chat(input_obj)
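
A quick way to inspect the result; the exact return shape of chat() isn't shown in this note, so printing the raw value is the simplest check.

print(response)  # Print the model output as returned by the chatbot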

Streaming response example

async def stream_nvidia():
    count = 0
    # stream() yields chunks asynchronously, so consume it with `async for`
    async for chunk in nvidia_bot.stream(input_obj):
        print(chunk, end="")  # Print each chunk as it arrives
        count += 1
        if count >= 5:  # Print only the first 5 chunks
            break

# In an async context, you can run:
await stream_nvidia()
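
From a regular (non-async) script, drive the coroutine with the standard asyncio entry point:

import asyncio

# asyncio.run creates an event loop, runs the coroutine, and closes the loop.
asyncio.run(stream_nvidia())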

For more details, check the docs.