Releases: intelligentnode/Intelli
Intelli 0.5.3
New Features 🌟
- Support NVIDIA-hosted models (DeepSeek and Llama 3.3) via a unified chatbot interface.
- Add streaming responses when calling NVIDIA models.
- Add a new embedding provider (see the sketch at the end of this release note).
Using NVIDIA Chat Features 💻
```python
from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

# get your API key from https://build.nvidia.com/
nvidia_bot = Chatbot("YOUR_NVIDIA_KEY", ChatProvider.NVIDIA.value)

# prepare the input
input_obj = ChatModelInput(
    "You are a helpful assistant.",
    model="deepseek-ai/deepseek-r1",
    max_tokens=1024,
    temperature=0.6,
)
input_obj.add_user_message("What do you think is the secret to a balanced life?")
```
Synchronous response example:
```python
response = nvidia_bot.chat(input_obj)
```
Streaming response example:
```python
async def stream_nvidia():
    for i, chunk in enumerate(nvidia_bot.stream(input_obj)):
        print(chunk, end="")  # print each chunk as it arrives
        if i >= 4:  # print only the first 5 chunks
            break

# in an async context, you can run:
result = await stream_nvidia()
```
For more details, check the docs.
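The embedding feature ships without a snippet in this note, so the following is only a minimal sketch. The module paths, class names (`RemoteEmbedModel`, `EmbedInput`), provider string, and method name are assumptions based on the library's remote-model pattern; confirm them against the docs before use.
```python
# NOTE: the module paths, class names, provider string, and method name in this
# sketch are assumptions, not the confirmed API -- check the docs for the exact calls.
from intelli.controller.remote_embed_model import RemoteEmbedModel
from intelli.model.input.embed_input import EmbedInput

# create the embedding client with the assumed provider name
embed_model = RemoteEmbedModel("YOUR_API_KEY", "nvidia")

# embed a couple of sentences and inspect the result
embed_input = EmbedInput(["electric cars are the future", "solar power keeps getting cheaper"])
embeddings = embed_model.get_embeddings(embed_input)
print(len(embeddings))
```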
Intelli 0.5.1
Offline Whisper Transcription 🎤
Load and use OpenAI's Whisper model offline for audio transcription.
The Intellinode module supports an initial prompt to improve transcription quality.
Code
Load the audio:
```python
import soundfile as sf

# `file_name` points to the audio file to transcribe
audio_data, sample_rate = sf.read(file_name)
```
Inference:
```python
from intelli.wrappers.keras_wrapper import KerasWrapper

wrapper = KerasWrapper(model_name="whisper_large_multi_v2")
result = wrapper.transcript(audio_data, user_prompt="medical content")
```
Check the documentation for more details.
Intelli 0.4.2
New Features 🌟
- Update the agent to support the Llama 3.1 offline model.
- Add offline model capability to the chatbot.
- Unify the Keras loader under a dedicated `KerasWrapper` class.
Using the New Features 💻
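This release does not include a snippet for the offline chatbot, so here is a hedged sketch. The `ChatProvider.KERAS` value, the `options` keys, and the model preset name are assumptions based on the chatbot pattern used in the other releases; check the docs for the exact parameters.
```python
from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

# assumptions: the offline provider value, the options keys, and the preset name
# are illustrative -- no API key is needed for a local model
offline_bot = Chatbot(None, ChatProvider.KERAS.value,
                      options={"model_name": "llama3_instruct_8b_en"})

input_obj = ChatModelInput("You are a helpful assistant.",
                           model="llama3_instruct_8b_en", max_tokens=256)
input_obj.add_user_message("Explain the advantages of running models offline.")

response = offline_bot.chat(input_obj)
print(response)
```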
Intelli v0.2.3
New Features 🌟
- Support for ANTHROPIC Models: Our chatbot integration now supports advanced ANTHROPIC models, including those with large context windows.
- Chatbot Provider Enumeration: The selection of AI providers has been simplified through the use of enumerators.
- Minor Bug Fixes: Adjust the parameter order for the controllers.
Using the New Features 💻
The `ChatProvider` enum simplifies selecting a provider:
```python
from intelli.function.chatbot import ChatProvider

# check the available chatbot providers
for provider in ChatProvider:
    print(provider.name)
```
- Check the chatbot documentation to use the claude-3 model.
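Following the same pattern as the NVIDIA example above, a claude-3 call looks like the sketch below; the model name is illustrative, so use the one listed in the chatbot documentation.
```python
from intelli.function.chatbot import Chatbot, ChatProvider
from intelli.model.input.chatbot_input import ChatModelInput

claude_bot = Chatbot("YOUR_ANTHROPIC_KEY", ChatProvider.ANTHROPIC.value)

# the model name below is illustrative -- pick one from the chatbot documentation
input_obj = ChatModelInput("You are a helpful assistant.",
                           model="claude-3-sonnet-20240229", max_tokens=1024)
input_obj.add_user_message("Summarize the benefits of a large context window.")

response = claude_bot.chat(input_obj)
print(response)
```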
Intelli V0.2.0
New Features 🌟
- Add Keras Agents: Intelli now supports the loading of offline open-source models using `KerasAgent`.
- Supported Offline Models: `gemma_2b_en`, `gemma_instruct_2b_en`, `gemma_7b_en`, `gemma_instruct_7b_en`, `mistral_7b_en`, `mistral_instruct_7b_en`.
Using the New Features 💻
To use the new Keras Agent, instantiate the `KerasAgent` class with the appropriate parameters:
```python
from intelli.flow.agents.kagent import KerasAgent

# set up a Gemma agent
gemma_params = {
    "model": "gemma_instruct_2b_en",
    "max_length": 200
}

gemma_agent = KerasAgent(agent_type="text",
                         mission="writing assistant",
                         model_params=gemma_params,
                         log=True)
```
Prepare the tasks with the user instructions:
```python
from intelli.flow.input.task_input import TextTaskInput
from intelli.flow.tasks.task import Task

# sample task to write a blog post
task1 = Task(
    TextTaskInput("write blog post about electric cars"), gemma_agent, log=True
)

# create more tasks as needed
```
Execute the tasks using `SequenceFlow`. The example below shows a single task, but you can include additional tasks for text, image, or vision:
```python
from intelli.flow.sequence_flow import SequenceFlow

# start the sequence flow
flow = SequenceFlow([task1], log=True)
final_result = flow.start()
```
For more details, check the docs.
Intelli V0.1.5
What's New 🌟
- Add a function to generate a visual image of the flow: `flow.generate_graph_img()` (see the sketch below).
- Add a remote speech model, allowing you to generate synthesized speech from OpenAI or Google models.
- Fix a minor bug in the semantic search functionality.
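A short sketch of the graph export; it assumes an existing flow instance such as the `SequenceFlow` built in the V0.2.0 example above, and the keyword arguments are assumptions about the signature, so check the docs.
```python
# `flow` is an existing flow instance, e.g. the SequenceFlow from the V0.2.0 example above;
# the keyword arguments are assumptions about the signature
flow.generate_graph_img(name="flow_graph", save_path="./temp")
```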
Intelli V0.0.8
What's New 🌟
- Add a vision controller to switch between the OpenAI and Gemini image-to-text engines.
- Update the flow to support the vision controller, enabling advanced use cases such as explaining a flowchart to a coder agent.
- Add the Cohere chatbot model.
Intelli V0.0.6
What's New 🌟
Simplified Chatbot Creation
Create chatbots capable of utilizing various AI backends without altering your core codebase. This feature supports OpenAI, Mistral, and Gemini, simplifying the process of integrating intelligent conversational agents into your applications.
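For instance, switching backends becomes a one-line change of the provider and key. The sketch below follows the chatbot pattern from the later releases; the provider strings and model names are illustrative assumptions, so check the docs for the exact values.
```python
from intelli.function.chatbot import Chatbot
from intelli.model.input.chatbot_input import ChatModelInput

def ask(provider, api_key, model, question):
    # the provider strings and model names passed in below are illustrative assumptions
    bot = Chatbot(api_key, provider)
    chat_input = ChatModelInput("You are a helpful assistant.", model=model)
    chat_input.add_user_message(question)
    return bot.chat(chat_input)

# the same core code serves different backends
print(ask("openai", "YOUR_OPENAI_KEY", "gpt-3.5-turbo", "What is prompt engineering?"))
print(ask("mistral", "YOUR_MISTRAL_KEY", "mistral-medium", "What is prompt engineering?"))
```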
Enhanced Document Interaction
Enable your applications to interact with documents through chat.
Streamlined AI Flows
Create and manage flows of tasks executed by different AI models, enhancing automation and efficiency.
To build async flows with multiple paths, refer to the flow tutorial.