Separate Function Calling LLM Framework for Complex In-Game Actions #512
Status: Open
YetAnotherModder wants to merge 11 commits into art-from-the-machine:main from YetAnotherModder:Separate_Function_Calling
Commits
Implemented Basic Function Calling
- .gitignore: added a wildcard pattern (*SECRET_KEY.txt) to make sure no key is committed by accident.
- main.py: now reads a new file for the secret key of the separate LLM.
- config_loader.py: new config options for the separate function_llm client.
- function_llm_definitions.py: new definitions for the separate function_llm client.
- mantella_config_value_definitions_new.py: new config values for the separate function_llm client.
- conversation.py: creates an instance of the new functionManager object and calls it when processing player input and when continuing a conversation.
- New function_manager.py: used for function management and to evaluate and process the output from the LLM.
- New tool_manager.py: used to build ChatGPT-ready functions that can be sent by the new function_llm client.
- game_manager.py: modified sentence_to_json to take in new optional keys and values from the function_manager (NPC IDs and names).
- communication_constants.py: new related values.
- mantella_route.py: slight modification to handle the new function_llm secret keys and instances.
- openai_client.py: new subclass function_client. Unprotected several properties (made them public). Moved part of the initialization process to a separate function, _set_llm_api_and_key(). Added the option to skip part of the API setup.
- sentence.py: new properties to handle function_llm output.
- output_manager.py: new properties to store function results. New functions to call the LLM and handle the output: generate_simple_response_from_message_thread & process_tool_call.
…k on response
- Overall: added a veto call for the LLM (still in testing). Implemented a new object class: LLMFunction. Built 3 new function calls: "make_npc_wait", "npc_attack_other_npc", "npc_loot_items". Added support for non-OpenAI LLMs.
- config_loader.py: added "</tool_call>" as a stop character for the function LLM.
- conversation.py: added a check for a veto before treating the output.
- tools_manager.py: implemented the LLMFunction class in the tool_manager's methods.
- communication_constants.py: new constants added for "make_npc_wait", "npc_attack_other_npc", "npc_loot_items".
- openai_client.py: reworked the chat completion in the request_call method of the function_client subclass so that it builds differently depending on whether it's interacting with OpenAI or not. Removed vision-related checks in the subclass's request_call method to avoid errors.
- sentence.py: new property (with getter/setter): has_veto.
- output_manager.py: added veto management to process_response() to allow the response LLM (the standard Mantella one) to refuse a function call. Modified generate_simple_response_from_message_thread to manage non-OpenAI tool calls. Added two new methods to process non-OpenAI tool calls: process_pseudo_tool_call() to manage pydantic tool calls like Hermes-2-Pro-Mistral-7B produces, and process_unlabeled_function_content() to manage tool calls from other LLMs like hermes-3-llama-3.1-405b.
- function_manager.py: built 3 new function builders: build_wait_function(), build_attack_NPC_function(), build_loot_items_function(). Added support for non-OpenAI LLMs. process_function_call(): added argument management for the new functions. Implemented the LLMFunction class in the build function methods.
… fallback secret key
- New config option: enable_function_veto to toggle the LLM's 'free will' on/off.
- conversation.py: updated with LLM veto management, the capacity to append return output for Mantella.esp, and an LLM_warning cleanup call to remove warnings from the message_thread to avoid spamming the LLM. Conversation will now generate a warning message after successful function calls to the LLM.
- function_manager.py: refactored to generate LLM_function class objects by reading the function_inference/functions/ directory for JSON files. Cleaned up the code that handles the LLM output. Added checks that verify which function to load according to the tooltip packages defined in the function JSONs.
- tools_manager.py: new methods get_function_object() & get_all_functions().
- communication_constants.py & mantella_route.py: merged from the upstream main branch.
- message_thread.py: new methods to delete all warning-type user messages; new is_warning property; user_messages.
- openai_client.py: added a fallback secret key check in case FUNCTION_GPT_SECRET_KEY.txt is absent (sketched below).
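As a rough illustration of that fallback, a minimal sketch; the function name, the fallback file name, and the lookup order are assumptions (only FUNCTION_GPT_SECRET_KEY.txt is named in the commit):

```python
import os

def load_function_llm_key(key_file: str = "FUNCTION_GPT_SECRET_KEY.txt",
                          fallback_file: str = "GPT_SECRET_KEY.txt") -> str:
    # Prefer the dedicated function-LLM key; else fall back to the standard key.
    for path in (key_file, fallback_file):
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                return f.read().strip()
    raise FileNotFoundError("No secret key file found for the function LLM")
```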
…or loot_items
- function_manager.py: moved the speaker name and player name to the beginning of process_function_call(). Modified the system prompts to refer to the player and NPCs by their in-game names. Added a few output checks in case the LLM sends incorrect data. Abstracted the output checking for existing functions into handle_function_call_arguments(). build_loot_items_tooltips() now takes extra arguments.
- attack_npc.json & heal_player.json bugfix: allowed_games now includes fallout4.
- loot_items.json bugfix: corrected an incorrect variable in llm_feedback & veto_warning.
- move_npc.json: removed NPC_distance as it's not used anymore.
- function_client: replaced 'if not chat_completion or chat_completion.choices.__len__() < 1' with a more thorough check, as the old one would crash with specific LLMs (Llama 3 405B).
- sentence.py: target_ids is now a str list to better match the Papyrus side.
- output_manager.py: various optimizations to process_pseudo_tool_call & process_unlabeled_function_content to better handle uncommon local LLM outputs. New method _try_parse_json() (sketched below).
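A minimal sketch of what a lenient parser along the lines of _try_parse_json() might do; the implementation details are assumptions. Local models often wrap JSON in prose or code fences, so a strict parse is tried first, then the outermost braces are extracted:

```python
import json
import re

def _try_parse_json(text: str) -> dict | None:
    """Best-effort JSON extraction from possibly noisy LLM output."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fall back to the outermost {...} span, if any.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None
```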
…l list for function LLMs
- config_loader.py: removed the enable-function-inference option as it is now handled in-game.
- function_llm_definitions.py: removed the enable-function-inference option as it is now handled in-game. Reworked get_function_llm_api_config_value() & get_function_llm_model_config_value() to properly handle automated LLM lists.
- mantella_config_value_definitions_new.py: removed the enable-function-inference option as it is now handled in-game.
- conversation.py: moved the handling of post-response actions to function_manager.py. Added a check_LLM_functions_enabled() verification based on in-game state before processing functions.
- function_manager.py: new constants to manage follower state and participant tooltips. New properties llm_output_source_ids & llm_output_pending_character_switch. check_LLM_functions_enabled(): checks if function calling is enabled inside the game. check_context_value(): utility function that adds an extra try block to the context value check before returning it. process_function_call(): refactored into a sequential series of checks that filters out functions according to NPC type, conversation type, and parameter packages. Different prompts are now passed to the LLM for multi-NPC conversations. Returned LLM functions are now evaluated by comparing them to the pre-existing context payload object instead of standalone variables. build_npc_targeting_tooltip(): now relies on the context_payload class and filters against the conversation participants; can now exclude the player from the target list. build_npc_participants_tooltip(): similar to build_npc_targeting_tooltip() but builds according to the participant list. build_loot_items_tooltips(): now relies on the context_payload class. format_LLM_warning(): refactored to better handle single-value and multi-value arguments. Removed handle_function_call_arguments(). handle_function_call_with_multiple_value_arguments(): reworked argument handler able to manage multiple simultaneous arguments from a returned function (e.g. multiple modes, multiple source NPCs, etc.). take_post_response_actions(): handles the presence of a <veto> tag in the returned output; modifies the sentence object on a successful function call; clears output data once a function has been used so the context_payload doesn't stick around across multiple replies; clears warning messages from the message thread so they don't get sent to the LLM over and over.
- attack_npc.json, loot_items.json, move_npc.json: updated variable names and settings; rewrote LLM warnings for clarity.
- heal_player.json & wait_npc.json: rewrote LLM warnings for clarity.
- multi_attack_npc.json, multi_move_npc.json, multi_wait_npc.json: new actions for multi-NPC conversations.
- LLMFunction_class.py: parameter_package_key is now a list instead of a string to allow handling multiple package keys at once. New class ContextPayload: handles the storage of temporary data related to function calling. New class Target: handles the storage and output of target data (names and IDs). New class Sources: handles the storage and output of source data (names and IDs). (See the sketch below.)
- tools_manager.py: updated commentary for clarity; removed the examples at the end. New method clear_all_context_payloads(), used to remove all temporary data related to function calling that is stored during an LLM call.
- game_manager.py: will now store sentence source data to send back to the game over HTTP.
- communication_constants.py: new constant "function_data_source_ids".
- openai_client.py: function_LLM class method request_call(): updated filters to allow proper handling of OpenAI and non-OpenAI tool calls.
- sentence.py: new source ID property.
- output_manager.py: added character switch management (not behaving as expected, however).
- settings_ui_constructor.py: visit_ConfigValueSelection(): added code to allow the WebUI to pull LLM lists from the service databases.
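The rough shape of the new container classes, reconstructed from their descriptions above; field names and types are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Target:
    """Storage and output of target data (names and IDs)."""
    target_id: str
    name: str
    distance: float | None = None

@dataclass
class Sources:
    """Storage and output of source actor data (names and IDs)."""
    ids: list[str] = field(default_factory=list)
    names: list[str] = field(default_factory=list)

@dataclass
class ContextPayload:
    """Transient data for one function call, cleared after use."""
    targets: list[Target] = field(default_factory=list)
    sources: Sources = field(default_factory=Sources)
    modes: list[str] = field(default_factory=list)
```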
- function_manager.py: added a few self.clear_llm_output_data() calls across various functions in case of missing modes. parse_items(): reworked to make it clear to the LLM how distances work. take_post_response_actions(): added mode handling.
- loot_items.json: reworked to correctly pass a mode as a variable so function_manager.py picks up on it. New multi_loot_items.json.
- game_manager.py: sentence_to_json() now sends modes back in the JSON file through the HTTP DLL to the game scripts.
- communication_constants.py: added multiple constants for game scripts.
…able prompts
- .gitignore: modified to add ignore exceptions for the function-calling JSON folders in data/actions/. Moved all functions to the data folder.
- config_loader.py: added LLM function prompt definitions.
- prompt_definitions.py: added a new LLM function prompt for each of these options: single-NPC conversation (OpenAI), single-NPC conversation (non-OpenAI), multi-NPC conversation (OpenAI), multi-NPC conversation (non-OpenAI).
- mantella_config_value_definitions_classic.py & MantellaConfigValueDefinitionsNew: added LLM function prompt definitions.
- llm_function_class.py: renamed the file to follow the underscore naming structure. Added the LLMFunctionCondition object, setter, and getters.
- llm_tooltip_class.py: introduced a new class used to manage custom tooltips (ones built from JSON files).
- tools_manager.py: added extra functions and attributes to handle custom tooltips built from JSON files.
- function_manager.py: new dependencies to support conditions and tooltips built from JSON files. New constants to support item and actor type checks. init(): new values to support JSON-built conditions and tooltips. process_function_call(): reworked the function filters to filter according to actor types (followers, settlers, and generics) against the minimum required condition, meaning: if the context contains at least one follower actor, the LLMFunction will only load if it's flagged as a follower function; if the context contains at least one settler actor, it will only load if it's flagged as a settler function; and so on. Added new hardcoded package checks for build_fo4_npc_carry_item_list_tooltip(). Added new custom parameter packages (aka tooltips) and custom condition checks that run based on the JSON files in /data/actions/. Reworked prompt loading to follow config values instead of being hardcoded. build_npc_targeting_tooltip(): ensures max distance is always 1 to prevent the LLM from refusing move commands towards the player. Moved parse_items() to its own static method. New build_fo4_npc_carry_item_list_tooltip(): new tooltip that checks what actors are carrying and sends the info to the LLM; this tooltip won't load if the items are absent. New build_custom_tooltip(): builds custom tooltips from a JSON file; supports multiple target selection based on ID (required), name (required), and distance (optional), and supports multiple modes as well. New create_tooltip_dict(): used to check whether a tooltip is custom or not. New compare_tooltip_values(): used for key comparison. New load_conditions_from_json(): loads all JSON files present in data/actions/conditions as condition objects if formatted correctly (see the sketch below). load_custom_tooltips_from_json(): loads all JSON files present in data/actions/tooltips as tooltip objects if formatted correctly. format_with_stop_marker(): used to format the prompt without mangling the explanations of the pydantic format present in it.
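A minimal sketch of the condition-loading step described above, assuming a simple constructor for LLMFunctionCondition (the stub below is an assumption based on the documented condition JSON fields):

```python
import json
from dataclasses import dataclass
from pathlib import Path

@dataclass
class LLMFunctionCondition:
    """Assumed stub mirroring the condition JSON fields."""
    condition_name: str
    condition_type: str        # currently only "boolean_check"
    operator_type: str         # "and" / "or"
    keys_to_check: list[str]

def load_conditions_from_json(folder: str = "data/actions/conditions") -> list[LLMFunctionCondition]:
    conditions = []
    for json_path in Path(folder).glob("*.json"):
        try:
            with open(json_path, encoding="utf-8") as f:
                data = json.load(f)
            conditions.append(LLMFunctionCondition(
                data["condition_name"],
                data["condition_type"],
                data["operator_type"],
                data["keys_to_check"],
            ))
        except (json.JSONDecodeError, KeyError):
            # Skip files that are not formatted correctly, per the commit notes.
            continue
    return conditions
```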
- config_loader.py, function_llm_definitions.py, mantella_config_value_definitions_classic.py & mantella_config_value_definitions_new.py: added function_llm_timeout to skip to the "roleplay" LLM call if the function LLM call takes too long (default value: 15 seconds; see the sketch below).
- function_manager.py: modified all KEY_ACTOR values to KEY_CONTEXT to ensure they actually get updated as the conversation moves along. Added code comments and grouped the code into separate clusters for clarity. Encapsulated part of process_function_call() into gather_functions_and_tooltips_to_send() & _handle_generated_function_results() for clarity. build_custom_tooltip(): added playerName formatting; made the method more lenient with missing values; it no longer outputs intro and outro text if values are completely missing; introduced zip_longest() to manage unequal-size arrays; it no longer lists "names", "distances", or "ids" if they are missing.
- JSON files: removed heal_player and replaced it with use_item.json.
- conversation.py: removed print calls and superfluous context updates.
- llm_function_class.py, mantella_route.py, openai_client.py, output_manager.py, function_manager.py: removed print calls and raise ValueError statements, replacing them with proper logging calls. Removed __has_character_switch_occurred as it's no longer necessary.
- tools_manager.py: removed print calls and clarified clear_all_active_tooltips()'s function name.
- mantella_route.py, openai_client.py: removed print calls.
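One way the function_llm_timeout fallback could look; this is a sketch only, and the executor-based approach is an assumption about the implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_function_llm_with_timeout(request_fn, timeout_s: float = 15.0):
    """Run the function-LLM request; on timeout, return None so the caller
    can skip straight to the roleplay LLM call."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(request_fn)
    try:
        return future.result(timeout=timeout_s)
    except FutureTimeout:
        return None
    finally:
        pool.shutdown(wait=False)  # don't block on a stuck request
```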
Description

NOTE: requirements.txt has been modified to allow specific data folders and files to be read (JSONs, new GPT key).

This PR introduces separate LLM calls to a dedicated LLM dubbed the "Function LLM". LLM calls are structured differently depending on whether they are made to an OpenAI LLM (which includes a `tools` key in the request) or another provider (such as OpenRouter or a local model, which do not define tools). This PR introduces a function manager and a tools manager script, as well as new `LLMFunction` and `Tooltips` classes.

Function Limits
A "soft cap" exists on the number of functions that can be sent to an LLM at once to avoid confusing it; how many a model can handle depends largely on its training, parameter size, and the complexity of the functions themselves. Functions that are not relevant (e.g., those involving items not present in the context) are therefore filtered out before being sent, as in the sketch below.
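A minimal sketch of that pre-send filtering, assuming LLMFunction attributes that mirror the JSON fields documented under "Function Structure" and an illustrative context dict (its keys are assumptions):

```python
def filter_functions(functions: list, context: dict) -> list:
    """Drop functions that cannot apply to the current conversation."""
    selected = []
    for func in functions:
        if context["game"] not in func.allowed_games:
            continue
        # One-on-one vs. multi-NPC availability.
        if context["is_multi_npc"] and not func.is_multi_npc:
            continue
        if not context["is_multi_npc"] and not func.is_one_on_one:
            continue
        # Skip functions whose parameter packages aren't present in context,
        # e.g. item-related functions when no items are around.
        if not all(key in context["available_packages"]
                   for key in func.parameter_package_key):
            continue
        selected.append(func)
    return selected
```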
Currently Supported Actions
Per the action JSONs shipped with this PR: wait_npc, move_npc, attack_npc, loot_items, and use_item, plus the multi-NPC variants multi_wait_npc, multi_move_npc, multi_attack_npc, and multi_loot_items.
Process Flow
1. `Function_manager` parses the context and checks conditions and available parameters (only the last user response is used, to avoid confusion).
2. `Function_manager` builds all functions and tooltips to send to the LLM based on conditions and context. The context payload is established at this step.
3. `Function_manager` formats and sends the request to the Function LLM using the appropriate prompt specified in the config; the response is handed to `output_manager`.
4. `Output_manager` processes the response (routing sketched below):
   - `process_tool_call()`: handles OpenAI responses.
   - `process_pseudo_tool_call()`: handles non-OpenAI responses that follow the Pydantic `<tool_call>` format.
   - `process_unlabeled_function_content()`: attempts to process non-OpenAI responses without a `<tool_call>` flag, extracting identifiable data.
5. `Function_manager` parses the response data, comparing returned arguments with the context payload. Matches are retained and formatted.
6. `Game_manager` packages sources, targets, and modes found in the sentence object and sends a JSON response to the HTTP DLL.

NOTE: For non-OpenAI calls, `</tool_call>` is hardcoded as a stop value to prevent excessive generation and unnecessary LLM output.
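A condensed sketch of step 4's routing; the three method names come from this PR, but the bodies below are reconstructions under assumed response shapes (the OpenAI branch follows the OpenAI Python SDK's `message.tool_calls` structure; error handling is omitted):

```python
import json
import re

def process_tool_call(message) -> list[dict]:
    """OpenAI path: tool calls arrive as structured objects on the message."""
    return [{"name": call.function.name,
             "arguments": json.loads(call.function.arguments)}
            for call in (message.tool_calls or [])]

def process_pseudo_tool_call(text: str) -> list[dict]:
    """Hermes-style path: JSON wrapped in <tool_call> ... </tool_call> tags
    (the closing tag may be absent, since it is used as a stop token)."""
    chunks = re.findall(r"<tool_call>\s*(.*?)\s*(?:</tool_call>|\Z)",
                        text, re.DOTALL)
    return [json.loads(chunk) for chunk in chunks if chunk]

def process_unlabeled_function_content(text: str) -> list[dict]:
    """Fallback: pull the first JSON object out of free-form output."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    return [json.loads(match.group(0))] if match else []
```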
Function Structure
`LLMFunction` JSON objects are stored in `data/actions/functions/`. Each object includes the following fields (a hypothetical example follows the list):
- `GPT_func_name`: Descriptive function name sent to ChatGPT and the in-game engine (prefixed with `mantella_` when returned to the game).
- `GPT_func_description`: Brief description of what the function does.
- `function_parameters`: Dictionary of parameter objects specifying type, description, and contained item types (follows OpenAI's function-calling model: https://platform.openai.com/docs/guides/function-calling).
- `system_prompt_info`: Instructions on function usage (optional but useful for guiding the LLM beyond function descriptions).
- `GPT_required`: Specifies required return values. Parameters containing "target", "source", or "mode" in their names are parsed and matched to the context payload before valid results are returned to the in-game engine.
- `allowed_games`: Array of valid games. Functions will not load if there is no match.
- `is_generic_npc_function`, `is_follower_function`, `is_settler_function`: Booleans determining when a function should load based on NPC type.
- `is_pre_dialogue`, `is_post_dialogue`, `is_interrupting`, `is_radiant`: Future-proofing fields (not yet in use).
- `is_one_on_one`, `is_multi_npc`: Determine function availability in one-on-one or multi-NPC conversations.
- `llm_feedback`: If validated, this description is sent to the roleplay LLM to provide context.
- `parameter_package_key`: Bundle of parameters and conditions required for function loading.
- `veto_warning`: Instruction to the roleplay LLM, allowing it to cancel an action if necessary.
- `conditions`: Specifies a JSON file containing custom checks for function loading.
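To make the schema concrete, a hypothetical definition; every value below is invented for illustration, so see the shipped files in `data/actions/functions/` for real examples:

```json
{
  "GPT_func_name": "npc_wait",
  "GPT_func_description": "Makes the selected NPC wait at their current location.",
  "function_parameters": {
    "target_ids": {
      "type": "array",
      "items": { "type": "string" },
      "description": "IDs of the NPCs that should wait."
    }
  },
  "system_prompt_info": "Use this when the player asks an NPC to stay put.",
  "GPT_required": ["target_ids"],
  "allowed_games": ["skyrim", "fallout4"],
  "is_generic_npc_function": false,
  "is_follower_function": true,
  "is_settler_function": false,
  "is_pre_dialogue": false,
  "is_post_dialogue": false,
  "is_interrupting": false,
  "is_radiant": false,
  "is_one_on_one": true,
  "is_multi_npc": false,
  "llm_feedback": "{target_names} agreed to wait here.",
  "parameter_package_key": ["npc_targeting"],
  "veto_warning": "Refuse this action if it makes no narrative sense.",
  "conditions": []
}
```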
Context and Transient Data
- `context_payload`: Stores transient data (targets, sources, and modes) and is cleared as needed.
- `targets`: Stores target-related data (IDs, names, distances), with IDs being the most important for in-game execution.
- `sources`: Stores actor-related data (names, IDs).
- `modes`: Specifies how an action is executed (e.g., item usage modes).
Condition JSONs
Stored in `data/actions/conditions/`, condition JSONs define checks for function loading. Each contains the following (example below):
- `condition_name`: String identifier.
- `condition_type`: Currently only allows `boolean_check`.
- `operator_type`: Supports `and` or `or`.
- `keys_to_check`: Array of context keys to verify.
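A hypothetical condition file; the key names in `keys_to_check` are invented for illustration:

```json
{
  "condition_name": "actor_is_carrying_items",
  "condition_type": "boolean_check",
  "operator_type": "or",
  "keys_to_check": ["npc_has_inventory", "npc_has_equipped_items"]
}
```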
Tooltips
Tooltips provide contextual information to the LLM without duplicating function data, reducing confusion and token usage. Custom tooltips are stored in `data/actions/tooltips/`.
- `TargetInfo`: Stores names, distances, and IDs of function targets.
- `ModeInfo`: Stores available function modes.
- `{playername}` can be used for personalized context strings.

A sample tooltip (`placeholder_tooltip`) is included in the files for reference.
New WebUI Menus
- Function Inference Tab
- New Prompt Categories

NOTE: The flag `NO_REGEX_FORMATTING_PAST_THIS_POINT` prevents the Python script from modifying bracketed `{}` values used in Pydantic format explanations (sketched below).
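The idea behind the flag (and the format_with_stop_marker() helper mentioned in the commits) can be sketched as follows; treating only the text before the marker as a template is an assumption about the implementation:

```python
STOP_MARKER = "NO_REGEX_FORMATTING_PAST_THIS_POINT"

def format_with_stop_marker(prompt: str, **values: str) -> str:
    """Fill placeholders only before the marker so literal {braces} in the
    Pydantic format explanation after it survive untouched."""
    head, _, tail = prompt.partition(STOP_MARKER)
    return head.format(**values) + tail
```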
Script Updates
- `message_thread` now removes LLM-warning-class messages to prevent duplicate warnings.
- The `messages` script introduces a new property on assistant messages to flag LLM warnings, coordinating the function LLM with the roleplay LLM.