Request: calculation for the max_sentences as a multiple of the number of participants in Multi-NPC conversation.
I find myself constantly 'goal seeking' for a balance on this value, which is shared between single- and multi-NPC conversations.
If I set it to 2, I get good back and forth with one follower, but in the (very) large group conversations that are now possible with the performance improvements, I'm manually prompting the group to continue very frequently.
If I set the value high (6, 8, 10), then in a single conversation the LLM feels somewhat compelled to actually FILL all that space, and it waxes into drivel in doing so.
Description:
The Max Sentences per Response variable that is sent to the LLM should be a calculated variable based on
max_sentences x (number of participants in the conversation - 1)
e.g.
Max Sentences Per Response = 3
Solo conversation (player + 1 NPC = 2 participants): 3 x (2 - 1) = 3
Group conversation with 10 Bannered Mare NPCs (player + 10 NPCs = 11 participants): 3 x (11 - 1) = 30
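A minimal sketch of the proposed calculation (the function and variable names here are illustrative, not Mantella's actual identifiers):

```python
# Hypothetical helper: scale the configured max_sentences by the number of
# responding participants (total participants minus the player).
def scaled_max_sentences(max_sentences: int, num_participants: int) -> int:
    # max(1, ...) guards the degenerate case of a single participant.
    return max_sentences * max(1, num_participants - 1)

print(scaled_max_sentences(3, 2))   # solo conversation -> 3
print(scaled_max_sentences(3, 11))  # 10 NPCs + player -> 30
```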
Stretch Goal:
Could the Advanced Max Tokens setting become a calculated value as well?
// Define constants for the floor and ceiling values of the token limits
FLOOR_TOKENS = 20 // Minimum allowable tokens per response
CEILING_TOKENS = 1000 // Maximum allowable tokens per response (hard cap - above even 3 sentences x 25 tokens from each of 10 NPCs, roughly 770 with the floor! Yikes. An edge case we'll probably never hit, but it prevents runaways.)
AVG_TOKENS_PER_SENTENCE = 25 // Average tokens per sentence (15-word average, +- one standard deviation = 12-18 words per sentence in the reply, plus a small 3-token buffer to allow some wordiness.)
MAX_SENTENCES = 3 // Maximum sentences per response per participant (the current max_sentences_per_reply variable)
// Define the number of participants (range is 1 to 20; most common 2-12, very common 2-3)
num_participants = [some known but arbitrary number between 1 and 20, probably]
// Calculate the maximum number of tokens allowed per participant:
max_tokens_per_participant = MAX_SENTENCES * AVG_TOKENS_PER_SENTENCE
// Total tokens for all responding participants:
// The player counts as a participant but doesn't consume response tokens,
// so only (num_participants - 1) speakers get an allocation, on top of the floor.
total_tokens = FLOOR_TOKENS + (max_tokens_per_participant * (num_participants - 1))
// Ensure total tokens do not exceed the ceiling value
total_tokens = MIN(total_tokens, CEILING_TOKENS)
// Result is the calculated token limit for the conversation, between the defined floor and ceiling.
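The pseudocode above can be sketched as a runnable function. The constant and function names are placeholders, not Mantella's actual config variables:

```python
# Constants from the pseudocode above
FLOOR_TOKENS = 20              # minimum allowable tokens per response
CEILING_TOKENS = 1000          # hard cap to prevent runaway responses
AVG_TOKENS_PER_SENTENCE = 25   # ~15 words per sentence plus a small buffer
MAX_SENTENCES = 3              # the current max_sentences_per_reply setting

def max_tokens_for_conversation(num_participants: int) -> int:
    """Token limit scaled to the number of participants (player included)."""
    per_participant = MAX_SENTENCES * AVG_TOKENS_PER_SENTENCE
    # The player is a participant but doesn't consume response tokens,
    # so only (num_participants - 1) speakers get an allocation on top of the floor.
    total = FLOOR_TOKENS + per_participant * (num_participants - 1)
    return min(total, CEILING_TOKENS)

print(max_tokens_for_conversation(2))   # solo: 20 + 75 = 95
print(max_tokens_for_conversation(11))  # 10 NPCs: 20 + 750 = 770
print(max_tokens_for_conversation(20))  # 20 + 75 * 19 = 1445, capped at 1000
```

With these numbers the ceiling only kicks in around 14+ participants, so typical 2-3 person conversations are governed entirely by the per-participant scaling.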