{
"sp_help": "• Use 'Show Text|pysssss' nodes for displaying text output from Plush nodes. Plush outputs text as UTF-8 Unicode, which Show Text can display correctly.\n\n\n****************\n\n\n✦ AI_Selection [input connection]: Attach the Plush 'AI_Chooser' Node to this input so you can select the AI_Service and model you want to use. As of v1.21.11 ChatGPT, Anthropic & Groq services and models are available. \n\n✦ creative_latitude: Higher numbers give the model more freedom to interpret your prompt or image. Lower numbers constrain the model to stick closely to your input.\n\n✦ tokens: A limit on how many tokens are made available for ChatGPT to use, it doesn't have to use them all.\n\n✦ style: Choose the art style you want to base your prompt on. If this list is too long, type a few characters of the style you're looking for and the list will dynamically filter.\n\n✦ artist: Will produce a 'style of' phrase listing the number of artists you indicate. They will be artists that work in the chosen style. Choose 0 if you don't want this.\n\n✦ prompt_style: 'Narrative' is long form grammatically correct creative writing, This is the preferred form for Dall-e. 'Tags' is a terse, stripped down list of visual attributes without grammatical phrasing, This is the preferred form for SD and Midjourney.\n\n✦ max_elements: A limit on the number of distinct descriptions of visual elements in the prompt. Smaller numbers makes a shorter prompt.\n\n✦ style_info: Set to True if you want background information about the art style you chose.",
"wrangler_help": "• Use 'Show Text|pysssss' nodes for displaying text output from Plush nodes. Plush outputs text as UTF-8 Unicode, which Show Text can display correctly.\n\n• Exif Wrangler will extract Exif and/or AI generation workflow metadata from .jpg (.jpeg) and .png images. .jpg photographs can be queried for their camera settings. ComfyUI's .png files will yield certain values from their workflow including the prompt, seed etc. Images from other AI generators may or may not yield data depending on where they store their metadata. For instance Auto 1111 .jpg's will yield their workflow information that's stored in their Exif comment.\n\n**************\n \n✦ write_to_file: Whether or not to save the meta data file you see in the output to a .txt file in the: '.../ComfyUI/output/PlushFiles' directory.\n\n✦ file_prefix: The prefix for the file name of the saved file, this will be appended to a date/time value to make the file unique. The file will have a .txt extension: e.g., 'MyFileName_ew_20240204_193224.txt'\n\n✦ Min_Prompt_len: A filter value for prompts: Exif Wrangler has to distinguish between actual prompts and other long strings in the ComfyUI embeded meta data. Every Note, every text display box, and even some text that's hidden in nodes is included in the JSON that holds this information. This field allows you to set a minimum length for strings to be displayed to help filter out shorter unwanted text strings.\n\n✦ Alpha_Char_Pct: Another prompt filter that works by only allowing text strings that have a percentage of alpha ASCII characters (Aa - Zz plus comma) equal to or higher than this setting. Increasing the percentage screens out strings that have lots of bytes, symbols and numbers. If you use a lot of weightings or Lora values in your prompts that introduce angle brackets, parentheses, brackets and colons, you may have to lower this percentage to see your prompt. \n\n✦ Prompt_Filter_Term: Enter a single term or short phrase here. A particular prompt string will only be included in Possible Prompts if it contains an exact match for this term. This can be used in a couple of ways: \n 1) If you know there's a term you always or frequently use in the prompts, or if you remember part of a particular image prompt's wording, you can add it here before you click the Queue button. \n 2) If, after clicking Queue, a lot of Possible Prompt candidates clutter your output. Find the one you know is the actual prompt, find a unique word or phrase in it e.g.: 'regal'. Enter that word or phrase as a filter term and run Wrangler again. You'll get back an uncluttered response to save as a file.\n\n***************\n\n✦ troubleshooting output: Hook this output up to a text display node to see any INFO/WARNING/ERROR data that's generated during this node's run. ",
"dalle_help": "• Use 'Show Text|pysssss' nodes for displaying text output from Plush nodes. Plush outputs text as UTF-8 Unicode, which Show Text can display correctly.\n\n• Dall-e Image will produce an image .PNG from a text prompt using the Dall-e 3 model from OpenAI. It requires a OpenAI API key.\n\n**************\n\n✦ GPTmodel: The Dall-e model that will generate the image file. Currently this is limited to Dall-e 3.\n\n✦ prompt: The text prompt for the image you want to produce. Be aware that OpenAI will generate their own prompt from your prompt and pass that to the image model.\n\n✦ image_size: Choose a square, portrait or landscape image. The image size format is: Width, Height. The 1792 image sizes cost slightly more tokens.\n\n✦ image_quality: Self explanatory, you can experiment to see if you think there's a noticable difference. The standard quality image costs a few less tokens than hd.\n\n✦ style: Vivid produces a little more contrast and more saturated colors. The choice depends on what type of image you're trying to produce.\n\n✦ batch_size: The number of images you want to produce in one run. The vast majority of the times batches run without incident, but you should be aware that sending image requests to the Dall-e server is not as reliable as running images locally in SD. If the server gets overtaxed, or hiccups you may not get back all the images you requested. This Dall-e node will handle OpenAI server errors gracefully and allow your batch to continue to completion, but sometimes you may get back fewer images than you requested. If you keep the 'troubleshooting' output connected it will report any errors and let you know how many images were processed vs how many you requested.\n\n✦ seed: This works just like a seed in a KSampler except that it doesn't affect a latent or the image. It's simply there for you to set to: 'randomize' or 'increment' if you want Dall-e to run with every Queue, or to 'fixed' if you only want Dall-e to run once per prompt or setting. The Dall_e API doesn't actually pass seed values. This can also be controlled by the 'Global Seed' from the Inspire Pack. \n\n✦ Number_of_Tries: The number of attempts the node will make to try and connect and/or generate an image until successful. This Dall-e node will make the indicated number of attempts for each item in your batch if necessary. \n\n***************\n\n✦ troubleshooting output: Hook this output up to a text display node to see any INFO/WARNING/ERROR data that's generated during this node's run.\n\n✦ Dalle_e_prompt: The prompt that Dall-e 3 generates from your prompt. This is the prompt that actually gets passed to the image model. Hook up a text display node to see it.",
"adv_prompt_help": "• Advanced Prompt Enhancer (APE) uses AI Models to generate text output from any combination of: Instruction, Example_or_Context, Image and Prompt you provide. No API key is needed for Open source Models. This node can use various remote services and models, ChatGPT, Groq, OpenRouter, Sambanova and Anthropic Claude if you have an API key and have stored it in an environment variable (see GitHub ReadMe file). With or without a key it can also connect to various local apps and models e.g.: LM Studio, Oobabooga, Koboldcpp, etc.\n\n• image input: Advanced Prompt Enhancer can send image data (in the form of a b64 image file) to AI vision capable models. If you're sending an image to an AI model be sure both the model and the app or remote service have vision capabilities and can handle image files.\n\n• Examples_or_Context: APE can send example(s) and/or context along with your instructions to the LLM. Examples and Context *always* need to be in the form of: User input, then the delimiter, followed by the model's response. Delimited text entered in this field will automatically create alternatating input to the model for each delimited segment using this pattern. If you want to explicitly tag your text as being user or model input you can preface each delimited segment with <<user>> or <<model>>}. (There's an workflow file: 'How_To_Use_Examples.png' in the 'Example_Worflows' folder with details about using the Examples_or_Context input.)\n\n• Custom_ApiKey: You can define your own Environment Variable to use with AI_Services that require a URL. Custom_ApiKeyAttach the 'Custom APIKey node to this input to pass a custom Environment Variable Name to APE that contains the API Key you want to use. This will only apply to AI_Services that end with '(URL)' \n\n•Context (output): The 'Context' output is an accumulation of the 'Examples_or_Context' input plus the current 'Prompt' and 'LLM_response'. It can be fed directly into the 'Examples_or_Context' input of a second APE node. Before passing this information between nodes, make sure all the Context linked nodes have the same 'example_delimiter' setting. Each node linked in this way will accumulate all of the conversations of the nodes before it.\n\n• API Keys: API keys need to be kept in environment variables. The Environment Variable names that Advanced Prompt Enhancer looks for are: ✦ChatGPT: OPENAI_API_KEY or OAI_KEY; ✦Groq: GROQ_API_KEY; ✦Anthropic: ANTHROPIC_API_KEY; ✦OpenRouter and other remote serivces: LLM_KEY. You can also make your own Environment Variable names and use them with APE by attaching the 'Custom API Key' node. Find instructions on how to create the Enviroment Variable here: https://github.com/glibsonoran/Plush-for-ComfyUI?tab=readme-ov-file#requirements . \n\n**************\n\n• AI_service: This indicates the type of AI service and connection you're going to send your data to. If you're using an AI Service that ends in '(URL)' you'll need to provide a valid URL in the LLM_URL field near the bottom of the node. If you're using 'Oobabooga API' make sure you read the LLM_URL help below. 'Direct Web Connection (URL)' uses a web POST action rather than the OpenAI API Object to communicate with the local or remote AI server, typically this requires an endpoint that has a 'v1/chat/completions' path in the URL. For Example: 'https://openrouter.ai/api/v1/chat/completions'. 'Web Connection Simplified Data (URL)' also uses a web POST action and presents a simplified data structure. 
Try this if the other AI service methods don't work, it will also require a: 'v1/chat/completions' path. 'OpenAI API Connection (URL)' on the other hand will only require a '/v1' path. For example: 'https://openrouter.ai/api/v1' \n\n• GPTmodel: This field only applies when the LLM field is set to 'ChatGPT'. Select the specific OpenAI ChatGPT model you want to use. If you're inputting an image, make sure the model you choose is vision capable.\n\n• Groq_model: This only applies when you select 'Groq' in the AI_service field. Choose the Groq model you want to use. \n\n• Anthropic_model: This only applies when you select 'Anthropic' from the AI_service field. Choose the Anthopic model you want to use. \n\n• Ollama_model: This will display the model(s) currently loaded in the Ollama front end. In order for models to show up in the drop down Ollama will have to be running with the models you intend to use loaded *before* starting ComfyUI. Note that APE looks for the standard url: http://localhost:11434/api/tags when retrieving the model names. If you've setup Ollama with another url (e.g. different port), you'll need to modify the 'urls.json' file. \n\n• Ollama_model_unload: Select a setting that determines how long the model will stay loaded after your Ollama inference run (Model TTL). This can be used to manage RAM/VRAM, especially when using local video and image models. Setting this to 'Unload After Run' will cause the model to unload itself right after the APE inference is complete, and before your image processing starts, leaving more RAM/VRAM for the video or image model(s). The downside is the Ollama model will have to reload at the start of each new run. If RAM/VRAM is not an issue, 'Keep Alive Indefinitely' will keep the model loaded until the end of your Ollama session, or until you change the setting to 'Unload After Run'. 'No Setting' will apply no further settings to model TTL. If you initially load the model with 'No Settings' it will stay loaded for 5 min after your last run. The 'Unload After Run' and 'Keep Alive Indefinitely' settings are applied/reapplied each time you run the model.\n\n• Optional_model: This is a list of models extracted from the text file: '/custom_nodes/Plush-for-ComfyUI/Opt_models.txt'. This is a user configurable file that's initially empty. It's meant to hold model names for unique remote or local AI services that require a model name to be included with the inference request. These model names only apply to AI_Services that end in '(URL)'. If you enter or remove model names from this file, the changes will only show up after you reboot ComfyUI. Instructions on how to enter these model names is in the comments header of the 'Opt_models.txt' text file. \n\n• creative_latitude: (Temperature) This will set how strictly the LLM adheres to common word relationships and how closely it will follow your instruction and prompt. Setting this value higher allows more creative freedom in interpreting your input and generating its ouptput.\n\n• tokens: The maximum number of tokens that the LLM can use in processing your prompt and return text. This is not the number of tokens it 'will' use, it's the number available that it 'can' use.\n\n• seed: This is a pseudo or mock seed, it has no effect on the text generated, and it's not passed to the LLM. It's used here solely to control when the node will run. 
It works the same as a KSampler, set it to 'fixed' if you want the node to run only once each time you change your inputs, set it to random or increment/decrement if you want it run with each Queue.\n\n• example_delimiter: You can provide multiple examples or context to the LLM. Providing multiple examples for a given instruction is a type of 'Few Shot Prompting', which can be effective with some LLM's. This field indicates how the node will distinguish each separate example, each separate example or context item will be presented as originating from the User then the Model alternating in that order for as many as you enter. You can choose to separate your examples with a pipe '|' character, two newlines (i.e.: carriage returns) or two colons '::', these are called delimiters and they denote where these separations will occur.\n\n• LLM_URL: When using an LLM other than ChatGPT, Anthropic or Groq you'll need to provide a URL in this field. Typically the AI application you're using (e.g. LM Studio, Oobabooga, OpenRouter), will indicate the URL to use either: After you startup its server if it's a local app, or on a documents or help web page if it's a remote server. For local apps like LM Stuido, it may be in the terminal output or in the UI. Some local AI apps will specify that a particular URL is OpenAI compatible, if so this is the one you want to use. Typically the URLs for local apps have this general format: http://localhost:5001/v1 where '5001' is the port and 'localhost' is interchangable with '127.0.0.1'. If you're using the Oobabooga API or 'Direct Web Connection (URL)' selection your url will need to have /chat/completions appended as part of the url: http://127.0.0.1:5000/v1/chat/completions. \n\n• Number_of_Tries: The number of times Advanced Prompt Enhancer will attempt to connect and generate output from the AI Service until successful. If after the indicated number of tries the process is still not successful, it will fail and display the error information from the 'troubleshooting' output. seed:\n\n**************\n\n• Use the troubleshooting output if you have issues with model connections, or if you want to see exactly which model was used to produce your output (some ChatGPT model names are actually only pointers to the latest specific model in that category) and how many tokens were used.",
"tagger_help": "• Tagger adds tags to the beginning, middle or end of a text block. Tagger can be used whenever you want to add text that needs to appear exactly as written. \n\n**************\n\n• Beginning_tags: The text (tags) you want to appear at the very beginning of the input text block. It will preface all other text in the block. \n\n• Middle_tags: The text (tags) you want to appear in the middle of the text block. These tags will always appear immediately after a comma or period. \n\n• Prefer_middle_tag_after_period: You can indicate a preference for the tags to follow a period by clicking this button. Otherwise the tags may follow a period or a comma whichever is closest to the middle of the text. \n\n• End_tags: Tags that will be appended to the end of the input text block.\n\n• Examples: Beginning_tags: '[An Abstract Painting:| Digital Art:]', Middle_tags: '(Big Black Hat:1.4)', End_tags: 'In the style of Piet Mondrian' ",
"add_params_help": "• BE AWARE THAT CERTAIN PARAMETERS MAY NOT WORK WITH ALL MODELS OR SERVICES. You should display Advanced Prompt Enhancer's 'Troubleshooting' output when testing parameters on a model so you can quickly diagnose issues. Add Parameters allows you to add parameters to your LLM completions request using Advanced Prompt Enhancer (APE). These parameters affect the way the LLM handles your input data. You're probably already familiar with 'temperature' (which is shown as 'creative_latitude' in APE), this node allows you to add other parameters that aren't available in the APE user interface. If you want to see an example of how this node is used I have an example workflow in '/custom_nodes/Plush-For-ComfyUI/Example_Workflows/How_to_use_addParameters.png'. You can find a list of parameters for OpenAI models at this address: https://platform.openai.com/docs/api-reference/chat \n***************\n\n• The 'Add_Parameter(s)' output: This output provides LIST data and will only connect to other nodes that can handle LIST data. The 'Add_Parameter' input on APE is compatible with this output. \n**************** \n\n• Parameter: List your parameters in this text area using the format 'parameter name::value' e.g. 'top_p::0.9' make sure to place two colons between the parameter name and the value. Place each parameter::value pair on a separate line. You don't need commas or semicolons between lines, just a newline. You can add comments in this text area by prefacing each comment line with a '#' character, e.g.:'# my comment'.\n\n✦ Save_to_file: Check this box if you want to save your parameter list and comments to a text file. The file will be placed in: [...ComfyUI/output/PlushFiles]. You'll need to provide a file name also. \n\n✦ File_name: Enter the name of the file you want to save. The file name will begin with the text you provide and also have a unique identifier added. The program automatically adds the .txt extension.",
"extract_json_help": "• Extract JSON lets you extract values from a string JSON that correspond to the JSON keys you enter. If there are duplicate keys in the JSON, the multiple values will be extracted in a list, e.g.: “[‘value1’, ‘value2’]”. If you want to see an example of how this node is used I have an example workflow in '/custom_nodes/Plush-For-ComfyUI/Example_Workflows/How_to_use_additionalParameters.png'. \n***************\n\n✦ The ‘json_string’ input accepts text (string) data that is properly formatted as a JSON. JSON objects or dictionaries will not work as input for this node. If you want to validate that your JSON string is properly formed I recommend using this website: https://jsonformatter.org. Only text(string) data is output from this node. If the output data is contained in a list, per the earlier example, the list will be presented as text (string). The ‘JSON_Obj’ output will not necessarily produce the same JSON that was input. Instead it is a JSON the node assembles that holds only the data associated with the keys you entered. This output is in the form of a JSON Object/dictionary, not text (string).. \n**************** \n\n✦ key_1..2..3 etc: These are the keys you want to retrieve value data from. The node won’t return the keys themselves (except in the JSON_Obj output). It will return the values that are associated with the keys. It’s like if you were accessing an employee database record and you looked up the ‘name’. ‘Name’ would be the key and the employee’s actual first and last name would be the value. The keys correspond numerically to the outputs (e.g. key_1 will output data to string_1, etc.).",
"type_convert_help": "• Converts a string value to its inferred type or types.\n\n******************\n\n✦ Cross_reference_types: When set to True the node will infer the primary data type and also offer equivalent values in other data types. For example: If you provide the node the string value: '1', it will infer the primary data type as Integer. However if Cross_reference_types is set to True it will also provide the Float value: 1.0 and the Boolean value: True, all of which are valid Python represntations of 1. If you were to provide the string value '1.6' with Cross_reference_types set to True, the node would infer the primary data type as Float and also provide the Integer 2, the closest round to the Float value. If Cross_reference_types is set to False, the node will only provide the primary inferred data type. "
}