
Commit

Merge branch 'main' into multi_agent_type
pseudotensor committed Sep 11, 2024
2 parents d37340d + 0bcba17 commit 34d2ee5
Showing 4 changed files with 23 additions and 8 deletions.
openai_server/agent_prompting.py (9 changes: 9 additions & 0 deletions)
@@ -172,6 +172,14 @@ def agent_system_prompt(agent_code_writer_system_message, agent_system_site_pack
* For math, counting, logical reasoning, spatial reasoning, or puzzle tasks, you should try multiple approaches (e.g. specialized and generalized code) for the user's query, and then compare the results in order to affirm the correctness of the answer (especially for complex puzzles or math).
* Keep trying code generation until it verifies the request.
</reasoning>
Constraints on output or response:
<constraints>
* If you need to answer a question about your own output (e.g. a constrained word count), try to generate a function that produces the constrained textual response.
* Searching for the constrained response is allowed, including iteratively revising the response until it matches the user's constraints, but you must avoid infinite loops and prefer generalized approaches over simplistic word or character replacement.
* Use common sense: repeating characters or words just to satisfy a constraint on your response is unlikely to be useful.
* E.g., simple solutions are allowed: for "How many words are in your response?" you can just write a function that generates a sentence containing the numeric count of the words in that sentence.
* For a response constrained by the user, the self-consistent constrained textual response must appear inside <constrained_output> </constrained_output> XML tags, before giving TERMINATE.
</constraints>
PDF Generation:
<pdf>
* Strategy: If asked to make a multi-section detailed PDF, first collect source content from resources like news or papers, then make a plan, then break the PDF generation process down into paragraphs, sections, subsections, figures, and images, and generate each part separately before making the final PDF.
@@ -222,6 +230,7 @@ def agent_system_prompt(agent_code_writer_system_message, agent_system_site_pack
* As soon as you expect the user to run any code, or say something like 'Let us run this code', you must stop responding and finish your response with 'ENDOFTURN' in order to give the user a chance to respond.
* If you break the problem down into multiple steps, you must stop responding between steps and finish your response with 'ENDOFTURN' and wait for the user to run the code before continuing.
* Only once you have verification that the user completed the task do you summarize and add the 'TERMINATE' string to stop the conversation.
* If it is ever critical to have a constrained response (i.e. referencing your own output) to the user in the final summary, use <constrained_output> </constrained_output> XML tags to encapsulate the final response before TERMINATE.
</stopping>
"""
return agent_code_writer_system_message
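The <constraints> block added above asks the agent to satisfy self-referential constraints by writing and iterating a small function rather than hand-editing text. As a hypothetical illustration only (this function does not appear in the commit), a minimal sketch for the "How many words are in your response?" example could look like:

def self_counting_sentence(max_iters=20):
    """Return a sentence whose stated word count equals its actual word count."""
    guess = 1
    for _ in range(max_iters):  # bounded search, per the no-infinite-loops rule
        sentence = "This response contains exactly %d words." % guess
        actual = len(sentence.split())
        if actual == guess:
            return sentence
        guess = actual  # iterate toward a self-consistent fixed point
    raise RuntimeError("no self-consistent sentence found within the iteration budget")

# The agent would wrap the result in the XML tags before TERMINATE:
print("<constrained_output>%s</constrained_output>" % self_counting_sentence())

The bounded loop and the fixed-point iteration mirror the "avoid infinite loops" and "generalized approaches" constraints above.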
openai_server/agent_utils.py (17 changes: 11 additions & 6 deletions)
@@ -8,7 +8,7 @@
import requests
from PIL import Image

-from openai_server.backend_utils import get_user_dir, run_upload_api
+from openai_server.backend_utils import get_user_dir, run_upload_api, extract_xml_tags


def get_have_internet():
@@ -201,22 +201,27 @@ def get_ret_dict_and_handle_files(chat_result, temp_dir, agent_verbose, internal
     if chat_result and hasattr(chat_result, 'cost'):
         ret_dict.update(dict(cost=chat_result.cost))
     if chat_result and hasattr(chat_result, 'summary') and chat_result.summary:
-        ret_dict.update(dict(summary=chat_result.summary))
-        print("Made summary: %s" % chat_result.summary, file=sys.stderr)
+        print("Existing summary: %s" % chat_result.summary, file=sys.stderr)
     else:
         if hasattr(chat_result, 'chat_history') and chat_result.chat_history:
             summary = chat_result.chat_history[-1]['content']
             if not summary and len(chat_result.chat_history) >= 2:
                 summary = chat_result.chat_history[-2]['content']
             if summary:
                 print("Made summary from chat history: %s" % summary, file=sys.stderr)
-                ret_dict.update(dict(summary=summary))
+                chat_result.summary = summary
             else:
                 print("Did NOT make and could not make summary", file=sys.stderr)
-                ret_dict.update(dict(summary=''))
+                chat_result.summary = 'No summary or chat history available'
         else:
             print("Did NOT make any summary", file=sys.stderr)
-            ret_dict.update(dict(summary=''))
+            chat_result.summary = 'No summary available'
+    if chat_result and hasattr(chat_result, 'summary') and chat_result.summary:
+        if '<constrained_output>' in chat_result.summary and '</constrained_output>' in chat_result.summary:
+            extracted_summary = extract_xml_tags(chat_result.summary, tags=['constrained_output'])['constrained_output']
+            if extracted_summary:
+                chat_result.summary = extracted_summary
+        ret_dict.update(dict(summary=chat_result.summary))
     if agent_venv_dir is not None:
         ret_dict.update(dict(agent_venv_dir=agent_venv_dir))
     if agent_code_writer_system_message is not None:
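The new summary handling depends on extract_xml_tags, which is imported from openai_server.backend_utils and not shown in this diff. Below is a minimal sketch of the assumed contract (a dict mapping each requested tag name to the text inside the first matching pair, empty string when absent); the real helper may differ:

import re

def extract_xml_tags_sketch(text, tags):
    # Assumed contract: returns {'constrained_output': '<inner text>'} for each requested tag.
    out = {}
    for tag in tags:
        match = re.search(r"<%s>(.*?)</%s>" % (tag, tag), text, flags=re.DOTALL)
        out[tag] = match.group(1).strip() if match else ''
    return out

# Usage mirroring the summary post-processing above:
summary = "Done. <constrained_output>This response contains exactly 6 words.</constrained_output> TERMINATE"
print(extract_xml_tags_sketch(summary, tags=['constrained_output'])['constrained_output'])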
src/version.py (2 changes: 1 addition & 1 deletion)
@@ -1 +1 @@
__version__ = "9ecd53e2b54ee6c4e6af211e0819a5ec94909b0a"
__version__ = "122332ef576358589f3dff64301e7ea0622870f8"
src/vision/playv2.py (3 changes: 2 additions & 1 deletion)
@@ -12,7 +12,8 @@ def get_pipe_make_image(gpu_id):
     device = get_device(gpu_id)

     pipe = DiffusionPipeline.from_pretrained(
-        "playgroundai/playground-v2-1024px-aesthetic",
+        # "playgroundai/playground-v2-1024px-aesthetic",
+        "playgroundai/playground-v2.5-1024px-aesthetic",
         torch_dtype=torch.float16,
         use_safetensors=True,
         add_watermarker=False,
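For context, a hedged usage sketch of the pipeline after the model switch; the prompt text and step count below are illustrative and do not come from this repository, and playground-v2.5 assumes a reasonably recent diffusers release:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16,
    use_safetensors=True,
    add_watermarker=False,
).to("cuda")  # the repo resolves the device via get_device(gpu_id)

# Illustrative prompt and settings, not taken from the repo:
image = pipe(prompt="a watercolor painting of a lighthouse at dawn",
             num_inference_steps=30).images[0]
image.save("lighthouse.png")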
