Add basic test to make sure python runs #103

Closed · wants to merge 10 commits
19 changes: 19 additions & 0 deletions .github/workflows/test-action.yml
@@ -0,0 +1,19 @@
+name: Test main.py
+on:
+  push:
+  pull_request:
+    branches: [ main ]
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    services:
+      docker:
+        image: docker:19.03.12
+        options: --privileged
+    steps:
+      - uses: actions/checkout@v4
+      - name: Run main.py with fake model
+        run: |
+          export SANDBOX_CONTAINER_IMAGE="ubuntu:24.04"
+          python -m pip install -r requirements.txt
+          PYTHONPATH=`pwd` python ./opendevin/main.py -d ./ -t "write a hello world script" --model-name=fake
Comment from the PR author (Collaborator):
@xingyaoww would be good to run this a second time with the codeact agent (which I broke in an earlier PR 🙃). Would it be hard to respect the fake model and just return a no-op?

Reply (Collaborator):
Sounds good, will look at this when I finish #105.

8 changes: 4 additions & 4 deletions agenthub/codeact_agent/__init__.py
@@ -8,10 +8,6 @@
 from opendevin.agent import Agent, Message, Role
 from opendevin.sandbox.sandbox import DockerInteractive
 
-assert (
-    "OPENAI_API_KEY" in os.environ
-), "Please set the OPENAI_API_KEY environment variable."
-
 
 
 SYSTEM_MESSAGE = """You are a helpful assistant. You will be provided access (as root) to a bash shell to complete user-provided tasks.
@@ -64,6 +60,10 @@ def __init__(
         - instruction (str): The instruction for the agent to execute.
         - max_steps (int): The maximum number of steps to run the agent.
         """
+        assert (
+            "OPENAI_API_KEY" in os.environ
+        ), "Please set the OPENAI_API_KEY environment variable."
+
         super().__init__(instruction, workspace_dir, model_name, max_steps)
         self._history = [Message(Role.SYSTEM, SYSTEM_MESSAGE)]
         self._history.append(Message(Role.USER, instruction))
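The hunk above moves the `OPENAI_API_KEY` assertion from module import time into `__init__`, so merely importing the agent module (for example, during agent registration in a fake-model CI run) no longer requires a key. A minimal sketch of the pattern — the `DummyAgent` class below is illustrative, not code from this PR:

```python
import os


class DummyAgent:
    """Illustrative agent that defers the API-key check to construction time."""

    def __init__(self, model_name):
        # The check runs only when an agent is actually built, so importing
        # the module (or listing available agents) needs no key.
        assert (
            "OPENAI_API_KEY" in os.environ
        ), "Please set the OPENAI_API_KEY environment variable."
        self.model_name = model_name


# Defining (or importing) the class succeeded without a key above;
# constructing an instance without one fails fast instead.
os.environ.pop("OPENAI_API_KEY", None)
try:
    DummyAgent("gpt-4")
    key_required = False
except AssertionError:
    key_required = True
```

This keeps the fail-fast behavior for real runs while letting key-less environments import the package.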
4 changes: 3 additions & 1 deletion agenthub/langchains_agent/utils/agent.py
@@ -11,7 +11,7 @@ def __init__(self, task, model_name):
         self.task = task
         self.model_name = model_name
         self.monologue = Monologue(model_name)
-        self.memory = LongTermMemory()
+        self.memory = LongTermMemory(local_embeddings=(model_name == 'fake'))
 
     def add_event(self, event):
         self.monologue.add_event(event)
@@ -20,6 +20,8 @@ def add_event(self, event):
            self.monologue.condense()
 
     def get_next_action(self, cmd_mgr):
+        if self.model_name == 'fake':
+            return Event('finish', {})
         action_dict = llm.request_action(
            self.task,
            self.monologue.get_thoughts(),
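The short-circuit added to `get_next_action` is what lets CI run end-to-end with no network call: a 'fake' model immediately yields a finish event. A self-contained sketch of that control flow — the `Event` class and `would_call_llm` helper here are minimal stand-ins, not the repo's actual implementations:

```python
class Event:
    """Minimal stand-in for the repo's Event class (kind + args dict)."""

    def __init__(self, kind, args):
        self.kind = kind
        self.args = args


def would_call_llm():
    # Stand-in for llm.request_action; must never run for the fake model.
    raise RuntimeError("should not be reached when model_name == 'fake'")


def get_next_action(model_name):
    # Fake model: finish immediately, skipping the LLM entirely.
    if model_name == 'fake':
        return Event('finish', {})
    return would_call_llm()


event = get_next_action('fake')
```

The agent loop then sees a finish event on its first step and exits cleanly, which is all the smoke test needs.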
3 changes: 2 additions & 1 deletion agenthub/langchains_agent/utils/llm.py
@@ -96,7 +96,8 @@ class NewMonologue(BaseModel):
     new_monologue: List[Action]
 
 def get_chain(template, model_name):
-    assert "OPENAI_API_KEY" in os.environ, "Please set the OPENAI_API_KEY environment variable to use langchains_agent."
+    if model_name != "fake":
+        assert "OPENAI_API_KEY" in os.environ, "Please set the OPENAI_API_KEY environment variable to use langchains_agent."
    llm = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), model_name=model_name)
    prompt = PromptTemplate.from_template(template)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
7 changes: 5 additions & 2 deletions agenthub/langchains_agent/utils/memory.py
@@ -12,12 +12,15 @@
 from llama_index.vector_stores.chroma import ChromaVectorStore
 
 class LongTermMemory:
-    def __init__(self):
+    def __init__(self, local_embeddings=False):
         db = chromadb.Client()
         self.collection = db.create_collection(name="memories")
         vector_store = ChromaVectorStore(chroma_collection=self.collection)
         storage_context = StorageContext.from_defaults(vector_store=vector_store)
-        self.index = VectorStoreIndex.from_vector_store(vector_store)
+        if local_embeddings:
+            self.index = VectorStoreIndex.from_vector_store(vector_store, embed_model='local')
+        else:
+            self.index = VectorStoreIndex.from_vector_store(vector_store)
         self.thought_idx = 0
 
     def add_event(self, event):
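The `local_embeddings` flag exists because the vector index needs an embedding model even when the LLM is fake: passing `embed_model='local'` makes llama-index load a local HuggingFace embedding model (hence the new `llama-index-embeddings-huggingface` requirement) instead of calling the OpenAI embeddings API. A sketch of just the selection logic — the `select_embed_model` helper is illustrative, not part of the PR:

```python
def select_embed_model(model_name: str):
    """Return the embed_model argument for VectorStoreIndex.from_vector_store.

    'local' tells llama-index to use a local HuggingFace embedding model;
    None falls back to the library default (OpenAI embeddings, needs a key).
    """
    if model_name == 'fake':
        return 'local'
    return None


# The agent wires the same decision up as:
#     LongTermMemory(local_embeddings=(model_name == 'fake'))
```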
3 changes: 2 additions & 1 deletion requirements.txt
@@ -13,4 +13,5 @@ langchain-openai
 langchain-community
 llama-index
 llama-index-vector-stores-chroma
-chromadb
+llama-index-embeddings-huggingface
+chromadb