Merge pull request #3 from ciare-robotics/feature/mujoco
Feature/mujoco
AlexKaravaev authored Apr 11, 2024
2 parents b9e3329 + 0e5175b commit b0b218d
Showing 22 changed files with 790 additions and 1,534 deletions.
2 changes: 1 addition & 1 deletion .flake8
@@ -1,4 +1,4 @@
[flake8]
ignore = E226,E302,E41,C901
max-line-length = 89
max-line-length = 95
max-complexity = 10
2 changes: 1 addition & 1 deletion .github/workflows/lint_and_test.yml
@@ -23,7 +23,7 @@ jobs:

- uses: snok/install-poetry@v1
with:
version: 1.1.12
version: 1.8.2
virtualenvs-create: true
virtualenvs-in-project: true

2 changes: 1 addition & 1 deletion .isort.cfg
@@ -13,4 +13,4 @@ default_section=THIRDPARTY

known_localfolder=
ciare_world_creator,
known_third_party = aiohttp,chromadb,click,langchain,lxml,openai,pandas,pytest,questionary,requests,tabulate,tinydb,tqdm
known_third_party = aiohttp,chromadb,click,langchain,lxml,numpy,obj2mjcf,objaverse,openai,pandas,pytest,questionary,requests,tabulate,tinydb,tqdm,trimesh
13 changes: 8 additions & 5 deletions README.md
@@ -15,30 +15,32 @@ Imagine a scenario where you want to test the navigation capabilities of a robot

# Features

## Models

Currently it uses gpt-3.5-16k by default, but if you have access to gpt-4 you will be prompted with a selection. Note that gpt-4 performs much better, but not everyone has been granted access to it by OpenAI.

## Current limitations

Currently it's a proof-of-concept solution, with a lot of future development planned. Right now it often hallucinates and its spatial reasoning is not great, but sometimes it generates something cool.

Complex models (like robots) currently cannot be included; work on that is planned for the future. Complex model textures also do not load properly yet.

The [Objaverse](https://objaverse.allenai.org/) loader contains a lot of uncurated models with badly wrong sizes. Under the hood the model tries to reason about the proper scale, but it often guesses wrong.

## Integration with other simulators
Generate simulation worlds on the fly with LLMs.

Supports selected simulators, with plans to expand support to all major simulators.
Simulator | Supported
-- | --
Gazebo | ![Static Badge](https://img.shields.io/badge/Yes-green)
Mujoco | ![Static Badge](https://img.shields.io/badge/Yes-green)
Nvidia Isaac Sim | ![Static Badge](https://img.shields.io/badge/Planned-yellow)
Unity | ![Static Badge](https://img.shields.io/badge/Planned-yellow)

## Model Database
In the future we want to collect a vast model database from which you can freely choose any model to incorporate into your simulations. It aims to become the largest robotics model database available.

Currently, we use https://app.gazebosim.org/dashboard as the model database.
Simulator | Model database
-- | --
Gazebo | [Gazebo fuel](https://app.gazebosim.org/dashboard)
Mujoco | [Objaverse](https://objaverse.allenai.org/)

# Examples

@@ -49,6 +51,7 @@ Currently, we use https://app.gazebosim.org/dashboard as database of the models.
| Surgical room | ![alt text](./docs/examples/surgical_room.png) |
| Warehouse shelves | ![alt text](./docs/examples/warehouse_shelves.png) |
| Usual person's living room | ![alt text](./docs/examples/living_room.png) |
| Couple of shelves | ![alt text](./docs/examples/couple_of_shelves_mjc.png) |


# Getting Started
15 changes: 7 additions & 8 deletions ciare_world_creator/collections/utils.py
@@ -7,18 +7,16 @@
from chromadb.config import Settings
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction

from ciare_world_creator.model_databases.gazebo import GazeboLoader
from ciare_world_creator.model_databases.base import BaseLoader
from ciare_world_creator.utils.cache import Cache


def fill_index(collection):
def fill_index(collection, loader: BaseLoader):
questionary.print(
f"Generating indicies for chromadb. This might take a while, but it's done only once",
style="bold italic fg:green",
)
loader = GazeboLoader()
models = loader.get_models_full()

models, _ = loader.get_models()
df_models = pd.DataFrame(models)
df_models = df_models.drop_duplicates(subset="name")
df_models["tags"] = df_models["tags"].apply(
@@ -27,6 +25,7 @@ def fill_index(collection):
df_models["categories"] = df_models["categories"].apply(
lambda x: ", ".join(x) if isinstance(x, list) else ""
)

df_models = df_models.drop_duplicates(subset="name")

batch_size = 100
@@ -47,7 +46,7 @@ def fill_index(collection):
)


def get_or_create_collection(name: str):
def get_or_create_collection(name: str, loader: BaseLoader):
# We initialize an embedding function, and provide it to the collection.
embedding_function = OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"))

@@ -61,7 +60,7 @@ def get_or_create_collection(name: str):

# client.delete_collection("models")
model_collection = client.get_or_create_collection(
name="models",
name=name,
embedding_function=embedding_function,
metadata={"hnsw:space": "cosine"},
)
@@ -71,6 +70,6 @@ def get_or_create_collection(name: str):
f"Indicies for models are not created yet. Filling them",
style="bold italic fg:red",
)
fill_index(model_collection)
fill_index(model_collection, loader)

return model_collection
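The refactor above threads a `loader: BaseLoader` through `fill_index` and `get_or_create_collection` instead of hard-coding `GazeboLoader`. A minimal sketch of the interface this diff appears to assume is below; `BaseLoader`, `get_models()` returning a `(models, worlds)` pair, and `get_models_full()` are inferred from the call sites, while `DummyLoader` and its field names are purely hypothetical:

```python
# Sketch of the loader abstraction introduced in this PR. The method names
# are inferred from the diff's call sites; the record schema is an assumption.

class BaseLoader:
    """Interface that GazeboLoader and ObjaverseLoader are assumed to share."""

    def get_models(self):
        """Return (models, worlds): lightweight records used for indexing."""
        raise NotImplementedError

    def get_models_full(self):
        """Return full model records, including spawn-time metadata."""
        raise NotImplementedError


class DummyLoader(BaseLoader):
    """Hypothetical stand-in loader for illustration only."""

    def get_models(self):
        models = [
            {"name": "table", "tags": ["furniture"], "categories": ["indoor"]},
            {"name": "chair", "tags": [], "categories": []},
        ]
        worlds = []  # this dummy database has no prebuilt worlds
        return models, worlds

    def get_models_full(self):
        # In the real loaders this would carry extra metadata (owner, URI, ...)
        return self.get_models()[0]


loader = DummyLoader()
models, worlds = loader.get_models()
print([m["name"] for m in models])  # → ['table', 'chair']
```

With this shape, `fill_index(collection, loader)` can build one chromadb index per backend without knowing which simulator it is serving.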
166 changes: 48 additions & 118 deletions ciare_world_creator/commands/create.py
@@ -1,3 +1,4 @@
import json
import os
import re
import sys
@@ -11,19 +12,16 @@

from ciare_world_creator.collections.utils import get_or_create_collection
from ciare_world_creator.contexts_prompts.model import fmt_model_qa_tmpl
from ciare_world_creator.contexts_prompts.place import fmt_place_qa_tmpl
from ciare_world_creator.contexts_prompts.world import fmt_world_qa_tmpl
from ciare_world_creator.model_databases.fetch_worlds import download_world
from ciare_world_creator.model_databases.gazebo import GazeboLoader
from ciare_world_creator.model_databases.objaverse import ObjaverseLoader
from ciare_world_creator.sim_interfaces.gazebo import GazeboSimInterface
from ciare_world_creator.sim_interfaces.mujoco import MujocoSimInterface
from ciare_world_creator.utils.cache import Cache
from ciare_world_creator.utils.json import NumpyEncoder
from ciare_world_creator.utils.style import STYLE
from ciare_world_creator.xml.worlds import (
add_model_to_xml,
check_world,
find_model,
find_world,
save_xml,
)
from ciare_world_creator.xml.worlds import find_model


@click.command(
@@ -37,17 +35,28 @@ def cli(ctx):

from ciare_world_creator.llm.model import prompt_model

# Only gazebo is supported
loader = GazeboLoader()
full_models = loader.get_models_full()
full_worlds = loader.get_worlds_full()
simulators = ["mujoco", "gazebo"]
chosen_simulator = questionary.select(
message=("Choose simulator to generate world for."),
choices=simulators,
style=STYLE,
).ask()

chosen_model = "gpt-4-turbo" # Gpt-4 is default and cheapest
if chosen_simulator == "gazebo":
# Only gazebo is supported
loader = GazeboLoader()
interface = GazeboSimInterface(chosen_model)
elif chosen_simulator == "mujoco":
loader = ObjaverseLoader()
interface = MujocoSimInterface(chosen_model)
models, worlds = loader.get_models()

world_query = questionary.text(
"Enter query for world generation(E.g Two cars and person next to it)",
style=STYLE,
).ask()

if not world_query:
sys.exit(os.EX_OK)

@@ -57,39 +66,21 @@ def cli(ctx):
World = Query()
exists = db.search(World.prompt == query)

openai.api_key = os.getenv("OPENAI_API_KEY")
models = openai.Model.list()
allowed_models = ["gpt-3.5-turbo-16k"]
for model in models["data"]:
if model["id"] == "gpt-4":
allowed_models.append("gpt-4")

if len(allowed_models) > 1:
chosen_model = questionary.select(
message=(
"Choose model to generate with. GPT-4 is much better,"
" but also little-bit more expensive"
),
choices=allowed_models,
style=STYLE,
).ask()
else:
chosen_model = allowed_models[0]

if exists:
questionary.print(
f"World already exists at {exists[0]['filepath']}... 🦄",
style="bold italic fg:green",
)
return
# return

model_collection = get_or_create_collection("models")
model_collection = get_or_create_collection("models_" + chosen_simulator, loader)
try:
claim_query_result = model_collection.query(
query_texts=[query],
include=["documents", "distances", "metadatas"],
n_results=100,
n_results=20,
)

except openai.error.AuthenticationError:
questionary.print(
f"OpenAI api key at {cache.cache_path}/openai_api_key incorrect. "
@@ -105,17 +96,9 @@ def cli(ctx):
claim_query_result["documents"][0], claim_query_result["metadatas"][0]
)
]
generate_world = False # Pretty unstable, disabled for now

generate_world = questionary.confirm(
"Do you want to spawn model in an empty world?"
" Saying no will download world from database, but it's very unstable. Y/n",
style=STYLE,
).ask()

if generate_world is None:
sys.exit(os.EX_OK)

if not generate_world:
if generate_world:
content = fmt_world_qa_tmpl.format(context_str=worlds)

questionary.print("Generating world... 🌎", style="bold fg:yellow")
@@ -125,7 +108,7 @@ def cli(ctx):
f"World is {world['World']}, downloading it", style="bold italic fg:green"
)

full_world = find_world(world["World"], full_worlds)
full_world = interface.find_world(world["World"], worlds)
template_world_path = None
if world["World"] != "None":
template_world_path = download_world(
@@ -139,96 +122,43 @@ def cli(ctx):
)
template_world_path = os.path.join(cache.worlds_path, "empty.sdf")
else:
world = {"World": "None"}
template_world_path = os.path.join(cache.worlds_path, "empty.sdf")

if not check_world(template_world_path):
questionary.print(
"Suggested world is malformed. Falling back to empty world",
style="bold italic fg:red",
)
template_world_path = os.path.join(cache.worlds_path, "empty.sdf")

questionary.print(
"Spawning models in the world... 🫖", style="bold italic fg:yellow"
"Selecting models from database... 🫖", style="bold italic fg:yellow"
)
content = fmt_model_qa_tmpl.format(context_str=context)
models = prompt_model(content, query, chosen_model)

for model in models:
if not find_model(model["Model"], full_models):
models = prompt_model(
content,
f"{model} was not found in context list. "
"Generate only the one that are in the context",
chosen_model,
)

questionary.print("Placing models in the world... 📍", style="bold italic fg:yellow")
content = fmt_place_qa_tmpl.format(
context_str=f"Arrange following models: {str(models)}",
world_file=open(template_world_path, "r"),
)

placement = prompt_model(content, query, chosen_model)

# TODO handle ,.; etc
cleaned_query = re.sub(r'[<>:;.,"/\\|?*]', "", query).strip()
world_name = f'world_{cleaned_query.replace(" ", "_")}'

include_elements = []
i = 0

# TODO add asserts on model fields
non_existent_models = []

for model in placement:
# Example usage
m = find_model(model["Model"], full_models)
if not m:
questionary.print(
f"Model {model} was not found in database. "
"LLM hallucinated and made that up, skipping this model...",
style="bold italic fg:red",
)
non_existent_models.append(model)
i = i + 1
continue

include = add_model_to_xml(
m["name"] + str(i),
model["Pose"]["x"],
model["Pose"]["y"],
model["Pose"]["z"],
0,
0,
0,
"https://fuel.gazebosim.org/1.0/"
f"{m['owner']}/models/{m['name'].replace(' ', '%20')}",
)
include_elements.append(include)
i = i + 1
chosen_models = prompt_model(content, query, chosen_model)

# Some models are hallucinated
filtered_models = []
for model in placement:
if model not in non_existent_models:
for model in chosen_models:
if find_model(model["Model"], models):
filtered_models.append(model)
chosen_models = filtered_models

world_path = os.path.join(cache.worlds_path, f"{world_name}.sdf")

save_xml(world_path, template_world_path, include_elements)
questionary.print(
f"Placing {len(chosen_models)} models in the world... 📍",
style="bold italic fg:yellow",
)
cleaned_query = re.sub(r'[<>:;.,"/\\|?*]', "", query).strip()
world_name = f'world_{cleaned_query.replace(" ", "_")}'
world_path = (
os.path.join(cache.worlds_path, world_name) + interface.get_world_extension()
)

if template_world_path != os.path.join(cache.worlds_path, "empty.sdf"):
os.system(f"rm {template_world_path}")
saved_models = interface.add_models(
chosen_models, loader.get_models_full(), query, world_path, template_world_path
)

db.insert(
{
"id": str(uuid.uuid4()),
"name": world_name,
"filepath": world_path,
"prompt": query,
"total_models": placement,
"world_name": world["World"],
"total_models": json.dumps(saved_models, cls=NumpyEncoder),
"world_name": "Empty",
}
)

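The `create.py` changes replace the runtime GPT model probe with a simulator prompt that selects a matching loader/interface pair. A minimal sketch of that dispatch pattern, using placeholder strings rather than the project's real classes (which would require the full package to import):

```python
# Sketch of the simulator dispatch added in create.py. The pairings mirror
# the if/elif in this commit; the function name and tuple return are
# illustrative assumptions, not the project's actual code.

def make_backend(simulator: str):
    """Map a simulator name to its (loader, sim interface) pair."""
    backends = {
        "gazebo": ("GazeboLoader", "GazeboSimInterface"),
        "mujoco": ("ObjaverseLoader", "MujocoSimInterface"),
    }
    if simulator not in backends:
        # Mirrors the fact that only these two choices are offered to the user
        raise ValueError(f"Unsupported simulator: {simulator}")
    return backends[simulator]


print(make_backend("mujoco"))  # → ('ObjaverseLoader', 'MujocoSimInterface')
```

A table-based dispatch like this also avoids the bug lurking in a bare if/elif: if neither branch matches, `loader` and `interface` would be unbound when used on the line after the conditional.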