This example will show
- How to use gpt-4o and other OpenAI vision models in AgentScope
In this example,
- you can have a conversation with OpenAI vision models,
- you can show gpt-4o your drawings or web UI designs and ask for its suggestions,
- you can share your pictures with gpt-4o and ask for its comments.

Just input your image URL (both local file paths and web URLs are supported) and talk with gpt-4o.
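Under the hood, OpenAI's vision models receive images as `image_url` content parts in the chat API: a web URL can be passed through as-is, while a local file is typically inlined as a base64 `data:` URL. The helper below is a minimal sketch of that distinction; the function names are illustrative, not part of AgentScope or the example script.

```python
import base64
import mimetypes
from urllib.parse import urlparse

def to_image_part(url: str) -> dict:
    """Build an OpenAI vision 'image_url' content part from a local path or web URL."""
    if urlparse(url).scheme in ("http", "https"):
        # Web URLs can be sent to the API directly.
        return {"type": "image_url", "image_url": {"url": url}}
    # Local files are inlined as a base64-encoded data URL.
    mime = mimetypes.guess_type(url)[0] or "image/png"
    with open(url, "rb") as f:
        data = base64.b64encode(f.read()).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{data}"}}

def build_vision_message(text: str, image_urls: list) -> dict:
    """Combine a text prompt and any number of images into one user message."""
    parts = [{"type": "text", "text": text}]
    parts += [to_image_part(u) for u in image_urls]
    return {"role": "user", "content": parts}
```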
On May 13, 2024, OpenAI released their new model, gpt-4o, a large multimodal model that can process both text and image inputs.
The following models have been tested in this example; other models may require some modifications.
- gpt-4o
- gpt-4-turbo
- gpt-4-vision
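In AgentScope, the model used by an agent is selected through a model configuration passed to `agentscope.init`. The dictionary below is a sketch of such a configuration for gpt-4o, assuming AgentScope's `openai_chat` model type; the config name and API key value are placeholders you should adapt.

```python
# A sketch of an AgentScope model configuration for gpt-4o.
# The "api_key" value is a placeholder, not a working key.
GPT4O_CONFIG = {
    "config_name": "gpt-4o_config",    # name referenced when creating agents
    "model_type": "openai_chat",       # AgentScope's OpenAI chat wrapper
    "model_name": "gpt-4o",            # or "gpt-4-turbo" / "gpt-4-vision"
    "api_key": "your-openai-api-key",  # fill in your own key
}
```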
You need to satisfy the following requirements to run this example.
- Install the latest version of AgentScope from source:

  ```bash
  git clone https://github.com/modelscope/agentscope.git
  cd agentscope
  pip install -e .
  ```
- Prepare an OpenAI API key
First fill in your OpenAI API key in `conversation_with_gpt-4o.py`, then execute the following command to start the conversation with gpt-4o:

```bash
python conversation_with_gpt-4o.py
```
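The script follows AgentScope's standard user-agent dialogue loop. The sketch below is our approximation of that loop, not the script's exact code; AgentScope is imported lazily inside the function so the snippet can be read on its own.

```python
def run_conversation(api_key: str) -> None:
    """Approximate dialogue loop between a user and a gpt-4o agent (a sketch)."""
    # Imported inside the function so the sketch can be inspected
    # without AgentScope installed.
    import agentscope
    from agentscope.agents import DialogAgent, UserAgent

    agentscope.init(
        model_configs=[{
            "config_name": "gpt-4o_config",
            "model_type": "openai_chat",
            "model_name": "gpt-4o",
            "api_key": api_key,  # your real key goes here
        }]
    )
    agent = DialogAgent(
        name="Assistant",
        sys_prompt="You are a helpful assistant that can see images.",
        model_config_name="gpt-4o_config",
    )
    user = UserAgent()

    x = None
    while True:
        # The real script also lets the user attach an image URL to the message.
        x = user(x)
        if x.content == "exit":
            break
        x = agent(x)  # gpt-4o replies, with the image (if any) in context
```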
Example screenshots:
- Conversation history with gpt-4o
- My picture