feat(model): Support llama3.1 models (#1744)
fangyinc committed Jul 24, 2024
1 parent 4149252 commit 3c5ed9d
Showing 10 changed files with 125 additions and 2 deletions.
3 changes: 3 additions & 0 deletions README.ja.md
@@ -154,6 +154,9 @@ The architecture of DB-GPT is shown in the figure below:
We support a wide range of models, including dozens of large language models (LLMs) from open-source and API agents, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, and more.

- News
- 🔥🔥🔥 [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
- 🔥🔥🔥 [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it)
- 🔥🔥🔥 [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- 🔥🔥🔥 [DeepSeek-Coder-V2-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct)
3 changes: 3 additions & 0 deletions README.md
@@ -158,6 +158,9 @@ At present, we have introduced several key features to showcase our current capabilities
We offer extensive model support, including dozens of large language models (LLMs) from both open-source and API agents, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, and many more.

- News
- 🔥🔥🔥 [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
- 🔥🔥🔥 [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it)
- 🔥🔥🔥 [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- 🔥🔥🔥 [DeepSeek-Coder-V2-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct)
3 changes: 3 additions & 0 deletions README.zh.md
@@ -152,6 +152,9 @@
Extensive model support, covering dozens of large language models from open-source and API proxies, such as LLaMA/LLaMA2, Baichuan, ChatGLM, Wenxin, Tongyi, Zhipu, and more. The following models are currently supported:

- Newly supported models
- 🔥🔥🔥 [Meta-Llama-3.1-405B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
- 🔥🔥🔥 [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
- 🔥🔥🔥 [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it)
- 🔥🔥🔥 [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- 🔥🔥🔥 [DeepSeek-Coder-V2-Instruct](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Instruct)
1 change: 1 addition & 0 deletions dbgpt/agent/expand/resources/search_tool.py
@@ -38,6 +38,7 @@ def baidu_search(
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:112.0) "
"Gecko/20100101 Firefox/112.0"
}
num_results = int(num_results)
if num_results < 8:
num_results = 8
url = f"https://www.baidu.com/s?wd={query}&rn={num_results}"
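The added `int(...)` coercion matters because tool arguments may arrive as strings, and in Python 3 a comparison such as `"3" < 8` raises a `TypeError`. The coerce-and-clamp step can be isolated as a small sketch (`clamp_num_results` is a hypothetical name, not part of DB-GPT):

```python
def clamp_num_results(num_results) -> int:
    """Coerce the requested result count to int (tool arguments may arrive
    as strings) and enforce a practical minimum of 8 results per page."""
    num_results = int(num_results)
    if num_results < 8:
        num_results = 8
    return num_results
```

With the coercion in place, a string argument such as `"3"` is converted first and then clamped to the minimum instead of crashing in the `<` comparison.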
9 changes: 9 additions & 0 deletions dbgpt/configs/model_config.py
@@ -85,6 +85,15 @@ def get_device() -> str:
"meta-llama-3-8b-instruct": os.path.join(MODEL_PATH, "Meta-Llama-3-8B-Instruct"),
# https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct
"meta-llama-3-70b-instruct": os.path.join(MODEL_PATH, "Meta-Llama-3-70B-Instruct"),
"meta-llama-3.1-8b-instruct": os.path.join(
MODEL_PATH, "Meta-Llama-3.1-8B-Instruct"
),
"meta-llama-3.1-70b-instruct": os.path.join(
MODEL_PATH, "Meta-Llama-3.1-70B-Instruct"
),
"meta-llama-3.1-405b-instruct": os.path.join(
MODEL_PATH, "Meta-Llama-3.1-405B-Instruct"
),
"baichuan-13b": os.path.join(MODEL_PATH, "Baichuan-13B-Chat"),
# please rename "fireballoon/baichuan-vicuna-chinese-7b" to "baichuan-7b"
"baichuan-7b": os.path.join(MODEL_PATH, "baichuan-7b"),
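The new entries map lower-case registry keys to on-disk model directories. A minimal sketch of how such a mapping resolves a model name (`resolve_model_path` is a hypothetical helper for illustration, not DB-GPT API):

```python
import os

MODEL_PATH = "models"  # assumed base directory, mirroring dbgpt/configs

# Subset of the name -> path registry extended by this commit
LLM_MODEL_CONFIG = {
    "meta-llama-3.1-8b-instruct": os.path.join(MODEL_PATH, "Meta-Llama-3.1-8B-Instruct"),
    "meta-llama-3.1-70b-instruct": os.path.join(MODEL_PATH, "Meta-Llama-3.1-70B-Instruct"),
    "meta-llama-3.1-405b-instruct": os.path.join(MODEL_PATH, "Meta-Llama-3.1-405B-Instruct"),
}


def resolve_model_path(name: str) -> str:
    """Return the local path registered for a model name (hypothetical helper)."""
    path = LLM_MODEL_CONFIG.get(name.lower())
    if path is None:
        raise ValueError(f"Unknown model name: {name!r}")
    return path
```

Lower-casing the lookup key keeps the registry tolerant of mixed-case names such as `Meta-Llama-3.1-8B-Instruct`.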
24 changes: 23 additions & 1 deletion dbgpt/model/adapter/hf_adapter.py
@@ -403,7 +403,12 @@ class Llama3Adapter(NewHFChatModelAdapter):
support_8bit: bool = True

def do_match(self, lower_model_name_or_path: Optional[str] = None):
return lower_model_name_or_path and "llama-3" in lower_model_name_or_path
return (
lower_model_name_or_path
and "llama-3" in lower_model_name_or_path
and "instruct" in lower_model_name_or_path
and "3.1" not in lower_model_name_or_path
)

def get_str_prompt(
self,
@@ -431,6 +436,22 @@ def get_str_prompt(
return str_prompt


class Llama31Adapter(Llama3Adapter):
    def check_transformer_version(self, current_version: str) -> None:
        logger.info(f"Checking transformers version: current version {current_version}")
        # A plain string comparison misorders releases (e.g. "4.9.0" >= "4.43.0"
        # is True lexicographically even though 4.9.0 is the older release), so
        # compare parsed versions instead; requires `from packaging import version`
        # (packaging is already a dependency of transformers).
        if version.parse(current_version) < version.parse("4.43.0"):
            raise ValueError(
                "Llama 3.1 requires transformers>=4.43.0, "
                "please upgrade your transformers package."
            )

def do_match(self, lower_model_name_or_path: Optional[str] = None):
return (
lower_model_name_or_path
and "llama-3.1" in lower_model_name_or_path
and "instruct" in lower_model_name_or_path
)
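Why the version check should not compare strings lexicographically can be shown with a dependency-free sketch (`parse_version` is a stand-in for `packaging.version.parse` that handles plain numeric releases only):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted release string into an integer tuple for comparison.
    Numeric components only; pre-release tags are out of scope for this sketch."""
    return tuple(int(part) for part in v.split("."))


def check_transformers_version(current: str, minimum: str = "4.43.0") -> None:
    """Raise if the installed transformers release is older than `minimum`."""
    if parse_version(current) < parse_version(minimum):
        raise ValueError(f"Llama 3.1 requires transformers>={minimum}, got {current}")
```

Note that `"4.9.0" >= "4.43.0"` evaluates to `True` as a string comparison, even though 4.9.0 is an older release than 4.43.0, which is exactly the case tuple comparison handles correctly.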


class DeepseekV2Adapter(NewHFChatModelAdapter):
support_4bit: bool = False
support_8bit: bool = False
@@ -613,6 +634,7 @@ def load(self, model_path: str, from_pretrained_kwargs: dict):
register_model_adapter(QwenAdapter)
register_model_adapter(QwenMoeAdapter)
register_model_adapter(Llama3Adapter)
register_model_adapter(Llama31Adapter)
register_model_adapter(DeepseekV2Adapter)
register_model_adapter(DeepseekCoderV2Adapter)
register_model_adapter(SailorAdapter)
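The `"3.1" not in …` guard added to `Llama3Adapter.do_match` keeps the two adapters' predicates mutually exclusive, so a Llama 3.1 model can never fall through to the Llama 3 adapter. A self-contained sketch of first-match adapter resolution (names and structure are illustrative, not the actual DB-GPT registry):

```python
from typing import Callable, List, Optional, Tuple


def llama3_match(name: str) -> bool:
    # Excludes "3.1" so this predicate never shadows the 3.1 adapter.
    return "llama-3" in name and "instruct" in name and "3.1" not in name


def llama31_match(name: str) -> bool:
    return "llama-3.1" in name and "instruct" in name


# First matching predicate wins, mirroring a registration-order lookup.
ADAPTERS: List[Tuple[str, Callable[[str], bool]]] = [
    ("Llama3Adapter", llama3_match),
    ("Llama31Adapter", llama31_match),
]


def resolve_adapter(model_name_or_path: str) -> Optional[str]:
    lower = model_name_or_path.lower()
    for adapter_name, matcher in ADAPTERS:
        if matcher(lower):
            return adapter_name
    return None
```

Because the predicates are mutually exclusive, the result does not depend on registration order, which is why the exclusion guard in `Llama3Adapter` matters.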
66 changes: 66 additions & 0 deletions docs/blog/2024-07-24-db-gpt-llama-3.1-support.md
@@ -0,0 +1,66 @@
---
slug: db-gpt-llama-3.1-support
title: DB-GPT Now Supports Meta Llama 3.1 Series Models
authors: fangyinc
tags: [llama, LLM]
---

We are thrilled to announce that DB-GPT now supports inference with the Meta Llama 3.1 series models!

## Introducing Meta Llama 3.1

Meta Llama 3.1 is a state-of-the-art series of language models developed by Meta AI. Designed with cutting-edge techniques, the Llama 3.1 models offer unparalleled performance and versatility. Here are some of the key highlights:

- **Variety of Models**: Meta Llama 3.1 is available in 8B, 70B, and 405B versions, each with both instruction-tuned and base models, supporting contexts up to 128k tokens.
- **Multilingual Support**: Supports 8 languages, including English, German, and French.
- **Extensive Training**: Pretrained on roughly 15 trillion tokens, with fine-tuning data including over 25 million human and synthetic samples.
- **Flexible Licensing**: The license permits using model outputs to train and improve other large language models (LLMs).
- **Quantization Support**: Available in FP8, AWQ, and GPTQ quantized versions for efficient inference.
- **Performance**: The Llama 3.1 405B version has outperformed GPT-4 on several benchmarks.
- **Enhanced Efficiency**: The 8B and 70B models have seen a 12% improvement in coding and instruction-following capabilities.
- **Tool and Function Call Support**: Supports tool usage and function calling.

## How to Access Meta Llama 3.1

You can access the Meta Llama 3.1 models by following [Access to Hugging Face](https://github.com/meta-llama/llama-models?tab=readme-ov-file#access-to-hugging-face).

For comprehensive documentation and additional details, please refer to the [model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md).

## Using Meta Llama 3.1 in DB-GPT

Please read the [Source Code Deployment](../docs/installation/sourcecode) guide to learn how to install DB-GPT from source.

Llama 3.1 requires transformers >= 4.43.0, so first upgrade your transformers package:
```bash
pip install --upgrade "transformers>=4.43.0"
```

Then change to the DB-GPT root directory:
```bash
cd DB-GPT
```

We assume that your models are stored in the `models` directory, e.g., `models/Meta-Llama-3.1-8B-Instruct`.

Then modify your `.env` file:
```env
LLM_MODEL=meta-llama-3.1-8b-instruct
# LLM_MODEL=meta-llama-3.1-70b-instruct
# LLM_MODEL=meta-llama-3.1-405b-instruct
## you can also specify the model path
# LLM_MODEL_PATH=models/Meta-Llama-3.1-8B-Instruct
## Quantization settings
# QUANTIZE_8bit=False
# QUANTIZE_4bit=True
## You can configure the maximum memory used by each GPU.
# MAX_GPU_MEMORY=16Gib
```
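The `.env` settings above are simple `KEY=VALUE` lines with `#` comments. DB-GPT has its own configuration loading, but the format can be illustrated with a minimal, stdlib-only parsing sketch:

```python
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blank lines and '#' comments."""
    settings = {}
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings


env_text = """
LLM_MODEL=meta-llama-3.1-8b-instruct
# LLM_MODEL=meta-llama-3.1-70b-instruct
# QUANTIZE_4bit=True
"""
config = parse_env(env_text)  # {'LLM_MODEL': 'meta-llama-3.1-8b-instruct'}
```

Commented-out lines, such as the alternative `LLM_MODEL` choices above, are simply ignored until you uncomment exactly one of them.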

Then you can run the following command to start the server:
```bash
dbgpt start webserver
```

Open your browser and visit `http://localhost:5670` to use the Meta Llama 3.1 models in DB-GPT.

Enjoy the power of Meta Llama 3.1 in DB-GPT!
5 changes: 5 additions & 0 deletions docs/blog/authors.yml
@@ -0,0 +1,5 @@
fangyinc:
name: Fangyin Cheng
title: DB-GPT Core Team
url: https://github.com/fangyinc
image_url: https://avatars.githubusercontent.com/u/22972572?v=4
8 changes: 8 additions & 0 deletions docs/blog/tags.yml
@@ -0,0 +1,8 @@
llama:
  label: Llama
permalink: /llama
description: A series of language models developed by Meta AI
LLM:
label: LLM
permalink: /llm
description: Large Language Models
5 changes: 4 additions & 1 deletion docs/docusaurus.config.js
@@ -175,7 +175,9 @@ const config = {
pages: {
remarkPlugins: [require("@docusaurus/remark-plugin-npm2yarn")],
},

blog: {
showReadingTime: true,
},
theme: {
customCss: require.resolve('./src/css/custom.css'),
},
@@ -248,6 +250,7 @@ const config = {
position: 'left',
label: "中文文档",
},
{to: '/blog', label: 'Blog', position: 'left'},
],
},
footer: {
