From 82cbf42869aa5ce6a40e2eede75795ed7b5207ea Mon Sep 17 00:00:00 2001
From: LinghaoChan
Date: Mon, 16 Oct 2023 17:30:37 +0800
Subject: [PATCH] update

---
 README.md                  | 2 +-
 resource/docs/cn-README.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 8e6388b..ba70f4a 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ UniMoCap is a community implementation to unify the text-motion mocap datasets.
 - [x] body-only H3D-format (263-dim, 24 joints)
 - [x] whole-body SMPL-X-format (322-dim SMPL-X parameters).
 
-***We believe this repository will be useful for training models on larger mocap text-motion data. We will support more T-M mocap datasets in near feature. ***
+***We believe this repository will be useful for training models on larger mocap text-motion data. We will support more T-M mocap datasets in the near future.***
 
 We make the data processing as simple as possible. For those who are not familiar with the datasets, we will provide a video tutorial to tell you how to do it in the following weeks. This is a community implementation to support text-motion datasets. For the Chinese community, we provide a Chinese document ([中文文档](./resource/docs/cn-README.md)) for users.
 
diff --git a/resource/docs/cn-README.md b/resource/docs/cn-README.md
index 5beb05e..9f42e2a 100644
--- a/resource/docs/cn-README.md
+++ b/resource/docs/cn-README.md
@@ -9,13 +9,13 @@ UniMoCap是用于统一文本-动作动捕数据集的社区实现。在这个
 - [x] 仅身体的H3D格式(263维,24个关节)
 - [x] 全身的的SMPL-X格式(322维 SMPL-X参数)。
 
-**我们相信这个仓库对于在更大的文本-动作数据上训练模型将会非常有用。我们会在不久的将来整合更多的文本-动作动捕数据集。**
+***我们相信这个仓库对于在更大的文本-动作数据上训练模型将会非常有用。我们会在不久的将来整合更多的文本-动作动捕数据集。***
 
 我们尽可能简化了数据处理过程。对于对数据集不熟悉的朋友,在接下来的几周,我们将提供一个视频教程来告诉您如何完成。
 
 # 🏃🏼 TODO List
 
-- [ ] 支持SMPL-X动作表示计算(包括手和身体的位置、速度、旋转,预计1-2周内)。
+- [ ] ***支持SMPL-X动作表示计算(包括手和身体的位置、速度、旋转,预计1-2周内)。***
 - [ ] 支持`seg`和`seq`两种BABEL的标注子集。
 - [ ] 提供教程视频。
 - [ ] 支持更多语言文档。