[Doc] Update v0.6 (#430)
cenyk1230 authored Apr 27, 2023
1 parent ee62fd0 commit bf094be
Showing 6 changed files with 37 additions and 25 deletions.
README.md (22 changes: 12 additions & 10 deletions)
@@ -2,7 +2,7 @@
===

[![PyPI Latest Release](https://badge.fury.io/py/cogdl.svg)](https://pypi.org/project/cogdl/)
-[![Build Status](https://travis-ci.org/THUDM/cogdl.svg?branch=master)](https://travis-ci.org/THUDM/cogdl)
+[![Build Status](https://app.travis-ci.com/THUDM/cogdl.svg?branch=master)](https://app.travis-ci.com/THUDM/cogdl)
[![Documentation Status](https://readthedocs.org/projects/cogdl/badge/?version=latest)](https://cogdl.readthedocs.io/en/latest/?badge=latest)
[![Downloads](https://pepy.tech/badge/cogdl)](https://pepy.tech/project/cogdl)
[![Coverage Status](https://coveralls.io/repos/github/THUDM/cogdl/badge.svg?branch=master)](https://coveralls.io/github/THUDM/cogdl?branch=master)
@@ -21,20 +21,22 @@ We summarize the contributions of CogDL as follows:

## ❗ News

+- [The CogDL paper](https://arxiv.org/abs/2103.00959) was accepted by [WWW 2023](https://www2023.thewebconf.org/). Find us at WWW 2023! We also release **v0.6**, which adds more examples of graph self-supervised learning, including [GraphMAE](https://github.com/THUDM/cogdl/tree/master/examples/graphmae), [GraphMAE2](https://github.com/THUDM/cogdl/tree/master/examples/graphmae2), and [BGRL](https://github.com/THUDM/cogdl/tree/master/examples/bgrl).

- A free GNN course provided by CogDL Team is present at [this link](https://cogdl.ai/gnn2022/). We also provide a [discussion forum](https://discuss.cogdl.ai) for Chinese users.

- The new **v0.5.3 release** supports mixed-precision training by setting `fp16=True` and provides a basic [example](https://github.com/THUDM/cogdl/blob/master/examples/jittor/gcn.py) written with [Jittor](https://github.com/Jittor/jittor). It also updates the tutorial in the documentation, fixes the download links of some datasets, and fixes potential bugs in operators.

-- The new **v0.5.2 release** adds a GNN example for ogbn-products and updates geom datasets. It also fixes some potential bugs including setting devices, using cpu for inference, etc.

-- The new **v0.5.1 release** adds fast operators including SpMM (cpu version) and scatter_max (cuda version). It also adds lots of datasets for node classification which can be found in [this link](./cogdl/datasets/rd2cd_data.py). 🎉

<details>
<summary>
News History
</summary>
<br/>

+- The new **v0.5.2 release** adds a GNN example for ogbn-products and updates geom datasets. It also fixes some potential bugs including setting devices, using cpu for inference, etc.

+- The new **v0.5.1 release** adds fast operators including SpMM (cpu version) and scatter_max (cuda version). It also adds lots of datasets for node classification which can be found in [this link](./cogdl/datasets/rd2cd_data.py). 🎉

- The new **v0.5.0 release** designs and implements a unified training loop for GNNs. It introduces `DataWrapper` to help prepare the training/validation/test data and `ModelWrapper` to define the training/validation/test steps. 🎉

- The new **v0.4.1 release** adds the implementation of Deep GNNs and the recommendation task. It also supports new pipelines for generating embeddings and recommendation. Welcome to join our tutorial on KDD 2021 at 10:30 am - 12:00 pm, Aug. 14th (Singapore Time). More details can be found at https://kdd2021graph.github.io/. 🎉
@@ -207,7 +209,7 @@ So how do you do a unit test?
</details>

## CogDL Team
-CogDL is developed and maintained by [Tsinghua, ZJU, BAAI, DAMO Academy, and ZHIPU.AI](https://cogdl.ai/about/).
+CogDL is developed and maintained by [Tsinghua, ZJU, DAMO Academy, and ZHIPU.AI](https://cogdl.ai/about/).

The core development team can be reached at [[email protected]](mailto:[email protected]).

@@ -216,10 +218,10 @@ The core development team can be reached at [[email protected]](mailto:cogdlte
Please cite [our paper](https://arxiv.org/abs/2103.00959) if you find our code or results useful for your research:

```
-@article{cen2021cogdl,
-title={CogDL: A Toolkit for Deep Learning on Graphs},
+@inproceedings{cen2023cogdl,
+title={CogDL: A Comprehensive Library for Graph Deep Learning},
author={Yukuo Cen and Zhenyu Hou and Yan Wang and Qibin Chen and Yizhen Luo and Zhongming Yu and Hengrui Zhang and Xingcheng Yao and Aohan Zeng and Shiguang Guo and Yuxiao Dong and Yang Yang and Peng Zhang and Guohao Dai and Yu Wang and Chang Zhou and Hongxia Yang and Jie Tang},
-journal={arXiv preprint arXiv:2103.00959},
-year={2021}
+booktitle={Proceedings of the ACM Web Conference 2023 (WWW'23)},
+year={2023}
}
```
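As a side note on the mixed-precision support mentioned in the v0.5.3 entry above, here is a minimal sketch of how the `fp16` flag might be passed through CogDL's `experiment` entry point. The dataset and model names are illustrative placeholders and are not part of this commit.

```python
from cogdl import experiment

# Minimal sketch: train a GCN on Cora with mixed-precision enabled,
# following the fp16=True option described in the v0.5.3 notes above.
# The dataset and model names are illustrative placeholders.
experiment(dataset="cora", model="gcn", fp16=True)
```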
README_CN.md (12 changes: 7 additions & 5 deletions)
@@ -2,7 +2,7 @@
===

[![PyPI Latest Release](https://badge.fury.io/py/cogdl.svg)](https://pypi.org/project/cogdl/)
-[![Build Status](https://travis-ci.org/THUDM/cogdl.svg?branch=master)](https://travis-ci.org/THUDM/cogdl)
+[![Build Status](https://app.travis-ci.com/THUDM/cogdl.svg?branch=master)](https://app.travis-ci.com/THUDM/cogdl)
[![Documentation Status](https://readthedocs.org/projects/cogdl/badge/?version=latest)](https://cogdl.readthedocs.io/en/latest/?badge=latest)
[![Downloads](https://pepy.tech/badge/cogdl)](https://pepy.tech/project/cogdl)
[![Coverage Status](https://coveralls.io/repos/github/THUDM/cogdl/badge.svg?branch=master)](https://coveralls.io/github/THUDM/cogdl?branch=master)
@@ -21,20 +21,22 @@ CogDL's features include:

## ❗ News

+- The [CogDL paper](https://arxiv.org/abs/2103.00959) has been accepted by [WWW 2023](https://www2023.thewebconf.org/). We welcome everyone to follow our presentation at WWW 2023! We also release the latest **v0.6**, which adds a series of graph self-supervised learning examples, including [GraphMAE](https://github.com/THUDM/cogdl/tree/master/examples/graphmae), [GraphMAE2](https://github.com/THUDM/cogdl/tree/master/examples/graphmae2), and [BGRL](https://github.com/THUDM/cogdl/tree/master/examples/bgrl).

- The CogDL team offers a free GNN course, available at [this link](https://cogdl.ai/gnn2022/). We also provide a [discussion forum](https://discuss.cogdl.ai) for discussion and exchange.

- The latest **v0.5.3 release** supports mixed-precision (fp16) training and provides preliminary support for [Jittor](https://github.com/Jittor/jittor) (see this [example](https://github.com/THUDM/cogdl/blob/master/examples/jittor/gcn.py)). This version updates the tutorial in the documentation, fixes the download links of some datasets, and fixes issues that some operators could have in certain environments.

-- The latest **v0.5.2 release** adds a GNN example for the ogbn-products dataset and updates the geom datasets. This version also fixes some potential issues, including setting different devices and using the CPU for inference.

-- The latest **v0.5.1 release** adds several efficient operators, including a CPU version of SpMM and a CUDA version of scatter_max. This version also adds many [datasets](./cogdl/datasets/rd2cd_data.py) for node classification. 🎉

<details>
<summary>
History
</summary>
<br/>

+- The latest **v0.5.2 release** adds a GNN example for the ogbn-products dataset and updates the geom datasets. This version also fixes some potential issues, including setting different devices and using the CPU for inference.

+- The latest **v0.5.1 release** adds several efficient operators, including a CPU version of SpMM and a CUDA version of scatter_max. This version also adds many [datasets](./cogdl/datasets/rd2cd_data.py) for node classification. 🎉

- The latest **v0.5.0 release** designs a unified training pipeline for graph neural networks. This version removes the original `Task` class, introduces `DataWrapper` to prepare the data needed for training/validation/test, and introduces `ModelWrapper` to define the training/validation/test steps of a model. 🎉

- The latest **v0.4.1 release** adds implementations of deep GNNs and the recommendation task. This version also provides several new pipelines for directly obtaining graph representations and building recommendation applications. Welcome to join our tutorial at KDD 2021, 10:30 am - 12:00 pm on Aug. 14 (Beijing time). More details can be found at https://kdd2021graph.github.io/. 🎉
cogdl/__init__.py (2 changes: 1 addition & 1 deletion)
@@ -1,4 +1,4 @@
__version__ = "0.5.3"
__version__ = "0.6"

from .experiments import experiment
from .pipelines import pipeline
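
A quick way to confirm the version bump above after upgrading is shown below (a sketch; it assumes cogdl is installed and importable in the current environment):

```python
import cogdl

# After installing this release, the reported version should match the bump above.
print(cogdl.__version__)  # expected: "0.6"
```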
docs/source/conf.py (22 changes: 14 additions & 8 deletions)
@@ -168,19 +168,25 @@ def find_version(filename):

# -- Options for LaTeX output ------------------------------------------------

latex_engine = "xelatex"
latex_use_xindy = False
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
#'papersize': 'letterpaper',

# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
#'pointsize': '10pt',

# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
#'preamble': '',

# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
#'figure_align': 'htbp',

# Using Package for ZH
'preamble' : r'''
\usepackage{ctex}
''',
}

# Grouping the document tree into LaTeX files. List of tuples
docs/source/index.rst (2 changes: 2 additions & 0 deletions)
@@ -16,6 +16,8 @@ We summarize the contributions of CogDL as follows:
❗ News
------------

+- [The CogDL paper](https://arxiv.org/abs/2103.00959) was accepted by [WWW 2023](https://www2023.thewebconf.org/). Find us at WWW 2023! We also release **v0.6**, which adds more examples of graph self-supervised learning, including [GraphMAE](https://github.com/THUDM/cogdl/tree/master/examples/graphmae), [GraphMAE2](https://github.com/THUDM/cogdl/tree/master/examples/graphmae2), and [BGRL](https://github.com/THUDM/cogdl/tree/master/examples/bgrl).

- The new **v0.5.3 release** supports mixed-precision training by setting ``fp16=True`` and provides a basic [example](https://github.com/THUDM/cogdl/blob/master/examples/jittor/gcn.py) written with [Jittor](https://github.com/Jittor/jittor). It also updates the tutorial in the documentation, fixes the download links of some datasets, and fixes potential bugs in operators.
- The new **v0.5.2 release** adds a GNN example for ogbn-products and updates geom datasets. It also fixes some potential bugs including setting devices, using cpu for inference, etc.
- The new **v0.5.1 release** adds fast operators including SpMM (cpu version) and scatter_max (cuda version). It also adds lots of datasets for node classification. 🎉
examples/dgraph/README.md (2 changes: 1 addition & 1 deletion)
@@ -14,7 +14,7 @@ Implementing environment:
- numpy = 1.21.2
- pytorch >= 1.6.0
- pillow = 9.1.1
-- cogdl = 0.5.3
+- cogdl >= 0.5.3

## Training

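If one wanted to guard the dgraph example against an older installation, a small check along these lines could be added (a sketch; the `packaging` dependency is an assumption and is not listed in the example's environment above):

```python
from packaging import version

import cogdl

# The README above now asks for cogdl >= 0.5.3 instead of an exact pin.
required = version.parse("0.5.3")
installed = version.parse(cogdl.__version__)
assert installed >= required, (
    f"cogdl {cogdl.__version__} is too old for this example; need >= 0.5.3"
)
```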
