Releases: d2l-ai/d2l-en
Release v0.14.0
Highlights
We have added both PyTorch and TensorFlow implementations up to Chapter 7 (Modern CNNs).
Improvements
- We updated the text to be framework neutral; for example, we now refer to `ndarray` as *tensor*.
- Readers can click the tabs in the HTML version to switch between frameworks; both the Colab button and the discussion thread change accordingly.
- We changed the release process: d2l.ai now hosts the latest release (i.e., the release branch) instead of the contents of the master branch. We also unified the version numbers of the text and the `d2l` package, which is why we jumped from v0.8 to v0.14.0.
- The notebook zip contains three folders, `mxnet`, `pytorch`, and `tensorflow` (though we only build the PDF for MXNet so far).
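As an illustrative sketch (not code from the book): "tensor" is now the framework-neutral name for the n-dimensional array type, and each framework exposes comparable creation routines. Plain NumPy, whose API the book's MXNet code follows, shows the idea:

```python
import numpy as np

# "Tensor" is the framework-neutral name for the n-dimensional array:
#   MXNet:      from mxnet import np;    x = np.arange(4)
#   PyTorch:    import torch;            x = torch.arange(4)
#   TensorFlow: import tensorflow as tf; x = tf.range(4)
# Plain NumPy follows the same interface:
x = np.arange(4)
print(x.shape)  # each framework exposes a comparable shape attribute
```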
Release v0.8.0
Highlights
D2L is now runnable on Amazon SageMaker and Google Colab.
New Contents
The following chapters are re-organized:
- Natural Language Processing: Pretraining
- Natural Language Processing: Applications
The following sections are added:
- Subword Embedding (Byte-pair encoding)
- Bidirectional Encoder Representations from Transformers (BERT)
- The Dataset for Pretraining BERT
- Pretraining BERT
- Natural Language Inference and the Dataset
- Natural Language Inference: Using Attention
- Fine-Tuning BERT for Sequence-Level and Token-Level Applications
- Natural Language Inference: Fine-Tuning BERT
Improvements
There have been many light revisions and improvements throughout the book.
Release v0.7.0
Highlights
- D2L is now based on the NumPy interface; all the code samples have been rewritten accordingly.
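For illustration only (a minimal sketch, not a sample taken from the book): MXNet's `mxnet.np` module mirrors NumPy's API, so the rewritten code samples read like plain NumPy, which we use here:

```python
import numpy as np  # mxnet.np mirrors this API nearly call-for-call

# NumPy-interface style used throughout the rewritten code samples:
X = np.arange(12).reshape(3, 4)  # array creation and reshaping
col_mean = X.mean(axis=0)        # reductions take the familiar axis kwarg
print(col_mean)                  # [4. 5. 6. 7.]
```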
New Contents
- Recommender Systems
- Overview of Recommender Systems
- The MovieLens Dataset
- Matrix Factorization
- AutoRec: Rating Prediction with Autoencoders
- Personalized Ranking for Recommender Systems
- Neural Collaborative Filtering for Personalized Ranking
- Sequence-Aware Recommender Systems
- Feature-Rich Recommender Systems
- Factorization Machines
- Deep Factorization Machines
- Appendix: Mathematics for Deep Learning
- Geometry and Linear Algebraic Operations
- Eigendecompositions
- Single Variable Calculus
- Multivariable Calculus
- Integral Calculus
- Random Variables
- Maximum Likelihood
- Distributions
- Naive Bayes
- Statistics
- Information Theory
- Attention Mechanisms
- Attention Mechanism
- Sequence to Sequence with Attention Mechanism
- Transformer
- Generative Adversarial Networks
- Generative Adversarial Networks
- Deep Convolutional Generative Adversarial Networks
- Preliminaries
- Data Preprocessing
- Calculus
Improvements
- The Preliminaries chapter is improved.
- More theoretical analysis is added to the Optimization chapter.
Preview Version
Hard copies of a D2L preview version based on this release (excluding the Recommender Systems and Generative Adversarial Networks chapters) were distributed at AWS re:Invent 2019 and NeurIPS 2019.
Release v0.6.0
Change of Contents
We heavily revised the following chapters, especially while teaching STAT 157 at Berkeley.
- Preface
- Installation
- Introduction
- The Preliminaries: A Crashcourse
- Linear Neural Networks
- Multilayer Perceptrons
- Recurrent Neural Networks
The Community Is Translating D2L into Korean and Japanese
d2l-ko in Korean (website: ko.d2l.ai) joins d2l.ai! Thanks to Muhyun Kim, Kyoungsu Lee, Ji hye Seo, Jiyang Kang, and many other contributors!
d2l-ja in Japanese (website: ja.d2l.ai) joins d2l.ai! Thanks to Masaki Samejima!
Thanks to Our Contributors
@alxnorden, @avinashingit, @bowen0701, @brettkoonce, Chaitanya Prakash Bapat, @cryptonaut, Davide Fiocco, @edgarroman, @gkutiel, John Mitro, Liang Pu, Rahul Agarwal, @mohamed-ali, @mstewart141, Mike Müller, @NRauschmayr, @prakhar Srivastav, @sad-, @sfermigier, Sheng Zha, @sundeepteki, @topecongiro, @tpdi, @vermicelli, Vishaal Kapoor, @vishwesh5, @YaYaB, Yuhong Chen, Evgeniy Smirnov, @lgov, Simon Corston-Oliver, @IgorDzreyev, @trungha-ngx, @pmuens, @alukovenko, @senorcinco, @vfdev-5, @dsweet, Mohammad Mahdi Rahimi, Abhishek Gupta, @uwsd, @domkm, Lisa Oakley, @vfdev-5, @bowen0701, @arush15june, @prasanth5reddy.
Release v0.5.0
Contents
- Translated content from https://github.com/d2l-ai/d2l-zh, including the following chapters:
- Introduction
- A Taste of Deep Learning
- Deep Learning Basics
- Deep Learning Computation
- Convolutional Neural Networks
- Recurrent Neural Networks
- Optimization Algorithms
- Computational Performance
- Computer Vision
- Natural Language Processing
- Appendix
- Added new content in the following chapters:
- Introduction
- A Taste of Deep Learning
- Deep Learning Basics
- Deep Learning Computation
- Convolutional Neural Networks
Style
- Improved HTML styles
- Improved PDF styles
Chinese Version
v1.0.0-rc0 is released: https://github.com/d2l-ai/d2l-zh/releases/tag/v1.0.0-rc0
The physical book will be published soon.
Thanks to Our Contributors
alxnorden, avinashingit, bowen0701, brettkoonce, Chaitanya Prakash Bapat, cryptonaut, Davide Fiocco, edgarroman, gkutiel, John Mitro, Liang Pu, Rahul Agarwal, mohamed-ali, mstewart141, Mike Müller, NRauschmayr, Prakhar Srivastav, sad-, sfermigier, Sheng Zha, sundeepteki, topecongiro, tpdi, vermicelli, Vishaal Kapoor, vishwesh5, YaYaB