Commit
Merge branch 'branch-25.06' into use_cb
jakirkham authored Mar 4, 2025
2 parents bb0d0d6 + 3379a95 commit af17f37
Showing 10 changed files with 10 additions and 10 deletions.
2 changes: 1 addition & 1 deletion conda/environments/all_cuda-128_arch-aarch64.yaml
@@ -34,7 +34,7 @@ dependencies:
 - cuda-version=12.8
 - cudf=25.02
 - cuml=25.02.*
-- cupy
+- cupy<13.4
 - cxx-compiler
 - cython=3.0
 - datacompy=0.10
2 changes: 1 addition & 1 deletion conda/environments/all_cuda-128_arch-x86_64.yaml
@@ -34,7 +34,7 @@ dependencies:
 - cuda-version=12.8
 - cudf=25.02
 - cuml=25.02.*
-- cupy
+- cupy<13.4
 - cxx-compiler
 - cython=3.0
 - datacompy=0.10
2 changes: 1 addition & 1 deletion conda/environments/dev_cuda-128_arch-aarch64.yaml
@@ -27,7 +27,7 @@ dependencies:
 - cuda-sanitizer-api
 - cuda-version=12.8
 - cudf=25.02
-- cupy
+- cupy<13.4
 - cxx-compiler
 - cython=3.0
 - datacompy=0.10
2 changes: 1 addition & 1 deletion conda/environments/dev_cuda-128_arch-x86_64.yaml
@@ -27,7 +27,7 @@ dependencies:
 - cuda-sanitizer-api
 - cuda-version=12.8
 - cudf=25.02
-- cupy
+- cupy<13.4
 - cxx-compiler
 - cython=3.0
 - datacompy=0.10
2 changes: 1 addition & 1 deletion conda/environments/examples_cuda-128_arch-aarch64.yaml
@@ -17,7 +17,7 @@ dependencies:
 - click>=8
 - cudf=25.02
 - cuml=25.02.*
-- cupy
+- cupy<13.4
 - datacompy=0.10
 - dill=0.3.7
 - docker-py=5.0
2 changes: 1 addition & 1 deletion conda/environments/examples_cuda-128_arch-x86_64.yaml
@@ -17,7 +17,7 @@ dependencies:
 - click>=8
 - cudf=25.02
 - cuml=25.02.*
-- cupy
+- cupy<13.4
 - datacompy=0.10
 - dill=0.3.7
 - docker-py=5.0
2 changes: 1 addition & 1 deletion conda/environments/runtime_cuda-128_arch-aarch64.yaml
@@ -17,7 +17,7 @@ dependencies:
 - cuda-nvtx=12.8
 - cuda-version=12.8
 - cudf=25.02
-- cupy
+- cupy<13.4
 - datacompy=0.10
 - dill=0.3.7
 - docker-py=5.0
2 changes: 1 addition & 1 deletion conda/environments/runtime_cuda-128_arch-x86_64.yaml
@@ -17,7 +17,7 @@ dependencies:
 - cuda-nvtx=12.8
 - cuda-version=12.8
 - cudf=25.02
-- cupy
+- cupy<13.4
 - datacompy=0.10
 - dill=0.3.7
 - docker-py=5.0
2 changes: 1 addition & 1 deletion dependencies.yaml
@@ -384,7 +384,7 @@ dependencies:
 - click>=8
 # - cuda-version=12.8 ##
 - *cudf
-- cupy # Version determined from cudf
+- cupy<13.4 # Version determined from cudf
 - datacompy=0.10
 - dill=0.3.7
 - docker-py=5.0
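The `<13.4` pin added across these files can be checked programmatically. Below is a minimal sketch using the `packaging` library to test which CuPy versions satisfy the new constraint; this script is illustrative only and is not part of the commit:

```python
# Sketch: verify which CuPy versions satisfy the new "<13.4" pin.
# Requires the third-party `packaging` library (pip install packaging).
from packaging.specifiers import SpecifierSet
from packaging.version import Version

pin = SpecifierSet("<13.4")

for candidate in ["12.3.0", "13.3.0", "13.4.0"]:
    allowed = Version(candidate) in pin
    print(f"cupy=={candidate}: {'allowed' if allowed else 'excluded'}")
```

Note that `13.4.0` itself is excluded, since `Version("13.4.0")` compares equal to `13.4`; the pin keeps the environments on the `13.3.x` series and earlier.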
@@ -413,7 +413,7 @@ else:
 pipeline.add_stage(RecipientFeaturesStage(config))
 ```
 
-To tokenize the input data we will use Morpheus' `PreprocessNLPStage`. This stage uses the [cuDF subword tokenizer](https://docs.rapids.ai/api/cudf/legacy/user_guide/api_docs/subword_tokenize/#subwordtokenizer) to transform strings into a tensor of numbers to be fed into the neural network model. Rather than split the string by characters or whitespaces, we split them into meaningful subwords based upon the occurrence of the subwords in a large training corpus. You can find more details here: [https://arxiv.org/abs/1810.04805v2](https://arxiv.org/abs/1810.04805v2). All we need to know for now is that the text will be converted to subword token ids based on the vocabulary file that we provide (`vocab_hash_file=vocab file`).
+To tokenize the input data we will use Morpheus' `PreprocessNLPStage`. This stage uses the [cuDF subword tokenizer](https://docs.rapids.ai/api/cudf/stable/pylibcudf/api_docs/nvtext/subword_tokenize/) to transform strings into a tensor of numbers to be fed into the neural network model. Rather than split the string by characters or whitespaces, we split them into meaningful subwords based upon the occurrence of the subwords in a large training corpus. You can find more details here: [https://arxiv.org/abs/1810.04805v2](https://arxiv.org/abs/1810.04805v2). All we need to know for now is that the text will be converted to subword token ids based on the vocabulary file that we provide (`vocab_hash_file=vocab file`).
 
 Let's go ahead and instantiate our `PreprocessNLPStage` and add it to the pipeline:

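The subword tokenization the updated docs describe can be illustrated with a toy greedy longest-match (WordPiece-style) tokenizer. This is a simplified sketch with an invented vocabulary, not the cuDF subword tokenizer itself, which loads a hash table built from a BERT vocabulary file and runs on the GPU:

```python
# Toy WordPiece-style subword tokenizer: greedy longest-match, left to right.
# The vocabulary below is made up for illustration purposes only.
def subword_tokenize(word, vocab):
    """Split `word` into the longest matching vocabulary pieces."""
    tokens, start = [], 0
    while start < len(word):
        # Try the longest remaining substring first, shrinking from the right.
        for end in range(len(word), start, -1):
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                tokens.append(piece)
                start = end
                break
        else:
            return ["[UNK]"]  # No vocabulary piece matched: unknown token.
    return tokens

vocab = {"token", "##ize", "##ization", "play", "##ing"}
print(subword_tokenize("tokenization", vocab))  # ['token', '##ization']
print(subword_tokenize("playing", vocab))       # ['play', '##ing']
```

In a real pipeline each subword is then mapped to an integer id from the vocabulary, producing the tensor that `PreprocessNLPStage` feeds to the model.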