Set torch upper bound to <2.1.0 (#363)
* Set torch upper bound to <2.1.0

Some changes in PyTorch 2.1.0 and later are incompatible with
Curated Transformers 1.x. Fixing these issues would require
API changes, so we set an upper bound on the supported PyTorch
versions (a version-check sketch follows below). We will soon
release Curated Transformers 2.0.0, which is compatible with
the latest PyTorch versions.

* black
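As context for the pin (this is not part of the commit): a minimal sketch of how a package could fail fast when the installed PyTorch falls outside the supported range. The `_check_torch_version` helper is hypothetical; only `torch.__version__` and the `packaging` library are assumed.

# Hypothetical guard, not from curated-transformers: reject unsupported
# PyTorch versions at import time instead of failing later with API errors.
import torch
from packaging.version import Version


def _check_torch_version() -> None:
    # Compare the base release so local builds such as "2.0.1+cu118" parse.
    installed = Version(Version(torch.__version__).base_version)
    if not (Version("1.12.0") <= installed < Version("2.1.0")):
        raise ImportError(
            "curated-transformers 1.x requires torch>=1.12.0,<2.1.0, "
            f"but found torch {torch.__version__}"
        )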
danieldk authored Feb 11, 2024
1 parent cd53833 commit 4055d7e
Showing 5 changed files with 10 additions and 12 deletions.
8 changes: 5 additions & 3 deletions curated_transformers/models/falcon/layer.py
@@ -70,9 +70,11 @@ def __init__(
                 n_key_value_heads=attention_config.n_key_value_heads,
             ),
             rotary_embeds=rotary_embeds,
-            qkv_mode=QkvMode.MERGED_SPLIT_AFTER
-            if attention_config.n_key_value_heads == 1
-            else QkvMode.MERGED_SPLIT_BEFORE,
+            qkv_mode=(
+                QkvMode.MERGED_SPLIT_AFTER
+                if attention_config.n_key_value_heads == 1
+                else QkvMode.MERGED_SPLIT_BEFORE
+            ),
             use_bias=attention_config.use_bias,
             device=device,
         )
4 changes: 1 addition & 3 deletions curated_transformers/tests/models/util.py
@@ -58,9 +58,7 @@ class JITMethod(Enum):
     TorchCompile = 1
     TorchScriptTrace = 2
 
-    def convert(
-        self, model: Module, with_torch_sdp: bool, *args
-    ) -> Tuple[
+    def convert(self, model: Module, with_torch_sdp: bool, *args) -> Tuple[
         Union[Module, torch.ScriptModule],
         Callable[[Union[ModelOutput, Dict[str, torch.Tensor]]], Tensor],
     ]:
6 changes: 2 additions & 4 deletions curated_transformers/tokenizers/legacy/legacy_tokenizer.py
@@ -155,12 +155,10 @@ def _convert_strings(
     @abstractmethod
     def _decode(
         self, input: Iterable[Iterable[int]], skip_special_pieces: bool
-    ) -> List[str]:
-        ...
+    ) -> List[str]: ...
 
     @abstractmethod
-    def _encode(self, input: Iterable[MergedInputChunks]) -> PiecesWithIds:
-        ...
+    def _encode(self, input: Iterable[MergedInputChunks]) -> PiecesWithIds: ...
 
 
 class AddBosEosPreEncoder(PreEncoder):
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,7 +1,7 @@
 curated-tokenizers>=0.9.1,<1.0.0
 huggingface-hub>=0.14
 tokenizers>=0.13.3
-torch>=1.12.0
+torch>=1.12.0,<2.1.0
 
 # Development dependencies
 mypy>=0.990,<1.1.0; platform_machine != "aarch64"
2 changes: 1 addition & 1 deletion setup.cfg
@@ -17,7 +17,7 @@ install_requires =
     curated-tokenizers>=0.9.1,<1.0.0
     huggingface-hub>=0.14
     tokenizers>=0.13.3
-    torch>=1.12.0
+    torch>=1.12.0,<2.1.0
 
 [options.extras_require]
 quantization =
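As a quick check (not part of the commit), the pinned range behaves as expected under the `packaging` library's specifier semantics, the same semantics pip uses to resolve these requirements:

# Sketch: which PyTorch releases satisfy the new pin.
from packaging.specifiers import SpecifierSet

torch_spec = SpecifierSet(">=1.12.0,<2.1.0")
for version in ["1.11.0", "1.12.0", "2.0.1", "2.1.0"]:
    # Membership tests whether the version satisfies every clause.
    print(version, version in torch_spec)
# -> 1.11.0 False, 1.12.0 True, 2.0.1 True, 2.1.0 False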
