
Fix generation using Jetstream Pytorch #94

Merged 6 commits into main on Sep 23, 2024

Conversation

@tengomucho (Collaborator) commented on Sep 18, 2024

What does this PR do?

Text generation was not correct because the model weights were not being loaded correctly. This was not easy to spot just by looking at a few generated tokens, and it had actually already been fixed in the Jetstream/Pytorch code, but the fix had not been ported to optimum-tpu.

This fix implements the necessary weight changes, aligning with Jetstream Pytorch, and the tests' expected outputs have been modified accordingly.
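For context, a classic pitfall in this kind of weight loading (a hedged illustration, not necessarily the exact change in this PR) is that Hugging Face Llama checkpoints store q_proj/k_proj with the head dimensions interleaved for HF's rotary-embedding implementation, so they must be un-permuted before being loaded into the original Llama layout. A minimal sketch:

```python
import torch

def unpermute(w: torch.Tensor, n_heads: int, dim1: int, dim2: int) -> torch.Tensor:
    # Invert the head-wise interleaving that the HF conversion script applies
    # to q_proj/k_proj weights, restoring the original (Meta) rotary layout.
    return w.view(n_heads, 2, dim1 // n_heads // 2, dim2).transpose(1, 2).reshape(dim1, dim2)

# Hypothetical usage when mapping an HF state dict onto the original model:
# state_dict[f"layers.{i}.attention.wq.weight"] = unpermute(
#     hf_state_dict[f"model.layers.{i}.self_attn.q_proj.weight"], n_heads, dim, dim
# )
```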

I plan to submit a change directly to the Jetstream Pytorch project so we can reuse their modeling code and avoid surprises like this one in the future. EDIT: I added a commit that uses the original model, so there will be no need to request a change.

Also, I changed the way we install Jetstream Pytorch to make it more reliable, as the pip install with the git revision was sometimes failing.

@tengomucho force-pushed the investigate-generation branch 2 times, most recently from edb7ee3 to a8cad0a on September 18, 2024 15:43
@tengomucho marked this pull request as ready for review on September 18, 2024 19:47
The main workflow was failing due to an OS error. I suspect it is related to a disk space problem. Separating the workflow will make it easier to analyse this issue.
I was previously referencing a given git revision and installing from GitHub, but since the Jetstream Pytorch package installs its dependencies from its git submodules, these are installed in temporary directories that can disappear afterwards. This happened on CI, making the installation fail.

To work around that, a dedicated install script has been added, and it is now used to install Jetstream Pytorch.
Since this is error-prone, a better solution is just to use the original Jetstream Pytorch model directly. This hadn't been done before mainly because the model config no longer contains some of the params (ffn_dim_multiplier and multiple_of). We do have intermediate_size though, and that is enough to reconstruct parameters that end up producing the same calculation, as the sketch below shows.
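For reference, a minimal sketch (assuming Llama's original FFN sizing; the function name is mine, not necessarily the optimum-tpu code) of how equivalent parameters can be reconstructed from intermediate_size:

```python
def ffn_params_from_intermediate_size(hidden_size: int, intermediate_size: int):
    # Llama's original FFN sizing:
    #   hidden_dim = int(2 * (4 * hidden_size) / 3)
    #   hidden_dim = int(ffn_dim_multiplier * hidden_dim)
    #   hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
    base = int(2 * (4 * hidden_size) / 3)
    ffn_dim_multiplier = intermediate_size / base
    # Rounding up to a multiple of intermediate_size itself absorbs any float
    # truncation in the multiplier step, so the final hidden_dim lands exactly
    # on intermediate_size.
    multiple_of = intermediate_size
    return ffn_dim_multiplier, multiple_of
```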

This refactor should allow future code to follow Jetstream/Pytorch changes more easily.
@mfuntowicz (Member) left a comment

LGTM ✅

@@ -330,6 +331,9 @@ def warmup(self, batch: Batch) -> int:
         # Counter-intuitively, now we ignore the input batch. Instead, we create dummy batches to cover all possible
         # batch sizes and sequence lengths.
         seq_len = self.model.config.sequence_length
+        if os.environ.get("SKIP_WARMUP", "0") == "1":
+            logger.debug("Skipping warmup")
@mfuntowicz (Member) commented:

Is this used mostly for debugging only, or can it be turned on for other reasons? In the latter case I would use logger.warning; if not, logger.debug is fine.

@tengomucho (Collaborator, Author) replied:

I would say it's only for debugging. Warmup checks whether the model can fit in memory and prepares inference so that prefill and decode are fast afterwards, but it can take around 4-5 minutes, which can be annoying when debugging the container.
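A hedged usage sketch, based only on the env check shown in the diff above:

```python
import os

# Skip the ~4-5 minute warmup while iterating on the server locally;
# must be set in the server's environment before warmup() runs.
os.environ["SKIP_WARMUP"] = "1"
```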

@tengomucho merged commit 094d8a8 into main on Sep 23, 2024
3 checks passed
@tengomucho deleted the investigate-generation branch on September 23, 2024 10:03