
build(devcontainer): 🧑‍💻 add support for reproducible env #141

Open · wants to merge 9 commits into main
Conversation

@CandiedCode CandiedCode commented Apr 18, 2024

This PR is a continuation of #127 (comment)

Because much of the change is in JSON, the diff looks larger than it is. To keep this PR from growing further, I focused on only two notebooks, XGBoost and PyTorch. The other two can be updated to include tests after this PR has landed.

  • create a dev container with initial setup for Python/Poetry/pre-commit
  • mount the AWS credentials file that the PyTorch notebook expects
    Note: I used the same key and secret AWS uses in their example, seen here
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
  • support black for notebooks and format the notebooks
  • add nbmake tests to the PyTorch and XGBoost notebooks
  • add a GitHub Action to run the nbmake tests via the devcontainer setup
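For the credentials mount, a devcontainer.json entry might look roughly like this (a sketch only; the paths and container name are illustrative, not necessarily what this PR uses):

```json
{
  "name": "modelscan-dev",
  "mounts": [
    // bind-mount the host's ~/.aws so the PyTorch notebook can read the
    // example credentials (devcontainer.json is JSONC, so comments are allowed)
    "source=${localEnv:HOME}/.aws,target=/home/vscode/.aws,type=bind"
  ]
}
```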

The nbmake tests live in the metadata of the notebook cells, so they are not visible in GitHub's pull request view unless you look at the raw JSON of the notebook or review the PR in VS Code with the GitHub Pull Requests extension.

Example of reviewing the PR with the GitHub Pull Requests VS Code extension (screenshot)
Example of nbmake tests in notebooks

Example of a deliberately failing test, used to verify that nbmake was working


Test model file exists


Test model is working as expected


Test that the model scan gives the expected results, by generating a JSON report and asserting on its values


Test that the unsafe-model report contains the expected issues

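The assertions above can be sketched roughly like this. The report shape below is hypothetical, for illustration only; modelscan's actual JSON schema and the notebook's real file names may differ:

```python
import json
import os
import tempfile

# Hypothetical report shape for illustration; modelscan's actual JSON
# schema may differ -- adapt the keys to the real report.
report = {
    "summary": {"total_issues": 2},
    "issues": [
        {"severity": "CRITICAL", "description": "use of unsafe operator"},
        {"severity": "CRITICAL", "description": "use of unsafe operator"},
    ],
}

with tempfile.TemporaryDirectory() as tmp:
    model_path = os.path.join(tmp, "model.pt")
    report_path = os.path.join(tmp, "modelscan-report.json")

    # Stand-ins for the notebook's artifacts: the saved model file and
    # the JSON report written by the scan step.
    open(model_path, "wb").close()
    with open(report_path, "w") as f:
        json.dump(report, f)

    # Test: model file exists.
    assert os.path.exists(model_path)

    # Test: the scan report contains the expected issues.
    with open(report_path) as f:
        loaded = json.load(f)
    assert loaded["summary"]["total_issues"] == 2
    assert all(i["severity"] == "CRITICAL" for i in loaded["issues"])
```

In the notebooks themselves, these asserts live in ordinary code cells, so a failing assertion makes nbmake fail the cell and the test run.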

Note

While writing tests for the PyTorch sentiment notebook and comparing the scan results printed to the console against the results in the JSON report, I noticed the console showed skipped files.

Shouldn't that information also be part of the summary in the JSON report?

Also, if you like the Test / test-notebooks workflow, I'd recommend making it one of the required checks, since it also acts as an end-to-end test that runs the model scan.
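For reference, the wiring for running the nbmake tests inside the dev container might look roughly like this. This is a sketch, not the PR's exact workflow file: it assumes the devcontainers/ci action and an illustrative notebook path:

```yaml
# Sketch of a workflow that runs nbmake inside the dev container;
# the job name and notebook path are illustrative.
name: test-notebooks
on: [pull_request]
jobs:
  test-notebooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: devcontainers/ci@v0.3
        with:
          runCmd: pytest --nbmake notebooks/
```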
