Run pytest inside the docker container #444
Conversation
This is needed for the tests to work properly inside the container.
Ok, so up to now it is possible to run the tests by entering the awb (aiidalab-widgets-base) folder and executing the following:

```shell
docker build -t aiidalab/full-stack-with-browsers:latest tests/
JUPYTER_TOKEN=$(openssl rand -hex 32)
DOCKER_ID=$(docker run -it -d --rm -v $(pwd):/home/jovyan/apps/aiidalab-widgets-base -e JUPYTER_TOKEN=${JUPYTER_TOKEN} aiidalab/full-stack-with-browsers:latest)
docker exec -it ${DOCKER_ID} bash -c "cd apps/aiidalab-widgets-base && pip install -e .[dev,smiles]"
docker exec -it ${DOCKER_ID} bash -c "cd apps/aiidalab-widgets-base && pytest --driver Chrome -s"
docker stop ${DOCKER_ID}
```

I haven't yet figured out how to convert this script to a GitHub action, but I am going to try that now.
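For reference, the script above could be translated into a GitHub Actions job roughly along these lines. This is a sketch, not a tested workflow: the file name and job id are placeholders, and the `-it` flags are dropped because CI steps have no TTY.

```yaml
# .github/workflows/docker-tests.yml (hypothetical name)
name: Tests in Docker

on: [push, pull_request]

jobs:
  docker-pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build test image
        run: docker build -t aiidalab/full-stack-with-browsers:latest tests/
      - name: Start container
        run: |
          JUPYTER_TOKEN=$(openssl rand -hex 32)
          DOCKER_ID=$(docker run -d --rm \
            -v ${GITHUB_WORKSPACE}:/home/jovyan/apps/aiidalab-widgets-base \
            -e JUPYTER_TOKEN=${JUPYTER_TOKEN} \
            aiidalab/full-stack-with-browsers:latest)
          # make the container id available to later steps
          echo "DOCKER_ID=${DOCKER_ID}" >> "$GITHUB_ENV"
      - name: Run tests
        run: |
          docker exec ${DOCKER_ID} bash -c "cd apps/aiidalab-widgets-base && pip install -e .[dev,smiles]"
          docker exec ${DOCKER_ID} bash -c "cd apps/aiidalab-widgets-base && pytest --driver Chrome -s"
      - name: Stop container
        if: always()
        run: docker stop ${DOCKER_ID}
```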
This is a great effort, but I am just wondering: are we sure we want to introduce this much additional complexity to the current test suite? Btw, when I was thinking about this problem for my own app, I could not wrap my head around how to get code coverage data from a Jupyter Python kernel. Have you been able to solve that issue?
I am reducing complexity actually. Everything happens inside the container.
It looks like that is not possible when using Selenium. But nbmake does help here, see my PR #441: it executes the notebook cells as a normal Python interpreter would.
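For context, nbmake is distributed as a pytest plugin, so running the notebooks looks roughly like the following (a sketch; the glob pattern is an assumption about where the notebooks live):

```
pip install nbmake
pytest --nbmake "**/*.ipynb"
```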
But, I have to admit, I struggle with the GitHub Actions file. It is simply not possible to easily convert the script above into GitHub Actions and let it run. One gets errors that one doesn't even expect. For example, currently, I don't understand why this is happening:

```
running egg_info
creating aiidalab_widgets_base.egg-info
error: could not create 'aiidalab_widgets_base.egg-info': Permission denied
[end of output]
```

Why can I not modify a folder mounted into the container...?
I was referring to the fact of having to build an extra Docker image and extra GitHub actions. :-) Also, even if this works, without the Selenium interaction the code coverage will not be complete anyway, right? Just executing the notebook is not enough, since some code paths only get executed through the Selenium interactions. An alternative would be to focus on more unit-style testing without the notebooks and measure that. Of course that has issues as well. 🤷
Let me explain my intention, because I sort of fail to see why it is more complex. Here is what we have now:
So here are the options:
Here is what I am trying to do
So my conclusion: it is fine to keep the Selenium tests, but we use them more as smoke tests. The same notebooks run as smoke tests can be run with
It is not clear to me what an alternative would be. But if you have an idea, please share.
This seems like a false dichotomy. I would just separate the different styles of tests into separate GitHub jobs.
Hey, sorry for intruding. nbmake maintainer here.
Re the container permissions issue: I've had a lot more success using the Devcontainer CLI in GitHub Actions.
Re coverage: I am not aware of anybody producing coverage data via nbmake. My initial belief is that you won't see any coverage because, under the hood, we launch a kernel and message it over the local network. Keen to know if I'm wrong!
Re testing widgets: what you have put in place for this project is very impressive, and I am aware that others are trying to do the same. Do you recommend this pattern of using Selenium? Could we improve it together? (Issue)
@alex-treebeard Thanks a lot for chiming in and for the very useful advice!
I think it deserves a try and it will allow us to drop
For the PR itself, I tend to agree with @danielhollas here that, for the test coverage, it makes more sense to focus on unit tests. I made a first attempt in #435, where I separate the
Hi @alex-treebeard 👋 thanks for the insightful comments!
Yeah, that's what I was afraid of. I did a bit of googling and found another similar project, nbval, which claims to be able to generate coverage for code imported into the notebook via integration with pytest-cov. Do you think that is something nbmake could support in the future?
I don't want to speak for others, but it is a new technology for me. It is great when it works, and it has already caught some bugs for me, but it is a bit hard to get right. It is very easy to create a flaky test if one is not careful, and the Python API documentation is not the best. It would be great to see other examples of people doing this. We also might be unlucky in that one of the widgets here does 3D rendering, which brings its own challenges. You can find some more tests in my repo:
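On the flakiness point: a common cause is asserting on the page before the kernel has finished rendering. Selenium's own `WebDriverWait` handles this for DOM elements; the underlying idea, polling for a condition instead of sleeping a fixed amount, can be sketched with the stdlib only (a hypothetical helper, not code from this repo):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds elapse. Returns the truthy value, else raises TimeoutError."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)
```

In a Selenium test this might be used as, e.g., `wait_until(lambda: driver.find_elements(By.CLASS_NAME, "widget-output"))` (names assumed for illustration), rather than a hard-coded `time.sleep`.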
Thanks, everybody, for the insightful comments. Indeed, I was wrong regarding the test coverage produced by
At the moment, I can't continue with this idea due to time constraints. Even if we decide to come back to it at some point, it will require careful thinking. So I am closing the PR.
fixes #443