implement multiple .readthedocs.yml files per repo #10001
Conversation
Thanks for opening this PR 💯. We had a long conversation about supporting this feature in #8811 and didn't arrive at a consensus at that time. We decided to de-prioritize this work because we weren't sure what the right direction was. We decided to start with a Sphinx extension and check its adoption (https://github.com/readthedocs/sphinx-multiproject/) before deciding whether or not to re-prioritize this work. I'll try to use your PR and energy to revive that conversation with the rest of the team and see if we can move forward with at least a simple, basic implementation.
Thanks for helping clarify! Sorry I missed the longer issue 🙃. Feel free to close this PR if this isn't the direction y'all want to go :)
@ewdurbin I like the approach; my thoughts were around maintaining reproducibility. This approach will work well from the beginning, and without future debt, if the configuration path ( There will be a number of scenarios to cater for, some now and some later. Here's a bit of a raw dump of things that I find to be actionable... I'll need to hear back on whether others think this can make the approach work:
Edited with updates:
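To make the scenarios above concrete, here is a sketch of what a monorepo using this feature could look like. The repository layout, project names, and the config contents are hypothetical; only the keys follow the documented `.readthedocs.yaml` v2 schema:

```yaml
# Hypothetical layout: one repository, two Read the Docs projects, each
# pointing its custom configuration-file path setting at a different file.
#
#   repo/
#   ├── docs/dev/.readthedocs.yaml    # project "example-dev"  (Sphinx)
#   └── docs/user/.readthedocs.yaml   # project "example-user" (MkDocs)
#
# docs/user/.readthedocs.yaml could then read:
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

mkdocs:
  configuration: docs/user/mkdocs.yml
```

Note that paths inside each config file (like `mkdocs.configuration`) are still resolved relative to the repository root, so the two files can coexist without ambiguity.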
Thanks for the comprehensive overview @benjaoming. Checking in here to see if there's any appetite for getting this across the line. We're actively interested in publishing multiple doc sets at PyPI (built with multiple tools, so the multi-Sphinx plugin won't work :() and this is blocking that.
@ewdurbin Thanks for the ping. We definitely want to make sure we can get this working for y'all. I will bring it up in our roadmap planning call tomorrow. 👍
@ewdurbin We have talked about this, and have put it on our upcoming sprint. Can you share a bit more about the use case you have here? I looked through the PyPI issue, but didn't see the exact problem. Is this mainly around multiple MkDocs projects, or MkDocs and Sphinx mixed?
We currently have developer-focused docs that are being published via Read the Docs to warehouse.pypa.io. We intend to add end-user documentation and a blog hosted out of the same repo, each on their own domain. The reasoning for building multiple disparate sets of docs out of pypi/warehouse is to allow us to deliver code, docs, and blog changes in a single merge, allowing them to be reviewed side by side. As for the mixture of doc-building tools, we're just a little burnt out on Sphinx and are trying to move away from it 😅
@ewdurbin Ha, no worries on Sphinx -- it's not the perfect tool for everything :) We're also using things like Pelican on RTD these days 🐦 Just trying to understand the use case more, and make sure we're building something that will fit it.
@stsewd @humitos Curious what y'all think about moving forward with this approach, but behind a feature flag? So basically, this field only shows up on a project after we enable it for that project. That at least unblocks this for users, gives us a point of contact for anyone who needs this feature, and gets us some initial data on how folks are using it. If we need to change the approach in the future, at least folks will know they opted into this option.
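For illustration, gating the new field behind a per-project flag could look like the following sketch. This is not Read the Docs' actual feature-flag API; `has_feature`, the flag name, and the field names are all placeholders:

```python
# Illustrative sketch only: the flag and field names below are made up and
# do not correspond to Read the Docs' real models or forms.

MULTIPLE_CONFIG_FILES = "multiple_config_files"


class Project:
    def __init__(self, features=()):
        self.features = set(features)

    def has_feature(self, flag):
        # Placeholder for a real per-project feature-flag lookup.
        return flag in self.features


def settings_form_fields(project):
    """Only expose the custom config-file path once the flag is enabled."""
    fields = ["name", "default_branch"]
    if project.has_feature(MULTIPLE_CONFIG_FILES):
        fields.append("readthedocs_yaml_path")
    return fields
```

The upside of this shape is that the opt-in is explicit and queryable, so it's easy to find every project using the feature if the approach needs to change later.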
@ericholscher maybe we want to continue the discussion at #8811?
We did a call here, and are 👍 on moving forward with this approach (#8811 (comment)).
Nice! I'll put some time into fleshing this out with tests/docs later this week.
This feels reasonably close. I need to do a bit more review, but I noted a few places that seemed good to clean up.
I can take over this PR if you don't want to push it over the finish line. We normally deploy on Tuesdays, so I think we could aim to get this out next week.
@ericholscher I wouldn't be circling back to this until probably Friday afternoon, but I am 100% OK with you taking this over.
@ewdurbin Great. I'll see if someone on our side can get to it this week 👍
@ewdurbin I'm gonna have a look here and might be pushing changes to the PR today. LMK if we're overlapping, but I'm guessing your afternoon is happening after I've signed off 👍
@benjaoming I'm well over 5 hours from when I'd be poking at this, so no overlap likely :)
A quick heads up about some recent pre-commit changes from #10105. Basically, just ignore all linting errors that are unrelated to your changes, and run
Nice work btw @ewdurbin 🎉 The changes are working: we can build with a custom configuration file path, and the build fails when the path doesn't work. I'd like to know whether we should inject a pseudo-command after checkout that just prints something like "Using custom config file: path.yaml".
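The suggested pseudo-command could be as simple as appending a synthetic log entry to the command list after checkout. A minimal sketch follows; the function and the argv-style command representation are hypothetical, not the actual builder internals:

```python
def inject_config_notice(commands, config_path):
    """Append a no-op notice command after checkout so the build output
    shows which configuration file was picked up.

    `commands` is assumed to be a list of argv-style command lists, which
    is an illustrative stand-in for the real build-command pipeline.
    """
    if config_path:
        commands.append(["echo", f"Using custom config file: {config_path}"])
    return commands
```

Surfacing the chosen path in the build output would make misconfigured custom paths much easier to debug from the build log alone.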
Sorry about the force-push there; I'm running a bit short on time, and while I thought I had a quick fix for the linting issues, I got stuck with a bunch of unclean changes after resetting and re-running pre-commit.
I've added a test project here: https://github.com/readthedocs/test-builds/tree/monorepo It seems to work for project1; just sorting out why project2 is failing atm.
One note on the new test project: we should try out PR builds to see how the feedback works. I'm supposing that appending the project name will make it show up nicely. I'm assuming it's going to work fine and that this can be a post-deployment inspection.
Just finished running a local test on Read the Docs for Business and it works without modifications 👍 The only issue is that the manual import isn't exposed, but I think that's an entirely separate issue.
Looks good to me. I made some comments and suggestions, but I think nothing is blocking.
I want to say that I'm still worried/concerned about these changes, because they touch a lot of core places that are not easy to test via our testing framework. I recommend doing as much QA locally as possible to avoid deploy surprises and/or downtime. This implies, among other things, testing the `monorepo` branch from test-builds, but also as many branches from the same repository as possible, making sure that Conda, different Python versions, `build.commands`, etc. work as expected.
Co-authored-by: Manuel Kaufmann <[email protected]>
Just building a bunch of branches from test-builds locally now... if that all works, I'll go ahead and merge this 👍
**Very, very niche issue:** While testing, I discovered this annoying little "race condition" bug, but it's mostly relevant for testers and not so likely in real life. It's a variation of #9942.

If you only have the .readthedocs.yaml file in a special (non-default) branch, and you configure the custom path BEFORE a newly added project has finished its first successful build, then a state of "no branches detected" occurs. This is probably because the Advanced Settings form overwrites the default branch with an empty select input (whatever custom branch was defined hasn't yet been made part of the eligible branches, because the build hasn't finished). The workaround is to leave the setting blank, create a successful build, and then change the .readthedocs.yaml path after that 🤯 Related: #9606
Actually, the above issue might occur in real life if someone is manually importing a couple of new projects quickly -- it seems there are ways to make the checkout step break in the first build, and the only way to recover is to have a functional default branch set up. So re-visiting Advanced Settings might be required in order to make the first build complete the checkout step. Seeing that it's pretty hard for me to even write the issue down clearly, I think it might be worth noting in #9942 if this is something you run into in the future, @ewdurbin.
@stsewd @ericholscher I was advised by @humitos to try some different existing builds to see if they still work. I found a Conda environment build that works fine on readthedocs.org but fails locally: https://readthedocs.org/projects/test-builds/builds/20117124/ I'm not able to understand what could have gone wrong from the local build log.
What I tried:
Could someone else look into this?
Have you tested the branch
It seems you are compiling Conda on each build. You should take a look at `scripts/compile_version_upload_s3.sh` in readthedocs.org (lines 4 to 9 at 60c74d5).
@humitos that worked 👍 I wasn't aware that the tools had to be bootstrapped like that. The Development Install guide doesn't mention it: https://dev.readthedocs.io/en/latest/install.html - it would be useful to have a list (or script) that quickly bootstraps the most commonly used tools.
I'm unable to come up with new ways that this might fail, so I'm merging it 🤞 @ewdurbin I'm guessing you might be very busy with PyCon, and that's great ❤️ The changes are scheduled to be deployed tomorrow (I can follow up with a final confirmation on that), so if you do get around to trying it out, please send me a note here or in #8234 - or open a new issue if something is borked or sub-optimal 😇
I'll be at PyCon 😎, so if you are around we can take a look at any issue you may have with this during the sprints 😉
There's a little public example of the new feature available here: |
An attempted implementation of #8234.
Opening as draft PR. If the approach is reasonable, I'm happy to push this across the line with tests and docs.
Also, as is, I'm unable to get a clean test-suite pass locally on main... so I'm hoping CI will show me what else I missed.