
Track Maintenance: will journey-script-tests continue to work? #240

kytrinyx opened this issue Aug 22, 2017 · 12 comments

@kytrinyx
Member

Imported from exercism/DEPRECATED.v2-feedback#64.
Originally posted by @NobbZ.

From my point of view as a user, the process of retrieving exercises has changed massively; I now even need approval by a mentor.

But as a track maintainer who uses a journey script for testing the track, this worries me.

These journey scripts set up a mini-exercism on Travis which only knows about a single track, and the CLI is used to fetch the exercises from that mini-exercism.

This way of testing not only ensures that the tests and example solutions work, but also that all files are served correctly (not getting mangled because of a symlink, accidentally matching an ignore pattern, etc.).

Will such a script still be possible with the nextercism CLI and API? I don't mind changing it a bit to suit the new CLI commands, or because the mini-exercism has to be set up differently, I mean in general… Or do I need to go back to a script which simply walks the file tree and runs the tests without fetching them?
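For context, a minimal sketch of the core check such a journey script provides: after the exercises have been fetched (however that happens), compare what was served against what the track repo actually contains. The directory names, the v2-style config.json shape (an "exercises" array of objects with a "slug" field), and the list of intentionally unserved files are all assumptions for illustration, not any track's actual tooling.

```python
#!/usr/bin/env python3
"""Hypothetical journey-style check: did the files survive the round trip?

Assumes the exercises were already fetched (e.g. by the CLI from a local
mini-exercism) into FETCHED_DIR, and that config.json has a v2-style
"exercises" array of objects with a "slug" field. All names are placeholders.
"""
import filecmp
import json
import sys
from pathlib import Path

TRACK_DIR = Path("exercises")   # exercises as they live in the track repo
FETCHED_DIR = Path("fetched")   # exercises as delivered to the user

# Files the track deliberately does not serve (e.g. example solutions) would
# need to be listed here; this is a guess, real journey scripts differ.
NOT_SERVED = {"example.ext"}

def check_exercise(slug: str) -> list[str]:
    """Report files that went missing or got mangled between repo and fetch."""
    problems = []
    repo_dir = TRACK_DIR / slug
    for repo_file in repo_dir.rglob("*"):
        if not repo_file.is_file() or repo_file.name in NOT_SERVED:
            continue
        relative = repo_file.relative_to(repo_dir)
        fetched_file = FETCHED_DIR / slug / relative
        if not fetched_file.exists():
            problems.append(f"{slug}/{relative}: not served (ignore pattern? symlink?)")
        elif not filecmp.cmp(repo_file, fetched_file, shallow=False):
            problems.append(f"{slug}/{relative}: content changed in transit")
    return problems

if __name__ == "__main__":
    config = json.loads(Path("config.json").read_text())
    slugs = [exercise["slug"] for exercise in config["exercises"]]
    failures = [problem for slug in slugs for problem in check_exercise(slug)]
    print("\n".join(failures) or "all served files match the repo")
    sys.exit(1 if failures else 0)
```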

@kytrinyx
Member Author

@kytrinyx said:

This is a really good observation, @NobbZ!

I would like to think about a way to make it possible to spin up a very small API so that the journey tests still work.

I need to think for a bit about the best way to accomplish it, but it would be very valuable to have a journey test that we could plug any track into and interact with each of the exercises.

@kytrinyx
Member Author

@petertseng said:

Now that there is less dynamic work being done (no more README generation), it is possible that many problems that were previously only checkable via the journey test are now testable with tools that do not require the API.

For example, the most pressing issue was that tracks may intend to have an exercise based off of problem-specifications but have misspelled the exercise slug. This is no longer a problem.

Knowing the nature of the remaining problems where it is useful to have the API will help prioritise this.

@kytrinyx
Member Author

@NobbZ said:

But this might still discover accidental symlinking in a repo instead of copying.

@kytrinyx
Member Author

@kytrinyx said:

But this might still discover accidental symlinking in a repo instead of copying.

Is this something that configlet could figure out? Or asked differently, do you have an example of what that looks like?
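For what it's worth, a minimal sketch of what such a check could look like, whether in configlet or as a separate lint step: walk the exercise tree and flag anything that is a symlink rather than a real copy. The "exercises" directory name is an assumption; this is not configlet's actual implementation.

```python
#!/usr/bin/env python3
"""Minimal symlink lint: flag links that should have been copies.

Not configlet's actual behaviour; just a sketch of the check, assuming the
exercises live under an "exercises" directory in the track repo.
"""
import sys
from pathlib import Path

def find_symlinks(root: Path) -> list[Path]:
    # rglob("*") yields every entry below root; is_symlink() catches both
    # symlinked files and symlinked directories.
    return sorted(path for path in root.rglob("*") if path.is_symlink())

if __name__ == "__main__":
    offenders = find_symlinks(Path("exercises"))
    for path in offenders:
        print(f"symlink instead of a copy: {path}")
    sys.exit(1 if offenders else 0)
```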

@NobbZ
Member

NobbZ commented Jul 25, 2018

I think we can close this.

I've already had issues because of this and am now back to a classical test runner.

@kytrinyx
Member Author

@exercism/kotlin @exercism/java I think it would be worth discussing how best to integration-test tracks.

@FridaTveit

Sounds like a good idea! 🙂 On the Java track we were using the exercism CLI to run the tests before merging pull requests; will that definitely not be possible anymore?

@NobbZ
Member

NobbZ commented Jul 27, 2018

As I had pending PRs on Erlang and wanted a quick solution, I didn't delve into it.

The original error was that I was unable to use fetch for the exercise.

And since I knew about mentoring, independent mode, mentored mode, approval and so on, I did not check any further.

I simply assumed that I would at least need web-page interaction to choose independent mode, and was not sure how best to do that.

This is why I just switched to a classical runner that simply iterates over all exercises defined in config.json.
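A sketch of what that kind of classical runner boils down to, assuming a v2-style config.json with an "exercises" array of objects carrying a "slug"; the test command is a placeholder that differs per track (rebar3, Gradle, Maven, ...).

```python
#!/usr/bin/env python3
"""Sketch of a "classical" runner: no API, no CLI, just the repo itself.

Assumes a v2-style config.json with an "exercises" array of objects carrying
a "slug", and one test command that works in every exercise directory. Both
assumptions are placeholders; real tracks wire in their own build tool.
"""
import json
import subprocess
import sys
from pathlib import Path

TEST_COMMAND = ["make", "test"]   # placeholder: whatever the track actually uses

def main() -> int:
    config = json.loads(Path("config.json").read_text())
    failed = []
    for exercise in config["exercises"]:
        slug = exercise["slug"]
        exercise_dir = Path("exercises") / slug
        # Run the track's test suite against the example solution in place.
        result = subprocess.run(TEST_COMMAND, cwd=exercise_dir)
        if result.returncode != 0:
            failed.append(slug)
    for slug in failed:
        print(f"FAILED: {slug}", file=sys.stderr)
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```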

@FridaTveit

We've experienced issues on the Java track as well. It looks like download requires a token, and we get some errors to do with x-api.

I was wondering, more a question for @kytrinyx, if this is something you're planning to make possible? If it were, that would require fewer changes on our side and would let us keep testing the whole journey 🙂

But if it's going to be very difficult to implement with the exercism CLI, or if there's just not enough time for anyone to do it, then we could change our scripts like you have, @NobbZ 🙂 I think that would still test the most important parts of the track? What do you think, @sjwarner-bp?

@kytrinyx
Member Author

I've no plans to make it possible to download an exercise without a token -- that would require changes to the API which are not in line with the product.

What I'd love to discuss is what we are ultimately trying to solve with the journey tests, and how we can best solve that (that might be tweaks that will let the journey tests run, or it could be some other approach).

@FridaTveit

Okay. We've now changed our journey test to be a more classical test runner that doesn't use the exercism CLI. Using the CLI would give more thorough tests, but at the moment I can't think of any particular issues it would find that a classical test runner wouldn't, so I'm happy sticking with that 🙂

@iHiD iHiD transferred this issue from exercism/exercism Jun 26, 2019
@NobbZ
Member

NobbZ commented Jun 26, 2019

As in the meantime all tracks that had a journey script in place have changed their testing to simply iterate over the exercises folder or read config.json to extract the exercises, I think we can close this issue?

@iHiD iHiD changed the title track-maintanace: will journey-script-tests continue to work? Track Maintenance: will journey-script-tests continue to work? Jun 27, 2019