Use firefox and xvfb to test webgl and examples. #651
Conversation
To test this locally, make sure you have xvfb, firefox, and mesa installed. The CI tests will no longer work under Windows, as some environment variables are set in the npm commands. When you run 'npm run start' and visit 127.0.0.1:9876/debug.html, all tests (both gl and non-gl) are run. The CI tests run the unit tests in PhantomJS and the gl tests in Firefox. There was a problem running the canvas tests in Chrome via karma debug, which has been fixed. So far this has only two of the many selenium tests moved to headless testing. It is enough to show the technique.
@jbeezley and @aashish24 I'd love to get feedback on this, as I'd like to use it to add tests for the line-rendering branch. If you like this approach, we can move the selenium tests to this technique. Doing so will resolve issues #647 and #263, and allow us to close issue #405 (this doesn't use headless-gl, but I think it satisfies the intent) and possibly #218.
It looks like one of my earlier attempts at installing packages added to Travis's cache, so this will require a change to the travis file to get back to that state.
      imageTest.imageTest('choropleth', 0.001, done, myMap.onIdle, 0, 2);
    });
    it('clean up', function () {
      myMap.exit();
Generally, I like to put setup and teardown inside beforeEach and afterEach groups to reinforce that individual tests shouldn't share state.
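The suggested pattern can be sketched with a tiny stand-in runner, since the ordering is what matters (the map object here is a simplified, hypothetical stand-in for a geojs map, not the real API):

```javascript
'use strict';

// A tiny stand-in for Jasmine's beforeEach/afterEach, to show the
// ordering the suggestion implies: fresh setup before every test and
// guaranteed teardown after, so tests cannot share state.
var log = [];
var myMap = null;

function runSuite(tests) {
  tests.forEach(function (test) {
    // beforeEach: create a fresh map for every test
    myMap = { exit: function () { myMap = null; } };
    log.push('setup');
    test();
    // afterEach: always tear the map down again
    myMap.exit();
    log.push('teardown');
  });
}

runSuite([
  function () { log.push('test 1 has map: ' + (myMap !== null)); },
  function () { log.push('test 2 has map: ' + (myMap !== null)); }
]);

console.log(log.join(', '));
```

Each test sees a freshly created map and never the one a previous test left behind, which is exactly what moving setup into beforeEach buys.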
    if (threshold === undefined) {
      threshold = 0.001;
    }
    $.ajax({
Return the promise here to make it easier to chain operations if necessary.
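As a sketch of that suggestion, with a native Promise standing in for the jQuery ajax promise (the result shape and endpoint are assumptions):

```javascript
'use strict';

// Sketch: have compareImage return its ajax promise instead of
// discarding it, so callers can chain further operations. A resolved
// native Promise stands in for $.ajax here.
function compareImage(name, threshold) {
  if (threshold === undefined) {
    threshold = 0.001;
  }
  // stand-in for: return $.ajax({url: '/compare', data: {...}});
  return Promise.resolve({ name: name, threshold: threshold, passed: true });
}

// Because the promise is returned, operations chain naturally:
compareImage('choropleth').then(function (result) {
  console.log(result.name + ' passed: ' + result.passed);
});
```

Without the `return`, the caller has no handle on the request and chaining is impossible; with it, the call site decides what happens next.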
 * @param {integer} rafCount: additional number of renderAnimationFrames to
 *    wait after the delay.
 */
module.exports.imageTest = function (name, threshold, doneFunc, idleFunc, delay, rafCount) {
I think this would be much simpler expressed as promises. For the idleFunc, couldn't we just have the person running the test call it when the map is ready?
setupTestCase()
.then(imageTest)
.then(doSomethingAfter)
.catch(handleErrors);
Given that neither the onIdle event nor Jasmine (as opposed to Mocha) supports promises as first-class interfaces, the callback interface is probably the easiest to use. I think we can keep the callbacks, but I would like to return a promise if possible to make it easier to chain tests together.
Okay, I'll return a $.Deferred() object that resolves after any doneFunc callback.
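That bridge can be sketched with a native Promise standing in for $.Deferred (the result object and function shape here are assumptions, not geojs's exact code):

```javascript
'use strict';

// Hypothetical sketch of keeping the callback interface while also
// returning something chainable: doneFunc still fires as before, and
// the returned promise resolves afterwards with the same result.
function imageTest(name, doneFunc) {
  return new Promise(function (resolve) {
    // pretend the image comparison completed successfully
    var result = { name: name, passed: true };
    if (doneFunc) {
      doneFunc(result);   // existing callback-style tests keep working
    }
    resolve(result);      // promise consumers can chain on the same result
  });
}

// Callback style and promise style can be mixed freely:
imageTest('choropleth', function (r) {
  console.log('callback saw ' + r.name);
}).then(function (r) {
  console.log('promise saw ' + r.name);
});
```

The callback always runs before the promise's then-handlers, so existing Jasmine tests are unaffected while new tests can chain.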
This is great! As said earlier, I will have a detailed look a bit later.
 * Express style middleware to handle REST requests
 * to `/notes` on the test server.
 * This function returns true if (1) there is an environment variable set
 * called "TEST_SAVE_IMAGE", and either (2a) the name of the test appears in
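A minimal sketch of the gating logic that comment describes, assuming a comma-separated TEST_SAVE_IMAGE list and an 'all' convention (both assumptions, not necessarily geojs's exact behavior):

```javascript
'use strict';

// Hypothetical save-image gate: save only when TEST_SAVE_IMAGE is set
// and either names this test or is 'all'.
function shouldSaveImage(env, testName) {
  var save = env.TEST_SAVE_IMAGE;
  if (!save) {
    return false;                 // (1) the variable must be set at all
  }
  // (2a) the test's name appears in a comma-separated list, or
  // (2b) the variable is 'all'
  return save === 'all' || save.split(',').indexOf(testName) >= 0;
}

console.log(shouldSaveImage({ TEST_SAVE_IMAGE: 'choropleth,lines' }, 'choropleth'));
console.log(shouldSaveImage({}, 'choropleth'));
```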
Is this the method used to generate new baselines? Things like this could use an overview in readthedocs, I think.
Yes. I figure I should write a script to explicitly generate new baselines, and probably also a replacement for upload_test_cases.py.
I should give credit to @ronichoudhury, as some of this I lifted from the candela project.
Currently, when we download files from data.kitware.com, we do it by item id. I think it would be more robust to switch to the sha checksums. I don't know if that was under consideration elsewhere.
Return the jquery ajax promise from compareImage. Return a deferred object from imageTest.
Yeah, the content-addressed endpoints didn't exist when I made the original switch to Girder. We can make an issue for it. More broadly speaking, I think it's a little odd to have to run ctest to get the data files for the examples. It's fine for testing data, but I wonder if it would be better to just use the Girder link directly in the examples.
I agree that it is odd to need to run ctest to get the data. It is probably just ignorance on my part, but I'm not sure how to get ctest to download the data without running
We should probably just use ExternalData for the data downloads anyway.
Current coverage is 89.99% (diff: 100%)

                 master    #651    diff
   ==========================================
     Files            83      83
     Lines          8358    8365      +7
     Methods           0       0
     Messages          0       0
     Branches          0       0
   ==========================================
   + Hits           7416    7528    +112
   + Misses          942     837    -105
     Partials          0       0
I think using
Agreed, I meant for when we decide to make the switch.
Couldn't post earlier because of GitHub being down. I have some minor comments.
@@ -43,6 +43,7 @@
    "gl-vec4": "^1.0.1",
    "glob": "^7.0.3",
    "imports-loader": "^0.6.5",
    "istanbul-combine": "^0.3.0",
@manthey is this the package that combines the xmls from different runs?
Yes, which is needed to have the XML file in the format we send to CDash. We send multiple lcov files to Codecov, which works without this, but CDash expects a single Cobertura file.
awesome! thanks.
@@ -90,6 +90,17 @@ var vglRenderer = function (arg) {
    canvas.attr('class', 'webgl-canvas');
    canvas.css('display', 'block');
    $(m_this.layer().node().get(0)).append(canvas);

    if (window.contextPreserveDrawingBuffer) {
Why do we need this?
The screenshots are actually the canvas drawing buffers, not actual screenshots. There are virtues to this approach: it is easy to ignore the window border, tabs, etc., and it doesn't matter if the div is aligned the same way. It ignores non-canvas data, which is both good and bad. It does require that you can ask the canvas for the drawing buffer, though, hence this flag.
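A sketch of that flag check (the helper name is hypothetical; preserveDrawingBuffer is the standard WebGL context attribute that keeps the buffer readable after a frame is presented):

```javascript
'use strict';

// Hypothetical sketch: when a test page sets
// window.contextPreserveDrawingBuffer, create the webgl context with
// preserveDrawingBuffer so the canvas contents can be read back later.
function contextOptions(globals) {
  var opts = {};
  if (globals.contextPreserveDrawingBuffer) {
    opts.preserveDrawingBuffer = true;  // keep the buffer readable after present
  }
  return opts;
}

// In a browser renderer this would feed context creation, roughly:
//   canvas.getContext('webgl', contextOptions(window));
console.log(JSON.stringify(contextOptions({ contextPreserveDrawingBuffer: true })));
```

Normal pages never set the flag, so they keep the default (and cheaper) non-preserved buffer; only the test harness opts in.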
ah, I see thanks.
…s run. Also, add a baseline_images configuration test to generate a tarball of the baseline images.
@jbeezley I added a cmake test that generates baseline images, and a script that will upload them. The script isn't as clever as the one you used before where it shows each image in turn. I'll update the documentation to match the new process, unless you'd like to see other changes.
Everything works for me. I just have some usability notes on the upload script. Perhaps you can also add a few notes to the developer's guide as well?
    '--no-generate', dest='make', action='store_false',
    help='Don\'t generate baseline images.')
parser.add_argument(
    '--build', '-b', default='_build',
Could we make the build directory default to the current directory?
# Get the folder we want to upload to to ensure it exists
apiRoot = args['dest'].rstrip('/') + '/api/v1/'
headers = {}
if args.get('token'):
I think it might be better just to use girder-client here so we can use an apikey or prompt for user/password. It should simplify much of this function as well.
Change the default build directory from _build to .
I was able to run the branch with no issues; however, I am looking into how I can look at the images captured. Did we update the documentation?
Looks like TEST_SAVE_IMAGE is not exposed in CMakeLists as something that we can set in ccmake. Any objection to exposing this and other related variables if that makes sense? (I haven't dug into the entire CMakeLists, though.)
Save the base image explicitly for easier reference (even though it is in the downloaded data). Composite canvases on a white background for consistency.
@manthey after trying this branch from scratch, I noticed a few issues. I had to install this manually:
and got this message: npm ERR! Exit status 1. I am looking into it but wondering if you have any thoughts?
I'll note that there are some Selenium tests that aren't webgl-dependent, such as a variety for the d3 renderer. If we want to have non-selenium image tests for these, we can get an image from the xvfb frame buffer and use it in the tests. I've got some exploratory code where this works, but feel it should not be part of this PR. It has the drawback that it won't work on a user's browser using
Did you
I am wondering if it's because of firefox itself: webpack: bundle is now VALID.
Yup, I realized that and after running npm install I got the karma problem fixed (as I saw that you had karma-* in the plugins).
What does
That sounds about like what I would expect. Let me look into it. Thanks.
Okay, I am past the mesa / xvfb errors now and will post my results and desired wish list. Update:
@aashish24 There is a short section on running tests in docs/developers.rst. Would you care to make explicit recommendations on how that should change? I can add more on how to debug tests. @jbeezley Did you figure out the nvidia path issue? Is it something we should add to the docs? Do we want to remove the selenium tests as we add them to headless webgl?
I couldn't make it work, but since it works fine non-headless, I think it's okay.
Yes, but that can be done later.
If I set the LD_LIBRARY_PATH and point it to the directory that contains the libGL.so.* of Mesa, everything worked for me.
I was thinking, for example, the debug endpoint would be worth mentioning in the document.
It is already in there (in developers.rst).
Great. Then the only thing left is to add some material on mesa setup? Thoughts?
@manthey please see my last comment. Thanks.
I'll add a section on the minimal packages needed in Ubuntu 14.04 to get the xvfb tests running. A general osmesa installation guide is out of our scope, but showing how to do it for one OS should suffice.
I think that is fine. I would suggest we merge it if @jbeezley is happy with the changes as well.
👏
@aashish24 I added some documentation about provisioning Ubuntu 14.04. Let me know if you are okay with this, and then I can merge.
LGTM +2
For non-canvas selenium tests, more work will be needed (using screenshots of xvfb rather than asking for the canvas's contents).