
Use firefox and xvfb to test webgl and examples. #651

Merged 15 commits — Dec 15, 2016

Conversation

@manthey (Contributor) commented Dec 5, 2016

To test this locally, make sure you have xvfb, firefox, and mesa installed. The CI tests will no longer work under Windows, as some environment variables are set in the npm commands.

When you run 'npm run start' and visit 127.0.0.1:9876/debug.html, all tests (both gl and non-gl) are run. The CI tests run the unit tests in PhantomJS and the gl tests in Firefox.

There was a problem running the canvas tests in Chrome via karma debug, which has been fixed.

So far this has only two of the many selenium tests moved to headless testing. It is enough to show the technique. For non-canvas selenium tests, more work will be needed (using screenshots of xvfb rather than asking for the canvas's contents).

@manthey (Contributor Author) commented Dec 5, 2016

@jbeezley and @aashish24 I'd love to get feedback on this, as I'd like to use it to add tests for the line-rendering branch. If you like this approach, we can move the selenium tests to this technique. Doing so will resolve issues #647 and #263, and allow us to close issue #405 (this doesn't use headless-gl, but I think it satisfies the intent) and possibly issue #218.

@manthey (Contributor Author) commented Dec 5, 2016

It looks like one of my earlier attempts at installing packages added to Travis's cache, so this will require a change to the travis file to get back to that state.

imageTest.imageTest('choropleth', 0.001, done, myMap.onIdle, 0, 2);
});
it('clean up', function () {
myMap.exit();
Contributor

Generally, I like to put setup and teardown inside beforeEach and afterEach groups to reinforce that individual tests shouldn't share state.
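The pattern this comment describes might look like the following sketch. The tiny `beforeEach`/`afterEach`/`it` stand-ins are included only so the snippet runs outside a test runner; in the real suite Jasmine provides them, and the map object here is a hypothetical stand-in for the one built in the test.

```javascript
// Minimal stand-ins for Jasmine's hooks so this sketch is self-contained.
var hooks = { before: [], after: [] };
function beforeEach(fn) { hooks.before.push(fn); }
function afterEach(fn) { hooks.after.push(fn); }
function it(name, fn) {
  hooks.before.forEach(function (h) { h(); });
  fn();
  hooks.after.forEach(function (h) { h(); });
}

// Setup and teardown live in the hooks, not in a dedicated 'clean up' spec,
// so individual specs cannot accidentally share map state.
var myMap;
beforeEach(function () {
  // stand-in for the real map construction
  myMap = { exited: false, exit: function () { this.exited = true; } };
});
afterEach(function () {
  myMap.exit();  // runs after every spec
});

it('renders a choropleth', function () {
  // ... image comparison against myMap would go here ...
});
```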

if (threshold === undefined) {
threshold = 0.001;
}
$.ajax({
Contributor

Return the promise here to make it easier to chain operations if necessary.

* @param {integer} rafCount: additional number of renderAnimationFrames to
* wait after the delay.
*/
module.exports.imageTest = function (name, threshold, doneFunc, idleFunc, delay, rafCount) {
Contributor

I think this would be much simpler expressed as promises. For the idleFunc, couldn't we just have the person running the test call when it is ready.

setupTestCase()
  .then(imageTest)
  .then(doSomethingAfter)
  .catch(handleErrors);

Contributor

Given that neither the onIdle event nor Jasmine (as opposed to Mocha) support promises as first class interfaces, the callback interface is probably the easiest to use. I think we can keep the callbacks, but I would like to return a promise if possible to make it easier to chain tests together.

Contributor Author

Okay, I'll return a $.Deferred() object that resolves after any doneFunc callback.
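The shape of that interface — keeping the callback but also returning something chainable — can be sketched with a native Promise. This is a hypothetical sketch, not code from the PR (which returns a jQuery $.Deferred; it chains the same way), and the body stands in for the real capture-and-compare work.

```javascript
// Hypothetical sketch: fire the optional doneFunc callback, then resolve,
// so callers can use either style.
function imageTest(name, doneFunc) {
  return new Promise(function (resolve, reject) {
    var passed = true;  // stand-in for comparing the canvas to the baseline
    if (!passed) {
      reject(new Error(name + ' differs from baseline'));
      return;
    }
    if (doneFunc) {
      doneFunc();        // legacy callback interface still works
    }
    resolve(name);       // resolves after any doneFunc callback
  });
}
```

A caller could then chain tests: `imageTest('choropleth', done).then(nextTest).catch(failTest)`.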

@aashish24 (Member)

This is great! as said earlier.. will have a detailed look bit later.

* Express style middleware to handle REST requests
* to `/notes` on the test server.
* This function returns true if (1) there is an environment variable set
* called "TEST_SAVE_IMAGE", and either (2a) the name of the test appears in
Contributor

Is this the method used to generate new baselines? Things like this could use an overview in readthedocs, I think.

Contributor Author

Yes. I figure I should write a script to explicitly generate new baselines, and probably also a replacement for upload_test_cases.py.

Contributor Author

I should give credit to @ronichoudhury, as some of this I lifted from the candela project.

@manthey (Contributor Author) commented Dec 5, 2016

Currently, when we download files from data.kitware.com, we do it by item ID. I think it would be more robust to switch to SHA checksums. I don't know if that was under consideration elsewhere.

Return the jquery ajax promise from compareImage.  Return a deferred object from imageTest.
@jbeezley (Contributor) commented Dec 5, 2016

Yeah, the content-addressed endpoints didn't exist when I made the original switch to Girder. We can make an issue for it. More broadly speaking, I think it's a little odd to have to run ctest to get the data files for the examples. It's fine for testing data, but I wonder if it would be better to just use the Girder link directly in the examples.

@manthey (Contributor Author) commented Dec 5, 2016

I agree that it is odd to need to run ctest to get the data. It is probably just ignorance on my part, but I'm not sure how to get ctest to download the data without running make as an explicit step in the process. For the examples, directly linking to girder would be fine, as we expect web connectivity anyway. Eventually, though, I'd like to see us testing the examples as part of the CI (issue #329), for which purpose downloading the data makes some sense.

@jbeezley (Contributor) commented Dec 5, 2016

We should probably just use ExternalData for the data downloads anyway.

@codecov-io commented Dec 5, 2016

Current coverage is 89.99% (diff: 100%)

Merging #651 into master will increase coverage by 1.26%

@@             master       #651   diff @@
==========================================
  Files            83         83          
  Lines          8358       8365     +7   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
+ Hits           7416       7528   +112   
+ Misses          942        837   -105   
  Partials          0          0          

Powered by Codecov. Last update 1f1b6ed...2dce1f4

@manthey (Contributor Author) commented Dec 5, 2016

I think using ExternalData should be a separate PR from this one.

@jbeezley (Contributor) commented Dec 5, 2016

Agreed, I meant for when we decide to make the switch.

@aashish24 (Member) left a comment

Couldn't post earlier because GitHub was down. I have some minor comments.

@@ -43,6 +43,7 @@
"gl-vec4": "^1.0.1",
"glob": "^7.0.3",
"imports-loader": "^0.6.5",
"istanbul-combine": "^0.3.0",
Member

@manthey is this the package that combines the XMLs from different runs?

Contributor Author

Yes, which is needed to produce the XML file in the format we send to CDash. We send multiple LCOV files to Codecov, which works without this, but CDash expects a single Cobertura file.

Member

Awesome! Thanks.

@@ -90,6 +90,17 @@ var vglRenderer = function (arg) {
canvas.attr('class', 'webgl-canvas');
canvas.css('display', 'block');
$(m_this.layer().node().get(0)).append(canvas);

if (window.contextPreserveDrawingBuffer) {
Member

Why do we need this?

Contributor Author

The screenshots are actually the canvas drawing buffers, not true screenshots. There are virtues to this approach: it is easy to ignore the window border, tabs, etc., and it doesn't matter how the div is aligned. It ignores non-canvas data, which is both good and bad. It does require that you can ask the canvas for the drawing buffer, though, hence this flag.
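As a browser-only illustration of why the flag matters (a sketch, not code from the PR — `captureCanvas` is a hypothetical name): a WebGL context created without preserveDrawingBuffer may hand back a blank image, because the buffer can be cleared after compositing.

```javascript
// Browser-only sketch: capture a WebGL canvas's drawing buffer as a PNG
// data URI. Only meaningful when the context was created with
// preserveDrawingBuffer: true; otherwise the buffer may already be cleared.
function captureCanvas(canvas) {
  return canvas.toDataURL('image/png');
}

if (typeof document !== 'undefined') {
  var canvas = document.createElement('canvas');
  var gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });
  // ... draw the scene ...
  var screenshot = captureCanvas(canvas);  // what the harness compares to the baseline
}
```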

Member

Ah, I see. Thanks.

@manthey (Contributor Author) commented Dec 8, 2016

@jbeezley I added a cmake test that generates baseline images, and a script that will upload them. The script isn't as clever as the one you used before, which showed each image in turn.

I'll update the documentation to match the new process, unless you'd like to see other changes.

@jbeezley (Contributor) left a comment

Everything works for me. I just have some usability notes on the upload script. Perhaps you could also add a few notes to the developer's guide?

'--no-generate', dest='make', action='store_false',
help='Don\'t generate baseline images.')
parser.add_argument(
'--build', '-b', default='_build',
Contributor

Could we make the build directory default to the current directory?

# Get the folder we want to upload to, to ensure it exists
apiRoot = args['dest'].rstrip('/') + '/api/v1/'
headers = {}
if args.get('token'):
Contributor

I think it might be better just to use girder-client here so we can use an apikey or prompt for user/password. It should simplify much of this function as well.
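The suggestion might look roughly like the sketch below. It is hypothetical: `upload_baselines`, the `GIRDER_API_KEY` environment variable, and the flat directory of PNGs are assumptions, though `GirderClient`, `authenticate`, and `uploadFileToFolder` are real girder-client calls.

```python
import os

def upload_baselines(dest, folder_id, image_dir):
    """Upload baseline images via girder-client instead of raw requests."""
    import girder_client  # pip install girder-client

    gc = girder_client.GirderClient(apiUrl=dest.rstrip('/') + '/api/v1')
    api_key = os.environ.get('GIRDER_API_KEY')
    if api_key:
        gc.authenticate(apiKey=api_key)
    else:
        gc.authenticate(interactive=True)  # prompts for username/password
    for name in sorted(os.listdir(image_dir)):
        if name.endswith('.png'):
            gc.uploadFileToFolder(folder_id, os.path.join(image_dir, name))
```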

Change the default build directory from _build to .
@aashish24 (Member)

I was able to run the branch with no issues; however, I am still looking into how I can view the captured images. Did we update the documentation?

@aashish24 (Member)

Looks like TEST_SAVE_IMAGE is not exposed in CMakeLists as something that we can set in ccmake. Any objection to exposing this and other related variables if that makes sense? (I haven't dug into the entire CMakeLists, though.)

Save the base image explicitly for easier reference (even though it is in the downloaded data).

Composite canvases on a white background for consistency.
@aashish24 (Member)

@manthey after trying this branch from scratch, I noticed a few issues. I had to do the following manually:

  1. npm install karma-firefox-launcher --save-dev

  2. Run xvfb-run -s '-ac -screen 0 1280x1024x24' ctest -VV -R ffheadless

and got this message. I am looking into it, but wondering if you have any thoughts?

npm ERR! Exit status 1
6: npm ERR!
6: npm ERR! Failed at the [email protected] ffci script 'GEOJS_TEST_CASE=tests/test-gl.js karma start karma-cov.conf.js --single-run --browsers Firefox'.
6: npm ERR! Make sure you have the latest version of node.js and npm installed.
6: npm ERR! If you do, this is most likely a problem with the geojs package,
6: npm ERR! not with npm itself.
6: npm ERR! Tell the author that this fails on your system:
6: npm ERR! GEOJS_TEST_CASE=tests/test-gl.js karma start karma-cov.conf.js --single-run --browsers Firefox
6: npm ERR! You can get information on how to open an issue for this project with:
6: npm ERR! npm bugs geojs
6: npm ERR! Or if that isn't available, you can get their info via:
6: npm ERR! npm owner ls geojs
6: npm ERR! There is likely additional logging output above.

@manthey (Contributor Author) commented Dec 9, 2016

I'll note that there are some Selenium tests that aren't webgl-dependent, such as a variety for the d3 renderer. If we want to have non-selenium image tests for these, we can get an image from the xvfb frame buffer and use it in the tests. I've got some exploratory code where this works, but I feel it should not be part of this PR. It has the drawback that it won't work on a user's browser using npm run start and localhost:9876/debug.html, as the test framework doesn't necessarily have access to the user's display, but it should work via xvfb in local tests and travis.

@manthey (Contributor Author) commented Dec 9, 2016

Did you npm install? That would have installed the karma-firefox-launcher and other packages. If you are installing individual packages, then I would imagine you skipped npm install, and therefore all sorts of packages could be out of date.

@aashish24 (Member)

I am wondering if it's because of Firefox itself:

webpack: bundle is now VALID.
09 12 2016 15:49:44.363:INFO [karma]: Karma v0.13.22 server started at http://localhost:9876/
09 12 2016 15:49:44.368:INFO [launcher]: Starting browser Firefox
09 12 2016 15:49:46.172:INFO [Firefox 49.0.0 (Ubuntu 0.0.0)]: Connected on socket /#3VqPkLML38MQFZYxAAAA with id 26105585
Firefox 49.0.0 (Ubuntu 0.0.0): Executed 0 of 2 SUCCESS (0 secs / 0 secs)
ERROR: 'No webGL support'

@aashish24 (Member)

> Did you npm install? That would have installed the karma-firefox-launcher and other packages. If you are installing individual packages, then I would imagine you skipped npm install, and therefore all sorts of packages could be out of date.

Yup, I realized that; after running npm install, the karma problem was fixed (I saw that you had karma-* in the plugins).

@manthey (Contributor Author) commented Dec 9, 2016

What does xvfb-run -s '-ac -screen 0 1280x1024x24' glxinfo do? If it doesn't print out GLX config information, then you don't have the right packages installed. If you have osmesa, mesa-utils, xvfb, and firefox all installed, then you might have the nvidia library path issue which @jbeezley ran into.

@aashish24 (Member)

> What does xvfb-run -s '-ac -screen 0 1280x1024x24' glxinfo do? If it doesn't print out GLX config information, then you don't have the right packages installed. If you have osmesa, mesa-utils, xvfb, and firefox all installed, then you might have the nvidia library path issue which @jbeezley ran into.

That sounds about like what I would expect. Let me look into it. Thanks.

@aashish24 (Member) commented Dec 9, 2016

Okay, I am past the mesa / xvfb errors now and will post my results and desired wish list.

Update:
I am good now, and this is a clean installation machine with nothing on it. My wishlist would be to:

  1. Update the doc, and either add a pointer for setting the path or text directly describing how to run tests locally

  2. Add text on how to debug tests in case things go wrong

@manthey (Contributor Author) commented Dec 13, 2016

@aashish24 There is a short section on running tests in docs/developers.rst. Would you care to make explicit recommendations on how that should change?

I can add more on how to debug tests.

@jbeezley Did you figure out the nvidia path issue? Is it something we should add to the docs?

Do we want to remove the selenium tests as we add them to headless webgl?

@jbeezley (Contributor)

> Did you figure out the nvidia path issue? Is it something we should add to the docs?

I couldn't make it work, but since it works fine non-headless, I think it's okay.

> Do we want to remove the selenium tests as we add them to headless webgl?

Yes, but that can be done later.

@aashish24 (Member)

> @jbeezley Did you figure out the nvidia path issue? Is it something we should add to the docs?

If I set the LD_LIBRARY_PATH and point it to the directory that contains the libGL.so.* of Mesa, everything worked for me.
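Concretely, the workaround described above might look like this (the Mesa library path is an assumption; it varies by distribution):

```shell
# Put Mesa's libGL ahead of the NVIDIA driver's for the headless tests.
# The path below is a guess for Ubuntu; locate yours with:
#   find / -name 'libGL.so*' -path '*mesa*' 2>/dev/null
export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu/mesa:${LD_LIBRARY_PATH}"

# Then run the headless suite as before, e.g.:
# xvfb-run -s '-ac -screen 0 1280x1024x24' ctest -VV -R ffheadless
```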

@aashish24 (Member)

> @aashish24 There is a short section on running tests in docs/developers.rst. Would you care to make explicit recommendations on how that should change?

I was thinking, for example, that the debug endpoint would be worth mentioning in the document.

@manthey (Contributor Author) commented Dec 13, 2016

It is already in there (in developers.rst).

@aashish24 (Member)

> It is already in there (in developers.rst).

Great. Then the only thing left is to add some material on Mesa setup? Thoughts?

@aashish24 (Member)

@manthey please see my last comment. Thanks.

@manthey (Contributor Author) commented Dec 14, 2016

I'll add a section on the minimal packages needed in Ubuntu 14.04 to get the xvfb tests running. A general osmesa installation guide is out of our scope, but showing how to do it for one OS should suffice.

@aashish24 (Member)

> I'll add a section on the minimal packages needed in Ubuntu 14.04 to get the xvfb tests running. A general osmesa installation guide is out of our scope, but showing how to do it for one OS should suffice.

I think that is fine. I would suggest we merge it if @jbeezley is happy with the changes as well.

@aashish24 (Member)

👏

@manthey (Contributor Author) commented Dec 14, 2016

@aashish24 I added some documentation about provisioning Ubuntu 14.04. Let me know if you are okay with this, and then I can merge.

@aashish24 (Member)

LGTM +2
