
Testing #4

Open · wants to merge 28 commits into master
Conversation

MarkKoz (Member) commented Jul 24, 2019

I did setup.py before writing tests and then realised I was getting ahead of myself. I had planned to look into using tox to test against Python 3.5, 3.6, and 3.7, but it may be overkill.

TODO:

  • Add coverage.py dependency
  • Write all tests (100% coverage?)
    • Tracers
      • Array1D
      • Array2D
      • Graph
      • Log
      • Tracer
    • Commander
    • Layouts
    • Randomize
    • JSON output
    • API request
  • Add helper function to drastically reduce repeated code in tests
    • Need to find a way to handle the discrepancy between arguments that are Commander instances and the keys that represent them in command JSON objects.

Commits:
* Modify the class attribute instead of the instance attribute
* Use a dictionary for command assertion instead of one assertion per kv pair
* Explicitly check for the name "Commander"
* Reset object count and commands list to previous values
* Use dictionaries for all assertions rather than checking single keys
* Create a Commander object in setUp() for use with most tests

MarkKoz (Member, Author) commented Jul 28, 2019

Regarding the helper function, there are two approaches I can think of, roughly:

# Requires: from typing import Optional, Union
# Serializable, Undefined, and Commander are this project's own names.
def assertCommandEqual(
    self,
    method: str,
    *args: Union[Serializable, Undefined],
    key: Optional[str] = None
):
    # Compare the most recently recorded command against the expected one.
    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": method,
        "args": list(args),
    }

    self.assertEqual(expected_cmd, cmd)

or more comprehensively

# Requires: from typing import Callable, Optional, Union
# Serializable, Undefined, UNDEFINED, and Commander are this project's own names.
def test_command(
    self,
    command: Callable,
    *args: Union[Serializable, Undefined],
    key: Optional[str] = UNDEFINED
):
    # Default to the key of the object the bound method belongs to.
    if key is UNDEFINED:
        key = command.__self__.key

    # Commander arguments are recorded by key; this would need to be done recursively.
    args = [arg.key if isinstance(arg, Commander) else arg for arg in args]

    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": command.__name__,
        "args": args,
    }

    self.assertEqual(expected_cmd, cmd)
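For illustration, a tracer test could then shrink to something like the following. The tracer class and its set() call are placeholders based on the TODO list above, not the project's actual test code:

def test_set(self):
    # Hypothetical tracer and method names, for illustration only.
    tracer = Array1DTracer()
    data = [4, 2, 7]
    tracer.set(data)

    # One call replaces the hand-written command dictionary comparison.
    self.test_command(tracer.set, data)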

And really it could be taken a step further by looping over each tracer and calling the latter helper function for each function in a tracer. Suitable arguments for testing each function could be generated automatically based on type annotations lol. Stupid idea, I know, to rely on function definitions for testing like that.

But I am concerned that the second helper function (and especially that idea at the end) would be a bad approach because it'd effectively remove the "explicitness" of the tests. Still, I can't help but feel there's so much repetition in the tests, even with the first helper function, just because the tracers are such thin wrappers.
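For concreteness, the annotation-driven idea could be sketched roughly like this. The handled types and value ranges are arbitrary choices for illustration, not anything the project defines:

import random
import typing


def generate_arg(annotation):
    # Produce a simple test value for a handful of annotation types.
    if annotation is int:
        return random.randint(0, 10)
    if annotation is float:
        return random.random()
    if annotation is str:
        return "test"
    if annotation == typing.List[int]:
        return [random.randint(0, 10) for _ in range(5)]
    # Anything this sketch does not recognise gets no value.
    return None


def generate_args(func):
    # Build positional arguments for func from its type annotations.
    hints = typing.get_type_hints(func)
    hints.pop("return", None)
    return [generate_arg(annotation) for annotation in hints.values()]

A tracer method could then be exercised with something like method(*generate_args(method)), which is exactly why such tests would end up depending on the annotations they are supposed to verify.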

64json (Member) commented Jul 28, 2019

I like your idea of randomizing the arguments to test it. 😂 It actually sounds better than the current (manual) way of testing tracers.js.

I was also concerned about the same issue: the tests are kinda meaningless at the moment. Should we solely (and more thoroughly) test commander.py and randomize.py? For example, testing whether it correctly serializes different types of random values, whether it prints commands out to visualization.json correctly, etc.

MarkKoz (Member, Author) commented Jul 28, 2019

I don't know what the best way to do this is; I am not really familiar with best practices and methodologies for testing. I do agree there should be some testing of the visualization.json file. As for more thorough testing, I don't know what more can be done for commander.py and randomize.py. It seems to me that testing serialisation of various data types doesn't make sense, since what we'd really be doing is testing the standard library's json module rather than our own code. That would only be necessary if we wanted to be paranoid and not trust Python to ship with a working, tested json module.

The problem with my suggestion for parameters based on function definitions is that the tests would rely on the function definitions to work. If a function definition were to be changed to something it's not meant to be, the test wouldn't catch that mistake. I feel like right now the only point of having all the tracer tests is to test for function definitions.

64json (Member) commented Jul 29, 2019

Hmm, good points. I think for now we can stick with the current way of testing. Maybe once we have plenty of algorithms translated from JS to Java/C++/Python, we can run tests on them (ones that don't use randomized input) to see whether they produce the same visualization.json files?

MarkKoz (Member, Author) commented Jul 30, 2019

Yes, I think some practical usage tests would be good to have. I think we could get away with random inputs if we just seed the RNG.
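A rough sketch of what such a seeded, end-to-end check could look like, assuming a helper that runs a translated algorithm and writes visualization.json (run_algorithm and the file paths here are hypothetical):

import json
import random
import unittest


class VisualizationSnapshotTest(unittest.TestCase):
    def test_bubble_sort_matches_snapshot(self):
        # Seeding the RNG makes the "random" input deterministic, so the
        # generated file can be compared against a stored snapshot.
        random.seed(42)

        # Hypothetical helper that runs a translated algorithm and writes
        # visualization.json to the working directory.
        run_algorithm("bubble_sort")

        with open("visualization.json") as actual_file:
            actual = json.load(actual_file)
        with open("tests/snapshots/bubble_sort.json") as expected_file:
            expected = json.load(expected_file)

        self.assertEqual(expected, actual)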

Commits:
* It reduces code redundancy by performing the common operations of fetching the last command and constructing the command dictionary from the given arguments
* Add type alias equivalent to os.PathLike to support Python 3.5

MarkKoz (Member, Author) commented Aug 27, 2019

I could use ideas for how to write a test for get_url. I can easily test for the return type, but I'm not sure what else.

64json (Member) commented Aug 27, 2019

I think testing if the URL returned from get_url responds with 200 would be good.

MarkKoz marked this pull request as ready for review on August 27, 2019.

MarkKoz (Member, Author) commented Aug 27, 2019

I think I came up with something decent by mocking some things. I didn't want to test that it returns 200 because that's effectively testing the API, which is out of scope.

This is ready to be reviewed now. I think we can work on tox and automated testing when we're working on setting up CI/CD and packaging.
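For reference, a mock-based test along those lines might look roughly like this. The import path, the patched target, the zero-argument get_url() call, and the response shape are all assumptions rather than the project's actual implementation:

import unittest
from unittest import mock

# Assumed import path, for the sake of the sketch.
from algorithm_visualizer import get_url


class GetUrlTest(unittest.TestCase):
    # Assumed location of the HTTP call inside the package.
    @mock.patch("algorithm_visualizer.api.requests.post")
    def test_get_url(self, mock_post):
        # Replace the real API call with a canned response so the test only
        # exercises our own code, never the remote service.
        mock_post.return_value.text = "https://algorithm-visualizer.org/scratch-paper/abc123"

        url = get_url()

        self.assertTrue(mock_post.called)
        self.assertIsInstance(url, str)
        self.assertTrue(url.startswith("https://"))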

64json (Member) commented Sep 3, 2019

It seems great to me. I am no expert in Python, so I might have missed something. I would appreciate it if anyone else reading this thread could review it once more.
