
Initial UITester API for writing toolkit agnostic tests #1107

Merged · 14 commits · Aug 19, 2020

Conversation

@kitchoi (Contributor) commented Aug 4, 2020:

This PR supersedes #861: it proposes a toolkit-agnostic API for testing GUIs built with TraitsUI. The motivation remains the same, but the user-facing API has changed.

Objectives/Benefits for users

  • Test code can be toolkit agnostic.
  • Test code is intuitive to read and to write.
  • Test code should not need to manually process events posted to the GUI event loop.
  • Test code can focus on behaviours with little to no dependencies on UI editor implementation details.
  • User interactions (e.g. clicking a button) are simulated programmatically via posting GUI events, not by modifying the object edited by the editor directly.
  • The API should be extendable for (1) testing custom editors implemented outside of TraitsUI and (2) for use cases when the default simulations provided by TraitsUI are insufficient.

Benefits for TraitsUI

  • With the editor implementation details encapsulated, TraitsUI can refactor its editors' internal implementations without affecting test code in downstream projects. e.g. we would not want to encourage test code to call traitsui.qt4.DateEditor.SimpleEditor.update_object with a QDate object as argument, since doing so would prevent TraitsUI from refactoring its editors, for example to move the toolkit-specific logic out.
  • The API can be used by TraitsUI's own tests.

For the reviewer
This PR provides only the most fundamental groundwork for future extensions. I recommend starting by reading the documentation of UITester.

For further context, in this test branch, I tried to test the (extended) API with different editors and with more complex UIs. For example, I was able to test the List_editors_demo.py demo example with the following test code:

    from traitsui.testing.api import UITester, locator, command
    tester = UITester(delay=300)
    with tester.create_ui(demo) as ui:
        main_tab = tester.find_by_id(ui, "splitter")
        main_tab.locate(locator.Index(1)).perform(command.MouseClick())

        list_ = tester.find_by_id(ui, "list").locate(locator.Index(0))
        list_.find_by_name("name").perform(command.KeySequence("\b\b\b\bDavid"))

        main_tab.locate(locator.Index(2)).perform(command.MouseClick())
        notebook = tester.find_by_id(ui, "notebook")
        notebook.locate(locator.Index(1)).perform(command.MouseClick())

        main_tab.locate(locator.Index(0)).perform(command.MouseClick())
        cell = tester.find_by_id(ui, "table").locate(locator.Cell(1, 1))
        cell.perform(command.MouseClick())
        cell.perform(command.KeySequence("50"))
        cell.perform(command.KeyPress("Enter"))

And if I run this, I can programmatically simulate user actions on the GUI (no clicking nor typing on my part!):
[Screen recording: Aug-04-2020 11-24-53]

Likewise I could also test the TreeEditor (see test code here).

Note that in the above example, the delay parameter and the find_by_id and locate methods are not implemented in this PR. The example is meant to show how the API can be extended; we can add those features in separate PRs. For those who are interested, they are implemented in the test branch (here, here, and here). An example Qt-specific implementation supporting the tested user interactions can be seen here. (Warning: they have not been cleaned up, so the implementation may change when I open a PR for them.)

Design

  • All the testing infrastructure exists outside of the editors, e.g. we don't implement "mouse_click" directly on top of the ButtonEditor. This is because testing is an entirely separate use case and will evolve for different reasons than the editors themselves. GUI frameworks typically provide testing frameworks as separate modules/namespaces, e.g. Qt has QtTest, and Wx has UIActionSimulator.
  • The API here is inspired by the Android Espresso testing framework, see example.
  • One thing I did not like about the API in Introduce toolkit-agnostic API for GUI testing #861 is that it can be quite difficult for the user to know which actions are implemented for an editor, e.g. mouse_click is implemented for ButtonEditor but not for TextEditor. That is because polymorphism was achieved by subclassing. Here polymorphism is achieved in a way similar to functools.singledispatch: we dispatch implementations by registration and delegation (see the sketch after this list). With that, we can more easily report which actions are supported for a particular editor. This feedback makes the API more user friendly. See the ActionNotSupported error in the PR.
  • To allow users to extend the API, we need to decide on a stable call signature for implementing toolkit-specific actions. I chose the call signature (UIWrapper, any action type). Future extensions to the functionality can then be achieved by adding more methods and attributes on the UIWrapper object. For example, the extension to support delay and locate (in the above example) is achieved by extending the UIWrapper object without having to modify the call signature.
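To make the registration-and-delegation idea concrete, here is a minimal sketch (illustrative only, not the PR's actual registry code) of dispatching on (target type, interaction type) and reporting what is supported:

    class InteractionNotSupported(Exception):
        """ Raised when no implementation is registered; analogous in
        spirit to the PR's ActionNotSupported error. """


    class SimpleRegistry:
        """ Dispatch-by-registration, similar to functools.singledispatch
        but keyed on (target type, interaction type). """

        def __init__(self):
            # Maps (target_class, interaction_class) to a callable with
            # the stable signature (wrapper, interaction).
            self._handlers = {}

        def register(self, target_class, interaction_class, handler):
            self._handlers[(target_class, interaction_class)] = handler

        def get_handler(self, target, interaction):
            key = (type(target), type(interaction))
            try:
                return self._handlers[key]
            except KeyError:
                supported = sorted(
                    interaction_class.__name__
                    for target_class, interaction_class in self._handlers
                    if target_class is type(target)
                )
                raise InteractionNotSupported(
                    "{} does not support {}; supported interactions: {}".format(
                        type(target).__name__,
                        type(interaction).__name__,
                        supported,
                    )
                )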

    if registries is None:
        self._registries = [
            _CustomActionRegistry(),
        ]
@kitchoi (Contributor, Author) commented Aug 4, 2020:

We will have a second registry (an instance of EditorActionRegistry) here to support testing with TraitsUI editors.

@rahulporuri self-requested a review on August 5, 2020 at 15:39.
@kitchoi (Contributor, Author) commented Aug 11, 2020:

Although this is intended to become a public API eventually, I am thinking that we should keep this "private" to TraitsUI and test this out with TraitsUI's own tests first. When we confirm the API is mature enough, it can be made public.

For example, the tester package here can be renamed to _tester to make it more obviously private for now.

@corranwebster (Contributor) commented Aug 11, 2020:

A couple of comments:

  • I disagree with the comment that testing is an entirely different use-case. Sure, Qt has a testing framework, for example, but the QAbstractButton class provides a click() slot that allows programmatic user-like interactions as part of the public API of the class; it doesn't require you to use the testing framework to access that.
  • it feels like some of this is working at the wrong level. Operations like mouse clicks should be directed at toolkit controls, not at editors, otherwise you are going to end up writing lots of little almost-the-same-but-not-quite handlers for common interactions.
  • what is the advantage of getting the editor via tester.find_by_name vs. ui.ui_info.<name>.editor or ui.get_editors() (or something wrapping these for safety)? Similarly, you have an example interaction get_nested_ui_in_custom_editor; it doesn't seem like there is much to be gained by having this vs. just asking for editor._ui?
  • I can see there is a point for locators where there is a difference between the toolkit implementations, eg. "in a RangeEditor, find me the text field control"
  • some of this is papering over issues we have because we haven't abstracted things properly: it's not in this PR, but it looks to me that things like main_tab.locate(locator.Index(1)) are needed because we have no toolkit-independent abstraction of a Tabbed layout visible to the ui. There is a choice in how we could spend effort here: we could spend a lot of time writing locator code that understands the particular current implementation details for a backend; or we could spend probably a similar amount of time writing a common abstraction which we can traverse with toolkit-independent code.

If this is going to be successful, writing tests needs to be really, really easy, and any custom interactors/locators written for tests need to be super-simple so test writers can be confident they do the right thing.

I'd suggest:

  • refactoring this so that the interactors are generally expecting a toolkit control (or a wrapper around a toolkit control), which will improve re-use (and may also allow use in Pyface and Enable).
  • avoiding adding more ways to do things when there is already a clear way to do things via the existing API; developers writing tests will be familiar with the existing ways of doing things, so adding another way would just add cognitive overhead.
  • when there is a choice between writing custom test code to locate or adapt, consider instead promoting things to part of the formal API or providing a toolkit-independent abstraction. Many of the things that are needed for testing have significant overlap with things that are useful for people writing things like custom Handlers and Controllers with complex behaviour across multiple editors.

Where I see this framework as being particularly useful is for people who are writing integration tests for complex UIs. That isn't a common use-case we have in the TraitsUI tests, but it does lead me to think we should be designing this with a view to making it public (even if it is soft-launched).

(edited to complete the thought in the second bullet point above)

@kitchoi (Contributor, Author) commented Aug 11, 2020:

I disagree with the comment that testing is an entirely different use-case. Sure, Qt has a testing framework, for example, but the QAbstractButton class provides a click() slot that allows programmatic user-like interactions as part of the public API of the class; it doesn't require you to use the testing framework to access that.

A few simple widgets do have methods for programmatically doing this sort of thing, that is true. But there are widgets that don't have such convenience methods. For example, there is no click method for performing a mouse click on a QAbstractItemView. Instead, one needs to use QTest to perform the mouse click on the viewport, e.g.:

    # (QtCore and QTest come from the Qt binding in use, e.g.
    # ``from pyface.qt import QtCore``; get_valid_model_index is a
    # helper in the test branch.)
    def mouse_click_item_view(model, view, row, column, delay=0):
        """ Perform a mouse click on the given QAbstractItemModel (model)
        and QAbstractItemView (view) at the given row and column.

        Parameters
        ----------
        model : QAbstractItemModel
            Model from which the QModelIndex will be obtained.
        view : QAbstractItemView
            View in which the widget identified by the index will be
            found and clicked.
        row : int
            Row index.
        column : int
            Column index.

        Raises
        ------
        LookupError
            If the index cannot be located.
        """
        q_model_index = get_valid_model_index(model, row, column)
        rect = view.visualRect(q_model_index)
        QTest.mouseClick(
            view.viewport(), QtCore.Qt.LeftButton,
            QtCore.Qt.NoModifier, rect.center(),
            delay=delay,
        )

In the context of clicking a cell in a table, the test does not need to care what is in the cell. It just instructs the UI to perform a mouse click on a specific cell:

    tester.find_by_id(ui, "table").locate(locator.Cell(1, 1)).perform(command.MouseClick())

Likewise there is no click method on QTreeView, but one can test clicking an item in the tree using QTest:

    tree_view.scrollTo(q_model_index)
    rect = tree_view.visualRect(q_model_index)
    QTest.mouseClick(
        tree_view.viewport(),
        QtCore.Qt.LeftButton,
        QtCore.Qt.NoModifier,
        rect.center(),
        delay=interactor.delay,
    )

Furthermore, for testing there is a distinction between pressing and releasing a key with or without emitting an event indicating a value change. For example, to test the auto_set=False flag on the TextEditor, we want to be able to programmatically trigger a key press-and-release (aka "key click" in Qt terminology) and verify that it has not actually changed the value yet, until the user presses the return key. This sort of use case is found only in a testing context.
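Such a test might look like this (a hedged sketch using the API shown earlier; the person object and view are hypothetical fixtures):

    # Assumes person.name starts out empty, and the view's TextEditor
    # was declared with auto_set=False.
    tester = UITester()
    with tester.create_ui(person, dict(view=view)) as ui:
        field = tester.find_by_name(ui, "name")
        field.perform(command.KeySequence("NEW"))
        assert person.name != "NEW"   # typing alone has not applied the value
        field.perform(command.KeyPress("Enter"))
        assert person.name == "NEW"   # the return key applies it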

it feels like some of this is working at the wrong level. Operations like mouse clicks should be directed at toolkit controls, not at editors, otherwise you are going to end up writing lots of little almost-the-same-but-not-quite handlers for common interactions.

Operations like mouse click or key click are directed at the toolkit controls, but that detail is hidden by the API by design.

The practical reason for this is that currently these toolkit controls are all over the place. For example, the QTreeView is found in _tree on an instance of QTreeEditor. The nested UI in the QTreeEditor is found in... _editor._node_ui. The nested UI for an item in a custom ListEditor is found in... control._list_pane.layout().itemAtPosition(index).widget()._editor._ui. A nested UI in a notebook-style ListEditor is in _uis[index][1].

There are just so many implementation details involved in getting hold of a toolkit control for performing a specific mouse click. The end user should not be bothered by this. The locate() method exists for exactly the purpose of pinning down where a mouse click / key click needs to occur, using a simple object (almost a data class).

If there were a toolkit-agnostic layer that provided access to all these sorts of controls, that would be nice... but it would also require quite a lot of effort. This API hides these details away so that we can do that kind of refactoring later without affecting test code downstream.

what is the advantage of getting the editor via tester.find_by_name vs. ui.ui_info.<name>.editor or ui.get_editors() (or something wrapping these for safety)? Similarly, you have an example interaction get_nested_ui_in_custom_editor; it doesn't seem like there is much to be gained by having this vs. just asking for editor._ui?

We do use ui.get_editors a lot. But it returns a list, and in the most common use case the list contains only one item and we take the first one. UITester.find_by_name is really a basic wrapper around ui.get_editors that creates an instance of UserInteractor, where most of the test actions happen. What is more interesting is the find_by_name method on the UserInteractor object returned by UITester.find_by_name: it allows you to get hold of the nested UI.

Using editor._ui presents two problems:

  1. It invites users to use private attributes, making the boundary between public API and private implementation murky, costing maintenance in the long run.
  2. And suppose we made this _ui attribute public... it only works for a basic editor where there is only one nested UI. ListEditor has multiple nested UIs, TreeEditor has multiple nested UIs... it is really not a scalable solution.

The above example shows how we can get hold of a nested UI in a custom editor without exposing implementation details:

    tester.find_by_id(ui, "list").locate(locator.Index(1)).find_by_name("name").perform(command.MouseClick())

The first call, UITester.find_by_id, is a sibling of find_by_name, but it uses the Item.id to uniquely identify the custom list editor; in that example, there are multiple editors with the same name. Then locate(locator.Index(1)) combined with the following find_by_name("name") allows us to get the nested UI in the custom list editor at index 1.

some of this is papering over issues we have because we haven't abstracted things properly: it's not in this PR, but it looks to me that things like main_tab.locate(locator.Index(1)) are needed because we have no toolkit-independent abstraction of a Tabbed layout visible to the ui. There is a choice in how we could spend effort here: we could spend a lot of time writing locator code that understands the particular current implementation details for a backend; or we could spend probably a similar amount of time writing a common abstraction which we can traverse with toolkit-independent code.

Indeed, that is why we have locate. And agreed: it is because we don't have a toolkit-independent way to get hold of the specific toolkit control towards which mouse clicks and key clicks can be directed.

On the effort front: we currently don't have this locator code because, sadly, we don't have good tests in TraitsUI. We will be writing this locator code anyway if we are to improve the test coverage in TraitsUI. This is an example of existing test code in TraitsUI for this use case:

    widget.layout().itemAt(button_idx).widget().click()

In other words, this is effort we need to spend anyway in order to improve test coverage in TraitsUI. Put differently, I think this kind of locator code would already exist if TraitsUI had done better testing.

consider instead promoting things to part of the formal API or providing a toolkit-independent abstraction

I am not ruling that out, but currently I find that hard to do without concrete use cases and user code. It is easier to build an abstraction where there is already code demonstrating the use cases. One thing we want to avoid is early abstraction, and worse, early abstraction that becomes public API in a library depended upon externally. I think the implementation of locate can provide the use cases to drive that effort and to guide how we should build that abstraction layer later. When that toolkit-independent code is finally ready, we can swap out the location implementation of the tester code for that abstraction layer without affecting downstream projects' test code.

@kitchoi (Contributor, Author) commented Aug 11, 2020:

refactoring this so that the interactors are generally expecting a toolkit control (or a wrapper around a toolkit control), which will improve re-use (and may also allow use in Pyface and Enable).

I had considered giving just the control to the interactor rather than the editor... but I couldn't, for three reasons:

  1. After getting an editor in a UI, if I want to further navigate onto a nested UI, that nested UI is not found in the control but on the editor.
  2. An editor can have multiple controls, like RangeEditor. I don't know which control to get until a location is provided, but to evaluate a location, again I need the editor instance.
  3. This is the part I hope won't be relied on heavily: there is an escape hatch called "CustomAction", which basically allows the user to call anything on the editor while retaining all the convenience of find_by_name and perform (which kicks the GUI event loop before and after the action). This flouts the entire intention of hiding internal implementation details, but it is a compromise acknowledging that, whatever we do, we won't be able to capture 100% of use cases. So let others escape from the framework if they really have to, but in a controlled way.

@corranwebster (Contributor) commented:
Perhaps a quick clarification: when I say that it should be given a control, I mean a control, not the control (i.e. not necessarily editor.control). So for example, the TreeEditor should have a cross-platform way to get the actual TreeView (or whatever the equivalent is) toolkit control, and that is then given to the interactor.

A key entry handler on a text field in a RangeEditor should be the same as a key entry handler on a TextEditor or on a SearchEditor, otherwise the implementation of everything will be overwhelming.

@kitchoi (Contributor, Author) commented Aug 11, 2020:

Should I rename the package tester to _tester and then we can use this for writing TraitsUI test code, and then see how this goes?

@kitchoi (Contributor, Author) commented Aug 13, 2020:

A key entry handler on a text field in a RangeEditor should be the same as a key entry handler on a TextEditor or on a SearchEditor, otherwise the implementation of everything will be overwhelming.

When I read this the first time, I did not quite understand where it was coming from. Now I guess you might be referring to the situation in TraitsUI where, for example, to add a placeholder to the textbox widget in RangeEditor, TextEditor and SearchEditor, you have to do it three times. I think I can show how this is not an overwhelming issue here.

The code for performing a mouse click / key click on a generic widget can be refactored, so that all you are left with is delegating the right widget to these generic functions for each editor.

For example, to perform a key press on the RangeEditor, I have this implementation:

    def key_press_simple_range_editor(interactor, action):
        slider = interactor.editor.control.slider
        text = interactor.editor.control.text
        widget = {
            0: slider,
            1: text,
        }[interactor.location.index]
        helpers.key_press(
            widget, action.key, delay=interactor.delay
        )

(I might change the API of the location object; I don't like using index here)

The implementations for TextEditor and SearchEditor would just call helpers.key_press with their corresponding Qt widget, which is just interactor.editor.control.

This pattern is really useful for situations where the underlying widgets are actually different but conform to a common API. For example, QListWidget and QTableView both implement QAbstractItemView, so I can refactor everything to do with mouse clicks or key clicks on a QAbstractItemView into a separate function.

This is used for EnumEditor(mode="list") (style: custom):

    list_widget = interactor.editor.control
    helpers.mouse_click_item_view(
        model=list_widget.model(),
        view=list_widget,
        row=interactor.location.index,
        column=0,
        delay=interactor.delay,
    )

This is used for TableEditor:

    table_view = interactor.editor.table_view
    helpers.mouse_click_item_view(
        table_view.model(),
        table_view,
        interactor.location.row,
        interactor.location.column,
        delay=interactor.delay,
    )

Notice that the location for the first one only needs to refer to an index, because it is editing a sequence. The location is expected to be this object:

    class Index:
        """ A location uniquely specified by a single 0-based index.

        Attributes
        ----------
        index : int
            0-based index
        """

        def __init__(self, index):
            self.index = index

The one for TableEditor uses a different location object:

    class Cell:
        """ A location uniquely specified by a row index and a column index.

        Attributes
        ----------
        row : int
            0-based index
        column : int
            0-based index
        """

        def __init__(self, row, column):
            self.row = row
            self.column = column

Many TraitsUI editors are compositions of basic widgets, e.g. SetEditor has lots of widgets. Keeping the code that resolves a toolkit control for a given location and action on a per-editor basis allows us to maximize backward compatibility while also maximizing the flexibility to change an editor's composition. If the composition of an editor needs to change, we just need to adjust the location code; downstream projects' test code won't need to change. We can also add more location objects to support increasingly complex widget compositions in an editor.
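As an illustration of that compatibility argument (a hypothetical sketch; the solver name is illustrative):

    # If the TableEditor's internal table widget is ever moved or renamed,
    # only this solver changes; test code written in terms of
    # locator.Cell(row, column) is unaffected.
    def resolve_cell_in_table_editor(interactor, location):
        table_view = interactor.editor.table_view  # current internal widget
        return table_view, location.row, location.column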

This is the main motivation of this testing framework: Maximizing backward and forward compatibility while allowing downstream projects to write test code.

@corranwebster (Contributor) commented:
It's exactly the fact that there are many editors with sub-controls that I'm trying to address.

What I'm trying to get at can be seen fairly clearly in your example:

    def key_press_simple_range_editor(interactor, action):
        slider = interactor.editor.control.slider
        text = interactor.editor.control.text
        widget = {
            0: slider,
            1: text,
        }[interactor.location.index]
        helpers.key_press(
            widget, action.key, delay=interactor.delay
        )

This piece of code has two parts:

  • code which locates the sub-control that you want (the slider/text dict lookup at the top)
  • code which interacts with the control (the helpers.key_press call)

It's good that you have factored out the interaction as a "helper", but let's say you want to write a mouse click interactor. Now you need to duplicate the code which locates the control in the new interactor, and use a different "helper" most likely.

Now you could of course factor out the "locate the sub-control" code, but then your interactors just become two-liners of the form "locate the sub-control" followed by "call a helper to do the actual interaction."

So where you'll end up is with tests which look like:

  • locate the editor
  • call an "interactor" which:
    • locates the control (or cell, or whatever) that you want to interact with
    • calls a "helper" which does the actual interactor

I think things would be better if things were written as:

  • locate the editor
  • locate the control (or cell, or whatever) within the editor
  • interact with the control (or cell or whatever)

This PR is mostly about the "locate things" part of your testing infrastructure, but the "things" it is locating are at the editor level only. I think that the additional code that locates things within an editor should be part of this framework, so that, for example, when writing tests for a simple range editor, you have a way to specify additional locators for the various sub-widgets that the tests can use (and it may be that simple functions are enough; but there will be some standard ones, such as "give me the main control", or "give me this cell in an item view").

There may be some cases where you need slightly different code between the toolkits for locating sub-controls (particularly for things like table views), but that can be dealt with either by contributing different locator functions, or by normalizing the APIs (e.g. this sub-widget is always called _foo_control or something like that, and we change the implementations to match).

So to summarise:

  • this PR is about locating things and then doing things with them, but mainly about the "locating things" part
  • the things you should be locating are toolkit-level things (like individual controls/widgets or parts of item views)
  • the interactors should be expecting toolkit-level things - they are more-or-less what your "helpers" are now

@kitchoi (Contributor, Author) commented Aug 13, 2020:

I think things would be better if things were written as:

  • locate the editor
  • locate the control (or cell, or whatever) within the editor
  • interact with the control (or cell or whatever)

I think this is what I have achieved in the user-facing API. :)
For example, this is what my current "test code" for a range editor looks like:

    from traitsui.testing.api import UITester, command, locator

    tester = UITester(delay=200)
    with tester.create_ui(demo) as ui:
        interactor = tester.find_by_name(ui, "max_n")
        slider = interactor.locate(locator.OrderedWidget(0))
        text = interactor.locate(locator.OrderedWidget(1))
        text.perform(command.KeySequence("\b\b3"))
        text.perform(command.KeyPress("Enter"))
        for _ in range(5):
            slider.perform(command.KeyPress("Right"))
        text.perform(command.KeySequence("\b\b\b40"))
        text.perform(command.KeyPress("Enter"))
        for _ in range(5):
            slider.perform(command.KeyPress("Left"))

(Again I think I will change the OrderedWidget(0) to something else, maybe WidgetType(SomeEnum.slider))

  • find_by_name: locate the editor and start things off
  • locate(OrderedWidget(0)): locate the control (or cell, or whatever) within the editor
  • perform(command.MouseClick()): interact with the control (or cell, or whatever)

As another example, this is how it looks for a tree editor:

    from traitsui.testing.api import command, UITester, locator, query

    tester = UITester(delay=100)
    with tester.create_ui(demo) as ui:
        root_actor = tester.find_by_name(ui, "company")
        # Enthought->Department->Business->Jason
        node = root_actor.locate(locator.TreeItem((0, 0, 0, 0), 0))
        node.perform(command.MouseClick())

  • find_by_name: locate the editor
  • locate(locator.TreeItem((0, 0, 0, 0), 0)): locate a node on the tree
  • perform(command.MouseClick()): click the node

this PR is about locating things and then doing things with them, but mainly about the "locating things" part

Just to clarify, this PR implements only the part to do with locating the editor, which is the find_by_name method. The locate method is not in this PR, i.e. the logic for locating sub-controls is going to be in a separate PR.

As a preview, the locate method is roughly this:

    def locate(self, location):
        """ Return a new interactor for performing user actions on the
        specific location given.

        Note that this method does not validate whether the location
        is valid.

        Parameters
        ----------
        location : Location
        """
        self.perform(command.ValidateLocation(location))
        return UserInteractor(
            editor=self.editor,
            registries=self.registries,
            location=location,
            delay=self.delay,
        )

the interactors should be expecting toolkit-level things - they are more-or-less what your "helpers" are now

If I understand correctly, you would want the locate method to return an interactor that wraps the toolkit specific control, rather than the editor itself.

There are two reasons why I can't do that:
(1) If I did so, I couldn't have a consistent API for all of my implementations. All of the toolkit-specific, editor-specific, action-specific implementations have the same signature: (interactor, action). This is how I allow the API to be extended to support more location code, more action code, etc. As I added support for various editors, more location types, more action types, and even delayed actions (see the delay parameter, which lets me play the interaction back to myself to verify I did the right thing), I did not have to change the call signature at all. The Interactor API and registry API are stable.

(2) The location may not refer to a toolkit control. It could refer to a nested UI (which is a TraitsUI object). For example, in a custom ListEditor, to locate Index(1) I get a nested UI object first; I don't yet know which toolkit control the user wants to interact with. It is this example again:

    item = tester.find_by_id(ui, "list").locate(locator.Index(7))
    item.find_by_name("name").perform(
        command.KeySequence("\b\b\b\b\b\bDavid")
    )

  • tester.find_by_id(ui, "list"): locate the custom list editor
  • .locate(locator.Index(7)): locate the nested UI at index 7
  • .find_by_name("name"): locate the editor for "name" (which is a text editor) in the nested UI
  • .perform(command.KeySequence(...)): perform the action on the text control

@corranwebster (Contributor) commented:
So this code:

    from traitsui.testing.api import UITester, command, locator

    tester = UITester(delay=200)
    with tester.create_ui(demo) as ui:
        interactor = tester.find_by_name(ui, "max_n")
        slider = interactor.locate(locator.OrderedWidget(0))
        text = interactor.locate(locator.OrderedWidget(1))
        text.perform(command.KeySequence("\b\b3"))
        text.perform(command.KeyPress("Enter"))
        for _ in range(5):
            slider.perform(command.KeyPress("Right"))
        text.perform(command.KeySequence("\b\b\b40"))
        text.perform(command.KeyPress("Enter"))
        for _ in range(5):
            slider.perform(command.KeyPress("Left"))

looks more-or-less at the right level for how we want to be writing things. My question looking at that is: what sort of object is text, and if it doesn't inherit from the same ABC as the editor "interactor", why not?

I think we might also be using the word "interactor" for two different concepts - I think you are using it for the thing that is being acted on, where I'm meaning the code which performs an interaction. So for clarity, I'll try changing how I refer to things:

targets: objects which can be searched for, either as intermediate things (like editors or a table widget, perhaps) or as leaves, and which can be operated upon (exactly what operations depends on the type of target, but most operations are at the toolkit level, like mouse clicks). Targets potentially go all the way down to the sub-widget level (e.g. cells in a table, or sub-widgets inside toolkit widgets, like the +/- buttons on a spinbox).

operation: a piece of code that does something like a mouse click or key entry (might just be a function, but more likely a callable object, such as a key event operation which needs to know what key(s) to send); the exact implementation is likely toolkit dependent.

target wrappers: a wrapper around a target object which knows how to find targets related to the wrapped target (e.g. subwidgets, etc.) and which has an API to perform an operation in a generic way.

So in this nomenclature:

  • editors, toolkit widgets, cells in a tree view, sub-views of a group, etc. might all be "targets"
  • code which takes a text editor (or perhaps a wrapper around a text editor) and sends a key event is an operation, as is something that extracts the stored text from the text editor

So you would end up writing code like:

    with tester.create_ui(demo) as ui:
        range_editor = tester.find_by_name(ui, "max_n")  # range_editor is a target wrapper around an editor
        slider = range_editor.locate(locator.OrderedWidget(0))  # slider is a target wrapper around a toolkit slider
        text = range_editor.locate(locator.OrderedWidget(1))  # text is a target wrapper around a toolkit text widget
        text.perform(KeySequence("\b\b3"))  # KeySequence is an operation on a toolkit widget
        text.perform(KeyPress("Enter"))  # KeyPress is an operation on a toolkit widget

        for _ in range(5):
            slider.perform(KeyPress("Right"))

        text.perform(KeySequence("\b\b\b40"))
        text.perform(KeyPress("Enter"))

        for _ in range(5):
            slider.perform(KeyPress("Left"))

The key thing here is that KeyPress and KeySequence are absolutely standard things across all toolkit widgets for which they make sense.

So I think if we can agree on this being the right way to break things down, I think things are OK.

BUT!

...in looking at these examples and considering the pain I've been having with Wx tree widgets, I think there are going to be major problems if we are literally talking about testing things like key presses on sliders. This sort of thing can potentially work when you are talking about a homogeneous platform like Android, but we are going to run into problems with cross-platform conventions. For example:

  • does a left key press on a slider even work on all operating systems and platforms we support?
  • if it does work, does it induce the same change in value?
  • even if everything works perfectly across all platforms, does TraitsUI really care about this? (eg. if we added a new backend toolkit which doesn't support left key presses on sliders, is that something that we would consider a major flaw?)

In working at this level we are effectively doing integration tests on the toolkit.

So I think the sorts of operations that we need to be providing and using in test writing need to be a level higher: not "the user pressed the left arrow key on the slider", but rather "the slider changed its value to x (and we don't care how the user did it; maybe it was from a left click and maybe it was from a drag)". Now on some toolkits that may need to be implemented by a bunch of actions because there is no direct API call, and there are occasional places where TraitsUI does make some promises about what happens on individual keystrokes. But those should be the exception, not the rule.
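Such a value-level operation might be sketched as follows (hypothetical names; SetSliderValue is not part of the PR):

    class SetSliderValue:
        """ Request that the slider's value become `value`, however
        the user might have achieved it. """

        def __init__(self, value):
            self.value = value

    def set_slider_value_qt(wrapper, interaction):
        # QSlider.setValue emits valueChanged, just as a drag or a
        # key press on the slider eventually would.
        wrapper.target.control.setValue(interaction.value)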

This isn't super-relevant for this PR, since it doesn't really make any claims about what the operations look like, but it is something to keep in mind.

@kitchoi (Contributor, Author) commented Aug 13, 2020:

I think we might also be using the word "interactor" for two different concepts - I think you are using it for the thing that is being acted on, where I'm meaning the code which performs an interaction. So for clarity, I'll try changing how I refer to things:
...
So I think if we can agree on this being the right way to break things down, I think things are OK.

Yeah, I think that makes a lot of sense. Perhaps the UserInteractor and the editor attribute on it can be renamed to better reflect these concepts. Naming things is hard... please advise!

...in looking at these examples and considering the pain I've been having with Wx tree widgets, I think there are going to be major problems if we are literally talking about testing things like key presses on sliders.

Yes, I agree that testing things like key presses on Wx is not as trivial as with Qt. While Wx advertises this UIActionSimulator object that is supposed to perform key clicks and mouse clicks like a user would, from what I read, I think it takes over the developer's machine to perform those actions, which we definitely don't want. And more importantly, I think it needs to be instantiated with the wx Frame object before the event loop starts, so I basically could not make it work without more invasive changes to TraitsUI/Pyface. At the end of the day, I find the testing context very useful here: I can make compromises as long as the result is good enough for testing purposes.
For example, this is how I test pressing the left/right/up/down keys on the RangeEditor with wx:

    def _key_press_range_editor_slider(slider, key, delay):
        if key not in {"Up", "Down", "Right", "Left"}:
            raise ValueError("Unexpected key.")
        if not slider.HasFocus():
            slider.SetFocus()
        value = slider.GetValue()
        range_ = slider.GetMax() - slider.GetMin()
        step = int(range_ / slider.GetLineSize())
        wx.MilliSleep(delay)
        if key in {"Up", "Right"}:
            position = min(slider.GetMax(), value + step)
        else:
            position = max(0, value - step)
        slider.SetValue(position)
        event = wx.ScrollEvent(
            wx.wxEVT_SCROLL_CHANGED, slider.GetId(), position
        )
        wx.PostEvent(slider, event)

It is not beautiful code, but it is good enough for testing the slider part of the RangeEditor. It fires an event rather than calling the editor's update_whatever method directly. With that, we can test that the widget is properly hooked up with the Python code responsible for updating the trait. This is the kind of test code I think we should be writing anyway in TraitsUI/Pyface tests.

@kitchoi (Contributor, Author) commented Aug 13, 2020:

At the end of the day, I find the testing context very useful here: I can make compromises as long as the result is good enough for testing purposes. For example, this is how I test pressing the left/right/up/down keys on the RangeEditor with wx:

So basically, I allowed myself to cheat a bit with a wx.ScrollEvent because it is for testing only.

@corranwebster (Contributor) commented:
In terms of naming, "interaction" and "operation" would both be fine for the operation code; avoid "action" because otherwise you will hate life when testing menus and toolbars :) For the target wrappers I'm not keen on "interactor" because they aren't really doing anything; "interactee" might be better, but it's not great. "Target" would be fine, as would something generic like "UIProxy" or "UIWrapper", and they should have a similarly named method or attribute like "target" or "ui_object". Or, if you end up having a class hierarchy of these, you might go for "editor" or "control" or more specific names for the sort of thing that they are. Naming things is hard.

In terms of the level of testing, in your example to test scrolling, I think "cheating" by using a ScrollEvent isn't cheating; it's actually more robust, since any change to the slider initiated by the user by any mechanism will eventually generate a ScrollEvent. It means we don't need to worry about all the possible ways that a scroll event could be generated and can just generate the behaviour that we really care about ("the position of the slider just changed to x"), which makes it much easier to write tests ("does the value of the editor update to something sane"). But again, this is orthogonal to this PR.

@kitchoi changed the title from "Initial UITester API for writing toolkit agnostic tests" to "[WIP] Initial UITester API for writing toolkit agnostic tests" on Aug 17, 2020.
@kitchoi (Contributor, Author) commented Aug 18, 2020:

I pushed some name changes:

  • UserInteractor -> UIWrapper
  • editor or Editor -> target
  • EditorActionRegistry -> InteractionRegistry and LocationRegistry (more on this below) (Edited: They are combined again as TargetRegistry)
  • interactor -> wrapper
  • action -> interaction.

There is a larger conceptual change: The target can now be anything. For example, this:

    wrapper = tester.find_by_name(ui, "list")

wrapper.target could be an instance of a traitsui Editor. And this:

    wrapper = tester.find_by_name(ui, "list").locate(locator.Index(1))

wrapper could be an object that wraps an instance of a traitsui Editor and the index 1.

This is because sometimes, we cannot throw away the locator information until we perform a mouse click or a key click. This is the case for TreeEditor and TableEditor. Given a cell or a node index, we need both the index AND the parent widget (e.g. tree widget) in order to perform a mouse click / key click.

With that, the target received by CustomAction may change from time to time when the library (e.g. traitsui) needs to adjust the test support code. None of these adjustments should affect the user-facing facade. Given that CustomAction was already a naughty/lazy way to extend, I decided that it might just be encouraging fragile test code a bit too much, so I removed it. With CustomAction removed, the registry for it is removed, and the ABC for the registry is removed as well.

Previously the NestedUI class was considered a query. It is now moved to a locator module. The original EditorActionRegistry is now renamed and split into two separate registries: one responsible for interactions (perform, inspect), the other responsible for locating nested targets (locate). I am actually in two minds about whether to combine them into just one registry again.

The abstraction of target eventually led to adding the locate method. It allows the UIWrapper to be free of TraitsUI logic.
UITester is still TraitsUI specific, but it is a very, very thin wrapper.
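To illustrate how thin that wrapper is, here is a simplified sketch (not the actual implementation; default_registries, create_ui_context and find_editor are illustrative helper names):

    class UITester:
        def __init__(self, registries=None):
            self._registries = registries or default_registries()

        def create_ui(self, obj, ui_kwargs=None):
            # A context manager that creates the traitsui UI and
            # disposes of it (processing pending events) on exit.
            return create_ui_context(obj, ui_kwargs or {})

        def find_by_name(self, ui, name):
            # The TraitsUI-specific knowledge lives in find_editor (a
            # thin wrapper over ui.get_editors); the returned UIWrapper
            # is itself free of TraitsUI logic.
            return UIWrapper(
                target=find_editor(ui, name),
                registries=self._registries,
            )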

@kitchoi changed the title from "[WIP] Initial UITester API for writing toolkit agnostic tests" to "Initial UITester API for writing toolkit agnostic tests" on Aug 18, 2020.
@kitchoi (Contributor, Author) commented Aug 18, 2020:

Ready for review.

    account = Account(person=Person())
    tester = UITester()
    with tester.create_ui(account, dict(view=view)) as ui:
        wrapper = tester.find_by_name(ui, "person").find_by_name("name")
@kitchoi (Contributor, Author):

This will raise on this branch because I moved the logic to the LocationRegistry and, in an effort to reduce the scope, removed the solver for getting the nested UI in a custom InstanceEditor. I should remove this description.

The implementation can be added back later once we have the toolkit-specific default registry set up. See for example:

    registry.register_location_solver(
        target_class=CustomInstanceEditor,
        locator_class=locator.DefaultTarget,
        solver=resolve_location_custom_instance_editor,
    )

    def resolve_location_custom_instance_editor(wrapper, location):
        return wrapper.editor._ui

    traitsui UI object is always added.
    """

    def __init__(self, interaction_registries=None, location_registries=None):
@kitchoi (Contributor, Author):

The original EditorActionRegistry is now renamed and split into two separate registries: one responsible for interactions (perform, inspect), the other responsible for locating nested targets (locate). I am actually in two minds about whether to combine them into just one registry again.

I think I am going back on this: I will combine the registries and let composition occur there. It will make this argument list more manageable. So far, one does need to modify both registries at the same time when adding testing support for a particular target.

I will push this change, but that would be it (and will still be ready for review).

@kitchoi (Contributor, Author):
Done. Now I have to name the registry differently. Still open to the suggestion of separating them again.

@aaronayres35 (Contributor) left a comment: LGTM! Just had a few nitpick typo comments. Also, I like wrapping both registries inside the TargetRegistry.

    support ``UIWrapper.locate``. If the nested UI has other locator solvers,
    then it would be possible to chain location resolutions, like this::

        container.locate(NestedUI()).locate(Index(1))
Contributor:
Is Index defined in this PR?

@kitchoi (Contributor, Author):
No it isn't; I will change the Index(1) to NestedUI(), which is added here.
