Add window states API #2473
Conversation
@freakboy3742 Can you also review this when you are free?
Unless there's a design or implementation issue that requires feedback, I generally don't review code until it's passing CI - this is currently failing coverage and Android tests, and I'm going to guess iOS tests as well (although #2476 is likely masking that problem). Is there a specific design or implementation issue where you're seeking feedback prior to getting the tests to pass?
I needed some guidance regarding the Android testbed, specifically test_app.py::test_full_screen and test_presentation_mode. I explained the issue with inline comments on test_app.py.
Well - I guess my first question is why is there anything to test at all? Mobile apps don't have any concept of "full screen" or "maximized"... even the representation of "window" is pretty loose. What behaviour are you implementing here?

Beyond that - if a test passes in isolation, but not as part of the suite, that usually indicates that one of the previous tests has left the test suite in an inconsistent state. A common example in the window tests is when a test leaves an open window (sometimes due to a test failing); subsequent checks of the window count then fail.
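The failure mode described here (a failing test leaving window state behind for later tests to trip over) can be guarded against with an unconditional restore. The following is only an illustrative sketch against a toy window object - not the testbed's actual fixture machinery - showing the try/finally shape such a guard takes:

```python
from enum import Enum, auto

class WindowState(Enum):
    NORMAL = auto()
    FULLSCREEN = auto()
    PRESENTATION = auto()

class ToyWindow:
    """Stand-in for a real window: only tracks its current state."""
    def __init__(self):
        self.state = WindowState.NORMAL

def run_state_test(window, target_state, check):
    """Put `window` into `target_state`, run `check`, and ALWAYS restore
    the original state -- even when the check fails -- so that later
    tests do not inherit a dirty environment."""
    original = window.state
    window.state = target_state
    try:
        check(window)
    finally:
        window.state = original  # restore even on assertion failure

def failing_check(window):
    # Deliberately fails, to simulate a broken test.
    assert window.state is WindowState.NORMAL, "deliberately fails"

window = ToyWindow()
try:
    run_state_test(window, WindowState.PRESENTATION, failing_check)
except AssertionError:
    pass  # the check failed...
print(window.state)  # ...but the window was still restored to NORMAL
```

In a real pytest suite, the same guarantee is usually expressed as a fixture whose body does its restore after a `yield`.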
Mostly fullscreen and presentation mode, where the navigation bar, title bar, and menu bars remain hidden.
Yes, I realize that; in this case, the tests seem to fail because they don't exit presentation mode after testing. I thought maybe adding a delay to the redraw method would work, but it doesn't. The tests only pass when I set the window state directly on the window object or run the test in isolation. I have also tried to identify any problems with the app interface but didn't find any. The same test logic is run in test_window, but there it works properly. The app implementation of presentation mode calls the window implementation of presentation mode, so the behaviour should be identical.
@freakboy3742 Could you also please take a quick look at the mobile platform tests on the testbed, test_app.py::test_full_screen and test_app.py::test_presentation_mode. Their implementation is identical to that of test_window.py::test_presentation_state, since the app APIs call into the window API endpoints.
I have, and I've already told you the general class of problem you're hitting. You've even hinted at the problem yourself:
If a single test isn't leaving the app in the same state that it found it... well, that's the source of your problem. That's what you need to fix.

As for "the implementation is identical"... well, the golden rule of computers applies: if in doubt, the computer is right. The computer doesn't have opinions. It runs the code it has. If you think something is identical, but testing is producing inconsistent results... something about your implementation isn't identical.

You're the one proposing this PR; the onus is on you to solve problems with the implementation. If you've got a specific question about the way Toga or the testbed operates, then I can do what I can to provide an explanation, but I can only dedicate time to resolving an open-ended "why doesn't it work?" question when the problem is on my personal todo list - which usually means it's something on Toga's roadmap, or it's the source of a bug that is preventing people from using the published version of Toga.
The coverage report complains about lines 846 to 849 (in 06b7f2a). Line 846 is just `@property`. I first thought that maybe the `is_in_presentation_mode` property was not being invoked in the tests, but it is invoked several times (core/tests/app/test_app.py, lines 468 and 480 in 06b7f2a).

Next, it reports missing coverage for line 852 (lines 851 to 854 in 06b7f2a), even though it is exercised by core/tests/app/test_app.py lines 479 and 494 (in 06b7f2a).

So, both of these reported missing-coverage lines don't make much sense to me. Also, the tests expectedly fail on Python 3.13 due to Pillow.
I can't work out what's going on with the coverage report in CI (perhaps a state cache somewhere?); but when I run the tests locally (`tox -m test310`), the lines in question are 830 to 843 in 06b7f2a.
Ok - I think we're down to one last issue with the intermediate state testbed tests. I'm not surprised these intermediate tests are necessary to exercise one (or more) specific intermediate transitions - it's just not clear why that is the case. I strongly suspect the fix is to reduce the test cases to a set of three specific transitions, include all the "triples" that are problematic, and then document on the test that ideally we'd test all intermediate transitions, but that would be computationally prohibitive.
testbed/tests/window/test_window.py (outdated):

```python
@pytest.mark.parametrize(
    "intermediate_states",
    [
        (
```
Ok... but again... I'm not clear on what this is testing. It's now a parameterised test with 1 parameterised value... and it's not clear at all what case it's testing, or why that specific case needs to be tested.
Each of the individual transitions should be tested as part of the previous test; if there's a need to test a specific transition through an intermediate state, then that sequence should be tested (and it should be documented why it's an issue).
Ideally, we'd test all possible intermediate states; however, I'll grant that 125 test cases is excessive here, so in the name of practicality, it's acceptable to cut to the subset that we believe to be an issue... but it should be clear which "set of three" is the problem.
(And, to be clear - the same issue applies to the desktop intermediate states test)
I've also found an interesting edge case in manual testing: on GTK, going FULLSCREEN->PRESENTATION->NORMAL results in the restored "normal" size of the window being the full screen size. I'm guessing this might have something to do with the window not going through an intermediate NORMAL transition between FULLSCREEN and PRESENTATION, resulting in the window's "natural" size being modified.
I tried to reproduce this on X11 through the following:

```python
def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.PRESENTATION
    self.main_window.state = WindowState.NORMAL
```

But I could not reproduce the behavior you are describing. Could you confirm if you did the manual testing in a Wayland environment?
Yes - this was under Wayland. When I recreated the test under X, I didn't see the problem (and, FWIW, under X, it visually looked like there was a "reduce to NORMAL" step in between the FULLSCREEN and PRESENTATION).
EDIT: That didn't work.
Adding a slight delay before switching on wayland worked :D
```python
# state switching.
if IS_WAYLAND:  # pragma: no-cover-if-linux-x
    GLib.timeout_add(
        10, partial(self._apply_state, WindowState.NORMAL)
```
Adding "magic numbers" is borderline behavior in a test; but in production code, it's really not a good idea - who is to say that 10ms is enough? It's inherently going to be machine dependent; and as machines change over time, it's unclear how performance changes will impact the chosen "magic" number.
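One timer-free shape this logic could take (a plain-Python sketch with invented names, not the actual GTK backend) is to coalesce rapid requests and apply the latest one from the toolkit's own "transition complete" notification rather than after a fixed delay:

```python
class StateApplier:
    """Coalesce rapid window-state requests: remember only the latest
    requested state, and apply it when the windowing system reports that
    the previous transition has completed (instead of a fixed 10ms)."""

    def __init__(self):
        self.current = "NORMAL"
        self.pending = None
        self.in_transition = False

    def request(self, state):
        if self.in_transition:
            # A transition is underway; just overwrite the pending target.
            self.pending = state
        else:
            self._begin(state)

    def _begin(self, state):
        self.in_transition = True
        self.current = state  # a real backend would drive the toolkit here

    def on_transition_done(self):
        # In real code this would be wired to the toolkit's
        # "state changed" event, not a timer.
        self.in_transition = False
        if self.pending is not None:
            nxt, self.pending = self.pending, None
            self._begin(nxt)

applier = StateApplier()
applier.request("FULLSCREEN")    # starts immediately
applier.request("PRESENTATION")  # queued while the first transition runs
applier.request("NORMAL")        # overwrites the queued target
applier.on_transition_done()     # applies NORMAL, skipping PRESENTATION
applier.on_transition_done()
print(applier.current)  # NORMAL
```

The point of the sketch is that the scheduling decision hinges on an event from the toolkit, so no machine-dependent constant is involved.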
I agree. I will try to find a GTK callback event that is triggered on completion of GTK-side processing.
I tried to use `Gtk.events_pending()` and `Gtk.main_iteration_do()`, but it still didn't work:
```diff
diff --git a/gtk/src/toga_gtk/window.py b/gtk/src/toga_gtk/window.py
index f2a0e1448..1c4407f8a 100644
--- a/gtk/src/toga_gtk/window.py
+++ b/gtk/src/toga_gtk/window.py
@@ -79,20 +79,16 @@ class Window:
                 # Add slight delay to prevent glitching on wayland during rapid
                 # state switching.
                 if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                    GLib.timeout_add(
-                        10, partial(self._apply_state, WindowState.NORMAL)
-                    )
-                else:  # pragma: no-cover-if-linux-wayland
-                    self._apply_state(WindowState.NORMAL)
+                    while Gtk.events_pending():
+                        Gtk.main_iteration_do(False)
+                    self._apply_state(WindowState.NORMAL)
             else:
                 self._pending_state_transition = None
         else:
             if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                GLib.timeout_add(
-                    10, partial(self._apply_state, self._pending_state_transition)
-                )
-            else:  # pragma: no-cover-if-linux-wayland
-                self._apply_state(self._pending_state_transition)
+                while Gtk.events_pending():
+                    Gtk.main_iteration_do(False)
+                self._apply_state(self._pending_state_transition)

     def gtk_delete_event(self, widget, data):
         # Return value of the GTK on_close handler indicates whether the event has been
```
Also tried `GLib.idle_add()`, and it also didn't work:
```diff
diff --git a/gtk/src/toga_gtk/window.py b/gtk/src/toga_gtk/window.py
index f2a0e1448..dea71f108 100644
--- a/gtk/src/toga_gtk/window.py
+++ b/gtk/src/toga_gtk/window.py
@@ -79,17 +79,15 @@ class Window:
                 # Add slight delay to prevent glitching on wayland during rapid
                 # state switching.
                 if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                    GLib.timeout_add(
-                        10, partial(self._apply_state, WindowState.NORMAL)
-                    )
+                    GLib.idle_add(partial(self._apply_state, WindowState.NORMAL))
                 else:  # pragma: no-cover-if-linux-wayland
                     self._apply_state(WindowState.NORMAL)
             else:
                 self._pending_state_transition = None
         else:
             if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                GLib.timeout_add(
-                    10, partial(self._apply_state, self._pending_state_transition)
+                GLib.idle_add(
+                    partial(self._apply_state, self._pending_state_transition)
                 )
             else:  # pragma: no-cover-if-linux-wayland
                 self._apply_state(self._pending_state_transition)
```
I have also tested with the following signals, and they still do not work consistently:
- `map-event`
- `realize`
- `draw`
- `configure-event`
- `state-flags-changed`
testbed/tests/window/test_window.py (outdated):

```python
# This set of intermediate states is randomly chosen and is not intended
# to check for specific problematic state transitions. Instead, it is used
# to test that rapidly passing new states to the window.state setter does
# not cause any unexpected glitches.
```
There was a comment on a previous review that referenced this block of code which seems to have been dismissed - possibly because of the addition of this comment. However, this comment doesn't satisfactorily address the question that was being asked: why this specific set of states? What is this test doing? "Randomly chosen" doesn't seem likely - because there's nothing "random" about the logic that is causing the transitions. It's validating a very specific set of transitions through intermediate states - and we need to understand why that specific set of transition tests is required.
As I indicated in my previous review, ideally we'd test every one of the 125 possible intermediate transitions - but that would be prohibitively time consuming. However, we can reduce to the set that we know to be a problem - which should also map to the set of transitions that rely on complex intermediate state behavior management.
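For reference, the 125 figure is the number of ordered triples drawn (with repetition) from the five window states, which is easy to enumerate:

```python
from itertools import product

STATES = ["NORMAL", "MAXIMIZED", "MINIMIZED", "FULLSCREEN", "PRESENTATION"]

# Every ordered "triple" of intermediate states, repeats allowed: 5 ** 3.
triples = list(product(STATES, repeat=3))
print(len(triples))  # 125
```

A parametrized testbed test over `triples` would exhaustively cover the intermediate transitions, at the cost of 125 slow window-state test cases.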
But on the mobile backend, no combination of states is problematic, so which set should I keep in order to test that "rapid assignment of new intermediate states" doesn't cause any unexpected problems?
On Android, the states having the most complex behavior are FULLSCREEN and PRESENTATION, which I have included in the intermediate state test case.
I'm not sure I understand... if there's no problematic behavior... what is being tested?
I wanted to test that rapidly passing new states to the `Window.state` setter doesn't cause any glitches, i.e., doing:

```python
def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.PRESENTATION
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL
```

doesn't cause any unexpected glitch.
The set intermediate_states could have been any of the following:

```python
def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.PRESENTATION
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL
```

or:

```python
def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL
```

or:

```python
def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.MAXIMIZED
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL
```
The set of states I have chosen as intermediate_states for this test was not specifically selected and does not test for any specific "intermediate state transition" issue.
However, in the desktop intermediate states test, the sets of states chosen as intermediate_states are specifically selected, in a specific order, with a specific number of states in each set. This is to trigger certain cases which were causing glitches leading to test failures (on macOS & GTK). So, keeping these 2 specific intermediate_states test cases is important to ensure that future changes don't reintroduce behavior that can lead to glitches.
Ok - if there are known cases that cause glitches, then those should be tests, with each individual glitch case as a standalone test, not combined into a 5-state chain - unless the 5-state chain is an essential part of reproducing the problem - in which case, I'd like to have a better understanding of why that specific chain is causing a glitch.
If the mobile case doesn't have any glitch cases, then a test isn't required.
I agree with you; since the mobile case doesn't have any glitch cases, I will remove the intermediate states from the test.
Fixes #1857
Design discussion: #1884
The following are the API changes:
On the interface:
- `toga.constants.WindowState`
  - `WindowState.NORMAL`
  - `WindowState.MAXIMIZED`
  - `WindowState.MINIMIZED`
  - `WindowState.FULLSCREEN`
  - `WindowState.PRESENTATION`
- `toga.Window`
  - `toga.Window.state` (getter)
  - `toga.Window.state` (setter)
  - `toga.Window.full_screen` (getter)
  - `toga.Window.full_screen` (setter)
- `toga.App`
  - `toga.App.is_in_presentation_mode` (getter)
  - `toga.App.enter_presentation_mode()`
  - `toga.App.exit_presentation_mode()`
  - `toga.App.is_full_screen` (getter)
  - `toga.App.set_full_screen()`
  - `toga.App.exit_full_screen()`

On the backend:
- `toga.Window`
  - `get_window_state()`
  - `set_window_state()`
- `toga.App`
  - `enter_presentation_mode()`
  - `exit_presentation_mode()`
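To illustrate how the interface and backend halves of this list relate, here is a toy sketch (the `_impl`/`DummyBackendWindow` names are invented for illustration; only the `state`, `get_window_state()`, and `set_window_state()` names come from the list above): the public `state` property on the core window delegates to the backend's getter/setter pair.

```python
from enum import Enum, auto

class WindowState(Enum):
    NORMAL = auto()
    MAXIMIZED = auto()
    MINIMIZED = auto()
    FULLSCREEN = auto()
    PRESENTATION = auto()

class DummyBackendWindow:
    """Stand-in for a platform backend implementing the backend API."""
    def __init__(self):
        self._state = WindowState.NORMAL

    def get_window_state(self):
        return self._state

    def set_window_state(self, state):
        # A real backend would drive the native toolkit here.
        self._state = state

class Window:
    """Toy core-side window: the public `state` property delegates to
    the backend's get_window_state()/set_window_state()."""
    def __init__(self):
        self._impl = DummyBackendWindow()

    @property
    def state(self):
        return self._impl.get_window_state()

    @state.setter
    def state(self, value):
        self._impl.set_window_state(value)

win = Window()
win.state = WindowState.PRESENTATION
print(win.state)  # WindowState.PRESENTATION
```

This split keeps all platform-specific behavior behind the two backend methods, so each platform only has to implement state mapping, not the public property surface.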
However, I did encounter some issues, which I have put as inline comments. I do have some other questions about rubicon for implementing the new API, but I will post them later on.
TODO: Write and modify documentation, fix issues with tests, implement for the other remaining platforms, etc.
PR Checklist: