Add window states API #2473

Open · wants to merge 285 commits into main

Conversation

@proneon267 (Contributor) commented Apr 2, 2024:

Fixes #1857

Design discussion: #1884

The following are the API changes:

On the interface:

  • toga.constants.WindowState: added WindowState.NORMAL, WindowState.MAXIMIZED,
    WindowState.MINIMIZED, WindowState.FULLSCREEN, and WindowState.PRESENTATION
  • toga.Window: added toga.Window.state (getter) and toga.Window.state (setter);
    deprecated toga.Window.full_screen (getter) and toga.Window.full_screen (setter)
  • toga.App: added toga.App.is_in_presentation_mode (getter),
    toga.App.enter_presentation_mode(), and toga.App.exit_presentation_mode();
    deprecated toga.App.is_full_screen (getter), toga.App.set_full_screen(),
    and toga.App.exit_full_screen()

On the backend:

  • toga.Window: added get_window_state() and set_window_state()
  • toga.App: added enter_presentation_mode() and exit_presentation_mode()
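
As a minimal sketch of how this API fits together (the call patterns below are inferred from the signatures listed above, not taken from the PR's documentation):

    import toga
    from toga.constants import WindowState


    class ExampleApp(toga.App):
        def startup(self):
            self.main_window = toga.MainWindow(title=self.formal_name)
            self.main_window.show()

            # Read and set the window state via the new property.
            assert self.main_window.state == WindowState.NORMAL
            self.main_window.state = WindowState.MAXIMIZED

            # Presentation mode is controlled at the app level; it accepts
            # a list of windows, or a dict mapping screens to windows.
            self.enter_presentation_mode({self.screens[0]: self.main_window})
            assert self.is_in_presentation_mode
            self.exit_presentation_mode()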

However, I did encounter some issues, which I have noted as inline comments. I also have some other questions about Rubicon for implementing the new API, but I will post them later.

TODO: Write and modify documentation, fix issues with tests, implement for the other remaining platforms, etc.

PR Checklist:

  • All new features have been tested
  • All new features have been documented
  • I have read the CONTRIBUTING.md file
  • I will abide by the code of conduct

@proneon267 (Contributor Author):

@freakboy3742 Can you also review this when you're free?

@freakboy3742 (Member):

Unless there's a design or implementation issue that requires feedback, I generally don't review code until it's passing CI - this is currently failing coverage and Android tests, and I'm going to guess iOS tests as well (although #2476 is likely masking that problem).

Is there a specific design or implementation issue where you're seeking feedback prior to getting the tests to pass?

@proneon267 (Contributor Author):

I needed some guidance regarding the Android testbed, specifically test_app.py::test_full_screen and test_presentation_mode. I have explained the issue with inline comments on test_app.py.

@freakboy3742 (Member):

> I needed some guidance regarding the Android testbed, specifically test_app.py::test_full_screen and test_presentation_mode. I have explained the issue with inline comments on test_app.py.

Well - I guess my first question is why is there anything to test at all? Mobile apps don't have any concept of "full screen" or "maximized"... even the representation of "window" is pretty loose. What behaviour are you implementing here?

Beyond that - if a test passes in isolation, but not as part of the suite, that usually indicates that one of the previous tests has left the test suite in an inconsistent state. A common example in the window tests is when a test leaves an open window (sometimes due to a test failing); subsequent checks of the window count then fail.
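
As a hypothetical sketch of that pattern (not the testbed's actual fixture), any window a test opens can be closed unconditionally on teardown, so later window-count assertions start from a clean state:

    import pytest
    import toga


    @pytest.fixture
    def scratch_window(app):
        # Hypothetical fixture: hand the test a fresh window, and
        # guarantee it is closed again afterwards - even if the test
        # body fails.
        window = toga.Window()
        yield window
        if window in app.windows:
            window.close()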

@proneon267 (Contributor Author):

> Well - I guess my first question is why is there anything to test at all? Mobile apps don't have any concept of "full screen" or "maximized"... even the representation of "window" is pretty loose. What behaviour are you implementing here?

Mostly fullscreen and presentation modes, where the navigation bar, title bar, and menu bars remain hidden.

> Beyond that - if a test passes in isolation, but not as part of the suite, that usually indicates that one of the previous tests has left the test suite in an inconsistent state. A common example in the window tests is when a test leaves an open window (sometimes due to a test failing); subsequent checks of the window count then fail.

Yes, I realize that; in this case, the tests seem to fail because they don't exit presentation mode after testing. I thought maybe adding a delay to the redraw method would work, but it doesn't. The tests only pass when I set the window state directly on the window object, or when I run the test in isolation.

I have also tried to identify any problems with the app interface, but didn't find any. The same test logic runs in test_window, where it works properly. The app implementation of presentation mode calls the window implementation of presentation mode, so the behaviour should be identical.
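
A sketch of that delegation, assuming the backend method names listed at the top of this PR (an illustration of the claim, not the actual backend code):

    from toga.constants import WindowState


    class App:  # backend implementation (sketch)
        def enter_presentation_mode(self, screen_window_dict):
            # The app-level call just fans out to the window-level state API.
            for screen, window in screen_window_dict.items():
                window._impl.set_window_state(WindowState.PRESENTATION)

        def exit_presentation_mode(self):
            # self.interface is the toga.App that this backend wraps.
            for window in self.interface.windows:
                if window.state == WindowState.PRESENTATION:
                    window._impl.set_window_state(WindowState.NORMAL)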

@proneon267 (Contributor Author):

@freakboy3742 Could you also take a quick peek at the mobile-platform testbed tests for test_app.py::test_full_screen and test_presentation_mode? Their implementation is identical to that of test_window.py::test_presentation_state, since the app APIs call into the window API endpoints.

@freakboy3742 (Member):

> @freakboy3742 Could you also take a quick peek at the mobile-platform testbed tests for test_app.py::test_full_screen and test_presentation_mode? Their implementation is identical to that of test_window.py::test_presentation_state, since the app APIs call into the window API endpoints.

I have, and I've already told you the general class of problem you're hitting. You've even hinted at the problem yourself:

> Yes, I realize that; in this case, the tests seem to fail because they don't exit presentation mode after testing.

If a single test isn't leaving the app in the same state that it found it... well, that's the source of your problem. That's what you need to fix.

As for "the implementation is identical"... well, the golden rule of computers applies: if in doubt, the computer is right. The computer doesn't have opinions. It runs the code it has. If you think something is identical, but testing is producing inconsistent results... something about your implementation isn't identical.

You're the one proposing this PR; the onus is on you to solve problems with the implementation. If you've got a specific question about the way Toga or the testbed operates, then I can do what I can to provide an explanation, but I can only dedicate time to resolving open-ended "why doesn't it work?" questions when the problem is on my personal todo list - which usually means it's something on Toga's roadmap, or it's the source of a bug that is preventing people from using the published version of Toga.

@proneon267 (Contributor Author) commented May 23, 2024:

The coverage report complains about:

Name              Stmts   Miss Branch BrPart  Cover   Missing
-------------------------------------------------------------
src/toga/app.py     357      2     82      2  99.1%   846, 852
-------------------------------------------------------------
TOTAL              5067      2   1238      2  99.9%

toga/core/src/toga/app.py

Lines 846 to 849 in 06b7f2a

@property
def is_in_presentation_mode(self) -> bool:
    """Is the app currently in presentation mode?"""
    return any(window.state == WindowState.PRESENTATION for window in self.windows)

So, Line 846 is just @property. I first thought maybe the is_in_presentation_mode property was not being invoked in the tests, but it is invoked several times:
assert not app.is_in_presentation_mode

assert app.is_in_presentation_mode

Next, it reports missing coverage for Line 852:

toga/core/src/toga/app.py

Lines 851 to 854 in 06b7f2a

def enter_presentation_mode(
    self,
    window_list_or_screen_window_dict: list[Window] | dict[Screen, Window],
) -> None:

So, line 852 is just self,. Here, too, enter_presentation_mode() is invoked several times during the tests:

app.enter_presentation_mode({app.screens[0]: window1})

app.enter_presentation_mode([window1, window2])

So, neither of these reported coverage misses makes much sense to me.

Also, the tests expectedly fail on Python 3.13 due to Pillow.

@freakboy3742 (Member):

I can't work out what's going on with the coverage report in CI (perhaps a state cache somewhere?); but if I run the tests locally (tox -m test310), the lines in app.py that report as uncovered are 836 and 842, which correspond nicely to the return conditions in set_full_screen():

toga/core/src/toga/app.py

Lines 830 to 843 in 06b7f2a

#     DeprecationWarning,
#     stacklevel=2,
# )
if self.windows is not None:
    self.exit_full_screen()
    if windows is None:
        return
    screen_window_dict = dict()
    for window, screen in zip(windows, self.screens):
        screen_window_dict[screen] = window
    self.enter_presentation_mode(screen_window_dict)
else:
    warn("App doesn't have any windows")

@freakboy3742 (Member) left a comment:

Ok - I think we're down to one last issue with the intermediate state testbed tests. I'm not surprised these intermediate tests are necessary to exercise one (or more) specific intermediate transitions - it's just not clear why that is the case. I strongly suspect the fix is to reduce the test cases to moves between three specific states, include all the "triples" that are problematic, and then document on the test that ideally we'd test all intermediate transitions, but that this would be computationally prohibitive.

@pytest.mark.parametrize(
    "intermediate_states",
    [
        (
Member:

Ok... but again... I'm not clear on what this is testing. It's now a parameterised test with 1 parameterised value... and it's not clear at all what case it's testing, or why that specific case needs to be tested.

Each of the individual transitions should be tested as part of the previous test; if there's a need to test a specific transition through an intermediate state, then that sequence should be tested (and it should be documented why it's an issue).

Ideally, we'd test all possible intermediate states; however, I'll grant that 125 test cases is excessive here, so in the name of practicality, it's acceptable to cut to the subset that we believe to be an issue... but it should be clear which "set of three" is the problem.

(And, to be clear - the same issue applies to the desktop intermediate states test)
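
A hypothetical shape for such a reduced test (the triples below are illustrative placeholders, not the actual problematic set):

    import pytest
    from toga.constants import WindowState

    # Ideally all 125 intermediate transitions would be covered, but that
    # is computationally prohibitive; only the triples known to be
    # problematic are exercised (illustrative values shown here).
    PROBLEMATIC_TRIPLES = [
        (WindowState.FULLSCREEN, WindowState.PRESENTATION, WindowState.NORMAL),
        (WindowState.MINIMIZED, WindowState.FULLSCREEN, WindowState.NORMAL),
    ]


    @pytest.mark.parametrize("intermediate_states", PROBLEMATIC_TRIPLES)
    async def test_window_state_rapid_transitions(main_window, intermediate_states):
        # Assign the states back-to-back, without waiting for each
        # transition to settle, to surface glitches in intermediate-state
        # handling.
        for state in intermediate_states:
            main_window.state = state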

@freakboy3742 (Member):

I've also found an interesting edge case in manual testing: on GTK, going FULLSCREEN->PRESENTATION->NORMAL results in the restored "normal" size of the window being the full screen size. I'm guessing this might have something to do with the window not going through an intermediate NORMAL transition between FULLSCREEN and PRESENTATION, resulting in the window's "natural" size being modified.

@proneon267 (Contributor Author):

> I've also found an interesting edge case in manual testing: on GTK, going FULLSCREEN->PRESENTATION->NORMAL results in the restored "normal" size of the window being the full screen size. I'm guessing this might have something to do with the window not going through an intermediate NORMAL transition between FULLSCREEN and PRESENTATION, resulting in the window's "natural" size being modified.

I tried to reproduce this on X11, both through the examples/window app and manually with:

def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.PRESENTATION
    self.main_window.state = WindowState.NORMAL

But I could not reproduce the behavior you are describing. Could you confirm whether you did the manual testing in a Wayland environment?

@freakboy3742 (Member):

> But I could not reproduce the behavior you are describing. Could you confirm whether you did the manual testing in a Wayland environment?

Yes - this was under Wayland. I recreated the test under X and don't see the problem (and, FWIW, under X, it visually looked like there was a "reduce to NORMAL" step between FULLSCREEN and PRESENTATION).

@proneon267 (Contributor Author) commented Nov 7, 2024:

I did some tests, and it seems GTK on Wayland just forgets the window's original size and position when the window is restored from other states.

So, I have created a fix accordingly. But since Wayland doesn't allow programmatic moving of windows, I cannot do anything about the forgotten position when restoring to the NORMAL state from any other state.

EDIT: That didn't work.

@proneon267 (Contributor Author):

Adding a slight delay before switching on Wayland worked :D

# state switching.
if IS_WAYLAND:  # pragma: no-cover-if-linux-x
    GLib.timeout_add(
        10, partial(self._apply_state, WindowState.NORMAL)
Member:

Adding "magic numbers" is borderline behavior in a test; but in production code, it's really not a good idea - who is to say that 10ms is enough? It's inherently going to be machine dependent; and as machines change over time, it's unclear how performance changes will impact the chosen "magic" number.

Contributor Author:

I agree; I will try to find a GTK callback event that is triggered on completion of the GTK-side processing.
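
One shape such an approach could take - a sketch assuming GTK3's window-state-event signal and the _apply_state/_pending_state_transition names from the surrounding diffs (as the attempts below show, no signal proved reliable in practice):

    import gi

    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk


    class Window:  # sketch only
        def __init__(self):
            self.native = Gtk.Window()
            self._pending_state_transition = None
            # Apply the queued state only once the windowing system
            # reports that the previous state change has taken effect.
            self.native.connect("window-state-event", self.gtk_window_state_event)

        def gtk_window_state_event(self, widget, event):
            if self._pending_state_transition is not None:
                pending = self._pending_state_transition
                self._pending_state_transition = None
                self._apply_state(pending)
            # Returning False lets GTK continue normal event propagation.
            return False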

Contributor Author:

I tried to use Gtk.events_pending() and Gtk.main_iteration_do(), but it still didn't work:

diff --git a/gtk/src/toga_gtk/window.py b/gtk/src/toga_gtk/window.py
index f2a0e1448..1c4407f8a 100644
--- a/gtk/src/toga_gtk/window.py
+++ b/gtk/src/toga_gtk/window.py
@@ -79,20 +79,16 @@ class Window:
                     # Add slight delay to prevent glitching  on wayland during rapid
                     # state switching.
                     if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                        GLib.timeout_add(
-                            10, partial(self._apply_state, WindowState.NORMAL)
-                        )
-                    else:  # pragma: no-cover-if-linux-wayland
-                        self._apply_state(WindowState.NORMAL)
+                        while Gtk.events_pending():
+                            Gtk.main_iteration_do(False)
+                    self._apply_state(WindowState.NORMAL)
                 else:
                     self._pending_state_transition = None
             else:
                 if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                    GLib.timeout_add(
-                        10, partial(self._apply_state, self._pending_state_transition)
-                    )
-                else:  # pragma: no-cover-if-linux-wayland
-                    self._apply_state(self._pending_state_transition)
+                    while Gtk.events_pending():
+                        Gtk.main_iteration_do(False)
+                self._apply_state(self._pending_state_transition)
 
     def gtk_delete_event(self, widget, data):
         # Return value of the GTK on_close handler indicates whether the event has been

Contributor Author:

Also tried GLib.idle_add(), and it also didn't work:

diff --git a/gtk/src/toga_gtk/window.py b/gtk/src/toga_gtk/window.py
index f2a0e1448..dea71f108 100644
--- a/gtk/src/toga_gtk/window.py
+++ b/gtk/src/toga_gtk/window.py
@@ -79,17 +79,15 @@ class Window:
                     # Add slight delay to prevent glitching  on wayland during rapid
                     # state switching.
                     if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                        GLib.timeout_add(
-                            10, partial(self._apply_state, WindowState.NORMAL)
-                        )
+                        GLib.idle_add(partial(self._apply_state, WindowState.NORMAL))
                     else:  # pragma: no-cover-if-linux-wayland
                         self._apply_state(WindowState.NORMAL)
                 else:
                     self._pending_state_transition = None
             else:
                 if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                    GLib.timeout_add(
-                        10, partial(self._apply_state, self._pending_state_transition)
+                    GLib.idle_add(
+                        partial(self._apply_state, self._pending_state_transition)
                     )
                 else:  # pragma: no-cover-if-linux-wayland
                     self._apply_state(self._pending_state_transition)

Contributor Author:

I have also tested with the following signals, and still they do not work consistently:

  • map-event
  • realize
  • draw
  • configure-event
  • state-flags-changed

# This set of intermediate states is randomly chosen and is not intended
# to check for specific problematic state transitions. Instead, it is used
# to test that rapidly passing new states to the window.state setter does
# not cause any unexpected glitches.
Member:

There was a comment on a previous review that referenced this block of code which seems to have been dismissed - possibly because of the addition of this comment. However, the comment doesn't satisfactorily address the question that was being asked: why this specific set of states? What is this test doing? "Randomly chosen" doesn't seem likely - there's nothing "random" about the logic that causes the transitions. It validates a very specific set of transitions through intermediate states - and we need to understand why that specific set of transition tests is required.

As I indicated in my previous review, ideally we'd test every one of the 125 possible intermediate transitions - but that would be prohibitively time-consuming. However, we can reduce to the set that we know to be a problem - which should also map to the set of transitions that rely on complex intermediate-state behavior management.

Contributor Author:

But on the mobile backend, no combination of states is problematic, so which set should I keep in order to test that "rapid assignment of new intermediate states" doesn't cause any unexpected problems?

On Android, the states with the most complex behavior are FULLSCREEN and PRESENTATION, which I have included in the intermediate state test case.

Member:

I'm not sure I understand... if there's no problematic behavior... what is being tested?

Contributor Author:

I wanted to test that rapidly passing new states to the Window.state setter doesn't cause any glitches, i.e., doing:

def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.PRESENTATION
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL

doesn't cause any unexpected glitch.
The intermediate_states set could have been any of the following:

def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.PRESENTATION
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL

Or

def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL

Or

def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.MAXIMIZED
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.NORMAL

The set of states I have chosen as intermediate_states for this test is not specifically chosen, and does not test for any specific intermediate-state transition issue.

However, in the desktop intermediate states test, the sets of states chosen as intermediate_states are specifically chosen: in a specific order, and with a specific number of states in each set. This is to trigger certain cases that were causing glitches leading to test failures (on macOS & GTK). So, keeping these 2 specific intermediate_states test cases is important to ensure that future changes don't reintroduce behavior that can lead to glitches.

Member:

Ok - if there are known cases that cause glitches, then those should be tests, with each individual glitch case as a standalone test, not combined into a 5-state chain - unless the 5-state chain is an essential part of reproducing the problem, in which case I'd like to have a better understanding of why that specific chain causes a glitch.

If the mobile case doesn't have any glitch cases, then a test isn't required.

Contributor Author:

I agree with you; since the mobile case doesn't have any glitch cases, I will remove the intermediate states from the test.
