
Detection reliability #6

Closed
rafgraph opened this issue May 4, 2017 · 6 comments

Comments

@rafgraph
Owner

rafgraph commented May 4, 2017

@patrickhlauke, any thoughts on the detection reliability (I'd consider >95% on a browser-weighted usage basis a win)?

As a note, there are two main reasons for creating this:

  1. Setting event listeners: always setting both mouse and touch event listeners (basically assuming every device is a hybrid) degrades the experience for touch-only, and to a lesser degree mouse-only, device users. It is slower, and there are edge cases where it is more reliable to listen only for touch events on a touch-only device.
  2. Optimizing the UI for the best user experience: this is based more on primaryInput, which is a derivative of deviceType. For example, if I'm showing a photo gallery on a touch-only device, the best, most native-like experience is to show each photo fullscreen with no buttons (swipe left/right for previous/next, swipe down to go back to all photos, tap to show instructions, and press and hold to show an overlay with info about the photo).

In general, as much as I wish what I build could run perfectly on every device, at some point I have to make assumptions about how the user will interact with my site (touch/mouse) in order to provide the best user experience for the majority of users. So the reliability of Detect It is key, but it need not be 100%.
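
To make reason 1 concrete, here is a minimal sketch of the kind of upfront branching being described. The deviceType values are modeled on what Detect It reports ('mouseOnly' / 'touchOnly' / 'hybrid'); the helper name and event lists are illustrative, not from the library.

```javascript
// Sketch: choose which listeners to attach based on an upfront
// device-type detection, instead of always attaching both families.
function eventsToListenFor(deviceType) {
  switch (deviceType) {
    case 'touchOnly':
      return ['touchstart', 'touchend'];
    case 'mouseOnly':
      return ['mousedown', 'mouseup'];
    case 'hybrid':
    default:
      // Hybrid (or unknown) device: attach both families.
      return ['touchstart', 'touchend', 'mousedown', 'mouseup'];
  }
}

// In a browser you might then wire it up roughly like:
// eventsToListenFor(detectIt.deviceType)
//   .forEach(type => el.addEventListener(type, handler));
```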

@patrickhlauke

to be honest, i fundamentally disagree with an either/or approach, and would rather always support all inputs, as the world is more complex than a binary "either touch or mouse". while i understand your reasons, i'd still say there's more nuance to be had.

  1. if having both touch and mouse listeners makes your site/app slower, i'd seriously consider having a good hard look at the code. the impact of having registered, but not used, event listeners is negligible to non-existent.
  2. you can have UIs that adapt to whatever input the user is using right at that moment. by all means make an initial assumption "if touch events present, start off showing just the image gallery with no visible buttons; as soon as i detect an actual mousemove, fade the buttons in; as soon as i see a touchstart, fade them back out"
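
The adaptive approach in point 2 can be sketched as a tiny state machine that tracks the most recent input seen (the function and mode names here are illustrative, not from any library):

```javascript
// Sketch: start from an initial guess, then switch UI mode based on
// the most recently observed input event.
function createInputModeTracker(initialMode) {
  let mode = initialMode; // 'touch' -> controls hidden, 'mouse' -> visible
  return {
    get mode() { return mode; },
    // Feed this from real listeners in a browser, e.g.
    // document.addEventListener('mousemove', e => tracker.handle(e.type));
    handle(eventType) {
      if (eventType === 'mousemove') mode = 'mouse';       // fade buttons in
      else if (eventType === 'touchstart') mode = 'touch'; // fade them out
      return mode;
    },
  };
}
```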

whatever reliability figure you come up with today is going to change as the device landscape changes. personally, i'd stay away from making any numerical claim, and instead simply be upfront about exactly what is detected, and what isn't.

@rafgraph
Owner Author

rafgraph commented May 4, 2017

Thanks for the feedback. I fully agree that either mouse or touch is not the way to go, which is why I account for hybrid devices as well. However, I believe the vast majority of devices in use are mouse-only or touch-only, so always optimizing for hybrid devices is also not the way to go.

The slowness from setting extra listeners is anecdotal and not something I've spent much time on. The bigger problem is determining that a mouse event was generated from a touch interaction and should be ignored - while most of the time this can be done, there are edge cases where the wrong determination is made, so why introduce those bugs into my app? (I guess it's a trade-off - eliminate those bugs at the expense of bugs introduced by not properly detecting the device type up front.)

From a UI design perspective I fundamentally disagree with the approach you suggest. To fade buttons in means you have to account for them being on the screen at some point, which affects how the UI is designed, and leads to sub-optimal UI design and user experience on touch-only devices.

Regarding a reliability figure, my intention is not to make a claim; rather, the mid-90s (percent) is my cutoff for using something without a backup (e.g. I use CSS Flexbox and I'm okay that my apps will be broken in browsers that don't support it). I was more wondering if you'd noticed any glaring holes that would sink the reliability (I know you maintain a substantial device lab and know how these things work on many more devices than I do). Thanks.

@rafgraph
Owner Author

rafgraph commented May 4, 2017

@RByers any thoughts on this?

@RByers

RByers commented May 4, 2017

/cc @dtapuska

Setting event listeners: always setting mouse and touch event listeners (basically assuming every device is a hybrid) degrades the experience of touch only, and to a lesser degree mouse only, device users. It is slower and there are some edge cases where it is more reliable to just listen for touch events on a touch only device.

In terms of the edge cases when trying to handle both mouse and touch events, this is exactly what event.sourceCapabilities.firesTouchEvents is designed to solve. Just ignore the mouse events that have this bit set (though you'll need the polyfill or some other strategy on non-Chrome browsers, since the API is still Chrome-only). Then you can listen for both touch and mouse events and avoid any double-handling for the "mouse" events created via a touch.
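
A minimal sketch of that check (sourceCapabilities.firesTouchEvents is the real Chrome API named above; the helper name is illustrative):

```javascript
// Sketch: drop "compatibility" mouse events that were synthesized
// from a touch, by checking sourceCapabilities.firesTouchEvents.
// Non-Chrome browsers need the InputDeviceCapabilities polyfill.
function shouldIgnoreMouseEvent(event) {
  return Boolean(
    event.sourceCapabilities && event.sourceCapabilities.firesTouchEvents
  );
}

// Usage in a browser:
// el.addEventListener('mousedown', e => {
//   if (shouldIgnoreMouseEvent(e)) return; // came from a touch
//   handleMouse(e);
// });
```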

Or alternatively, if Pointer Events are supported you could listen only for pointer events (necessary if you want touch support on Microsoft Edge anyway), otherwise fall back to either touch or mouse events. With Firefox close to shipping PE, it'll soon be the case that all major browsers that run on hybrid devices support pointer events (Apple doesn't make hybrid devices).
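
That fallback order can be sketched with simple feature detection. Here `win` stands in for the global `window` so the logic is testable outside a browser; the function name is illustrative:

```javascript
// Sketch: prefer Pointer Events when available, otherwise fall back
// to touch events, and finally to mouse events.
function pickEventFamily(win) {
  if ('PointerEvent' in win) return 'pointer';
  if ('ontouchstart' in win) return 'touch';
  return 'mouse';
}

// In a page: const family = pickEventFamily(window);
```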

Other than that, there really should not be any sort of performance issue. The only events where fractions of a ms could matter are the move ones (since they can fire at ~60Hz), and we never fire a continuous sequence of both touchmove and mousemove (the mouse events get fired for touch only on tap, not drag). Having an extra no-op mousedown and mouseup listener invocation (which just checks firesTouchEvents and returns) should not register in any real-world performance trace. If you can give me a repro showing otherwise, I'd file a Chrome bug that we'd dig into.

On the UX side, I agree that is challenging and I have no great answers. The pointer/hover/any-pointer/any-hover media queries exist to try to make this easier for you (and, critically, let you say what you really want to know rather than relying on some 95%-case heuristic, so that unusual device configurations can still work).
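
For reference, those interaction media features can be read from script via matchMedia. A minimal sketch, with `matchMedia` injected so the logic runs outside a browser (in a page you'd pass `window.matchMedia.bind(window)`; the function name and return values are illustrative):

```javascript
// Sketch: derive a primary-input guess from the `pointer` media feature.
// `pointer: coarse` -> primary pointer is inaccurate (typically touch);
// `pointer: fine`   -> accurate (typically a mouse/trackpad).
function primaryInputFrom(matchMedia) {
  if (matchMedia('(pointer: coarse)').matches) return 'touch';
  if (matchMedia('(pointer: fine)').matches) return 'mouse';
  return 'unknown'; // queries unsupported, or `pointer: none`
}
```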

@rafgraph
Owner Author

rafgraph commented May 5, 2017

Thanks @RByers. I like the sourceCapabilities solution (I even opened an issue on the polyfill last fall), but until support for it is ubiquitous, using the polyfill and setting mouse event listeners on touch-only devices suffers from phantom mouse bugs that don't belong in a web app running on a touch-only device.

Like so much in web dev it's a trade-off, and I've personally found it more reliable to use an upfront detection (i.e. Detect It) before setting event listeners than to have phantom mouse bugs pop up in unpredictable and hard-to-reproduce ways. With the upfront detection I run the risk that my whole app will fail, but such a failure is easy to reproduce and the point of failure easy to identify, so it can be mitigated.

In building real-world web apps I make two responsive versions: one for when the primaryInput is mouse and one for when the primaryInput is touch, with each version scaling responsively based on the screen/window size (by two versions I mean different component trees rendered out of the same React app). When the deviceType is hybrid, the UX is still optimized for touch or mouse, but the other input type is supported and can control the app (I could make a third version with UX optimized for hybrid that would fade controls in and out, so to speak, but the benefits haven't justified the work in my apps so far). This is the best compromise/optimization I've come up with, and the other major reason for creating Detect It (besides eliminating the phantom mouse bugs).

Out of curiosity, why don't browsers pass on what they know about the device hardware, e.g. hasTouchScreen? They already do this for screen size and pixel density. The hover and pointer media queries sort of do this, but not really.

Pointer Events are another good option, but until support for them is >95% on a browser weighted use basis, I'm going to need to support touch and mouse events. I also build mostly in React which doesn't support them yet.

The performance issue is anecdotal, and not a significant concern or something I've spent much time on.

The hover and pointer media queries are helpful with UX, but they're not supported in Firefox or on some legacy devices (I'd venture that relying solely on them would tell you what you want to know less than 95% of the time). Detect It actually checks them first to determine the primaryInput type, but also fills in the holes when they're not supported by making some assumptions based on other info.
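
That "media queries first, assumptions as fallback" idea can be sketched roughly as below. This is a much-simplified illustration, not Detect It's actual algorithm; the function name and env shape are made up for the example:

```javascript
// Sketch: trust the `pointer` media feature when the browser supports
// it, otherwise guess from touch-event support.
function guessPrimaryInput(env) {
  // env: { matchMedia (may be undefined), hasTouchEvents: boolean }
  if (env.matchMedia) {
    if (env.matchMedia('(pointer: fine)').matches) return 'mouse';
    if (env.matchMedia('(pointer: coarse)').matches) return 'touch';
  }
  // Media queries unavailable: assume touch-event support means touch-first.
  return env.hasTouchEvents ? 'touch' : 'mouse';
}
```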

@rafgraph
Owner Author

rafgraph commented May 5, 2017

The more I think about it, the more I think Pointer Events are a great solution for the future. The one thing I would add is upfront identification of the available pointer types (based on the hardware) and which pointer type is the primary one (not to be confused with the primary pointer).

This would aid in making UX/UI decisions about what to present to the user in a crystal-clear manner: I'd know exactly what input types I'm dealing with and what events they will fire (without the ambiguity of mapping hover and pointer media query results to pointer types).

3 participants