Interaction with IME #16

Open
TakayoshiKochi opened this issue Jul 10, 2015 · 2 comments

@TakayoshiKochi
Member

This is spun off from my comment in the doc.
Please ignore this if the main motivation for the API is pointing devices,
not keyboards or text input devices.

If one of the motivations is to detect the low-level input device,
e.g., whether an Android phone user is typing on the on-screen (virtual) keyboard,
a Bluetooth hardware keyboard, a USB barcode reader, etc.,
then ignoring the IME layer would make sense and would give the script author
information about which physical device is being used.

If the author instead wants to know the details of 'how this input content was generated
from user input', then the problem becomes very complex, depending on how much detail is needed.
Even with a Bluetooth keyboard, an IME may preprocess the raw input on Android.

A raw keypress might go through a very complex path if an IME is involved
(possibly bouncing back and forth several times between the browser and the IME)
before the final character sequence is generated and delivered to the destination web page.
How an application (browser) interacts with the system's IME varies from
system to system, so it's hard to abstract it into an interoperable API.
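
For illustration, here is a minimal sketch of what a page already observes during that
back-and-forth, using the standard composition events (nothing here is new API; the raw
key round trips happen before and between these events):

```ts
// Observing the browser/IME round trips from the page's side.
const field = document.querySelector('input')!;

field.addEventListener('compositionstart', (e) => {
  console.log('IME composition started:', e.data);
});
field.addEventListener('compositionupdate', (e) => {
  console.log('in-flight (uncommitted) text:', e.data); // still being "cooked" by the IME
});
field.addEventListener('compositionend', (e) => {
  console.log('final character sequence delivered to the page:', e.data);
});
```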

So my gut feeling is that IME should be out of scope for this spec,
but there might be some information about the IME that could be useful.
Here are some random ideas (a rough sketch of how they might surface follows the list):

  • a flag that indicates whether the input comes directly from the hardware input device
    ('raw') or has been preprocessed ('cooked')
  • if the input is preprocessed, what the preprocessor is (IME, autocorrect,
    gesture/handwriting recognizer)
  • which language(s)/locale(s) the preprocessor is intended for (may not be a single locale)
  • whether there is in-flight raw input data (similar to compositionupdate vs. compositionend),
    to distinguish an 'in progress' state from a 'finished' one
@RByers added the spec label Jul 10, 2015
@RByers
Member

RByers commented Jul 10, 2015

@TakayoshiKochi thank you!

My motivation at the moment is just around pointing devices (as you can see by the only defined property being firesTouchEvents), but the goal is definitely to expand to text input scenarios (i.e. we're making sure that KeyboardEvent, InputEvent and CompositionEvent all get a sourceDevice property that makes sense). I know @garykac (UI events spec editor) has some plans for how he wants to extend this to address some text input issues. I can see why that's very complex, and I won't pretend to know what the right design is. Hopefully we can add more things only in a very use-case focused way to avoid the risks here.

Given the extremely narrow scope of the API at the moment, I think the only detail related to text input that matters right now is what sourceDevice.firesTouchEvents should return for events fired due to typing on a touchscreen (Android virtual keyboard etc.). Regardless of the points you raise, I think it should be false (whether we're talking about a piece of the physical touchscreen, or the virtual keyboard device - in either case the web page doesn't get touch events).
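
For illustration, a sketch of how a page might consume that, assuming the draft
sourceDevice property and the behaviour proposed above (false for virtual-keyboard
typing); neither is guaranteed by today's browsers:

```ts
// Sketch only: `sourceDevice` is the draft name discussed in this issue.
document.addEventListener('keydown', (e) => {
  const device = (e as any).sourceDevice;   // may be undefined where unimplemented
  if (device && !device.firesTouchEvents) {
    // The typing came from a device that won't also deliver touch events to the
    // page: a physical keyboard, or (per this proposal) a virtual keyboard too.
  }
});
```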

Is there anything in the spec today that you'd be concerned might limit our options around IME scenarios in the future? In practice I don't expect sourceDevice to be really useful on anything other than MouseEvent and FocusEvent at the moment, so it seems very unlikely to me that developers would accidentally take a dependency on anything that could cause us a problem later when we try to extend this to IME scenarios. WDYT?

@TakayoshiKochi
Member Author

Yeah, to be honest, I can't think of a good use case where the IME matters for this input device spec.

In the good old days, there was a clear distinction between input that came directly
from the keyboard and input that went through an IME, but nowadays the border has become
blurry. Take the Google Latin keyboard on Android as an example: it mostly types characters
directly, while also offering more advanced features like auto correction, word prediction,
and gesture recognition.

I'd say it's virtually impossible to come up with abstract classes that classify every
text input device that already exists or will appear in the foreseeable future, but we
probably should not give up on finding a good approach that is better than a UA-string
or USB-device-ID style solution for each input device.

I second your opinion that we should add things in a use-case-focused way.
