Added Support for Touch devices for MapD Draw #29
base: master
Conversation
Hi @chinmaydalvi, thank you for the contribution! Did this work for you on a touch device? It seems to only drag the map around, rather than draw the lasso shape, when I try it hooked into mapd-charting on the example you mentioned in #28: https://omnisci.github.io/mapd-charting/example/example5.html
Hi @jrajav,

In this PR I have added the missing touch-device events for the MapD Draw library. The example above works fine on non-touch devices because all of the mouse pointer events are already present in the library, but on touch devices it does not work because the touch events are missing. This PR adds those missing touch events so the example can also work on touch devices.

Regarding your question about the Lasso Tool example: the Lasso Tool is built on top of the mapd-draw library, and the custom code for your Lasso Tool example lives in the mapd-charting library. To support the Lasso Tool on touch devices we also need to add support in that custom code, but before that we need basic touch-event support in mapd-draw itself; without these changes in mapd-draw we can't directly provide support in the Lasso custom code. Therefore, in this pull request I have added the touch-event support for mapd-draw first.

I hope that clarifies things. Thanks,
Hi @jrajav
Hey Chinmay,
First of all, thanks for submitting a PR, and our apologies for such a delay in responding to your request. This is obviously our fault and is due to a lack of ownership for this library on our end. I was the original author of this library, but I work in a different engineering department at OmniSci and therefore do not actively look at frontend-related repos much.
For a little history: I intentionally avoided adding touch event support, primarily because of the design I had in mind from an interactions standpoint. I wanted the interactions to be as intuitive as possible, with some guides along the way (scale/rotate cursors, for example, to indicate what transform action could be done). This is much more difficult to do with touch. For example, in shape edit mode, how will you let the user know whether they're about to perform a scale or a rotate when the touch points that activate those transforms are very close together? This is easy to solve with mouse events, since we can make use of mouse hovering; it is not easy to do with touch. Additionally, I think that to make touch-clicking more robust, a larger buffer needs to be put around the touch point when doing the intersection lookup. I found it hard to click on the scale/rotate blocks in touch mode on an iPhone S-plus screen, though I should do some more testing on a larger touch screen.
Although you've made a great effort here, I'm concerned it is not fully complete, particularly from a UI perspective. Have you been actively using your fork in a production or test environment? If so, how is it performing for you? Have you had any issues with it being intuitive to use?

I'm not against approving your PR with some cosmetic changes as a first step, but we'll need to handle some of the missing pieces relatively soon for completeness, because full touch support is not quite there with these changes compared with the mouse side.
// This method will add clientX, clientY, offsetX, and offsetY for touch events
function getTouchCoordinates(event, canvas) {
  event.clientX = event.touches[0].clientX
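The diff excerpt above is truncated. For context, a plausible shape for the rest of this helper, assuming the offsets are derived from the canvas bounding rect (this is an illustrative sketch, not the PR's actual code), would be:

```javascript
// Hypothetical completion for illustration only -- the actual PR code may differ.
// Copies the first touch point's client coordinates onto the event and derives
// offsetX/offsetY relative to the canvas, mirroring what mouse events provide.
function getTouchCoordinates(event, canvas) {
  const touch = event.touches[0]
  const rect = canvas.getBoundingClientRect()
  event.clientX = touch.clientX
  event.clientY = touch.clientY
  event.offsetX = touch.clientX - rect.left
  event.offsetY = touch.clientY - rect.top
  return event
}
```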
I'm concerned about whether grabbing index 0 of the touches will be consistent and produce stable interactions. For example, what would happen if you did a two-finger touch, removed the first finger, but kept the second finger in contact and moved it around? Or what would happen if you first pressed with one finger, then pressed with a second finger, then removed the first finger and moved the second around?

I wonder if it would be best to limit the scope of the touch event handling to just a single touch; anything more than that would cancel the event. Thoughts?
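The single-touch scoping suggested above could look something like this (a sketch only; the function and handler names are illustrative, not part of the PR):

```javascript
// Illustrative guard: bail out of a touch handler unless exactly one
// touch point is active, avoiding ambiguity over which finger "owns"
// the interaction when fingers are added or removed mid-gesture.
function isSingleTouch(event) {
  return Boolean(event.touches) && event.touches.length === 1
}

// Example usage inside a hypothetical touchmove callback:
function handleTouchMove(event) {
  if (!isSingleTouch(event)) {
    return null // cancel/ignore multi-touch gestures entirely
  }
  return event.touches[0] // the single active touch point
}
```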
TOUCHMOVE: "touchmove"
}
const DOUBLE_CLICK_DELAY = 600 // To detect the double click in case of touch screen |
600 ms seems a bit too long for a double-click. How did you come up with this magic number? Did it feel appropriate in your testing? My first thought is that it should be at most 300 ms.
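For comparison, a double-tap check with the more conventional ~300 ms window might look like this (the threshold and names are suggestions, not the PR's code):

```javascript
const DOUBLE_TAP_DELAY_MS = 300 // suggested upper bound; the PR currently uses 600

// Returns true when the current tap lands within the double-tap
// window of the previous tap.
function isDoubleTap(previousTapTime, now = Date.now()) {
  return (now - previousTapTime) < DOUBLE_TAP_DELAY_MS
}
```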
// This method is used to stop mouse event propagation triggered from a touch event
setDenyMouseEventFlag(event) {
  if (event.touches) {
    this.denyMouseEvent = true
  } else if (event.type === EventsTypes.MOUSEUP) {
    // Reset the flag at the end of the mouse event sequence, i.e. on mouseup
    this.denyMouseEvent = false
  }
}
Can you explain the scenario you're trying to avoid by setting the denyMouseEvent property? If I'm following along correctly, it seems you're trying to avoid mouse/touch conflicts on a system configured with both a mouse and a touch screen. That's a configuration we should be concerned with, but the logic seems strange.

So, if a touch event comes through, all mouse events are ignored and continue to be ignored until a mouse click comes around again; that is, a mouse click needs to happen to re-activate mouse events. At first glance this seems unintuitive, and I think we could avoid such secret interactions by using other currently existing state, such as _dragInfo. But I may not quite get what you're trying to do; an explanation of your intent here might help my understanding.
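One standard alternative worth considering: per the W3C Touch Events spec, browsers fire "compatibility" mouse events (mousedown/mousemove/mouseup/click) after a touch sequence, and calling preventDefault() on the touchstart suppresses that synthesized sequence at the source, which may remove the need for a sticky flag. A minimal sketch (the callback name is illustrative):

```javascript
// Alternative sketch: rather than tracking a denyMouseEvent flag, suppress
// the browser-synthesized "compatibility" mouse events at the source. Per
// the Touch Events spec, preventDefault() during touchstart stops the
// browser from firing the follow-up mouse event sequence for that touch.
function touchstartCB(event) {
  event.preventDefault() // no synthetic mouse events will follow this touch
  // ...proceed with touch-specific handling...
}
```

The trade-off is that preventDefault() on touchstart also suppresses other default behaviors (e.g. scrolling originating from that touch), so it only fits handlers that fully own the gesture.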
// This function allows movement of the parent container (in our case, the map)
// when the user clicks anywhere on the map except on a shape.
// It also changes the cursor icon for desktop devices.
_makeParentElementMovable() {
  removeCustomCursor()
  this._parent.style.cursor = "default" // Change the cursor icon for desktop devices
  for (let j = 0; j < this._parent.childNodes.length; j += 1) {
    this._parent.childNodes[j].style.cursor = "default" // Change the cursor icon for desktop devices
    if (this._parent.childNodes[j].nodeName.toLowerCase() !== "canvas") {
      this._parent.childNodes[j].style.pointerEvents = "auto" // Allow movement of the parent container, i.e. the map
    }
  }
}
Can you further explain the intent of this? IIRC, the original intent of the mapd-draw design was to stay out of the way of the DOM layer and event propagation as much as possible, and instead give responsibility to a middle layer to manage these concerns. If it's just the cursor state of sub-DOM elements, my first inclination is that it should be handled by such a middle layer. For Immerse, that middle layer is in mapd-charting.
}
if (event.touches) {
  // Use the previously assigned event object for the offsetX/Y and clientX/Y calculation
  event = this.previousEventObj
Can you explain your reasoning for using the previous event coordinates here? What's wrong with just calling getTouchCoordinates on this event?
}

_mousemoveCB(event) {
  this.setDenyMouseEventFlag(event)
  if (this.denyMouseEvent && !event.touches) {
There are a lot of checks for event.touches since you've collapsed the mouse/touch callbacks together. I think it'd be a bit more readable to turn this into a utility function, isTouchEvent(event), which would return true if the event was a touch event.
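The suggested utility is tiny; a sketch of what it might look like (the name isTouchEvent comes from the review suggestion, not from the PR):

```javascript
// Suggested utility from the review: one predicate instead of repeated
// event.touches checks scattered through the merged mouse/touch callbacks.
// Touch events carry a `touches` TouchList; mouse events do not.
function isTouchEvent(event) {
  return Boolean(event && event.touches)
}
```

The existing guards would then read as, for example, `if (this.denyMouseEvent && !isTouchEvent(event))`.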
  event.preventDefault()
}
if ((Date.now() - this.firstTapTime) < DOUBLE_CLICK_DELAY) {
  this._dblclickCB(event)
This looks OK. Did you verify that this works like a mouse double-click event? In particular, I'm wondering whether a mouse double-click actually runs the mousedown/mouseup handlers twice before firing the double-click event. If that's true, then this looks right, because touchstart/touchend would fire twice before _dblclickCB is called.
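For reference, per the UI Events specification a native dblclick is indeed preceded by two full mousedown/mouseup/click cycles, so firing the double-click callback from the second touchend mirrors that ordering. The tap-timing logic being discussed can be isolated into a testable helper like this (a sketch with illustrative names, not the PR's code; time is passed in explicitly so the logic is deterministic):

```javascript
// Sketch of the double-tap sequencing under discussion: fire the callback
// on the second tap that lands within delayMs of the first, then reset so
// a third tap starts a fresh sequence rather than firing again.
function makeDoubleTapDetector(delayMs, onDoubleTap) {
  let firstTapTime = -Infinity // sentinel: no pending first tap
  return function touchendCB(now) {
    if (now - firstTapTime < delayMs) {
      firstTapTime = -Infinity // reset so a triple tap doesn't fire twice
      onDoubleTap()
    } else {
      firstTapTime = now // record this tap as a potential first tap
    }
  }
}
```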
Merge Checklist
🔧 Issue(s) fixed:
🚬 Smoke Test
🚢 Merge
Things accomplished in this PR