Firefox/Projects/Multitouch Polish/DOM Events


Multitouch events

This wiki page will be used to describe the current state of the touch events being implemented, and to discuss what the format of these events should be, what kind of information they should provide, etc.

The current implementation is being done on Windows 7 using its touch API, but the design should be platform-agnostic.

== Current events ==

* MozTouchDown
* MozTouchMove
* MozTouchRelease

Currently these events inherit from MouseEvent and add a streamId property, which uniquely identifies a tracking point. On Windows 7 this id is provided by the OS/driver layer and is valid only while the same touch point is being tracked; after the finger is released, the id can be (and will be) reused.

Each event relates to a single touch point, so if three fingers are touching the screen, up to three MozTouchMove events may be dispatched per iteration of the message loop.
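
For illustration, a page could listen for these events like any ordinary DOM event. A minimal sketch, assuming the MozTouch* events bubble to the document and expose streamId along with the coordinates inherited from MouseEvent:

<pre>
// Sketch: log every touch point as it is tracked. Assumes the MozTouch*
// events bubble to the document and carry streamId plus the clientX/clientY
// coordinates inherited from MouseEvent.
function logTouch(event) {
  console.log(event.type + " stream " + event.streamId +
              " at (" + event.clientX + ", " + event.clientY + ")");
}
document.addEventListener("MozTouchDown", logTouch, false);
document.addEventListener("MozTouchMove", logTouch, false);
document.addEventListener("MozTouchRelease", logTouch, false);
</pre>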

== Things to add ==

==== Number of touch points ====

Some uses of touch events may need to track several points at once. This can be handled by observing MozTouchDown/MozTouchRelease, but a field with the current number of touch points could be added to simplify things.
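
Until such a field exists, a page can keep the count itself by pairing the down and release events; a minimal sketch:

<pre>
// Sketch: derive the current number of touch points by counting
// MozTouchDown and MozTouchRelease events, since no count field exists yet.
var activeTouchCount = 0;
document.addEventListener("MozTouchDown", function () {
  activeTouchCount++;
}, false);
document.addEventListener("MozTouchRelease", function () {
  activeTouchCount--;
}, false);
</pre>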

==== Size and pressure ====

Touch input may also provide detailed information about the contact area or pressure, but this depends on the platform and the type of screen. We already have a MozPressure attribute on MouseEvent, which is currently only used in some GTK code. Windows 7 provides the width and height of the contact area.
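
If a touch backend populated that attribute, a page could read it as sketched below; whether the MozTouch events actually inherit and fill in mozPressure (and how Windows 7's width/height would map onto it) is an assumption, not current behavior:

<pre>
// Sketch, assuming MozTouch events inherit mozPressure from MouseEvent
// and the platform backend populates it (0.0 to 1.0). This is not
// guaranteed by the current implementation.
document.addEventListener("MozTouchMove", function (event) {
  console.log("stream " + event.streamId + " pressure " + event.mozPressure);
}, false);
</pre>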

== Questions to ask ==

==== Aggregated values ====

For some applications, getting the information for all of the touch points at the same time is important. We send separate events for each touch, so this information is not directly available, but it can easily be supported by a simple JS library which keeps track of the currently active points. Do we leave it simple and let a JS library do the work when needed, or should we make this information always available?
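
Such a library could be as small as the following sketch, which maps each streamId to its last known position (getActiveTouches is a made-up helper name):

<pre>
// Sketch of the JS library mentioned above: track every active touch
// point by its streamId so the full set can be queried at any time.
var activeTouches = {};

function updateTouch(event) {
  activeTouches[event.streamId] = { x: event.clientX, y: event.clientY };
}
document.addEventListener("MozTouchDown", updateTouch, false);
document.addEventListener("MozTouchMove", updateTouch, false);
document.addEventListener("MozTouchRelease", function (event) {
  delete activeTouches[event.streamId];
}, false);

// Returns the last known position of every finger currently down.
function getActiveTouches() {
  var result = [];
  for (var id in activeTouches) {
    result.push(activeTouches[id]);
  }
  return result;
}
</pre>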

==== Compatibility with WebKit ====

WebKit has implemented some multitouch events on the iPhone, which will be on Android as well. How should we take these into account? Their model is quite different from the typical event model: they provide the list of all touches on a single event, so values like event.clientX do not exist. There are also three lists with different rules for the target nodes, some of which keep sending events to the original target, and this can break the model if there are dynamic changes on the page.
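
For comparison, a WebKit-style handler reads the per-event lists rather than per-point event coordinates, roughly like this:

<pre>
// WebKit's model for comparison: one touchmove event carries lists of
// Touch objects instead of per-point coordinates on the event itself.
document.addEventListener("touchmove", function (event) {
  // event.touches        - every touch currently on the screen
  // event.targetTouches  - touches that started on this node
  // event.changedTouches - touches that changed in this event
  for (var i = 0; i < event.touches.length; i++) {
    var touch = event.touches[i];
    console.log(touch.identifier + ": " + touch.clientX + ", " + touch.clientY);
  }
}, false);
</pre>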

==== Touch gestures vs. touch input ====

Using gestures and touch input at the same time is an ambiguous interaction. For example, if a finger is moved from the bottom to the top of the screen, how do we know whether the desired action is to pan (scroll) the page or to have touch events sent about the movement? Is this up to the web page to decide? How can it switch modes, and which modes could work at the same time?
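
One conceivable model, sketched below, is the usual DOM pattern of cancelling the default action. This is purely hypothetical: nothing currently defines preventDefault() on a MozTouch event as suppressing panning, and handleRawTouch is a made-up page function.

<pre>
// Hypothetical sketch only: IF preventDefault() on MozTouchMove were
// defined to cancel the default pan/scroll gesture, a page could claim
// the raw touch stream like this. Not current behavior.
document.addEventListener("MozTouchMove", function (event) {
  event.preventDefault();   // assumed to suppress panning (undefined today)
  handleRawTouch(event);    // hypothetical page-defined handler
}, false);
</pre>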