Design of Gesture Recognition Libraries for the Web

If you are a web developer, you might have dealt with mouse and touch gestures. You might have written code to detect a swipe from a stream of native browser events like touchmove, or you might have used a dedicated software library for the task. If you were mad enough, you might even have thought about building a neat gesture recognition library yourself. I happen to be that kind of mad. To help us all, this article puts together what is needed to build such a gesture library.

To that end, I reviewed multiple gesture recognition libraries and analysed their design approaches and interfaces. This article lists the findings: how the libraries name the gestures, how they expose the recognised data to the developer, and how their interfaces bind gestures to elements. You can also find links to the libraries, common pitfalls in gesture detection, conflicts with the browsers, and ways to deal with them. There is a lot to cover, so hold on to your hats and enjoy.

Table of Contents

1. List of gesture recognition libraries
2. What is gesture recognition?
3. Definitions
4. Typology of gestures
5. Gesture event object properties
6. Gesture configuration parameters
7. Binding gestures to elements
8. Usability for end users
9. Usability for developers
10. Unit testing

1. List of gesture recognition libraries

I picked the following software libraries for the review using web search, my own knowledge, and pre-existing lists of gesture handling packages. The libs can be grouped into two main categories: general-purpose libs and component frameworks. Note that some of the libraries are under active development and their features are prone to change. Therefore I recorded the reviewed versions and their release months.

1.1. General-purpose gesture libraries

Hammer.js – Library for detecting mouse and touch gestures. Version 2.0.8 April 2016.
ZingTouch – Gesture detection library for modern web. Version 1.0.6 March 2018.
Interact.js – Drag-and-drop, resize, and multitouch interaction. Version 1.10.11 March 2021.
jQuery Finger – One-finger mouse and touch gestures for jQuery. Version 0.1.6 October 2016.
jquery.touch – Mouse and touch gestures for jQuery. Version 1.1.0 March 2017.
jQuery Touch Events – Mouse and touch gestures for jQuery. Version 2.0.3 April 2020.
AlloyFinger – Minimalistic touch event abstraction. Version 0.1.15 January 2019.

1.2. Component frameworks or plugins that do gesture recognition

nippleJS – Joystick simulation with mouse and touch. Version 0.9.1 March 2022.
React Tappable – Basic tap, hold, and pinch gestures for React. Version 1.0.4 April 2018.
React-Touch – Touch gesture bindings for React components. Version 0.4.4 July 2018.
Vue-Tap-and-Hold – Lightweight tap and hold events for Vue. Version 1.0.7 May 2018.
Vue Touch Events – Tap, swipe, and hold events for Vue 2. Version 3.2.2 May 2021.
Vue3 Touch Events – Tap, swipe, and hold for Vue 3. Version 4.1.0 May 2021.
YUI – A vast UI lib that has some gesture recognition. Version 3.18.1 October 2014.
Ext JS – Components and gesture recognition for Sencha. Version 7.5.1 February 2022.
Tapspace – A lib for zoomable UIs with gesture recognition. Version 1.6.0 October 2020.

1.3. Additional gesture libraries worth mentioning

These libs were not included in the review because they were either out of the review’s scope, too limited, or outdated in features. Yet I think they are worth mentioning.

Pressure.js – Handling of force touch and pointer pressure. Out of scope of the review. Nice examples.
Deeptissue.js – Touch abstraction for MSPoint and WebKit Touch. Outdated.
Hover on Touch – Alternative hover function with touch. Narrow purpose. Good concept.
RangeTouch – Improved touch experience for range input. Narrow purpose. Solid implementation.
Swiper – Touch-based swipe navigation. Out of scope of the review. Very popular.
React Point – Normalise mouse and touch events to simple point events in React. Narrow purpose.

2. What is gesture recognition?

What does gesture recognition in web browsers mean? When a user interacts with a web browser via mouse, touch, stylus, or any other pointer-like device, the browser emits a sequence of input events, usually about 30 to 60 times per second. These include mouse events like mousedown and touch events like touchmove. Each event carries raw data about the interaction, like the x and y coordinates and a reference to the element that triggered the event. The task of the gesture recogniser is to first capture the event sequence and then resolve whether the sequence forms a larger gesture, like a swipe, a rotation, or even something as simple as a click or a hold.
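
To make the task concrete, here is a minimal sketch of such a recogniser. It classifies a touch sequence as a left or right swipe based on distance and speed. The element variable and the thresholds are made-up placeholders, not taken from any reviewed lib.

var SWIPE_MIN_DISTANCE = 30 // px, an assumed threshold
var SWIPE_MIN_SPEED = 0.2 // px/ms, an assumed threshold
var startX = null
var startTime = null
element.addEventListener('touchstart', function (ev) {
  startX = ev.touches[0].clientX
  startTime = Date.now()
})
element.addEventListener('touchend', function (ev) {
  var dx = ev.changedTouches[0].clientX - startX
  var dt = Math.max(1, Date.now() - startTime)
  // Recognise a swipe only if the touch travelled far and fast enough.
  if (Math.abs(dx) >= SWIPE_MIN_DISTANCE && Math.abs(dx) / dt >= SWIPE_MIN_SPEED) {
    console.log(dx > 0 ? 'swiperight' : 'swipeleft')
  }
})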

Gesture recognition is not trivial, especially on the web, where input methods and environments vary from a single mouse on a large desktop monitor to multitouch on a small mobile screen. The needs vary too, from building a simple slider component on a page to providing a general-purpose gesture library for all web developers. Therefore, as we saw above, multiple libraries and tools exist.

3. Definitions

For clarity, let us first distinguish the following general concepts before diving into the individual gestures.

A software library, or lib for short, is a modular piece of code intended to act as a building block for other software. Also known as a software package or module.
A host application, or app for short, is the application for end users that builds upon software libraries such as a gesture recognition library.
An application programming interface, or API for short, is provided by the lib and acts as a vocabulary for the host app developer to access the features of the library.

The end user, or user for short, is the human who interacts with the application via gestures, using a mouse, touch, or another pointing device.
The app developer, or dev for short, is the coder who integrates the gesture library into the host app.
The gesture library designer, or lib designer for short, is the one who has designed and built, or will design and build, the gesture recognition library.

A native browser event or just browser event is a raw, low-level event created and emitted by the web browser when the user interacts on the page with a mouse, touch surface, or other pointing device.
A gesture event is an abstract event constructed and emitted by the gesture library after it recognises a possible gesture in a sequence of browser events.

The target element or target is the HTML element on which the user performs the gesture. To avoid confusion, note that the target element emits events that propagate further and therefore make the target element a source of events for the gesture recognisers. From the perspective of the user it is a target and from the perspective of the recogniser it is a source.

The gesture triggers are the conditions on the target element that lead to the recognition of the gesture.
The gesture effect is what the gesture causes on the web page. For example, the user moves a finger up on a mobile web page and the page scrolls down. The finger movement is the trigger and the page scroll is the effect. The distinction is important because some of the libs handle only the triggers and leave the effect to the developer, while others do both.

Also, because some gestures capture rotation angles and directions, let ccw be our abbreviation for counter-clockwise rotation. Let ccw+x be the ccw rotation when it is relative to the positive x axis. Respectively, let cw+x denote clockwise rotation from the positive x axis. The unit of rotation, usually degrees or radians, will be noted next to this direction abbreviation.

4. Typology of gestures

Next, I have listed the names that the libs use for the gestures, like “tap” or “hold”. Naming things is a matter of convention, and a good API often follows conventions to make things easy for the dev to understand. The list below groups the names by gesture type and briefly describes each gesture. It notes the words used in the libs’ documentation and the exact names of the emitted events.

Most of the libs have a basic event for each gesture, like “rotate”, and then additional events for its life cycle, like “rotatestart”, “rotatemove”, and “rotateend”. I documented both kinds. Also, some of the libs treat gestures as abilities to be given to elements, and that is reflected in the naming. Below you can find names for the abilities too.

4.1. Browser gestures

For reference, let us begin by listing the native, low-level UI events emitted by modern web browsers:

  • mousedown, mouseenter, mouseleave, mousemove, mouseout, mouseover, mouseup
  • click: fires after mousedown+mouseup [mdn]
  • dblclick: fires after two rapid click events [mdn]
  • auxclick: fires after clicking any mouse button except the primary button.
  • contextmenu: usually right mouse button or context menu key [mdn]
  • touchstart, touchmove, touchend, touchcancel [w3c]
  • pointerover, pointerenter, pointerdown, pointermove, pointerup, pointercancel, pointerout, pointerleave, gotpointercapture, lostpointercapture [mdn]
  • wheel: default action is to scroll or zoom the document [w3c]

There are also non-standard events implemented by some browsers:

  • gesturestart, gesturechange, gestureend in Safari [mdn]
  • MSGestureStart, MSGestureEnd, MSGestureTap, MSGestureHold, MSGestureChange, MSInertiaStart in Internet Explorer [mdn]
  • MouseScrollEvent in Firefox [mdn]
  • mousewheel in various browsers [mdn]

4.2. Tap

The tap gesture is the most basic gesture there is. Tap is commonly used for pointing, selection, and activation.

Common names: tap, click, single tap

Event names:
tap : the name used by practically every reviewed lib
singletap : jquery-touch-events, extjs
singleTap : alloyfinger

Life cycle events:
tapstart, tapmove, tapend : jquery-touch-events
down, move, up, cancel : interactjs
tapcancel : extjs

Ability names: tappable : react-tappable

Some libs, like jQuery Touch Events and Ext JS, define both tap and singletap gestures. How are they different? If the user performs multiple taps in sequence, each tap immediately fires a tap event. In contrast, the singletap event fires only when there was a single tap. Therefore the singletap event cannot fire immediately, because it needs to be distinguished from a double tap.
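
A rough sketch of the difference, assuming a hypothetical 300 ms double-tap window and ignoring movement and press-duration limits for brevity; element stands for the target element:

var DOUBLE_TAP_WINDOW = 300 // ms, an assumed threshold
var singleTapTimer = null
element.addEventListener('pointerup', function (ev) {
  console.log('tap') // tap fires immediately on every completed tap
  if (singleTapTimer !== null) {
    // A second tap arrived within the window: a double tap, not a single tap.
    clearTimeout(singleTapTimer)
    singleTapTimer = null
    console.log('doubletap')
  } else {
    // Wait to see whether a second tap follows before firing singletap.
    singleTapTimer = setTimeout(function () {
      singleTapTimer = null
      console.log('singletap')
    }, DOUBLE_TAP_WINDOW)
  }
})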

4.3. Double tap

The double tap gesture consists of two consecutive taps happening quickly in the same place. The double tap is commonly used for activation and zooming in.

Common names: double tap, double click

Event names:
doubletap : hammer, interactjs, jquery.finger, jquery-touch-events, extjs
doubleTap : jquery.touch, alloyfinger

4.4. Hold

The hold gesture happens when a user keeps the pointer still and pressed for at least a moment. The hold is commonly used for activation, selection, and secondary actions. Sometimes it is used to emulate mouse hover on devices without a mouse. The names for the gesture are plenty. It is amazing how one simple gesture can have so many different names.

Common names: hold, press, long tap, long press, tap hold, tap and hold

Event names:
hold : interactjs, vue-tap-and-hold, vue3-touch-events
press : hammer, jquery.finger, react-tappable, vue3-touch-events
tapAndHold : jquery.touch
taphold : jquery-touch-events, hover-on-touch
longtap : vue-touch-events, vue3-touch-events
longTap : alloyfinger
touchhold : vue-touch-events
longpress : extjs

Additional life cycle events:
pressUp : hammer
release : vue3-touch-events

Ability names:
holdable : react-touch

The difference between a long tap and a hold, according to Vue-Touch-Events, is that the long tap triggers immediately after the mouse button is released or the finger is lifted, whereas the hold triggers immediately after the required time duration has passed and thus before the release.
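
In code, the timing distinction could look like the following sketch; the threshold and the element variable are placeholders, not from any reviewed lib:

var HOLD_DURATION = 400 // ms, an assumed threshold
var downTime = null
var holdTimer = null
element.addEventListener('pointerdown', function (ev) {
  downTime = Date.now()
  holdTimer = setTimeout(function () {
    console.log('hold') // hold fires while the pointer is still pressed
  }, HOLD_DURATION)
})
element.addEventListener('pointerup', function (ev) {
  clearTimeout(holdTimer)
  if (Date.now() - downTime >= HOLD_DURATION) {
    console.log('longtap') // longtap fires only at release
  }
})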

4.5. Move

The move gesture happens when the user presses the pointer down, moves the pointer, and then lifts it. The gesture is commonly used for drag and drop, scrolling pages, panning viewports, and reordering items; anything that attempts to move things from one place to another. The move gesture also has many names, often representing the intention behind the gesture.

Common names: drag, pan, move, translate, slide

Event names:
drag : jquery.finger, vue3-touch-events, extjs
pan : hammer, zingtouch
pressMove : alloyfinger
moved : vue-touch-events
gesturemove : yui

Directional events:
panleft, panright, panup, pandown : hammer

Life cycle events:
panstart, panmove, panend, pancancel : hammer
dragstart, dragmove, draginertiastart, dragend : interactjs
dragstart, drag, dragend, dragcancel : extjs
dragStart, dragEnd, dragEnter, dragOver, dragLeave, drop : jquery.touch
start, moving, moved, end : vue-touch-events
press, drag.once, drag, release : vue3-touch-events
gesturemovestart, gesturemove, gesturemoveend : yui

Ability names:
draggable : interactjs, react-touch

Related concepts:
dropzone onto which draggables can be dropped : interactjs

4.6. Swipe

The swipe gesture could be described as a quick move gesture in a certain direction. The swipe is commonly used in mobile devices to navigate between views, to remove or hide items, such as notifications, and to reveal items and buttons by sliding away a panel covering them.

Common names: swipe, flick

Event names:
swipe : hammer, zingtouch, alloyfinger, vue-touch-events, vue3-touch-events, extjs
flick : jquery.finger, yui

Life cycle events:
swipeend : jquery-touch-events
swipestart, swipe, swipecancel : extjs

Directional events:
swipeleft, swiperight, swipeup, swipedown : hammer, jquery-touch-events
swipeUp, swipeDown, swipeLeft, swipeRight : jquery.touch, react-touch
swipe.up, swipe.down, swipe.left, swipe.right : vue-touch-events, vue3-touch-events

Ability names:
swipeable : react-touch

Additionally, there is the edge swipe, a subclass of the swipe gesture: a swipe that begins near the edge of the target element. Ext JS implements the gesture with the following event names: edgeswipe, edgeswipestart, edgeswipeend, edgeswipecancel.

4.7. Pinch

The pinch gesture is a multifinger gesture where the pointers are moving either towards or away from each other. The pinch is used for scaling and zooming in and out. In geographical map applications the pinch is often combined with the move and rotation gestures to explore the map.

Common names: pinch zoom, expand, scale, distance

Event names:
pinch : hammer, alloyfinger, zingtouch, extjs
expand : zingtouch
distance : zingtouch

Directional events:
pinchin, pinchout : hammer

Life cycle events:
pinchstart, pinchmove, pinchend, pinchcancel : hammer
pinchstart, pinch, pinchend, pinchcancel : extjs

4.8. Rotate

Like the pinch, the rotate gesture requires two or more pointers. During the gesture the pointers move around their mean point, constructing a rotation angle to apply to elements.

The gesture is used to orient images or rotate the viewport, although it is a bit tedious for human hands to perform rotations over 90 degrees this way. On the bright side, the libs seem to agree on the name.

Common names: rotate, rotation

Event names:
rotate : hammer, zingtouch, alloyfinger, extjs

Life cycle events:
rotatestart, rotatemove, rotateend, rotatecancel : hammer
rotatestart, rotate, rotateend, rotatecancel : extjs

4.9. Transform pinch

Transform pinch is a gesture that combines the move, pinch, and rotate gestures. For some libs, like react-tappable, the transform gesture is called the pinch gesture. The transform is used for moving and rotating objects and for navigation. Due to its general nature, the gesture can act as the base class for other gestures on the software level.

Common names: gesture, transform, multipoint, pinch, multitouch

Event names:
gesture : zingtouch, tapspace
pinch : react-tappable

Life cycle events:
gesturestart, gesturemove, gestureend : interactjs, tapspace
multipointStart, multipointEnd : alloyfinger
pinchStart, pinchMove, pinchEnd : react-tappable

Ability names:
gesturable : interactjs
pinchable : react-tappable
touchable : tapspace

4.10. Resize

The resize gesture is a move gesture that the user performs near the edge of the target element with the intention to resize the element. In addition to resizing widgets, it is used for modifying area selections and cropping images. It can be debated whether the resize is truly a gesture or just a way to utilise the move gesture. Yet, one of the libs implements it.

Life cycle events:
resizestart, resizemove, resizeinertiastart, resizeend : interactjs

Ability names:
resizable : interactjs

4.11. Hover

Although the hover is impossible to execute on touch devices, on the desktop the mouse cursor can be moved over an element without clicking. It can be used for previewing and for making the click target stand out before the user decides to click. Some of the libs consider this a gesture and recognise it accordingly. Developers of touch-targeted apps often simulate or replace the mouse hover with the hold gesture.

Common names: hover, over

Event names:
rollover : vue3-touch-events

4.12. Wheel

The wheel gesture happens when the user rolls the mouse wheel. It is used for scrolling the page and zooming in and out. The wheel can be rolled up and down and sometimes also left and right.

Many laptops simulate the wheel roll by detecting two fingers moving on the touchpad or by dedicating an area near the touchpad edge for scrolling.

The native wheel events behave differently depending on the device, the operating system and the browser. Especially the speed and direction can vary. Therefore some of the gesture libs have decided to abstract it as a gesture.

Common names: wheel, mouse wheel, scroll

Event names:
wheel : tapspace
mousewheel : yui
scrollstart, scrollend : jquery-touch-events

Ability names:
wheelable : tapspace

4.13. Orientation change

The orientation change happens when a mobile device is tilted on its side or back upright. The gesture is used for readjusting the viewport size and refreshing the layout.

Common names: tilt, screen rotate, orientation change

Event names:
orientationchange : jquery-touch-events

4.14. Direction

The direction is a special gesture implemented by NippleJS, a lib that simulates a joystick on touch screens. It is similar to the drag gesture, although it captures only the direction relative to the starting point of the drag. The direction gesture comes in two flavours, dir and plain. The former divides the joystick angle into four directions: up, down, left, and right. The latter divides the angle into two halves: up and down, or left and right.

Event names:
dir, plain : nipplejs

Life cycle events:
start, move, end : nipplejs

4.15. Custom gesture

Some of the libs provide tools for the developer to construct customised gestures. For example, Hammer.js gives an example of quadrupletap:

var quadrupleTap = new Hammer.Tap({ event: 'quadrupletap', taps: 4 })
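
If I read the Hammer.js docs right, the custom recogniser is then added to a manager instance before its event can be listened to:

var mc = new Hammer.Manager(myElement)
mc.add(quadrupleTap)
mc.on('quadrupletap', function (ev) {
  console.log('four rapid taps recognised')
})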

ZingTouch helps the developer in a similar fashion. Here is a gesture that requires two fingers to be pressed simultaneously for no more than 500 ms.

var twoFingerTap = new ZingTouch.Tap({ numInputs: 2, maxDelay: 500 })

(Figure: a two-finger tap gesture.)

(Figure: a drag gesture along a path.)

React-touch provides an exceptional feature for creating custom-shaped drag gestures. We could even classify them as a separate gesture type, the path gesture. To define a path gesture, the developer prepares a sequence of directions that together describe the rough path the user must perform to trigger the gesture. Such a gesture can, for example, resemble a full circle or the letters L or U, and be used for typing or to replace passwords and PIN codes. See CustomGesture by react-touch for details.
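
To illustrate the idea, and not how react-touch implements it, here is a hypothetical sketch that reduces pointer movement to a sequence of cardinal directions and compares it to a template path resembling the letter L. For brevity, the sketch tracks movement regardless of button state.

var TEMPLATE = ['down', 'right'] // the rough path of the letter L
var trace = []
var toDirection = function (dx, dy) {
  if (Math.abs(dx) > Math.abs(dy)) return dx > 0 ? 'right' : 'left'
  return dy > 0 ? 'down' : 'up'
}
element.addEventListener('pointermove', function (ev) {
  var dir = toDirection(ev.movementX, ev.movementY)
  if (trace[trace.length - 1] !== dir) trace.push(dir) // record turns only
})
element.addEventListener('pointerup', function () {
  if (trace.join() === TEMPLATE.join()) console.log('pathgesture')
  trace = [] // reset for the next attempt
})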

5. Gesture event object properties

An event object is a collection of key-value properties.

When the lib recognises a gesture, it triggers a gesture event and calls all registered handler functions. The functions are passed an event object. The event object has properties that describe the gesture, for example how long it lasted.

The set of available properties and their names depend on the lib. Below we first go through some general properties the libs provide and then special properties unique to each gesture type. The property names are written in italics and their types and units within parentheses ().

5.1. General properties of gesture event objects

Gesture event objects often carry similar properties, especially within the same library. Here we can see various general properties the reviewed libs deliver in their event objects.

  • type of the gesture
    • hammer: type (string)
  • movement of the gesture center relative to the previous event
    • hammer: deltaX, deltaY, deltaTime
    • interactjs: dx, dy (px)
  • page x and y coordinates of the starting event
    • interactjs: x0, y0 (px)
  • viewport x and y coordinates of the starting event
    • interactjs: clientX0, clientY0 (px)
  • total distance traveled during the gesture
    • hammer: distance
  • average angle traveled during the gesture
    • hammer: angle
  • current velocity
    • hammer: velocityX, velocityY
    • interactjs: velocityX, velocityY
  • highest reached velocity during the gesture
    • hammer: velocity
  • current speed measure of the pointer. In physics, velocity has a direction while speed does not.
    • interactjs: speed
  • general direction of the gesture i.e. up, down, left, right
    • hammer: direction, offsetDirection
  • scale and rotation during the gesture
    • hammer: scale, rotation
  • center position of the gesture
    • hammer: center
  • original native browser event
    • hammer: srcEvent
    • jquery.finger: originalEvent
    • jquery.touch: event
  • target element that received the browser event
    • hammer: target
    • interactjs: target
    • jquery.touch: element
    • jquery-touch-events: target
    • tapspace: element
  • interaction-defining object created at the binding
    • interactjs: interactable
    • tapspace: item
  • type of pointer used
    • hammer: pointerType
    • interactjs: pointerType
    • extjs: pointerType
  • event life cycle type i.e. start, move, end, cancel
    • hammer: eventType (string), isFirst (boolean), isLast (boolean)
  • list of current pointers of the gesture
    • hammer: pointers
  • list of new, changed, or removed pointers
    • hammer: changedPointers
  • identity of the pointer if there is only one.
    • interactjs: pointerId
  • reference to the preventDefault method of the browser event
    • hammer: preventDefault
  • a method to stop propagation
    • react-tappable: stopPropagation
  • the gesture the event is a part of
    • interactjs: interaction
  • event creation time
    • interactjs: timeStamp
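
As a taste of how these properties appear in handler code, here is a small sketch reading a few of the Hammer.js properties listed above:

var hammertime = new Hammer(myElement)
hammertime.on('panmove', function (ev) {
  // deltaX and deltaY describe the movement of the gesture center,
  // center its current position on the viewport.
  console.log(ev.type, ev.deltaX, ev.deltaY, ev.center.x, ev.center.y)
})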

5.2. Tap and hold events

Here are event object properties found specific to the tap and hold gestures.

  • x and y relative to the document
    • jquery.touch: x, y
  • x and y relative to the element
    • jquery.touch: ex, ey
    • jquery-touch-events: offset
  • x and y relative to the screen
    • jquery-touch-events: position
  • duration of the tap
    • zingtouch: interval (ms)
    • interactjs: dt
    • jquery.touch: duration
    • jquery-touch-events: duration (ms)
  • starting point coordinates
    • jquery-touch-events: startOffset, startPosition
  • starting points of tap touches
    • tapspace: points
  • ending point coordinates
    • jquery-touch-events: endOffset, endPosition
  • timestamp at end
    • jquery-touch-events: endTime

5.3. Double tap events

In addition to the tap event properties above, some double tap events had the following properties.

  • event object for each participated tap
    • jquery-touch-events: firstTap, secondTap
  • time between the taps
    • interactjs: dt
    • jquery-touch-events: interval (ms)

5.4. Move, drag, and pan events

The following event object properties were found specific to the move gesture.

  • current x and y coordinates on the page
    • jquery.finger: x, y
    • jquery.touch: x, y
  • current x and y coordinates on the element
    • jquery.touch: ex, ey
  • change in x and y coordinates since the last event
    • jquery.finger: dx, dy
    • react-touch: dx, dy
  • absolute change in x and y coordinates since the last event; can be used as a simple speed measure
    • jquery.finger: adx, ady
  • change in x and y since the gesture start
    • react-touch: translateX, translateY
  • the point where the gesture started
    • jquery.touch: start
  • total travel distance during the gesture
    • zingtouch: distanceFromOrigin
    • jquery.touch: distance
  • average dragging speed
    • jquery.touch: velocity
  • average move direction during the gesture
    • zingtouch: directionFromOrigin (degrees)
    • jquery.finger: orientation, direction
  • direction relative to the previous event
    • zingtouch: currentDirection (degrees ccw+x)
  • interaction with a drop area
    • interactjs: dragEnter, dragLeave

5.5. Swipe and flick events

The following event object properties were found specific to the swipe gesture.

  • distance swiped
    • jquery.touch: distance (px)
  • distance along x or y axis
    • jquery-touch-events: xAmount, yAmount
  • duration of the swipe
    • jquery.touch: duration (ms)
    • jquery-touch-events: duration (ms)
  • average angle of the gesture
    • zingtouch: currentDirection (degrees ccw+x)
  • general direction of the gesture like up, down, left, right
    • jquery-touch-events: direction
  • speed of the gesture
    • zingtouch: velocity
    • jquery.touch: velocity (px/ms)
  • starting point and ending point
    • jquery-touch-events: startEvent, endEvent

5.6. Rotate events

The following event object properties were found specific to the rotate gesture.

  • angle between touch points at the beginning
    • zingtouch: angle (degrees ccw+x)
  • change in angle during the gesture
    • zingtouch: distanceFromOrigin (degrees ccw)
  • change in angle relative to the previous event
    • zingtouch: distanceFromLast (degrees ccw)

5.7. Scaling pinch events

The following properties were found specific to the pinch gesture that scales but does not rotate or move.

  • list of touches
    • react-tappable: touches
  • distance between touch points
    • zingtouch: distance (px)
    • react-tappable: distance (px)
    • extjs: distance (px)
  • center point of the gesture
    • zingtouch: center
  • change in pinch distance relative to the previous event
    • zingtouch: change (px)
  • ratio of the current distance per the initial distance
    • extjs: scale

5.8. Transform pinch events

The transforming pinch has properties of scaling pinch but also some properties of rotation and move.

  • center point of the gesture
    • react-tappable: center
  • displacement of the center since the gesture start
    • react-tappable: displacement ({x,y})
  • velocity of displacement
    • react-tappable: displacementVelocity ({x,y})
  • total travel distance of the center during the gesture
    • tapspace: distance (px)
  • distance between the first two touches
    • interactjs: distance
    • react-tappable: distance (px)
  • angle of the line between the first two touches
    • interactjs: angle
    • react-tappable: angle (degrees)
  • change in angle since the previous event
    • interactjs: da
  • change in angle since the gesture start
    • react-tappable: rotation (degrees)
  • current angular velocity
    • react-tappable: rotationVelocity (degrees/ms)
  • scaling ratio of the distance since the gesture start
    • interactjs: scale
    • react-tappable: zoom
  • change in scaling ratio since the previous event
    • interactjs: ds
    • react-tappable: zoomVelocity
  • rectangle that encloses all touch points
    • interactjs: box
  • event creation time
    • react-tappable: time (ms since epoch)
  • how long the gesture has lasted
    • tapspace: duration (ms)

5.9. Resize events

The following properties were found specific to the resize events.

  • edges that were dragged
    • interactjs: edges
  • new size of the target after resize
    • interactjs: rect
  • size change in relation to the previous event
    • interactjs: deltaRect

6. Gesture configuration parameters

Most of the libs allow the developer to adjust the gesture recognition parameters. The parameters include things like number of required pointers or distance to travel before recognition.

Below I have mapped all the different parameters the libs implement, their property names, units, and built-in default values. I left out parameters not related to interaction, such as a namespace from which Vue lib reads its values.

The option names are written in italics. The default values and units are noted within parentheses ().

6.1. Tap options

The libs allow tap gesture detection to be configured in the following ways:

  • number of taps required to trigger:
    • hammer: taps
  • number of pointers required:
    • zingtouch: numInputs (1)
  • maximum time between taps:
    • hammer: interval
    • jquery.finger: doubleTapInterval
    • jquery.touch: tapDelay (250 ms)
  • maximum hold time for each tap:
    • hammer: time
    • zingtouch: maxDelay (300 ms)
    • vue-tap-and-hold: tapTime (200 ms)
  • maximum allowed movement during tap:
    • hammer: threshold
    • zingtouch: tolerance (10 px)
    • react-tappable: moveThreshold (100 px)
    • vue-touch-events: tapTolerance (10 px)
    • tapspace: tapMaxTravel (20 px)
  • disable click event and fire only tap:
    • jquery.touch: noClick
    • vue-touch-events: disableClick
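
Put together, configuring a tap recogniser could look like this Hammer.js sketch; the values are illustrative rather than the library defaults:

var doubleTap = new Hammer.Tap({
  event: 'doubletap',
  taps: 2, // number of taps required to trigger
  interval: 300, // max ms between the taps
  time: 250, // max ms each tap may stay pressed
  threshold: 9 // max px of movement allowed during a tap
})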

6.2. Hold and press options

The libs allow the hold gesture detection to be configured in the following ways.

  • minimum required press time:
    • jquery.finger: pressDuration
    • jquery.touch: tapAndHoldDelay (500 ms)
    • react-tappable: pressDelay (1000 ms)
    • react-touch: holdFor (1000 ms)
    • vue-tap-and-hold: holdTime (1000 ms)
    • vue-touch-events: touchHoldTolerance (400 ms), longTapTimeInterval (400 ms)
  • maximum allowed movement during hold:
    • react-tappable: pressMoveThreshold (5 px)
  • how often to re-emit a progress event during hold:
    • react-touch: updateEvery (250 ms)

6.3. Move, drag, and pan options

The libs allow the drag gesture detection to be configured in the following ways.

  • number of touch points required:
    • zingtouch: numInputs (1)
  • minimum distance required:
    • zingtouch: threshold (1 px)
    • jquery.finger: motionThreshold
    • jquery.touch: dragThreshold (10 px)
    • yui: minDistance (0 px)
  • minimum time required to recognise as a drag:
    • jquery.touch: dragDelay (200 ms)
    • yui: minTime (0 ms)
  • how often to emit events during the gesture:
    • vue3-touch-events: dragFrequency (100 ms)
  • restrict the elements that can be targets for a drop:
    • jquery.touch: dropFilter, dropFilterTraversal
  • simulate friction
    • interactjs: inertia
  • mouse button required to trigger the gesture:
    • yui: button
  • direction to which the dragged element is allowed to move:
    • interactjs: startAxis, lockAxis
  • enable automatic scroll of the container if the dragged element is dragged across an edge.
    • interactjs: autoScroll (boolean, false)
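
As a combined example, a draggable configuration in Interact.js could look roughly like the following sketch; the option names are from the list above and the values are illustrative:

interact('#card').draggable({
  inertia: true, // simulate friction after the pointer is released
  lockAxis: 'x', // allow horizontal movement only
  autoScroll: true, // scroll the container when dragged across an edge
  listeners: {
    move: function (ev) {
      console.log(ev.dx, ev.dy)
    }
  }
})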

6.4. Swipe and flick options

The libs allow the swipe gesture detection to be configured in the following ways.

  • number of touch points required
    • zingtouch: numInputs (1)
  • speed the gesture must reach to be a swipe
    • zingtouch: escapeVelocity (0.2 px/ms)
    • yui: minVelocity (0 px/ms)
  • distance the gesture must travel to be a swipe
    • jquery.touch: swipeThreshold (30 px)
    • react-touch: swipeDistance (100 px)
    • vue-touch-events: swipeTolerance (30 px)
    • yui: minDistance (10 px)
  • duration the gesture is allowed to remain still
    • zingtouch: maxRestTime (100 ms)
  • maximum duration of the gesture
    • jquery.finger: flickDuration
  • direction of the swipe
    • yui: axis (string, ‘x’ or ‘y’)

6.5. Rotate options

Some of the libraries provide options to configure rotate gesture detection.

  • a point on the rotated element that must stay fixed during rotation
    • tapspace: pivot (point)

6.6. Resize options

The following configuration options were found for resizing:

  • which areas act as handles for resizing
    • interactjs: edges
  • allow the target element to be resized beyond {0,0}
    • interactjs: invert (boolean)
  • maintain aspect ratio during resize
    • interactjs: aspectRatio (boolean)

6.7. Geometric restrictions

Some libraries also accept special parameters that restrict or modify the effect of the gesture. For example, Interact.js allows the developer to set modifiers that can limit how far the target element can be dragged, or set up a grid to which the draggable snaps. Also the resize options we saw above are a kind of geometric restriction on the gesture effect.

  • restriction pipeline:
    • interactjs: modifiers (array)
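
For example, restricting a draggable to its parent and snapping it to a grid could look roughly like this, if I read the Interact.js modifier API correctly:

interact('#card').draggable({
  modifiers: [
    // Keep the dragged element inside its parent element.
    interact.modifiers.restrictRect({ restriction: 'parent' }),
    // Snap the dragged element to a 30 px grid.
    interact.modifiers.snap({
      targets: [interact.snappers.grid({ x: 30, y: 30 })]
    })
  ]
})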

6.8. Enable or disable touch or mouse

The libraries that handle both mouse and touch events might have configuration settings to disable either one.

  • use touch events in the gesture recognition:
    • jquery.touch: useTouch
  • use mouse events in the gesture recognition:
    • jquery.touch: useMouse

6.9. Options to limit the number of concurrent gestures

Browsers and operating systems set a hard limit on how many cursors or touch points can be active at a time. In a similar fashion, the libs can limit how many concurrent gestures there can be. Interact.js allows this limit to be specified for the whole document and for single elements.

  • maximum number of concurrent gestures on the whole document
    • interactjs: max (integer : infinity)
  • maximum number of concurrent gestures on an element
    • interactjs: maxPerElement (integer : 1)

Modern browsers expose the maximum supported number of touch points via navigator.maxTouchPoints.

6.10. Options for delegating the input events

Because mouse events, unlike touch events, do not target the element on which the gesture started, some libs provide options that make the lib listen to the whole document in addition to the interactive element. In other words, the handling of gestures is delegated to the document element.

In addition to the whole document, the options can target specific elements for delegation. By default, the events bubble in the DOM towards the document root. Some libs provide tools to redirect the events to other elements not along the default bubbling path. I could not find information on when it is necessary to set up a custom bubbling path, although I suspect it lets the developer set a large area to listen for the events of a single gesture.

The following gesture options were found for event delegation:

  • interactjs: allowFrom, ignoreFrom (Element)
  • jquery.touch: trackDocument, trackDocumentNormalize, delegateSelector (string)
  • yui: root (Element)

6.11. Default coordinate system

The browser events present their x and y coordinates in multiple reference frames. The page coordinates are relative to the whole document, the screen coordinates are relative to the display, and the client coordinates are relative to the browser viewport. In the context of browsers, the positive y axis points down.

Most of the libs give the coordinates in multiple systems. Some others allow the developer to configure which system is used for the x and y coordinates:

  • jquery.touch: coordinates (string : ‘page’)

6.12. Prevention of browser default behaviour

Most of the libs offer some way to enable or disable the native preventDefault calls via their configuration options. The options can affect all the gestures handled by the lib, a single gesture type, or a single gesture listener.

  • interactjs: preventDefault (string : ‘always’, ‘never’, or ‘auto’)
  • jquery.touch: preventDefault (object)
  • vue-touch-events: prevent (boolean)
  • yui: preventDefault
  • tapspace: preventDefault (boolean : true)

6.13. Options for stopping event propagation

Most of the libs offer some way for the developer to stop the browser events that participated in a gesture from propagating further in the DOM. A decade or two ago there might have been a performance gain from doing so, especially if the document was very deep. Nowadays the gain is negligible and stopping the event can cause more problems than it solves. For details, see The Dangers of Stopping Event Propagation by Philip Walton at CSS-Tricks.

The libs could expose a stopPropagation method in their gesture events. However, because a gesture is often a sequence of browser events, it might not be possible to stop them all via the method. Therefore the stopping is done via an option in advance.

  • vue-touch-events: stop (boolean)

6.14. Enabling passive touch event listeners

Some of the libs allow control over whether the event listeners are registered as passive. A passive listener promises the browser that it will not call the native preventDefault in its handler function. That allows the browser to optimise performance for better responsiveness. See addEventListener at MDN for details about passive listeners.

  • vue-touch-events: disablePassive (boolean)
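
With the plain browser API, a passive listener is registered like below. The browser can then start scrolling immediately without waiting to see whether the handler prevents the default.

element.addEventListener('touchmove', function (ev) {
  // Read-only handling: calling ev.preventDefault() here would be ignored.
  console.log(ev.touches[0].clientX, ev.touches[0].clientY)
}, { passive: true })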

6.15. Manual or automatic recognition start

Some of the libs allow the developer to configure whether the recognition starts immediately after binding or manually later. The manual option gives the developer better control over when the element becomes interactive. Consider an example by Interact.js where an element becomes draggable only after a successful double tap.

  • interactjs: manualStart (boolean), enabled (boolean)

6.16. Inertia and animation options

Most of the libs leave the effect of the gesture completely to the developer to decide. However, some libraries take care of the effect too, by providing prebuilt interaction and allowing its configuration. An example of such interaction is moving the element with the drag gesture. One aspect of the moving is the simulation of inertia or friction by animation. Options found in the review for animation include:

  • amount of friction i.e. how quickly the element slows down
    • interactjs: resistance
  • stopping speed at the end of the animation
    • interactjs: endSpeed (px/s)
  • enable the animation only after the gesture has ended
    • interactjs: endOnly
  • is the user able to interact with the element during the animation
    • interactjs: allowResume
  • duration of the animation after the gesture
    • interactjs: smoothEndDuration

6.17. Styling to apply during the gesture

Some of the libs, especially the component framework plugins, allow the developer to set a class name, for example ‘active’, that is automatically applied to the interactive element at the beginning of the gesture and removed at the end. In a similar fashion, Interact.js styles the mouse cursor for drag and resize gestures.

  • interactjs: styleCursor (boolean : true), cursorChecker (function)
  • vue-touch-events: touchClass (string : '')

6.18. Configuring gestures via data attributes

While most of the libraries take the configuration options in an object, jQuery Touch Events takes the approach of reading them from the data attributes of the element.

<div id="myElement" data-xthreshold="500"></div>

7. Binding gestures to elements

Developers need to “bind” together the element, the gesture, and the effect.

After knowing which types of gestures to expect, the next task for the developer is to decide where and how to react to them. The act of connecting the gesture, the element, and the functionality together is called binding.

There seem to be many ways of binding, although the major one is to listen for events and register event handlers that trigger when the event happens. An alternative way for the gesture libs is to provide a wrapper method for each gesture type so that the method does the binding internally. Additionally, in the component framework context, the component markup can directly contain instructions on which events to listen for and how to react.

All in all, the ways of binding are plenty and each lib implements them in its own flavour. Below, we go through them and give examples of each.

7.1. Binding via event listeners

Hammer.js wraps the target element and begins to listen to the browser events and emit gesture events. The gesture events can then be listened to and handled via the wrapper.

var hammertime = new Hammer(myElement, myOptions)
hammertime.on('pan', handlerFn)

ZingTouch takes the approach where the developer first defines a region to be listened for the browser events. Then the developer binds the region to a certain gesture, a handler function, and an element within that region. The handler function then, when triggered, executes the effect of the gesture, like rotating the element. The region approach might feel superfluous but it helps with the mouse gestures for reasons we will discuss in the section “Mouse and touch conflict”.

var listenerArea = document.querySelector('.container')
var inputArea = document.querySelector('.inputarea')
var rotateTarget = document.querySelector('#rotatetarget')
var region = new ZingTouch.Region(listenerArea)
region.bind(inputArea, 'rotate', function (ev) {
  rotateTarget.style.transform = 'rotate(' + ev.angle + 'deg)';
})

The jQuery plugin libraries jQuery Touch Events, jQuery Finger, and jQuery.touch work almost identically. They make jQuery objects emit gesture events that the developer can listen to with the .on(name, handler) and .off(name) methods. A notable difference is that jquery.touch requires a .touch(opts) call before gesture events begin to emit. I did not test whether the other libs emit the events from the beginning or only after the first .on(name, handler) registration.

$('#myElement').on('tap', handlerFn)

Both jQuery Finger and jQuery.touch allow an additional selector parameter to support the event delegation pattern. Following the pattern, an element higher in the DOM tree listens for events and includes them in the recognition only if the selected element was the original target. For example, in the following snippet, the container is listened to in order to recognise a swipe gesture on the bar element.

$('#container').on('swipeLeft', '.bar', handlerFn)

Both jQuery Touch Events and jQuery.touch implement method wrappers as an alternative way to bind elements to handler functions.

$('#myElement').tap(handlerFn)

AlloyFinger implements a similar event listener pattern to the jQuery plugins. It also allows defining the handler functions directly in the options object.

var af = new AlloyFinger(element, { tap: handlerFn, swipe: anotherFn })

7.2. Binding via component attributes

React and Vue based libraries instruct the developer to bind the handler functions directly in the component markup. For example, the following Vue-Touch-Events snippet binds a tap handler function to a span element via v-touch:tap attribute.

<span v-touch:tap="handlerFn">Tap me</span>

7.3. Binding via element abilities

Interact.js takes the approach of giving elements abilities, like “draggable”, “resizable”, and “gesturable”. The developer can either specify event handler functions in the parameters or listen to the draggable for events as below.

interact('#target').draggable(parameters).on('move', handlerFn)

Similar ability-giving terminology is used by React-touch. The following React code is adapted from the docs of the lib. The special Holdable element has the ability to detect hold gestures and react to them even during the hold.

import { Holdable } from 'react-touch';
<Holdable onHoldComplete={handleHold}>
  {({ holdProgress }) => <Button style={{opacity: holdProgress}} />}
</Holdable>
</Holdable>

Tapspace has chosen the approach where the target element is first wrapped as a touchable and then enabled for specific interaction. The following example makes the element in a Tapspace viewport draggable.

var touchable = new tapspace.Touchable(view, elem)
touchable.start({ translate: true })

7.4. Lazy binding

The libs that wrap the target element, like Hammer.js and jQuery Touch Events, could begin the gesture recognition and event emission right away. On the other hand, listening to browser events and detecting gestures is relatively laborious for the app, so why do the work if no one is listening?

For example, assume we want to listen to an element only for tap events, so we call wrapper.on('tap', handler). In this case, does the library need to recognise and emit drag and swipe gestures too? No, because no one is listening. The on(...) call is enough to signal to the lib what we want.

Therefore, in libs like YUI, the gesture recognition begins only after the first listener is registered. However, the logic for such an intelligent API can complicate the library code so much that some of the libs, like jQuery.touch and Tapspace, require the developer to specify beforehand which gestures to recognise and emit.
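
A hypothetical sketch of the lazy strategy: the wrapper attaches its browser listeners only when the first gesture handler is registered. All names here are made up for illustration.

var Wrapper = function (element) {
  this.element = element
  this.handlers = {}
  this.started = false
}
Wrapper.prototype.on = function (name, handler) {
  if (!this.handlers[name]) this.handlers[name] = []
  this.handlers[name].push(handler)
  if (!this.started) {
    // First registration: only now begin the laborious recognition work.
    this.started = true
    this.element.addEventListener('pointerdown', this.recognise.bind(this))
  }
}
Wrapper.prototype.recognise = function (ev) {
  // Placeholder for the actual recognition logic.
}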

8. Usability for end users

Gestural interaction is often a more intuitive way for users to interact than typing. However, it has its pitfalls in usability and accessibility. In this section we go through issues found during the review.

8.1. Problems with time-restricted gestures

The double click, the hold gesture, and other time-restricted gestures might be difficult for users to discover or for physically impaired users to execute. The environment matters a lot too; for example, a precise double click becomes harder in a shaking bus. See this UX Q&A about the problems of the long press.

It is an open question whether the gesture library should address these usability concerns by limiting its features or whether the responsibility should be left to app developers. For example, if the lib intentionally did not implement hold and double tap gestures, the developers might be pushed to reach their design goals using only the single tap.

8.2. Problems with space-restricted gestures

Tiny targets are especially hard to hit with touch.

The smaller the area, the longer it takes to hit it. This is known as Fitts’s law. The target should be large enough for the gesture to be performed quickly. Also, with touch devices the hit point is not as precisely visible to the user as with the mouse. It can be hard to hit a small target by touch even when there is plenty of time.

A problem with gestures that require multiple pointers is that when the touch points are too close to each other, they can become recognised as only one. If the area on which the gesture is to be performed is too small, it can be hard for the user to keep the pointers far enough apart.

On the other hand, dragging over long distances can be challenging. The user might need to readjust the hand position and drop the dragged item accidentally. For details, see Drag-n-Drop by Laubheimer at NN/g.

8.3. Handling more than two pointers

The human hand usually has five fingers, which can all touch the surface intentionally or accidentally.

Libraries that provide pinch or rotation gestures usually work well with one or two touch points. However, not every library behaves well if additional fingers touch the screen, intentionally or not.

For example, react-tappable mentions a known issue that any touch event with three or more touches is ignored.

Tapspace gestures are designed to work with any number of pointers. The lib uses a transform estimator named Nudged to handle the math required to extract move, scale, and rotation properties from any number of touch points. Disclaimer: I am the author of both Tapspace and Nudged.

8.4. Multitouch emulation with a mouse

Users with mouse are limited to one pointer and therefore cannot execute multitouch gestures such as rotation. Hammer.js provides a tool named Touch Emulator that emulates two touch points when a mouse button and a shift key are pressed simultaneously. Moving the mouse cursor away from the original location while keeping the shift pressed emulates a pinch, and moving the cursor radially around the original location emulates a rotation.

8.5. Blocking gestures

Some gestures must block the default browser behaviour to work correctly. Hammer.js calls them blocking gestures. Due to their blocking nature, the lib has the vertical drag, pinch and rotate gestures disabled by default. When enabled, they block the default page scroll behaviour on the target element. Users might be annoyed when they scroll a long page and suddenly there is an element that steals the gesture for its own purposes.

Map widgets sometimes solve the issue by requiring at least two pointers to interact with the map. However, it is possible that users scroll the page by using two or more fingers.

8.6. Responsiveness

Should a swipe gesture trigger a page flip or should the page already move during the gesture? To make the UI feel responsive, the latter is better. Also, the speed of the page flip should match the speed of the gesture. For further tips and details, see this guide for using gestures in material design by Google.

9. Usability for developers

Building an application is a complex task, and the handling of gestures is no different. In this section I have gathered findings on things both the developer and the lib designer must consider when writing gesture handling code.

9.1. Gesture end versus gesture cancel

Browsers emit touchend and touchcancel events. Some of the libs interpret these both as the end of the gesture, while others keep the concepts separate. For example, Hammer.js keeps the end and cancel separate, whereas ZingTouch and Tapspace combine them into a single end event. What is the difference?

The end event denotes the completion of a successful gesture and the cancel event denotes that the gesture was not successful. When a cancellation occurs, the recommended action for the app or the gesture lib is to undo any effects caused by the cancelled gesture. For example, if a drag is cancelled, the dragged element should return to its initial position. In contrast, if the drag ends, the dragged element should stay where the gesture moved it.

Another situation where the clear separation between end and cancel is necessary is when two gesture recognisers compete, for example a drag versus a swipe. The winner must somehow prevent the effects of the loser, which is possible if the loser can be gracefully cancelled. See this Ext JS gesture doc section for a detailed explanation.

In the Pointer Events API, a mobile browser fires a pointercancel event if the built-in gesture recognition classifies the gesture as page navigation. The winning gesture cancels the losing gestures. The cancelled “loser” gestures should revert their effects and let the winner continue.

9.2. Mouse and touch conflict

Most of the libs attempt to simplify gesture handling, especially by abstracting mouse and touch interaction behind the same interface. The two interaction methods differ in many ways, not only in the number of pointers.

9.2.1. Difference in targeting

The most problematic difference between native mouse and touch events, in my opinion, is that while touch events always target the element on which the touch began, mouse events do not. Mouse events target whichever element the cursor points at at the time. This dynamic targeting causes trouble for the libs that attempt to unify mouse and touch input behind the same API.

A lib developer could ignore the difference at first, only to find later that dragging elements with a mouse is painful for quick-handed users: if the user moves the cursor too quickly, the cursor escapes the dragged element and the drag ends immediately. For details about the targeting difference between mouse and touch, see this HTML5Rocks article.

The gesture libs solve the problem basically by collecting the mouse events from a larger area. In practice, this means handling the mouse events higher in the DOM hierarchy. The details differ between the libs. We will go through a few examples.

ZingTouch requires the developer to define a region in which all the gestures are happening. Given that the mouse stays within the region, the gesture can be recognised correctly even if the mouse travels outside the original target element.

The YUI library provides a standAlone option for its move gesture. The gesture has a life cycle of start, move, and end events. When standAlone is set to false on an element, the element triggers move and end events only if the start event happened on the same element.

Tapspace handles the difference by listening to touch events on the target but delegating the mouse events to a viewport element that is an ancestor of the target. In a fashion similar to the region of ZingTouch, the viewport handles all the mouse events. The viewport remembers the original element on which the mouse gesture began at mousedown. As long as the mouse button is pressed down, the viewport captures all mouse events, replaces their target with the original, and then re-emits the modified events as rat events. The rats have their target fixed to the original, so they are safe to digest like touch events by any gesture recogniser within the viewport.

Most modern browsers implement Element.setPointerCapture() to let developers deal with the issue. Some browsers also implement the Pointer Lock API for further control over pointer behaviour.
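
A minimal example of pointer capture: after the capture, the element receives the pointer events of that pointer even when the cursor escapes its boundaries.

element.addEventListener('pointerdown', function (ev) {
  element.setPointerCapture(ev.pointerId)
})
element.addEventListener('pointerup', function (ev) {
  // The capture is released implicitly at pointerup; shown here for clarity.
  element.releasePointerCapture(ev.pointerId)
})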

9.2.2. Delayed click on mobile

Mobile web browsers traditionally delay the click event on touch devices. The delay, commonly 300 milliseconds, lets the browser separate a click from a double tap, which mobile browsers use for zooming by default. The delay, although tiny, is enough to make the UI feel sluggish, as presented in this article about response times by Jakob Nielsen.

Many of the libs, for example Interact.js, implement the gestures so that there is no need for the delay. Interact.js calls this feature the fast click. For more details on the tap delay problem and how to solve it, see this Chrome developer blog article.

9.3. Conflicts with default behaviour

By default, many browsers recognise and react to user gestures. Common examples on mobile devices are scrolling the page with a move or swipe gesture and zooming into an element by double tapping it. The default behaviour is not always what the developer wants, and the defaults might not be easy to disable. Therefore the libs offer help, either via automation or instructions.

9.3.1. Touch Action

As recommended by Interact.js, set touch-action: none in CSS of the target element to prevent the default touch behaviour, like copying an image element or zooming into the element. See touch-action at MDN for details.

Hammer.js helps the developer by managing the touch action rule automatically depending on the gesture. The developer can configure the behaviour with the touchAction option.

9.3.2. User Select

As recommended by Interact.js, set user-select: none in CSS to disable text selection on the target element. The selection can be annoying, for example during dragging. However, the ability to copy text or portions of the page might be useful for the user, so think it through before disabling.
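
The same two rules can also be applied from a script, for example when a lib enables a gesture on an element. A small sketch; target stands for the target element:

// Equivalent to touch-action: none and user-select: none in a stylesheet.
target.style.touchAction = 'none'
target.style.userSelect = 'none'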

9.3.3. More tricks

There are a few more tricks to improve the gesture experience via CSS styling, including the non-standard -webkit-user-drag and -webkit-touch-callout rules. Hammer.js has put together a good list of CSS properties to consider; see Hammer.defaults.cssProps. See also Hammer.js Tips ’n Tricks.

9.4. Scroll and touchmove conflict

When a page is navigated by touch on a mobile device, some mobile browsers fire only the touchmove event while others fire both scroll and touchmove events, according to the jQuery Touch Events docs.

On a desktop or laptop device, a mouse wheel roll or a similar action on a touchpad fires a wheel event. The wheel event scrolls the web page by default, thus also causing a scroll event.

9.5. Handling of nested or duplicate listeners

In a DOM element hierarchy, a parent and a child can both listen to the same gesture events. Alternatively, an element can have multiple listeners for the same event. One aspect the lib designer should prepare for is how these situations are resolved.

For example, consider the situation where both the parent and the child listen for a pan gesture event. The event can freely propagate from the child to the parent. When the user executes the pan gesture, both handlers will be activated and both elements will move. To the user, it might look like only the parent is moving if the elements move in unison. Alternatively, it might look like the child is moving at double speed because it follows the parent in addition to the gesture.

Browsers provide two methods for preventing the same event from triggering unintended actions: event.preventDefault and event.stopPropagation. The latter, stopPropagation, is prone to cause unforeseen side effects that are hard to debug. A good treatment of the subject with real-world examples is available in The Dangers of Stopping Event Propagation by Philip Walton at CSS-Tricks. Due to the problems of stopPropagation, the gesture libraries prefer the preventDefault method.

Although preventDefault is primarily designed to prevent default browser behaviour, the libs can utilise it as a general way to signal that the event has been handled. In the jQuery Finger library, calling event.preventDefault will, in addition to preventing the browser default behaviour, prevent triggering the parent and any other listening ancestors.
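The same signalling pattern can be built with the standard event.defaultPrevented flag. A hedged sketch, assuming hypothetical parent and child elements that both pan on pointer movement:

child.addEventListener('pointermove', function (ev) {
  ev.preventDefault()  // mark the event as handled
  // ...pan the child here.
})
parent.addEventListener('pointermove', function (ev) {
  if (ev.defaultPrevented) {
    return  // a descendant already handled the event
  }
  // ...pan the parent here.
})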

Ext JS takes the approach where a handler can claim the gesture so that no other gesture will complete. See Ext JS Claiming a Gesture for details.

9.5. Where to place the browser event listeners

Is it better to place the listeners directly on the target element, or to let the events bubble up to listeners on its ancestors? The latter is called event delegation. The benefits of event delegation are debatable; see for example this Stack Overflow answer.
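To illustrate the delegation approach, here is a minimal sketch with a single listener on a container element; the container variable and the .gesture-item selector are hypothetical:

container.addEventListener('pointerdown', function (ev) {
  // One listener serves all current and future children.
  var item = ev.target.closest('.gesture-item')
  if (item) {
    // ...begin gesture recognition for this item.
  }
})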

The Ext JS lib has chosen the approach where the browser events bubble all the way up to the window object and the gesture recognition is done there. See Ext JS Gestures for details.

Overall, the question of where to place the listeners is connected to the issues discussed above in the sections “Handling of nested or duplicate listeners” and “Mouse and touch conflict”.

9.6. Browser compatibility

The reviewed libraries have different targets when it comes to browser compatibility. Fortunately, modern web browsers follow the standards set by the World Wide Web Consortium to a reassuring degree.

Hammer.js maintains a chart that shows which gestures are compatible with which browsers; see Hammer.js Browser/device support.

jQuery Touch Events provides a utility function isTouchCapable() to determine if the browser or device supports touch events.
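A similar check can be written without a library. A hedged vanilla sketch based on widely supported properties:

// True if the browser exposes touch events or reports touch points.
var isTouchCapable = ('ontouchstart' in window) ||
  (navigator.maxTouchPoints > 0)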

10. Unit testing

One way to test gesture recognition is to simulate hand movements.

Quality software requires unit tests to ensure everything works as intended. The tests become especially helpful during the maintenance phase of the software life cycle. A small bug fix can easily cause unexpected side effects after time has faded the quirky design details from the mind of the lib designer. Unit tests make the maintenance burden bearable.

Gesture recognition is somewhat harder to test than, for example, a math library. A math function has numeric inputs and outputs that are easy to write in code by hand. Gestures are dynamic input from physical users, a real-world phenomenon that can amount to hundreds of data points over a few hundred milliseconds for a single gesture. Also, as we have seen above, gestures can vary and be restricted in speed, direction, duration, and other ways. Due to these difficulties, the lib designers have come up with helpful tools.

10.1. Gesture simulation

Writing the sequences of data points by hand would be very tedious. Therefore building unit tests for gesture recognition requires some kind of tool to create a wide range of test gestures in a relatively lightweight manner. Fortunately such tools exist and the gesture libraries utilise them to a varying degree. The following list presents the tools spotted during the review.

The simulation tools provide an API to run virtual gestures in the browser. The gesture runner creates synthetic input events and emits them from a specific element at specific coordinates. The synthetic events work like real events and bubble up the DOM accordingly. The task for the lib's test designer is first to program the gestures and set up the recognisers, and then to see if the recogniser works as expected.
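To give an idea of what such a runner does under the hood, here is a hedged sketch that dispatches a synthetic tap using the standard PointerEvent constructor; the function name and arguments are made up for illustration:

function simulateTap (el, x, y) {
  var opts = { bubbles: true, clientX: x, clientY: y, pointerId: 1 }
  // Synthetic events bubble like real ones, although the browser
  // marks them with isTrusted: false.
  el.dispatchEvent(new PointerEvent('pointerdown', opts))
  el.dispatchEvent(new PointerEvent('pointerup', opts))
}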

Because the real DOM is involved, the tests need to run in a real browser. Programmers and continuous integration workflows are used to running tests on the command line. Fortunately, headless browser testing is a thing: headless browsers allow web apps and test suites to run in a virtual browser entirely from the command line. See Headless Chrome and Puppeteer for examples.

10.2. Triggering gestures manually

Some of the libs provide the ability to trigger gesture events manually. This differs from gesture simulation in that manual triggering involves neither the gesture recognition nor the raw browser events.

The following snippet displays the trigger method of jQuery Touch Events.

$('#myElement').trigger('tap')

While manual triggering can be helpful for testing apps that utilise gestures, it does not help the lib designer who needs to test that the recogniser itself works as intended. For the lib designer, gesture simulation is the way to go.

10.3. Tooling

Aside from programmatic unit tests, some of the reviewed libraries offer tooling for manual testing. For example, Interact.js has a dev-tools package that hints the developer about missing handlers and recommended CSS styles.

11. Conclusion

In this article we reviewed a bunch of gesture recognition libraries and analysed their features and approaches to gestures. While every library brings its own personality to the mix, some general patterns can be seen in both terminology and usage.

Not all gestures were named consistently across the libraries. Especially the hold and drag gestures had varying names. In contrast, the gestures tap, double tap, swipe, and rotate were named rather uniformly.

We saw three techniques for binding gesture recognition to elements: via event listeners, element abilities, and component attributes. The event listener technique is the most common way to bind. It also works under the hood of the other two techniques, although those two successfully simplify things for the developer.

We also saw many challenges the gesture recognition designer needs to solve to provide the best possible user experience. Especially the conflict between mouse and touch and the need to override default browser behaviour require deep understanding from both the lib designer and the app developer. Fortunately, the libraries provide lots of help in the form of features and instructions.

I hope these findings turn out to be helpful to you. Whether you are programming gestures, using a gesture lib, or building one, there are lots of details to consider and it is good to have them listed in one place. And whatever you do, if you were to learn only one thing from this article, please, do not come up with yet another name for the hold gesture!

References

Most of the references are provided as links throughout the article. Here we list additional references that act as the gold standard for user interaction in web browsers.

Gesture graphics are collected and derived from the following copyleft sources:

Contribute

I wish to keep the article in good shape, so if you spot inconsistencies or errors, please let me know: akseli.palen@gmail.com. Thank you.

Akseli Palén
Hi! I am a creative full-stack web developer and entrepreneur with a strong focus on building open source packages for the JavaScript community and helping people on Stack Overflow. I studied information technology at Tampere University and graduated with distinction in 2016. I wish to make it easier for people to communicate because it is the only thing that makes us one.
