Integrating Application Gestures
The Tablet and Touch Technology API provides a set of built-in application gestures that you can assign to functions in your application. Each application gesture has a distinct glyph and a suggested function. Application gestures can supplement the functionality offered by pen flicks, extending the range of functions that users can access quickly with the pen. While pen flicks are standardized across the system, application gestures can provide a set of application-specific commands. (For information about flicks, see Responding to Pen Flicks.)
Application gestures can help a Tablet PC user interact with your application more efficiently and easily. With gestures, users can issue commands without using the keyboard and without navigating menus. Several of the gestures, such as the curlicue, are similar to common proofreader's marks and therefore easy to learn.
When planning to implement gestures for your application, consider the following:
- Are there any gestures that are commonly used in the non-computing world that translate well for your product (for example, proofreader's marks for reviewing documents)?
- In your application, which functions primarily support the kinds of tasks that are important to mobile users? Would any of this functionality work well using gestures?
- Are the gestures you choose distinct from and compatible with each other?
- If your application both collects ink and recognizes gestures, consider how users will mix ink entry with gestures. Also consider how to handle other pointing activities, such as selection. See Distinguishing Gestures from Other Pen Input.
Technical Articles
- Mobile Ink Jots 6: Using Gestures in Tablet PC Applications
- RealTimeStylus Plug-in Sample
- Easily Write Custom Gesture Recognizers for Your Tablet PC Applications
Parts of a Gesture: Glyphs and Hot Points
Each application gesture has two elements:
- A glyph that defines the shape traced by the gesture.
- A hot point that defines the focal point for the gesture.

To review the glyphs and hot points for the standard application gestures, see Choosing a Set of Gestures to Support.

For many of the application gestures, the hot point is the starting point of the gesture. Your application can use the hot point to determine where the gesture's action should apply.
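In the managed API, for example, each Gesture object delivered with a Gesture event exposes a HotPoint property in ink-space coordinates. The following is a minimal sketch of converting that point to pixels for hit-testing, assuming you hold the Renderer of your ink collecting object:

```csharp
// Minimal sketch, managed API: convert a gesture's hot point from
// ink-space (HIMETRIC) coordinates to pixels before hit-testing the UI.
using System.Drawing;
using Microsoft.Ink;

static class HotPointHelper
{
    // 'renderer' is the Renderer of your ink collecting object;
    // 'g' is a Graphics object for the control the ink is drawn on.
    public static Point ToPixels(Gesture gesture, Renderer renderer, Graphics g)
    {
        Point hotPoint = gesture.HotPoint;        // focal point, in ink-space units
        renderer.InkSpaceToPixel(g, ref hotPoint);
        return hotPoint;                          // now in pixel coordinates
    }
}
```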
The following illustration shows a user drawing the curlicue gesture to delete selected text in a generic word processing application.
A caret gesture might be used to set the insertion point in a block of text and open Tablet PC Input Panel.
The first illustration below shows the use of a caret gesture to insert a handwritten word. The second illustration shows the word inserted as text.
Recognizing Gestures
To configure your application to recognize application gestures, enable the ones that you want to process. You can also write a custom gesture recognizer if you need to respond to a different set of glyphs from those supplied by Windows. (See Scott Swigart's article, Easily Write Custom Gesture Recognizers for Your Tablet PC Applications.)
There are two ways that applications process gestures: through ink collecting objects such as InkCollector, and through the real-time stylus interfaces (see Accessing and Manipulating Stylus Input). Using a gesture recognizer involves three general steps:
- Choosing a set of gestures to support
- Enabling gesture recognition in your ink area
- Processing notifications or stylus data sent by the system when the user performs a gesture
If you are implementing your pen interface using an ink collecting object (such as InkCollector), you use the gesture interfaces provided with that object. With the real-time stylus interfaces, you interact with the gesture recognizer through the GestureRecognizer class.
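For instance, the following is a minimal sketch of the ink collecting object path on a Windows Forms form; the choice of gestures and the handler logic are illustrative only:

```csharp
using System;
using System.Windows.Forms;
using Microsoft.Ink;

public class GestureForm : Form
{
    private readonly InkCollector collector;

    public GestureForm()
    {
        collector = new InkCollector(this.Handle);

        // Step 1: choose the gestures to support.
        collector.SetGestureStatus(ApplicationGesture.AllGestures, false); // start from a clean slate
        collector.SetGestureStatus(ApplicationGesture.Scratchout, true);
        collector.SetGestureStatus(ApplicationGesture.Curlicue, true);

        // Step 2: enable gesture recognition alongside inking.
        // (CollectionMode must be set before the collector is enabled.)
        collector.CollectionMode = CollectionMode.InkAndGesture;

        // Step 3: process the Gesture event.
        collector.Gesture += new InkCollectorGestureEventHandler(OnGesture);
        collector.Enabled = true;
    }

    private void OnGesture(object sender, InkCollectorGestureEventArgs e)
    {
        // e.Gestures is ordered by confidence; index 0 is the best match.
        switch (e.Gestures[0].Id)
        {
            case ApplicationGesture.Scratchout:
                // Delete the targeted content here.
                break;
            case ApplicationGesture.Curlicue:
                // Delete the selected word here.
                break;
            default:
                e.Cancel = true;   // treat unrecognized strokes as ordinary ink
                break;
        }
    }
}
```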
The following table summarizes the interfaces involved in the above steps for both the real-time stylus interfaces and the ink collecting object interfaces. For the ink collecting objects, links are provided to the InkCollector interfaces. The same member names apply to InkOverlay and other objects.
Implementation Step | Real-Time Stylus Interfaces | Ink Collecting Object Interfaces |
---|---|---|
Choose a set of gestures to support | On the GestureRecognizer object, enable the application gestures of interest by calling EnableGestures. | Call the SetGestureStatus method to indicate interest in a gesture. |
Enable gesture recognition | Set the Enabled and MaxStrokeCount properties, and add the GestureRecognizer object to the synchronous plug-in collection for your RealTimeStylus object. | Set the CollectionMode for the ink collecting object to either the InkAndGesture or GestureOnly mode. |
Process the gesture events | Gesture results are passed through the real-time stylus chain in the CustomStylusDataAdded notification. Indicate your interest in that notification, and then, in your implementation, test for the presence of gesture data by comparing the custom data's identifier with GestureRecognitionDataGuid (in managed code). | Respond to Gesture events. |
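The real-time stylus path can be sketched the same way. The following is a minimal sketch, assuming the managed Microsoft.StylusInput interfaces (including the GestureRecognitionData and GestureAlternate plug-in data types); the form itself acts as the plug-in that consumes the recognizer's output, and only the gesture-related members do any work:

```csharp
using System;
using System.Windows.Forms;
using Microsoft.Ink;
using Microsoft.StylusInput;
using Microsoft.StylusInput.PluginData;

public class RtsGestureForm : Form, IStylusSyncPlugin
{
    private readonly RealTimeStylus stylus;
    private readonly GestureRecognizer recognizer;

    public RtsGestureForm()
    {
        stylus = new RealTimeStylus(this);
        recognizer = new GestureRecognizer();

        // Choose the gestures to support.
        recognizer.EnableGestures(new[] { ApplicationGesture.Scratchout, ApplicationGesture.Curlicue });

        // Enable recognition: the recognizer runs as a synchronous plug-in,
        // followed by this form, which consumes the recognizer's output.
        recognizer.Enabled = true;
        recognizer.MaxStrokeCount = 1;
        stylus.SyncPluginCollection.Add(recognizer);
        stylus.SyncPluginCollection.Add(this);
        stylus.Enabled = true;
    }

    // Receive only custom stylus data (the recognizer's results).
    public DataInterestMask DataInterest
    {
        get { return DataInterestMask.CustomStylusDataAdded; }
    }

    public void CustomStylusDataAdded(RealTimeStylus sender, CustomStylusData data)
    {
        if (data.CustomDataId == GestureRecognizer.GestureRecognitionDataGuid)
        {
            GestureRecognitionData results = (GestureRecognitionData)data.Data;
            GestureAlternate best = results[0];   // alternates ordered by confidence
            // Respond to best.Id (an ApplicationGesture) here.
        }
    }

    // Remaining IStylusSyncPlugin members are not needed in this sketch.
    public void StylusDown(RealTimeStylus sender, StylusDownData data) { }
    public void StylusUp(RealTimeStylus sender, StylusUpData data) { }
    public void Packets(RealTimeStylus sender, PacketsData data) { }
    public void InAirPackets(RealTimeStylus sender, InAirPacketsData data) { }
    public void StylusButtonDown(RealTimeStylus sender, StylusButtonDownData data) { }
    public void StylusButtonUp(RealTimeStylus sender, StylusButtonUpData data) { }
    public void StylusInRange(RealTimeStylus sender, StylusInRangeData data) { }
    public void StylusOutOfRange(RealTimeStylus sender, StylusOutOfRangeData data) { }
    public void SystemGesture(RealTimeStylus sender, SystemGestureData data) { }
    public void TabletAdded(RealTimeStylus sender, TabletAddedData data) { }
    public void TabletRemoved(RealTimeStylus sender, TabletRemovedData data) { }
    public void RealTimeStylusEnabled(RealTimeStylus sender, RealTimeStylusEnabledData data) { }
    public void RealTimeStylusDisabled(RealTimeStylus sender, RealTimeStylusDisabledData data) { }
    public void Error(RealTimeStylus sender, ErrorData data) { }
}
```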
Creating a Custom Gesture Recognizer
You can use a custom gesture recognizer independently or in addition to the GestureRecognizer object. Create your custom gesture recognizer as an IStylusSyncPlugin, have it create CustomStylusData, and, in managed code, add the plug-in to the same StylusSyncPluginCollection as the GestureRecognizer. Your plug-in implementation should combine the gesture notifications from both recognizers into a single stream of notifications for the application to consume.
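A minimal sketch of such a plug-in follows. The stroke analysis, threshold, and GUID below are all hypothetical; the fixed points are that the plug-in implements IStylusSyncPlugin and publishes its result with RealTimeStylus.AddCustomStylusDataToQueue, so that downstream plug-ins receive it through CustomStylusDataAdded:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.StylusInput;
using Microsoft.StylusInput.PluginData;

// Hypothetical custom recognizer: detects one long left-to-right stroke
// and publishes it as custom stylus data under its own GUID.
public class StrikethroughRecognizer : IStylusSyncPlugin
{
    // Example value only; generate your own GUID for your gesture data.
    public static readonly Guid StrikethroughDataGuid =
        new Guid("8e9f6e4a-3c1d-4b6a-9f2e-5d7c1a2b3c4d");

    private readonly List<int> xs = new List<int>();

    public DataInterestMask DataInterest
    {
        get { return DataInterestMask.StylusDown | DataInterestMask.Packets | DataInterestMask.StylusUp; }
    }

    public void StylusDown(RealTimeStylus sender, StylusDownData data)
    {
        xs.Clear();   // start tracking a new stroke
    }

    public void Packets(RealTimeStylus sender, PacketsData data)
    {
        // The first value in each packet is the x coordinate.
        for (int i = 0; i < data.Count; i += data.PacketPropertyCount)
            xs.Add(data[i]);
    }

    public void StylusUp(RealTimeStylus sender, StylusUpData data)
    {
        // Hypothetical test: a rightward stroke longer than a threshold
        // (HIMETRIC units; tune per application).
        if (xs.Count > 1 && xs[xs.Count - 1] - xs[0] > 2000)
        {
            // Publish the result; downstream plug-ins see it in CustomStylusDataAdded.
            sender.AddCustomStylusDataToQueue(StylusQueue.Output, StrikethroughDataGuid, null);
        }
    }

    // Remaining IStylusSyncPlugin members are empty stubs.
    public void InAirPackets(RealTimeStylus sender, InAirPacketsData data) { }
    public void StylusButtonDown(RealTimeStylus sender, StylusButtonDownData data) { }
    public void StylusButtonUp(RealTimeStylus sender, StylusButtonUpData data) { }
    public void StylusInRange(RealTimeStylus sender, StylusInRangeData data) { }
    public void StylusOutOfRange(RealTimeStylus sender, StylusOutOfRangeData data) { }
    public void SystemGesture(RealTimeStylus sender, SystemGestureData data) { }
    public void TabletAdded(RealTimeStylus sender, TabletAddedData data) { }
    public void TabletRemoved(RealTimeStylus sender, TabletRemovedData data) { }
    public void CustomStylusDataAdded(RealTimeStylus sender, CustomStylusData data) { }
    public void RealTimeStylusEnabled(RealTimeStylus sender, RealTimeStylusEnabledData data) { }
    public void RealTimeStylusDisabled(RealTimeStylus sender, RealTimeStylusDisabledData data) { }
    public void Error(RealTimeStylus sender, ErrorData data) { }
}
```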