What This Page Covers
This page gives the developer a basic overview of the Wacom Feel™ Multi-Touch API framework, the Multi-Touch programming model, and details on writing a Multi-Touch application.
The Wacom Feel™ Multi-Touch API is supported on Windows operating systems; see the Overview page for details on which versions are supported.
All touch-enabled Wacom tablets supported by the Wacom Tablet Driver are supported by this API. To find out if the Wacom Tablet Driver supports your tablet, please go to: https://www.wacom.com/support/product-support/drivers
Note that you should never use product names or model numbers as conditions in your multi-touch applications. Rather, you should always program to tablet touch properties as described in this document.
Programming Framework, SDK, Languages
Applications using the Feel™ Multi-Touch API can be written for any programming framework that supports the import of Windows DLL modules. These modules ship with and are installed by the Wacom Tablet Driver installation package.
The Multi-Touch SDK is a combination of modules installed with the tablet driver, plus header and CPP files (included with the sample code) used to interface to those modules. When building a C++ Multi-Touch application, it is necessary to include the following files:
WacomMultiTouch.h / WacomMultiTouch.cpp – Defines supported Multi-Touch constructs; provides linking to the Multi-Touch libraries that came with the Wacom Tablet Driver.
WacomMultiTouchTypes.h – Various types and definitions used by the Multi-Touch API.
To simplify Multi-Touch app development, all Multi-Touch function access can be implemented using the dynamic functions initialized when the WacomMT.dll module is loaded. All demo code uses this technique (there is no static linking to a Multi-Touch library). The files WacomMultiTouch.cpp and WacomMultiTouch.h support this dynamic loading technique. The big advantage of dynamic loading is that you need to build only one version of the app, and it will work in both 32-bit and 64-bit user applications.
For those developers using Windows .NET, Multi-Touch support is built into the WacomMTDN.dll. The source code for this library can be found in our Multi-Touch Windows C#/.NET sample application. As with Multi-Touch C++ support, the tablet driver must be installed to use this SDK.
To assist the developer, a complete description of the Wacom-supported Multi-Touch API, with C-style definitions, can be found in the Wacom Feel™ Multi-Touch Reference.
Multi-Touch API Programming Concepts
Basic programming model
To use the API, an application need only include a few files from the Feel™ Multi-Touch SDK. There is a header file that defines all the data structures and data types (WacomMultiTouchTypes.h); a header file that defines all entry points of the API (WacomMultiTouch.h); and an implementation file that sets up the dynamic loading of the library (WacomMultiTouch.cpp). The dynamic library (WacomMT.dll) is installed with the driver and should never be redistributed with an application.
The basic use model for processing touch point data is as follows:
Initialize the API DLL. If the DLL does not load, confirm that you have the latest Wacom Tablet Driver installed.
Establish a device attach callback. This will immediately be called for each multi-touch device on the system, and will also be called if the user attaches a new multi-touch device. The callback includes a capabilities structure for each device; using this data, the application can determine what types of data are supported and their ranges.
Establish a device detach callback. This will be called when a device is detached from the system. Use this callback to close any open data callbacks the application has created.
Assuming a device is attached that supports touch point data, the application would establish a touch point data callback. This can be done using the direct callback methods or the window message loop callback method. This callback is triggered whenever a finger is detected on the device. Finger data includes position and state information. The application should either quickly process the data or buffer the data to a worker thread for further processing.
When the application is done processing touch data it should close all the callbacks and call the quit function.
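The basic use model above can be sketched as follows. The function names (WacomMTInitialize, WacomMTRegisterAttachCallback, WacomMTQuit) follow the API described in this document, but here they are declared as minimal local stubs so the sketch compiles on its own; a real application would instead include WacomMultiTouch.h and WacomMultiTouch.cpp and let these entry points come from WacomMT.dll. The capability structure and error codes are simplified stand-ins.

```cpp
#include <cassert>

// --- Minimal stand-ins for the real SDK declarations (illustrative only). ---
// In a real application these come from WacomMultiTouch.h / WacomMT.dll.
enum WacomMTError { WMTErrorSuccess = 0, WMTErrorDriverNotFound = 2 };
struct WacomMTCapability { int DeviceID; };  // the real struct has many more fields

static WacomMTError WacomMTInitialize(int /*apiVersion*/) { return WMTErrorSuccess; } // stub
static void WacomMTQuit() {}                                                          // stub

typedef void (*AttachCallback)(WacomMTCapability deviceInfo, void *userData);

// Stub: the real driver invokes the callback immediately for each device
// already present, and again whenever a new device is attached.
static void WacomMTRegisterAttachCallback(AttachCallback cb, void *userData)
{
   WacomMTCapability fakeDevice = { 1 };
   cb(fakeDevice, userData);
}

static int gAttachedDeviceID = -1;

static void OnDeviceAttach(WacomMTCapability deviceInfo, void * /*userData*/)
{
   // Cache capabilities here so later data callbacks can interpret coordinates.
   gAttachedDeviceID = deviceInfo.DeviceID;
}

int RunTouchSession()
{
   // 1. Initialize the API; fail gracefully if the driver/DLL is missing.
   //    (A real build passes WACOM_MULTI_TOUCH_API_VERSION from the header.)
   if (WacomMTInitialize(4) != WMTErrorSuccess)
      return -1;

   // 2. Register the attach callback; it fires for devices already present.
   WacomMTRegisterAttachCallback(OnDeviceAttach, nullptr);

   // 3. A real app would also register detach and finger-data callbacks here,
   //    then run its message loop.

   // 4. Close any open callbacks and shut the library down.
   WacomMTQuit();
   return gAttachedDeviceID;
}
```

The sketch only shows the ordering of the calls; everything interesting in a real application happens inside the callbacks registered at step 2 and 3.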
This API can be used alongside the Wintab pen interface API. The touch API does not process or inform about the pen proximity. If the application needs to process both pen and touch data then the application should monitor both APIs.
Overview of a Multi-Touch Application
This section will provide detailed information about the recommended high-level structure of a Multi-Touch application.
In order to initialize the library, you must call WacomMTInitialize. WacomMTInitialize will attempt to load the Wacom Feel™ Multi-Touch library. If this succeeds, the entry points for all the other API functions are loaded. A call is then made to the library with the version of the API requested by the application; if the library supports this version, it will report success. If any part of this function fails, the function returns an error code and the library will not work.
If the library was loaded, your application must release any open resources before exiting. A call to WacomMTQuit when your application exits will take care of the cleanup for the WacomMT library.
Attached Device Array Function
Your application will need to know how many multi-touch devices are connected; use WacomMTGetAttachedDeviceIDs for this purpose. The function takes an array of integers and returns the number of attached devices. To use it, the application should first call it with no buffer and no size; this returns the number of devices attached. The application then allocates a buffer and calls the function again. Note that the number of devices could change between these two calls. The second call again returns the number of attached devices and fills the buffer with as many of the device IDs as it will hold. These device IDs are then used to get the device capabilities. In general, it is recommended that the application use the attach callback function instead.
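The two-call sizing pattern can be sketched like this. The stub below stands in for the real WacomMTGetAttachedDeviceIDs so the example runs on its own, and it assumes the buffer size is given in bytes; only the calling pattern is the point.

```cpp
#include <cstddef>
#include <vector>

// Stub standing in for the real WacomMTGetAttachedDeviceIDs (illustrative
// only). Like the real call, it returns the number of attached devices and
// fills the buffer with as many device IDs as the buffer will hold.
static int WacomMTGetAttachedDeviceIDs(int *deviceArray, size_t bufferSizeBytes)
{
   static const int kDevices[] = { 7, 12 };
   const int count = 2;
   if (deviceArray)
   {
      size_t slots = bufferSizeBytes / sizeof(int);
      for (size_t i = 0; i < slots && i < (size_t)count; ++i)
         deviceArray[i] = kDevices[i];
   }
   return count;
}

std::vector<int> GetAttachedDevices()
{
   // First call with no buffer: just learn how many devices are attached.
   int count = WacomMTGetAttachedDeviceIDs(nullptr, 0);

   // Second call with a sized buffer. The device count could have changed in
   // between, so trust the returned value, capped by what the buffer holds.
   std::vector<int> ids(count);
   int newCount = WacomMTGetAttachedDeviceIDs(ids.data(), ids.size() * sizeof(int));
   ids.resize(newCount < count ? newCount : count);
   return ids;
}
```

The final resize handles both directions of change: a device removed between the calls shrinks the result, and a device added is simply not reported until the next enumeration (or attach callback).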
Device Capabilities Function
Your application will probably want to know the device capabilities of the attached multi-touch devices. The WacomMTGetDeviceCapabilities function returns the device capabilities for a given device ID. If the application used WacomMTGetAttachedDeviceIDs to get the IDs, call this function with each ID to get a capabilities structure. The application should cache these capabilities structures so it knows the scaling of the data provided by each device. An application could instead call this function while processing data, but that would cause a lot of unneeded processing.
Attach and Detach Device Callback Functions
In order to receive tablet attach and detach events, use WacomMTRegisterAttachCallback and WacomMTRegisterDetachCallback to register your callback functions.
The attach callback function is the best choice for receiving notification of changes in device information. Not only does this function provide a capabilities structure for each currently attached device, but it will also monitor the system and inform the application when a new device is added. Along with the detach callback, this is the recommended method for keeping track of multi-touch devices. The detach callback function will monitor and inform the application when a device is detached. While, strictly speaking, an application could ignore a detach, it is recommended that the application close any open callbacks (data read callbacks are covered in the Data Read Functions section); this allows the application to respond to the attach function more quickly.
Data Read Functions
There are three different types of data that may be provided by any given device. They are finger, blob, and raw.
Finger data is the most common and the most useful. All multi-touch devices support at least finger data. This data tracks a touch point for each finger placed on the device. While a finger is down it is assigned an ID. The ID represents a point of contact, not a specific finger. For example, if the user places their index finger on the device it may be assigned ID 1. When they place their middle finger down, it is assigned ID 2; the ring finger follows and is assigned ID 3. The user then lifts the middle finger, ending the lifetime of the contact labeled 2. If that finger returns to the device it may be given ID 2 again, since that ID is no longer in use as a contact, but it could just as well be given ID 4. If both the middle and ring fingers are removed and the ring finger then returns to the device, its ID will not be 3 as before, but might be 2 or perhaps 5.
If the device supports blob data, it will be indicated in the capabilities structure. Blob data represents an outline of the object or objects on the surface of the device. There are two types of blobs: the primary blob and void blobs. A blob is made from one primary, and zero or more voids. The best way to think of a void is to imagine a doughnut. The outline of the outside of the doughnut is the primary blob and the hole is the void blob. Blobs are used when you want to perform an action on an irregular shape like a mask or smudge.
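The doughnut analogy can be expressed as a containment test: a point is covered by the blob if it lies inside the primary outline but outside the void. The sketch below uses circles instead of the API's point outlines purely to keep the geometry simple; it illustrates the concept, not the API's blob representation.

```cpp
// A toy "doughnut" blob: one circular primary outline with one circular void
// inside it. The real API delivers blobs as outlines of points; circles are
// used here only so the containment test stays short.
struct Circle { float cx, cy, r; };

struct Blob
{
   Circle primary;   // outer outline of the contact area
   Circle voidHole;  // hole inside the primary (zero or more in general)
};

// A point is covered by the blob if it is inside the primary outline but not
// inside the void, i.e. on the dough, not in the hole.
bool BlobContains(const Blob &b, float x, float y)
{
   auto inside = [](const Circle &c, float px, float py) {
      float dx = px - c.cx, dy = py - c.cy;
      return dx * dx + dy * dy <= c.r * c.r;
   };
   return inside(b.primary, x, y) && !inside(b.voidHole, x, y);
}
```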
If the device supports raw data it will be indicated in the capabilities structure. Raw data consists of the raw values from each point on the device.
There are two primary ways that an application can be notified of touch data availability: by callback, or by window handle.
The data callback function sets up a callback for a specific device. The callback is defined by the device for which it is created and by the hit rectangle provided for that callback. This means that an application can create regions on a device. Most of the time an application will not need to segment the device; that is why the default (NULL) region is the entire touch device. The units for the hit rectangle vary depending on the device. For an opaque device like an Intuos or Bamboo tablet, the units are normalized (i.e. the origin of the device is 0,0 and the width and height are each 1.0). For an integrated device like a TabletPC, the hit rectangle is in pixels. The device location in the virtual monitor world is known by the driver; these values will change when the monitor location or size changes, so your application should monitor the system device change message.
For Windows, the API has extra data collection functions. These functions take an HWND, and instead of invoking a callback, the data is sent to the application on the message thread of the provided window. The provided window is also used to create the hit rectangle for integrated devices. Take special care with these functions: improper data handling will interfere with regular message handling.
When the data function is created the application indicates what it is planning to do with the data by specifying what processing mode should be used (see WacomMTProcessingMode). There are three processing modes that an application can use:
If the application is registered in observer mode (see WMTProcessingModeObserver in WacomMTProcessingMode), the data is sent both to the application and the system. This means that if the application performs actions based on the data, the driver/OS does not know and will also act upon the data. An observer will get all data not captured by a consumer, even when the application is in the background.
If an application is registered in consumer mode (see WMTProcessingModeNone in WacomMTProcessingMode), the data is sent only to that application and to any observers whose hitrect intersects with the consumer's. The first consumer registered for that data is the only one to receive it; the OS does not get the data. Simply put, for a consumer to get data, the application must be the frontmost application (i.e. it has keyboard focus).
Finally, if an application has registered a hitrect in passthrough mode (see WMTProcessingModePassThrough in WacomMTProcessingMode), all data going to that hitrect is passed through to the OS. This is useful for carving out a section of your multi-touch application that can still respond to user touch input for program control (such as button presses, or an application tool palette). Note that passthrough mode can co-exist with either the consumer or observer modes described above. Touch data going to a passthrough hitrect region will – in addition to being processed by the OS – be processed by an observer, but not processed by a consumer. Thus if your application is primarily a consumer, and you register a small dialog as a passthrough hitrect, you can interact with and move that dialog around without sending data to the consumer below.
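The three modes can be summarized as a routing decision for a single registered region. This is a deliberately simplified single-region model, not SDK code; the enum names are stand-ins echoing WacomMTProcessingMode.

```cpp
// Illustrative routing for the three processing modes described above.
enum Mode { Observer, Consumer, PassThrough };

struct Delivery { bool toApp; bool toOS; };

// Decide where a touch sample landing in a registered hitrect goes.
// 'frontmost' matters only for consumers, which require keyboard focus.
Delivery Route(Mode regionMode, bool frontmost)
{
   switch (regionMode)
   {
   case Consumer:
      // A consumer takes the data exclusively, but only while frontmost;
      // otherwise the OS handles the touch normally.
      return frontmost ? Delivery{ true, false } : Delivery{ false, true };
   case Observer:
      // Observers see the data even in the background; the OS also acts on it.
      return Delivery{ true, true };
   case PassThrough:
   default:
      // Passthrough regions hand everything back to the OS for normal handling
      // (observers overlapping the region would still see the data).
      return Delivery{ false, true };
   }
}
```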
If an application needs to stop receiving data, it should unregister its callback functions or unregister its window handle.
Application Design Considerations
The Wacom Feel™ Multi-Touch API is a cross-platform data format that works on both Mac and Windows. It is designed to work with all supported touch-enabled Wacom tablets. When designing software applications using the Wacom Feel™ Multi-Touch API, you can be confident that your features will remain consistent and compatible with future products from Wacom. Here are some design considerations:
Help your user keep their creative focus on their work. Seek to maximize fluidity of gestures for pan, rotate, zoom in-out, etc. within the user workspace.
Maximize the touchability of the user workspace. Use larger and well-spaced menus, controls, sliders, etc.
Enable the non-dominant hand to control menus while the dominant hand uses the pen. Allow menus to float or sit near the non-dominant hand for simple two-handed operations.
Bring controls to the pen and touch location rather than confining them to fixed menu strips.
Integrating Pen and Touch
One of the key benefits of using the Feel™ Multi-Touch API is fluid interaction with both pen and touch. Our recommendation is that the touch gestures you implement (multi-finger commands such as zoom, pan, etc.) should not be interrupted when the pen is in range of the tablet. This allows for a more natural workflow, as the user does not have to consciously raise the pen out of range before manipulating the application. Thus, arbitrating between pen and touch works best if you use pen pressure rather than the in-range status of the pen: as long as the pen is hovering, pressure is zero and the app can continue processing touch input; once the pen touches the tablet, pressure becomes greater than zero.
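The pressure-based arbitration described above reduces to a small predicate. This is a sketch of application-side logic, not an SDK call; it assumes pressure is reported as zero while the pen hovers and as a positive value on contact.

```cpp
// Arbitrate between pen and touch using pressure rather than in-range status:
// keep processing touch while the hovering pen reports zero pressure, and
// suspend touch only once the pen tip actually contacts the tablet.
bool ShouldProcessTouch(bool penInRange, float penPressure)
{
   (void)penInRange;            // deliberately ignored: hovering must not block touch
   return penPressure <= 0.0f;  // pressure > 0 means the pen is on the surface
}
```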
When using simultaneous pen and touch it is best to pay attention to the confidence bit. This is particularly useful for ignoring data that comes from the hand holding the pen.
The Multi-Touch Windows .NET and the Multi-Touch Windows C++ sample code demos provide examples of integrating both the Wacom Feel™ Multi-Touch API and the tablet pen API into the same application. Note that these sample code examples do not implement gesture support.
Display vs. Opaque Tablet Touch Behavior Considerations
The most important thing to consider when designing touch interactions for display tablets (Cintiq, etc.) and opaque tablets (Intuos, etc.) is that display tablets are direct touch devices and opaque tablets are indirect touch devices.
Indirect touch tablets operate like track pads. Touching an indirect device may move the cursor but does not automatically generate a click at that location. As you move your finger, the cursor moves relative to its last location, and you can reach any part of the desktop area even when multiple monitors are attached. In this model, a gesture affects the object that is selected or has focus. Because the tablet is relative and not mapped to any specific monitor, finger data is reported in normalized units as a percentage of the tablet surface.
Direct touch tablets move the cursor on a single monitor. There is a one-to-one relationship between where the user touches the tablet surface and where the touch input is mapped in the software environment. As soon as the user touches the tablet surface, a click is generated at that location. Here a gesture affects the object under the fingers. Finger data from a direct touch device is reported in the space of the monitor the tablet is mapped to.
The Feel™ Multi-Touch API provides information about the tablet providing touch data. An application should always check for and handle data from both direct tablets and indirect touch tablets. Using the information provided with the device capabilities, an application can translate the finger position data into the best units for the individual application.
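A sketch of such a translation, assuming a reduced capabilities view with just logical origin and extent fields (the real WacomMTCapability structure carries these among many others):

```cpp
// Hypothetical reduced view of the capability fields needed for coordinate
// translation; field names are illustrative stand-ins.
struct TouchExtents { float OriginX, OriginY, Width, Height; };

struct Point { float x, y; };

// Map a finger position into pixels. An indirect (opaque) tablet reports
// normalized 0..1 values, which we scale into a target rectangle of our
// choosing; a direct (display) tablet already reports coordinates in the
// monitor space described by its extents, so we pass them through.
Point ToPixels(bool isDirect, float fx, float fy, const TouchExtents &target)
{
   if (isDirect)
      return Point{ fx, fy };  // already in mapped-monitor pixels
   return Point{ target.OriginX + fx * target.Width,
                 target.OriginY + fy * target.Height };
}
```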
Overview - Introduction to Multi-Touch API
Reference - Complete API details
FAQs - Multi-Touch programming tips
Where to get help
If you have programming questions about the Multi-Touch API, please visit our Support page at: https://developer.wacom.com/developer-dashboard/support