
Writing drivers for common touch-screen interface hardware

By Kenneth G. Maxwell
Embedded Systems Design
(06/15/05, 03:20:00 PM EDT)

Although touch screens are rapidly becoming more popular, most developers have
never implemented one before. Here is a step-by-step design guide that leads you
through the hardware and software required to make touch screens work the first
time.

Touch screens are everywhere. Industrial control systems, consumer electronics, and even
medical devices are commonly equipped with touch-screen input. We use touch screens
every day without even thinking about it. You might get cash at your ATM, sign for a
package, check in for your flight, or look up a telephone number all by using a touch screen.

This article describes two relatively new CPU offerings that provide built-in support for
touch-screen input. I'll show you how to write a software driver that will configure,
calibrate, and continuously respond to touch-screen input using either of these
microprocessors. The result will be working code that you're free to download and use as a
baseline for your own efforts.

Good but not perfect


No input method is ideal for all situations, and touch screens are not a good fit in certain
applications and product types. I'd be remiss if I didn't at least summarize the good and
not-so-good factors associated with using a touch screen as a user-input method.

First, the good: touch screens have an undeniable coolness factor and can instantly make a
product more enjoyable to use. Touch screens are also intuitive. When your user wants
Option A, he reaches out and touches Option A. What could be more intuitive than that? A
two-year-old knows how to reach out and touch what he or she wants.

Finally, touch screens are attached to the system for which they provide input. The user
can't misplace the remote or the mouse. If he has the device in front of him, he has the
touch screen ready to start providing input.

The bad? Touch screens can also be misused and applied to products where they really
don't belong. I'm referring here to safety-critical devices, where a touch screen can be
dangerous if used without appropriate precautions. I'll summarize a couple of the most
obvious potential problems and leave you to the references to learn more.

Figure 1: Parallax (cross-section view)

The first problem is parallax, which is the difference between the apparent position of an
object on the screen and its actual active position on the touch panel. This problem is
illustrated in Figure 1. The best example I can think of is the typical drive-up ATM. The
machine doesn't raise and lower itself based on the height of your car, so if you pull up in a
tall SUV or truck you'll likely be looking down at the display from an elevated position. ATMs
are designed to protect the expensive parts (the display) from hostile users by placing
layers of strengthened glass between the user and the display.

The touch screen cannot be so protected. If it were, you would not be able to touch it,
would you? So the touch screen sits right on the surface, and the display sits several
layers of glass behind it. This creates a physical separation between the touch layer and the
display layer. If you're looking at the screen from an angle, the point where you press
the touch screen to make a selection may be physically quite a distance from the input
location expected by the user-interface software.

Humans adapt to this offset fairly quickly. You learn to mentally project the displayed
information to the surface of the touch screen and touch the correct location after a little
trial and error. ATM designers also account for this by using large buttons and separating
them liberally to help prevent accidental activation of the wrong button. Of course,
mistakenly pressing the wrong ATM button won't give you cancer or cause you to go blind.
A mistake like this on a medical control device could certainly do either if the system
designer doesn't build in extensive safety precautions.

Parallax is minimized by reducing the physical distance between the display and the touch
layers. There will always be some glass at the front of a CRT or covering an LCD. The best
case is for the touch-sensitive electronics to be built into this glass, and for this glass to be
as thin as possible. This reduces the separation between touch-input layer and display layer.
Handheld devices such as Palm organizers use this strategy because they don't have to be
nearly as concerned about mechanical strength or hostile users. The separation is minimized
(you feel like you're actually touching the graphical elements), and the accuracy is greatly
improved.

A second obvious problem is that during the period of time the user is touching the screen,
the object touching the screen (a stylus, a finger) is obscuring at least some small part of
the display from the user's eye. This is more of a factor in factory automation applications
where the user is more likely to use a finger or glove than a thin stylus, but even with a
stylus the act of selecting something on the screen momentarily obscures part of the
information you're presenting to the user. As an example, imagine you are displaying a
slider-type control for the user to adjust a value such as speed or volume, and you display
the chosen value numerically just to the left of the slider control. It works great until a left-
handed user operates your system and can't see the chosen value until he removes his
finger. You have to factor this sort of thing into your user-interface design.

Touch-screen technologies
Before we can begin writing a touch-screen driver we have to have some basic
understanding of how the hardware works. Many different touch technologies convert
pressure or touch at a screen location into meaningful numerical coordinates. Some of these
technologies include resistive, surface wave, infrared, and capacitive touch screens. For an
excellent overview of the available technologies, you can go to www.elotouch.com and
www.apollodisplays.com.

For this article I'll focus on resistive touch screens. Resistive touch screens are very popular,
and they're the type of touch screen you'll find integrated with many evaluation boards and
development kits. Resistive touch screens are popular mainly because they're inexpensive
and electrically straightforward to add to your system.

Resistive touch screens are so named because they are basically resistive voltage dividers.
They're composed of two resistive sheets separated by a very thin insulator usually in the
form of plastic micro-dots. When you touch the screen, you deform the two resistive sheets
just enough to make electrical contact between them. Your software can figure out where
the sheets are shorted together, and hence the touch location, by examining the voltage
produced across the voltage dividers.
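Since the panel is a voltage divider, the raw ADC count is (ideally) proportional to the touch position along the powered axis. Here's a minimal sketch of that relationship, ignoring calibration and assuming a hypothetical 10-bit converter:

```c
/* Ideal divider model: the raw count divides the converter's
   full-scale range in the same ratio that the touch point divides
   the powered plane. ADC_FULL_SCALE assumes a 10-bit converter. */
#define ADC_FULL_SCALE 1023

int raw_to_pixels(int raw, int screen_pixels)
{
    /* Integer scaling from raw counts to a pixel coordinate. */
    return (raw * screen_pixels) / ADC_FULL_SCALE;
}
```

In practice the usable raw range never reaches the converter limits, which is why real drivers calibrate against measured reference values instead of scaling against full-scale.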

There are several types of resistive touch screens with names like "four-wire," "five-wire,"
and "eight-wire." The additional wires improve accuracy and reduce temperature drift, but
the basic operation doesn't change. In the simplest four-wire design, one resistive layer, the
"x-axis" layer, is powered and the second layer, the "y-axis" layer is used as the pickup to
measure voltage corresponding to the x-axis position. The process is then reversed and the
y-axis layer is powered while the x-axis layer is used as the voltage pickup.

Figure 2: Touch-screen circuit diagram


Figure 2 shows the simple circuit equivalent of a resistive touch screen. Note that two
completely separate readings must be taken, x-axis position and y-axis position, and these
readings cannot be taken in parallel with a four- or five-wire resistive touch screen. Your
software must read one axis, then the other. It doesn't matter in which order you take
these readings.

To convert the voltage produced by a resistive touch screen into a number we need an
analog-to-digital converter (ADC). Until quite recently this ADC was nearly always external
to the main CPU. An example of such an ADC controller is the Burr-Brown ADS7843 or
ADS7846. This device is a 12-bit analog-to-digital converter with built-in logic to control a
touch screen by powering one plane and converting from the other. You could
accomplish the plane power switching using general-purpose I/O lines, but this
device offloads much of the work and also provides a means of generating a touch or pen-
down interrupt.

More recently several CPU manufacturers have started to bring the analog-to-digital
conversion block and specialized touch-screen control circuitry inside the main CPU. This
makes perfect sense when a CPU is intended for consumer devices, telematics, or other
markets where LCD displays and touch screens are prevalent.

The reference boards


For this article we'll examine two reference boards featuring CPUs that have integrated
touch-screen support. Both of these CPUs are based on the ARM processor architecture.

The first board is the Freescale MX9823ADS evaluation board, which features Freescale's
MC9328MX1 processor. These can be ordered directly from Freescale distributors. The
evaluation kit includes a QVGA (240x320) color LCD and touch screen.

The second board is based on the Sharp LH79524 ARM processor. The Sharp reference
boards can be ordered from LogicPD Corporation (www.logicpd.com), along with integrated
display and touch kits. Several interchangeable display kits are available at resolutions
ranging from QVGA to 800x600 pixel resolution.

Rather than listing the code for each driver within this article, I'll instead describe the design
and flow of the drivers and highlight the important bits. You can download the full source
code for each driver at ftp://ftp.embedded.com/pub/2005/07maxwell.

Taking a top level view, the software provides functions to accomplish these things:

1. Configure the controller hardware;
2. Determine if the screen is touched;
3. Acquire stable, debounced position measurements;
4. Calibrate the touch screen;
5. Send changes in touch status and position to the higher-level graphics software.

Now I'll walk you through each step in detail.
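The five functions above can be sketched as one pass through the driver's main flow. The routine names TouchConfigureHardware, CalibrateTouchScreen, TouchScan, and SendTouchMessage come from the drivers described here; the stub bodies and the ScreenTouched helper are simulated stand-ins for illustration:

```c
#include <stdbool.h>

/* Simulated stand-ins for the real hardware routines. */
static int messages_sent = 0;
static int sim_touches   = 3;   /* pretend the pen is down for 3 scans */

static void TouchConfigureHardware(void) { /* step 1: set up registers  */ }
static void CalibrateTouchScreen(void)   { /* step 4: gather cal points */ }
static bool ScreenTouched(void)          { return sim_touches-- > 0; }
static bool TouchScan(int *x, int *y)    { *x = 100; *y = 50; return true; }
static void SendTouchMessage(int x, int y) { (void)x; (void)y; messages_sent++; }

/* One pass through the driver flow: configure once, calibrate once,
   then report stable readings while the screen is touched. */
void TouchTaskOnce(void)
{
    TouchConfigureHardware();
    CalibrateTouchScreen();
    int x, y;
    while (ScreenTouched())          /* step 2: is the screen touched?  */
        if (TouchScan(&x, &y))       /* step 3: stable, debounced read  */
            SendTouchMessage(x, y);  /* step 5: report to the UI layer  */
}
```

The real drivers wrap this in a forever loop inside an RTOS task, blocking on the pen-down interrupt between touches.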

Hardware configuration
The first thing the touch drivers need to do is configure the hardware. For these integrated
controllers, this means writing to memory-mapped registers to configure the controllers to a
known state. This is done in a function named TouchConfigureHardware() in each driver.
In order to configure the hardware, we already have some decisions to make. Should the
driver be interrupt driven? What type of conversion rate is required to get responsive and
accurate touch position information? Let's walk through the thought process used to make
these decisions.

As for whether or not the touch driver should use interrupts, the example drivers do in fact
use the interrupt-driven approach. I did this mainly because, to be honest, it's fun to use
interrupts. Don't infer from this implementation that it's always the best or correct design,
and don't let anyone tell you that your touch driver is "done wrong" if it's not interrupt-
driven.

I bring this up only because it seems "polling" has become a dirty word to embedded
systems programmers. I once asked a client if he was polling or using interrupts to service
an input device. The response was "this is an embedded system, we don't do any polling." I
felt (temporarily) like an idiot for asking the question, but on further reflection polling is a
very reasonable thing to consider. If you're using an RTOS and all of your tasks are often
blocked waiting for an external event of some sort, the processor is sitting in an idle-task
loop, doing nothing constructive. Maybe a better design in this situation is to have your idle
task poll for input from your touch screen. It's a reasonable design and is worth considering
depending on your overall system requirements.

How to go about configuring an interrupt varies from one operating system to another.
You'll find sections of the code have been #ifdef'ed in for each of the many supported
RTOSes. In all cases the drivers actually use two distinct interrupts:

1. An interrupt to wake up when the screen is initially touched, known as the
PEN_DOWN interrupt, and
2. A second interrupt to signal when the ADC completes a set of data conversions.

I'll describe these interrupts and how they're generated in the following paragraphs.

The next question is how fast do we want to receive sample input readings from the ADC.
The rate will affect how we need to configure the clock that will drive the touch controller
and ADC. We want the clock to be fast enough to provide responsive input and accurate
tracking but not so fast that the conversion is inaccurate or the system is consuming more
power than required.

In my experience, a touch screen needs to provide position updates to the higher-level
software at a minimum 20Hz rate, or every 50ms. Faster is better, assuming the higher-
level software can keep up, and we aren't too concerned with power usage. If the touch
input response is much slower than this, there will be a noticeable and annoying lag
between touch input by the user and visual response on the display.

The 20Hz update rate might not sound too challenging, but providing updates at 20Hz
actually requires sampling at approximately 200Hz, depending on how many readings we
are going to take before deciding we have a stable input. We need to oversample in order to
debounce and average the touch input position values. Resistive touch screens, especially
the inexpensive variety, are notoriously noisy and bouncy.

The driver needs to sample the input for each axis several times before sending a position
update to the higher-level software. The provided drivers default to configuring ADC clocks
on the respective processors for a minimum 200Hz (5ms) sampling rate. This allows the
driver to sufficiently debounce and filter the incoming raw data and still provide a 20Hz true
position update rate to the high-level user interface software.
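The arithmetic behind those figures can be captured in a few driver #defines. The 10-readings-per-update oversample factor below is an assumption chosen to make the numbers match; the real drivers let you tune it:

```c
/* Rate bookkeeping behind the 20Hz/200Hz figures.
   READINGS_PER_UPDATE is an assumed oversample factor. */
#define UPDATE_RATE_HZ      20    /* position reports to the UI       */
#define READINGS_PER_UPDATE 10    /* raw samples debounced per report */
#define SAMPLE_RATE_HZ   (UPDATE_RATE_HZ * READINGS_PER_UPDATE) /* 200Hz */
#define SAMPLE_PERIOD_MS (1000 / SAMPLE_RATE_HZ)                 /* 5ms  */
```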

For the Freescale i.MX processor the touch controller module is named the Analog Signal
Processor (ASP). The processor provides two peripheral clocks that are derived by dividing
the core CPU clock. The course input to the ASP block is PERCLK2 (Peripheral Clock 2),
which can again be divided to produce the final input clock to the ASP. Note that PERCLK2
drives other sub-modules in addition to the ASP block, including the internal LCD controller,
and therefore the touch driver cannot program PERCLK2 just for a good fit for touch
sampling. PERCLK2 is programmed to the highest rate required by all attached peripherals,
which in most cases would be the LCD controller, and further divided as required for the
slower peripherals. The MC9328MX1 reference manual includes a table that specifies the
clock programming values needed to achieve a 200Hz output data rate.
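The divider search itself is simple. The sketch below is generic, not the i.MX register encoding: the PERCLK2 frequency in the test and the clocks-per-conversion figure are invented for illustration, and on real hardware you would consult the reference manual's table instead.

```c
/* Find the smallest divider that brings the peripheral clock down to
   (at most) the clock rate needed for a target ADC output data rate.
   CLOCKS_PER_CONVERSION is an invented figure, not the i.MX value. */
#define CLOCKS_PER_CONVERSION 32

int AdcDivider(long perclk2_hz, int target_rate_hz)
{
    long target_clk = (long)target_rate_hz * CLOCKS_PER_CONVERSION;
    int div = 1;
    while (perclk2_hz / div > target_clk)   /* divide until at or below */
        div++;
    return div;
}
```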

Hardware configuration for the Sharp LH79524 requires programming a few GPIO pins so
that they are assigned to the ADC function, programming and enabling the ADC clock, and
programming the ADC sequencer.

The LH79524 ADC is a fully programmable state machine and sequencer that is quite an
amazing bit of circuitry all by itself. The ADC can be programmed to drive one touch plane;
delay; take a measurement; drive the opposite plane; delay; take a measurement, and so
on, all without any core CPU intervention whatsoever. Understanding how to program the
LH79524 ADC sequencer control banks can be a challenge, but the task is made much easier
by an application note provided by Sharp (www.sharpsma.com). The provided driver
precisely follows the recommendations of this application note regarding how to configure
the Sharp ADC sequence controller.

Is the screen touched?


Once the basic hardware setup is complete, we need a reliable method to determine if the
screen is touched. It makes no sense to run the ADC and get conversion readings if the user
is not touching the screen. Both controllers provide a mechanism to detect if the screen is
touched, and optionally to interrupt the main processor when a touch down event occurs.
The driver function that determines if the screen is touched or not is named
WaitForTouchState().

When the controller is in touch-detection mode, the y-axis touch plane is tied high
through a pull-up resistor. The x-axis touch plane is tied to ground. When the user touches
anywhere on the screen, the planes are shorted together and the y-axis plane is pulled low.
This can be connected internally to an interrupt generation mechanism known as the
PEN_DOWN IRQ.

During normal operation the drivers use the PEN_DOWN IRQ to wake up the touch driver
task when a touch-down event occurs. This allows the driver task to block itself and not
consume any CPU time when the screen is not touched and wake up and go into conversion
mode once the user touches the screen. We can also save power by disabling the ADC clock
while not in active conversion mode.

During calibration and active sampling the drivers use the same basic mechanism to detect
a screen touch; however, in these modes the drivers mask the actual interrupt and simply
check the touch status manually. For the Freescale processor, this requires programming
the controller to touch detect mode and checking the PEN_DOWN IRQ bit. For the Sharp
processor, touch detection is built into the ADC command sequence and no extra steps are
required.

Reading touch data


During calibration and normal operation, we need a procedure to read and debounce the x-
and y-axis raw data values and determine if we have a stable reading while the screen is
touched. This procedure is named TouchScan() in both drivers. The outline of this procedure
is:

1. Check to see if the screen is touched.
2. Take several raw readings on each axis for later filtering.
3. Check to see if the screen is still touched.
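The outline above can be sketched directly. The pen_down and read_axis callbacks below stand in for the hardware access, and the burst size is an assumption; checking the pen state both before and after the burst is what keeps a reading captured mid-release from being reported:

```c
#include <stdbool.h>

#define READINGS_PER_AXIS 6   /* assumed burst size per axis */

/* Sketch of TouchScan(): confirm the pen is down before and after
   sampling, so a burst captured while the pen is lifting is never
   handed to the filtering stage as a valid reading. */
bool TouchScanSketch(bool (*pen_down)(void), int (*read_axis)(int axis),
                     int raw_x[], int raw_y[])
{
    if (!pen_down())                       /* 1. touched at all?    */
        return false;
    for (int i = 0; i < READINGS_PER_AXIS; i++) {
        raw_x[i] = read_axis(0);           /* 2. raw x and y bursts */
        raw_y[i] = read_axis(1);
    }
    return pen_down();                     /* 3. still touched?     */
}

/* Simulated hardware for demonstration only. */
static bool sim_pen_down(void)      { return true; }
static int  sim_read_axis(int axis) { return axis ? 300 : 500; }
```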

While performing analog-to-digital conversions, both controllers provide means to program
a delay between powering the touch planes and beginning an actual analog-to-digital
conversion. Freescale calls this delay the Data Setup Count (DSCNT), and it's a number of
ASP input clocks to delay after switching between planes. Sharp calls this the pre-charge
delay.

In either case this time delay is needed because the resistive touch panel is two large
conductors separated by a thin insulator, which is the textbook definition of a capacitor. A
delay is required when switching which plane we're powering and from which plane we'll
perform an analog-to-digital conversion to allow this capacitor to settle to steady state.

For the Freescale i.MX1 processor, once we initiate conversions the data produced by the
ADC is stored in a 16-bit wide by 12-entry deep FIFO. The ADC produces 9-bit unsigned
results, so the upper seven bits of each 16-bit entry are discarded. This means the full-scale
data range of this touch controller is 0 to 511, although in reality no ADC or resistive touch
screen produces results near the limit values.

We can program the processor to generate an interrupt when the FIFO has any data
available or program to interrupt when the input FIFO is full. Since we always want to take
multiple readings, the driver programs the FIFO to interrupt when full. When this interrupt
occurs, we have 12 raw analog-to-digital conversions ready for processing, corresponding to
six readings for the x-axis and six readings for the y-axis.
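Draining that FIFO into one averaged pair per axis might look like the sketch below. The first-six-x, last-six-y ordering is an assumption for illustration; the 9-bit mask follows from the discarded upper bits described above:

```c
/* Drain the 12-entry conversion FIFO into one averaged (x, y) pair.
   The six-then-six entry ordering is an assumed layout. */
#define FIFO_DEPTH 12

void AverageFifo(const unsigned short fifo[FIFO_DEPTH], int *x, int *y)
{
    int sum_x = 0, sum_y = 0;
    for (int i = 0; i < FIFO_DEPTH / 2; i++) {
        sum_x += fifo[i] & 0x01FF;                  /* keep 9-bit result */
        sum_y += fifo[i + FIFO_DEPTH / 2] & 0x01FF;
    }
    *x = sum_x / (FIFO_DEPTH / 2);
    *y = sum_y / (FIFO_DEPTH / 2);
}
```

Averaging the burst knocks down random converter noise before the debounce logic ever sees the data.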

The Sharp LH79524 processor allows us to program a precise sequence of steps to complete
before generating an interrupt. As each step is performed, results are likewise stored in an
input FIFO to be retrieved by our driver software. Results are stored as 16-bit values. The
most significant 10 bits of each result are the analog-to-digital conversion value, and the
least significant four bits are the sequence index. The 10-bit conversion result means that
this touch controller has a full-scale range of 0 to 1,023 counts, although again you will
never observe results exactly at the limit values.

Once the sequencer control words are programmed on the LH79524, all the driver needs to
do to acquire raw readings is command the sequencer to go. When the EOS (End Of
Sequence) interrupt is generated, our results are ready to be picked up and examined. The
sequencer can be configured to trigger automatically when the screen is touched, trigger on
software command, or trigger continuously.

Be aware that there will always be some noise and variation in raw converter readings; this
is normal. You'd be hard-pressed to take two consecutive readings on a resistive touch
screen and get back identical 9-bit or 10-bit raw data values. You will find however that as
the stylus or finger enters or leaves the touch screen, the readings vary much more than if
you are holding steady pressure. Remember that the user is mechanically connecting
together two flat resistors, the touch planes. Some small amount of time will pass during
which the electrical connection between the two planes is marginal, as the user presses and
releases the touch panel. We need to reject these readings until the system stabilizes,
otherwise our reported touch position will jump about wildly and the higher level software
will not act appropriately.

There is an unavoidable tradeoff here. If we require a narrow stability window, the driver
won't be able to track fast "drag" operations. This is important for things like scrolling or
pen-tracking during signature input. If we widen the stability window, we run the risk of
accepting touch values that are inaccurate and the result of the marginal plane connection
described above. You will need to experiment to determine the best values to use on your
system. Intelligent touch controllers likewise allow you to tune these parameters via
software commands.
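One simple form of that stability window is to accept a burst only when every pair of consecutive readings falls within a #defined noise band. The window value below is an arbitrary starting point, precisely the parameter you would tune per system:

```c
#include <stdbool.h>
#include <stdlib.h>

#define NOISE_WINDOW 8   /* max counts between consecutive readings; tune this */

/* Reject bursts captured while the pen is entering or leaving the
   panel, when readings jump far more than under steady pressure. */
bool ReadingsStable(const int r[], int count)
{
    for (int i = 1; i < count; i++)
        if (abs(r[i] - r[i - 1]) > NOISE_WINDOW)
            return false;
    return true;
}
```

Widening NOISE_WINDOW tracks fast drags better; narrowing it rejects more marginal-contact readings, which is the tradeoff described above.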

The number of readings to take for each sample, the allowable variation in consecutive
readings, and the speed at which samples are taken are all programmable parameters of
each driver. These parameters can be adjusted via #defines to produce the best results on
your system. An intelligent external touch controller will often take dozens or hundreds of
readings at a very fast rate for improved accuracy. Since we are doing this filtering using
our core CPU, we need to decide how much time we can reasonably afford to spend in our
touch-sampling task. Embedded systems involve tradeoffs, and it's your job to make good
compromises to produce a system your user is happy with.

As a sort of game, I like to test the commercial touch systems I run into in daily life. The
next time you sign for a purchase or package using a touch screen, try flailing away with
fast broad pen movements. Watch the result and see how well the screen tracks your
movements. If you see nice smooth tracking, you know the driver is sampling pretty fast,
probably 200Hz or more. Often you will observe your strokes turned into straight lines (slow
sampling) or missed entirely (rejected input due to large value changes). Try not to shout
"yee haa" while you're performing this little test in a retail store or you might get some
strange looks. Normal people just don't understand what excites engineers.

Calibration
To this point we have been describing the support functions of the drivers: the dirty work
that must be finished and working before we can get to the cool stuff. Now that these functions
are in place we're ready to actually ask the user to touch the screen.

Resistive touch screens require calibration. We need some reference values to be able to
convert the raw A-to-D numbers we'll receive into screen pixel coordinates required by the
higher-level software. In an ideal case the calibration routine might be run once during
initial product power-up testing, and the reference values saved to nonvolatile memory. I've
organized the touch drivers to run the calibration routine once on entry, but keep in mind
that you can save the reference values and not bother the user with calibration on
subsequent power-up cycles. In any case you'll want to provide the user with a method of
entering the calibration routine just in case the calibration becomes inaccurate due to
temperature drift or other factors.

The calibration routine, named CalibrateTouchScreen(), is a simple step-by-step procedure
that provides the user with a graphical target on the screen, asks the user to touch the
target, and records the raw ADC readings for use later in our raw-data to pixel-position
scaling routine. The graphical target and user prompts are displayed by using the Prism
graphics software API, but this can be implemented using any similar graphics software.

In a perfect world we'd need only two sets (x and y) of raw values, the minimum and
maximum values read at opposite corners of the screen. In reality many resistive touch
screens are notably nonlinear, meaning that simply interpolating positions between the min
and max values will yield a highly inaccurate driver.
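In that ideal two-point case the raw-to-pixel conversion reduces to a single linear interpolation between the calibration readings taken at opposite screen edges. The calibration values in the test below are invented for illustration:

```c
/* Two-point linear interpolation from raw ADC counts to a pixel
   coordinate, using calibration readings recorded at the screen edges. */
int ScaleAxis(int raw, int cal_min, int cal_max, int pixels)
{
    return ((raw - cal_min) * (pixels - 1)) / (cal_max - cal_min);
}
```

A multi-point calibration applies the same formula per interpolation window, which is what tames the nonlinearity discussed next.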

By nonlinear, I mean that equidistant physical movements across the screen won't return
equal increments in the raw data. Worse still, the value read for one axis, say the y-axis,
might vary significantly even if we only change the x-axis touch position. To demonstrate
this phenomenon I charted y-axis data readings as I moved a stylus across a typical
resistive touch screen from left to right, keeping the y-axis position as constant as possible.
You would reasonably expect the y-axis readings to remain somewhat constant as we
slide from left to right across the x-axis, but as Figure 3 shows this is not the case.

Figure 3: Y-axis variation with x-axis motion

The result of this is that the more calibration points you can take the better, to minimize the
span of your interpolation windows and produce the best accuracy possible. If you can
calibrate once in the factory, taking a lot of sample points isn't a big deal. If that's not
possible you'll have to decide how many points you want to force your user to enter to
produce an accurate calibration. The provided calibration routine uses four data points, one
at each corner of the screen. This produces results accurate to within a pixel or two on a
VGA resolution (640x480) display screen on the reference boards described. For higher
screen resolutions or other touch screens, this may be overkill or it may be nowhere near
enough data points to produce an accurate driver. The only way to determine this for
certain is to work with your real hardware and do lots of trial-and-error testing.

In any case, my advice is to err on the side of taking too many calibration points. The user
will be less annoyed by a long calibration procedure done infrequently than by a system
that doesn't accurately respond to touch input in normal operation.

Normal operation
Once the calibration sequence is complete, we're ready to begin normal operation and start
sending touch events to the higher-level software. I've organized each of the provided touch
drivers to run as a low-priority task within each supported RTOS environment.

The task entry point is named PrismTouchTask, because the drivers are written to operate
with the Prism graphics software. These drivers can be modified to work with another
graphics package or even your own home-brew user-interface environment. In any case
PrismTouchTask first calls the hardware-configuration routine, then calls the calibration
routine, and finally enters a forever loop waiting for touch input.

In the MX1 driver, the forever loop blocks itself by waiting for the PEN_DOWN interrupt
event described earlier. While the screen is touched, the task continuously reads raw
values, converts them to screen pixel coordinates, and sends changes in touch position or
status to the higher-level software. I call this the "active tracking" mode.

The LH79524 driver works in a similar fashion. When a PEN_DOWN interrupt is generated,
we command the ADC sequencer to start doing conversions. The driver does this at a 20Hz
rate, checking for position changes, until the screen is no longer touched.

While the screen is touched, we continuously read multiple conversion values for each axis
to determine if the touch position is stable. If the delta between any two consecutive
readings is outside of a #defined noise window, we start over. We do this until either we
can read multiple consecutive values that are within this #defined stability range, at which
time we scale the results and report an update to the higher-level software, or the screen is
no longer touched, at which time we again block the task and wait for input.

Before and after each conversion sequence, the driver must check to ensure that the screen
is still touched. We don't want to report a stable reading to the higher-level software that is
actually an "open state" reading. I've also seen drivers that automatically discard N number
of readings after the screen is initially touched. I didn't find discarding some number of
initial readings to be necessary or beneficial with either of the example boards.

While the screen is touched, the driver takes each stable reading and converts the raw data
to pixel coordinates using simple linear interpolation. The routine to read raw values and
convert them to screen coordinates is named GetScaledTouchPosition().

The final piece


OK, we've tuned the driver and have accurate, scaled, reliable touch information. What do
we do with all this great data? If you're running with a graphical user interface system like
Prism, the heavy lifting is done. You simply pack up the touch data into a message and send
the message into the PRISM message queue. The Prism software figures out what to do with
it from there.

Prism recognizes three touch-input event types corresponding to touch down, touch up, and
drag. Sending drag events is optional but is required if you want to present smooth scrolling
operations to the user. The logic to decide what type of message to send into the Prism
message queue is contained in the function named SendTouchMessage() in the provided
source code.

One item of note here is the use of a function named Fold() for sending drag
(POINTERMOVE) messages. This is a handy Prism API function that prevents the user
interface from getting behind in response to user input. For example if the user is scrolling a
large window on a high-resolution display, it's possible for the user interface to get behind
in re-drawing the scrolled window. You don't want the screen to continue scrolling after the
user releases the scroll bar while the user interface catches up. Instead, if the message
queue already contains a PM_POINTERMOVE message, we just want to update that message
to the latest position instead of posting a new message. The effect is that the user interface
scrolls to the latest position and skips any intermediate position updates that are happening
too fast for the processor.

This is the purpose of the Fold function provided by Prism. It checks to see if this message
type is already in the message queue, and if so it simply updates the existing message
instead of posting an entirely new message. You might want to implement something
similar if you're using an alternative graphics package.
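A home-grown version of that folding logic might look like the sketch below. The queue layout and message structure are invented; Prism's real Fold() API and queue internals will differ:

```c
#include <stdbool.h>

/* Sketch of message "folding". The message type value and queue
   representation here are assumptions for illustration. */
#define PM_POINTERMOVE 1

typedef struct { int type, x, y; } TouchMsg;

/* If a pointer-move message is already queued, update it in place so
   the UI jumps straight to the newest position; otherwise tell the
   caller to post a new message as usual. */
bool FoldMove(TouchMsg queue[], int count, int x, int y)
{
    for (int i = 0; i < count; i++) {
        if (queue[i].type == PM_POINTERMOVE) {
            queue[i].x = x;
            queue[i].y = y;
            return true;     /* folded into the pending message */
        }
    }
    return false;            /* nothing pending; post normally  */
}
```

The effect is exactly the behavior described above: intermediate drag positions that arrive faster than the UI can redraw are silently collapsed into the latest one.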

Reach out and download


I've tried to provide an overview of what is required to implement a touch driver using two
modern CPUs that integrate touch-control circuitry. Contact us to get a sample of the
touch-driver source code to review.

Obviously providing accurate and reliable touch information can require a significant amount
of processor time. Intelligent ADCs designed specifically for supporting touch-screen input
can greatly reduce your core CPU loading and improve the accuracy of your touch-screen
input system.

Ken Maxwell is a practicing software engineer with 18 years of experience writing
embedded software. Ken is currently the president of Blue Water Embedded, Inc., makers of
the Prism graphics toolkit for embedded systems. You can reach him at
info@bwembedded.com.

Resources:

Freescale Semiconductor Inc.
MC9328MX1 Reference Manual
www.freescale.com

Sharp Microelectronics
LH79524 User's Guide
www.sharpsma.com

Logic Product Development
www.logicpd.com

ELO TouchSystems
Touch-screen technology datasheets
www.elotouch.com

Apollo Display Technologies
Touch-screen technology datasheets
www.apollodisplays.com

Sharp Microelectronics
Using the Sharp ADC with Resistive Touch Screens, Paul Kovitz, Staff Engineer
www.sharpsma.com
