
Tactile Graphics Rendering Using Three Laterotactile Drawing Primitives

Vincent Levesque, Vincent Hayward
Haptics Laboratory, McGill University, Montreal, Canada
Proceedings of Haptic Symposium 2008, March 13-14, Reno, Nevada.


This paper presents preliminary work towards the development and evaluation of a practical refreshable tactile graphics system for the display of tactile maps, diagrams and graphs for people with visual impairments. Refreshable tactile graphics were dynamically produced by laterally deforming the skin of a finger using the STReSS2 tactile display. Tactile features were displayed over an 11 x 6 cm virtual surface by controlling the tactile sensations produced by the fingerpad-sized tactile display as it was moved on a planar carrier. Three tactile rendering methods were used to respectively produce virtual gratings, dots and vibrating patterns. These tactile features were used alone or in combination to display shapes and textures. The ability of the system to produce tactile graphics elements was evaluated in five experiments, each conducted with 10 sighted subjects. The first four evaluated the perception of simple shapes, grating orientations, and grating spatial frequencies. The fifth experiment combined these elements and showed that tactile icons composed of both vibrating contours and grated textures can be identified. The fifth experiment was repeated with 6 visually impaired subjects with results suggesting that similar performance should be expected from that user group.

1. Introduction

Refreshable braille displays and speech synthesizers have greatly improved access to textual information for visually impaired persons by giving them access to digitized content. Access to graphical information remains comparatively limited in part because visual graphics must be processed and simplified to be suitable for tactile use, but also because of the unavailability of reliable and affordable means to convey refreshable tactile graphics through a computer. Most tactile graphics are currently produced on physical media through a variety of methods including collage, embossed paper, thermoforming, printing on microcapsule paper and, more recently, high-density braille printing and 3D printing [5,23,15]. Tactile graphics produced with such methods have proved to be of great use for geographic and orientation maps, mathematical graphs and diagrams. These are particularly important in education where visually-impaired students must have access to the same information as their sighted peers [1,10]. They can also provide information which would be difficult to grasp from direct experience of the environment or from verbal descriptions [3]. Tactile graphics produced on physical media, however, are typically bulky and often deteriorate with use. More importantly, physical media does not afford access to dynamic content such as interactive geographic information systems (GIS). The interactive control over features such as layer visibility and zoom level offered by these applications could be particularly valuable in the context of tactile graphics since information density must generally be reduced to cope with the skin's limited resolution. Refreshable tactile graphics could therefore improve the experience of interacting with graphical information for the visually impaired.

Various approaches have been explored to produce interactive tactile graphic displays. Pen-based 3D force-feedback devices can be used to simulate the exploration of raised-line drawings or other 3D patterns with a probe [16,20,26]. Patterns can similarly be produced with 2DOF haptic devices such as consumer-grade haptic mice and joysticks [17,26]. Although these approaches can be effective, interacting through a single point of contact reduces realism and complicates exploration. An alternative consists of using a transducer known as a tactile display that produces programmable tactile sensations by deforming or otherwise stimulating the skin. Research on tactile displays has resulted in a wide array of prototypes using different skin stimulation methods and actuation technologies [22]. The difficulty of designing tactile displays results largely from the high density of actuators needed to produce distributed sensations in the fingerpad. Although their use extends to other fields such as surgery simulation and gaming, many tactile displays have been evaluated on the basis of their ability to display shapes or other tactile patterns [21,19,9,8,2]. Readers are referred to [22] for a more complete survey of experimental tactile displays and their use as graphical displays for visually impaired persons.

Tactile displays can be divided into two classes depending on whether they provide a real or virtual surface for exploration. The first class of displays presents a large, programmable surface to be explored by the fingers or hands. The surface typically consists of an array of actuated pins that produce a discrete approximation of a 3D surface. Shimada et al., for example, designed a tactile graphics display system with a 32 x 48 array of braille pins manufactured by KGS Corp. (Japan) [18]. Although this approach closely approximates static tactile graphics, it also increases cost due to the large number of actuators needed. The large size of such tactile displays also hinders portability. The second approach consists of producing a large virtual surface out of a smaller tactile display. This is achieved by dynamically altering the sensation produced by a tactile display in fixed contact with the fingerpad in response to displacements. The most famous example is the Optacon, a reading aid commercialized in the 1970s that converted images of printed text captured from a mobile camera to tactile patterns on an array of 24x6 vibrating pins [13]. Reasonable reading rates were achieved after considerable training. Tactile displays of this class can also be used to explore virtual tactile graphics when connected to a personal computer. An example is the VTPlayer mouse with its two 4x4 braille pin arrays [7]. The main advantage of this approach is that fewer actuators are needed, reducing cost and size. Producing meaningful sensations without relative motion between the stimulator and fingerpad, however, is challenging.

The work presented in this paper aims to address this problem by producing controlled lateral skin deformation patterns rather than indenting the skin. This principle, which we name laterotactile stimulation, assumes that the critical deformation occurring at the level of the mechanoreceptors can be approximated by laterally deforming the skin. A series of tactile displays have been designed to exploit this principle, including 1D [12,14] and 2D [24] arrays of laterally moving skin contactors. All use a similar technology based on piezoelectric bending motors. Previous work has shown that when appropriately programmed, skin deformation patterns produced by these displays can evoke the sensation of brushing against localized features such as braille dots and gratings [12,11,14].

This paper presents our most recent work on the display of refreshable tactile graphics using the latest 2D laterotactile display, the STReSS2 [24]. Three tactile rendering methods capable of producing the sensation of gratings, dots, and vibrating patterns are presented. An early version of these tactile rendering algorithms was previously used in a tactile memory game that demonstrated the capabilities of the STReSS2 tactile display during the 2006 ACM Conference on Human Factors in Computing Systems [25].

This paper also reports on our efforts to evaluate the effectiveness of the system for the display of tactile graphics. A first experiment evaluated the identification of simple shapes using the three rendering methods. The next three experiments investigated the device's rendering of tactile gratings at various orientations and spatial frequencies. The final experiment combined shape and texture rendering to evaluate the system's ability to display tactile icons. The first four experiments were each conducted with 10 sighted subjects. The final experiment was conducted with 10 sighted and 6 visually impaired subjects to validate the results for the target user group. The elements of tactile graphics investigated here constitute a first step toward the design of tactile maps and diagrams adapted for display by laterotactile stimulation.

2. Tactile Display Prototype

The tactile rendering system used in this work is a prototype that combines a STReSS2 tactile display with an instrumented 2D planar carrier (Fig. 1). The revision of the STReSS2 used here stimulates the skin by controlling the deflection of a matrix of 8x8 piezoelectric bending motors [24]. The actuators deflect toward the left or right by approximately 0.1 mm, and produce a blocked force on the order of 0.15 N. The center-to-center spacing between adjacent actuators is 1.2x1.4 mm. The reading fingerpad therefore rests against an active contact area of 9x11 mm. Filters with a 200 Hz cut-off frequency enable more accurate signal reconstruction and attenuate most energy at the natural resonance frequency of the actuators, resulting in the elimination of most audible noise and in more natural tactile sensations.

Figure 1a: Picture of the STReSS2 tactile display with its array of 8 by 8 actuators.
Figure 1b: Picture of the STReSS2 mounted over the tip of a robotic device's arms.
Figure 1c: Picture of hand holding the device with the index finger on the tactile display.
Figure 1. (a) Active area of the STReSS2 tactile display, (b) STReSS2 mounted on a planar carrier, and (c) usage of the device.

The STReSS2 was mounted on a planar carrier that allowed movement within an 11x6 cm virtual surface. The carrier was a 2-degree-of-freedom (2DOF) haptic device with low friction and inertia called the Pantograph [4]. The device was used as a passive carrier and its motors were therefore inactive. The carrier measured position with a resolution better than 13 μm. The workspace of the Pantograph was slightly reduced to prevent collision with the tactile display, resulting in the above-mentioned virtual surface dimensions. Rotation of the display was neither prevented nor measured and users were therefore required to maintain the orientation of the display. The tactile display's electronics were covered with a plastic protector and foam for safe and comfortable usage. More information about this apparatus can be found in [24].

The system's driving signals were produced at 1 kHz on a personal computer running Linux and the Xenomai real-time framework. Actuator activation signals were produced with a resolution of 10 bits. Rendering algorithms and drivers were programmed in C++.

3. Tactile Rendering

The STReSS2 display produces tactile sensations by dynamically controlling lateral deformation patterns on the fingerpad in response to exploratory movements within its planar carrier's workspace. Extracting meaningful sensations from this mode of skin stimulation requires the specification of appropriate actuator activation patterns, a process that we term tactile rendering by analogy with graphics rendering. This section describes in detail three laterotactile rendering methods that produce dotted outlines, vibrating patterns and virtual gratings. The tactile sensations produced by these rendering methods are modulated over the virtual surface using bitmapped grayscale images. This allows the creation of tactile patterns with standard image editing software. These renderings can also be combined to create more complex tactile graphics. Fig. 2 shows visual representations of squares rendered with all three methods.

Figure 2a: Illustration of a square outline made of small dots.
Figure 2b: Illustration of a square outline made of a white noise texture.
Figure 2c: Illustration of a square filled with a sinusoidal grating.
Figure 2d: Illustration combining the patterns shown in Fig. 2 a, b and c.
Figure 2. Visual illustration of squares rendered with (a) dots, (b) vibration, (c) gratings, and (d) a combination of all three.
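The grayscale-mask modulation described above can be sketched as follows. This is a Python illustration only (the paper's implementation is in C++); the function name, the 0-255 pixel convention, and the choice of bilinear interpolation are assumptions, since the paper does not specify how the image is sampled.

```python
import numpy as np

def sample_mask(mask, x, y, width_mm=110.0, height_mm=60.0):
    """Bilinearly sample a grayscale mask (values 0-255) at (x, y) in mm.

    mask: 2D array covering the virtual surface, row 0 at the top.
    Returns a modulation amplitude between 0.0 and 1.0.
    """
    h, w = mask.shape
    # Map physical coordinates to fractional pixel coordinates.
    u = np.clip(x / width_mm * (w - 1), 0, w - 1)
    v = np.clip(y / height_mm * (h - 1), 0, h - 1)
    u0, v0 = int(u), int(v)
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    fu, fv = u - u0, v - v0
    # Interpolate along x on the two bounding rows, then along y.
    top = mask[v0, u0] * (1 - fu) + mask[v0, u1] * fu
    bot = mask[v1, u0] * (1 - fu) + mask[v1, u1] * fu
    return (top * (1 - fv) + bot * fv) / 255.0
```

The returned amplitude would then scale the deflection computed by any of the three rendering methods at that actuator's position.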

By convention, the following discussion specifies actuator deflections δ between 0 (right) and 1.0 (left). Deflecting actuators to the right when at rest provides a greater swing upon activation and increases the strength of some sensations. Although directional effects appear to be minimal, this resting position is also selected so that activation occurs against motion when moving the display from left to right. The deflection of unloaded actuators is used here as an approximation of the intended skin deformation patterns. Actual deformation patterns may differ due to the complex biomechanical properties of the skin. Fig. 3 illustrates the displacement of the tactile display over a virtual surface as well as the deflection of its actuators.

Figure 3a: Illustration showing the tactile display as a matrix of 8 by 8 red squares over a circle filled with a grating texture.
Figure 3b: Close-up on the tactile display representation.
Figure 3. (a) Virtual surface with a grated circle and (b) close-up on tactile display deflection pattern.

3.1 Dot Rendering

The sensation of brushing against a dot is produced by swinging actuators towards the left and then back to the right as they slide over a virtual dot. The deflection is expressed mathematically as follows:

δ(r) = 1.0                                  if r ≤ P
     = 0.5 + 0.5 cos(π (r − P) / (1 − P))   if P < r ≤ 1.0
     = 0.0                                  otherwise        (1)

where r = ||p_i,j − p_center|| / radius is the relative distance from the center of the dot. As they move over the dot, actuators first follow a smooth sinusoid that takes them from their rest position on the right to their active position on the left. They then maintain this deflection over a plateau of radius P. A plateau of P=0.25 was found to produce smooth transitions while giving each dot sufficient area to be easily perceptible from any direction. The location and amplitude of each dot is specified with blobs in a grayscale image. Dots can be positioned anywhere on the virtual surface provided that they do not overlap. Dot patterns are represented visually as shown in Fig. 2a.

This rendering method was inspired by earlier work on the display of Braille [11] and dot patterns [25] by lateral skin deformation. While these earlier attempts assumed that the dots were either exclusively or mostly explored by horizontal motion, the improved method presented here allows exploration from any direction, thereby improving the realism of the sensation and facilitating contour following. This improvement results from the use of a radial deflection pattern with a plateau at its center.
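Equation (1) can be sketched in Python as follows (the paper's driver code is in C++; the function name and the use of NumPy here are illustrative):

```python
import numpy as np

def dot_deflection(r, P=0.25):
    """Actuator deflection over a virtual dot, Eq. (1).

    r: distance to the dot center, normalized by the dot radius.
    P: plateau radius; 0.25 gave smooth transitions in the paper.
    Returns 0.0 (rest position, right) to 1.0 (fully deflected, left).
    """
    if r <= P:
        return 1.0                 # plateau at full deflection
    elif r <= 1.0:
        # Smooth sinusoidal transition back to the rest position.
        return 0.5 + 0.5 * np.cos(np.pi * (r - P) / (1.0 - P))
    return 0.0                     # outside the dot: rest
```

Because the deflection depends only on radial distance, the same profile is felt regardless of the direction from which the finger approaches the dot.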

3.2 Vibration Rendering

This tactile rendering method produces a sensation of localized vibration within the virtual surface [25]. The vibratory sensation is produced by controlling the deflection of each actuator i,j as a temporal oscillation:

δ_i,j(t) = 0.5 + 0.5 cos(2πft)   if i + j is odd
         = 0.5 − 0.5 cos(2πft)   if i + j is even        (2)

The phase of vibration is inverted for adjacent actuators to maximize compression and shearing deformation, and thereby increase the strength of the sensation. A vibration frequency of 50 Hz was found to provide the strongest sensation. Higher frequencies could potentially increase contrast further but could not be used at the present time due to limitations of the I/O hardware used to communicate with the STReSS2.

Vibratory patterns are produced by modulating the amplitude of vibration of actuators as a function of their position within the virtual surface. The amplitude mapping is specified with a grayscale image mask. Vibrating patterns are represented visually using a white-noise texture (e.g. Fig. 2b).
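A minimal Python sketch of Eq. (2), including the mask-derived amplitude modulation (function name and amplitude parameter are illustrative, not the paper's API):

```python
import numpy as np

def vibration_deflection(i, j, t, f=50.0, amplitude=1.0):
    """Checkerboard-phase vibration of actuator (i, j), Eq. (2).

    Adjacent actuators vibrate in opposite phase to maximize
    compression and shearing of the skin; f = 50 Hz gave the
    strongest sensation. amplitude is taken from the image mask.
    """
    carrier = 0.5 * np.cos(2.0 * np.pi * f * t)
    if (i + j) % 2 == 1:           # i and j of opposite parity
        return amplitude * (0.5 + carrier)
    return amplitude * (0.5 - carrier)
```

At the 1 kHz update rate reported in Section 2, a 50 Hz carrier is sampled 20 times per period, which comfortably satisfies the reconstruction filters' 200 Hz cut-off.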

3.3 Grating Rendering

This rendering method extends our earlier work on the display of vertical gratings to that of gratings of arbitrary orientation [25]. The grating rendering produces a sensation similar to that of brushing a finger against a corrugated surface. This sensation is obtained by propagating a sinusoidal wave across the tactile display at a specific angle. The deflection of each actuator is given by:

δ(d) = 0.5 + 0.5 cos(2πd / λ)        (3)

where d = y cosθ - x sinθ is the distance from the actuator position (x,y) to a reference line crossing the origin at angle θ. This produces a grating of spatial wavelength λ at an orientation of θ. Horizontal and vertical gratings produce natural sensations for a wide range of spatial frequencies. Diagonal orientations produce noisier sensations. The orientation of a grating can be judged either by attending to the subtle directional sensation on the fingertip or by finding the direction of movement with the strongest or weakest stimulus, corresponding to motion across or along ridges respectively. Again, the amplitude of the grating texture is modulated by an image mask. Grating patterns are represented visually as shown in Fig. 2c.
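Equation (3) and the distance computation that precedes it can be sketched as follows (Python for illustration; the function name and amplitude parameter are assumptions):

```python
import numpy as np

def grating_deflection(x, y, wavelength, theta, amplitude=1.0):
    """Sinusoidal grating deflection at actuator position (x, y), Eq. (3).

    wavelength: spatial wavelength lambda in mm.
    theta: grating orientation in radians.
    d is the signed distance from (x, y) to a reference line
    crossing the origin at angle theta.
    """
    d = y * np.cos(theta) - x * np.sin(theta)
    return amplitude * (0.5 + 0.5 * np.cos(2.0 * np.pi * d / wavelength))
```

With theta = 0 the deflection varies only with y, giving a horizontal grating; theta = π/2 makes it vary only with x, giving a vertical one.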

3.4 Composite Rendering

The three rendering methods described previously produce tactile patterns by deflecting the actuators only as they pass over specific regions of the virtual surface, otherwise leaving them at their resting position to the right. Provided that there is no overlap between their active regions, it is therefore possible to combine tactile layers rendered with different methods by simply adding together their modulated actuator deflection functions. This allows complex tactile patterns to be created, as represented visually in Fig. 2d.
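Under the no-overlap condition stated above, layer combination reduces to a per-actuator sum; at most one term is nonzero at any position. The sketch below adds a clip to [0, 1] as a safeguard against accidental overlap, which is an assumption of this illustration rather than something the paper describes:

```python
def composite_deflection(layer_deflections):
    """Combine tactile layers for one actuator by summing deflections.

    layer_deflections: iterable of deflection values in [0, 1], one
    per layer, each already modulated by that layer's grayscale mask.
    Valid when the layers' active regions do not overlap; the clip
    is only a safeguard.
    """
    return min(max(sum(layer_deflections), 0.0), 1.0)
```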

4. Experiments

This section describes five experiments that were conducted to gain a better understanding of the tactile display system's capabilities. The first experiment looked at the identification of simple geometrical shapes produced with either dots, vibration or gratings (Section 4.1). The second experiment investigated the difference in spatial frequency necessary to differentiate gratings (Section 4.2). The third and fourth experiments studied the identification of grating orientations, first with intervals of 30° and then of 45° (Sections 4.3 and 4.4). The fifth experiment looked at the identification of tactile icons composed of vibrating contours and grated interiors (Section 4.5).

Table 1: Group A was composed of 2 males and 8 females aged from 19 to 29 with a mean of 22.4. Group B was composed of 4 males and 6 females aged from 19 to 26 with a mean of 21.8. Group C was composed of 3 males and 7 females aged from 18 to 52 with a mean of 24.1. Group VI was composed of 5 males and 1 female aged from 35 to 65 with a mean of 47.8. One subject in groups A, B, and VI, and two in group C were left-handed.

Table 1. Description of the four groups of subjects who participated in the experiments.

Three groups of 10 sighted subjects (A, B and C) and one group of 6 visually impaired subjects (VI) participated in the experiments. Each subject took part in one or two experiments during a one-hour experimental session. Group A participated in the first experiment, group B in the second and third, and group C in the fourth and fifth. The fifth experiment was repeated with the visually impaired subjects of group VI. The subjects were selected solely based on availability and paid for their participation. They performed the experiment with the index finger of their preferred hand. Details about preferred hand, gender, and age distribution within the subject groups are shown in Table 1. Two of the subjects of group VI had previously participated in a study on the use of the STReSS2 as a Braille display [11]. Two were blind from birth and the others had lost their sight between the ages of 3 and 20.

4.1 Shape

4.1.1 Description

The first experiment evaluated the perception of simple geometric shapes displayed on the tactile display. The experiment was designed to also evaluate the effect of rendering method and shape size on identification performance. The experiment consisted in the identification of six shapes rendered using the three methods described in the previous section at two different scales. Fig. 4 illustrates the six shapes as well as the six variations of the circle shape. The experiment was conducted with subject group A.

Figure 4a: Illustration showing six geometric shapes (square, circle, equilateral triangle, right triangle, diamond and plus).
Figure 4b: Illustration showing 3 sets of small and big circles: circles filled with a grating texture, circle outlines made of dots, and circle outlines made of white noise texture.
Figure 4. Experimental stimulus for shape identification experiment: (a) six shapes and (b) example of the six variations of a shape.

The shapes were selected so as to fill a 2 or 3 cm square. Vibrating shapes were produced with a 1.5-mm-thick outline that exceeds the spacing between actuators and therefore prevents aliasing effects. An approximation of the outline was similarly produced with dots of 1-mm radius. The grating was used to present filled shapes since it is intended as an areal texture and does not produce clear outlines. A spatial wavelength of 2 mm was selected to produce a well-defined boundary while still feeling natural. A vertical grating was used since it appears to give the strongest illusion.

Since the identification strategy differs depending on the rendering method, each method was tested separately in randomized order. Each experiment began with a short training session in which subjects familiarized themselves with the shapes. Subjects were then asked to identify 48 shapes (6 shapes x 2 sizes x 4 iterations) presented in random order with a one-minute break at half-time. The shapes were presented for a maximum of 20 s. Answers were entered by typing a key corresponding to idealized shape outlines shown on-screen.

4.1.2 Results

The performance of the subjects is shown for each rendering method in Fig. 5. Subject A1 performed significantly worse than all others and is therefore excluded from analysis. The remaining 9 subjects correctly identified 76.0% of the shapes. Identification was performed in 14.2 s on average, with 17.4% of the trials going over the time limit.

Figure 5: Bar chart showing the performance of each subject as a function of the rendering method.
Figure 5. Shape identification performance as a function of the rendering method. Subjects are sorted by overall performance.

Table 2 gives the performance for all conditions. A repeated measures two-way ANOVA reveals no significant interaction between rendering method and scale on shape identification performance [F(2,16)=0.693, p=0.514]. The average performance was 85.2% for vibrating shapes, 78.0% for dotted shapes and 64.8% for grating-textured shapes. The difference in performance was significant between grating rendering and both dot rendering (t=-2.489, p<0.05) and vibration rendering (t=-5.335, p<0.05). The difference between dotted shapes and vibrating shapes was not significant (t=-1.167, p=0.277). Five subjects performed better with vibration, three with dots and one equally well with both. Seven of the nine subjects expressed a preference for the rendering method with which they performed best.


Table 2. Shape identification performance (%) as a function of shape, scale and rendering method (G=grating, D=dot, V=vibration)

Performance was also affected by the scale of the shapes (t=-2.981, p<0.05). 79.8% of large shapes were identified correctly compared with 72.2% of small shapes. Overall, the best performance was obtained with large and small vibrating shapes (88.9% and 81.5%) followed by large dotted shapes (80.6%). Performance also varied with the shape displayed (see Table 3). Performance dropped from 85.2% for the right triangle to 64.8% for the plus sign. Asymmetries are also visible, notably between plus and diamond, and diamond and circle.


Table 3. Distribution of answers (%) in shape identification experiment.

4.2 Grating Spatial Frequency

4.2.1 Description

The second experiment was conducted to determine the difference in spatial wavelength necessary to differentiate and scale gratings. The experiment consisted in the identification of the vertical grating with the highest spatial frequency among two gratings shown side by side. Fig. 6 illustrates the stimulus used. The experiment was conducted at the beginning of subject group B's experimental sessions.

Figure 6: Illustration showing a square with a fine grating pattern on the left and a square with a coarse grating pattern on the right.
Figure 6. Experimental stimulus for grating spatial frequency comparison experiment (shown with wavelengths of 3 mm and 6 mm).

The gratings were separated by a 1-cm-wide blank space so that the tactile display never touched both gratings at once. The spatial wavelength of the gratings was varied between 1.0 and 6.0 mm in 0.5 mm increments. Each experiment began with a short familiarization session in which various pairs of gratings were shown. Subjects were then asked to identify the grating with the highest spatial frequency in 110 randomized trials (once per non-identical ordered pair of the 11 wavelengths). The number of trials therefore decreased linearly from 20 for differences in wavelength of 0.5 mm down to 2 for differences of 5.0 mm (n = 22 - 4Δ). The sensation was presented for a maximum of 10 s. Answers were entered with the keyboard. Subjects wore sound-blocking earphones.

4.2.2 Results

Subject B9 performed far worse than all others (54.5% compared with 91.3±2.7%) and is therefore excluded from analysis. Four trials were also rejected because they resulted from accidental key presses (duration less than 0.5 s). The performance of the remaining 9 subjects is shown in Fig. 7 as a function of the difference in spatial wavelength. The success rate gradually increases from 74.4% at 0.5 mm to near perfection at and above 3.0 mm. The trial duration follows a similar pattern, gradually decreasing from 6.5 s at 0.5 mm to 3.1 s at 5.0 mm. Only 5.4% of trials extended past the 10 s time limit.

Figure 7: Bar charts showing the average success rate and trial duration as a function of the difference in wavelengths.
Figure 7. Percentage of correct answers and average trial duration as a function of the difference in wavelength (mm) in the grating frequency comparison experiment. The standard deviation across subjects is shown as an error bar.

4.3 Grating Orientation (Fine)

4.3.1 Description

This experiment evaluated the subjects' ability to perceive the orientation of virtual gratings. The experiment was designed to also evaluate the effect of spatial frequency on orientation judgments. The experiment consisted in the identification of six orientations (0°, ±30°, ±60° and 90°) at three different spatial wavelengths (4 mm, 6 mm and 8 mm). Fig. 8 illustrates the grating orientations and spatial frequencies. This experiment was conducted in the second part of group B's experimental session.

Figure 8a: Illustration of the grating texture at 6 different orientations.
Figure 8b: Illustration of the grating texture at 3 different spatial frequencies.
Figure 8. (a) Grating orientations and (b) spatial wavelengths used during the fine orientation identification experiment.

Each experiment began with a short familiarization session during which subjects were exposed to the different grating orientations. Subjects were then asked to identify the orientation of 90 gratings (6 orientations x 3 spatial frequencies x 5 iterations) presented in randomized order with a 2-minute break at half-time. The gratings were presented for a maximum of 10 s. Subjects wore sound-blocking earphones and were asked not to use diagonal motion to explore the virtual grating. This directive was given so that diagonal orientations would be identified by tactile motion on the fingertip rather than by finding the direction of motion with the weakest sensation. Answers were entered by typing a key corresponding to idealized grating representations shown on-screen.

4.3.2 Results

One trial was rejected because it resulted from an accidental key press. The performance of subjects at identifying orientation is shown in Fig. 9. Orientation was identified correctly 46.1% of the time. Trials lasted 8 s on average, with 25% extending past the 10 s time limit. Horizontal and vertical gratings were identified more easily (76.0% and 60.6%) than diagonal gratings (35.0%). Trial duration similarly dropped from 8.6 s for diagonal gratings to 6.7 s for horizontal and vertical gratings.

Figure 9: Gratings were identified correctly 76.0±19.7% of the time at 0°, 44.0±15.5% at 30°, 26.0±12.8% at 60°, 60.6±34.9% at 90°, 32.7±19.2% at -60°, 37.3±18.9% at -30°.
Figure 9. Performance at identifying fine grating orientations. The standard deviation across subjects is shown as an error bar.

The distribution of answers for each orientation is shown in Fig. 10. The shape of the response distribution is similar for 30° and -30° orientations, showing a tendency to answer correctly or otherwise to select any other diagonal orientation. Similarly, subjects tended to select the correct sign for ±60° gratings but appeared unable to distinguish between 30° and 60°. ±60° gratings were also often confused with vertical gratings.

Figure 10. Distribution of answers in the fine grating orientation identification experiment.

There was a statistically significant difference in performance between 4-mm and 6-mm gratings (t=-3.279, p<0.05), but not between 4-mm and 8-mm (t=-1.111, p=0.295) or 6-mm and 8-mm (t=-1.495, p=0.169). The orientation of 4-mm, 6-mm and 8-mm gratings was correctly identified 41.8%, 50.7% and 45.7% of the time respectively.

4.4 Grating Orientation (Coarse)

4.4.1 Description

This follow-up experiment repeated the previous experiment with an easier task in order to better understand where the perceptual limit lies when judging grating orientation. The number of orientations was reduced to four (0°, 90° and ±45°) and the spatial wavelength was set to the best value found in the previous experiment (6 mm). The maximum trial duration was extended to 15 s and subjects were allowed to move diagonally. Strategies to accomplish the task were explained during the training session. Subjects were asked to identify the orientation of 40 gratings (4 orientations x 10 iterations) presented in randomized order. The experiment was conducted at the beginning of group C's experimental session.

4.4.2 Results

Subject C1 misunderstood the task and is excluded from analysis. The performance of the remaining subjects is shown in Fig. 11. 0° and 90° gratings were identified correctly 88% of the time, while +45° and -45° gratings were identified correctly 87% and 85% of the time. Trial duration was 8.8 s on average, with 14.2% of trials extending past the 15 s limit. The confusion matrix shows that vertical and horizontal gratings were rarely confused for one another (Table 4).

Figure 11. Performance at identifying coarse grating orientations. Subjects are sorted by overall performance.


Table 4. Distribution of answers in coarse grating orientation identification experiment.

4.5 Tactile Icons

4.5.1 Description

The final experiment examined the perception of tactile icons composed of vibrating shapes filled with a grating texture. The experiment consisted in the identification of the shape (circle, square, inverted triangle or right triangle), grating orientation (vertical or horizontal) and grating frequency (high or low) of tactile icons. The tactile icons used are illustrated in Fig. 12. The experiment was conducted during the second part of group C's experimental session and repeated with the 6 visually-impaired subjects of group VI.

Figure 12a: Illustration showing the outline of a circle, a square, an equilateral triangle and a right triangle made of a white noise texture.
Figure 12b: Illustration showing four texture patches made of fine and coarse gratings at the horizontal and vertical orientation.
Figure 12c: Illustration showing a circle made of a white-noise outline filled with a coarse horizontal grating.
Figure 12. Stimuli used in the tactile icon identification experiment: (a) four shapes, (b) four textures and (c) example of an icon.

The four shapes were selected based on their identifiability in the shape perception experiment. The vibration rendering method was selected because it yielded the best results in that experiment and because it provides greater contrast with gratings than dotted outlines. Shapes were drawn at the larger 3-cm scale that also produced the best results in the experiment. Larger shapes also increase the size of the textured area and facilitate texture identification. Vertical and horizontal texture orientations were selected because they appear to be most easily identified, and rarely confused for one another. The spatial wavelengths were selected as far apart as possible without compromising orientation judgments. The values selected (2 mm and 6 mm) were sufficiently distinctive to be easily identified when maintaining a regular exploration speed.

The experiment began with a familiarization session lasting approximately 10 minutes during which subjects were shown the different icons and trained to judge their varying characteristics. The experiment consisted of the identification of 48 icons (4 shapes x 2 grating orientations x 2 grating frequencies x 3 iterations) presented in randomized order, with a 2-minute break after each third of the trials. Icons were presented for a maximum of 40 s. Subjects identified the tactile patterns by making three selections on a modified numeric keypad. The available answers were shown to sighted subjects as symbolic illustrations on-screen. The visually impaired subjects were given a keypad with equivalent patterns made of thick solder wire glued to the keys; their selections were confirmed by playing recorded speech after each key press. In both cases, subjects were allowed to modify an answer once entered if they felt it was mistyped or if they revised their judgment. Subjects wore sound-blocking earphones, except subject VI4, who did not wear them for a third of the experiment. The results are unlikely to have been affected since audio cues were barely audible and masked by ambient noise.
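The 48-trial protocol above is a full factorial design repeated three times and shuffled. A minimal sketch of how such a trial sequence could be generated (variable names are illustrative, not from the paper):

```python
import itertools
import random

shapes = ["circle", "square", "inverted_triangle", "right_triangle"]
orientations = ["vertical", "horizontal"]
wavelengths_mm = [2, 6]  # fine and coarse gratings

# Full factorial design (4 x 2 x 2 = 16 icons), three iterations each, shuffled.
trials = list(itertools.product(shapes, orientations, wavelengths_mm)) * 3
random.shuffle(trials)

# Split into thirds, with a break scheduled after each block as in the protocol.
blocks = [trials[i:i + 16] for i in range(0, len(trials), 16)]
```

Shuffling the full repeated design, rather than sampling with replacement, guarantees that every icon appears exactly three times.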

4.5.2 Results

Fig. 13 shows the percentage of correctly identified shapes, grating orientations, grating frequencies and icons (all three parameters combined). Fig. 14 shows the average trial duration for each subject.

Figure 13. Performance in tactile icon experiment for sighted subjects (group C) and visually-impaired subjects (group VI). Subjects are sorted by overall performance.

Figure 14. Mean trial duration in tactile icon experiment. Subjects are sorted by performance.

There was no statistically significant difference between the performance of the sighted and visually impaired subjects. Sighted subjects correctly identified 88.5% of the shapes, 95.8% of the grating orientations and 88.8% of the grating frequencies, compared with 87.5%, 94.4% and 77.1% for the visually impaired. All three parameters were correctly identified 78.5% and 67.7% of the time for sighted and visually impaired participants respectively. The average trial duration was 24.5 s, with 11.7% of trials extending past the time limit for the sighted, and 23.2 s with 17.4% of timeouts for the visually impaired. The results, however, are heavily skewed by the low performance of a single visually impaired subject (VI1).
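The paper reports the group percentages but not which test was used for the comparison. As a rough illustration of why the gap is not significant, a pooled two-proportion z-test on assumed trial counts (10 sighted subjects x 48 trials = 480; 6 visually impaired subjects x 48 trials = 288 — counts inferred from the protocol, not stated as such in the text) gives a statistic well inside the null region:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic; one simple test for comparing
    success rates (the paper does not say which test was actually used)."""
    x1, x2 = p1 * n1, p2 * n2
    p = (x1 + x2) / (n1 + n2)  # pooled success proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Shape identification: 88.5% of 480 assumed sighted trials
# vs 87.5% of 288 assumed visually impaired trials.
z = two_proportion_z(0.885, 480, 0.875, 288)
print(round(z, 2))  # → 0.41, well below the 1.96 threshold for p < .05
```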

5. Discussion

The first experiment showed that the tactile graphics system is capable of rendering simple geometric shapes of moderate size (2 or 3 cm) using all three rendering methods. Shapes rendered with dots or vibration were more readily identified than those rendered with gratings, perhaps because the latter were filled. Consistent with many subjects' ambivalence when asked to pick a favorite, there was no statistically significant difference in performance between dots and vibration. The larger of the two shape sizes was also easier to identify. Informal observations suggest that enlarging the shapes further could reduce performance by requiring more information to be integrated while following contours. Performance should similarly be expected to decrease at smaller scales as details become more difficult to discern. Performance could be further improved by tweaking parameters such as the spatial frequency of the gratings, the diameter of the dots or the line thickness of the vibrating contours. The salience of corners could also be increased using decorations, much like serifs in typography.

The second experiment showed that vertical gratings can be compared when their spatial wavelengths differ by as little as 0.5 mm. Simply discriminating gratings may be even easier. As the final experiment showed, however, identifying gratings by spatial frequency is a much harder task due to the difficulty of memorizing a reference for comparison. The dependence of the sensation on exploration velocity also increases the difficulty. Preliminary experiments further suggest that the task is more difficult for diagonal and, to a lesser extent, horizontal gratings. Similarly, small differences may be harder to detect when comparing two large wavelengths.
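Throughout the discussion, gratings are characterized by a spatial wavelength and an orientation. As a rough sketch of how such a virtual grating could be computed from finger position (the sinusoidal profile and the angle convention here are hypothetical illustrations, not the paper's actual rendering function):

```python
import math

def grating_deflection(x_mm, y_mm, wavelength_mm, orientation_deg):
    """Hypothetical normalized deflection (-1..1) for a virtual grating:
    a sinusoid over the finger position projected onto the grating's normal
    (orientation_deg gives the angle of the normal; 0 = vertical ridges)."""
    theta = math.radians(orientation_deg)
    # Distance travelled perpendicular to the ridges.
    s = x_mm * math.cos(theta) + y_mm * math.sin(theta)
    return math.sin(2 * math.pi * s / wavelength_mm)
```

Under this formulation, moving parallel to the ridges leaves `s` constant and produces no sensation, which is consistent with the observation that orientation is judged by finding the direction of least tactile variation.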

The third and fourth experiments showed that vertical and horizontal gratings can be identified. The third experiment showed that identifying diagonal orientations with a 30° resolution without diagonal movement is nearly impossible. Performance improved greatly with the reduction of the resolution to 45° and the use of diagonal movement. More experimentation will be necessary to determine whether a finer resolution could be obtained when diagonal movement is allowed. Results also suggest that discrimination may be reduced at high spatial frequencies. This is reasonable considering that high-frequency gratings feel less natural and that moving straight along their ridges is more difficult.

The fifth and final experiment showed that identifying the shape and texture of a set of 16 tactile icons is possible. This icon set could be expanded by using more shapes, by adding a diagonal grating orientation and by adding grating spatial frequencies. Training may become more important as the icon set grows, particularly for judging the spatial frequency of the texture. Dotted contours should also be investigated, although their low contrast with gratings would likely degrade performance.

In all cases, it is worth noting that subjects were given less than 15 minutes of training. Many felt that they performed better with time, notably for shape and icon identification. We can therefore expect performance to improve with practice. Many subjects, on the other hand, reported that their finger was getting numb over the course of the experiments. Trial durations were also kept short to ensure that judgments were made intuitively rather than by persistence. Performance would likely have improved had more time been given.

This preliminary work on the display of tactile graphics by lateral skin deformation relied mostly on sighted subjects. Visual feedback may have played a part in some experiments by, for example, allowing the identification of shapes through visual observation of finger movements. Subjects were allowed to see the apparatus to facilitate monitoring of the orientation of the tactile display. As the results with visually impaired subjects show, this precaution was not essential. This issue will be resolved in future work by mounting the display on a carrier capable of measuring its orientation and by adjusting the rendering algorithms accordingly. The workspace of the display will also be increased to allow more practical applications.

Previous work also indicates that variations in performance can be expected between sighted, late-blind and early-blind participants due to their varying degrees of visual and tactile experience [6]. The similar performance of sighted and visually impaired subjects on the icon identification experiment suggests, however, that differences may be minimal with the simple tasks performed here. This may be due to the non-visual nature of the tasks or to the novelty of the exploration strategies that had to be learned by all subjects alike to use the device effectively. The findings of the rest of the study may therefore extend to the visually impaired population. Nevertheless, future work will focus on confirming these findings with visually impaired users.

These experiments provide an early assessment of the possibilities offered by the STReSS2 for the rendering of virtual tactile graphics. Due to the large number of parameters involved, the experiments covered only a small fraction of the rendering possibilities. They nevertheless suggest that the device will be useful in a variety of applications of tactile graphics for visually impaired persons such as maps, graphs and diagrams. Shapes play an important part in tactile graphics by conveying both symbolic information (e.g. point symbols in a map) and more complex information (e.g. geometric concepts). Areal textures are also commonly used as a tactile color to highlight, label or otherwise mark part of a tactile graphic. The information gathered through these experiments provides a basis for using these drawing elements in tactile graphics produced by laterotactile displays. A tactile map could, for example, be constructed with vibrating political boundaries and regions colored with easily discriminable textures. Similarly, the tactile icons developed in the fifth experiment could be used as informative point symbols in a tactile map or diagram. The basic data obtained from these and other experiments will be used to design more complex tactile graphics appropriate for display by lateral skin deformation.

6. Conclusion

This article discussed three rendering methods capable of producing tactile graphics within a virtual surface by laterally deforming the skin. Five experiments were conducted to evaluate the system's ability to display basic elements for tactile graphics. The first experiment showed that simple shapes rendered with dots or vibration can be identified. The second showed that differences in spatial wavelength as small as 0.5 mm are sufficient to compare virtual gratings. The third and fourth experiments showed that determining the orientation of a virtual grating is possible within 45° if motion is not constrained. Finally, the fifth experiment showed that tactile icons composed of vibrating outlines filled with grating textures can be identified. The results obtained with visually impaired subjects on the final experiment suggest that the findings of the study are applicable to that user group. This work constitutes a first step toward the display of more complex tactile graphics in applications of relevance for visually impaired persons, such as tactile maps, diagrams and graphs.


Acknowledgments

The revision of the STReSS2 used in this work was designed and implemented by Qi Wang and Vincent Hayward. The experiments were approved by the Research Ethics Boards of McGill University and of the Centre for Interdisciplinary Research in Rehabilitation of Greater Montreal (CRIR). Subjects gave their informed consent before participating. The authors would like to thank Jérôme Pasquero for his help with the statistical analysis of the results and for discussions about this work. They would also like to thank Andrew Gosline for his help with the experimental apparatus. This work was motivated in part by discussions on tactile graphics with Nicole Trudeau, Aude Dufresne and Grégory Petit. Funding from the Fonds québécois de la recherche sur la nature et les technologies (PR-114110) is gratefully acknowledged. The visually impaired subjects were recruited through the Institut Nazareth et Louis-Braille (INLB) and the Typhlophile website.


References

[1] F. K. Aldrich and L. Sheppard. Graphicacy: The fourth 'r'? Primary Science Review, 64:8-11, 2000.

[2] M. Benali-Khoudja, C. Orange, F. Maingreaud, M. Hafez, A. Kheddar, and E. Pissaloux. Shape and direction perception using vital: A vibro-tactile interface. In Touch and Haptics Workshop (IROS 2004), 2004.

[3] M. Blades, S. Ungar, and C. Spencer. Map Use by Adults with Visual Impairments. Professional Geographer, 51(4):539-553, 1999.

[4] G. Campion, Q. Wang, and V. Hayward. The Pantograph Mk-II: a haptic instrument. In Proc. IROS 2005, pages 193-198, 2005.

[5] P. K. Edman. Tactile Graphics. AFB Press, New York, 1992.

[6] M. Heller. Picture and pattern perception in the sighted and the blind: the advantage of the late blind. Perception, 18:379-389, 1989.

[7] G. Jansson, I. Juhasz, and A. Cammilton. Reading virtual maps with a haptic mouse: Effects of some modifications of the tactile and audio-tactile information. British Journal of Visual Impairment, 24(2):60-66, May 2006.

[8] K. Kaczmarek and S. Haase. Pattern identification and perceived stimulus quality as a function of stimulation waveform on a fingertip-scanned electrotactile display. IEEE Trans. on Neural Systems and Rehabilitation Engineering, 11(1):9-16, 2003.

[9] K.-U. Kyung, M. Ahn, D.-S. Kwon, and M. Srinivasan. A compact broadband tactile display and its effectiveness in the display of tactile form. In Proc. World Haptics Conference 2005, pages 600-601, 18-20 March 2005.

[10] S. Landau, M. Russell, K. Gourgey, J. N. Erin, and J. Cowan. Use of the talking tactile tablet in mathematics testing. Journal of Visual Impairment & Blindness, 97(2):85, Feb. 2003.

[11] V. Levesque, J. Pasquero, and V. Hayward. Braille display by lateral skin deformation with the stress2 tactile transducer. In Proc. World Haptics Conference 2007, pages 115-120. IEEE, 2007.

[12] V. Levesque, J. Pasquero, V. Hayward, and M. Legault. Display of virtual braille dots by lateral skin deformation: feasibility study. ACM Transactions on Applied Perception, 2(2):132-149, 2005.

[13] J. G. Linvill and J. C. Bliss. A Direct Translation Reading Aid for the Blind. Proceedings of the IEEE, 54(1):40-51, 1966.

[14] J. Luk, J. Pasquero, S. Little, K. MacLean, V. Levesque, and V. Hayward. A role for haptics in mobile interaction: initial design using a handheld tactile display prototype. In Proc. CHI'06, pages 171-180, 2006.

[15] D. McCallum and S. Ungar. An introduction to the use of inkjet for tactile diagram production. British Journal of Visual Impairment, 21(2):73-77, May 2003.

[16] K. Moustakas, G. Nikolakis, K. Kostopoulos, D. Tzovaras, and M. Strintzis. Haptic rendering of visual data for the visually impaired. IEEE Multimedia, 14(1):62-72, Jan.-March 2007.

[17] P. Parente and G. Bishop. BATS: The Blind Audio Tactile Mapping System. In Proc. of the ACM Southeast Regional Conference, 2003.

[18] S. Shimada, M. Shinohara, Y. Shimizu, and M. Shimojo. An approach for direct manipulation by tactile modality for blind computer users: Development of the second trial production. In Proc. ICCHP 2006, pages 1039-1046, 2006.

[19] M. Shinohara, Y. Shimizu, and A. Mochizuki. Three-dimensional tactile display for the blind. IEEE Transactions on Rehabilitation Engineering, 6(3), 1998.

[20] C. Sjöström, H. Danielsson, C. Magnusson, and K. Rassmus-Gröhn. Phantom-based haptic line graphics for blind persons. Visual Impairment Research, 5(1):13-32, Apr. 2003.

[21] H. Tang and D. Beebe. A microfabricated electrostatic haptic display for persons with visual impairments. IEEE Transactions on Rehabilitation Engineering, 6(3), 1998.

[22] F. Vidal-Verdu and M. Hafez. Graphical tactile displays for visually-impaired people. IEEE Trans Neural Syst Rehabil Eng, 15(1):119-130, Mar 2007.

[23] P. Walsh and J. A. Gardner. TIGER, a new age of tactile text and graphics. In Proc. CSUN 2001, 2001.

[24] Q. Wang and V. Hayward. Compact, portable, modular, high-performance, distributed tactile transducer device based on lateral skin deformation. In Proc. HAPTICS-06, pages 67-72, 2006.

[25] Q. Wang, V. Levesque, J. Pasquero, and V. Hayward. A haptic memory game using the stress2 tactile display. In Proc. CHI'06, pages 271-274, 2006.

[26] W. Yu and S. Brewster. Comparing two haptic interfaces for multimodal graph rendering. In Proc. HAPTICS 2002, pages 3-9, 2002.