Tactile Graphics and Strategies for Non-Visual Seeing

by Steven Landau, President of Touch Graphics

Published in Thresholds 19, pp. 78-82.
MIT School of Architecture, 1999.

Introduction

For people who have difficulty seeing, acquiring spatial and pictorial information is a challenge, and in our image-laden world, access to pictures has increasingly become a requirement for full participation in communal life. In 1996, Touch Graphics was formed as a for-profit company to perform research and to develop a number of graphical tools intended for a blind and visually impaired audience. These products and systems employ "tactile graphic" materials as their central feature. This work has been done in partnership with, and builds on ten years of earlier research by, the Computer Center for Visually Impaired People at Baruch College, City University of New York.

Tactile graphics, in this case, refers to the presentation of spatial or pictorial material using textured and raised-line media: thin PVC sheets that have been color printed and then vacuum-thermoformed with shallow three-dimensional relief images. Tactile graphics are "looked at" with the user's fingertips, or by a combination of vision and touch. The products include maps of bus routes, subway systems, and interior transit facilities; free-standing public-access "talking" kiosks; and touchable computerized interfaces and interactive programming. For all of these products and systems, the driving motivation has been to explore the ways that the user acquires information through tactile means: the normal rules of graphic design are largely inapplicable here, and new techniques based on non-visual perception were developed by necessity. The following discussion describes this research and its products. It is of particular interest that in the course of this work, we have begun to identify a tactile aesthetic experience that may be comparable to what a sighted person feels when presented with satisfying visual materials. In addition to serving our constituency of blind and visually impaired people, we are interested in determining the qualitative differences between tactile and visual perception: we start from the premise that engaging this overlooked component of our sensory equipment can deepen spatial awareness for members of this underserved and growing segment of the population.


Maps

Originally developed as way-finding tools for complex urban transit systems, tactile graphic maps have evolved into a holistic system for presenting spatial information through non-visual means, one that has proved useful for a wide range of mapping applications. The primary challenges facing a tactile cartographer revolve around the requirement that dense information, suitable for presentation in a single print document, must be separated into several discrete "layers" that can be studied individually. In the example of the New York City Subway system, a single print map (fig. 1) provides information about geographical context; about the routes and interconnections of the 26 subway lines; about important intermodal hubs at which riders can transfer to commuter rail or suburban buses; and about the names of each of the subway stations, shown in sequence along each train route. Visual perception can discriminate each of the symbols shown on this map, and the sighted reader can mentally absorb information at several levels without becoming overwhelmed with data. For the visually impaired traveler, this material must be presented in a sequential fashion, where each scalar view is digested from the general to the specific. The goal for the tactile mapmaker is to allow the reader to accumulate information in a controlled way so as to construct a mental image of a complex system.


For our imaginary city known as Utopia, this means that a geographical overview (fig. 2) is shown first, to orient the user and to provide a first look at the landmasses and the disposition of the city vis-à-vis the surrounding watercourses. For some blind people, "seeing" this simple map becomes a profound life event: for someone who has never been confronted with a graphic map, developing an accurate mental picture of a place is difficult. Many blind people construct these pictures from experience and anecdotal information, and upon using a map for the first time, these ideas are either confirmed or shattered. This is an essential first step in coming to grips with complex spatial configurations, and it must be mastered before the next, more detailed graphic images are examined and true independence of travel can be achieved.

The next element in our sequence of map-types is the system overview (fig. 3). Here, all of the various subway routes are shown together, to give some indication of the extent of the system; to show it in reference to major landmarks, such as parks, the shoreline, and the city boundaries; and to locate key transportation nodes at which the passenger can link up to commuter rail or suburban buses. Typically, this map would not be brought along on an excursion: rather, it provides a general view of the network, which can be studied at home during trip planning.

When the blind or visually impaired transit rider sets out, he or she will normally carry one or more strip maps (fig. 4), illustrating the specifics of the subway routes that will be navigated en route to the destination. These compact booklets depict each station along a particular subway line in a linear fashion. Instead of providing a spatially accurate picture of the actual movement of the train through the city (as in the system overview), strip maps show the temporal progression of the subway as it passes through each station. Each of the 3" x 6" pages of these maps shows three stops along a straight line of travel, and stops are represented by event-markers, like beads on a necklace. Actual compass bearings and relative distances between stations are ignored. This map-type is particularly well suited to the blind traveler because of its focus on sequence: when a person cannot rely on instantaneous acquisition of spatial position and orientation, he must build understanding through a steady accumulation of discrete facts about the environment as they are confronted over a period of time. Not surprisingly, most sighted subway riders do exactly the same thing; when the visual connection to the above-ground world is severed, we all tend to count the number of stops (events) to our destination instead of trying to reckon the distance or direction of travel. Spatial relationships often become secondary to temporal considerations when we are deprived of vision.

The final components of the progression from the most general to the most specific are what are known, in the parlance of professionals who train blind travelers, as mobility maps. These show detailed views of important public places, such as long-distance bus and train stations, airports, and ferry terminals. Mobility maps can be used to show both the interiors of buildings (fig. 5) and their context within a neighborhood (fig. 6). These documents illustrate, with scalar and directional accuracy, the physical realities of complicated environments. For the traveler, they provide valuable information regarding the layout of these facilities, including the locations of building entrances, ticket and information counters, and departure platforms. They can also show vehicular traffic patterns that must be negotiated to reach the building itself.

Through the combination of the four map products discussed above, a trained tactile map reader can independently use public transportation to perform normal life tasks like getting to school or the office, or traveling for recreation. Beyond the primary requirement of separating information into digestible packages, our experience in creating this system has led to the development of several other rules of thumb for producing images intended for the non-visual acquisition of spatial information.


Talking Kiosks

With the support of a grant from the Federal Transit Administration, the world's first Talking Kiosk was installed for a one-year demonstration in 1996. An updated, permanent version, sponsored by MTA/Long Island Railroad (fig. 7), was unveiled in a public ceremony in July of 1999. Both Talking Kiosks were installed in the Long Island Railroad's New York terminal facility in Penn Station. These devices provide "way-finding" information to the general public in a format that is fully accessible to a blind or visually impaired traveler. They employ a simple yet powerful combination of audio and tactile feedback in response to the user's queries. In practice, a traveler uses the Talking Kiosk as described below.

The essential difference between maps that are carried and those mounted on fixed-position kiosks is that the latter have the advantage of an established station point, or 'you are here' marker. This allows the user to relate a destination's location in space accurately to his or her current position. On print maps, 'you are here' is usually designated with a large symbol that is easy to find at a glance. For tactile maps, this is even more important; it must be possible to locate the station-point symbol with a quick scan of the fingertips over the surface. Having to search each time for the home position makes for a choppy and probably frustrating experience. At the Lighthouse in New York City, fixed-position tactile maps at the elevator lobby of each of the building's 15 floors identify major destinations on that floor in shallow relief on plastic maps, but the 'you are here' is a large steel ball bearing. The change in materials (texture and temperature!) is an immediate giveaway that this is the most important place on the map.

At the Talking Kiosk, with the benefit of spoken words to enhance the map's ability to communicate, 'you are here' is easily found. In the first Talking Kiosk, the user could ask (via keypad picks) to have his hand guided to the Kiosk in the following fashion. The Kiosk says: "Touch the map anywhere now and the map's narrator will guide your hand to the Kiosk." When the user touches the map, the Kiosk says, "Go right." The user moves his hand to the right and touches again. He continues to adjust his finger's position as the Kiosk coaches, "Go up...Go up...Go left...Go down..." and when the user's finger touches the 'you are here' symbol, there is a little bell and he hears the congratulatory confirmation, "You've found the Kiosk." Once the user has identified his current location with absolute confidence, it becomes possible to inspect the map (usually with both hands, one remaining on the Kiosk symbol and the other roaming) to determine the locations of various places in relation to the Kiosk. This feature was found to be easy to use and very helpful, so in the second Kiosk it was expanded to allow any destination to be located through directed narration as described above.
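The coaching loop described above reduces to a simple feedback rule: compare each touch to the fixed target position and speak one corrective direction at a time. The following is a minimal sketch of that logic; the coordinates, tolerance, and function name are invented for illustration, not taken from the Kiosk's actual software.

```python
# Sketch of the hand-guidance interaction, with invented coordinates.
# Touch positions are (x, y) pairs from the touch surface; y is assumed
# to increase toward the bottom of the map, as in most touch hardware.

TARGET = (412, 305)   # hypothetical position of the 'you are here' symbol
TOLERANCE = 10        # how close a touch must land to count as a hit

def guidance_prompt(touch):
    """Return the phrase the Kiosk speaks in response to one touch."""
    dx = TARGET[0] - touch[0]
    dy = TARGET[1] - touch[1]
    if abs(dx) <= TOLERANCE and abs(dy) <= TOLERANCE:
        return "You've found the Kiosk"   # and ring the little bell
    # Correct the larger error first, one direction per touch, matching
    # the "Go right... Go up..." coaching described in the text.
    if abs(dx) >= abs(dy):
        return "Go right" if dx > 0 else "Go left"
    return "Go up" if dy < 0 else "Go down"

print(guidance_prompt((150, 305)))   # -> "Go right"
print(guidance_prompt((412, 300)))   # -> "You've found the Kiosk"
```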

It is our ambition to promote an interconnected network of Talking Kiosks, so that a person who has difficulty seeing will know, upon entering a complicated public space, to listen for the bird song that marks a source of accessible travel information. The confidence that this generates may convince some blind or visually impaired people to venture out of their homes and to participate more fully in those mainstream activities that require access to transportation.
Development of the Talking Kiosks helped us to further refine what it means to see without vision. A sighted person acquires information useful for successfully negotiating his environment using his eyes; upon scanning a new place, he instantly creates a spatial model that can be consulted at will in order to inform decisions. Without the benefit of vision, a similar process occurs, except that the acquisition of information takes place over a period of time. And while vision is clearly a more efficient method of information-gathering than non-visual means (tactile, auditory, olfactory), there are distinct advantages to the latter. For example, sighted people know that they cannot see through walls or around corners, and they tend to defer consideration of what is beyond the visual sphere until it is confronted in person. On the other hand, a blind person who is adept at non-visual seeing feels no such compulsion to limit the range of his environmental knowledge to what can be seen with the eyes. His mental model might well extend much further afield, and make up in range what it lacks in local detail.
Another obvious limitation of sight is that it trains us to distrust our other senses: until we see something, we withhold final judgment. Our noses might indicate that there is a bakery 20 feet away on the left, but we will probably not be sure that it's there until we read the sign outside or see the display within. The common wisdom that blind people have super-normal powers of hearing and smell is not physiologically accurate. However, blind people typically learn to rely much more heavily on the other senses, and can use them more successfully for identification and spatial imaging.

Talking Tactile Tablet and the Tactile Graphical User Interface

When personal computers were first introduced, there was great optimism in the blindness community that their availability would lead to greater independence and employability, since speech synthesis is an effective means of accessing text-based information that appears on a video monitor. However, now that Graphical User Interfaces like Microsoft's Windows are ubiquitous, it has become clear that the early promise of the PC as an enabling technology is not guaranteed; the ability to point-and-click and drag-and-drop with a mouse requires good visual acuity and motor skills. Although most programs written for Windows claim to be accessible through a combination of "keyboard shortcuts" and screen-reader software, this is not always the case. Especially when spatial understanding of the screen layout is a principal feature of an application's function, developing a reasonable level of facility can be difficult or impossible. In Microsoft's Excel spreadsheet program, for example, it is very difficult to create and read spreadsheets without vision. The structure of these documents, which usually consist of rows and columns of cells, each containing data, is often so complex that it is not realistic to expect the average visually impaired user to develop an adequate mental model without some assistance. And since programs like Excel have become almost indispensable to success in some careers (like purchasing, real estate development, or personnel), people who have difficulty seeing are effectively locked out.

When the first Talking Kiosk project was completed, it became apparent that the concept of combining audio output with computerized tactile graphic images could, if properly deployed, help to mitigate the barrier created by the Graphical User Interface. The Talking Tactile Tablet initiative was born out of the recognition that a non-visual audience could work with graphic images if they were presented tactilely; mouse-type manipulations could be accomplished as long as the user could feel the shapes and hear their identities before making a selection. With funding from the Department of Education through the Small Business Innovation Research Program, Touch Graphics created the Talking Tactile Tablet. The six-month project led to the design and construction of a prototype of the device and associated programming and tactile media for its use.

The Talking Tactile Tablet (fig. 8) is a low-cost computer peripheral; it consists of a compact enclosure (14" x 10" x 1 1/2") which houses a touch-sensitive surface, a simple apparatus for holding an 8 1/2" x 11" tactile graphic overlay motionless against the touch surface, and hardware for establishing RS-232 serial communications between the device and a host computer via a single cable. The computer interprets touches on the tactile graphic overlay as mouse picks: the x,y coordinates of the tactile object that has been touched become information available to software running on the host computer. By assigning identities to rectangular areas defined by their upper-left and lower-right corners, it is possible to have the computer speak the name of any feature of a tactile drawing that has been sensitized in this fashion, and for which an appropriately named digital audio file has been prepared. Additionally, through the use of an authoring system like Macromedia's Director, elaborate interactive programming can be created that uses the TTT as a pointing device, the tactile overlay as a static "video" image, and both pre-recorded and synthetic voice and sound effects as output. Using this system, the potential for rich, multimedia computer applications that can be competently run by blind and visually impaired users appears to be limitless.
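In present-day terms, the sensitizing scheme just described amounts to hit-testing each touch against a list of named rectangles, each paired with an audio file. The sketch below illustrates the idea; all names, coordinates, and file names are hypothetical, since the production software was authored in tools such as Macromedia Director rather than written this way.

```python
# Illustrative hit-testing for sensitized regions of a tactile overlay.
# Every name, coordinate, and file name here is invented for the example.

from dataclasses import dataclass

@dataclass
class Hotspot:
    name: str        # spoken identity of the tactile feature
    left: int        # upper-left corner, in touch-surface units
    top: int
    right: int       # lower-right corner
    bottom: int
    audio_file: str  # digital audio file played when the region is touched

    def contains(self, x, y):
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def resolve_touch(hotspots, x, y):
    """Return the audio file for the first hotspot containing the touch."""
    for spot in hotspots:
        if spot.contains(x, y):
            return spot.audio_file
    return None  # touch fell outside every sensitized region

overlay = [Hotspot("Ticket counter", 120, 80, 260, 140, "ticket.wav")]
print(resolve_touch(overlay, 150, 100))  # -> "ticket.wav"
```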

To demonstrate the virtues of this system, we designed a Tactile Graphical User Interface (TGUI; see fig. 9), with standardized control icons and format for use with the TTT, that emulates the model of Windows computing. A start-up routine and a sample application were developed for use with the system. The start-up routine serves as a master control program and tutorial for initializing the device every time a new tactile drawing is placed on the device's "easel". Most basically, this is a two-step process: once the drawing has been fixed in position, the user hears a recorded voice instructing him to press "set-up" dots in three corners. By this means, the user communicates to the computer a correction factor that represents the actual displacement of the mounted tactile drawing as compared to an ideal position. All subsequent touches are then interpreted in the context of the overlay's actual position. This calibration process is important because it is unreasonable to expect a totally blind user to place the drawing on the device with perfect accuracy. Next, the user runs his finger along a series of ten small boxes at the top of each sheet. He finds that three of these boxes have small raised dots in them, and proceeds to press each of them in response to verbal cues. By pressing these dots, the user communicates to the computer which of many possible applications is being run. The user hears the name of the application, and at this point the start-up routine relinquishes control of the system and invokes the appropriate executable file for use with the selected overlay.
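One plausible reading of the correction factor, sketched below, is a transform fitted from the three registration touches: the three set-up dots have known ideal positions, so the touches that actually register determine a mapping from raw coordinates to corrected ones. The article specifies only a displacement correction, so the full affine fit and all coordinates here are assumptions for illustration.

```python
# Sketch of the three-dot calibration step. The ideal dot positions and
# the use of a full affine fit are assumptions; the article describes
# only a "correction factor" for the overlay's displacement.

import numpy as np

# Ideal positions of the three "set-up" dots on a perfectly placed sheet.
IDEAL = np.array([(20.0, 20.0), (820.0, 20.0), (20.0, 620.0)])

def fit_correction(measured):
    """Map raw touch coordinates onto the overlay's ideal coordinates.

    measured: the three (x, y) touches registered for the dots, in the
    same order as IDEAL. Returns a function that corrects later touches.
    """
    # Solve [x y 1] @ A = [x' y'] for the 3x2 affine matrix A.
    src = np.hstack([np.asarray(measured, float), np.ones((3, 1))])
    A = np.linalg.solve(src, IDEAL)
    return lambda x, y: tuple(np.array([x, y, 1.0]) @ A)

# A sheet mounted 5 units right of and 3 units below the ideal position:
correct = fit_correction([(25, 23), (825, 23), (25, 623)])
print(correct(150, 100))  # -> approximately (145.0, 97.0)
```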

Our sample application was called Match Game; it is intended for children ages seven and older. Each time the game is played, the computer randomly assigns animal sound files to each of the 64 squares in an eight-by-eight tactile grid. Players take turns choosing two squares to hear their sounds. Each square has an address determined by its position in an alphanumeric matrix: for example, the square in the upper-left corner of the grid is labeled "A1", and the one in the third row and fourth column is "C4". The object is to find pairs of matching sounds. The computer keeps score and controls the flow of the game.
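As a sketch of the game logic only (the sound file names and helper functions below are invented, not taken from the production software), the random assignment and matching rule might look like this:

```python
# Toy sketch of the Match Game logic described above. 32 sound pairs are
# shuffled onto an 8x8 grid whose squares are addressed "A1" through "H8".

import random

SOUNDS = [f"animal_{i:02d}.wav" for i in range(32)]  # 32 distinct sounds

def new_board():
    """Randomly assign each animal sound to two of the 64 grid squares."""
    files = SOUNDS * 2            # every sound appears exactly twice
    random.shuffle(files)
    squares = [f"{row}{col}" for row in "ABCDEFGH" for col in range(1, 9)]
    return dict(zip(squares, files))

def is_match(board, first, second):
    """True if two distinct squares hide the same animal sound."""
    return first != second and board[first] == board[second]

board = new_board()
print(board["A1"], board["C4"])     # the sounds a player would hear
print(is_match(board, "A1", "C4"))  # did the two picks match?
```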
Although the game is simple in concept, both the sample application and the start-up routine successfully demonstrated the viability of the system.

Our ambition is to create a library of interactive computer applications for use with the Talking Tactile Tablet; once a critical mass of software titles is available, schools, libraries, and individuals will be more likely to invest in the hardware. Some future ideas for applications include travel guides, crossword puzzles, trigonometry curricula, spreadsheets, and simulators for learning orientation and mobility skills. We hope to establish the Tactile Graphical User Interface as a standard design protocol for anyone who wants to develop new computer applications that rely on non-visual perception through audio/tactile means.

Conclusions

We are interested in discovering ways in which the tactile viewing experience can be made more pleasurable; just as in print media, work that is aesthetically satisfying and enjoyable is attended to and digested more thoroughly than material that has been designed without regard for graphic quality. The characteristics of an aesthetic tactile experience, however, have been shown to be quite different from their print equivalents.

As technology has advanced, the means of creating and distributing print materials have become much cheaper. Graphic art techniques have adapted to this pace, and the end result may be that our ability to look carefully and attentively is deteriorating. These days, we are presented with great quantities of visual images that we feel compelled to consume in an increasingly scanned and necessarily cursory manner. The highest value is put on designs that attract our attention amidst an expanding universe of competing and distracting voices; less care is devoted to richness of content and thoughtfulness of layout after the initial impact has been made. Tactile graphics must pursue an alternate strategy, since it is almost impossible to make an instant impact: the reader using his or her fingertips can only examine one region or point at a time, slowly accumulating fragments to construct a mental model of the complete picture. Furthermore, there is no tactile equivalent to the common practices of web surfing, remote-control clicking, or magazine page flipping. The tactile reader must spend at least a few minutes looking before the whole image or design scheme begins to emerge. Tactile images that are organized to allow for the identification of discrete bits of content presented in a rational hierarchy usually offer the most satisfying and aesthetic experiences. The necessarily slow and deliberate pace of tactile looking might even be thought of as an antidote to the superficial characteristics of a contemporary media culture that offers instant gratification but does not require much of a commitment on the part of the viewer.

