1. INTRODUCTION
Sentient computing systems, which can change their behaviour based on a model of the environment they construct using sensor data, may hold the key to managing tomorrow's device-rich mobile networks. At AT&T Laboratories Cambridge, we have built a system that uses sensors to update a model of the real world. We designed the model's terms (object positions, descriptions, state, and so forth) to be immediately familiar to users; thus, the model describes the world much as users themselves would. We can use this model to write programs that react to changes in the environment according to the user's preferences. We call this sentient computing because the applications appear to share the user's perception of the environment.

Treating the current state of the environment as common ground between computers and users provides new ways of interacting with information systems. A sentient computing system doesn't need to be intelligent or capable of forming new concepts about the world; it only needs to act as though its perceptions duplicate the user's. Earlier works described a prototype of this system and stated our intention to deploy it on a large scale.

As computer users become increasingly mobile, and the diversity of devices with which they interact increases, so the overhead of configuring and personalizing these systems increases. A natural solution to this problem would be to create devices and applications that appear to cooperate with users, reacting as though they are aware of the context and manner in which they are being used, and reconfiguring themselves appropriately.
Figure 1.1: World Model

The figure shows what we are trying to achieve. While people can observe and act on the environment directly, application programs observe and act on it via the world model, which is kept up to date using sensors and provides an interface to various actuators. If the terms used by the model are natural enough, people can interpret their perceptions of the world in terms of the model, and it appears to them as though they and the computer programs share a perception of the real world.

Our project implemented the sentient computing system's model of the world as a set of software objects that correspond to real-world objects. Objects in the model contain up-to-date information about the locations and state of the corresponding real-world objects. Ascertaining object positions with near-human levels of accuracy requires a specially designed sensor system.
location update rate across each radio cell of 150 updates per second. The signals from simultaneously triggered Bats are encoded using a differential-phase modulation scheme, allowing receivers to distinguish among them.
sleep state, from which they wake only when they move again. This design saves power and frees up location-update opportunities for allocation to other Bats. Taken together, the Bat's low-power features result in a battery lifetime of around 12 months.
We can further increase the fidelity of the model by determining whether someone appears to be seated. The system looks for periods of constant-velocity motion above a certain speed. Seeing such motion, it assumes that the person is walking forward and is upright, and records the floor-to-Bat height. If the Bat ever drops significantly below this level, we infer that the user has taken a seat. Because the user's body shadows the Bat's ultrasonic signal, only receivers in front of the person will detect signals from the Bat (assuming that the person wears the Bat somewhere on the front of the body). Therefore, we can estimate the person's orientation from the Bat's position and the pattern of receivers that detected it. Figure 3 shows a 3D visualization of the model, which indicates how the sentient computing system's view of the world accurately matches the environment's real state.
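As a rough illustration of this seated-state heuristic, the sketch below tracks the floor-to-Bat height during walking and flags sitting when the height drops. The thresholds are illustrative assumptions, not the system's actual calibration values.

```python
# Hypothetical sketch of the seated-state heuristic; thresholds are
# illustrative assumptions, not the system's calibration values.
WALK_SPEED = 0.5   # m/s: minimum horizontal speed treated as upright walking
SIT_DROP = 0.35    # m: drop below standing height that implies sitting

class SeatedDetector:
    def __init__(self):
        self.standing_height = None  # floor-to-Bat height recorded while upright
        self.seated = False

    def update(self, position, velocity):
        x, y, z = position
        speed = (velocity[0] ** 2 + velocity[1] ** 2) ** 0.5
        if speed > WALK_SPEED:
            # Constant-velocity motion above threshold: assume upright walking,
            # so record the current floor-to-Bat height as the standing height.
            self.standing_height = z
            self.seated = False
        elif self.standing_height is not None:
            # Stationary: a significant drop below standing height implies sitting.
            self.seated = (self.standing_height - z) > SIT_DROP
        return self.seated
```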
In outdoor spaces, people might move purposefully towards a destination or browse more casually, whereas in office environments an occupant might primarily travel between their desk and the provided amenities. The acceptability of any location sensing is also environment-dependent: users who participate at work may well be loath to wear marker tags (or even to be tracked at all) at home. Systems operating in outdoor environments must cope with extremes of temperature, lighting, and humidity, whereas indoor environments benefit from some measure of protection from the weather. Also, useful infrastructure (such as power and network wiring) that a location system might exploit is more common within buildings than in outdoor environments.
The Bat also contains a pair of buttons and a beeper, which can be used to provide context-sensitive control and feedback.
Figure 3.1. The Bat, showing from right to left: two copper coiled antennae, the radio transmitter module (in gold, receiver underneath), the AA battery (the large white and green object), and two ultrasonic transmitters. The total length of the device is about 2.5 inches.

3.1.1 ACTIVE BAT

The ultrasonic location system is based on the principle of trilateration: position finding by measurement of distances (the better-known principle of triangulation refers to position finding by measurement of angles). A short pulse of ultrasound is emitted from a transmitter (a Bat) attached to the object to be located, and we measure the times-of-flight of the pulse to receivers mounted at known points on the ceiling. The speed of sound in air is known, so we can calculate the distances from the Bat to each receiver; given three
or more such distances, we have enough information to determine the 3D position of the Bat (and hence that of the object on which it is mounted). By finding the relative positions of two or more Bats attached to an object, we can calculate its orientation. Furthermore, we can deduce some information about the direction in which a person is facing, even if they carry only a single Bat, by analysing the pattern of receivers that detected ultrasonic signals from that transmitter and the strength of signal they detected.
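The trilateration step can be sketched as a least-squares solve over the sphere equations. The receiver layout and solver below are illustrative assumptions; note in particular that receivers mounted at a single ceiling height leave the vertical coordinate ambiguous in this linearised form, so the real system must also exploit the constraint that the Bat lies below the ceiling.

```python
import numpy as np

def locate_bat(receivers, times_of_flight, speed_of_sound=343.0):
    """Estimate a Bat's 3D position from pulse times-of-flight to receivers
    mounted at known points. receivers: n positions in metres (n >= 4);
    times_of_flight: n values in seconds."""
    r = np.asarray(receivers, dtype=float)
    d = speed_of_sound * np.asarray(times_of_flight, dtype=float)
    # Subtract the first sphere equation |p - r_0|^2 = d_0^2 from the others,
    # giving the linear system 2 (r_i - r_0) . p = (d_0^2 - d_i^2) + (|r_i|^2 - |r_0|^2).
    A = 2.0 * (r[1:] - r[0])
    b = (d[0] ** 2 - d[1:] ** 2) + (np.sum(r[1:] ** 2, axis=1) - np.sum(r[0] ** 2))
    # Caveat: if every receiver shares one ceiling height, A's z-column is zero
    # and the height is ambiguous here; the real system also uses the fact
    # that the Bat lies below the ceiling.
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```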
Firstly, by emitting a signal only every 15 seconds, the mean current consumption can be kept very small, with the result that badge-sized batteries will last for about one year. Secondly, it is a requirement that several people in the same locality be detectable by the system. Because the signals have a duration of only one-tenth of a second, there is approximately a 2/150 chance that two signals will collide when two badges are placed in the same location. For a small number of people, there is a good probability they will all be detected. Even so, in order to improve this chance, the beacon oscillator has been deliberately designed around low-tolerance components; the components used for the beacon oscillator have a 10% tolerance rating. The Active Badge also incorporates a light-dependent component that, when dark, turns the badge off to conserve battery life. Reduced lighting also increases the period of the beacon signal to a time greater than 15 seconds. In ambient lighting conditions in a room, this effect only slightly modifies the period, but it is another factor that ensures synchronized badges will not stay synchronized very long. If the badge is placed in a drawer out of office hours, at weekends, and during vacations, the effective lifetime of the batteries is increased by a factor of 4. Note that the more obvious solution of a manual switch was considered a bad idea, as a badge user would be likely to forget to turn it on. Other options for switching the device on included a tilt switch and an accelerometer, although the size limitation of a badge precluded using them in the initial experimental system. A disadvantage of an infrequent signal from the badge is that the location of a badge is known, at best, only to a 15-second time window. However, because a person generally moves relatively slowly in an office building, the information the Active Badge system provides is still very accurate.
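The 2/150 figure and the resulting chance that everyone in a room is detected can be checked with a small calculation. This is an independent-pairs approximation sketched for illustration, not the system's published analysis.

```python
# Rough sketch of the detection-probability argument above. The 2/150 pairwise
# collision figure follows from a 0.1 s signal in a 15 s period: a second
# badge's signal collides if it starts within +/-0.1 s of the first.
def prob_all_detected(n, signal=0.1, period=15.0):
    """Approximate probability that n unsynchronised badges in one room are
    all sighted in a single beacon period (independent-pairs approximation)."""
    p_pair_clear = 1.0 - (2 * signal / period)  # ~ 1 - 2/150 per badge pair
    pairs = n * (n - 1) // 2
    return p_pair_clear ** pairs
```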
An Active Badge signal is transmitted to a sensor through an optical path. This path may be found indirectly through a surface reflection, for example from a wall. A badge must be worn on the outside of clothing, so an essential part of the badge case design was the clip allowing it to be easily attached to a shirt or blouse. Most commonly, the badge was worn at the breast-pocket position; however, some people preferred a belt or waist position. The belt position was not as good when the wearer was seated at a desk, but the system typically still detected enough signals to locate the badge.
Ethernet, to support multiple badge networks, with the data relayed back to one master server by conventional network protocols. The workstations provided a simple connection between each of the badge networks and the main computer network. However, a badge network can also exist on its own, supported by a single network controller. A prerequisite for the badge network was that it should be able to link all areas of any building with an arbitrary topology. Power would need to be fed through the network because the sensors would be too numerous, and distributed in too many remote places, to be supplied with power locally. Given these constraints, the badge-sensor network has been designed as a 4-wire system. Two of these wires carry the network power supply, the third carries the serial addressing information allowing the network controller to nominate a station, and the remaining wire carries data back to the network controller. Conventional telephone twisted pairs are used, which means it is possible to take advantage of any spare telephone cable already in a building. Although, as with most networked systems, the cabling is unavoidably a large fraction of the system cost, the cost has been minimized by using standard telephone twisted pairs. The data-transfer format is logically the same as RS232, but the network is physically a wired-OR system. The consequence is that, by using a simple level-shifting interface box, any computer with an RS232 port can be used as the network master. So that the network master does not have to poll the sensors at high speed to avoid data loss (e.g. if two badges in one room signaled with a very short delay between them), a FIFO has been designed into each sensor that is capable of buffering 20 badge sightings. This allows the network master to multiplex its time between polling the network, manipulating badge data, and making the data available to clients.
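The per-sensor buffering can be modelled with a short sketch. The 20-entry capacity comes from the text; the drop-on-full policy and the poll batch size are assumptions for illustration.

```python
from collections import deque

# Illustrative model of the per-sensor FIFO described above: each sensor
# buffers up to 20 badge sightings so the network master need not poll at
# high speed. The drop-on-full policy and poll batch size are assumptions.
class SensorFifo:
    def __init__(self, capacity=20):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def sighting(self, badge_id):
        if len(self.buf) >= self.capacity:
            self.dropped += 1      # buffer full: this sighting is lost
        else:
            self.buf.append(badge_id)

    def poll(self, max_items=5):
        """The network master drains up to max_items sightings per poll."""
        out = []
        while self.buf and len(out) < max_items:
            out.append(self.buf.popleft())
        return out
```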
A map of one of our offices, showing visibility spaces around computers and usage spaces around people. The red shading indicates a containment state.

Containment relationships between 2D geometric shapes are a good way of formalising vague spatial relationships. Simpler abstractions fail to capture complexities in the environment which are obvious to the user, while more sophisticated ones risk being too complex for the user to understand. It turns out that people are very well suited to reasoning about and remembering 2D geometric shapes. This abstraction also works well for application programmers, because they can use traditional GUI programming styles, treating spaces around objects as though they were buttons on a computer screen.
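Treating spaces as 2D shapes reduces containment to a plain point-in-polygon test, much as a GUI hit-tests a button. A minimal ray-casting sketch:

```python
# Minimal sketch of the containment test underlying spatial monitoring:
# usage and visibility spaces are 2D polygons, and "containment" is just
# a point-in-polygon check (ray casting).
def contains(polygon, point):
    """Return True if point (x, y) lies inside polygon, a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from `point` cross the edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```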
Figure 4.1: Spatial Monitoring

A spatial monitoring application that moves users' desktops around with them. The application registers with the Spatial Monitor (1); as the user (pjs) approaches the display (2) or moves away from it (3), the Spatial Monitor sends a positive or negative containment event to the application, which transfers the desktop to the screen or removes it.
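The event flow just described can be sketched as follows. The callback shape and the desktop-transfer call are stand-ins for illustration, not the system's actual API.

```python
# Hypothetical sketch of the follow-me desktop logic: the Spatial Monitor
# delivers positive/negative containment events, and the application moves
# the user's desktop accordingly. teleport_desktop is a stub standing in
# for the real transfer mechanism.
def teleport_desktop(user, display):
    print(f"desktop of {user} -> {display}")

class FollowMeDesktop:
    def __init__(self):
        self.current = {}   # user -> display currently showing their desktop

    def on_containment(self, user, display, entered):
        if entered:
            # Positive containment: user approached the display.
            self.current[user] = display
            teleport_desktop(user, display)
        elif self.current.get(user) == display:
            # Negative containment: user moved away, so remove the desktop.
            del self.current[user]
            teleport_desktop(user, None)
```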
5. APPLICATIONS
Sentient computing systems provide a wide range of location-aware applications for users.
5.1 BROWSING
An important application class, model browsers simply display the environment's current state. In addition to the 3D visualization shown in Figure 3, we have also implemented the continuously updated map shown in Figure 4.
Figure 5.1. A 2D map visualization of the world model. In Room 210, a user (djc) is sitting in front of his workstation while using his telephone, which is colored red to indicate that it is off the hook.
The map can display personnel, furniture, telephones, workstations, and other relevant information, so that users can: immediately find the phone nearest to a person, determine whether it is in use, and then place a call by clicking on the phone displayed on the map; see, by the disposition of their neighbors on the map, whether to contact a person (for example, a person standing next to a visitor may not wish to be disturbed); and request to be informed when a person leaves a room or arrives in the building, or receive notification when the phone located nearest that person is not in use.
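The nearest-free-phone query above is a simple computation over the world model. The record layout here is an assumption for illustration; the real model exposes much richer object state.

```python
import math

# Illustrative query over the world model: find the phone nearest a person
# that is not off the hook. The dict layout is an assumption for the sketch.
def nearest_free_phone(person_pos, phones):
    """phones: list of dicts like {"name": ..., "pos": (x, y), "in_use": bool}."""
    free = [p for p in phones if not p["in_use"]]
    if not free:
        return None
    return min(free, key=lambda p: math.dist(person_pos, p["pos"]))
```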
We register mouse panels in 2D, slider controls in 1D, and buttons in 0D, then project the 3D Bat position into these spaces to control the user interface components.
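Projecting a 3D Bat position into a 2D panel space might look like the following sketch. The panel parameterisation (an origin plus two edge vectors, with output normalised to [0, 1]) is an assumption for illustration.

```python
import numpy as np

# Sketch of projecting a 3D Bat position into a 2D mouse-panel reading.
# The panel is described by its origin corner and two edge vectors; the
# result is normalised panel coordinates, clamped like a screen pointer.
def bat_to_mouse(bat_pos, panel_origin, u_edge, v_edge):
    p = np.asarray(bat_pos, float) - np.asarray(panel_origin, float)
    u = np.asarray(u_edge, float)
    v = np.asarray(v_edge, float)
    # Project the offset onto each panel edge and normalise by edge length.
    mx = float(np.dot(p, u) / np.dot(u, u))
    my = float(np.dot(p, v) / np.dot(v, v))
    # Clamp to the panel, as a screen clamps its pointer.
    return min(max(mx, 0.0), 1.0), min(max(my, 0.0), 1.0)
```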
5.3.1 MICE

We have placed several 50-inch displays around our building. When a Bat enters the mouse-panel space placed in the model around each screen, the system projects the 3D Bat position into a 2D mouse-pointer reading, then transmits it to the workstation controlling the screen. Any person's Bat can then be used as a mouse for any of these screens.

5.3.2 VIRTUAL BUTTONS

A button in a traditional GUI simply indicates an area of the display with some special application-level significance, accompanied by a label that has some meaning to the user. We can take the same approach with our 3D interface by registering a point in the model as having application-level significance, and labeling the real world with a physical token at the corresponding point. To activate these virtual buttons, a user places a Bat on the physical label and presses one of the Bat's real buttons. The button-press message from the Bat interrupts the scheduler, ensuring that the Bat requests a location update within 50 ms and allowing the system to process the virtual button press with low latency. Because each person's Bat bears a unique identifier, the system knows who originated each virtual-button event; thus virtual buttons can control personalized applications. One application of this technique, the smart poster, contains labels for one or more virtual buttons, as Figure 5 shows. We use smart posters to control follow-me applications. For example, a user can enable or disable his follow-me desktop or telephone service with a smart poster. Applications that smart posters control will normally send some audible feedback directly via the user's Bat. We have also used a smart poster to control a networked scanner. Users press buttons on the poster to choose
resolution, color level, data compression format, and destination, which can be either their e-mail in-box or information timeline.

5.3.3 AUGMENTED REALITY

If, in the future, users have a permanently available view of the model, it will not be necessary to create a label for a virtual button in the physical world. We have experimented with a nonstereoscopic head-mounted display, tracked using data combined from three Bats and an inertial sensor, to create an augmented reality display that superimposes information from the model onto the user's view of the real world. This system could display labels for otherwise invisible button spaces. Systems administrators could also use the augmented reality display to maintain those parts of the world model that do not update automatically, such as the positions of furniture. Augmenting the user's view of the environment with the model's current state can make any omissions or inconsistencies immediately obvious.
Currently, the user must explicitly perform configuration, data storage, and indexing: a fixed overhead that presents a disincentive to use mobile devices for small tasks. Applying sentient computing to this challenge may result in large productivity improvements. Given the current lack of small devices with wireless connectivity, we have had to use devices that store time-stamped data in nonvolatile memory. We also store time-stamped ownership events that the model generates and, when the device synchronizes
with the network, we correlate the information streams to determine which data corresponds to which user. This approach precludes any kind of device personalization at the time of use, but it does allow us to experiment with personalized filing, post-processing, and context-based retrieval. We have used several types of shared digital cameras, located by Bats, to automatically store photographs in a user's timeline. We have also used a shared digital voice recorder to automatically store audio memos in the user's timeline, and we have implemented a service that transcribes the voice data using that user's voice model. We also file the transcription in the timeline along with the voice data used to generate it. One advantage of a context-aware storage mechanism is that it can associate arbitrary data items simply by putting them close to one another, without maintaining any kind of referential integrity. We can use this capability to compose information sources in powerful ways. Once the system completes the automatic transcription, we can use a text search on the transcription to retrieve the voice data. We have implemented a photo-annotation program as another example of this information-retrieval style. When a fixed camera takes a photo, the application queries the model to determine who is in the camera's field of view, then stores a suitable caption in the timeline at the same time it stores the photograph. Thus, a text search for +photo +Hopper returns sections of a timeline containing photographs of Andy Hopper. We cannot provide this service for presently available handheld cameras because we cannot establish the zoom state of the lens for a given photo. The figure shows a sample timeline that contains data that several users generated contemporaneously: a photograph taken using a handheld camera, an annotated photograph taken using a fixed camera, and a scanned document.
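The correlation performed at synchronization time can be sketched as attributing each time-stamped data item to whoever owned the device at that moment, according to the model's ownership events. The data shapes here are assumptions for illustration.

```python
import bisect

# Sketch of the stream correlation described above: the model logs
# (timestamp, user) ownership events for a shared device, data items carry
# timestamps, and each item is attributed to the device's owner at that time.
def attribute_items(ownership_events, items):
    """ownership_events: time-sorted list of (timestamp, user) pairs.
    items: list of (timestamp, data). Returns a list of (user, data)."""
    times = [t for t, _ in ownership_events]
    out = []
    for t, data in items:
        # Find the most recent ownership event at or before the item's time.
        i = bisect.bisect_right(times, t) - 1
        user = ownership_events[i][1] if i >= 0 else None
        out.append((user, data))
    return out
```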
Figure 5.5.1: Smart Poster
Using a smart poster to control a networked scanner: the highlighted spots on the poster are buttons in the sentient computing system's model of the world. This user is telling the system to scan a document that is lying on the scanner glass; he will then press the 'scan to hopper' button to send the document to his information hopper. This 'smart poster' controls a networked scanner: the user presses buttons on the poster to select high-level options, such as colour/monochrome, data compression format, resolution, and whether to use the sheet feeder, and then scans the document to their information hopper or to their email inbox. Because the system knows who is pressing which button, it knows where to send the scan, and it can even remember a user's preferred scanner setup and use that.
Figure 5.5.2: This smart poster controls the phone call forwarding service. The button at the bottom left toggles the service on or off. Another button, over the picture of the Bat, makes the Bat ring with the sound it will use when the user is called.
We also use smart posters to control our ubiquitous services; the one above is the poster that controls our phone forwarding service. A user can turn the service on or off, or even hear an example of the sound the Bat will make when they get a phone call. Smart posters are a good way of advertising new services: they catch the eye, explain what the service does, and provide a way for users to opt in to it, all on a sheet of paper, the ultimate thin interface.
they move around. The bulletin board uses standard networked MPEG-1 codec boxes that transmit via IP multicast, so it could also be made to support recording of n-way video conferences.
6. CONCLUSION
The most obvious real-world application areas for this technology are large buildings with highly mobile populations who spend their time generating information, looking at information, using different kinds of equipment, and communicating with each other. Examples of such environments include hospitals and large office buildings. Ultimately, we believe that sentient computing provides benefits wherever people have to interact with machines; one day, all user interfaces could work this way. Over the next few years we expect wireless devices and LANs to become more widespread, but without a sentient computing system, the value of a wireless device is limited. There is a widespread assumption that radio devices themselves have some kind of innate sensing capability, because useful proximity information can be inferred from radio signal strength. This assumption is incorrect, firstly because multipath effects within buildings greatly distort the relationship between signal strength and distance, and secondly because it fails to take account of environmental discontinuities such as walls and floors. But sentient computing is more than a solution to the problems of configuration and personalization. When people interact with computer systems in this way, the environment itself becomes the user interface, and we think this a natural goal for human-computer interaction.
7. REFERENCES
1. L. Brown, ed., The New Shorter Oxford English Dictionary, Oxford University Press, Oxford, UK, 1993.
2. A. Harter et al., "The Anatomy of a Context-Aware Application," Proc. 5th Int'l Conf. Mobile Computing and Networking (MobiCom 99), ACM Press, New York, 1999, pp. 59-68.
3. A. Ward, "Sensor-Driven Computing," doctoral dissertation, University of Cambridge, UK, 1998.
4. N. Priyantha, A. Chakraborty, and H. Balakrishnan, "The Cricket Location-Support System," Proc. 6th Int'l Conf. Mobile Computing and Networking (MobiCom 00), ACM Press, New York, 2000, pp. 32-43.
5. J. Werb and C. Lanzl, "Designing a Positioning System for Finding Things and People Indoors," IEEE Spectrum, Sept. 1998, pp. 71-78.
6. P. Bahl, V. Padmanabhan, and A. Balachandran, "Enhancements to the RADAR User Location and Tracking System," tech. report MSR-TR-2000-12, Microsoft Research, Redmond, Wash., Feb. 2000.
7. A. Harter and A. Hopper, "A Distributed Location System for the Active Office," IEEE Network, Jan. 1994, pp. 62-70.