
1. INTRODUCTION
Sentient computing systems, which can change their behaviour based on a model of the environment they construct using sensor data, may hold the key to managing tomorrow's device-rich mobile networks. At AT&T Laboratories Cambridge, we have built a system that uses sensors to update a model of the real world. We designed the model's terms (object positions, descriptions, state, and so forth) to be immediately familiar to users; thus, the model describes the world much as users themselves would. We can use this model to write programs that react to changes in the environment according to the user's preferences. We call this sentient computing because the applications appear to share the user's perception of the environment. Treating the current state of the environment as common ground between computers and users provides new ways of interacting with information systems. A sentient computing system doesn't need to be intelligent or capable of forming new concepts about the world; it only needs to act as though its perceptions duplicate the user's. Earlier work described a prototype of this system and stated our intention to deploy it on a large scale. As computer users become increasingly mobile, and the diversity of devices with which they interact increases, so the overhead of configuring and personalizing these systems increases. A natural solution to this problem is to create devices and applications that appear to cooperate with users, reacting as though they are aware of the context and manner in which they are being used, and reconfiguring themselves appropriately.

1.1 SHARED PERCEPTIONS


What could we do if computer programs could see a model of the world? By acting within the world, we would be interacting with programs via the model. It would seem to us as though the whole world were a user interface.

Figure 1.1: World Model

The diagram shows what we are trying to achieve. While people can observe and act on the environment directly, application programs observe and act on the environment via the world model, which is kept up to date using sensors and provides an interface to various actuators. If the terms used by the model are natural enough, people can interpret their perceptions of the world in terms of the model, and it appears to them as though they and the computer programs share a perception of the real world. Our project implemented the sentient computing system's model of the world as a set of software objects that correspond to real-world objects. Objects in the model contain up-to-date information about the locations and state of the corresponding real-world objects. Ascertaining object positions with near-human levels of accuracy requires a specially designed sensor system.

1.2 LOCATION SENSING


The location sensor, shown in Figure 1, determines the 3D positions of objects within our building in real time. Personnel carry wireless devices known as Bats, which can also be attached to equipment. The sensor system measures the time it takes for the ultrasonic pulses that the Bats emit to reach receivers installed in known, fixed positions. It uses these times of flight to calculate the position of each Bat, and hence the position of the object it tags, by triangulation. To allow accurate time-of-flight measurements, a wireless, cellular network synchronizes Bats with the ceiling receivers. Base stations simultaneously address a Bat over the wireless link and reset the receivers over a wired network. A wireless back channel supports the Bats' transmission of registration, telemetry, and control-button information.

1.3 CURRENT EMBODIMENT


We have installed the Bat system throughout our three-floor, 10,000-sq.-ft. office building, and all 50 staff members use it continuously. The system uses 750 receiver units and three radio cells, and tracks 200 Bats. Figure 2 shows the current Bat device. Each Bat measures 8.0 cm x 4.1 cm x 1.8 cm, has a unique 48-bit ID, and draws power from a single AA lithium cell. In addition to two input buttons, a Bat has a buzzer and two LEDs for output. Applications send commands over the wireless network to generate feedback via these devices. Built around a DSP microprocessor, the location system receiver units use a matched-filter signal-detection approach. Receivers are wired together in a daisy-chain network and are installed unobtrusively above the tiles in our building's suspended ceilings. To simplify maintenance of the location system, telemetry data can be obtained from the Bats, indicating current battery health and firmware version number. Further, system administrators can send commands over the wireless and wired networks to reprogram Bats and receivers in the field. About 95 percent of 3D Bat position readings are accurate to within 3 cm. Each base station can address three Bats simultaneously, 50 times each second, giving a maximum

location update rate across each radio cell of 150 updates per second. The signals from simultaneously triggered Bats are encoded using a differential-phase modulation scheme, allowing receivers to distinguish among them.

1.4 SCHEDULING AND POWER SAVING


Because we use Bats to tag many mobile objects in the environment, we must fully exploit the limited number of location-update opportunities. We also take every opportunity to reduce the power consumption of Bats, because frequently changing the batteries of several hundred devices would require significant effort. Base stations use a quality-of-service measure to share location-update opportunities between Bats. The base stations can preferentially allocate location resources to objects that move often or are likely to be moved. A scheduling process that runs on each base station determines not only when the base station will address a particular Bat but also when it will next address it. The process passes this information to the Bat across the wireless link, allowing the Bat to enter a low-power sleep state for the intervening time. The scheduling algorithm and Bats incorporate mechanisms that allow rapid changes to the QoS allocations. Rapid changes to the schedule prove particularly useful when a user presses a Bat button. This action generally indicates that the user wishes to perform some task with the Bat. In these cases, it helps to obtain a low-latency position update for the Bat. The Bat transmits a button-press message over the wireless network, and the base station responds by scheduling an immediate location-update opportunity for that Bat. Bats register with a base station using their unique 48-bit IDs and receive a temporary 10-bit ID local to the base station. The system subsequently refers to the Bat by its local ID, which it reclaims if the Bat moves out of the base station's range. This procedure limits the Bat population per base station, but results in shorter addressing messages and subsequent power savings. Each Bat has a sensitive motion detector that lets it tell the base stations whether it is moving or stationary. Because the base station doesn't need to repeatedly determine the position of stationary objects, the system places nonmoving Bats into a low-power

sleep state, from which they wake only when they move again. This design saves power and frees up location-update opportunities for allocation to other Bats. Taken together, the Bats' low-power features result in a battery lifetime of around 12 months.
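To make the scheduling idea concrete, here is a minimal sketch of a QoS-driven scheduler of the kind described above; the priority-queue policy, method names, and timing details are illustrative assumptions, not the deployed algorithm.

    import heapq, time

    class BatScheduler:
        """Sketch: allocate location-update slots among registered Bats."""
        def __init__(self):
            self.queue = []                      # (next_update_time, bat_id)

        def schedule(self, bat_id, interval):
            # the interval encodes the Bat's QoS: mobile objects get short ones
            heapq.heappush(self.queue, (time.monotonic() + interval, bat_id))

        def on_button_press(self, bat_id):
            # a button press preempts the schedule with an immediate slot
            heapq.heappush(self.queue, (time.monotonic(), bat_id))

        def next_update(self):
            # pop the most urgent Bat; the caller addresses it over the radio
            # and tells it how long it may sleep before its next slot
            due, bat_id = heapq.heappop(self.queue)
            return bat_id, max(0.0, due - time.monotonic())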

1.5 INDOOR SENSOR ALTERNATIVES


Alternative indoor-location sensors suitable for wide-scale deployment include other ultrasonic systems such as Cricket,4 indoor radio systems such as PinPoint5 and Radar,6 and infrared systems like Active Badge.7,8 However, because none of these systems can provide 3D location data with an accuracy better than one meter, the fidelity of a model constructed using data from them would be poor. Indeed, our experience in developing and using the Active Badge system suggests that context-aware applications often require location information of a much finer granularity. In an outdoor environment, GPS9 provides position information that is accurate relative to the large scale of outdoor features, such as roads and fields; thus, it comes close to the ideal of sharing perceptions on this scale. However, many sentient computing applications rely on information accurate at the human scale, which cannot be obtained using such sensors.

2. MODELING THE ENVIRONMENT


Our system uses data from its sensors and from services such as an Alcatel 4400 CTI telephone switch to update its world model. The model functions essentially as a distributed application programming interface (API) for the environment, allowing applications to control the environment and query its state. The model handles all system-level issues, including sessions, events, persistence, and transactions. We make each software object a Common Object Request Broker Architecture (CORBA) object and store its persistent state in an Oracle database.2 The model schema contains 80 different interface types, while the model of our building contains about 1,900 actual instances of physical objects. The system uses knowledge of the tracked objects' dynamics to filter the incoming location data, and then uses the filtered data to update the world model. Because the model must mirror the user's perception of the world, the data must be accurate and robust, and must be updated with minimal latency. These requirements constrain our choice of filtering algorithms. We chose a method that follows three rules. First, sensor errors impose low-amplitude noise on the reported positions of near-stationary objects, so the method heavily damps those streams of sightings that differ by only a few centimeters. Second, the system cannot readily distinguish large errors from real object motions, because object accelerations are high relative to the sample rate; therefore, we simply accept large changes in reported object position. If a large sensor error were to occur, it would produce a large error in the reported object position, but this is acceptable because larger sensor errors become increasingly unlikely. Third, if a reported change in object position implies an unreasonably high velocity, the method makes an exception to the previous rule and discards the sighting.
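The following is a minimal sketch of these three rules applied to point positions; the damping factor and thresholds are illustrative assumptions, not the system's actual parameters.

    import math

    DAMPING = 0.1        # heavy damping for noise-like movement (assumed value)
    NOISE_RADIUS = 0.05  # metres; smaller changes are treated as sensor noise
    MAX_SPEED = 10.0     # m/s; faster implied motion is rejected as an error

    def filter_sighting(prev_pos, prev_time, new_pos, new_time):
        """Return the filtered position, or None if the sighting is discarded."""
        dt = new_time - prev_time
        dist = math.dist(prev_pos, new_pos)
        if dt > 0 and dist / dt > MAX_SPEED:
            return None                   # rule three: implausible velocity
        if dist < NOISE_RADIUS:
            # rule one: damp low-amplitude noise around near-stationary objects
            return tuple(p + DAMPING * (n - p) for p, n in zip(prev_pos, new_pos))
        return new_pos                    # rule two: accept large changes as real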

We can further increase the fidelity of the model by determining whether someone appears to be seated. The system looks for periods of constant-velocity motion above a certain speed. Seeing such motion, it assumes that the person is walking forward and is upright, and records the floor-to-Bat height. If the Bat ever drops significantly below this level, we infer that the user has taken a seat. Because the user's body shadows the Bat's ultrasonic signal, only receivers in front of the person will detect signals from the Bat (assuming that the person wears the Bat somewhere on the front of the body). Therefore, we can estimate the person's orientation from the Bat's position and the pattern of receivers that detected it. Figure 3 shows a 3D visualization of the model, which indicates how the sentient computing system's view of the world accurately matches the environment's real state.
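As an illustration of the seated-person heuristic, here is a small sketch; the speed and height thresholds are assumed values for exposition only.

    WALK_SPEED = 0.5   # m/s; sustained speed above this suggests upright walking
    SIT_DROP = 0.3     # metres; a Bat height drop this large suggests sitting

    class PostureTracker:
        def __init__(self):
            self.standing_height = None   # floor-to-Bat height while walking

        def update(self, speed, bat_height):
            if speed > WALK_SPEED:
                # constant-velocity motion: assume upright, record the height
                self.standing_height = bat_height
                return "standing"
            if (self.standing_height is not None
                    and bat_height < self.standing_height - SIT_DROP):
                return "seated"
            return "standing"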

2.1 OPERATING ENVIRONMENT


Office environments have been a common focus for sentient computing. A telephone receptionist application based on room-level location was an early example. More fine-grained location information has also been exploited in this environment to provide real-time maps and to automatically select cameras for videoing employees and visitors as they move around. Other targeted environments have included museums and exhibition halls, in which researchers sought to provide additional information to visitors as they arrived at a particular exhibit. Another study considered the hospital environment and provided a messaging system which can address users by role, location, and time. Context-aware applications have also been deployed in the home, using cues from the occupants' locations for smart control of media devices and lighting. Outdoor applications include city-wide location-based gaming, where virtual players are chased by runners in the real world. Drishti is a navigation system for the blind which is suitable for both indoor and outdoor operation through handover between different location technologies. Different environments present different challenges for a location system. Signal propagation is affected by the layout of the space: small offices limit propagation more than open-plan spaces. Occupants' movement patterns also differ between environments.

In outdoor spaces people might move purposefully towards a destination or browse more casually, whereas in office environments an occupant might primarily travel between their desk and the provided amenities. The acceptability of any location sensing is also environment dependent: users who participate at work may well be loath to wear marker tags (or even to be tracked at all) at home. Systems operating in outdoor environments must cope with extremes of temperature, lighting, and humidity, whereas indoor environments benefit from some measure of protection from the weather. Also, useful infrastructure (such as power and network wiring) which might be exploited by a location system is more common within buildings than outdoors.

3. SENTIENT COMPUTING TECHNOLOGY


The technological challenges for our system are: creating an accurate and robust sensor system which can detect the locations of objects in the real world; integrating, storing and distributing the model's sensor and telemetry information to applications so that they get an accurate and consistent view of the model; and finding suitable abstractions for representing location and resource information so that the model is usable by application programs and also comprehensible by people. To solve these problems we have built an ultrasonic location system, which provides a location accuracy of about 3 cm throughout our 10,000-square-foot building, making it the most accurate large-scale wireless sensor system in the world; a distributed object model, which integrates location and resource status data for all the objects and people in our building; and a spatial monitoring system that implements an abstraction for expressing spatial relationships, letting applications detect spatial relationships between objects in a way that seems natural to human users.

3.1 ULTRASONIC LOCATION SYSTEM


The location sensor system uses small (8 cm long) devices called bats, each of which has a unique id, an ultrasound transmitter and a radio transceiver. The bats are located by a central controller, and the world model stores the correspondence between bats and their owners, applying type-specific filtering algorithms to the bat location data to determine the location of the person or object which owns it. To locate a bat, the central controller sends its unique id over the radio channel, simultaneously resetting counters in a network of ultrasound receivers hidden in the ceiling. When a bat detects its id, it sends an ultrasonic pulse, which is picked up by some of the ceiling receivers. From the times of flight of the pulse from the bat to each of the receivers that detected it, the system can calculate the 3D position of the bat, to an accuracy of about 3 cm.

The bat also contains a pair of buttons and a beeper which can be used to provide context-sensitive control and feedback.

Figure 3.1. Bat, showing from right to left: two copper coiled antennae, the radio transmitter module (in gold, receiver underneath), the AA battery (the large white and green object) and two ultrasonic transmitters. Total length of the device is about 2.5 inches.

3.1.1 ACTIVE BAT

The ultrasonic location system is based on the principle of trilateration, that is, position finding by measurement of distances (the better-known principle of triangulation refers to position finding by measurement of angles). A short pulse of ultrasound is emitted from a transmitter (a Bat) attached to the object to be located, and we measure the times-of-flight of the pulse to receivers mounted at known points on the ceiling. The speed of sound in air is known, so we can calculate the distances from the Bat to each receiver - given three


or more such distances, we have enough information to determine the 3D position of the Bat (and hence that of the object on which it is mounted). By finding the relative positions of two or more Bats attached to an object, we can calculate its orientation. Furthermore, we can deduce some information about the direction in which a person is facing, even if they carry only a single Bat, by analysing the pattern of receivers that detected ultrasonic signals from that transmitter and the strength of signal they detected.
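To make the geometry concrete, here is a sketch of position finding by linear least squares, assuming four or more receivers and distances derived from times-of-flight via the speed of sound; it is an illustration, not the production algorithm.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air, approximately

    def locate(receivers, times_of_flight):
        """receivers: (n, 3) known positions; returns the Bat's 3D position."""
        r = np.asarray(receivers, dtype=float)
        d = SPEED_OF_SOUND * np.asarray(times_of_flight, dtype=float)
        # Subtracting the first sphere equation |x - r_0|^2 = d_0^2 from the
        # others linearizes the problem: 2 (r_i - r_0) . x = b_i
        A = 2.0 * (r[1:] - r[0])
        b = (np.sum(r[1:] ** 2, axis=1) - np.sum(r[0] ** 2)
             + d[0] ** 2 - d[1:] ** 2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos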

Figure 3.1.1: Working of the Active Bat


3.1.2 THE ACTIVE BADGE SYSTEM


The Active Badge system provides a means of locating individuals within a building by determining the location of their Active Badge. This small device worn by personnel transmits a unique infra-red signal every 10 seconds. Each office within a building is equipped with one or more networked sensors which detect these transmissions. The location of the badge (and hence its wearer) can thus be determined on the basis of information provided by these sensors.

3.1.3 AN ACTIVE BADGE DESIGN


A solution to the problem of automatically determining the location of an individual has been to design a tag in the form of an Active Badge that emits a unique code for approximately a tenth of a second every 15 seconds (a beacon). These periodic signals are picked up by a network of sensors placed around the host building. A master station, also connected to the network, polls the sensors for badge sightings, processes the data, and then makes it available to clients that may display it in a useful visual form. The badge was designed in a package roughly 55 x 55 x 7 mm and weighs a comfortable 40 g. Pulse-width modulated infrared (IR) signals are used for signaling between the badge and sensor, mainly because: IR solid-state emitters and detectors can be made very small and very cheaply (unlike ultrasonic transducers); they can be made to operate with a 6 m range; and the signals are reflected by partitions and therefore are not directional when used inside a small room. Moreover, the signals will not travel through walls, unlike radio signals that can penetrate the partitions found in office buildings. Infrared communication has been used in a number of commercial applications ranging from the remote control of domestic appliances to data backup links for programmable calculators and personal organizers. More recently, IR has been used as the basis for wireless local area networks. Because IR technology has already been exploited commercially, it is inexpensive and readily available for developing new applications such as the Active Badge. An active signaling unit consumes power; therefore, the signaling rate is an important design issue.


Firstly, by only emitting a signal every 15 seconds, the mean current consumption can be very small, with the result that badge-sized batteries will last for about one year. Secondly, it is a requirement that several people in the same locality be detectable by the system. Because the signals have a duration of only one-tenth of a second, there is approximately a 2/150 chance that two signals will collide when two badges are placed in the same location. For a small number of people, there is a good probability they will all be detected. Even so, in order to improve this chance, the beacon oscillator has been deliberately designed around low-tolerance components; the components used for the beacon oscillator have a 10% tolerance rating. The Active Badge also incorporates a light-dependent component that, when dark, turns the badge off to conserve battery life. Reduced lighting also increases the period of the beacon signal to a time greater than 15 seconds. In ambient lighting conditions in a room, this effect only slightly modifies the period, but it is another factor that ensures synchronized badges will not stay synchronized very long. If the badge is placed in a drawer out of office hours, at weekends and during vacation, the effective lifetime of the batteries is increased by a factor of 4. Note that the more obvious solution of a manual switch was considered a bad idea, as it was likely that a badge user would forget to turn it on. Other options for switching the device on included a tilt switch and an accelerometer, although the size limitation of a badge precluded using them in the initial experimental system. A disadvantage of an infrequent signal from the badge is that the location of a badge is only known, at best, to a 15-second time window. However, because in general a person tends to move relatively slowly in an office building, the information the Active Badge system provides is very accurate. An Active Badge signal is transmitted to a sensor through an optical path. This path may be found indirectly through a surface reflection, for example, from a wall. A badge must be worn on the outside of clothing, so an essential part of the badge case design was the clip allowing it to be easily attached to a shirt or a blouse. Most commonly, the badge was worn at the breast pocket position; however, some people preferred a belt or waist position. The belt position was not as good when the wearer was seated at a desk, but typically the system still detected enough signals to locate the badge.
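As a quick check of the quoted 2/150 figure: a second badge's pulse collides if it begins within one pulse duration on either side of the first, so with pulse duration \tau = 0.1 s and beacon period T = 15 s,

    P(\text{collision}) \approx \frac{2\tau}{T} = \frac{2 \times 0.1\,\mathrm{s}}{15\,\mathrm{s}} = \frac{2}{150} \approx 1.3\%.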

3.2 MANAGING THE WORLD MODEL


The location and resource status data are represented by a set of persistent CORBA objects implemented using omniORB, our own GPL-ed CORBA ORB. Each real-world object is represented by a corresponding CORBA software object. Around 40 different types of object are modelled, each by its own CORBA object type; for example, the system includes the types Person, Computer, Mouse, Camera, Scanner, Printer, Phone and so on. As well as the location of the real object, each software object makes available current properties of the real object and provides an interface to control it; so, for example, a scanner can be set up and made to perform a scan by an application via its corresponding Scanner software object. The set of all these persistent CORBA objects makes up the world model that is seen by applications. The objects themselves take care of transactions, fail-over, session management, event distribution and all the other issues implicit in a large-scale distributed system, presenting a simple programming interface to the applications.
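To suggest the shape of this interface, here is a plain-Python sketch of what a world-model object type such as Scanner might look like; it is illustrative only and does not reproduce the actual CORBA IDL or omniORB bindings.

    from abc import ABC, abstractmethod

    class WorldObject(ABC):
        """Every modelled object exposes its location and current state."""
        @abstractmethod
        def location(self):
            ...   # current position, kept up to date by the sensor system

        @abstractmethod
        def properties(self):
            ...   # current state of the corresponding real-world object

    class Scanner(WorldObject):
        @abstractmethod
        def configure(self, resolution, colour):
            ...   # set up the real scanner through its software object

        @abstractmethod
        def scan(self):
            ...   # perform a scan and return the resulting image data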

3.2.1 BADGE SENSOR AND TELEMETRY-NETWORK DESIGN


To detect Active Badges in transit through a building, a sensor network must provide thorough coverage through adequate placement and density of sensors. (Because the unit cost of the sensors is low, thorough coverage is not prohibitively expensive.) Sensors need to be placed high up on walls or ceiling tiles of offices and at the entrances and exits of corridors and other public areas. An ideal system would take advantage of existing computer networks as a means of gathering badge sightings and relaying the data back to a central server for processing. The problem is that many buildings do not have a computer network, and secondly, if a sensor were to be interfaced directly to a modern-day network, the cost of the device would be significantly increased. A design was conceived that would allow an independent network to support up to 128 sensors, controlled from the RS232 port of any standard workstation. This approach allowed a network of workstations, joined by an

Ethernet, to support multiple badge networks, the data being relayed back to one master server by conventional network protocols. The workstations provided a simple connection between each of the badge networks and the main computer network. However, a badge network can also exist on its own, supported by a single network controller. A prerequisite for the badge network was that it should be able to link all areas of any building with an arbitrary topology. Power would need to be fed through the network because the sensors would be too numerous and distributed in too many remote places for them to be supplied with power locally. Given these constraints, the badge-sensor network has been designed as a 4-wire system. Two of these wires carry the network power supply, the third carries the serial addressing information allowing the network controller to nominate a station, and the remaining wire carries data back to the network controller. Conventional telephone twisted pairs are used, which means it is possible to take advantage of any spare telephone cable already in a building. Although, as with most networked systems, the cabling is unavoidably a large fraction of the system cost, the cost has been minimized by using standard telephone twisted-pairs. The data-transfer format is logically the same as RS232, but the network is physically a wired-OR system. The consequence is that by using a simple level-shifting interface-box, any computer with an RS232 port can be used as the network master. In order that the network master does not have to poll the sensors at high speed to avoid data loss (e.g., if two badges in one room signaled with a very short delay between them), a FIFO has been designed into each sensor that is capable of buffering 20 badge sightings. This allows the network master to multiplex its time between polling the network, manipulating badge data, and making the data available to clients.
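The polling arrangement might look like the following sketch, in which the master visits each sensor in turn and drains its FIFO; the sensor interface and record fields are assumptions for illustration.

    def poll_network(sensors, publish):
        """Master loop: gather buffered sightings from every sensor station."""
        while True:
            for sensor in sensors:                 # nominate each station in turn
                sighting = sensor.read_fifo()      # None when the FIFO is empty
                while sighting is not None:        # FIFO buffers up to 20 entries
                    publish(sensor.address, sighting.badge_id, sighting.timestamp)
                    sighting = sensor.read_fifo()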

3.3 PROGRAMMING WITH SPACE


Location data is transformed into containment relations by the spatial monitor. Objects can have one or more named spaces defined around them. A quad-tree based indexing method is used to quickly determine when spaces overlap, or when one space contains another, and applications are notified using a scalable event mechanism.
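A minimal quad-tree index of the kind alluded to above might look like this sketch, which tests axis-aligned bounding rectangles for overlap; the real spatial monitor's data structures are more sophisticated, and the node capacity and depth here are assumed values.

    def overlaps(a, b):
        """Axis-aligned rectangles given as (x0, y0, x1, y1)."""
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    class QuadTree:
        def __init__(self, bounds, depth=6, capacity=8):
            self.bounds, self.depth, self.capacity = bounds, depth, capacity
            self.items, self.children = [], None

        def insert(self, rect, obj):
            if self.children is None and self.depth > 0 \
                    and len(self.items) >= self.capacity:
                self._split()
            if self.children:
                for child in self.children:
                    if overlaps(rect, child.bounds):
                        child.insert(rect, obj)
            else:
                self.items.append((rect, obj))

        def query(self, rect, hits=None):
            """Collect objects whose rectangles overlap rect (deduplicated)."""
            hits = {} if hits is None else hits
            for r, o in self.items:
                if overlaps(rect, r):
                    hits[id(o)] = o   # rects spanning children occur in several nodes
            if self.children:
                for child in self.children:
                    if overlaps(rect, child.bounds):
                        child.query(rect, hits)
            return list(hits.values())

        def _split(self):
            x0, y0, x1, y1 = self.bounds
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            quads = ((x0, y0, mx, my), (mx, y0, x1, my),
                     (x0, my, mx, y1), (mx, my, x1, y1))
            self.children = [QuadTree(q, self.depth - 1, self.capacity)
                             for q in quads]
            old, self.items = self.items, []
            for rect, obj in old:
                self.insert(rect, obj)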


Figure 3.3: Programming with Space

A map of one of our offices, showing visibility spaces around computers and usage spaces around people. The red shading indicates a containment state.

Containment relationships between 2D geometric shapes are a good way of formalising vague spatial relationships. Simpler abstractions fail to capture complexities in the environment which are obvious to the user, while more sophisticated ones risk being too complex for the user to understand. It turns out that people are very well-suited to reasoning about and remembering 2D geometric shapes. This abstraction also works well for application programmers, because they can use traditional GUI programming styles, treating spaces around objects as though they were buttons on a computer screen.


4. SOFTWARE SUPPORT FOR SENTIENT SYSTEMS


We have found that we require several common features in many of our sentient computing applications. It is sensible to integrate such services with the API provided by the world model, so that they become available to all applications.

4.1 SPATIAL MONITOR


A spatial monitor formalizes imprecise spatial relationships in terms of containment and overlapping relationships between suitable 2D spaces, as Section 3.3 describes. This allows developers to use an event-driven style of programming that treats spaces on the floor like buttons on a traditional GUI display, while people and other mobile objects become somewhat like mouse pointers. In our building, the spatial monitor handles approximately 3,700 individual spaces. The monitor, world model, and underlying database all run on one Sun Ultra 250 workstation, which has 500 Mbytes of memory and two 300-MHz processors. Consider an application that moves a user's desktop between systems. This application would register with the spatial monitor, expressing interest in a circular space around a person, shaded green in Figure 4.1, and an area in front of each display where the user could see the screen, shown as a blue-colored oval. When the user is in front of the screen, the area around the user is contained within the area in front of the screen. The spatial monitor detects this relationship and sends a positive containment event to the application. The application responds by displaying the user's desktop on that screen. When the user moves away from the screen, the spatial monitor sends a negative containment event to the application, causing it to remove the user's desktop from the screen.
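In code, the registration and callback might look like the following self-contained sketch; the spatial-monitor API names here are assumptions for illustration, not the real interface.

    class SpatialMonitor:
        """Toy stand-in for the monitor: tracks callbacks per space pair."""
        def __init__(self):
            self.callbacks = []

        def register(self, container, contained, callback):
            self.callbacks.append((container, contained, callback))

        def notify(self, container, contained, positive):
            for c1, c2, cb in self.callbacks:
                if (c1, c2) == (container, contained):
                    cb(positive)

    def on_containment(positive):
        if positive:
            print("show desktop on this screen")   # user reached the screen
        else:
            print("remove desktop from screen")    # user moved away

    monitor = SpatialMonitor()
    monitor.register("screen_area_210", "personal_space_pjs", on_containment)
    monitor.notify("screen_area_210", "personal_space_pjs", positive=True)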


Figure 4.1: A spatial monitoring application that moves users' desktops around with them. The application registers with the spatial monitor (1); as the user (pjs) approaches the display (2) or moves away from it (3), the spatial monitor sends a positive or negative containment event to the application, which transfers or removes the desktop to or from the screen.

4.2 TIMELINE-BASED DATA STORAGE


Sentient computing lets users personalize their network appliances. Many of these appliances generate data, which the system must store in a way that does not require the user to specify its destination. For example, if a user takes a photograph with a wireless camera, the computing system needs to automatically store the picture in such a way that the user can easily retrieve it. Based on earlier work,10 we developed a method that creates a timeline that contains the data each individual generates and also incorporates information from the model. Users can query this timeline with a temporal query language to retrieve data based on context.
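The following toy sketch suggests how such a context-based temporal query might look; the timeline representation and query form are assumptions, since the actual query language is not detailed here.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TimelineItem:
        kind: str              # e.g. "photo", "voice memo", "scan"
        timestamp: datetime
        location: str          # contextual metadata recorded by the model

    def query(timeline, kind, start, end, location=None):
        """Items of one kind in [start, end], optionally filtered by context."""
        return [i for i in timeline
                if i.kind == kind and start <= i.timestamp <= end
                and (location is None or i.location == location)]

    # e.g. photographs taken in the meeting room during one working day:
    timeline = [TimelineItem("photo", datetime(2001, 5, 8, 14, 30), "meeting room")]
    photos = query(timeline, "photo",
                   datetime(2001, 5, 8, 9, 0), datetime(2001, 5, 8, 18, 0),
                   location="meeting room")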


5. APPLICATIONS
Sentient computing systems provide a wide range of location-aware applications for users.

5.1 BROWSING
An important application class, model browsers simply display the environment's current state. In addition to the 3D visualization shown in Figure 3, we have also implemented the continuously updated map shown in Figure 4.

Figure 5.1. A 2D-map visualization of the world model. In Room 210, a user (djc) is sitting in front of his workstation while using his telephone, which is colored red to indicate that it is off the hook.

The map can display personnel, furniture, telephones, workstations, and other relevant information, so that users can: immediately find the phone nearest to a person, determine whether it is in use, and place a call by clicking on the phone displayed on the map; see, by the disposition of their neighbors on the map, whether to contact a person or not (for example, a person standing next to a visitor may not wish to be disturbed); and request to be informed when a person leaves a room or arrives in the building, or receive notification when the phone located nearest that person is not in use.

5.2 FOLLOW-ME SYSTEMS


Application developers can make services ubiquitously available to users by moving their interfaces to the nearest appropriate input or output device, a technique we call follow-me. A follow-me desktop application can display a user's Virtual Network Computing desktop on the nearest computer screen, triggered by the buttons on the user's Bat. When an external call targets the extension number of an absent user, the user's Bat makes a ringing sound. By pressing a button on the Bat, the user can have the call routed to the nearest phone. If the user chooses instead to ignore the call, the system will forward it to a receptionist. We have installed several cameras in one room in our building. A user can request to be tracked by these cameras, and the system will create a video stream in which the nearest camera is selected to keep the user in shot as he or she walks between different desks, displays, and whiteboards. Each of these applications uses a different interpretation of the word nearest. Spatial containment formalizes these notions. To support follow-me desktops, each workstation has a space extending from it around the area in which the keyboard can be used. Follow-me phone calls are supported by spaces that take into account obstacles posed by other objects, like desks. Camera selection is supported by spaces that delineate the floor area in which a given camera will get a good view of the user. Simply measuring the distance between two objects does not fully capture the concept of nearness as people use it. For example, a telephone that appears to be physically close to a user may be inaccessible because of intervening desks and walls. Therefore, interpreting nearness in terms of proximity would not give good results for any of the applications we've described. A proximity-based approach would too often return the workstation behind the user, the phone on the other side of the room divider, or the camera that is next to the user but happens to be pointing above his or her head.

5.3 NOVEL USER INTERFACES


If we consider a Bat to be a pointer in a 3D user interface that extends throughout our building, we can create new types of user interfaces using Bats and model information. Thus, we can use lower-dimensional spaces to create arbitrarily shaped mouse panels in 2D, slider controls in 1D, and buttons in 0D, then project the 3D Bat position into these spaces to control the user interface components.
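A sketch of the 2D case follows: the 3D Bat position is projected onto a panel described by an origin and two in-plane axis vectors. This representation of a panel is an assumption for illustration, not the system's actual data model.

    import numpy as np

    def project_to_panel(bat_pos, origin, x_axis, y_axis):
        """Map a 3D Bat position to normalized (u, v) mouse-panel coordinates."""
        rel = np.asarray(bat_pos, dtype=float) - np.asarray(origin, dtype=float)
        u = np.dot(rel, x_axis) / np.dot(x_axis, x_axis)   # in-plane components;
        v = np.dot(rel, y_axis) / np.dot(y_axis, y_axis)   # off-plane part dropped
        if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
            return u, v        # inside the panel: usable as a mouse reading
        return None            # Bat is outside the mouse-panel space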

5.3.1 MICE

We have placed several 50-inch displays around our building. When a Bat enters the mouse panel space that is placed in the model around each screen, the system projects the 3D Bat position into a 2D mouse pointer reading, then transmits it to the workstation controlling the screen. Any person's Bat can then be used as a mouse for any of these screens.

5.3.2 VIRTUAL BUTTONS

A button in a traditional GUI simply indicates an area of the display with some special application-level significance, accompanied by a label that has some meaning to the user. We can take the same approach with our 3D interface by registering a point in the model as having an application-level significance, and labeling the real world with a physical token at the corresponding point. To activate these virtual buttons, a user places a Bat on the physical label and presses one of the Bat's real buttons. The button-press message from the Bat interrupts the scheduler, thereby ensuring that the Bat requests a location update within 50 ms and allowing it to process the virtual button press with low latency. Because each person's Bat bears a unique identifier, the system knows who originated each virtual-button event; thus virtual buttons can control personalized applications. One application of this technique, the smart poster, contains labels for one or more virtual buttons, as Figure 5 shows. We use smart posters to control follow-me applications. For example, a user can enable or disable his follow-me desktop or telephone service with a smart poster. Applications that smart posters control will normally send some audible feedback directly via the user's Bat. We have also used a smart poster to control a networked scanner. Users press buttons on the poster to choose resolution, color level, data compression format, and destination, which can be either their e-mail in-box or information timeline.

5.3.3 AUGMENTED REALITY

If, in the future, users have a permanently available view of the model, it will not be necessary to create a label for a virtual button in the physical world. We have experimented with a nonstereoscopic head-mounted display, tracked using data combined from three Bats and an inertial sensor, to create an augmented reality display that superimposes information from the model onto the user's view of the real world. This system could display labels for otherwise invisible button spaces. System administrators could also use the augmented reality display to maintain those parts of the world model that do not update automatically, such as the positions of furniture. Augmenting the user's view of the environment with the model's current state can make any omissions or inconsistencies immediately obvious.

5.4 DATA CREATION, STORAGE, AND RETRIEVAL


If we use Bats to locate mobile networked devices, we can use their positions to infer who holds them. We can then increase the productivity of each device by: personalizing the device spontaneously when a given user picks it up, thereby making it easy for users to share devices; filing any data the device generates according to user preferences by, for example, placing the data in the user's timeline or e-mailing it to the user; and placing additional contextual information in the timeline to aid data retrieval.

Currently, the user must explicitly perform configuration, data storage, and indexing, a fixed overhead that presents a disincentive to use mobile devices for small tasks. Applying sentient computing to this challenge may result in large productivity improvements. Given the current lack of small devices with wireless connectivity, we have had to use devices that store time-stamped data in nonvolatile memory. We also store time-stamped ownership events that the model generates, and, when the device synchronizes

with the network, we correlate the information streams to determine which data corresponds to which user. This approach precludes any kind of device personalization at the time of use, but it does allow us to experiment with personalized filing, post-processing, and context-based retrieval. We have used several types of shared digital cameras, located by Bats, to automatically store photographs in a user's timeline. We also have used a shared digital voice recorder to automatically store audio memos in the user's timeline, and we have implemented a service that transcribes the voice data using that user's voice model. We also file the transcription in the timeline along with the voice data used to generate it. One advantage of a context-aware storage mechanism is that it can associate arbitrary data items simply by putting them close to one another, without maintaining any kind of referential integrity. We can use this capability to compose information sources in powerful ways. Once the system completes the automatic transcription, we can use a text search on the transcription to retrieve the voice data. We have implemented a photo annotation program as another example of this information-retrieval style. When a fixed camera takes a photo, the application queries the model to determine who is in the camera's field of view, then stores a suitable caption in the timeline at the same time it stores the photograph. Thus, a text search for +photo +Hopper returns sections of a timeline containing photographs of Andy Hopper. We cannot provide this service for presently available handheld cameras because we cannot establish the zoom state of the lens for a given photo. The figure shows a sample timeline that contains data that several users generated contemporaneously: a photograph taken using a handheld camera, an annotated photograph taken using a fixed camera, and a scanned document.


5.5 SMART POSTERS


Because the sentient computing system creates an interface that extends throughout the environment, we can treat it just like a traditional user interface and create a 'button' anywhere in the environment. In a traditional GUI, a button is just an area of the screen, which usually (but not necessarily) has a label associated with it. In a sentient computing system, a button can be a small space anywhere in a building -- again, it may have a label associated with it. Of course, the label need be nothing more than a piece of paper. To press the button, the user just puts the bat on the label and clicks a button on the bat. As a bonus, because the system knows which bat was used, it knows who pressed the button. We can create a poster with several of these buttons on it -- the poster is a user interface which can be printed out and stuck on the wall.

Figure 5.5.1: Smart Poster


Using a smart poster to control a networked scanner. The highlighted spots on the poster are buttons in the sentient computing system's model of the world. This user is telling the system to scan a document that is lying on the scanner glass. He will then press the 'scan to hopper' button to scan the document to his information hopper. This is one smart poster we have created, used to control a networked scanner. The user can use buttons on the poster to select high-level options, like colour/monochrome, data compression format, resolution and whether to use the sheet feeder, and then scan the document to their information hopper or to their email inbox. Because the system knows who is pressing which button, it knows where to send the scan, and it can even remember a user's preferred scanner setup and use that.

Figure 5.5.2: This smart poster controls the phone call forwarding service. The button at the bottom left toggles the service on or off. There is another button over the picture of the bat which makes the bat ring with the sound it will use when the user is called.

We also use smart posters to control our ubiquitous services -- the one above is the poster which controls our phone forwarding service. A user can turn the service on or off, or even hear an example of the sound the bat will make when they get a phone call. Smart posters are a good way of advertising new services: they catch the eye, explain what the service does, and provide a way for users to opt in to it, all on a sheet of paper -- the ultimate thin interface.

5.6 CONTEXT-AWARE INFORMATION RETRIEVAL


Sentient computing can help us to store and retrieve data. Whenever information is created, the system knows who created it, where they were and who they were with. This contextual metadata can support the retrieval of multimedia information. In our system, each user has an 'information hopper', which is a timeline of information which they have created. Two items of information created at the same time will be in the same place in the timeline -- this allows us to associate data items in a composable way, without having to maintain referential integrity. The system knows who the user was with at any point on the timeline, and the timelines of users who were working together can be merged to generate another timeline. This lets us generate records of collaborative work without any maintenance effort, by using the sentient computing system as a kind of ubiquitous filing clerk. The timeline can be browsed using a normal web browser; the sample timeline described earlier illustrates some of these points.

5.7 USER INTERFACES


We can control other devices using located tags like the bat. We have implemented a video streaming and camera control system using networked MPEG codecs and pan-tilt-zoom controllable cameras. One of our applications of this is a distributed video bulletin board which lets users create video messages and organise them into threads, controlling a camera by using a bat as a pointing device. Users can control the camera's pan, tilt and zoom by pointing at things with their bat, or by wearing their bat, letting the camera track them as they move around. The bulletin board uses standard networked MPEG-1 codec boxes which transmit via IP multicast, so it could also be made to support recording of n-way video conferences.


6. CONCLUSION

The most obvious real-world application areas for this technology are in large buildings with highly mobile populations who spend their time generating information, looking at information, using different kinds of equipment and communicating with each other. Examples of such environments include hospitals and large office buildings. Ultimately, we believe that sentient computing provides benefits wherever people have to interact with machines. One day, all user interfaces could work this way. Over the next few years we expect wireless devices and LANs to become more widespread. But without a sentient computing system, the value of a wireless device is limited. There is a widespread assumption that radio devices themselves have some kind of innate sensing capability, because useful proximity information can be inferred from radio signal strength. This assumption is incorrect, firstly because multipath effects within buildings greatly distort the relationship between signal strength and distance, and secondly because it fails to take account of environmental discontinuities like walls and floors. But sentient computing is more than a solution to the problems of configuration and personalization. When people interact with computer systems in this way, the environment itself becomes the user interface, and we think this a natural goal for human-computer interaction.


7. REFERENCES
1. L. Brown, ed., The New Shorter Oxford English Dictionary, Oxford University Press, Oxford, UK, 1993.
2. A. Harter et al., "The Anatomy of a Context-Aware Application," Proc. 5th Int'l Conf. Mobile Computing and Networking (Mobicom 99), ACM Press, New York, 1999, pp. 59-68.
3. A. Ward, Sensor-Driven Computing, doctoral dissertation, University of Cambridge, UK, 1998.
4. N. Priyantha, A. Chakraborty, and H. Balakrishnan, "The Cricket Location-Support System," Proc. 6th Int'l Conf. Mobile Computing and Networking (Mobicom 00), ACM Press, New York, 2000, pp. 32-43.
5. J. Werb and C. Lanzl, "Designing a Positioning System for Finding Things and People Indoors," IEEE Spectrum, Sept. 1998, pp. 71-78.
6. P. Bahl, V. Padmanabhan, and A. Balachandran, "Enhancements to the Radar User Location and Tracking System," tech. report MSR-TR-2000-12, Microsoft Research, Redmond, Wash., Feb. 2000.
7. A. Harter and A. Hopper, "A Distributed Location System for the Active Office," IEEE Network, Jan. 1994, pp. 62-70.

