
DESIGN REPORT

WALKING MENTALLY NOT THERE


Bringing safety to people who are fully focused on their mobile phones while walking on the street

HANSEN WEI 4182006

1st Experiment

Figure 1

This experiment was designed as a modification of the rear-view mirror experiment conducted in the last cycle (Figure 1). A small mobile phone was fixed on top of the original smartphone with double-sided tape, providing the participant with an additional front view. Through this experiment I wanted to see whether it would help if the participant could see the front view without lifting their head.

RESULT
I tested it with only 1 participant, who admitted that he could not fully experience the help of the front view, since the angle of the small screen did not point towards him and it was also too dim compared with the big one. Both of these factors made it hard for him to follow what was happening on the small screen. Thus, I decided to stop testing with this prototype and started to think about how to provide the front view on the big screen itself.

Prototyping for 2nd Experiment


It was not hard to come up with the idea of using mirrors to reflect the front view into camera mode, but the difficulty lay in how to position the mirror, fix it and find the best angle. At first I tried to draw the ray diagram before actually making the prototype, which did not help much, since I could not tell whether it was right or wrong just by imagining it. Thus, I decided to play with real mirrors. Luckily, I found several small, cheap mirrors in a Kruidvat store, and some of them were even rotatable, which made it much easier to try different angles. After several trials, the best position to bring the front view onto the camera screen was identified, as the following picture shows.
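As a side note, the mirror angle can also be checked on paper with the law of reflection; the block below is only an illustrative derivation, and the angle names are mine rather than taken from the original ray diagram.

```latex
% Illustrative check of the mirror angle (notation assumed, not from the report).
%   beta  : angle between the incoming (forward) ray and the mirror surface
%   delta : deflection needed to fold the front view into the camera's optical axis
% A plane mirror deflects a ray by twice the grazing angle:
\[
  \delta \;=\; 180^{\circ} - 2\,(90^{\circ} - \beta) \;=\; 2\beta
  \qquad\Longrightarrow\qquad
  \beta \;=\; \frac{\delta}{2}.
\]
% So folding a horizontal front view by 90 degrees into the camera needs a
% 45-degree mirror; a smaller fold needs a proportionally smaller tilt.
```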

2nd Experiment
With this prototype, I conducted experiments with 6 different participants (see pictures on the right). The task was to read jokes printed on paper attached to the screen while walking, with the front view displayed on the screen. Besides, to get more in-depth insights from participants, 2 different test situations were provided to each participant: in one, the joke was printed on a transparent sheet fully covering the screen, and the front view was zoomed in to fill the full screen (see picture above); in the other, the joke was printed on paper half the size of the screen, and the front view was kept separate from the text and at its original size (see picture below).


Finding 1

Separated > Merged

Before conducting the test, I assumed that the transparent sheet might work better than the split version: because the text and image overlapped, I thought that once some movement happened on the screen, people would notice it more easily. But the test showed that most people prefer the split version, with the text separated from the front view. In the overlapping situation, most participants indicated that while they were fully absorbed in the text on the transparent sheet, the background view became blurred, even just color patterns, which made it very hard for them to notice any changes except for big movements popping into the screen. In the split situation, some participants mentioned that the distance between text and image is quite small, which makes it easy to switch their eyes between the 2 contents; splitting text and image also made them aware that they have to switch, and they would do so when necessary.

Finding 2

Wide > Narrow

Another interesting finding is that nearly all participants prefer the front view displayed in the original small window (left picture) rather than zoomed in over the full screen (right picture). The main reason they gave is that the original view in the small window is much wider than the zoomed-in view on the full screen, so a wider view covers more of what is actually there than a narrow one. Besides, some mentioned that a broader view brings more of a feeling of safety, and even of being in control, because they can get an overview of what is happening in front of them at a glance. The narrow view, even though it covers the full screen, only provides a narrow sight angle and a view too far ahead, which makes people feel like being pulled ahead of where they really are. Moreover, since the view was zoomed in, it shook much more fiercely than the original view, which was very irritating; this is another reason why most participants preferred the small, wide view, as it provided a steadier image.


Conclusion:
Generally, the experiments went quite well with all participants: people did understand the use of the front view and were able to feel the quality of anticipation in this design. Personally, I really like this physical way of transmitting the anticipation quality, so I decided to keep it as part of the final concept. Possibilities for improvement drawn from the experiments will be taken into account when designing the details: e.g. separating the view from the text, widening the view to cover more of the surroundings (see picture above) and keeping the view as steady as possible. However, some drawbacks of this idea were also identified during the tests. Some participants mentioned that the front view on the small screen did not help much in telling the distance and size of objects. So when people rely too much on this little screen, the optical illusion of distance and size may even cause other problems: e.g. a stone might seem to have disappeared from the screen, while actually it is just out of the camera range and could still be right in front of your feet (see pictures below), tripping you up if you do not look at the real world. This raises the question: do I want to solve all these problems within such a small screen (e.g. adding a distance/movement detecting feature to this front view) so that it becomes perfect for users who never lift their head at all, or do I still want them to follow their instinctive reaction (raising the head when needed), with this front view only acting as an assistant tool? After introducing 2 more experiments, this question will be answered.

Figure 2

3rd Experiment
In parallel with the first experiment, another design engagement focusing on guidance was conducted, developed from the red belt idea from the last cycle. In this experiment, I tried to translate the guidance quality onto the cellphone screen. What I did was use an iPad to Skype with the smartphone held in the participant's hand, meanwhile filming a paper with text and sharing the screen with the participant, who was asked to walk while reading the text. During his walk, I tried to add movements to the text by moving the iPad: e.g. I sometimes turned the text towards a direction before the participant made a turn in reality, or I zoomed out the screen while the participant was too focused on reading. Through this design, I wanted to see how people react to and feel about being guided in this way. The test was conducted on the platform, since this is a place where the WiFi is stable and it is also easy for me to observe the participant.

Due to the poor test conditions, this was tested with only 1 participant, but it was enough to get some useful insights.

Findings
The direction indication works well
The direction indication actually worked quite well: the participant mentioned that every time the text slanted, his head also slanted in the same direction, followed by his body and his feet.

Zooming out the text brings more concentration


I created a reading obstacle by zooming out the text and making it very small, assuming this might make the participant stop reading and raise his head. But the participant actually got more concentrated when the view was zoomed out, since, as he said, he had to get close enough to the screen to be able to read.

Reading obstacles are irritating


Though creating changes or obstacles (e.g. the text turning or getting small) in the content participants are focusing on does help in guiding, the participant admitted that he does not really like being guided in this way, because the obstacles and changes popped up suddenly and he was forced to make an effort to adapt to them, which is very annoying.

4th Experiment
The focus of the 4th experiment was a combination of anticipation and guiding. A phone with a printed joke attached to it was given to participants, with half of the screen left for video chatting with an iPad. I, as the researcher, followed and filmed the participant using the iPad. Thus, the participant was able to read the text and see an image of himself or herself walking at the same time. By providing the back view of the participant, the front view of the environment was also included in this screen, which was supposed to provide the quality of anticipation to the participant. Also, by showing someone walking on the screen, the participant might feel guided to some extent, through which the quality of guiding is transmitted. Besides, during the walk I also added some movements to the view, like in the last experiment: I always turned the camera view before the participant made a turn and zoomed in the camera when the participant walked too fast; the difference here is that for the participants the movement occurred only in the video view, not as a reading obstacle.

The test was conducted with 2 participants in the IO faculty, where there is WiFi all around.

Findings
It is interesting to see one's own back view
Both participants felt curious to see their own back view on the screen; one even mentioned this was the first time in his life he had seen his back. The other participant, a girl, said this view made her pay attention to the way she walks: if her walking gesture looked stupid on the screen, she might adjust it.

Feeling safe while seeing oneself from another angle


The boy mentioned that he felt safe while seeing himself from such a different angle, through which he could see the environment around him and the front view as well.

Noticing the difference between zooming in and out


Both of them noticed that the view was zoomed in when I asked them to walk fast. The girl indicated that she had got used to the original broad view; when the camera suddenly zoomed in, she could only see her own head blocking the view on the screen, which made her feel uncomfortable.

Getting confused by the turning indication


I assumed that when I turned my camera faster before people made a turn, they would notice it and try to catch up to get back into the screen view. But contrary to my expectation, the boy hesitated and stopped when I turned the camera towards one direction before he was about to make a turn. He explained: "I saw my character get lost in the view, so I stopped to wait for him to come back into the view." Here we can see that this unconscious reaction made him mistakenly think he was the one holding the camera.

Conclusion & Final concept


As mentioned before, through experiments 1 and 2 it was decided to apply the small screen with the front view in the final concept to transmit the quality of anticipation, which mainly aims at creating a feeling of safety for people. The following question was: based on this small screen, what else can I do to enhance the extent of real safety? After conducting experiments 3 and 4 and looking at their focus, I did find a transition: at first I mainly focused on testing how people feel about being guided to stay on track; later on, during the tests, I actually switched my focus to studying how people feel about the subtle changes I provided to them, which were relevant to their own behaviors. Then I realized what a nice direction this is! I was also inspired by what the girl said in the 4th experiment: when she saw herself on the screen, she would instinctively pay attention to the way she behaves. I started to think: instead of creating something to guide people where to go, why not create something that guides people to behave correctly, or at least makes them aware of how they behave (walking while playing with their phones)? Thus, I suddenly caught the ideal meaning of the small front view I put in the phone: it should not be something that you can fully rely on, but something that bridges you and the real world, or even drags you out of the virtual world. Additionally, what I also find interesting is that in experiments 3 and 4 people were sometimes forced to react to the subtle changes (e.g. the turning indication) I provided to them, and they did not really like this, since they felt somewhat controlled by others. Thus, I was thinking that being in control might also be a nice interaction quality to provide to the user, because it also matches my interaction vision: controlling in a cockpit. Based on these conclusions and reflections, I formulated my final concept direction as:

Final concept direction

Changing, or making pedestrians aware of, their behavior by providing them with subtle changes on the FrontView screen which are relevant to their incorrect behaviors while they focus on their phones.

The small mirror structure integrated into the phone case was designed based on the perfect mirror position I found during experiment 2. As the picture shows, mirror 1 is normally always in place and mirror 2 is hidden in the case; once you push the button, mirror 2 slides out and, at the same time, the specialized APP in your phone is started automatically, entering FrontView mode.
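To make FrontView mode a bit more concrete, here is a minimal sketch of the split display, not the actual app: it assumes a laptop webcam standing in for the mirror-folded rear camera and uses OpenCV to keep the wide, un-zoomed front view in a band above the reading content, following Findings 1 and 2 from the 2nd experiment.

```python
import cv2
import numpy as np

# Minimal FrontView sketch (illustrative only, not the actual app).
# Assumption: the default webcam stands in for the mirror-folded rear camera.
TEXT = "Joke printed here while the front view stays visible above."

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Keep the full (wide) camera image and shrink it into the top band
        # instead of zooming in -- participants preferred the wider view.
        front_view = cv2.resize(frame, (w, h // 3))
        # Reading area: a plain panel below, separated from the front view.
        reading = np.full((h - h // 3, w, 3), 255, dtype=np.uint8)
        cv2.putText(reading, TEXT, (20, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 0), 1, cv2.LINE_AA)
        cv2.imshow("FrontView mode", np.vstack([front_view, reading]))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```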


Specialized APP (providing front view) + Specialized PHONE CASE (small mirrors integrated) = Final Concept

The concept sketch pairs the incorrect behaviors with subtle changes on the FrontView screen:

WALK TOO FAST – narrow, zoomed-in view, falling ahead
TOUCH TOO OFTEN – dirty, fingerprinted screen
TYPE TOO QUICK – shaking, swaying, vibrating view
WATCH TOO LONG – dim, blurred, double image
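How these pairings could be wired up is still open; the sketch below only illustrates one possible mapping, applying placeholder effects to an OpenCV frame. The function names, parameters and the way behaviors are detected are my own assumptions, not part of the concept.

```python
import cv2
import numpy as np

# Illustrative mapping from detected behaviors to subtle FrontView effects.
# Detection inputs and parameters are placeholders, not from the report.

def zoom_in(frame, factor=1.4):
    """WALK TOO FAST -> narrower, zoomed-in, 'falling ahead' view."""
    h, w = frame.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return cv2.resize(frame[y0:y0 + ch, x0:x0 + cw], (w, h))

def smudge(frame):
    """TOUCH TOO OFTEN -> 'dirty, fingerprinted' screen (local blur patches)."""
    out = frame.copy()
    h, w = out.shape[:2]
    rng = np.random.default_rng(0)
    for _ in range(6):
        x, y = int(rng.integers(0, w - 80)), int(rng.integers(0, h - 80))
        out[y:y + 80, x:x + 80] = cv2.GaussianBlur(out[y:y + 80, x:x + 80], (31, 31), 0)
    return out

def shake(frame, amount=8):
    """TYPE TOO QUICK -> shaking, swaying, vibrating view (random translation)."""
    h, w = frame.shape[:2]
    dx, dy = np.random.randint(-amount, amount + 1, size=2)
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(frame, m, (w, h))

def dim_and_double(frame):
    """WATCH TOO LONG -> dim, blurred, double image."""
    dimmed = cv2.convertScaleAbs(cv2.GaussianBlur(frame, (7, 7), 0), alpha=0.5, beta=0)
    ghost = np.roll(dimmed, 12, axis=1)           # horizontally shifted copy
    return cv2.addWeighted(dimmed, 0.6, ghost, 0.4, 0)

EFFECTS = {
    "walk_too_fast": zoom_in,
    "touch_too_often": smudge,
    "type_too_quick": shake,
    "watch_too_long": dim_and_double,
}

def apply_feedback(frame, detected_behaviors):
    """Apply the subtle change for every behavior currently detected."""
    for name in detected_behaviors:
        frame = EFFECTS[name](frame)
    return frame
```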
