1st Experiment
Figure 1
This experiment was designed as a modification of the rearview mirror experiment conducted in the last cycle (Figure 1). A small mobile phone was fixed on top of the original smartphone with double-sided tape, giving the participant an additional front view. Through this experiment I wanted to see whether it would help if the participant could see the front view without lifting their head.
RESULT
I tested it with only one participant, who admitted that he could not fully benefit from the front view: the small screen was not angled towards him, and it was too dim compared with the big one. Both factors made it hard for him to follow what was happening on the small screen, so I decided to stop testing with this prototype and started to think about how to provide the front view on the big screen itself.
2nd Experiment
With this prototype, I conducted experiments with six different participants (see pictures on the right). The task was to read jokes printed on paper attached to the screen while walking, with the front view displayed on the screen. To get more in-depth insights, two different test situations were provided to each participant: in one, the joke was printed on a transparent sheet fully covering the screen, with the front view zoomed in to fill the whole screen (see picture above); in the other, the joke was printed on paper covering half the screen, with the front view kept separate from the text and at its original size (see picture below).
FrontView
Finding 1
Separated > Merged
Before conducting the test, I assumed that the transparent sheet (right) might work better than the left one: because the text and image overlapped, I thought that once something moved on the screen, people would notice it more easily. The test proved the opposite: most people preferred the left one, with the text separated from the front view. In the right situation, most participants indicated that while they were fully absorbed in the text on the transparent sheet, the background view blurred into mere color patterns, making it very hard for them to notice any changes except for large movements popping into the screen. In the left situation, some participants mentioned that the text and image were close enough that switching their eyes between the two contents was not difficult; moreover, splitting the text and image made them aware that they had to switch, and they would do so when necessary.
Finding 2
Wide > Narrow
Another interesting finding is that nearly all participants preferred the front view displayed on the original small screen (left picture) over the zoomed-in full screen (right picture). The main reason they gave is that the original view on the small screen is much wider than the zoomed-in view on the full screen, so a wider view shows more of the actual surroundings. Some also mentioned that a broader view gave them a greater feeling of safety, and even of being in control, because they could get more of an overview of what was happening in front of them at one time. The narrow view, even though it covered the full screen, provided only a narrow sight angle and an over-ahead view, which made people feel as if they were being pulled ahead of where they really were. Moreover, since the view was zoomed in, it shook much more fiercely than the original view, which was very irritating; this is another reason most participants preferred the small wide view, since it provided a steadier image.
Conclusion:
Generally, the experiments went quite well: all participants understood the use of the front view and were able to feel the quality of anticipation in this design. Personally, I really like this physical way of transmitting the anticipation quality, so I decided to keep it as part of the final concept. Possibilities for improvement drawn from the experiments will be taken into account when designing the details: e.g. separating the view from the text, widening the view to show more of the surroundings (see picture above), and keeping the view as steady as possible.

However, some drawbacks of this idea were also identified during the tests. Some participants mentioned that the front view on the small screen didn't help much in judging the distance and size of objects. If people rely too much on this little screen, the optical illusion of distance and size may even create other problems: e.g. a stone might seem to disappear from the screen when it is actually just out of the camera's range and could still be in front of your feet (see pictures below), tripping you up if you don't look at the real world. This raises the question: do I want to solve all these problems within such a small screen (e.g. adding distance/movement detection to this front view) so that it is perfect for users who never lift their heads, or do I want users to keep following their instinctive reaction (raising their heads when needed), with this front view acting only as an assistive tool? After introducing two more experiments, this question will be answered.
Figure 2
3rd Experiment
In parallel with the first experiment, another design engagement, focusing on guidance, was conducted; it was developed from the red belt idea in the last cycle. In this experiment, I tried to translate the guidance quality onto the phone screen. I used an iPad to Skype with the smartphone held in the participant's hand, filming a paper with text and sharing the screen with the participant, who was asked to walk while reading the text. During the walk, I added movements to the text by moving the iPad: e.g. I sometimes turned the text towards a direction before the participant made a turn in reality, or zoomed out the screen when the participant was too focused on reading. Through this design, I wanted to see how people react to, and feel about, being guided in this way. The test was conducted on the platform, a place where the Wi-Fi connection is stable and where it was easy for me to observe the participant.
Due to the poor test conditions, this experiment was run with only one participant, but that was enough to get some useful insights.
Findings
The direction indication works well
The direction indication actually worked quite well: the participant mentioned that every time the text slanted, his head slanted in the same direction, followed by his body and his feet.
4th Experiment
The focus of the 4th experiment was a combination of anticipation and guidance. A phone with a printed joke attached to it was given to participants, with half of the screen left for a video chat with an iPad. I, as the researcher, followed and filmed the participant with the iPad, so the participant could read the text and, at the same time, see an image of themselves walking. Because the view showed the participant from behind, the front view of the environment was also included on the screen, which was supposed to convey the quality of anticipation. Also, by showing someone walking on the screen, the participant might feel guided to some extent, transmitting the quality of guidance. During the walk, I again added some movement to the view, as in the last experiment: I turned the camera view before the participant made a turn and zoomed in the camera when the participant walked too fast; the difference here is that the movement occurred only in the video view, not as an obstacle to reading.
The test was conducted with two participants in the IO faculty, which has Wi-Fi coverage throughout.
Findings
It's interesting to see the back view of oneself
Both participants felt curious to see their own back view on the screen; one even mentioned it was the first time in his life he had seen his own back. The other participant said this view made her pay attention to the way she walked: if her walking posture looked silly on the screen, she would adjust it.
Making pedestrians aware of, or changing, their behavior by providing subtle changes on the front-view screen that are relevant to their incorrect behaviors while they focus on their phones.
Mirror 1
Mirror 2
The small mirror structure integrated into the phone case was designed based on the ideal mirror position found during the 2nd experiment. As the picture shows, mirror 1 is always exposed, while mirror 2 is normally hidden in the case; once you push the button, mirror 2 slides out and the specialized app on your phone starts automatically, entering FrontView mode.
Push
Specialized APP (providing front view) + Specialized PHONE CASE (small mirrors integrated) = Final Concept
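The push-to-activate behavior of the concept (case button pressed → mirror slides out → app enters FrontView mode) can be sketched as a tiny state machine. This is purely illustrative: the `FrontViewApp` class, the mode names, and the event-handler methods are invented for this sketch and are not part of any actual implementation of the concept.

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()      # regular phone use, mirror 2 hidden in the case
    FRONT_VIEW = auto()  # mirror 2 slid out, app shows the front view

class FrontViewApp:
    """Hypothetical sketch of the app side of the final concept."""

    def __init__(self):
        self.mode = Mode.NORMAL

    def on_case_button_pushed(self):
        # The case reports that mirror 2 has slid out:
        # switch the app into FrontView mode automatically.
        self.mode = Mode.FRONT_VIEW
        return self.mode

    def on_mirror_retracted(self):
        # Mirror 2 pushed back into the case: return to normal use.
        self.mode = Mode.NORMAL
        return self.mode

app = FrontViewApp()
app.on_case_button_pushed()
print(app.mode.name)  # FRONT_VIEW
```

In a real product the "button pushed" event would have to come from the case hardware (or be inferred, e.g. from a magnet and the phone's sensors); the sketch only shows that the app reacts to the physical action rather than requiring the user to launch it manually.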
(Picture annotations: dirty / fingerprinted, dim / blurred, double image)