MobileHCI 2009 report

Finally, I found some time to write a report about MobileHCI’09. The conference started in 1998 as a workshop and matured into a full conference in 2003. This year there were 306 participants, and the acceptance rate for full papers was 24%.

The keynote speaker was Jun Rekimoto, professor at the University of Tokyo and researcher at Sony Research Lab. He presented some of the work he has conducted on “Large Scale Integration of Real and Virtual Worlds”. One of their latest projects is called PlaceEngine. It is basically a WiFi-based position recognition engine that can also combine GPS information. The idea of building this infrastructure started a couple of years ago with a project called Annotated Reality [Rekimoto 1998]. WiFi positioning is important because it is fast to acquire a location fix and works both indoors and outdoors. Finally, it can distinguish the height at which the user is located. Their basic idea is that people can participate in the data collection needed to build the database of access points: they want to turn a folksonomy-style collection effort into WiFi infrastructure sensing (a “sensonomy”). Prof. Rekimoto gave the example of wiper weather maps (windscreen-wiper activity on taxi cabs used as weather sensors). They basically came up with a strategy to geolocate the position of the access points, similar to what I was proposing for GSM antennas.
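
The talk did not go into PlaceEngine’s estimation details, but the general idea of crowd-sourced access-point geolocation can be sketched very simply. The snippet below is a minimal illustration, assuming a weighted centroid of the GPS fixes at which each access point was heard; it is not PlaceEngine’s actual algorithm, and the data format is made up.

```python
# Sketch: geolocate WiFi access points from crowd-sourced scans by taking a
# weighted centroid of the GPS fixes where each AP was observed, weighted by
# received signal strength. Illustration only, not PlaceEngine's algorithm.
from collections import defaultdict

def estimate_ap_positions(scans):
    """scans: iterable of (bssid, lat, lon, rssi_dbm) observations."""
    acc = defaultdict(lambda: [0.0, 0.0, 0.0])  # bssid -> [sum_w*lat, sum_w*lon, sum_w]
    for bssid, lat, lon, rssi in scans:
        w = 10 ** (rssi / 10.0)  # stronger signal -> phone was closer -> higher weight
        acc[bssid][0] += w * lat
        acc[bssid][1] += w * lon
        acc[bssid][2] += w
    return {b: (s_lat / s_w, s_lon / s_w) for b, (s_lat, s_lon, s_w) in acc.items()}
```

Once the access points are geolocated, a phone’s position can in turn be estimated from the set of APs it currently hears, which is what makes the approach work indoors where GPS fails.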

They used this algorithm to implement some real-world markerless AR.

I basically opened the conference by presenting the first paper, titled “Text versus speech: a comparison of tagging input modalities for camera phones”:

Speech and typed text are two common input modalities for mobile phones. However, little research has compared them in their ability to support annotation and retrieval of digital pictures on mobile devices. In this paper, we report the results of a month-long field study in which participants took pictures with their camera phones and had the choice of adding annotations using speech, typed text, or both. Subsequently, the same subjects participated in a controlled experiment where they were asked to retrieve images based on annotations as well as retrieve annotations based on images in order to study the ability of each modality to effectively support users’ recall of the previously captured pictures. Results demonstrate that each modality has advantages and shortcomings for the production of tags and retrieval of pictures. Several guidelines are suggested when designing tagging applications for portable devices.

[link to PDF] [DOI] [Slides of the presentation]

Sian Lindley presented a qualitative study on pictures taken with SenseCam devices. They noticed that pictures taken with SenseCams capture things that are normally not photographed. People in their qualitative study did not use the SenseCams to record a lifelog; rather, they used them for a special kind of photography. Their work focused on the aesthetics and sentimentality of photography. The paper is titled: Frozen in time and “time in motion”: Mobility of vision through a SenseCam lens, and it is available here.

Arto Puikkonen presented an interesting field study on how people create videos with mobile phones. They recruited 11 participants who collected a total of 255 videos over two weeks. They found that 65% of the videos were planned and were watched on large displays. Also, over 85% of the videos were meant for the videographers themselves rather than for others. The paper was titled: Practices in Creating Videos with Mobile Phones.

Peter Mockel delivered the industrial keynote for Deutsche Telekom AG. He described how at DT Labs they are trying to get users more involved. They have 150 researchers, yet they manage to produce 250 publications per year and a patent application per week. They designed a game called “Scotland Yard on your mobile phone”.

Richard Harper from MSR Cambridge presented “Glancephone: an exploration of human expression”. His initial argument was that sociology does not exist. He quoted a book by Hutchinson, Read and Sharrock titled “There is No Such Thing as a Social Science”. In essence, the point that Winch was trying to make, which has so often been misinterpreted (though thankfully clarified by Hutchinson, Read and Sharrock), is that the desire to utilise and replicate the methods and achievements of the ‘natural’ sciences within the ‘social’ and ‘human’ sciences is profoundly mistaken. The concept of a ‘social science’ is a misnomer that merely displays itself as ‘bad’ philosophy and is the very scientism that Wittgenstein and Winch aimed to steer us away from. In their own work, Harper and colleagues are interested in fitting design to human use. Richard explained how communicating is a really rich activity for people, one that is usually reduced when mediated by technology. The Glancephone allows users to let callers glance at what they are doing before making a phone call. They conducted a user study to understand how it was perceived. What happened during the trial was that people used the system to be glanced at, not to glance at others.

Karin Leichtenstern presented a study titled “Studying Multi-user settings for pervasive games”. They tried to understand the best way to allocate resources in a multi-user pervasive game so as to balance collaboration and communication. They conducted a user study with 18 children, split into three groups with different configurations.

Ohad Inbar presented a study on how to design “Online Help in Mobile Devices”. They noted that roughly 1 out of 7 phones sold is returned as “faulty”, yet 63% of these returned phones are not actually broken. Built-in help does not work well because these users are not good at seeking out such information; on the other hand, proactive online help might be considered intrusive. They therefore designed context-aware help that kicks in whenever the user faces a difficult situation, detecting problematic interactions from the user’s activity as observed by the system.
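
The summary above does not say how exactly the system decides that the user is struggling, so here is a purely hypothetical sketch of one heuristic such context-aware help could use: treat repeated back/cancel/undo actions within a short time window as a sign of trouble and then offer help for the current screen. The event names and thresholds are assumptions, not the authors’ method.

```python
# Hypothetical trigger for context-aware help: several "struggle" events
# (back, cancel, undo) within a short window suggest the user is stuck.
import time

class HelpTrigger:
    def __init__(self, max_events=3, window_s=30):
        self.max_events = max_events
        self.window_s = window_s
        self.events = []  # timestamps of recent struggle events

    def record(self, event):
        """Record a UI event; return True when help should be offered."""
        now = time.time()
        if event in ("back", "cancel", "undo"):
            self.events.append(now)
        self.events = [t for t in self.events if now - t < self.window_s]
        return len(self.events) >= self.max_events
```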

Anupriya Ankolekar presented “Friendlee: A mobile application for your social life”. Their starting assumption is that true social networks are smaller than the set of contacts you have on Facebook. Interactions are driven by smaller, more intimate groups of people, as seen on Twitter and in phone communication (call detail records), and it is difficult to filter out the unwanted stuff on Facebook to concentrate on this core. Friendlee constructs your intimate social network from your call logs and SMS, and lets you share personal context and browse your friends’ connections. Big social networks do not distinguish between general and intimate contacts. MIG33 is a phone-only social network application, like Loopt. [more]
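
Conceptually, deriving such an intimate network from call and SMS logs amounts to ranking contacts by how often and how recently you interact with them. The sketch below shows that idea with an assumed exponential-decay score; it is an illustration, not Friendlee’s actual ranking method.

```python
# Rank contacts by frequency and recency of interaction (calls/SMS) and keep
# the top few as the "intimate" network. The scoring formula is an assumption.
from datetime import datetime

def intimate_contacts(log, top_n=15, half_life_days=30):
    """log: list of dicts like {"contact": "alice", "when": datetime(...)}."""
    now = datetime.now()
    scores = {}
    for entry in log:
        age_days = (now - entry["when"]).total_seconds() / 86400.0
        weight = 0.5 ** (age_days / half_life_days)  # recent interactions count more
        scores[entry["contact"]] = scores.get(entry["contact"], 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```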

Nirmal Patel presented a study on “Two Thumbs Chording”. This work explains and evaluates a technique for entering text on a mobile phone keyboard with two thumbs. The paper includes a nice methodology and metrics for comparing input techniques.

Stephen Brewster presented a paper titled “Pressure-Based Text Entry for Mobile Devices”. Their motivation was that pressure input is currently of little use for interaction because we lack good ways of giving feedback to users so that they can adapt their movements and finely control the device. They designed a pressure keyboard where a light press produces a lowercase letter and a hard press produces an uppercase letter. They found that good feedback is key for pressure input. The pressure keyboard is slower than other input methods but is more robust to errors when the user is walking. They used the NASA TLX to measure workload.
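
The core mapping they describe (light press gives lowercase, hard press gives uppercase) is trivial to express; the real contribution of the paper is the feedback design around it. A toy version, with an assumed normalised pressure range and threshold:

```python
# Toy pressure-to-case mapping: light press -> lowercase, hard press -> uppercase.
# The threshold and the [0, 1] pressure range are assumptions for illustration.
def key_with_pressure(letter, pressure, threshold=0.6):
    """pressure: normalised sensor reading in [0, 1]."""
    return letter.upper() if pressure >= threshold else letter.lower()

print(key_with_pressure("a", 0.3))  # -> a
print(key_with_pressure("a", 0.8))  # -> A
```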

Kun Yu presented a paper titled “Coupa: operation with pen linking”. It is basically an interface where the user operates the phone by drawing a line between labels sitting on the edges of the screen. The labels represent actions and items that the user uses frequently.

Leif Oppermann presented a study titled “Ubikequitous computing”. They explained how cycling is good for many reasons, and they want to augment riding by providing historical and other information while the user is cycling. The first project was called the “Sillitoe Trail“. The second study was called “Rider Spoke”, where players could freely explore the city for an hour; they could listen to a recorded narrator and record messages that other players could later hear. The paper contains useful design suggestions for interactive systems for cyclists.

Ronald Ecker, from BMW Research, presented an interactive menu for car entertainment systems called PieTouch. The paper addresses the complexity of designing interactive menus for in-car entertainment systems that do not conflict with the most important navigation functionalities of the car.
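
The paper itself covers the interaction and evaluation details; purely as an aside, the basic geometry of a pie menu is simple: the selected item is determined by the angle of the stroke from the point where the menu was opened. The sketch below is a generic pie-menu selection, not PieTouch’s actual implementation, and the item labels are made up.

```python
# Generic pie-menu selection: map the stroke angle to one of N sectors.
import math

def pie_item(origin, touch, items):
    """Return the item selected by a stroke from origin to the touch point."""
    dx, dy = touch[0] - origin[0], touch[1] - origin[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)          # 0 = towards the right
    sector = 2 * math.pi / len(items)
    return items[int(((angle + sector / 2) % (2 * math.pi)) // sector)]

print(pie_item((0, 0), (10, 1), ["music", "phone", "navigation", "climate"]))  # -> music
```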

Martin Pielot presented a study on how to support map-based wayfinding. Their initial argument was that paper maps are favored over navigation systems and allow more effective pedestrian navigation than GPS devices [Ishikawa et al., 2008; Rukzio et al., 2009]. They developed a belt with multiple vibration motors that offers egocentric cues pointing the user toward the final destination, and they designed a user study to understand the impact of this technology. They found that the belt helped pedestrian navigation considerably: it allowed participants to orient the map and to detect and correct map mistakes that were difficult to fix without it (e.g., pedestrian paths).
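
The core computation behind such a tactile belt is choosing which motor to fire: take the bearing from the user’s position to the destination, subtract the user’s compass heading, and activate the motor closest to that relative direction. The sketch below assumes an eight-motor belt and illustrates the idea rather than the authors’ implementation.

```python
# Pick the belt motor that points towards the destination.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def motor_to_vibrate(user_pos, heading_deg, target_pos, n_motors=8):
    """Return the index of the motor closest to the target direction (0 = front)."""
    relative = (bearing_deg(*user_pos, *target_pos) - heading_deg) % 360
    return round(relative / (360 / n_motors)) % n_motors
```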

Simon Robinson presented a study titled “Sweep-Shake: Finding Digital Resources in Physical Environments”. They designed an interaction technique to access digital resources by sweeping and shaking the device: by moving the device around, users can select specific content, and vibration feedback helps them locate the places in the environment that the content is attached to.

Johannes Schoning presented a study titled “PhotoMap: Using Spontaneously Taken Images of Public Maps for Pedestrian Navigation Tasks on Mobile Devices”. They started from the observation that when you are visiting a park you do not use your mobile navigation device; instead you use the public maps on site to understand where to go, because they contain better information than you could find online. The idea of PhotoMap is to take a picture of these publicly available maps, transfer it to the mobile device, and add dynamic positioning on top of it. The main challenge behind PhotoMap is the georeferencing required to position yourself on top of these locally available maps. Such user-generated maps are richer than the ones you can get from Google, Yahoo, or Microsoft. [more]
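
Georeferencing a photographed map can be done quite simply once two points on the photo have been matched to known GPS coordinates, provided the map is roughly north-up and uniformly scaled. The sketch below interpolates the current GPS fix into pixel coordinates under those assumptions; it is a simplification for illustration, not the paper’s actual method (the two reference points must differ in both latitude and longitude).

```python
# Map a GPS fix to a pixel position on a photographed, north-up map, given
# two pixel<->GPS reference correspondences. Simplified illustration only.
def gps_to_pixel(ref_a, ref_b, fix):
    """ref_a, ref_b: ((px, py), (lat, lon)); fix: current (lat, lon)."""
    (ax, ay), (alat, alon) = ref_a
    (bx, by), (blat, blon) = ref_b
    px_per_lon = (bx - ax) / (blon - alon)   # horizontal scale
    px_per_lat = (by - ay) / (blat - alat)   # vertical scale (negative if y grows downward)
    return (ax + (fix[1] - alon) * px_per_lon,
            ay + (fix[0] - alat) * px_per_lat)
```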

Alireza Sahami Shirazi presented a paper titled “Emotion Sharing via Self-Composed Melodies on Mobile Phones”. The paper includes interesting references that are relevant for MobiMood. They designed a system for composing tunes that can be sent to a recipient to share emotions.

Finally, Markku Turunen, from Tampere University, presented a paper titled “User Expectations and User Experience with Different Modalities in a Mobile Phone Controlled Home Entertainment System”. The authors designed an experiment to test different ways of using a mobile phone to control a home entertainment system. The best part of this paper was its interesting methodology for eliciting users’ feedback. The same methodology was presented at INTERACT and INTERSPEECH. [more]
