CHI 2011 report

CHI 2011 was held in Vancouver, Canada, on May 7-11 [1]. For those who do not know CHI, it is the premier conference on human-computer interaction. This year there were over 3K attendees, and over 6.9K authors submitted 2.5K articles, of which 20% were accepted for presentation at the conference. The event had 16 tracks running in parallel, with over 150 presentations every day. The keynote, delivered by Howard Rheingold (author of Smart Mobs), was about learning.

I attended the first three days of the conference and –as always– there were many papers that got my attention. What you are going to read next are some notes I took of papers that I found interesting. Particularly, I attended sessions on the following topics: research methods (mostly qualitative), telepresence, tagging, low-cost ICT for development, microblogging, user studies in developing regions, wireless networks, home automation, HCI for peace, location sharing, and low-cost phones.

On the first day, I attended a session on research methods in which three papers received honorable mentions. Of these, I liked the presentation by Eric Baumer (Cornell), who conducted a study to compare activity theory and distributed cognition. His argument was that, depending on the theory you choose as a researcher, you can get results that are dramatically different. To prove the point, they conducted a mock study that was analyzed using both theories, and they discussed several points that help researchers choose the right theoretical framing. The other presentation I liked was that of Jens Riegelsberger and Audrey Yang (Google), who reported on methodological issues they identified while conducting field research across 9 locations. Several things worked well, such as in-field pre-analysis, cloud tools and templates used by the various teams to share data, and card-sorting at the basecamp. However, several things did not work well, such as safety margins for logistics that were too short and a workload for field teams that was set too high. In the same session, Leah Findlater (HCIL, U Maryland) presented the Aligned Rank Transform (ART) for non-parametric factorial analysis [2]. Error rates and user satisfaction are often measured with ordinal scales, and error rates are often skewed towards zero, so the data cannot be analyzed with a factorial ANOVA. The method they presented retains familiarity with the F-test and allows factorial analyses to be conducted with ANOVA procedures in these situations: responses are aligned for the effect of interest, ranked, and the usual ANOVA is then run on the ranks. Unfortunately, they did not present the underlying math; I need to look at the paper.
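ART's core move, aligning the responses for one effect and then ranking them, can be sketched roughly as follows. This is my reconstruction of the general technique from the talk, not the authors' code; the 2x2 data layout and the function names are hypothetical.

```python
from statistics import mean

def average_ranks(values):
    """Rank values 1..n, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def art_align_for_A(data):
    """Align responses for the main effect of factor A in a two-factor
    design, then rank them.  data: dict (a, b) -> list of responses."""
    ys = [(cell, y) for cell, vals in data.items() for y in vals]
    grand = mean(y for _, y in ys)
    a_mean = {a: mean(y for (ca, _), y in ys if ca == a) for a, _ in data}
    cell_mean = {cell: mean(vals) for cell, vals in data.items()}
    # aligned value = cell residual + estimated effect of A only
    aligned = [(y - cell_mean[cell]) + (a_mean[cell[0]] - grand)
               for cell, y in ys]
    return average_ranks(aligned)
```

After ranking, one would run the ordinary factorial ANOVA on the ranks and read off only the F-test for factor A, repeating the alignment separately for each main effect and interaction.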

In the afternoon, I attended the Designing for Democracy session. The first presentation did not fully fit the session because it was about persuasive technology to promote an ideal weight. It was presented by Victoria Schwanda (Cornell). They presented a system called Fit4Life that used sensors to monitor everything the user was doing, and even to listen to their conversations, with the aim of persuading them to adopt a more active lifestyle. The system is NOT real: they created this design to provoke discussion about the limits this kind of technology should have. The second presentation, delivered by Joan DiMicco (IBM Research), was about how to engage citizens through visualizations of congressional legislation. She proposed 4 stages of engagement with government data: a. understanding, b. communication, c. interpretation, d. contribution. Their system dealt with the first stage. They used MALLET, a machine-learning toolkit, to assign each part of a bill to a certain topic. They analyzed the usage patterns of power users vs. casual users, and also interviewed many of the casual users. Later, Moira Burke (CMU) presented Social Capital on Facebook, a longitudinal study of social capital based on kinds of Facebook activity and individual differences. They ran longitudinal surveys paired with Facebook server logs. To measure causal relations between the variables, they used a lagged dependent variable analysis. They found that lots of direct communication is associated with well-being. So, for social capital it is not enough to have friends in your network; the benefits come from interacting with them one-to-one. Similar findings were presented by Christian Yoder (U North Carolina): status updates were not associated with an increase in social capital, mostly because these updates were not “talking” to anybody in particular.
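For reference, a lagged dependent variable analysis regresses the later outcome on the earlier predictor while controlling for the earlier value of the outcome. A rough illustration on simulated data (the variable names and coefficients below are invented, not the study's):

```python
import numpy as np

# Simulate two survey waves: later well-being depends on earlier
# well-being plus earlier direct communication, plus noise.
rng = np.random.default_rng(42)
n = 500
wellbeing_t1 = rng.normal(size=n)
direct_comm_t1 = rng.normal(size=n)
wellbeing_t2 = (0.5 * wellbeing_t1 + 0.3 * direct_comm_t1
                + rng.normal(scale=0.5, size=n))

# OLS with the lagged outcome as a control: intercept, lagged DV, predictor.
X = np.column_stack([np.ones(n), wellbeing_t1, direct_comm_t1])
beta, *_ = np.linalg.lstsq(X, wellbeing_t2, rcond=None)
# beta[2] estimates the effect of direct communication on later
# well-being, net of earlier well-being.
```

Controlling for the time-1 outcome is what lets the coefficient on the predictor be read as change in the outcome rather than a mere cross-sectional association.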

Later the same day, I attended the “tagging” session. One of the most interesting presentations was delivered by Michael Bernstein (MIT), who talked about “friendsourcing”. The whole concept was closely related to the Social Tagging Revamped paper that we presented last year. The basic premise is that some applications need specific information about you and can then perform very interesting forms of personalization. He described in particular Collabio, a Facebook game that allows participants to tag each other with keywords describing their interests and preferences. Using this information, they could power a number of services that are better tailored to people’s needs. He also listed a number of commercial services designed around the “social crowdsourcing” premise: GuessWho, FeedMe, Social Q&A (a specialized version of Quora), and Socialpedia. All in all, it was interesting to see that our idea of crowdsourcing is catching on and being incorporated into several commercial products. Michael has posted the paper and the slides of the presentation on his website [3].

On Tuesday, I attended the low-cost ICT for development session. Elba del Carmen (U Duisburg-Essen) and colleagues conducted surveys and field studies to understand how children appropriated mobile phones in rural classrooms in Panama; the phones were lent to the students for the duration of the study. Ruy Cervantes (U California Irvine) presented a study of how Mexican schools used low-cost laptops. Their findings showed that the ecological infrastructure is key to supporting laptop-based education: technology coordinators were extremely important for bringing teachers up to speed and administering the sharing of resources, and a strong human infrastructure was key to supporting change. Next, Gaurav Paruthi (MSR India) presented a study on how DVD players can be used as offline browsers for Wikipedia. Their basic premise was that the DVD player penetration rate is higher than the PC penetration rate in India, so they designed a distribution of Wikipedia on DVD [4]. Menu navigation and search were done through the television’s remote control. They believe this is the cheapest way of distributing multimedia content in developing regions.

In the afternoon, I attended the session on microblogging behavior. Kate Starbird (U Colorado) presented a paper titled “Voluntweeters: Self-Organizing by Digital Volunteers”. They released a microsyntax for Twitter to help during emergency situations and studied how people used Twitter during the earthquake in Haiti last year. Volunteers were retweeting, tweeting Ushahidi reports, verifying information, and putting people in contact with local coordinators. The volunteers did not know each other before the quake, so the authors studied how the organization emerged. Cathy Marshall (MSR) presented a study of people’s perception of ownership of user-generated content. Who owns the tweets? What limits do people feel apply to media they did and did not create themselves? For many people it is perfectly fine to take online content and re-use it for personal presentations and communications. Haewoon Kwak (KAIST) presented a study on the reasons why people unfollow others in social networks. Studying this behavior is hard because social networks do not expose unfollow events. They therefore scraped a dataset of 1.2M users in Korea, collected daily snapshots of the follow networks, and compared consecutive structures of each network. They found that people unfollow frequently on Twitter: 43% of the time during the two months after following a peer, with an average of 15 unfollows per person. The study reported a number of reasons for unfollowing a peer. A similar study was conducted by Funda Kivran-Swaine (Rutgers U), who focused on the impact of network structure on the breaking of ties in online social networks. They tried to understand which structural properties of a node’s social network can predict the breaking of ties on Twitter. They used a huge dataset that was analyzed with multi-logistic regression; the model is reported in the paper. The more neighbors a dyad shared, the less likely the relationship was to break. They also found that the follow-back rate on Twitter is a good indication of status in the social network.
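The snapshot-diffing step at the heart of Kwak's method can be sketched as follows (the data format is my assumption, not the authors'):

```python
def unfollow_events(prev, curr):
    """Compare two daily snapshots of a follow network.
    prev, curr: dict user -> set of accounts that user follows.
    Returns dict user -> set of accounts unfollowed since prev."""
    return {u: followees - curr.get(u, set())
            for u, followees in prev.items()
            if followees - curr.get(u, set())}

# Hypothetical snapshots taken one day apart:
day1 = {"alice": {"bob", "carol"}, "bob": {"alice"}}
day2 = {"alice": {"carol"}, "bob": {"alice"}}
# unfollow_events(day1, day2) -> {"alice": {"bob"}}
```

Repeating this over consecutive daily snapshots yields the stream of unfollow events that the real networks never expose directly.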

Less related to this last group was the study presented by Jennifer Golbeck (U Maryland) on computing political preference among Twitter followers. They used a list of Congress members who are active on Twitter, together with a secondary source of information to estimate how liberal or conservative each of them is, and they intersected this information with stories on online news sites. They found that people tend to follow politicians whose ideas reflect their own. They are thinking about using this system to build a recommender system, and the same method could be used to rate companies by their environmental scores.
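A naive version of this kind of follower-based scoring might look like the following (the averaging scheme, names, and the -1..+1 scale are purely illustrative, not the paper's actual method):

```python
def estimated_preference(followed, ideology):
    """Average the ideology scores of the politicians a user follows.
    followed: set of account names the user follows.
    ideology: dict politician -> score in [-1, 1] (liberal..conservative).
    Accounts without a known score are ignored."""
    scored = [ideology[p] for p in followed if p in ideology]
    return sum(scored) / len(scored) if scored else None

# Hypothetical scores for three members of Congress:
ideology = {"rep_a": 0.8, "rep_b": -0.6, "rep_c": 0.4}
# estimated_preference({"rep_a", "rep_c", "someone_else"}, ideology) -> 0.6
```

The same aggregation would work for any per-account score, which is why the authors suggest reusing it for, e.g., environmental ratings of companies.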

In the afternoon, I attended a session on user studies / ethnography in developing regions. Indrani Medhi (MSR India) presented a study on designing mobile interfaces for novice and low-literacy users. Deepti Kumar (IIT Madras) presented a study on how mobile payments are handled in India, focusing on how people bargain and negotiate prices. Elisa Oreglia (U California Irvine) described information-sharing practices and ICT use in rural northern China. They found an abundance of information but little localization of it, and an abundance of ICT but under-utilization. They also found a prevalence of oral information exchanges and that information brokers are extremely important.

One last paper in the afternoon caught my attention. Barry Brown (U California San Diego) presented an interesting study on challenges and opportunities for field trial methods. The paper discusses methodological challenges in running user trials. They constructed a mock trial to examine how trial insights depend on the practices of investigators and participants. The best quote: “participants do not like your system, they like you!”.

On Wednesday, I attended the wireless networks session. Marshini Chetty (Georgia Tech) presented a paper on making network speeds visible. They designed Kermit, a prototype [5] that visualizes who is online, allows personalization of the display, shows the biggest bandwidth user, and shows a history of bandwidth usage so that users can make correlations. Their interviews showed that participants had little understanding of what bandwidth is and little knowledge of how Internet applications use it. The tool seemed to help them understand who was consuming the Internet connection in the household, and it allowed them to control network usage in ways they were not used to.
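Finding the "biggest bandwidth user" from per-device readings could be sketched like this (the sample format is my assumption; my notes do not describe Kermit's actual implementation):

```python
from collections import defaultdict

def bandwidth_totals(samples):
    """samples: iterable of (device, bytes_used) readings collected
    over some time window.  Returns total bytes per device."""
    totals = defaultdict(int)
    for device, used in samples:
        totals[device] += used
    return dict(totals)

def biggest_user(samples):
    """Device with the highest total usage, or None if no samples."""
    totals = bandwidth_totals(samples)
    return max(totals, key=totals.get) if totals else None

# Hypothetical readings from a home router:
samples = [("laptop", 300), ("tv", 900), ("laptop", 200), ("phone", 50)]
# biggest_user(samples) -> "tv"
```

Keeping the per-window totals around is also what enables the usage-history view that participants used to correlate slowdowns with household activity.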

Rhythms and plasticity: television temporality at home

Irani, L., Jeffries, R., and Knight, A. Rhythms and plasticity: television temporality at home. Personal Ubiquitous Comput. 14 (October 2010), 621–632. [PDF]


This work focuses on the new temporalities of media consumption in the home enabled by new media technologies. The authors conducted in-home interviews and a diary study with 14 households over two weeks. They found many instances of the classic image of television temporality, namely families relaxing together in front of the television, as well as examples of time-shifting to adjust broadcasts to fit one’s agenda. However, they also found a range of complex temporal patterns that sit between these two extremes.

They found many instances of rhythmic television watching. These rhythms are subject to change and are negotiated, and when made visible they become a resource for social coordination. DVR technologies gave people flexibility in the times at which they watched television.

They found that rhythms might span households. However, while most of the previous literature focused on the synchronous nature of these social events, they found instances of people watching television independently, at individually convenient times, that still sustained the rhythm of collective discussion.

They also found accounts of “plastic” television watching. Plastic time is the variable, ad hoc time that fits between or alongside other activities. On-demand cable television allowed users to browse lists of shows, find one that fit into their anticipated amount of time, and immediately begin watching. This kind of television also allowed users to skip content that was not relevant or interesting, or to watch more episodes of a series of interest.

The study concludes with interesting and relevant design implications:

1) the design and support of temporal awareness may support the sociality of television. Designs that support asynchronous sociality might also help some users keep up with television shows;

2) on demand TV might support plastic time by having a selection of very short video segments;

3) recommender systems would benefit from a sense of timing and social awareness, recognizing that timing might be as important as the stories or genres presented.

An examination of daily information needs and sharing opportunities

Dearman, D., Kellar, M., and Truong, K. N. An examination of daily information needs and sharing opportunities. In Proceedings of the 2008 ACM conference on Computer supported cooperative work (New York, NY, USA, 2008), CSCW ’08, ACM, pp. 679–688. [PDF]


The main argument of this work is that context-sensitive information needs can be supported by individuals in the social network. The authors support the idea that many contextual needs require specialized knowledge that is often not available on the Internet.

Under this assumption, they conducted a 4-week diary study in which a diverse group of participants recorded the information they needed or wanted to share. They collected 1290 entries, which were analyzed using grounded-theory affinity analysis, and grouped the needs into 9 main categories with related subcategories. Participants were able to satisfy their needs 45.3% of the time. They satisfied their information needs by asking someone, going to a location where the information was available, looking up the answer on the web, and using other methods such as GPS, paper documents, trial and error, and other media.

By looking qualitatively at the answers, they observed some interesting facts: the timeliness of the information was a key factor, and the trust relationship with the source of the answer was also a frequently cited variable that participants took into account.

WishTree: share your wishes

The basic idea of the application is that you can formulate a wish and share it with people around you. The wish takes the form of a seed that you need to cultivate by giving it enough water and light. Little by little, the wish grows into a tree and starts making flowers. People nearby can see your wish because it is planted at a geolocated position, and they can send you comments and encouraging messages to help you make your wish come true.

The concept of the app came out of the joint work that we did last summer with the UDSI team in Barcelona. I also like the idea because of my past work with the DigitalSeed [link] etc.

If you want to try out the app: WishTree

[Screenshots: WishTree_1.png, WishTree_2.png]

Information needs and practices of active mobile internet users

Heimonen, T. Information needs and practices of active mobile internet users. In Proceedings of the 6th International Conference on Mobile Technology, Application & Systems (New York, NY, USA, 2009), Mobility ’09, ACM, pp. 50:1–50:8. [PDF]


The author of this paper looked at the effect that data plans on smartphones have on mobile information needs and practices. The author ran a diary study with 8 participants over the course of two weeks. Participants filled in a web form where they were asked to answer a certain number of questions (where / when / what / how / did you find what you were looking for?). They used a strict definition of information need: a need for a piece of information that you cannot recall from memory, that is not immediately available to you, and that you would likely spend a few minutes attempting to resolve while mobile.

Using simple coding, they classified the information needs into 15 topical categories and divided them into utilitarian (pragmatic) and hedonic (entertainment) needs. Many of the needs (45%) were addressed immediately, and the majority of users addressed their needs through web search. Interestingly, they also found that the contributing reason for an information need was often (33% of the time) not easily attributable to any environmental factor.

The paper also presents some nice implications for design, such as the use of communal knowledge to satisfy mobile search needs. They also suggested several techniques that take advantage of the user’s interactions to predict mobile information needs.

Design Competition at MobileHCI 2011: The Essence of Mobile Communication and Connectivity

The text of the design competition has been published on the MobileHCI’11 website (design brief below):

I would be grateful if you could spread the word to colleagues in academia and industry who might be interested.

Design brief: The Essence of Mobile Communication and Connectivity

Mobile technology is close to the everyday lives of more than half of the world’s population, a fact reflected in how often we hear about it. However, designers, researchers, technologists, and journalists alike tend to give too much attention to what’s new, especially around high-end, so-called ‘smart phones’. Arguably, smart phones embody the exciting future of mobile connectivity, but they may not reflect the reality of mobile connectivity for the majority of users around the world for many years to come, due to affordability, infrastructure, total cost of ownership, and, simply, user preference.

Overall, people’s behaviors around mobile communication devices have certainly changed over the past decades. People have experienced a considerable diversity of mobile phones, so it is logical to assume that what people consider the essential features of a mobile phone has diversified and changed as well.

We therefore challenge contestants to define the essence of mobile communication and connectivity and to demonstrate what, why, and how to design for it. Most importantly, contestants need to think about a mobile device that is not about extra new features but is rooted in the good design of its core functionality and the experience it will create. Perhaps we can think about a phone that uses the exact same hardware as existing basic phones but with a completely redesigned user experience. Perhaps we can think about a phone that “pushes” some of its intelligence to the cloud, or a phone that can be operated by voice and does not require a display.

Context-aware computing applications

B. Schilit, N. Adams, and R. Want, “Context-aware computing applications,” in WMCSA ’94: Proceedings of the 1994 First Workshop on Mobile Computing Systems and Applications, (Washington, DC, USA), pp. 85–90, IEEE Computer Society, 1994. [PDF]


This paper describes software that reacts to an individual’s changing context. According to the authors, three important aspects of context are: where you are, who you are with, and what resources are nearby. Context includes different aspects of the physical environment around the user.

To investigate these topics, they developed the ParcTab, a small hand-held device that uses an infrared-based cellular network for communication. The Tab acts as a graphics terminal, and most applications run on remote hosts.

Using this experimental environment, they describe four interaction mechanisms:

  1. Proximate Selection: located objects that are nearby are emphasized or otherwise made easier to choose.
  2. Automatic Contextual Reconfiguration: the process of adding new components, removing existing components, or altering the connections between components.
  3. Contextual Information and Commands: contextual queries and commands can produce different results according to the context in which they are issued.
  4. Context-Triggered Actions: sets of rules that specify how context-aware systems should adapt.
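A toy condition-action rule engine in the spirit of the fourth mechanism might look like this (the context keys and action names are invented for illustration, not taken from the paper):

```python
# Each rule pairs a context predicate with the action to take when
# that predicate holds over the current context dictionary.
rules = [
    (lambda ctx: ctx.get("location") == "meeting_room", "mute_ringer"),
    (lambda ctx: ctx.get("nearby_printer") and ctx.get("has_pending_print"),
     "offer_print_dialog"),
]

def triggered_actions(ctx, rules):
    """Return the actions whose context conditions currently hold."""
    return [action for condition, action in rules if condition(ctx)]

# Example context: user walks into a meeting room near a printer,
# with no print job pending.
ctx = {"location": "meeting_room", "nearby_printer": True,
       "has_pending_print": False}
# triggered_actions(ctx, rules) -> ["mute_ringer"]
```

Re-evaluating the rule set whenever the sensed context changes is what makes the adaptation automatic rather than user-initiated.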


An operational definition of context

A. Zimmermann, A. Lorenz, and R. Oppermann, “An operational definition of context,” in CONTEXT’07: Proceedings of the 6th international and interdisciplinary conference on Modeling and using context, (Berlin, Heidelberg), pp. 558–571, Springer-Verlag, 2007. [PDF]


This paper presents a summary of theoretical definitions of context developed in the past in the field of computer science. The authors argue that most of the previously proposed definitions were indirect definitions that used synonyms, or were either too general or incomplete.

By summarizing previous work, the authors present an operational definition of context that can be used to characterize the situation of an entity. According to the authors, the elements describing this context information fall into five categories:

  1. individuality
  2. activity
  3. location
  4. time
  5. relations

Also, according to the authors, something is context because of the way it is used in interpretation, not due to its inherent properties. When interacting and communicating in everyday life, the perception of situations, as well as the interpretation of context, plays a major part. Therefore, the authors present some operational additions to the general definition: context transitions, variation of approximation, change of focus, shift of attention, shared contexts, the establishment of relations, the adjustment of shared contexts, and the exploiting of relationships.


Toward a multidisciplinary model of context to support context-aware computing

N. A. Bradley and M. D. Dunlop, “Toward a multidisciplinary model of context to support context-aware computing,” Hum.-Comput. Interact., vol. 20, no. 4, pp. 403–446, 2005. [PDF]


This paper presents a comprehensive literature review of multidisciplinary research on context. The primary aim of the authors was to review and merge theories of context from linguistics, computer science, and psychology in order to propose a multidisciplinary model of context that would help application developers.

The authors found that contextual interactions appeared to be the cross-disciplinary component for understanding and using principles of context. From a linguistic perspective it is the interaction between two people; within computer science it is the user-application interaction (combined with possible interactions with other people and objects); and within psychology it is the internal and external interactions. Last, contextual interactions should also be considered through the notion of embodiment, as described by Dourish (2001).