There has been much writing about MOOCs – Massive Open Online Courses – lately, but little about their interaction design. This may seem unsurprising: there is simply no groundbreaking UX work on Coursera or edX. But I do believe that part of the MOOCs' success is that they are better designed than their predecessors. Just compare the experience of a MOOC with much of the 'open courseware' that can be found on iTunes U. The mediocre live recordings of university classes that were so common there have been replaced by special-purpose, high-quality materials. MOOCs also allow for an inkling of educational interactivity: through tests and assignments and, sometimes, peer feedback. It seems likely that this brittle marriage between UX and educational design contributed to the tipping point for online learning that MOOCs appear to embody. So it is worthwhile to consider how we can improve the interaction design of the MOOC further. To explore the room for improvement I asked my social interaction design class to come up with designs for MOOCs that would increase engagement and participation rates, strengthen educational interactivity and encourage peer feedback and (informal) peer learning. In this post I discuss four solution directions they threw back at me.

Chunk and Unlock

UX designers know the power of chunking, and chunking is important in educational design too. Unfortunately, current MOOCs do not chunk learning well. They do better than open courseware: typically, lectures are sliced into separate video lessons of about 5–15 minutes. But these chunks of knowledge transfer are seldom interlaced with knowledge-activation chunks such as small assignments or quizzes. So most MOOCs have a weekly 'listen-do cycle', which hardly seems a suitable rate for online learning. No wonder so many users skip the 'do' part. One way to resolve this is to use the power of unlock. Rather than offering the materials at a fixed, weekly pace, the user can unlock a new instruction video by doing a small assignment or quiz, or by reviewing someone else's work. An interesting variant could be social unlock. Users are matched to a partner, and both have to contribute something to a joint assignment before they can unlock the next instruction materials. Of course, random matching may go wrong in a learning environment with many lurkers, but users could be matched to other users who unlock at the same pace or who check in to unlock simultaneously.
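To make the mechanics concrete, here is a minimal sketch of such an unlock rule in Python. Everything in it – the class names, the set of qualifying activities, the one-chunk-per-activity rule – is my own illustrative assumption, not the API of any existing MOOC platform:

```python
# Hypothetical sketch of a 'chunk and unlock' rule: a learner unlocks the
# next video chunk by completing a quiz, a small assignment, or a peer review.
from dataclasses import dataclass, field

UNLOCKING_ACTIVITIES = {"quiz", "assignment", "peer_review"}

@dataclass
class Learner:
    name: str
    unlocked_chunks: int = 1                       # the first video lesson is free
    completed: list = field(default_factory=list)

    def complete(self, activity: str) -> None:
        """Record an activity; unlock the next chunk if it qualifies."""
        self.completed.append(activity)
        if activity in UNLOCKING_ACTIVITIES:
            self.unlocked_chunks += 1

    def can_watch(self, chunk_index: int) -> bool:
        """Chunks are numbered 0, 1, 2, ..."""
        return chunk_index < self.unlocked_chunks

def social_unlock(a: Learner, b: Learner, contributors: set) -> bool:
    """Social unlock variant: both partners must have contributed to the
    joint assignment before either of them can advance."""
    if {a.name, b.name} <= contributors:
        a.unlocked_chunks += 1
        b.unlocked_chunks += 1
        return True
    return False
```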

Improve the connection between content and community

Schools once favored the principle of separating learning from social peer engagement. In the classroom you listened to the teacher; during the breaks you could talk to classmates, which was not considered learning. Such schools do still exist, but they are hardly considered best practice. Still, the separation of learning and community is built into the basic design of most MOOC platforms. MOOCs offer materials for individual learning with a hyperlink to a forum – elsewhere in cyberspace – where learners can engage with one another about the content of the course. So the coupling between community and content is as loose as it could possibly be. As a solution, my students suggested taking a closer look at Massively Multiplayer Online Role-Playing Games (MMORPGs) – which inspired the term MOOC in the first place. Players in those games take on joint challenges with a division of labor. Communication around these challenges encourages peer learning; the forum is no longer a separate place, and community emerges around the content. Although this seems ideal, it can be hard to design such challenges and set up the special-purpose UX to support them. A less demanding solution to the same problem is to couple forum entries to specific content and challenges within the MOOC. This is done well on Codecademy, where every exercise has its own forum entry. One of my students went so far as to suggest a dynamic forum coupled to the video. Users could add questions to specific points in the video instruction, which are then answered by other users watching the same video at a later time.
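As a rough sketch of the data structure behind such a video-anchored forum (the names and the ten-second visibility window are hypothetical choices, not a description of any real platform):

```python
# Hypothetical sketch: forum questions anchored to a timestamp in a video,
# so later viewers see (and can answer) questions at the moment they arise.
import bisect
from dataclasses import dataclass, field

@dataclass
class TimedQuestion:
    timestamp: float              # seconds into the video
    author: str
    text: str
    answers: list = field(default_factory=list)

class VideoForum:
    """Forum entries coupled to points in a single instruction video."""

    def __init__(self) -> None:
        self._questions: list[TimedQuestion] = []  # kept sorted by timestamp

    def ask(self, timestamp: float, author: str, text: str) -> TimedQuestion:
        question = TimedQuestion(timestamp, author, text)
        # insort with key= requires Python 3.10+
        bisect.insort(self._questions, question, key=lambda q: q.timestamp)
        return question

    def visible_at(self, playhead: float, window: float = 10.0) -> list:
        """Questions anchored within `window` seconds of the current playhead,
        so a later viewer sees each question where it was asked."""
        return [q for q in self._questions
                if abs(q.timestamp - playhead) <= window]
```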

Improve online identity, presence and urgency

One well-known social web pattern that is rare in the MOOC is showing the presence of other people, and lowering the threshold for informal interaction with them. Consider yourself watching a video in a MOOC: do you have any idea who else is watching this video at the same time? Currently not. But supporting online identity and social presence – in a properly designed way – may increase participation and engagement a lot. Profiles on MOOCs are weak and too focused on the narrow role of the user as a learner. One of my students suggested deep integration of MOOC platforms with the professional networking site LinkedIn. Don't learning and professional development go hand in hand? How about a 'dream jobs' profile, accompanied by MOOC achievements? Other students suggested increasing the presence of other users through live forums, connecting community and content in the here and now. Maybe videos could be started only after a critical mass of learners (say, five) signed in, so live chat would become an opportunity and study groups might form in a natural way.

Level the playing field

In a MOOC it seems as if there is a single expert and many, many (equal) novices learning from the expert. This is a myth. Many users of MOOCs are far from novice, and many even bring skills additional to those of the super expert. But the myth, the 'bright light' radiating from the super expert in the MOOC, may hinder participation from semi-experts and discourage peer learning. Do you feel free to try out your own ideas when an expert is watching you? In response, my students came up with ways to level the playing field, in the hope that more participants of the MOOC would feel like playing. One solution was to give special status to expert users. Possibly they can provide extra content, so the MOOC becomes more of a shared place. Another solution could be to stimulate creativity, and to create community around creative exercises. Exercises that do not have a right or wrong to them might help challenge the 'one expert' model many MOOC users get from its design. A final avenue could be to create a large joint project, a barn-raising challenge. Users could create something together, as in Wikipedia: something that adds meaning to the course materials, that can handle contributions from people with a large range of expertise and skill, and that can give a feeling of joint discovery and achievement.

Where do suggestions like these bring us? Will MOOCs become interactive educational places where joint learning takes place, once their designers take up these suggestions from my students? Maybe. Hopefully. But what I expect most is a diversification of online and blended learning possibilities and experiences, from 'simple' open educational content, to well-designed educational games, and many other blended forms of online and offline learning. Following Manovich, we could draw a parallel between online learning today and the cinema of the 1900s, and say that MOOCs are one among many experimental forms that will one day define the 'language of blended learning' and fill the ecosystem of educational forms of the times to come. This diversification can be exciting in itself, but what I am really looking forward to is finding out how the marriage of educational and user experience design develops.


A review of the book “Ubiquitous Photography” by Martin Hand.

I have a notebook somewhere with an old advertisement for camera phones. The ad shows a woman in sexy lingerie, sending a picture to her partner to ask for his (her?) opinion. It must have been 2001; camera phones were new, and the telephone providers undertook a charming effort to explain their utility to the general audience. The message came across, though: digital photography has changed who makes pictures, the reasons for making pictures, and the way in which we use and share pictures, and with that it changed the cultural meaning and significance of photography and photos. Martin Hand tries to describe and explain these changes in his book "Ubiquitous Photography".

Digitization is just one of the many changes that have happened to the technology of photography since its invention in the first half of the nineteenth century. The first photographs resembled realistic portrait paintings. Photography was hard for everyone: subjects had to sit still for a long time, photographers had to control light and setting precisely, and printing photos was a specialist job. So photos were precious objects. Later, agency shifted from the photographer to the technology. Increased light sensitivity made action photography possible. The Kodak camera made photography accessible to everyone, resulting in the family album and tourist photography. Photos became affordable objects for everyone. For a long time, printing photos remained a professional service, but the Polaroid camera changed this, too. Photo printing became immediate and the 'snapshot' emerged as a new form of photography. Digitization, in many ways just the next small improvement, was a turning point – in particular when camera phones came about. Digital photography combined the immediacy of the Polaroid with flexibility of presentation. We can share photos online and we can print them on any surface – including birthday cakes, clothing and shower curtains. This started an enormous diversification of (personal and professional) photography practices.

This diversification, in turn, makes it increasingly hard to build theories about photography that do justice to its diversity. Martin Hand tries to address this problem by carefully tracing both the continuities and the changes in academic thinking about photography once 'digital' arose. Hand: "Photography has now many co-existent lives each of which is part of a different trend and may have a different trajectory in the future, but all have significant connections with earlier photographies." (p. 185). He uses personal photography – so the pictures you and I make – as a focal point and writes about professional photography only in passing. Still, there are many threads to trace, and the book offers a complex quilt of ideas about photography and the ways in which we are weaving photos into the fabric of our lives. Throughout, Hand recognizes some bigger trends or themes: photography is shifting from capture to performance and from permanence to ephemerality, and its role is shifting from celebrating the family to living publicly. Let me discuss these shifts in turn.

Once upon a time, photos used to be proof of objective reality. Photos didn't lie: they captured reality as it appeared in front of the camera. While this is more or less true from a technical perspective, in the practice of photography it has always been a myth. For example, photographers choose which part of reality they capture, and this framing has a big impact on what the photo has to say. Also, in early photos we see people posing for the camera in carefully created photo sets – how 'real' was this? Even photo manipulation is a much older craft than many people know. But it is this myth of the photo as a depiction of reality that underpins its role as a performance medium today. Billboard advertisements are an example. They are clearly photo compositions, but they seem to say "this dream we are portraying here could be the objective reality if you buy our product". This is performance using the myth of capture. The family album is another example: it is not used so much to remember important events in life as to tell stories that celebrate family values and shape the 'ideal' family. The family album doesn't capture the way your family is; it shows how you would like it to be and how you want to portray it to others. This family performance can turn photography into the director of your experience. During my last holiday in Ecuador there were frequent volcano-photography bus stops, to take pictures of volcanos and of course to take pictures of ourselves taking those pictures. So I wondered: was the holiday about the experience of seeing a volcano, or about collecting photos to show our friends the 'reality' of our holiday? Photos are little objective snapshots of our lives, which play a major role in the games of make-believe we play with each other. Photography may always have been about performance. But it seems fair to say that older photography still celebrated the myth of capture, while today's photography uses it merely as a background for other things. Today the focus is on performance, simply because we take many more pictures, which we share in many more ways with many more audiences, often in digitally manipulated form.

Related to the idea of capture is the idea of the photo as a fixation of a moment in life. Before digital, we took pictures of those moments in life which we wanted to 'save forever'; as such we fixed them, chemically, on a piece of paper. They weren't literally fixed: just like now, photos did migrate from one place to another. They moved from hanging in a frame on the wall to a family album or a shoebox beneath the bed, but they were 'about' fixation nevertheless. Even the shoebox pictures had a role in remembering and sharing stories about the precious moments in life. This practice of photography still exists, but new practices arose too. Today, photos have become ephemeral objects. Rather than physical evidence from the past, many 21st-century photos are about the 'here and now' – and just that. We send each other photos of our food, the dirty dishes, writings on a whiteboard, or the delay sign for a train, as an integral part of chatting with friends. In other words, photos have become a form of speech. Today, we use photos to say: "Look, this is what I am looking at right now", and we do this a lot, much like the camera phone ad I began with predicted. This practice is a next step after the snapshot. Nowadays we are capturing and sharing stuff that isn't worth a dollar for a print to us; goodbye to the photo as a record of the 'precious things in life'. According to Hand, college students express a concern about the unexpected permanence of digital photos (p. 155). The technology of photography still has a fixation character: photos may be permanent in the sense that their bits and bytes will dwell forever on some hard disc or cloud computer, and they may one day be found by future data archeologists. Still, in the everyday practice of photography, the ephemeral, the fleeting and the mundane have become dominant forms.

Next, we must ask to what extent the personal photo is still about the family. In his book, Martin Hand pays attention to the impact that the way in which we share photos has on the photos we take (and the other way around). More and more, we share our photos on social media – Instagram, Facebook and Flickr – rather than through the family album. This changes the scope of personal photography. Hand: "The emergent problem of 'how to live publicly' in a post-privacy world is not only about the management of digital self-presentation but also recognizes the collective nature of public life" (p. 183). Our photos find bigger and different audiences, with the disadvantage of losing control. Who has access to your photos, and in what contexts are they presented? This is a much more difficult question today than it was years ago. Personal photography has seen the agency of sense-making shift from the personal to the collective, along with a loss of control over who chooses to look at your photos and with what expectations. Hand shows this vividly with an account of how college students react to person tagging (which controls whose 'timeline' changes) on Facebook, compared to topic tagging on Flickr, which has less effect on audience control. As soon as we post a picture online we trade ownership for connectivity, and the future of personal photography may well depend on the situations in which this turns out to be favorable and the situations in which it turns out to be a Faustian deal. The result will not only affect the pictures we share; ultimately it will influence the pictures we take.

This last point may be Hand's most important one. There is a bidirectional relationship between the technology of photography and its cultural use, which he calls 'reconfiguration'. The reason for taking a picture is mostly to share it. How we share the picture, with whom, and to what extent we feel in control of this process may determine the pictures we choose to take (or not). Changes to the technology and, in particular, to the cost of making pictures affect the pictures we take, and they shape our reasons for sharing and the audiences of the pictures. Digitization has made the relationship between the technologies for taking and sharing photos more complex, dynamic and exciting. When we try to make sense of these changes we need to see that the photo is no longer an image on a physical piece of paper. Rather, we see a networked object that can collect meaning and an audience more or less independently. This can be for better or worse for the original authors. Instead of fixed objects capturing the ideal family, photos are now shape-shifting objects for performing a diverse set of scenarios in today's ephemerality – publicly.

Reading more.
Lev Manovich is a strong advocate of the idea of a new media object as a networked object, a view shared by Dhiraj Murthy, when he writes about Twitter.


Imagine having a device that can post sound bites from your home to your Facebook wall. Now and then it will record the sounds in your home, scramble them – so listeners cannot recognize specific sounds – and make an audio post for you. Most people I talk to think this is a stupid idea. But it is what "Facebook Listener", designed by Stefan Veen and his colleagues, does – and I do believe it could be a success. Or at least I believe in the broader idea of awareness systems: allowing people to share background information with others in a lightweight way. In this post I will answer four critical questions I often get about awareness systems, and I hope to show where the opportunities for awareness are.

Facebook Listener

Facebook Listener records sounds from your home and allows you to listen to sounds others shared. Picture from the original article (link below).

A short history of awareness

Maybe I should take a step back and briefly discuss where some of the ideas behind Facebook Listener originated. The academic tradition of awareness systems started in the workplace of the early 1990s. When people collaborate they build on tacit knowledge about each other. Several workplace studies showed that people who have an awareness of their co-workers can collaborate more easily and effectively. Scientists turned out to publish most articles with people whose offices were nearby. People in control rooms needed details about what others were doing to avoid mistakes. Because of findings like these, researchers slowly started to recognize a design opportunity: "If people need background information about each other to work together", they thought, "can't we design systems which deliver such information across a distance?". Soon, researchers started to build devices which allowed awareness, background contact and casual communication at a distance. Mostly these media spaces, as they were called, involved video links between multiple remote offices. This was exciting and controversial work: at the time, computers were supposed to be useful tools for completing specific tasks effectively. The idea of using computers to provide background information about others seemed to come from outer space. Exciting or not, researchers needed several incarnations to get the design of these systems right, and honestly, the early controversy didn't fade.

Media-Space-office-meeting

Early awareness systems, called media spaces, provided video links with remote colleagues.
Image from: http://people.cs.vt.edu/~srh/Media%20Space%20Home.html

Is there a need for awareness systems?

The number one question. Does your work, for example, improve if you get a video connection with remote colleagues, so you can see them behind their desks, typing? Or does anyone benefit from listening to scrambled sounds from your home on Facebook? Perhaps not: I have not seen studies of media spaces which led to measurable improvements in business results, and clearly my friends survive without being able to listen to my house sounds today. But probably yes, too. Consider Digital Family Portrait, for example. Researchers from the Georgia Institute of Technology placed sensors in the home of an elderly woman so the computer system could get a sense of her activity during the day. Her (adult) children could see this activity data on a digital photo frame in their home. There were butterflies around the picture of the elderly woman; big butterflies meant "mother is active", while little butterflies pointed to the opposite. Digital Family Portrait raises many questions, but the users were positive about it, and it shows how awareness systems can support real communication needs.

Digital Family Portrait

Digital Family Portrait gives subtle cues about activity.
Picture from: http://home.cc.gatech.edu/jimRowan/4)
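The core design move here – abstracting raw sensor data into a glanceable cue – is easy to sketch. The thresholds and function below are made up for illustration; this is not the researchers' actual algorithm:

```python
# A sketch (not Digital Family Portrait's real algorithm) of the core idea:
# map raw sensor events onto an abstract, glanceable cue such as butterfly
# size, rather than exposing the sensor data itself.
def butterfly_size(sensor_events_today: int,
                   low: int = 20, high: int = 200) -> str:
    """Translate an activity count into one of a few coarse butterfly sizes."""
    if sensor_events_today < low:
        return "small"      # little activity: maybe check on mother
    if sensor_events_today < high:
        return "medium"
    return "large"          # an active day: all seems well
```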

Often, our communication is not so much about its content as about the act of communicating itself. This is called phatic communication. When I share what I am eating on Twitter, my followers learn something boring and something important about me. The boring part is what I am eating. The important part is that I am alive, all is well, and the communication channel is open. They can contact me if they want to. It turns out that this phatic part of the message is important, in particular between close ones. The users of Digital Family Portrait didn't really need to know how active their mother was. But they did need a way to check on her regularly, for their peace of mind. Digital Family Portrait provided them an easy way to do just that. There are other reasons the butterflies on Digital Family Portrait made good sense as well. The meaning of a bit of information about someone close to you depends on your background. To the users of the Family Portrait the size of the butterfly is not so important, but the way it changes over time is. A sudden drop in activity, for example, means there is a reason to check on mother (or the hardware). Finally, background information such as Facebook Listener, Media Spaces or Digital Family Portrait provide can be a starting point for a more intimate and meaningful conversation. Users of Digital Family Portrait could use the butterflies to talk about health and lifestyle choices. So it could be that Facebook Listener, Digital Family Portrait and Media Spaces aren't presenting the right information about the right people in the right way, to the right people. But this doesn't mean the general idea is wrong. It seems people want to share and receive background information about others. Even if it is just phatic communication, I would say there is a need for awareness systems.

Why 'spyness' systems? What is wrong with human updates? Aren't those sufficient?

Granted that people need background information about others and that they already share such information on social media, why bother any further? Or, stronger: aren't we crossing a hard and scary border when we start to capture and send information about people automatically, like the three examples so far do? I find this question much more difficult to answer, but I would say no: it is fine to build systems which give automatic updates. Perhaps J.K. Rowling can support my argument. In the Harry Potter book "The Goblet of Fire", the Weasley family has a clock which shows where family members are, rather than the time. This is an example of automatic updates that seem sympathetic and useful. In fact, we know from research that it is. Soon this "whereabouts clock", as researchers called it, was built and evaluated by Microsoft Research [1]. They report that – at least within nuclear families – the clock gave a sense of reassurance, connectedness, expression of identity and social touch. The clock quickly became an integral part of the routines of the family members: it provided phatic communication possibilities. Family members got a sense that everything was going according to routine, that all was well. Part of the success of the clock may have been its Potterish design, though. Much like the Weasley family's original, and unlike some commercial tracking systems, Microsoft's whereabouts clock used rough place labels like 'home', 'work', 'school' and 'elsewhere', rather than precise GPS data. Probably there is a tradeoff in automatic sharing: the more intimate the data you share gets, the more abstract, coarse-grained or fuzzy the presentation of this data needs to be, to be acceptable to users.

Whereabouts clock

Microsoft's incarnation of the whereabouts clock, first described in a Harry Potter book.
Picture from: http://research.microsoft.com/en-us/groups/sds/whereabouts_clock.aspx
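That tradeoff is straightforward to express in code. The sketch below reduces a precise position to one of a few coarse labels; the geofence names and coordinates are invented for illustration and say nothing about Microsoft's actual implementation:

```python
# Sketch of the whereabouts-clock tradeoff: reduce precise coordinates to a
# handful of coarse place labels before sharing. Geofences are hypothetical.
import math

# (label, latitude, longitude, radius in meters) - illustrative values only
PLACES = [
    ("home",   51.4416, 5.4697, 150),
    ("work",   51.4483, 5.4904, 200),
    ("school", 51.4350, 5.4800, 200),
]

def _distance_m(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance, good enough for small geofences."""
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return math.hypot(dx, dy) * 6_371_000  # Earth radius in meters

def place_label(lat: float, lon: float) -> str:
    """Map a precise position to a coarse label; anything else is 'elsewhere'."""
    for label, plat, plon, radius in PLACES:
        if _distance_m(lat, lon, plat, plon) <= radius:
            return label
    return "elsewhere"
```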

Who needs even more 'information displays'?

This question comes in different forms, but the gist is that people wonder how to act on the new information they are getting. Wouldn't it be awkward to pick up the phone and say "why's your butterfly so small today, ma"? Indeed, the notion of awareness is much more information-centric than communication-centric. Many awareness systems researchers want to figure out what people could share through sensors and how to show this information to users. They seem to care less about the next step: providing the means for the 'talk' that may follow from the awareness. Often, as in Digital Family Portrait, the information stream is one-directional too. The mother lacks common ground – she doesn't know how big her butterfly is that day – which creates the awkwardness of the example. Some researchers argue that people have enough channels and the awareness system only needs to provide the 'trigger' for the communication. But I think the evidence from the studies I mentioned points in a different direction. Users of Facebook Listener turned out to use it to create dedicated ambient-home-soundscape messages for their friends. Microsoft researchers reported that users of the whereabouts clock wanted to be able to send messages back to their peers from the clock. Evaluators of other awareness systems report similar findings. Awareness information triggers communication needs, and it seems natural to support those from within the system, preferably in an open-ended and playful way.

Who would buy such a system?

This is a fair question. A positive evaluation doesn't form a business case. There are two serious barriers to market entry for current awareness systems: single-purpose design and the network effect. Many awareness systems serve a single purpose: a specific awareness need or a single type of awareness information. This is fine for research projects, but it creates small markets and high entry costs, and it seems difficult to overcome. Users of Digital Family Portrait and the whereabouts clock can understand the information of the system because the design supports the use case. Some have tried to create more flexible awareness systems, such as the modestly successful awareness rabbit called Nabaztag, but this is difficult. It is a pity, but Nabaztag cannot display location information about the family in an optimal way. When we abstract away from (design for) a specific type of information, the awareness display becomes hard to read. A related problem is the network effect: part of the value of any communication system is in the number of users that already use it. Awareness systems, usually being special-purpose, asymmetrical and standalone, may suffer from this effect. The decision to buy or use is not with a single person but with at least a small group. This makes those awareness systems that make use of existing technical infrastructure and use patterns (of mobile phones, for example) much more likely to succeed than other sensible awareness proposals.

Nabaztag

Nabaztag is a general-purpose awareness system. The ears can be used to communicate with other Nabaztag users.
Image from
http://en.wikipedia.org/wiki/Nabaztag

So the proposal of Stefan Veen and his colleagues – to make awareness systems that cooperate with Facebook – makes a lot of sense from a market perspective. The idea resembles our proposal of a year earlier: to integrate social media and business by building software on top of the existing social media (integration software). The benefit could be that users are already networked in this environment and can make use of its lightweight communication tools. But this benefit comes with two costs. First, there is a usability challenge. In our own research we found that it is hard to communicate to users how the new interface works together with Facebook. In a way, users handle two systems at the same time. It is not easy to understand how an action in the integration software or awareness system plug-in changes the state of Facebook, for example their timeline. Second, Facebook integration only has an advantage if whatever comes out of the awareness system is also informative for non-users of the system. Any message the new system puts on Facebook needs to be a good addition to the Facebook timeline. This is a strong limitation. So while I expect that awareness systems will be built that integrate with social networking sites, chances are they will provide a new output for the data which is on Facebook already, or can be inferred from it, rather than a new input device. But on the other hand, if awareness systems are ultimately communication systems, new inputs will soon be added to this outlet in your home.

Reading More

Much of the information for this blog post comes from the book "Awareness Systems: Advances in Theory, Methodology and Design". In particular: the history of awareness (chapter 1), phatic communication (chapter 7) and the whereabouts clock (chapter 18). The design of Facebook Listener is described in detail in this NordiCHI paper, the Digital Family Portrait in this paper. Our exploration of software which uses social media as an infrastructure for new applications can be found here.

Loosely related to this post are my posts about common ground, privacy coordination, playification and Twitter.


[1] Several incarnations were built at Eindhoven University of Technology as well.


A book review

The problem with books about the impact of social media on society is that most of them are too polemical. The positive books make sweeping claims about how social media are going to change the world for the better. Their writers predict social media will bring about a more participatory, open and democratic society. The dystopians, in contrast, sketch equally sweeping visions of a world full of shallow ego-senders who cannot deal with intimacy and cannot focus on more than two lines of text. In this battlefield of extreme ideas it is hard to find a balanced treatment based on a thorough review of existing scholarship and empirical evidence. It is this void that Dhiraj Murthy is trying to fill with his book "Twitter: Social Communication in the Twitter Age".

It is fair to ask what makes Twitter so special that it deserves its own book: shouldn't the book be about the impact of all social media on society? In fact, this narrow scope on Twitter works well for Murthy. It is easy to see if a statement about the impact of Twitter relates to a broader social media trend or to Twitter alone. Also, in a surprising number of cases, Twitter turns out to be different from its social media neighbors. Compared with Facebook, Twitter is more public, lean and lightweight. Most Twitter users go with the default of public tweets, they incorporate weaker ties into their Twitter networks and they form ad hoc connections with other users through hashtags. Hashtags, retweets, special-purpose accounts and lists also allow for collections of tweets as a new narrative form. Twitter shares a focus on news and opinion with blogs, which explains its close relation with the mainstream press, but there are important differences between Twitter and blogs too. Twitter is much more participatory, because of its quick reactions, for example. Twitter is an interactive multicasting system, allowing many-to-many communication. On Twitter, each tweet can have a different audience (size). Its users can select audiences in several ways: @replies, retweets and hashtags. These audience selection practices make tweets part of a complex social structure and less ego-centric than regular blog posts. Twitter builds bridges between the private and the communal and between the mundane and the profound. This is a lot like normal blog posts, but on Twitter people tend to mix these extremes much more. The differences between Twitter and other media may be a reason it has become the primary lens through which we look at the impact of social media on society, at least for Murthy.

Murthy is quick to disarm the polemics too. He starts his book with a comparison between Twitter and the telegraph. It is no surprise that many of the more sweeping claims about Twitter could be close echoes of early responses to the telegraph. This is a sobering thought and it makes clear that Murthy plans to be more careful with his analysis. He continues this careful line of reasoning in his theoretical chapter. Building on ideas from, among others, Lev Manovich, Marshall McLuhan and Erving Goffman, he tries to give perspective to some daunting questions around Twitter. Is it, for example, a democratizing force in society? Isn't Twitter's openness and instant spreading of information fueling equal access, the idea of a global village and public debate? This may be, but in practice Twitter participation rates suggest a digital divide, and its social circles are fairly homophilous. Twitter is a tool for the elites, and the elites turn out to meet like-minded people rather than those with opposing views and values. This hurts the democratic appeal of Twitter. On Twitter we do get a sense of the global village, but it is a stratified picture.

Another question is what the key motivations are for people to use Twitter. People may want to be connected with and informed about a wide circle of social contacts, and they may want to build their identity with frequent updates about their lives. This is true for most social media, but Twitter may be special because of its focus on news. Twitter gives users the possibility to take part in something important. This may be what drives citizen journalists, and the way users tweet at events such as concerts and conferences, or along with TV programs, may be another example of this motivation to be part of something larger. The second motivation is that, unlike many other media, Twitter is real. The mainstream media paint a stylized picture of the world, and movies and advertisers show photo-realistic illusions, which may feed a need to connect with real people sharing their thoughts about the here and now, mundane or profound. Twitter focuses our attention on the small and big events around us. It fosters an update culture and celebrates an event-driven society.

Murthy discusses four case studies to develop these ideas about Twitter: citizen journalism, disasters, activism and medicine. I believe his chapter on citizen journalism is just as foundational as his theoretical chapter. It is hard to understand Twitter's role in disasters and activism without considering the synergy between Twitter and the mainstream press. Twitter is an ambient news environment which plays two roles for the press. First, the press can harvest news from Twitter. The press can make citizens who tweet about important events their 'ground army'. This changes the way the press acts as a mediator between the public and the people who 'are' the news. While there is also direct contact between citizen journalists and parts of the audience during a disaster, these citizens lose their following once the news has traveled elsewhere. So, in return for the free updates, the press has the capacity to connect citizen journalists to news audiences, which may be what motivates citizen journalists to share the news at all. Second, Twitter is an outlet for news organizations. Twitter plays a role in spreading and curating the news reports the press provides. As Twitter is an information channel and a low-threshold communication channel in one, it allows for direct contact between journalists and the public. This puts pressure on journalists to act in such a way that they keep the trust of the audience, while the direct lines between audiences and citizen journalists also create pressure to act faster. So Twitter supports a new relationship between the press and the public, which could be symbiotic, but is also subject to several new tensions.

If the circumstances are right, a single tweet can gain an enormous reach and become famous. This happened to the first photograph of the US Airways plane that had to make an emergency landing on the Hudson River in January 2009. The story of the Hudson River tweet shows how citizen journalists can be important for the news, but also that the hype can be much bigger. The attention for the emergency landing quickly shifted into a raging debate about the importance of Twitter itself and the new reality it creates. There was a similar shift of attention for the Iranian post-election protests in 2009, the 2011 'Arab Spring', and the 2011 earthquake in Japan. It seems unlikely that the activists in Egypt coordinated their efforts with Twitter. It doesn't make sense to use a public tool for coordinating protests, but more importantly, only the flabbergastingly low figure of 0.00014% of the Egyptian population actually uses Twitter. Other disasters and protests show low participation rates too. During disasters and revolutions, the enormous numbers of topical tweets show 'the West' tweeting to 'the West' about the disaster, not the victims tweeting to 'the West'. This raises the question whether Twitter can have any real impact for activists at all. But it does, because of the synergy between the traditional press and Twitter. Murthy shows that Twitter may play a role during disasters to augment social communication on the ground (be it within the small elite of users), to empower disaster victims (because they have a voice to the world) and to call for support (or comfort) from this worldwide attention. Similarly, dictators who face protests on Twitter will not fear the small number of direct tweets from the ground, but they turn out to be sensitive to the changing public opinion in the West that can result from them.

In the last case study of the book, Twitter and health, we see the importance of a third paradox of Twitter. Apart from bridging the private and the public and the mundane and the profound, Twitter also mediates between information and community. While Twitter is less participatory than Facebook, its multicasting possibilities allow topical tweeters to build an audience. So while Twitter is much more focused on information than community, it can be a valuable resource for patients to build a support group around them. For these virtual support groups it is important that Twitter is a channel in which they can post mundane information. You would not write a blog post or visit an online health support group to share your anxiety about a doctor's visit from the waiting room, but it is this time and place where support from like-minded people may be most vital. Twitter's bridge between the mundane and the profound plays out in a different way too, as health experts engage in lightweight communication with patients. Patients may connect directly to experts rather than through their doctors, who may have an information lag. These experts may also play an 'activist' role for the disease by connecting to other audiences as well. We see a similar mix of tweets with celebrities, who are followed for gossip, but who can be at the forefront of the more profound Twitter-based fund-raising efforts for flood victims. Health care shows in several ways how Twitter creates an information and support network that transcends traditional organizational and social structures. This new network can be a welcome addition to health care and support.

With his book "Twitter: Social Communication in the Twitter Age", Dhiraj Murthy has written a tantalizing, insightful and balanced account of the existing scholarship on the question of how Twitter changes our communication practices. These changes may not be as big as the web-optimists would like us to believe, but they are nevertheless significant. Citizen journalists may not revolutionize the news-making industry, but they do change the playing field for journalists. Twitter isn't the democratizing agent that some see in it, but activists manage to create international recognition and pressure because of its close relation with the regular press. Murthy has done a good job of bringing perspective to these debates, but to me, his chapter on Twitter in health care is the most important chapter in the book. When we discuss the impact of social media on our communication practices, we usually focus on the 'big events' or 'bright stars': the disasters and revolutions. The small changes the medium brings to the everyday life of everyday people get much less attention. Surely these are harder to see, but they may be more important for the well-being of the global village than the big events that everyone is watching so closely. In his chapter on health, Murthy gets closest to this everyday use of Twitter. Here we see how Twitter's paradoxes play out in a positive way. Twitter may be the noisy, superficial and banal communication channel for which it is bashed by web pessimists. But Murthy shows that when the private meets the public, the mundane meets the profound and information precedes community, health care may benefit. It is this chapter which shows most clearly how the subtle differences between Twitter, social network sites, dedicated e-communities and blogs lead to different, new, and complementary communication practices.

Reading More

I wrote several posts about Twitter before which fit in nicely with the ideas in the book. In "Is Twitter Getting Fat?" I discuss the lightweight nature of Twitter and how this may be changing because of commercial pressure and feature creep. In "Does Twitter Have a Tempo?" I discussed how Twitter's update culture originates from the way tweets set a communicative context for other tweets. In "Hashtags and the Semantics of Interactive Language" I discuss the history and interactivity of hashtags, and how these may influence their linguistic uses. My post Recipient Responsibility in Netiquette continues this linguistic line of reasoning. I first commented on the possible problem of homophily in my post Turkle's Turn.

Earlier book reviews include my recent review of Lev Manovich's book "The Language of New Media" and reviews of Wikinomics and Understanding Media.


A book review

It is not an easy challenge that Lev Manovich sets himself in his book "The Language of New Media". In the early days of cinema hardly anyone could foresee the enormous cultural impact this new medium would have on our society. A new artistic language, cinematography, was born, but no one recorded its first steps systematically. This wasn't only because people didn't see the importance of cinema. Documenting an emerging language is a form of historiography, nearly impossible without the advantage of hindsight. It simply takes time before it becomes clear what the right or interesting historical questions are. So I never believed Manovich could succeed in "providing a potential map of what the field could be" (p. 11) back in 2001. But I do feel his book deserves a close reading. The first steps in building a theory are usually the hardest and, despite the difficulty of the task, Manovich did cover interesting ground.

Manovich tries to understand new media through the eyes of an artist. Two ingredients are necessary for every art piece. First, there is the influence of existing media. A key concept in Manovich's theory of new media is the cultural interface (p. 69). Much in the way the human-computer interface structures the interaction between humans and the computer, cultural interfaces give structure to the user's interaction with culture (or "cultural data"). The interfaces of CD-ROMs, web pages, games and apps are all cultural interfaces. Often new media reuse ideas, forms and conventions from older media. Early cinema built on theater, rock music on blues. So the forms of new cultural interfaces also stem from older, already familiar forms such as magazines, newspapers, photography or – indeed – cinema. Second, new media offer new technological possibilities and affordances to the artist (Manovich speaks of operations). As media makers experiment with these new operations, some older conventions will fade and new conventions will emerge. So to understand the language of new media we need to look carefully at its ancestors and at the possibilities of computers.

Manovich believes three older cultural forms are most important for describing the language of new media: print, cinema and the human-computer interface (in practice Manovich specifically refers to the graphical user interface, the GUI). Print, the oldest form, was adopted first. The page is a cultural convention of print that persisted into the digital age, although the World Wide Web also revived the ancient form of the scroll. Hyperlinks were the most radical and disruptive innovation to texts. Hyperlinks challenge old ways of organizing information. The structure of the web isn't like a library (with an index) or a book (structured through rhetorical narrative) (p. 77), but more like a walk on the beach where you may find random objects one after the other (p. 78). Cinema is the second cultural form that influences new media, in particular games. The moving camera is a convention borrowed from cinema, apt for navigating virtual worlds. Many games also borrow story forms from cinema. But games moved beyond cinematic conventions too. They break the rules of (natural) perspective and story, in search of forms that are more suitable for interaction. The graphical user interface (GUI) brings controls, menu structures and the desktop metaphor to new media. These seem to fit in, but there is friction as well. The controls offered by GUIs need to stick to the underlying metaphors (for usability) and simultaneously blend in with the story world of the cultural interface. While new media often show mixtures of print, cinema and graphical user interfaces, these mixtures may be rough. Often the underlying ideas of what the screen represents (flat surface with information, window into an immersive environment, control center) differ so much that these forms cannot be wedded easily.

So print, cinema and GUIs are to new media what photography and theater were to cinema. They provide the "raw material" of cultural conventions that are available to new media makers. But they do not yet describe the other major influence on the language of new media: what new media creators tend to do with this material. In other words, Manovich needs to turn to the affordances of new media creation technology, or as he calls them: the operations. Cinematography, for example, makes creative use of different types of shots and editing techniques. Do these operations have new media equivalents? Manovich believes these are selection, teleaction and compositing. Teleaction, the ability to see and act at a distance, allows the camera to be everywhere and users to cooperate across the world – in massively multiplayer online games, for instance, or in massive open online courses. Computers allow easy access to and reuse of older material, and both selection and compositing capitalize on this possibility. The rise of the DJ, cleverly choosing and combining existing materials to create new music, is an example of the power of selection and compositing as means to create new forms. The growing importance of special effects in movies is another.

New media creation technology also allows creators to build new types of illusion – computer-generated images, for example. Mimesis, mimicking nature, has been an important goal of cinema, and this remains so in computer simulation. For computer vision scientists, for example, cinema is an important market and source of inspiration: "High quality means virtually indistinguishable from live action motion picture photography" (p. 191). In practice, creating a realistic immersive experience involves more than just photo-realism. It involves many forms of mimesis: touch, interaction (with virtual characters), moving about in virtual space and 3D graphics. But mimesis isn't at all one-dimensional. Rather, in cinema, the quest for realism progressed through a succession of 'codes' in which only some parts of the experience mimicked real life and the viewer filled in the gaps. In computer graphics, 'realism' was first achieved by 'deep perspective' and later by 'correct lighting and shading'. Considering the broad playing field for new mimetic codes in new media (for touch, interaction, movement and so on), we may expect a long period in which media makers try to set new mimetic frontiers.

After describing the conventions and affordances of the new technology – the cultural interfaces, operations and illusions – Manovich turns to the emerging genres or forms. He focuses on two of these new forms: the database and navigable space. Databases are special because they have no beginning or end. Much more than any old form, new media objects allow the user random access to items in the piece. The user experiences new media through hyperlinks, browsing and searching rather than through the guided tours that traditional narrative forms offer. The second form is navigable space, which is the dominant form in many games and interactive stories. It is the first time space is a medium. Space can now be stored, retrieved and transferred, and it can be used to tell a story. Today, architecture touches media, and new media designers need to learn how to tell a story with space.

The Language of New Media is a rich book which offers a comprehensive theory of new media. It is an interesting idea to use cinema as a model for new media, and with this approach Manovich arrives at some marked insights into the language of new media. But I do not feel the book can live up to its goal of being a map of the field. Manovich tries to bring clarity to the field by thinking of new media in layers: cultural interfaces, operations, illusions and forms. But the distinction between these layers is not always clear, and he fails to show how the layers influence each other. How did, for example, spatial navigation as a form emerge through the operations of selection and compositing on cultural interfaces? Manovich raises the question, but he gives no answer. Many other questions like this haunted me when I was reading the book. At the end of the day, Manovich's book is more a structured description of the status quo of new media at his time than a theory through which we can understand its development. And then there is the scope of the book…

When you compare (early) cinema and new media, you start inspecting the parts that are most like cinema. This way, Manovich misses a lot. As an interaction designer I can hardly agree with Manovich's treatment of the human-computer interface (and its history). Interaction design in games draws much more from board games and free play than from desktop tools. So board games are important cultural interfaces that feed new media. Also, in contrast to film, new media have multiple contexts of use and are paid for through a greater diversity of business models. This must have an impact on the developing language of new media, and studying the history of cinema cannot tell you what it is.

Shortly after The Language of New Media came out, Web 2.0 unfolded. This changed what most people see as the dominant forms of new media. As early as 2005, four years after The Language of New Media was first published, Mark Deuze considered "participation", "remediation" and "bricolage" to be the principal ingredients of digital culture. Deuze's description of – in a word – participatory remix culture would be part of any new book on the language of new media, but Manovich could not have foreseen this development when he wrote his book. Manovich knew that this might happen, as it had happened with cinema before. He writes: "It is tempting to extend this parallel a little further and speculate whether this new language is already drawing closer to acquiring its final and stable form, just as film language acquired its classical form during the 1910s. Or it may be that the 1990s are more like the 1890s, in the sense that the computer-media language of the future will be entirely different from the one used today".

So did Manovich write his book ten years too early? Probably not; rather, I think his scope was too wide for his analytical means. Cinema has never been the 'meta-medium' which the computer is today. Cinema has always been expensive to produce, and it has always offered a special and exclusive experience to its audience. So the technology and the economics (business models) of cinema quickly converged, in turn allowing stable cinematic conventions to emerge. The meta-medium of the computer is cheaper and more diverse in the types of experiences it can offer, the contexts of consumption and the business models available to support production. There is not one language of new media; there are many. And unlike with cinema, they are not likely to revolve around a common core any time soon. Manovich was right and observant when it came to games and digital cinema, but he may have underestimated the power of (linear) narrative, overestimated virtual reality, and missed forms like augmented reality or social media. Many new media forms, like immersive games and websites, have had stable languages for years. For these forms, Manovich's book provides at the least a descriptive framework. But other forms, such as augmented reality, social media and embedded media, have their proverbial 1890s still to come.

Reading more:

In my post Cognitive Bias in the Global Information Subway, I discuss the language of search and its impact. In Collateral Damage of the Robots Race (on the Web) and Social News Needs a Nuanced 'Like', Quickly, I discuss the impact of artificial intelligence on the way web experiences are structured.

Earlier book reviews on this site include Reading Wikinomics and Reading Marshall McLuhan's Understanding Media.

Here is a good Dutch summary of “The Language of New Media”.


I guess most design students have learned about integration at some point. Business, Customer Relations, Engineering, Marketing and other departments may have very different demands for a new product, and the designer has to find her (his) way across the different value sets and constraints of these departments. An integrated design manages to do so, elegantly. Integration is difficult enough when the product is 'just' a new shoot on a product family tree, but in innovative projects integration can be daunting. In this post I would like to address such projects and show how modeling 'integration' can relieve some of the design tension that emerges in integrative innovative design projects.

In his PhD thesis Describing Design, A Comparison of Paradigms, Kees Dorst defines integration as follows: "Someone is designing in an integrated manner when he/she displays a reasoning process building up a network of decisions concerning a topic (part of the problem or solution), while taking account of different contexts (distinct ways of looking at the problem or solution)". This is just a formal way of saying that a designer needs to take different (incompatible) viewpoints into account in the design process. So for innovative design projects the question is whether there are generic viewpoints that apply to most projects and capture most of the design space. Finding such viewpoints takes experience, but in the interactive product design projects I worked on, user, design and technology were always the 'big three'. 'Business' may be a fourth, but I'll get back to business near the end of this post.

Of course, I am not the first to highlight user, design and technology as the most important perspectives in interactive product design. In their seminal paper “What do Prototypes Prototype?”, Stephanie Houde and Charles Hill make a distinction between four types of prototypes: role prototypes, implementation prototypes, look and feel prototypes and integration prototypes (see the figure below).

The prototyping model of Houde & Hill

Each of these prototypes forms a tangible and temporary answer to a design question. Role prototypes answer the question “What changes in the life of the user because of the new product?”. Look and feel prototypes are about the sensory experience of the product. Implementation prototypes address the question of how the product will work. Finally, integration prototypes answer several of these design questions at the same time.

The three corners of the Houde & Hill model are more than prototyping concerns: they are general concerns in innovative design projects. To use the Houde & Hill model as a model for integration in innovation, you need to interpret it more broadly. We therefore extended the model with three relevant contexts for design and three innovation forces (see the figure below). The first innovation force is user pull. A design team masters user pull if it is concerned with the user and the utility of the product in the context of use. The team creates new user scenarios in writing and with storyboards (role prototypes). Most of you will recognize this as a user centred design capability, but ‘pure’ UCD is not enough. If the other two forces, design push and technology push, are neglected, the result may not be innovative and may not appeal to users beyond the utility of the product. A walker, for example, fulfills all functional user needs, but somehow something seems to be missing. Perhaps walkers do not appeal to soft values such as status and power because of a lack of design push, and possibly the latest technologies could improve walker design too.

The second force is design push. For design push a team needs to be sensitive to the social and cultural context and be able to translate this sensitivity into design solutions. Anthony Dunne’s critical design (‘design for debate’) movement, which aims to expose undercurrents in society, is design push in its purest form. But there are lighter forms of design push too. For example, many designers are able to translate brand values into a design, thus showing sensitivity to the socio-cultural context. In the Houde & Hill model, design push ends up in the ‘look and feel’ corner, but I do not like this term at all. A good ‘look and feel’ prototype expresses meaning in a specific socio-cultural context, which is more than a ‘pretty picture’. Therefore I prefer the term semantic prototype. Design push is a positive force, but there can be too much of it: projects with too much design push are ‘interesting’ or ‘provocative’, but not always useful.

The Integrative Innovation Model

 

The last force is technology push: the ability to identify new technological developments and to appropriate them for the design. Of the three forces, technology push has the worst reputation. You do not need to be a user centred design fundamentalist to know of products that are mostly unusable and only suitable for technology geeks. But it is hard to deny that technology push is an invaluable innovation force too. Where would smartphones have been without multitouch technology? Do we blame the engineers who developed this technology without a user need, or do we credit Apple for recognizing its potential? Just like the other forces, technology push can be a major driver in new product development, but it needs to be balanced with the other two.

By zooming out from the nitty-gritty details of integration in specific projects, the integrative innovation model is a design management tool more than a design tool. It helps in planning design projects by focusing on the most important viewpoints and contexts, and it can serve as a checklist in several ways: it shows which skills you need in the team, it allows for a constant ‘integration’ check while the project is running, and it guides the research questions of the project. The model has disadvantages too. User, technology and design may work for interactive product development, but often business is just as important, and the set of forces and contexts may be quite different in other disciplines such as media design or social design. Moreover, the model does not solve the difficulty of integration in design. In most of the projects I did, new viewpoints arose along the way. These could not always be fitted onto the original big three of the integrative innovation model, but they were just as important for preserving an integrative solution. Then again, once we had found those project-specific viewpoints, we were always close to the finish, and design management was no longer a top priority. Until that point the model served us well.
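To show how the model can work as a checklist in practice, here is a small illustrative sketch in Python. It is purely hypothetical: the questions are my own paraphrase of the three forces, not part of the original model.

# Hypothetical checklist derived from the integrative innovation model.
FORCES = {
    "user pull": [
        "Who is the user and what changes in his or her life?",
        "Do we have role prototypes (scenarios, storyboards)?",
    ],
    "design push": [
        "Which socio-cultural meanings should the design express?",
        "Do we have a semantic ('look and feel') prototype?",
    ],
    "technology push": [
        "Which new technologies could we appropriate?",
        "Do we have an implementation prototype?",
    ],
}

def integration_check(answers):
    """Flag the forces for which the team has no answers yet."""
    return [force for force in FORCES if not answers.get(force)]

# Example: a team that has only done user research so far.
print(integration_check({"user pull": ["personas, scenario sketches"]}))
# -> ['design push', 'technology push']

The output makes the ‘integration check’ concrete: two of the three forces are still unaddressed, so the design cannot yet be called integrated.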

Reading More

I wrote about the integrative innovation model before in our paper for The Web And Beyond: “UX in the Wild: On Experience Blend & Embedded Media Design”. In that paper you can find examples of projects for which the model served as a starting point.

The paper by Stephanie Houde and Charles Hill is available online as well.


I do not tend to think about netiquette a lot. As with privacy, I always feel that etiquette on the web should be debated by a bunch of gray conservatives, or at least by people who are much wiser and more decent than I am. But when I finally started thinking about it, I figured the wise bunch were missing a vital point. So I decided to help them out. The point is this: language use, even on the web, is a form of coordinated action. In this post I will explain what I mean by coordinated action, why this view on language use is so important and how it affects netiquette. I will argue that language use only works if sender and recipient share responsibility, and that in the daily practice of internet use it is the recipients who need to step up – and be more, well, decent.

Decent, however, is not a word you would use for the sender who planted the seed for this post. This boy, a student, said something really nasty about one of my colleagues on Facebook. I never learned his exact words, so for the sake of argument let’s assume it was beyond all limits: not something he would say to a teacher in real life, and certainly not something he should have said at all. Most Facebook posts go by fairly unnoticed, but attention sticks to the outliers. This particular post was picked up by a social media scanner that my institute uses to oversee what people say about us online. Because of the aggressive tone of the message, it was forwarded to the director of my department, who felt the incident was serious enough to write all students (and staff) an e-mail alerting them to basic netiquette rules and the institute’s social media code of conduct. This is how I learned about what happened. Now, I tend to be quite liberal about what you can say online, but in this case it was easy to put myself in my colleague’s shoes as well – this aggression could just as well have been directed at me. So it took me a long time to figure out whether I had more sympathy with the student in question or with my director pressing students in general to be more sensitive about the type of information they put ‘on a public website’. In the end I decided the student deserved my sympathy most.

To me, this little Facebook incident is less an example of a generation gap between the n-geners and the older generation than of a widespread confusion about the way human communication works. Most people see communication as a transfer of information: there is a sender who encodes his communicative intention into a message, which is transferred through a channel and decoded by a receiver. If something goes wrong, it is because of noise, which can be an encoding or decoding problem, or a property of the channel. You can look at communication this way, but there is an alternative view that is often much more suitable: Herbert Clark’s theory of language use as joint action.

In a nutshell, Herbert Clark’s theory claims that using language is like dancing the tango. For it to work, both partners have to play their part, in close coordination with each other. Dancers place their bodies and feet in response to their partner’s moves to move the dance forward; similarly, speakers and listeners work closely together to advance their conversations. To see how, it is useful to briefly discuss the action ladders for speakers and listeners.

Speaker                          Listener

1. Produce sounds                1. Attend to the sounds

2. Give a signal                 2. Recognize the signal

3. Mean something                3. Understand the meaning

4. Propose                       4. Take up the proposal

These action ladders form a layered protocol for human communication, but what is special about Clark’s view is that he shows that speakers and listeners have to cooperate and coordinate on all levels of the ladder simultaneously. If I propose to my wife, “Let’s go for a walk”, we need to coordinate on four levels. I need to get her to attend to the sounds I am making (1), make her recognize that I am trying to tell her something (2), pick words that she can understand (3) and phrase them in a way that makes her consider my proposal (4). At all four levels we both need to play our part and monitor each other to see if we are still on track. We do it effortlessly every day, but using language is a very intricate and cooperative performance.
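To make the layered character of this protocol concrete, here is a minimal sketch in Python. It is purely illustrative: the level names follow the ladder above, not Clark’s exact terminology.

# Illustrative sketch of the action ladder: an utterance "succeeds"
# only if speaker and listener complete every level together.
LEVELS = [
    ("produce sounds", "attend to the sounds"),
    ("give a signal", "recognize the signal"),
    ("mean something", "understand the meaning"),
    ("propose", "take up the proposal"),
]

def joint_action(speaker_ok, listener_ok):
    """Walk the ladder; coordination fails at the first level where
    either participant does not play their part."""
    for level, (speak, listen) in enumerate(LEVELS, start=1):
        if not speaker_ok(level):
            return f"breakdown at level {level}: speaker failed to {speak}"
        if not listener_ok(level):
            return f"breakdown at level {level}: listener failed to {listen}"
    return "communication succeeded on all four levels"

# Example: the listener hears and recognizes the words but, lacking
# context, never gets to understanding (level 3).
print(joint_action(lambda lvl: True, lambda lvl: lvl < 3))

The point of the sketch is that there is no partial credit: a failure at any level, by either participant, makes the whole joint action break down.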

The model also applies to written communication. Clark sees face-to-face conversation as the basic form, but he believes mail, chat and traffic signs are forms of joint action too. When I write a letter to a distant girlfriend, I weigh my words so she will understand them rightly; often I wish she were there to give me direct feedback. Online chat often gets awkward because timing is messed up, but we can usually set things straight easily. Within a country we agree on the meaning of traffic signs and teach them to all drivers, so they can communicate without words when they are on the road; in case of an accident, any misunderstanding of the socially agreed meaning of these signs is taken very seriously. All these forms involve human coordination, although it may be indirect and more static than in everyday conversation.

All this coordination would not be possible without a lot of background knowledge, which Clark calls “common ground”. Possibly the most important contribution of his theory to our understanding of human communication is that he shows how we build up this common ground over time by talking to each other. For example, if my mother tells me “the doctor thinks I am in good shape”, I know she is talking about the professional opinion of her physician and not about her sexual appeal to the doctor across the street. This has to do with my background: I did not misinterpret her words because I knew she had gone for a checkup. And my mother knew that I knew about her doctor’s visit, because she told me about it yesterday. This is language coordination at work: before my mother picked her words, she considered what my background knowledge would be, so she did not have to fear any misunderstanding.

What does this mean for netiquette and the role of the recipient? The difference between an information centric model – the sender-receiver-noise model – and a coordination centric model, such as Clark’s common ground theory, becomes obvious when a post is read by someone who is not part of the originally intended audience. Take, for example, the director of my institute in the story above.

In the traditional information centric view, recipients just need to decode the messages they have access to. This means that if you put something on a public website, the whole world is entitled to read it, interpret it and have an opinion about it. Access defines the audience: the sender takes all responsibility, including managing access, and the receiver gets a free lunch. In a coordination centric view, we acknowledge that speakers or senders are trying to coordinate meaning and understanding with a specific group. The language of this group is defined by the common ground they have created, and thus by the conversational history they share. It is then no longer enough for the receiver to have access to the sender’s information: if he is not part of the intended audience, he does not share the conversational history, so he cannot understand and appreciate the true meaning of the post. This does not mean that listening in is always inappropriate, bad or evil, but it seems reasonable to ask the uninvited receiver to suspend his judgment until he has verified the true intentions of the speaker.
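The contrast can be summarized in a few lines of illustrative Python. This is a sketch of the argument, not of any real system; the post, the readers and the fields are all hypothetical.

# Hypothetical sketch contrasting the two models of communication.
post = {
    "text": "an angry rant about a teacher",
    "access": {"friends", "director"},   # who can technically read it
    "common_ground": {"friends"},        # who shares the conversational history
}

def information_centric(post, reader):
    # Access defines the audience: anyone who can read it may judge it.
    return post["text"] if reader in post["access"] else None

def coordination_centric(post, reader):
    # Meaning requires common ground, not just access.
    if reader not in post["access"]:
        return None
    if reader not in post["common_ground"]:
        return "access without common ground: suspend judgment, verify intent"
    return post["text"]

print(information_centric(post, "director"))   # the director may judge freely
print(coordination_centric(post, "director"))  # the director should ask first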

When the n-gener considers his Facebook page to be his own private channel on which he can say whatever he likes, he is not making a controversial cultural statement or showing a poor understanding of the technology or his privacy settings. No, he is stating an empirical fact: for all the posts he has written so far, he has only gotten responses from an in-group of friends. It is reasonable for him to assume they will also be the only ones to respond when he is angry and puts it in words he should not have used in the first place. In turn, my colleagues (including the director of my institute) and I are justified in finding these words inappropriate, considering our own background knowledge. But if we want to do something about it, we need to invest in normal online human relationships with these students: we need to make proper friends or follow them, make ourselves known, and engage regularly – not only when we dislike what is being said – so that we build up a shared discourse and earn the standing to disapprove of something. If we learn about inappropriate posts through a scanner and start picking on decontextualized outliers, it would be decent to be extremely modest. Let’s try private messages of the sort: “Sorry for spying on you; my scanner told me you said such and such. What did you really mean by that comment?”

Reading More

I have written about a coordination view on language use in my last post: A Case for Privacy Coordination. I also briefly discussed Herbert Clark’s theories about language as coordinated action in my post Does Twitter have a Tempo?, and I wrote about n-geners in my post Evaluating the Net-Gen Argument.

Of course, Herbert Clark’s comprehensive book “Using Language” provides the ultimate background to these ideas.



