A Review of the book “Simians, Cyborgs and Women” (1991)

Donna Haraway’s claim to fame must be the essay “A Cyborg Manifesto (…)”. It is no accident that this title brings about associations with RoboCop and Blade Runner; it was written in the same time frame. Nevertheless, my guess is that Haraway’s writing will prove more time-resistant than the movies that augment her imagery. The essay was published in the book “Simians, Cyborgs and Women”, which deals, above all else – A Cyborg Manifesto included – with the art of scientific storytelling. Haraway’s writing isn’t easy. I found it hard to read the essay without the rest of the book as background, and I am still not sure whether I believe the crises she identifies can be resolved with the cyborg as the alternative hero. Nevertheless, I do believe it is an idea worth considering.

Most of Simians (…) deals, as the title suggests, with monkeys and apes, or rather with what biologists see in the behavior of our ancestors. Monkey behavior serves as a model of human behavior, but Haraway shows how scientists gather and interpret evidence about animal behavior in the light of certain, highly gendered, hypotheses about the origins of human behavior. She shows, for example, how much of the evidence about the productive role of dominance in monkey groups was inspired by the bluntly gendered “man the hunter” hypothesis, which ruled thinking about human origins at the time. One does not even have to question the quality of the evidence to have doubts about the picture of monkey behavior that arises. In another chapter she attacks the quality of the evidence itself, in particular the way infanticide among langur monkeys was studied in a highly selective and biased manner. The more modern theories of the sociobiologists (who try to explain animal behavior almost entirely as a byproduct of genetic selection processes) receive a similar deconstruction. She shows how this theory of behavior, based on a decentralized, scarcity-driven system, shares characteristics with the (neo)capitalist worldview that was flourishing when the theory came about. Haraway does not claim biologists are bad scientists; on the contrary, her point is that scientific studies are stories, with hidden assumptions and messages, just like other stories.

This early work of Haraway features a couple of key points about the history of primate biology. First, we study monkey behavior at least in part for humanistic motives: we want to get to know ourselves through studying our ancestors. Second, the talk-back of this scientific work is disappointing: we turn out to reproduce how we see ourselves through our stories about monkeys, rather than alter our self-image based on the behavior of the monkeys. This holds for the early work, but it still holds today, as her analysis of sociobiology shows. Haraway: “.. the history of biology convinces me that basic knowledge would reflect and reproduce the new world”. Third, gender is an important concept in anthropological writings about primates, and it has so far not been treated particularly neutrally. Fourth, many of these theories follow a (hidden) biblical arc, or ‘birth myth’. The apes represent the ‘natural order of things’ (paradise, the newborn), after which humanity may have gone astray (mankind fallen into sin, the lost innocence of the adult). Studying this natural order can help us return to nature (paradise, innocence). If this is the dramatic arc of our scientific stories of humanity, it is a somewhat ironic finding that we keep reproducing the new world in the old one.

“Cyborg writing must not be about the Fall, the imagination of a once-upon-a-time wholeness before language, before writing, before Man. Cyborg writing is about the power to survive, not on the basis of original innocence, but on the basis of seizing the tools to mark the world that marked them as other.”

Against this background, Haraway positions the image of the cyborg as an alternative. From the start, the Cyborg Manifesto comes across as a grotesque piece of writing. Haraway positions the essay as an act of blasphemy: she tries to create a political myth faithful to feminism, socialism and materialism; the intent is to be critical and serious, and humorous and playful, at the same time. She introduces the cyborg as a powerful metaphor that can be used to combat much of what is wrong with traditional humanistic-scientific and feminist writing. Cyborgs – both human and machine – are hybrid, ambiguous organisms. They have no birth myth; they have never been innocent. They bridge traditional boundaries: they are both fiction and lived experience, both human and machine, both natural and handmade, neither male nor female, both real and virtual. If anyone can combat the traditional mistake of reproducing our cultural distinctions in search of our innocent selves, it must be the cyborg. Cyborg thinking allows us to see the world as a polymorphic information system. The ideas behind information technology have shaped our thinking. Hierarchies and dominance, once cornerstones of our thinking, make way for the idea of networks and interrelations. For long, the human body has been the model for the world, but now it has become the subject of information technology. The immune system, for example, is seen as an information system. Many sciences, including biology and ecology, have in fact become information sciences. This calls for a new form of scientific storytelling. A theory of everything needs to be rejected because it misses out on most of reality, but we should not revert to an anti-scientific metaphysics either. Rather, we need to celebrate a science in which pluriformity and situated knowledges are chosen over grand unified theories, to understand and reconstruct the borders of our daily lives, in connection with others.

“Cyborg politics is the struggle for language and the struggle against perfect communication, against the one code that translates all meaning perfectly, the central dogma of phallogocentrism”  

If, as Lakoff and Johnson argue, metaphors shape our thinking, Haraway’s proposal for a cyborg epistemology is a bold effort to change the practices of humanistic and scientific storytelling by choosing a new metaphor for them. I doubt, though, whether this will turn out to be fruitful. Haraway combats our need for wholeness and natural order and replaces it with a proposal for celebrating shattered mosaics and the cognitive dissonance they bring. This is realistic, and truthful, but it runs against a deeply felt human need. Besides, her essay does not escape her own criticism of the theories about monkeys. Her thinking, too, is fed by the dominant ideas of her time frame, such as postmodern thinking and a celebration of information technology as a liberating force. Nowadays these views are in decline. Haraway’s cyborg theory, too, is a product of the time it was written in and rather insensitive to the talk-back of the facts she uses to support it. Nevertheless, Haraway’s book provides an insightful analysis of how this process works, and the Cyborg Manifesto stands out as a tantalizing, thought-provoking and emancipatory essay. In many ways it is an essay of the late eighties, but it was ahead of its time too, foreshadowing, among other things, the next nature movement. The manifesto will continue to provide food for thought for years to come, if only, to end on an ironic note, for future scholars studying the evolution of humanistic thinking of the past.

Reading more.

My last post, Reasoning on Metaphorical Foundations, discussed Lakoff and Johnson’s thesis that metaphors are central to our conceptual thinking, as put forward in their book Metaphors We Live By.

Rather than focusing on the Cyborg Manifesto alone, I recommend reading the full book: Simians, Cyborgs and Women.


A review of the book Metaphors We Live By, by George Lakoff and Mark Johnson.

Just how important are metaphors? Many linguists, certainly if you had asked them before 1980, would have told you something like this: metaphors are a nice feature of language, important to poets, but of little importance to ordinary language use, the mind or society. But then Lakoff and Johnson composed a thorough defense of the opposite thesis. They argued that metaphors shape our understanding of the world and our actions in it. To them, human thinking and social action are deeply metaphorical in nature. In this post I will critically examine their book.

Lakoff and Johnson introduce the idea that metaphors are building blocks of our conceptual thinking with the example argument is war. There are many sentences which fit this metaphor: “Your claims are indefensible. He attacked the weak points in my argument. I never won an argument with him”. Are these, Lakoff and Johnson ask, exotic poetic ways of talking about arguments, or are these sentences linguistic evidence that we use the concept of war to understand and reason about the concept of argument? If the latter holds true, an analysis of the way metaphors are used in everyday language can tell us a lot about the way our conceptual system works. This is the program they unfold in the rest of the book.

Metaphors can transfer the structure of one concept to another. Our understanding of arguments borrows the ideas of struggling parties, an all-or-nothing struggle, attacks and defenses from the concept of war. In time is money, the idea of a limited, valuable resource is projected onto the concept of time: “Is that worth your time? You are running out of time.” This comes at a cost, though. If we use a metaphor to understand a concept, it highlights certain aspects of that concept and hides others. Time, seen as money, highlights its value within our culture but hides its indefinite flowing, which is in turn highlighted by time is a river. According to Lakoff and Johnson, metaphorical structuring is partial: we can apply multiple metaphors to a single concept to highlight different aspects of it. This is particularly clear with concepts such as love (as chaos, as journey, as magic, etc.) and ideas (as containers, as resources, as fashions), which typically take structures from multiple donors. Metaphorical understanding has coherence, though: the concept which receives a structure needs to do so in a way consistent with its donor. Although the practice of metaphorical structuring is universal, the specific metaphors are culturally bound.

There are two special types of metaphors which appear to be so foundational that Lakoff and Johnson assign special status to them. One is the class of orientational metaphors such as in-out, up-down, front-back and so on. These are used to structure many other concepts: “happy is up, sad is down; conscious is up, unconscious is down; health is up, sickness is down; more is up, less is down; and so on”. The other class is that of the ontological metaphors. Ontological metaphors allow us to grasp concepts as (physical) substances and objects. The gradual rise of prices is seen as the entity inflation in: “we need to combat inflation; inflation makes me sick”. Ontological metaphors allow us to quantify, identify aspects, express causation, set goals and motivate actions, among other things. Lakoff and Johnson claim spatial and ontological metaphors are grounded in our (early) physical experience, and allow us to use this experience to grasp the much more abstract ideas we face later.

So metaphors are not a linguistic fringe at all. We use metaphors all the time to make sense of the world around us. We create meaning through metaphor. We systematically understand and structure new concepts in terms of older, more concrete concepts. The most fundamental are the orientational and ontological metaphors, which structure most of our thinking. Because metaphorical thinking is so natural to us, most of it goes unnoticed. This makes it a powerful weapon of speech. Declaring war on drugs frames drugs as an entity which must be defeated, in a win-or-lose situation; drug dealers become enemies, policemen allies; there will be a struggle with wins and losses, which may take long and demand sacrifices; and so on. Almost without our noticing, the metaphor structures our thinking and our actions.

Metaphors We Live By builds a strong case for the ubiquity and importance of metaphors. Lakoff and Johnson’s core insights have an intuitive appeal and are supported by a lot of examples. The book has two main weaknesses. First, Lakoff and Johnson act as if metaphor is the only building block of our conceptual system. All is metaphor. This seems unlikely. Are, for example, orientational and ontological metaphors really metaphors in the same way as structural metaphors are? This is hard to assess with linguistic evidence alone, and that reliance on linguistic evidence is the second major weakness of the book. Take referring. We learn to refer to physical entities. Later we refer to abstract concepts such as inflation. Lakoff and Johnson claim we understand inflation as a physical object so we can refer to it, which makes it metaphorical in nature. But this is circular, as Lakoff and Johnson present the fact that we refer to inflation as the evidence for our understanding of it as a physical entity. Until a good deal of psychological experimentation has established the truth value and reach of metaphorical understanding, it will be hard to assess the true importance of metaphors for our mind and society. Until then it remains an intriguing theory, and the book is certainly worth reading.

P.S. In the book Lakoff and Johnson give an account of the consequences of their theory for the philosophical notions of subjectivism and objectivism. As I found this the least interesting part of the book, I took the liberty of skipping it.


There has been much writing about MOOCs – Massive Open Online Courses – lately, but little about their interaction design. This may seem unsurprising; there is simply no groundbreaking UX work on Coursera or edX. But I do believe that part of the success of MOOCs is that they are better designed than their predecessors. Just compare the experience of a MOOC with much of the ‘open courseware’ that can be found on iTunes University. The mediocre live recordings of university classes that were so common there have been replaced by special-purpose, high-quality materials. MOOCs also allow for an inkling of educational interactivity: through tests and assignments and, sometimes, peer feedback. It seems likely that this brittle marriage between UX and educational design contributed to the tipping point for online learning that MOOCs appear to embody. So it is worthwhile to consider how we can improve the interaction design of MOOCs further. To explore the room for improvement I asked my social interaction design class to come up with designs for MOOCs that would increase engagement and participation rates, strengthen educational interactivity and encourage peer feedback and (informal) peer learning. In this post I discuss four solution directions which they threw back at me.

Chunk and Unlock

UX designers know the power of chunking, and chunking is important in educational design too. Unfortunately, current MOOCs do not chunk learning well. They do better than open courseware: typically, lectures are sliced into separate video lessons of about 5-15 minutes. But these chunks of knowledge transfer are seldom interlaced with knowledge activation chunks such as small assignments or quizzes. So there is a weekly ‘listen-do cycle’ to most MOOCs, which hardly seems a suitable rate for online learning. No wonder so many users skip the ‘do’ part. One way to resolve this is to use the power of unlock. Rather than offering the materials at a fixed, weekly pace, the user can unlock a new instruction video by doing a small assignment or quiz, or by reviewing someone else’s work. An interesting variant could be social unlock. Users are matched to a partner, and both have to contribute something to a joint assignment before they can unlock the next instruction materials. Of course random matching may go wrong in a learning environment with many lurkers, but users could be matched to other users who unlock at the same pace or who check in to unlock simultaneously.
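To make the unlock idea concrete, here is a minimal sketch of such a rule in Python. The class names, fields and thresholds are my own illustrative assumptions, not the mechanics of any existing MOOC platform: a chunk unlocks once the previous chunk was activated through a quiz or a peer review, and the social variant requires both matched partners to have done so.

```python
# Hypothetical sketch of a "chunk and unlock" rule; names are illustrative
# assumptions, not an existing MOOC platform API.
from dataclasses import dataclass, field

@dataclass
class Learner:
    name: str
    completed_quizzes: set = field(default_factory=set)   # chunk ids with a finished quiz
    peer_reviews_done: set = field(default_factory=set)   # chunk ids with a submitted review

def can_unlock(learner: Learner, chunk_id: int) -> bool:
    """A new video chunk unlocks once the previous chunk was 'activated':
    its quiz was completed or a peer review was submitted for it."""
    if chunk_id == 0:
        return True  # the first chunk is always open
    previous = chunk_id - 1
    return previous in learner.completed_quizzes or previous in learner.peer_reviews_done

def can_social_unlock(a: Learner, b: Learner, chunk_id: int) -> bool:
    """Social unlock: both matched partners must have activated the previous chunk."""
    return can_unlock(a, chunk_id) and can_unlock(b, chunk_id)

# Usage example
alice = Learner("alice", completed_quizzes={0})
bob = Learner("bob", peer_reviews_done={0})
print(can_unlock(alice, 1))              # True: alice finished the quiz for chunk 0
print(can_social_unlock(alice, bob, 1))  # True: both partners activated chunk 0
```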

Improve the connection between content and community

Once, schools used to favor the principle of separation of learning and social peer engagement. In the classroom you would listen to the teachers; during the breaks you could talk to classmates, which was not considered learning. Such schools do still exist, but they are hardly considered best practice. Still, separation of learning and community is in the basic design of most MOOC platforms. MOOCs offer materials for individual learning with a hyperlink to a forum – elsewhere in cyberspace – where learners can engage with one another about the content of the course. So the coupling between community and content is as loose as it could possibly be. As a solution, my students suggested taking a better look at Massively Multiplayer Online Role-Playing Games (MMORPGs) – which inspired the term MOOC in the first place. Players in those games take on joint challenges with a division of labor. Communication around the challenges encourages peer learning; the forum is not a separate place anymore and community emerges around the content. Although this seems ideal, it can be hard to design such challenges and to set up the special-purpose UX to support them. A less demanding solution to the same problem is to couple forum entries to specific content and challenges within the MOOC. This is done well on Codecademy, where every exercise has its own forum entry. One of my students went as far as suggesting a dynamic forum coupled to the video. Users could add questions to specific points in the video instruction, which are then answered by other users watching the same video at a later time.
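As a rough illustration of the video-anchored forum idea, here is a minimal Python sketch. The data model and names are my own assumptions, not the design of Codecademy or any MOOC platform: each question is anchored to a timestamp in a video, and viewers see the questions posted near the fragment they are currently watching.

```python
# Hypothetical sketch of forum questions anchored to points in a lecture video;
# the data model is an illustrative assumption, not an existing platform's API.
from dataclasses import dataclass, field

@dataclass
class AnchoredQuestion:
    video_id: str
    timestamp_s: int      # position in the video the question refers to, in seconds
    author: str
    text: str
    answers: list = field(default_factory=list)

def questions_near(questions, video_id, position_s, window_s=30):
    """Return questions anchored within `window_s` seconds of the current playback
    position, so a viewer sees questions and answers relevant to that fragment."""
    return [q for q in questions
            if q.video_id == video_id and abs(q.timestamp_s - position_s) <= window_s]

# Usage example
forum = [AnchoredQuestion("week2-lecture", 125, "alice", "Why is this step needed?")]
print(questions_near(forum, "week2-lecture", 140))  # shown while watching around 2:20
```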

Improve online identity, presence and urgency

One well-known social web pattern which is rare in the MOOC is showing the presence of other people, and lowering the threshold for informal interaction with them. Consider yourself looking at a video in a MOOC: do you have an idea of who else is watching that video at the time? Currently you do not. But supporting online identity and social presence – in a properly designed way – may increase participation and engagement a lot. Profiles on MOOCs are weak and too focused on the narrow role of the user as a learner. One of my students suggested deep integration of MOOC platforms with the professional networking site LinkedIn. Don’t learning and professional development go hand in hand? How about a ‘dream jobs’ profile, accompanied by MOOC achievements? Other students suggested increasing the presence of other users through live forums, connecting community and content in the here and now. Maybe videos could be started only after a critical mass of learners (say 5) has signed in, so that live chat becomes an opportunity and study groups may form in a natural way.
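A minimal sketch of that critical-mass idea, assuming a hypothetical shared-session object (the threshold of 5 and all names are illustrative, not an existing feature of any MOOC platform):

```python
# Hypothetical sketch: a shared video session starts only once enough learners
# have signed in, so live chat and ad-hoc study groups become possible.
class SharedSession:
    def __init__(self, video_id: str, critical_mass: int = 5):
        self.video_id = video_id
        self.critical_mass = critical_mass
        self.waiting = []        # learners who signed in and are waiting
        self.started = False

    def sign_in(self, learner: str) -> bool:
        """Register a learner; the session starts once critical mass is reached."""
        self.waiting.append(learner)
        if not self.started and len(self.waiting) >= self.critical_mass:
            self.started = True  # in a real system: open the video and the live chat
        return self.started

# Usage example
session = SharedSession("week-3-lecture")
for name in ["ann", "bo", "chi", "dee", "eva"]:
    started = session.sign_in(name)
print(started)  # True: five learners are present, so the session starts
```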

Level the playing field

In a MOOC it seems as if there is a single expert and many, many (equal) novices learning from that expert. This is a myth. Many users of MOOCs are far from novices, and many even bring skills additional to those of the super expert. But the myth, the ‘bright light’ radiating from the super expert in the MOOC, may hinder participation from semi-experts and peer learning. Do you feel free to try out your own ideas when an expert is watching you? In response, my students came up with ways to level the playing field, in the hope that more participants of the MOOC would feel like playing. One solution was to give special status to expert users. Possibly they can provide extra content, so the MOOC becomes more of a shared place. Another solution could be to stimulate creativity, and to create community around creative exercises. Exercises that do not have a right or wrong to them might help challenge the ‘one expert’ model many MOOC users get from its design. A final avenue could be to create a large joint project, a barn-raising challenge. Users could create something together, like in Wikipedia, which adds meaning to the course materials, which can handle contributions from people with a large range of expertise and skill, and which can give a feeling of joint discovery and achievement.

Where do suggestions like these bring us? Will MOOCs become interactive education places where joint learning takes place, once their designers take up these suggestions from my students? Maybe. Hopefully. But what I expect most is a diversification of online and blended learning possibilities and experiences, from ‘simple’ open educational content, to well-designed educational games, and many other blended forms of online and offline learning. Following Manovich, we could draw a parallel between online learning today and the cinema of the 1900s and say that MOOCs are one among many experimental forms that will one day define the ‘language of blended learning’ and will fill the ecosystem of educational forms of the times to come. This diversification can be exciting in itself, but what I am really looking forward to is finding out how the marriage of educational and user experience design develops.


A review of the book “Ubiquitous Photography” by Martin Hand.

I have a notebook somewhere with an old advertisement for camera phones. The ad shows a woman in sexy lingerie, sending a picture to her partner to ask for his (her?) opinion. It must have been 2001: camera phones were new and the telephone providers undertook a charming effort to explain their utility to the general audience. The message came across though: digital photography has changed who makes pictures, the reasons for making pictures, the way in which we use and share pictures and, with that, the cultural meaning and significance of photography and photos. Martin Hand tries to describe and explain these changes in his book “Ubiquitous Photography”.

Digitization is just one of the many changes that have happened to the technology of photography since its invention in the first half of the nineteenth century. The first photographs resembled realistic portrait paintings. Photography was hard for everyone: subjects had to sit still for a very long time, photographers had to control light and setting precisely and printing photos was a specialist job. So photos were precious objects. Later, agency shifted from the photographer to the technology. Increased light sensitivity made action photography possible. The Kodak camera made photography accessible to everyone, resulting in the family album and tourist photography. Photos became affordable objects for everyone. For long, printing photos remained a professional service, but the Polaroid camera changed this, too. Photo printing became immediate and the ‘snapshot’, a new form of photography, emerged. Digitization, in many ways just the next small improvement, was a turning point – in particular when camera phones came about. Digital photography combined the immediacy of the Polaroid with flexibility of presentation. We can share photos online and we can print them on any surface – including birthday cakes, clothing and shower curtains. This started an enormous diversification of (personal and professional) photography practices.

This diversification, in turn, makes it increasingly hard to build theories about photography which do justice to its diversity. Martin Hand tries to address this problem by carefully tracing both the continuities and the changes in academic thinking about photography once ‘digital’ arose. Hand: “Photography has now many co-existent lives each of which is part of a different trend and may have a different trajectory in the future, but all have significant connections with earlier photographies.” (p185). He uses personal photography – the pictures you and I make – as a focus point and writes about professional photography only in passing. Still there are many threads to trace, and the book offers a complex quilt of ideas about photography and the ways in which we are weaving photos into the fabric of our lives. Throughout, Hand recognizes some bigger trends or themes: photography is shifting from capture to performance and from permanence to ephemerality, and its role is shifting from celebrating the family to living publicly. Let me discuss these shifts in turn.

Once upon a time, photos used to be proof of objective reality. Photos didn’t lie: they captured reality as it appeared in front of the camera. While this is more or less true from a technical perspective, in the practice of photography it has always been a myth. For example: photographers choose which part of reality they capture, and this framing has a big impact on what the photo has to say. Also, on early photos we see people posing for the camera in carefully created photo sets – how ‘real’ was this? Even photo manipulation is a much older craft than many people know. But it is this myth of the photo as a depiction of reality which underpins its role as a performance medium today. Billboard advertisements are an example. They are clearly photo compositions, but they seem to say “this dream we are portraying here could be the objective reality if you buy our product”. This is performance using the myth of capture. The family album is another example: it isn’t used so much to remember important events in life, it is much more about storytelling, to celebrate family values and to shape the ‘ideal’ family. The family album doesn’t capture the way your family is; it shows how you would like it to be and how you want to portray it to others. This family performance can turn photography into the director of your experience. During my last holiday in Ecuador there were frequent volcano-photography bus stops, to take pictures of volcanos and of course to take pictures of ourselves taking those pictures. So I wondered: was the holiday about the experience of seeing a volcano or about collecting photos to show our friends the ‘reality’ of our holiday? Photos are little objective snapshots of our lives, which play a major role in the make-believe plays we perform with each other. Photography may always have been about performance. But it seems fair to say that older photography still celebrated the myth of capture, while today’s photography uses it merely as a background for other things. Today the focus is on performance, simply because we take many more pictures, which we share in many more ways with many more audiences, often in digitally manipulated form.

Related to the idea of capture is the idea of the photo as a fixation of a moment in life. Before digital, we took pictures of those moments in life which we wanted to ‘save forever’; as such we fixed them, chemically, on a piece of paper. They weren’t literally fixed: just like now, photos did migrate from one place to the other. They moved from hanging in a frame on the wall to a family album or a shoebox beneath the bed, but they were ‘about’ fixation nevertheless. Even the shoebox pictures had a role in remembering and sharing stories about the precious moments in life. This practice of photography still exists, but new practices arose too. Today, photos have become ephemeral objects. Rather than physical evidence from the past, many 21st-century photos are about the ‘here and now’ – and just that. We send each other photos of our food, the dirty dishes, writings on a whiteboard, or the delay sign for a train, as an integral part of chatting with friends. In other words, photos have become a form of speech. Today, we use photos to say: “Look, this is what I am looking at right now”, and we do this a lot, much like the camera phone ad I began with predicted. This practice is a next step after the snapshot. Nowadays we are capturing and sharing stuff that isn’t worth a dollar for a print to us; goodbye to the photo as a record of the ‘precious things in life’. According to Hand, college students express concern about the unexpected permanence of digital photos (p155). The technology of photography still has a fixation character: photos may be permanent in the sense that their bits and bytes will dwell forever on some hard disc or cloud computer, and they may one day be found by future data archeologists. Still, in the everyday practice of photography the ephemeral, the fleeting and the mundane have become dominant forms.

Next, we must ask to what extent the personal photo is still about the family. In his book, Martin Hand pays attention to the impact that the way in which we share photos has on the photos we take (and the other way around). More and more, we share our photos on social media – Instagram, Facebook and Flickr – rather than through the family album. This changes the scope of personal photography. Hand: “The emergent problem of ‘how to live publicly’ in a post-privacy world is not only about the management of digital self-presentation but also recognizes the collective nature of public life” (p183). Our photos find bigger and different audiences, with the disadvantage of losing control. Who has access to your photos and in what contexts are they presented? This is a much more difficult question today than it was years ago. Personal photography has experienced a shift in the agency of sense-making from the personal to the collective, and a loss of control over who chooses to look at your photos and with which expectations. Hand shows this vividly with an account of how college students react to person tagging on Facebook (which controls whose ‘timeline’ changes), compared to topic tagging on Flickr, which has less effect on audience control. As soon as we post a picture online we trade ownership for connectivity, and the future of personal photography may well depend on the situations in which this turns out to be favorable and the situations in which it turns out to be a Faustian deal. The result will not only affect the pictures we share; ultimately it will influence the pictures we take.

This last point may be Hand’s most important one. There is a bidirectional relationship between the technology of photography and its cultural use, which he calls ‘reconfiguration’. The reason for taking a picture is mostly to share it. How we share the picture, with whom, and to what extent we feel in control of this process may be important to the pictures we choose to take (or not). Changes to the technology and, in particular, to the cost of making pictures affect the pictures we take, and they shape our reasons for sharing and the audiences of the pictures. Digitization has made the relationship between the technologies for taking and sharing photos more complex, dynamic and exciting. When we try to make sense of these changes we need to see that the photo is no longer an image on a physical piece of paper. Rather, it is a networked object that can collect meaning and an audience more or less independently, for better or worse for its original author. Instead of fixed objects capturing the ideal family, photos are now shapeshifting objects for performing a diverse set of scenarios in today’s ephemeral, public culture.

Reading more.
Lev Manovich is a strong advocate of the idea of a new media object as a networked object, a view shared by Dhiraj Murthy when he writes about Twitter.


Imagine having a device which can post sound bites from your home to your Facebook wall. Now and then it will record the sounds in your home, scramble them – so listeners cannot recognize specific sounds – and make an audio post for you. Most people I talk to think this is a stupid idea. But it is what “Facebook Listener”, designed by Stefan Veen and his colleagues, does – and I do believe it could be a success. Or at least I believe in the broader idea of awareness systems: allowing people to share background information with others in a lightweight way. In this post I will answer four critical questions I often get about awareness systems, and I hope to show where the opportunities for awareness are.

Facebook Listener

Facebook Listener records sounds from your home and allows you to listen to sounds others shared. Picture from the original article (link below).

A short history of awareness

Maybe I should take a step back and briefly discuss where some of the ideas behind Facebook Listener originated. The academic tradition of awareness systems started in the workplace of the early 1990s. When people collaborate they build on tacit knowledge about each other. Several workplace studies showed that people who have an awareness of their co-workers can collaborate more easily and effectively. Scientists turned out to publish most of their articles with people whose offices were nearby. People in control rooms needed details about what others were doing to avoid mistakes. Because of findings like this, researchers slowly started to recognize a design opportunity: “If people need background information about each other to work together”, they thought, “can’t we design systems which deliver such information across a distance?”. So soon researchers started to build devices which allowed awareness, background contact and casual communication at a distance. Mostly these media spaces, as they were called, involved video links between multiple remote offices. This was exciting and controversial work: in those days computers had to be useful tools for completing specific tasks effectively. The idea of using computers to provide background information about others seemed to come from outer space. Exciting or not, researchers needed several incarnations to get the design of these systems right, and honestly, the early controversy didn’t fade.

Media space office meeting

Early awareness systems, called media spaces, provided video links with remote colleagues.
Image from: http://people.cs.vt.edu/~srh/Media%20Space%20Home.html

Is there a need for awareness systems?

The number one question. Does your work, for example, improve if you get a video connection with remote colleagues, so you can see them behind their desks, typing? Or does anyone benefit from listening to scrambled sounds from your home on Facebook? Perhaps not: I have not seen studies of media spaces which led to measurable improvements in business results, and clearly my friends survive without being able to listen to my house sounds today. But probably yes, too. Consider Digital Family Portrait, for example. Researchers from the Georgia Institute of Technology placed sensors in the home of an elderly woman so the computer system could get a sense of her activity during the day. Her (adult) children could see this activity data on a digital photo frame in their home. There were butterflies around the picture of the elderly woman; big butterflies meant “mother is active”, while little butterflies pointed to the opposite. Digital Family Portrait raises many questions, but the users were positive about it and it shows how awareness systems can support real communication needs.

Digital Family Portrait

Digital Family Portrait gives subtle cues about activity.
Picture from: http://home.cc.gatech.edu/jimRowan/4)

Often, our communication is not so much about the contents of the communication, but for the sake of communicating itself. This is called phatic communication. When I share what I am eating on Twitter, my followers learn something boring and something important about me. The boring part is what I am eating. The important part is that I am alive, all is well, and the communication channel is open. They can contact me if they want to. It turns out that this last, phatic part of the message is important, in particular between close ones. The users of Digital Family Portrait didn’t really need to know how active their grandmother was. But they needed a possibility to check on her regularly for their peace of mind. Digital Family Portrait provided them an easy way to do just that. There are other reasons the butterflies on Digital Family Portrait made good sense as well. The meaning of a bit of information about someone close to you depends on your background knowledge. To the users of the Family Portrait the size of the butterfly is not so important, but the way it changes over time is. A sudden drop in activity, for example, means there is a reason to check on grandma (or the hardware). Finally, background information such as Facebook Listener, media spaces or Digital Family Portrait provide can be a starting point for a more intimate and meaningful conversation. Users of Digital Family Portrait could use the butterflies to talk about health and lifestyle choices. So it could be that Facebook Listener, Digital Family Portrait and media spaces aren’t presenting the right information about the right people in the right way, to the right people. But this doesn’t mean the general idea is wrong. It seems people want to share and receive background information about others. Even if it is just phatic communication, I would say there is a need for awareness systems.
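To make the ‘change over time’ point concrete, here is a minimal sketch of how in-home sensor events might be summarized into a butterfly size and a check-in signal. The thresholds and names are my own assumptions for illustration, not the actual Digital Family Portrait implementation.

```python
# Hypothetical sketch: summarize in-home sensor events into a coarse daily
# activity level (the "butterfly size") and flag a sudden drop worth checking on.
# Thresholds and names are illustrative assumptions, not the Georgia Tech system.

def butterfly_size(daily_event_count):
    """Map a day's count of motion-sensor events to a coarse display size."""
    if daily_event_count > 200:
        return "large"
    if daily_event_count > 50:
        return "medium"
    return "small"

def sudden_drop(recent_counts, today_count, factor=0.5):
    """Flag when today's activity falls below half of the recent average."""
    if not recent_counts:
        return False
    average = sum(recent_counts) / len(recent_counts)
    return today_count < factor * average

# Usage example
recent_days = [220, 180, 240, 210]    # sensor events per day over the last days
print(butterfly_size(30))             # 'small'
print(sudden_drop(recent_days, 60))   # True: reason to check on grandma (or the hardware)
```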

Why spyness systems? What is wrong with human updates? Aren’t those sufficient?

Granted that people need background information from others and that they already share such information on social media, why bother any further? Or stronger: aren’t we crossing a hard and scary border when we start to capture and send information about people automatically, like the three examples so far do? I find this question much more difficult to answer, but I would say no: it is fine to build systems which give automatic updates. Perhaps JK Rowling can support my argument. In the Harry Potter book “The Goblet of Fire”, the Weasley family has a clock which shows where family members are, rather than the time. This is an example of automatic updates that seem sympathetic and useful. In fact, we know from research that they are. Soon this “whereabouts clock”, as researchers called it, was built and evaluated by Microsoft Research [1]. They report that – at least within nuclear families – the clock gave a sense of reassurance, connectedness, expression of identity and social touch. The clock quickly became an integral part of the routines of the family members: it provided phatic communication possibilities. Family members got a sense that everything was going according to routine, that all was well. Part of the success of the clock may have been its Potterish design, though. Much like the original version of the Weasley family, and unlike some commercial tracking systems, Microsoft’s whereabouts clock used rough place labels like ‘home’, ‘work’, ‘school’ and ‘elsewhere’, rather than precise GPS data. Probably there is a tradeoff in automatic sharing: the more intimate the data you share gets, the more abstract, coarse-grained or fuzzy the presentation of this data needs to be, to be acceptable to users.
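The coarse-label idea can be sketched in a few lines of Python. This is only an illustration of the tradeoff, with made-up place zones; it is not Microsoft’s implementation, and the labels, radii and coordinates are my own assumptions.

```python
# Hypothetical sketch: reduce precise coordinates to rough place labels
# ('home', 'work', ...) before sharing, as the whereabouts clock did.
# Zones, radii and coordinates are illustrative assumptions.
import math

# Known places as (latitude, longitude, radius in km) - example values.
PLACES = {
    "home":   (52.3702, 4.8952, 0.3),
    "work":   (52.3508, 4.9160, 0.5),
    "school": (52.3580, 4.8686, 0.3),
}

def _distance_km(lat1, lon1, lat2, lon2):
    """Rough equirectangular distance, good enough for labeling nearby places."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371.0

def place_label(lat, lon):
    """Share only a coarse label instead of the GPS coordinates themselves."""
    for label, (plat, plon, radius) in PLACES.items():
        if _distance_km(lat, lon, plat, plon) <= radius:
            return label
    return "elsewhere"

# Usage example
print(place_label(52.3703, 4.8950))  # 'home'
print(place_label(51.9244, 4.4777))  # 'elsewhere'
```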

Whereabouts clock

Figure 4: Microsoft’s incarnation of the whereabouts clock, first described in a Harry Potter book.
Picture from: http://research.microsoft.com/en-us/groups/sds/whereabouts_clock.aspx

Who needs even more ‘information displays’?

This question comes in different forms, but the gist is that people wonder how to act on the new information they are getting. Wouldn’t it be awkward to pick up the phone and say “why’s your butterfly so small today, ma”? Indeed, the notion of awareness is much more information-centric than communication-centric. Many awareness systems researchers want to figure out what people could share through sensors and how to show this information to users. They seem to care less about the next step: providing the means for the ‘talk’ that may follow from the awareness. Often, as in Digital Family Portrait, the information stream is one-directional too. The mother lacks common ground: she doesn’t know how big her butterfly is that day, which creates the awkwardness of the example. Some researchers argue that people have enough channels and the awareness system only needs to provide the ‘trigger’ for the communication. But I think the evidence from the studies I mentioned points in a different direction. Users of Facebook Listener turned out to use it to create dedicated ambient home-soundscape messages for their friends. Microsoft researchers reported that users of the whereabouts clock wanted to be able to send messages back to their peers from the clock. Evaluators of other awareness systems report similar findings. Awareness information triggers communication needs, and it seems natural to support those from within the system, preferably in an open-ended and playful way.

Who would buy such a system?

This is a fair question. A positive evaluation doesn’t form a business case. There are two serious barriers to market entry for current awareness systems: single-purpose designs and the network effect. Many awareness systems serve a single purpose: a specific awareness need or a single type of awareness information. This is fine for research projects, but it creates small markets and high entry costs. It seems difficult to overcome. Users of the Digital Family Portrait and the whereabouts clock can understand the information of the system because the design supports the use case. Some have tried to create more flexible awareness systems, such as the modestly successful awareness rabbit called Nabaztag, but this is difficult. It is a pity, but Nabaztag cannot display location information of the family in an optimal way. When we abstract away from (design for) a specific type of information, the awareness display becomes hard to read. A related problem is the network effect: part of the value of any communication system is in the number of users that already use it. Awareness systems, usually being special-purpose, asymmetrical and standalone, may suffer from this effect. The decision to buy or use is not with a single person but at least with a small group. This makes those awareness systems that make use of existing technical infrastructure and use patterns (of mobile phones, for example) much more likely to succeed than other sensible awareness proposals.

Nabaztag

Nabaztag is a general-purpose awareness system. The ears can be used to communicate with other Nabaztag users.
Image from
http://en.wikipedia.org/wiki/Nabaztag

So the proposal of Stefan Veen and his colleagues – to make awareness systems which cooperate with Facebook – makes a lot of sense from a market perspective. The idea resembles our proposal of a year earlier: to integrate social media and business by building software on top of existing social media (integration software). The benefit could be that users are already networked in this environment and can make use of its lightweight communication tools. But this benefit comes with two costs. First, there is a usability challenge. In our own research we found that it is hard to communicate to users how the new interface works together with Facebook. In a way, users handle two systems at the same time. It is not easy to understand how an action in the integration software or awareness system plug-in changes the state of Facebook, for example their timeline. Second, Facebook integration only has an advantage if whatever comes out of the awareness system is also informative for non-users of the system. Any message the new system puts on Facebook needs to be a good addition to the Facebook timeline. This is a strong limitation. So while I expect that awareness systems will be built that integrate with social networking sites, chances are that they will mostly provide a new output for data which is on Facebook already, or can be inferred from it, rather than providing a new input device. But on the other hand, if awareness systems are ultimately communication systems, new inputs will soon be added to this outlet in your home.

Reading More

Much of the information for this blog post comes from the book “Awareness Systems: Advances in Theory, Methodology and Design”. In particular: the history of awareness (chapter 1), phatic communication (chapter 7) and the whereabouts clock (chapter 18). The design of Facebook Listener is described in detail in this NordiCHI paper, the Digital Family Portrait in this paper. Our exploration of software which uses social media as an infrastructure for new applications can be found here.

Loosely related to this post are my posts about common ground, privacy coordination, playification and Twitter.


[1] Several incarnations were built at Eindhoven University of Technology as well.


A book review

The problem with books about the impact of social media on society is that most of them are too polemic. The positive books make sweeping claims about how social media are going to change the world for the better. Their writers predict social media will bring about a more participatory, open and democratic society. The dystopians, in contrast, sketch equally sweeping visions of a world full of shallow ego-senders who cannot deal with intimacy and cannot focus on more than two lines of text. In this battlefield of extreme ideas it is hard to find a balanced treatment based on a thorough review of existing scholarship and empirical evidence. It is this void that Dhiraj Murthy is trying to fill with his book Twitter: Social Communication in the Twitter Age.

It is fair to ask what makes Twitter so special as to deserve its own book: shouldn’t the book be about the impact of all social media on society? In fact, this narrow scope on Twitter works well for Murthy. It is easy to see whether a statement about the impact of Twitter relates to a broader social media trend or to Twitter alone. Also, in a surprising number of cases, Twitter turns out to be different from its social media neighbors. Compared with Facebook, Twitter is more public, lean and lightweight. Most Twitter users go with the default of public tweets, they incorporate weaker ties into their Twitter networks and they form ad-hoc connections with other users through hashtags. Hashtags, retweets, special-purpose accounts and lists also allow for collections of tweets as a new narrative form. Twitter shares a focus on news and opinion with blogs, which explains its close relation with the mainstream press, but there are important differences between Twitter and blogs too. Twitter is much more participatory, because of its quick reactions, for example. Twitter is an interactive multicasting system, allowing many-to-many communication. On Twitter, each tweet can have a different audience (size). Its users can select audiences in several ways: @replies, retweets and hashtags. These audience selection practices make tweets part of a complex social structure and less ego-centric than regular blog posts. Twitter builds bridges between the private and the communal and between the mundane and the profound. This is a lot like normal blog posts, but on Twitter people tend to mix these extremes much more. The differences between Twitter and other media may be a reason it has become the primary lens through which we look at the impact of social media on society, at least for Murthy.

Murthy is quick to disarm the polemics too. He starts his book with a comparison between Twitter and the telegraph. It is no surprise that many of the more sweeping claims about Twitter could be close echoes of early responses to the telegraph. This is a sobering thought and it makes clear that Murthy is planning to be more careful with his analysis. He continues this careful line of reasoning in his theoretical chapter. Building on ideas from, among others, Lev Manovich, Marshall McLuhan and Erving Goffman, he tries to give perspective to some daunting questions around Twitter. Is it, for example, a democratizing force in society? Isn’t Twitter’s openness and instant information spreading fueling equal access, the idea of a global village and public debate? This may be, but in practice Twitter participation rates suggest a digital divide, and its social circles are fairly homophilous. Twitter is a tool for the elites, and those turn out to meet like-minded people rather than those with opposing views and values. This hurts the democratic appeal of Twitter. On Twitter we do get a sense of the global village, but it is a stratified picture.

Another question is what the key motivations are for people to use Twitter. People may want to be connected with and informed about a wide circle of social contacts, and they may want to build their identity with frequent updates about their lives. This is true for most social media, but Twitter may be special because of its focus on news. Twitter gives the possibility to take part in something important. This may be what drives citizen journalists, and the way users use Twitter at events such as concerts and conferences and around TV programs may be other examples of this motivation to be part of something larger. The second motivation is that, unlike many other media, Twitter is real. The mainstream media paint a stylized picture of the world; movies and advertisers show photo-realistic illusions. This may feed a need to connect with real people sharing their thoughts about the here and now, mundane or profound. Twitter focuses our attention on the small and big events around us. It fosters an update culture and celebrates an event-driven society.

Murthy discusses four case studies to develop these ideas about Twitter: citizen journalism, disasters, activism and medicine. I believe his chapter on citizen journalism is just as foundational as his theoretical chapter. It is hard to understand Twitter’s role in disasters and activism without considering the synergy between Twitter and the mainstream press. Twitter is an ambient news environment which plays two roles for the press. First, the press can harvest news from Twitter. The press can make citizens who tweet about important events their ‘ground army’. This changes the way the press acts as a mediator between the public and the people who ‘are’ the news. While there is also direct contact between citizen journalists and parts of the audience during a disaster, these citizens lose their following once the news has traveled elsewhere. So, in return for the free updates, the press has the capacity to connect citizen journalists to news audiences, which may be a motivation for citizen journalists to share the news at all. Second, Twitter is an outlet for news organizations. Twitter plays a role in spreading and curating the news reports the press provides. As Twitter is an information channel and a low-threshold communication channel in one, it allows for direct contact between journalists and the public. This puts pressure on journalists to act in such a way that they keep the trust of the audience, while the direct lines between audiences and citizen journalists also add pressure to act faster. So Twitter supports a new relationship between the press and the public, which could be symbiotic, but which is also subject to several new tensions.

If the circumstances are right, a single tweet can gain an enormous reach and become famous. This happened to the first photograph of the US Airways plane that had to make an emergency landing on the Hudson River in January 2009. The story of the Hudson River tweet shows how citizen journalists can be important for the news, but also that the hype can be much bigger. The attention for the emergency landing quickly shifted into a raging debate about the importance of Twitter itself and the new reality it creates. There was a similar shift of attention with the Iranian post-election protests in 2009, the 2011 ‘Arab Spring’, and the 2011 earthquake in Japan. It seems unlikely that the activists in Egypt coordinated their efforts with Twitter. It doesn’t make sense to use a public tool for coordinating protests, but more importantly, only the flabbergastingly low figure of 0.00014% of the Egyptian population actually used Twitter. Other disasters and protests show low participation rates too. During disasters and revolutions, the enormous numbers of topical tweets show ‘the West’ tweeting to ‘the West’ about the disaster, not the victims tweeting to ‘the West’. This raises the question whether Twitter can have any real impact for activists at all. But it does, because of the synergy between the traditional press and Twitter. Murthy shows that Twitter may play a role during disasters to augment social communication on the ground (be it within the small elite of users), to empower disaster victims (because they have a voice to the world) and to call for support (or comfort) from this worldwide attention. Similarly: dictators who face protests on Twitter will not fear the small number of direct tweets from the ground, but they turn out to be sensitive to the changing public opinion in the West that can result from them.

In the last case study of the book, Twitter and health, we see the importance of a third paradox of Twitter. Apart from bridging the private and the public and the mundane and the profound, Twitter also mediates between information and community. While Twitter is less participatory than Facebook, its multicasting possibilities allow topical tweeters to build an audience. So while Twitter is much more focused on information than on community, it can be a valuable resource for patients to build a support group around themselves. For these virtual support groups it is important that Twitter is a channel in which they can post mundane information. You would not write a blog post or visit an online health support group to share your anxiety about a doctor’s visit from the waiting room, but it is at this time and place that support from like-minded people may be most vital. Twitter’s bridge between the mundane and the profound plays out in a different way too, as health experts engage in lightweight communication with patients. Patients may connect directly to experts rather than through their doctors, who may have an information lag. These experts may also play an ‘activist’ role for the disease by connecting to other audiences as well. We see a similar mix of tweets with celebrities, who are followed for gossip, but who can be at the forefront of more profound Twitter-based fundraising efforts for flood victims. Health care shows in several ways how Twitter creates an information and support network that transcends traditional organizational and social structures. This new network can be a welcome addition to health care and support.

With his book “Twitter: Social Communication in the Twitter Age”, Dhiraj Murthy has written a tantalizing, insightful and balanced account of the existing scholarship on the question of how Twitter changes our communication practices. These changes may not be as big as the web optimists would like us to believe, but they are nevertheless significant. Citizen journalists may not revolutionize the news-making industry, but they do change the playing field for journalists. Twitter isn’t the democratizing agent that some see in it, but activists manage to create international recognition and pressure because of its close relation with the regular press. Murthy has done a good job of bringing perspective to these debates, but to me his chapter on Twitter in health care is the most important chapter in the book. When we discuss the impact of social media on our communication practices, we usually focus on the ‘big events’ or ‘bright stars’, the disasters and revolutions. The small changes the medium brings to the everyday life of everyday people get much less attention. Surely these are harder to see, but they may be more important for the well-being of the global village than the big events that everyone is watching so closely. In his chapter on health Murthy gets closest to this everyday use of Twitter. Here we see how Twitter’s paradoxes play out in a positive way. Twitter may be the noisy, superficial and banal communication channel for which it is bashed by web pessimists. But Murthy shows that when the private meets the public, the mundane meets the profound and information precedes community, health care may benefit. It is this chapter which shows most clearly how the subtle differences between Twitter, social network sites, dedicated e-communities and blogs lead to different, new, and complementary communication practices.

Reading More

I wrote several posts about Twitter before which fit in nicely with the ideas in the book. In “Is Twitter Getting Fat?” I discuss the lightweight nature of Twitter and how this may be changing because of commercial pressure and feature creep. In “Does Twitter Have a Tempo?” I discussed how Twitter’s update culture originates from the way tweets set a communicative context for other tweets. In “Hashtags and the Semantics of Interactive Language” I discuss the history and interactivity of hashtags, and how these may influence their linguistic uses. My post Recipient Responsibility in Netiquette continues this linguistic line of reasoning. I first commented on the possible problem of homophily in my post Turkle’s Turn.

Earlier book reviews include my recent review of Lev Manovich’s book “The Language of New Media” and reviews of Wikinomics and Understanding Media.


A book review

It is not an easy challenge Lev Manovich sets himself in his book “The Language of New Media”. In the early days of cinema hardly anyone could foresee the enormous cultural impact this new medium would have on our society. A new artistic language, cinematography, was born, but no one recorded its first steps systematically. This wasn’t only because people didn’t see the importance of cinema. Documenting an emerging language is a form of historiography, nearly impossible without the advantage of hindsight. It simply takes time before it becomes clear what the right or interesting historical questions are. So I never believed Manovich could succeed in “providing a potential map of what the field could be” (p. 11) back in 2001. But I do feel his book deserves a close reading. The first steps in building a theory are usually the hardest and, despite the difficulty of the task, Manovich did cover interesting ground.

Manovich tries to understand new media through the eyes of an artist. Two ingredients are necessary for every art piece. First there is the influence of existing media. A key concept in Manovich’s theory of new media is the cultural interface (p. 69). Much in the way the human-computer interface structures the interaction between humans and the computer, cultural interfaces give structure to the users’ interaction with culture (or “cultural data”). The interfaces of CD-ROMs, web pages, games and apps are all cultural interfaces. Often new media reuse ideas, forms and conventions from older media. Early cinema built on theater, rock music on blues. So the forms of new cultural interfaces also stem from older, already familiar forms such as magazines, newspapers, photography or, indeed, cinema. Second, new media offer new technological possibilities and affordances to the artist (Manovich speaks of operations). As media makers experiment with these new operations, some older conventions will fade and new conventions will emerge. So to understand the language of new media we need to look carefully at its ancestors and at the possibilities of computers.

Manovich believes three older cultural forms are most important for describing the language of new media: print, cinema and the human-computer interface (in practice Manovich specifically refers to the graphical user interface, the GUI). Print, the oldest form, was adopted first. The page is a cultural convention of print that persisted into the digital age, although the world wide web also revived the ancient form of the scroll. Hyperlinks were the most radical and disruptive innovation to texts. Hyperlinks challenge old ways of organizing information. The structure of the web isn’t like a library (with an index) or a book (structured through rhetorical narrative) (p. 77), but more like a walk on the beach where you may find random objects one after the other (p. 78). Cinema is the second cultural form that influences new media, games in particular. The moving camera is a convention borrowed from cinema, apt for navigating virtual worlds. Many games also borrow story forms from cinema. But games moved beyond cinematic conventions too. They break the rules of (natural) perspective and story, in search of forms that are more suitable for interaction. The graphical user interface (GUI) brings controls, menu structures and the desktop metaphor to new media. These seem to fit in, but there is friction as well. The controls offered by GUIs need to stick to their underlying metaphors (for usability) and simultaneously blend in with the story world of the cultural interface. While new media often show mixtures of print, cinema and graphical user interfaces, these mixtures may be rough. Often the underlying ideas of what the screen represents (a flat surface with information, a window into an immersive environment, a control center) differ so much that these forms cannot be wedded easily.

So print, cinema and GUIs are to new media what photography and theater were to cinema. They provide the “raw material” of cultural conventions that is available to new media makers. But they do not yet describe the other major influence on the language of new media: what new media creators tend to do with this material. In other words, Manovich needs to turn to the affordances of new media creation technology, or as he calls them: the operations. Cinematography, for example, makes creative use of different types of shots and editing techniques. Do these operations have new media equivalents? Manovich believes they are selection, teleaction and compositing. Teleaction, the ability to see and act at a distance, allows the camera to be everywhere and users to cooperate across the world, in massively multiplayer online games for instance, or in massive open online courses. Computers allow easy access to and reuse of older material, and both selection and compositing capitalize on this possibility. The rise of the DJ, cleverly choosing and combining existing materials to create new music, is an example of the power of selection and compositing as means to create new forms. The growing importance of special effects in movies is another.

New media creation technology also makes it possible to create new types of illusion, computer-generated images for example. Mimesis, the imitation of nature, has been an important goal of cinema and this remains so in computer simulation. For computer graphics scientists, for example, cinema is an important market and source of inspiration: “High quality means virtually indistinguishable from live action motion picture photography” (p. 191). In practice, creating a realistic immersive experience involves more than just photo-realism. It involves many forms of mimesis: touch, interaction (with virtual characters), moving about in virtual space, and 3D graphics. But mimesis isn’t at all one-dimensional. Rather, in cinema, the quest for realism progressed through a succession of ‘codes’ in which only some parts of the experience mimic real life and the viewer fills in the gaps. In computer graphics, ‘realism’ was first achieved by ‘deep perspective’ and later by ‘correct lighting and shading’. Considering the broad playing field for new mimetic codes in new media (for touch, interaction, movement and so on), we may expect a long period in which media makers try to set new mimetic frontiers.

After describing the conventions and the affordances of the new technology, the cultural interfaces, operations and illusions, Manovich turns to the emerging genres or forms. He focuses on two of these new forms: the database and navigable space. Databases are special because they have no beginning or end. Much more than any older form, new media objects allow the user random access to items in the piece. The user experiences new media through hyperlinks, browsing and searching rather than through the guided tours that traditional narrative forms offer. The second form is navigable space, which is the dominant form in many games and interactive stories. It is the first time space is a medium: space can now be stored, retrieved and transferred, and it can be used to tell a story. Today architecture touches media, and new media designers need to learn how to tell a story with space.

The Language of New Media is a rich book which offers a comprehensive theory of new media. It is an interesting idea to use cinema as a model for new media, and with this approach Manovich arrives at some marked insights into the language of new media. But I do not feel the book lives up to its goal of being a map of the field. Manovich tries to bring clarity to the field by thinking of new media in layers: cultural interfaces, operations, illusions and forms. But the distinction between these layers is not always clear and he fails to show how they influence each other. How did, for example, spatial navigation as a form emerge through the operations of selection and compositing on cultural interfaces? Manovich raises the question, but he gives no answer. Many other questions like this haunted me while I was reading the book. At the end of the day Manovich’s book is more a structured description of the status quo of new media in his time than a theory through which we can understand its development. And then there is the scope of the book…

When you compare (early) cinema and new media, you start inspecting the parts that are most like cinema. This way, Manovich misses a lot. As an interaction designer I can hardly agree with Manovich’s treatment of the human-computer interface (and its history). Interaction design in games draws much more from board games and free play than from desktop tools. So board games are important cultural interfaces that feed new media. Also, in contrast to film, new media have multiple contexts of use and are paid for through a greater diversity of business models. This must have an impact on the developing language of new media, and studying the history of cinema cannot tell you what that impact is.

Shortly after The Language of New Media came out, web 2.0 unfolded. This changed what most people see as the dominant forms of new media. As early as 2005, four years after The Language of New Media was first published, Mark Deuze considered “participation”, “remediation” and “bricolage” to be the principal ingredients of digital culture. Deuze’s description of, in a word, participatory remix culture would be part of any new book on the language of new media, but Manovich could not have foreseen this development when he wrote his book. Manovich knew that this might happen, as it had happened with cinema before. He writes: “It is tempting to extend this parallel a little further and speculate whether this new language is already drawing closer to acquiring its final and stable form, just as film language acquired its classical form during the 1910s. Or it may be that the 1990s are more like the 1890s, in the sense that the computer-media language of the future will be entirely different from the one used today”.

So did Manovich write his book 10 years too early? Probably not; rather, I think his scope was too wide for his analytical means. Cinema has never been the ‘meta-medium’ that the computer is today. Cinema has always been expensive to produce and it has always offered a special and exclusive experience to the audience. So the technology and the economics (business models) of cinema quickly converged, in turn allowing stable cinematic conventions to emerge. The meta-medium of the computer is cheaper and more diverse in the types of experiences it can offer, the contexts in which it is consumed and the business models available to support its productions. There is not one language of new media, there are many. And unlike with cinema, they are not likely to revolve around a common core any time soon. Manovich was right and observant when it came to games and digital cinema, but he may have underestimated the power of (linear) narrative, overestimated virtual reality, and missed forms like augmented reality and social media. Many new media forms, like immersive games and websites, have had a stable language for years. For these forms Manovich’s book provides at least a descriptive framework. But other forms, such as augmented reality, social media and embedded media, still have their proverbial 1890s to come.

Reading more:

In my post Cognitive Bias in the Global Information Subway, I discuss the language of search and its impact. In Collateral Damage of the Robots Race (on the Web) and Social News Needs a Nuanced ‘Like’, Quickly, I discuss the impact of artificial intelligence on the way web experiences are structured.

Earlier book reviews on this site include Reading Wikinomics and Reading Marshall McLuhan’s Understanding Media.

Here is a good Dutch summary of “The Language of New Media”.



