Reading Kim Zetter’s “Countdown to Zero Day”
You might not find it shocking that there is a digital weapons race going on between the intelligence agencies of nation-states like the US. Either you knew, or you expected it, or you wondered why you should care about this stuff at all. In all three cases I think you should read Kim Zetter's Countdown to Zero Day. It is the trademark of a good nonfiction book that it takes something which is unjustly on the fringe of public attention and puts it in the spotlight. Zetter's page-turner does this for cyber warfare and, almost in passing, she unpacks many questions we should have been asking for a long time. Recognizing the cliché, Zetter calls Stuxnet, the computer program that takes the lead role in her book, a game changer. It is my best hope her book will be just that for the public debate about digital warfare and the cyber security of our physical world. Contains spoilers.
Hackers call a previously undiscovered security vulnerability exploited by a virus a 'zero day'. A zero-day exploit goes undetected by virus scanners, but zero days are rare: it takes much time and effort to develop them, or a lot of money to buy them on the black market. Most creators of malware do not bother to go through this effort; they simply hunt for victims who have not yet updated their virus scanners. However, as it turns out, there are new players in cyberspace who have both the reasons and the means to create them.
The book revolves around the discovery of Stuxnet, a sophisticated worm that was picked up by antivirus companies in 2010. It soon turned out that Stuxnet exploited not just one zero day, which would have been rare already, but no fewer than four. In the book we get a peek over the shoulders of the Symantec team that studied Stuxnet for several months. They slowly unmasked the mind-boggling virus as a subtle sabotage tool, aimed at slowing down Iran's uranium enrichment program. Worse, it turned out to be part of a suite of digital weapons created by the US and Israeli governments. Stuxnet formed the opening shot of a completely new type of warfare, one that starts in the digital realm but is perfectly capable of doing a lot of damage in the real world. It was, to paraphrase President Obama, a program of many firsts; and of many wake-up calls.
How can a virus cause physical damage to an industrial uranium enrichment plant? Well, such plants are controlled by computers. The computers that control power plants and production facilities are specialised industrial systems called PLCs (programmable logic controllers). Nevertheless, Stuxnet was able to feed those PLCs malicious code. Such code can disrupt the measurement and control loops managing, for example, gas pressure and temperature in a chemical factory, thus causing serious industrial accidents, blowing up machines or destroying a factory in some other way. For Stuxnet's disruption of Iran's uranium enrichment program the sabotage had to be much more subtle. Stuxnet changed the rotation frequency of the centrifuges used to enrich uranium gas, disrupting the enrichment process and increasing the centrifuges' chances of breaking down.
Stuxnet's proof of principle of digital-physical sabotage should make us wonder how safe our own (nuclear) power plants, chemical factories and other partly computerized systems are. The sobering answer is: very unsafe. Subtle sabotage like Stuxnet's does require detailed knowledge of both the programming language of the specific PLCs and the real-world processes they control. But a brute-force attack is much, much easier, and PLC security has not been a priority for the companies that produce them, as PLC viruses are new to the world. In other words: the safety of much of our heavy industry and public transport relies on control processes that can easily be disturbed.
But it doesn't stop at industrial sites. It doesn't take much imagination to see where this is heading as the internet of things becomes reality. One of the side stories in Zetter's book, for example, deals with smart electricity meters. A virus can shut them down and disable the possibility of remote updates. The effect could be a city out of power, which can only be remedied by replacing smart meters in a door-to-door program. As more and more of our equipment gets digital controls and network capabilities, it becomes vulnerable to cyber attacks. Anything smart can be hacked. It is likely a matter of time until we see hacked electricity meters, traffic lights, TV sets, cars, coffee makers, toothbrushes and lightbulbs. A foreign government will not likely be the one creating those attacks, but others can and will.
What should we think of hacking governments? Zetter's book effectively debunks the myth that hacking is the domain of Russian criminals seeking a quick buck. The sophistication of Stuxnet showed digital weaponry is a 'power game', for which only governments currently have the knowledge and resources. Digital weapons have a risky vulnerability though: they can be copied. It didn't take long before other malware started using Stuxnet's zero days, and the same can be done with the sabotage code. The Stuxnet virus was difficult to create, but compared with a missile launcher or a nuclear rocket, it is much easier to copy, adapt and remix a digital weapon into a different one. Besides, anyone infected has access to the code. So while the creation of a novel (type of) digital weapon requires much specialized knowledge, time and effort, making a rip-off is easy, at least in comparison.
So Zetter probes the ethics of digital espionage, sabotage and warfare and asks what ends might justify the means. What justifies undermining the trust of the customers of Microsoft, Siemens and antivirus vendors? Or releasing something that could boomerang back on the state that created it? Or opening an arms race in digital-physical warfare, considering the most networked countries are the ones with the most to fear and lose in case of total cyberwar? Zetter presents a balanced view of these questions and leaves them open for debate.
My answer would be a simple no, though. The risk of others developing a weapon is often used to justify creating something much worse; the atomic bomb is a vivid example. If cyber weapons can be used by any skilled hacker, the drawbacks of a digital arms race will most certainly outweigh its benefits. In this sense it is telling and ironic that the first digital weapon was created to provide military backing to the nonproliferation treaty.
If this wasn't clear yet: I certainly recommend reading Countdown to Zero Day.
I wrote about a more innocent digital arms race in my post Collateral Damage of the Robots Race on the Web. Other book reviews include those of The Information, Simians, Cyborgs and Women and Metaphors We Live By.
Filed under: (re)thinking media, discussion, review | Leave a Comment
Tags: Book Review, Countdown to Zero Day, Cyberwar, Kim Zetter, Malware, nonproliferation, Online Security, Physical Digital Integration, PLC, Sabotage, Stuxnet
Information seems such a central word today that it is hard to imagine it has not always been a subject of scientific inquiry. Still, it was not until the 1950s that a scientific theory of information developed, although it would soon penetrate many other sciences, including such fundamental ones as biology and particle physics. As Donna Haraway remarked, until the eighties information was the dominant metaphor for most sciences, only to be replaced by the network since then. In The Information Gleick traces the science of information from its birth through its golden years, in a compelling, exciting popular science text.
The book has a more or less chronological structure, featuring chapters about tribal long-distance communication with 'talking drums', writing, automatic calculation, telegraphy, telephony, entropy, computing, DNA, random numbers, quantum computers and the Internet. The first chapters in particular highlight the enduring interplay between technology and intellectual progress that is so characteristic of information theory. Writing, telegraphy, telephony and computing are all technologies, and they inspired human thinking by offering metaphors for understanding the world and by posing challenges which needed a solution based on theory. In a more cursory way, Gleick also talks about the impact of these technologies on society. Gleick's real interest is theory though, so societal changes are more a context from which he explains intellectual progress.
The Information is in many ways a coming-of-age story. In its childhood, information theory may have needed information technologies to support and sustain its development; nowadays information theory can stand on its own feet and in turn support many other fields. As the book progresses, its topics become more and more fundamental and abstract. Gleick discusses information as the opposite of entropy in physics, he discusses several incarnations of Gödel's proof that formal systems like mathematics must be incomplete, and their application to the calculation of random numbers. And he starts discussing the utility of information theory in other fields: first and foremost genetics, but also quantum mechanics, culminating in a vivid discussion of the nascent field of quantum computing.
In the last two chapters Gleick returns to everyday reality by discussing the Internet and Wikipedia, but there is something offbeat about this part of the book. Although the chapter about Wikipedia in particular is well researched and exciting to read, the connection with information theory is lost here. I am not sure if Gleick intended it this way, but the hero's journey of information theory ends in quantum computing. These wrap-up chapters show the birth of a new hero: network theory. The 21st century is about networks as much as the second half of the 20th century, the era of telecommunication and computers, was all about code.
Although the golden age of information theory may be behind us, I consider The Information a must-read for anyone working in a field related to information, communication or computing, which is many of us. And if network theory becomes mature enough, soon enough, for a treatment like this, I surely hope Gleick picks it up.
My review of Donna Haraway's Simians, Cyborgs and Women is a good entry into her ideas about the (mis)use of information theory in biology.
Filed under: (re)thinking media | Leave a Comment
This is the last post in a short series examining the benefits and drawbacks of thinking about the world in terms of 'networks'. Earlier on, I gave an introduction to mathematical network theory, I discussed the network as a way of explaining the world and I discussed social media as a model for social networks. In this post I focus on the 'flat' and 'democratic' image that networks carry.
Why is it that people tend to regard networks as nonhierarchical? Mathematical network theory is perfectly suited to describing hierarchies as networks, and even though hierarchy is not a defining property of the network, in most social networks a handful of people have much more influence or power than the rest. One reason networks are seen as flat may be the other big idea that is inspired, in part, by the Internet: self-organization.
The idea of self-organization predates the Internet, but it has gained much traction lately. There turn out to be phenomena in the world that have an amazing complexity considering they emerged without a master plan or a leader giving orders. Bird flocks, termite hills and (most likely) the human brain are well-known examples. If these can come out of networks of animals each just exhibiting their own behavioral patterns, or just from networks of cells, why wouldn't we self-organize as human beings? Look at how well democracy works. Look at Wikipedia. All we have to do is create (better) networks, and order and the common good will magically emerge.
The Achilles' heel of this reasoning is of course that the examples of self-organization used as a source of inspiration are, in fact, sophisticated systems which evolved over many years. Yes, fairly simple, properly networked behaviors can create complex phenomena benefiting the species exhibiting them. But this only works if these are carefully tuned networks of simple behavior. Self-organization may be occurring in all networks, but the result will more often than not be uncertain rather than favorable for all. Take traffic jams: these are a form of self-organization too.
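To make the traffic-jam point concrete, here is a minimal sketch of my own, loosely in the spirit of the classic Nagel-Schreckenberg traffic model (the parameters are made up for illustration, not taken from any of the books discussed here). Cars on a ring road each follow three simple local rules, yet stop-and-go jams emerge that nobody ordered.

```python
import random

random.seed(1)

# Toy parameters: a ring road of 100 cells, 35 cars, speed limit 5.
ROAD, CARS, VMAX, P_SLOW, STEPS = 100, 35, 5, 0.3, 200

pos = sorted(random.sample(range(ROAD), CARS))  # distinct starting cells
vel = [0] * CARS

for _ in range(STEPS):
    # Rules 1 & 2: speed up when possible, but never drive further than
    # the gap to the car in front (% ROAD handles the wrap-around).
    for i in range(CARS):
        gap = (pos[(i + 1) % CARS] - pos[i] - 1) % ROAD
        vel[i] = min(vel[i] + 1, VMAX, gap)
        # Rule 3: random hesitation, the ingredient that seeds the jams.
        if vel[i] > 0 and random.random() < P_SLOW:
            vel[i] -= 1
    # Move all cars simultaneously.
    pos = [(p + v) % ROAD for p, v in zip(pos, vel)]

stopped = sum(1 for v in vel if v == 0)
print(f"{stopped} of {CARS} cars stand still in jams nobody planned")
```

Each car only looks at the car directly in front of it, yet at this traffic density waves of standing-still cars form and persist: self-organization with a result nobody wanted.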
What do we know about the effects of increasing connectivity in a network? Two opposing dynamics appear to be at play. First, in networks with high connectivity, the Internet being the perfect example, there is a strong winner-takes-all dynamic. Before the Internet we would find a bookstore on every corner, but in the online world only a handful of players like Amazon can survive. In the less connected world every bookshop could keep its own clientele in the neighborhood, because for most consumers the costs of switching to a bookshop farther away did not outweigh the benefits. Increasing connectivity lowers the switching costs, increasing the action radius of the shops and thereby the competition. As a result only the bigger shops survive: Google dominates search, Amazon dominates online retail. Similar things happen with increased connectivity in the real world. Better roads cause amenities to disappear from villages, because of increased connectivity to the city. The winner-takes-all dynamic thus leads to centralization in networks, rather than the decentralization many proponents of self-organization like to believe in.
There is, however, an opposing dynamic, which Chris Anderson described in his book The Long Tail. The winner-takes-all dynamic creates a high head (a few players take all the traffic), but Anderson drew our attention to what happens at the other end of the curve. Let's look at those who sell rare goods in small quantities. Increased connectivity allows shops that serve such a niche market that they could not sell enough in the less connected world to gain the audience they need to be sustainable. Artists may be able to make a living out of selling their artwork thanks to the Internet, because they can reach a bigger potential audience. Grocery shops may disappear from villages, but an expensive restaurant for which people are willing to travel to the village may now survive.
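Both dynamics can be illustrated with a small toy model of my own (all numbers and shop names are made up, not taken from Anderson's book): each consumer picks the shop with the highest appeal minus a switching cost times distance, and lowering that cost stands in for increasing connectivity.

```python
import random

random.seed(42)

# 2000 consumers spread along a line, 15% of them with a niche taste.
N = 2000
consumers = [(random.random(), 'niche' if random.random() < 0.15 else 'mainstream')
             for _ in range(N)]

shops = (
    # ten small local generalists, weak at serving niche tastes
    [{'name': f'local-{i}', 'pos': i / 10 + 0.05,
      'appeal': {'mainstream': 1.0, 'niche': 0.5}} for i in range(10)]
    # one big player with broad stock, and one pure niche specialist
    + [{'name': 'big-player', 'pos': 0.5,
        'appeal': {'mainstream': 1.4, 'niche': 0.7}},
       {'name': 'specialist', 'pos': 0.5,
        'appeal': {'mainstream': 0.0, 'niche': 1.3}}]
)

def shares(cost):
    """Market shares when switching costs `cost` per unit of distance."""
    counts = {s['name']: 0 for s in shops}
    for p, taste in consumers:
        best = max(shops, key=lambda s: s['appeal'][taste] - cost * abs(p - s['pos']))
        counts[best['name']] += 1
    return counts

for cost in (10.0, 0.1):  # poorly vs highly connected world
    c = shares(cost)
    locals_total = sum(v for k, v in c.items() if k.startswith('local'))
    print(f"cost={cost}: locals={locals_total}  "
          f"big-player={c['big-player']}  specialist={c['specialist']}")
```

With high switching costs the local shops keep most customers; once the costs drop, the big player takes the whole mainstream market (winner-takes-all) while the specialist, unviable before, captures every niche customer (long tail). Both effects come from the same parameter change.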
The winner-takes-all and the long-tail dynamic are thus two sides of the same coin: increased connectivity. In my post Thinking Internet and Thinking I included the following quote from Brian Eno.
"I notice that everything the Net displaces, reappears somewhere else in a modified form. For example, musicians used to tour to promote their records, but since records stopped making much money due to illegal downloads, they now make records to promote their tours. Bookstores with staff who know about books, and record stores with staff who know about music, are becoming more common." (Brian Eno)
This comment may very well be an illustration of the interplay between the two connectivity dynamics. Increased network connectivity leads to centralization and increased power when we look at who controls the commodities. Well-connected networks are typically more hierarchical than less connected networks. But increased connectivity leads to decentralization and diversification when we look at the niches instead.
Interestingly, the biggest players on the net, Amazon, Google and Wikipedia, have found ways to make use of both dynamics. They managed to become the first stop for internet users, partly by aggregating much of what happens in the niches. It is fine to take those players as a source of inspiration for the potential benefits of networks. But they do not show that networks are flat or democratic (quite the contrary), and they are not examples of the wonders of self-organization. For that we had better stick to termites.
A nice book about self-organization is Turtles, Termites, and Traffic Jams by Mitchel Resnick. An advocate of human self-organization is James Surowiecki, whose book The Wisdom of Crowds became a bestseller.
This post is part of a series. The first post deals with mathematical network theory, the second with networks as an explanation of everything and the third with social media as a proxy for understanding social networks.
The Brian Eno quote in this blog was taken from my post "Thinking Internet and Thinking", which deals with great minds' answers to the question of how the internet affects our thinking. I discussed several applications of network theory to marketing in my posts "Modeling the connected customer" and "The Traveling Influence Problem".
Filed under: (re)thinking media | Leave a Comment
Tags: Amazon, Brian Eno, Chris Anderson, Google, Hierarchy, long tail, Network Centrality, Network Theory, Networks, Self-Organisation, Wikipedia, Wisdom of Crowds
A review of the book The Timeless Way of Building by Christopher Alexander
In The Timeless Way of Building (first published in 1979), Christopher Alexander lays out his theory of the use of pattern languages in architecture. The idea of pattern languages has been influential not only in architecture but also in other fields such as software architecture, education, and interaction design. I found this reason enough to revisit the writings that started it all.
I would not be surprised if many who started reading The Timeless Way for similar reasons as I did have put it away after reading just a few pages. At first reading the book comes across as an almost religious text. Alexander introduces "the quality that cannot be named" as the highest ideal in building, which can be reached through "a process which brings order out of nothing but ourselves (…) if we will only let it". The core of the book is divided into three parts: the quality, the gate, the way. It is full of little summaries, often in the form of rules or maxims. These style choices are not to my taste, but luckily, in between all this idealistic chatter, Alexander does unfold an interesting and quite practical theory of design, which is worth considering.
In the first part of the book, Alexander tries to define the quality that cannot be named. He observes that some places are nicer to be in than others and tries to define a single quality which sets these places apart. The quality can be described as a freedom from inner contradictions; as the feeling of being whole, or alive; as a kind of freedom; as comfortable, egoless and eternal. Some places have this quality, while others lack it. The quality can be recognized from patterns of events that keep happening in a place, which give the place its unique character. These patterns of events, in turn, are interlocked with patterns in space which have co-evolved with the activities. These observations lead Alexander to two interrelated quests. First he tries to define a process of building that allows for this co-evolution, and second he wants to develop a way to capture the best practices in building that set such living places apart.
In the second part of the book, which I found the most interesting, Alexander sets out to explain the idea of a pattern language, which forms the core of his theory. According to Alexander, people have always used pattern languages implicitly in their building practice, but modern architecture has lost touch with this way of building, hence his attempt to make it explicit. Patterns are a particular way to capture the essence of a good building practice. They describe a relationship between elements in space; one example is the rule of thumb that a living room should have windows on two sides. Patterns are not descriptions of concrete buildings, but abstractions of a building practice, focusing on those parts that are invariantly present in all buildings with a certain quality. Moreover, a pattern solves a problem, or more specifically, it relieves a tension between opposing forces. Patterns are culturally dependent, although some are more universally applied than others.
A pattern language is a network of patterns. This network has a partial hierarchy: a garden growing wild can have tree places, which could consist of fruit trees, and so on. Different hierarchical paths are possible through the network, so one might arrive at using fruit trees via different routes. While the patterns form the words of the design language, the network defines the grammar, or at least the collocations of patterns. Using the pattern language is subsequently a process of differentiating space: a designer starts with the first pattern and then works down the network, slowly detailing the space until the design is complete. Although this process is quite straightforward, and perhaps easy to learn, it gives much creative freedom, as the designer can choose patterns and shape them according to the needs of the whole.
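As a rough sketch of what such a top-down differentiation might look like (my own illustrative encoding, not Alexander's notation; the pattern names extend the garden example above with a made-up 'garden wall'), the network below is a partial hierarchy in which one pattern is reachable through two different paths, and 'designing' is a walk that applies each chosen pattern once.

```python
# The pattern network: each pattern points to the patterns that refine it.
# 'fruit trees' is reachable along two different paths, which is what makes
# this a partial hierarchy rather than a strict tree.
patterns = {
    'garden growing wild': ['tree places', 'garden wall'],
    'tree places':         ['fruit trees'],
    'garden wall':         ['fruit trees'],
    'fruit trees':         [],
}

def differentiate(root, choose):
    """Walk the network top-down, applying each chosen pattern once."""
    design, frontier, seen = [], [root], {root}
    while frontier:
        pattern = frontier.pop(0)   # take the next pattern to detail
        design.append(pattern)
        for refinement in patterns[pattern]:
            if refinement not in seen and choose(refinement):
                seen.add(refinement)
                frontier.append(refinement)
    return design

# Accept every refinement here; a real designer would choose selectively.
design = differentiate('garden growing wild', choose=lambda p: True)
print(design)
```

The `choose` hook is where the creative freedom sits: the grammar constrains which patterns can follow which, but the designer decides which refinements to apply and how to shape them.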
In the last part of the book Alexander discusses how a pattern language can be used in real projects. Working from a shared pattern language, ordinary people can design buildings and neighborhoods. Alexander discusses, for example, the case of a clinic site designed in a co-creative process by the clinic's later users. This part is illustrative of the workings of a pattern language, but it is also the part where Alexander returns to his idealistic and egalitarian agenda. He castigates modern architecture and argues building should be in the hands of the people, not of architects. Buildings need not be drawn on paper, but can be built from marks on the ground, using a pattern language. Although there is something to these ideas, I guess a more modest tone would have been appropriate. The practices Alexander preaches here are so far removed from modern practice that they need stronger arguments than he provides in the book.
There are several strong ideas in The Timeless Way. Alexander emphasizes utility, or use, over aesthetics as the main concern in architecture. The idea of patterns as a way to consolidate best practices is strong, and his celebration of the tacit knowledge in craftsmanship and culture may be a dearly needed voice in modern society and education. It is less clear to what extent Alexander's program of explicating patterns really fits this idea, though. Alexander explicitly positions the practice of explicating patterns as an intermediate step. He believes patterns originate from the building and living culture of a community, and explicating them is merely an antidote to the perils of the modern building industry. Positioning patterns as an intermediate step, however, raises the question of how a transition to the ideal state ought to occur. This question is never answered in the book.
The idea of a pattern language as a design language is certainly strong. Language forms a strong metaphor for understanding what design is. The pattern language appears accessible as well, as it turns out to be a useful means in the participatory design project described in the last part of the book (Alexander shares ideas, ideals and practices with the Scandinavian school of participatory design, which emerged in the same period). On the other hand, composing a language out of a pattern network alone seems a poor solution. It seems to me a full-fledged design language needs to integrate more types of knowledge than explicated best practices and principles.
A more general critique is Alexander's tendency to present the pattern language as a solution to all problems. The book, already riddled with references to the Tao and pursuing the quality that cannot be named as the single ideal to strive for, feels religious enough without offering the pattern language as salvation for all possible problems in building practice. More than as a theorist, Christopher Alexander speaks to us as a believer. If you can handle this, you'll find in him a fine design theorist too, but I reckon this will take a handful of blasphemy on your side.
Earlier book reviews on this site include those of Lakoff and Johnson's "Metaphors We Live By", Donna Haraway's "Simians, Cyborgs and Women" and the Edge booklet "Is the Internet Changing the Way You Think?"
Filed under: design education design, review | Leave a Comment
Tags: Book Review, Christopher Alexander, Design Language, Design Patterns, Design Theory, Pattern Language, Timeless Way of Building