I wrote an essay in 2009 about the Internet of Things, before people were calling it “the Internet of Things.” When I re-read it this afternoon, in 2017, I noticed something rather queer. It wasn’t actually about the Internet of Things at all. It was actually a personal manifesto advocating Anarchism, and condemning techno-capitalist fascism.
In 2009, despite having barely turned 25 years old, I had already been working as a professional web developer for a little over a decade. (That arithmetic is correct, I assure you.) At the time, I had some embarrassingly naïve ideas about Silicon Valley, capitalism, and neoliberalism. I also had no idea that less than two years later, I’d be homeless and sleeping in Occupy encampments, and that I’d remain (mostly) happily houseless and jobless for the next six years, up to and including the time of this writing.
The story of my life during those two years is a story worth telling…someday. Today, though, I want to remind myself of who I was before. I was a different person when 2009 began in some very important ways. I was so different that by the time it ended I began referring to my prior experiences as “my past life,” and I’ve used the same turn of phrase ever since. But I was also not so different that, looking back on myself with older eyes, I can clearly see the seeds of my anti-capitalist convictions had already begun to germinate and root themselves somewhere inside me.
Among the many other things that I was in my past life, I was an author. I’ve always loved the art of the written word; the creativity I saw in written scripts, and the pleasure I derived from them, drew me to computer programming in the first place. That is its own story, as well, but the climax of that trajectory—at least by 2009—is that I was employed as a technical writer. I blogged on a freelance basis for an online magazine about Web development. I had already co-authored and published significant portions of my first technical book. And, in 2009, I had just completed co-authoring a second.
That second book was called, plainly enough, Advanced CSS, and was about the front-end Web development topic more formally known as Cascading Style Sheets. But that’s not interesting. At least, no more interesting than any other fleeting excitement over a given technical detail. What’s arguably most revealing about that book is the essay I contributed, which for all intents and purposes is the book’s opening.
My essay follows in its entirety:
User agents: our eyes and ears in cyberspace
A user agent is nothing more than some entity that acts on behalf of users themselves.[1] What this means is that it’s important to understand these users as well as their user agents. User agents are the tools we use to interact with the wealth of possibilities that exists on the Internet. They are like extensions of ourselves. Indeed, they are (increasingly literally) our eyes and ears in cyberspace.
Understanding users and their agents
Web developers are already familiar with many common user agents: web browsers! We’re even notorious for sometimes bemoaning the sheer number of them that already exist. Maybe we need to reexamine why we do that.
There are many different kinds of users out there, each with potentially radically different needs. Therefore, to understand why there are so many user agents in existence we need to understand what the needs of all these different users are. This isn’t merely a theoretical exercise, either. The fact is that figuring out a user’s needs helps us to present our content to that user in the best possible way.
Presenting content to users and, by extension, their user agents appropriately goes beyond the typical accessibility argument that asserts the importance of making your content available to everyone (though we’ll certainly be making that argument, too). The principles behind understanding a user’s needs are much more important than that.
You’ll recall that the Web poses two fundamental challenges. One challenge is that any given piece of content, a single document, needs to be presented in multiple ways. This is the problem that CSS was designed to solve. The other challenge is the inverse: many different kinds of content need to be made available, each kind requiring a similar presentation. This is what XML (and its own accompanying “style sheet” language, XSLT) was designed to solve. Therefore, combining the powerful capabilities of CSS and XML is the path we should take to understanding, technically, how to solve this problem and present content to users and their user agents.
Since a specific user agent is just a tool for a specific user, the form the user agent takes depends on what the needs of the user are. In formal use case semantics, these users are called actors, and we can describe their needs by determining the steps they must take to accomplish some goal. Similarly, in each use case, a certain tool or tools used to accomplish these goals defines what the user agent is in that particular scenario.[2]
A simple example of this is that when Joe goes online to read the latest technology news from Slashdot, he uses a web browser to do this. Joe (our actor) is the user, his web browser (whichever one he chooses to use) is the user agent, and reading the latest technology news is the goal. That’s a very traditional interaction, and in such a scenario we can make some pretty safe assumptions about how Joe, being a human and all, reads news.
Now let’s envision a more outlandish scenario to challenge our understanding of the principle. Joe needs to go shopping to refill his refrigerator and he prefers to buy the items he needs with the least amount of required driving due to rising gas prices. This is why he owns the (fictional) Frigerator2000, a network-capable refrigerator that keeps tabs on the inventory levels of nearby grocery stores and supermarkets and helps Joe plan his route. This helps him avoid driving to a store where he won’t be able to purchase the items he needs.
If this sounds too much like science fiction to you, think again. This is a different application of the same principle used by feed readers, only instead of aggregating news articles from web sites we’re aggregating inventory levels from grocery stores. All that would be required to make this a reality is an XML format for describing a store’s inventory levels, a bit of embedded software, a network interface card on a refrigerator, and some tech-savvy grocery stores to publish such content on the Internet.
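To make the idea concrete, here is a minimal sketch of what the refrigerator’s side of that exchange might look like. No such inventory standard exists; the feed format, element names, and SKUs below are all invented for illustration.

```python
import xml.etree.ElementTree as ET

# A hypothetical inventory feed a grocery store might publish.
# The format, element names, and SKUs are invented for illustration.
feed = """<?xml version="1.0"?>
<inventory store="Joe's Local Grocer">
  <item sku="milk-1l" quantity="12"/>
  <item sku="eggs-dozen" quantity="0"/>
</inventory>"""

root = ET.fromstring(feed)
in_stock = {item.get("sku"): int(item.get("quantity"))
            for item in root.iter("item")}

# The refrigerator only plans a trip to stores that stock everything
# on Joe's shopping list.
shopping_list = ["milk-1l", "eggs-dozen"]
worth_the_drive = all(in_stock.get(sku, 0) > 0 for sku in shopping_list)
```

Exactly like a feed reader polling RSS, the appliance would periodically fetch each nearby store’s feed and run a check like this before suggesting a route.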
In this scenario, however, our user agent is radically different from the traditional web browser. It’s a refrigerator! Of course, there aren’t (yet) any such user agents out crawling the Web today, but there are a lot of user agents that aren’t web browsers doing exactly that.
Search engines like Google, Yahoo!, and Ask.com are probably the most famous examples of users that aren’t people. These companies all have automated programs, called spiders, which “crawl” the Web indexing all the content they can find. Unlike humans and very much like our hypothetical refrigerator-based user agent, these spiders can’t look at content with their eyes or listen to audio with their ears, so their needs are very different from someone like Joe’s.
There are still other systems of various sorts that exist to let us interact with web sites and these, too, can be considered user agents. For example, many web sites provide an API that exposes some functionality as web services. Microsoft Word 2008 is an example of a desktop application that you can use to create blog posts in blogging software such as WordPress and MovableType because both of these blogging tools support the MetaWeblog API, an XML-RPC[3] specification. In this case, Microsoft Word can be considered a user agent.
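To see what that looks like on the wire, here is a sketch of the XML-RPC request body a desktop user agent like Word would POST when publishing via the MetaWeblog API’s `metaWeblog.newPost` method. The blog ID, credentials, and post content are placeholders, and Python’s standard `xmlrpc` module stands in for Word’s own implementation.

```python
import xmlrpc.client

# The post structure the MetaWeblog API expects; the title and body
# here are placeholder content.
post = {
    "title": "Hello from a desktop user agent",
    "description": "<p>Posted via the MetaWeblog API.</p>",
}

# Build (but don't send) the XML request body. A real client would
# POST this to the blog's XML-RPC endpoint over HTTP.
body = xmlrpc.client.dumps(
    ("blog-1", "joe", "secret", post, True),  # blogid, user, password, post, publish
    methodname="metaWeblog.newPost",
)
```

The resulting `body` is plain XML describing a method call, which is all XML-RPC is: XML’s data serialization put to work over HTTP.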
As mentioned earlier, the many incarnations of news readers that exist are another form of user agent. Many web browsers and email applications, such as Mozilla Thunderbird and Apple Mail, do this, too.[4] Feed readers provide a particularly interesting way to examine the concept of user agents because there are many popular feed reading web sites today, such as Bloglines.com and Google Reader. If Joe opens his web browser and logs into his account at Bloglines, then Joe’s web browser is the user agent and Joe is the user. However, when Joe reads the news feeds he’s subscribed to in Bloglines, the Bloglines server goes to fetch the RSS- or Atom-formatted feed from the sourced site. What this means is that from the point of view of the sourced site, Bloglines.com is the user, and the Bloglines server process is the user agent.
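That point of view is visible in the HTTP traffic itself: whatever does the fetching announces itself to the sourced site with a User-Agent header. A sketch of how a server-side feed fetcher might identify itself, with a made-up agent string and URL:

```python
import urllib.request

# From the sourced site's perspective, this server process is the user
# agent, so it should say who it is. The agent string and URL are made up.
req = urllib.request.Request(
    "http://example.com/feed.atom",
    headers={"User-Agent": "ExampleFeedReader/1.0 (+http://example.com/about)"},
)

# urllib stores header names capitalized ("User-agent"), hence this key.
agent = req.get_header("User-agent")
```

Nothing is fetched here; the point is that the same header a web browser sends on Joe’s behalf is sent by the feed-reading service on its own behalf.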
Coming to this realization means that, as developers, we can understand user agents as an abstraction for a particular actor’s goals as well as their capabilities. This is, of course, an intentionally vague definition because it’s technically impossible for you, as the developer, to predict the features or capabilities present in any particular user agent. This is a challenge we’ll be talking about a lot in the remainder of this book because it is one of the defining characteristics of the Web as a publishing medium.
Rather than this lack of clairvoyance being a problem, however, the constraint of not knowing who or what will be accessing our published content is actually a good thing. It turns out that well-designed markup is also markup that is blissfully ignorant of its user, because it is solely focused on describing itself. You might even call it narcissistic.
Why giving the user control is not giving up
Talking about self-describing markup is just another way of talking about semantic markup. In this paradigm, the content in the fetched document is strictly segregated from its ultimate presentation. Nevertheless, the content must eventually be presented to the user somehow. If information for how to do this isn’t provided by the markup, then where is it, and who decides what it is?
At first you’ll no doubt be tempted to say that this information is in the document’s style sheet and that it is the document’s developer who decides what that is. As you’ll examine in detail in the next chapter, this answer is only mostly correct. In every case, it is ultimately the user agent that determines what styles (in which style sheets) get applied to the markup it fetches. Furthermore, many user agents (especially modern web browsers) allow the users themselves to further modify the style rules that get applied to content. In the end, you can only influence—not control—the final presentation.
Though surprising to some, this model actually makes perfect sense. Allowing the users ultimate control of the content’s presentation helps to ensure that you meet every possible need of each user. By using CSS, content authors, publishers, and developers—that is, you—can provide author style sheets that easily accommodate, say, 80 percent of the needs of 90 percent of the users. Even so, edge cases you may never become aware of will escape you, no matter how hard you try to accommodate everyone’s every need.[5] Moreover, even with unlimited resources, you may not know how best to improve the situation for a given user. Given this, who better to determine the presentation of an XML document that needs to be presented in some very specific way than the users with that very specific need themselves?
A common real-life example of this situation might occur if Joe were colorblind. If he were, and he wanted to visit a news site where the links in the article pullouts were too similar in color to the pullout’s background, he might not realize that those elements are actually links. Thankfully, because Joe’s browser allows him to apply his own user style sheet to the site, he can change the color of these links to something he can see more easily. If CSS were not designed with this in mind, it would be impossible for Joe to personalize the presentation of this news site so that it would be optimal for him.
To many designers coming from traditional industries such as print design, the fact that users can change the presentation of their content is an alarming concept. Nevertheless, this isn’t just the way the Web was made to work; this is the only way it could have worked. Philosophically, the Web is a technology that puts control into the hands of users. Therefore, our charge as web designers is to judge different people’s needs to be of equal importance, and we can’t do this if we treat every user exactly the same way.[6]
1. This is purposefully a broad definition because we’re not just talking about web pages here, but rather all kinds of technology. The principles are universal. There are, however, more exacting definitions available. For instance, the W3C begins the HTML 4 specification with some formal definitions, including what a “user agent” is. See http://www.w3.org/TR/REC-html40/conform.html.
2. In real use cases, technical jargon and specific tools like a web browser are omitted because such use cases are used to define a system’s requirements, not its implementation. Nevertheless, the notion of an actor and an actor’s goals is helpful in understanding the mysterious “user” and this user’s software.
3. XML-RPC is a term referring to the use of XML files describing method calls and data transmitted over HTTP, typically used by automated systems. It is thus a great example of a technology that takes advantage of XML’s data serialization capabilities, and is often thought of as a precursor to today’s Ajax techniques.
4. It was in fact the much older email technology from which the term user agent originated; an email client program is more technically called a mail user agent (MUA).
5. As it happens, this is the same argument open source software proponents make about why such open source software often succeeds in meeting the needs of more users than closed source, proprietary systems controlled solely by a single company with (by definition) relatively limited resources.
6. This philosophy is embodied in the formal study of ethics, which is a compelling topic for us as CSS developers, considering the vastness of the implications we describe here.