Category: Crosspost

Posts which are to be cross-posted to an additional source.

My 2009 essay kinda-sorta about an Anarchist “Internet of Things”

I wrote an essay in 2009 about the Internet of Things, before people were calling it “the Internet of Things.” When I re-read it this afternoon, in 2017, I noticed something rather queer. It wasn’t about the Internet of Things at all. It was a personal manifesto advocating Anarchism and condemning techno-capitalist fascism.

Yes, really.

In 2009, despite having barely turned 25 years old, I had already been working as a professional web developer for a little over a decade. (That arithmetic is correct, I assure you.) At the time, I had some embarrassingly naïve ideas about Silicon Valley, capitalism, and neoliberalism. I also had no idea that less than two years later, I’d be homeless and sleeping in Occupy encampments, and that I’d remain (mostly) happily houseless and jobless for the next six years, up to and including the time of this writing.

The story of my life during those two years is a story worth telling…someday. Today, though, I want to remind myself of who I was before. I was a different person when 2009 began in some very important ways. I was so different that by the time it ended I began referring to my prior experiences as “my past life,” and I’ve used the same turn of phrase ever since. But I was also not so different that, looking back on myself with older eyes, I can clearly see the seeds of my anti-capitalist convictions had already begun to germinate and root themselves somewhere inside me.

Among the many other things that I was in my past life, I was an author. I’ve always loved the art of the written word. The creativity I saw in, and the pleasure I derived from, written scripts is what drew me to computer programming. That is its own story, as well, but the climax of that trajectory—at least by 2009—is that I was employed as a technical writer. I blogged on a freelance basis for an online magazine about Web development. I had already co-authored and published significant portions of my first technical book. And, in 2009, I had just completed co-authoring a second.

That second book was called, plainly enough, Advanced CSS, and was about the front-end Web development topic more formally known as Cascading Style Sheets. But that’s not interesting. At least, no more interesting than any other fleeting excitement over a given technical detail. What’s arguably most revealing about that book is the essay I contributed, which for all intents and purposes is the book’s opening.

My essay follows in its entirety:

User agents: our eyes and ears in cyberspace

A user agent is nothing more than some entity that acts on behalf of users themselves.1 What this means is that it’s important to understand these users as well as their user agents. User agents are the tools we use to interact with the wealth of possibilities that exists on the Internet. They are like extensions of ourselves. Indeed, they are (increasingly literally) our eyes and ears in cyberspace.

Understanding users and their agents

Web developers are already familiar with many common user agents: web browsers! We’re even notorious for sometimes bemoaning the sheer number of them that already exist. Maybe we need to reexamine why we do that.

There are many different kinds of users out there, each with potentially radically different needs. Therefore, to understand why there are so many user agents in existence we need to understand what the needs of all these different users are. This isn’t merely a theoretical exercise, either. The fact is that figuring out a user’s needs helps us to present our content to that user in the best possible way.

Presenting content to users and, by extension, their user agents appropriately goes beyond the typical accessibility argument that asserts the importance of making your content available to everyone (though we’ll certainly be making that argument, too). The principles behind understanding a user’s needs are much more important than that.

You’ll recall that the Web poses two fundamental challenges. One challenge is that any given piece of content, a single document, needs to be presented in multiple ways. This is the problem that CSS was designed to solve. The other challenge is the inverse: many different kinds of content need to be made available, each kind requiring a similar presentation. This is what XML (and its own accompanying “style sheet” language, XSLT) was designed to solve. Therefore, combining the powerful capabilities of CSS and XML is the path we should take to understanding, technically, how to solve this problem and present content to users and their user agents.

Since a specific user agent is just a tool for a specific user, the form the user agent takes depends on what the needs of the user are. In formal use case semantics, these users are called actors, and we can describe their needs by determining the steps they must take to accomplish some goal. Similarly, in each use case, a certain tool or tools used to accomplish these goals defines what the user agent is in that particular scenario.2

A simple example of this is that when Joe goes online to read the latest technology news from Slashdot, he uses a web browser to do this. Joe (our actor) is the user, his web browser (whichever one he chooses to use) is the user agent, and reading the latest technology news is the goal. That’s a very traditional interaction, and in such a scenario we can make some pretty safe assumptions about how Joe, being a human and all, reads news.

Now let’s envision a more outlandish scenario to challenge our understanding of the principle. Joe needs to go shopping to refill his refrigerator and he prefers to buy the items he needs with the least amount of required driving due to rising gas prices. This is why he owns the (fictional) Frigerator2000, a network-capable refrigerator that keeps tabs on the inventory levels of nearby grocery stores and supermarkets and helps Joe plan his route. This helps him avoid driving to a store where he won’t be able to purchase the items he needs.

If this sounds too much like science fiction to you, think again. This is a different application of the same principle used by feed readers, only instead of aggregating news articles from web sites we’re aggregating inventory levels from grocery stores. All that would be required to make this a reality is an XML format for describing a store’s inventory levels, a bit of embedded software, a network interface card on a refrigerator, and some tech-savvy grocery stores to publish such content on the Internet.

In this scenario, however, our user agent is radically different from the traditional web browser. It’s a refrigerator! Of course, there aren’t (yet) any such user agents out crawling the Web today, but there are a lot of user agents that aren’t web browsers doing exactly that.

Search engines like Google, Yahoo!, and Ask.com are probably the most famous examples of users that aren’t people. These companies all have automated programs, called spiders, which “crawl” the Web indexing all the content they can find. Unlike humans and very much like our hypothetical refrigerator-based user agent, these spiders can’t look at content with their eyes or listen to audio with their ears, so their needs are very different from someone like Joe’s.

There are still other systems of various sorts that exist to let us interact with web sites and these, too, can be considered user agents. For example, many web sites provide an API that exposes some functionality as web services. Microsoft Word 2008 is an example of a desktop application that you can use to create blog posts in blogging software such as WordPress and MovableType because both of these blogging tools support the MetaWeblog API, an XML-RPC3 specification. In this case, Microsoft Word can be considered a user agent.

As mentioned earlier, the many incarnations of news readers that exist are another form of user agent. Many web browsers and email applications, such as Mozilla Thunderbird and Apple Mail, do this, too.4 Feed readers provide a particularly interesting way to examine the concept of user agents because there are many popular feed reading web sites today, such as Bloglines.com and Google Reader. If Joe opens his web browser and logs into his account at Bloglines, then Joe’s web browser is the user agent and Joe is the user. However, when Joe reads the news feeds he’s subscribed to in Bloglines, the Bloglines server goes to fetch the RSS- or Atom-formatted feed from the sourced site. What this means is that from the point of view of the sourced site, Bloglines.com is the user, and the Bloglines server process is the user agent.

Coming to this realization means that, as developers, we can understand user agents as an abstraction for a particular actor’s goals as well as their capabilities. This is, of course, an intentionally vague definition because it’s technically impossible for you, as the developer, to predict the features or capabilities present in any particular user agent. This is a challenge we’ll be talking about a lot in the remainder of this book because it is one of the defining characteristics of the Web as a publishing medium.

Rather than this lack of clairvoyance being a problem, however, the constraint of not knowing who or what will be accessing our published content is actually a good thing. It turns out that well-designed markup is also markup that is blissfully ignorant of its user, because it is solely focused on describing itself. You might even call it narcissistic.

Why giving the user control is not giving up

Talking about self-describing markup is just another way of talking about semantic markup. In this paradigm, the content in the fetched document is strictly segregated from its ultimate presentation. Nevertheless, the content must eventually be presented to the user somehow. If information for how to do this isn’t provided by the markup, then where is it, and who decides what it is?

At first you’ll no doubt be tempted to say that this information is in the document’s style sheet and that it is the document’s developer who decides what that is. As you’ll examine in detail in the next chapter, this answer is only mostly correct. In every case, it is ultimately the user agent that determines what styles (in which style sheets) get applied to the markup it fetches. Furthermore, many user agents (especially modern web browsers) allow the users themselves to further modify the style rules that get applied to content. In the end, you can only influence—not control—the final presentation.

Though surprising to some, this model actually makes perfect sense. Allowing the users ultimate control of the content’s presentation helps to ensure that you meet every possible need of each user. By using CSS, content authors, publishers, and developers—that is, you—can provide author style sheets that easily accommodate, say, 80 percent of the needs of 90 percent of the users. Even in the most optimistic scenario, edge cases that you may not ever be aware of will still escape you no matter how hard you try to accommodate everyone’s every need.5 Moreover, even if you had those kinds of unlimited resources, you may not know how best to improve the situation for that user. Given this, who better to determine the presentation of a given XML document that needs to be presented in some very specific way than the users with that very specific need themselves?

A common real-life example of this situation might occur if Joe were colorblind. If he were, and he wanted to visit some news site where the links in the article pullouts were too similar a color to the pullout’s background, he might not realize that those elements are actually links. Thankfully, because Joe’s browser allows him to apply his own user style sheet to a web site, he can change the color of these links to something that he can see more easily. If CSS were not designed with this in mind, it would be impossible for Joe to personalize the presentation of this news site so that it would be optimal for him.

To many designers coming from traditional industries such as print design, the fact that users can change the presentation of their content is an alarming concept. Nevertheless, this isn’t just the way the Web was made to work; this is the only way it could have worked. Philosophically, the Web is a technology that puts control into the hands of users. Therefore, our charge as web designers is to judge different people’s needs to be of equal importance, and we can’t do this if we treat every user exactly the same way.6

  1. This is purposefully a broad definition because we’re not just talking about web pages here, but rather all kinds of technology. The principles are universal. There are, however, more exacting definitions available. For instance, the W3C begins the HTML 4 specification with some formal definitions, including what a “user agent” is. See http://www.w3.org/TR/REC-html40/conform.html.
  2. In real use cases, technical jargon and specific tools like a web browser are omitted because such use cases are used to define a system’s requirements, not its implementation. Nevertheless, the notion of an actor and an actor’s goals is helpful in understanding the mysterious “user” and this user’s software.
  3. XML-RPC is a term referring to the use of XML files describing method calls and data transmitted over HTTP, typically used by automated systems. It is thus a great example of a technology that takes advantage of XML’s data serialization capabilities, and is often thought of as a precursor to today’s Ajax techniques.
  4. It was in fact the much older email technology from which the term user agent originated; an email client program is more technically called a mail user agent (MUA).
  5. As it happens, this is the same argument open source software proponents make about why such software often succeeds in meeting the needs of more users than closed source, proprietary systems controlled solely by a single company with (by definition) relatively limited resources.
  6. This philosophy is embodied in the formal study of ethics, which is a compelling topic for us as CSS developers, considering the vastness of the implications we describe here.

The First Rule of Human Rights is to Never Trust Legal Systems Alone to Protect Human Rights

From the Tor Project’s blog today comes “10 Principles for User Protection.” The very first principle is “Do not rely on the law to protect systems or users.” Tor Project developer Mike Perry:

Unfortunately, it is […] likely that in the United States, current legal mechanisms, such as NSLs and secret FISA warrants, will continue to target the marginalized. This will include immigrants, Muslims, minorities, and even journalists who dare to report unfavorably about the status quo. History is full of examples of surveillance infrastructure being abused for political reasons.

[…W]e decided to enumerate some general principles that we follow to design systems that are resistant to coercion, compromise, and single points of failure of all kinds, especially adversarial failure. We hope that these principles can be used to start a wider conversation about current best practices for data management and potential areas for improvement at major tech companies.

Mike Perry’s full list of 10 principles:

  1. Do not rely on the law to protect systems or users.
  2. Prepare and test policy commentary for quick response to crisis.
  3. Only keep the user data that you currently need.
  4. Give users full control over their data.
  5. Allow pseudonymity and anonymity.
  6. Encrypt data in transit and at rest.
  7. Invest in cryptographic R&D to replace non-cryptographic systems.
  8. Eliminate single points of security failure, even against coercion.
  9. Favor open source and enable user freedom.
  10. Practice transparency: share best practices, stand for ethics, and report abuse.

It’s genuinely refreshing to see this sort of thing coming from techies. The danger, of course, is in failing to point out that this is the sort of stuff the marginalized groups Mike Perry mentions have already been saying for generations. This is not a “pat ourselves on the back” moment, white techies. This is a “seriously, what the fuck is wrong with us that it took a Trump electoral victory to get vocal about this super basic stuff?” moment. (Spoiler: the answer is white supremacist capitalist patriarchy and all that it entails.)

Image courtesy CurrentAffairs.org.

Defense Against the Dark Arts and Mr. Robot’s Netflix ‘n’ Hack (rebooted) at Recurse Center

Last Saturday, I hosted another Mr. Robot’s Netflix ‘n’ Hack session at the Recurse Center. I’ve been doing these weekly for three weeks now (here is a link to last week’s), and this time was the first week when the new set of batchlings were in the space. To better include them, we rebooted the series and re-screened the first episode of the show.

Last week also saw the national elections in the United States. The outcome of that election was that Donald Drumpf was voted into office as President, and over the course of the week he began appointing self-described white nationalists to positions of power in his upcoming administration. In light of these events, I’ve spent most of my waking hours fielding incoming requests for help about “what to do” in a number of different areas.

This election changes very little for me, personally. I was already aware that we live in a police state, controlled by fascists and white supremacists. I have been preparing for this eventuality, and worse, for a long time. What this election changed, for me, was that everyone around me suddenly started treating me like the things I was doing made sense, rather than like some overly paranoid weirdo. So, that’s nice.

This also means that I’ve been getting lots of questions about digital security, privacy, anti-surveillance, and censorship circumvention techniques. Y’know, comsec, opsec, and security culture stuff. Given all that, I decided to kick off the new round of Mr. Robot’s Netflix ‘n’ Hack sessions with a whirlwind crash course in the defensive aspects of computer security. Basically, I ran a very compressed CryptoParty.

Someone suggested that we call this a “Defense Against the Dark Arts” session, and I liked the analogy well enough to take the suggestion. Like the other Mr. Robot’s Netflix ‘n’ Hack nights, this one was well attended. We filled the session room to the max. There were probably between 15 and 20 of us to start with, and then attendance dwindled to about 10 for the actual screening and post-screening discussion.

In my paradoxical, eternal optimism, I somehow had the idea that we could complete this lightning CryptoParty, which included install fests of Signal and the TorBrowser, within thirty minutes. I was wrong; we went over by about 30 minutes, and the screening of Mr. Robot started late. But so many (all?) of the attendees got set up with Signal and the TorBrowser, and that was really great.

As promised, I wanted to make sure that everyone had links to the reference guides and other resources presented in this super-quick “Defense Against the Dark Arts” session. To do so, I sent a follow-up email with links to those resources. A portion of that email is presented verbatim, here:

In addition to these primers and the links included in them, additional useful resources are:

  • PrivacyTools.io – Simply start at the top and read down the page. This is as guided an introduction to privacy issues and what to do about them as it gets.
  • EFF’s Surveillance Self-Defense Handbook – A thorough treatment of anti-surveillance software, along with tutorials for how to get them installed and working on your system.
    • If you’re feeling overwhelmed by all of this already, consider spending just a little bit of time to walk yourself through the SSD’s Security Starter Pack.
  • PRISM-Break! – An overwhelmingly large digital reference card for all the privacy-enhancing tools available to you for a particular platform, purpose, or protocol. Be cautious here: some of the listed tools are experimental, not audited, or worse.
  • Security in a Box – A slightly dated, but still generally solid, resource website featuring much of the same content as the EFF’s Surveillance Self-Defense guide, but with a regularly updated blog. Created and maintained by the TacticalTech.org collective.

There’s a ton of stuff in there, and learning how to defend yourself from governments, corporations, or malicious individuals on the Internet is more involved than simply picking up one or two tools. But a few well-chosen tools do give you a really, really good start. Taking some time to familiarize yourself with the above guides will hopefully help you become even more capable.

Following the install fest, we finally screened Episode 1 of Mr. Robot again. I already posted our list of tools, techniques, and procedures from the first week, and this didn’t change much. With a different audience, however, the discussion we had post-show did change quite a bit.

Unlike the first week, when people were interested in Tor onion routing and the dark/deep Web, this time people wanted to know about social engineering and password cracking. So our discussion focused on sharing resources for social engineering, and books such as Kevin Mitnick’s “The Art of Deception” and Robert Cialdini’s “Influence: The Psychology of Persuasion” came up. (So did Freedom Downtime, a documentary about Kevin Mitnick’s persecution by the FBI.)

After that, we also talked about the mechanics of password cracking. I gave an overview of the process from exploitation to data exfiltration, but focused on using the hash-“cracking” (really guessing) tool called Hashcat to demo finding the plaintext of hashed passwords. A lot of time in the discussion was spent showing the practicalities of how hashing (i.e., “one-way functions”) works by using the md5 and shasum commands on the command line. Then I showed the syntax of the hashcat command to run a dictionary attack (with the infamous “rockyou” wordlist) against simple unsalted MD5 hashed passwords from a very old data dump file (hashcat -a 0 md5sums.txt wordlists/rockyou.txt). Have a look at the SecLists project on GitHub to find wordlists like these that are useful for password cracking.
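If you want to see what Hashcat is doing under the hood without installing it, here is a minimal Python sketch of the same dictionary attack: hash each candidate word from the wordlist and check whether the result appears among the stolen digests. (The file names mirror the hashcat example above and are placeholders; any newline-delimited hash dump and wordlist will do.)

    import hashlib

    # Load the stolen, unsalted MD5 hashes (one hex digest per line).
    with open("md5sums.txt") as hashes:
        stolen = {line.strip().lower() for line in hashes if line.strip()}

    # Hash every candidate password and look for a match. A match means
    # the plaintext has been "cracked" (really: guessed).
    with open("wordlists/rockyou.txt", encoding="latin-1") as wordlist:
        for word in wordlist:
            word = word.rstrip("\n")
            if hashlib.md5(word.encode("latin-1")).hexdigest() in stolen:
                print(word)

Hashcat does essentially this, just orders of magnitude faster, with GPU acceleration and “mangling rules” that mutate each candidate word.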

We also talked about some common mistakes that application developers make when trying to secure their applications, and that users often make when trying to secure their passwords, along with how to avoid them:

  • Generate per-user salts, rather than a single per-site salt.
  • Don’t just double-hash passwords (i.e., hash(hash($password))), because this reduces the entropy used as input for the final result and increases the chance of hash collisions. Instead, iterate the hash function by concatenating the original input (or a salt, or something) back into the resulting hash as well (i.e., hash($salt . hash($salt . $password))); see the sketch after this list. This iteration also slows down an offline attack, but again, only if done correctly in code.
  • Don’t use multiple dictionary words as a password, even a long one, because these are easy to guess. For instance, contrary to popular belief, “correct horse battery staple” is a bad password, not because it lacks entropy, but because all of its components are likely to be in an attacker’s wordlist. Use a password manager and generate random passwords, instead.
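To make the first two bullets concrete, here is a minimal Python sketch of per-user salting with an iterated hash that concatenates the salt back in on every round. It’s a toy for illustration only (SHA-256 and the iteration count are my own arbitrary choices); in production you’d reach for a purpose-built password hashing scheme such as bcrypt, scrypt, or PBKDF2.

    import hashlib
    import hmac
    import os

    ITERATIONS = 100_000  # iteration slows down offline guessing

    def _digest(password: str, salt: bytes) -> str:
        # Feed the salt back in on every round, i.e. roughly
        # hash(salt . hash(salt . password)) iterated, rather than the
        # naive (and weaker) hash(hash(password)).
        d = hashlib.sha256(salt + password.encode()).hexdigest()
        for _ in range(ITERATIONS):
            d = hashlib.sha256(salt + d.encode()).hexdigest()
        return d

    def hash_password(password: str) -> tuple[bytes, str]:
        salt = os.urandom(16)  # random per-user salt, not one per-site value
        return salt, _digest(password, salt)

    def verify(password: str, salt: bytes, stored: str) -> bool:
        # Constant-time comparison avoids leaking timing information.
        return hmac.compare_digest(_digest(password, salt), stored)

Store the salt alongside the resulting digest; the salt isn’t a secret, it just forces an attacker to crack each user’s hash separately instead of attacking the whole dump at once.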

Next week, we’ll return to our regularly-scheduled Mr. Robot’s Netflix ‘n’ Hack format: a demo/show-and-tell/exercise of a tool, technique, or procedure (TTP) featured in Episode 1, followed by a screening of Episode 2, and ending with a discussion about Episode 2’s TTPs. I thought that since we’ve done Onion services already, I would change gears and show an online attack similar to some of the ones Elliot used in the show by demoing a tool called Hydra. Another participant also said they may demo hiding data inside of audio CDs using a steganographic tool called DeepSound, also featured in Episode 1.

However, this upcoming Saturday brings a number of anti-Trump and anti-surveillance organizing meetings and workshops, so I may have to skip this week’s Mr. Robot’s Netflix ‘n’ Hack myself. If not, we may switch to Sunday just for the week. Time will tell. :)

Ethics Refactoring: An experiment at the Recurse Center to address an ACTUAL crisis among programmers

Ethics Refactoring session, part 1

Ethics Refactoring session, part 2

I’ve been struggling to find meaningful value from my time at the Recurse Center, and I have a growing amount of harsh criticism about it. Last week, in exasperation and exhaustion after a month of taking other people’s suggestions for how to make the most out of my batch, I basically threw up my hands and declared defeat. One positive effect of declaring defeat was that I suddenly felt more comfortable being bolder at RC itself; if things went poorly, I’d just continue to distance myself. Over the weekend, I tried something new (“Mr. Robot’s Netflix ‘n’ Hack”), and that went well. Last night, I tried another, even more new thing. It went…not badly.

Very little of my criticism about RC is actually criticism that is uniquely applicable to RC. Most of it is criticism that could be levied far more harshly at basically every other institution that claims to provide an environment to “learn to code” or to “become a dramatically better programmer.” But I’m not at those other institutions, I’m at this one. And I’m at this one, and not those other ones, for a reason: Recurse Center prides itself on being something very different from all those other places. So it’s more disappointing, not less, that the criticisms I do have of RC feel equally applicable to those other spaces.

That being said, because no other institution I’m aware of is structured quite like the Recurse Center is, the experiments I tried out this week after declaring a personal “defeat” would not even be possible in another venue. That is a huge point in RC’s favor. I should probably write a more thorough and less vague post about all these criticisms, but that post is not the one I want to write today. Instead, I just want to write up a bit about the second experiment that I tried.

I called it an “ethics refactoring session.” The short version of my pitch for the event read as follows:

What is the operative ethic of a given feature, product design, or implementation choice you make? Who is the feature intended to empower or serve? How do we measure that? In “Ethical Refactoring,” we’ll take a look at a small part of an existing popular feature, product, or service, analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service from a different ethical perspective and see how this affects our development process and design choices.

Basically, I want there to be more conversations among technologists that focus on why we’re building what we’re building. Or, in other words:

Not a crisis: not everybody can code.

Actually a crisis: programmers don’t know ethics, history, sociology, psychology, or the law.

https://twitter.com/bmastenbrook/status/793104148732469248

Here’s an idea: before we teach everybody to code, how about we teach coders about the people whose lives they’re affecting?

https://twitter.com/bmastenbrook/status/793104080214392832

Ethics is one of those things that are hard to convince people with power—such as most professional programmers, especially the most “successful” of them—to take seriously. Here’s how Christian Rudder, one of the founders of OkCupid and a very successful Silicon Valley entrepreneur, views ethics and ethicists:

Interviewer: Have you thought about bringing in, say, like an ethicist to, to vet your experiments?

Christian Rudder: To wring his hands all day for a hundred thousand dollars a year?

Interviewer: Well, y’know, you could pay him, y’know, on a case by case basis, maybe not a hundred thousand a year.

CR: Sure, yeah, I was making a joke. No we have not thought about that.

The general attitude that ethics are just, like, not important is of course not limited to programmers and technologists. But I think it’s clear why this is more an indictment of our society writ large than it is any form of sensible defense for technologists. Nevertheless, this is often used as a defense, anyway.

One of the challenges inherent in doing something that no one else is doing is that, well, no one really understands what you’re trying to do. It’s unusual. There’s no role model for it. Precedent for it is scant. It’s hard to understand unfamiliar things without a lot of explanation or prior exposure to those things. So in addition to the above short pitch, I wrote a longer explanation of my idea on the RC community forums:

Hi all,

I’d like to try an experiment that’s possibly a little far afield from what many folks might be used to. I think this would be a lot more valuable with involvement from the RC alumni community, so I’m gonna make a first attempt this upcoming Tuesday, November 1st, at 6:30pm (when alumni are welcome to stop by 455 Broadway).

And what is this experiment? I’m calling it an “Ethics Refactoring” session.

In these sessions, we’ll take a look at a small part of an existing popular feature, product, or service that many people are likely already familiar with (like the Facebook notification feed, the OkCupid “match percentage” display, and so on), analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service taking a different ethical stance and see how this affects our development process and design choices.

This isn’t about “right” or “wrong,” “better” or “worse,” nor is it about making sure everyone agrees with everyone else about what ethic a given feature “should” prioritize. Rather, I want this to be about:

  • practicing ways of making explicit the implicit values-decision process that happens during product/feature development and implementation,
  • gaining a better understanding of the ethical “active ingredient” in a given feature, product design, or implementation choice, and
  • honing our own communication skills (both verbally and through our product designs) around expressing our values to different people we work with.

I know this sounds a bit vague, and that’s because I’ve never done anything like this and don’t exactly know how to realize the vision in my head for a session like this. My hope is that something like the above description is close enough, and intriguing enough, to enough people (and particularly to the alumni community) that y’all will be excited enough to try out something new like this with me.

Also, while not exactly what I’m talking/thinking about, one good introduction to some of the above ideas in a very particular area is at the http://TimeWellSpent.io website. Take a moment to browse that site if the above description leaves you feeling curious but wary of coming to this. :)

I think “Ethics Refactoring” sessions could be useful for:

  • getting to know fellow RC’ers who you may not spend much time with due to differences in language/framework/platform choice,
  • gaining insight into the non-obvious but often far-reaching implications of making certain design or implementation choices,
  • learning about specific technologies by understanding their non-technological effects (i.e., learning about a class of technologies by starting at a different place than “the user manual/hello world example”), and
  • having what are often difficult and nuanced conversations with employers, colleagues, or even less-technical users for which understanding the details of people’s life experiences as well as the details of a particular technology is required to communicate an idea or concern effectively.

-maymay

And then when, to my surprise, I got a lot more RSVPs than I’d expected, I further clarified:

I’m happy to note that there are 19(!!!) “Yes” RSVPs on the Zulip thread, but a little surprised because I did not have such a large group in mind when I conceived this. Since this is kind of an experiment from the get-go, I think I’m going to revise my own plan for facilitating such a session to accommodate such a relatively large group and impose a very loose structure. I also only allotted 1 hour for this, and with a larger group we may need a bit more time?

With that in mind, here is a short and very fuzzy outline for what I’m thinking we’ll do in this session tomorrow:

  • 5-10min: Welcome! And a minimal orientation for what we mean when we say “ethic” for the purpose of this session (as in, “identify the operative ethic of a given feature”). Specifically, clarify the following: an “ethic” is distinct from and not the same thing as an “incentive structure” or a “values statement,” despite being related to both of those things (and others).
  • 15-20min: Group brainstorm to think of and list popular or familiar features/products/services that are of a good size for this exercise; “Facebook” is too large, “Facebook’s icon for the Settings page” is too small, but “Facebook’s notification stream” is about right. Then pick two or three from the list that the largest number of people have used or are familiar with, and see if we can figure out what those features’ “operative ethics” can reasonably be said to be.
  • 15-20min: Split into smaller work-groups to redesign a given feature; your work-groups may work best if they consist of people who 1) want to redesign the same given feature as you and 2) want to redesign to highlight the same ethic as you. I.e., if you want to redesign Facebook’s notification stream to highlight a given ethic, group with others who want to work both on that feature AND towards the same ethic. (It is okay if you have slight disagreements or different goals than your group-mates; the point of this session is to note how ethics inform the collaborative process, not to produce a deliverable or to write code that implements a different design.)
  • 10-15min: Describe the alternate design your group came up with to the rest of the participants, and ask/answer some questions about it.

This might be a lot to cram into 1 hour with 19+ people, but I really have no idea. I’m also not totally sure this will even “work” (i.e., translate well from my head to an actual room full of people). But I guess we’ll know by tomorrow evening. :)

The session itself did, indeed, attract more attendees than I was originally expecting. (Another good thing about Recurse Center: the structure and culture of the space makes room for conversations like these.) While I tried to make sure we stuck to the above outline, we didn’t actually stick strictly to it. Instead of splitting into smaller groups (which I still think would have been a better idea), we stayed in one large group; it’s possible that 1 hour is simply not enough time. Or I could have been more forceful in facilitating. I didn’t really want to be, though; I was doing this partially to suss out people in the RC community whom I didn’t yet know but might mesh with, as much as to provide a space for the current RC community to have these conversations and expose them to a way of thinking about technology that I regularly practice already.

The pictures attached to this post are a visual record of the two whiteboards’ “final” results from the conversation. The first is simply a list of features (“brainstorm to think of and list popular features”), and included:

  • Facebook’s News Feed
  • Yelp recommendation engine
  • Uber driver rating system
  • Netflix auto-play
  • Dating site messaging systems (Tinder “match,” OkCupid private messages, Bumble “women message first”)

One pattern that kept recurring throughout the session was that people seemed reticent or confused at the beginning of each block (“what do you mean ethics are different from values?” and “I don’t know if there are any features I can think of with these kinds of ethical considerations”), and yet by the end of each block we had far, far more relevant examples to analyze than we actually had time to discuss. I think this clearly reveals how under-discussed and under-appreciated this aspect of programming work really is.

The second picture shows an example of an actual “ethical refactoring” exercise. The group of us chose to use Uber’s driver rating system as the group exercise, because most of us were familiar with it and it was a fairly straightforward system. I began by asking folks how the system presented itself to them as passengers, and then drawing simplified representations of the screens on the whiteboard. (That’s what you see in the top-left of the second attached image.) Then we listed out some business cases/reasons for why this feature exists (the top-right of the second attached image), and from there we extrapolated some larger ethical frameworks by looking for patterns in the business cases (the list marked “Ethic???” on the bottom-right of the image).

By now, the group of us had vastly different ideas about not only why Uber did things a certain way, but also about what a given change someone suggested to the system would do, and the exercise stalled a bit. I think this in itself revealed a pretty useful point: a design choice you make with the intention of having a certain impact may actually feel very different to different people. This sounds obvious, but actually isn’t.

Rather than summarize our conversation, I’ll end by listing a few take-aways that I think were important:

  • Ethics is a systems-thinking problem, and cannot be approached piecemeal. That is, you cannot make a system “ethical” by minor tweaks, such as by adding a feature here or removing a feature there. The ethics of something is a function of all its components and the interactions between them, both technical and non-technical. The analogy I used was security: you cannot secure an insecure design by adding a login page. You have to change the design, because a system is only as secure as its weakest link.
  • Understand and appreciate why different people might look at exactly the same implementation and come away feeling like a very different operative ethic is the driving force of that feature. In this experimental session, one of the sticking points was the way in which Uber’s algorithm for rating drivers was considered either to be driven by an ethic of domination or an ethic of self-improvement by different people. I obviously have my own ideas and feelings about Uber’s rating system, but the point here is not that one group is “right” and the other group is “wrong,” but rather that the same feature was perceived in a very different light by different sets of people. For now, all I want to say is notice and appreciate that.
  • Consider that second-order effects will reach beyond the system you’re designing and impact people who are not direct users of your product. This means that designers should consider the effects their system has not just on their product’s direct user base, but also on the people who can’t, won’t, or just don’t use their product. Traditionally, these groups of people are either ignored or actively “converted” (think of how “conversions” means “sales” to business people), but there are a lot of other reasons why this approach isn’t good for anyone involved, including the makers of a thing. Some sensitivity to the ecosystem in which you are operating is helpful to the design process, too (think interoperability, for example).
  • Even small changes to a design can massively alter the ethical considerations at play. In our session, one thing that kept coming up about Uber’s system is that a user who rates a driver has very little feedback about how that rating will affect the driver. A big part of the discussion we had centered on questions like, “What would happen if the user would be shown the driver’s new rating in the UI before they actually submitted a given rating to a given driver?” This is something people were split about, both in terms of what ethic such a design choice actually mapped to as well as what the actual effect of such a design choice would be. Similar questions popped up for other aspects of the rating system.
  • Consider the impact of unintended, or unexpected, consequences carefully. This is perhaps the most important take-away, and also one of the hardest things to actually do. After all, the whole point of an analysis process is that it analyzes only the things that are captured by the analysis process. But that’s the rub! It is often the unintentional byproducts of a successful system, rather than its intentional direct results, that have the strongest impact (whether good or bad). As a friend of mine likes to say, “Everything important is a side-effect.” This was made very clear through the exercise simply by virtue of the frequency and ease with which a suggestion by one person often prompted a different person to highlight a likely scenario in which that same suggestion could backfire.

I left the session with mixed feelings.

On the one hand, I’m glad to have had a space to try this out. I’m pleased and even a little heartened that it was received so warmly, and I’m equally pleased to have been approached by numerous people afterwards who had a lot more questions, suggestions, and impressions to share. I’m also pleased that at no point did we get too bogged down in abstract, philosophical conversations such as “but what are ethics really?” Those are not fruitful conversations. Credit to the participants for being willing to try something out of the ordinary, and potentially very emotionally loaded, and doing so with grace.

On the other hand, I’m frustrated that these conversations seem perpetually stuck in places that I feel are elementary. That’s not intended as a slight against anyone involved, but rather as an expression of loneliness on my part, and the pain at being reminded that these are the sorts of exercises I have been doing by myself, with myself, and largely for myself for long enough that I’ve gotten maddeningly more familiar with doing them than anyone else that I regularly interact with. If I had more physical, mental, and emotional energy, and more faith that RC was a place where I could find the sort of relationships that could feasibly blossom into meaningful collaborations with people whose politics were aligned with mine, then I probably would feel more enthused that this sort of thing was so warmly received. As it stands though, as fun and as valuable as this experiment may have been, I have serious reservations about how much energy to devote to this sort of thing moving forward, because I am really, really, really tired of making myself the messenger, or taking a path less traveled.

Besides, I genuinely believe that “politicizing techies” is a bad strategy for revolution. Or at least, not as good a strategy as “technicalizing radicals.” And I’m just not interested in anything short of revolution. ¯\_(ツ)_/¯

Self-described activist creator of Cell 411 app weirdly refuses to discuss its closed source tech because of anti-racist Twitter handle of the person asking

About a week ago I published a post cautiously praising the work of Boulder, Colorado-based SafeArx, the company behind a smartphone app called Cell 411 claiming to cut down on the need for police:

Let me be clear that I love the idea of a decentralized emergency alerting response platform. I think it’s incredibly important for such a tool to exist. […] I want to see a project with Cell 411’s claims succeed and be a part of abolishing the police and the State altogether. I think there’s real potential there to make headway on an important social good (abolishing the police, dismantling the prison industrial complex, among other social goods) and I want to offer whatever supportive resources I can to further a project with these goals.

In the post, I raised some basic questions about Cell 411 that seemed to have gone unasked by reporters covering it. Chief among them is that the app claims to be a de-centralized alternative to 9-1-1, except that it’s not decentralized at all. I described this discrepancy as follows:

On the Google Play store, Cell 411 describes itself like this:

Cell 411 is a De-centralized, micro-social platform that allows users to issue emergency alerts, and respond to alerts issued by their friends.

The problem is in the very first adjective: de-centralized. To a technologist, “decentralization” is the characteristic of having no single endpoint with which a given user must communicate in order to make use of the service. Think trackerless BitTorrent, BitCoin, Tor, or Diaspora. These are all examples of “decentralized” networks or services because if any given computer running the software goes down, the network stays up. One of the characteristics inherent in decentralized networks is the inability of the network or service creator to unilaterally bar access to the network by a given end-user. In other words, there is no one who can “ban” your account from using BitTorrent. That’s not how “piracy” works, duh.

Unfortunately, many of the people I’ve spoken to about Cell 411 seem to believe that “decentralized” simply means “many users in geographically diverse locations.” But this is obviously ignorant. If that were what decentralized meant, then Facebook and Twitter and Google could all be meaningfully described as “decentralized services.” That’s clearly ridiculous. This image shows the difference between centralization and decentralization:

The difference between centralization and decentralization.

As you can see, what matters is not where the end users are located, but that there is more than one hub for a given end user to connect to in order to access the rest of the network.

Armed with that knowledge, have a look at the very first clause of Cell 411’s Terms of Service legalese, which reads, and I quote:

1. We may terminate or suspend your account immediately, without prior notice or liability, for any reason whatsoever, including without limitation if you breach the Terms.

This is immediately suspect. If they are able to actually enforce such a clause, then it directly contradicts the claim made in their own description. In a truly decentralized network or service, the ability for the network creator to unilaterally “terminate or suspend your account immediately, without prior notice or liability” is not technically possible. If Cell 411 truly is decentralized, this is an unenforceable clause, and they know it. On the other hand, if Cell 411 is centralized (and this clause is enforceable), other, more troubling concerns immediately come to mind. Why should activists trade one centralized emergency dispatch tool run by the government (namely, 9-1-1), for another centralized one run by a company? Isn’t this just replacing one monopoly with another? And why bill a centralized service as a decentralized one in the first place?

Despite this, I was hopeful that Cell 411’s creator, Virgil Vaduva, and his team would be willing to at least address the point, perhaps by discussing their development roadmap. Maybe it’s not decentralized yet, but they intend to decentralize it later on? That would be awesome, and important. Moreover, I asked if they would be interested in combining efforts with me or others with whom I’ve worked, since we’ve been developing an actually decentralized, free software tool with the same goal in mind called Buoy for a few months now. I said as much in my earlier post:

I want to see Cell 411 and Buoy both get better. Buoy could become better if it had Cell 411’s mobile app features. Cell 411 could become better if its server could be run by anyone with a WordPress blog, like Buoy can be.

I sent Virgil Vaduva an email last week, and tweeted at him before writing my post. (My previous post includes a copy of the email I sent him.) I was ignored. So I started tweeting at others who were tweeting about Cell 411, linking them to my questions. It seems that’s what got Mr. Vaduva’s attention, since today I finally got a response from him. And that response is extremely concerning for Cell 411’s supposed target audience: activists. Here’s how Mr. Vaduva “answered” my technical questions:

I’m not entirely sure why technical questions like these were answered with a hyper-focus on the militantly anti-racist Twitter handle I happen to be using right now (it’s actually “Kill White Amerikkka”), unless of course Vaduva is having some kind of trigger reaction caused by (evidently not-so-latent) internalized white supremacy. Later, he called my original post, which, again, included outright praise for Cell 411, a “shitty hit piece.” I even offered to change my Twitter handle (as if that has any bearing at all on the technical matters?) for the duration of a discussion with him, but again, the only replies were, well, have a look:

The full thread is…well, classic Twitter.

I don’t know about you, but the idea of installing a closed-source app that reports my location to a centralized database controlled by a company whose founder actively deflects legitimate technical questions by objecting to a militantly anti-racist Twitter handle and making immature pro-capitalist statements doesn’t sit well with me. But even if that were something I could tolerate, it raises even more concerning questions when that very same app is one touted as being built for anti-police brutality activists.

Last week, I would have told my friends, “Go ahead and try Cell 411, but be careful.” With this new information, my advice is: “Don’t trust anything created by SafeArx, including Cell 411, until and unless the technical issues are addressed, the source is released as free software, and its creators make clear that anti-racism and anti-capitalism is a core intention of their development process.”

In my personal opinion, tools like Cell 411 that purport to be “made for activists, by activists” need to be comfortable materially advancing the destruction of whiteness and white identity, as well as standing in solidarity with militant resistance to white supremacy. But even putting aside concerns over Vaduva’s discomfort with anti-racist Twitter handles, any technologist worth his salt who wants his closed-source technology to be trusted should be able to answer some basic questions about it if he’s indeed unwilling to release the source code itself.

Mr. Vaduva and Cell 411 fall short on both counts. The sad thing is that any potentially latent racism in Cell 411’s creator wouldn’t be a technical concern if Cell 411 itself were actually decentralized free software, since the intentions or social beliefs of an app’s creator can’t change how the already-written code works. As I said in the conclusion to my previous post:

It’s obvious, at least to anyone who understands that the purpose of cops is to protect and uphold white supremacy and oppress the working class, why cops would hate a free decentralized emergency response service. Again, I want to use such an app so badly that I began building one myself.

But if Cell 411 is centralized, then it becomes a much more useful tool for law enforcement than it does for a private individual, for exactly the same reason as Facebook presents a much more useful tool for the NSA than it does for your local reading group, despite offering benefits to both.

Cartoon of a protester ineffectually trying to shoot corrupt government officials with a 'Facebook' logo positioned as a gun.

[…]

As long as Cell 411 remains a proprietary, closed-source, centralized tool, all the hype about it being a decentralized app that cops hate will remain hype. And there are few things agents of the State like more than activists who are unable to see the reality of a situation for what it is.

Admiral Ackbar: Proprietary and centralized software-as-a-service? It's a trap!

If you think having a free software, anarchist infrastructural alternative to the police and other State-sponsored emergency services is important and want to see it happen, we need your help making Buoy better. You can find instructions for hacking on Buoy on our wiki.

Cell 411, the “de-centralized” smartphone app that “cops hate,” is neither de-centralized nor hated by cops

If you’re following anti-police brutality activists, you might have heard about a new smartphone app that aims to cut down on the need for police. Cell 411 is touted as “the decentralized emergency alerting and response platform” that “cops don’t want you to use.” There’s only one problem: its central marketing claims aren’t true. Cell 411 is not decentralized, and there’s no evidence that cops don’t want you to use it.

Let me be clear that I love the idea of a decentralized emergency alerting response platform. I think it’s incredibly important for such a tool to exist. I’m so committed to that belief that I’ve been building a free software implementation of just such a tool, called Buoy, for a few months now.

Further, I believe it’s equally important that the developers of a tool like this actively eschew the State-sponsored terrorist gangs known as law enforcement, because that mindset will inform the tool’s development process itself. On the face of it and from the research I’ve done to look into Cell 411’s developers, I think there is a lot of welcome overlap between them and myself. Indeed, I’m grateful to them for developing Cell 411 and for dropping their price for it, offering it free-of-charge on the Android and iOS app stores, which is how it should be. Nobody should be charged any money for the opportunity to access tools for self- and community protection; that’s what cops do!

I’ve even reached out both publicly and privately to the developers of Cell 411 through email and Twitter to ask them about a possible collaboration, pointing them at the source code for the Buoy project I’m working on and asking where their source can be found.1 I want to see a project with Cell 411’s claims succeed and be a part of abolishing the police and the State altogether. I think there’s real potential there to make headway on an important social good (abolishing the police, dismantling the prison industrial complex, among other social goods) and I want to offer whatever supportive resources I can to further a project with these goals.

But I am concerned that Cell 411 is not that project. The fact is there are glaring, unexplained inconsistencies between their marketing material, the perception they encourage the public to have about their tool, and their tool’s legal disclaimers. Such inconsistency is, well, sketchy. But it’s not unfamiliar, because this exact kind of inconsistency is something activists have seen from corporations and even well-meaning individuals before. We should be able to recognize it no matter the flag it flies, no matter how pretty the packaging in which the message is wrapped.

On the Google Play store, Cell 411 describes itself like this:

Cell 411 is a De-centralized, micro-social platform that allows users to issue emergency alerts, and respond to alerts issued by their friends.

The problem is in the very first adjective: de-centralized. To a technologist, “decentralization” is the characteristic of having no single endpoint with which a given user must communicate in order to make use of the service. Think trackerless BitTorrent, BitCoin, Tor, or Diaspora. These are all examples of “decentralized” networks or services because if any given computer running the software goes down, the network stays up. One of the characteristics inherent in decentralized networks is the inability of the network or service creator to unilaterally bar access to the network by a given end-user. In other words, there is no one who can “ban” your account from using BitTorrent. That’s not how “piracy” works, duh.

Unfortunately, many of the people I’ve spoken to about Cell 411 seem to believe that “decentralized” simply means “many users in geographically diverse locations.” But this is obviously ignorant. If that were what decentralized meant, then Facebook and Twitter and Google could all be meaningfully described as “decentralized services.” That’s clearly ridiculous. This image shows the difference between centralization and decentralization:

The difference between centralization and decentralization.

As you can see, what matters is not where the end users are located, but that there is more than one hub for a given end user to connect to in order to access the rest of the network.
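To make that distinction concrete for the programmers reading this, here is a minimal sketch of the difference as seen from a client app. All the endpoint names are hypothetical, invented only for illustration; the point is the shape of the two designs, not any real service’s API. In the centralized design there is exactly one door, and the operator decides whether it opens for you. In the decentralized design, any reachable hub will do, including one you run yourself.

```typescript
// Centralized: every request goes through the operator's one endpoint.
// If they "terminate your account," you are simply cut off.
const CENTRAL_API = "https://api.example-centralized.com"; // hypothetical

async function sendAlertCentralized(alert: unknown): Promise<void> {
  const res = await fetch(`${CENTRAL_API}/alerts`, {
    method: "POST",
    body: JSON.stringify(alert),
  });
  if (!res.ok) throw new Error("The operator said no, and there is no plan B.");
}

// Decentralized: the client knows many independent hubs, any one of which
// suffices. No single operator can bar you from all of them.
const KNOWN_HUBS = [
  "https://hub-a.example.org",        // hypothetical; run by someone else
  "https://hub-b.example.net",        // hypothetical; run by a collective
  "https://buoy.my-own-site.example", // or run your own
];

async function sendAlertDecentralized(alert: unknown): Promise<void> {
  for (const hub of KNOWN_HUBS) {
    try {
      const res = await fetch(`${hub}/alerts`, {
        method: "POST",
        body: JSON.stringify(alert),
      });
      if (res.ok) return; // delivered; any one reachable hub is enough
    } catch {
      // This hub is down (or has banned us); just try the next one.
    }
  }
  throw new Error("No reachable hub");
}
```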

Armed with that knowledge, have a look at the very first clause of Cell 411’s Terms of Service legalese, which reads, and I quote:

1. We may terminate or suspend your account immediately, without prior notice or liability, for any reason whatsoever, including without limitation if you breach the Terms.

This is immediately suspect. If they can actually enforce such a clause, then it directly contradicts their own description of the app. In a truly decentralized network or service, the network creator simply has no technical means to “terminate or suspend your account immediately, without prior notice or liability.” If Cell 411 truly is decentralized, this is an unenforceable clause, and they know it. On the other hand, if Cell 411 is centralized (and this clause is enforceable), other, more troubling concerns immediately come to mind. Why should activists trade one centralized emergency dispatch tool run by the government (namely, 9-1-1) for another centralized one run by a company? Isn’t this just replacing one monopoly with another? And why bill a centralized service as a decentralized one in the first place?
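Here is why such a clause is technically meaningless in a genuinely decentralized design. In those systems, “your account” is usually nothing more than a cryptographic keypair you generate yourself, on your own device; no server issues it, so no server can revoke it. A minimal sketch using Node’s built-in crypto module (a general illustration of the principle, not Cell 411’s or Buoy’s actual design):

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// "Your account" is a keypair you mint yourself. No operator grants it,
// so no Terms of Service clause can take it away.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// You sign your messages locally with your private key...
const message = Buffer.from("Alert: responders needed at my location");
const signature = sign(null, message, privateKey);

// ...and any peer verifies them with your public key alone. No central
// account database is consulted, so there is nothing to "terminate."
console.log(verify(null, message, publicKey, signature)); // prints: true
```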

Virgil Vaduva, Cell 411’s creator, told me on Twitter that the app is not open source, but hinted that it might be in the future.

This leaves me with even more questions, which I asked but have not yet received answers to. (See the Twitter thread linked above.)

Cell 411’s proprietary source code is licensed under an unusual license called the BipCot NoGov license, written by a libertarian group with whom I share distrust and hatred of the United States government. Where we differ, apparently, can be summed up by this Andy Singer quote:

Libertarianism is just Anarchy for rich people.

And that concerns me greatly. Cell 411 originally cost 99¢ per install on both the Google Play and iTunes app stores. It’s now free, which, again, is a move in the right direction. But by refusing to release the source code, SafeArx holds its users hostage in more ways than one. There are already rumors that the company intends to monetize the app, perhaps by charging for downloads again or perhaps in some other way. That is fucked. The people who need an alternative to the police most of all are not people with money. That’s why all of Buoy’s code was available as free software from the very beginning: so those people could access the tool. And beyond that, it’s the very people who most need an alternative to the prison industrial complex who are also most in need of safety from capitalism’s exploitative “monetization.”

I hope Virgil chooses to make Cell 411 free software too—i.e., not just free as in no-charge but software libre as in freedom and liberty. A closed-source tool is downright dangerous for activists to rely on, especially for an app that is supposed to be all about communal safety. This has never been more obvious than in the post-Snowden age. If you share our goal of abolishing the State and ending the practice of caging human beings, and you want to dialogue, please do what you can to convince the people running SafeArx and Cell 411 of the obvious strategic superiority of non-cooperation with capitalism.

Which brings me to my next major concern: there is no evidence that cops hate Cell 411, despite the headlines. It’s obvious, at least to anyone who understands that the purpose of cops is to protect and uphold white supremacy and oppress the working class, why cops would hate a free decentralized emergency response service. Again, I want to use such an app so badly that I began building one myself.

But if Cell 411 is centralized, then it becomes a much more useful tool for law enforcement than it does for a private individual, for exactly the same reason as Facebook presents a much more useful tool for the NSA than it does for your local reading group, despite offering benefits to both.

Cartoon of a protester ineffectually trying to shoot corrupt government officials with a 'Facebook' logo positioned as a gun.

I am not saying that Cell 411 is a bad tool. Far from it. My belief is that it is a good tool for individuals and my hope is that it will become a better tool over time. But if Cell 411 is to go from “good” to “great,” then it must actually be decentralized. It must be released freely to the people as free software/software libre. Private individuals who are working to create social infrastructure as an alternative to police must be able to access its source code to integrate it with other tools, to hack on it and make it more secure. This is the free software way, and it is the only feasible anti-capitalist approach. And the only strategically sound way to abolish police is to abolish capitalism, since police are by definition capitalism’s thugs.

It is the explicit intent of police and the State to prevent private individuals from taking their own protection into their own hands, from making their own lives better with their own tools in their own way. Withholding access to a tool’s source does exactly that. We, Cell 411 included, should not be emulating that behavior.

I want to be able to run my own Cell 411 server without asking for permission from SafeArx to do so. If Cell 411 were decentralized free software, I would be able to do this today, just as I can publish my own WordPress blog, install my own Diaspora pod, or run my own Tor relay without asking anyone for permission before I do it. This is what I can already do with Buoy, the community-based emergency response system that is already decentralized free software, licensed GPL-3 and available for download and install today from the WordPress plugin repository.

As a developer, I want to see Cell 411 and Buoy both get better. Buoy could become better if it had Cell 411’s mobile app features. Cell 411 could become better if its server could be run by anyone with a WordPress blog, like Buoy can be.

But as long as Cell 411 remains a proprietary, closed-source, centralized tool, all the hype about it being a decentralized app that cops hate will remain hype. And there are few things agents of the State like more than activists who are unable to see the reality of a situation for what it is.

Admiral Ackbar: Proprietary and centralized software-as-a-service? It's a trap!

If you think having a free software, anarchist infrastructural alternative to the police and other State-sponsored emergency services is important and want to see it happen, we need your help making Buoy better. You can find instructions for hacking on Buoy on our wiki.

  1. Here’s the email I sent to Virgil Vaduva, Cell 411’s creator and founder of SafeArx (the company behind the app):

    From: maymay <bitetheappleback@gmail.com>
    Date: Sat, 27 Feb 2016 20:03:38 -0700

    Hi Virgil,

    My name is maymay. I learned about Cell 411 recently and I’m excited to see its development. It is similar to a web-based project of my own. I am wondering where the source code for the Cell 411 app can be found. I could not find any links to a source code repository from any of the marketing materials that I saw on your website.

    Our own very similar project is called Buoy. The difference is that Buoy is intended for community leaders and intends to be a fully free software “community-based crisis response system,” with the same anti-cop ideology as Cell 411 but built as a plugin for WordPress in order to make it super easy for anyone to host their own community’s 9-1-1 equivalent.

    Our source code is here:

    https://github.com/meitar/better-angels/

    We have focused on the web-app side of things because that’s where our experience lies, but we’re hoping to create a native mobile app later on. It seems you already made one. Rather than reinvent the wheel, we’re hoping to integrate what you’ve done with Cell 411 with what we’ve already developed in order to facilitate a more decentralized, truly citizen-powered infrastructure alternative to 9-1-1.

    So that’s why we’re interested in looking at Cell 411’s source code.

    Thanks for your work on this so far.

    Cheers,
    -maymay
    Maymay.net
    Cyberbusking.org


Buoy (the first?) anti-policing community-based crisis response system, now available in Spanish

Buoy, (the first?) anti-policing community-based crisis response system, is now available in Spanish.

This is a really, really big deal, because communities of Spanish-speaking residents in the United Snakes of Amerikkka are some of the most oppressively policed communities in this so-called “great” country. These are sometimes families of immigrants, with members who may be undocumented, and for this simple reason they are frequent targets of the xenophobic, racist militarized occupation by the huge number of government-sponsored domestic terror gangs known as “Law Enforcement,” police, or ICE.

With Buoy, residents of these communities finally have the beginnings of a fully community-owned and operated emergency dispatch telecommunication system that does not force or even expect its users to cooperate with 9-1-1, or indeed any other traditional “public safety service” offered by government officials. Buoy users choose people they know and trust in real life and organize “teams” with one another. With the press of a single button, they can then create a private group chat that shows each team member the real-world location of all other team members, allowing team members to share video or pictures and otherwise coordinate appropriate responses to incidents, without the interference of police.
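For developers curious about what that flow implies under the hood, here is a rough sketch of the data shapes the description above suggests. To be clear, these types are my own invention for illustration, not Buoy’s actual code, which you can always read for yourself, because it’s free software:

```typescript
// Hypothetical types sketching the team-alert flow described above.
interface Coordinates { lat: number; lon: number; }

interface TeamMember {
  name: string;
  location?: Coordinates; // shared with the team only during an alert
}

interface Team {
  owner: string;
  members: TeamMember[]; // people you know and trust in real life
}

interface ChatMessage {
  author: string;
  text?: string;
  media?: string; // a photo or video shared by a responder
}

interface Alert {
  from: string;
  createdAt: Date;
  chat: ChatMessage[]; // a private group chat visible only to the team
}

// Stand-in for whatever delivery mechanism a real server would use
// (push notification, SMS, web socket, and so on).
function notify(member: TeamMember, alert: Alert): void {
  console.log(`Notifying ${member.name} of alert from ${alert.from}`);
}

// "With the press of a single button": open an alert and summon the team.
// Note what is absent: no call to 9-1-1, no operator, no police.
function pressAlertButton(team: Team, me: TeamMember): Alert {
  const alert: Alert = { from: me.name, createdAt: new Date(), chat: [] };
  for (const member of team.members) {
    notify(member, alert);
  }
  return alert;
}
```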

Here is a short video introduction to Buoy’s alert-and-response features:

Of course, there are many other ways social groups of any size can use Buoy. Here’s a list of additional use cases.

If you are interested in helping us crush the monopoly of State-backed so-called “protective services,” if you want to evict the police from your community, if you want to be part of abolishing the police and mercilessly eradicating every reason for their very existence, we want and need you to join this project. Have a look at our “Contributing” guidelines for ways you can help. Liberals, Statists, and cop apologists need not apply.

Kill white supremacy,
-maymay, Buoy developer

P.S. Did you notice how this post has a different tone than my original post announcing Buoy’s prototype release? Guess which one expresses how I really feel.

“Societies With Little Coercion Have Little Mental Illness” is a case study in Consent as a Felt Sense

I am an insane person because I have self-respecting humane reactions to being forced to do, think, and feel things I do not want to do, do not believe, and do not want to experience.

“Societies With Little Coercion Have Little Mental Illness,” by Bruce Levine, Ph.D., writing in Mad In America:

Throughout history, societies have existed with far less coercion than ours, and while these societies have had far less consumer goods and what modernity calls “efficiency,” they also have had far less mental illness. This reality has been buried, not surprisingly, by uncritical champions of modernity and mainstream psychiatry. Coercion—the use of physical, legal, chemical, psychological, financial, and other forces to gain compliance—is intrinsic to our society’s employment, schooling, and parenting. However, coercion results in fear and resentment, which are fuels for miserable marriages, unhappy families, and what we today call mental illness.

[…]

Once, when doctors actually listened at length to their patients about their lives, it was obvious to many of them that coercion played a significant role in their misery. But most physicians, including psychiatrists, have stopped delving into their patients’ lives. In 2011, the New York Times (“Talk Doesn’t Pay, So Psychiatry Turns Instead to Drug Therapy”) reported, “A 2005 government survey found that just 11 percent of psychiatrists provided talk therapy to all patients.” As the article points out, psychiatrists can make far more money primarily providing “medication management,” in which they only check symptoms and adjust medication.

Since the 1980s, biochemical psychiatry in partnership with Big Pharma has come to dominate psychiatry, and they have successfully buried truths about coercion that were once obvious to professionals who actually listened at great length to their patients—obvious, for example, to Sigmund Freud (Civilization and Its Discontents, 1929) and R.D. Laing (The Politics of Experience, 1967). This is not to say that Freud’s psychoanalysis and Laing’s existential approach always have been therapeutic. However, doctors who focus only on symptoms and prescribing medication will miss the obvious reality of how a variety of societal coercions can result in a cascade of family coercions, resentments, and emotional and behavioral problems.

Modernity is replete with institutional coercions not present in most indigenous cultures. This is especially true with respect to schooling and employment, which for most Americans, according to recent polls, are alienating, disengaging, and unfun. As I reported earlier this year (“Why Life in America Can Literally Drive You Insane”), a Gallup poll, released in January 2013, reported that the longer students stay in school, the less engaged they become, and by high school, only 40% reported being engaged. Critics of schooling—from Henry David Thoreau, to Paul Goodman, to John Holt, to John Taylor Gatto—have understood that coercive and unengaging schooling is necessary to ensure that young people more readily accept coercive and unengaging employment. And as I also reported in that same article, a June 2013 Gallup poll revealed that 70% of Americans hate their jobs or have checked out of them.

Unengaging employment and schooling require all kinds of coercions for participation, and human beings pay a psychological price for this. In nearly three decades of clinical practice, I have found that coercion is often the source of suffering.

[…]

In all societies, there are coercions to behave in culturally agreed-upon ways. For example, in many indigenous cultures, there is peer pressure to be courageous and honest. However, in modernity, we have institutional coercions that compel us to behave in ways that we do not respect or value. Parents, afraid their children will lack credentials necessary for employment, routinely coerce their children to comply with coercive schooling that was unpleasant for these parents as children. And though 70% of us hate or are disengaged from our jobs, we are coerced by the fear of poverty and homelessness to seek and maintain employment.

In our society, we are taught that accepting institutional coercion is required for survival. We discover a variety of ways—including drugs and alcohol—to deny resentment. We spend much energy denying the lethal effects of coercion on relationships. And, unlike many indigenous cultures, we spend little energy creating a society with a minimal amount of coercion.

Accepting coercion as “a fact of life,” we often have little restraint in coercing others when given the opportunity. This opportunity can present itself when we find ourselves above others in an employment hierarchy and feel the safety of power; or after we have seduced our mate by being as noncoercive as possible and feel the safety of marriage. Marriages and other relationships go south in a hurry when one person becomes a coercive control freak; resentment quickly occurs in the other person, who then uses counter-coercive measures.

Pair with:

You’re probably a non-racist and a non-rapist, but that’s a pathetically low standard that should be beneath you.

So, I have a question for you: are you non-, or are you anti-?

Several months ago, in response to Ferguson, Baltimore, and the killings of Freddie Gray and Tamir Rice, my friend Kaitlyn put up a Facebook post breaking down the difference between non-racism and anti-racism.

Most of us are non-racist. Because racism is looked upon as some moral lapse, we feel self-assured by simply not being racist. I’m not a bigot. I don’t sing that N-word when my favorite rap jam comes on. I didn’t vote for that guy. I’m not burning any crosses. I’m not a skinhead. “I don’t,” “I won’t,” “I’m not,” “I’ve never,” “I can’t.”

What you end up with is an entire moral stance, an entire code for living your life and dealing with all the injustice in the world by not doing a damn thing.

That’s the great thing about “non-”: you can pull it off by simply rolling over in your bed and going to sleep. So why are you sitting at home and watching it unfold on TV instead of doing something about it? Because you’re a non-racist, not an anti-racist.

Now do this for me: take the “C” out of “racist,” and replace it with a “P.” I’m not a rapist. I’m not friends with any rapists. I didn’t buy that rapist’s last album. All these things that you’re not doing. Meanwhile, people are still getting raped. And Black boys are being killed.

It’s not enough that you don’t do these things.

Your going to bed with a clear conscience is not going to stop college students from being assaulted. You thinking climate change is terrible is not going to stop climate change. You being so assured that you’re not anti-black, anti-muslim, won’t stop the next hate crime. And it’s wonderful that you recognize how brave gay people are when facing persecution, but they aren’t the ones who need to be brave.

We need to get active. We need to hold people accountable. We need to accept that what hurts one of us hurts all of us. And we need to stop thinking that injustice going on in the world isn’t to an extent our fault.

We need to stop being non- and start being anti-.

By Marlon James, via The Guardian.

Pair with “Allies Must Be Traitors: On Barnor Hesse’s ‘action-oriented identities’” for more on anti-racism, and “You Can Take It Back: Consent as a Felt Sense” along with “I said ‘yes.’ But I feel raped” for more about how we’re conditioned to behave as “non-rapists” rather than anti-rape.

What tools should we be building to end capitalism?

Someone recently asked me:

In terms of ending capitalism, what tools do we need to start building? How can we help one another connect to the resources we need? If we need laptops and phones to stay connected, but we do not have the natural resources to build them in communities close to us, how do we help one another connect and create while staying decentralized? Does that make sense? Are you already envisioning particular tools?

I wrote an answer I think is the synthesis of a lot of my thoughts about this, and want to share:

That is a really big question. To fully answer, I think it requires an agreement on definitions and a solid shared understanding of those definitions. That’s not something a lone email will be able to offer, so I have to refer you to a number of other sources for that kind of background. (We’ve talked about a lot of them in person, already.)

That said, with the necessary background, I think the answer to “what tools should we be building in terms of ending capitalism” is to rephrase the question so it’s more like: “What are some useful paradigms/models/frameworks we should be building tools based on in order to speed capitalism’s demise?”

I think it’s more important to understand capitalism as a way of thinking than it is to understand that a given tool X is implemented “capitalistically,” because ultimately capitalism is not a thing any more than love or hate are “things.” Capitalism is not a thing one can hold in one’s hand. Rather, it is a way of experiencing the things one holds in one’s hands, or feels about other people with whom one has relationships. There is no physical or digital tool that can directly change such an abstract thing.

Change must come from the other direction: how one thinks and what one values. It is obvious that “how one thinks and what one values” greatly affects the tools one makes, as well as how one chooses to use those tools. If you value domination, you will choose to make tools that increase your ability to dominate. Domination is ultimately what capitalism—the way of being a productive member of society as we know it today—rewards, both financially and otherwise. If society is to thrive, that needs to change away from valuing domination and towards valuing empathy and trust. A society based on domination is not one in which most people’s individual quality of life is high. That’s not just my opinion; a lot has been written in a great many academic and other fields about how strongly empathy and trust within a society correlate with a joyous life. (Google it.)

But no tool, even one carefully crafted to avoid conferring the ability to dominate on its users, is immune from being used in ways that dominate others. The evidence of this is simple: someone who wishes to dominate someone else can simply withhold knowledge of said tool from them (using the innate human ability of not speaking to that person), thereby increasing the gap of capability between themselves and the person they seek to dominate. And notice that this has nothing to do with the design of said tool. The problem is a human, cultural one, not a technological one.

So with all that said (and hopefully understood), if one chooses to build tools anyway, as I do, and if one chooses to do so with the intent of destroying capitalism, as I do, then it’s important to choose carefully which tools to build, so that their predictable impacts most benefit those who share our intent of destroying capitalism and least benefit capitalists.

There are some tools that benefit one group of people more than others. But knowing which these are or will be is complex, because that trade-off is never static; it changes with each new tool’s introduction and also with the changing cultural mores of a given society at a given time. This isn’t always predictable, but what is predictable is the way different groups incorporate new tools. Bruce Schneier writes about this when he says:

There are technologies that immediately benefit the defender and are of no use at all to the attacker – for example, fingerprint technology allowed police to identify suspects after they left the crime scene and didn’t provide any corresponding benefit to criminals. The same thing happened with immobilizing technology for cars, alarm systems for houses, and computer authentication technologies. Some technologies benefit both but still give more advantage to the defenders. The radio allowed street policemen to communicate remotely, which increased our level of safety more than the corresponding downside of criminals communicating remotely endangers us.

As anti-capitalists, one of our goals should be to identify, design, and deploy technologies that are more useful to anti-capitalists than to capitalists. There are many good examples of this. Food banks. Public libraries. Distributed telecommunications (like BitTorrent, IPFS, Tor onion services, etc.). Fighting for truly public spaces (like how Occupy Wall Street tried to take back public parks for living purposes). All of these things are anti-capitalist, and there are many more like them. We should support all of these things, and anything that supports them.

In other words, we need to be building infrastructure. And when I say infrastructure, I don’t just mean anti-capitalist infrastructure (infrastructure useful for directly attacking capitalism, such as defunding and directly combating the existence of militaries and police, as projects like CopWatch and our own project, Buoy, aim to do, although I do think this is useful and important, too). I specifically mean ALTERNATIVE infrastructure: infrastructure useful for doing things other than capitalism.

What does infrastructure enabling doing things other than capitalism look like? That’s a HUGE, diverse array of things that are actually pretty familiar. Public (shared) roadways are the canonical example. Roads themselves are a tool; they are neither capitalist nor anti-capitalist, and they existed long before capitalism. The capitalist part of the modern conception of a roadway is the part where someone thinks to themselves, “there’s a pothole here, but I’ll do nothing about it because it is not my job to fix it; it is the State’s job to send someone here to patch this up.” That’s how capitalism ends up taking over control of roadways. That’s the force that ultimately enables a powerful, dominating entity, such as a government or corporation, to put up toll booths, “privatize,” and thereby control access to an otherwise uncontrollable, un-ownable thing such as physical movement.

We’ve already begun building alternatives to this way of thinking. For example, see the “citizen pothole reporting mobile app” developed over 6 years ago.

This kind of app is a nice try, and there have been a lot of these coming from initiatives like the (badly misguided) “Code for America” brigades, but it ultimately benefits capitalists, because the developers of these apps take the basic assumption of capitalism (that someone “owns” the road, and that this owner is the State) and amplify it.

A more anti-capitalist, or capitalist-alternative, “pothole fixing” app would include instructions for how to fix potholes in the app itself, include a feature for locating the materials needed to fix potholes on the map (even if that just means directions to the nearest Home Depot), and then walk the end user through the process of traveling to and fixing the potholes they chose. Of course, anti-capitalism is a gradient. To offer an even more effective alternative to capitalism, the app could include a feature where people list their own garages as spaces where other users (pothole-fixers) could freely take or borrow the supplies needed for fixing potholes: a pothole-fixing equivalent of a food bank. Instead, all the existing app does is further centralize the responsibility, not to mention the knowledge, for fixing potholes in the entity that is already doing a poor job of fixing them, the local (capitalist) government, while also turning citizens into agents who themselves further enforce the cult of capitalism amongst their peers.
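To make that concrete, here is a hypothetical sketch (my own invention, not any real app’s code) of what such a capitalist-alternative app’s internals might look like. Notice what it centers: repair knowledge and shared supplies, rather than a report filed with the State.

```typescript
// Hypothetical data model for the capitalist-alternative pothole app.
interface Coordinates { lat: number; lon: number; }

interface Pothole {
  location: Coordinates;
  fixed: boolean;
}

interface MaterialSource {
  kind: "store" | "shared-garage"; // a Home Depot, or a neighbor's garage
  location: Coordinates;
  supplies: string[];    // e.g. ["cold patch asphalt", "tamper", "shovel"]
  freeToBorrow: boolean; // shared garages lend supplies like a food bank
}

// The knowledge lives in the app, not in a work order filed with the State.
const REPAIR_STEPS = [
  "Clear debris and water out of the hole",
  "Square off the edges of the hole",
  "Fill with cold patch asphalt in shallow layers",
  "Compact each layer with a tamper",
];

// Rough planar distance; fine for comparing nearby points.
function distance(a: Coordinates, b: Coordinates): number {
  return Math.hypot(a.lat - b.lat, a.lon - b.lon);
}

// Route a would-be fixer to supplies, preferring freely borrowable ones.
function planRepair(
  pothole: Pothole,
  sources: MaterialSource[],
): { steps: string[]; getSuppliesFrom?: MaterialSource } {
  const sorted = [...sources].sort((a, b) => {
    if (a.freeToBorrow !== b.freeToBorrow) return a.freeToBorrow ? -1 : 1;
    return (
      distance(pothole.location, a.location) -
      distance(pothole.location, b.location)
    );
  });
  return { steps: REPAIR_STEPS, getSuppliesFrom: sorted[0] };
}
```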

Do you see the difference?

So when you ask me, “what tools do we need to build in terms of ending capitalism?” my answer is: “we need to rebuild every single tool that exists, including the tools used for fixing potholes in the streets.”

Which tool will you work on? There are many to choose from. Each is important. Each is necessary. The key point to understand is that alternatives to capitalism do not come about by building anti-capitalist technology. They come about by building pro-social technologies IN AN ANTI-CAPITALIST WAY.

In other words, alternatives to capitalism are all about the process, the journey, the way in which you do a thing, not the product, the destination, or the specific thing you choose to do or build.

Hope this helps,
-maymay
Maymay.net
Cyberbusking.org