Tag: ethics

Relationship Anarchy is not for fuckboys (or polyamorists)

This is really a great piece. Really great.

Real relationship anarchy is political. There’s just no way around it. How could it be otherwise, when it has roots in political anarchism? Relationship anarchy is not about getting your dick wet and looking cool while you do it. It’s not about sounding hipper than all the other polyamorists. You can do polyamory without any political consciousness whatsoever, and you can definitely do monogamy without it. You can be mono or poly in service of the capitalist hetero-patriarchy. Most people are. But you can’t do relationship anarchy without some awareness of the socio-political context you’re operating in and how you’re attempting to go against that grain out of a genuine belief in certain concrete principles. Those concrete principles are nothing so basic and shallow as “freedom” (to fuck) or “honesty.” They’re the kind of political principles that you can base an effective social movement on: a movement that offers an alternative to the capitalist hetero-patriarchy’s commodification of bodies, sex, and love; to the sabotage of female solidarity in friendship and romantic love; to neoliberal capitalism’s goal of the isolated couple and nuclear family; to the homophobia and toxic gender crap that prevents even nonsexual/nonromantic connection and intimacy between members of the same sex.

[…R]elationship anarchy resonates with me so much because its principles amount to a friendship ethic. The word “friendship” is widely used as a broad, vague, often meaningless term, but to me, friendship as this deep, intimate, important, positive bond between humans is described really well by the above set of principles. Friendship leans away from interpersonal coercion by default and can’t survive under the burden of it for long. Mutual aid and cooperation are in friendship’s very nature; you could even define friendship by those qualities: helping and supporting each other out of desire and not duty. And when friendship is committed, that commitment is done in a spirit of communication, not drawn up as a contract, which is what marriage is: a legal contract binding romantic partners.

[…]

Being a relationship anarchist doesn’t mean you have to fuck more than one person at a time, because relationship anarchy is not about sexual nonmonogamy, even though it is usually inclusive of sexual nonmonogamy. Relationship anarchy is not polyamory sans the obvious hierarchy of romantic partners. It’s about doing relationships with community-centric values, not couple-centric values. Above all, it’s about relating to other human beings without coercive authority in play and without hierarchy in your group of relationships or in any relationship itself.

I fucking cringe when I read about polyamorous people defining “relationship anarchy” using nonhierarchical polyamory’s terms, just as I cringe when I hear stories of men pulling the RA card on their casual sexcapades. Not just because of how unbelievably inaccurate, apolitical, and ignorant it is but because in both cases, “relationship anarchy” is falsely used to describe the kind of romance supremacist, friendship-excluding, sex-centric lifestyles that are diametrically opposed to authentic relationship anarchy.

The capitalist, heteronormative, patriarchal state promotes relationship hierarchies based on romance supremacy and amatonormativity. It endorses treating sex like a product, protects heterosexual men in their consumption of female bodies as sexual objects, promotes the buying and selling of women’s sexualized bodies. The capitalist heteronormative patriarchal state WANTS you to invest all of your free time, energy, resources, and emotion into romantic couplehood, into marriage, into sex. It WANTS you to devalue friendship, to stay isolated from everyone who isn’t your romantic partner, to be a self-interested individual with no ties or commitments to anyone but your spouse. Why? Because friendship could lead to community and community could lead to collective political action, which could turn into revolution. And because friendship and community are almost impossible to commodify and harness for the purpose of feeding into the capitalist economy and creating bigger profits for the wealthy elite. Sex and romance make rich people money all day every day. They sell it to you every waking moment. They can’t use friendship and community to sell you shit. They can’t turn friendship and community into products. If they could, they would’ve spent the last century doing so, instead of teaching the public that friendship is worthless and money is more important than community.

So don’t tell me that you’re entitled to call your polyamory or your casual sex “relationship anarchy,” as you conduct your social life with anti-anarchism principles and the same amatonormativity that all the coupled up monogamists preach and believe in. Don’t tell me you’re a “relationship anarchist” when you don’t give a fuck about friendship or community or political resistance, just sex and romance and your freedom to be nonmonogamous.

Relationship anarchy is not a cover for fuckboys. And it is not nonhierarchical polyamory.

Ethics Refactoring: An experiment at the Recurse Center to address an ACTUAL crisis among programmers

Ethics Refactoring session, part 1

Ethics Refactoring session, part 2

I’ve been struggling to find meaningful value from my time at the Recurse Center, and I have a growing amount of harsh criticism about it. Last week, in exasperation and exhaustion after a month of taking other people’s suggestions for how to make the most out of my batch, I basically threw up my hands and declared defeat. One positive effect of declaring defeat was that I suddenly felt more comfortable being bolder at RC itself; if things went poorly, I’d just continue to distance myself. Over the weekend, I tried something new (“Mr. Robot’s Netflix ‘n’ Hack”), and that went well. Last night, I tried another, even more new thing. It went…not badly.

Very little of my criticism about RC is actually criticism that is uniquely applicable to RC. Most of it is criticism that could be levied far more harshly at basically every other institution that claims to provide an environment to “learn to code” or to “become a dramatically better programmer.” But I’m not at those other institutions; I’m at this one. And I’m at this one, and not those other ones, for a reason: Recurse Center prides itself on being something very different from all those other places. So it’s more disappointing, not less, that the criticisms of RC I do have apply equally well to those other spaces.

That being said, because no other institution I’m aware of is structured quite like the Recurse Center is, the experiments I tried out this week after declaring a personal “defeat” would not even be possible in another venue. That is a huge point in RC’s favor. I should probably write a more thorough and less vague post about all these criticisms, but that post is not the one I want to write today. Instead, I just want to write up a bit about the second experiment that I tried.

I called it an “ethics refactoring session.” The short version of my pitch for the event read as follows:

What is the operative ethic of a given feature, product design, or implementation choice you make? Who is the feature intended to empower or serve? How do we measure that? In “Ethical Refactoring,” we’ll take a look at a small part of an existing popular feature, product, or service, analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service from a different ethical perspective and see how this affects our development process and design choices.

Basically, I want there to be more conversations among technologists that focus on why we’re building what we’re building. Or, in other words:

Not a crisis: not everybody can code.

Actually a crisis: programmers don’t know ethics, history, sociology, psychology, or the law.

https://twitter.com/bmastenbrook/status/793104148732469248

Here’s an idea: before we teach everybody to code, how about we teach coders about the people whose lives they’re affecting?

https://twitter.com/bmastenbrook/status/793104080214392832

Ethics is one of those things that are hard to convince people with power—such as most professional programmers, especially the most “successful” of them—to take seriously. Here’s how Christian Rudder, one of the founders of OkCupid and a very successful Silicon Valley entrepreneur, views ethics and ethicists:

Interviewer: Have you thought about bringing in, say, like an ethicist to, to vet your experiments?

Christian Rudder: To wring his hands all day for a hundred thousand dollars a year?

Interviewer: Well, y’know, you could pay him, y’know, on a case by case basis, maybe not a hundred thousand a year.

CR: Sure, yeah, I was making a joke. No we have not thought about that.

The general attitude that ethics are just, like, not important is of course not limited to programmers and technologists. But I think it’s clear why this is more an indictment of our society writ large than it is any form of sensible defense for technologists. Nevertheless, this is often used as a defense, anyway.

One of the challenges inherent in doing something that no one else is doing is that, well, no one really understands what you’re trying to do. It’s unusual. There’s no role model for it. Precedent for it is scant. It’s hard to understand unfamiliar things without a lot of explanation or prior exposure to those things. So in addition to the above short pitch, I wrote a longer explanation of my idea on the RC community forums:

Hi all,

I’d like to try an experiment that’s possibly a little far afield from what many folks might be used to. I think this would be a lot more valuable with involvement from the RC alumni community, so I’m gonna make a first attempt this upcoming Tuesday, November 1st, at 6:30pm (when alumni are welcome to stop by 455 Broadway).

And what is this experiment? I’m calling it an “Ethics Refactoring” session.

In these sessions, we’ll take a look at a small part of an existing popular feature, product, or service that many people are likely already familiar with (like the Facebook notification feed, the OkCupid “match percentage” display, and so on), analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service taking a different ethical stance and see how this affects our development process and design choices.

This isn’t about “right” or “wrong,” “better” or “worse,” nor is it about making sure everyone agrees with everyone else about what ethic a given feature “should” prioritize. Rather, I want this to be about:

  • practicing ways of making the implicit values-based decision process that happens during product/feature development and implementation more explicit,
  • gaining a better understanding of the ethical “active ingredient” in a given feature, product design, or implementation choice, and
  • honing our own communication skills (both verbally and through our product designs) around expressing our values to different people we work with.

I know this sounds a bit vague, and that’s because I’ve never done anything like this and don’t exactly know how to realize the vision for a session like this that’s in my head. My hope is that something like the above description is close enough, and intriguing enough, to enough people (and particularly to the alumni community) that y’all will be excited enough to try out something new like this with me.

Also, while not exactly what I’m talking/thinking about, one good introduction to some of the above ideas in a very particular area is at the http://TimeWellSpent.io website. Take a moment to browse that site if the above description leaves you feeling curious but wary of coming to this. :)

I think “Ethics Refactoring” sessions could be useful for:

  • getting to know fellow RC’ers who you may not spend much time with due to differences in language/framework/platform choice,
  • gaining insight into the non-obvious but often far-reaching implications of making certain design or implementation choices,
  • learning about specific technologies by understanding their non-technological effects (i.e., learning about a class of technologies by starting at a different place than “the user manual/hello world example”), and
  • having what are often difficult and nuanced conversations with employers, colleagues, or even less-technical users for which understanding the details of people’s life experiences as well as the details of a particular technology is required to communicate an idea or concern effectively.

-maymay

And then when, to my surprise, I got a lot more RSVPs than I’d expected, I further clarified:

I’m happy to note that there are 19(!!!) “Yes” RSVP’s on the Zulip thread, but a little surprised because I did not have such a large group in mind when I conceived this. Since this is kind of an experiment from the get-go, I think I’m going to revise my own plan for facilitating such a session to accommodate such a relatively large group and impose a very loose structure. I also only allotted 1 hour for this, and with a larger group we may need a bit more time?

With that in mind, here is a short and very fuzzy outline for what I’m thinking we’ll do in this session tomorrow:

  • 5-10min: Welcome! And a minimal orientation for what we mean when we say “ethic” for the purpose of this session (as in, “identify the operative ethic of a given feature”). Specifically, clarify the following: an “ethic” is distinct from and not the same thing as an “incentive structure” or a “values statement,” despite being related to both of those things (and others).
  • 15-20min: Group brainstorm to think of and list popular or familiar features/products/services that are of a good size for this exercise; “Facebook” is too large, “Facebook’s icon for the Settings page” is too small, but “Facebook’s notification stream” is about right. Then pick two or three from the list that the largest number of people have used or are familiar with, and see if we can figure out what those features’ “operative ethics” can reasonably be said to be.
  • 15-20min: Split into smaller work-groups to redesign a given feature; your work-groups may work best if they consist of people who 1) want to redesign the same given feature as you and 2) want to redesign to highlight the same ethic as you. I.e., if you want to redesign Facebook’s notification stream to highlight a given ethic, group with others who want to work both on that feature AND towards the same ethic. (It is okay if you have slight disagreements or different goals than your group-mates; the point of this session is to note how ethics inform the collaborative process, not to produce a deliverable or to write code that implements a different design.)
  • 10-15min: Describe the alternate design your group came up with to the rest of the participants, and ask/answer some questions about it.

This might be a lot to cram into 1 hour with 19+ people, but I really have no idea. I’m also not totally sure this will even “work” (i.e., translate well from my head to an actual room full of people). But I guess we’ll know by tomorrow evening. :)

The session itself did, indeed, attract more attendees than I was originally expecting. (Another good thing about Recurse Center: the structure and culture of the space make room for conversations like these.) While I tried to make sure we stuck to the above outline, we didn’t actually stick strictly to it. Instead of splitting into smaller groups (which I still think would have been a better idea), we stayed in one large group; it’s possible that 1 hour is simply not enough time. Or I could have been more forceful in facilitating. I didn’t really want to be, though; I was doing this as much to suss out people in the RC community whom I didn’t yet know but might mesh with as I was to provide a space for the current RC community to have these conversations, or to expose them to a way of thinking about technology that I already practice regularly.

The pictures attached to this post are a visual record of the two whiteboards’ “final” results from the conversation. The first is simply a list of features (“brainstorm to think of and list popular features”), and included:

  • Facebook’s News Feed
  • Yelp recommendation engine
  • Uber driver rating system
  • Netflix auto-play
  • Dating site messaging systems (Tinder “match,” OkCupid private messages, Bumble “women message first”)

One pattern that kept recurring throughout the session was that people seemed reticent or confused at the beginning of each block (“what do you mean ethics are different from values?” and “I don’t know if there are any features I can think of with these kinds of ethical considerations”), and yet by the end of each block, we had far, far more relevant examples to analyze than we actually had time to discuss. I think this clearly reveals how under-discussed and under-appreciated this aspect of programming work really is.

The second picture shows an example of an actual “ethical refactoring” exercise. The group of us chose to use Uber’s driver rating system as the group exercise, because most of us were familiar with it and it was a fairly straightforward system. I began by asking folks how the system presented itself to them as passengers, and then drawing simplified representations of the screens on the whiteboard. (That’s what you see in the top-left of the second attached image.) Then we listed out some business cases/reasons for why this feature exists (the top-right of the second attached image), and from there we extrapolated some larger ethical frameworks by looking for patterns in the business cases (the list marked “Ethic???” on the bottom-right of the image).

By now, the group of us had vastly different ideas about not only why Uber did things a certain way, but also about what a given change someone suggested to the system would do, and the exercise stalled a bit. I think this in itself revealed a pretty useful point: a design choice you make with the intention of having a certain impact may actually feel very different to different people. This sounds obvious, but actually isn’t.

Rather than summarize our conversation, I’ll end by listing a few take-aways that I think were important:

  • Ethics is a systems-thinking problem, and cannot be approached piecemeal. That is, you cannot make a system “ethical” by minor tweaks, such as by adding a feature here or removing a feature there. The ethics of something is a function of all its components and the interactions between them, both technical and non-technical. The analogy I used was security: you cannot secure an insecure design by adding a login page. You have to change the design, because a system is only as secure as its weakest link.
  • Understand and appreciate why different people might look at exactly the same implementation and come away feeling like a very different operative ethic is the driving force of that feature. In this experimental session, one of the sticking points was the way in which Uber’s algorithm for rating drivers was considered by different people to be driven either by an ethic of domination or by an ethic of self-improvement. I obviously have my own ideas and feelings about Uber’s rating system, but the point here is not that one group is “right” and the other group is “wrong,” but rather that the same feature was perceived in a very different light by different sets of people. For now, all I want to say is notice and appreciate that.
  • Consider that second-order effects will reach beyond the system you’re designing and impact people who are not direct users of your product. This means that designers should consider the effects their system has not just on their product’s direct user base, but also on the people who can’t, won’t, or just don’t use their product, too. Traditionally, these groups of people are either ignored or actively “converted” (think how “conversions” means “sales” to business people), but there are a lot of other reasons why this approach isn’t good for anyone involved, including the makers of a thing. Some sensitivity to the ecosystem in which you are operating is helpful to the design process, too (think interoperability, for example).
  • Even small changes to a design can massively alter the ethical considerations at play. In our session, one thing that kept coming up about Uber’s system is that a user who rates a driver has very little feedback about how that rating will affect the driver. A big part of the discussion we had centered on questions like, “What would happen if the user would be shown the driver’s new rating in the UI before they actually submitted a given rating to a given driver?” This is something people were split about, both in terms of what ethic such a design choice actually mapped to as well as what the actual effect of such a design choice would be. Similar questions popped up for other aspects of the rating system.
  • Consider the impact of unintended, or unexpected, consequences carefully. This is perhaps the most important take-away, and also one of the hardest things to actually do. After all, the whole point of an analysis process is that it analyzes only the things that are captured by the analysis process. But that’s the rub! It is often the unintentional byproducts of a successful system, rather than its intentional direct results, that have the strongest impact (whether good or bad). As a friend of mine likes to say, “Everything important is a side-effect.” This was made very clear through the exercise simply by virtue of the frequency and ease with which a suggestion by one person often prompted a different person to highlight a likely scenario in which that same suggestion could backfire.

I left the session with mixed feelings.

On the one hand, I’m glad to have had a space to try this out. I’m pleased and even a little heartened that it was received so warmly, and I’m equally pleased to have been approached by numerous people afterwards who had a lot more questions, suggestions, and impressions to share. I’m also pleased that at no point did we get too bogged down in abstract, philosophical conversations such as “but what are ethics really?” Those are not fruitful conversations. Credit to the participants for being willing to try something out of the ordinary, and potentially very emotionally loaded, and doing so with grace.

On the other hand, I’m frustrated that these conversations seem perpetually stuck in places that I feel are elementary. That’s not intended as a slight against anyone involved, but rather as an expression of loneliness on my part, and the pain at being reminded that these are the sorts of exercises I have been doing by myself, with myself, and largely for myself for long enough that I’ve gotten maddeningly more familiar with doing them than anyone else that I regularly interact with. If I had more physical, mental, and emotional energy, and more faith that RC was a place where I could find the sort of relationships that could feasibly blossom into meaningful collaborations with people whose politics were aligned with mine, then I probably would feel more enthused that this sort of thing was so warmly received. As it stands though, as fun and as valuable as this experiment may have been, I have serious reservations about how much energy to devote to this sort of thing moving forward, because I am really, really, really tired of making myself the messenger, or taking a path less traveled.

Besides, I genuinely believe that “politicizing techies” is a bad strategy for revolution. Or at least, not as good a strategy as “technicalizing radicals.” And I’m just not interested in anything short of revolution. ¯\_(ツ)_/¯

The Internet as an Identity-Multiplying Technology

When I saw that a friend had shared this years-old post about Facebook founder Mark Zuckerberg’s infamous remark that “Having two identities for yourself is an example of a lack of integrity,” I thought I’d chime in:

Actually, Zuckerberg’s is a common misunderstanding of telecommunications.

If you’ve done even a tiny bit of academic study on media you will have encountered McLuhan’s “The Medium Is the Massage,” which talks about the ways that many people “approach the new with the psychological conditioning and sensory responses of the old.” In other words, people treat the Internet like TV we can click on, just as they treated TV like radio we can see. This is obviously wrong, but it takes a lot of time for people as a demographic whole to approach new technological abilities in what we might call a “native” way. See, for instance, the entire discussion around “Digital natives,” of which I will note Zuckerberg is not.

What’s at issue in the “nymwars” (or “Real Names Policies”) is not integrity at all, but rather power and control. Namely, that of an authoritarian entity such as a government to have the power to legitimize what your identity is (your “real name”), and to control what you can do with that identity. Facebook has a cozy relationship with governments because the interests of both governments and Facebook are well-aligned with respect to how they would like people to use identities. This is why Facebook appeals to the legal system to enforce its “Real Names” policy, see specifically the Computer Fraud and Abuse Act clauses about “misrepresenting identity” for “authorized” versus “unauthorized access.”

In point of fact, however, identities are not inherently static things—there is no “real” you distinct from any other you, at least not any more or less “real” than any other (“part of”) you. They can and do change with time, space, and other factors. The physical capability of communicating to people far away from us therefore has a direct impact on the identities we hold, and subsequently, choose to claim, because that is a fundamentally different thing than speaking to someone who is next to you. This began with the invention of writing, not the telegraph. The telegraph simply sped up the process.

What Zuckerberg and many other people don’t understand is that the impact telecommunication actually has on identities is a fracturing and multiplying of identities. They are still stuck cognitively processing the Internet as a “window” through which you can “look at things” like “pages.” (Why do you think they called it a “Browser window”?) But what the Internet actually is, with respect to who we are (as opposed to what we do), is very different. The Internet is much more like a ham radio than a telephone. Just as ham radio operators take callsigns when transmitting, so do we take “screen names” when writing online forum posts.

What this means on the Internet, a world with unlimited space (distinctly unlike ham radio), is that an individual body can be influential in an unlimited number of arenas that may never intersect. And, given that, it means an individual body can have an unlimited number of distinct identities, each one time-and-space-sliced. There is a real, whole “identity” in each of these time-and-space slices of influence.

The Internet is therefore unique in that, exactly contrary to Zuckerberg’s self-serving assertions, it is an identity-multiplexing technology. It is not, never has been, and I strongly argue must never be allowed to become an identity-trunking technology.

End rant.

The interaction between telecommunication and identity, as well as this interaction’s effect on societal notions of safety and privacy, has been one of my primary philosophical inquiries. For more, see also:

Your Consent Is Not Being Violated By Accident

unquietpirate:

When you start looking for examples of nonconsensual culture in technology, you find them absolutely everywhere.

– Deb Chachra, Age of Non-Consent

About a month ago, someone sent me this lovely rant and asked me to publish it anonymously. I’ve been sitting on it mostly because I got wrapped up in other things. But I was reminded of it tonight when I read Deb Chachra’s “Age of Non-Consent” and Betsy Haibel’s “The Fantasy and Abuse of the Manipulable User”.

Both of the above pieces draw links between rape culture and issues of consent in software design. I recommend them both, particularly the Haibel piece, for incisive and disturbing analysis of the details of how the Stacks intentionally build software to violate their users’ consent — and what a major problem this is given technology’s influence on culture as a whole.

This coercion is picked up on and amplified by the platforms themselves – when someone I know tried to delete his Facebook account, it tried to guilt him out of it by showing him a picture of his mother and asking him if he really wanted to make it harder to stay in touch with her.

I’ve been in meetings where co-workers have described operant conditioning techniques to the higher-ups, in those words – talking about Skinner boxes and rat pellets and everything. I’ve been in meetings where those higher-ups metaphorically drooled like Pavlov’s dogs. The heart of abuse is a fantasy of power and control – and what fantasy is more compelling to a certain kind of business mind than that of a placidly manipulable customer?

– Betsy Haibel, The Fantasy and Abuse of the Manipulable User

However, where these otherwise terrific articles don’t go far enough is in explicitly acknowledging that the people who are most responsible for perpetuating rape culture and the people writing consent-violating software are the same people. It’s no coincidence that Facebook doesn’t care about your consent, because most of the people who work at Facebook wouldn’t think twice about getting you drunk and “taking advantage” of you at a party, or of defending a friend who did.

So, while both of the above authors optimistically implore high-level developers and other elite tech workers to adopt an ethic of “enthusiastic consent” when it comes to software design — as if the majority of workers in that sphere understand what that is or would even care if they did — my angry and extremely on-point friend below has another solution:

There has been much gnashing of teeth recently about how blatantly people’s privacy is violated by software like the new Facebook messenger app. These articles or editorials will rage about “companies like facebook” and often have a picture of Mark Zuckerberg’s punchable face just so people know who to have rage at.  One imagines Zuckerberg, possibly at the same table as the director of the NSA, maybe a CIA agent, and maybe the ghost of Steve Jobs all conspiring to violate your privacy and make hardware you bought do what they want against your will. The villain in these stories is either the CEO of some company or “the corporation” as a faceless monster.     

But what’s really going on here?  What we have, overwhelmingly, is a lot of technology being built which ignores the consent of the user.  An app which no one wants is forced on everyone; things which clearly everyone will hate are put in vague terms of service which essentially say that the service provider can do anything they want any time they want and there is nothing you can do about it.  How did this happen?

Meanwhile, if you follow technology media and especially feminist technology media you see constant stories about what a festering shithole of sexism the technology industry is.  These articles are generally along the lines of a narrative about female engineers trying to be at conferences or trade shows and facing constant harassment of just about every kind from their overwhelmingly male peers.  They are constantly being touched, catcalled, and generally treated like shit, obviously against their will. Articles will talk about how this needs to be addressed in order to improve the quality of life for women in tech as well as to bring more women into tech.  As tech-insider media, they meanwhile generally ignore the role of the user in all this.

What I find disappointing here, and what is the point of this article, is that these are all the same shitheads, and that this is no accident.  Is it an accident that the same men who think it’s ok to grab ass at a technical conference are writing software that deliberately and blatantly ignores the consent of the user all the time?  No.  Because software is simply one of the worst industries in the history of technology.  I think it would be hard to find any industry in the history of technological capitalism that has held itself to such low standards and shown such consistent contempt for the user or for the quality of its product.

It is time for the public at large to stop seeing companies like Facebook as either a monolithic inhuman monster or the personal fiefdom of some monstrous oligarch like Zuckerberg, and to start seeing them as just a big group of horrible people doing horrible work.  It’s time for the tech backlash within the industry to wake up to just how fucked the rest of us are by this, and for the rest of us to wake up to just how fucked this industry is from the inside.

It’s time to smash Silicon Valley.

Yes, to all of this. My personal experiences of working in the software industry validate every word of this. It is why I left.

So, you work for The Borg, do you? An anecdote of adiaphora in Silicon Valley.

At a recent party in Silicon Valley, I met a dude who worked for Palantir and was baking a pie. He said he loved his job. And he loved baking pies.

“Oh, you have it all wrong,” he told me in between glances at the oven. “We don’t make surveillance equipment. We just make the tools that they use to analyze the surveillance data they collect.”

I literally laughed in his face. “And you think that’s any better?”

“Well, yeah. Someone’s going to do that, if not us.”

“Good point,” I scoffed. “Why wouldn’t you want to be the person who pulls the trigger if someone else is just gonna do the same thing anyway, amirite?”

The dude pouted and insisted, again, that he loved his job. “The same tools are also used to stop child sex trafficking,” he said.

I rolled my eyes. “Even if that’s true, and I happen to be well-informed enough that I sincerely doubt it, are you really trying to argue that enough good things you do ‘cancel out’ evil things you do? Dude, that’s not how ethics work, and you know it. Just think: would you do this job if you weren’t getting paid?” I pressed.

“No, I need some way to make money.”

Again, I laughed. “No you don’t. You need food. You need companionship. You need sleep. You don’t need money. You’re obviously a smart guy, really talented, and as you said, you want there to be good things in the world. You clearly do a lot of things because you want to. That pie smells delicious! Why do you keep helping governments kill people?”

“Look, I need a job,” he said, clearly tiring of the conversation.

“No, you don’t. You need food. You need companion–.”

“It’s better than flipping burgers.”

“Well, of course. But a shit-covered meal is still worse than a meal that isn’t laced with shit.”

“It’s not shit-covered!”

“Okay then, if you like it so much, do the work you’re doing for Palantir but refuse the paycheck.”

“What? That’s ridiculous.”

“No, what’s ridiculous is that you’re telling yourself some fantasy story about helping sexually enslaved children when what you’re really doing is the modern-day equivalent of how IBM built technology for the Nazis to commit genocide.”

The dude looked flabbergasted. I rolled my eyes at him. “Yeah, I know.” I said. “You ‘need a job.’ Hey, can I have a slice of that pie? Smells great.”

How Borg-y is the technology you work on? The image in this post depicts a scatter plot graph along an axis of Control and Corporeality which shows that the least controlling and corporeal technology are the tools produced by the GNU project, whereas a system of total control and corporeality would be Star Trek’s vision of The Borg.

Silicon Valley’s technologists are thanatical, suicidal idiots.

Oh, don’t get me wrong. Their engineering skills are world-class. But that’s not what makes them idiots.

What makes them idiots is that they are unthinkingly creating the very technologies to which they enslave themselves. And most of them are doing that not because it’s some personal calling, but because they believe a number in their bank account is important and keeps them alive—even while they laugh at the superstitiousness of religious people.

As Roger Forsgren writes in “The Architecture of Evil”:

The technical professions occupy a unique place in modern society. Engineers and architects possess skills most others lack — skills that allow them to transform dreams of design into reality. Engineers can convert a dry, infertile valley into farmland by constructing a dam to provide irrigation; they have made man fly; and architects have constructed buildings that reach thousands of feet into the sky. But these same technical gifts alone, in the absence of a sense of morality and a capacity for critical thought and judgment, can also make reality of nightmares.

If you think such moral blindness (adiaphora) is merely the stuff of history, think again. You might even find it when you look in the mirror.

Technology, itself, cannot be “evil”

This post was originally published on February 4th, 2013, over at my other blog.

In many of the tech-heavy circles I run in—start-up culture, Silicon Valley or tech industry culture, and, to a lesser extent, even hacker culture—there is a profound apathy toward, and reticence to engage with, people of more passionate politics. On the other hand, in many of the politically active circles I run in (especially mainstream anarchist communities), radicals and moderates alike seem to have a deeply ingrained distaste for technology.

Oh, sure, they begrudgingly use technologies. But, much like a cigarette smoker who feels obligated to preemptively point out the unhealthiness of their habit, they seem to find it necessary, for some reason, to preemptively note their own hypocrisy in complaining about Facebook on Facebook.

This reached a head today when I saw someone I respect for their politics and work and lifestyle and kindness and everything else I know about them proclaim that “the Internet is clearly evil.”

Ugh. I replied:

While I can appreciate and even share much of [the author’s] perspective and approach to this situation, there is a lot of ignorance evident to me in this thread. As a radical and a social justice technologist, that bothers me a lot.

Most technologies, like most hammers, are actually simply knowledge-things, i.e., physical (or electronic) manifestations of ideas. If you understand the idea, you can learn to talk to the technology. If you can talk to the technology, you can at least influence it to awesome effect. And if you can influence or even outright control the technology, then you don’t need to fear it to such an absurd degree as to call an inanimate, amoral thing “evil.”

At the end of this admittedly verbose message are links to a few simple browser tools I recommend you install to better shield yourself from the kind of advertising and tracking you are concerned about. But my point is this:

It is foolish and self-sabotaging to see a tool used for evil purposes and conclude that the tool itself is evil, rather than the way it is being used. To use a crude and simple analogy for the sake of illustration, would you condemn the Pink Pistols, an organization that trains LGBTQIA people in the use of firearms for self-defense, as loudly as I suspect many of you would condemn the National Rifle Association? You would be foolish if you did.

Technology has no morals, nor ethics. Technology simply exists. It is functionally a sophisticated hammer, which is little more than a sophisticated rock. This doesn’t mean a world that has hammers in it is the same as a world that only has rocks in it; in much the same way that the birth of a new person (or subculture, or movement) shapes the world into which they are born, so too does the birth of a new technology.

And that means we need to understand it, treat it with respect, learn how to communicate with it, and imbue our interactions with it with the same values and principles that make us ethical, compassionate human beings.

That is what I do not see Facebook doing. And it is also what I do not see you doing[…].

A famous phrase is that “any sufficiently advanced technology is indistinguishable from magic.” The recently-coined corollary is, “any sufficiently technical expert is indistinguishable from a witch.” Think about what the “good guy” magic-users in your favorite stories did with their awesome powers. Glinda the Good Witch and Obi-Wan Kenobi, for example, both used their “magic” to share knowledge and offer—not take, but offer—leadership and assistance (for what is compassionate leadership if not an offer of assistance?).

The Web is actually a fantastic example of a technology that gave individual people (like you and me) more control over their experience of advertising than any other in the history of advertising. Why else do you think Big Content like Disney and Viacom and Clearchannel spend so much money to buy laws that criminalize the most basic facets of computer use? (We are all already computer criminals, whether you know it or not.)

Ad blockers, mentioned earlier in this thread, work because everything you see on your screen is determined by your User Agent (hereafter UA), the technical term for “thing you use to browse the Web.” The UA is the extension of your biological tools, like your hands, and can still be commanded with precision just like your hands can be. I wrote in 2009 that, “To many designers, […] the fact that users can change the presentation of their content is an alarming concept. Nevertheless, this isn’t just the way the Web was made to work; this is the only way it could have worked. Philosophically, the Web is a technology that puts control into the hands of its users.”
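To make that point a bit more concrete, here is a minimal, purely illustrative sketch of the principle: because your User Agent does the rendering, you can instruct it to hide whatever you like. This is not the code of any actual ad blocker; the selectors below are invented examples rather than a real blocklist, and it assumes you run it in a browser console, a bookmarklet, or a user-script extension.

```typescript
// Minimal sketch of User-Agent-side presentation control.
// Assumptions (hypothetical, for illustration only): run in a browser
// context; the selectors are made-up stand-ins for ad-like markup.
//
// The point: the page only *proposes* a presentation. The User Agent --
// and therefore you -- decides what is actually displayed.
const adLikeSelectors: string[] = [
  '[id*="ad-slot"]',            // hypothetical ad container ids
  '[class*="sponsored"]',       // hypothetical "sponsored content" classes
  'iframe[src*="doubleclick"]', // third-party ad frames
];

for (const selector of adLikeSelectors) {
  document.querySelectorAll<HTMLElement>(selector).forEach((element) => {
    element.style.display = "none"; // your UA, your rules
  });
}
```

Real ad blockers do essentially this, just with community-maintained filter lists and far more efficient matching, which is why installing one takes minutes rather than requiring any “witchcraft.”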

You can learn to talk to your technology if you want to and have the community support to do it. But even if you don’t, you can rely on other Glinda the Good Witches and Obi-Wan Kenobis (like me, I dare say, acknowledging the arrogance inherent in this statement) who do.

In other words, like it or not, The Revolution is going to need Information Technology people. It is in no one’s interests to demonize some of the tools we need to use to make the world a little less unfair. So, please, don’t.

As promised, here is a list of useful software that I recommend everyone install, since they block ads and marketing trackers and require literally 5 minutes to learn (in total, not individually):

If you’re willing to put up with an additional learning curve, then also consider installing:

I’m happy to discuss this topic at further length and in further depth, so anyone interested in doing so is invited to send me a “friend request” or have a look at my website: Cyberbusking.org.

Sorry this was verbose. This matters to me. And it’s personal. Thanks for listening.

My friend conceded that “when you give the argument that a tool is ethically neutral, i agree with you up to a point. I definitely, you know, like freedom fighters with guns and hate cops with guns.” But still, my friend had a legitimate grievance, and I wanted to make sure that point wasn’t lost either:

And I don’t disagree with you […] that there are currently more people and more societal resources that seem hell-bent on using this particular tool for evil rather than for good. Just the other day, I posted:

…what I’m saying is, firstly, that the environment in which we live currently both actively provides material support for dreaming up and manufacturing Shit We Don’t Need and actively punishes people who do things that are purposefully designed to mitigate some of the world’s horror.

Secondly, moreover, what I’m saying is that there is such an obscene disparity between the available resources for the former versus the latter that I am thoroughly disgusted to the point of hourly depression by both the existence of this disparity and, further, the ways so many people are seemingly NOT vocally and continually disgusted by this disparity.

I just don’t think it serves us in this case to say the tool is evil rather than what people are doing with it. That kind of thing really muddies the waters, which, bluntly, helps cops, not us.

One of the reasons this kind of thing hits me so close to home is because of the many ways people often seem to care more about protecting polite fictions than a person’s well-being. As a hacker (which is not limited to technical “witchcraft,” mind you), my expressly articulated purpose is to align more people’s expectations with what is possible, not what is polite.

Moreover, when terrible things I know are possible are already happening while at the same time people who could do something to help mitigate or even prevent the terrible thing from happening opt to protect a polite fiction, I get angry about it. And I don’t think anyone who’s apathetic or greedily self-serving in the face of such terrible things deserves the honor of being called an ethical person.