Ethics Refactoring: An experiment at the Recurse Center to address an ACTUAL crisis among programmers

[Photo: Ethics Refactoring session whiteboard, part 1]

[Photo: Ethics Refactoring session whiteboard, part 2]

I’ve been struggling to find meaningful value from my time at the Recurse Center, and I have a growing amount of harsh criticism about it. Last week, in exasperation and exhaustion after a month of taking other people’s suggestions for how to make the most of my batch, I basically threw up my hands and declared defeat. One positive effect of declaring defeat was that I suddenly felt more comfortable being bolder at RC itself; if things went poorly, I’d just continue to distance myself. Over the weekend, I tried something new (“Mr. Robot’s Netflix ‘n’ Hack”), and that went well. Last night, I tried another, even newer thing. It went…not badly.

Very little of my criticism about RC is actually criticism uniquely applicable to RC. Most of it could be levied far more harshly at basically every other institution that claims to provide an environment to “learn to code” or to “become a dramatically better programmer.” But I’m not at those other institutions; I’m at this one. And I’m at this one, and not those others, for a reason: the Recurse Center prides itself on being something very different from all those other places. So it’s more disappointing, not less, that the criticisms I do have of RC feel equally applicable to those other spaces.

That being said, because no other institution I’m aware of is structured quite like the Recurse Center is, the experiments I tried out this week after declaring a personal “defeat” would not even be possible in another venue. That is a huge point in RC’s favor. I should probably write a more thorough and less vague post about all these criticisms, but that post is not the one I want to write today. Instead, I just want to write up a bit about the second experiment that I tried.

I called it an “ethics refactoring session.” The short version of my pitch for the event read as follows:

What is the operative ethic of a given feature, product design, or implementation choice you make? Who is the feature intended to empower or serve? How do we measure that? In “Ethical Refactoring,” we’ll take a look at a small part of an existing popular feature, product, or service, analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service from that different ethical perspective, and see how this affects our development process and design choices.

Basically, I want there to be more conversations among technologists that focus on why we’re building what we’re building. Or, in other words:

Not a crisis: not everybody can code.

Actually a crisis: programmers don’t know ethics, history, sociology, psychology, or the law.

https://twitter.com/bmastenbrook/status/793104148732469248

Here’s an idea: before we teach everybody to code, how about we teach coders about the people whose lives they’re affecting?

https://twitter.com/bmastenbrook/status/793104080214392832

Ethics is one of those things that it’s hard to convince people with power—such as most professional programmers, especially the most “successful” of them—to take seriously. Here’s how Christian Rudder, one of the founders of OkCupid and a very successful Silicon Valley entrepreneur, views ethics and ethicists:

Interviewer: Have you thought about bringing in, say, like an ethicist to, to vet your experiments?

Christian Rudder: To wring his hands all day for a hundred thousand dollars a year?

Interviewer: Well, y’know, you could pay him, y’know, on a case by case basis, maybe not a hundred thousand a year.

CR: Sure, yeah, I was making a joke. No we have not thought about that.

The general attitude that ethics are just, like, not important is of course not limited to programmers and technologists. But I think it’s clear why this is more an indictment of our society writ large than any sensible defense of technologists. Nevertheless, it is often used as a defense anyway.

One of the challenges inherent in doing something that no one else is doing is that, well, no one really understands what you’re trying to do. It’s unusual. There’s no role model for it. Precedent for it is scant. It’s hard to understand unfamiliar things without a lot of explanation or prior exposure to those things. So in addition to the above short pitch, I wrote a longer explanation of my idea on the RC community forums:

Hi all,

I’d like to try an experiment that’s possibly a little far afield from what many folks might be used to. I think this would be a lot more valuable with involvement from the RC alumni community, so I’m gonna make a first attempt this upcoming Tuesday, November 1st, at 6:30pm (when alumni are welcome to stop by 455 Broadway).

And what is this experiment? I’m calling it an “Ethics Refactoring” session.

In these sessions, we’ll take a look at a small part of an existing popular feature, product, or service that many people are likely already familiar with (like the Facebook notification feed, the OkCupid “match percentage” display, and so on), analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service taking a different ethical stance and see how this affects our development process and design choices.

This isn’t about “right” or “wrong,” “better” or “worse,” nor is it about making sure everyone agrees with everyone else about what ethic a given feature “should” prioritize. Rather, I want this to be about:

  • practicing ways of making the implicit values decisions process that happens during product/feature development and implementation more explicit,
  • gaining a better understanding of the ethical “active ingredient” in a given feature, product design, or implementation choice, and
  • honing our own skills at expressing our values (both verbally and through our product designs) to the different people we work with.

I know this sounds a bit vague, and that’s because I’ve never done anything like this and don’t exactly know how to realize the vision in my head for a session like this. My hope is that the above description is close enough, and intriguing enough, to enough people (and particularly to the alumni community) that y’all will be excited to try out something new like this with me.

Also, while not exactly what I’m talking/thinking about, one good introduction to some of the above ideas in a very particular area is at the http://TimeWellSpent.io website. Take a moment to browse that site if the above description leaves you feeling curious but wary of coming to this. :)

I think “Ethics Refactoring” sessions could be useful for:

  • getting to know fellow RC’ers who you may not spend much time with due to differences in language/framework/platform choice,
  • gaining insight into the non-obvious but often far-reaching implications of making certain design or implementation choices,
  • learning about specific technologies by understanding their non-technological effects (i.e., learning about a class of technologies by starting at a different place than “the user manual/hello world example”), and
  • having what are often difficult and nuanced conversations with employers, colleagues, or even less-technical users for which understanding the details of people’s life experiences as well as the details of a particular technology is required to communicate an idea or concern effectively.

-maymay

And then when, to my surprise, I got a lot more RSVPs than I’d expected, I further clarified:

I’m happy to note that there are 19(!!!) “Yes” RSVPs on the Zulip thread, but I’m a little surprised, because I did not have such a large group in mind when I conceived of this. Since this is kind of an experiment from the get-go, I think I’m going to revise my own plan for facilitating the session to accommodate such a relatively large group, and impose a very loose structure. I also only allotted 1 hour for this, and with a larger group we may need a bit more time.

With that in mind, here is a short and very fuzzy outline for what I’m thinking we’ll do in this session tomorrow:

  • 5-10min: Welcome! And a minimal orientation for what we mean when we say “ethic” for the purpose of this session (as in, “identify the operative ethic of a given feature”). Specifically, clarify the following: an “ethic” is distinct from and not the same thing as an “incentive structure” or a “values statement,” despite being related to both of those things (and others).
  • 15-20min: Group brainstorm to think of and list popular or familiar features/products/services that are of a good size for this exercise; “Facebook” is too large, “Facebook’s icon for the Settings page” is too small, but “Facebook’s notification stream” is about right. Then pick two or three from the list that the largest number of people have used or are familiar with, and see if we can figure out what those features’ “operative ethics” can reasonably be said to be.
  • 15-20min: Split into smaller work-groups to redesign a given feature; your work-groups may work best if they consist of people who 1) want to redesign the same feature as you and 2) want to redesign it to highlight the same ethic as you. I.e., if you want to redesign Facebook’s notification stream to highlight a given ethic, group with others who want to work both on that feature AND towards the same ethic. (It is okay if you have slight disagreements or different goals than your group-mates; the point of this session is to note how ethics inform the collaborative process, not to produce a deliverable or to write code that implements a different design.)
  • 10-15min: Describe the alternate design your group came up with to the rest of the participants, and ask/answer some questions about it.

This might be a lot to cram into 1 hour with 19+ people, but I really have no idea. I’m also not totally sure this will even “work” (i.e., translate well from my head to an actual room full of people). But I guess we’ll know by tomorrow evening. :)

The session itself did, indeed, attract more attendees than I was originally expecting. (Another good thing about Recurse Center: the structure and culture of the space makes room for conversations like these.) While I tried to make sure we stuck to the above outline, we didn’t actually stick strictly to it. Instead of splitting into smaller groups (which I still think would have been a better idea), we stayed in one large group; it’s possible that 1 hour is simply not enough time. Or I could have been more forceful in facilitating. I didn’t really want to be, though; I was doing this as much to suss out people in the RC community whom I didn’t yet know but might mesh with as to provide a space for the current RC community to have these conversations, or to expose them to a way of thinking about technology that I already practice regularly.

The pictures attached to this post are a visual record of the two whiteboards’ “final” results from the conversation. The first is simply the list of features we came up with (“brainstorm to think of and list popular features”), which included:

  • Facebook’s News Feed
  • Yelp recommendation engine
  • Uber driver rating system
  • Netflix auto-play
  • Dating site messaging systems (Tinder “match,” OkCupid private messages, Bumble “women message first”)

One pattern kept recurring throughout the session: people seemed reticent or confused at the beginning of each block (“what do you mean, ethics are different from values?” and “I don’t know if there are any features I can think of with these kinds of ethical considerations”), and yet by the end of each block we had far, far more relevant examples to analyze than we actually had time to discuss. I think this clearly reveals how under-discussed and under-appreciated this aspect of programming work really is.

The second picture shows an example of an actual “ethical refactoring” exercise. We chose Uber’s driver rating system as the group exercise, because most of us were familiar with it and it is a fairly straightforward system. I began by asking folks how the system presented itself to them as passengers, then drew simplified representations of the screens on the whiteboard. (That’s what you see in the top-left of the second attached image.) Then we listed out some business cases/reasons why this feature exists (the top-right of the second attached image), and from there we extrapolated some larger ethical frameworks by looking for patterns in the business cases (the list marked “Ethic???” on the bottom-right of the image).

By this point, the group had vastly different ideas not only about why Uber did things a certain way, but also about what a given change someone suggested would actually do, and the exercise stalled a bit. I think this in itself revealed a pretty useful point: a design choice you make with the intention of having a certain impact may actually feel very different to different people. This sounds obvious, but actually isn’t.

Rather than summarize our conversation, I’ll end by listing a few take-aways that I think were important:

  • Ethics is a systems-thinking problem, and cannot be approached piecemeal. That is, you cannot make a system “ethical” through minor tweaks, such as adding a feature here or removing a feature there. The ethics of a system is a function of all its components and the interactions between them, both technical and non-technical. The analogy I used was security: you cannot secure an insecure design by adding a login page. You have to change the design, because a system is only as secure as its weakest link.
  • Understand and appreciate why different people might look at exactly the same implementation and come away feeling that a very different operative ethic is the driving force behind that feature. In this experimental session, one of the sticking points was that different people saw Uber’s algorithm for rating drivers as driven either by an ethic of domination or by an ethic of self-improvement. I obviously have my own ideas and feelings about Uber’s rating system, but the point here is not that one group is “right” and the other is “wrong,” but rather that the same feature was perceived in a very different light by different sets of people. For now, all I want to say is: notice and appreciate that.
  • Consider that second-order effects will reach beyond the system you’re designing and impact people who are not direct users of your product. Designers should weigh the effects their system has not just on its direct user base, but also on the people who can’t, won’t, or just don’t use the product. Traditionally, these groups are either ignored or actively “converted” (think of how “conversions” means “sales” to business people), but there are a lot of reasons why that approach isn’t good for anyone involved, including the makers of a thing. Some sensitivity to the ecosystem in which you are operating helps the design process, too (think interoperability, for example).
  • Even small changes to a design can massively alter the ethical considerations at play. In our session, one thing that kept coming up about Uber’s system is that a user who rates a driver has very little feedback about how that rating will affect the driver. A big part of our discussion centered on questions like, “What would happen if the user were shown the driver’s new rating in the UI before they actually submitted a given rating?” (See the sketch after this list.) People were split about this, both in terms of what ethic such a design choice actually mapped to and in terms of what its actual effect would be. Similar questions popped up for other aspects of the rating system.
  • Consider the impact of unintended, or unexpected, consequences carefully. This is perhaps the most important take-away, and also one of the hardest things to actually do. After all, the whole point of an analysis process is that it analyzes only the things that are captured by the analysis process. But that’s the rub! It is often the unintentional byproducts of a successful system, rather than its intentional direct results, that have the strongest impact, whether good or bad. As a friend of mine likes to say, “Everything important is a side-effect.” This was made very clear during the exercise simply by the frequency and ease with which one person’s suggestion prompted someone else to highlight a likely scenario in which that same suggestion could backfire.
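Here is the sketch promised above: a minimal, hypothetical illustration in Python of what previewing a rating’s effect might look like, assuming a driver’s displayed rating is a plain running average. None of the names or numbers here (Driver, projected_rating, the star counts) reflect Uber’s actual data model or algorithm; they exist only to make the design question concrete.

```python
from dataclasses import dataclass


@dataclass
class Driver:
    rating_sum: float   # sum of all star ratings received so far
    rating_count: int   # number of ratings received so far

    @property
    def rating(self) -> float:
        """The average rating a rider sees today."""
        return self.rating_sum / self.rating_count


def projected_rating(driver: Driver, proposed_stars: int) -> float:
    """The average this driver would have if the proposed rating were submitted.

    Surfacing this number in the UI *before* the rider taps "submit" is the
    design change the group debated: it makes the consequence of a rating
    visible rather than hidden.
    """
    return (driver.rating_sum + proposed_stars) / (driver.rating_count + 1)


# A veteran driver: one 3-star rating barely moves the needle.
veteran = Driver(rating_sum=2338.0, rating_count=487)
print(f"{veteran.rating:.2f} -> {projected_rating(veteran, 3):.2f}")  # 4.80 -> 4.80

# A brand-new driver: the very same action has a much larger effect.
rookie = Driver(rating_sum=24.0, rating_count=5)
print(f"{rookie.rating:.2f} -> {projected_rating(rookie, 3):.2f}")    # 4.80 -> 4.50
```

Even this toy version surfaces some of the ethical texture of the change: the same three-star rating is nearly invisible on a veteran driver’s average but dramatic on a new driver’s, which is exactly the kind of asymmetry a rider currently has no way to see.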

I left the session with mixed feelings.

On the one hand, I’m glad to have had a space to try this out. I’m pleased and even a little heartened that it was received so warmly, and I’m equally pleased to have been approached by numerous people afterwards who had a lot more questions, suggestions, and impressions to share. I’m also pleased that at no point did we get too bogged down in abstract, philosophical conversations such as “but what are ethics really?” Those are not fruitful conversations. Credit to the participants for being willing to try something out of the ordinary, and potentially very emotionally loaded, and doing so with grace.

On the other hand, I’m frustrated that these conversations seem perpetually stuck in places that feel elementary to me. That’s not intended as a slight against anyone involved, but rather as an expression of loneliness on my part, and of the pain of being reminded that these are the sorts of exercises I have been doing by myself, with myself, and largely for myself for long enough that I’ve become maddeningly more familiar with them than anyone else I regularly interact with. If I had more physical, mental, and emotional energy, and more faith that RC was a place where I could find the sorts of relationships that could feasibly blossom into meaningful collaborations with people whose politics align with mine, then I would probably feel more enthused that this sort of thing was so warmly received. As it stands, though, as fun and as valuable as this experiment may have been, I have serious reservations about how much energy to devote to this sort of thing moving forward, because I am really, really, really tired of making myself the messenger, or taking a path less traveled.

Besides, I genuinely believe that “politicizing techies” is a bad strategy for revolution. Or at least, not as good a strategy as “technicalizing radicals.” And I’m just not interested in anything short of revolution. ¯\_(ツ)_/¯
