Tag: programming

Ethics Refactoring: An experiment at the Recurse Center to address an ACTUAL crisis among programmers

Ethics Refactoring session, part 1

Ethics Refactoring session, part 2

I’ve been struggling to find meaningful value from my time at the Recurse Center, and I have a growing amount of harsh criticism about it. Last week, in exasperation and exhaustion after a month of taking other people’s suggestions for how to make the most out of my batch, I basically threw up my hands and declared defeat. One positive effect of declaring defeat was that I suddenly felt more comfortable being bolder at RC itself; if things went poorly, I’d just continue to distance myself. Over the weekend, I tried something new (“Mr. Robot’s Netflix ‘n’ Hack”), and that went well. Last night, I tried another, even newer thing. It went…not badly.

Very little of my criticism about RC is actually criticism that is uniquely applicable to RC. Most of it is criticism that could be levied far more harshly at basically every other institution that claims to provide an environment to “learn to code” or to “become a dramatically better programmer.” But I’m not at those other institutions, I’m at this one. And I’m at this one, and not those other ones, for a reason: Recurse Center prides itself on being something very different from all those other places. So it’s more disappointing, not less, that the criticisms I do have of RC feel equally applicable to those other spaces.

That being said, because no other institution I’m aware of is structured quite like the Recurse Center is, the experiments I tried out this week after declaring a personal “defeat” would not even be possible in another venue. That is a huge point in RC’s favor. I should probably write a more thorough and less vague post about all these criticisms, but that post is not the one I want to write today. Instead, I just want to write up a bit about the second experiment that I tried.

I called it an “ethics refactoring session.” The short version of my pitch for the event read as follows:

What is the operative ethic of a given feature, product design, or implementation choice you make? Who is the feature intended to empower or serve? How do we measure that? In “Ethical Refactoring,” we’ll take a look at a small part of an existing popular feature, product, or service, analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service from a different ethical perspective and see how this affects our development process and design choices.

Basically, I want there to be more conversations among technologists that focus on why we’re building what we’re building. Or, in other words:

Not a crisis: not everybody can code.

Actually a crisis: programmers don’t know ethics, history, sociology, psychology, or the law.

https://twitter.com/bmastenbrook/status/793104148732469248

Here’s an idea: before we teach everybody to code, how about we teach coders about the people whose lives they’re affecting?

https://twitter.com/bmastenbrook/status/793104080214392832

Ethics is one of those things that it’s hard to convince people with power—such as most professional programmers, especially the most “successful” of them—to take seriously. Here’s how Christian Rudder, one of the founders of OkCupid and a very successful Silicon Valley entrepreneur, views ethics and ethicists:

Interviewer: Have you thought about bringing in, say, like an ethicist to, to vet your experiments?

Christian Rudder: To wring his hands all day for a hundred thousand dollars a year?

Interviewer: Well, y’know, you could pay him, y’know, on a case by case basis, maybe not a hundred thousand a year.

CR: Sure, yeah, I was making a joke. No we have not thought about that.

The general attitude that ethics are just, like, not important is of course not limited to programmers and technologists. But I think it’s clear why this is more an indictment of our society writ large than it is any form of sensible defense for technologists. Nevertheless, this is often used as a defense, anyway.

One of the challenges inherent in doing something that no one else is doing is that, well, no one really understands what you’re trying to do. It’s unusual. There’s no role model for it. Precedent for it is scant. It’s hard to understand unfamiliar things without a lot of explanation or prior exposure to those things. So in addition to the above short pitch, I wrote a longer explanation of my idea on the RC community forums:

Hi all,

I’d like to try an experiment that’s possibly a little far afield from what many folks might be used to. I think this would be a lot more valuable with involvement from the RC alumni community, so I’m gonna make a first attempt this upcoming Tuesday, November 1st, at 6:30pm (when alumni are welcome to stop by 455 Broadway).

And what is this experiment? I’m calling it an “Ethics Refactoring” session.

In these sessions, we’ll take a look at a small part of an existing popular feature, product, or service that many people are likely already familiar with (like the Facebook notification feed, the OkCupid “match percentage” display, and so on), analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service taking a different ethical stance and see how this affects our development process and design choices.

This isn’t about “right” or “wrong,” “better” or “worse,” nor is it about making sure everyone agrees with everyone else about what ethic a given feature “should” prioritize. Rather, I want this to be about:

  • practicing ways of making the implicit values decisions process that happens during product/feature development and implementation more explicit,
  • gaining a better understanding of the ethical “active ingredient” in a given feature, product design, or implementation choice, and
  • honing our own communication skills (both verbally and through our product designs) around expressing our values to different people we work with.

I know this sounds a bit vague, and that’s because I’ve never done anything like this and don’t exactly know how to realize the vision for a session like the one that’s in my head. My hope is that something like the above description is close enough, and intriguing enough, to enough people (and particularly to the alumni community) that y’all will be excited enough to try out something new like this with me.

Also, while not exactly what I’m talking/thinking about, one good introduction to some of the above ideas in a very particular area is at the http://TimeWellSpent.io website. Take a moment to browse that site if the above description leaves you feeling curious but wary of coming to this. :)

I think “Ethics Refactoring” sessions could be useful for:

  • getting to know fellow RC’ers who you may not spend much time with due to differences in language/framework/platform choice,
  • gaining insight into the non-obvious but often far-reaching implications of making certain design or implementation choices,
  • learning about specific technologies by understanding their non-technological effects (i.e., learning about a class of technologies by starting at a different place than “the user manual/hello world example”)
  • having what are often difficult and nuanced conversations with employers, colleagues, or even less-technical users for which understanding the details of people’s life experiences as well as the details of a particular technology is required to communicate an idea or concern effectively.

-maymay

And then when, to my surprise, I got a lot more RSVPs than I’d expected, I further clarified:

I’m happy to note that there are 19(!!!) “Yes” RSVP’s on the Zulip thread, but a little surprised because I did not have such a large group in mind when I conceived this. Since this is kind of an experiment from the get-go, I think I’m going to revise my own plan for facilitating such a session to accommodate such a relatively large group and impose a very loose structure. I also only allotted 1 hour for this, and with a larger group we may need a bit more time?

With that in mind, here is a short and very fuzzy outline for what I’m thinking we’ll do in this session tomorrow:

  • 5-10min: Welcome! And a minimal orientation for what we mean when we say “ethic” for the purpose of this session (as in, “identify the operative ethic of a given feature”). Specifically, clarify the following: an “ethic” is distinct from and not the same thing as an “incentive structure” or a “values statement,” despite being related to both of those things (and others).
  • 15-20min: Group brainstorm to think of and list popular or familiar features/products/services that are of a good size for this exercise; “Facebook” is too large, “Facebook’s icon for the Settings page” is too small, but “Facebook’s notification stream” is about right. Then pick two or three from the list that the largest number of people have used or are familiar with, and see if we can figure out what those features’ “operative ethics” can reasonably be said to be.
  • 15-20min: Split into smaller work-groups to redesign a given feature; your work-groups may work best if they consist of people who 1) want to redesign the same given feature as you and 2) want to redesign to highlight the same ethic as you. I.e., if you want to redesign Facebook’s notification stream to highlight a given ethic, group with others who want to work both on that feature AND towards the same ethic. (It is okay if you have slight disagreements or different goals than your group-mates; the point of this session is to note how ethics inform the collaborative process, not to produce a deliverable or to write code that implements a different design.)
  • 10-15min: Describe the alternate design your group came up with to the rest of the participants, and ask/answer some questions about it.

This might be a lot to cram into 1 hour with 19+ people, but I really have no idea. I’m also not totally sure this will even “work” (i.e., translate well from my head to an actual room full of people). But I guess we’ll know by tomorrow evening. :)

The session itself did, indeed, attract more attendees than I was originally expecting. (Another good thing about Recurse Center: the structure and culture of the space makes room for conversations like these.) While I tried to make sure we stuck to the above outline, we didn’t actually stick strictly to it. Instead of splitting into smaller groups (which I still think would have been a better idea), we stayed in one large group; it’s possible that 1 hour is simply not enough time. Or I could have been more forceful in facilitating. I didn’t really want to be, though; I was doing this as much to suss out people in “the RC community” whom I didn’t yet know but might mesh with as to provide a space for the current RC community to have these conversations and expose them to a way of thinking about technology that I regularly practice already.

The pictures attached to this post are a visual record of the two whiteboards’ “final” results from the conversation. The first is simply a list of features (“brainstorm to think of and list popular features”), and included:

  • Facebook’s News Feed
  • Yelp recommendation engine
  • Uber driver rating system
  • Netflix auto-play
  • Dating site messaging systems (Tinder “match,” OkCupid private messages, Bumble “women message first”)

One pattern kept recurring throughout the session: people seemed reticent or confused at the beginning of each block (“what do you mean ethics are different from values?” and “I don’t know if there are any features I can think of with these kinds of ethical considerations”), and yet by the end of each block we had far, far more relevant examples to analyze than we actually had time to discuss. I think this clearly reveals how under-discussed and under-appreciated this aspect of programming work really is.

The second picture shows an example of an actual “ethical refactoring” exercise. The group of us chose to use Uber’s driver rating system as the group exercise, because most of us were familiar with it and it was a fairly straightforward system. I began by asking folks how the system presented itself to them as passengers, and then drawing simplified representations of the screens on the whiteboard. (That’s what you see in the top-left of the second attached image.) Then we listed out some business cases/reasons for why this feature exists (the top-right of the second attached image), and from there we extrapolated some larger ethical frameworks by looking for patterns in the business cases (the list marked “Ethic???” on the bottom-right of the image).

By now, the group of us had vastly different ideas about not only why Uber did things a certain way, but also about what a given change someone suggested to the system would do, and the exercise stalled a bit. I think this in itself revealed a pretty useful point: a design choice you make with the intention of having a certain impact may actually feel very different to different people. This sounds obvious, but actually isn’t.

Rather than summarize our conversation, I’ll end by listing a few take-aways that I think were important:

  • Ethics is a systems-thinking problem, and cannot be approached piecemeal. That is, you cannot make a system “ethical” by minor tweaks, such as by adding a feature here or removing a feature there. The ethics of something is a function of all its components and the interactions between them, both technical and non-technical. The analogy I used was security: you cannot secure an insecure design by adding a login page. You have to change the design, because a system is only as secure as its weakest link.
  • Understand and appreciate why different people might look at exactly the same implementation and come away feeling that a very different operative ethic is the driving force behind that feature. In this experimental session, one of the sticking points was that Uber’s algorithm for rating drivers was seen by some people as driven by an ethic of domination, and by others as driven by an ethic of self-improvement. I obviously have my own ideas and feelings about Uber’s rating system, but the point here is not that one group is “right” and the other group is “wrong,” but rather that the same feature was perceived in a very different light by different sets of people. For now, all I want to say is: notice and appreciate that.
  • Consider that second-order effects will reach beyond the system you’re designing and impact people who are not direct users of your product. This means that designers should consider the effects their system has not just on their product’s direct user base, but also on the people who can’t, won’t, or just don’t use their product, too. Traditionally, these groups of people are either ignored or actively “converted” (think how “conversions” means “sales” to business people), but there are a lot of other reasons why this approach isn’t good for anyone involved, including the makers of a thing. Some sensitivity to the ecosystem in which you are operating is helpful to the design process, too (think interoperability, for example).
  • Even small changes to a design can massively alter the ethical considerations at play. In our session, one thing that kept coming up about Uber’s system is that a user who rates a driver has very little feedback about how that rating will affect the driver. A big part of the discussion we had centered on questions like, “What would happen if the user would be shown the driver’s new rating in the UI before they actually submitted a given rating to a given driver?” This is something people were split about, both in terms of what ethic such a design choice actually mapped to as well as what the actual effect of such a design choice would be. Similar questions popped up for other aspects of the rating system.
  • Consider the impact of unintended, or unexpected, consequences carefully. This is perhaps the most important take-away, and also one of the hardest things to actually do. After all, the whole point of an analysis process is that it analyzes only the things that are captured by the analysis process. But that’s the rub! It is often the unintentional byproducts of a successful system, rather than its intentional direct results, that have the strongest impact (whether good or bad). As a friend of mine likes to say, “Everything important is a side-effect.” This was made very clear through the exercise simply by virtue of the frequency and ease with which a suggestion by one person prompted a different person to highlight a likely scenario in which that same suggestion could backfire.

I left the session with mixed feelings.

On the one hand, I’m glad to have had a space to try this out. I’m pleased and even a little heartened that it was received so warmly, and I’m equally pleased to have been approached by numerous people afterwards who had a lot more questions, suggestions, and impressions to share. I’m also pleased that at no point did we get too bogged down in abstract, philosophical conversations such as “but what are ethics really?” Those are not fruitful conversations. Credit to the participants for being willing to try something out of the ordinary, and potentially very emotionally loaded, and doing so with grace.

On the other hand, I’m frustrated that these conversations seem perpetually stuck in places that I feel are elementary. That’s not intended as a slight against anyone involved, but rather as an expression of loneliness on my part, and the pain at being reminded that these are the sorts of exercises I have been doing by myself, with myself, and largely for myself for long enough that I’ve gotten maddeningly more familiar with doing them than anyone else that I regularly interact with. If I had more physical, mental, and emotional energy, and more faith that RC was a place where I could find the sort of relationships that could feasibly blossom into meaningful collaborations with people whose politics were aligned with mine, then I probably would feel more enthused that this sort of thing was so warmly received. As it stands though, as fun and as valuable as this experiment may have been, I have serious reservations about how much energy to devote to this sort of thing moving forward, because I am really, really, really tired of making myself the messenger, or taking a path less traveled.

Besides, I genuinely believe that “politicizing techies” is a bad strategy for revolution. Or at least, not as good a strategy as “technicalizing radicals.” And I’m just not interested in anything short of revolution. ¯\_(ツ)_/¯

A Sneak Peek at Better Angels’ Buoy: the private, enhanced 9-1-1 for your personal community

As some of you already know, over the past several months, I’ve been working with a team of collaborators spanning four States and several issue areas ranging from alternative mental health/medical response, to domestic violence survivor support, to police and prison abolitionists. Although we don’t all share the exact same politics, we’ve come together as one group (we’re calling ourselves the “Better Angels”) because we all agree that more has to be done to support communities of people whom the current system fails, regardless of whether that failure is deliberate or not. In the spirit of software development as direct action, we set out to design and implement free software that would have the maximum social impact with the minimum lines of code, as quickly as possible.

Today, I want to introduce you to that software project, which we’re calling Buoy.

Screenshot of the Better Angels Buoy community-driven emergency dispatch system sending an alert to a crisis response team.

What is Buoy

Buoy is a private, enhanced 9-1-1 for your website and community. We call it a “community-driven emergency dispatch system” because everything about its design is based on the idea that in situations where traditional emergency services are not available, reliable, trustworthy, or sufficient, communities can come together to aid each other in times of need. Moreover, Buoy can be used by groups of any size, ranging from national organizations like the National Coalition Against Domestic Violence (NCADV), to local community groups such as Solidarity Houston, or even private social clubs such as your World of Warcraft guild.

Indeed, the more community leaders who add the Buoy system on their websites, the safer people in those communities can be. One can imagine the Internet as a vast ocean, its many users as people sailing to the many ports on the high seas. Buoy is software that equips your website with tools that your users can use to help one another in the real world; the more buoys are deployed on the ocean, the safer traveling becomes for everyone.

How does Buoy work?

Using Buoy is simple. After a website admin installs and activates Buoy, each user of that website can define their personal response team by entering other users as their emergency contacts. This is shown in the screenshot below.

Screenshot of Buoy's "Choose your response team" page.

The “Choose your team members” page, available under the “My Team” heading in the WordPress dashboard menu, allows you to add or remove users from your response team. When you add a user, they receive an email notification inviting them to join your team.

Screenshot of Buoy's "Team Membership" page.

When you are invited to join someone’s response team, you receive an email with a link to the “Team Membership” page, shown here. On this page you can accept another user’s invitation to join their team or leave the teams you have previously joined.

After at least one person accepts your invitation to join your response team (i.e., they have opted-in to being one of your emergency contacts), you can access the Buoy emergency alert screen.

screenshot-3

You can bookmark this page and add it to your phone’s home screen so you can launch Buoy the same way you would launch any other app you installed from the app store. Pressing the large button nearest the bottom of the screen activates an alert and immediately sends notifications to your response team. Clicking on the smaller button with the chat bubble icon on it opens the custom alert dialog, shown next.

screenshot-4

Using that button with the chat-bubble icon on it, you can provide additional context about your situation that will be sent as part of the notification responders receive.

For some use cases, however, sending an alert after an emergency presents itself isn’t enough. Unfortunately, this is the only option that traditional 9-1-1 and other emergency dispatch services offer. In reality, though, there are many cases where people know they’re about to do something a little risky, and want support around that. This is what the other button with the clock icon on it is for.

Clicking on the smaller button with the clock icon on it opens the timed alert (“safe call”) dialog, shown next.

screenshot-5

Use this button to schedule an alert to be sent some time in the future. The alert goes out automatically unless you cancel it, rather than the other way around—so your response team is notified precisely in the event that you are unable to check in. This is especially useful for “bad dates.” It’s also useful for border crossings or periodic check-ins with vulnerable people, such as journalists traveling overseas.
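
The “send unless canceled” behavior of a safe call is essentially a dead man’s switch. Here is a minimal sketch of that decision logic in JavaScript; the function name, parameters, and polling loop are my own illustration of the idea, not Buoy’s actual implementation.

```javascript
// Dead-man's-switch decision logic for a timed ("safe call") alert.
// Times are plain millisecond timestamps. A non-null cancelledAt that
// falls on or before the deadline means the user checked in safely.
function shouldFire(now, deadline, cancelledAt) {
  if (cancelledAt !== null && cancelledAt <= deadline) {
    return false; // user canceled in time; never alert the team
  }
  return now >= deadline; // otherwise, fire once the deadline passes
}

// A client might poll this periodically, e.g.:
// setInterval(() => {
//   if (shouldFire(Date.now(), deadline, cancelledAt)) sendAlert();
// }, 10000);
```

Note that a cancellation arriving *after* the deadline changes nothing: by then the alert has already (correctly) gone out.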

Regardless of which alert option you select, Buoy will gather some information from your device (including your location and your alert message) and either send your alert to your response team immediately or schedule the alert with the Buoy server. A nice pulsing circle animation provides visual feedback during this process.
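
As a rough illustration of that gathering step, here is a hedged JavaScript sketch. The payload shape, field names, and `/buoy/alert` endpoint are my own assumptions for the example, not Buoy’s actual wire format.

```javascript
// Build the data an alert might carry: the device's coordinates, an
// optional message, and an optional future send time (null = send now).
// The `position` argument mirrors the shape of a Geolocation API result.
function buildAlertPayload(position, message, scheduledFor) {
  return {
    lat: position.coords.latitude,
    lng: position.coords.longitude,
    msg: message || '',
    scheduledFor: scheduledFor || null // null means "send immediately"
  };
}

// In a browser, the position would come from the Geolocation API:
// navigator.geolocation.getCurrentPosition(pos => {
//   fetch('/buoy/alert', {
//     method: 'POST',
//     body: JSON.stringify(buildAlertPayload(pos, 'need help'))
//   });
// });
```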

screenshot-6

If you pressed one of the immediate alert buttons, the next thing you’ll see when you use Buoy is some safety information. This information is currently provided by the website admin, but we have some ideas of how to make this even more useful. Either way, if it is safe to do so, you can read through this information and/or take one of the suggested actions immediately. In the example screenshot here, Buoy has been installed on the website of a domestic violence survivor’s shelter, so the admin composed safety information that helps DV survivors quickly find and access even more supportive resources, such as hotlines and other nearby services like animal rescuers.

screenshot-7

If you’re in an emergency situation where interacting with your phone isn’t feasible, such as if you are being beaten or chased, you can simply ignore this screen. As long as you don’t lose or shut off your device, your device will send your location to your response team so that they will be able to track and find you, even if you travel away from the spot where the crisis originally began.

If you can interact with your phone, you can also close the safety information window at any time. When you do, you will see that behind the safety information window, a private, temporary chat room has been loaded in the background.

screenshot-8

When one of your response team members responds to your alert, they will join you in this chat room.

In addition to the chat room, behind the safety information window is also a real-time map. (The map can be accessed at any time by clicking or tapping the “Show Map” button. Tapping the same button again hides the map.)

screenshot-9

On the map, a red pin shows the initial location of the emergency. Your avatar shows your current position. As responders respond to your alert, their avatars will also be added to the map.

Buoy is just as easy to use from the point of view of a responder as it is from the point of view of someone sending an alert. When a responder clicks on a notification from the alert (whether by email, SMS/txt message, or whatever other notification mechanism they prefer—we are continually working to add new notification channels as our people-power and resources allow), they will be shown your alert message along with a map. They can click on the red pin to get turn-by-turn directions from their current location to the emergency alert signal. If they choose to respond, they click on the “Respond” button and will automatically be added to the group chat shown earlier.

screenshot-10

When a responder clicks the “Respond” button, they will automatically be added to the same live chat room that the alerter is in. They will also see the same map.

screenshot-11

The alerter and all current responders become aware of new responders as they are added to the chat room and the map. As people involved in the incident move around in the physical world, the map shown to each of the other people also updates, displaying their new location in near real time.

screenshot-12

Clicking on any of the user icons on the map reveals one-click access to both turn-by-turn directions to their location and one-click access to call them from your phone, Facetime, Skype, or whatever default calling app your device uses.

Who should use Buoy? Should it only be used in emergencies?

Although Buoy is designed to be useful in even the most physically high-risk situations such as domestic or dating violence abuses, kidnapping, home invasion, and other frightening scenarios, you can use Buoy however you want. We particularly encourage you to use Buoy when you feel like your situation may not rise to the level of calling 9-1-1 or when you feel like the presence of police officers will not improve the situation.

For instance:

  • If you feel you are being followed as you walk home on campus, use Buoy. Your friends will be able to watch your location on their screens and quietly chat with you as you walk home, ensuring you reach your destination safely.
  • If you or someone you are with feels suicidal, or is having a “bad trip,” and you don’t want cops showing up to your house but need assistance, use Buoy. Responders will be notified of your physical location and will be able to coordinate a response action with you and with each other in real time without ever notifying the authorities of the situation.
  • If you are with a group at an outing such as a hike or a large amusement park and get separated from your group, use Buoy. Each group member will be able to see one another’s current location on a map, can easily coordinate where to meet up, and can even access turn-by-turn directions to one another’s locations with one tap of a finger.

We’ve designed Buoy with people for whom “calling the cops” is not possible or safe, such as:

  • Undocumented immigrant and homeless populations.
  • Domestic violence victims and survivors.
  • Social justice and social change activists/political dissidents.
  • Freed prisoners.
  • Frequent targets of assault and street harassment (trans/queer people, women).
  • People suffering from a medical or mental health emergency.
  • Especially all the intersections of the above (homeless feminine queer youth of color, for instance).

In other words, these are all demographics who could benefit from having “someone to call” in the event of an emergency, but for whom “the police” is obviously a counterproductive answer, because when police are involved they are more likely to escalate the situation than de-escalate it.

That said, even if these descriptions don’t fit who you are, you can still use Buoy and if you do, we hope you find it useful.

How can I get Buoy?

Buoy is a bit like a very advanced telephone. Just like a telephone, it’s not very useful if no one else you know has one! For Buoy, or a telephone, to be useful, you have to know someone else who already has it.

Since Buoy is so new and is designed to be used in real-life emergencies, we are only working with a small group of alpha testers in order to ensure that there are no major technical or usability issues before its widespread adoption. However, we are very excited about the possibilities and we are currently looking to include more people in the testing process. If you think this is exciting and want to help put the finishing polish on this tool, please get in touch with someone from the Better Angels collective directly; links to our contact information are posted on the Buoy project’s development site. (Or just email me at bitetheappleback+better.angels.buoy@gmail.com directly.)

That being said, if you are a community leader, and you maintain a WordPress-powered website, you can try out Buoy right now by installing it directly from your WordPress admin screens! It’s just as easy to install as any other WordPress plugin. Similarly, if you yourself are not a “community leader,” but you want to try it out, you can either ask to join our private testing phase or you can tell others in your community about Buoy and see if the group of you can install it on your own group’s website.

If you do that, don’t hesitate to ask for technical or other help of any kind over at the Buoy support forums.

How can I help Better Angels projects?

There’s a lot you can do to help make Buoy better or help the Better Angels collective more generally! Check out our contributor guides for more information! Of course, one of the most immediate things you can do to help is spread the word about this project. (Hint hint, click the reshare button, nudge nudge!) Cash donations are also very helpful! Finally, we’re also trying very hard to get the entire tool translated into Spanish, so if you’re bilingual and want to help, please sign up to be a Better Angels translator here.

We think Buoy is a great tool for building strong, autonomous, socially responsible, self-sufficient communities, and we hope you’ll join us in empowering those communities by making them aware of Buoy.

Easy template injection in JavaScript for userscript authors, plugin devs, and other people who want to fuck with Web page content

The Predator Alert Tool for Twitter is coming along nicely, but it's frustratingly slow going. It's extra frustrating for me because ever since telling corporate America and its project managers to go kill themselves, I've grown accustomed to an utterly absurd speed of project development. I know I've only been writing and rewriting code for just under two weeks (I think—I honestly don't even know or care what day it is), but still.

I think another reason it feels extra slow is that I'm learning a lot of new stuff along the way. That in and of itself is great, but I'm notoriously impatient. Which I totally consider a virtue because fuck waiting. Still, all this relearning and slow going has given me the opportunity to refine a few techniques I used in previous Predator Alert Tool scripts.

Here's one technique I think is especially nifty: template injection.
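
The core of the technique can be sketched in a few lines: fill a template string with HTML-escaped data, then inject the rendered markup into the page. This is my own minimal illustration of the pattern, not the Predator Alert Tool's actual code, and all the names in it are made up for the example.

```javascript
// Escape characters that are significant in HTML so injected data
// can't break out of the template's markup.
function escapeHtml(s) {
  return String(s).replace(/[&<>"']/g, c => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[c]));
}

// Replace {{key}} placeholders with HTML-escaped values from `data`.
// Unknown keys render as the empty string.
function renderTemplate(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? escapeHtml(data[key]) : '');
}

// In a userscript, you would then drop the rendered markup into the page:
// document.querySelector('.profile').insertAdjacentHTML('beforeend',
//   renderTemplate('<div class="pat-warning">{{warning}}</div>',
//                  { warning: 'This user has been reported.' }));
```

Keeping the templates as plain strings with `{{key}}` placeholders means the markup can live apart from the logic that fills it in, which is most of what makes this approach pleasant in a userscript.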
