Category: Programming

My 2009 essay kinda-sorta about an Anarchist “Internet of Things”

I wrote an essay in 2009 about the Internet of Things, before people were calling it “the Internet of Things.” When I re-read it this afternoon, in 2017, I noticed something rather queer. It wasn’t actually about the Internet of Things at all. It was a personal manifesto advocating Anarchism and condemning techno-capitalist fascism.

Yes, really.

In 2009, despite having barely turned 25 years old, I had already been working as a professional web developer for a little over a decade. (That arithmetic is correct, I assure you.) At the time, I had some embarrassingly naïve ideas about Silicon Valley, capitalism, and neoliberalism. I also had no idea that less than two years later, I’d be homeless and sleeping in Occupy encampments, and that I’d remain (mostly) happily houseless and jobless for the next six years, up to and including the time of this writing.

The story of my life during those two years is a story worth telling…someday. Today, though, I want to remind myself of who I was before. I was a different person when 2009 began in some very important ways. I was so different that by the time it ended I began referring to my prior experiences as “my past life,” and I’ve used the same turn of phrase ever since. But I was also not so different that, looking back on myself with older eyes, I can clearly see the seeds of my anti-capitalist convictions had already begun to germinate and root themselves somewhere inside me.

Among the many other things I was in my past life, I was an author. I’ve always loved the art of the written word. The creativity I saw in, and the pleasure I derived from, written scripts is what drew me to computer programming. That is its own story as well, but the climax of that trajectory—at least by 2009—is that I was employed as a technical writer. I blogged about Web development on a freelance basis for an online magazine. I had already co-authored and published significant portions of my first technical book. And, in 2009, I had just completed co-authoring a second.

That second book was called, plainly enough, Advanced CSS, and was about the front-end Web development topic more formally known as Cascading Style Sheets. But that’s not interesting. At least, no more interesting than any other fleeting excitement over a given technical detail. What’s arguably most revealing about that book is the essay I contributed, which for all intents and purposes is the book’s opening.

My essay follows in its entirety:

User agents: our eyes and ears in cyberspace

A user agent is nothing more than some entity that acts on behalf of users themselves.1 What this means is that it’s important to understand these users as well as their user agents. User agents are the tools we use to interact with the wealth of possibilities that exists on the Internet. They are like extensions of ourselves. Indeed, they are (increasingly literally) our eyes and ears in cyberspace.

Understanding users and their agents

Web developers are already familiar with many common user agents: web browsers! We’re even notorious for sometimes bemoaning the sheer number of them that already exist. Maybe we need to reexamine why we do that.

There are many different kinds of users out there, each with potentially radically different needs. Therefore, to understand why there are so many user agents in existence we need to understand what the needs of all these different users are. This isn’t merely a theoretical exercise, either. The fact is that figuring out a user’s needs helps us to present our content to that user in the best possible way.

Presenting content to users and, by extension, their user agents appropriately goes beyond the typical accessibility argument that asserts the importance of making your content available to everyone (though we’ll certainly be making that argument, too). The principles behind understanding a user’s needs are much more important than that.

You’ll recall that the Web poses two fundamental challenges. One challenge is that any given piece of content, a single document, needs to be presented in multiple ways. This is the problem that CSS was designed to solve. The other challenge is the inverse: many different kinds of content need to be made available, each kind requiring a similar presentation. This is what XML (and its own accompanying “style sheet” language, XSLT) was designed to solve. Therefore, combining the powerful capabilities of CSS and XML is the path we should take to understanding, technically, how to solve this problem and present content to users and their user agents.
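To make that division of labor concrete, here is a minimal sketch (invented for illustration, not drawn from any specification) of a single XML document paired with one possible CSS presentation:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="screen.css"?>
<!-- news.xml: one document... -->
<article>
  <headline>Grocery inventories go online</headline>
  <summary>Stores begin publishing stock levels as XML feeds.</summary>
</article>

/* screen.css: ...one of its many possible presentations. A different
   style sheet could restyle the very same document for print, a small
   screen, or a screen reader, without touching the markup at all. */
headline { display: block; font-size: 2em; font-weight: bold; }
summary  { display: block; margin: 1em 0; }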

Since a specific user agent is just a tool for a specific user, the form the user agent takes depends on what the needs of the user are. In formal use case semantics, these users are called actors, and we can describe their needs by determining the steps they must take to accomplish some goal. Similarly, in each use case, the tool or tools used to accomplish these goals define what the user agent is in that particular scenario.2

A simple example of this is that when Joe goes online to read the latest technology news from Slashdot, he uses a web browser to do this. Joe (our actor) is the user, his web browser (whichever one he chooses to use) is the user agent, and reading the latest technology news is the goal. That’s a very traditional interaction, and in such a scenario we can make some pretty safe assumptions about how Joe, being a human and all, reads news.

Now let’s envision a more outlandish scenario to challenge our understanding of the principle. Joe needs to go shopping to refill his refrigerator and he prefers to buy the items he needs with the least amount of required driving due to rising gas prices. This is why he owns the (fictional) Frigerator2000, a network-capable refrigerator that keeps tabs on the inventory levels of nearby grocery stores and supermarkets and helps Joe plan his route. This helps him avoid driving to a store where he won’t be able to purchase the items he needs.

If this sounds too much like science fiction to you, think again. This is a different application of the same principle used by feed readers, only instead of aggregating news articles from web sites we’re aggregating inventory levels from grocery stores. All that would be required to make this a reality is an XML format for describing a store’s inventory levels, a bit of embedded software, a network interface card on a refrigerator, and some tech-savvy grocery stores to publish such content on the Internet.
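No such format existed, of course, but a hypothetical inventory feed of the kind imagined here (every element name below is invented for illustration) might look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<inventory store="Joe's Corner Grocery" updated="2009-04-01T09:00:00Z">
  <item sku="12345">
    <name>Milk, 1 gallon</name>
    <quantity>42</quantity>
  </item>
  <item sku="67890">
    <name>Eggs, dozen</name>
    <quantity>0</quantity> <!-- out of stock; the Frigerator2000 routes around this store -->
  </item>
</inventory>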

In this scenario, however, our user agent is radically different from the traditional web browser. It’s a refrigerator! Of course, there aren’t (yet) any such user agents out crawling the Web today, but there are a lot of user agents that aren’t web browsers doing exactly that.

Search engines like Google, Yahoo!, and Ask.com are probably the most famous examples of users that aren’t people. These companies all have automated programs, called spiders, which “crawl” the Web indexing all the content they can find. Unlike humans and very much like our hypothetical refrigerator-based user agent, these spiders can’t look at content with their eyes or listen to audio with their ears, so their needs are very different from someone like Joe’s.

There are still other systems of various sorts that exist to let us interact with web sites and these, too, can be considered user agents. For example, many web sites provide an API that exposes some functionality as web services. Microsoft Word 2008 is an example of a desktop application that you can use to create blog posts in blogging software such as WordPress and MovableType because both of these blogging tools support the MetaWeblog API, an XML-RPC3 specification. In this case, Microsoft Word can be considered a user agent.
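To make that concrete: a MetaWeblog API call is just an HTTP POST whose body is an XML document naming the remote method and its parameters. A sketch of the metaWeblog.newPost call (all values below are placeholders) looks roughly like this:

<?xml version="1.0"?>
<methodCall>
  <methodName>metaWeblog.newPost</methodName>
  <params>
    <param><value><string>1</string></value></param>      <!-- blog ID -->
    <param><value><string>joe</string></value></param>    <!-- username -->
    <param><value><string>secret</string></value></param> <!-- password -->
    <param><value><struct>
      <member>
        <name>title</name>
        <value><string>Hello from my user agent</string></value>
      </member>
      <member>
        <name>description</name>
        <value><string>The body of the post goes here.</string></value>
      </member>
    </struct></value></param>
    <param><value><boolean>1</boolean></value></param>    <!-- publish now -->
  </params>
</methodCall>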

As mentioned earlier, the many incarnations of news readers that exist are another form of user agent. Many web browsers and email applications, such as Mozilla Thunderbird and Apple Mail, do this, too.4 Feed readers provide a particularly interesting way to examine the concept of user agents because there are many popular feed reading web sites today, such as Bloglines.com and Google Reader. If Joe opens his web browser and logs into his account at Bloglines, then Joe’s web browser is the user agent and Joe is the user. However, when Joe reads the news feeds he’s subscribed to in Bloglines, the Bloglines server goes to fetch the RSS- or Atom-formatted feed from the sourced site. What this means is that from the point of view of the sourced site, Bloglines.com is the user, and the Bloglines server process is the user agent.
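You can actually watch this inversion happen in a sourced site’s access logs: the fetch arrives not from Joe’s browser but from the aggregator’s own server process, which announces itself in the User-Agent header of its request, along these lines (the exact header string varies; this one is illustrative):

GET /feeds/atom.xml HTTP/1.1
Host: example.com
User-Agent: Bloglines/3.1 (http://www.bloglines.com; 120 subscribers)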

Coming to this realization means that, as developers, we can understand user agents as an abstraction for a particular actor’s goals as well as their capabilities. This is, of course, an intentionally vague definition because it’s technically impossible for you, as the developer, to predict the features or capabilities present in any particular user agent. This is a challenge we’ll be talking about a lot in the remainder of this book because it is one of the defining characteristics of the Web as a publishing medium.

Rather than this lack of clairvoyance being a problem, however, the constraint of not knowing who or what will be accessing our published content is actually a good thing. It turns out that well-designed markup is also markup that is blissfully ignorant of its user, because it is solely focused on describing itself. You might even call it narcissistic.

Why giving the user control is not giving up

Talking about self-describing markup is just another way of talking about semantic markup. In this paradigm, the content in the fetched document is strictly segregated from its ultimate presentation. Nevertheless, the content must eventually be presented to the user somehow. If information for how to do this isn’t provided by the markup, then where is it, and who decides what it is?

At first you’ll no doubt be tempted to say that this information is in the document’s style sheet and that it is the document’s developer who decides what that is. As you’ll examine in detail in the next chapter, this answer is only mostly correct. In every case, it is ultimately the user agent that determines what styles (in which style sheets) get applied to the markup it fetches. Furthermore, many user agents (especially modern web browsers) allow the users themselves to further modify the style rules that get applied to content. In the end, you can only influence—not control—the final presentation.

Though surprising to some, this model actually makes perfect sense. Allowing the users ultimate control of the content’s presentation helps to ensure that you meet every possible need of each user. By using CSS, content authors, publishers, and developers—that is, you—can provide author style sheets that easily accommodate, say, 80 percent of the needs of 90 percent of the users. Even in the most optimistic scenario, edge cases that you may not ever be aware of will still escape you no matter how hard you try to accommodate everyone’s every need.5 Moreover, even if you had unlimited resources, you may not know how best to improve the situation for a particular user. Given this, who better to determine the presentation of a given XML document that needs to be presented in some very specific way than the users with that very specific need themselves?

A common real-life example of this situation might occur if Joe were colorblind. If he were and he wanted to visit some news site where the links in the article pullouts were too similar a color to the pullout’s background, he might not realize that those elements are actually links. Thankfully, because Joe’s browser allows him to set up a web site with his own user style sheet, he can change the color of these links to something that he can see more easily. If CSS were not designed with this in mind, it would be impossible for Joe to personalize the presentation of this news site so that it would be optimal for him.
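As a sketch of what Joe’s personalization might look like (the selector and colors are invented for illustration), his user style sheet needs only a rule like the following; the !important flag is what lets a user’s rule win out over the site’s author styles:

/* Joe's user style sheet: make links in article pullouts visible to him. */
.pullout a {
  color: #0000ee !important;              /* a hue Joe can distinguish */
  text-decoration: underline !important;  /* don't rely on color alone */
}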

To many designers coming from traditional industries such as print design, the fact that users can change the presentation of their content is an alarming concept. Nevertheless, this isn’t just the way the Web was made to work; this is the only way it could have worked. Philosophically, the Web is a technology that puts control into the hands of users. Therefore, our charge as web designers is to judge different people’s needs to be of equal importance, and we can’t do this if we treat every user exactly the same way.6

  1. This is purposefully a broad definition because we’re not just talking about web pages here, but rather all kinds of technology. The principles are universal. There are, however, more exacting definitions available. For instance, the W3C begins the HTML 4 specification with some formal definitions, including what a “user agent” is. See http://www.w3.org/TR/REC-html40/conform.html. []
  2. In real use cases, technical jargon and specific tools like a web browser are omitted because such use cases are used to define a system’s requirements, not its implementation. Nevertheless, the notion of an actor and an actor’s goals are helpful in understanding the mysterious “user” and this user’s software. []
  3. XML-RPC is a term referring to the use of XML files describing method calls and data transmitted over HTTP, typically used by automated systems. It is thus a great example of a technology that takes advantage of XML’s data serialization capabilities, and is often thought of as a precursor to today’s Ajax techniques. []
  4. It was in fact the much older email technology from which the term user agent originated; an email client program is more technically called a mail user agent (MUA). []
  5. As it happens, this is the same argument open source software proponents make about why such open source software often succeeds in meeting the needs of more users than closed source, proprietary systems controlled solely by a single company with (by definition) relatively limited resources. []
  6. This philosophy is embodied in the formal study of ethics, which is a compelling topic for us as CSS developers, considering the vastness of the implications we describe here. []

Ethics Refactoring: An experiment at the Recurse Center to address an ACTUAL crisis among programmers

Ethics Refactoring session, part 1

Ethics Refactoring session, part 2

I’ve been struggling to find meaningful value from my time at the Recurse Center, and I have a growing amount of harsh criticism about it. Last week, in exasperation and exhaustion after a month of taking other people’s suggestions for how to make the most out of my batch, I basically threw up my hands and declared defeat. One positive effect of declaring defeat was that I suddenly felt more comfortable being bolder at RC itself; if things went poorly, I’d just continue to distance myself. Over the weekend, I tried something new (“Mr. Robot’s Netflix ‘n’ Hack”), and that went well. Last night, I tried another, even more new thing. It went…not badly.

Very little of my criticism about RC is actually criticism that is uniquely applicable to RC. Most of it is criticism that could be levied far more harshly at basically every other institution that claims to provide an environment to “learn to code” or to “become a dramatically better programmer.” But I’m not at those other institutions, I’m at this one. And I’m at this one, and not those other ones, for a reason: Recurse Center prides itself on being something very different from all those other places. So it’s more disappointing, not less, that the criticisms of RC I do have apply equally well to those other spaces.

That being said, because no other institution I’m aware of is structured quite like the Recurse Center is, the experiments I tried out this week after declaring a personal “defeat” would not even be possible in another venue. That is a huge point in RC’s favor. I should probably write a more thorough and less vague post about all these criticisms, but that post is not the one I want to write today. Instead, I just want to write up a bit about the second experiment that I tried.

I called it an “ethics refactoring session.” The short version of my pitch for the event read as follows:

What is the operative ethic of a given feature, product design, or implementation choice you make? Who is the feature intended to empower or serve? How do we measure that? In “Ethical Refactoring,” we’ll take a look at a small part of an existing popular feature, product, or service, analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service from a different ethical perspective and see how this affects our development process and design choices.

Basically, I want there to be more conversations among technologists that focus on why we’re building what we’re building. Or, in other words:

Not a crisis: not everybody can code.

Actually a crisis: programmers don’t know ethics, history, sociology, psychology, or the law.

https://twitter.com/bmastenbrook/status/793104148732469248

Here’s an idea: before we teach everybody to code, how about we teach coders about the people whose lives they’re affecting?

https://twitter.com/bmastenbrook/status/793104080214392832

Ethics is one of those things that it’s hard to convince people with power—such as most professional programmers, especially the most “successful” of them—to take seriously. Here’s how Christian Rudder, one of the founders of OkCupid and a very successful Silicon Valley entrepreneur, views ethics and ethicists:

Interviewer: Have you thought about bringing in, say, like an ethicist to, to vet your experiments?

Christian Rudder: To wring his hands all day for a hundred thousand dollars a year?

Interviewer: Well, y’know, you could pay him, y’know, on a case by case basis, maybe not a hundred thousand a year.

CR: Sure, yeah, I was making a joke. No we have not thought about that.

The general attitude that ethics are just, like, not important is of course not limited to programmers and technologists. But I think it’s clear why this is more an indictment of our society writ large than it is any form of sensible defense for technologists. Nevertheless, this is often used as a defense, anyway.

One of the challenges inherent in doing something that no one else is doing is that, well, no one really understands what you’re trying to do. It’s unusual. There’s no role model for it. Precedent for it is scant. It’s hard to understand unfamiliar things without a lot of explanation or prior exposure to those things. So in addition to the above short pitch, I wrote a longer explanation of my idea on the RC community forums:

Hi all,

I’d like to try an experiment that’s possibly a little far afield from what many folks might be used to. I think this would be a lot more valuable with involvement from the RC alumni community, so I’m gonna make a first attempt this upcoming Tuesday, November 1st, at 6:30pm (when alumni are welcome to stop by 455 Broadway).

And what is this experiment? I’m calling it an “Ethics Refactoring” session.

In these sessions, we’ll take a look at a small part of an existing popular feature, product, or service that many people are likely already familiar with (like the Facebook notification feed, the OkCupid “match percentage” display, and so on), analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service taking a different ethical stance and see how this affects our development process and design choices.

This isn’t about “right” or “wrong,” “better” or “worse,” nor is it about making sure everyone agrees with everyone else about what ethic a given feature “should” prioritize. Rather, I want this to be about:

  • practicing ways of making the implicit values decisions process that happens during product/feature development and implementation more explicit,
  • gaining a better understanding of the ethical “active ingredient” in a given feature, product design, or implementation choice, and
  • honing our own communication skills (both verbally and through our product designs) around expressing our values to different people we work with.

I know this sounds a bit vague, and that’s because I’ve never done anything like this and don’t exactly know how to realize the vision that’s in my head for a session like this. My hope is that something like the above description is close enough, and intriguing enough, to enough people (and particularly to the alumni community) that y’all will be excited enough to try out something new like this with me.

Also, while not exactly what I’m talking/thinking about, one good introduction to some of the above ideas in a very particular area is at the http://TimeWellSpent.io website. Take a moment to browse that site if the above description leaves you feeling curious but wary of coming to this. :)

I think “Ethics Refactoring” sessions could be useful for:

  • getting to know fellow RC’ers who you may not spend much time with due to differences in language/framework/platform choice,
  • gaining insight into the non-obvious but often far-reaching implications of making certain design or implementation choices,
  • learning about specific technologies by understanding their non-technological effects (i.e., learning about a class of technologies by starting at a different place than “the user manual/hello world example”), and
  • having what are often difficult and nuanced conversations with employers, colleagues, or even less-technical users for which understanding the details of people’s life experiences as well as the details of a particular technology is required to communicate an idea or concern effectively.

-maymay

And then when, to my surprise, I got a lot more RSVPs than I’d expected, I further clarified:

I’m happy to note that there are 19(!!!) “Yes” RSVP’s on the Zulip thread, but a little surprised because I did not have such a large group in mind when I conceived this. Since this is kind of an experiment from the get-go, I think I’m going to revise my own plan for facilitating such a session to accommodate such a relatively large group and impose a very loose structure. I also only allotted 1 hour for this, and with a larger group we may need a bit more time?

With that in mind, here is a short and very fuzzy outline for what I’m thinking we’ll do in this session tomorrow:

  • 5-10min: Welcome! And a minimal orientation for what we mean when we say “ethic” for the purpose of this session (as in, “identify the operative ethic of a given feature”). Specifically, clarify the following: an “ethic” is distinct from and not the same thing as an “incentive structure” or a “values statement,” despite being related to both of those things (and others).
  • 15-20min: Group brainstorm to think of and list popular or familiar features/products/services that are of a good size for this exercise; “Facebook” is too large, “Facebook’s icon for the Settings page” is too small, but “Facebook’s notification stream” is about right. Then pick two or three from the list that the largest number of people have used or are familiar with, and see if we can figure out what those features’ “operative ethics” can reasonably be said to be.
  • 15-20min: Split into smaller work-groups to redesign a given feature; your work-groups may work best if they consist of people who 1) want to redesign the same given feature as you and 2) want to redesign to highlight the same ethic as you. I.e., if you want to redesign Facebook’s notification stream to highlight a given ethic, group with others who want to work both on that feature AND towards the same ethic. (It is okay if you have slight disagreements or different goals than your group-mates; the point of this session is to note how ethics inform the collaborative process, not to produce a deliverable or to write code that implements a different design.)
  • 10-15min: Describe the alternate design your group came up with to the rest of the participants, and ask/answer some questions about it.

This might be a lot to cram into 1 hour with 19+ people, but I really have no idea. I’m also not totally sure this will even “work” (i.e., translate well from my head to an actual room full of people). But I guess we’ll know by tomorrow evening. :)

The session itself did, indeed, attract more attendees than I was originally expecting. (Another good thing about Recurse Center: the structure and culture of the space make room for conversations like these.) While I tried to make sure we stuck to the above outline, we didn’t actually stick strictly to it. Instead of splitting into smaller groups (which I still think would have been a better idea), we stayed in one large group; it’s possible that 1 hour is simply not enough time. Or I could have been more forceful in facilitating. I didn’t really want to be, though; I was doing this partially to suss out people in the RC community I didn’t yet know but might mesh with, as much as to provide a space for the current RC community to have these conversations, or to expose them to a way of thinking about technology that I regularly practice already.

The pictures attached to this post are a visual record of the two whiteboards’ “final” results from the conversation. The first is simply a list of features (“brainstorm to think of and list popular features”), and included:

  • Facebook’s News Feed
  • Yelp recommendation engine
  • Uber driver rating system
  • Netflix auto-play
  • Dating site messaging systems (Tinder “match,” OkCupid private messages, Bumble “women message first”)

One pattern kept recurring throughout the session: people seemed reticent or confused at the beginning of each block (“what do you mean ethics are different from values?” and “I don’t know if there are any features I can think of with these kinds of ethical considerations”), and yet by the end of each block we had far, far more relevant examples to analyze than we actually had time to discuss. I think this clearly reveals how under-discussed and under-appreciated this aspect of programming work really is.

The second picture shows an example of an actual “ethical refactoring” exercise. The group of us chose to use Uber’s driver rating system as the group exercise, because most of us were familiar with it and it was a fairly straightforward system. I began by asking folks how the system presented itself to them as passengers, and then drawing simplified representations of the screens on the whiteboard. (That’s what you see in the top-left of the second attached image.) Then we listed out some business cases/reasons for why this feature exists (the top-right of the second attached image), and from there we extrapolated some larger ethical frameworks by looking for patterns in the business cases (the list marked “Ethic???” on the bottom-right of the image).

By now, the group of us had vastly different ideas about not only why Uber did things a certain way, but also about what a given change someone suggested to the system would do, and the exercise stalled a bit. I think this in itself revealed a pretty useful point: a design choice you make with the intention of having a certain impact may actually feel very different to different people. This sounds obvious, but actually isn’t.

Rather than summarize our conversation, I’ll end by listing a few take-aways that I think were important:

  • Ethics is a systems-thinking problem, and cannot be approached piecemeal. That is, you cannot make a system “ethical” by minor tweaks, such as by adding a feature here or removing a feature there. The ethics of something is a function of all its components and the interactions between them, both technical and non-technical. The analogy I used was security: you cannot secure an insecure design by adding a login page. You have to change the design, because a system is only as secure as its weakest link.
  • Understand and appreciate why different people might look at exactly the same implementation and come away feeling like a very different operative ethic is the driving force of that feature. In this experimental session, one of the sticking points was the way in which Uber’s algorithm for rating drivers was considered either to be driven by an ethic of domination or an ethic of self-improvement by different people. I obviously have my own ideas and feelings about Uber’s rating system, but the point here is not that one group is “right” and the other group is “wrong,” but rather that the same feature was perceived in a very different light by different sets of people. For now, all I want to say is notice and appreciate that.
  • Consider that second-order effects will reach beyond the system you’re designing and impact people who are not direct users of your product. This means that designers should consider the effects their system has not just on their product’s direct user base, but also on the people who can’t, won’t, or just don’t use their product, too. Traditionally, these groups of people are either ignored or actively “converted” (think how “conversions” means “sales” to business people), but there are a lot of other reasons why this approach isn’t good for anyone involved, including the makers of a thing. Some sensitivity to the ecosystem in which you are operating is helpful to the design process, too (think interoperability, for example).
  • Even small changes to a design can massively alter the ethical considerations at play. In our session, one thing that kept coming up about Uber’s system is that a user who rates a driver has very little feedback about how that rating will affect the driver. A big part of the discussion we had centered on questions like, “What would happen if the user would be shown the driver’s new rating in the UI before they actually submitted a given rating to a given driver?” This is something people were split about, both in terms of what ethic such a design choice actually mapped to as well as what the actual effect of such a design choice would be. Similar questions popped up for other aspects of the rating system.
  • Consider the impact of unintended, or unexpected, consequences carefully. This is perhaps the most important take-away, and also one of the hardest things to actually do. After all, the whole point of an analysis process is that it analyzes only the things that are captured by the analysis process. But that’s the rub! It is often the unintentional byproducts of a successful system, rather than its intentional direct results, that have the strongest impact (whether good or bad). As a friend of mine likes to say, “Everything important is a side-effect.” This was made very clear through the exercise by how frequently and easily a suggestion from one person prompted someone else to highlight a likely scenario in which that same suggestion could backfire.

I left the session with mixed feelings.

On the one hand, I’m glad to have had a space to try this out. I’m pleased and even a little heartened that it was received so warmly, and I’m equally pleased to have been approached by numerous people afterwards who had a lot more questions, suggestions, and impressions to share. I’m also pleased that at no point did we get too bogged down in abstract, philosophical conversations such as “but what are ethics really?” Those are not fruitful conversations. Credit to the participants for being willing to try something out of the ordinary, and potentially very emotionally loaded, and doing so with grace.

On the other hand, I’m frustrated that these conversations seem perpetually stuck in places that I feel are elementary. That’s not intended as a slight against anyone involved, but rather as an expression of loneliness on my part, and the pain at being reminded that these are the sorts of exercises I have been doing by myself, with myself, and largely for myself for long enough that I’ve gotten maddeningly more familiar with doing them than anyone else that I regularly interact with. If I had more physical, mental, and emotional energy, and more faith that RC was a place where I could find the sort of relationships that could feasibly blossom into meaningful collaborations with people whose politics were aligned with mine, then I probably would feel more enthused that this sort of thing was so warmly received. As it stands though, as fun and as valuable as this experiment may have been, I have serious reservations about how much energy to devote to this sort of thing moving forward, because I am really, really, really tired of making myself the messenger, or taking a path less traveled.

Besides, I genuinely believe that “politicizing techies” is a bad strategy for revolution. Or at least, not as good a strategy as “technicalizing radicals.” And I’m just not interested in anything short of revolution. ¯\_(ツ)_/¯

In Memoriam

Today is the fourth anniversary of Len Sassaman‘s passing. Len was a gifted programmer, he was a passionate privacy advocate—Len pioneered and maintained the Mixmaster anonymous remailer software for many years—and he was a very, very kind person. He was also a friend.

Len was the first person to walk me through setting up OTR (encrypted chat), and one of the only people I have ever known of his awesome caliber who was nevertheless able to make you feel comfortable asking what were obviously “newbie” questions.

A lot has happened in the last four years. Len’s passing lit a fire under me, personally. I couldn’t have gotten to where I am today, both in terms of practical skills and in terms of philosophical approach, without the brief but powerful influence Len had on me. I’m not the person who knew him best, but I miss him all the same.

I’ve got nowhere near the expertise he had, and if it weren’t for him, I might have let that stop me. Thanks to him, I haven’t. It’s slow going, but I’m still moving forward.

Tonight, I’m publishing a small, simple utility script called remail.sh that makes it just a little bit easier to use an anonymous remailer system such as the kind he maintained. It’s not much, but hopefully it can serve as a reminder that privacy is a timeless human need, and that it needs people like Len to support it as much as people like Len need supporters like us.

Len Sassaman (b. 1980 d. July 3, 2011). I miss you.

Turn your Android phone into a full fledged programming environment

These days, mobile phones are basically computers. And not just any computer. If you have a smartphone, then it's the same kind of computer as a regular ol' laptop. Sure, the two look different, but once you get "under the hood" they look and feel remarkably similar.

My mission, which I chose to accept, was to see if I could turn my Android phone into a fully fledged web development console. Lo and behold, I could. And it's not even that hard, but I did have to do some digging.

That's because searching the 'net for phrases like "web development on Android" mostly returns information on how to code and debug websites for mobile browsers, rather than how to use mobile phones as your environment for developing websites. Once I figured out which tools were suited for the task (and my personal tastes), though, everything else fell into place.

Read the full post.


Easy template injection in JavaScript for userscript authors, plugin devs, and other people who want to fuck with Web page content

The Predator Alert Tool for Twitter is coming along nicely, but it's frustratingly slow going. It's extra frustrating for me because ever since telling corporate America and its project managers to go kill themselves, I've grown accustomed to an utterly absurd speed of project development. I know I've only been writing and rewriting code for just under two weeks (I think—I honestly don't even know or care what day it is), but still.

I think it also feels extra slow because I'm learning a lot of new stuff along the way. That in and of itself is great, but I'm notoriously impatient. Which I totally consider a virtue because fuck waiting. Still, all this relearning and slow going has given me the opportunity to refine a few techniques I used in previous Predator Alert Tool scripts.

Here's one technique I think is especially nifty: template injection.
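As a taste of the sort of thing I mean (a sketch only, not the actual Predator Alert Tool code, and the names below are invented), one way to smuggle a multi-line HTML template into pre-ES6 JavaScript is to stash it in a function-body comment and pull it back out with toString():

// Function.prototype.toString() returns the function's source text,
// comments included, in the browsers of this era, so the comment
// doubles as a heredoc-style template.
function template() {/*
<div class="pat-box">
  <p class="pat-warning">This user matches reports in the database.</p>
</div>
*/}

function extractTemplate(fn) {
    // Everything between the comment delimiters is our markup.
    var src = fn.toString();
    return src.substring(src.indexOf('/*') + 2, src.lastIndexOf('*/'));
}

// Inject the extracted template into the page.
var container = document.createElement('div');
container.innerHTML = extractTemplate(template);
document.body.appendChild(container);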

Read more

Play “The Legend of Zelda: The Wind Waker” on Mac OS X 10.6 using Dolphin with heat distortion fix

If you’re running Mac OS X 10.6 Snow Leopard but wanted to play The Legend of Zelda: The Wind Waker, you probably ran into this very annoying “heat distortion” bug:

This issue actually crops up in two ways throughout Wind Waker. The more common way is that all of the flame effects result in doubling rather than distortion. The “copy” ends up down and to the right quite a bit, making it awkward at best. More troublesome is the way extreme heat in Dragon Roost Island is rendered. It results in severe screen tearing and detracts from the strong atmosphere presented by the game.

[…]

AR codes offer a partial resolution to the screen tearing issue shown in this animation loop.

That’s because, prior to Dolphin version 4.0-593, there was a bug in the way texture maps were loaded. The fix is thankfully very simple: two lines of corrected hexadecimal arithmetic, courtesy of delroth. Sadly, the last version of Dolphin that runs on Mac OS X 10.6 Snow Leopard is 3.5. Later versions require Mac OS X 10.7 Lion or greater.

Of course, one could upgrade one’s operating system and install a brand new Mac OS X. But why pay for an operating system upgrade when what you want to fix is an arithmetic mistake? And a simple one, at that.

Instead, I forked Dolphin, backported delroth’s fix,1 and rebuilt my Dolphin. And in the spirit of the free software code that let me do this without having to buy an OS upgrade from Apple (because, again, fuck capitalism), I’m making my build available for any other Mac OS X 10.6 (Intel) users who want to calm the great and wonderful Valoo on Dragon Roost Island without looking at frustrating screen tearing. :)

This Dolphin 3.5 WindWaker bugfix rebuild should run on any Mac with an Intel processor, but I’ve only tested it on my own system, of course. If you have trouble with the binary, consider building your own Dolphin for Mac OS X, too.

That way, when people ask you why you’re playing video games all day, you can tell them, “Because everything is in everything, so playing video games can teach me about C compilers.”

  1. Wikipedia explains backporting well. []

Quick and Dirty: Clone Custom Field, Template Linked Files on Movable Type

Movable Type is a pretty frustrating platform to work with because every so often (or, “way too often,” depending on who you ask) a function of the system simply doesn’t do what you’d expect it to do. Such is the case with the “Clone Blog” functionality. Although it dutifully copies most of a website from one “blog” object to another, a few things are missing.

Most notably, custom fields and templates’ linked files are not copied. This is a deal-breaker for any large installation that uses the built-in MT “Clone” feature.

To get around this limitation, I wrote a stupid, quick ‘n’ dirty PHP script to finish the cloning process, called finishclone.php. It takes only 1 argument: the “new ID” you are cloning to. If all goes well, you’ll see output like this:

[root@dev www]$ php finishclone.php 28
Cloning process complete.
[root@dev www]$ 

In this example, 28 is the newly created blog’s ID. The blog you want to clone from is set as a constant within the script. I’ll leave modifying the script to support more flexible command line arguments as an exercise for the reader.

<?php
/**
 * Ease the final steps in cloning a Movable Type blog.
 *
 * Description:   This script should be run after Movable Type's "Clone Blog"
 *                function has completed and before the cloned blog is used.
 * 
 * Author:        "Meitar Moscovitz" <meitar@maymay.net>
 */

// Set constants.
define('MT_ORIG_BLOG', 0); // the ID of the blog you are cloning from
define('MYSQL_HOST', 'localhost');
define('MYSQL_USER', 'movabletype');
define('MYSQL_PASS', 'PASSWORD_HERE');
define('MYSQL_DB', 'movabletype');

// Get command line arguments.
if (2 > $_SERVER['argc']) { die('Tell me the ID of the blog to clone into.'); }

$blog_id = (int) $argv[1];

// Connect to db
if ( !mysql_pconnect( MYSQL_HOST, MYSQL_USER, MYSQL_PASS ) ) {
	die( 'Connection to the database has failed: ' . mysql_error( ) );
}
mysql_select_db( MYSQL_DB );

// Clone custom fields.
$result = mysql_query('SELECT * FROM mt_field WHERE field_blog_id='.MT_ORIG_BLOG.';');
while ($row = mysql_fetch_object($result)) {
	mysql_query(
		sprintf("INSERT INTO mt_field ("
			."field_basename,"
			."field_blog_id,"
			."field_default,"
			."field_description,"
			."field_name,"
			."field_obj_type,"
			."field_options,"
			."field_required,"
			."field_tag,"
			."field_type) "
			."VALUES('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s');",
			mysql_real_escape_string($row->field_basename),
			mysql_real_escape_string($blog_id),
			mysql_real_escape_string($row->field_default),
			mysql_real_escape_string($row->field_description),
			mysql_real_escape_string($row->field_name),
			mysql_real_escape_string($row->field_obj_type),
			mysql_real_escape_string($row->field_options),
			mysql_real_escape_string($row->field_required),
			mysql_real_escape_string($row->field_tag),
			mysql_real_escape_string($row->field_type)
		)
	) OR print mysql_error() . "\n";
}

// Link template files to filesystem.
$arr = array();
$result = mysql_query('SELECT template_name,template_linked_file FROM mt_template WHERE template_blog_id='.MT_ORIG_BLOG.';');
while ($row = mysql_fetch_object($result)) {
	$arr[$row->template_name] = $row->template_linked_file;
}
foreach ($arr as $k => $v) {
	// Escape values so template names or paths containing quotes don't break the query.
	mysql_query(
		sprintf(
			"UPDATE mt_template SET template_linked_file='%s' WHERE template_blog_id=%d AND template_name='%s';",
			mysql_real_escape_string($v),
			$blog_id,
			mysql_real_escape_string($k)
		)
	) OR print mysql_error() . "\n";
}

print "Cloning process complete.\n";

Cross-post: Edenfantasys’s unethical technology is a self-referential black hole

This entry was originally published at my other blog. I’m cross-posting it here in order to make sure it gets copied to more servers, as some people have suggested I’ll face a cease and desist order for publishing it in the first place. Please help distribute this important information by freely copying and republishing this post under the conditions of my CC-BY-NC-ND license: provide me with attribution and a (real) back link, and you are free to republish an unaltered version of this post wherever you like. Thanks.

A few nights ago, I received an email from the editor of EdenFantasys’s SexIs Magazine, Judy Cole, asking me to modify this Kink On Tap brief I published that cites Lorna D. Keach’s writing. Judy asked me to “provide attribution and a link back to” SexIs Magazine. An ordinary enough request soon proved extraordinarily unethical when I discovered that EdenFantasys has invested a staggering amount of time and money to develop and implement a technology platform that actively denies others the courtesy of link reciprocity, a courtesy on which the ethical Internet is based.

While what they’re doing may not be illegal, EdenFantasys has proven itself to me to be an unethical and unworthy partner, in business or otherwise. Its actions are blatantly hypocritical, as I intend to show in detail in this post. Taking willful and self-serving advantage of those not technically savvy is a form of inexcusable oppression, and none of us should tolerate it from companies who purport to be well-intentioned resources for a community of sex-positive individuals.

For busy or non-technical readers, see the next section, Executive Summary, to quickly understand what EdenFantasys is doing, why it’s unethical, and how it affects you whether you’re a customer, a contributor, or a syndication partner. For the technical reader, the Technical Details section should provide ample evidence in the form of a walkthrough and sample code describing the unethical Search Engine Optimization (SEO) and Search Engine Marketing (SEM) techniques EdenFantasys, aka. Web Merchants, Inc., is engaged in. For anyone who wants to read further, I provide an Editorial section in which I share some thoughts about what you can do to help combat these practices and bring transparency and trust—not the sabotage of trust EdenFantasys enacts—to the market.

EXECUTIVE SUMMARY

Internet sex toy retailer Web Merchants, Inc., which bills itself as the “sex shop you can trust” and does business under the name EdenFantasys, has implemented technology on their websites that actively interferes with contributors’ content, intercepts outgoing links, and alters republished content so that links in the original work are redirected to themselves. Using techniques widely acknowledged as unethical by Internet professionals and that are arguably in violation of major search engines’ policies, EdenFantasys’s publishing platform has effectively outsourced the task of “link farming” (a questionable Search Engine Marketing [SEM] technique) to sites with which they have “an ongoing relationship,” such as AlterNet.org, other large news hubs, and individual bloggers’ blogs.

Articles published on EdenFantasys websites, such as the “community” website SexIs Magazine, contain HTML elements crafted to look like links but that aren’t links at all. When visited by a typical human user, a program written in JavaScript and included as part of the web pages is automatically downloaded and intercepts clicks on these “link-like” elements, fetching their intended destination from the server and redirecting users there. Due to the careful and deliberate implementation, the browser’s status bar is made to appear as though the link is legitimate and a destination is provided as expected.

For non-human visitors, including automated search engine indexing programs such as Googlebot, the “link” remains non-functional, making the article a search engine’s dead-end or “orphan” page whose only functional links are those whose destination is EdenFantasys’s own web presence. This makes EdenFantasys’ website(s) a self-referential black hole that provides no reciprocity for contributors who author content, nor for any website ostensibly “linked” to from article content. At the same time, EdenFantasys editors actively solicit inbound links from individuals and organizations through “link exchanges” and incentive programs such as “awards” and “free” sex toys, as well as syndicating SexIs Magazine content such that the content is programmatically altered in order to create multiple (real) inbound links to EdenFantasys’s websites after republication on their partner’s media channels.

How EdenFantasys’s unethical practices have an impact on you

Regardless of who you are, EdenFantasys’s unethical practices have a negative impact on you and, indeed, on the Internet as a whole.

See for yourself: First, log out of any and all EdenFantasys websites or, preferably, use a different browser, or even a proxy service such as the Tor network for greater anonymity. Due to EdenFantasys’s technology, you cannot trust that what you are seeing on your screen is what someone else will see on theirs. Next, temporarily disable JavaScript (read instructions for your browser) and then try clicking on the links in SexIs Magazine articles. If clicking the intended off-site “links” doesn’t work, you know that your article’s links are being hidden from Google and that your content is being used for shady practices. In contrast, with JavaScript still disabled, navigate to another website (such as this blog), try clicking on the links, and note that the links still work as intended.

Here’s another verifiable example from the EdenFantasys site showing that many other parts of Web Merchants, Inc. pages, not merely SexIs Magazine, are affected as well: With JavaScript disabled, visit the EdenFantasys company page on Aslan Leather (note, for the sake of comparison, the link in this sentence will work, even with JavaScript off). Try clicking on the link in the “Contact Information” section in the lower-right hand column of the page (shown in the screenshot, below). This “link” should take you to the Aslan Leather homepage but in fact it does not. So much for that “link exchange.”


  • If you’re an EdenFantasys employee, people will demand answers from you regarding the unethical practices of your (hopefully former) employer. While you are working for EdenFantasys, you’re seriously soiling your reputation in the eyes of ethical Internet professionals. Ignorance is no excuse for the lack of ethics on the programmers’ part, and it’s a shoddy one for everyone else; you should be aware of your company’s business practices because you represent them and they, in turn, represent you.
  • If you’re a partner or contributor (reviewer, affiliate, blogger), while you’re providing EdenFantasys with inbound links or writing articles for them and thereby propping them up higher in search results, EdenFantasys is not returning the favor to you (when they are supposed to be doing so). Moreover, they’re attaching your handle, pseudonym, or real name directly to all of their link farming (i.e., spamming) efforts. They look like they’re linking to you and they look like their content is syndicated fairly, but they’re actually playing dirty. They’re going the extra mile to ensure search engines like Google do not recognize the links in articles you write. They’re trying remarkably hard to make certain that all roads lead to EdenFantasys, but none lead outside of it; no matter what the “link,” search engines see it as stemming from and leading to EdenFantasys. The technically savvy executives of Web Merchants, Inc. are using you without giving you a fair return on your efforts. Moreover, EdenFantasys is doing this in a way that preys upon people’s lack of technical knowledge—potentially your own as well as your readership’s. Do you want to keep doing business with people like that?
  • If you’re a customer, you’re monetarily supporting a company that essentially amounts to a glorified yet subtle spammer. If you hate spam, you should hate the unethical practices that lead to spam’s perpetual reappearance, including the practices of companies like Web Merchants, Inc. EdenFantasys’s unethical practices may not be illegal, but they are unabashedly a hair’s width away from it, just like many spammers’. If you want to keep companies honest and transparent, if you really want a “sex shop you can trust,” this is relevant to you because EdenFantasys is not it. If you want to purchase from a retailer that truly strives to offer a welcoming, trustworthy community for those interested in sex positivity and sexuality, pay close attention and take action. For ideas about what you can do, please see the “What you can do” section, below.
  • If you’ve never heard about EdenFantasys before, but you care about a fair and equal-opportunity Internet, this is relevant to you because what EdenFantasys is doing takes advantage of non-tech-savvy people in order to slant the odds of winning the search engine game in their favor. They could have done this fairly, and I personally believe that they would have succeeded. Their sites are user-friendly, well-designed, and solidly implemented. However, they chose to behave maliciously by not providing credit where credit is due, failing to follow through on agreements with their own community members and contributors, and sneakily utilizing other publishers’ web presences to play a very sad zero-sum game that they need not have entered in the first place. In the Internet I want, nobody takes malicious advantage of those less skilled than they are because their own skill should speak for itself. Isn’t that the Internet and, indeed, the future you want, too?

TECHNICAL DETAILS

What follows is a technical exploration of the way the EdenFantasys technology works. It is my best-effort evaluation of the process in as much detail as I can manage within strict self-imposed time constraints. If any of this information is incorrect, I’d welcome any and all clarifications provided by the EdenFantasys CTO and technical team in an appropriately transparent, public, and ethical manner. (You’re welcome—nay, encouraged—to leave a comment.)

Although I’m unconvinced that EdenFantasys understands this, it is the case that honesty is the best policy—especially on the Internet, where everyone has the power of “View source.”

The “EF Framework” for obfuscating links

Article content written by contributors on SexIs Magazine pages is published after all links are replaced with a <span> element bearing the class of linklike and a unique id attribute value. This apparently happens across any and all content published by Web Merchants, Inc.’s content management system, but I’ll be focusing on Lorna D. Keach’s post entitled SexFeed: Anti-Porn Activists Now Targeting Female Porn Addicts for the sake of example.

These fake links look like this in HTML:

And according to Theresa Flynt, vice president of marketing for Hustler video, <span class="linklike" ID="EFLink_68034_fe64d2">female consumers make up 56% of video sales.</span>

This originally published HTML is what visitors without JavaScript enabled (and what search engine indexers) see when they access the page. Note that the <span> is not a real link, even though it is made to look like one. (See Figure 1.)

Figure 1:

In a typical user’s browser, when this page is loaded, a JavaScript program is executed that mutates these “linklike” elements into <a> elements, retaining the “linklike” class and the unique id attribute values. However, no value is provided in the href (link destination) attribute of the <a> element. See Figure 2.

Figure 2:

The JavaScript program is downloaded in two parts from the endpoint at http://cdn3.edenfantasys.com/Scripts/Handler/jsget.ashx. The first part, retrieved in this example by accessing the URI at http://cdn3.edenfantasys.com/Scripts/Handler/jsget.ashx?i=jq132_cnf_jdm12_cks_cm_ujsn_udm_stt_err_jsdm_stul_ael_lls_ganl_jqac_jtv_smg_assf_agrsh&v_14927484.12.0, loads the popular jQuery JavaScript framework as well as custom code called the “EF Framework”.

The EF Framework contains code called the DBLinkHandler, an object that parses the <span> “linklike” elements (called “pseudolinks” in the EF Framework code) and retrieves the real destination. The entirety of the DBLinkHandler object is shown in Code Listing 1, below. Note that a function called handle performs the mutation of the <span> “linklike” elements and that, based on the prefix of each element’s id attribute value, two key functions (BuildUrlForElement and GetUrlByUrlID) interact to set up the browser navigation after responding to clicks on the fake links.

var DBLinkHandler = {
    pseudoLinkPrefix: "EFLink_",
    generatedAHrefPrefix: "ArtLink_",
    targetBlankClass: "target_blank",
    jsLinksCssLinkLikeClass: "linklike",
    handle: function () {
        var pseudolinksSpans = $("span[id^='" + DBLinkHandler.pseudoLinkPrefix + "']");
        pseudolinksSpans.each(function () {
            var psLink = $(this);
            var cssClass = $.trim(psLink.attr("class"));
            var target = "";
            var id = psLink.attr("id").replace(DBLinkHandler.pseudoLinkPrefix, DBLinkHandler.generatedAHrefPrefix);
            var href = $("<a></a>").attr({
                id: id,
                href: ""
            }).html(psLink.html());
            if (psLink.hasClass(DBLinkHandler.targetBlankClass)) {
                href.attr({
                    target: "_blank"
                });
                cssClass = $.trim(cssClass.replace(DBLinkHandler.targetBlankClass, ""))
            }
            if (cssClass != "") {
                href.attr({
                    "class": cssClass
                })
            }
            psLink.before(href).remove()
        });
        var pseudolinksAHrefs = $("a[id^='" + DBLinkHandler.generatedAHrefPrefix + "']");
        pseudolinksAHrefs.live("mouseup", function (event) {
            DBLinkHandler.ArtLinkClick(this)
        });
        pseudolinksSpans = $("span[id^='" + DBLinkHandler.pseudoLinkPrefix + "']");
        pseudolinksSpans.live("click", function (event) {
            if (event.button != 0) {
                return
            }
            var psLink = $(this);
            var url = DBLinkHandler.BuildUrlForElement(psLink, DBLinkHandler.pseudoLinkPrefix);
            if (!psLink.hasClass(DBLinkHandler.targetBlankClass)) {
                RedirectTo(url)
            } else {
                OpenNewWindow(url)
            }
        })
    },
    BuildUrlForElement: function (psLink, prefix) {
        var psLink = $(psLink);
        var sufix = psLink.attr("id").toString().substring(prefix.length);
        var id = (sufix.indexOf("_") != -1) ? sufix.substring(0, sufix.indexOf("_")) : sufix;
        var url = DBLinkHandler.GetUrlByUrlID(id);
        if (url == "") {
            url = EF.Constants.Links.Url
        }
        var end = sufix.substring(sufix.indexOf("_") + 1);
        var anchor = "";
        if (end.indexOf("_") != -1) {
            anchor = "#" + end.substring(0, end.lastIndexOf("_"))
        }
        url += anchor;
        return url
    },
    ArtLinkClick: function (psLink) {
        var url = DBLinkHandler.BuildUrlForElement(psLink, DBLinkHandler.generatedAHrefPrefix);
        $(psLink).attr("href", url)
    },
    GetUrlByUrlID: function (UrlID) {
        var url = "";
        UrlRequest = $.ajax({
            type: "POST",
            url: "/LinkLanguage/AjaxLinkHandling.aspx",
            dataType: "json",
            async: false,
            data: {
                urlid: UrlID
            },
            cache: false,
            success: function (data) {
                if (data.status == "Success") {
                    url = data.url;
                    return url
                }
            },
            error: function (xhtmlObj, status, error) {}
        });
        return url
    }
};

Once the mutation is performed and all the content “links” are in the state shown in Figure 2, above, an event listener that captures clicks has been bound to the anchors. This is done using prototypal extension (a.k.a. classic prototypal inheritance) in another part of the code: the live function on line 2,280 of the (de-minified) jsget.ashx program, as shown in code listing 2, here:

        live: function (G, F) {
            var E = o.event.proxy(F);
            E.guid += this.selector + G;
            o(document).bind(i(G, this.selector), this.selector, E);
            return this
        },

At this point, clicking on one of the “pseudolinks” triggers the EF Framework to call the GetUrlByUrlID function within the DBLinkHandler object, initiating a synchronous XMLHttpRequest (XHR) connection to the AjaxLinkHandling.aspx server-side application. The request is an HTTP POST containing only one parameter, called urlid, whose value matches a substring of the pseudolink’s id value. In this example, the id attribute contains the value EFLink_68034_fe64d2, which means the unique ID POSTed to the server is 68034. This is shown in Figure 3, below.

Figure 3: The XHR POST to AjaxLinkHandling.aspx, carrying the single urlid parameter (here, 68034).

The response from the server, shown in Figure 4, is also simple. If the lookup succeeds, the intended destination is retrieved by GetUrlByUrlID’s success callback (on line 79 of code listing 1, above) and the user is redirected to that web address, as if the link had been real all along. The real destination, in this case a page at CNN.com, is thereby revealed only after the XHR request returns a successful reply.

Figure 4: The server’s reply, revealing the real destination (in this example, a page at CNN.com).
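If you’d like to reproduce this request-and-response cycle outside of a browser, a quick sketch with curl should do it. (The host name here is my assumption, since the code calls the endpoint by a relative path, and the response body is illustrative; the status and url field names are taken from the success handler in code listing 1.)

curl --data "urlid=68034" http://www.edenfantasys.com/LinkLanguage/AjaxLinkHandling.aspx

A successful reply would look something like:

{"status":"Success","url":"http://www.cnn.com/..."}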

All of this obfuscation effectively blinds machines that are not JavaScript-capable, such as the Googlebot, from seeing and following these links. It deliberately provides no increased PageRank for the link destination (as a real link normally would) despite the destination being “linked to” from EdenFantasys’s SexIs Magazine article. While the intended destination of this example link was at CNN.com, it could just as easily have been (and, in other examples, is) a link to the blog of an EdenFantasys community member, or to anyone else “linked to” from a SexIs Magazine article, or potentially from any website operated by Web Merchants, Inc. that makes use of this technology.

The EdenFantasys Outsourced Link-Farm

In addition to creating a self-referential black hole with no gracefully degrading outgoing links, EdenFantasys also actively performs link-stuffing through its syndicated content “relationships,” underhandedly creating an outsourced and distributed link-farm, just like a spammer. The difference is that this spammer (Web Merchants, Inc. aka EdenFantasys) is cleverly crowd-sourcing high-value, high-quality content from its own “community.”

Articles published at SexIs Magazine are syndicated in full to other large hub sites, such as AlterNet.org. Continuing with the above example, Lorna D. Keach’s post Anti-Porn Activists Now Targeting Female Porn Addicts was republished on AlterNet.org shortly after its original publication on EdenFantasys’s website, on May 3rd, at http://www.alternet.org/story/146774/christian_anti-porn_activists_now_targeting_female_. However, a closer look at the HTML code of the republication shows that each and every link contained within the article points to the same destination: the same article as published on SexIs Magazine, as shown in Figure 5.

Figure 5: The AlterNet.org republication, in which every link within the article points back to the same article on SexIs Magazine.

Naturally, these syndicated links provided to third-party sites by EdenFantasys are real, and they function as expected for both human visitors and the search engines indexing the content. The result is “natural,” high-value links to the EdenFantasys website from these third-party sites. EdenFantasys doesn’t merely scrounge PageRank from the sheer number of incoming links; because each link’s anchor text is different, the company is also setting itself up to match more keywords in search engine results, keywords that the original author likely did not intend to direct to it. Offering search engines the implication that EdenFantasys.com contains the content described in the anchor text, when in fact EdenFantasys merely acts as an intermediary to the information, is very shady, to say the least.

In addition to syndication, EdenFantasys employs human editors to do community outreach. These editors follow up with publishers, including individual bloggers (such as myself), and request that any references to published material “provide attribution and a link back to us,” to use the words of Judy Cole, Editor of SexIs Magazine, in an email she sent to me (see below) and presumably to many others. EdenFantasys has also been known to request “link exchanges” and to offer incentive programs encouraging bloggers to add the EdenFantasys website to their blogrolls or sidebars in order to help raise both parties’ search engine rankings, when in fact EdenFantasys is not actually providing reciprocity.

More information about EdenFantasys’s unethical practices, which are not limited to technical subterfuge, can be obtained via AAGBlog.com.

EDITORIAL

It is unsurprising that the distributed, subtle, and carefully crafted way EdenFantasys has managed to crowd-source links has (presumably) gone unpenalized by search engines like Google. It is similarly unsurprising that nontechnical users, such as the contributors to SexIs Magazine, would be unaware of these deceptive practices, or of the fact that they are unwittingly complicit in promoting them.

This is no mistake on the part of EdenFantasys, nor is it a one-off occurrence. The amount of work necessary to implement the elaborate system I’ve described is not remotely feasible for a lone rogue programmer to accomplish, far less to accomplish covertly. No, this is the result of a calculated and decidedly underhanded strategy that originated from the direction of top executives at Web Merchants, Inc., aka EdenFantasys.

It is unfortunate that technically privileged people would be so willing to take advantage of the technically uneducated, particularly under the guise of providing a trusted place for the community which they claim to serve. These practices are exactly the ones that “the sex shop you can trust” should in no way support, far less be actively engaged in. And yet, here is unmistakable evidence that EdenFantasys is doing literally everything it can not only to bolster its own web presence at the cost of others’, but to hide this fact from its understandably non-tech-savvy contributors.

On a personal note, I am angered that I would be contacted by the Editor of SexIs Magazine and asked to properly “attribute” and provide a link to them, when it is precisely that reciprocity which SexIs Magazine would clearly deny me (and everyone else) in return. It was this request, originally received over email from Judy Cole, that sparked the investigation outlined above and enabled me to uncover this hypocrisy. The email I received from Judy Cole is republished, in full, here:

From: Judy Cole <luxuryholmes@gmail.com>
Subject: Repost mis-attributed
Date: May 17, 2010 2:42:00 PM PDT
To: kinkontap+viewermail@gmail.com
Cc: Laurel <laurelb@edenfantasys.com>

Hello Emma and maymay,

I am the Editor of the online adult magazine SexIs (http://www.edenfantasys.com/sexis/). You recently picked up and re-posted a story of ours by Lorna Keach that Alternet had already picked up:

http://kinkontap.com/?s=alternet

We were hoping that you might provide attribution and a link back to us, citing us as the original source (as is done on Alternet, with whom we have an ongoing relationship), should you pick up something of ours to re-post in the future.

If you would be interested in having us send you updates on stories that might be of interest, I would be happy to arrange for a member of our editorial staff to do so. (Like your site, by the way. TBK is one of our regular contributors.)

Thanks and Best Regards,

Judy Cole
Editor, SexIs

Judy’s email probably intended to reference the new Kink On Tap briefs that my co-host Emma and I publish, not a search result page on the Kink On Tap website. Specifically, she was talking about this brief: http://KinkOnTap.com/?p=676. I said as much in my reply to Judy:

Hi Judy,

The URL in your email doesn’t actually link to a post. We pick up many stories from AlterNet, as well as a number from SexIs, because we follow both those sources, among others. So, did you mean this following entry?

http://KinkOnTap.com/?p=676

If so, you should know that we write briefs as we find them and provide links to where we found them. We purposefully do not republish or re-post significant portions of stories and we limit our briefs to short summaries in deference to the source. In regards to the brief in question, we do provide attribution to Lorna Keach, and our publication process provides links automatically to, again, the source where we found the article. :) As I’m sure you understand, this is the nature of the Internet. Its distribution capability is remarkable, isn’t it?

Also, while we’d absolutely be thrilled to have you send us updates on stories that might be of interest, we would prefer that you do so in the same way the rest of our community does: by contributing to the community links feed. You can find detailed instructions for the many ways you can do that on our wiki:

http://wiki.kinkontap.com/wiki/Community_links_feed

Congratulations on the continued success of SexIs.

Cheers,
-maymay

At the time I wrote that reply to Judy, I was perturbed but could not put my finger on why. Her email upset me because she seemed to be suggesting that our briefs are wholesale “re-posts,” when in fact Emma and I have thoroughly discussed attribution policies and, as mentioned in my reply, settled on a number of practices to ensure we play fair: a length limit, automated back-linking (yes, with real links; go see some Kink On Tap briefs for yourself), and clear demarcation of quotes from the source article within our editorializing. Clearly, my somewhat snarky reply betrays my annoyance.

In any event, this exchange prompted me to take a closer look at the Kink On Tap brief I wrote, at the original article, and at the cross-post on AlterNet.org. I never would have imagined that EdenFantasys’s technical subterfuge would be as pervasive as it has proven to be. It’s so deeply embedded in the EdenFantasys publishing platform that I’m willing to give Judy the benefit of the doubt regarding this hypocrisy, because she doesn’t seem to understand the difference between a search query and a permalink (something any layman blogger would grok). This is apparent from her reply to my response:

From: Judy Cole <luxuryholmes@gmail.com>
Subject: Re: Repost mis-attributed
Date: May 18, 2010 4:57:59 AM PDT
[…redundant email headers clipped…]

Funny, the URL in my email opens the same link as the one you sent me when I click on it.

Maybe if you pick up one of our stories in future, you could just say something like “so and so wrote for SexIs.” ?

As it stands, it looks as if Lorna wrote the piece for Alternet. Thanks.

Judy

That is the end of our email exchange, and will be for good, unless and until EdenFantasys changes its ways. I will from this point forward endeavor never to publish links to any web property that I know to be owned by Web Merchants, Inc., including EdenFantasys.com. I will also do my best to avoid citing any and all SexIs Magazine articles from here on out, and I encourage everyone who has an interest in seeing honesty on the Internet to follow my lead here.

As some of my friends are currently contributors to SexIs Magazine, I would like all of you to know that I sincerely hope you immediately sever all ties with any and all Web Merchants, Inc. properties, suppliers, and business partners. I say this especially because you are friends, and I think your work is too important to be sullied by such a disreputable company. Similarly, I hope you encourage your friends to do the same. I understand that the economy is rough and that some of you may have business contracts bearing legal penalties for breaking them, but I nevertheless urge you to look at this as a cost-benefit analysis: the sooner you break up with EdenFantasys, the happier everyone on the Internet, including you, will be. (Besides, you can lose just as much of your reputation, money, and PageRank while being happy as you can while being sad.)

What you can do

  • If you are an EdenFantasys reviewer, a SexIs Magazine contributor, or have any other arrangement with Web Merchants, Inc., write to Judy Cole and demand that content you produce for SexIs Magazine adhere to ethical Internet publication standards. Sever business ties with this company immediately upon receipt of any non-response, or of any response that does not adequately address every concern raised in this blog post. (Feel free to leave comments on this post with technical questions, and I’ll do my best to help you sort out any l33t answers.)
  • EdenFantasys wants to stack the deck in Google. They do this by misusing your content and harvesting your links. To combat this effort, immediately remove any and all links to EdenFantasys websites and web presences from your websites. Furthermore, do not—I repeat—do not publish new links to EdenFantasys websites, not even in direct reference to this post. Instead, provide enough information, as I have done, so visitors to your blog posts can find their website themselves. In lieu of links to EdenFantasys, link to other bloggers’ posts about this issue. (Such posts will probably be mentioned in the comments section of this post.)
  • Boycott EdenFantasys: the technical prowess their website displays does provide a useful shopping experience for some people. However, that in no way obligates you to purchase from their website. If you enjoy using their interface, use it to get information about products you’re interested in, but then go buy those products elsewhere, perhaps from the manufacturers directly.
  • Watch for “improved” technical subterfuge from Web Merchants, Inc. As a professional web developer, I can identify several things EdenFantasys could do to make their unethical practices even harder to spot, and harder to stop. If you have any technical knowledge at all, even if you’re “just” a savvy blogger, you can keep a close watch on EdenFantasys and, if you notice anything that doesn’t sit well with you, speak up about it like I did. Get a professional programmer to look into things for you if you need help; yes, you can make a difference just by remaining vigilant, as long as you share what you know and act honestly and transparently.

If you have additional ideas or recommendations regarding how more people can help keep sex toy retailers honest, please suggest them in the comments.

Update: To report website spamming or any kind of fraud to Google, use the authenticated Spam Report tool.

Update: Google provides much more information about why the kinds of practices EdenFantasys is engaged in degrade the overall web experience for you and me. Read Cloaking, sneaky Javascript redirects, and doorway pages at the Google Webmaster Tools help site for additional SEO information. Using Google’s terminology, EdenFantasys’s unethical technology is a very skilled mix of social engineering and “sneaky JavaScript redirects.”

How to work around “sorry, you must have a tty to run sudo” without sacrificing security

While working on $client’s Linux server last week, I found myself installing a cron job that ran as root. The cron job called a custom bash script that, in turn, called out to various custom maintenance tasks $client had already written. One task in particular had to run as a different user.

During testing, I discovered that the odd-ball task failed to run, and found the following error in the system log:

sudo: sorry, you must have a tty to run sudo

I traced this error to a line trying to invoke a perl command as a user called dynamic:

sudo -u dynamic /usr/bin/perl run-periodic-tasks --load 5 --randomly

A simple Google search turned up an obvious solution to the error: use visudo to disable sudo’s tty requirement, allowing sudo to be invoked from any shell lacking a tty (including cron). This would have solved my problem, but it just felt wrong, dirty, and, most troublingly, insecure.
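For reference, that rejected approach amounts to running visudo and switching off the requiretty default in the sudoers policy, something like this:

# In /etc/sudoers (edit it only via visudo), the tty requirement
# can be disabled globally by negating the default:
Defaults    !requiretty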

One reason sudo ships with the requiretty option enabled by default is to prevent remote users from exposing the root password over SSH. Disabling this security precaution for a simple maintenance task already running as root seemed totally unnecessary, not to mention irresponsible. Moreover, $client’s script didn’t even need a tty.

Thankfully, there’s a better way: use su --session-command and send the whole job to the background.

su --session-command="/usr/bin/perl run-periodic-tasks --load 5 --randomly" dynamic &

This line launches a new, non-login shell (typically bash) as the other user in a separate, background process and runs the command you passed using the shell’s -c option. Sending the command to the background (using &) continues execution of the rest of the cron job.

A process listing would look like this:

root     28109     1  0 17:10 ?        00:00:00 su --session-command=/usr/bin/perl run-periodic-tasks --load 5 --randomly dynamic
dynamic  28110 28109  0 17:10 ?        00:00:00 bash -c /usr/bin/perl run-periodic-tasks --load 5 --randomly

Note that the parent process (PID 28109) is owned by root, but the child process that actually runs the perl command (PID 28110) is running as dynamic.

This in-script solution, which replaces sudo -u user cmd with su --session-command=cmd user, seems much better to me than relying on a change to sudo’s default (and more secure) configuration.
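Putting it all together, here’s a minimal sketch of the kind of cron script this produces; the task names are hypothetical stand-ins for $client’s real ones:

#!/bin/bash
# Invoked from root's crontab, so no tty is attached.
/usr/local/sbin/nightly-backup     # hypothetical task that runs as root
/usr/local/sbin/rotate-reports     # hypothetical task that runs as root

# This task must run as the "dynamic" user; su needs no tty here:
su --session-command="/usr/bin/perl run-periodic-tasks --load 5 --randomly" dynamic &

# Thanks to the trailing &, the remaining tasks run without waiting:
/usr/local/sbin/nightly-cleanup    # hypothetical task that runs as root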

How To Use Git-SVN as the Only Subversion Client You’ll Need

I’ve been using git as my favorite version control tool for quite a while now. One of its numerous distinguishing features is an optional component called git-svn, which serves as a bi-directional “bridge” that enables native git repositories to interact with a Subversion repository, performing all the normal operations you would need to use svn for. In other words, since you can checkout, commit to, and query the logs of Subversion repositories (among other things) using git-svn, git can serve as your all-in-one Subversion client.

One reason you might use git-svn is that your project actually resides in a Subversion repository and other people need to access it using Subversion-only tools. Another might be that, like me, you have multiple projects, some that use git and others that use Subversion, and you’re tired of switching between svn and git commands. For people in either situation, it’s far easier to simply use git as a Subversion client and never have to call svn directly.

As an important aside, please note that I would strongly discourage people who are new to git from learning about it by using git-svn. Although you may think that moving to git from Subversion would be eased by using the git-svn bridge, I really don’t think that’s the case. You’re much, much better off simply using git by itself right off the bat, and you can do this even if your fellow committers are using Subversion.

Also, I’m going to assume you’ve already got a Subversion repository set up somewhere.

First, checkout the subversion repository. In Subversion you would do this:

svn checkout http://example.com/path/to/svn/repo

With git-svn, you do this:

git svn clone http://example.com/path/to/svn/repo

This will cause git-svn to create a new directory called repo, switch to it, initialize a new git repository, configure the Subversion repository at http://example.com/path/to/svn/repo as a remote git branch (confusingly called git-svn by default, although you can specify your own name by passing a -Rremote_name or --svn-remote=remote_name option), and then do a checkout.

The output of this command will be a little awkward. Here’s a sample from one of my repositories:

r14 = dbd7266f328ef2ad061ea4532f39ce7cebaba0c5 (git-svn)
	M	trunk/Chapter 6/Chapter 6.doc
	M	trunk/Chapter 6/code examples/6.1.html
	A	trunk/Chapter 6/code examples/6.2.html
r15 = 4cca08341ab0600069cece77ce67afc449caca68 (git-svn)
	M	trunk/Chapter 6/Chapter 6.doc
	A	trunk/Chapter 6/code examples/print.css
	A	trunk/Chapter 6/code examples/screen.css
	M	trunk/Chapter 6/code examples/6.1.html
	M	trunk/Chapter 6/code examples/6.2.html
r16 = 7b2f3e0ccfd79be61b527b6ba325f8689475dc01 (git-svn)
	M	trunk/Chapter 5/Chapter 5.doc
r17 = a319764855361d92bb6e006cfd18a51319046cae (git-svn)
	M	trunk/Chapter 5/Chapter 5.doc
r18 = 4cd5cb43d33b2dd45bd39b9a2b7ea9416f9e3d8f (git-svn)
	M	trunk/Chapter 6/Chapter 6.doc
	M	trunk/Chapter 6/code examples/screen.css
	M	trunk/Chapter 6/code examples/6.1.html

As you can see, git-svn is associating specific Subversion revisions with particular git commit objects. Due to this required mapping, the initial cloning process of a Subversion repository may take some time. This is a good opportunity for your morning coffee break.
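As an aside, once the clone completes you can query that mapping directly with git svn find-rev; here I’m using revision r14 from the sample output above (the command also works in reverse, given a commit hash):

Perseus:repo maymay$ git svn find-rev r14
dbd7266f328ef2ad061ea4532f39ce7cebaba0c5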

When this process is done, you’ll have a typical git repository with a local master branch and one remote branch for the Subversion repository:

Perseus:repo maymay$ git branch
* master
Perseus:repo maymay$ git branch -r
  git-svn

You can now treat the Subversion repository as though it were a remote branch of sorts. Say you’ve done a bunch of work and, as you typically do with git, you commit this work to your topic branch.

Perseus:repo maymay$ git checkout -b awesome-feature
Switched to a new branch "awesome-feature"
Perseus:repo maymay$ vim awesome-feature-stylesheet.css
Perseus:repo maymay$ git add awesome-feature-stylesheet.css 
Perseus:repo maymay$ git commit -m "Now I'm perty."
Created commit 07ee832: Now I'm perty.
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 awesome-feature-stylesheet.css

Right now your changes are still in the topic branch (called awesome-feature in the above example). To get them to Subversion, you merely need to say git svn dcommit:

Perseus:repo maymay$ git svn dcommit
Committing to http://example.com/path/to/svn/repo ...

Note that pesky extra “d” in the command. This is the equivalent of Subversion’s svn commit, but the commit message used is the one from the previous command, which in this case was git commit -m "Now I'm perty.". Also interesting to note here is that because Subversion doesn’t understand git branches, any change on any branch can be “pushed” to Subversion at any time using git svn dcommit—the git commits don’t have to be on any specific branch, since all git-svn does is map a git commit object to a Subversion revision and vice versa.

Similarly, you can at any time run the equivalent of svn update to get the latest changes from the Subversion repository into your Subversion branch.

  • To do this without affecting your working tree (that is, to fetch the latest changes into the git-svn metadata area and the remote git branch without writing them to the filesystem), use git svn fetch. To apply these changes to your local branch, you simply merge: git checkout master; git merge git-svn. (A sketch of both workflows follows this list.)
  • If you do want to write out the changes to the filesystem (as svn update would do), use git svn rebase, which automatically linearizes your local git commit history after the commit history of the incoming Subversion changesets. Very slick.
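Here’s that sketch, showing both workflows with the branch names from the examples above:

Perseus:repo maymay$ git svn fetch        # updates only the remote git-svn branch
Perseus:repo maymay$ git checkout master
Perseus:repo maymay$ git merge git-svn    # applies the fetched changes to master

Perseus:repo maymay$ git svn rebase       # or: fetch and update the working tree in one step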

If your fetching or rebasing causes a conflict, you’ll be notified and will have to resolve it as per usual. If a “push” to the svn repo causes a Subversion conflict, you’ll likewise be notified, and you should again edit the appropriate files to resolve it; this time, though, make sure you run a git svn rebase before you try dcommit-ing again (since, remember, Subversion can only handle a linear commit history).
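In other words, the recovery sequence after a rejected dcommit looks something like this (the conflicting file name is hypothetical):

Perseus:repo maymay$ git svn rebase
# ...resolve the reported conflict in trunk/Chapter 6/code examples/6.1.html...
Perseus:repo maymay$ git add "trunk/Chapter 6/code examples/6.1.html"
Perseus:repo maymay$ git rebase --continue
Perseus:repo maymay$ git svn dcommit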

As always, saying man git-svn or git help svn to your shell will give you all the other details. Among these, the one you’ll most likely want to learn about is how to track multiple Subversion branches as normal git branches.