Category: Programming

WordPress NYC: Enterprise Features for Small Businesses Running WordPress

Earlier this week, the WordPress NYC Meetup group hosted me at their space inside the Microsoft Technology Center. I was there to present some of my recent work on “Enterprise Features for Small Businesses Running WordPress.” I had a lot of fun and really appreciated the opportunity to showcase three projects I’ve been working on recently.

You can find an archive of all the recorded sessions that Steve and Scott, the organizers of the WordPress NYC Meetup, have produced at the “WPNYC TV” page on their website. Below, you can find my own presentation from their latest evening, along with a transcript and links to the original presentation materials, including my slide deck, presenter notes, and presentation runbook.

>> SCOTT BECKER: Maymay and I just met tonight, but I find what he does fascinating. So, instead of giving you, like, some spiel that, y’know, we’ve written up, I’m just going to read a bit about what we had here at Meetup so you know a little bit about what Maymay’s gonna talk to you about. I find it fascinating.

He’s a Free Software developer and technology consultant who—get this—works without money. Anybody else say that?

>> AUDIENCE MEMBERS: Sometimes.

[laughter]

>> MAYMAY: Good answer!

>> SCOTT BECKER: Instead of owning a home, Maymay lives on the road, traveling wherever I guess he’s needed and wanted, working to help secure and scale small businesses, non-profits, and community groups. He’s on the road and he helps people take advantage of enterprise features through easy to use and easy to understand Free and Open Source Software.

So, with that being said, maymay.

>> MAYMAY: Yeah! Thank you. Thank you so much, Scott and Steve, who I know is not here. And to Microsoft, for the space of course. So, you just introduced a little bit about me. I kinda just want to spend one moment saying a little bit more about myself.

People call me “maymay,” that’s the name I prefer. This is a screenshot of my homepage at maymay.net. It’s spelled like the month of May, but twice. Um, I get DDoS’ed occasionally, so if my site’s not up right now, don’t worry about it. It’ll come back in a sec. Apologies if that’s happening, but go there to learn a little bit more about me, and about the work that I do.

In the meantime, I want to talk a bit about what we’re gonna do here. So, I talked a little bit about myself. I won’t bore you more with that. Next we’re gonna quickly spin up a new WordPress Multisite instance, so that we can show some of the demos that I want to show you. I’ll be showing you three WordPress plugins that I wrote that I think you might want to know about. That may be why you’re here. And finally, if there’s time, we can do some Q&A. Hopefully I’ll have some answers to that.

So, all right, let me go ahead and spin up a new WordPress Multisite instance. And, for this, I’m actually gonna go out of the slides, and I’m gonna go to my little demo here. Now, what I am going to do is go to this website that doesn’t yet exist, just to prove that it’s not actually there: W P N Y C dot DEMO. This website doesn’t exist, no one can get to it. It doesn’t yet exist. So we’re going to go ahead and make it!

Can y’all see that, is that big enough? That text is big? Okay.

I’m gonna be using a couple tools that I’m gonna explain in just a second. The one that I want to start with is VV. And, this is gonna basically automate creating an entire new WordPress Multisite install on my machine, and make the site available at WPNYC.DEMO.

So, I’m gonna tell VV to, hey, please create for me a domain called WPNYC.DEMO. And I want the name of this site to be WPNYCDEMO. Is that right? That’s right. And I want it to be a multisite install with a subdomain scheme. You know how WordPress can do subdirectories and subdomains? I want subdomains. I want the admin username to be admin, I want the password to be password, which I know is not a good password, but this is a demo.

>> AUDIENCE MEMBER: Password with five asterisks!

>> MAYMAY: Yeah, and an at-sign!

>> AUDIENCE MEMBER: What is this software you’re using?

>> MAYMAY: I’ll talk about the software in just a sec, I just want to kick this off.

The admin email address is gonna be admin@wpnyc.demo. I want also to remove these defaults. Y’know how sometimes when you install a new WordPress site it installs plugins like Akismet and the Hello Dolly plugin and a bunch of different themes that you almost never use? I wanna remove all those, so I don’t actually want those to be part of the final build. And I’m also going to use the debug flag here because I want to set the WP_DEBUG constant in the wp-config file. This will help me show you some of the output from some of the plugins that we’re gonna demo. Just so that we make sure that’s there.

All right, so I’m gonna go ahead and create that. And VV’s gonna ask me whether or not I want to create a site with a blueprint. I don’t. Whether I want to install a specific version of WordPress. I don’t. I’ll just use the latest version, rather. I’m not going to use any sample content so it’s gonna be totally blank, no users, no nothing. We’re not gonna import any database files. We’re not going to add sample content to any of this. And we’re gonna go ahead and start that off.
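[For reference, the vv invocation described above amounts to something along these lines. This is a sketch reconstructed from the talk rather than a copy of the exact command, so the flag names may differ slightly in your version of Variable VV; check vv --help before relying on it:

    vv create --domain wpnyc.demo --name wpnycdemo \
        --multisite subdomain \
        --username admin --password password --email admin@wpnyc.demo \
        --remove-defaults --debug

]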

Now, VV is gonna go ahead and build me a new server and it’s going to make that site available over on the left-hand side there. And while that’s happening, we can switch back to our presentation and we can talk a little bit about the demos that I’m going to be showing you.

So, very briefly. If you don’t already know about the tools there—

>> AUDIENCE MEMBER: Where’s this site going to be set up?

>> MAYMAY: It’s setting up right on my computer here, in a development environment. And it’s going to be using these things: VirtualBox, which is a virtual machine hypervisor, a type 2 virtual machine hypervisor. That means that I’m gonna have a totally new computer, a Linux server, on my Mac here. That is being configured using Vagrant, which is a virtual machine hypervisor automation tool. So with Vagrant commands I can tell VirtualBox how to set up that machine; what network interfaces I want, what kind of operating system I want, what kind of ports to use, all that kind of stuff. I’m also using VVV, which is the Varying Vagrant Vagrants project. This is a project originally started by 10up, which is a WordPress development shop. It’s a Vagrant config specific to WordPress development. So, I’ll be using that. And the last one is Variable VV, which is the command you saw me using to tell VVV how I wanted it to configure that WordPress setup.

So, all these tools are Free Software, open source. You can grab them on GitHub or these project pages. Variable VV is written by Brad Parbs, who’s an excellent developer and this tool is probably the easiest way to set up a WordPress site I’ve ever seen. I’ve contributed a number of features to it. It’s really, really nice.

Using these kinds of tools makes development a lot easier, a lot more robust, a lot more reliable, ’cause it’s all automated. It takes out human error, and it’s much faster, of course.

So, let’s—

>> AUDIENCE MEMBER: Quick question?

>> MAYMAY: Yeah?

>> AUDIENCE MEMBER: Are you familiar with Local by Flywheel? Is this similar, or are there major differences?

>> MAYMAY: I am only passingly familiar with Local by Flywheel, but if I understand correctly, they’re basically equivalent tools. Right? It’s kind of—again, I’m actually not that familiar with Local by Flywheel. I’ve heard of it. My understanding is that it sets up a development environment for WordPress. Some of you may have heard about XAMPP, right? That old thing.

[laughter]

That still exists. It’s kind of like a packaged server in a box. Like an application box. This is using—uh, this is the same effect, but we have virtual machines to do it with instead of putting, like, an Apache server on your laptop. That kind of thing.

All right, so, while that’s all building, we’re here to talk about Enterprise, right? So, I’m gonna assume you all know what WordPress is, and I’m gonna assume you all know what small businesses are, and what features are, but we’re here to talk about “Enterprise Features for Small Businesses.” So what does “Enterprise” mean? And some of you may think you already know what this is, and that’s great. Obviously, I’m not here to tell you what to think. That is, or may be, your employer’s job. So instead, I wanna make sure we’re all on the same page by letting you know what I mean when I say “enterprise.”

So what I mean when I say “enterprise” is important capabilities for secure and private collaboration, which utilize multiple tools simultaneously, typically sold to larger corporations that have a lot of money. Right? So, in other words we’re talking about anything that has to do with process or workflow automation, anything that has to do with objectives that touch multiple disciplines at once. Tools, for example, that interoperate between multiple vendors, typically to avoid vendor lock-in so that you don’t have to be beholden to a Facebook or an Amazon or a Google for the rest of your business’s life. Any capability that’s, perhaps, perceived to be super advanced or maybe even unnecessary for small groups, like those zero-to-one-employee shops, or sole proprietorships, small businesses of that kind, particularly when they’re security and privacy related. Because those are the kinds of things almost always sold as the “pro” features in add-ons and upgrades that are unavailable to people without huge budgets.

So, in short, any kind of system or tool that supports truly resilient autonomy. Something you can do yourself. Not this B2B stuff. Right?

So, with that said, I see my role as a Free Software developer to make it more possible for more people to independently access more of those capabilities without needing to have money and without needing to engage in any other form of abusive or coercive relationships in order to do so. I think that’s especially important to do in service to and in solidarity with the specific people whose lives are made dramatically worse by capitalist efforts to do the contrary.

All right, so let’s see where we are with the build of this new website and how far we’ve gotten on creating the server. Okay, there we go. It says here the server URL is wpnyc.demo. Let’s take a look and see if we actually have this available.

There we go, we got a new website up. So this is a pretty standard WordPress site. It’s all empty here, nothing fancy about this at all. We can go ahead and try to log in. And we’ll use our admin and password. And you can see that we have a Multisite install, so we have a Network Admin. We’ve got a users database. It’s all empty. So it’s just a standard, brand-new, WordPress site that we’ve created there.

>> AUDIENCE MEMBER: So, is this replacing what everyone has to do with their hosting company? Making databases, and—

>> MAYMAY: No, this is what your hosting company uses!

>> AUDIENCE MEMBER: Right, okay.

>> MAYMAY: You’re not going to be able to go to this website from, for example, the Starbucks down the street because it’s on my computer. It’s physically here. However, if you were a hosting company and you, for example, didn’t want to go and create a database and install WordPress every time you get a customer request, you’d probably use a tool like this. So, these tools are Free Software, they’re open source, you can do that if you wanted to. They don’t tell you that, but you can.

>> AUDIENCE MEMBER: It sets things up in seconds!

>> MAYMAY: It does. It doesn’t take very long at all. All right! So, we got our website, so we spun up a new WordPress Multisite instance.

>> AUDIENCE MEMBER: Question?

>> MAYMAY: Yeah?

>> AUDIENCE MEMBER: What does Multisite mean?

>> MAYMAY: Multisite means you can have multiple domains, multiple websites, that are running on one WordPress database. You have one database but you have, for example, blog.wpnyc.demo, and test.wpnyc.demo, and maybe even othersite.com, all running on one WordPress installation.

>> AUDIENCE MEMBER: So you can back the whole thing up?

>> MAYMAY: Yeah! It’s all one database, so whenever you back up that database, you back up the whole thing, the whole network.

>> AUDIENCE MEMBER: So you’re managing it from one WordPress site?

>> MAYMAY: Yeah. If Multisite is new to you, definitely check out the WordPress codex, which has a Multisite page. It describes it right there and also tells you how to set it up manually, in case you want to do that. It is good to do that manually at least once or twice so you know what these tools are doing. But once you do know how to set it up manually, using these tools obviously makes the job a lot faster and a lot less error prone.

All right, so we’ve created our new WordPress multisite instance. Let’s move on now. We’ll learn about the Subresource Integrity Manager for WordPress.

So, first, how many of you may already have heard about Subresource Integrity? No? Okay. That’s cool. I really like the Mozilla Developer Network’s definition. It’s pretty clear.

“Subresource Integrity is a security feature that enables browsers to verify that files they fetch (for example, from a CDN [a Content Delivery Network]) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched file must match.”

So, what does that mean? For example, let’s say some attacker—or you—want to run cryptocurrency mining JavaScript on hundreds of thousands of users’ Web browsers. You don’t have to actually attack thousands of users’ websites, thousands of websites on the Internet, in order to do that. You could compromise the one website that all those other websites are loading files from. For example, like a [Facebook tracking] pixel, right? Like some sort of web bug. If you can change that and everyone is pulling that file from you, well, there you go. You’re now loading your JavaScript code on multiple websites. So, for example, if I wanted to make the users of USCourts.gov mine Monero or Bitcoin for me, then I wouldn’t necessarily have to attack USCourts.gov, I could attack TextHelp.com, because TextHelp.com is serving files for the other sites, USCourts.gov and ICO.org.uk.

Now this isn’t theoretical, either, this happened just last week, with these exact websites—USCourts.gov included—and it’s exactly the sort of scenario that Subresource Integrity is designed to mitigate. So, let’s see how you can prevent this attack against your site’s users for free using the Subresource Integrity Manager for WordPress.

Let’s go back to our demo here. And we’ll switch back to Firefox.

>> AUDIENCE MEMBER: Is this similar to what SSL does?

>> MAYMAY: No. No, SSL would not protect against this because you’re still getting the content and not verifying that what you’re getting is what you expected to get. It just means that no one has listened in on it along the way. I’ll show you what I mean in just a sec. This will be a lot clearer if you see this in action than if you just see some slides without a demonstration about it.

All right, so the first thing we’re gonna do, obviously, is we’re gonna go to our Network Admin screen. We’re gonna go to Plugins. We don’t have any. And we need to get one because we need to get the Subresource Integrity Manager, called WP-SRI. I think you can also search for “Subresource Integrity.” Regardless, here it is. We’re gonna go ahead and install this. And there it is.

Now, when you have a Multisite install you can activate plugins for an entire Network all at once or, what we’re gonna do, is we’re going to go to the site itself, our site’s Dashboard—it’s hard to do this backwards. Go to our Plugins, and we’re gonna go ahead and activate the Subresource Integrity Manager for WordPress. That’s it, you’re now protected. By which I mean, if you go to your Tools page, you’ll see a new item called Subresource Integrity Manager, and you see here a listing of all the resources that your site’s requesting. Every single one, including the ones on your own site because we set that debug flag—I wanted to make sure there was some content here. Over on the left you see the URL column. That’s the source address of the file you’re loading. So these are JavaScript and CSS styles, etcetera. And over on the right, you’ll see these hashes. And this is the cryptographic hash, the output of a one-way mathematical function that proves that the content that’s being served to your site is the content that you expected to get when you first loaded that resource.

So, what do I mean by that? Let’s go to the site itself, and I’m going to go ahead and open up the View Source thing here—and let’s make that a little bit bigger so you can see as well—and let’s just have a look at one of these link elements. Actually, let’s look for stylesheet—style sheet—there we go. So this probably looks familiar to any of you who have seen CSS before, right? Link rel stylesheet. Here’s an href, a reference to the stylesheet itself, but over on the right, you can see crossorigin anonymous and integrity equals SHA-256 dash and then this hash. This hash is the metadata of what was expected to be in that stylesheet. So if some attacker modifies it, it’s not going to load in your browser if you’re using these, if you’re expecting to get this particular hash value.
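[The link element being read aloud here looks roughly like the following; the digest value is a placeholder rather than the real hash of the Twenty Seventeen stylesheet:

    <link rel="stylesheet"
          href="http://wpnyc.demo/wp-content/themes/twentyseventeen/style.css"
          integrity="sha256-PLACEHOLDERbase64DigestOfTheExpectedFileContents="
          crossorigin="anonymous" />

]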

What do I mean by that? Let’s go a little bit deeper into that. So, let’s close—close? What happened there. There. Let’s close out of this. And I want to create now an example attack on this website to show you how this is actually enforced.

So the very first thing we’re gonna do, I’m going to go to my command line here and I’m going to create a new JavaScript file. This is gonna be going to a CDN that I set up ahead of time, here. CDN dot demo. The important thing about this site is that, again, it’s just a WordPress site, it’s just a different domain. The point being that we have two different domains. One requesting JavaScript sources from another. This is exactly the same thing as you would, for example, do when you do an embed from YouTube and you say, “Hey, YouTube, please load some content on my site.” I’m gonna do the same thing, but it’s gonna be very, very simple. So we’re going to echo some JavaScript, how about alert—whoops, this is hard to do backwards—alert, how about ‘Hello, WP NYC Meetup.’ Right? And we’re gonna go ahead and put this into the root of my CDN demo at htdocs and we can call that test.js for example. And I want that to be output there, whoops! Where did I mess that up? Echo, alert, yup. That’s fine. Uh, that’s not fine? Oh, there’s some other, I can’t see [the full screen on the projector] or I mistyped some stuff? I can’t actually see all that. There we go. Nope. It looks like I’m not able to see what is there. Echo alert ‘Hello, WP NYC Meetup.’ Close. Let’s not do this full-screen, because I don’t see what’s over there. Oh! I have another—

>> AUDIENCE MEMBER: Can you explain what you’re doing now?

>> MAYMAY: Yeah! So this is gonna put this text, “alert(‘Hello, WP NYC Meetup’)” into this file. There we go. So if I look at this file, let’s say cat that file out, you’ll see that now that file has “alert(‘Hello, WP NYC Meetup’)” and if we go to our CDN dot demo slash test dot js, we’ll have that file being served in the Web browser.
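[The command being typed here boils down to the following two lines; the htdocs path is a stand-in for wherever the cdn.demo site’s document root actually lives:

    echo "alert('Hello, WP NYC Meetup');" > /path/to/cdn.demo/htdocs/test.js
    cat /path/to/cdn.demo/htdocs/test.js

]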

The next thing I need to do of course is I need to load this JavaScript into the theme that we’re using. So I’m going to go ahead and use Vim to edit wpnycdemo, htdocs, wp content—

>> AUDIENCE MEMBER: And what’s Vim?

>> MAYMAY: Vim is a text editor, like TextEdit, but in a command line. So, like, Notepad or something. WP content, themes, twenty seventeen, because that’s the theme we’re using, and header.php. Right, so this is gonna look pretty familiar to anyone who’s edited a WordPress theme before. You can see here the wp_head hook, function there. And we’re just gonna add another one and it is going to be wp_enqueue_script, and this is gonna be WP-SRI demo, and we’re gonna load CDN.demo/test.js. All right, now our website—we put that embed code, basically, into our website. So now, when we load the WP demo site, we should get an alert that says, hey, “Hello, WP NYC Meetup.” There we go. So we now have that alert. We’re running that JavaScript code.
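[The edit to the Twenty Seventeen header.php amounts to the call below. In a real project you would hook this to wp_enqueue_scripts in functions.php rather than hard-coding it into a template, but for the demo it is dropped straight in next to wp_head:

    <?php
    // Enqueue the third-party script from the demo "CDN" so that the page
    // makes a cross-origin request. The handle 'wp-sri-demo' is arbitrary.
    wp_enqueue_script( 'wp-sri-demo', 'http://cdn.demo/test.js' );
    ?>

]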

Here’s the thing. What if I then change that JavaScript code? Well, without Subresource Integrity, if I change that JavaScript code, it’s still gonna run, which means that if some attacker then changes that source code on some other server that you don’t control, that you have no insight into, they’re now running their code on your site. With Subresource Integrity, however, if I change that, so let’s do that, let’s vim this again. This time, let’s vim the CDN site, htdocs, test.js, and let’s make something much more malicious happen here. Maybe something like a cross-site scripting attack. Now, with that in place, if I reload this, it’s not going to load, because we are using Subresource Integrity (we can go into the Subresource Integrity Manager to see that we already have a hash for it).

Okay. Why didn’t it load? It didn’t load because, if we look at the Inspector, and go to our Console, you can see, “None of the SHA 256 hashes in the integrity attribute match the content of the subresource.” In other words, the browser says, “Hey, I saw this file, but it doesn’t match what you told me it should contain. And because it doesn’t match, I’m not gonna run it.” Therefore, your users also will not be subjected to that JavaScript. If I—

>> AUDIENCE MEMBER: What will they see? Will they see an error?

>> MAYMAY: No. It just won’t run. So, for example, if you had a personalized resource—Google Fonts does this, where it sends personalized content to each individual user but it’s the same URL, it’s always google.com/fonts/something—that is gonna break this. And so hopefully Google will fix that and will give, like, individual URLs to people. Until then, you have this little Exclude button which just takes out that integrity attribute in case you run into a problem where you’re loading one resource for multiple users but the content of that one resource is different for each user. In that case the integrity attribute won’t be printed, and if I reload this page, we should now have an XSS attack happening. Does that make sense?

>> AUDIENCE MEMBER: Yeah.

>> MAYMAY: All right, so that happens, again, because we did not include the integrity attribute in our test.js script element. If we again go back to Subresource Integrity Manager and re-include it, then when we reload, we will not get the XSS popup, because the integrity attribute will be printed and we’ll see that, in fact, the SHA-256 hash does not match what we expected it to be.

So, let’s return this to the original ‘Hello, WP NYC Meetup’, and we’ll save that. And hopefully, with this, because again it’s not excluded, and it does match the content that we’re expecting it to be, when we visit the site we get this back.

So that is Subresource Integrity Manager for WordPress.

>> AUDIENCE MEMBER: Does that add any weight to the site?

>> MAYMAY: It will increase your HTML page sizes, but hopefully you’re using, y’know, HTTP/2 for good compression, and it’s not really adding a lot in comparison to what it protects against. The kind of vulnerability this addresses is pretty serious for your users. So, this is basically considered a really good thing; it’s been widely developed and deployed.

Some pro-tips for using this. SRIHash.org is a great site to go to if you want one of these as a one-off. If you have a JavaScript file that you want to insert yourself but you want to include the integrity attribute, go to SRIHash.org, plug the URL into the form there. Hit Hash. It’ll give you the exact code you want to use for that.

>> AUDIENCE MEMBER: That looks like a good password generator.

[laughter]

>> MAYMAY: You can also further harden your site by using CSP HTTP headers. These are Content Security Policies, which tell your visitors’ browsers not to load anything that doesn’t include an integrity attribute if you use the require-sri-for directive, where you can say either script or style, or both. And finally, of course, because the free and open Internet is a platform-agnostic technology by design, you don’t have to be using WordPress to be using this. You can, for instance if you’re using Ruby on Rails, use the sprockets-rails gem. Just use your javascript_include_tag and add a new parameter there, integrity equals true, and sprockets-rails will take care of it. Similarly, if you’re using any of the NodeJS tools, pick up the SSRI package over on NPM. And again, it’s ssri.fromData or dot fromUrl or something. Give it an algorithm that you wanna use, and the toString method will give you the integrity attribute, what you actually want to print out.
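[On a WordPress site, one way to send that header is a tiny snippet in a theme or must-use plugin. This is only a sketch of the require-sri-for directive mentioned above, which has had only experimental browser support, so test it before relying on it:

    <?php
    // Ask visiting browsers to refuse any script or stylesheet that is
    // missing an integrity attribute.
    add_action( 'send_headers', function () {
        header( 'Content-Security-Policy: require-sri-for script style;' );
    } );
    ?>

]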

Okay, so that was Subresource Integrity for WordPress.

Yeah?

>> AUDIENCE MEMBER: What happens if I want to change the plugin or the script or whatever?

>> MAYMAY: Yeah!

>> AUDIENCE MEMBER: Do I have to run something again so I can update the hash tags or…?

>> MAYMAY: The easiest thing to do is to go back to your tool, so let’s go back to WPNYC Demo, we’ll go to your tool, and you find the resource you want. So, let’s say, test.js, here. And delete it. Now, the next time it loads, Subresource Integrity Manager will say, “Oh, I don’t know this. I’ll go ahead and fetch it, re-hash it,” and that way basically you’re forgetting the old hash and putting in the new one. So that’s that.

>> AUDIENCE MEMBER: Can you give us an example of how to use this on a site we might already be familiar with?

>> MAYMAY: This is already used on many sites you’re probably familiar with! And it’s used specifically to make sure that there’s no unexpected manipulation from the CDN side. So, for instance, a long time ago—well, not a long time ago but, like, two years ago—there was one of these JavaScript DDoS attacks involving Baidu, which is one of the Chinese analytics firms. The Great Firewall of China, evidently, decided to modify the Baidu analytics JavaScript it was passing along so that anyone loading it would DDoS GitHub. As in, to try to get every browser on the planet that was loading Baidu analytics to send a bunch of requests to GitHub.com to take them down. And it worked because, at the time, there was nothing like this. So that, among other reasons, is why SRI became a W3C standard and is now deployed in all these different frameworks.

So, for WordPress, you can use the Subresource Integrity Manager until we get this into core. It looks like it will be in core at some point, but I don’t want to promise about when or how, because I don’t know.

[laughs]

>> AUDIENCE MEMBER: [quiet speaking]

>> MAYMAY: Oh, you just have to hit “exclude.”

>> AUDIENCE MEMBER: [quiet speaking]

>> MAYMAY: Well, I mean, yes. For Google Fonts, or any resource that has the same URL but whose content is different, right? Because, if the content is different, it’s not going to match the hash for all users, it will only match the hash for one user. Probably you, because you’re the first one who has requested it. So you want to exclude those until sites like Google and other CDN providers start making unique URLs per user. All this personalization that happens on just the content and not the URL needs to go away for this to work.

>> AUDIENCE MEMBER: How do I know if I need to exclude a script like that?

>> MAYMAY: See if your site still works.

[laughter]

>> AUDIENCE MEMBER: Oh, just load it again?

>> MAYMAY: Yeah. All right, so, we’re gonna move on to GPG and OpenPGP signing and encryption for WordPress. So, first of all, how many of you are familiar already with GPG or OpenPGP or signing and encryption technologies? One, two, three, four hands in the back. Okay, cool. Great! Any of you wanna shout out what that is? No? All right.

Well basically, it’s secured email, is the answer to that. In short, GPG or OpenPGP—they’re kind of interchangeable terms—means secure email. But here we have to be pretty careful about what we’re talking about when we say “secured.” What does secured really mean? In a very brief nutshell, ’cause I don’t have that much time, when infosec pros talk about security they’re usually speaking about something that’s known as the CIA triad. It’s called a triad because it has, of course, three parts, and these are confidentiality for the “C,” integrity for the “I,” and availability for the “A.” Now, for the purposes of this presentation we’re only going to concern ourselves with the first two pieces of this triad.

Another common word for confidentiality—you’ll often hear this a lot—is privacy. Much more often used word. And similarly, very similarly to that, another word that’s used for integrity is “authenticity.” So, in the next demo, when I talk about GPG encryption, what I want to be talking about is ensuring privacy: the ability for your website to send a message that only its intended recipient can read. And when I talk about GPG signing, what I’m talking about is ensuring authenticity: the ability for the recipient of that message to verify that the message was actually sent by your server, not some other imposter, and that the message that they got was unmodified in transit. It’s the actual message that you sent. Very much like the Subresource Integrity thing where you’re hashing stuff. All right, so now let’s see how you can accomplish this with the PGP encrypted emails plugin that I wrote.

So, we’re gonna go back to our demo, and—whoop, there’s my, there it is. And, I want to go to my WordPress site. And, as before, we’re gonna install a new plugin by going to Network Admin, Plugins, Add New, and we’re gonna search for WP PGP Encrypted Emails. That’s the full name of it, but you can also probably search “PGP encrypted emails.” Anyway, we’re gonna hit “Install Now” and there it goes. And we can Network Activate this or we can, again, just go to a site, go to the demo site, go to our plugins, and activate—whoops, no, I don’t want to, I want to do this one—plugins, and activate that. All right, that’s it.

Now, the very first thing you’ll notice is that we have one of these admin notices up at the top. It says hold on, you’re not done yet. You’ve got to create or generate an OpenPGP key pair for the website to sign outgoing emails with. In other words, you can’t just send an email. You have to actually stamp that email with the identity of the website, cryptographically. So that’s what we’re gonna do when we generate a PGP signing key pair. That’s it, that’s all you gotta do. And this will take you to the Settings screen, with a new item here under Email Encryption.

This Email Encryption settings screen will have a number of options. The important one here is the PGP signing key pair. This is a low-trust, single-purpose key (identity) for the website that you need to distribute to any user who wants to make sure that when they’re getting emails from you, they’re coming from the right place. There’s a theme function that you can add to your theme that makes this button, so you don’t have to worry about the code itself, or you can go to, or tell your users to go to, the profile that they have and click on “Download public key” at the very bottom under their personal encryption settings. We’re gonna go ahead and do that. I’m gonna click “Download public key,” and it’s gonna give me this file. I’m just gonna save that, for the moment.

But before we do anything with that, I want to show you what an unsigned and unencrypted email looks like. Regular old email, nothing fancy. This is what you’re doing, probably, right now. And to show you that, I’m gonna go to wpnyc.demo, and I’m gonna go to [port] 1080 here. This is Mailcatcher. This is another one of those development tools that was installed when I did the VV build and what this does is it kind of intercepts any outgoing email from that website and shows it to me in this interface so I can debug it.

You can see that we already have an email in here. This one is the email that was sent when the WordPress site was kicked off and built. Sometimes you’ll see this in the One-click Installers. Y’know, you’ll get an email saying, “Hey, your WordPress site is ready.” That’s what this is. So, this is the source code of that email. You can see the headers up here, and the body down here. And there’s nothing fancy about this, nothing special, nothing cryptographic, no hashes, none of these security features. It’s all just plain text. This is like sending a postcard through the post. Anytime you send a postcard, anyone viewing or handling that postcard can read the contents. That’s what all email, all text messages, all unencrypted HTTP—not HTTPS—traffic is. So we’re now going to add the equivalent of a digital envelope to protect the contents of this message and a digital stamp, much like those Game of Thrones, y’know, wax seals, that says this definitely came from Jaime Lannister or whoever. We’re gonna add that to our emails. I’m gonna show you how to do that.

Firstly, we have this key that we downloaded. So let’s go ahead and take a look at my Downloads folder. And, or actually, take a look at this here. “To authenticate the emails, download the PGP public key and import it to an OpenPGP-compatible client.” This links to PRISM-Break, which is a fantastic website. If you don’t know about it, check it out. PRISM dash Break dot org. And it lists here all the software that you can use PGP or GPG with. So there’s a vast ecosystem of this. It’s available on Windows, Linux, Android phones, iOS devices, basically any computing device that you have can do this for free already, with either one of these apps if it’s not already built-in. Many Linuxes have this built-in, for example.

So, I’m going to be using GPGTools, or MacGPG for this. That is at GPGTools.org. If you’re on a Mac today and you wanna try this out, go grab this. It’s the best tool for the job I’ve seen. All right, so we’re gonna go ahead and open up this email, I’m sorry, this key, and we’re gonna open it with an application that was installed with the GPGTools package that I installed earlier called GPG Keychain, and we’re gonna import that key, and there we go. Now we have this key.

Now what this means is that we are aware of a cryptographic identity for the website wpnyc.demo. I can now authenticate any emails that are sent from there. So let’s go ahead and get an email from there. You could trigger an email by placing an order or making a new user account, or you can use the handy “Send me a test email” button, which is what I’m going to do. And when I click this I want you to take a look at the Mailcatcher tab up here, and I want you to take a look at this. This is going to go from 1 to 2. Okay? Ready?

Send me a test email. There it is, now there’s 2. There’s the test email. And now, take a look at how the email is different. We have this “begin PGP signed message” text on top and on the bottom. This is what’s known as a clear-signed message. This is, effectively, the integrity attribute, that metadata for the contents of that message, just like the SRI stuff. It functions very similarly.

If you were using an email client, this would automatically authenticate—

>> AUDIENCE MEMBER: You’d use both?

>> MAYMAY: Use both what?

>> AUDIENCE MEMBER: Use both the GPG and the SRI?

>> MAYMAY: Yeah, they’re separate plugins, they’re separate technologies, but they use the same what are known as cryptographic primitives, which is to say, they use the same mathematics under the hood.

>> AUDIENCE MEMBER: So you’d use one or the other?

>> MAYMAY: No, you can use both, because they do different things. In fact, I would recommend that you use all the plugins I’m gonna demo. [laughs] ‘Cause, they’re all free and they run on any WordPress site of any size. All right.

So, we’re gonna grab this PGP signature here, the contents of this email, and again if you were using an email client like Apple Mail or Microsoft Outlook or something, you wouldn’t have to do this copying-and-pasting, but I’m using a debugging tool so I am. I’m just going to go ahead and open up a TextEdit window and I’m gonna paste this into a new file and we’re gonna say test email, and I’m gonna save it on my desktop—and, sure, you can use the email extension. All right, and now, to verify that this is in fact the—I don’t need this anymore—the message that came from the site, I’m gonna right-click, I’m gonna go to services, and “Verify Signature of File.” See that, there? “Verify Signature of File.”

All right, click that, and we get a verification result: Signed by wordpress@wpnyc.demo. This means, yes, this email came from the site that you think it did and it was unmodified. No one changed the message between the time that the site sent it and the time that you received it. So, to prove that, we can open up this test email again. TextEdit—whoops, come on. Why is it not dragging this over? Drag. Seriously? For real now? All right, let’s do it this way. Other, and we want to use TextEdit, which is here. All right, so now let’s change this email in some way. We’ll just delete some text. We’ll re-save it.

Let’s try to verify again. Now we should get a failed message. This should not verify, because the message was changed. This is really important for things like, for example, security announcements. Apple, Inc., like, the company that makes this computer: they do exactly this process when they send emails to their security announce list, because it would be pretty bad if they sent a security announcement to say, “Hey, there’s a new patch available,” and that was actually a fraudulent email. Web hosting providers, too: DreamHost sends this with their billing emails, with their receipts. Now you can do the same with this plugin. All right, so that is a test email.

So that was signing. Do we have time? Do you think we should do encryption? Do we have time for this?

>> AUDIENCE: Yes!

>> MAYMAY: Yes? Okay. [laughs] Is there a question, there?

>> AUDIENCE MEMBER: [quiet speaking]

>> MAYMAY: I can’t quite hear you. Are you asking if it’s possible, if this works with an external service?

>> AUDIENCE MEMBER: Yes. If you don’t want to use your WordPress server to send the emails.

>> MAYMAY: Yeah. It, um, you will have to do a little bit more work to use an external service because you’ll need to send them pre-signed messages, right? So, for example, a lot of these services will insert things like, you know, “Hello, name,” or, y’know, “who lives at such and such, your account number is such and such.” If you sign the message before they do that, none of these verifications are gonna work, because they’re going to be changing the message on your behalf. On the other hand, there are many free software plugins for WordPress that can function similarly to MailChimp, for instance, and this plugin is compatible with all of them that I’ve tested, which at this point has been hundreds. So, you could do that, it might take you a little bit more time to send those emails but at least that way you’re actually doing the work on your site, yourself, and not farming that out to a third party that may or may not be—and probably is—mining you for data. So, but, y’know, obviously up to you?

All right. What time is it? It is, 8:30. I have until 9, I got a lot of content. You sure you want to see encryption?

>> AUDIENCE: Yeah!

>> MAYMAY: Yeah, y’all are good for that?

>> SCOTT BECKER: Quarter to nine!

>> MAYMAY: Quarter to nine! All right, I don’t have a lot of time, then. Here’s what I’m gonna do—

>> SCOTT BECKER: Well, ten to.

>> MAYMAY: All right. Well, let’s, I’m going to go through this a little bit quicker. So what I’m going to do is I’m just going to create a new key here. This doesn’t really matter. To do encryption we have to do the reverse process. Rather than getting the key from the website, I have to give the website my identity. But to have an identity, I need to make one. So that’s what this is, that’s what the “New” thingy is here. So we’ll do that. Some email dot invalid, none of this matters. The password is password. And password here. And, under advanced options, what’s there? Oh yeah. So I’m just gonna say “this is a test do not use this key.” We’re gonna generate that key. We’re gonna continue with a simple password. And again, all the GPG tools will do this, whether you’re using Windows or Linux, I just happen to be using a Mac. I don’t want to upload the key to the key server, again, because it’s a test.

And what we’ve done is created a new key pair. What’s known as, basically, a digital lock and a key that opens that digital lock. So if I hit export here, I get a file on this desktop. Let’s go ahead right there, and desktop, and there is my file, which, just like the other email, can be opened. Because a key, an identity, is just text, we can open it with TextEdit and you can take a look at what a key looks like. It’s just a really big number, really.

So if I copy this into my—because I’m the admin here, I’m gonna copy it into the Settings, Email Encryption, and Admin Email PGP Public Key textbox here, and hit save. Now, that’s all I gotta do. Now if I get another message, let’s trigger that again. Send me another test email. It’ll go from 2 to 3. And where before we had a signed message, now we’ll have an encrypted message. So now we just have this “begin PGP message” block, and this is the content that’s gonna be stored, for example, by Google or by Microsoft Live, right? When you are actually using a Gmail account, they’re reading all your email. Well, that’s because it’s not encrypted. If you use this, and you give the sites that are letting you send encrypted messages your cryptographic identity, Google can no longer read your email because to them it looks like this.

So the question is, what does it look like to you? Well, we’ll go back here, we’ll go to TextEdit, and we’ll just paste this message in again. And, again, on an actual email client this is much, much simpler. I’m just gonna right-click, go to Services, and I’m gonna decrypt selection to new window. And what we should see here is I’m being asked for my password for this identity, which I put in before. Right, and say password. Oh, it opened it up over here, so I’m just gonna move this over. Right. This is a test message from wpnycdemo. It’s still signed, right, because the site has a signing key, and it’s decrypted, because I have the matching key to the lock that I gave the website. So that is OpenPGP signing and encryption for WordPress.

Quick pro-tips for making even more use of this plugin. Number one, importantly for small businesses, you should know that the WP PGP Encrypted Emails plugin features zero-configuration, out-of-the-box support for WooCommerce. So if you have a WooCommerce store, right, as long as your chosen theme supports the WooCommerce account pages, then all you have to do is install this and your customers will get an out-of-the-way form that looks exactly like this that they can use to opt in to signed emails, or even encrypted emails if they go ahead through the process of making their own identity and uploading a key and giving it to your store. This is an example of what a signed email might look like in Apple Mail. Instead of the Mailcatcher interface it would just look like this. It would say “Security: Signed” and that’s how they would know, yes, this actually came from your store. So this is really important for secured email receipts, or for private communications with, like, tech support. Anything that you basically don’t want other people reading. This is all from a blog post out of New York City called Flora Posidonia: FloraPosidonia.xyz. Check that out at some point if you wanna see this in practice in the wild.

For developers, WP PGP Encrypted Emails features a general-purpose API for cryptographic operations using familiar WordPress plugin hooks. What I mean by that is that the plugin itself uses the same hooks that it makes available to other plugins. And that means with as few as about four lines of PHP you actually, as a developer, can build PGP or S/MIME encryption into your own plugins and themes. So you can see here we’re getting the user object, we’re applying the wp_openpgp_user_key filter to that user object to get the key itself, and then we’re using openpgp_encrypt with the message and the public key to get an encrypted message. You can now, using PHP, send this variable over the Internet or in an email or anywhere you want, and it’s that PGP message block instead of the plain text content.
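[Those four-or-so lines boil down to something like this. The filter names come straight from the talk, though you should double-check the argument order against the plugin’s readme; the recipient address is hypothetical:

    <?php
    // Fetch the recipient and ask WP PGP Encrypted Emails for their saved
    // OpenPGP public key via the plugin's filter hook.
    $user   = get_user_by( 'email', 'alice@example.com' );
    $pubkey = apply_filters( 'wp_openpgp_user_key', $user );

    // Encrypt a message to that key. $encrypted ends up as the ASCII-armored
    // "BEGIN PGP MESSAGE" block shown earlier in the demo.
    $encrypted = apply_filters( 'openpgp_encrypt', 'Hello from the demo site!', $pubkey );
    ?>

]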

Okay, so that was GPG and OpenPGP signing and encryption for WordPress. I got ten minutes, so I’m going to hold questions. And at this point we’re going to go on to centralized authentication service using OpenLDAP for WordPress.

Now, as before, centralized authentication services with LDAP, anyone use this already? Sound familiar to anybody? No? All right, let’s start with LDAP. So that stands for the Lightweight Directory Access Protocol. It is an open vendor-neutral industry standard application protocol for accessing and maintaining distributed information services. So what that means, for our purposes at the moment, is that an LDAP database, which is called a Directory Information Tree or a DIT, can store user account login details like usernames and passwords and email addresses and phone numbers and this kind of thing, in an application-independent way so that any app that can speak LDAP can actually use the LDAP store as its user database.

Fun fact: LDAP was written by Tim Howes. He was the CTO and founder of a company called Opsware that I worked at for a while and that’s now HP Server Automation suite, HPSA, for those of you actually working in Enterprise.

Let’s take a step back for a moment, though, and talk about what this might look like—a website’s system might look like—without LDAP at all. You have a website. It’s running WordPress. We’ll call it YourSite.com, and one of your users, we’ll call them alice, logs into the site. So to successfully log in to that site, WordPress first checks its wp_users table for an entry that matches Alice’s account credentials. If those credentials exist and they match the ones submitted by the user, then Alice is successfully logged in, everything is okay, you get the Dashboard, or you get the homepage, it all looks fine. In this setup the user’s account information is stored by WordPress, for WordPress, and is only available to WordPress, so we call that application-specific data.

Now let’s imagine that you want to add another app to your network. Maybe you have an intranet, right, and you want to add Nextcloud. This is kind of a Google Docs replacement. You could, and most organizations that I’ve seen typically do, just tell Alice that, y’know, they now have two user accounts. They have a WordPress account, right? And they have a completely separate account for Nextcloud. In my experience, this causes a lot of problems. Among other issues, it means that users now must manage two user accounts: two passwords, two user profiles, independently. Most users will probably choose the same password and the same profile information on both systems, but once they change their password on one system, the other system isn’t informed, and that leads to confusion, not to mention a lot of help desk tickets.

So this is a classic problem that LDAP is designed to solve. Now, with an LDAP server, you can store account details in a way where you can provide a Centralized Authentication Service, also called a CAS, for any LDAP-capable application that you choose to add to your network. So now, regardless of which app server Alice logs into, their account credentials are always the same. And when they change their password on the one side, say WordPress, they can immediately use their new password to log in to Nextcloud because that authentication check is happening in one central place, which is that LDAP server at the top.
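[To make the “one central place” idea concrete, here is a minimal sketch, in plain PHP, of what any LDAP-capable app can do when Alice logs in. The server address, base DN, and entry layout are assumptions for the example, not anything WP-LDAP requires:

    <?php
    // Connect to the central directory and try to bind (authenticate) as Alice.
    $ldap = ldap_connect( 'ldap://127.0.0.1:389' );
    ldap_set_option( $ldap, LDAP_OPT_PROTOCOL_VERSION, 3 );

    $dn       = 'uid=alice,dc=yoursite,dc=com';  // where Alice's entry lives (assumed)
    $password = $_POST['password'] ?? '';        // whatever Alice typed into the login form

    if ( @ldap_bind( $ldap, $dn, $password ) ) {
        // The directory accepted the credentials: log Alice in to this app.
    } else {
        // The directory rejected the credentials: show a login error instead.
    }
    ?>

]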

So, let’s see how you can configure this using—whoops, that was too fast—using, WordPress. We’re going to go back to our WordPress demo and I’m going to try to run through this a little bit quickly, I apologize because I’m running a little short on time. But what I have here is another install of Nextcloud. This is a brand-new Nextcloud instance that’s running on the same machine as the other one. And I’m going to go ahead and create a new admin account. If Nextcloud is familiar to any of you this will look pretty familiar because it’s basically just as I did before. It’s a completely blank Nextcloud instance, without anything pre-loaded. So no users, no files, no nothing.

So this is Nextcloud, for those of you who haven’t seen it. It looks a lot like Google Docs. Y’know, you can upload files, you can download files, you can open up text files. You can share photos, that kind of thing. You can take a look at the users database here. There’s just one user, the admin, there’s nothing else. And that is just blank Nextcloud. Now what we also need, of course, is the wpnyc.demo—oh, yeah, hello—we need the plugin, WP-LDAP, which is…. There are a lot of LDAP plugins, but mine is the one called just WP-LDAP, by me, here. It’s pretty small and pretty new, because it’s not very well-known. But I’m going to go ahead and install it. And what this is going to let me do is, when I go to Network Activate it, I will have a new option under my Network Settings called LDAP Settings.

Unfortunately, I need to be using HTTPS to manage it because this is kinda sensitive. So we’re gonna go ahead and do that like this. Password. There we go, LDAP settings. Now, I’m not going to show this because I don’t have time, but I also have an LDAP server running on port 389 on that same machine. Installing an LDAP server, if you’re a server admin, is usually as simple as “sudo apt install slapd” (slapd is the standalone LDAP daemon). For the time being, I’m gonna have to skip that.

I set it up so that it has a Bind DN, which is basically like the user account that you’re using to administer the directory. This is basically the same as the MySQL user. Y’know how you have a MySQL database and your WordPress website needs to know what the login credentials for the database are? Same exact procedure. In this case, it’s not a MySQL database, it’s an LDAP database. So the syntax is a little bit different. But in general, it looks something like this. DC equals wpnyc, and DC stands for “domain component,” so this is the dots, right? Instead of dot com or dot demo, I’m using DC equals. And “CN” is “common name.”

Let’s go ahead and actually double-check that that is correct, because I don’t want it to not be. Okay, so we logged in over there, and I’m just going to copy and paste this from my notes to make double-sure that I have this right. So, I’m using an LDAP search tool on a command line, asking the host, which is localhost in this case, using external authentication, meaning the OS itself, for the config common name, and I want the OLC, or online configuration, root DN. The root DN is basically the superuser. You know how WordPress has “Super Admin”? This is Super Admin for LDAP. And sure enough it’s cn admin, dc wpnyc and demo. So that’s right. And then the base is just the end here, the same: wpnyc, demo. You can change this if you want, but it is effectively the same. So, for example, if you wanted to use a different directory tree you could do OU equals people, and this is, like, basically choosing the table. Where in the database do you actually want to store what we’re about to put? In our case, ’cause this is a simple demo, we’ll just put it at the root over there. And that’s it. We save that change.
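[The double-check being run here is an ldapsearch against the local slapd’s cn=config database. On a stock Debian or Ubuntu install it looks roughly like this, though the exact invocation depends on how your server was set up:

    sudo ldapsearch -H ldapi:/// -Y EXTERNAL -b cn=config -LLL olcRootDN olcSuffix

]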

Now your WordPress can talk to LDAP. What does that mean? It means if we create a new user here—let’s say we’re going to make a new user on our Network, and we’ll call it testldap, and it’s gonna be testldap@wpnyc.demo. We’re gonna add that user. Hopefully, if I got that right—no, I don’t want notifications—we can now configure Nextcloud to add LDAP integration, too. Now, Nextcloud ships with LDAP integration so we don’t even have to write a plugin for this. We’re gonna go to the Apps page on Nextcloud, enable the LDAP user and group backend plugin. We’ll now go to the admin screen for LDAP. We’ll go to LDAP and AD integration, and we’re gonna give it the same details that we gave WordPress. So, 127.0.0.1. Nextcloud has some nice JavaScript that can detect the port. The user DN was cn equals admin, domain component wpnyc, domain component demo, I believe. And the password was password. Let’s see if that, yup. And detect base DN. There we go. And test base DN. We look good.

So we’re gonna hit “Continue.” And now you can see we’ve got three entries available. These are inetOrgPersons. A user account in LDAP is an inetOrgPerson, an Internet organization person. We can verify this: we found 1 user. We’re gonna continue. And here you can say how you want them to log in: username only, or username and email? We can go with either. This is basically, y’know, log in with your email, log in with your username. Just like WordPress. And if we do testldap here. Yeah, “User found and settings verified.” Continue on, and we’re good. So now, when we go to the users screen, you’ll see another one. And there you go. Now we can log in with this user.

So for instance, let’s say—actually, I can’t log in with this user, because I don’t know their password.

[laughs]

But, if we edit their password and set it, let’s set it to something simple like password. Yeah, confirm that password. We’ll update that. We’ll log out of Nextcloud as the admin, and we’ll log back in as the testldap user. And there we go. We didn’t have to make a user account on Nextcloud because we made one on WordPress, and now, no matter how many apps you add to your intranet or your site, you have one authentication store for all of your users. This is really useful for, for example, employees inside of a company. Taking one out, deleting them from the LDAP store, will remove their access to all your apps. It’s also portable, so you can transfer it from, y’know, one app to another, as long as the apps you’re using can speak LDAP. And, with WP-LDAP, WordPress can. So that was WP-LDAP on WordPress.

Pro-tips on this: it’s built for Multi-network, not just Multisite installs. So if you’re not familiar with WordPress Multisite, read about that. Once you’ve read about that, check out the article, again on the WordPress Codex, on Multi-network. This is a Network of WordPress Networks, and WP-LDAP works with that, too. You can set different LDAP servers for different networks, so you can do things like network segmentation or perhaps round-robin load balancing; it’s kind of up to you. It’s also already aware of the WP PGP Encrypted Emails plugin, so if you use both of those and your users supply an S/MIME certificate, that will get sent over to LDAP, and that will allow you to do transparent email encryption for things that are configured for that, such as iPhones in a BYOD or Bring-Your-Own-Device environment. Microsoft Outlook supports this out of the box as well. All of these features are standard protocols, available for free, that you can use on WordPress sites of any size and that you never have to pay for, if you don’t want to. You could. I wouldn’t. All of this, of course, is RFC 2798 compliant, so any consumer that speaks LDAP—Apple Contacts, iOS, Mozilla Thunderbird Address Book—can give you a people directory, and you can actually have, like, email autocomplete lookups for all of your employees or mailing list subscribers or anything like that.

All right, that was Centralized Authentication Service for LDAP. I don’t know if we have time for Q&A?

>> SCOTT BECKER: We can do a five minute Q&A.

>> MAYMAY: Very short Q&A. I realize that was a lot of information very quickly as well. Yeah.

>> AUDIENCE MEMBER: I run an NGO, a non-profit organization, and we have an account for Google Apps, because they gave it to us for free. Can I use LDAP instead of Google?

>> MAYMAY: Oh, yeah. Google uses LDAP behind the scenes. So, this is what they’re using. Right, like, there’s not a difference in the technology between Google, Facebook, and this. It’s just a matter of whether or not they put the branding and the sheen on it to make sure that you feel like you’re using their thing, as opposed to the standard thing.

>> AUDIENCE MEMBER: Can you use PGP encryption with Google?

>> MAYMAY: Oh, yeah! I do all the time. Yeah.

>> AUDIENCE MEMBER: Does this address the problem where WordPress sends out emails and it looks like spam? Like, in GMail, it goes to the spam folder, but with Sendgrid and some of these other guys it doesn’t? Does that fix that?

>> MAYMAY: Sadly, no. So the question was whether these plugins fix email looking like spam from WordPress. And unfortunately, they don’t. They don’t, for a number of reasons, which we can talk about maybe later if you want, but the short answer is no. The longer answer is these plugins might even make you look a little more suspicious, only because the big email providers would rather you use their thing.

All right, so, any last question about that? All right.

In case it wasn’t clear, all these plugins are freely available on the WordPress plugin repository today. Here are their permalinks. I’m gonna put up these slides somewhere so you can take a look at them on your time and hopefully we’ve got the recording at some point as well.

Again, my name is maymay. My homepage is maymay.net. It might be down. If it is, try again in a little bit. Again, I get DDoS’ed a lot. At maymay.net the very top link is “Download my digital business card.” Click it to download and import my vCard into your contact app, and that’s all I got. Thank you so much for your time and attention, everyone.

My 2009 essay kinda-sorta about an Anarchist “Internet of Things”

I wrote an essay in 2009 about the Internet of Things, before people were calling it “the Internet of Things.” When I re-read it this afternoon, in 2017, I noticed something rather queer. It wasn’t actually about the Internet of Things at all. It was actually a personal manifesto advocating Anarchism, and condemning techno-capitalist fascism.

Yes, really.

In 2009, despite having barely turned 25 years old, I had already been working as a professional web developer for a little over a decade. (That arithmetic is correct, I assure you.) At the time, I had some embarrassingly naïve ideas about Silicon Valley, capitalism, and neoliberalism. I also had no idea that less than two years later, I’d be homeless and sleeping in Occupy encampments, and that I’d remain (mostly) happily houseless and jobless for the next six years, up to and including the time of this writing.

The story of my life during those two years is a story worth telling…someday. Today, though, I want to remind myself of who I was before. I was a different person in some very important ways when 2009 began. I was so different that by the time it ended I began referring to my prior experiences as “my past life,” and I’ve used the same turn of phrase ever since. But I was also not so different that, looking back on myself with older eyes, I can clearly see the seeds of my anti-capitalist convictions had already begun to germinate and root themselves somewhere inside me.

Among the many other things that I was in my past life, I was an author. I’ve always loved the art of the written word. The creativity I saw in, and the pleasure I derived from, written scripts drew me to my appreciation for computer programming. That is its own story, as well, but the climax of that trajectory—at least by 2009—is that I was employed as a technical writer. I blogged about Web development on a freelance basis for an online magazine. I had already co-authored and published significant portions of my first technical book. And, in 2009, I had just completed co-authoring a second.

That second book was called, plainly enough, Advanced CSS, and was about the front-end Web development topic more formally known as Cascading Style Sheets. But that’s not interesting. At least, no more interesting than any other fleeting excitement over a given technical detail. What’s arguably most revealing about that book is the essay I contributed, which for all intents and purposes is the book’s opening.

My essay follows in its entirety:

User agents: our eyes and ears in cyberspace

A user agent is nothing more than some entity that acts on behalf of users themselves.1 What this means is that it’s important to understand these users as well as their user agents. User agents are the tools we use to interact with the wealth of possibilities that exists on the Internet. They are like extensions of ourselves. Indeed, they are (increasingly literally) our eyes and ears in cyberspace.

Understanding users and their agents

Web developers are already familiar with many common user agents: web browsers! We’re even notorious for sometimes bemoaning the sheer number of them that already exist. Maybe we need to reexamine why we do that.

There are many different kinds of users out there, each with potentially radically different needs. Therefore, to understand why there are so many user agents in existence we need to understand what the needs of all these different users are. This isn’t merely a theoretical exercise, either. The fact is that figuring out a user’s needs helps us to present our content to that user in the best possible way.

Presenting content to users and, by extension, their user agents appropriately goes beyond the typical accessibility argument that asserts the importance of making your content available to everyone (though we’ll certainly be making that argument, too). The principles behind understanding a user’s needs are much more important than that.

You’ll recall that the Web poses two fundamental challenges. One challenge is that any given piece of content, a single document, needs to be presented in multiple ways. This is the problem that CSS was designed to solve. The other challenge is the inverse: many different kinds of content need to be made available, each kind requiring a similar presentation. This is what XML (and its own accompanying “style sheet” language, XSLT) was designed to solve. Therefore, combining the powerful capabilities of CSS and XML is the path we should take to understanding, technically, how to solve this problem and present content to users and their user agents.

Since a specific user agent is just a tool for a specific user, the form the user agent takes depends on what the needs of the user are. In formal use case semantics, these users are called actors, and we can describe their needs by determining the steps they must take to accomplish some goal. Similarly, in each use case, a certain tool or tools used to accomplish these goals defines what the user agent is in that particular scenario.2

A simple example of this is that when Joe goes online to read the latest technology news from Slashdot, he uses a web browser to do this. Joe (our actor) is the user, his web browser (whichever one he chooses to use) is the user agent, and reading the latest technology news is the goal. That’s a very traditional interaction, and in such a scenario we can make some pretty safe assumptions about how Joe, being a human and all, reads news.

Now let’s envision a more outlandish scenario to challenge our understanding of the principle. Joe needs to go shopping to refill his refrigerator and he prefers to buy the items he needs with the least amount of required driving due to rising gas prices. This is why he owns the (fictional) Frigerator2000, a network-capable refrigerator that keeps tabs on the inventory levels of nearby grocery stores and supermarkets and helps Joe plan his route. This helps him avoid driving to a store where he won’t be able to purchase the items he needs.

If this sounds too much like science fiction to you, think again. This is a different application of the same principle used by feed readers, only instead of aggregating news articles from web sites we’re aggregating inventory levels from grocery stores. All that would be required to make this a reality is an XML format for describing a store’s inventory levels, a bit of embedded software, a network interface card on a refrigerator, and some tech-savvy grocery stores to publish such content on the Internet.

In this scenario, however, our user agent is radically different from the traditional web browser. It’s a refrigerator! Of course, there aren’t (yet) any such user agents out crawling the Web today, but there are a lot of user agents that aren’t web browsers doing exactly that.

Search engines like Google, Yahoo!, and Ask.com are probably the most famous examples of users that aren’t people. These companies all have automated programs, called spiders, which “crawl” the Web indexing all the content they can find. Unlike humans and very much like our hypothetical refrigerator-based user agent, these spiders can’t look at content with their eyes or listen to audio with their ears, so their needs are very different from someone like Joe’s.

There are still other systems of various sorts that exist to let us interact with web sites and these, too, can be considered user agents. For example, many web sites provide an API that exposes some functionality as web services. Microsoft Word 2008 is an example of a desktop application that you can use to create blog posts in blogging software such as WordPress and MovableType because both of these blogging tools support the MetaWeblog API, an XML-RPC3 specification. In this case, Microsoft Word can be considered a user agent.

As mentioned earlier, the many incarnations of news readers that exist are another form of user agent. Many web browsers and email applications, such as Mozilla Thunderbird and Apple Mail, do this, too.4 Feed readers provide a particularly interesting way to examine the concept of user agents because there are many popular feed reading web sites today, such as Bloglines.com and Google Reader. If Joe opens his web browser and logs into his account at Bloglines, then Joe’s web browser is the user agent and Joe is the user. However, when Joe reads the news feeds he’s subscribed to in Bloglines, the Bloglines server goes to fetch the RSS- or Atom-formatted feed from the sourced site. What this means is that from the point of view of the sourced site, Bloglines.com is the user, and the Bloglines server process is the user agent.

Coming to this realization means that, as developers, we can understand user agents as an abstraction for a particular actor’s goals as well as their capabilities. This is, of course, an intentionally vague definition because it’s technically impossible for you, as the developer, to predict the features or capabilities present in any particular user agent. This is a challenge we’ll be talking about a lot in the remainder of this book because it is one of the defining characteristics of the Web as a publishing medium.

Rather than this lack of clairvoyance being a problem, however, the constraint of not knowing who or what will be accessing our published content is actually a good thing. It turns out that well-designed markup is also markup that is blissfully ignorant of its user, because it is solely focused on describing itself. You might even call it narcissistic.

Why giving the user control is not giving up

Talking about self-describing markup is just another way of talking about semantic markup. In this paradigm, the content in the fetched document is strictly segregated from its ultimate presentation. Nevertheless, the content must eventually be presented to the user somehow. If information for how to do this isn’t provided by the markup, then where is it, and who decides what it is?

At first you’ll no doubt be tempted to say that this information is in the document’s style sheet and that it is the document’s developer who decides what that is. As you’ll examine in detail in the next chapter, this answer is only mostly correct. In every case, it is ultimately the user agent that determines what styles (in which style sheets) get applied to the markup it fetches. Furthermore, many user agents (especially modern web browsers) allow the users themselves to further modify the style rules that get applied to content. In the end, you can only influence—not control—the final presentation.

Though surprising to some, this model actually makes perfect sense. Allowing the users ultimate control of the content’s presentation helps to ensure that you meet every possible need of each user. By using CSS, content authors, publishers, and developers—that is, you—can provide author style sheets that easily accommodate, say, 80 percent of the needs of 90 percent of the users. Even in the most optimistic scenario, edge cases that you may not ever be aware of will still escape you no matter how hard you try to accommodate everyone’s every need.5 Moreover, even if you had those kinds of unlimited resources, you may not know how best to improve the situation for that user. Given this, who better to determine the presentation of a given XML document that needs to be presented in some very specific way than the users with that very specific need themselves?

A common real-life example of this situation might occur if Joe were colorblind. If he were and he wanted to visit some news site where the links in the article pullouts were too similar a color to the pullout’s background, he might not realize that those elements are actually links. Thankfully, because Joe’s browser allows him to set up a web site with his own user style sheet, he can change the color of these links to something that he can see more easily. If CSS were not designed with this in mind, it would be impossible for Joe to personalize the presentation of this news site so that it would be optimal for him.

To many designers coming from traditional industries such as print design, the fact that users can change the presentation of their content is an alarming concept. Nevertheless, this isn’t just the way the Web was made to work; this is the only way it could have worked. Philosophically, the Web is a technology that puts control into the hands of users. Therefore, our charge as web designers is to judge different people’s needs to be of equal importance, and we can’t do this if we treat every user exactly the same way.6

  1. This is purposefully a broad definition because we’re not just talking about web pages here, but rather all kinds of technology. The principles are universal. There are, however, more exacting definitions available. For instance, the W3C begins the HTML 4 specification with some formal definitions, including what a “user agent” is. See http://www.w3.org/TR/REC-html40/conform.html. []
  2. In real use cases, technical jargon and specific tools like a web browser are omitted because such use cases are used to define a system’s requirements, not its implementation. Nevertheless, the notion of an actor and an actor’s goals are helpful in understanding the mysterious “user” and this user’s software. []
  3. XML-RPC is a term referring to the use of XML files describing method calls and data transmitted over HTTP, typically used by automated systems. It is thus a great example of a technology that takes advantage of XML’s data serialization capabilities, and is often thought of as a precursor to today’s Ajax techniques. []
  4. It was in fact the much older email technology from which the term user agent originated; an email client program is more technically called a mail user agent (MUA). []
  5. As it happens, this is the same argument open source software proponents make about why such open source software often succeeds in meeting the needs of more users than closed source, proprietary systems controlled solely by a single company with (by definition) relatively limited resources. []
  6. This philosophy is embodied in the formal study of ethics, which is a compelling topic for us as CSS developers, considering the vastness of the implications we describe here. []

Ethics Refactoring: An experiment at the Recurse Center to address an ACTUAL crisis among programmers

Ethics Refactoring session, part 1

Ethics Refactoring session, part 2

I’ve been struggling to find meaningful value from my time at the Recurse Center, and I have a growing amount of harsh criticism about it. Last week, in exasperation and exhaustion after a month of taking other people’s suggestions for how to make the most out of my batch, I basically threw up my hands and declared defeat. One positive effect of declaring defeat was that I suddenly felt more comfortable being bolder at RC itself; if things went poorly, I’d just continue to distance myself. Over the weekend, I tried something new (“Mr. Robot’s Netflix ‘n’ Hack”), and that went well. Last night, I tried another, even more new thing. It went…not badly.

Very little of my criticism about RC is actually criticism that is uniquely applicable to RC. Most of it is criticism that could be levied far more harshly at basically every other institution that claims to provide an environment to “learn to code” or to “become a dramatically better programmer.” But I’m not at those other institutions, I’m at this one. And I’m at this one, and not those other ones, for a reason: Recurse Center prides itself on being something very different from all those other places. So it’s more disappointing and arguably more applicable, not less, that the criticisms of RC that I do have feel equally applicable to those other spaces.

That being said, because no other institution I’m aware of is structured quite like the Recurse Center is, the experiments I tried out this week after declaring a personal “defeat” would not even be possible in another venue. That is a huge point in RC’s favor. I should probably write a more thorough and less vague post about all these criticisms, but that post is not the one I want to write today. Instead, I just want to write up a bit about the second experiment that I tried.

I called it an “ethics refactoring session.” The short version of my pitch for the event read as follows:

What is the operative ethic of a given feature, product design, or implementation choice you make? Who is the feature intended to empower or serve? How do we measure that? In “Ethical Refactoring,” we’ll take a look at a small part of an existing popular feature, product, or service, analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service from a different ethical perspective and see how this affects our development process and design choices.

Basically, I want there to be more conversations among technologists that focus on why we’re building what we’re building. Or, in other words:

Not a crisis: not everybody can code.

Actually a crisis: programmers don’t know ethics, history, sociology, psychology, or the law.

https://twitter.com/bmastenbrook/status/793104148732469248

Here’s an idea: before we teach everybody to code, how about we teach coders about the people whose lives they’re affecting?

https://twitter.com/bmastenbrook/status/793104080214392832

Ethics is one of those things that are hard to convince people with power—such as most professional programmers, especially the most “successful” of them—to take seriously. Here’s how Christian Rudder, one of the founders of OkCupid and a very successful Silicon Valley entrepreneur, views ethics and ethicists:

Interviewer: Have you thought about bringing in, say, like an ethicist to, to vet your experiments?

Christian Rudder: To wring his hands all day for a hundred thousand dollars a year?

Interviewer: Well, y’know, you could pay him, y’know, on a case by case basis, maybe not a hundred thousand a year.

CR: Sure, yeah, I was making a joke. No we have not thought about that.

The general attitude that ethics are just, like, not important is of course not limited to programmers and technologists. But I think it’s clear why this is more an indictment of our society writ large than it is any form of sensible defense for technologists. Nevertheless, this is often used as a defense, anyway.

One of the challenges inherent in doing something that no one else is doing is that, well, no one really understands what you’re trying to do. It’s unusual. There’s no role model for it. Precedent for it is scant. It’s hard to understand unfamiliar things without a lot of explanation or prior exposure to those things. So in addition to the above short pitch, I wrote a longer explanation of my idea on the RC community forums:

Hi all,

I’d like to try an experiment that’s possibly a little far afield from what many folks might be used to. I think this would be a lot more valuable with involvement from the RC alumni community, so I’m gonna make a first attempt this upcoming Tuesday, November 1st, at 6:30pm (when alumni are welcome to stop by 455 Broadway).

And what is this experiment? I’m calling it an “Ethics Refactoring” session.

In these sessions, we’ll take a look at a small part of an existing popular feature, product, or service that many people are likely already familiar with (like the Facebook notification feed, the OkCupid “match percentage” display, and so on), analyze its UX flow/implementation/etc. from the point of view of different users, and discuss the ethical considerations and assumptions implicit in the developer’s design choices. Next we’ll choose a different ethic to accentuate and re-design the same feature/product/service taking a different ethical stance and see how this affects our development process and design choices.

This isn’t about “right” or “wrong,” “better” or “worse,” nor is it about making sure everyone agrees with everyone else about what ethic a given feature “should” prioritize. Rather, I want this to be about:

  • practicing ways of making the implicit values decisions process that happens during product/feature development and implementation more explicit,
  • gaining a better understanding of the ethical “active ingredient” in a given feature, product design, or implementation choice, and
  • honing our own communication skills (both verbally and through our product designs) around expressing our values to different people we work with.

I know this sounds a bit vague, and that’s because I’ve never done anything like this and don’t exactly know how to realize the vision for a session like the one that’s in my head. My hope is that something like the above description is close enough, and intriguing enough, to enough people (and particularly to the alumni community) that y’all will be excited enough to try out something new like this with me.

Also, while not exactly what I’m talking/thinking about, one good introduction to some of the above ideas in a very particular area is at the http://TimeWellSpent.io website. Take a moment to browse that site if the above description leaves you feeling curious but wary of coming to this. :)

I think “Ethics Refactoring” sessions could be useful for:

  • getting to know fellow RC’ers who you may not spend much time with due to differences in language/framework/platform choice,
  • gaining insight into the non-obvious but often far-reaching implications of making certain design or implementation choices,
  • learning about specific technologies by understanding their non-technological effects (i.e., learning about a class of technologies by starting at a different place than “the user manual/hello world example”), and
  • having what are often difficult and nuanced conversations with employers, colleagues, or even less-technical users for which understanding the details of people’s life experiences as well as the details of a particular technology is required to communicate an idea or concern effectively.

-maymay

And then when, to my surprise, I got a lot more RSVPs than I’d expected, I further clarified:

I’m happy to note that there are 19(!!!) “Yes” RSVP’s on the Zulip thread, but a little surprised because I did not have such a large group in mind when I conceived this. Since this is kind of an experiment from the get-go, I think I’m going to revise my own plan for facilitating such a session to accommodate such a relatively large group and impose a very loose structure. I also only allotted 1 hour for this, and with a larger group we may need a bit more time?

With that in mind, here is a short and very fuzzy outline for what I’m thinking we’ll do in this session tomorrow:

  • 5-10min: Welcome! And a minimal orientation for what we mean when we say “ethic” for the purpose of this session (as in, “identify the operative ethic of a given feature”). Specifically, clarify the following: an “ethic” is distinct from and not the same thing as an “incentive structure” or a “values statement,” despite being related to both of those things (and others).
  • 15-20min: Group brainstorm to think of and list popular or familiar features/products/services that are of a good size for this exercise; “Facebook” is too large, “Facebook’s icon for the Settings page” is too small, but “Facebook’s notification stream” is about right. Then pick two or three from the list that the largest number of people have used or are familiar with, and see if we can figure out what those features’ “operative ethics” can reasonably be said to be.
  • 15-20min: Split into smaller work-groups to redesign a given feature; your work-groups may work best if they consist of people who 1) want to redesign the same given feature as you and 2) want to redesign to highlight the same ethic as you. I.e., if you want to redesign Facebook’s notification stream to highlight a given ethic, group with others who want to work both on that feature AND toward the same ethic. (It is okay if you have slight disagreements or different goals than your group-mates; the point of this session is to note how ethics inform the collaborative process, not to produce a deliverable or to write code that implements a different design.)
  • 10-15min: Describe the alternate design your group came up with to the rest of the participants, and ask/answer some questions about it.

This might be a lot to cram into 1 hour with 19+ people, but I really have no idea. I’m also not totally sure this will even “work” (i.e., translate well from my head to an actual room full of people). But I guess we’ll know by tomorrow evening. :)

The session itself did, indeed, attract more attendees than I was originally expecting. (Another good thing about Recurse Center: the structure and culture of the space makes room for conversations like these.) While I tried to make sure we stuck to the above outline, we didn’t actually stick strictly to it. Instead of splitting into smaller groups (which I still think would have been a better idea), we stayed in one large group; it’s possible that 1 hour is simply not enough time. Or I could have been more forceful in facilitating. I didn’t really want to be, though; I was doing this partially to suss out who I didn’t yet know “in the RC community” who I could mesh with as much as I was doing it to provide a space for the current RC community to have these conversations or expose them to a way of thinking about technology that I regularly practice already.

The pictures attached to this post are a visual record of the two whiteboards’ “final” results from the conversation. The first is simply a list of features (“brainstorm to think of and list popular features”), and included:

  • Facebook’s News Feed
  • Yelp recommendation engine
  • Uber driver rating system
  • Netflix auto-play
  • Dating site messaging systems (Tinder “match,” OkCupid private messages, Bumble “women message first”)

One pattern that kept recurring throughout the session was that people seemed reticent or confused at the beginning of each block (“what do you mean ethics are different from values?” and “I don’t know if there are any features I can think of with these kinds of ethical considerations”), and yet by the end of each block we had far, far more relevant examples to analyze than we actually had time to discuss. I think this clearly reveals how under-discussed and under-appreciated this aspect of programming work really is.

The second picture shows an example of an actual “ethical refactoring” exercise. The group of us chose to use Uber’s driver rating system as the group exercise, because most of us were familiar with it and it was a fairly straightforward system. I began by asking folks how the system presented itself to them as passengers, and then drawing simplified representations of the screens on the whiteboard. (That’s what you see in the top-left of the second attached image.) Then we listed out some business cases/reasons for why this feature exists (the top-right of the second attached image), and from there we extrapolated some larger ethical frameworks by looking for patterns in the business cases (the list marked “Ethic???” on the bottom-right of the image).

By now, the group of us had vastly different ideas about not only why Uber did things a certain way, but also about what a given change someone suggested to the system would do, and the exercise stalled a bit. I think this in itself revealed a pretty useful point: a design choice you make with the intention of having a certain impact may actually feel very different to different people. This sounds obvious, but actually isn’t.

Rather than summarize our conversation, I’ll end by listing a few take-aways that I think were important:

  • Ethics is a systems-thinking problem, and cannot be approached piecemeal. That is, you cannot make a system “ethical” by minor tweaks, such as by adding a feature here or removing a feature there. The ethics of something is a function of all its components and the interactions between them, both technical and non-technical. The analogy I used was security: you cannot secure an insecure design by adding a login page. You have to change the design, because a system is only as secure as its weakest link.
  • Understand and appreciate why different people might look at exactly the same implementation and come away feeling like a very different operative ethic is the driving force of that feature. In this experimental session, one of the sticking points was the way in which Uber’s algorithm for rating drivers was considered either to be driven by an ethic of domination or an ethic of self-improvement by different people. I obviously have my own ideas and feelings about Uber’s rating system, but the point here is not that one group is “right” and the other group is “wrong,” but rather that the same feature was perceived in a very different light by different sets of people. For now, all I want to say is notice and appreciate that.
  • Consider that second-order effects will reach beyond the system you’re designing and impact people who are not direct users of your product. This means that designers should consider the effects their system has not just on their product’s direct user base, but also on the people who can’t, won’t, or just don’t use their product, too. Traditionally, these groups of people are either ignored or actively “converted” (think how “conversions” means “sales” to business people), but there are a lot of other reasons why this approach isn’t good for anyone involved, including the makers of a thing. Some sensitivity to the ecosystem in which you are operating is helpful to the design process, too (think interoperability, for example).
  • Even small changes to a design can massively alter the ethical considerations at play. In our session, one thing that kept coming up about Uber’s system is that a user who rates a driver has very little feedback about how that rating will affect the driver. A big part of the discussion we had centered on questions like, “What would happen if the user would be shown the driver’s new rating in the UI before they actually submitted a given rating to a given driver?” This is something people were split about, both in terms of what ethic such a design choice actually mapped to as well as what the actual effect of such a design choice would be. Similar questions popped up for other aspects of the rating system.
  • Consider the impact of unintended, or unexpected, consequences carefully. This is perhaps the most important take-away, and also one of the hardest things to actually do. After all, the whole point of an analysis process is that it analyzes only the things that are captured by the analysis process. But that’s the rub! It is often the unintentional byproducts of a successful system, rather than its intentional direct results, that have the strongest impact, whether good or bad. As a friend of mine likes to say, “Everything important is a side-effect.” This was made very clear through the exercise simply by virtue of the frequency and ease with which a suggestion by one person often prompted a different person to highlight a likely scenario in which that same suggestion could backfire.

I left the session with mixed feelings.

On the one hand, I’m glad to have had a space to try this out. I’m pleased and even a little heartened that it was received so warmly, and I’m equally pleased to have been approached by numerous people afterwards who had a lot more questions, suggestions, and impressions to share. I’m also pleased that at no point did we get too bogged down in abstract, philosophical conversations such as “but what are ethics really?” Those are not fruitful conversations. Credit to the participants for being willing to try something out of the ordinary, and potentially very emotionally loaded, and doing so with grace.

On the other hand, I’m frustrated that these conversations seem perpetually stuck in places that I feel are elementary. That’s not intended as a slight against anyone involved, but rather as an expression of loneliness on my part, and the pain at being reminded that these are the sorts of exercises I have been doing by myself, with myself, and largely for myself for long enough that I’ve gotten maddeningly more familiar with doing them than anyone else that I regularly interact with. If I had more physical, mental, and emotional energy, and more faith that RC was a place where I could find the sort of relationships that could feasibly blossom into meaningful collaborations with people whose politics were aligned with mine, then I probably would feel more enthused that this sort of thing was so warmly received. As it stands though, as fun and as valuable as this experiment may have been, I have serious reservations about how much energy to devote to this sort of thing moving forward, because I am really, really, really tired of making myself the messenger, or taking a path less traveled.

Besides, I genuinely believe that “politicizing techies” is a bad strategy for revolution. Or at least, not as good a strategy as “technicalizing radicals.” And I’m just not interested in anything short of revolution. ¯\_(ツ)_/¯

In Memoriam

Today is the fourth anniversary of Len Sassaman’s passing. Len was a gifted programmer, he was a passionate privacy advocate—Len pioneered and maintained the Mixmaster anonymous remailer software for many years—and he was a very, very kind person. He was also a friend.

Len was the first person to walk me through setting up OTR (encrypted chat), and one of the only people I have ever known of his awesome caliber who was nevertheless able to make you feel comfortable asking what were obviously “newbie” questions.

A lot has happened in the last four years. Len’s passing lit a fire under me, personally. I couldn’t have gotten to where I am today, both in terms of practical skills and in terms of philosophical approach, without the brief but powerful influence Len had on me. I’m not the person who knew him best, but I miss him all the same.

I’ve got nowhere near the expertise he had, and if it weren’t for him, I might have let that stop me. Thanks to him, I’m not. It’s slow going, but I’m still moving forward.

Tonight, I’m publishing a small, simple utility script called remail.sh that makes it just a little bit easier to use an anonymous remailer system such as the kind he maintained. It’s not much, but hopefully it can serve as a reminder that privacy is a timeless human need, and that it needs people like Len to support it as much as people like Len need supporters like us.

Len Sassaman (b. 1980 d. July 3, 2011). I miss you.

Turn your Android phone into a full fledged programming environment

These days, mobile phones are basically computers. And not just any computer. If you have a smartphone, then it's the same kind of computer as a regular ol' laptop. Sure, the two look different, but once you get "under the hood" they look and feel remarkably similar.

My mission, which I chose to accept, was to see if I could turn my Android phone into a fully fledged web development console. Lo and behold, I could. And it's not even that hard, but I did have to do some digging.

That's because searching the 'net for phrases like "web development on Android" mostly returns information on how to code and debug websites for mobile browsers, rather than how to use mobile phones as your environment for developing websites. Once I figured out which tools were suited for the task (and my personal tastes), though, everything else fell into place.

Read the full post.


Easy template injection in JavaScript for userscript authors, plugin devs, and other people who want to fuck with Web page content

The Predator Alert Tool for Twitter is coming along nicely, but it's frustratingly slow going. It's extra frustrating for me because ever since telling corporate America and its project managers to go kill themselves, I've grown accustomed to an utterly absurd speed of project development. I know I've only been writing and rewriting code for just under two weeks (I think—I honestly don't even know or care what day it is), but still.

I think it also feels extra slow because I'm learning a lot of new stuff along the way. That in and of itself is great, but I'm notoriously impatient. Which I totally consider a virtue because fuck waiting. Still, all this relearning and slow going has given me the opportunity to refine a few techniques I used in previous Predator Alert Tool scripts.

Here's one technique I think is especially nifty: template injection.

Read more

Play “The Legend of Zelda: The Wind Waker” on Mac OS X 10.6 using Dolphin with heat distortion fix

If you’re running Mac OS X 10.6 Snow Leopard but wanted to play The Legend of Zelda: The Wind Waker, you probably ran into this very annoying “heat distortion” bug:

This issue actually crops up in two ways throughout Wind Waker. The more common way is that all of the flame effects result in doubling rather than distortion. The “copy” ends up down and to the right quite a bit, making it awkward at best. More troublesome is the way extreme heat in Dragon Roost Island is rendered. It results in severe screen tearing and detracts from the strong atmosphere presented by the game.

[…]

AR codes offer a partial resolution to the screen tearing issue shown in this animation loop.

That’s because, prior to Dolphin version 4.0-593, there was a bug in the way texture maps were loaded. The fix is thankfully very simple: two lines of corrected hexadecimal arithmetic, courtesy delroth. Sadly, the last version of Dolphin that runs on Mac OS X 10.6 Snow Leopard is 3.5. Later versions require Mac OS X 10.7 Lion or greater.

Of course, one could upgrade one’s operating system and install a brand new Mac OS X. But why pay for an operating system upgrade when what you want to fix is an arithmetic mistake? And a simple one, at that.

Instead, I forked Dolphin, backported delroth’s fix,1 and rebuilt my Dolphin. And in the spirit of the free software code that let me do this without having to buy an OS upgrade from Apple (because, again, fuck capitalism), I’m making my build available for any other Mac OS X 10.6 (Intel) users who want to calm the great and wonderful Valoo on Dragon Roost Island without looking at frustrating screen tearing. :)

This Dolphin 3.5 WindWaker bugfix rebuild should run on any Mac with an Intel processor, but I’ve only tested it on my own system, of course. If you have trouble with the binary, consider building your own Dolphin for Mac OS X, too.

That way, when people ask you why you’re playing video games all day, you can tell them, “Because everything is in everything, so playing video games can teach me about C compilers.”

  1. Wikipedia explains backporting well. []

Quick and Dirty: Clone Custom Field, Template Linked Files on Movable Type

Movable Type is a pretty frustrating platform to work with because every so often (or, “way too often,” depending on who you ask) a function of the system simply doesn’t do what you’d expect it to do. Such is the case with the “Clone Blog” functionality. Although it dutifully copies most of a website from one “blog” object to another, a few things are missing.

Most notably, custom fields and templates’ linked files are not copied. This is a deal-breaker for any large installation that uses the built-in MT “Clone” feature.

To get around this limitation, I wrote a stupid, quick ‘n’ dirty PHP script to finish the cloning process, called finishclone.php. It takes only 1 argument: the “new ID” you are cloning to. If all goes well, you’ll see output like this:

[root@dev www]$ php finishclone.php 28
Cloning process complete.
[root@dev www]$ 

In this example, 28 is the newly created blog’s ID. The blog you want to clone from is set as a constant within the script. I’ll leave modifying the script to support more flexible command line arguments as an exercise for the reader.

<?php
/**
 * Ease the final steps in cloning a Movable Type blog.
 *
 * Description:   This script should be run after Movable Type's "Clone Blog"
 *                function has completed and before the cloned blog is used.
 * 
 * Author:        "Meitar Moscovitz" <meitar@maymay.net>
 */

// Set constants.
define('MT_ORIG_BLOG', 0); // the ID of the blog you are cloning from
define('MYSQL_HOST', 'localhost');
define('MYSQL_USER', 'movabletype');
define('MYSQL_PASS', 'PASSWORD_HERE');
define('MYSQL_DB', 'movabletype');

// Get command line arguments.
if (2 > $_SERVER['argc']) { die('Tell me the ID of the blog to clone into.'); }

$blog_id = (int) $argv[1];

// Connect to db
if ( !mysql_pconnect( MYSQL_HOST, MYSQL_USER, MYSQL_PASS ) ) {
	die( 'Connection to the database has failed: ' . mysql_error( ) );
}
mysql_select_db( MYSQL_DB );

// Clone custom fields.
$result = mysql_query('SELECT * FROM mt_field WHERE field_blog_id='.MT_ORIG_BLOG.';');
while ($row = mysql_fetch_object($result)) {
	mysql_query(
		sprintf("INSERT INTO mt_field ("
			."field_basename,"
			."field_blog_id,"
			."field_default,"
			."field_description,"
			."field_name,"
			."field_obj_type,"
			."field_options,"
			."field_required,"
			."field_tag,"
			."field_type) "
			."VALUES('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s');",
			mysql_real_escape_string($row->field_basename),
			mysql_real_escape_string($blog_id),
			mysql_real_escape_string($row->field_default),
			mysql_real_escape_string($row->field_description),
			mysql_real_escape_string($row->field_name),
			mysql_real_escape_string($row->field_obj_type),
			mysql_real_escape_string($row->field_options),
			mysql_real_escape_string($row->field_required),
			mysql_real_escape_string($row->field_tag),
			mysql_real_escape_string($row->field_type)
		)
	) OR print mysql_error() . "\n";
}

// Link template files to filesystem.
$arr = array();
$result = mysql_query('SELECT template_name,template_linked_file FROM mt_template WHERE template_blog_id='.MT_ORIG_BLOG.';');
while ($row = mysql_fetch_object($result)) {
	$arr[$row->template_name] = $row->template_linked_file;
}
foreach ($arr as $k => $v) {
	// Escape values before interpolating them into the query, as above.
	mysql_query(sprintf("UPDATE mt_template SET template_linked_file='%s' WHERE template_blog_id=%d AND template_name='%s';",
		mysql_real_escape_string($v), $blog_id, mysql_real_escape_string($k)
	)) OR print mysql_error() . "\n";
}

print "Cloning process complete.\n";

Cross-post: Edenfantasys’s unethical technology is a self-referential black hole

This entry was originally published at my other blog. I’m cross-posting it here in order to make sure it gets copied to more servers, as some people have suggested I’ll face a cease and desist order for publishing it in the first place. Please help distribute this important information by freely copying and republishing this post under the conditions of my CC-BY-NC-ND license: provide me with attribution and a (real) back link, and you are free to republish an unaltered version of this post wherever you like. Thanks.

A few nights ago, I received an email from the Editor of EdenFantasys’s SexIs Magazine, Judy Cole, asking me to modify this Kink On Tap brief I published that cites Lorna D. Keach’s writing. Judy asked me to “provide attribution and a link back to” SexIs Magazine. An ordinary enough request soon proved extraordinarily unethical when I discovered that EdenFantasys has invested a staggering amount of time and money to develop and implement a technology platform that actively denies others the courtesy of link reciprocity, a courtesy on which the ethical Internet is based.

While what they’re doing may not be illegal, EdenFantasys has proven itself to me to be an unethical and unworthy partner, in business or otherwise. Its actions are blatantly hypocritical, as I intend to show in detail in this post. Taking willful and self-serving advantage of those not technically savvy is a form of inexcusable oppression, and none of us should tolerate it from companies who purport to be well-intentioned resources for a community of sex-positive individuals.

For busy or non-technical readers, see the next section, Executive Summary, to quickly understand what EdenFantasys is doing, why it’s unethical, and how it affects you whether you’re a customer, a contributor, or a syndication partner. For the technical reader, the Technical Details section should provide ample evidence in the form of a walkthrough and sample code describing the unethical Search Engine Optimization (SEO) and Search Engine Marketing (SEM) techniques EdenFantasys, aka. Web Merchants, Inc., is engaged in. For anyone who wants to read further, I provide an Editorial section in which I share some thoughts about what you can do to help combat these practices and bring transparency and trust—not the sabotage of trust EdenFantasys enacts—to the market.

EXECUTIVE SUMMARY

Internet sex toy retailer Web Merchants, Inc., which bills itself as the “sex shop you can trust” and does business under the name EdenFantasys, has implemented technology on their websites that actively interferes with contributors’ content, intercepts outgoing links, and alters republished content so that links in the original work are redirected to themselves. Using techniques widely acknowledged as unethical by Internet professionals and that are arguably in violation of major search engines’ policies, EdenFantasys’s publishing platform has effectively outsourced the task of “link farming” (a questionable Search Engine Marketing [SEM] technique) to sites with which they have “an ongoing relationship,” such as AlterNet.org, other large news hubs, and individual bloggers’ blogs.

Articles published on EdenFantasys websites, such as the “community” website SexIs Magazine, contain HTML elements crafted to look like links that aren’t actually links. When visited by a typical human user, a program written in JavaScript and included as part of the web pages is automatically downloaded and intercepts clicks on these “link-like” elements, fetching their intended destination from the server and redirecting users there. Due to the careful and deliberate implementation, the browser’s status bar is made to appear as though the link is legitimate and a destination is provided as expected.

For non-human visitors, including automated search engine indexing programs such as Googlebot, the “link” remains non-functional, making the article a search engine dead-end or “orphan” page whose only functional links are those whose destination is EdenFantasys’s own web presence. This makes EdenFantasys’s website(s) a self-referential black hole that provides no reciprocity for contributors who author content, nor for any website ostensibly “linked” to from article content. At the same time, EdenFantasys editors actively solicit inbound links from individuals and organizations through “link exchanges” and incentive programs such as “awards” and “free” sex toys, as well as syndicating SexIs Magazine content such that the content is programmatically altered in order to create multiple (real) inbound links to EdenFantasys’s websites after republication on their partners’ media channels.

How EdenFantasys’s unethical practices have an impact on you

Regardless of who you are, EdenFantasys’s unethical practices have a negative impact on you and, indeed, on the Internet as a whole.

See for yourself: First, log out of any and all EdenFantasys websites or, preferably, use a different browser, or even a proxy service such as the Tor network for greater anonymity. Due to EdenFantasys’s technology, you cannot trust that what you are seeing on your screen is what someone else will see on theirs. Next, temporarily disable JavaScript (read instructions for your browser) and then try clicking on the links in SexIs Magazine articles. If clicking the intended off-site “links” doesn’t work, you know that your article’s links are being hidden from Google and that your content is being used for shady practices. In contrast, with JavaScript still disabled, navigate to another website (such as this blog), try clicking on the links, and note that the links still work as intended.

Here’s another verifiable example from the EdenFantasys site showing that many other parts of Web Merchants, Inc. pages, not merely SexIs Magazine, are affected as well: With JavaScript disabled, visit the EdenFantasys company page on Aslan Leather (note, for the sake of comparison, the link in this sentence will work, even with JavaScript off). Try clicking on the link in the “Contact Information” section in the lower-right hand column of the page (shown in the screenshot, below). This “link” should take you to the Aslan Leather homepage but in fact it does not. So much for that “link exchange.”

(Click to enlarge.)

  • If you’re an EdenFantasys employee, people will demand answers from you regarding the unethical practices of your (hopefully former) employer. While you are working for EdenFantasys, you’re seriously soiling your reputation in the eyes of ethical Internet professionals. Ignorance is no excuse for the lack of ethics on the programmers’ part, and it’s a shoddy one for everyone else; you should be aware of your company’s business practices because you represent them and they, in turn, represent you.
  • If you’re a partner or contributor (reviewer, affiliate, blogger), while you’re providing EdenFantasys with inbound links or writing articles for them and thereby propping them up higher in search results, EdenFantasys is not returning the favor to you (when they are supposed to be doing so). Moreover, they’re attaching your handle, pseudonym, or real name directly to all of their link farming (i.e., spamming) efforts. They look like they’re linking to you and they look like their content is syndicated fairly, but they’re actually playing dirty. They’re going the extra mile to ensure search engines like Google do not recognize the links in articles you write. They’re trying remarkably hard to make certain that all roads lead to EdenFantasys, but none lead outside of it; no matter what the “link,” search engines see it as stemming from and leading to EdenFantasys. The technically savvy executives of Web Merchants, Inc. are using you without giving you a fair return on your efforts. Moreover, EdenFantasys is doing this in a way that preys upon people’s lack of technical knowledge—potentially your own as well as your readership’s. Do you want to keep doing business with people like that?
  • If you’re a customer, you’re monetarily supporting a company that essentially amounts to a glorified yet subtle spammer. If you hate spam, you should hate the unethical practices that lead to spam’s perpetual reappearance, including the practices of companies like Web Merchants, Inc. EdenFantasys’s unethical practices may not be illegal, but they are unabashedly a hair’s width away from it, just like many spammers’. If you want to keep companies honest and transparent, if you really want a “sex shop you can trust,” this is relevant to you because EdenFantasys is not it. If you want to purchase from a retailer that truly strives to offer a welcoming, trustworthy community for those interested in sex positivity and sexuality, pay close attention and take action. For ideas about what you can do, please see the “What you can do” section, below.
  • If you’ve never heard about EdenFantasys before, but you care about a fair and equal-opportunity Internet, this is relevant to you because what EdenFantasys is doing takes advantage of non-tech-savvy people in order to slant the odds of winning the search engine game in their favor. They could have done this fairly, and I personally believe that they would have succeeded. Their sites are user-friendly, well-designed, and solidly implemented. However, they chose to behave maliciously by not providing credit where credit is due, failing to follow through on agreements with their own community members and contributors, and sneakily utilizing other publishers’ web presences to play a very sad zero-sum game that they need not have entered in the first place. In the Internet I want, nobody takes malicious advantage of those less skilled than they are because their own skill should speak for itself. Isn’t that the Internet and, indeed, the future you want, too?

TECHNICAL DETAILS

What follows is a technical exploration of the way the EdenFantasys technology works. It is my best-effort evaluation of the process in as much detail as I can manage within strict self-imposed time constraints. If any of this information is incorrect, I’d welcome any and all clarifications provided by the EdenFantasys CTO and technical team in an appropriately transparent, public, and ethical manner. (You’re welcome—nay, encouraged—to leave a comment.)

Although I’m unconvinced that EdenFantasys understands this, it is the case that honesty is the best policy—especially on the Internet, where everyone has the power of “View source.”

The “EF Framework” for obfuscating links

Article content written by contributors on SexIs Magazine pages is published after all links are replaced with a <span> element bearing the class of linklike and a unique id attribute value. This apparently happens across any and all content published by Web Merchants, Inc.’s content management system, but I’ll be focusing on Lorna D. Keach’s post entitled SexFeed: Anti-Porn Activists Now Targeting Female Porn Addicts for the sake of example.

These fake links look like this in HTML:

And according to Theresa Flynt, vice president of marketing for Hustler video, <span class="linklike" ID="EFLink_68034_fe64d2">female consumers make up 56% of video sales.</span>

This originally published HTML is what visitors without JavaScript enabled (and search engine indexers) see when they access the page. Note that the <span> is not a real link, even though it is made to look like one. (See Figure 1; click it to enlarge.)

Figure 1:

In a typical user’s browser, when this page is loaded, a JavaScript program is executed that mutates these “linklike” elements into <a> elements, retaining the “linklike” class and the unique id attribute values. However, no value is provided in the href (link destination) attribute of the <a> element. See Figure 2.

Figure 2:

The JavaScript program is downloaded in two parts from the endpoint at http://cdn3.edenfantasys.com/Scripts/Handler/jsget.ashx. The first part, retrieved in this example by accessing the URI at http://cdn3.edenfantasys.com/Scripts/Handler/jsget.ashx?i=jq132_cnf_jdm12_cks_cm_ujsn_udm_stt_err_jsdm_stul_ael_lls_ganl_jqac_jtv_smg_assf_agrsh&v_14927484.12.0, loads the popular jQuery JavaScript framework as well as custom code called the “EF Framework”.

The EF Framework contains code called the DBLinkHandler, an object that parses the <span> “linklike” elements (called “pseudolinks” in the EF Framework code) and retrieves the real destination. The entirety of the DBLinkHandler object is shown in code listing 1, below. Note the code contains a function called handle that performs the mutation of the <span> “linklike” elements (seen primarily on lines 8 through 16) and, based on the prefix of each element’s id attribute value, two key functions (BuildUrlForElement and GetUrlByUrlID, whose signatures are on lines 48 and 68, respectively) interact to set up the browser navigation after responding to clicks on the fake links.

var DBLinkHandler = {
    pseudoLinkPrefix: "EFLink_",
    generatedAHrefPrefix: "ArtLink_",
    targetBlankClass: "target_blank",
    jsLinksCssLinkLikeClass: "linklike",
    handle: function () {
        var pseudolinksSpans = $("span[id^='" + DBLinkHandler.pseudoLinkPrefix + "']");
        pseudolinksSpans.each(function () {
            var psLink = $(this);
            var cssClass = $.trim(psLink.attr("class"));
            var target = "";
            var id = psLink.attr("id").replace(DBLinkHandler.pseudoLinkPrefix, DBLinkHandler.generatedAHrefPrefix);
            var href = $("<a></a>").attr({
                id: id,
                href: ""
            }).html(psLink.html());
            if (psLink.hasClass(DBLinkHandler.targetBlankClass)) {
                href.attr({
                    target: "_blank"
                });
                cssClass = $.trim(cssClass.replace(DBLinkHandler.targetBlankClass, ""))
            }
            if (cssClass != "") {
                href.attr({
                    "class": cssClass
                })
            }
            psLink.before(href).remove()
        });
        var pseudolinksAHrefs = $("a[id^='" + DBLinkHandler.generatedAHrefPrefix + "']");
        pseudolinksAHrefs.live("mouseup", function (event) {
            DBLinkHandler.ArtLinkClick(this)
        });
        pseudolinksSpans = $("span[id^='" + DBLinkHandler.pseudoLinkPrefix + "']");
        pseudolinksSpans.live("click", function (event) {
            if (event.button != 0) {
                return
            }
            var psLink = $(this);
            var url = DBLinkHandler.BuildUrlForElement(psLink, DBLinkHandler.pseudoLinkPrefix);
            if (!psLink.hasClass(DBLinkHandler.targetBlankClass)) {
                RedirectTo(url)
            } else {
                OpenNewWindow(url)
            }
        })
    },
    BuildUrlForElement: function (psLink, prefix) {
        var psLink = $(psLink);
        var sufix = psLink.attr("id").toString().substring(prefix.length);
        var id = (sufix.indexOf("_") != -1) ? sufix.substring(0, sufix.indexOf("_")) : sufix;
        var url = DBLinkHandler.GetUrlByUrlID(id);
        if (url == "") {
            url = EF.Constants.Links.Url
        }
        var end = sufix.substring(sufix.indexOf("_") + 1);
        var anchor = "";
        if (end.indexOf("_") != -1) {
            anchor = "#" + end.substring(0, end.lastIndexOf("_"))
        }
        url += anchor;
        return url
    },
    ArtLinkClick: function (psLink) {
        var url = DBLinkHandler.BuildUrlForElement(psLink, DBLinkHandler.generatedAHrefPrefix);
        $(psLink).attr("href", url)
    },
    GetUrlByUrlID: function (UrlID) {
        var url = "";
        UrlRequest = $.ajax({
            type: "POST",
            url: "/LinkLanguage/AjaxLinkHandling.aspx",
            dataType: "json",
            async: false,
            data: {
                urlid: UrlID
            },
            cache: false,
            success: function (data) {
                if (data.status == "Success") {
                    url = data.url;
                    return url
                }
            },
            error: function (xhtmlObj, status, error) {}
        });
        return url
    }
};
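To make the id parsing concrete, here is a brief walk-through of BuildUrlForElement for the example element above. The annotations are mine and are not part of the EF Framework code:

// Given a "pseudolink" with id="EFLink_68034_fe64d2" and prefix "EFLink_":
// sufix  = "68034_fe64d2"          (the prefix is stripped off)
// id     = "68034"                 (everything before the first underscore)
// url    = GetUrlByUrlID("68034")  // synchronous XHR POST to AjaxLinkHandling.aspx
// end    = "fe64d2"                (no further underscore, so no "#anchor" is appended)
// result: the real destination URL, revealed only once the XHR succeeds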

Once the mutation is performed and all the content “links” are in the state shown in Figure 2, above, event listeners have been bound to them: a mouseup handler on the generated anchors and a click handler on any remaining “pseudolink” <span>s. The binding is delegated to the document by the live function bundled with the EF Framework’s copy of jQuery, defined on line 2,280 of the (de-minified) jsget.ashx program and shown in Code Listing 2, here:

        live: function (G, F) {
            var E = o.event.proxy(F);
            E.guid += this.selector + G;
            o(document).bind(i(G, this.selector), this.selector, E);
            return this
        },

At this point, clicking on one of the “pseudolinks” triggers the EF Framework’s DBLinkHandler to call its GetUrlByUrlID function, initiating a synchronous XMLHttpRequest (XHR) to the AjaxLinkHandling.aspx server-side application. The request is an HTTP POST containing only one parameter, called urlid, whose value matches a substring of the id value of the “pseudolink.” In this example, the id attribute contains a value of EFLink_68034_fe64d2, which means that the unique ID POST’ed to the server is 68034. This is shown in Figure 3, below.

Figure 3: The XHR POST to AjaxLinkHandling.aspx, carrying the single urlid parameter (68034 in this example).

The response from the server, shown in Figure 4, is also simple. If successful, the intended destination is retrieved by the success callback inside GetUrlByUrlID (on line 79 of Code Listing 1, above) and the user is redirected to that web address, as if the link had been a real one all along. The real destination, in this case a page on CNN.com, is thereby revealed only after the XHR request returns a successful reply.

Figure 4: The JSON response, containing a status field and the real destination URL (here, a page on CNN.com).
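To make the round trip concrete, the same lookup could be reproduced from the command line with something like the following. This is a sketch: I’m assuming the endpoint is served from the same host as the article, and the response shape is inferred from the success handler in Code Listing 1 (the real destination path is elided):

curl -s -X POST --data 'urlid=68034' 'http://www.edenfantasys.com/LinkLanguage/AjaxLinkHandling.aspx'
# Inferred response shape:
# {"status":"Success","url":"http://www.cnn.com/..."}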

All of this obfuscation effectively blinds machines that are not JavaScript-capable, such as the Googlebot, preventing them from seeing or following these links. It deliberately provides no increased PageRank for the link destination (as a real link normally would) despite the destination being “linked to” from EdenFantasys’s SexIs Magazine article. While the intended destination of this example link was CNN.com, it could just as easily have been (and, in other examples, is) the blog of an EdenFantasys community member, or of anyone else linked to from a SexIs Magazine article or from any website operated by Web Merchants, Inc. that makes use of this technology.

The EdenFantasys Outsourced Link-Farm

In addition to creating a self-referential black hole with no gracefully degrading outgoing links, EdenFantasys also actively performs link-stuffing through its syndicated content “relationships,” underhandedly creating an outsourced and distributed link-farm, just like a spammer. The difference is that this spammer (Web Merchants, Inc. aka EdenFantasys) is cleverly crowd-sourcing high-value, high-quality content from its own “community.”

Articles published at SexIs Magazine are syndicated in full to other large hub sites, such as AlterNet.org. Continuing with the above example post by Lorna D. Keach, Anti-Porn Activists Now Targeting Female Porn Addicts, we can see that this content was republished on AlterNet.org on May 3rd, shortly after its original publication on EdenFantasys’s website, at http://www.alternet.org/story/146774/christian_anti-porn_activists_now_targeting_female_. However, a closer look at the HTML code of the republication shows that each and every link contained within the article points to the same destination: the original article on SexIs Magazine, as shown in Figure 5.

Figure 5: The AlterNet.org republication, in which every link within the article points back to the original SexIs Magazine article.

Naturally, these syndicated links provided to third-party sites by EdenFantasys are real and function as expected for both human visitors and the search engines indexing the content. The result is “natural,” high-value links to the EdenFantasys website from these third-party sites. EdenFantasys doesn’t merely scrounge PageRank from the sheer number of incoming links; because each link’s anchor text is different, they are also setting themselves up to match more keywords in search engine results, keywords that the original author likely did not intend to direct to them. Offering search engines the implication that EdenFantasys.com contains the content described in the anchor text, when in fact EdenFantasys merely acts as an intermediary to the information, is very shady, to say the least.

In addition to syndication, EdenFantasys employs human editors to do community outreach. These editors follow up with publishers, including individual bloggers (such as myself), and request that any references to published material “provide attribution and a link back to us,” to use the words of Judy Cole, Editor of SexIs Magazine, in an email she sent to me (see below), and presumably to many others. EdenFantasys has also been known to request “link exchanges” and to offer incentive programs encouraging bloggers to add the EdenFantasys website to their blogroll or sidebar in order to help raise both parties’ search engine rankings, when in fact EdenFantasys is not actually providing reciprocity.

More information about EdenFantasys’s unethical practices, which are not limited to technical subterfuge, can be obtained via AAGBlog.com.

EDITORIAL

It is unsurprising that the distributed, subtle, and carefully crafted way EdenFantasys has managed to crowd-source links has (presumably) remained unpenalized by search engines like Google. It is similarly unsurprising that nontechnical users such as the contributors to SexIs Magazine would be unaware of these deceptive practices, or unaware that they are complicit in promoting them.

This is no mistake on the part of EdenFantasys, nor is it a one-off occurrence. The amount of work necessary to implement the elaborate system I’ve described is far more than a rogue programmer could feasibly accomplish, far less accomplish covertly. No, this is the result of a calculated and decidedly underhanded strategy directed by top executives at Web Merchants, Inc., aka EdenFantasys.

It is unfortunate that technically privileged people would be so willing to take advantage of the technically uneducated, particularly under the guise of providing a trusted place for the community which they claim to serve. These practices are exactly the ones that “the sex shop you can trust” should in no way support, far less be actively engaged in. And yet, here is unmistakable evidence that EdenFantasys is doing literally everything it can not only to bolster its own web presence at the cost of others’, but to hide this fact from its understandably non-tech-savvy contributors.

On a personal note, I am angered that I would be contacted by the Editor of SexIs Magazine and asked to properly “attribute” and provide a link to them when it is precisely that reciprocity which SexIs Magazine would clearly deny me (and everyone else) in return. It was this request, originally received over email from Judy Cole, that sparked the investigation outlined above and enabled me to uncover this hypocrisy. The email I received from Judy Cole is republished, in full, here:

From: Judy Cole <luxuryholmes@gmail.com>
Subject: Repost mis-attributed
Date: May 17, 2010 2:42:00 PM PDT
To: kinkontap+viewermail@gmail.com
Cc: Laurel <laurelb@edenfantasys.com>

Hello Emma and maymay,

I am the Editor of the online adult magazine SexIs (http://www.edenfantasys.com/sexis/). You recently picked up and re-posted a story of ours by Lorna Keach that Alternet had already picked up:

http://kinkontap.com/?s=alternet

We were hoping that you might provide attribution and a link back to us, citing us as the original source (as is done on Alternet, with whom we have an ongoing relationship), should you pick up something of ours to re-post in the future.

If you would be interested in having us send you updates on stories that might be of interest, I would be happy to arrange for a member of our editorial staff to do so. (Like your site, by the way. TBK is one of our regular contributors.)

Thanks and Best Regards,

Judy Cole
Editor, SexIs

Judy’s email probably intended to reference the new Kink On Tap briefs that my co-host Emma and I publish, not a search result page on the Kink On Tap website. Specifically, she was talking about this brief: http://KinkOnTap.com/?p=676. I said as much in my reply to Judy:

Hi Judy,

The URL in your email doesn’t actually link to a post. We pick up many stories from AlterNet, as well as a number from SexIs, because we follow both those sources, among others. So, did you mean this following entry?

http://KinkOnTap.com/?p=676

If so, you should know that we write briefs as we find them and provide links to where we found them. We purposefully do not republish or re-post significant portions of stories and we limit our briefs to short summaries in deference to the source. In regards to the brief in question, we do provide attribution to Lorna Keach, and our publication process provides links automatically to, again, the source where we found the article. :) As I’m sure you understand, this is the nature of the Internet. Its distribution capability is remarkable, isn’t it?

Also, while we’d absolutely be thrilled to have you send us updates on stories that might be of interest, we would prefer that you do so in the same way the rest of our community does: by contributing to the community links feed. You can find detailed instructions for the many ways you can do that on our wiki:

http://wiki.kinkontap.com/wiki/Community_links_feed

Congratulations on the continued success of SexIs.

Cheers,
-maymay

At the time I wrote that reply to Judy, I was perturbed but could not put my finger on why. Her email upset me because she seemed to be suggesting that our briefs are wholesale “re-posts,” when in fact Emma and I have thoroughly discussed attribution policies and, as mentioned in my reply, settled on a number of practices to ensure we play fair, including a length limit, automated backlinking (yes, with real links; go see some Kink On Tap briefs for yourself), and clearly demarcating quotes from the source article within our editorializing. Clearly, my somewhat snarky reply betrays my annoyance.

In any event, this exchange prompted me to take a closer look at the Kink On Tap brief I wrote, at the original article, and at the cross-post on AlterNet.org. I never would have imagined that EdenFantasys’s technical subterfuge would be as pervasive as it has proven to be. It is so deeply embedded in the EdenFantasys publishing platform that I’m willing to give Judy the benefit of the doubt regarding this hypocrisy, because she doesn’t seem to understand the difference between a search query and a permalink (something any lay blogger would grok). This is apparent from her reply to my response:

From: Judy Cole <luxuryholmes@gmail.com>
Subject: Re: Repost mis-attributed
Date: May 18, 2010 4:57:59 AM PDT
[…redundant email headers clipped…]

Funny, the URL in my email opens the same link as the one you sent me when I click on it.

Maybe if you pick up one of our stories in future, you could just say something like “so and so wrote for SexIs.” ?

As it stands, it looks as if Lorna wrote the piece for Alternet. Thanks.

Judy

That is the end of our email exchange, and will be for good, unless and until EdenFantasys changes its ways. I will from this point forward endeavor never to publish links to any web property that I know to be owned by Web Merchants, Inc., including EdenFantasys.com. I will also do my best to avoid citing any and all SexIs Magazine articles from here on out, and I encourage everyone who has an interest in seeing honesty on the Internet to follow my lead here.

As some of my friends are currently contributors to SexIs Magazine, I would like all of you to know that I sincerely hope you immediately sever all ties with any and all Web Merchants, Inc. properties, suppliers, and business partners, especially because you are friends and I think your work is too important to be sullied by such a disreputable company. Similarly, I hope you encourage your friends to do the same. I understand that the economy is rough and that some of you may have business contracts bearing legal penalties for breaking them, but I urge you to nevertheless look at this as a cost-benefit analysis: the sooner you break up with EdenFantasys, the happier everyone on the Internet, including you, will be (and besides, you can lose just as much of your reputation, money, and PageRank while being happy as you can being sad).

What you can do

  • If you are an EdenFantasys reviewer, a SexIs Magazine contributor, or have any other arrangement with Web Merchants, Inc., write to Judy Cole and demand that content you produce for SexIs Magazine adheres to ethical Internet publication standards. Sever business ties with this company immediately upon receipt of any non-response, or any response that does not adequately address every concern raised in this blog post. (Feel free to leave comments on this post with technical questions, and I’ll do my best to help you sort out any l33t answers.)
  • EdenFantasys wants to stack the deck in Google. They do this by misusing your content and harvesting your links. To combat this effort, immediately remove any and all links to EdenFantasys websites and web presences from your websites. Furthermore, do not—I repeat—do not publish new links to EdenFantasys websites, not even in direct reference to this post. Instead, provide enough information, as I have done, so visitors to your blog posts can find their website themselves. In lieu of links to EdenFantasys, link to other bloggers’ posts about this issue. (Such posts will probably be mentioned in the comments section of this post.)
  • Boycott EdenFantasys: the technical prowess their website displays does provide a useful shopping experience for some people. However, that in no way obligates you to purchase from their website. If you enjoy using their interface, use it to get information about products you’re interested in, but then go buy those products elsewhere, perhaps from the manufacturers directly.
  • Watch for “improved” technical subterfuge from Web Merchants, Inc. As a professional web developer, I can identify several things EdenFantasys could do to make their unethical practices even harder to spot, and harder to stop. If you have any technical knowledge at all, even if you’re “just” a savvy blogger, you can keep a close watch on EdenFantasys and, if you notice anything that doesn’t sit well with you, speak up about it as I did. Get a professional programmer to look into things for you if you need help. Yes, you can make a difference just by remaining vigilant, as long as you share what you know and act honestly and transparently.

If you have additional ideas or recommendations regarding how more people can help keep sex toy retailers honest, please suggest them in the comments.

Update: To report website spamming or any kind of fraud to Google, use the authenticated Spam Report tool.

Update: Google provides much more information about why the kinds of practices EdenFantasys is engaged in degrade the overall web experience for you and me. Read Cloaking, sneaky Javascript redirects, and doorway pages at the Google Webmaster Tools help site for additional SEO information. Using Google’s terminology, EdenFantasys’s unethical technology is a very skilled mix of social engineering and “sneaky JavaScript redirects.”

How to work around “sorry, you must have a tty to run sudo” without sacrificing security

While working on $client’s Linux server last week, I found myself installing a cron job that ran as root. The cron job called a custom bash script that, in turn, called out to various custom maintenance tasks the client had already written. One task in particular had to run as a different user.

During testing, I discovered that the odd-ball task failed to run, and found the following error in the system log:

sudo: sorry, you must have a tty to run sudo

I traced this error to a line trying to invoke a perl command as a user called dynamic:

sudo -u dynamic /usr/bin/perl run-periodic-tasks --load 5 --randomly

A simple Google search turned up an obvious solution to the error: use visudo to disable sudo’s tty requirement, allowing sudo to be invoked from any shell lacking a tty (including cron). This would have solved my problem, but it just felt wrong, dirty, and, most troublingly, insecure.
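For reference, the quick fix I decided against would amount to something like the following sudoers change (made with visudo; shown only to illustrate the approach, and the exact scoping syntax may vary between sudo versions):

# /etc/sudoers -- the "obvious" but less secure fix: drop the tty requirement
Defaults    !requiretty
# or, scoped to the invoking user only (root, in the case of this cron job):
Defaults:root    !requiretty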

One reason sudo ships with the requiretty option enabled by default is to prevent remote users from exposing the root password over SSH. Disabling this security precaution for a simple maintenance task already running as root seemed totally unnecessary, not to mention irresponsible. Moreover, the client’s script didn’t even need a tty.

Thankfully, there’s a better way: use su --session-command and send the whole job to the background.

su --session-command="/usr/bin/perl run-periodic-tasks --load 5 --randomly" dynamic &

This line launches a new, non-login shell (typically bash) as the other user in a separate, background process and runs the command you passed using the shell’s -c option. Sending the command to the background (using &) continues execution of the rest of the cron job.
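One caveat worth a sketch: because the task now runs in the background, the cron job will not wait for it unless you ask it to. If a later step depends on the task having finished, capture the background job’s PID and wait on it (script name and paths carried over from the example above):

#!/bin/bash
# Run the user-specific task as "dynamic" without needing a tty.
su --session-command="/usr/bin/perl run-periodic-tasks --load 5 --randomly" dynamic &
su_pid=$!

# ...other root-level maintenance tasks can run here in the meantime...

# Block until the backgrounded task finishes (only if later steps need it).
wait "$su_pid"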

A process listing would look like this:

root     28109     1  0 17:10 ?        00:00:00 su --session-command=/usr/bin/perl run-periodic-tasks --load 5 --randomly dynamic
dynamic  28110 28109  0 17:10 ?        00:00:00 bash -c /usr/bin/perl run-periodic-tasks --load 5 --randomly

Note the parent process (PID 28109) is owned by root but the actual perl process (PID 28110) is being run as dynamic.

This in-script solution, which replaces sudo -u user cmd with su --session-command=cmd user, seems much better to me than weakening sudo’s more secure default configuration.