Rackspace Email Hosting vs. Google Apps

I’d been using Google Apps for receiving emails sent to my domain up until an hour ago. As I’ve mentioned before, I’m running my app on Slicehost, and as usual they had some great instructions for using Google Apps for your email needs.

That was working kinda ok, but there were a couple of things that annoyed me about that solution. The first is that I just don’t want Google involved in every single thing I do online. I generally trust them, but there are some things I don’t want to use them for, namely anything to do with my business (I don’t use Google Analytics either). The second is that I think it’s highway robbery to pay $50 per user per year for the premier account. I only need 2 right now, but down the line I might need more. I didn’t relish the thought of giving them $300 or $400 a year to provide a beefed-up version of their free tools.

So today I discovered that Rackspace has an email hosting solution as well. And if you’re a Slicehost customer and need 3 or fewer inboxes (that’s me!), it’s only $3/month. The normal starter package is $10/month for up to 10 inboxes, which is still totally reasonable. So in less than an hour I converted from Google Apps to Rackspace Email Hosting. And of course they have the usual helpful configuration instructions to get you started.
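On the DNS side, the switch mostly amounts to repointing the MX records for the domain. Here’s a rough sketch of what that looks like in a zone file; the hostnames and priorities are assumptions on my part, so follow their instructions for the authoritative values.

; sketch of the MX change – hostnames/priorities are assumptions, not gospel
mydomain.com.    IN  MX  10  mx1.emailsrvr.com.
mydomain.com.    IN  MX  20  mx2.emailsrvr.com.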

I have a couple of concerns that I’ll follow up on in future posts. The first is that according to the representative I chatted with there’s a limit of about 200 outgoing emails per hour. I think that’s going to be ok for my app, but I guess I’ll see. The other is that I’m pretty useless with mail configuration things and I’m a little nervous about how much effort will be involved in connecting my local postfix to their smtp server for outgoing email. I’m sure I’ll figure that out eventually though.
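For what it’s worth, my current understanding is that it boils down to telling postfix to relay through their server with SASL authentication. Here’s a minimal sketch of the main.cf changes I expect to need; the hostname and port are assumptions, so check Rackspace’s docs before copying anything.

# /etc/postfix/main.cf – minimal relay sketch; hostname/port are assumptions
relayhost = [secure.emailsrvr.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_use_tls = yes

# /etc/postfix/sasl_passwd – run "postmap /etc/postfix/sasl_passwd" after editing
[secure.emailsrvr.com]:587    user@mydomain.com:password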

In any case, for $3/month, moving back to Google won’t be a huge issue if it should come to that. Hopefully it won’t. I’ve already gotten a few small tastes of the fanatical support from Rackspace and I have to say it’s pretty nice so far.


Over-engineering is like Snoring

A lot of developer cycles are spent discussing the benefits of YAGNI and KISS. On the surface it would seem that there is an army of righteous developers fighting against the demons of over-engineering and maximum complexity. And despite our valiant battles, despite all the books and blog posts and rallying calls from respected technology professionals, the demons are still churning out bloated, impossible-to-maintain code.

I’ll let you in on a little secret. We are the enemy. Not just the guy who sits next to you, or the guy who churned out a mess of code and then left the company. You are the problem. I am the problem. The enemy is us.

Yes, we can all agree in principle that complexity is bad and simplicity is good. The problem is that complexity is completely subjective. Maybe you misjudged or were misinformed about how likely it is that a certain feature will be needed. Maybe you thought of some brilliant solution and you want to leave it as a placeholder in case you need to come back to it later (or so others can see how clever you are). Maybe you don’t want to do it the cheap way because you’re afraid others will snicker at your solution. Maybe you’re afraid a simple solution will lead to longer development times later on. Maybe your definition of simplicity is skewed. Whatever the case, no one sets out to over-complicate a piece of code. And yet it happens time and time again.

There are rules of thumb that can be followed. But what it boils down to is always discipline. It’s not easy to simplify. It sometimes feels wrong. But I’ve never looked at working code and cursed because it was too simple. I’m not even sure it’s possible for working code to be too simple. But it sure as hell is easy for it to be too complex.

So why is over-engineering like snoring? Because no one thinks they do it. And yet somehow there is a market for snoring relief aids.


Adventures in SSL – Part II: Integration Strategy

In my first post about SSL integration on my site, I discussed how I came to a decision about a certificate issuer. I chose DigiCert, and have been very happy with them. One great bonus was their extensive list of instructions for setting up the certs on almost any web server known to man. So even though Part II of this series was intended to be about installation, I think DigiCert has that covered. Their instructions for nginx were spot on, so I wouldn’t be able to add anything meaningful to them anyway.

But buying and installing the certificate is a little different than using it. This post will focus on how I integrated the certificate into the site and what additional nginx configuration I had to make to support that strategy.

After kicking it around for a while I realized I really had 2 options: I could either convert the entire site to use https or convert as few pages as possible (e.g. just the login and register pages). The argument for a limited use of https is that, all else being equal, the web server will require a little more CPU to encrypt/decrypt the https traffic. This is apparently an issue particularly with nginx, as even its creator has said it can drag down performance for high-traffic sites. Since I’m not expecting Amazon-level traffic, this wasn’t as big a deal to me.

Another argument for limiting the use of https is that some low-cost CDNs, such as Amazon CloudFront, don’t support https traffic. This was a concern for me. I will eventually want to move my images, screencasts, stylesheets, and JS files to a CDN, so the fewer https pages I have the less of an issue this would be.

Related to this, some posts I read claimed that browsers will refuse to cache images, CSS, and scripts if they come in over https. In my testing with Charles in Firefox and IE on Windows I did not experience that. In other words, any files that could be cached by the browser were cached. Yes, it was a limited test, but it covers a lot of the target base of my app. I believe either this used to be the case and no longer is, or it’s one of those old wives’ tales that people just assume is true but have never really taken the time to test.

I saw a couple of benefits to using https for the whole site. The first was that it simplified my application architecture. For instance, say you have a login page that’s intended to be served over https, but it includes a common header image that’s present on all pages. That image also has to be served over https on the login page, or the user will get a popup warning that the page contains both secure and insecure content. That message is annoying at best and scary to some users, so it’s best to avoid it by ensuring that the image is served up via https. But that means you may end up keeping 2 copies of the image so it can be served over both https and http, or your configuration might become more complex in order to support 2 virtual servers pointing at the same image file on disk. Either way it’s a complicating factor that I wasn’t thrilled about wasting time on. If the entire site is served over https, this issue goes away.

Secondly, it would be easier to configure than having only some pages served via https. For instance, let’s say the login page is https. If someone asks for that page via http, the server should be nice and redirect them to https. But for almost all other pages it should allow regular http requests to proceed normally. These exceptions are easy to handle for one or two pages, but beyond that they quickly become difficult to manage effectively.
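To make that concrete, here’s a rough sketch of what the per-page approach would have looked like in nginx. The specific paths are just examples; every secured page would need its own rule in the port-80 server.

# sketch of the per-page approach I decided against
server {
        server_name www.mysite.com mysite.com;
        listen 80;

        # each https-only page needs its own redirect rule
        location = /login    { rewrite ^ https://www.mysite.com/login permanent; }
        location = /register { rewrite ^ https://www.mysite.com/register permanent; }

        # everything else stays on plain http
        location / {
           # normal root/proxy config for the rest of the site goes here
        }
}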

Lastly, my application is targeted at kids in the 10-to-15-year-old range, so for me, the more security the better. As with any site that relies on cookies to identify logged-in users, it’s theoretically possible to hijack someone’s session via the cookie value, and if that were to happen it would lead to some seriously bad press for me. Again, if the entire site is accessed over https this issue goes away.

So as you can probably guess, I decided to serve the entire site over https. The big question I haven’t answered here is what effect this had on performance. I’ll discuss that in the final installment of this series. But for those also using nginx, below is an excerpt of the config changes I made to support this. It should be self-explanatory, but leave me a comment if you need help working through it.


# non-secure site - send all requests to https
server {

        server_name www.mysite.com mysite.com;
        listen 80;

        location / {
           rewrite ^/(.*)$ https://www.mysite.com/$1 permanent;
        }
}

# secure site
server {

        server_name www.mysite.com mysite.com;

        listen 443;
        ssl on;
        ssl_certificate /path/to/pem/file;
        ssl_certificate_key /path/to/key/file;
        .....
}


The Double-Decker Train Conductor Problem

One of the things I love about being a software developer is the fractal nature of our work. When we design a system we are almost always taking some piece of the universe and attempting to deconstruct it and model it so that it can run inside a computer. Examples of good (or bad) design are all around us, and our work demands that we draw on these examples to create a working piece of software. And software itself is nothing more than a bunch of bits and registers and some electricity that’s pretending to be more than the sum of its parts.

So I found myself reading Coders at Work on the 8:06 train the other day. I don’t usually catch the 8:06. The 8:06 is a double-decker train. And watching the conductor come through to collect our tickets I realized he represented a real-world example of a mutex.

That day there was only 1 conductor for both levels of the double-decker. It dawned on me that it would be very easy for someone to avoid having to pay by hanging out in the upper level and waiting for the conductor to collect the tickets from the lower level, then sneaking down to the lower level while the conductor moved to the upper level.

The lone conductor represented a flawed algorithm. There was no lock on the resource (exit door = I/O stream?). Adding another conductor could solve the leakage problem and lock the resource. But that would limit (or serialize) the free flow of passengers to and from the car.

I could probably go on exaggerating this example for a while, but I think you get the point.


Facebook Status Updates and Infinite Session Keys

Anyone have the first clue as to why Facebook’s developer documentation sucks so hard?

I was developing a simple Facebook application for one of my company’s clients that required me to update a user’s status via a scheduled background process. The developer documentation led me down all kinds of paths by referencing infinite session keys and the “keep me logged in” check box. So I scoured the internets for some examples, only to find that there aren’t many. Bajillions of people are supposedly creating Facebook apps, and yet not a single one of them who updates a user’s status offline can document it? ARRRGGG!

So, here is what I hope will save someone else a ton of time – a real life, working code sample for updating a user’s Facebook status offline. Careful – make no sudden moves or you might scare this rare beast back into hiding.

Our app requests two extended permissions – “offline_access” and “status_update” – and uses Elliot Haughin’s Facebook plugin for CodeIgniter. Elliot’s package includes an older version of the Facebook PHP Library, so I had to grab the latest version from Facebook and drop it in place. Other than that it was easy to integrate this into my app.

//http://wiki.developers.facebook.com/index.php/Users.hasAppPermission
//must be one of:
//   email, read_stream, publish_stream, offline_access, status_update, photo_upload, 
//   create_event, rsvp_event, sms, video_upload, create_note, share_item
if( $this->facebook_connect->client->users_hasAppPermission("offline_access", $fbUID) &&
    $this->facebook_connect->client->users_hasAppPermission("status_update", $fbUID) ){
    $this->facebook_connect->client->users_setStatus("some status message", $fbUID); 
}

Seriously, that’s it! All those posts, all that searching – for 3 lines of code! The key point that was conveniently left out of other articles is that there is no “session key” required now. Facebook is smart enough to know that the user granted the app permission for offline_access and status_update, so you only need to send the user’s Facebook ID. Moley.

Another annoyance: they make a big deal out of the fact that they provide a REST-ful interface, but none of the examples in their documentation show the format of the REST request (although they do at least provide the REST server URL and a handy hint to include the “Content-Type: application/x-www-form-urlencoded” header). Yes, I get it, you want me to use the PHP Library, which is nicely designed. But for quick and dirty testing I like to whip up some curl commands, and if I don’t know how to format the request I can’t easily do that. Bah!


NULL is NOT a valid state

I’m amazed at the amount of code I see that contains uninitialized variables. I can’t think of a bigger bang-for-the-buck habit you can pick up than taking an extra 2 seconds to properly initialize whatever variable you’ve created. Look, I know a lot of ORM layers use NULL variables as a flag to insert/select/update null values. I get it. I don’t love it, but I get it. Other than that, I can’t think of a single reason not to initialize your variables. Unless you love NullPointerExceptions, or security exploits, or constantly having to check for null before you call a method on an object you just got back, or who knows what else people have done to themselves because they were too lazy to add ” = 0;” after their variable declarations. Coding is hard enough as it is. This just makes it harder.
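A contrived PHP sketch of the difference I’m talking about (the function names are made up for illustration):

// the lazy version – $total never gets initialized
function addScores($scores) {
    foreach ($scores as $s) {
        $total += $s;        // PHP notice: undefined variable $total on the first pass
    }
    return $total;           // null if $scores is empty – every caller has to check
}

// the extra-2-seconds version
function addScoresSafely($scores) {
    $total = 0;              // explicitly initialized – NULL is not a valid state
    foreach ($scores as $s) {
        $total += $s;
    }
    return $total;           // always a number, even for an empty array
}

echo addScoresSafely(array(7, 3, 5));   // 15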


Psychotic Home Page Design Syndrome

In an earlier post I referred to the tendency of a site’s home page to speak volumes about the character and principles of the company behind it. I call this Psychotic Home Page Design Syndrome. There are a couple of great examples of this, but the ones that stand out best to me are sites like GoDaddy.com, ESPN.com, and MLB.com. Imagine you are someone coming to the home page of one of those sites looking for a very specific product. Imagine trying to find that product in the mess of boxes and links and images and ads. It’s impossible.

One might argue that these companies have tons of products, and the home page reflects the need to have their most successful products featured and touted. Exactly. Having worked at MLB.com, I can tell you exactly how this happens. You have 2 distinct business units, each with a shiny product that represents some business interest. You have 2 product managers who equate sales of their product with the size of their year-end bonus. You have endless campaigning to have your product featured on the home page, where it will get the most traffic. And you have exactly 1 CEO who doesn’t really want to make either product manager draw the short straw because, after all, it’s just pixels on a page. The end result is a mish-mash of products and services that speaks more to the internal structure of the company than to usability.

Contrast this to a company that gets a lot of fanboys/good press – 37Signals. I won’t go into details here about my thoughts on the 37Signals hype (that should tell you enough), but for a company with a good smattering of products their home page is simple and usable. In the context of what we know of the internals of their company, this makes a ton of sense.


In Praise of DigiCert

As I’ve mentioned before, if you develop web sites for a living and haven’t read High Performance Web Sites yet you should be ashamed of yourself. The book’s title unfortunately includes the words “Front-End Engineers” in it, which will cause it to be tuned out by many back-end developers. That’s a mistake on their part. The book does contain information on best practices to improve the experience of a visitor to your site, but many of these solutions require the active participation of backend developers. Other solutions are just important for backend developers to be aware of.

Around the same time the book was released the fellows at Yahoo released the Yahoo Y Slow Plugin for Firefox. It requires the Firebug plugin, which all serious web developers should have installed anyway. The plugin will give you a grade on your compliance with the rules – 0 to 100, just like grade school.

My goal is to have each page in my site score at least 90 in the Y Slow rankings (again, just like grade school). This isn’t terribly hard to do if you’re disciplined. I run a Y Slow check on my pages infrequently to verify that I’m maintaining that goal. So I was a little ticked to see the home page of WhizKidSports.com take a hit when I decided to show the DigiCert badge I purchased (see related post here).

The issue was with 2 images included by DigiCert’s JavaScript: Y Slow was complaining that neither had a far-future Expires header or ETags configured. That left my score south of 90, so I decided I’d fire off an email to DigiCert customer support asking if there was any way I could convince them to fix it on their side. I wasn’t expecting much, but figured I should give it a shot anyway. That was at 1am Sunday morning.
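For anyone unfamiliar with the rule, this is roughly what the fix looks like in nginx. It’s the kind of thing I do in my own config for static assets; I obviously have no idea what DigiCert runs on their end.

# far-future Expires header for static assets – keeps Y Slow happy and lets
# browsers cache aggressively (version your file names when the content changes)
location ~* \.(png|gif|jpg|jpeg|css|js)$ {
        expires max;
}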

Around 11am that same morning I got a response from the CTO of DigiCert, Paul Tiemann. Cool fact #1 – the CTO of DigiCert is scanning customer service emails at 11am on Sunday. Seriously.

He profusely thanked me for noticing this and suggesting it to them. Cool fact #2 – the CTO of DigiCert was willing to accept suggestions for improving their service from one of their clients. Seriously.

He got it immediately. As he pointed out, following the Y Slow rules not only helps visitors to my site, it also reduces bandwidth costs for DigiCert. So he had reconfigured the servers to address the issue. Cool fact #3 – the CTO of DigiCert is still close enough to the technology to know how to configure ETags and Expires headers on the production servers. Seriously.

I told him that I ran the site back through Y Slow and the news was good. I was back above a grade of 90. And, thanks to this tremendous example of a good business run by good people, I’m a proud DigiCert customer for life.


Adventures in SSL – Part I: Shopping Around

I wanted to do a couple of smaller posts around my efforts to obtain and make effective use of a secure certificate for WhizKidSports.com. The smaller posts will let me expand on some of the finer points where those familiar with the process might be able to give feedback.

The first task was to select an SSL issuer. I had narrowed my choices down to 2 – GoDaddy.com and InstantSSL. I was leaning towards InstantSSL until I found a chart showing the SSL issuers used by Y Combinator companies. This had some value to me because I figured those companies are generally at a similar place as mine in terms of size and technical requirements. Strangely, even after seeing that GoDaddy and Comodo (who runs InstantSSL.com) were two of the most-used issuers, I decided to go with DigiCert anyway.

In terms of GoDaddy, I generally just don’t think too highly of them. I use them for domain registration, but otherwise I tend not to trust them. They’re a little spammy, and over the years I’ve read articles and blog posts from people who have gotten the shaft because of GoDaddy’s policies and practices. Few of those articles are flattering. They also have a reputation for bargain-basement prices and a ton of questionably valuable products. That’s something of the antithesis of what I want people to think when they see a secure certificate on my site.

In terms of Comodo, I found the array of products to be a red flag. I was looking at the InstantSSL product, which seemed to suit my needs, and the price was reasonable. But something nagged at me. The only differences I could detect between it and the InstantSSL Pro product (which is $25 more per year) are telephone support and a larger warranty. Honestly, I don’t expect to need either, but the point is that I don’t tend to trust companies who invent arbitrary reasons to justify price differences between very similar products. The other research I turned up was good but not incredible, so I didn’t feel they really closed the deal on my business.

And I know this doesn’t have much of anything to do with the quality of the product, but both GoDaddy and Comodo suffer from psychotic home page design syndrome (that’s a topic for another post). In short, I’ve learned that a company’s home page is usually the best indicator of the soul of that company. Call it crazy.

Whatever the case, I finally decided on the SSL Plus certificate from DigiCert. Maybe a little more expensive, but still reasonable. The reviews I found were glowing. And once I saw their instructions for installing the certs on all major web servers – including nginx – I was sold. After I went through the typical purchase flow, a real human contacted me for some documents to verify my ownership of the domain, and as soon as I got them what they needed they issued the certificate. It all went incredibly smoothly and professionally. They even had a cool little wizard that generated the appropriate OpenSSL command to run on the command line. Not essential, but nice.
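If you’ve never done it before, what the wizard spits out is roughly the standard key-and-CSR generation incantation – something along these lines (the key size and file names here are just my assumptions):

# generate a private key and a certificate signing request to paste into the order form
openssl req -new -newkey rsa:2048 -nodes \
    -keyout www_whizkidsports_com.key \
    -out www_whizkidsports_com.csr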

So far so good with DigiCert. Next up I’ll discuss installation, which hit a few tiny snags but was also pretty painless.

(See Part II of this series here)


Request Translation

Request: Can you give me expected adoption rates of the new XYZ feature so we can estimate whether we have sufficient capacity to handle incoming traffic?

Translation: Can you pretend to understand the unknowable well enough to pull a random but superficially believable number out of your ass which I will crucify you with later on if it turns out that whatever I did to prepare was insufficient and/or people generally believe that the failure was my fault.

