CEO Friday: HTTPS Twitter, FourSquare, OMG about time.

David Barrett —  April 8, 2011 — 3 Comments

[Image: Even ninjas are scared.]

Security is a topic near and dear to my heart. Most of the year leading up to launch was spent making a PCI-compliant, super-redundant, ultra-secure financial transaction layer.  And can you even imagine how hard it is to train lions to fight with lightsabers at anything more than the most rudimentary level?  When the blogosphere got all atwitter because Twitter *finally* added a “default HTTPS” option, I was baffled.  And now to see TechCrunch lauding FourSquare for making a:

solid move. And not the easiest one to make. One of the main reasons that every site/service doesn’t turn it on is a simple one: it means a performance hit.

What?!  This isn’t exactly genius stuff.  Quite the opposite: it’s a long-delayed fix to a glaring security hole.  It’s an embarrassment.  We’ve been doing this from the very first day, even before it was cool.  Scratch that — it was never cool, it was just obvious.  It makes me wonder: what are these guys still missing?
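For anyone wondering what “turning it on” actually takes, here is a minimal sketch of the default-HTTPS pattern: redirect every plain-HTTP request to its HTTPS equivalent, then set an HSTS header so browsers stick with HTTPS from then on. Flask is my choice here purely for illustration; this is not how Twitter, FourSquare, or we actually implement it.

```python
# Minimal default-HTTPS sketch (illustrative only; the framework choice
# is an assumption, not any particular site's real implementation).
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # HSTS: tell browsers to use HTTPS for the next year without asking.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "hello over TLS"
```

That is the whole trick: one redirect rule and one header. The remaining work is operational (certificates, making sure all assets load over HTTPS), not conceptual.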

David Barrett

Founder of Expensify, destroyer of expense reports, and savior to frustrated employees worldwide.

3 responses to CEO Friday: HTTPS Twitter, FourSquare, OMG about time.

  1. algor:

    A split key sounds cool. But I take it that those two key keepers can never take a vacation. Also, what’s wrong with storing the key in a TPM?

  2. David Barrett:

    algor – That’s why we have three datacenters: we can lose any one, and the other two self-organize and keep trucking away. Obviously we want all three going at all times, but there’s no end-user downtime from taking one down, so it’s not super-duper critical. Plus we have processes in place so that backup key custodians can “break the glass” and restart the servers, though doing so triggers all sorts of alarm bells and audit trails. (As for TPM, the big reason is that it’s custom hardware, which complicates cloud hosting.)

    But really, it just hasn’t been a problem. Things generally go wrong when we’re actually changing things, and then we’re on hand to reboot. And most middle-of-the-night datacenter failures are just temporary network problems: those don’t require a restart, and the servers fix themselves when the network reconnects.

    Ultimately, the key to uptime is to make the servers really hard to break, and to have them fix themselves wherever possible. Then we sleep soundly, knowing we’re not on hair-trigger alert the moment anything goes wrong.
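For readers curious what the split key algor asks about might look like, here is a toy two-of-two split (my own illustration, not Expensify’s actual scheme): XOR the key with random bytes, so that neither share alone reveals anything and only both together recover it. A real deployment would more likely use a threshold scheme such as Shamir’s secret sharing, where any k of n custodians suffice, which also addresses the vacation concern.

```python
# Toy 2-of-2 key split via one-time-pad XOR (illustration only;
# not Expensify's actual key-management scheme).
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    # share1 is uniformly random, so it leaks nothing about the key;
    # share2 = key XOR share1 is therefore also uniformly random on its own.
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(k ^ s for k, s in zip(key, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    # XOR the shares back together to recover the key.
    return bytes(a ^ b for a, b in zip(share1, share2))

if __name__ == "__main__":
    key = secrets.token_bytes(32)   # e.g. an AES-256 key
    s1, s2 = split_key(key)         # hand one share to each custodian
    assert combine(s1, s2) == key   # both shares together recover it
```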

Trackbacks and Pingbacks:

  1. CEO Friday: Why You Need *At Least* Three Datacenters « Expensify Blog - April 22, 2011

    […] couple weeks back I mentioned that we put an unbelievable amount of effort into creating a secure hosting environment.  A part […]
