Someone on IRC inquired as to our current setup. Figured I'd describe the server architecture we're currently running, both for the record and so people can learn / give feedback / etc. As this is dw_dev, I'm going to be fairly technical. But stop me if you have questions.
We have a total of 13 servers. These servers are what are called VPS -- virtual private servers. They're hosted by Slicehost, a neat company that's been doing VPS hosting for a while now. Henceforth you will hear me refer to the servers as slices -- since they are technically slices of bigger physical machines.
Anyway -- here's the breakdown of the slices we have right now.

dfw-admin01
(512MB RAM) ... this box is the administrative box. It runs the puppetmaster for our configuration management system, runs Cacti for monitoring graphs, and serves as the distribution point for pushing out code, managing the rest of the cluster, etc.

dfw-lb01 / dfw-lb02
(256MB RAM each) ... these two machines run Perlbal right now. They are also the frontend -- the site's IP is hosted on one of these. They're configured with heartbeat for failover, so if one machine dies, the other takes over within a few seconds. (This isn't fully tested/deployed, but I'm working on it.) Static files are served by these machines.

dfw-web01 / dfw-web02
(1GB RAM each) ... as you might expect, these are the webservers. They run Apache, and that's it. All of the web requests (not static files) are served by these two slices.

dfw-jobs01
(1GB RAM) ... this box runs our TheSchwartz workers. It sends email, handles events/subscriptions/notifications, and does various other things that go through workers. Oh, and all of the imports are handled by this machine too.

dfw-memc01 / dfw-memc02
(512MB RAM each) ... as you might imagine, these are memcache nodes. They're small right now, but will grow over time.

dfw-mog01 / dfw-mog02
(256MB RAM each) ... MogileFS storage nodes. While we do not yet have this system deployed in production, we will before open beta hits. Right now MogileFS is mostly used for storing and manipulating userpics.

dfw-mail01
(256MB RAM) ... the incoming mailserver. This box just handles incoming mail. It's a separate box for security reasons, and also so we can configure it differently.

dfw-db01 / dfw-db02
(1GB RAM each) ... our databases. We run a pair of them, and they will soon be configured with MySQL replication. I haven't yet decided how to set things up for open beta -- we'll probably deploy a couple more sets of databases...
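A few of the pieces above are easier to picture with some config in front of you, so here are some sketches. First, the puppetmaster on dfw-admin01: Puppet manifests describe what should be installed and running on each node. This is a made-up minimal example, not our actual manifests -- the class name and node assignments are purely illustrative:

```puppet
# Hypothetical sketch, not our real manifests: a tiny class the
# puppetmaster on dfw-admin01 could apply to the web slices.
class dw_base {
    package { 'ntp':
        ensure => installed,
    }

    service { 'ntp':
        ensure  => running,
        require => Package['ntp'],
    }
}

node 'dfw-web01', 'dfw-web02' {
    include dw_base
}
```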
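For the lb slices, a Perlbal reverse proxy in front of the two webservers looks roughly like this. This is a generic sketch -- the internal IPs are placeholders, not our real addresses:

```
# Sketch of a Perlbal reverse-proxy config; IPs are placeholders.
CREATE POOL webservers
  POOL webservers ADD 10.0.0.11:80   # dfw-web01
  POOL webservers ADD 10.0.0.12:80   # dfw-web02

CREATE SERVICE balancer
  SET listen          = 0.0.0.0:80
  SET role            = reverse_proxy
  SET pool            = webservers
  SET persist_client  = on
  SET persist_backend = on
  SET verify_backend  = on
ENABLE balancer
```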
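To give a flavor of what dfw-jobs01 runs: a TheSchwartz worker is just a Perl class that subclasses TheSchwartz::Worker and implements work(). This is a bare illustrative stub, not one of our actual workers:

```perl
# Illustrative stub, not a real Dreamwidth worker.
package DW::Worker::ExampleMailer;
use strict;
use base 'TheSchwartz::Worker';

sub work {
    my ( $class, $job ) = @_;
    my $args = $job->arg;    # whatever the web layer queued up

    # ... do the actual work here (send mail, fire a notification, etc.) ...

    $job->completed;         # or $job->failed(...) to have it retried
}

1;
```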
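As for the db pair: MySQL master/slave replication of the sort mentioned above mostly boils down to pointing the slave at the master's binary log. A generic sketch -- the hostname, user, password, and log coordinates here are all placeholders:

```sql
-- Run on the slave (e.g. dfw-db02); all values are placeholders.
CHANGE MASTER TO
    MASTER_HOST     = 'dfw-db01',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'not-a-real-password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 98;
START SLAVE;

-- Then check that Slave_IO_Running / Slave_SQL_Running are both Yes:
SHOW SLAVE STATUS\G
```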
Anyway. That's a basic tour of what we have in terms of units of separation. There are a lot more components that go into the production cluster as far as what gets installed where and how it all works. That's beyond the scope of this post, but eventually it will get documented so other people can set up similar sites.
(PS, and because someone is going to ask: the dfw prefix is for Dallas/Ft. Worth, the data center the servers are located in. Years of working at companies with globally distributed data centers have taught me how useful it is to know where the server you're talking to is actually located...)