CCP Yokai, the Technical Director over in EVE-land, just posted a dev blog about their new rack setup. This kind of insight into a live operation is rare, so it’s definitely worth reading. You don’t get the nitty-gritty details, but you do get a good overview.
Their hardware lives in 12 cabinets. That apparently covers the production server, the test server, and ancillary services. If you don’t know, EVE is a single-shard setup, which is really technically impressive: all 50-60K concurrent players are crowded into one world. That’s one big reason network connectivity matters so much to them, and Yokai mentions it a few times in the blog. He’s got a very high-quality network set up, probably because in a single-shard cluster, any of those 64 servers may need to talk to any other server. Compare that to an infrastructure where 10 servers make up a shard. I can’t know for sure, but it sure seems like a single shard has to be ready for far more interconnections, and the math below suggests why.
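A quick back-of-the-envelope sketch (my numbers, not the dev blog’s): if every server in a shard might talk to every other, the potential links grow quadratically with shard size.

```python
# A full mesh of n nodes has n*(n-1)/2 potential links.
# The 64-server figure is from the dev blog; 10 is my hypothetical
# shard size for comparison.
def mesh_links(n: int) -> int:
    return n * (n - 1) // 2

print(mesh_links(64))  # 2016 potential links in one 64-server shard
print(mesh_links(10))  # 45 potential links in a 10-server shard
```

That’s roughly a 45x difference in potential cross-talk, which goes a long way toward explaining the emphasis on network gear.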
He’s using blades, and the blades have a lot of RAM. IBM makes a really solid blade, by the way. The HS21 is, I think, one generation old; IBM currently sells the HS22 in that price/performance spot, but once you’ve bought a bunch of blades you don’t upgrade unless you need to. The interesting thing to me is the amount of RAM in each blade: 32GB is a fair bit. I don’t want to speculate too much, but CCP has never been shy about smart ways to use the fastest available resources, and RAM is fast. See also the big 2-terabyte SSD SAN (storage area network) he mentions.
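A bit of idle arithmetic on my part (the per-blade RAM, server count, and SAN size are from the blog; pairing them up is pure speculation):

```python
servers = 64            # server count from the dev blog
ram_per_blade_gb = 32   # per-blade RAM from the dev blog
total_gb = servers * ram_per_blade_gb
print(total_gb)  # 2048 GB -- about the same size as the 2 TB SSD SAN
```

Total cluster RAM lands right around the size of that SSD SAN. Probably a coincidence, but it’s a fun one.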
Lots of blades means lots of heat, so I’m not surprised they need a self-contained cooling system. I should talk some about the blade vs. 1U server question: blades do take up less physical space, but in practice a cabinet usually runs out of power and cooling capacity before it runs out of rack units, so the practical space savings may be small (rough sketch below). On the other hand, as noted, CCP needs the fast interconnects, and blades do help there.
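Here’s roughly what I mean, with assumed numbers throughout (chassis height, per-blade draw, and the cabinet’s power budget are all my guesses, not figures from the blog):

```python
# Rough density math for one 42U cabinet. All figures below are
# assumptions for illustration, not numbers from CCP's dev blog.
RACK_U = 42
CHASSIS_U = 9             # assumed blade chassis height
BLADES_PER_CHASSIS = 14   # assumed blades per chassis
BLADE_WATTS = 350         # assumed per-blade draw under load
CABINET_WATTS = 6000      # assumed provisioned power per cabinet

max_by_space = (RACK_U // CHASSIS_U) * BLADES_PER_CHASSIS  # 56 blades
max_by_power = CABINET_WATTS // BLADE_WATTS                # 17 blades
print(min(max_by_space, max_by_power))  # power, not rack units, is the cap
```

Once power is the bottleneck, a half-empty blade cabinet doesn’t save much floor space over 1U servers, which is the tradeoff I was gesturing at.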
Don’t miss the comment thread, either. The devs are again being very open about some of their choices, which is awfully nice of them.