Lydia Leong has a great post about the question of speedy provisioning. As she says, the exciting bit about getting new hardware in place isn’t the OS and software installation. Even if you’re not virtualized, you can install a new server unattended in a couple of hours. You want to be able to do that even if you never expect to grow, because you need to be able to rebuild servers quickly if they die. This isn’t hard to manage. In a pinch, people will sell you solutions and you can get a consultant in, but it’s easier to just plan ahead.

She hits on the internal aspect: getting someone to sign off on the new servers. If we’re talking about the need to buy more capacity on short notice in our industry, we’re probably talking about launch, which means this problem isn’t so bad for us. But you’ve got to get the ducks lined up in advance. You don’t want to shock your CEO with an order on the third day of launch; she’s worrying about other stuff. Better to get the plan in writing way in advance, along with executive buyoff. Then you can tell the appropriate people you need ten more shards, get the documents signed, and get your vendor moving.

I think that’s a bit trickier than Lydia says, but I also think she’s talking about onesies/twosies. Buying one server is easy, as she notes. Buying a hundred servers for serious expansion is going to take a bit longer, because Dell and HP and IBM hate keeping too much backstock around, so they’re going to have to build those servers for you.

You can alleviate this, of course. First tactic is to let them know it’s coming. None of those companies are going to increase their inventory just for the sake of your possible buy, because you’re too small, unless you’re Blizzard. However, you can and should get some commitments around response time. You can also, and I think this is more important, find out what’s going to ship the fastest. There’s no reason why you shouldn’t take that as an input to your hardware decision matrix. If all else is equal, go with the servers that generally have the largest inventory. Or ask questions about factories: can your vendor literally build 1U servers faster than blades?

Also, make sure the vendor order process is just as quick. As with all vendors, you want your hardware sales people to be on call during the two weeks around launch. Midnight calls are very unlikely; weekend calls are more probable.

Finally, figure out how you’re going to rack and stack a hundred servers quickly. Could be your vendor’s professional services, could be some local contractor. Even if your internal staff racked the rest of the servers, it’s better not to ask them to spend the time in the machine room during launch, because you’re going to be doing a lot of other things.

None of this is really all that hard; it’s just a great example of one of the many rows you need to have your ducks in.

A couple of followups as I sip coffee and wait for various and sundry phone calls…

The OnLive claims are continuing to spark debate. Mostly of the form “sha, right, that’s not practical.” Steve Perlman responded in a BBC interview.

There’s some concrete info in there, mostly about the encoding and compression process. They’re depending on a specialized chip which’ll cost them under $20 per chip in bulk. That makes some kind of sense for the hardware console replacement, and I suppose that the Mac/PC versions will have plenty of processor available.

Aiming for sub-80 millisecond round trip ping times between the clients and the data centers is feasible, given that they’re willing to have multiple data centers.

Running 10 games per server is an interesting concept. Possibly whatever custom hardware they’re building around their specialized chip will load multiple chips on each server, such that the tricky work is offloaded from the main CPUs. If they’re planning on running large servers — something like the IBM x3850m2 — and using virtualization, then there’s enough CPU and RAM in a single server to handle that. You’re not going to get 10 games on a little dual CPU quad core 1U server, though. The lesson here: “The company has calculated that each server will be dealing with about 10 different gamers…” is a completely meaningless statement if the word server is undefined.
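
A back-of-envelope sketch of that lesson, in code. Every per-session and per-box figure below is my assumption for illustration, not anything OnLive has published; the point is just how much the answer swings with the definition of “server”:

```python
# Can one "server" host 10 game sessions? Depends entirely on the box.
# All figures below are assumptions for illustration, not OnLive's numbers.

CORES_BIG_BOX = 16       # a 4-socket quad-core box, x3850 M2 class
RAM_BIG_BOX_GB = 128
CORES_SMALL_1U = 8       # a dual-socket quad-core 1U pizza box
RAM_SMALL_1U_GB = 16

CORES_PER_SESSION = 1.5  # assumed: a game saturating a core, plus overhead
RAM_PER_SESSION_GB = 4   # assumed working set per game session

def max_sessions(cores, ram_gb):
    """Sessions per box, limited by whichever resource runs out first."""
    return int(min(cores / CORES_PER_SESSION, ram_gb / RAM_PER_SESSION_GB))

print("big box: ", max_sessions(CORES_BIG_BOX, RAM_BIG_BOX_GB))    # 10
print("small 1U:", max_sessions(CORES_SMALL_1U, RAM_SMALL_1U_GB))  # 4
```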

Thus, my concerns about cost structure remain intact for now.

Meanwhile, Dave Perry (ex-Acclaim) has his own streaming game service in the works, called Gaikai. I love his interview because he hits on one of my favorite business concepts, friction. He’s absolutely right in his discussion of the dangers of making it harder for people to play. His service also looks more flexible and requires less buy-in from studios. On the other hand, he’s saying nothing about the technical difficulties.

One last streaming tidbit: World of Warcraft streamed to an iPhone, from a streaming-oriented company called Vollee. Just a demo, which means you can’t say anything about how it performs over a 3G network, but still neat. I like their capacity for custom UI filtering to adapt to the smaller screen size.

In completely different news, two other people came up with my clever addon/plugin App Store idea, except they both thought it was an April Fool’s joke. Humph.

The great thing about pictures of bad cabling messes is that there are always worse ones. So: worse ones!

Daniel James of Three Rings (Puzzle Pirates, Whirled) made a great post with his slides from his GDC presentation. Attention alert: lots of real numbers! It’s like catnip for MMO geeks.

From a tech ops perspective, I paid lots of attention to those graphs. Page 7 is awesome. That is exactly the sort of data which should be on a graph in your network monitoring software; ideally it should be on a page with other graphs showing machine load, network load, and so on. Everything should be on the same timeline, for easy comparisons. It’s my job to tell people when we’re going to need to order new hardware; a tech ops manager should have a deep understanding of how player load affects hardware load. Hm, let’s have an example of graphing:

Cacti graphs showing network traffic and CPU utilization.

That’s Cacti, which is my favorite open source tool for this purpose right now, although it has its limitations and flaws. This particular pair of graphs shows network traffic on top and CPU utilization for one CPU of the server below; not surprisingly, CPU utilization rises along with network traffic. Data collection for CPU utilization and network traffic is built into Cacti, and it’s easy to add collection for pretty much any piece of data that can be expressed as a numeric value.
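
As a concrete illustration: a Cacti “Script/Command” data input method is just a script that prints a value, or space-separated name:value pairs, to stdout once per polling interval. Here’s a minimal sketch, where the status URL is a made-up stand-in for however your game actually exposes its concurrency:

```python
#!/usr/bin/env python3
# Minimal sketch of a Cacti "Script/Command" data input method.
# Cacti runs this each polling cycle and graphs whatever it prints.
from urllib.request import urlopen

def concurrent_players():
    # Hypothetical internal status endpoint; substitute your own source.
    return int(urlopen("http://localhost:8080/status/ccu").read())

if __name__ == "__main__":
    # Cacti parses space-separated name:value pairs from stdout.
    print(f"players:{concurrent_players()}")
```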

That sort of trend visualization also helps catch problem areas before they get bad. Does the ratio of concurrent players to memory used change abruptly when you hit a specific number of concurrent users? If so, talk to the engineers. It might be fixable. And if it isn’t, well, the projections for profitability might have just changed, in which case you’d better be talking to the financial guys. Making sure the company is making money is absolutely part of the responsibility of anyone in technical operations; some day perhaps I’ll rant about the self-defeating geek tendency to sneer at the business side of the house.
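
Here’s a toy version of that ratio check. The numbers are fabricated, and in real life you’d pull both series out of your monitoring system (Cacti’s RRD files, say) rather than hard-code them:

```python
# Watch memory-per-player as concurrency grows; flag an abrupt jump.
# Sample data is invented for illustration.

samples = [  # (concurrent_players, memory_used_gb), fabricated numbers
    (1000, 2.0), (2000, 4.1), (3000, 6.0), (4000, 8.2),
    (5000, 16.0), (6000, 19.5),  # per-player memory jumps past 4k players
]

THRESHOLD = 1.5  # flag if per-player memory grows 50% between samples

prev = None
for players, mem_gb in samples:
    mb_per_player = mem_gb * 1024 / players
    if prev is not None and mb_per_player / prev > THRESHOLD:
        print(f"ratio jump near {players} players: "
              f"{prev:.1f} -> {mb_per_player:.1f} MB/player")
    prev = mb_per_player
```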

Page 8, more of the same. The observant will notice one of the little quirks of gaming operations: peak times are afternoon to evening, and the peak days are the weekends. The Saturday peak is broader, because people can play during the day more on weekends. You might assume that browser-based games like Whirled would see more play from work, but nope, I guess not.

I wonder what those little dips on 3/17, 3/18, and 3/20 are. I don’t think Whirled is a sharded game, so that can’t be a single shard crashing. Welp, I’ll never know, but that’s a great example of the sorts of things graphs show. If those were because of crashes, you’d know without needing graphs to tell you, because your pager would go off; if it’s something else, you’d want to investigate. Could be a bug in your data collection, for that matter, but that’s bad too.

Less tech ops, but still interesting: the material on player acquisition is excellent. Read this if you want to know how to figure out the economics of a game. If I were Daniel James, I’d also want breakdowns of those retention cohorts by play time and perhaps style of play. What kinds of players stick around? Very important question. I believe strongly in the integration of billing metrics and operational metrics. That work is something that technical operations can drive if need be; all the data sources are within your control. It’s worth spending the time to whip up a prototype dashboard and pitch it to your CFO.
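
A toy sketch of that billing-plus-ops join, with invented records, just to show the shape of the question:

```python
# Bucket players into signup cohorts, then compare play time between
# the ones who stuck around and the ones who churned. All data invented.
from collections import defaultdict

players = [
    # (player_id, signup_month, hours_played_first_month, still_subscribed)
    (1, "2009-01", 40, True), (2, "2009-01", 3, False),
    (3, "2009-02", 25, True), (4, "2009-02", 1, False),
    (5, "2009-02", 30, True),
]

cohorts = defaultdict(lambda: {"retained": [], "churned": []})
for pid, month, hours, retained in players:
    cohorts[month]["retained" if retained else "churned"].append(hours)

for month, buckets in sorted(cohorts.items()):
    for kind, hour_list in buckets.items():
        avg = sum(hour_list) / len(hour_list) if hour_list else 0.0
        print(f"{month} {kind}: {len(hour_list)} players, {avg:.0f} hrs avg")
```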

Then there’s a chunk of advice on building an in-world economy that relates to the real world. Heh: it’s MMO as platform again. Whirled is built on that concept, as I understand it. That dovetails nicely with his discussion of billing. When he says “Don’t build, but use a provider,” he is absolutely correct.

I love this slideshow. In the blog post surrounding it, he talks about how he feels it’s OK to give away the numbers. There are dangers in sharing subscriber numbers and concurrencies, particularly if you’re competing in the big traditional space, but I like seeing people taking the risk. There is plenty of room in the MMO space for more players and plain old numbers are not going to be the secret sauce that makes you rich. How you get those numbers is a different story. So thanks to Daniel for this.

Speaking of streaming games, OnLive wants to implement that cloud gaming solution. 720p resolution at 60 FPS — hey, that’s really similar to what Dyack was saying would be possible, huh? I think some people are going to be disappointed, since there’s not much OnLive can do about intermediary network problems, but we’ll see.

This leaves the question of cost. I’m wondering about the cloud computing resources OnLive plans on using. Barring substantial rewrites, the cloud would need either high-end video cards or something capable of really good emulation, right? Maybe some custom hardware to provide banks of nVidia/ATI processors? You wouldn’t run this on a standard cloud, because a standard cloud doesn’t provide really good DirectX capacity.

I can’t really speculate honestly because they don’t talk about their pricing model at all. If they charge a buck an hour, then they’re making enough per year ($8,760) to pay for a single desktop-class computer outright. Assume hosting is another couple hundred bucks a month? I don’t know, because I don’t know what their hardware is. Add on headcount for ops, headcount for everything else, a percentage for the game publishers. I don’t think that really adds up well even if you amortize it out over three years. I’m also simplifying, because the majority of computer equivalents won’t be in use 24/7.
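
Since I’m guessing anyway, here’s the same back-of-envelope with the assumptions pinned down so you can argue with them individually. Note that it ignores headcount entirely:

```python
# Per-seat-year economics for a hypothetical streaming service.
# Every figure here is a guess, made explicit so it can be argued with.

HOURS_PER_YEAR = 24 * 365    # 8,760
PRICE_PER_HOUR = 1.00        # the hypothetical buck an hour
UTILIZATION = 0.25           # assumed: a seat is busy ~6 hours a day
PUBLISHER_CUT = 0.30         # assumed revenue share to game publishers

HARDWARE_COST = 2500         # assumed desktop-equivalent, amortized below
AMORTIZE_YEARS = 3
HOSTING_PER_MONTH = 200      # the "couple hundred bucks" guess

revenue = HOURS_PER_YEAR * PRICE_PER_HOUR * UTILIZATION * (1 - PUBLISHER_CUT)
costs = HARDWARE_COST / AMORTIZE_YEARS + HOSTING_PER_MONTH * 12
print(f"per seat-year: ${revenue:,.0f} revenue vs ${costs:,.0f} costs")
# -> roughly $1,533 vs $3,233 under these guesses, before any headcount.
```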

$20/month for an all-you-can-eat subscription? There are 720 hours in a month. Say we’re targeting an average of $2/hour for hours played; at that rate, $20 covers 10 hours. But average usage on an unlimited plan ought to be higher than 10 hours a month. You’re selling convenience, after all.

$10 for 24 hours in realtime with one game? That’s not going to fly with consumers.

Whoops, I speculated after all. It’s an intriguing question. Steve Perlman has a decent track record, so I can’t assume this is just a publicity bubble. He does seem to be the kind of guy who’ll spend as long as it takes to polish a product. We may be waiting longer than Winter 2009 to see this sucker.

Greg Costikyan wrote a takedown of Denis Dyack’s editorial on cloud computing and gaming. I think Costikyan’s sort of right, but the semantic errors don’t totally invalidate what Dyack’s trying to say. Even if he’s saying it poorly.

I disagree with Costikyan’s definition of cloud computing. He’s basically defining it by example as Amazon’s cloud computing offering, which allows random people to power up remote compute services in a scalable fashion. I agree that right now, there’s not much value in Amazon-style offerings for game companies. (Note: future post on this, because we ought to be thinking about that particular question for a bunch of reasons.) On the other hand, that’s not what Dyack is talking about and his definition is both broader and more accurate.

He’s talking about Google Docs — or hey, Gmail — in the gaming context. Google Docs is absolutely an example of cloud computing. It happens to be the case that the company providing the service owns the computers on which the service runs, but from our perspective, the documents and the software live out there in the cloud.

From a business-speak perspective, Costikyan is talking about IaaS: Infrastructure as a Service. Cloud computing includes IaaS, but it also includes SaaS, or Software as a Service. Google Docs is Software as a Service; it’s a full-featured program that mostly runs on servers, with a relatively lightweight client. Dyack’s talking about SaaS.

And yeah, MMOs are in fact specialized versions of SaaS. I’ve been using that line when I interview at non-gaming companies. It makes people more comfortable when I accurately categorize the last six years of my career as working on SaaS, which I find both pleasing and amusing.

On point two, yep. Linear entertainment is not a commodity. That was a cute way for Dyack to say it’s easy to pirate linear entertainment. But Dyack is right about that, even if his terminology was sloppy again.

Point three, however, is where Dyack is wrong, and it’s for exactly the reasons Costikyan outlines. The user’s already spent money on the desktop CPU. It’s less profitable for gaming companies to pay for CPUs to do work that users can already do. Not too complex.

I guess you could argue that freeing yourself from the risk of piracy is worth a certain investment in servers. I’m not sure how high that value really is. The books I’d want to read… probably PopCap’s, right? Casual browser games are SaaS. PopCap’s games migrate from browsers into standalone games as a matter of course, so that must be a profitable business decision, assuming the PopCap guys aren’t dumb.

Finally, it’s worth noting that Dyack slipped a casual “Imagine if technology allowed us simply to broadcast a video signal (games) at 60fps at 720p through a server” in there. Yeah, I can imagine that. It’s not all that close, if you assume that you don’t want network lag to affect your gameplay. And you don’t. You also want to make sure college dorms can all play your game at once without problems. Etc.
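
The dorm problem is easy to put numbers on, if you let me assume a per-stream bitrate; something like 5 Mbps is a plausible guess for low-latency 720p60 H.264, though it’s purely my assumption:

```python
# Rough bandwidth arithmetic for streamed 720p60 gameplay.
STREAM_MBPS = 5.0    # assumed compressed bitrate per player
DORM_PLAYERS = 200   # one building's worth of simultaneous players

needed = STREAM_MBPS * DORM_PLAYERS
print(f"{needed:,.0f} Mbps ({needed / 1000:.1f} Gbps) into one dorm")
# -> 1,000 Mbps: a full gigabit uplink for one building, before anyone
#    so much as watches YouTube.
```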

Blizzard has decided that they don’t want anyone making money from writing in-game addons. This isn’t too surprising. In broad strokes, you can go two ways when it comes to your game: you can try and hold onto all the potential profits yourself, or you can open up the ecosystem to others. Either direction has pros and cons.

In this case, if we’re looking for specific addons which may have prompted the action, we gotta start with Carbonite. Carbonite is basically a quest guide with a million other features baked in; it makes it easier to level. Carbonite has two features which distinguish it from older attempts at commercial addons.

One, it’s aimed at a profitable market. Quest guides and gold-making guides are real business these days — and the companies behind them get bought for real money. Nobody’s successfully selling how-to-raid guides for money —

Quick digression. Raiding is a group experience; WoW leveling is not. WoW raiders are fairly likely to have at least semi-clued friends. It’s very easy for a solo WoW player to lack such friends; thus, leveling guides have a bigger market. Interesting unanswered question: will Blizzard’s efforts to make raiding more casual result in a bigger market for commercial raiding guides? Digression ends.

— so yeah, RDX didn’t make enough money to keep the developer working on it.

Two, Carbonite went for a free/premium model, with the free version showing ads in-game. I suspect that’s a bigger reason for the change than one might think. One addon showing advertisements is no huge deal, albeit annoying. When a majority of your addons are doing it, the user experience suffers.

Carbonite’s in-game advertising. Not subtle.

However, if ads were all Blizzard was worried about, the policy would be different. Blizzard clearly wants to control the monetary space around their game, and why shouldn’t they? They created the platform; they should get to profit from it in the manner they choose.

The best example of the opposite approach is Linden Lab and Second Life. The Lindens go all in with an explicit definition of their product as a platform, which is accurate. They want to sell a basic service that third parties can build on, and their basic service is pretty well tuned for that purpose.

That approach does work. For a traditional Diku-style MMO, however, you’d open yourself up to worries about RMT; once you open the door to micropayments, people start getting agitated.

I don’t actually think that’s an attitude likely to last. I’m old enough to remember when people thought advertisements on the Web were an abomination. Heck, I’m old enough to remember when people thought the Internet should never be used for commercial purposes. We pay for tickets to sporting events, and we don’t freak out when the ticket has an advertisement on it. We pay a monthly fee for cable service, but premium channels still have advertisements.

I think by making this change Blizzard’s actually opened a few doors. Intelligent, eloquent people are making voluble arguments against the new restriction. Mostly the one about donations. A couple of popular addons are going to go away, and everyone’s going to know it’s because Blizzard said you can’t charge money/ask for donations. If you liked QuestHelper or Outfitter, there’s a decent chance you’ll be biased towards those arguments.

So while it’s probably not a reasonable transition for WoW, what if the next game Blizzard publishes comes with an iPhone-style App Store? Blizzard would get a few nice effects there. First, they’d take, say, 20% of the revenue stream. I pulled that out of my hat; if I were doing ops for Blizzard I’d run the numbers and be smarter. I don’t know if Blizzard logs the addons a player uses, but if I were in charge over there they would, so let’s assume they do. You could make a pretty good stab at the size of the stream.

Second, they’d have a lot more control over the addons available, but no more control than they wanted to have. Again, cf. Apple. There are a million low-class fart apps for the iPhone; it doesn’t reflect on Apple’s quality. On the other hand, if Blizzard wanted to screen out crap, they could.

Third, the classic problem of distributing addons could be solved. A lot of WoW players rely on addons and feel like they can’t play well without them. On a big patch day, old addons break. Addon sites tend to die under the load of millions of players trying to get addons at once. This is, like it or not, part of the WoW experience. Making it better is a relatively small win, but it’s a win.

The traditional arguments against Blizzard control of addons are workload and responsibility. Curse shows 3,727 addons. WoW Interface shows 2,122 standalone addons plus 459 in the Featured Projects section; they do sort out obsolete addons. This is not a crushing workload. It’s probably one person.

Responsibility is a bigger problem. It’s not so much responsibility to the players — they’ll understand that addon quality isn’t certified. The problem is the need to present a sane relationship to your developers. The key word I kept sneaking in up above: “platform.” WoW’s UI API has been fairly stable, but it’s also always been very clearly and aggressively prone to change. Running it as a platform doesn’t mean you can’t change it. It does mean you have to manage the community better.

In particular, it would be nice if addons didn’t potentially break every time there’s an update to the game. This is a bigger workload than screening submissions.

Still. QuestHelper is about to be cancelled. It has been downloaded, from Curse alone, over 20 million times. There have been around 100 updates, so let’s divide that 20 million by 100, assuming that every user has downloaded every update. 200,000 people have downloaded QuestHelper from one site. Maybe it costs two bucks in the hypothetical store. 20% to Blizzard is $80,000 over the course of the last two years. That’s not pure profit, of course.
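
Spelled out, under the same guesswork:

```python
# The QuestHelper back-of-envelope. The download count is from Curse;
# every other figure is an assumption.

DOWNLOADS = 20_000_000
UPDATES = 100        # assume every user re-downloaded every update
PRICE = 2.00         # hypothetical store price
BLIZZARD_CUT = 0.20

users = DOWNLOADS // UPDATES    # 200,000 "unique" users
blizzard_take = users * PRICE * BLIZZARD_CUT
print(f"{users:,} users -> ${blizzard_take:,.0f} to Blizzard over ~2 years")
# -> 200,000 users and $80,000, every step of which you could argue with.
```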

I’m cheating, because at a brief glance QuestHelper is the most downloaded addon on Curse. I’m also cheating because on the one hand, I’m being conservative and assuming that each user downloaded the addon 100 times; on the other hand, I’m assuming each download would have been a sale. Who knows? If I were Blizzard I’d have better numbers and be able to do better math. Perhaps the revenue share should be 30%. Maybe it doesn’t make sense at all.

I sort of doubt that you can entirely turn addons into a profit center. But they aren’t supposed to be a profit center — they’re a tool to make the game easier to play and more attractive. If you can make them stickier, you enhance the game, and letting people have a monetary stake in the success of your game is a marketing win.