I waded into the debate on game achievements with my lecture at the 2010 Game Developers Conference entitled Achievements Considered Harmful?, with a strong emphasis on the "?". Since the game industry seems to be careening headfirst into a future of larding points and medals and cute titles on players for just starting up a video game, I wanted to raise awareness of the large body of research studying the impact of various types of rewards on motivation. Trying to be "fair and balanced", I delved into what the data show and what they don't show.
Sadly, there is a contentious debate amongst psychologists about how rewards affect motivation, and I spend a bunch of time in the talk discussing this debate. Psychology is at the soft end of science, to put it mildly, and so it's easy for people—including academics—to have an agenda or opinion and interpret the data in a way that backs up that agenda or opinion. This is human nature, of course, and confirmation bias is everywhere in life, but reading some of the papers reminds me more of a schoolyard yelling contest than of peer reviewed research.
To hack my way out of the thicket, I focused on the two results that both sides seem to begrudgingly agree are true.
For interesting tasks,
- Tangible, expected, contingent rewards reduce free-choice intrinsic motivation, and
- Verbal, unexpected, informational feedback increases free-choice and self-reported intrinsic motivation.
I define all these terms in the slides below, and I'll fill out this page more as time goes on.
The first is the scary one, since it seems to have a lot in common with the ways games reward players with achievements and the like. I reiterated many times that there are no studies that I'm aware of on achievements for games, but if they really are sapping intrinsic motivation to play games—and I can't see how you can argue with the possibility this is true after looking at the mountain of available data—then I think somebody should start funding research into this. I think Microsoft Research would be the perfect people to do this work, since they have both Xbox Live (meaning a great source of data, and a vested interest in figuring out the truth for the longevity of the platform) and a bunch of smart psychologists on staff. Hopefully they'll heed my call.
Towards the end of the talk, I outline a potential Nightmare Scenario based on all the implications of the research going the wrong way for games:
- make an intrinsically interesting game, congratulations!
- use extrinsic motivators to make your game better
- destroy intrinsic motivation to play your game
- metrics fetishism pushes you towards designs where extrinsic motivation works
- BONUS: women lose even more intrinsic motivation than men do given extrinsic motivation!
Who knows whether things will actually go this way, but it seems clear to me that the potential is there, and so we should look into this more instead of blindly moving forward.
In the talk I also address a bunch of the Common Buts:
- Players like them!
- Our data shows they work!
- We make lots of money!
- Just ignore them if you don't like them!
- They show players different ways of playing!
I go through each of these in turn, trying to address the core of the point.
Finally, I talk a bit about how to Minimize the Damage, if you're forced to have achievements and rewards in your games. As you may know, I'm working on an indie spy game called SpyParty, and since some platform holders currently require you to give away achievements to pass certification, I gathered a list of ways of implementing rewards so they do less harm:
- Don’t make a big fuss about them.
- Use unexpected rewards.
- Use absolute, not relative measures.
- Use endogenous rewards.
- Make them informational, not controlling.
Again, the data show that even following this advice reduces intrinsic motivation, but it's at least something you can do. I talk about this in more detail in the slides below.
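To make that list a little more concrete, here's a rough sketch, in Python, of what a reward check following some of these guidelines might look like. Everything here is hypothetical (none of these names come from SpyParty or any platform SDK), but the idea is that the checks use absolute measures, the rewards are unexpected because no progress bar ever dangled them in front of the player, and the notification is quiet, informational text rather than a big controlling fanfare.

```python
# Hypothetical sketch only: none of these names come from SpyParty or any
# real platform SDK. Rewards are checked quietly at the end of a session,
# use absolute measures, and are reported as informational text.

def check_session_rewards(session_stats, already_awarded):
    """Return (id, message) pairs for rewards earned this session."""
    awards = []

    # Absolute measure: a fixed bar, not a position on a leaderboard relative to others.
    if session_stats["missions_completed"] >= 10 and "veteran" not in already_awarded:
        # Unexpected: the player was never shown a "7/10 missions" progress bar.
        # Informational: the text describes what happened, it doesn't dangle a prize.
        awards.append(("veteran", "You've completed ten missions."))

    # Endogenous: the reward is more of the activity itself, not points or a badge.
    if session_stats["perfect_stealth"] and "ghost" not in already_awarded:
        awards.append(("ghost", "A stealth-only mission variant is now available."))

    return awards


# Don't make a big fuss: a quiet line on the post-session screen, no mid-game popups.
stats = {"missions_completed": 12, "perfect_stealth": True}
for reward_id, message in check_session_rewards(stats, already_awarded=set()):
    print(message)
```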
Metrics Fetishism
I'm going to put up a page about this concept at some point, but in the meantime, I want to rant briefly about what I call metrics fetishism, how it's taking over game design, and why we should be careful.
In the old days, people designed games by their instinct and intuition. Notice I didn't say the "good old days", it was just how things were done. If you wanted feedback, you'd playtest the game, and watch the players struggle with the controls, or waltz through some section you thought was hard, or whatever. Then you'd iterate.
Lately, as computers have gotten networked and more powerful, we've started recording metrics about how players play our games, and feeding that back into the design process. We dump a lot of information to a database, and then we mine that information for clues about how to optimize certain aspects of our games, whether it's "fun" or "ARPU" or whatever. This happens during development, and now that a lot of games are online, it happens after we ship, with the design being constantly iterated based on metrics gathered live after launch.
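For what it's worth, the gathering half of this loop is usually nothing fancier than stamping events with whatever seems measurable and shipping them off to be mined later. Here's a minimal sketch; the event names and fields are made up, and a local file stands in for the database.

```python
# Minimal sketch of the "dump a lot of information" half of the loop.
# Event names and fields are made up, and a newline-delimited JSON file
# stands in for the database a real game would be batching events to.
import json
import time

def log_event(name, **fields):
    event = {"t": time.time(), "event": name, **fields}
    with open("telemetry.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")

# Sprinkled through the game code wherever something measurable happens:
log_event("level_start", level="warehouse")
log_event("player_death", level="warehouse", cause="spotted", seconds_in=42.5)
log_event("level_complete", level="warehouse", seconds_in=310.0)
```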
Now, metrics and closed feedback loops are great in the abstract, but like any tool, they can be misused. Just as it's usually a bad idea to design a game based totally on your intuition without grounding that intuition in the cold hard reality of another human being touching the controls, it's also bad to blindly gather and follow metrics data.
In the old days, we erred too far on the intuition side of the equation, but now I fear we're erring too far on the metrics side of the equation. This is what I call metrics fetishism.
The problem is not that the data is wrong[1]; the problem is that we tend to gather the data that is convenient to gather, we worship that data because it is at least some concrete port in the storm of game design and player behavior, and then we assume we can take a reasonable derivative from that data and hill-climb to a better place in design-space.
But, as anybody who knows any math can tell you, hill-climbing is not a very good algorithm for optimizing complex functions, and game design is a very complex function indeed. The problem with hill-climbing is that it can only see local maxima; it doesn't have any visibility into where the global maxima lie in the space. If you blindly hill-climb, you will end up at the top of the nearest hill, terrified to move because all derivatives point down, while the giant mountain of game design awesomeness is sitting right over there.
We really want a mix of intuition and metrics. The intuition gets you out of local maxima, and yes, it sometimes makes things worse for a bit, but it's our only hope for finding higher hills. In a globally optimizing algorithm like simulated annealing, the intuition is the equivalent of turning up the temperature so you jump around in state space. The metrics allow you to polish those new points to perfection.
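If you want to see the analogy in code, here's a toy sketch of both algorithms on a one-dimensional "design space" with a small hill next to a much taller one. The greedy hill-climber parks on whichever hill it starts near, while the annealer's temperature lets it occasionally accept a worse point early on and wander over to the taller peak. This is purely illustrative, obviously; nobody optimizes a game design by twiddling a single float.

```python
# Toy illustration: greedy hill-climbing vs. simulated annealing on a 1-D
# "design space" with a small hill near x=1 and a much taller one near x=4.
import math
import random

def quality(x):
    return math.exp(-(x - 1.0) ** 2) + 3.0 * math.exp(-(x - 4.0) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if quality(candidate) > quality(x):  # only ever accept improvements
            x = candidate
    return x

def anneal(x, step=0.5, iters=5000, temp=1.0, cooling=0.999):
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        delta = quality(candidate) - quality(x)
        # Accept improvements always; accept regressions sometimes while still hot.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling
    return x

random.seed(0)
print("hill climb from x=0:", round(hill_climb(0.0), 2))  # parks on the small hill near 1
print("annealing  from x=0:", round(anneal(0.0), 2))      # typically ends near the tall hill at 4
```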
Materials
I usually just dump a ppt and mp3 up here, but I decided I'd try something fancier this time. After searching far and wide for a good tool for putting presentations online, I found MyPlick, which is a silly name, and the site is unquestionably ugly, but the tools for syncing your slides to audio are really great compared to all the others[2], and their player is simple and efficient and not bogged down with menus and whatnot.
So, here you go:
I'll definitely put the raw ppt and mp3 up for download soon as well, but in the meantime I want to see how well this works. If it works out, I might convert all my other talks to this format as well.
- ↑ Assuming there aren't bugs in the metrics code instrumenting the game code, of course, which is probably not a great assumption, but anyway...
- ↑ I tried SlideShare, whose sync app is a complete joke (the person who wrote it clearly never tried to use it to mark up a large presentation), and a few others, including one that wanted you to upload a separate mp3 for each slide!