I recently learned about the “Benign Violation” theory of humor which some researchers are using as a framework for studying the psychology of humor. The basic idea is that we find something funny when it’s both 1) a violation and 2) benign. So tickling is funny because it’s an attack (a violation) but harmless (benign). Puns are funny because they violate expected usage but in a way that carries no negative consequences. Self-deprecation is funny because it is saying something negative (a violation) but in a way that we can tell isn’t sincere (benign).
It’s an interesting theory because it predicts both what is and what isn’t funny. Tickling by a stranger, for example, isn’t funny because we don’t know if it will be harmless. And self-deprecation isn’t funny if we suspect the person saying it is serious. The theory suggests that to be funny, something needs competing interpretations, one where it’s OK and one where it’s not OK, and they need to be relatively matched in intensity: a trivial violation in an entirely benign situation isn’t funny, nor is a major violation with a fig leaf, at least according to this theory. Intuitively it sounds like a pretty compelling argument to me, and it may be a useful framework for exploring the design of comedic RPGs, a famously fraught endeavor where plenty of games are maligned as “funny to read, not funny to play”, or for keeping unwanted comedy out of dramatic or gritty games.
It seems to me that many classic funny RPG situations map to this theory: The dragon opens its jaws and lunges forward to [rolls to hit] bite you for [rolls damage]… 1 point of damage [laughter ensues]. In the hit/miss framework it’s a violation, but in the damage framework it is benign, so we are amused. One of the most reliable design techniques for comedy RPGs is to have players describe their own character’s failure: since they failed it’s a violation, but since they have control over the situation it’s benign. If a player makes an out-of-character joke it can be funny, and a pretty reliable technique for eliminating that humor is to make it consequential in the fiction by acting as if the character rather than the player said it (i.e. it’s no longer benign). The gold standard of funny RPGs, InSpectres, seems to fit the pattern too: it instructs you to play “normal” characters rather than outlandish ones, which helps frame the supernatural events of the scenario as “violations”, while the relatively low stakes of the situation make them benign (these supernatural situations are so commonplace in the in-game world that resolving them is a viable franchise business model, and a session is about whether the players satisfactorily complete the job to bring in money to keep their business afloat).
While this framework seems appealing at first glance, a deeper analysis is probably warranted. It would be beneficial to go beyond cherry-picking examples that seem consistent with the theory and look at a spectrum of games and see how well the funny/not-funny predictions hold up (analyzing Cthulhu games could be a particularly fruitful way to approach this, given the diversity of approaches in that genre and the fact that they’re sometimes played seriously and sometimes for laughs). Additionally, to be useful as a design tool we’d need to see if we can translate the concepts into game design elements: What counts as a violation? What makes things benign? How do we evaluate the relative intensities? How does wanting games to be “fun” interact with the “funny” that we’d include or exclude when employing this framework? Attempting to use this theory as a prospective guide to designing a new funny (or reliably unfunny) game could also be a fruitful avenue of exploration.
One of the memes floating around in the tabletop RPG design-o-sphere is that it’s desirable to reduce the “social footprint” of games. The thinking goes that busy people have a hard time fitting gaming into their lives, so when games require things like learning rules, regular attendance at scheduled sessions, outside-the-session prep-work, etc., it makes it less likely for the game to happen. While this argument is compelling, we shouldn’t assume that reducing barriers to play is a purely beneficial strategy that has no tradeoffs. In addition to affecting which tools are available from the game designer’s toolbox, the “inconvenience” of getting a game to happen can itself have an impact on play. I recently read a psychology research paper that illustrates an interesting phenomenon.
In the experiment I want to highlight, the subjects were asked to participate in a short test to measure performance on some mental tasks. Different subjects were scheduled to take the test at three different times (selected to be mildly, moderately, or extremely inconvenient). After the subjects were told about the test and when they’d be taking it (without knowing the scheduled time was a variable) they filled out a survey indicating how important or interesting taking the test would be, and how satisfied they expected to be once they completed it:
As you can see in the chart, subjects in the late-but-not-too-late condition rated taking the test as slightly more important and meaningful to them. The paper puts forward the theory that when we are exposed to short-term costs between us and our goals, our brains use techniques like magnifying the significance of the long-term goal to make sure we get past the bump in the road, but we only do this within the realm of the possible and don’t bother if the short-term cost seems too high. Since the subjects subconsciously anticipated difficulty staying motivated to perform at the moderately inconvenient time, their brains helped them out by deciding the overall task was comparatively more important than their less-inconvenient-time or it’s-so-late-it’s-a-lost-cause peers who were evaluating the exact same task. And here’s the even more interesting result:
You’d probably expect average performance on the test to decline based on the later times (presumably people are more tired later), but the average performance by the moderately-inconvenient-time group was actually slightly higher than for the earlier group. Using some statistical analysis, the experimenters say that there was a negative effect on performance relative to time as you’d expect, but there was also a positive effect on performance relative to how important and significant the subject considered the test (presumably they try harder). Since the moderately-inconvenient subjects thought the task was more important their increased motivation compensated, or more than compensated, for the lateness of the task. Obviously low-inconvenience is better than high-inconvenience, but comparing low and moderate inconvenience may not be so straightforward.
Naturally all the standard caveats apply about the risks of generalizing from an experiment like this and applying it to a different field like RPG design, but I think it’s worthwhile to consider whether trying to minimize the “social footprint” might risk throwing the baby out with the bathwater. When there’s a little resistance to making a game happen the players are probably playing the game with a slightly different mindset compared to players who face no inconvenience, and that can easily have an impact on what techniques, systems, procedures, etc., will work well for those players. This isn’t to say that designers shouldn’t consider the importance of right-sizing the “social footprint” of their games, just to caution that a simplistic “less is better” strategy may not be optimal.
Most aspiring tabletop game designers face a challenge in finding external playtesters for their games. While external playtesting is essential for confirming that the game works as desired when the designer isn’t there to facilitate play, groups willing to playtest an unknown designer’s game are few and far between. Naturally this leads to pondering whether there are incentives that could convince people to playtest. In our society the “go to” incentive is usually money, but most aspiring designers intuitively grasp that they can’t realistically pay a playtester what they’re worth without taking their project deeply into the red.
Putting a decision in the “money” domain often changes the way people look at it. Consider this hypothetical: My car gets a flat tire and I say “Hey, will you help me change my tire?” Most people, as long as they’re capable of doing so, would instinctively say yes. Now consider what would happen if I said “Hey, will you help me change my tire for fifty cents?” Most people would instinctively say no, because their time and energy are worth far more than fifty cents. Offering a small financial inducement usually doesn’t stack on top of people’s natural altruism, it shifts the domain of consideration from a social context to a monetary one.
Of course, money isn’t the only possible inducement. Sometimes people reciprocate in the social domain by exchanging gifts such as bottles of wine, restaurant meals, or (perhaps more realistically for aspiring game designers) PDFs of games or game supplements. Although it’s common to offer the retail-version PDF of the final game to playtesters I’ve always had a philosophical hangup with that: since playtesting may reveal that the game has fundamental flaws that prevent it from ever being productized, and since playtesting may reveal that a particular group doesn’t enjoy playing the game, you can’t guarantee that you’ll have a final retail-version of the PDF that will be valuable to the playtesters, so does it make sense to use that as an incentive? I’ve long hypothesized that it might be easier for a designer to attract playtesters to a second game, since you could use free versions of the first as an inducement. But I just read a behavioral economics research paper that casts some doubt on that idea.
In the experiments they asked subjects to do a task in exchange for various incentives and measured how much effort they put in. In addition to testing with small and medium monetary rewards they also measured how people responded when they were offered small or medium gifts of candy, and additionally measured how they responded when the gifts were described with a clear monetary value, e.g. a “fifty-cent chocolate bar” or a “five-dollar box of chocolates”:
As you can see in the charts, low payment produced lower effort than no payment (the control condition) or nonmonetary gifts. But the same gifts had nearly the same effect as the equivalent amount of cash when their monetary value was clearly understood by the person doing the task. This leads me to suspect that giving people a free PDF which has an easily-ascertained retail price could be just as demotivating as offering a small amount of cash in exchange for playtesting.
It does suggest that offering other things that don’t have an obvious price, such as tchotchkes or perhaps game content that isn’t available in any other way, can be effective. An idea I’ve pondered is generating supplemental content for a popular game, such as a limited-edition playbook for Apocalypse World or a character class for Dungeon World, that could be used as an inducement to playtest a new standalone game. I haven’t pursued that myself since I don’t feel I have enough play experience with those games to create quality content for them, and the new content itself would need playtesting. Also, I suspect that if I tried to engage with those games with a mercenary “means to an end” mindset I wouldn’t enjoy playing them as much, and enjoying games isn’t something I’ve been willing to risk.
If I did have a valuable inducement to offer to playtesters I think it might also help bridge another of my hangups, which is that the “beggars can’t be choosers” effect makes me reluctant to push for playtesters to playtest well since I have trouble getting people to playtest at all. While I’m not usually a fan of purely transactional interactions, it does make it less socially demanding to set expectations when both parties are clearly and concretely benefiting from the interaction.
I recently read a short psychology paper that illustrates a point that tabletop game designers may want to consider. The experiment had two premises: First, that when we have a greater number of choices we’re more likely to find one that’s a good match to our preferences, so having more choices should make an individual more satisfied. Second, that having a lot of choices also has several negative effects, such as requiring a lot of cognitive work to do the comparisons, highlighting the downsides of each option by providing a basis for comparison, etc., so having more choices can make an individual less satisfied. The experimenters ran a trial asking people to evaluate a set of pens to identify the best one and then offered to let the person buy the pen they liked best. Here is a graph of their results:
As hypothesized, the two forces seemed to combine to create a “sweet spot” at which more people were satisfied enough to actually pay for a pen when they found one they liked in a some-choices-but-not-too-many situation.
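The two competing forces can be captured in a toy model. The functional forms and constants below are entirely invented, chosen only to show how an inverted-U “sweet spot” can emerge when a diminishing-returns benefit meets a steadily growing cost:

```python
# Toy model of choice satisfaction: the benefit of more options grows
# with diminishing returns, while the cost of comparing them grows
# steadily. Both curves are made-up stand-ins, not fitted to any data.
import math

def satisfaction(n_choices):
    benefit = math.log(1 + n_choices)   # better odds of a good match, flattening out
    cost = 0.08 * n_choices             # comparison effort keeps climbing
    return benefit - cost

scores = {n: satisfaction(n) for n in (2, 10, 30)}
best = max(scores, key=scores.get)
# With these invented constants, the middle option count scores highest:
# too few choices leaves match quality on the table, too many choices
# drowns the gain in comparison effort.
```

The exact peak depends entirely on the made-up constants; the point is only that any model with this shape predicts a middle ground rather than “more is always better”.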
Obviously not everything is a pen, different people may respond more or less strongly to competing psychological forces, and we should always be cautious about generalizing too much from a psychology experiment, but it doesn’t seem unreasonable to me that the inverted-U-shaped curve in this experiment may be a common phenomenon. If so, it points to a danger of an RPG supplement model that causes a game to include a monotonically increasing set of mechanical options like character classes, feats, powers, etc., because that might move the game out of a sweet spot as the number of choices accumulate. For example, issuing a series of “limited edition playbooks” for an Apocalypse World-style game might increase the odds that an individual player will find the perfect one for them, but it might also result in players agonizing over their choice of playbooks when play begins and make them less satisfied during play as they lament what might have been if they had gone with one of their rejected choices. I’m not trying to criticize the game design or marketing choices of any particular game here, merely pointing out that there may be easily overlooked downsides to embracing only a “more choices is better” strategy.
There’s currently no established consensus about how to develop a tabletop game once you’ve gotten past the initial “burst of inspiration” design stage. Most people more or less agree on the end goal, which is that they want a promising initial design revised until it’s “good enough” that it would be able to comfortably sit on a real or virtual shelf next to other published games (the issue of commercial vs. free publishing sometimes complicates that further). Most people intuitively expect that “playtesting” needs to be part of that process, and many people use “playtesting” as the blanket term to describe what they’re doing. Different models of how this development process should proceed will lead to different strategies and tactics for success. In this blog post I want to describe two different ways to look at it.
First, I’ll describe what I’ll call the “funnel” model. This model starts from the premise that a designer needs feedback about the gameplay in order to get the game to a publishable state. And then the thought process goes: In order for the designer to get feedback, someone needs to hear about the game, get the playtest materials, read the materials, decide to play, organize a playtest, run that playtest, generate feedback from the playtest, send that feedback to the designer, and the designer uses the useful feedback to improve the game. Then you apply some insight and notice that there are transitions between the steps: Not everyone in the world hears about the game. Not everyone who knows about it will get the materials. Not everyone who gets the materials will read them. Etc., etc. You can diagram it like this and notice that the “dropoff” at each level creates a funnel shape:
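The compounding of those stage-to-stage dropoffs can also be sketched as a toy calculation. Every rate below is an invented number purely for illustration:

```python
# Toy funnel: each stage keeps only a fraction of the people from the
# stage above, so the dropoffs multiply together. All rates are made up.

TOP_OF_FUNNEL = 1000  # people who hear about the game

stage_rates = [
    ("get the playtest materials",  0.40),
    ("read the materials",          0.50),
    ("decide to play",              0.25),
    ("organize and run a playtest", 0.40),
    ("send useful feedback",        0.30),
]

def run_funnel(top, rates):
    """Return the (rounded) head count surviving each stage."""
    counts, n = [], top
    for name, rate in rates:
        n *= rate
        counts.append((name, round(n)))
    return counts

for name, n in run_funnel(TOP_OF_FUNNEL, stage_rates):
    print(f"{n:4d} {name}")
```

With these made-up rates, a thousand people hearing about the game yields only six pieces of useful feedback, and doubling any single stage’s rate doubles the final count, which is exactly why the model makes every transition look like a worthwhile optimization target.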
Since you’re trying to increase the bottom level (useful feedback) until it hits something analogous to a critical mass, the diagram suggests several strategies: Maybe you can widen the top of the funnel by talking about your game in lots of places. Maybe you can address the dropoff between “get the playtest materials” and “read the materials” by giving the playtest document a snazzy layout that makes it very readable. Maybe you can build a name for yourself in another field, like blogging or podcasting, thereby expanding your audience. Maybe you can develop a great “elevator pitch” so that hearing about the game is very likely to lead people further down the path. Maybe you can develop a reputation for being smart or trustworthy in another medium (or with previous games) so people are likely to give you some benefit of the doubt to cover transitions where they might otherwise drop off.
Maybe you’ll also notice that the variables aren’t as independent as the diagram may make them seem, and you’ll figure out a way to simultaneously optimize some of the dropoffs. For example, perhaps you’ll conclude that one reason that people might hit the “read it but didn’t want to play” barrier is because they don’t want to play games in genres they dislike. You can avoid that problem by primarily communicating with fans of the game’s genre instead of a general audience: the top of the funnel is narrower, but you won’t have to deal with genre-rejection at the lower levels.
This funnel model has a lot going for it. It tells a story about how to succeed that sounds consistent, and it suggests tactics and strategies for how to make progress towards achieving that success, which is what you need from a development model. It also looks very similar to the way some people envision marketing a completed product, so solving the problems that this model tells you to solve may be helpful for also achieving success with the published game. It also has some downsides which I’ll get to later.
A different model starts from the premise that a well-designed game product will generate enjoyable play, people who experience enjoyable play will want to tell their friends about it, some of those friends will want to also have that enjoyable experience and seek out the game product, they’ll enjoy playing it, and thus you have a self-reinforcing word-of-mouth marketing strategy. Step one in that strategy is having a well-designed game. Few people are lucky enough that all of their ideas are perfect in their initial form, so in order to get a well-designed game you need to develop it from your initial conception into something that works. In order to do that you need playtests to generate data about what is or isn’t working, and you use that data in a successive-refinement game design/development and playtesting cycle until you’ve got a game that reliably generates a good experience. Once you have a game that does something good you can build the rest of your marketing strategy on top of that strong foundation. Let’s call this the “organic growth” model.
In this model, playtesting isn’t assumed to be something that “happens naturally” to some variable percentage of the population when exposed to an in-development game; it’s treated as a task which requires work, and it’s left to the implementer to develop the strategies or tactics to get that work to happen (just like the previous model left it to the implementer to figure out how to get people to read the materials once they have them, or figure out how to get them to play the game once the materials are read). One “strategy” for getting playtesting to happen might be to have a cadre of supportive friends and acquaintances who will playtest your stuff for you because they like you as a person. Another might be to have employees who playtest as part of their job. Another might be to form a system of mutualism where designers participate in playtests of each other’s games for mutual benefit. (This last one is analogous to the way many writers develop novels: They have a writers’ group who read and give reactions to each other’s drafts, even if they might not be the exact target market of the novel). Finding effective strategies or tactics for this part of the model is hard!
But there are hard parts with the funnel model, too. For example, if you’re relying on “natural” playtesting you have to deal with the fact that many people don’t know how to playtest well (maybe they go into “how can I break this game?” mode when what you needed to know was “is this game fun when played normally?” and playing with someone who’s trying to break the game can contribute anti-fun for the other players). You also have to deal with the tricky process of separating useful feedback from distractions, or figuring out the alchemy that transforms feedback into progress on the game design (there’s an old chestnut about how people giving you feedback are almost always right about there being a problem but almost always wrong about what the solution is, but this is complicated by the fact that many people put on their game designer hats while playtesting and see “problems” that are hard to distinguish from “you haven’t implemented my preferred solution”). In fact, these sorts of difficulties have convinced some people who use the funnel model that playtesting itself is a largely useless process, except as a testbed for exploring a marketing strategy (this stance, naturally, is hard for the people using the organic growth model to wrap their heads around, because for them abandoning playtesting would mean bailing out before step one). Another issue with the funnel model is that discovering a success strategy at one level may provide constraints on the rest of the process. For example, a subsystem with a grabby gimmick can help convince people to play a game once they’ve read it, but if the success path of the game becomes contingent on that subsystem then any further game revisions will necessarily involve changing the rest of the game to work with that subsystem rather than the other way around.
In the independent tabletop RPG design communities that I’m aware of, most people seem to be operating from the assumptions of the funnel model. Personally I’m not sure that’s wise, since some of the level transitions seem to encourage strategies that can easily turn into wasteful zero-sum arms races (for example, if lots of people spam “please playtest my game!” messages on a forum then they’re competing with each other and also wearing out the patience of the forum’s readers). The big downside of the organic growth model is that the “get playtesting to happen” process looks to many people a lot like “and here a miracle occurs” in the current environment.
Let’s imagine a chess game. Two players who both know the rules sit on either side of a board with the appropriate pieces on it. To play, they’ll use their knowledge of how the pieces move, their mutual knowledge of the rules and victory conditions, the current position of each of the pieces on the board, and a mutually remembered bit of information about whose turn it is to make the next move. Obviously there are a few things that could mess this game up. A freak windstorm, for example, could blow all the pieces off the board. Or maybe a loud noise will distract the players for a moment and by the time they’re ready to return to the game their memory of whose turn it is won’t match because one of them (which one!?) got confused during the interruption. Or maybe one of them will do something that uses one of the more exotic rules, like en passant, and they’ll discover that their mutual understanding of the rules of chess isn’t as mutual as they initially thought.
Now let’s imagine that one of these chess players goes on an expedition to Antarctica but still wants to play chess with his cold-averse friend. They still can! What they decide to do is set up two different chess boards, one in Antarctica and one back home, and communicate their moves back and forth through whatever form of long-distance communication they can. When Antarctica-guy physically picks up a pawn on his chessboard and moves it to a different space he just tells his friend which pawn he moved where. The at-home friend picks up the corresponding pawn on his chessboard and moves it to the corresponding place to represent his friend’s move. All the rest of the stuff is the same: the important thing about chess isn’t that there’s a single physical board between the players, it’s that there’s an agreed-upon representation of the current game-state between the players. Having a single physical board certainly makes that easier and more convenient, but the important thing about the game isn’t how it’s physically implemented, it’s how it looks to the players. Each of the players can look at “the” chessboard and make their moves based on the current game-state. It doesn’t matter if “the” chessboard is a convenient fiction for two different physical chessboards that are being kept in synch by an extra process that isn’t normally necessary.
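The two-boards-plus-messages setup can be sketched in a few lines of Python. The board representation and move format here are invented stand-ins, since only the synchronization idea matters:

```python
# Minimal sketch of long-distance chess: two physically separate boards
# kept consistent by replaying the same move messages on each. Board
# state is just a dict from squares to pieces; the actual rules of
# chess are omitted because only the synchronization matters here.

def new_board():
    return {"e2": "white pawn", "e7": "black pawn"}  # tiny sample position

def apply_move(board, message):
    """Apply a move message like ('e2', 'e4') to one local board."""
    src, dst = message
    board[dst] = board.pop(src)

antarctica = new_board()
at_home = new_board()

# Antarctica-guy moves a pawn on his own board...
apply_move(antarctica, ("e2", "e4"))
# ...and tells his friend, who replays the same move on his board.
apply_move(at_home, ("e2", "e4"))

# "The" chessboard is a convenient fiction: both replicas agree.
assert antarctica == at_home
```

Because chess is turn-based and moves are discrete, replaying the message stream is all the synchronization machinery the two boards ever need.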
But what if these friends realize that they don’t really like chess that much and want to play something a little more action-oriented? They decide to switch to a first-person-shooter video game played over the net. Conceptually this isn’t too different from the long-distance chess game, but there are a few details that contribute some nuance. One important difference between chess and an FPS is that the turn-based nature of chess provides an easy interface-point for long-latency communications. If it takes much longer for one player’s move to get communicated it just looks to the other player like a really long turn. Since FPSs need to maintain a smooth, continuous-action flow of play they need to have the effect of the moves represented immediately. When the Antarctic player presses his “shoot” button he’d better see his character start shooting right now! The two computers are both running instances of the game, but the other player’s computer doesn’t know the first started shooting until a message dispatched over the network reaches it. But maybe at the same moment that the Antarctica guy decided to shoot his gun, his target pressed his “run” button and started moving. In Antarctica, the player thinks he’s shooting (right now!) at a stationary target, but at his friend’s house the friend thinks he’s moving (right now!) and not being shot at. From the Antarctica perspective the shot should hit (assuming the aim is good) and the target should be wounded. From the other perspective he shouldn’t be wounded at all: nobody was shooting, and even if they were his character wasn’t at the place that the bullets would hit! The two simulations aren’t perfectly consistent. But they don’t have to be! As long as they’re close enough, the players won’t notice.
As a human player, the warm guy doesn’t know with perfect certainty where the Antarctica shot was aimed, so if the game has an under-the-hood mechanism that gives “hit detection” precedence to the shooter’s POV then the Antarctica computer can tell the other one not only that a shot was fired but that it hit. The at-home computer can play its “gunshot” sound effect, display the “shooting” animation for the other character, reduce the hit points of the target, and most of the time it will seem perfectly normal to the at-home player that the other character shot and hit him at his current location. The important thing to notice is that there doesn’t need to be a single authoritative game-state in a single place in order for both players to feel like they’re playing the same game with the same state. As long as it looks close enough they won’t realize that their two computers are not exactly on the same page at every instant.
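Here’s a minimal sketch of what shooter’s-POV-wins resolution might look like under the hood. The class, message format, and numbers are all invented for illustration, not any real engine’s API:

```python
# Sketch of "shooter's POV wins" hit resolution under latency. The two
# clients briefly disagree about where the target is; the shooter's
# client decides hit-or-miss from its own view, and the message it
# sends carries the verdict, so both machines just apply the result.

class Client:
    def __init__(self):
        self.target_x = 10   # where this machine currently draws the target
        self.target_hp = 100

    def local_move_target(self, new_x):
        self.target_x = new_x  # takes effect immediately, on this machine only

    def shoot(self, aim_x):
        """Shooter resolves the hit against its OWN view of the world."""
        hit = (aim_x == self.target_x)
        return {"type": "shot", "aim_x": aim_x, "hit": hit}

    def receive(self, msg):
        """Receiving machines trust the shooter's verdict."""
        if msg["type"] == "shot" and msg["hit"]:
            self.target_hp -= 25

antarctica, at_home = Client(), Client()

# At (nearly) the same instant: the at-home player starts running...
at_home.local_move_target(new_x=12)
# ...while the Antarctica player fires at where HE still sees the target.
msg = antarctica.shoot(aim_x=10)

# Both machines apply the same verdict, so they converge on hit points
# even though the at-home machine's local target had already moved.
antarctica.receive(msg)
at_home.receive(msg)
```

The two simulations disagreed about the target’s position for a moment, but because the verdict travels with the message they end up consistent where it counts.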
As players they maintain the convenient fiction that they are in the same world because the “game” involves making decisions as if you were, just like it makes more sense to interpret what they see on their screens as a window into a 3D world rather than a bunch of pixels on a flat display. Just like it’s not useful when playing long-distance chess for them to dwell on the fact that they don’t have a single physical board between them, it’s not useful for them to dwell on the potential artifacts of network gaming (unless the distortions become so extreme that they overwhelm the suspension of disbelief and they have to give up because there’s “too much lag” over their network). By buying into the illusion of consistency between the somewhat-independent computers they can play this type of game together.
Now let’s imagine that the adventurous friend returns from Antarctica and the two of them get together to play another kind of game they enjoy, a tabletop RPG. Here they also need to maintain a sufficiently-synchronized game-state in order to play. To do so, they buy into the convenient illusion that there’s a single “fiction” or “Shared Imagined Space” between them. They probably have some concrete common physical touchstones like dice or character sheets as part of the game, but a big part of play involves their brains independently keeping track of the current game-state of imaginary people doing imaginary things, and they send messages back and forth to keep each other more-or-less in-synch (using high-tech “talking” technology). Since their brains aren’t as simple as chessboards they can’t rely on being perfectly in-synch at all times, so their game needs to be constructed in a way that encourages and eases synchronization on important points. For example, if their game has a mechanic which gives a “high ground” advantage then the players will be primed to pay special attention to character altitudes relative to each other in “the” imaginary world. Maybe their mental picture of the characters won’t agree on points like whether or not they have mustaches, but they are likely to agree on who is higher than whom if they both believe that is important to the game.
Being sufficiently synchronized to game is the foundation for a functioning RPG (and the astute reader will notice how weaselly a word “sufficiently” is). Many RPG techniques and design elements serve to maintain that synchronization. For example, the “fictional trigger” in an Apocalypse World move can act like the snap-to-grid functionality of a computer painting program, snapping the “fuzzy” mental images of the different players onto easily-communicated concrete templates. If my character seems close to “Going Aggro” on somebody, I am pulled toward embodying that in my roleplaying: I know the other players are watching for whether characters are Going Aggro, and they will understand what I’m thinking better, and synchronize more easily with what I’m imagining, when they can use that concrete and mutually-understood pattern as a touchstone for how the scene should play out in their imaginations. Agreeing with the other players that the “Go Aggro” move should be invoked and starting the corresponding mechanical procedure gives us an explicit way to acknowledge synch-points without drawing unpleasant attention to our efforts to keep synchronized.
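The snap-to-grid analogy can even be made literal with a couple of lines of illustrative Python (the grid size and coordinates are arbitrary):

```python
# Snap-to-grid, as in a painting program: fuzzy freehand positions get
# pulled onto the nearest shared grid point, the way fuzzy mental
# images get pulled onto a concrete, mutually-understood move template.

def snap(value, grid=10):
    """Round a freehand coordinate to the nearest grid line."""
    return grid * round(value / grid)

# Two players' slightly different "mental images" of the same point...
assert snap(48.7) == snap(51.2) == 50  # ...land on the same shared spot
```

The inputs differ, but once both pass through the same template the players are looking at the same point, which is the whole job of a fictional trigger.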
I’m not an expert on topology, but one of the ways I think about the games I like is that they make use of the idea-space inside the human brain as a gameable space. Now, by that I don’t mean that you can imagine places that aren’t real and think up activities people might engage in in places like that. What I mean is that the way we think actually provides “dimensions” along which you can design meaningful interactions in a game. From my reading of what contemporary psychology and cognitive science tell us, we’re capable of perceiving the appropriateness or congruence of matches between ideas. You know the confident “that feels right!” feeling you get when you figure out what the answer to a riddle must be, or when you come up with the perfectly apt humorous remark? Or the “that’s not right” feeling you get when Hollywood miscasts a part in a movie adaptation of a story you know? That’s what it subjectively feels like to have different levels of connection between ideas, which is apparently how the intuitive side of our cognition works. Even though we don’t have scientific units for it, we can get a feel for how “librarian-y” someone is by intuitively comparing them against the idea of “librarian” we have in our brains. We can even get a feel for how weaselly someone is.
This is the entire basis of the game Apples to Apples. In it, one player puts an “adjective” card on the table (for example, “Hot and Sticky”). Then all the other players consult their hand of “noun” cards and put forward the one they think the initial player will select as the “best match” (for example, “The Equator”, “Cinnamon Buns”, “The Sports Illustrated Swimsuit Issue”). Some people have a hard time grasping that there is no single way that people are required to make that “best match” comparison. It’s not always “the most similar” or “the most opposite”. Sometimes it’s the ill-defined “funniest”, but even inveterate jokesters will sometimes feel compelled to pick a straightforward match if it’s dead-on. The way it works is that the player compares the “adjective” they put forward to the various options and picks the one that feels like the “best match”. We don’t need to put a name to a comparison to feel how strong it is. Strictly speaking Apples to Apples tends to be about emphasizing the minor variations between people rather than the commonality, because it asks the player to pick a “best” match on each round (thus the way to win, if you care about that, is to “play the player” and put forward cards with matches that are likely to resonate especially strongly), but it illustrates the point that there are dimensions of play that games can lean on beyond simple factors like tall/short, fast/slow, near/far, big/small, etc. Personally I’m not a huge fan of the gameplay in Apples to Apples (my sense of humor tends to run a little more cerebral and surrealistic than average, so my joke answers nearly always lose out to the more obvious jokes) but since it uses this abstraction as the central element of play it’s a useful example.
While they don’t always foreground it the way Apples to Apples does, roleplaying games make heavy use of this concept to inform and constrain play. The old-school “puzzle solving realism” style of play, for example, leans heavily on the ability of humans to mutually imagine “that’s what would happen!” to explore the consequences of poking things with ten-foot poles or pouring acid on them. The Burning Wheel family of games orients players to judge characters by looking through the lens of written character Beliefs, rewarding players for acting along (or dramatically against) the line of those Beliefs. Games with oracle mechanics like Ganakagok use abstract concepts to guide play (“figure out the most ‘Woman of Storms’ way to conclude this scene”). Even something as fuzzy as “what’s the most dramatically appropriate (or dramatically ironic) thing?” or the dreaded “what’s best for The Story?” can be used in a game context. Stories and storytelling have a huge role in human culture and the way that human minds work, so it shouldn’t be surprising that we have a lot of intuitions related to stories and imagination. These intuitions can be built into the “space” of play in these games in the same way that features of human locomotion are as important a dimension of play in sports as ball-physics and field geometry.
When analyzing systems that operate on information it’s often valuable to consider how that information matters to the control flow of the system, and games definitely fall into this category of system. One big distinction between types of information is discrete vs. continuous, or digital vs. analog. A continuous “variable” can be any value within a range: think of something like temperature, distance, or time. A discrete variable can only be in one of several mutually exclusive states: on/off, in-bounds/out-of-bounds, too-big/too-small/just-right, etc. Continuous variables are really useful because that’s how almost everything in the actual world we live in works. Discrete variables are really useful because it’s possible to build simple procedures around them: if A do X, but if B do Y.
As a simple but nontrivial example, think of a thermostat. It has three continuous inputs: the current temperature, the low set-point, and the high set-point. The thermostat is in charge of the heater and knows and controls whether it’s currently on or off. Internally it doesn’t really do anything with the temperature directly; it uses comparisons to create discrete variables from its continuous ones: “is it currently hotter than the high set-point?” and “is it currently colder than the low set-point?”. Operating on these discrete concepts lets it make a decision that’s simple enough to apply to the binary world of “should the heater be burning right now?”: if it’s hotter than the high set-point and the heater is on, turn it off, but if it’s colder than the low set-point and the heater is off, turn it on.
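The thermostat’s decision procedure described above is simple enough to sketch in a few lines of code (the function and variable names, and the example set-points, are my own, not anything standardized):

```python
def thermostat_step(temperature, low_set_point, high_set_point, heater_on):
    """One control cycle: reduce continuous inputs to discrete
    comparisons, then make a binary on/off decision."""
    too_hot = temperature > high_set_point   # discrete: True/False
    too_cold = temperature < low_set_point   # discrete: True/False
    if too_hot and heater_on:
        return False   # turn the heater off
    if too_cold and not heater_on:
        return True    # turn the heater on
    return heater_on   # between set-points: leave the heater as-is

# Heater is off, temperature has drifted below the low set-point:
state = thermostat_step(17.5, 18.0, 22.0, heater_on=False)  # → True
```

Notice that the continuous temperature value never drives the decision directly; only the two discrete comparisons do, which is exactly what makes the control logic so simple.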
Lots of games have things like this, too. In soccer, the ball is somewhere in the three-dimensional space where the game is being played, and this feeds into discrete categorical concepts like “is the ball currently in-bounds?” that are used by the game procedures to control the flow of play. In baseball, whether a pitch counts as a “ball” or “strike” corresponds to where it travels through the strike-zone of the batter. In the UFC mixed-martial-arts organization some moves are legal and others, such as punches to the back of the opponent’s head, are illegal. When you look at these distinctions from the digital side of the analog/digital divide there are obvious and categorical differences between them: the difference between an in-bounds ball and an out-of-bounds ball is night and day! From the analog side it can be fuzzier: what if the ball is right on the edge of the line? What about a pitch that’s just grazing the edge of the strike-zone? Heads are kind of round, so the distinction between side and back is not always obvious, right?
In games, translating from the continuous/analog domain to the discrete/digital domain of the rules and procedures of the game usually involves human interpretation or judgment. Oftentimes games will give one participant, such as a referee, a special privilege of making authoritative judgments or interpretations, but even in games like that all of the participants need to understand how those interpretations and judgments will be made and make their own. Soccer players don’t want to play on a field where the lines are invisible to everybody but the refs; they need to be able to predict the rules-consequences of their interactions with the ball in order to play. They may not be able to exactly predict how the ref will make the call in edge-cases, but they can reasonably expect that their own interpretation will be similar to the “official” interpretation, so they can use their own interpretation as a good proxy for evaluating what kind of move they want to make in the game. (And plenty of casual sports are played without an officially designated ref; the players just use some other process, sometimes ad hoc, to resolve edge-cases if there’s no widespread consensus interpretation.) Similarly, the intention of the “no strikes to the back of the head” rule in the UFC isn’t to give penalty points to inaccurate punchers but to discourage fighters from engaging in behavior that the UFC has decided is too dangerous: the ref makes the authoritative call in the octagon, but the most important impact of the rule is on the fighter when he decides whether or not to throw a punch based on where he thinks his opponent’s head will be when the punch lands.
Many RPG rules operate on things happening in the analog world of “the fiction”, so they have lots of these interpretation elements cooked into them, which makes the nuances of these interpretive processes very important in RPG Theory. But we shouldn’t mistake the importance of this concept to RPGs for the idea that interpreting or translating from continuous to discrete concepts is something unique to RPGs. The interplay between the interpretations and judgments of different participants in an RPG is an interesting and important topic if you’re trying to understand RPGs. The interplay between the interpretations and judgments of different participants in a pitcher/batter interaction is an interesting and important topic if you’re trying to understand that part of a baseball game.
(Also, I’ve tried to use simple examples in this blog post in order to write with clarity, not to deny the existence of subtlety. My claim here is that “is that really Go Aggro?” and “is the ball really in-bounds?” are both examples of interpretation that feeds into rules. It can be easy to get distracted by the simple one-dimensionality of the in-bounds/out-of-bounds thing because we can easily imagine constructing a simple mechanical or electronic device that we could rely on for official in-bounds/out-of-bounds rulings, while the only thing currently known that can do the Go Aggro thing is a human brain. That’s an important difference worth thinking and talking about! But it’s also worth realizing that “how hard would it be to build a robot referee?” is a different question from “how are the players interacting with this game?”.)
This will probably seem silly, but let’s compare two hypothetical games, game R and game F:
Game R is a guessing game where one player picks a real thing they can see and another player asks a series of up to twenty yes-or-no questions in an effort to guess what thing the first player picked. In game R, when the guesser asks a question the answerer uses their senses on the physical thing they picked, processes that information via the mental act of interpretation and judgment to evaluate what the answer is, and then says that answer.
Game F is a guessing game where one player imagines a kind of thing that exists in the world and another player asks a series of up to twenty yes-or-no questions in an effort to guess what thing the first player imagined. In game F, when the guesser asks a question the answerer takes the information stored in their imagination, processes that information via the mental act of interpretation and judgment to evaluate what the answer is, and then says that answer.
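The two games share the same procedural skeleton. Here is a minimal sketch of that shared structure (my own framing, with made-up question names); the only difference between game R and game F is which “oracle” the answerer consults to judge each yes-or-no question:

```python
def play_guessing_game(oracle, questions, max_questions=20):
    """Run the answerer's side of either game: apply the oracle
    (perception of a real thing, or consultation of an imagined
    thing) to each question in turn."""
    answers = []
    for question in questions[:max_questions]:
        answers.append(oracle(question))  # the interpretation/judgment step
    return answers

# Game R: the oracle inspects a real object the answerer can see.
# Game F: the oracle would instead consult an imagined object.
# Either way, the guesser only ever sees the stream of answers.
real_object = {"bigger_than_breadbox": True, "alive": False}
answers = play_guessing_game(lambda q: real_object[q],
                             ["bigger_than_breadbox", "alive"])
# answers == [True, False]
```

From the guesser’s side the two games are indistinguishable: all the real-vs-fictional differences live inside the oracle.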
In both games, it’s possible to give bad answers if the answerer is bad at mentally comparing things. If they have an unrealistic estimate of the size of breadboxes, maybe they’ll give an answer to the question “is it bigger than a breadbox?” that unintentionally misleads the guesser.
In game F, it’s possible to cheat! Maybe the answerer will claim to imagine an object but then answer the guesser’s questions arbitrarily and then imagine their thing to retroactively conform to their answers. Maybe they’ll even imagine something and then change the thing they’re imagining to conform with the answer they want to give rather than answer the question based on the thing they’ve been consistently imagining.
In game R, it’s also possible to cheat! Maybe the answerer will claim to pick a real object but then answer the guesser’s questions arbitrarily and then pick their real thing to retroactively conform to their answers. Maybe they’ll even pick something and then change the thing they’ve picked to conform with the answer they want to give rather than answer the question based on the thing they’ve consistently been using as a basis.
Both games expect that the answering player will use a reliable, consistent, predictable, understandable process when evaluating the answers to the questions. If the answering player cheats and uses a different method to answer the questions then the game doesn’t work. Since the choice of possible target objects in game R is limited to things that the answerer can see, their ability to cheat in this way is more tightly constrained than the answerer in game F. Solving a highly constrained problem frequently takes more effort than solving a loosely constrained one, so we can assume that it generally takes more effort to cheat in game R than in game F. There is natural variation among humans, and some people, weighing costs against benefits, will be more likely to cheat when cheating takes little effort. In game R it is extremely unlikely for the real object to spontaneously transform itself mid-game into a different real object. In game F, the likelihood of the imagined object transforming into a different imagined object is the likelihood that the answerer will cheat.
In game F, it’s possible for the answerer to give bad answers because they’re bad at imagining things. Maybe they think elephants are smaller than they really are, so they end up giving answers that are accurate with respect to their small imagined elephant but are inaccurate with respect to real elephants, which would unintentionally mislead the guesser. In game R, it’s possible for the answerer to give bad answers because they’re bad at perceiving things. Maybe they misjudge the distance to the object and believe that the object is smaller than it really is due to the size-distorting effects of perspective. It’s probably reasonable to guess that “bad imagination” problems are more likely among humans than “bad perception” problems.
Is it valuable to say that game R and game F are categorically different games, where game F is a game with fiction and game R is a game with real stuff? For example, the increased likelihood of cheating in game F and the higher odds of incorrect imagination may mean there are important “reliability” differences between the games. Or are game R and game F largely similar, and the real-vs-fictional divide between them is a nuance rather than a meaningful distinction? When discussing games, sometimes that real-vs-fictional distinction can be central and important, and sometimes it’s a useful proxy for discussing consequences of the distinction, but it can also be an obscuring distraction in some contexts (e.g. the most interesting distinction between RPGs and chess isn’t always that chess uses real-world playing pieces).
A problem I sometimes see in “RPG Theory” discussions is that it’s easy to go overboard in believing that features that RPGs have are unique to RPGs. I’m going to blog about some “low level” RPG Theory stuff, pointing out a few RPG Theory ideas that are true not because RPGs are unique but because they’re just like other games.
First, all games require group assent to the system of play. There’s a Lumpley Principle of basketball, too. It says “System (including but not limited to ‘the rules’) is defined as the means by which the group agrees to basketball-relevant events during play.” There’s nothing magical about the ball going through the hoop in basketball. The ball going through the hoop only matters because the group agrees that the ball going through the hoop gives a team points. And points only matter because the group agrees that they’ll use the number of points to determine the winner. And winning only matters because the group agrees that it’s important to determine a winner of the game. It’s agreement all the way down, just like RPGs! But, just because basketball requires agreement “all the way down”, that doesn’t mean the game is a constant committee meeting where everyone decides on an event-by-event basis whether or not to consensus-agree to giving it significance. Just like RPGs, people agree to certain principles, rules, etc., which guide play and decision-making going forward. Much of this agreement happens before play begins by using shorthands like “Let’s play basketball”, where the people saying it assume a common understanding of what it means to play basketball which incorporates a bunch of stuff like the ball/hoop/points thing. That doesn’t mean the assumption of mutual understanding is always valid! Maybe not everybody has the exact same understanding of “basketball”, and they’ll only find out during play that they over-assumed, such as when one player claims to get three points for scoring a basket from a particular position on the court and everybody else says that they hadn’t been playing with the three point rule. Different understandings of “the system” among different participants can lead to breakdowns, just like in RPGs. This is a normal human thing that affects not just all games but all human activities!
Believing that the Lumpley Principle is something unique and special about RPGs can easily result in mistaking “no rules except explicit event-by-event group assent/rejection” as a goal or idealized form of play, especially since there’s a tradition in RPG communities of putting “rules-less freeform roleplaying” on a pedestal as some kind of aspirational form. But the Lumpley Principle isn’t about value judgments of what good games look like, it’s just talking about a feature of all functioning games. Saying that basketball requires group assent isn’t an endorsement of rules-less freeform basketball as an idealized form of play, and the Lumpley Principle isn’t endorsing explicit moment-by-moment negotiations as the way well-designed RPGs should function.