Software! Math! Data! The blog of R. Sean Bowman
July 13 2016

I like the game Clue (called Cluedo in some parts of the world). I played it with my family when I was young, and we always had a great time. In college I wrote several programs to try to deduce what cards players were holding, but there was always some deduction my program missed. I started thinking about Clue again recently, and found much to think about. Frankly, this was a bit surprising to me! Isn’t Clue pretty straightforward?

Short answer: no, I guess not. Although I originally wanted a "clue assistant" program, I ended up with a platform for agents to play Clue against themselves; you can see the code here. In the course of writing and thinking about the game, I discovered a fascinating new way to deduce cards and answered a small question in dynamic epistemic logic.

The basics

Clue is a board game, but I’m really only interested in the deduction part. So I’ll leave all the extraneous stuff out and describe a core version of Clue.

  • There are 3-5 players.

  • There are three types of cards, “suspects”, “weapons”, and “rooms”. In the version of Clue I have, there are six suspect cards, six weapon cards, and six room cards.

  • One card of each type is chosen at random and put in a “secret box” that no player can see.

  • After this, the cards are shuffled and distributed as evenly as possible among the players.

The narrative behind Clue is that players are solving a murder mystery. The goal is to discover the identity of the murderer, the weapon they used, and the location the murder occurred by deducing the three cards in the “secret box”. The game proceeds like this:

  • The first player makes a “suggestion”, announcing that they believe the murder was committed by a particular person with a particular weapon in a particular room (these correspond to cards; note that the player need not hold the cards themselves).

  • The player to the left must either refute the suggestion by showing (secretly) one of the suggested cards, or, if they have none of the cards suggested, pass to the next player.

  • This process continues until the suggestion is refuted or all the players have passed.

At the end of their turn, a player may either make an “accusation”, stating that they know the secret cards, or end their turn, letting the next player have their turn.

A Knowledge Game

Note the information we get from a suggestion: if we’re doing the suggesting, we might see one of the cards we suggested. We might find out that a player has none of the cards suggested. And if we’re not doing the suggesting, we might find out that some player has one of the suggested cards, but we don’t know which one.

From this information, it is possible to deduce the locations of cards. With some practice and a note taking system, people get pretty good at playing Clue. But the point is that Clue is a knowledge game. The state of the game does not change; only the knowledge of the actors over time does. We’ll come back to this later, but first, how do we encode our knowledge of what cards are where?

For simplicity, we’ll denote players as P1, P2, and so on, and use a predicate \(has(p, c)\) to mean that player \(p\) has card \(c\). So for example we might know that P1 has the Miss Scarlett card, written \(has(P1, MissScarlett)\). If we know that a player doesn’t have a card, we’ll write \(\neg has(p, c)\). If we know two facts, we join them with \(\wedge\), so that \(has(P1, c)\wedge has(P2, c')\) means that P1 has card \(c\) and P2 has a different card \(c'\). Sometimes we only know that one of several things is true; we write this with \(\vee\). So \(has(P1, c)\vee has(P2, c)\) means that either P1 or P2 has card \(c\), but we don’t know which.

Propositional logic

Now we’re in a position to write down what we know about what cards are where. For example, if we make a suggestion and are shown card \(c\) by player \(p\), we now know \(has(p, c)\). If player \(p\) cannot refute our suggestion of “Mr. Green with the knife in the parlor,” we know \(\neg has(p, MrGreen)\wedge \neg has(p, knife) \wedge \neg has(p, parlor)\). Finally, if P2 makes the suggestion of three cards \(c_1, c_2, c_3\) and we see P3 show P2 a card, we know that \(has(P2, c_1)\vee has(P2, c_2)\vee has(P2, c_3)\). This knowledge, combined with our knowledge of the cards we hold at the beginning of the game, allows us to make deductions about what cards other players hold. This type of logic is known as propositional logic and simple versions are studied by most college students in some form or another.
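As a sketch, these three kinds of observations can be encoded as clauses. The clause format and the player and card names below are made up for illustration; the real project feeds similar clauses to an actual SAT solver.

```python
# A hypothetical, minimal encoding of Clue observations as CNF clauses.
# A literal is a (player, card, positive?) triple; a clause is a list of
# literals read as an OR, and our knowledge is the AND of all clauses.

def shown_card(player, card):
    # We were shown `card` by `player`: has(player, card).
    return [[(player, card, True)]]

def could_not_refute(player, suggestion):
    # `player` passed on a suggestion: ¬has(player, c) for each suggested card.
    return [[(player, c, False)] for c in suggestion]

def refuted_unseen(player, suggestion):
    # `player` showed someone a card we didn't see:
    # has(player, c1) ∨ has(player, c2) ∨ has(player, c3).
    return [[(player, c, True) for c in suggestion]]

knowledge = []
knowledge += shown_card("P3", "knife")
knowledge += could_not_refute("P2", ["MrGreen", "knife", "parlor"])
knowledge += refuted_unseen("P4", ["MissScarlett", "rope", "kitchen"])
```

Each observation during a game simply appends a few more clauses to the growing knowledge base.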

Some semi-technical details

Here are some vague ideas about how this can be done automatically, with a computer. First, we need to tell the computer some facts about the game:

  1. Every card is either held by exactly one player or is in the secret box,
  2. Each player has some number of cards (we can see how many; this is not a secret), and
  3. There are exactly three cards in the secret box, one of each type.

It is somewhat tedious but not difficult to formalize these statements and have the computer understand them.[1] Then we add our knowledge of our own cards and what we’ve seen so far from our observation of the game. We take all these facts and feed them into a so-called SAT solver, short for satisfiability solver. This allows us to deduce locations of certain cards in the following way: we hypothesize that a certain player has a certain card. If the SAT solver tells us there are no satisfying truth assignments to the resulting sentence in propositional logic, our hypothesis must have been incorrect, and so that player cannot possess that card. On the other hand, if we hypothesize that a player does not have a card and the SAT solver does not find a solution, then we know that that player must have the card. If the SAT solver finds a solution in either case, we learn nothing except that the player may hold the card.
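Here is a toy sketch of the hypothesize-and-check idea. The real project uses a proper SAT solver; this version brute-forces all truth assignments, and the variable names and observations are invented for the example.

```python
from itertools import combinations, product

def exactly_one(variables):
    """CNF clauses stating that exactly one of `variables` is true."""
    clauses = [[(v, True) for v in variables]]       # at least one holder
    clauses += [[(a, False), (b, False)]             # no two holders
                for a, b in combinations(variables, 2)]
    return clauses

def satisfiable(variables, clauses):
    """Brute-force satisfiability check (a stand-in for a real SAT solver)."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(any(model[v] == pos for v, pos in clause) for clause in clauses):
            return True
    return False

# Toy instance: the knife is with exactly one of P1, P2, or the secret box.
variables = ["p1_knife", "p2_knife", "box_knife"]
clauses = exactly_one(variables)
clauses.append([("p2_knife", False)])   # observed: P2 passed on a knife suggestion
clauses.append([("box_knife", False)])  # deduced earlier: the knife isn't secret

# Hypothesize ¬has(P1, knife); an unsatisfiable result means P1 holds the knife.
print(satisfiable(variables, clauses + [[("p1_knife", False)]]))  # False
```

The `exactly_one` helper is the same shape of constraint used for rule 1 above, applied card by card over all players plus the secret box.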

Probability, etc.

In fact, SAT solvers can be used to tell us the probabilities that certain players hold certain cards, from our point of view. I believe that these probabilities correspond exactly to what you’d get by using Bayes’ theorem repeatedly assuming uniform priors.

To get probabilities, you need your SAT solver to enumerate all models (truth assignments) that satisfy your current state of knowledge. Count how many of those models make \(has(p, c)\) true for the players and cards you’re interested in. The probability that \(p\) holds \(c\) is just that count divided by the total number of models.
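The counting argument can be sketched in a few lines, again with brute-force enumeration standing in for an all-models SAT solver and a made-up toy knowledge base:

```python
from itertools import product
from fractions import Fraction

def model_probability(variables, clauses, var):
    """Fraction of satisfying models in which `var` is true."""
    total = hits = 0
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(any(model[v] == pos for v, pos in clause) for clause in clauses):
            total += 1
            hits += model[var]
    return Fraction(hits, total)

# Toy knowledge: the knife is with exactly one of P1, P2, or the secret box,
# and we know nothing else, so each location should be equally likely.
variables = ["p1_knife", "p2_knife", "box_knife"]
clauses = [
    [("p1_knife", True), ("p2_knife", True), ("box_knife", True)],
    [("p1_knife", False), ("p2_knife", False)],
    [("p1_knife", False), ("box_knife", False)],
    [("p2_knife", False), ("box_knife", False)],
]
print(model_probability(variables, clauses, "p1_knife"))  # 1/3
```

Note that this treats every satisfying model as equally likely, which is exactly the uniform-prior assumption mentioned above.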

This isn’t hard, but it is time consuming, and I’m not sure how much it gets us, so I have left it out of most of my experiments.

Epistemic explorations

So far, we have a computerized assistant that allows us to use all the information from the game to deduce locations of cards. Clearly this is a great help in playing Clue. This is the program I was trying to write in college! At the time I realized that this is a good way to formalize deduction in Clue, I believed that this method used all the information available to the player.

And so, being a mathematically minded fellow and always wondering if we can know a bit more, I began to think about what a player can know about another player’s knowledge. Above, we considered only our knowledge about what cards are where. What can we say about what other players know about the locations of cards? For example, if we know that P2 has card \(c\), we can ask the question “does P3 know that P2 has \(c\)?”

Once we start thinking about these types of questions, there’s really no end to the madness we can create. For example, we can ask things like “does P1 know that I know that P2 doesn’t know that P1 has the knife card?” Of course, this is crazy and probably unhelpful, but if we’re able to reason about the knowledge of players, we should ideally be able to reason about arbitrary questions like this.

Common knowledge

As a special case of the knowledge of other players, let’s talk about common knowledge. Informally, you can take this to mean “everybody knows this,” but the real notion I have in mind is (as the wikipedia link describes) facts that all players know, that all players know all players know, that all players know that all players know all players know, and so on.

Common knowledge in Clue takes the form of “announcements” that can be seen by everybody: either a player shows a card to another player (but we don’t see which card) or a player passes. This is knowledge that is available to everybody at the table, even an observer who never sees a single card during the entire game. Therefore we represent common knowledge as a silent player, the observer, who does not participate except to observe all actions of the other players.
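One way to realize the observer idea, sketched here with hypothetical names (the actual implementation lives in the linked code): keep one fact list per player, plus one for a silent observer who records only public announcements.

```python
# A hypothetical sketch of the "observer" bookkeeping: public announcements go
# to everyone (including the observer); privately shown cards go to one player.

players = ["P1", "P2", "P3"]
facts = {p: [] for p in players + ["observer"]}

def public(clause):
    """Record a clause every player (and the observer) can see."""
    for p in facts:
        facts[p].append(clause)

def private(player, clause):
    """Record a clause only `player` learns (e.g. being shown a card)."""
    facts[player].append(clause)

# P2 passes on a knife suggestion: everyone sees this.
public([("P2", "knife", False)])
# P3 secretly shows us (playing as P1) the knife: only we learn this.
private("P1", [("P3", "knife", True)])
```

Running the observer’s fact list through the same SAT machinery tells us what a cardless spectator could deduce about the game.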

Using common knowledge, we can divide the deduction constraints into three groups: constraints on the game itself (each card can be held by at most one player, together with the rest listed previously), constraints given by common knowledge, and constraints given by an individual player’s knowledge of the cards they hold together with the cards other players have shown them.

Note that the vast majority of the constraints come from the first two groups! Clue is a game decided by the very little, but very important, knowledge of the players themselves.

Epistemic logic (and more!)

I’m interested in questions about what we can know about other players’ knowledge in Clue because… well, these questions are interesting to me per se. But there’s another motivation for thinking about this stuff. I began to have my doubts that all the available information was being used in the deduction engine I described above. First of all, it’s clear that other players’ knowledge is not being used in the scheme I describe. But does adding in this extra information actually help us play Clue? I could not give examples where it would, nor prove that it couldn’t.

So, I did a bit of reading. It turns out that people have thought some about Clue, including modeling the game in a theorem prover (Modeling Epistemic Actions in Coq), describing how the knowledge of actors changes over time (Specifying and Verifying the Game Cluedo Using Temporal Logics of Knowledge), and describing player actions using Epistemic modal logic in Description of actions in Cluedo by Hans van Ditmarsch. This last paper especially got my attention.

Epistemic logic is a type of modal logic that allows reasoning about the knowledge of agents. There are several of these logics, but we’ll focus on one called S5. Whereas above we could only make statements or hypotheses like \(has(p, c)\) or \(\neg has(P1, knife)\), in S5 we can say things like "P1 knows that P2 has the kitchen card." This would be written like \(K_1 has(P2, kitchen)\). We can say much more complicated things, too, like \(K_2 (\neg K_1 has(P2, pipe) \vee has(P1, revolver))\). This says "player 2 knows that either player 1 doesn’t know that player 2 has the pipe, or player 1 has the revolver." Clearly these types of statements and questions can get quite complicated.[2]

New (actionable?) information

Epistemic logic is pretty cool. Being able to reason about the knowledge of others is a powerful concept. In the above paper, the author considers an idealized card game not unlike Clue. The game is modeled much as we did above, with “knowledge actions” (actions that transmit information) of showing a card to another player or announcing that you have none of the suggested cards. But a third possibility is included, that of ending your turn without making an accusation. This caught my attention.

Ending your turn seems like a completely innocent act, doesn’t it? In all the years I played Clue, I always ended my turn thinking “yay, I learned something” or “I need to record some bit of info or another.” I never thought of ending my turn as a potential source of information for other players. And upon reflection, it’s not entirely clear that it gives any useful information. So… does it?

Van Ditmarsch himself asks a strong version of this question at the end of the paper: can it be that no player knows the secret cards just before a player announces the end of their turn, but once this act has occurred, some other player is able to deduce the secret cards?[3] Note that we’re talking about deducing the secret cards from the fact that a player ended their turn without making an accusation, not just that some player was slow in figuring out what the secret cards are.

In general, can a player learn anything at all from the fact that a player ended their turn without making an accusation? What a great question! I wanted to know the answer.

Epistemic logic without the modal part

Well, we could write our own model checker for S5, or use one of a few interesting academic ones I found. For a while I got excited about writing one, just to be able to answer (maybe?) the crazy questions above (does P2 know that P1 doesn’t know that…). I looked into it and came to the conclusion that it would be a big undertaking. I don’t shy away from difficult things, and frankly I’d still like to write a model checker for a suitably restricted “card game oriented” S5, but I don’t have the time or energy right now. So: is there a way to answer epistemic questions, maybe just simple ones, using the propositional logic framework described above?

Yes, it turns out to be pretty easy, in fact.[4] Suppose that a player (P2, say) passes, and we want to try to make interesting deductions from that. What information can we use? As before, we can use the constraints on the game itself as well as the common knowledge obtained by all players so far. We can also use our knowledge of what P2 knows. We must be careful here, because although we may know, say, that P3 has card \(c\), P2 might not know that. What we can say is that although P2 might not know that P3 has card \(c\), P2 certainly knows that they don’t have card \(c\). So we come up with a list of cards P2 knows they have (because we know they have them) and a list of cards they know they don’t have (because we know they don’t have them) and throw them into our SAT solver.

For every card \(c\) we know P2 might have, we assume they have that card. If the SAT solver tells us that they can deduce the secret cards from holding \(c\), then they must not hold \(c\)! Otherwise they would not have passed! Similarly, if we assume that they don’t hold card \(c\) and find that they could deduce the secret cards, then they must in fact hold \(c\). Easy as pie, and if we manage to figure out that P2 holds or doesn’t hold some card using this technique, we can go back and re-examine all the other cards we’re unsure about. In that way, we may be able to obtain information about a bunch of cards at once.
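The loop can be sketched like this. Here `could_deduce_secret` stands in for the SAT check (given a set of facts attributed to the passing player, would those facts pin down the secret cards?), and every name is hypothetical:

```python
# A hypothetical sketch of pass-based deduction. Facts are (player, card,
# positive?) triples; `could_deduce_secret` is a stand-in for the SAT check.

def deductions_from_pass(passer, candidate_cards, passer_facts, could_deduce_secret):
    """Return {card: True/False} for cards whose location the pass reveals."""
    learned = {}
    for card in candidate_cards:
        # If holding `card` would have let them win, they can't hold it.
        if could_deduce_secret(passer_facts | {(passer, card, True)}):
            learned[card] = False
        # If *not* holding `card` would have let them win, they must hold it.
        elif could_deduce_secret(passer_facts | {(passer, card, False)}):
            learned[card] = True
    return learned

# Toy check: suppose the solver reports a win exactly when the facts
# include has(P2, knife). Then P2's pass proves they don't hold the knife.
toy = lambda facts: ("P2", "knife", True) in facts
print(deductions_from_pass("P2", ["knife", "rope"], frozenset(), toy))  # {'knife': False}
```

Any card the loop settles can then be fed back in as a new fact, possibly unlocking further deductions on the next sweep.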

Why is it important for us to work outside the propositional framework we outlined above? That framework allowed us only to reason about our own knowledge. Here, we have used “higher order knowledge” at least twice: once because we are considering what happens when a player announces that they don’t know the secret cards (we know that they don’t know the cards), and again when we assemble our knowledge about what the player knows (we know that they know that they have/don’t have a certain card).

Butt-kicking Cluebots

Now I needed data from a bunch of games, but they’re slow to play and my friends and family got bored of me constantly talking about Clue. So I had the computer play itself – you can have a look at the code if you like. The computer agents play fairly aggressively with the single aim of trying to deduce the secret cards. They’re pretty good at logic, too, using the SAT solver and being computers as they are. When one is able to deduce the secret cards at the end of their turn, they win.

Remember, the question we asked above is whether it is possible for a player to deduce the location of cards from the fact that an opponent passed at the end of their turn. And the short answer is: yes! It is possible! In fact, I observed a game in which an agent wins using this strategy. This answers van Ditmarsch’s question above in the affirmative.

A note on real life, idealized assumptions, and disappointment

Note that the strategy described above doesn’t work very well in the real world. I know from my own experience that not all Clue players use all the information available to them and might not even realize that they have enough information to win the game. (For one, I am usually playing for fun, do not keep track of all the information I could, and probably couldn’t process it all even if I did keep track of it – so there is at least one crappy Clue player among us!) If a player passes at the end of their turn because they don’t realize that they can win, we may make erroneous conclusions about the cards they hold.

So, before you get too excited about trouncing your family, classmates, or colleagues at Clue, consider that the above strategy will probably backfire badly. (Unless, you know, your family, classmates, or colleagues are really, really good at logic or very quick with the boolean satisfiability problem.)

Other stuff I learned

Although the main motivation for writing this software was to answer the epistemic questions above, I ended up collecting more data. One interesting and slightly surprising thing I found was that the common knowledge “observer” who watches the game but never sees any specific cards held by players knows quite a lot at the end of the average game. With four players, on average about one secret card was known to the observer just before the end of the game. The observer knew a lot about cards held by individual players, too.

Conclusion

I wrote a program to help me win at Clue. I ended up with a program that played itself at Clue. I learned some things about Clue, logic, and software development that might be interesting to other people. I would love to write more about this at some point.


  1. While encoding these constraints I found a neat identity that allows us to convert formulas for the game constraints, written most naturally in DNF, to CNF, which the SAT solver expects. Nifty! ↩︎

  2. As a small aside, (temporal) epistemic logic and common knowledge can get incredibly weird and complicated. Our intuition of what common knowledge is (or at least my intuition) is not very good. See the link above on common knowledge, which explains the “blue islander puzzle”, as well as Terry Tao’s post on epistemic logic and this puzzle. (The latter is not for the faint of heart as the math/logic gets quite complicated.) ↩︎

  3. Here is the exact question as van Ditmarsch phrases it. In his notation, \(K\delta\) means that some player knows the secret cards, and \(L_{123}\neg K\delta\) means that players 1, 2, and 3 learn that none of them know the secret cards. (The paper considers a game between only three players, so \(L_{123}\) means that all players learn.) The double turnstile and variable \(s\) are used as in modal logic, and blackboard bold brackets indicate state after an announcement. Van Ditmarsch writes, “Is there a game state \(s\) such that \(s \models \neg K\delta\) but \(s\denote{L_{123}\neg K\delta}\models K\delta\)?” ↩︎

  4. This is a small, special case of translating a model checking problem in epistemic logic to the problem of satisfiability of a propositional formula. This has been done, even in the case of the dynamic epistemic logic considered by van Ditmarsch. Let me just say that this stuff is super cool and applicable to all sorts of real world problems. As usual, there is only a vague distinction between fun, toy problems and big, tough, important problems we find in the garden of mathematics/computer science. ↩︎

Approx. 3500 words, plus code