
Cheating in game simulations


Suppose you’re writing a simulation for a strategy game in which you’d like to compare how different strategies fare.  You might require that the strategies satisfy an API like the following (in Go):

type Strategy interface {
    // Tell me what player I am, and the initial board state
    initialize(view GameStateView, player int)
    // Decide what I should do, in a given game state
    decide(view GameStateView) Play
    // Process some game event out of the player's control, e.g. another player's play
    process(view GameStateView, event Event)
}
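To make the intended usage concrete, here's a minimal sketch of how a simulator might drive strategies through an interface like this. All the concrete types (`PassStrategy`, the fields of `GameStateView`, etc.) are hypothetical stand-ins, not part of the real simulator:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real types.
type GameStateView struct{ Turn int }
type Play struct{ Move string }
type Event struct{ Play Play }

type Strategy interface {
	initialize(view GameStateView, player int)
	decide(view GameStateView) Play
	process(view GameStateView, event Event)
}

// A trivial honest strategy: it only ever looks at its own view.
type PassStrategy struct{ player int }

func (s *PassStrategy) initialize(view GameStateView, player int) { s.player = player }
func (s *PassStrategy) decide(view GameStateView) Play            { return Play{Move: "pass"} }
func (s *PassStrategy) process(view GameStateView, event Event)   {}

func main() {
	players := []Strategy{&PassStrategy{}, &PassStrategy{}}
	view := GameStateView{}
	for i, p := range players {
		p.initialize(view, i)
	}
	// One simulated round: the active player decides,
	// and everyone else processes the resulting event.
	for turn := 0; turn < 2; turn++ {
		view.Turn = turn
		active := turn % len(players)
		play := players[active].decide(view)
		for i, p := range players {
			if i != active {
				p.process(view, Event{Play: play})
			}
		}
		fmt.Println(play.Move)
	}
}
```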


1. Out-of-band communication

This setup is fine for chess, but if the game involves partial information and the potential for collaboration (in my case, Hanabi), you can easily cheat:

var external_state *ExternalState = new(ExternalState)

type CheatingStrategy struct {
    internal_state *InternalState
}

func (s *CheatingStrategy) initialize(view GameStateView, player int) {
    s.internal_state = newInternalState(view, player)
}

func (s *CheatingStrategy) decide(view GameStateView) Play {
    // Perfectly fine code
    stuff := s.internal_state.get_stuff()
    decision := some_logic(view, stuff)
    // Cheat!  Read/mutate external state
    partner_cards := external_state.get_partner_cards()
    decision = changed_my_mind(view, partner_cards, decision)
    return decision
}

func (s *CheatingStrategy) process(view GameStateView, event Event) {
    // Could cheat here too
}

This cheating is possible because Go lets you have shared mutable state. It’s the equivalent of bridge partners playing footsies under the table.

If the state weren’t shared, it would be merely internal state, e.g. the inner workings of a bridge player’s mind, which is fine.

If the state weren’t mutable, it just means that Strategies may have fixed contracts ahead of time, e.g. bridge conventions. But any nontrivial strategy will automatically have contracts with other instantiations of itself. (I use bridge as an example, because it’s clearly not considered cheating. But I’ve heard people argue this sort of thing is against the spirit of Hanabi. Could be – but I like the game more both theoretically and in practice when allowing it!)

2. Direct hacking attempt

Notice also that you can attempt to cheat directly:

    // another attempt to cheat
    view.score += 1

So to be safe, the simulator shouldn’t use the GameStateView after giving it away to the strategy. This is likely to be an implementation annoyance and inefficiency, since the simulator will need to repeatedly produce deep copies of parts of the game state.
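One way the simulator can defend itself is to hand each strategy a deep copy of the view, keeping the canonical state for itself. A sketch (the `clone` helper and the view's fields are hypothetical; a real view would have more to copy):

```go
package main

import "fmt"

// Hypothetical view: a score plus a slice, so a shallow copy
// would still share the slice's backing array.
type GameStateView struct {
	Score   int
	Discard []int
}

// clone produces a deep copy, so mutations by a strategy
// can't reach the simulator's canonical state.
func (v *GameStateView) clone() GameStateView {
	c := GameStateView{Score: v.Score}
	c.Discard = append([]int(nil), v.Discard...)
	return c
}

func main() {
	canonical := GameStateView{Score: 3, Discard: []int{1, 2}}
	handed := canonical.clone()

	// A cheating strategy tries the direct hack:
	handed.Score++
	handed.Discard[0] = 99

	// The canonical state is untouched.
	fmt.Println(canonical.Score, canonical.Discard[0])
}
```

This is exactly the annoyance mentioned above: every call into a strategy costs a fresh copy of whatever state it gets to see.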


Obviously you can always cheat by storing things in a file or remote database. So let’s imagine an in-memory, network-less sandbox. Interestingly, there are two quite different solutions.

Solution 1: Functional programming

A pure functional language makes it much harder, or impossible, to cheat, by disallowing mutability. Here’s the strategy API, in Haskell:

class Strategy state where
    -- Tell me what player I am, and the initial board state
    initialize :: Player -> GameStateView -> state
    -- Decide what player i should do, in a given game state
    decide :: Player -> GameStateView -> state -> (state, Play)
    -- Process some game event out of the player's control, e.g. another player's play
    process :: Player -> GameStateView -> Event -> state -> state

Shared mutable state automatically becomes impossible, thanks to Haskell’s purity! The decide and process functions aren’t allowed to have side effects. As a bonus, mutating the GameStateView becomes impossible, too.

(Note: You can probably do something more Haskell-y, but I’m not currently the guy to know how.)

Solution 2: Rust

Until pretty recently, using a functional programming language might have been the only option. But along came Rust:

Shared mutable state is evil. So functional programming languages thought: let’s have no mutable state. So Rust thought: let’s have no sharing of state.

Rust happens to be the language I was playing around with, for this Hanabi simulation business. Here’s my strategy API:

pub trait Strategy {
    fn new(view: &GameStateView, player: &Player) -> Self;
    fn decide(&mut self, view: &GameStateView) -> TurnChoice;
    fn update(&mut self, view: &GameStateView, turn: &Turn);
}
Again, shared mutable state becomes impossible*, thanks to Rust’s ownership system! And again, as a bonus, mutating the GameStateView becomes impossible – you have to have a mutable reference to do so.

*Okay. I lied – it’s still possible, but made harder and highly discouraged. And I think you could tweak the language to disallow it, which is the important point. (Rust was designed as a systems language. I think if another language adopted Rust’s approach, they could get rid of stuff like Cell/Arc/Mutex, and disallow this relatively cleanly. And maybe also get closer to Haskell’s prettiness.)

Lime Rick pictorial walkthrough

I enjoyed this short game called Lime Rick.

Each solution’s captured in one pic.

Until you shall see,

On level twenty three…

If you don’t want spoilers then don’t click!


I’m dreaming of a white… lie

All across America, millions of parents conspire to tell their gullible children the same lie, tricking them with fake incentives to behave in a way that conforms more to authority.  Sounds horrible right?  Actually, it is generally considered harmless!  (And also, it’s not religion!)  A big, fat lie.  Do you know what it is?  If you still haven’t figured it out, does the title of the post jingle any bells?

We tell children that Santa makes a list of “naughty” and “nice” children, and gives only the nice ones presents.  There is no naughtiness continuum – it’s black and white, much like hell and heaven.  In theory, if children believe that they could be either naughty or nice, they will act nicer, hoping for presents.  As a matter of fact, the children are probably getting presents from “Santa” regardless.  And of course, some of these kids are actually relatively naughty.  But they probably believe they are “nice”, at least on Christmas.  And indeed, the promise of presents probably makes them behave a bit nicer, so the parents (err, I mean Santa) win.

Presumably the lie is mostly harmless, makes children and adults alike happy, and, most of all, is part of our culture.  Also, many kids probably never believed it, and certainly not for very long.  That said, in light of the holiday spirit, I’m going to celebrate a virtue of having uncertainty about things that are false (hey!  I *am* nice).  Let me explain, with a lie:

  1. Let L be the sentence “If L is true, then Santa exists”
  2. Suppose L is true.  Then, “L is true” is true.  So since L is true, by its definition, we have that “Santa exists” is true.  So we have proven that “If L is true, then Santa exists”.   So we have proven L!
  3. But again, L being true implies Santa exists.  So we have proven Santa exists!
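The steps above are an instance of Curry's paradox; written out as a formal derivation (with $S$ for "Santa exists"):

$$
\begin{aligned}
&1.\quad L \leftrightarrow (L \to S) && \text{definition of } L \\
&2.\quad L \vdash L \to S && \text{from 1, under the assumption } L \\
&3.\quad L \vdash S && \text{modus ponens on the assumption and 2} \\
&4.\quad L \to S && \text{discharge the assumption in 3} \\
&5.\quad L && \text{from 1 and 4} \\
&6.\quad S && \text{modus ponens on 4 and 5}
\end{aligned}
$$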

Take a minute to digest that argument.  Notice this argument works for “Satan exists”, or “pigs fly”.  If you’re curious, this argument is closely related to Löb’s theorem, which I learned about in college from LessWrong and some friends.  The general takeaway is supposed to be “Just because something is provable doesn’t mean it’s true”.  Does that mean we shouldn’t believe things are true when we prove them?  No, it just means that just because you believe something doesn’t mean it’s true.

Another closely related theorem is Gödel’s second incompleteness theorem.  It states that if a formal theory proves its own consistency, then it is inconsistent.  Thus a consistent theory should remain agnostic about its own consistency (It also can’t assert its own inconsistency, of course, since it would be wrong about that.  Or would it?)

As a human, I believe I’m relatively inconsistent.  But that doesn’t mean I spout off nonsense at every opportunity (I mean, maybe I do, but that’s not the point).  I still aim to be consistent, and as a result, am somewhat more consistent.  The Santa Claus lie greatly resembles this, if you replace “Truth” and “Falsity” with “Niceness” and “Naughtiness”.  Naughty or nice, a kid should believe they can be nice, and by doing so, they become a bit more nice.  In reality, they’ll never be perfect in the eyes of their parents (assuming the parents are reasonable), but at least they can get some presents.

I was wrong and Transcendence was bad

I made a post over a year ago predicting that Transcendence would be a movie my friends enjoyed.  The movie got terrible reviews from the critics.  Having seen it, I am confident (without having asked many of my friends) that my prediction, which I had pegged as 80% likely, was wrong.

I don’t think Transcendence was as bad as the movie critics found it.  That said, there were some painful moments – one so bad I feel compelled to mention it (no, it’s not really a spoiler).  While Evelyn is working with Will’s uploaded data and trying to bring an intelligence out of it, she says “I’ve tried everything.  Language processing, cryptography… coding”.  Anyways, I don’t recommend you watch the movie.

If you’re gonna watch a big blockbuster sci-fi movie, I recommend Interstellar, which I enjoyed greatly.

Google Maps + rideshare

Google Maps works well for me when I use public transit or walk, but I often use UberX or Lyft to get around.  In particular, I often want to compare the costs and times of using rideshare to those of public transit.  It seems like Uber, Lyft, and normal cabs would all be happier businesses if, whenever you searched how to get from location A to location B by car (or maybe even public transit), it also estimated how much each service would cost.  This could probably be built pretty unobtrusively into the UI.  It could also be user-enabled/disabled (and it would only appear in cities where the services exist, of course).


  • Estimating prices is tricky, especially because of UberX surge pricing and/or Lyft happy hour.  But Uber and Lyft would be highly incentivized to make a good API for this, if Google were willing to use it (and it seems Google is willing to do something of this form for BART and MUNI, and it works fairly well, in my experience).  But surge pricing and happy hour are not known in advance, so the prediction would be slightly poor for later times.
  • Estimating time is tricky as well (relative to estimating driving time), if one wants to take into account the time it takes to get the driver (this wouldn’t affect the transit time, just the start time, analogous to the way it works for public transit).  Again, Uber and Lyft could make an API for this, but it would only make sense if you are looking to travel very soon.

I’m guessing Google Maps doesn’t care to do explicit deals with businesses, but adding this feature makes sense for Google in other ways:

  • It would make the Maps service better for people like me.
  • Google has already shown interest in Uber, investing $258 million in the company.  I suspect Google is interested in the rideshare model, since it could potentially be a natural entrance for self-driving cars.  Also maybe learning things about the market and UI for Maps+rideshare would inform them of how to do things in the future when self-driving cars are ubiquitous.

This feature might increase emissions/traffic very slightly, since it would drive people off public transit onto rideshare.  I’m guessing this effect is tiny (and I currently don’t take externalities into account at all, when deciding whether to take public transit or rideshare) and Google wouldn’t care much about it.  Also, the rideshare of the future, with larger cars and multiple pickups, might be more efficient than the public transit it displaces, so it might actually be better to encourage it.

EDIT (12/25/2014):  I totally forgot about this post, but I’ve since seen Uber prices integrated into Google Maps!  Indeed, it was integrated just last month, on 11/05.

Is the future close?

Is “the future” closer than I previously thought? Ran into both of these today.

Haven’t read them, and don’t know what to think yet.  But they vaguely sound like progress towards strong AI and towards brain emulation, respectively.

EDIT:  My conclusion after reading more about both – future is not necessarily that close


There’s a movie which is planned for release about a year from now, about the singularity.  Here are some potential selling points:

  • Themes of uploads and the singularity, and even nanotechnology
  • Christopher Nolan
  • Johnny Depp, Morgan Freeman (voicing the singularity?!), Cillian Murphy, and some other “stars” I haven’t heard of

The premise is apparently that Johnny Depp and two other scientists have been working on code for a self-aware computer (via uploads?).  Depp gets killed by anti-technology “terrorists”, and is then uploaded into a supercomputer.  After gaining internet access, he backs himself up to every computer in the world.

Overall, not too much is known about it, at this point.  But it seems like it could be a pretty reasonable plot, and as someone pointed out, it sounds like the technophiles probably won’t be the bad guys.  A depiction of an AI with access to the internet attempting to gain power in the world would be highly amusing, if done well.  And the movie may touch upon interesting philosophical points and upon AI risk.

Some of my friends are excited (for different reasons), while at least one other is skeptical.  I am very hopeful.  True, most likely, there will be some highly unnecessary action scenes, and some questionable futurism.  Nevertheless, even if the movie is more likely to be mediocre, I’m quite excited for the long tail of the movie’s awesomeness distribution.

My (possibly bold) prediction:  More than half of my friends, after having seen the movie, when asked whether the movie was in the top 5% of movies they’d watched, will say yes.  Probability: 80%.