Turning a bug into a feature

I was amused to read this post about an arithmetic bug which accidentally turned into an AI feature (found via Raymond’s recent link clearance). It reminded me of a story that my friend Lars told me about working on a certain well-known first-person shooter back in the day.

The developers of this game had a problem: an enemy computer-controlled character would see a fatal threat (say, a thrown grenade) and run away from it. Trouble is, the half-dozen other enemies in the area would see the same threat and they would all try to leave via the same best exit path. They’d bump into each other, shuffle around, reverse direction… and the result was an unrealistic-looking mess that called attention to the artificiality of the world. Basically, the AI had a bug; it was not smart enough to find efficient paths for everyone, and was thereby making the game less fun.

You might try to solve this problem by implementing a more complex and nuanced escape-route-finding algorithm for cases where multiple AI characters are all faced with threats. However, machine cycles are a scarce resource in first-person shooters; that solution probably doesn’t fit into the performance budget, and it’s a lot of dev work. They finally solved the problem by writing a cheap “reverse detector” that noticed when an AI character had radically changed direction more than once in a short time period. When the detector notices that an AI character has been running in two different directions in quick succession, the character’s default behaviour is changed to some variation on “crouch down and cover your head”.
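To make the idea concrete, here is a minimal sketch of what such a reverse detector might look like, assuming a simple per-frame update loop. All of the names, thresholds, and the windowing scheme here are hypothetical illustrations; the post does not describe the actual game’s code.

```python
import math

REVERSAL_ANGLE = math.radians(120)  # heading change large enough to count as a reversal
WINDOW_SECONDS = 1.5                # reversals must happen within this window
MAX_REVERSALS = 1                   # more than this triggers duck-and-cover

class ReverseDetector:
    """Tracks a character's heading and flags rapid back-and-forth movement."""

    def __init__(self):
        self.last_heading = None
        self.reversal_times = []

    def update(self, heading, now):
        """heading is in radians, now in seconds; returns True when the
        character should switch to the duck-and-cover behaviour."""
        if self.last_heading is not None:
            # Smallest signed angle between the old and new headings.
            delta = abs(math.atan2(math.sin(heading - self.last_heading),
                                   math.cos(heading - self.last_heading)))
            if delta >= REVERSAL_ANGLE:
                self.reversal_times.append(now)
        self.last_heading = heading
        # Forget reversals that have aged out of the time window.
        self.reversal_times = [t for t in self.reversal_times
                               if now - t <= WINDOW_SECONDS]
        return len(self.reversal_times) > MAX_REVERSALS
```

The appeal of this approach is exactly what the story emphasizes: each character keeps only its own last heading and a handful of timestamps, so the check costs almost nothing per frame compared with coordinating escape routes across every enemy in the area.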

With this policy in place an enemy might run away from your grenade, bump into another fleeing enemy, turn back to try to find a new route, and that would trigger the duck-and-cover behaviour. The net result is that not only does this look like realistic human behaviour, it is also very satisfying to the player who threw the grenade.

I also noticed that Shawn’s post about the arithmetic bug was part of a feature whereby the racing game’s AI was tuned not to strive to win the game, but rather to produce a more enjoyable playing experience for the human player. Lars wrote a short paper on the subject of “artificial stupidity” – that is, deliberately designing flaws into AI systems so that they appear intelligent while at the same time creating fun situations rather than frustrating situations. Quite an interesting read. (See this book for more thoughts on game AI design.)
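One common form of this kind of tuning in racing games is “rubber-banding”: the AI slows down when it is well ahead of the player and speeds up when it is well behind, so the race stays close and exciting. The sketch below illustrates the idea only; the function name, constants, and clamping scheme are all hypothetical and are not taken from the game Shawn’s post describes.

```python
def rubber_band_speed(base_speed, ai_progress, player_progress,
                      max_boost=0.15, max_drag=0.25):
    """Scale the AI's speed by how far ahead of or behind the player it is.

    Progress values are fractions of the track completed (0.0 to 1.0).
    A positive gap means the AI is ahead, so it is slowed down; a negative
    gap means it is behind, so it is sped up, within the given limits.
    """
    gap = ai_progress - player_progress
    # Map the gap to a speed multiplier, then clamp it to the allowed range
    # so the tuning never becomes obvious enough to feel like cheating.
    factor = 1.0 - gap * 2.0
    factor = max(1.0 - max_drag, min(1.0 + max_boost, factor))
    return base_speed * factor
```

Note the asymmetry in the defaults: the AI is allowed to slow down more than it is allowed to speed up, which biases the “flaw” toward letting the human win – artificial stupidity in exactly Lars’s sense.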

Sadly, bugs in compilers seldom turn out to actually be desirable features, though it does occasionally happen.

Comments

  • Anonymous
    April 05, 2010
    This post (especially at the end) reads like you are trying to further justify the wrong behaviour in the C# compiler that your last blog post talked about.

    First, my previous post was my April Fool's Day post. I assume you are talking about the post about the semantics of base calls.

    Second, the behaviour I describe is not wrong. It is by design. You might disagree that this was a good design decision; apparently most people do. I certainly see that point of view and sympathize with it, but I continue to maintain that the choice made by the language designers is the best choice that could be made given the alternatives. (Which were: preserve a crashing bug, create a subtle breaking change, or do expensive and complex work to enable a scenario we think is a bad idea in the first place. Whether the first is the lesser or greater evil than the second is a judgment call; I don't expect everyone to agree with judgment calls. That's what makes them judgment calls.)

    Third, this post has nothing whatsoever to do with the post about base calls. This post is about two things: first, that sometimes a bug accidentally causes a desirable behaviour (as in the racing game case) or can be cheaply transformed into a desirable behaviour (as in the action game case), and second, that game AIs are designed to be fun, not smart. I have no idea why you would associate either of those two things with a post about an undesirable crashing bug in the runtime leading to a subtle form of the brittle base class problem. I think we all agree that all the behaviours discussed in my post about base classes are in one way or another undesirable. There was never any accident involved in my earlier post, and that post was about the brittle base class problem, not how to design an efficient AI to make a fun first-person shooter.

    Fourth, I am not that subtle. Were I attempting to make the paragraph you refer to into a justification, I would have begun the paragraph with "This provides yet another justification for the behaviour I mentioned in my earlier post..." What precisely made you think that I was implicitly referring back to that post? I desire to be a good writer, and that means communicating my intent clearly. Apparently I have failed to do so here, so I'd be interested to know what I did wrong. -- Eric

  • Anonymous
    April 05, 2010
    > Sadly, bugs in compilers seldom turn out to actually be desirable features, though it does occasionally happen.

    Do you have an interesting example? Sure, here's one: http://blogs.msdn.com/ericlippert/archive/2006/05/24/type-inference-woes-part-one.aspx -- here we accidentally implemented a subtly different rule than the C# specification requires, and I believe that the actual implementation behaviour is better than the specified behaviour. This has happened numerous times in the compiler. Another example would be a bug I made in the method type inference engine; the spec says that in a particular scenario to consider the built-in conversions between all members of a set of types; the behaviour I implemented by accident was to consider all built-in and user-defined conversions. Upon reflection, we decided that the actual implemented behaviour was better than what we'd originally specified. -- Eric

  • Anonymous
    April 05, 2010
    Stefan, please give Eric a break. He did not mention anything about the last post, and you don't have to draw unnecessary conclusions.

  • Anonymous
    April 05, 2010
    Oh, come on, of course they are! (I'm talking about bugs in compilers.) It is always a joy to see that it is not you who is stupid - it is a bug in the compiler. Besides, some bugs lead to enjoyable communication with your remote software development sites. A good example is the MIDL compiler bug with upper/lowercase. The fact that neither you nor your remote colleagues broke the interface between the modules, but the MIDL compiler did, promotes international peace and friendship :)

  • Anonymous
    April 05, 2010
    As a class exercise (a long, long time ago) I wrote a Tic-Tac-Toe game in Java.  My kids were relatively young at the time, so I let them have a go at it.  They got pretty bored with it, though, when they realized that the best they could do was tie.  A classic need for "artificial stupidity" if ever there was one.  They were pretty impressed that Dad could write a game that couldn't be beat; I confess that I didn't go overboard in explaining how simple it was.

  • Anonymous
    April 05, 2010
    The comment has been removed

  • Anonymous
    April 05, 2010
    Bugs are like mutations in the genetic code of a cell: most of them are bad, and a few turn out to be good. The organism tries to eliminate the bad ones and keep the good ones through the process of natural selection. Moral of the story: the more bugs you have, the higher the chance that some of them will be beneficial ;).

  • Anonymous
    April 05, 2010
    Eric, you've officially been Chenned.  You'll need to start your own Nitpicker's Corner now! @pete.d - If you read Lars' write-up, you'll see that it's all about how the "technically superior" approach is often the antithesis of FUN.  Players don't really want the AI to be smart, they just want it to not look too dumb.

  • Anonymous
    April 06, 2010
    The comment has been removed

  • Anonymous
    April 06, 2010
    The comment has been removed

  • Anonymous
    April 06, 2010
    @Eric Yes, that's exactly what my rewritten A.I. does. In fact it has more than three strategies available and can mutate between them at different stages of the game. I wouldn't say they are Rock Paper Scissors complementary, but not far off. Whenever I make an A.I. change now I run my soak test mode at full speed overnight. Early on I did get cases where someone would get stuck in Australasia and the game would never end.

  • Anonymous
    April 07, 2010
    The comment has been removed

  • Anonymous
    April 07, 2010
    "...seldom turn out to actually be desirable features, though it does occasionally happen." Sounds like there's a story to be told there.

  • Anonymous
    April 07, 2010
    I dabble on the periphery of SecondLife, a free multiuser platform that allows regular users to run their own (sandboxed) code in a 3d multiuser environment. My involvement is mostly in writing documentation for the scripting language (LSL)... but from the community side. LL, the producers of SL, have a bad record of writing the documentation and have never written a spec for the language, let alone for the library of functions. This has resulted in a substantial amount of functionality becoming lava-flowed... and in a few cases bugs became features, or better still, accidental features. As you might imagine, it makes documenting the platform and language difficult. I'm always asking the question: is this a bug or a feature? It's even worse for the users. Without transparency, users may depend so heavily upon bugs that the bug becomes a feature: a misfeature. This is something that MSDN could work on.

  • Anonymous
    April 07, 2010
    @Daniel Earwicker - Good story.  It can be amusing when people or animals are convinced of patterns that do not exist.  While reading your post, the first thing that came to my mind was climate scientists :) (I then thought of many of the other "origin" scientific theories, but those are probably too controversial to bring up.)