I got a lot of nice feedback from my presentation. The main message I wanted to get across was the following. When you have a plan and you start executing it in a real (or virtual) environment, the plan can break partway through, i.e. become impossible to finish executing. The source of this problem is one (or both) of the following:

  • Uncertainty (coming in three flavors)
  • Dynamic Environments

Of the options that people have thought up (classic planning/execution/replanning, policy-based methods, and incremental planning), I argue that incremental planning is by far the best option in virtual environments for solving the Dynamic Environments part. So, ICT is excited about getting Incremental Planning to solve Uncertainty as well.
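To see why incremental planning appeals to me, here is a toy sketch of one flavor of the idea: instead of throwing a broken plan away and replanning from scratch, keep the still-valid prefix and replan only the remainder. The names step_ok and plan_from are illustrative stand-ins, not any particular planner’s API.

    def repair_plan(plan, world, step_ok, plan_from):
        """Return a repaired plan that reuses the longest still-valid prefix."""
        prefix = []
        for step in plan:
            if not step_ok(step, world):  # this step is now impossible
                break
            prefix.append(step)
        # Replan only from wherever the valid prefix leaves off.
        return prefix + plan_from(prefix, world)

    # Toy usage: the door is now blocked, so only the tail gets replanned.
    fixed = repair_plan(
        ["walk to door", "open door", "cross room"],
        world={"door_blocked": True},
        step_ok=lambda step, w: not (step == "open door" and w["door_blocked"]),
        plan_from=lambda prefix, w: ["break window", "climb through"],
    )
    print(fixed)  # ['walk to door', 'break window', 'climb through']

Real incremental planners (D* Lite, for example) reuse previous search effort rather than literal plan steps, but the spirit is the same: don’t pay full price for every change in the world.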

One of the questions raised the idea of using randomized policies to address the “rigidity” of policies in MDP-based models. This is definitely something I need to think more about (although I did write an MS thesis on it). Off the top of my head, though, the main questions to be answered about randomized policies are:

  1. Is the expected reward you give up by adding randomization worth it?
  2. Can randomized action selection be made to look believable? That is, if I’m choosing actions randomly, will I look like a crazy person who nonetheless gets to the goal in the long run?

First question: I suspect the answer is yes, depending on how much randomization you put in and on what you know about your adversary. The less you know about your adversary, the less of his behavior you can exploit, and so simply acting more randomly becomes a better and better recourse.

Second question: ::shrug::
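To make the trade-off behind the first question concrete, here is a minimal sketch of softening a deterministic policy with an epsilon parameter. The policy table, action list, and epsilon value are all made up for illustration; this is not any particular system’s API.

    import random

    def randomize(policy, actions, epsilon):
        """Wrap a deterministic policy: with probability epsilon pick a
        uniformly random action, otherwise do what the policy says."""
        def act(state):
            if random.random() < epsilon:
                return random.choice(actions)  # unpredictable, but suboptimal
            return policy[state]               # optimal, but exploitable
        return act

    # Toy usage with a made-up two-state policy.
    pi = randomize({"alley": "advance", "door": "wait"},
                   ["advance", "wait"], epsilon=0.2)
    print(pi("alley"))

Every bit of epsilon buys unpredictability at the price of expected reward, which is exactly the trade-off in question 1; the less you can model your adversary, the more epsilon you can justify.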

More posts about some of the other talks coming soon. Look forward to a discussion of the Sims 2 AI presentation.

Posted by ddini

Dreamfall

I just finished playing Dreamfall: The Longest Journey.

First off, this game is awesome. If you’re interested in the story of The Longest Journey universe, you will definitely find this to be a very compelling game, if a bit short.

I mention it, however, because games such as Dreamfall, Indigo Prophecy, and Shadow of Destiny are screaming for dynamically generated content. I mean this in the sense described in Story Director and Interactive Narrative technology in general. In the Story Director methodology, a story author has a sequence of dramatic points that he wishes the audience to experience. He might even have a specific, scripted plan for how the audience should reach those dramatic points. This is, of course, what is normally done in games. To make this work, however, the player is prevented from doing anything that really interferes with the story author’s plan, often in a totally unnatural way. By “unnatural”, I mean you see the following sorts of things:

  • Totally fake characters: If the story line requires that the player meet character X at location L, then X waits at L forever, until the player gets there. That is, characters have no lives outside of talking to you. When they’re not talking to you, they’re waiting to talk to you.
  • Artificially static environments: If a scripted path from one plot point to another requires character X to go through a doorway, then that doorway had better be clear for the plot to continue. So now you can’t have too many computer-controlled characters walking around, because they might be in the way, and you can’t have the building destroyed or made inaccessible by the player accidentally setting off an explosion nearby.

The result of all this is that the human player is painfully aware they’re in a fake reality, and immersion is broken. To address this problem, Story Director technology retains immersion by recovering when the author’s plan is broken by unpredictable user interaction. It does this like so: the author specifies a list of important plot points that the audience must experience, and possibly an initial plan for getting from one plot point to the next. If the user does something that breaks the plan, the Story Director creates a new plan that meets the same plot points.

This makes the following situation possible. Suppose you (the player) are about to meet an informant in an alleyway to get some vital information so that the story can continue. Upon getting there, the two of you are jumped by some thugs. For some reason you fail to act quickly, and your informant is killed before telling you the vital info. Now the system recovers and finds a new way to get that information to you: the informant’s brother/girlfriend/wife/boss/pet panda sends you a letter, or you have to track down the informant’s sources yourself, opening up a whole new section of the game.
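In code, the recovery loop might look something like the toy sketch below; make_plan, plan_broken, and execute_step are illustrative stand-ins, not the actual Story Director interface.

    import random

    def direct_story(plot_points, make_plan, plan_broken, execute_step):
        """Drive the player through each required plot point in order,
        replanning to the same plot point whenever the current plan breaks."""
        for goal in plot_points:
            plan = make_plan(goal)
            while plan:
                if plan_broken(plan):
                    plan = make_plan(goal)  # recover: new plan, same plot point
                    continue
                execute_step(plan.pop(0))

    # Toy usage: plans are lists of named steps, and the "player"
    # randomly breaks the current plan 20% of the time.
    direct_story(
        plot_points=["meet informant", "learn the secret"],
        make_plan=lambda goal: ["step %d toward %s" % (i, goal) for i in range(3)],
        plan_broken=lambda plan: random.random() < 0.2,
        execute_step=print,
    )

The important property is that the plot points are fixed while the path between them is disposable: the informant can die, as long as make_plan can find some other route to the same plot point.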

Now your actions have consequences. Now your actions have meaning within the game’s virtual world.

Can game designers tackle the prospect of run-time generated story content?

Posted by ddini

A great many AI problems can be phrased simply as a search through a domain of elements for one that meets some criteria C. The planning problem is certainly this way. As a simple example, suppose one has an environment (an MDP, for generality) with a finite horizon:

S = (finite) set of states
A = (finite) set of actions
T(s, a, s') = Pr(s' | a, s), the transition function
R(s, a) = the reward function
Horizon = Z steps

Given this formulation, there are finitely many policies to examine (a non-stationary policy picks an action for each state at each of the Z steps, so there are |A|^(|S|·Z) of them). The naive solution would be to enumerate them, compute the expected reward each one gets you, and take the policy with the biggest reward. Of course, we don’t do this because, for any remotely realistic domain, that search space is so huge as to make the process intractable. To actually solve this problem, people developed Value Iteration, or Linear Programming, or what have you. These methods allow you to efficiently cut through the giant search space.
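For the finite-horizon MDP above, Value Iteration is just backward induction over the Z steps. Here is a minimal sketch; the two-state example at the bottom is made up for illustration.

    def value_iteration(S, A, T, R, Z):
        """Backward induction for a finite-horizon MDP: returns the optimal
        values and a non-stationary policy (one state-to-action map per step)."""
        V = {s: 0.0 for s in S}  # value-to-go with zero steps left
        policy = []
        for _ in range(Z):       # back up one step at a time
            Q = {s: {a: R(s, a) + sum(T(s, a, s2) * V[s2] for s2 in S)
                     for a in A}
                 for s in S}
            policy.insert(0, {s: max(Q[s], key=Q[s].get) for s in S})
            V = {s: max(Q[s].values()) for s in S}
        return V, policy

    # Made-up example: "go" moves to s1, "stay" stays put, and s1 pays reward 1.
    S, A = ["s0", "s1"], ["stay", "go"]
    T = lambda s, a, s2: float(s2 == ("s1" if a == "go" else s))
    R = lambda s, a: float(s == "s1")
    V, pi = value_iteration(S, A, T, R, Z=5)
    print(pi[0], V)  # best first action per state, and the 5-step values

The point is that this touches each (state, action, next state, step) combination once, i.e. O(|S|²·|A|·Z) work, instead of evaluating all |A|^(|S|·Z) policies.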

The interesting thing is that 15, 20, or 25 years from now, that search space won’t seem nearly as large as it seems to us today. Computers will be much, much faster, and will simply plow through that enormous search space in the blink of an eye. No sophisticated methods of cutting through the space, such as Value Iteration (VI), will be necessary. Using VI and using brute-force search will be the difference between 0.20 and 0.25 milliseconds.

The question, then, is whether the constant advance of computer technology will eventually make planning methods totally irrelevant.

Posted by ddini